# Lie Groups

## Sphere as a 2-Manifold

(Source: http://www.cis.jhu.edu/education/introPatternTheory/chapters/lie/lie3.html)

In the same way that we found two coordinate patches that completely covered the circle, we need to find coordinate patches that cover the sphere. Parametrization gives us a function from 2-space to the sphere $$\mathbb{S}^2$$. The variables in $$\mathbb{R}^2$$ are $$\theta$$ and $$\phi$$:

\begin{eqnarray} \gamma:\mathbb{R}^2\rightarrow\mathbb{S}^2 \\ {\theta \choose \phi} \mapsto \left( \begin{matrix} \sin\theta\cos\phi \\ \sin\theta\sin\phi \\ \cos\theta \end{matrix}\right) \end{eqnarray}

Now the challenge is to determine how to restrict the domain in $$\mathbb{R}^2$$ so that the function is injective, while keeping the domain open and covering the whole sphere. The two obvious patches are:

\begin{align} \gamma_1:\left(0,\pi\right)\times\left(-\pi,\pi\right)\rightarrow\mathbb{S}^2 \\ {\theta_1 \choose \phi_1} \mapsto \left(\begin{matrix} \sin\theta_1\cos\phi_1 \\ \sin\theta_1\sin\phi_1 \\ \cos\theta_1 \end{matrix}\right) \\ \gamma_2:\left(0,\pi\right)\times\left(0,2\pi\right)\rightarrow\mathbb{S}^2 \\ {\theta_2 \choose \phi_2} \mapsto \left(\begin{matrix} \sin\theta_2\cos\phi_2 \\ \sin\theta_2\sin\phi_2 \\ \cos\theta_2 \end{matrix}\right) \end{align}

As with the circle, the domains of the coordinate patches have to be open sets in order for the maps to be differentiable. This creates the problem of "incomplete coverage" of the manifold. The first coordinate system misses the arc where $$\phi = \pi$$, and the second misses the arc where $$\phi = 0$$. Both patches also omit the north and south poles.

The problem of covering the north and south poles still remains, but it is easily solved. All that is needed is a "cap" for each hemisphere. If we define $$D$$ as the open unit disc in the xy-plane, that is, if

$$D := \left\{ {x \choose y} \in \mathbb{R}^2 : x^2 + y^2 < 1 \right\}$$

then only two more coordinate patches are needed:

\begin{align} \gamma_3 : D\rightarrow\mathbb{S}^2 \\ {x_1 \choose y_1} \mapsto \left(\begin{matrix} x_1 \\ y_1 \\ \sqrt{1-x_1^2 - y_1^2} \end{matrix}\right) \\ \gamma_4 : D\rightarrow\mathbb{S}^2 \\ {x_2 \choose y_2} \mapsto \left(\begin{matrix} x_2 \\ y_2 \\ -\sqrt{1-x_2^2 - y_2^2} \end{matrix}\right) \end{align}

Conditions one and two are satisfied by these four patches. It is up to you to prove that the sphere is a smooth manifold by showing that the transition functions between the patches are infinitely differentiable.
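A quick numerical sanity check of these coverage claims can be sketched in Python (the function names `gamma`, `gamma3`, `gamma4` and the image-membership tests below are our own illustration, not from the text): each patch should land on the unit sphere, and the poles together with randomly sampled points of the sphere should lie in the image of at least one of the four patches.

```python
import math, random

def gamma(theta, phi):
    """Spherical parametrization shared by gamma_1 and gamma_2 (they differ only in domain)."""
    return (math.sin(theta) * math.cos(phi),
            math.sin(theta) * math.sin(phi),
            math.cos(theta))

def gamma3(x, y):
    """Cap over the northern hemisphere, defined on the open unit disc D."""
    return (x, y, math.sqrt(1.0 - x * x - y * y))

def gamma4(x, y):
    """Cap over the southern hemisphere, defined on the open unit disc D."""
    return (x, y, -math.sqrt(1.0 - x * x - y * y))

def on_sphere(p, eps=1e-9):
    return abs(p[0] ** 2 + p[1] ** 2 + p[2] ** 2 - 1.0) < eps

# 1) Each patch maps into the unit sphere.
assert on_sphere(gamma(1.0, 2.5))
assert on_sphere(gamma3(0.3, -0.4))
assert on_sphere(gamma4(0.3, -0.4))

# 2) Image-membership tests for the four patches:
#    gamma_1 misses the poles and the half-meridian phi = +-pi (y = 0, x < 0),
#    gamma_2 misses the poles and the half-meridian phi = 0   (y = 0, x > 0),
#    gamma_3 / gamma_4 cover the open northern / southern hemispheres.
def covered(x, y, z):
    in_g1 = x * x + y * y > 0 and not (y == 0 and x < 0)
    in_g2 = x * x + y * y > 0 and not (y == 0 and x > 0)
    return in_g1 or in_g2 or z > 0 or z < 0

# The poles, two equator points on the excluded meridians, and a random
# sample of the sphere are all covered by some patch.
random.seed(0)
points = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0), (-1.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
for _ in range(1000):
    v = [random.gauss(0.0, 1.0) for _ in range(3)]
    r = math.sqrt(sum(c * c for c in v))
    points.append(tuple(c / r for c in v))
assert all(covered(x, y, z) for (x, y, z) in points)
print("all sampled points of S^2 are covered by the four patches")
```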
\section{Introduction} Let $G = (V(G), E(G))$ be a connected and undirected graph, and $X\subseteq V(G)$ a subset of the vertices of $G$. If $x,y\in V(G)$, then we say that $x$ and $y$ are {\em $X$-visible} if there exists a shortest $x,y$-path none of whose internal vertices lies in $X$. $X$ is a \emph{mutual-visibility set} if its vertices are pairwise $X$-visible. The cardinality of a largest mutual-visibility set is the \emph{mutual-visibility number} of $G$, denoted by $\mu(G)$. Each largest mutual-visibility set is also called a {\em $\mu$-set} of $G$. These concepts were introduced by Di Stefano in~\cite{DiStefano-2022}. They were in particular motivated by the significance that mutual-visibility properties play within problems that arise in mobile entity models. Some of the numerous works that deal with such models are~\cite{aljohani-2018a, bhagat-2020, diluna-2017, poudel-2021}. Mutual-visibility sets in graphs are in a way dual to general position sets in graphs, the latter concept having been widely investigated in recent years~\cite{klavzar-2021, manuel-2018, patkos-2020, tian-2021, ullas-2016}. Among other results, it was proved in~\cite{DiStefano-2022} that the decision problem concerning the mutual-visibility number is NP-complete, and the invariant was determined for several classes of graphs including block graphs, grids, and cographs. The research was continued in~\cite{Cicerone-2022+} with an emphasis on Cartesian products and on graphs $G$ with $\mu(G) = 3$. Interestingly, determining the mutual-visibility number of the Cartesian product of two complete graphs turns out to be equivalent to a case of the celebrated Zarankiewicz problem, a long-standing open combinatorial problem. Continuing this line of research on mutual-visibility in graph products, in this paper we study strong products. In the next section we introduce the necessary concepts and recall some known results.
Then, in Section~\ref{sec:mut}, we introduce total mutual-visibility sets, which turn out to be useful for the investigation of mutual-visibility sets in strong products, and give some basic properties of such sets. In the subsequent section we first bound the (total) mutual-visibility number of strong products from below. Then we determine the mutual-visibility number of strong grids of arbitrary dimension, which shows the tightness of the lower bound. In addition, we find families of strong product graphs for which the bound is not tight, and complete the section with another lower bound. In Section~\ref{sec:strong-prisms} we focus on strong prisms, where we give a couple of tight bounds for the mutual-visibility number. We conclude our exposition with several open problems and directions for further investigation. \section{Preliminaries}\label{sec:preliminaries} Since two vertices from different components of a graph are not mutually visible, all graphs in this paper are connected unless stated otherwise. For a natural number $n$, we set $[n] = \{1,\ldots, n\}$. Given a graph $G = (V(G), E(G))$, its order is denoted by $n(G)$. The distance function $d_G$ on a graph $G$ is the usual shortest-path distance. A subgraph $G'$ of $G$ is \emph{convex} if, for every two vertices of $G'$, every shortest path in $G$ between them lies completely in $G'$. The \emph{convex hull} of $V'\subseteq V(G)$, denoted $\mathit{hull}(V')$, is the smallest convex subgraph containing $V'$. The {\em degree} $\deg_G(x)$ of a vertex $x$ is the number of its neighbors. If $X\subseteq V(G)$, then $\overline{X}$ denotes the complement of $X$, that is, the set of all vertices of $G$ not in $X$. Moreover, $G[X]$ denotes the subgraph of $G$ induced by $X$, that is, the maximal subgraph of $G$ with vertex set $X$. The subgraph of $G$ induced by $\overline{X}$ is denoted by $G-X$, and by $G-v$ when $X=\{v\}$.
Two vertices $u$ and $v$ are \emph{false twins} if $uv\not\in E(G)$ and $N_G(u) = N_G(v)$, where $N_G(x)$ is the open neighborhood of $x$, and are \emph{true twins} if $uv\in E(G)$ and $N_G[u] = N_G[v]$, where $N_G[x]$ is the closed neighborhood of $x$. Vertices are {\em twins} if they are true or false twins. Adding a new vertex to a graph $G$ that is a true/false twin of an existing vertex of $G$ is an operation called \emph{splitting}. Another one-vertex extension operation is that of attaching a \emph{pendant vertex}, that is, a vertex connected by a single edge to an existing vertex of the graph. A graph is a \emph{block graph} if every block (i.e., every maximal 2-connected component) is a clique. Block graphs can be generated by using true twins and pendant vertices. Notice that the connected block graphs are exactly the graphs in which there is a unique induced path connecting every pair of vertices. A graph is called a \emph{cograph} if it can be obtained by a sequence of splittings starting from $K_1$. A useful structural property follows from this generative definition. Let $G$ be a cograph, and let $v_1$ be the starting vertex for a sequence of splitting operations that build $G$. If $G$ is connected, the first operation must create a true twin of $v_1$ (producing $v_2$ adjacent to $v_1$). Let $V_1 = \{v_1\}$ and $V_2 = \{v_2\}$. Now, for each further vertex $v$ added to build $G$, if $v$ is a twin of a vertex in $V_1$ ($V_2$, respectively), then add it to $V_1$ (to $V_2$, respectively). We obtain that $V(G)$ can be partitioned into $V_1$ and $V_2$ such that $v'v''\in E(G)$ for each $v'\in V_1$ and $v''\in V_2$. Cographs include complete split graphs and complete $k$-partite graphs. A graph is a {\em complete split graph} if its vertex set can be partitioned into an independent set and a clique such that every vertex in the independent set is adjacent to every vertex in the clique.
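The generative construction of cographs can be made concrete with a short Python sketch (our own illustration, not part of the paper; the particular splitting sequence and vertex labels are chosen arbitrarily). It builds a small connected cograph by twin splittings, maintains the partition $V_1, V_2$ described above, and checks that every $V_1$-$V_2$ pair is indeed an edge.

```python
# Build a cograph by repeated twin splittings, tracking the partition V1 / V2
# from the structural property recalled above.

def add_twin(adj, existing, new, true_twin):
    """Split `existing`: add `new` with the same neighborhood (closed one if true twin)."""
    adj[new] = set(adj[existing])
    for w in adj[existing]:
        adj[w].add(new)
    if true_twin:
        adj[existing].add(new)
        adj[new].add(existing)

adj = {1: set()}            # start from K_1 (vertex 1)
add_twin(adj, 1, 2, True)   # first split must create a true twin: now K_2
add_twin(adj, 1, 3, False)  # false twin of 1
add_twin(adj, 2, 4, True)   # true twin of 2
add_twin(adj, 3, 5, False)  # false twin of 3

# Partition rule: a new vertex joins the side of the vertex it was split from.
V1, V2 = {1, 3, 5}, {2, 4}

# Every V1-V2 pair is an edge, as the structural property asserts.
assert all(v2 in adj[v1] for v1 in V1 for v2 in V2)
print("complete join between V1 and V2 verified")
```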
A {\em $k$-partite graph} is a graph whose vertices are (or can be) partitioned into $k$ independent sets; hence, a complete $k$-partite graph is a $k$-partite graph in which there is an edge between every pair of vertices from different independent sets. The {\em strong product} $G\boxtimes H$ of graphs $G$ and $H$ has vertex set $V(G\boxtimes H) = V(G)\times V(H)$, with vertices $(g,h)$ and $(g',h')$ being adjacent in $G\boxtimes H$ if either $gg'\in E(G)$ and $h=h'$, or $g=g'$ and $hh'\in E(H)$, or $gg'\in E(G)$ and $hh'\in E(H)$, see~\cite{Hammack-2011}. A {\em $G$-layer} through a vertex $(g,h)$ is the subgraph of $G\boxtimes H$ induced by the vertices $\{(g',h):\ g'\in V(G)\}$. {\em $H$-layers} are defined analogously. Finally, we recall the following result, which is implicitly used throughout the paper. \begin{proposition}\label{prop:sp-distance} {\rm \cite[Proposition 5.4]{Hammack-2011} } If $(g, h)$ and $(g', h')$ are vertices of a strong product $G\boxtimes H$, then $$ d_{G\boxtimes H} ((g, h), (g', h')) = \max\{d_G(g, g'), d_H (h, h')\}.$$ \end{proposition} \section{Total mutual-visibility} \label{sec:mut} The following definition introduces a variation of mutual-visibility. It will be useful for providing bounds on the mutual-visibility number of strong product graphs, although we believe that the concept might also be of independent interest. If $G$ is a graph and $X\subseteq V(G)$, then $X$ is a {\em total mutual-visibility set} of $G$ if every pair of vertices $x$ and $y$ of $G$ is $X$-visible. The term ``total'' reflects the fact that not only the vertices of $X$, but every pair of vertices of $G$, whether in $X$ or not, must admit a shortest path avoiding $X$ in its interior. The cardinality of a largest total mutual-visibility set of $G$ is the {\em total mutual-visibility number} of $G$ and is denoted by $\mu_{\rm t}(G)$.
Notice that there could be graphs $G$ whose only total mutual-visibility set is the empty set; for such situations we set $\mu_{\rm t}(G) = 0$. For the sake of brevity, we say that $X$ is a {\em $\mu_{\rm t}(G)$-set} (or $\mu_{\rm t}$-set if we are not interested in the graph) if it is a total mutual-visibility set such that $|X| = \mu_{\rm t}(G)$. \medskip Clearly, every total mutual-visibility set is a mutual-visibility set, hence we have the following inequalities: \begin{equation}\label{eq:mut-bounds} 0\le \mu_{\rm t}(G) \le \mu(G). \end{equation} In the following we show that both bounds can actually be attained by the total mutual-visibility number. Concerning the lower bound of~\eqref{eq:mut-bounds}, it can be easily checked that $\mu_{\rm t}(C_n) = 0$ for $n\ge 5$. The variety of graphs with this property appears to be large, as the next result confirms. \begin{proposition} \label{prop:cover-with-convex} Let $G$ be a graph. If $V(G) = \bigcup_{i=1}^k V_i$, where $G[V_i]$ is a convex subgraph of $G$ and $\mu_{\rm t}(G[V_i]) = 0$ for each $i\in [k]$, then $\mu_{\rm t}(G) = 0$. \end{proposition} \begin{proof} Suppose on the contrary that $G$ contains a total mutual-visibility set $X$ with $|X| \ge 1$. Select an arbitrary vertex $x\in X$. Then there exists an $i\in [k]$ such that $x\in V_i$, and hence $|X\cap V_i| \ge 1$. However, since $G[V_i]$ is convex, we get that $X\cap V_i$ is a total mutual-visibility set of $G[V_i]$, a contradiction to the assumption $\mu_{\rm t}(G[V_i]) = 0$. \hfill $\square$ \bigskip \end{proof} In what follows we show that there also exist graphs $G$ with $\mu_{\rm t}(G) = 0$ that belong to well-known graph classes and are not covered by Proposition~\ref{prop:cover-with-convex}. To this end, recall that a {\em cactus graph} is a graph whose blocks are cycles and/or complete graphs $K_2$. Fig.~\ref{fig:cactus} shows four examples of cactus graphs. \begin{proposition} \label{prop:cactus} Let $G$ be a cactus graph.
Then $\mu_{\rm t}(G)= 0$ if and only if $G$ has minimum degree $2$ and for each cycle $C$ in $G$ with $n(C)\leq 4$ all the vertices of $C$ have degree at least $3$. \end{proposition} \begin{proof} ($\Leftarrow$) Assume that $G$ does not contain pendant vertices and that for each cycle $C$ of $G$, either $n(C)\le 4$ and each vertex of $C$ has degree at least $3$, or $n(C)\ge 5$. Suppose now that $\mu_{\rm t}(G)>0$, consider any total mutual-visibility set $X$ of $G$ with $|X|\ge 1$, and let $v\in X$. If $v$ does not belong to any cycle of $G$, then, since there are no pendant vertices, $v$ must have at least two neighbors, and such neighbors are not $X$-visible, which is not possible. Thus, we may assume that $v$ belongs to a cycle $C$. If $n(C)\ge 5$, then the two neighbors of $v$ belonging to $C$ are not $X$-visible. If $n(C)\le 4$ and each vertex of $C$ has degree at least $3$, then again there must exist a pair of neighbors of $v$ which are not $X$-visible, a contradiction. Hence $\mu_{\rm t}(G)=0$ must hold. ($\Rightarrow$) It can be readily observed that each pendant vertex of $G$ forms a total mutual-visibility set of $G$. Thus, since $\mu_{\rm t}(G)= 0$, the graph $G$ has minimum degree $2$. Moreover, if $C$ is a cycle in $G$ such that $n(C)\le 4$ and there exists $v\in V(C)$ with $\deg(v)=2$, then the set $\{v\}$ is a total mutual-visibility set of $G$, which is not possible. Therefore, the second claim follows. \hfill $\square$ \bigskip \end{proof} \begin{figure}[ht!] \begin{center} \includegraphics[height=4cm]{cactus} \caption{\small Some cactus graphs. The first two on the left do not fulfill the conditions of Proposition~\ref{prop:cactus}, and hence their total mutual-visibility number is greater than zero.} \label{fig:cactus} \end{center} \end{figure} As an application of this proposition, consider Fig.~\ref{fig:cactus}.
From the left, the first two cactus graphs have total mutual-visibility number greater than zero, since neither of them fulfills the conditions of Proposition~\ref{prop:cactus}. In contrast, the other two graphs have total mutual-visibility number equal to zero. Moreover, notice that among cactus graphs it is possible to find infinitely many graphs $G$ with $\mu_{\rm t}(G) = 0$ which are not covered by Proposition~\ref{prop:cover-with-convex}. For instance, if $G$ is a cactus graph with minimum degree at least $2$, girth at least $5$, and contains at least one path of length at least $2$ whose edges lie in no cycle, then $\mu_{\rm t}(G) = 0$ but $G$ might not admit a proper convex cover as in Proposition~\ref{prop:cover-with-convex}. The rightmost graph in Fig.~\ref{fig:cactus} is an example. \medskip Concerning the upper bound in~\eqref{eq:mut-bounds}, we introduce the following definition. A graph $G$ is a {\em $(\mu, \mu_{\rm t})$-graph} if $\mu(G) = \mu_{\rm t}(G)$. \begin{proposition}\label{prop:mu-perfect} Block graphs (and hence trees and complete graphs) and graphs containing a universal vertex are all $(\mu, \mu_{\rm t})$-graphs. \end{proposition} \begin{proof} If $G$ is a complete graph, then $\mu(G) = \mu_{\rm t}(G) = n(G)$. If $G$ is not complete and has a universal vertex, then it can be easily observed that $\mu(G) = \mu_{\rm t}(G) = n(G)-1$. Assume now that $G$ is a block graph. From~\cite[Theorem 4.2]{DiStefano-2022} we know that if $G$ is a block graph and $X$ is the set of its cut-vertices, then $V(G)\setminus X$ is a $\mu$-set of $G$ and hence $\mu(G) = |V(G)\setminus X|$. We now show that $V(G)\setminus X$ is also a $\mu_{\rm t}$-set of $G$. To this end, let us first observe that (1) each vertex in $V(G)\setminus X$ is adjacent to a vertex in $X$, and that (2) $G[X]$ is a convex subgraph of $G$. Hence, every $x,y\in V(G)$ are $(V(G)\setminus X)$-visible, regardless of their membership in $V(G)\setminus X$.
This proves that $V(G)\setminus X$ is also a $\mu_{\rm t}$-set of $G$. \hfill $\square$ \bigskip \end{proof} In the following we characterize those cographs which are $(\mu, \mu_{\rm t})$-graphs. To this aim we first recall a result from~\cite{DiStefano-2022}. \begin{lemma}\label{lem:enabling} {\rm \cite[Lemma 4.8]{DiStefano-2022} } Given a graph $G$, $\mu(G)\ge n(G)-1$ holds if and only if there exists a vertex $v$ in $G$ adjacent to each vertex $u$ of $G-v$ such that $\deg_{G-v} (u) < n(G) - 2$. \end{lemma} In what follows, any vertex $v$ of $G$ fulfilling the condition in the above lemma will be called \emph{enabling}. \begin{proposition}\label{prop:mu-perfect-cographs} A cograph $G$ is a $(\mu, \mu_{\rm t})$-graph if and only if it has a universal vertex or no enabling vertices. \end{proposition} \begin{proof} ($\Leftarrow$) If $G$ has a universal vertex, then clearly $\mu_{\rm t}(G)=\mu(G)$. If $G$ has no enabling vertices, then $\mu(G)< n(G)-1$ by Lemma~\ref{lem:enabling}. Since $\mu(G)\geq n(G)-2$ by~\cite[Theorem 4.11]{DiStefano-2022}, we get $\mu(G) = n(G)-2$. According to the structural property of cographs recalled in Section~\ref{sec:preliminaries}, the vertices of $G$ can be partitioned into two sets $V_1$ and $V_2$ such that each vertex in $V_1$ is adjacent to each vertex of $V_2$. If $v_1$ ($v_2$, respectively) is an arbitrary vertex in $V_1$ ($V_2$, respectively), then it can be easily observed that $X=V(G) \setminus \{v_1,v_2\}$ is a total mutual-visibility set. Hence, $\mu_{\rm t}(G)=\mu(G)= n(G)-2$. ($\Rightarrow$) We show that $G$ is not a $(\mu, \mu_{\rm t})$-graph by assuming that $G$ has an enabling vertex $v$ but no universal vertices. In this case $V(G)$ can be partitioned into three sets: $A=\{v\}$, the set $B$ of neighbors of $v$, and the set $C$ containing all the remaining vertices. Notice that $C$ must be nonempty, otherwise $v$ would be a universal vertex, contrary to the hypothesis.
By the definition of enabling vertex, $B$ contains all the vertices $u$ of $G-v$ such that $\deg_{G-v}(u) < n(G)-2$. This implies that for each $u\in C$ we have $\deg_{G-v}(u) \ge n(G)-2$. As a consequence, (1) $G[C]$ is a clique, and (2) $bc \in E(G)$ for each $b\in B$ and $c\in C$. Then $B\cup C$ is a mutual-visibility set and hence $\mu(G)\geq n(G)-1$. As $G$ has no universal vertices, $\mu(G)=n(G)-1$. We now show that $\mu_{\rm t}(G)$ cannot be equal to $n(G)-1$. To this end, let $u\in V(G)$ and assume that $X= V(G)\setminus \{u\}$ is a $\mu_{\rm t}$-set. Clearly, $u\ne v$, because $v$ is not $X$-visible with the vertices in $C$. Vertex $u$ cannot be in $B$: since $u$ is not a universal vertex, there is a vertex $w \in B$ such that $uw\not \in E(G)$, but then $u$ and $w$ are not $X$-visible. Finally, $u$ cannot be in $C$, because in this case $u$ and $v$ are not $X$-visible. \hfill $\square$ \bigskip \end{proof} The following result is a straightforward consequence of the characterization provided by Proposition~\ref{prop:mu-perfect-cographs}. \begin{corollary} Complete split graphs and complete $k$-partite graphs ($k \geq 2$) with at least three vertices in each partition class are $(\mu, \mu_{\rm t})$-graphs. \end{corollary} Observe that since $\mu(C_4) = 3$ and $\mu_{\rm t}(C_4) = 2$, the inequality $\mu_{\rm t}(G) \le \mu(G)$ can be strict. Moreover, even if the equality is attained, it can happen that some $\mu$-sets are $\mu_{\rm t}$-sets while others are not. For an example consider the graph from Fig.~\ref{fig:mu-sets}. \begin{figure}[ht!] \begin{center} \includegraphics[height=1.8cm]{mu-sets} \caption{\small A graph $G$ with two $\mu$-sets (represented by red vertices).
On the right-hand side a $\mu$-set which is also a $\mu_{\rm t}$-set is shown, while on the left-hand side the $\mu$-set is not a $\mu_{\rm t}$-set (the pair of vertices not in the $\mu$-set are not visible).} \label{fig:mu-sets} \end{center} \end{figure} \section{Mutual-visibility in strong products} In this section we show how the total mutual-visibility of the factor graphs can be used to provide lower bounds for the mutual-visibility number of their strong products. \begin{theorem}\label{thm:mut-lb} If $S_G$ and $S_H$ are total mutual-visibility sets of graphs $G$ and $H$, respectively, where $|S_G| < n(G)$ and $|S_H| < n(H)$, then $$\mu_{\rm t}(G\boxtimes H) \ge |S_G| n(H) + |S_H| n(G) - |S_G| \cdot |S_H|\,.$$ In particular, if $S_G$ and $S_H$ are $\mu_{\rm t}$-sets, and $G$ and $H$ are non-complete graphs, then $$\mu_{\rm t}(G\boxtimes H) \ge \mu_{\rm t}(G) n(H) + \mu_{\rm t}(H) n(G) - \mu_{\rm t}(G) \mu_{\rm t}(H)\,.$$ \end{theorem} \begin{proof} Let $S = (V(G)\times V(H)) \setminus (\overline{S_G} \times \overline{S_H})$; see Fig.~\ref{fig:prod} for an example of the construction of $S$. \begin{figure}[ht!] \begin{center} \includegraphics[height=3.7cm]{prod} \caption{\small A representation of $G\boxtimes P_3$, where $G$ is the graph on the right side of Fig.~\ref{fig:mu-sets}. The represented total mutual-visibility set is the one constructed in the proof of Theorem~\ref{thm:mut-lb}.} \label{fig:prod} \end{center} \end{figure} In the following we prove that $S$ is a total mutual-visibility set of $G\boxtimes H$. Let $(g,h)$ and $(g',h')$ be arbitrary but distinct vertices of $V(G\boxtimes H)$. Consider first the case in which $g\neq g'$ and $h\neq h'$. Regardless of whether $g$ and $g'$ belong to $S_G$, since $S_G$ is a total mutual-visibility set of $G$, there exists a shortest $g,g'$-path $P_G$ in $G$ such that no internal vertex of $P_G$ (if any) is in $S_G$. Let the consecutive vertices of $P_G$ be $g=g_0, g_1, \ldots, g_k=g'$, with $k\ge 1$ since $g\neq g'$.
Similarly, there is a shortest $h,h'$-path $P_H$ in $H$ such that no internal vertex of $P_H$ (if any) is in $S_H$. Let the consecutive vertices of $P_H$ be $h=h_0, h_1, \ldots, h_\ell=h'$, with $\ell\ge 1$ since $h\neq h'$. Assume without loss of generality that $\ell\le k$. Then the vertices $$(g,h) = (g_0,h_0), (g_1,h_1), \ldots, (g_\ell, h_\ell), (g_{\ell+1}, h_\ell), \ldots, (g_k, h_\ell) = (g',h')$$ induce a shortest $(g,h),(g',h')$-path $Q$ in $G\boxtimes H$. Clearly, no internal vertex of $Q$ is in $S$. Consider now the remaining case in which $g=g'$ or $h=h'$ (but not both). By the commutativity of the strong product we may without loss of generality assume $h= h'$ (and hence $g\neq g'$). Let $g = g_0, g_1, \ldots, g_k = g'$ be a shortest $g,g'$-path in $G$ no internal vertex of which is in $S_G$, with $k\ge 1$. If $k=1$, then $(g,h)(g',h')\in E(G\boxtimes H)$ and there is nothing to prove. Assume now $k\ge 2$. If $h\notin S_H$, then the vertices $(g_0,h), (g_1,h), \ldots, (g_k,h)$ of the $G$-layer form a shortest $(g,h),(g',h')$-path whose internal vertices avoid $S$, since $g_i\notin S_G$ and $h\notin S_H$. Suppose then that $h\in S_H$. Since $|S_H| < n(H)$, there exists a vertex $w\notin S_H$; as $S_H$ is a total mutual-visibility set, either $hw\in E(H)$, or the first internal vertex of a shortest $h,w$-path avoiding $S_H$ in its interior is adjacent to $h$. In both cases there exists a vertex $z\notin S_H$ such that $hz\in E(H)$. Consider the path $Q'$ induced by the sequence of vertices $$(g,h) = (g_0,h), (g_1,z), (g_2, z), \ldots, (g_{k-1}, z), (g_k,h') = (g',h')\,.$$ The length of $Q'$ is $k$, hence $Q'$ is a shortest $(g,h),(g',h')$-path. Moreover, as $z\not\in S_H$ and no internal $g_i$ lies in $S_G$, no internal vertex of $Q'$ belongs to $S$. Consequently, the set $S$ is a total mutual-visibility set. Since \begin{align*} |S| & = n(G)n(H) - (n(G) - |S_G|)(n(H) - |S_H|) \\ & = |S_G| n(H) + |S_H| n(G) - |S_G|\cdot |S_H| \end{align*} we are done with the first inequality. When $S_G$ and $S_H$ are $\mu_{\rm t}$-sets and $G$ and $H$ are non-complete graphs, the second inequality follows directly from the first one. \hfill $\square$ \bigskip \end{proof} Of course, when both $G$ and $H$ are $(\mu, \mu_{\rm t})$-graphs, the lower bound expressed by Theorem~\ref{thm:mut-lb} can be reformulated as follows: \begin{equation}\label{eq:mut-lb-perfect} \mu(G\boxtimes H) \ge \mu_{\rm t}(G\boxtimes H) \ge \mu(G)n(H) + \mu(H)n(G) - \mu(G) \mu(H)\,.
\end{equation} Theorem~\ref{thm:mut-lb} extends to an arbitrary number of factors as follows. \begin{corollary}\label{cor:mut-lb-recursive} Let $H_k = G_1\boxtimes G_2\boxtimes\cdots \boxtimes G_k$, $k\ge 2$. If $G_i$ is a non-complete graph for each $1\le i\le k$, then $$\mu_{\rm t}(H_k) \ge \prod_{i=1}^k n(G_i) - \prod_{i=1}^k (n(G_i) - \mu_{\rm t}(G_i)).$$ \end{corollary} \begin{proof} For each $1\le i\le k$, let $X_i$ be a $\mu_{\rm t}(G_i)$-set. Let $$S_k = (V(G_1)\times \cdots \times V(G_k)) \setminus (\overline{X_1} \times \cdots \times \overline{X_k}).$$ We prove by induction on $k$ that $S_k$ is a total mutual-visibility set of $H_k$. By Theorem~\ref{thm:mut-lb} the assertion holds for $H_2$. Assume now that $S_{k}$ is a total mutual-visibility set of $H_{k}$ for some $k\ge 2$, and consider $H_{k+1} = H_{k} \boxtimes G_{k+1}$. By the proof of Theorem~\ref{thm:mut-lb}, $S_{k+1}$ is a total mutual-visibility set of $H_{k+1}$. Thus \[ \begin{array}{rcl} \mu_{\rm t}(H_{k+1}) & \ge & n(H_k) n(G_{k+1}) - (n(H_k) - \mu_{\rm t}(H_k))(n(G_{k+1}) - \mu_{\rm t}(G_{k+1})) \\ & \ge & n(H_k) n(G_{k+1}) - \\ & & \left( n(H_k) - \left( \prod_{i=1}^{k} n(G_i) - \prod_{i=1}^{k} (n(G_i) - \mu_{\rm t}(G_i))\right)\right)(n(G_{k+1}) - \mu_{\rm t}(G_{k+1})) \\ & = & n(H_{k+1}) - \left(\prod_{i=1}^k (n(G_i) - \mu_{\rm t}(G_i))\right)(n(G_{k+1}) - \mu_{\rm t}(G_{k+1})) \\ & = & n(H_{k+1}) - \prod_{i=1}^{k+1} (n(G_i) - \mu_{\rm t}(G_i)) \end{array} \] and we are done. \hfill $\square$ \bigskip \end{proof} The following result (cf. Theorem~\ref{thm:path-product-multi}) shows that there are $(\mu, \mu_{\rm t})$-graphs for which the lower bound provided by~\eqref{eq:mut-lb-perfect} coincides with the mutual-visibility number of the strong product. Notice that it concerns the strong product of paths with at least three vertices, whereas Theorem~\ref{thm:P2-block-strong} (cf.
Section~\ref{sec:strong-prisms}, where strong prisms are considered) will provide the exact value of $\mu(P_2\boxtimes G)$ for every block graph $G$ (and hence also $\mu(P_2\boxtimes P_n)$ with $n\ge 2$). We first recall the following result, which uses convex hulls to provide an upper bound on $\mu(G)$. \begin{lemma}\label{lem:hull} {\rm \cite[Lemma 2.3]{DiStefano-2022} } Given a graph $G$, let $V_1,\ldots,V_k$ be subsets of $V(G)$ such that $\bigcup_{i=1}^k V_i = V(G)$. Then, $\mu(G) \le \sum_{i=1}^k \mu( \mathit{hull}(V_i) )$. \end{lemma} \begin{theorem} \label{thm:path-product-multi} If $H_k = P_{n_1}\boxtimes \cdots \boxtimes P_{n_k}$, where $k\ge 2$ and $n(P_{n_i})\ge 3$ for $i\in [k]$, then $$\mu(H_k) = \prod_{i=1}^k n(P_{n_i}) - \prod_{i=1}^k (n(P_{n_i}) - 2).$$ \end{theorem} \begin{proof} Let $X_i\subseteq V(P_{n_i})$ be the (total) mutual-visibility set of $P_{n_i}$ formed by the end-vertices of the path. According to the proof of Corollary~\ref{cor:mut-lb-recursive}, we get that \begin{equation}\label{eq:Sk} S_k = (V(P_{n_1})\times \cdots \times V(P_{n_k})) \setminus (\overline{X_1} \times \cdots \times \overline{X_k}) \end{equation} is a total mutual-visibility set of $H_k$. By the same corollary we also get the following lower bound: $$\mu(H_k) \ge \mu_{\rm t}(H_k) \ge \prod_{i=1}^k n(P_{n_i}) - \prod_{i=1}^k (n(P_{n_i}) - \mu_{\rm t}(P_{n_i})) = \prod_{i=1}^k n(P_{n_i}) - \prod_{i=1}^k (n(P_{n_i}) - 2) .$$ Let the tuple $(\ell_1,\ldots,\ell_k)$ denote the generic vertex of $H_k$, where $\ell_i\in [n(P_{n_i})]$, $i\in [k]$. We define the following two subsets of $V(H_k)$: \begin{itemize} \item $V_\mathrm{Int} = \{ (\ell_1,\ldots,\ell_k):\ \forall~ i\in [k], \ell_i\neq 1 \mbox{ and } \ell_i\neq n(P_{n_i}) \}$; \item $V_\mathrm{Ext} = \{ (\ell_1,\ldots,\ell_k):\ \exists~ i\in [k], \ell_i= 1 \mbox{ or } \ell_i = n(P_{n_i}) \}$.
\end{itemize} From these definitions, it can be easily observed that $V_\mathrm{Int}$ and $V_\mathrm{Ext}$ form a partition of $V(H_k)$. Moreover, according to this notation we get the following characterization of the total mutual-visibility set $S_k$ defined in~\eqref{eq:Sk}: \begin{equation}\label{eq:SkExt} S_k=V_\mathrm{Ext}. \end{equation} To prove the upper bound for $\mu(H_k)$, we use Lemma~\ref{lem:hull}. To this end, we determine the (minimum) number of induced and convex \emph{diagonals} which cover all the vertices of $H_k$. A diagonal is either \emph{degenerate} or \emph{non-degenerate}: non-degenerate diagonals are paths of $H_k$ formed by at least two vertices and having the form $((\ell_1,\ldots,\ell_k), (\ell_1+1,\ldots,\ell_k+1), (\ell_1+2,\ldots,\ell_k+2), \ldots)$, whereas each degenerate diagonal consists of a single vertex. These two kinds of diagonals are formally defined as follows (see Fig.~\ref{fig:diagonals-in-3D} for two examples): \begin{description} \item[$(i)$] Each vertex in $I= \{ (\ell_1,\ldots,\ell_k):\ \exists~ i\in [k], \ell_i=1 \mbox{ and } \forall j\in [k], \ell_j\neq n(P_{n_j}) \}$ belongs to a non-degenerate diagonal. In particular, each vertex in $I$ is the \emph{initial vertex} (i.e., one of its end-vertices) of such a diagonal. \item[$(ii)$] If $(\ell_1,\ldots,\ell_k)$ belongs to a non-degenerate diagonal $d$, then so does its neighbor $(\ell_1+1,\ldots,\ell_k+1)$ (if it exists in $H_k$). This property allows us to extend each non-degenerate diagonal to its maximal length, up to some \emph{terminating vertex} having at least one coordinate $\ell_i$ such that $\ell_i = n(P_{n_i})$. We denote by $T$ the set of all terminating vertices of non-degenerate diagonals. \item[$(iii)$] The set $D= \{ (\ell_1,\ldots,\ell_k) ~:~ \exists~ i,j\in [k], \ell_i=1 \textit{ and } \ell_j = n(P_{n_j}) \}$ contains all vertices forming degenerate diagonals. \end{description} \begin{figure}[ht!]
\begin{center} \includegraphics[height=4.2cm]{diagonals_new.pdf} \hspace{10mm} \includegraphics[height=7.5cm]{diagonals-in-3D.pdf} \caption{\small Visualization of the diagonals defined in the proof of Theorem~\ref{thm:path-product-multi}. (\emph{Left}) In the strong product $H_2 = P_5\boxtimes P_6$, the thicker and bolder lines represent non-degenerate diagonals. (\emph{Right}) A representation of $H_3 = P_5\boxtimes P_6\boxtimes P_6$ as an ``opaque rectangular cuboid'' where the position of the vertex with coordinates $(1,1,1)$ is shown. Black vertices represent the elements of the set $I$, that is, the starting points of non-degenerate diagonals; white vertices represent the elements of the set $D$, that is, the vertices forming degenerate diagonals. All such diagonals cover the whole graph $H_3$. } \label{fig:diagonals-in-3D} \end{center} \end{figure} \noindent Notice that the non-degenerate diagonals are pairwise vertex-disjoint. The required covering of $H_k$ is given by all the maximal non-degenerate diagonals together with all the degenerate diagonals. Now, let $X\subseteq V(H_k)$ be the set containing the end-vertices of each non-degenerate diagonal and all the vertices forming degenerate diagonals; formally, $X=I \cup T \cup D$. According to Lemma~\ref{lem:hull} we know that $\mu(H_k)\le |X|$. By~\eqref{eq:SkExt}, we complete the proof by showing that $V_\mathrm{Ext} = X$. \begin{itemize} \item Let $v=(\ell_1,\ldots,\ell_k)\in V_\mathrm{Ext}$. By the definition of $V_\mathrm{Ext}$, there exists a coordinate $\ell_i$ of $v$ for which $\ell_i = 1$ or $\ell_i= n(P_{n_i})$. If some coordinate of $v$ equals $1$ and some (possibly different) coordinate $\ell_j$ equals $n(P_{n_j})$, then property $(iii)$ in the definition of diagonals applies. This means that $v\in D$ and hence $v\in X$. If some coordinate of $v$ equals $1$ and no coordinate $\ell_j$ equals $n(P_{n_j})$, then property $(i)$ applies. This means that $v\in I$ and hence $v\in X$.
If $\ell_i \neq 1$ holds for each $i\in [k]$, then some coordinate of $v$ equals its maximum value; consider the smallest coordinate $\ell_j$ of $v$. According to property $(ii)$, the vertex $v'= (\ell_1-(\ell_j-1),\ldots,\ell_k-(\ell_j-1))$ lies in the set $I$, from which a non-degenerated diagonal starts. This implies that $v\in T$ and hence $v\in X$. So, in all cases we have $v\in X$. \item Let $v=(\ell_1,\ldots,\ell_k)\in X$. If $v\in I\cup D$, then $v\in V_\mathrm{Ext}$ trivially holds. Assume now $v\in T$, that is, $v$ is the end-vertex of a non-degenerated diagonal $d$ starting at some vertex $v'=(\ell_1',\ldots,\ell_k')$ for which $(i)$ holds, and made maximal by iteratively applying property $(ii)$. According to $(ii)$, the terminating vertex of $d$ must have at least one coordinate $\ell_i$ with $\ell_i = n(P_{n_i})$, and hence $v\in V_\mathrm{Ext}$. \end{itemize} This proves that $V_\mathrm{Ext} = X$ holds. \hfill $\square$ \bigskip \end{proof} It seems worth pointing out that the result of Theorem~\ref{thm:path-product-multi} for two- and three-dimensional strong grids reads as follows: \begin{align*} \mu(P_{n_1}\boxtimes P_{n_2}) & = 2n_1 + 2n_2 - 4\,,\\ \mu(P_{n_1}\boxtimes P_{n_2}\boxtimes P_{n_3}) & = 2(n_1n_2 + n_1n_3 + n_2n_3) - 4(n_1 + n_2 + n_3) + 8\,. \end{align*} To conclude the analysis, notice that there are examples of graphs for which the bound of Theorem~\ref{thm:mut-lb} is not sharp. An example of this situation is given in Fig.~\ref{fig:no_tree_solution}. \begin{figure}[ht!] \begin{center} \includegraphics[height=7.5cm]{no_tree_solution_1} \caption{\small The graph $T\boxtimes P_5$ and a mutual-visibility set of cardinality $36$.} \label{fig:no_tree_solution} \end{center} \end{figure} Let $T$ be the tree obtained from $K_{1,3}$ by subdividing each of its edges three times. Then Theorem~\ref{thm:mut-lb} implies $\mu(T\boxtimes P_5)\ge \mu(P_5)n(T) + \mu(T)n(P_5) - \mu(P_5)\mu(T) = 35$, but in Fig.~\ref{fig:no_tree_solution} we can see a mutual-visibility set of cardinality $36$ found by computer search.
This example also shows that even when both factors of a strong product are $(\mu, \mu_{\rm t})$-graphs, their strong product need not attain the equality in the bound of Theorem~\ref{thm:mut-lb}. Note that this particular example can be generalized to an infinite family of graphs where the difference between the mutual-visibility number and the bound of the theorem becomes arbitrarily large. This situation also suggests that generalizing Theorem~\ref{thm:path-product-multi} (when $k=2$) to the strong product of two arbitrary trees might be a challenging problem. \begin{corollary}\label{cor:universal} If $G_1, \ldots, G_k$ are non-complete graphs, each containing a universal vertex, then $$\mu_{\rm t}(G_1\boxtimes \cdots \boxtimes G_k) = \prod_{i=1}^k n(G_i) - 1\,.$$ \end{corollary} \begin{proof} Since each $G_i$ is a $(\mu, \mu_{\rm t})$-graph, by Corollary~\ref{cor:mut-lb-recursive} we get $\mu_{\rm t}(G_1\boxtimes \cdots \boxtimes G_k) \ge \prod_{i=1}^k n(G_i) - 1$. The claim follows by simply observing that $G_1\boxtimes \cdots \boxtimes G_k$ is not a clique and hence $\mu_{\rm t}(G_1\boxtimes \cdots \boxtimes G_k) \le \mu(G_1\boxtimes \cdots \boxtimes G_k) < \prod_{i=1}^k n(G_i)$. \hfill $\square$ \bigskip \end{proof} We conclude the section with another lower bound on $\mu(G\boxtimes H)$ in terms of the mutual-visibility number of the factors. \begin{theorem} \label{thm:mu-product} If $G$ and $H$ are graphs, then $$\mu(G\boxtimes H) \ge \mu(G)\mu(H)\,.$$ \end{theorem} \noindent{\bf Proof.\ } Let $S_G$ be a $\mu$-set of $G$ and $S_H$ be a $\mu$-set of $H$. Then we claim that $S = S_G \times S_H$ is a mutual-visibility set of $G\boxtimes H$. Let $(g,h)$ and $(g',h')$ be two arbitrary vertices from $S$. Since $g,g'\in S_G$, there exists a shortest $g,g'$-path $P_G$ in $G$ such that no internal vertex of $P_G$ is in $S_G$. Let the consecutive vertices of $P_G$ be $g=g_0, g_1, \ldots, g_k=g'$. Similarly, there is a shortest $h,h'$-path $P_H$ in $H$ such that no internal vertex of $P_H$ is in $S_H$.
Let the consecutive vertices of $P_H$ be $h=h_0, h_1, \ldots, h_\ell=h'$. Note that it is possible that $k=0$ or $\ell = 0$ (but not both). Assume without loss of generality that $\ell\le k$. Then the vertices $$(g,h) = (g_0,h_0), (g_1,h_1), \ldots, (g_\ell, h_\ell), (g_{\ell+1}, h_\ell), \ldots, (g_k, h_\ell) = (g',h')$$ induce a shortest $(g,h),(g',h')$-path $Q$ in $G\boxtimes H$. Clearly, no internal vertex of $Q$ is in $S$, hence we conclude that $S$ is a mutual-visibility set. \hfill $\square$ \bigskip \section{Mutual-visibility in strong prisms}\label{sec:strong-prisms} In this section we study the mutual-visibility number of strong prisms, that is, graphs of the form $G\boxtimes P_2$. We begin with the following general lower bound. \begin{theorem} \label{thm:P2-lower} If $G$ is a graph, then $\mu(G\boxtimes P_2) \ge \max\{n(G), 2\mu(G)\}$. \end{theorem} \begin{proof} Assume that $V(P_2)=\{p,q\}$ and that $X$ is a $\mu$-set of $G$. We prove the statement by showing that both $S_1 = V(G) \times \{p\}$ and $S_2 = X\times V(P_2)$ are mutual-visibility sets in $G\boxtimes P_2$. Let $(g,p)$ and $(g',p)$, with $g\neq g'$, be two arbitrary distinct vertices from $S_1$. Consider a shortest $g,g'$-path $P_G$ in $G$. Let the consecutive vertices of $P_G$ be $g=g_0, g_1, \ldots, g_k=g'$. Since $g\neq g'$ we get $k\ge 1$. If $k=1$, then $(g,p)$ and $(g',p)$ are adjacent and there is nothing to prove. If $k\ge 2$, then the vertices $$(g,p) = (g_0,p), (g_1,q), \ldots, (g_{k-1},q), (g_k, p) = (g', p) $$ induce a shortest $(g,p),(g',p)$-path $Q$ in $G\boxtimes P_2$. Clearly, no internal vertex of $Q$ is in $S_1$, hence we conclude that $S_1$ is a mutual-visibility set of $G\boxtimes P_2$. Concerning $S_2$, let $(g,r)$ and $(g',r)$, with $g\neq g'$, be two distinct vertices from $S_2$. We may without loss of generality assume that $r = p$. Since $g,g'\in X$, there exists a shortest $g,g'$-path $P$ in $G$ such that the internal vertices of $P$ are not in $X$.
Hence, the copy of $P$ in the $p$-layer (induced by $V(P)\times \{p\}$) is a shortest $(g,p),(g',p)$-path in $G\boxtimes P_2$ whose internal vertices are not in $S_2$. Let $(g,p)$ and $(g',q)$ be two vertices from $S_2$ located on different layers of $G\boxtimes P_2$. If $g=g'$, these two vertices are adjacent and nothing needs to be proved. When $g\neq g'$, consider again the shortest $(g,p),(g',p)$-path from before. Now, from this path remove the last vertex and replace it with $(g',q)$. The resulting path $P'$ is a shortest $(g,p),(g',q)$-path in $G\boxtimes P_2$ whose internal vertices are not in $S_2$. Hence, $S_2$ is a mutual-visibility set of $G\boxtimes P_2$. \hfill $\square$ \bigskip \end{proof} Theorem~\ref{thm:P2-lower} can be improved for $(\mu, \mu_{\rm t})$-graphs as we next show. \begin{theorem}\label{thm:P2-G-lower} If $G$ is a $(\mu, \mu_{\rm t})$-graph, then $\mu(G\boxtimes P_2) \ge \mu(G) + n(G)$. \end{theorem} \begin{proof} If $G$ is a complete graph, then $G\boxtimes P_2$ is also complete, and so $\mu(G\boxtimes P_2) = 2 n(G) = \mu(G) + n(G)$. Otherwise, consider a $\mu_{\rm t}$-set (which is also a $\mu$-set) of $G$ and a total mutual-visibility set $X$ for $P_2$ consisting of a single vertex. Then we can apply the first inequality of Theorem~\ref{thm:mut-lb} and~\eqref{eq:mut-lb-perfect} as follows: $$ \begin{array}{rcl} \mu(G\boxtimes P_2) & \ge & \mu(G)n(P_2) + |X|n(G) - \mu(G)|X| \\ & = & \mu(G)\cdot 2 + n(G) - \mu(G) \\ & = & \mu(G) + n(G). \end{array}$$ \hfill $\square$ \bigskip \end{proof} Next we show that the lower bound of Theorem~\ref{thm:P2-G-lower} is attained by block graphs. To do so, we need the following lemma. \begin{lemma} \label{lem:cut} Let $x$ be a cut vertex of a graph $G$. Then there exists a $\mu$-set of $G\boxtimes K_2$ which contains at most one copy of $x$ in the two $G$-layers. \end{lemma} \begin{proof} Let $S$ be a $\mu$-set of $G\boxtimes K_2$ and suppose that $x', x''\in S$, where $x'$ and $x''$ are the copies of $x$ in the $G$-layers.
Let $H$ and $H'$ be two components of $(G\boxtimes K_2) \setminus \{x',x''\}$. Then $S\cap V(H) = \emptyset$ or $S\cap V(H') = \emptyset$, say $S\cap V(H) = \emptyset$, for otherwise two vertices of $S$ lying in different components would not be $S$-visible, since every path between them passes through $x'$ or $x''$. Now the set $S' = (S \cup \{z\})\setminus \{x'\}$, where $z$ is an arbitrary vertex of $H$, is also a $\mu$-set of $G\boxtimes K_2$. \hfill $\square$ \bigskip \end{proof} \begin{theorem} \label{thm:P2-block-strong} If $G$ is a block graph, then $\mu(G\boxtimes P_2) = n(G) + \mu(G)$. \end{theorem} \begin{proof} Let $X$ be the set of the cut vertices of $G$. By Lemma~\ref{lem:cut} there exists a $\mu$-set $S$ with at most one copy of each vertex in $X$. We show that the set $S$ which includes one copy of any vertex $v$ of $G$ if $v$ is a cut vertex and two copies of $v$ otherwise is a mutual-visibility set. This proves the statement. Consider two vertices $u,v$ of the same copy $G'$ of $G$. If $u$ and $v$ belong to the same block, then they are adjacent, since each block of a block graph is a clique by definition. Otherwise, consider the shortest $u,v$-path in $G'$: it is unique and its internal vertices are cut vertices of $G'$. Since for each vertex in $X$ only one copy is in $S$, there exists a shortest $u,v$-path in $G\boxtimes P_2$ without internal vertices in $S$. Then $u$ and $v$ are $S$-visible. Assume that $u$ and $v$ do not belong to the same copy of $G$. If they belong to two copies of the same block, then they are adjacent. Otherwise, as above, there exists a shortest $u,v$-path in $G\boxtimes P_2$ passing through copies of cut vertices of $G$ and without internal vertices in $S$. \hfill $\square$ \bigskip \end{proof} We conclude the paper by demonstrating sharpness of the bound of Theorem~\ref{thm:P2-lower}. \begin{theorem} \label{thm:P2-cycle-strong} If $n\ge 3$, then $$ \mu(C_n\boxtimes P_2) = \begin{cases} 6; & n\le 6,\\ n; & \mbox{otherwise}. \\ \end{cases} $$ \end{theorem} \begin{proof} Recall from~\cite{DiStefano-2022} that $\mu(C_n)= 3$ for $n\geq 3$.
Hence, Theorem~\ref{thm:P2-lower} implies that $\mu(C_n\boxtimes P_2) \ge 6$ when $n\le 6$ and $\mu(C_n\boxtimes P_2) \ge n$ for $n\ge 6$. It can be checked that $\mu(C_n\boxtimes P_2) \le 6$ when $n\leq 6$. Hence $\mu(C_n\boxtimes P_2) = 6$ in these cases. Assume in the rest that $n > 6$, which means that $\mu(C_n\boxtimes P_2) \ge n$. Let $S$ be a $\mu$-set of $C_n\boxtimes P_2$. We need to show that $|S| \le n$. Let $v_0,v_1,\ldots, v_{n-1}$ and $v'_0,v'_1,\ldots, v'_{n-1}$ be the vertices of the two $C_n$-layers. A pair $v_i, v'_i$ is called a \emph{separating pair}. Since $|S| \ge n\ge 7$, the set $S$ cannot contain three separating pairs, for otherwise a vertex from $S$ which is not in the three fixed separating pairs could not be in visibility with all the vertices in those pairs. Hence $|S| \leq n+2$. If $|S| \in \{n+1, n+2\}$, then $S$ contains at least one separating pair $v_j, v'_j$. Then there exists a vertex in $\{v_{j-1}, v'_{j-1}\} \cap S$ and a vertex in $\{v_{j+1}, v'_{j+1}\} \cap S$ which are not $S$-visible. We conclude that $|S| \le n$. \hfill $\square$ \bigskip \end{proof} \section{Concluding remarks and future work}\label{sec:conclusion} This work suggests some further research directions. We have shown that block graphs and certain cographs are all $(\mu, \mu_{\rm t})$-graphs. Notice that cographs can be generated by using true and false twins, and that block graphs can be generated by using true twins and pendant vertices. A superclass of both cographs and block graphs is that formed by \emph{distance-hereditary graphs}. In fact, these graphs can be generated by using true twins, false twins, and pendant vertices. It would be interesting to characterize all the distance-hereditary graphs that are $(\mu, \mu_{\rm t})$-graphs. We leave open the general question of characterizing the larger class $\mathcal{G}$ of graphs formed by $(\mu, \mu_{\rm t})$-graphs.
In addition, another characterization that would be of interest concerns finding all graphs $G$ for which $\mu_{\rm t}(G)=0$. Concerning specific results, in view of Theorem~\ref{thm:path-product-multi} (when we consider $k=2$), it would be interesting to study $\mu(T\boxtimes T')$ for any two trees $T$ and $T'$. Also, Theorem~\ref{thm:P2-G-lower} provides the lower bound $\mu(G\boxtimes P_2) \ge \mu(G) + n(G)$ for each $(\mu, \mu_{\rm t})$-graph $G$, whereas Theorem~\ref{thm:P2-block-strong} states that the equality is attained in the case of block graphs. We wonder if this equality holds for each $(\mu, \mu_{\rm t})$-graph. Finally, another interesting point is studying other possible variations of the general concept of mutual-visibility sets and their relationships, as well as relationships with the concept of general position sets. \section*{Acknowledgments} S. Cicerone and G. Di Stefano were partially supported by the European project ``Geospatial based Environment for Optimisation Systems Addressing Fire Emergencies'' (GEO-SAFE), contract no. H2020-691161. S. Klav\v{z}ar was partially supported by the Slovenian Research Agency (ARRS) under the grants P1-0297, J1-2452, and N1-0285. I. G. Yero has been partially supported by the Spanish Ministry of Science and Innovation through the grant PID2019-105824GB-I00. Moreover, this investigation was partially developed while I. G. Yero was visiting the University of Ljubljana supported by ``Ministerio de Educaci\'on, Cultura y Deporte'', Spain, under the ``Jos\'e Castillejo'' program for young researchers (reference number: CAS21/00100).
Q: Pusher connection - socketId is null

    Pusher pusher = new Pusher(APP_KEY);
    String socketId = pusher.getConnection().getSocketId();

The socketId is null when trying to connect to Pusher. The URI the Pusher client is using to make a websocket call is ws://ws.pusherapp.com:80/app/{app Key}?client=java-client&protocol=5&version=0.3.3. This returns a null socketId. But, if I make a websocket connection using a test client with the same URI, I get a valid socketId. What am I doing wrong?

A: The socketId won't be set until the connection has been established. Please see the onConnectionStateChange interface method here: https://github.com/pusher/pusher-websocket-java#api-overview

Here's the code updated specifically to get the socketId:

    // Create a new Pusher instance
    Pusher pusher = new Pusher(YOUR_APP_KEY);

    // The listener fires once the connection reaches the CONNECTED state,
    // at which point the socket id is available.
    pusher.connect(new ConnectionEventListener() {
        @Override
        public void onConnectionStateChange(ConnectionStateChange change) {
            String socketId = pusher.getConnection().getSocketId();
            System.out.println("The socketId is: " + socketId);
        }

        @Override
        public void onError(String message, String code, Exception e) {
            System.out.println("There was a problem connecting!");
        }
    }, ConnectionState.CONNECTED);
Q: jqGrid columns getting cut off in IE (scroll bar shows up) but fine (no scroll bar) in Firefox

I am using jqGrid with a large number of columns. I have an issue where some of the columns are cut off in Internet Explorer (I am testing on IE9). All columns are shown fine in Firefox. Can anyone think of a reason or a workaround for this issue? I have put a screenshot below of the last columns in Firefox versus IE. IE does pop up a horizontal scrollbar, but this is a pain because I am showing 100s of rows, so you have to scroll down a lot in the browser and then scroll across to see the last columns. I am trying to see if there is any way to get rid of that grid horizontal scroll bar.

Here is Firefox, where all 16 columns are shown. Here is Internet Explorer, where only 14 full columns are shown and the 15th column (ResourceType) is cut off half way through. Any suggestions on how to get all columns to show up in Internet Explorer?

A: I suppose that you have the same problem as I described in the bug report. In that case you should change the last line of the internal cellWidth function from

    return testCell !== 5;

to

    return Math.abs(testCell - 5) > 0.1;

You can do this in jquery.jqGrid.src.js or just try with the file. It fixed the problem with the wrong calculation of the grid width in my case.

A: I used jqGrid 4.5.2 and Chrome Version 28.0.1500.71. The solution listed above did not work in my environment. I do not know why, so I tried logging testCell !== 5, and it returned false in both Firefox and Chrome. Finally, I added return false in cellWidth, ignoring the comparison result, and it works:

    cellWidth : function () {
        var $testDiv = $("<div class='ui-jqgrid' style='left:10000px'><table class='ui-jqgrid-btable' style='width:5px;'><tr class='jqgrow'><td style='width:5px;'></td></tr></table></div>"),
            testCell = $testDiv.appendTo("body")
                .find("td")
                .width();
        $testDiv.remove();
        return false;
    },

Hope it can help.
These loans are totally free from the hassles of credit checks. If you are thinking about an easy way to apply for no credit check loans, you should not think much and opt for the online registration procedure. Here, you just need to fill in a free-of-cost application form and submit it on the spot to the lender. This way, you will not be asked to go through the formalities of additional paperwork or visits to the lender's office. Also, there is no need to pay any additional applying charges once your loan request gets processed. Following that, your loan will easily get approved and you will soon get hold of the entire loan amount, which will be credited directly to your checking account.
Win 'Cadillac Records' on Blu-ray!

Movieweb Contributor — March 9, 2009 in DVD, Blu-ray Release Dates

Cadillac Records will be released on DVD and Blu-ray on March 10 and we have to celebrate this new release. We have another contest lined up and not only are we giving away copies of the Cadillac Records Blu-ray disc, but we're also giving away copies of the film's soundtrack CD as well. These big prizes will surely go fast, so enter this contest today.

Winners Receive:

- Cadillac Records Blu-ray disc
- Cadillac Records soundtrack CD

CLICK HERE to enter this big giveaway today.

Cadillac Records chronicles the rise of Leonard Chess' (Adrien Brody) Chess Records and its recording artists including Muddy Waters (Jeffrey Wright), Little Walter (Columbus Short), Chuck Berry (Mos Def), Willie Dixon (Cedric The Entertainer) and the great Etta James (Beyonce Knowles). In this tale of sex, violence, race and rock and roll in Chicago of the 1950s and 60s, the film follows the exciting but turbulent lives of some of America's greatest musical legends.

Cadillac Records Soundtrack

Leonard Chess co-founded Chess Records, the pre-eminent blues label of the Fifties and Sixties, with his brother Phil. Originally, the Chess brothers, Polish immigrants whose family settled in Chicago, formed Aristocrat Records in 1947. The Chess label followed two years later, and with it a mind-boggling flood of blues, R&B and rock and roll talent that included Muddy Waters, Howlin' Wolf, Bo Diddley, Chuck Berry, Willie Dixon, Etta James and Little Walter. While Phil focused on jazz, Leonard Chess homed in on roots music, making Chess the greatest repository of black music at mid-century. It was under Chess' tutelage that Muddy Waters' electric blues fomented a revolution that led directly to rock and roll in the person of Chuck Berry, another Chess artist.
The movie follows the rise and fall of the company, which launched these legendary careers and made history. CADILLAC RECORDS stars Adrien Brody, Jeffrey Wright, Beyonce Knowles and Mos Def.

- Standard CD includes (13) songs.
- 2-CD Deluxe Edition includes (26) songs.
- First single At Last performed by Beyonce, as seen at THE NEIGHBORHOOD BALL for the Inauguration of Barack Obama.
- Soundtrack features Beyonce, Nas, Elvis, Mos Def, Jeffrey Wright, Buddy Guy, Mary Mary, Raphael Saadiq and more!

1. I'm A Man (Jeffrey Wright)
2. At Last (Beyonce)
3. No Particular Place To Go (Mos Def)
4. I'm Your Hoochie Coochie Man (Jeffrey Wright)
5. Once In A Lifetime (Beyonce)
6. Let's Take A Walk (Raphael Saadiq)
7. 6 O'Clock Blues (Solange)
8. Nadine (Mos Def)
9. The Sound (Mary Mary)
10. Last Night (Little Walter)
11. I'd Rather Go Blind (Beyonce)
12. My Babe (Columbus Short)
13. Bridging The Gap (Nas featuring Olu Dara)
# Fractions to Decimals

Packet includes: 72 practice problems and an answer key.

This packet helps students practice converting fractions to decimals. Many of these fractions are common ones that the students will probably eventually memorize, but the packet contains a wide range of fractions (with denominators up to 12) that students can convert by dividing the numerator by the denominator (e.g., 1/5 = 1 divided by 5 = .2).

Sample Problem(s):

Convert the fraction to a decimal:

Simple: $\dfrac{1}{2}$ = ____________

Advanced: $\dfrac{3}{8}$ = __________

Notes:

Converting fractions to decimals requires understanding how to divide and use decimals in the answers (not just remainders).

Video lesson(s) showing you how to do this type of problem:
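The conversion rule described above (divide the numerator by the denominator) can be sketched in a few lines of Python; the helper name below is illustrative, not part of the packet:

```python
# Convert a fraction to a decimal by dividing the numerator by the
# denominator, e.g. 1/5 = 1 divided by 5 = 0.2.
def fraction_to_decimal(numerator, denominator):
    return numerator / denominator

print(fraction_to_decimal(1, 2))  # simple sample problem: 0.5
print(fraction_to_decimal(3, 8))  # advanced sample problem: 0.375
print(fraction_to_decimal(1, 5))  # 0.2
```

Note that for denominators such as 3 or 7 the division never terminates (the decimal repeats), so the printed result is a rounded value rather than an exact one.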
Lana: "I'm Always Open To Go Back to WWE and Tell Compelling Stories and Same With AEW" BY Derek Stoughton – ON October 23, 2022 IN News, WWE Former WWE superstar CJ Perry, known as Lana in WWE, was recently interviewed by In The Kliq Podcast to promote her upcoming role on VH1's "The Surreal Life." She discussed a number of topics, including a return to professional wrestling. Below are some highlights from the interview: Lana On returning to professional wrestling: "I'm open to everything in life. If I put my dreams in a box, I wouldn't be where I am today. So, I mean, Triple H, he's a genius. Stephanie's a genius. I got hired by Triple H. He paired me with Miro and helped really cultivate that, in 2014, that story and that gimmick and those characters. He really helped me develop the Ravishing Russian, so I think he has an incredible creative mind. I think he is an incredible storyteller and entertaining. If the story is right, if it makes sense, I'm always open to go back to WWE and tell compelling stories and same with AEW. I'm totally open to that too. I love working with my husband and creating and telling stories." "My biggest thing is like, I don't want to do anything mediocre. I don't want to be average. I think that's what Dennis Rodman (on Surreal Life) was saying to me is like, don't be good, do something that you're great at. So if I come back, when I come back to wrestling, I want it to be a great story and be compelling. So until then, I don't have to go back until I feel like okay, this is going to be great. I look forward to it because there's nothing like the wrestling fans and having that emotional connection with the crowd and with the fans. I love our fans. There's nothing like it." Lana On the Rusev Day storyline: "The simplest way of putting it for all the listeners that might not be as familiar with wrestling is Vince McMahon was the director, the Steven Spielberg of our show. Just like any television show or movies, there's casting. 
It comes down to the executive of the network to the showrunner, and if they see you, if they want to cast you as a villain, you know, that's their choice. I think at the end of the day, Vince loves Miro as a villain. So that was really the bottom line of the struggle was he wanted him to be his Bulgarian Brute, 300 pounds, crazy, killing. It was his company, still is, and that was his creative vision, and I think that was always the conflict of it all, really, to come down to the bottom line. I can have a ton of opinions, but it's show business at the end of the day." Lana On being put through tables multiple weeks in a row: "Ironically, I really enjoyed it. I mean, yeah, it definitely hurt. You're going through a commentary table, which is much thicker than a normal table. A beautiful Samoan Dragon is dropping you and landing on you. So there's nothing that doesn't hurt about it. But I mean, that's why I wrestled. It is painful, but I love it. There's nothing like it in the world." [h/t to WrestlingNews.co for the transcription]
Birdies Bistro

Birdies Bistro sits on the edge of the RSPB's Hayle nature reserve in Lelant and gives prime views over the estuary. We are open 7 days a week and welcome both regulars and visitors to the area. Dom has worked at a variety of high profile establishments that include Harrods, Raymond Blanc's Le Manoir aux Quat'Saisons and Ferrari. He also worked with the prestigious Salisbury group, who own gastro pubs in Buckinghamshire, where he created British colonial food made with foraged ingredients and local seasonal produce. Consequently, he is passionate about real food and is a keen supporter of the county's smaller food producers. He says: "Cornwall has a fantastic variety of local seasonal ingredients that are full of flavour. Our local suppliers, from dairy through to game, are excellent and it's a joy to create menus using their produce." Dom continued, "It has long been a dream of ours to open our own restaurant and to find such a unique venue allowed the dream to come true." The bistro serves classic English dishes with an imaginative difference, such as bubble and squeak served with bacon, poached egg and hollandaise sauce. Birdies Bistro has a bright, airy, vintage style with stunning views over the estuary that change as the tide comes and goes. The fence surrounding the garden has been customised with large viewing holes at both child and adult level, to ensure that birds and other wildlife can be watched with minimal disturbance.

http://www.birdiesbistro.co.uk/index.html
Phone: 01736 759307
TR27 6JG
Popular Science Monthly/Volume 59/May 1901/Obituary of William Jay Youmans (1901)

The death of Dr. William Jay Youmans is a personal loss not only to his many friends, but also to the thousands of those who knew him only as editor of this journal. Youmans was born near Saratoga on October 14, 1838, and the boyhood on his father's farm gave him the training which has so often led to the elevation of public and professional life in this country. He was descended, as his name witnesses, from the British Yeomanry, and the sterling stock that settled in New England was typified in his person and character. He loved his home in the country, and had purchased a farm nearby, to which it was his intention to retire to pass the years of rest that he had so well earned. After leaving the home farm at the age of seventeen, Youmans studied under his brother, the late Dr. E. L. Youmans, and later at Yale, Columbia and New York Universities, and in London under Huxley. He practised medicine for several years in Minnesota, and in 1872 joined his brother in New York to establish the Popular Science Monthly. For twenty-eight years his life was devoted to this journal, first in association with his brother—who was seventeen years the older, and died in 1887—and afterwards as editor-in-chief. The two brothers not only edited the journal, but as advisers of the house of Appleton, gave them their high standing as publishers of scientific books in the renaissance of science based on the doctrine of evolution. The teachings of Spencer, Darwin and other great leaders were for them a religion to which their lives were consecrated. Their influence through this journal and other publications of the Appletons was great and permanent.
Youmans died at Mount Vernon on April 10 from typhoid fever, after a ten days' illness. His life was devoted with rare singleness of purpose to the diffusion of science; it was a privilege to know him; he was gentle, kind and noble.
{"url":"https:\/\/support.bioconductor.org\/p\/p132900\/","text":"VST Failing Error in estimateDispersionsFit(object, quiet = TRUE, fitType)\n1\n0\nEntering edit mode\nMia \u25b4 10\n@mia-24145\nLast seen 11 months ago\n\nHi all!\n\nFirst post here ever! Never thought I'd be confused enough and not find an answer on the internet for my problem, lol. Anyway!\n\nI am using a public dataset of cancer brain samples and want to create a heatmap of sample distances as according to: http:\/\/bioconductor.org\/packages\/devel\/bioc\/vignettes\/DESeq2\/inst\/doc\/DESeq2.html#heatmap-of-the-sample-to-sample-distances\n\nMy code is the following:\n\n\nde = DESeqDataSetFromMatrix(countData = exprs(recountDataCombat),\ncolData = pData(recountDataCombat),\ndesign = formula)\n\nde <- estimateSizeFactors(de)\nde <- estimateDispersionsGeneEst(de)\ndispersions(de) <- mcols(de)$dispGeneEst glm_all_nb_combat <- nbinomWaldTest(de) res <- results(glm_all_nb_combat, name=resultsNames(glm_all_nb_combat)[2]) .myMAPlot(res, name=title_figure_2) # Sample Distances dds <- glm_all_nb_combat vsd <- vst(dds, blind=FALSE) But I get the following error Error in estimateDispersionsFit(object, quiet = TRUE, fitType) : all gene-wise dispersion estimates are within 2 orders of magnitude from the minimum value, and so the standard curve fitting techniques will not work. One can instead use the gene-wise estimates as final estimates: dds <- estimateDispersionsGeneEst(dds) dispersions(dds) <- mcols(dds)$dispGeneEst\n...then continue with testing using nbinomWaldTest or nbinomLRT\n\n\nI am confused by this since I did follow this advice in my code when I did\n\nde <- estimateSizeFactors(de)\nde <- estimateDispersionsGeneEst(de)\ndispersions(de) <- mcols(de)\\$dispGeneEst\nglm_all_nb_combat <- nbinomWaldTest(de)\n\n\nI want to avoid rlog at all costs since I have around 600~ samples. 
But my questions are:\n\nDoes anyone know a way around this?\n\nIf not, does anyone know where I can find the source code?\n\nIs VST not how I should be normalizing these samples since the dispersion estimates are so low?\n\nThank you in advance! :D Mia Altieri\n\nvst varianceStabilizingTransformation DESeq2 \u2022 312 views\n2\nEntering edit mode\n@mikelove\nLast seen 3 hours ago\nUnited States\n\nThe message above is saying that the data you are looking at is close to Poisson (it's not so easy to interpret, but basically all the dispersion estimates are close to 1e-8).\n\nNot sure why you may have near Poisson data, but in that case, the shifted logarithm is a good approach:\n\nldat <- normTransform(dds)\nplotPCA(ldat)\n...\n\n1\nEntering edit mode\n\nYes! That worked! I see your posts all the time and I think the world of you, thank you so much for helping me!\n\nAnd yeah, I am not sure why this is happening either, it's odd because it doesn't happen until after I run Combat, so I want to compare with other batch correction methods.","date":"2021-10-27 22:09:53"}
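The suggested shifted logarithm (`normTransform` in DESeq2) is just log2 of size-factor-normalized counts plus a pseudocount, with no dispersion fitting involved — which is why it works when VST's dispersion curve fit fails. A rough Python sketch of that arithmetic (helper names are my own; DESeq2's actual implementation is the R/Bioconductor source, and its size factors come from the median-of-ratios method sketched here):

```python
import math

def size_factors(counts):
    """Median-of-ratios size factors (DESeq-style sketch).
    counts: list of samples, each a list of gene counts."""
    n_genes = len(counts[0])
    # Per-gene log geometric mean across samples; genes with a zero are skipped.
    log_means = []
    for g in range(n_genes):
        vals = [s[g] for s in counts]
        if all(v > 0 for v in vals):
            log_means.append((g, sum(math.log(v) for v in vals) / len(vals)))
    factors = []
    for s in counts:
        # Median of log-ratios of this sample's counts to the gene geometric means.
        ratios = sorted(math.log(s[g]) - lm for g, lm in log_means)
        mid = len(ratios) // 2
        med = ratios[mid] if len(ratios) % 2 else 0.5 * (ratios[mid - 1] + ratios[mid])
        factors.append(math.exp(med))
    return factors

def norm_transform(counts, pseudocount=1.0):
    """log2(count / sizeFactor + pseudocount), mirroring DESeq2's normTransform."""
    sf = size_factors(counts)
    return [[math.log2(c / f + pseudocount) for c in sample]
            for sample, f in zip(counts, sf)]
```

In R, `normTransform(dds)` plays the role of `norm_transform(counts)` here; the output feeds directly into PCA or sample-distance heatmaps.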
Q: Using BATCH to get country and ISP name from an IP or Hostname?

I need to complete this job in Batch. I use the NSLOOKUP command to find the hostname of an IP, but how can I locate the country of that IP and the ISP name? Is it possible? Thank you for reading.

EDIT: For example, I want to do this in a bat code:

IP address: **.***.30.113
Hostname: **-***-30-113.rdns.blackberry.net
ISP: Research In Motion UK Limited
Country: United Kingdom

EDIT 2: I accept an external app to do it. I've tried "whois" from Sysinternals, but it's giving me bad info: if I put my local IP in the program, it gives me the city of the organization (Madrid, Spain), not the nearest location of my ISP provider (Valencia, Spain). If I geo-locate my IP on the internet it gives me the good info (Valencia, Spain). Any ideas about that?

A: You cannot get that information without an IP-to-Country table. Try http://www.maxmind.com/app/geolitecountry, it is free and works great.

A: Here is a batch file I coded that does an IP lookup using ipinfo.io's API. https://pastebin.com/Ucs2Kuqn

echo sUrl = "http://ipinfo.io/%ip%/json" > %temp%\%webclient%.vbs
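For illustration, the lookup the second answer's batch script performs can be sketched in Python. The `org` and `country` field names are what ipinfo.io's JSON responses use (`org` combines the ASN and the ISP name); `lookup()` needs network access, so the parsing is factored out to keep the sketch testable offline:

```python
import json
from urllib.request import Request, urlopen

def ipinfo_url(ip):
    """Same endpoint the batch script above queries."""
    return "http://ipinfo.io/%s/json" % ip

def parse_ipinfo(payload):
    """Pull the ISP ('org') and country out of an ipinfo.io JSON response."""
    data = json.loads(payload)
    return {"isp": data.get("org", "unknown"),
            "country": data.get("country", "unknown")}

def lookup(ip):
    # Requires network access; not exercised in the offline example below.
    with urlopen(Request(ipinfo_url(ip))) as resp:
        return parse_ipinfo(resp.read().decode("utf-8"))
```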
At Least 8 People Dead After Severe Storms Strike Southern U.S. At least eight people were killed and dozens wounded after storms ripped through the South over the weekend. The severe weather struck portions of Texas, Mississippi, Georgia, Louisiana, and Arkansas by Sunday afternoon leaving around 90,000 without power. Texas officials said a tornado flattened nearly all of the city of Franklin, which is around 125 miles south of Dallas leaving 55 homes and a church destroyed among other structures. Two children died around 100 miles northeast of Franklin after a tree fell on a car amid the severe weather. Flooding left at least two people dead in Louisiana, among them a child who drowned in a canal during the flooding. A 95-year-old died in Mississippi after a tree fell on his trailer, and an additional two people died in separate locations in Texas. An Alabama county official also died after being hit by a car while he was clearing away trees near the Birmingham area. Heavy rain and flooding ripped through Vicksburg, Mississippi, though no one was reported injured.
package com.aspose.cells.examples.articles;

import com.aspose.cells.AbstractCalculationEngine;
import com.aspose.cells.CalculationData;
import com.aspose.cells.CalculationOptions;
import com.aspose.cells.Workbook;
import com.aspose.cells.Worksheet;

public class ImplementDirectCalculationOfCustomFunction {

    // Custom engine that resolves MyCompany.CustomFunction() during formula calculation.
    // (The original listing referenced an undefined CustomEngine and instantiated an
    // abstract inner class, which does not compile; this version is self-contained.)
    static class CustomEngine extends AbstractCalculationEngine {
        @Override
        public void calculate(CalculationData data) {
            if ("MyCompany.CustomFunction".equals(data.getFunctionName())) {
                data.setCalculatedValue("Aspose.Cells.");
            }
        }
    }

    public static void main(String[] args) throws Exception {
        // Create a workbook and access the first worksheet
        Workbook wb = new Workbook();
        Worksheet ws = wb.getWorksheets().get(0);

        // Add some text in cell A1
        ws.getCells().get("A1").putValue("Welcome to ");

        // Create calculation options with the custom engine
        CalculationOptions opts = new CalculationOptions();
        opts.setCustomEngine(new CustomEngine());

        // This line shows how you can call your own custom function without
        // writing it into any worksheet cell. After execution it returns
        // "Welcome to Aspose.Cells."
        Object ret = ws.calculateFormula("=A1 & MyCompany.CustomFunction()", opts);

        // Print the calculated value on the console
        System.out.println("Calculated Value: " + ret.toString());
    }
}
package org.apache.phoenix.end2end; import static org.apache.phoenix.util.TestUtil.TEST_PROPERTIES; import static org.apache.phoenix.util.TestUtil.assertResultSet; import static org.junit.Assert.assertEquals; import static org.junit.Assert.assertFalse; import static org.junit.Assert.assertTrue; import static org.junit.Assert.fail; import java.sql.Connection; import java.sql.DriverManager; import java.sql.PreparedStatement; import java.sql.ResultSet; import java.sql.SQLException; import java.util.Properties; import org.apache.phoenix.jdbc.PhoenixDatabaseMetaData; import org.apache.phoenix.schema.AmbiguousColumnException; import org.apache.phoenix.util.PropertiesUtil; import org.apache.phoenix.util.QueryBuilder; import org.apache.phoenix.util.TestUtil; import org.junit.Test; public class AggregateIT extends BaseAggregateIT { @Test public void testGroupByWithAliasWithSameColumnName() throws SQLException { Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); Connection conn = DriverManager.getConnection(getUrl(), props); String tableName1 = generateUniqueName(); String tableName2 = generateUniqueName(); String tableName3 = generateUniqueName(); String ddl = "create table " + tableName1 + " (pk integer primary key, col integer)"; conn.createStatement().execute(ddl); ddl = "create table " + tableName2 + " (pk integer primary key, col integer)"; conn.createStatement().execute(ddl); ddl = "create table " + tableName3 + " (notPk integer primary key, col integer)"; conn.createStatement().execute(ddl); conn.createStatement().execute("UPSERT INTO " + tableName1 + " VALUES (1,2)"); conn.createStatement().execute("UPSERT INTO " + tableName2 + " VALUES (1,2)"); conn.createStatement().execute("UPSERT INTO " + tableName3 + " VALUES (1,2)"); conn.createStatement().executeQuery("select " + tableName1 + ".pk as pk from " + tableName1 + " group by pk"); conn.createStatement().executeQuery("select " + tableName1 + ".pk as pk from " + tableName1 + " group by " + tableName1 + 
".pk"); conn.createStatement().executeQuery("select " + tableName1 + ".pk as pk from " + tableName1 + " as t group by t.pk"); conn.createStatement().executeQuery("select " + tableName1 + ".col as pk from " + tableName1); conn.createStatement() .executeQuery("select " + tableName1 + ".pk as pk from " + tableName1 + " join " + tableName3 + " on (" + tableName1 + ".pk=" + tableName3 + ".notPk) group by pk"); try { conn.createStatement().executeQuery("select " + tableName1 + ".col as pk from " + tableName1 + " group by pk"); fail(); } catch (AmbiguousColumnException e) {} try { conn.createStatement().executeQuery("select col as pk from " + tableName1 + " group by pk"); fail(); } catch (AmbiguousColumnException e) {} try { conn.createStatement() .executeQuery("select " + tableName1 + ".pk as pk from " + tableName1 + " join " + tableName2 + " on (" + tableName1 + ".pk=" + tableName2 + ".pk) group by pk"); fail(); } catch (AmbiguousColumnException e) {} conn.close(); } @Test public void testGroupByCoerceExpressionBug3453() throws Exception { final Connection conn = DriverManager.getConnection(getUrl()); try { //Type is INT String intTableName=generateUniqueName(); String sql="CREATE TABLE "+ intTableName +"("+ "ENTITY_ID INTEGER NOT NULL,"+ "CONTAINER_ID INTEGER NOT NULL,"+ "SCORE INTEGER NOT NULL,"+ "CONSTRAINT TEST_PK PRIMARY KEY (ENTITY_ID DESC,CONTAINER_ID DESC,SCORE DESC))"; conn.createStatement().execute(sql); conn.createStatement().execute("UPSERT INTO "+intTableName+" VALUES (1,1,1)"); conn.commit(); sql="select DISTINCT entity_id, score from ( select entity_id, score from "+intTableName+" limit 1)"; ResultSet rs=conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{1,1}}); conn.createStatement().execute("UPSERT INTO "+intTableName+" VALUES (2,2,2)"); conn.createStatement().execute("UPSERT INTO "+intTableName+" VALUES (3,3,3)"); conn.commit(); sql="select DISTINCT entity_id, score from ( select entity_id, score from "+intTableName+" limit 
3) order by entity_id"; rs=conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{1,1},{2,2},{3,3}}); sql="select DISTINCT entity_id, score from ( select entity_id, score from "+intTableName+" limit 3) order by entity_id desc"; rs=conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{3,3},{2,2},{1,1}}); //Type is CHAR String charTableName=generateUniqueName(); sql="CREATE TABLE "+ charTableName +"("+ "ENTITY_ID CHAR(15) NOT NULL,"+ "CONTAINER_ID INTEGER NOT NULL,"+ "SCORE INTEGER NOT NULL,"+ "CONSTRAINT TEST_PK PRIMARY KEY (ENTITY_ID DESC,CONTAINER_ID DESC,SCORE DESC))"; conn.createStatement().execute(sql); conn.createStatement().execute("UPSERT INTO "+charTableName+" VALUES ('entity1',1,1)"); conn.createStatement().execute("UPSERT INTO "+charTableName+" VALUES ('entity2',2,2)"); conn.createStatement().execute("UPSERT INTO "+charTableName+" VALUES ('entity3',3,3)"); conn.commit(); sql="select DISTINCT entity_id, score from ( select entity_id, score from "+charTableName+" limit 3) order by entity_id"; rs=conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{"entity1",1},{"entity2",2},{"entity3",3}}); sql="select DISTINCT entity_id, score from ( select entity_id, score from "+charTableName+" limit 3) order by entity_id desc"; rs=conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{"entity3",3},{"entity2",2},{"entity1",1}}); } finally { if(conn!=null) { conn.close(); } } } @Test public void testNestedGroupedAggregationWithBigInt() throws Exception { Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); String tableName = generateUniqueName(); try(Connection conn = DriverManager.getConnection(getUrl(), props);) { String createQuery="CREATE TABLE "+tableName+" (a BIGINT NOT NULL,c BIGINT NOT NULL CONSTRAINT PK PRIMARY KEY (a, c))"; String updateQuery="UPSERT INTO "+tableName+"(a,c) VALUES(4444444444444444444, 5555555555555555555)"; String query="SELECT a FROM 
(SELECT a, c FROM "+tableName+" GROUP BY a, c) GROUP BY a, c"; conn.prepareStatement(createQuery).execute(); conn.prepareStatement(updateQuery).execute(); conn.commit(); PreparedStatement statement = conn.prepareStatement(query); ResultSet rs = statement.executeQuery(); assertTrue(rs.next()); assertEquals(4444444444444444444L,rs.getLong(1)); assertFalse(rs.next()); } } @Test public void testAvgGroupByOrderPreservingWithStats() throws Exception { Properties props = PropertiesUtil.deepCopy(TEST_PROPERTIES); Connection conn = DriverManager.getConnection(getUrl(), props); String tableName = generateUniqueName(); QueryBuilder queryBuilder = new QueryBuilder() .setSelectExpression("COUNT(*)") .setFullTableName(PhoenixDatabaseMetaData.SYSTEM_STATS_NAME) .setWhereClause(PhoenixDatabaseMetaData.PHYSICAL_NAME + " ='" + tableName + "'"); ResultSet rs = executeQuery(conn, queryBuilder); assertTrue(rs.next()); assertEquals(0,rs.getInt(1)); initAvgGroupTable(conn, tableName, PhoenixDatabaseMetaData.GUIDE_POSTS_WIDTH + "=20 "); testAvgGroupByOrderPreserving(conn, tableName, 13); rs = executeQuery(conn, queryBuilder); assertTrue(rs.next()); assertEquals(13,rs.getInt(1)); conn.setAutoCommit(true); conn.createStatement().execute("DELETE FROM " + "\"SYSTEM\".\"STATS\""); rs = executeQuery(conn, queryBuilder); assertTrue(rs.next()); assertEquals(0,rs.getInt(1)); TestUtil.doMajorCompaction(conn, tableName); rs = executeQuery(conn, queryBuilder); assertTrue(rs.next()); assertEquals(13,rs.getInt(1)); testAvgGroupByOrderPreserving(conn, tableName, 13); conn.createStatement().execute("ALTER TABLE " + tableName + " SET " + PhoenixDatabaseMetaData.GUIDE_POSTS_WIDTH + "=100"); testAvgGroupByOrderPreserving(conn, tableName, 6); conn.createStatement().execute("ALTER TABLE " + tableName + " SET " + PhoenixDatabaseMetaData.GUIDE_POSTS_WIDTH + "=null"); testAvgGroupByOrderPreserving(conn, tableName, 4); } @Override protected void testCountNullInNonEmptyKeyValueCF(int columnEncodedBytes) throws 
Exception { try (Connection conn = DriverManager.getConnection(getUrl())) { //Type is INT String intTableName=generateUniqueName(); String sql="create table " + intTableName + " (mykey integer not null primary key, A.COLA integer, B.COLB integer) " + "IMMUTABLE_ROWS=true, IMMUTABLE_STORAGE_SCHEME = ONE_CELL_PER_COLUMN, COLUMN_ENCODED_BYTES = " + columnEncodedBytes + ", DISABLE_WAL=true"; conn.createStatement().execute(sql); conn.createStatement().execute("UPSERT INTO "+intTableName+" VALUES (1,1)"); conn.createStatement().execute("UPSERT INTO "+intTableName+" VALUES (2,1)"); conn.createStatement().execute("UPSERT INTO "+intTableName+" VALUES (3,1,2)"); conn.createStatement().execute("UPSERT INTO "+intTableName+" VALUES (4,1)"); conn.createStatement().execute("UPSERT INTO "+intTableName+" VALUES (5,1)"); conn.commit(); sql="select count(*) from "+intTableName; QueryBuilder queryBuilder = new QueryBuilder() .setSelectExpression("COUNT(*)") .setFullTableName(intTableName); ResultSet rs = executeQuery(conn, queryBuilder); assertTrue(rs.next()); assertEquals(5, rs.getLong(1)); sql="select count(*) from "+intTableName + " where b.colb is not null"; queryBuilder.setWhereClause("B.COLB IS NOT NULL"); rs = executeQuery(conn, queryBuilder); assertTrue(rs.next()); assertEquals(1, rs.getLong(1)); sql="select count(*) from "+intTableName + " where b.colb is null"; queryBuilder.setWhereClause("B.COLB IS NULL"); rs = executeQuery(conn, queryBuilder); assertTrue(rs.next()); assertEquals(4, rs.getLong(1)); } } @Test public void testOrderByOptimizeForClientAggregatePlanBug4820() throws Exception { doTestOrderByOptimizeForClientAggregatePlanBug4820(false,false); doTestOrderByOptimizeForClientAggregatePlanBug4820(false,true); doTestOrderByOptimizeForClientAggregatePlanBug4820(true,false); doTestOrderByOptimizeForClientAggregatePlanBug4820(true,true); } private void doTestOrderByOptimizeForClientAggregatePlanBug4820(boolean desc ,boolean salted) throws Exception { Properties props = 
PropertiesUtil.deepCopy(TEST_PROPERTIES); Connection conn = null; try { conn = DriverManager.getConnection(getUrl(), props); String tableName = generateUniqueName(); String sql = "create table " + tableName + "( "+ " pk1 varchar not null , " + " pk2 varchar not null, " + " pk3 varchar not null," + " v1 varchar, " + " v2 varchar, " + " CONSTRAINT TEST_PK PRIMARY KEY ( "+ "pk1 "+(desc ? "desc" : "")+", "+ "pk2 "+(desc ? "desc" : "")+", "+ "pk3 "+(desc ? "desc" : "")+ " )) "+(salted ? "SALT_BUCKETS =4" : "split on('b')"); conn.createStatement().execute(sql); conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('a11','a12','a13','a14','a15')"); conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('a21','a22','a23','a24','a25')"); conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('a31','a32','a33','a34','a35')"); conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('b11','b12','b13','b14','b15')"); conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('b21','b22','b23','b24','b25')"); conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES ('b31','b32','b33','b34','b35')"); conn.commit(); sql = "select a.ak3 "+ "from (select pk1 ak1,pk2 ak2,pk3 ak3, substr(v1,1,1) av1,substr(v2,1,1) av2 from "+tableName+" order by pk2,pk3 limit 10) a "+ "group by a.ak3,a.av1 order by a.ak3 desc,a.av1"; ResultSet rs = conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{"b33"},{"b23"},{"b13"},{"a33"},{"a23"},{"a13"}}); sql = "select a.ak3 "+ "from (select pk1 ak1,pk2 ak2,pk3 ak3, substr(v1,1,1) av1,substr(v2,1,1) av2 from "+tableName+" order by pk2,pk3 limit 10) a "+ "group by a.ak3,a.av1 order by a.ak3,a.av1"; rs = conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{"a13"},{"a23"},{"a33"},{"b13"},{"b23"},{"b33"}}); sql = "select a.ak3 "+ "from (select pk1 ak1,pk2 ak2,pk3 ak3,substr(v1,1,1) av1,substr(v2,1,1) av2 from "+tableName+" order by pk2,pk3 limit 10) a 
"+ "where a.av1 = 'a' group by a.av1,a.ak3 order by a.ak3 desc"; rs = conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{"a33"},{"a23"},{"a13"}}); sql = "select a.ak3 "+ "from (select pk1 ak1,pk2 ak2,pk3 ak3,substr(v1,1,1) av1,substr(v2,1,1) av2 from "+tableName+" order by pk2,pk3 limit 10) a "+ "where a.av1 = 'a' group by a.av1,a.ak3 order by a.ak3"; rs = conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{"a13"},{"a23"},{"a33"}}); sql = "select a.ak3 "+ "from (select pk1 ak1,pk2 ak2,pk3 ak3,substr(v1,1,1) av1,substr(v2,1,1) av2 from "+tableName+" order by pk2,pk3 limit 10) a "+ "where a.av1 = 'b' and a.av2= 'b' group by CASE WHEN a.av1 > a.av2 THEN a.av1 ELSE a.av2 END,a.ak3,a.ak2 order by a.ak3 desc,a.ak2 desc"; rs = conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{"b33"},{"b23"},{"b13"}}); sql = "select a.ak3 "+ "from (select pk1 ak1,pk2 ak2,pk3 ak3,substr(v1,1,1) av1,substr(v2,1,1) av2 from "+tableName+" order by pk2,pk3 limit 10) a "+ "where a.av1 = 'b' and a.av2= 'b' group by CASE WHEN a.av1 > a.av2 THEN a.av1 ELSE a.av2 END,a.ak3,a.ak2 order by a.ak3,a.ak2 desc"; rs = conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{"b13"},{"b23"},{"b33"}}); tableName = generateUniqueName(); sql = "create table " + tableName + "( "+ " pk1 double not null , " + " pk2 double not null, " + " pk3 double not null," + " v1 varchar, " + " CONSTRAINT TEST_PK PRIMARY KEY ( "+ "pk1 "+(desc ? "desc" : "")+", "+ "pk2 "+(desc ? "desc" : "")+", "+ "pk3 "+(desc ? "desc" : "")+ " )) "+(salted ? 
"SALT_BUCKETS =4" : "split on(2.3)"); conn.createStatement().execute(sql); conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES (2.1,2.11,2.12,'e')"); conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES (2.2,2.21,2.23,'d')"); conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES (2.3,2.31,2.32,'c')"); conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES (2.4,2.41,2.42,'b')"); conn.createStatement().execute("UPSERT INTO "+tableName+" VALUES (2.5,2.51,2.52,'a')"); conn.commit(); sql = "select a.av1 "+ "from (select pk1 ak1,pk2 ak2,pk3 ak3, substr(v1,1,1) av1 from "+tableName+" order by pk1,pk2 limit 10) a "+ "where cast(a.ak1 as integer)=2 group by a.ak1,a.av1 order by a.av1"; rs = conn.prepareStatement(sql).executeQuery(); assertResultSet(rs, new Object[][]{{"a"},{"b"},{"c"},{"d"},{"e"}}); } finally { if(conn != null) { conn.close(); } } } }
Q: Write CMakeLists.txt for boost::mpi

The cmake file below is the source of the problem, because I can compile the code with mpic++ directly, without using cmake. Why doesn't the cmake file below work?

Current cmake file:

cmake_minimum_required(VERSION 2.8)
project(boost_mpi_cmake)
set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")
add_executable(test test.cpp)
find_package(Boost REQUIRED mpi system)
include_directories(${Boost_INCLUDE_DIRS})
target_link_libraries(test ${Boost_LIBRARIES})
find_package(MPI REQUIRED)
include_directories(${MPI_CXX_INCLUDE_PATH})
target_link_libraries(test ${MPI_CXX_LIBRARIES})

test.cpp:

#include <boost/mpi.hpp>
#include <iostream>
#include <string>

namespace mpi = boost::mpi;

int main() {
    mpi::environment env;
    mpi::communicator world;
    std::string s(env.processor_name());
    std::cout << s << "\n";
    return 0;
}

Error:

Undefined symbols for architecture x86_64:
  "boost::mpi::environment::processor_name[abi:cxx11]()", referenced from:
      _main in test.cpp.o
ld: symbol(s) not found for architecture x86_64
collect2: error: ld returned 1 exit status
make[2]: *** [test] Error 1
make[1]: *** [CMakeFiles/test.dir/all] Error 2
make: *** [all] Error 2

Compile without cmake works:

mpic++ test.cpp -lboost_mpi
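Two things are worth checking here (observations, not a confirmed diagnosis for this exact setup). First, `include_directories` only affects targets created after it is called, so it is safer to find dependencies before `add_executable` and to attach includes and libraries to the target directly. Second, the `[abi:cxx11]` in the missing symbol suggests the installed Boost was built against a different libstdc++ ABI than the code; adding `-D_GLIBCXX_USE_CXX11_ABI=0` is one way to test that hypothesis. A reordered sketch (target renamed to `demo`, since `test` can clash with CTest's reserved target name; `target_include_directories` needs CMake 2.8.11 or later):

```cmake
cmake_minimum_required(VERSION 2.8.11)
project(boost_mpi_cmake)

set(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -std=c++11")

# Find dependencies first, then create the target and attach
# includes/libraries to it directly.
find_package(Boost REQUIRED mpi system)
find_package(MPI REQUIRED)

add_executable(demo test.cpp)
target_include_directories(demo PRIVATE
    ${Boost_INCLUDE_DIRS} ${MPI_CXX_INCLUDE_PATH})
target_link_libraries(demo ${Boost_LIBRARIES} ${MPI_CXX_LIBRARIES})
```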
{"url":"https:\/\/huggingface.co\/datasets\/paws","text":"# Dataset: paws\n\nLanguages: en\nMultilinguality: monolingual\nSize Categories: 10K<n<100K, 100K<n<1M\nLanguage Creators: machine-generated\nAnnotations Creators: expert-generated, machine-generated\nSource Datasets: original\n\n# Dataset Card for PAWS: Paraphrase Adversaries from Word Scrambling\n\n### Dataset Summary\n\nPAWS: Paraphrase Adversaries from Word Scrambling\n\nThis dataset contains 108,463 human-labeled and 656k noisily labeled pairs that highlight the importance of modeling structure, context, and word order information for the problem of paraphrase identification. The dataset has two subsets, one based on Wikipedia and the other one based on the Quora Question Pairs (QQP) dataset.\n\nFor further details, see the accompanying paper: PAWS: Paraphrase Adversaries from Word Scrambling (https:\/\/arxiv.org\/abs\/1904.01130)\n\nPAWS-QQP is not available due to the license of QQP. It must be reconstructed by downloading the original data and then running our scripts to produce the data and attach the labels.\n\n### Languages\n\nThe text in the dataset is in English.\n\n## Dataset Structure\n\n### Data Instances\n\nBelow are two examples from the dataset:\n\nSentence 1 Sentence 2 Label\n(1) Although interchangeable, the body pieces on the 2 cars are not similar. Although similar, the body parts are not interchangeable on the 2 cars. 0\n(2) Katz was born in Sweden in 1947 and moved to New York City at the age of 1. Katz was born in 1947 in Sweden and moved to New York at the age of one. 1\n\nThe first pair has different semantic meaning while the second pair is a paraphrase. 
State-of-the-art models trained on existing datasets have dismal performance on PAWS (<40% accuracy); however, including PAWS training data for these models improves their accuracy to 85% while maintaining performance on existing datasets such as the Quora Question Pairs.\n\n### Data Fields\n\n\u2022 PAWS-Wiki Labeled (Final): containing pairs that are generated from both word swapping and back translation methods. All pairs have human judgements on both paraphrasing and fluency and they are split into Train\/Dev\/Test sections.\n\n\u2022 PAWS-Wiki Labeled (Swap-only): containing pairs that have no back translation counterparts and therefore they are not included in the first set. Nevertheless, they are high-quality pairs with human judgements on both paraphrasing and fluency, and they can be included as an auxiliary training set.\n\n\u2022 PAWS-Wiki Unlabeled (Final): Pairs in this set have noisy labels without human judgments and can also be used as an auxiliary training set. They are generated from both word swapping and back translation methods.\n\nAll files are in the tsv format with four columns:\n\nColumn Name Data\nid A unique id for each pair\nsentence1 The first sentence\nsentence2 The second sentence\n(noisy_)label (Noisy) label for each pair\n\nEach label has two possible values: 0 indicates the pair has different meaning, while 1 indicates the pair is a paraphrase.\n\n### Data Splits\n\nThe number of examples and the proportion of paraphrase (Yes%) pairs are shown below:\n\nData Train Dev Test Yes%\nLabeled (Final) 49,401 8,000 8,000 44.2%\nLabeled (Swap-only) 30,397 -- -- 9.6%\nUnlabeled (Final) 645,652 10,000 -- 50.0%\n\n## Dataset Creation\n\n### Curation Rationale\n\nExisting paraphrase identification datasets lack sentence pairs that have high lexical overlap without being paraphrases. 
Models trained on such data fail to distinguish pairs like flights from New York to Florida and flights from Florida to New York.\n\n### Source Data\n\n#### Initial Data Collection and Normalization\n\nTheir automatic generation method is based on two ideas. The first swaps words to generate a sentence pair with the same BOW, controlled by a language model. The second uses back translation to generate paraphrases with high BOW overlap but different word order. These two strategies generate high-quality, diverse PAWS pairs, balanced evenly between paraphrases and non-paraphrases.\n\nMentioned above.\n\n### Annotations\n\n#### Annotation process\n\nSentence pairs are presented to five annotators, each of which gives a binary judgment as to whether they are paraphrases or not. They chose binary judgments to make dataset have the same label schema as the QQP corpus. Overall, human agreement is high on both Quora (92.0%) and Wikipedia (94.7%) and each label only takes about 24 seconds. As such, answers are usually straight-forward to human raters.\n\n## Considerations for Using the Data\n\n### Dataset Curators\n\nList the people involved in collecting the dataset and their affiliation(s). If funding information is known, include it here.\n\n### Licensing Information\n\nThe dataset may be freely used for any purpose, although acknowledgement of Google LLC (\"Google\") as the data source would be appreciated. The dataset is provided \"AS IS\" without any warranty, express or implied. Google disclaims all liability for any damages, direct or indirect, resulting from the use of the dataset.\n\n### Citation Information\n\n@InProceedings{paws2019naacl,\ntitle = {{PAWS: Paraphrase Adversaries from Word Scrambling}},\nauthor = {Zhang, Yuan and Baldridge, Jason and He, Luheng},\nbooktitle = {Proc. 
of NAACL},\nyear = {2019}\n}\n\n### Contributions\n\nThanks to @bhavitvyamalik for adding this dataset.\n\nNone yet","date":"2021-06-20 15:15:36"}
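Given the four-column TSV layout described in the card (id, sentence1, sentence2, label), a minimal reader can be sketched as below. This is a hypothetical helper, not the official loader — in practice the Hugging Face `datasets` library loads these files for you:

```python
import csv
import io

def read_paws_tsv(stream):
    """Parse PAWS-style TSV rows with columns: id, sentence1, sentence2, label."""
    reader = csv.DictReader(stream, delimiter="\t")
    pairs = []
    for row in reader:
        pairs.append({
            "id": int(row["id"]),
            "sentence1": row["sentence1"],
            "sentence2": row["sentence2"],
            # Label semantics from the card: 1 = paraphrase, 0 = different meaning.
            "is_paraphrase": row["label"] == "1",
        })
    return pairs
```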
\section{Introduction} The persisting discrepancy between theory and experiment for the positronium width \cite{Westbrook} is a challenge for QED. At the moment the hope is that taking into account corrections of relative order $\alpha^{2}$ \cite{Lepage,Khriplovich} will resolve it. Under these circumstances the question of self-consistency of the calculations, and in particular of gauge invariance of the result, is of prime concern. The modern way to calculate parameters of two-particle atom-like bound states is to extract them from the corresponding four-fermion QED Green function (see, for example, \cite{Lepage78,Remiddi,Steinman} and this paper below). Thus, to check the gauge independence of the calculated bound-state parameters, one should carry the gauge parameter through the whole extraction procedure. (For an example see \cite{Adkins}, where the gauge independence of the correction to the positronium width of relative order $\alpha$ was checked.) The extraction procedure gets more and more complicated as the order of radiative corrections increases, and a direct order-by-order check of gauge invariance becomes impractical as a check of self-consistency of the calculations. Instead, one would like to exploit gauge invariance by choosing the most convenient gauge and switching from one gauge to another in the process of the calculations. In view of these complications, it seems pertinent to step back from the concrete practice of bound-state calculations and to study first the gauge dependence of the four-fermion QED Green function itself, without taking into account the complications of the bound-state parameter calculations. In the present paper we derive a relation between four-fermion QED Green functions for different values of the gauge-fixing parameter (we consider the covariant gauges only). The relation completely defines the evolution of the Green function in the gauge-fixing parameter. Our derivation does not use perturbation theory. 
Next, we use our relation to check the gauge invariance of the extraction procedure for atom-like bound-state parameters. The result is negative: it turns out that the existing procedure provides gauge-dependent answers for binding energies. We find the flaw in the procedure which is responsible for the gauge dependence of the result and point the way to its correction. The next section contains a derivation of the evolution in the gauge-fixing parameter; section 3 comprises a brief recall of the extraction procedure and a use of the general evolution formula from section 2 for an analysis of the gauge dependence of the extraction; in the last, fourth, section we point out the reason for the gauge dependence and the way to the correct procedure. \section{Evolution in Gauge-Fixing Parameter} Let us consider the four-fermion QED Green function \begin{equation} \label{Gf} G_{\beta}(x_{f},\overline{x}_{f},x_{i},\overline{x}_{i})\equiv i\int D\psi DA\, \exp\left(iS_{QED}(\beta)\right) (\overline{\psi}(\overline{x}_{f}) \psi(x_{f})) (\overline{\psi}({x}_{i}) \psi(\overline{x}_{i}))\, , \end{equation} where $x_{f}$ ($\overline{x}_{f}$) is the coordinate of the outgoing particle (antiparticle) and $x_{i}$ ($\overline{x}_{i}$) is the same for the ingoing pair. The definition of the gauge-fixing parameter $\beta$ is given by the corresponding photon propagator: \begin{equation} \label{gfix} D_{\mu \nu}(\beta,x) = \int\frac{dk}{(2\pi)^{4}} \left(-g_{\mu \nu} + \beta\frac{k_{\mu}k_{\nu}}{k^{2}}\right) \frac{i}{k^{2}}e^{ikx}. \end{equation} Our aim is to study the dependence of $G_{\beta}$ on $\beta$. To this end, it is useful to consider the Green function in an external photon field, $G(A)$, which is the result of integrating over the fermion field in the rhs of (\ref{Gf}). 
On the one hand, it is simply connected to the Green function \cite{Vass}: \begin{equation} \label{connection} G_{\beta} = (e^{L_{\beta}}G(A))_{A=0}\,,\; L_{\beta}\equiv\frac{1}{2}\frac{\delta}{\delta A_{\mu}}D_{\mu \nu}(\beta) \frac{\delta}{\delta A_{\nu}}. \end{equation} (In this formula each $L_{\beta}$ generates a photon propagator; the dependence on the coordinates of the ingoing and outgoing particles is suppressed for brevity.) On the other hand, $G(A)$ is simply connected to a gauge-invariant object $G_{inv}(A)$: \begin{equation} \label{coninv} G(A) = G_{inv}(A) \exp\left(ie\int^{x_{f}}_{\overline{x}_{f}}A_{\mu}dx^{\mu} -ie\int^{x_{i}}_{\overline{x}_{i}}A_{\mu}dx^{\mu} \right). \end{equation} The gauge invariance of $G_{inv}$ means that it is independent of the longitudinal component of $A$: \begin{equation} \label{gi} \partial_{\mu}\frac{\delta}{\delta A_{\mu}}G_{inv}(A) = 0 \end{equation} and is a consequence of the gauge invariance of the combination \begin{equation} \overline\psi(x)\exp\left(ie\int^{x}_{y}A_{\mu}dz^{\mu} \right)\psi(y). \end{equation} A substitution of (\ref{coninv}) into (\ref{connection}) yields \begin{equation} \label{hot} G_{\beta} = \left (e^{L_{\beta}}G_{inv}(A) \exp\left(ie\int^{x_{f}}_{\overline{x}_{f}}A_{\mu}dx^{\mu} -ie\int^{x_{i}}_{\overline{x}_{i}}A_{\mu}dx^{\mu}\right)\right)_{A=0}. \end{equation} Let us take the $\beta$-derivative of both sides of this equation: \begin{equation} \label{almost eq} \frac{\partial}{\partial\beta}G_{\beta} = \left (e^{L_{\beta}}(\partial_{\beta}L_{\beta})G_{inv}(A) \exp\left(ie\int^{x_{f}}_{\overline{x}_{f}}A_{\mu}dx^{\mu} -ie\int^{x_{i}}_{\overline{x}_{i}}A_{\mu}dx^{\mu}\right)\right )_{A=0} . \end{equation} To get an evolution equation, one needs to express the rhs of this equation in terms of $G_{\beta}$. This is possible because $(\partial_{\beta}L_{\beta})$ commutes with $G_{inv}(A)$ and gives a $c$-number factor when acting on the subsequent exponential.
So, (\ref{almost eq}) turns into \begin{equation} \label{equation} \frac{\partial}{\partial\beta} G_{\beta}(x_{f},\overline{x}_{f},x_{i},\overline{x}_{i}) = F(x_{f},\overline{x}_{f},x_{i},\overline{x}_{i}) G_{\beta}(x_{f},\overline{x}_{f},x_{i},\overline{x}_{i}), \end{equation} where we have restored the $x$-dependence and used $F$ to denote the action of $(\partial_{\beta}L_{\beta})$ on the exponential: \begin{eqnarray} \label{F-def} \lefteqn{(\partial_{\beta}L_{\beta}) \exp\left(ie\int^{x_{f}}_{\overline{x}_{f}}A_{\mu}dx^{\mu} -ie\int^{x_{i}}_{\overline{x}_{i}}A_{\mu}dx^{\mu} \right) \equiv}\nonumber \\ & & F(x_{f},\overline{x}_{f},x_{i},\overline{x}_{i}) \exp\left(ie\int^{x_{f}}_{\overline{x}_{f}}A_{\mu}dx^{\mu} -ie\int^{x_{i}}_{\overline{x}_{i}}A_{\mu}dx^{\mu} \right). \end{eqnarray} An explanation is in order: in deriving (\ref{equation}) we have used the commutativity of $(\partial_{\beta}L_{\beta})$ and $G_{inv}(A)$; it is a direct consequence of the gauge invariance of $G_{inv}$ (see (\ref{gi})) and the fact that $(\partial_{\beta}L_{\beta})$ contains only derivatives in the longitudinal components of $A$ (see (\ref{connection}) for the definition of $L_{\beta}$ and (\ref{gfix}) for the $\beta$-dependence of $D_{\mu \nu}$). The solution of eq.(\ref{equation}) for the $\beta$-evolution is \begin{equation} \label{solution} G_{\beta}(x_{f},\overline{x}_{f},x_{i},\overline{x}_{i}) = \exp\left((\beta-\beta_{0}) F(x_{f},\overline{x}_{f},x_{i},\overline{x}_{i}) \right) G_{\beta_{0}}(x_{f},\overline{x}_{f},x_{i},\overline{x}_{i}). \end{equation} To get the final answer one needs an explicit form of $F$ in (\ref{solution}).
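Since $F$ acts by multiplication (it is a $c$-number function of the coordinates), verifying that (\ref{solution}) solves (\ref{equation}) is elementary. A one-line sympy check (our illustration, with $F$ and $G_{\beta_{0}}$ as commuting placeholder symbols):

```python
import sympy as sp

beta, beta0, F, G0 = sp.symbols('beta beta0 F G0')

# G_beta = exp((beta - beta0) F) G_{beta0}, with F a multiplicative c-number
G = sp.exp((beta - beta0) * F) * G0

# the evolution equation dG/dbeta = F G is satisfied identically
assert sp.simplify(sp.diff(G, beta) - F * G) == 0
```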
It is easily deduced from the $F$-definition (\ref{F-def}) and the following representation for the longitudinal part of the photon propagator: \begin{equation} \label{representation} \partial_{\beta}D_{\mu \nu}(\beta,x) = -\frac{1}{16\pi^{2}}\partial_{\mu}\partial_{\nu} \ln((x^{2}-i\varepsilon)m^{2}), \end{equation} where $m$ is an arbitrary mass scale which is fixed, for definiteness, at the fermion mass. Then, up to an additive constant, \begin{equation} \label{repres} F = \frac{\alpha}{4\pi}\left( \ln\frac{1}{m^{4}(x_{f}-\overline{x}_{f})^{2}(x_{i}-\overline{x}_{i})^{2}} +\ln\frac{(x_{f}-x_{i})^{2}(\overline{x}_{f}-\overline{x}_{i})^{2}} {(x_{f}-\overline{x}_{i})^{2}(\overline{x}_{f}-x_{i})^{2}} \right). \end{equation} Substituting (\ref{repres}) into (\ref{solution}), we get our final answer for the $\beta$-evolution: \begin{eqnarray} \label{answer} G_{\beta}(x_{f},\overline{x}_{f},x_{i},\overline{x}_{i})&=& \left[ \frac{Z(x_{f}-x_{i})^{2}(\overline{x}_{f}-\overline{x}_{i})^{2}} {m^{4}(x_{f}-\overline{x}_{f})^{2}(x_{i}-\overline{x}_{i})^{2} (x_{f}-\overline{x}_{i})^{2}(\overline{x}_{f}-x_{i})^{2}} \right]^{\frac{\alpha}{4\pi}(\beta-\beta_{0})} \times\nonumber\\ &&G_{\beta_{0}}(x_{f},\overline{x}_{f},x_{i},\overline{x}_{i}) . \end{eqnarray} The normalization $Z$ is infinite before the ultraviolet renormalization. After the renormalization it is scheme-dependent and calculable order by order in perturbation theory. We will not need its value in what follows. \section{The Bound State Parameters And The Four-Fermion QED Green Function} The four-fermion QED Green function contains too much information for one who is just going to calculate bound-state parameters.
One can discard the unnecessary information by setting the center-of-mass space-time coordinate of the ingoing pair and the relative times of both ingoing and outgoing pairs to zero: \begin{equation} \label{eqtimes} G_{(et) \beta}(t,{\bf x},{\bf r'},{\bf r})\equiv G_{\beta}\left(x_{f}(t,{\bf x},{\bf r'}), \overline{x}_{f}(t,{\bf x},{\bf r'}), x_{i}({\bf r}), \overline{x}_{i}({\bf r}) \right), \end{equation} where the space-time coordinates depend on the space-time coordinate of the center of mass of the outgoing pair $(t,{\bf x})$ and the relative space coordinates of the outgoing $(\bf r')$ and ingoing $(\bf r)$ pairs. In the case of equal masses \begin{eqnarray} \label{def r} x_{f}=(t,{\bf x}+\frac{{\bf r'}}{2}),&\;& \overline{x}_{f}=(t,{\bf x}-\frac{{\bf r'}}{2}),\nonumber \\ x_{i}=(0,\frac{{\bf r}}{2}),&\;& \overline{x}_{i}=(0,-\frac{{\bf r}}{2}). \end{eqnarray} $G_{(et)\beta}$ still contains an unnecessary piece of information --- the dependence on the center-of-mass space coordinate. The natural way to remove it is to go over to the momentum representation and put the center-of-mass momentum to zero. In the coordinate representation, which is more convenient for the gauge invariance check, we define the propagator $D_{\beta}$ of the fermion pair: \begin{equation} \label{propDef} G_{(et)\beta}(t,{\bf x},{\bf r'},{\bf r}) \equiv D_{\beta}(t,{\bf r'},{\bf r})\delta({\bf x}) + \ldots, \end{equation} where the dots denote terms with derivatives of $\delta({\bf x})$. It is natural to consider $D_{\beta}$ as a time-dependent kernel of an operator acting on wave functions of the relative coordinate. In what follows we will not distinguish between a kernel and the corresponding operator.
The naturalness of the above definition of the propagator is apparent in the nonrelativistic approximation: \begin{equation} \label{NR} {e^{i2mt}}D_{\beta}(t) \approx \sum_{E_{0}} \theta(t)e^{-iE_{0}t} P(E_{0}), \end{equation} where the summation runs over the spectrum of the nonrelativistic Coulomb problem and the $P(E_{0})$ are the projectors onto the corresponding subspaces of the nonrelativistic state space. One can obtain (\ref{NR}) by keeping the leading term in the $\alpha$-expansion of the lhs, provided one keeps $t\propto 1/\alpha^{2}$ and ${\bf r'},{\bf r}\propto 1/\alpha$ (see \cite{Steinman,Pivovarov}). The subscript on $E_{0}$ indicates that it will receive radiative corrections (see below). The exponential on the lhs makes a natural shift of the energy zero. In what follows we will include the energy shift in the definition of $ D_{\beta}(t)$. The next step in the calculation of radiative corrections to the energy levels is a crucial one: one should make an assumption about the general form of the deformation of the $t$-dependence of the rhs of (\ref{NR}) caused by relativistic corrections. A natural guess, and the one which leads to the generally accepted rules of calculation of the relativistic corrections to the energy eigenvalues (see, for example, \cite{Lepage78}), is to suppose that one can obtain the oscillating part of the exact propagator $D_{\beta}$ from the rhs of (\ref{NR}) by just shifting the energy levels and modifying the operator coefficients $P(E_{0})$: \begin{equation} \label{guess} D_{\beta}(t) = \sum_{E_{0}+\Delta_{E_{0}}} \theta(t) e^{-i\left(E_{0}+\Delta_{E_{0}}\right)t} P_{\beta}(E_{0}+\Delta_{E_{0}}) + \ldots, \end{equation} where the dots denote terms which are slowly varying in time (the natural time scale here is $1/E_{0}$). The additional subscript $\beta$ on $P_{\beta}$ indicates that the oscillating part of $D_{\beta}(t)$ can acquire a gauge-parameter dependence from relativistic corrections. Let us see how one can use eq.(\ref{guess}) in energy level calculations.
It is quite sufficient to consider $D_{\beta}(t)$ at relatively short times, when $\Delta_{E_{0}} t\ll 1,\, E_{0}t\sim 1$. For such times one can approximate $D_{\beta}$ by expanding the rhs of eq.(\ref{guess}) in $\Delta_{E_{0}}t$: \begin{equation} \label{simple} D_{\beta}(t) \approx \sum_{E_{0}} \theta(t)e^{-iE_{0}t} \sum_{k}t^{k}A^{(k)}_{\beta}(E_{0}), \end{equation} where \begin{equation} \label{AE} A^{(k)}_{\beta}(E_{0}) = \sum_{\Delta_{E_{0}}} \frac{(-i\Delta_{E_{0}})^{k}}{k!}P_{\beta}(E_{0}+\Delta_{E_{0}}). \end{equation} An extraction of these objects from perturbation theory is an interim step in the level shift calculations. (Here we should mention that in calculational practice the $A^{(k)}_{\beta}(E_{0})$ are extracted in the momentum representation --- i.e., not as coefficients of the powers of time but as those of the propagator-like singularities $(E-E_{0}+i\varepsilon)^{-(k+1)}$.) To come nearer to the level shift values, useful objects are \begin{equation} \label{A} A^{(k)}_{\beta} \equiv \sum_{E_{0}}A^{(k)}_{\beta}(E_{0})i^{k}k!. \end{equation} Namely, as the notation of (\ref{AE}) suggests, the eigenvalues of $A^{(0)}_{\beta}$ should be equal to the normalizations of the bound-state wave functions, which are driven away from unity by relativistic corrections, while the eigenvalues of $A^{(k)}_{\beta}$ should be the energy shifts to the $k$-th power times the corresponding normalizations. Thus, the eigenvalues of \begin{equation} \label{Skdef} S^{(k)}_{\beta} \equiv \frac{\left[A^{(0)}_{\beta}\right]^{-1}A^{(k)}_{\beta} + A^{(k)}_{\beta}\left[A^{(0)}_{\beta}\right]^{-1}}{2} \end{equation} should be just the energy shifts to the $k$-th power. Thus, we define \begin{equation} \label{Sdef} S_{\beta} \equiv S_{\beta}^{(1)} \end{equation} to be the energy shift operator: its eigenvalues are the energy level shifts caused by relativistic corrections. Our aim is now to check the $\beta$-independence of the $S_{\beta}$ eigenvalues.
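The relation between (\ref{simple}) and (\ref{AE}) is just Taylor bookkeeping, and the parenthetical remark on the momentum representation is the corresponding Laplace transform. A sympy sketch checking both statements (our illustration: $P$ is treated as a commuting symbol, a single $\Delta_{E_{0}}$ term is kept, and $a$ stands for $\varepsilon-i(E-E_{0})$ with $\mathrm{Re}\,a>0$):

```python
import sympy as sp

t, Delta = sp.symbols('t Delta', real=True)
P = sp.Symbol('P')                  # stands in for the operator coefficient P(E0 + Delta)
a = sp.Symbol('a', positive=True)   # stands for epsilon - i(E - E0)

# expand e^{-i Delta t} P in powers of Delta*t, as in eq. (simple)
expansion = sp.expand(sp.series(sp.exp(-sp.I * Delta * t) * P, Delta, 0, 5).removeO())

# the coefficient of t^k must be (-i Delta)^k / k! * P, as in eq. (AE)
for k in range(5):
    expected = (-sp.I * Delta) ** k / sp.factorial(k) * P
    assert sp.simplify(expansion.coeff(t, k) - expected) == 0

# momentum-representation remark: theta(t) t^k e^{-a t} transforms into a
# propagator-like singularity k!/a^(k+1), i.e. ~ (E - E0 + i epsilon)^-(k+1)
for k in range(3):
    integral = sp.integrate(t ** k * sp.exp(-a * t), (t, 0, sp.oo))
    assert sp.simplify(integral - sp.factorial(k) / a ** (k + 1)) == 0
```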
Some notes are in order: if the conjecture (\ref{guess}) is true, $A^{(0)}_{\beta}$ should commute with $S^{(k)}_{\beta}$ and the following relation should hold: \begin{equation} \label{powerrel} S^{(k)}_{\beta} = \left[S_{\beta}\right]^{k}. \end{equation} This relation was suggested as a check of the conjecture (\ref{guess}) in \cite{Steinman} and, to our knowledge, has never been checked. Another thing to note is that relativistic corrections affect the form of the scalar product of wave functions and, thus, one should add a definition of operator products to the formal expressions (\ref{Skdef}),(\ref{powerrel}). But the level of accuracy at which we will operate permits us not to go into this complication and to use the operator products as they are in the nonrelativistic approximation --- i.e., as the convolution of the corresponding kernels. The way to the gauge invariance check of the energy shift calculations is now clear: using the gauge evolution relation (\ref{answer}), one should find the $\beta$-dependence of $S_{\beta}$ and then of its eigenvalues. As $S_{\beta}$ is defined in (\ref{Sdef}),(\ref{Skdef}) through the $A^{(k)}_{\beta}$'s, which are, in turn, defined in (\ref{simple}) through the propagator $D_{\beta}$, the first step is to reduce (\ref{answer}) to the case of zero relative time and zero total momentum of the fermion pair: \begin{eqnarray} \label{reduced} D_{\beta}(t,{\bf r'},{\bf r})&=&\left[ \frac{\left(1-({\bf r'}-{\bf r})^{2}/(4t^{2}) \right)} {\left(1-({\bf r'}+{\bf r})^{2}/(4t^{2}) \right)} \right]^ {\frac{\alpha}{2\pi}(\beta-\beta_{0})}\times\nonumber \\ & &\left[ \frac{Z} {m^{2}{\bf r'}^{2}m^{2}{\bf r}^{2}} \right]^{\frac{\alpha}{4\pi}(\beta-\beta_{0})} D_{\beta_{0}}(t,{\bf r'},{\bf r}). \end{eqnarray} The factor in the square brackets in the second line is time-independent and further factorizes into factors depending on either the ingoing or the outgoing pair parameters.
This reduces the influence of this factor to a change in the normalization of the states. Being interested in the gauge invariance of the energy shifts, we omit this factor in what follows. Let us turn to the analysis of the influence of the factor in the first line of (\ref{reduced}). This factor is close to unity at the atomic scale ${\bf r'},{\bf r}\sim 1/\alpha,\,t\sim1/\alpha^{2}$. We will use its approximate form: \begin{equation} \label{approx} Factor \approx 1 + \frac{\alpha}{2\pi}(\beta-\beta_{0}) \frac{{\bf r'}{\bf r}}{t^2} + O(\alpha^{5}). \end{equation} One can read off the dependence of $A^{(k)}_{\beta}$ on $\beta$ from (\ref{simple}),(\ref{reduced}),(\ref{approx}) as \begin{equation} \label{betadep} A^{(k)}_{\beta} \approx A^{(k)}_{\beta_{0}} - \frac{\alpha}{2\pi}\frac{(\beta-\beta_{0})}{(k+1)(k+2)} {\bf r}A^{(k+2)}_{\beta_{0}}{\bf r}, \end{equation} where $\bf r$ is the vector operator of the relative position of the interacting particles. The mixing of different $A^{(k)}_{\beta}$'s with a change of the gauge parameter is due to the presence of $1/t^{2}$ in the rhs of (\ref{approx}). Finally, using the definition (\ref{Sdef}), the relations (\ref{powerrel}) and the fact that \begin{equation} \label{unit} A^{(0)} \approx 1 \end{equation} in the nonrelativistic approximation, one can derive the following $\beta$-dependence of $S_{\beta}$: \begin{eqnarray} \label{Sanswer} S_{\beta}&\approx&S_{\beta_{0}} -\nonumber \\ & &\frac{\alpha}{2\pi}(\beta-\beta_{0}) \left(\frac{1}{6}{\bf r}S_{\beta_{0}}^{3}{\bf r} - \frac{1}{4}S_{\beta_{0}}{\bf r}S_{\beta_{0}}^{2}{\bf r} - \frac{1}{4}{\bf r}S_{\beta_{0}}^{2}{\bf r}S_{\beta_{0}} \right). \end{eqnarray} Treating the term in the last line of the rhs of the above relation as a perturbation, one can get an approximate value of the $\beta$-dependent piece of the energy shift by just averaging the perturbation with respect to the corresponding eigenstate of $S_{\beta_{0}}$.
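The expansion (\ref{approx}) of the first factor in (\ref{reduced}) can be checked symbolically. Treating ${\bf r'}$ and ${\bf r}$ as collinear (so that ${\bf r'}{\bf r}\to r'r$), writing the factor in exp-log form, and expanding in $u=1/t^{2}$, a sympy sketch (our illustration):

```python
import sympy as sp

rp, r, eps, u = sp.symbols('rp r epsilon u', positive=True)  # u stands for 1/t^2

# first factor of eq. (reduced), with exponent eps = alpha/(2 pi) (beta - beta0),
# rewritten as exp(eps * log(ratio)) for a safe series expansion
factor = sp.exp(eps * (sp.log(1 - (rp - r) ** 2 * u / 4)
                       - sp.log(1 - (rp + r) ** 2 * u / 4)))

# leading behavior in u = 1/t^2 reproduces eq. (approx): 1 + eps * r'.r / t^2
expansion = sp.series(factor, u, 0, 2).removeO()
assert sp.simplify(expansion - (1 + eps * rp * r * u)) == 0
```

The cancellation works because $({\bf r'}+{\bf r})^{2}-({\bf r'}-{\bf r})^{2}=4\,{\bf r'}{\bf r}$, so only the cross term survives at first order in $1/t^{2}$.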
Thus, for the leading order of the $\beta$-derivative of an energy shift we get the following representation: \begin{equation} \label{leading} \left(\frac{\partial}{\partial\beta}\Delta_{\beta}\right)_{L}= -\frac{\alpha}{2\pi} \left(\frac{1}{6}\left\langle {\bf r}S_{L}^{3}{\bf r}\right\rangle - \frac{1}{4}\left\langle S_{L}{\bf r}S_{L}^{2}{\bf r}\right\rangle - \frac{1}{4}\left\langle{\bf r}S_{L}^{2}{\bf r}S_{L}\right\rangle \right), \end{equation} where $\langle\ldots\rangle$ means averaging with respect to the corresponding nonrelativistic eigenstate and the subscript $L$ denotes the leading order in the $\alpha$-expansion. Eq.(\ref{leading}) is sufficient to determine the order in $\alpha$ at which the energy shifts become gauge dependent: \begin{equation} \label{order} \left(\frac{\partial}{\partial\beta}\Delta_{\beta}\right)_{L} \sim \alpha^{11}. \end{equation} Here we have taken into account that ${\bf r}\sim1/\alpha$ and $S_{L}\sim\alpha^{4}$. To have a gauge dependence in any observable is clearly unacceptable. In the next section we will see how one should correct the above procedure of energy shift extraction from the QED Green function to get rid of the gauge dependence of the energy shifts. \section{A Way Out} The procedure recalled in the previous section is based on the conjecture (\ref{guess}). A consequence of this conjecture is the gauge dependence of the energy shifts, eq.(\ref{leading}). One can conclude that the conjecture is wrong. In particular, as one can infer from eq.(\ref{reduced}), the operator coefficients of the oscillating exponentials in (\ref{guess}) should acquire a time dependence from relativistic corrections.
Even if in some gauge they are time independent, the gauge parameter evolution should generate a dependence which in the leading order in $\alpha$ reduces to the following replacement in (\ref{guess}): \begin{equation} \label{replacement} P_{\beta}(E_{0}+\Delta_{E_{0}})\rightarrow P_{\beta}(E_{0}+\Delta_{E_{0}}) + \frac{\Sigma_{\beta}(E_{0})}{t^{2}}. \end{equation} This $\Sigma_{\beta}(E_{0})$ has nothing to do with energy shifts but gives contributions to the $A^{(k)}_{\beta}(E_{0})$'s of eq.(\ref{simple}). Being gauge dependent, these contributions lead to the gauge dependence of the energy shifts. The way to the correct procedure is to throw away terms like $\Sigma_{\beta}(E_{0})/t^{2}$ prior to the definition of the energy shift operator. Thus, a necessary step in the process of extracting energy shifts from the QED Green function (and one whose necessity is not recognized in the standard procedure) is to calculate and subtract contributions like the last term in the rhs of (\ref{replacement}) from the propagator of the fermion pair. Below we report on a calculation of $\Sigma_{\beta}(E_{0})$ from (\ref{replacement}). The most economical way to calculate it is to note that the energy dependence of the Fourier transform of the corresponding contribution to the propagator is \begin{equation} \label{fourier} (E-E_{0})\ln(-(E-E_{0}+i\varepsilon)) \end{equation} and that it comes from diagrams describing radiation and subsequent absorption of a soft photon with no change in the level $E_{0}$ of the radiating and absorbing bound state.
Similar contributions (with another power of energy in front of the logarithm) are well known for the propagator of a charged fermion \cite{Lifshits}. The first step in our calculation is to present the pair propagator in the following form: \begin{equation} \label{soft} D_{\beta}(t)\approx\left(e^{L_{s}}e^{ie{\bf rA}(t)}D_{inv}(t,A) e^{-ie{\bf rA}(0)} \right)_{A=0}, \end{equation} where $L_{s}$ is the same as in (\ref{connection}) except for a restriction on the momentum of the photon propagator --- the range of its variation is restricted to the soft region, whose border is of the order of the atomic binding energies; the exponentials with the gauge potential originate from those in (\ref{hot}); $D_{inv}$ is a descendant of $G_{inv}$ from (\ref{hot}): to go over from $G_{inv}$ to $D_{inv}$ one should make all pairings of non-soft photons in $G_{inv}$ and all the reductions of space-time coordinates that were involved in going over from the $G_{\beta}$ of (\ref{Gf}) to the $D_{\beta}$ of (\ref{propDef}); finally, all gauge potentials in (\ref{soft}) are taken at zero space coordinate, in accord with the $\delta({\bf x})$ of eq.(\ref{propDef}). The difference between the lhs and the rhs of eq.(\ref{soft}) does not contribute to the term under calculation. The leading contribution to $D_{inv}$ in the nonrelativistic approximation is the same as for $D_{\beta}$ --- it is just the propagator of the nonrelativistic Coulomb problem. We explicitly calculate the leading contribution to the dependence of $D_{inv}(t,A)$ on the gauge potential in its expansion over the soft momenta of the external photons.
Not surprisingly, the dipole interaction of the pair with the external photon field arises in this approximation: \begin{equation} \label{Adef} D_{inv}(t,A) \approx \left(i\frac{\partial}{\partial t} - H_{c} + e{\bf r}{\cal E}(t) \right)^{-1}, \end{equation} where $H_{c}$ is the Hamiltonian of the nonrelativistic Coulomb problem and $\cal E$ is the strength of the electric field: \begin{equation} \label{Edef} {\cal E}(t)\equiv -\dot{{\bf A}}(t) + \nabla A_{0}(t). \end{equation} Substituting (\ref{Adef}) into (\ref{soft}) and keeping terms with only one soft photon propagator, we get expressions whose sum contains the term under calculation: \begin{equation} \label{r1} e^{2}\left(L_{s} {\bf rA}(t)D_{nr}(t){\bf rA}(0)\right)_{A=0}, \end{equation} \begin{equation} \label{r2} e^{2}\left(L_{s} \int d\tau_{1}d\tau_{2}\, D_{nr}(t-\tau_{1}){\bf r}{\cal E}(\tau_{1}) D_{nr}(\tau_{1}-\tau_{2}){\bf r}{\cal E}(\tau_{2}) D_{nr}(\tau_{2})\right)_{A=0} , \end{equation} \begin{eqnarray} \label{r3} ie^{2}\biggl(L_{s} \int d\tau\,\bigl( D_{nr}(t-\tau){\bf r}{\cal E}(\tau)D_{nr}(\tau){\bf rA}(0)&-& \\ & & {\bf rA}(t)D_{nr}(t-\tau){\bf r}{\cal E}(\tau)D_{nr}(\tau) \bigr) \biggr)_{A=0},\nonumber \end{eqnarray} where $D_{nr}(t)$ is the propagator of the nonrelativistic Coulomb problem from the rhs of eq.(\ref{NR}). The next step is to pick out the contribution of a level $E_{0}$ in (\ref{r1}),(\ref{r2}),(\ref{r3}). That is achieved by the replacement \begin{equation} \label{repl} D_{nr}(t)\rightarrow e^{-iE_{0}t}\theta(t)P(E_{0}). \end{equation} The last ingredient that one needs to calculate (\ref{r1}),(\ref{r2}),(\ref{r3}) is the time dependence of the soft photon propagators.
It can be deduced from (\ref{gfix}) as \begin{eqnarray} \label{time} \left(L_{s}A_{i}(t_{1})A_{j}(t_{2})\right)&=& \theta\left((t_{1}-t_{2})^{2}>t_{c}^{2}\right) \frac{\delta_{ij}\left(-1+\frac{\beta}{2}\right)}{4\pi^{2}(t_{1}-t_{2})^{2}}, \nonumber \\ \left(L_{s}A_{i}(t_{1}){\cal E}_{j}(t_{2})\right)&=& \theta\left((t_{1}-t_{2})^{2}>t_{c}^{2}\right) \frac{\delta_{ij}}{2\pi^{2}(t_{1}-t_{2})^{3}},\nonumber \\ \left(L_{s}{\cal E}_{i}(t_{1}){\cal E}_{j}(t_{2})\right)&=& \theta\left((t_{1}-t_{2})^{2}>t_{c}^{2}\right) \frac{\delta_{ij}}{\pi^{2}(t_{1}-t_{2})^{4}}. \end{eqnarray} Here the $\theta$-functions are to account for the softness of the participating photons ($t_{c}\sim 1/E_{0}$). Taking (\ref{time}) into account we get the following contributions from (\ref{r1}),(\ref{r2}),(\ref{r3}): \begin{eqnarray} \label{contr} (\ref{r1})&\rightarrow& \frac{1}{t^{2}}\theta(t)e^{-iE_{0}t} \frac{\alpha}{\pi}\left(-1+\frac{\beta}{2}\right) {\bf r}P(E_{0}){\bf r},\nonumber \\ (\ref{r2})&\rightarrow& \frac{1}{t^{2}}\theta(t)e^{-iE_{0}t} \frac{\alpha}{\pi}\frac{2}{3}P(E_{0}){\bf r}P(E_{0}){\bf r}P(E_{0}), \nonumber \\ (\ref{r3})&\rightarrow& \frac{1}{t^{2}}\theta(t)e^{-iE_{0}t} \frac{\alpha}{\pi}i\left(P(E_{0}){\bf r}P(E_{0}){\bf r} - {\bf r}P(E_{0}){\bf r}P(E_{0})\right). \end{eqnarray} The sum of the above terms yields the result of our calculation: \begin{eqnarray} \label{sigmansw} \Sigma_{\beta}(E_{0})&=&\frac{\alpha}{\pi} \biggl( \frac{2}{3}P(E_{0}){\bf r}P(E_{0}){\bf r}P(E_{0}) + (-1+\frac{\beta}{2}){\bf r}P(E_{0}){\bf r} +\nonumber\\ & & i(P(E_{0}){\bf r}P(E_{0}){\bf r} - {\bf r}P(E_{0}){\bf r}P(E_{0})) \biggr) . \end{eqnarray} One can explicitly check that $\beta$-dependence of $\Sigma_{\beta}(E_{0})$ is the right one --- i.e. if one subtracts the $\Sigma$-term from the propagator before the definition of the energy shift operator, the latter becomes gauge independent. 
Another observation is that the $\Sigma$-term cannot be killed by any choice of gauge (in contrast to the case of the charged fermion propagator, where an analogous term is equal to zero in the Yennie gauge). Summing up, in this paper we derived a relation between QED Green functions in different gauges. We used it to check the gauge invariance of the energy shift operator. It turns out to be gauge dependent. This fact forced us to recognize that energy shifts are not the one and only source of the positive powers of time accompanying the oscillating exponentials in the propagator of the pair. We found a particular additional source of the positive powers of time which is responsible for the gauge dependence of the naive energy shift operator. We conclude with the observation that at the moment we do not have a clear definition of the energy shift operator --- to get one, a criterion is needed for picking out the contributions to the positive powers of time originating from the energy shifts. The author is grateful to A.~Kataev, E.~Kuraev, V.~Kuzmin, A.~Kuznetsov, S.~Larin, Kh.~Nirov, V.~Rubakov, D.~Son, P.~Tinyakov for helpful discussions. This work was supported in part by the Fund for Fundamental Research of Russia under grant 94-02-14428.
\section{Introduction} Quantum batteries store and deliver energy to a quantum system coherently. For such a device, energy leaking during the storing phase is a key issue \cite{Barra2019, Liu2019, Hovhannisyan2019, Farina2019, Pirmoradian2019, Santos2019, Gherardini2020, Kamin2020, Hovhannisyan2020, Zhao2021} that is absent if the system is kept at thermodynamic equilibrium \cite{Barra2019, Hovhannisyan2019, Hovhannisyan2020}. This observation motivated analyzing quantum systems in thermodynamic equilibrium as candidates for quantum batteries \cite{Barra2019, Hovhannisyan2020}. In particular, in Ref.~\cite{Hovhannisyan2020}, we showed that a system (the battery), strongly coupled to a bath (the charger), can efficiently store energy, avoiding leakage. An agent can subsequently deliver such energy to another quantum system after disconnecting the battery from the bath. To have a meaningful definition of efficiency, one has to close the cycle and reconnect the battery to the charger. After that, the total system thermalizes either under the influence of an external bath or, when the charger is large, as a result of internal evolution (in which case only local observables thermalize \cite{Robinson1973, Bach2000, Gogolin2016, Farrelly2017}). The cycle has an energy cost; the ratio of the energy delivered by the battery to this cost defines the efficiency. In Ref.~\cite{Hovhannisyan2020}, we considered the case where the battery and the charger become uncorrelated after the battery is disconnected and discharged. In this paper, we lift that restriction: the battery and the charger keep their correlations after the energy extraction process. This opens up new possibilities to optimize battery performance in terms of efficiency, as discussed in the following. Additionally, we study the regime in which the battery--charger system is at a quantum phase transition point during the charge storage stage, revealing the effect of criticality on the performance of the device.
In particular, when the total system is a quantum lattice, we observe an increase in the efficiency near the quantum critical point. Interestingly, second-order phase transitions (quantum or classical) are known to boost thermodynamic performance in a variety of thermal devices \cite{Golubeva2012a, Golubeva2013, Golubeva2014, Imparato2015, Campisi2016, Ma2017, Herpich2018, Herpich2018a, Sune2019a, Abiuso2020, Imparato2021, Puebla_2021}. However, in those instances, it is the working medium that is critical, and the enhancement is related to the increased collectivity of its constituents due to strong and long-range correlations (exhibited by both classical \cite{Chaikin} and quantum \cite{Sachdev2011} critical systems). In the present case, by contrast, the working medium (the battery) consists of just one or two spins, and therefore the observed enhancement is of a fundamentally different nature. Our workhorse is the 1D transverse-field spin-1/2 Ising chain, whose critical behavior has been fully characterized both in the ground state \cite{Pfeuty1970} and in the thermal state \cite{Barouch1970, Barouch1971, Osborne2002}. We consider the thermodynamic cycle depicted in figure~\ref{fig:cycle}, where a subset of spins (the battery) is disconnected from the chain initially in the ground or in a thermal state; energy is extracted from it; and the exhausted battery is then reconnected to the rest of the chain playing the role of the charger. We show that the optimal working regime of such a quantum battery occurs on the brink of a phase transition. As figures of merit, we use both the thermodynamic efficiency and the extracted energy, expressed in terms of the battery ergotropy. The ergotropy, defined as the maximum extractable energy from a system in a cyclic unitary process \cite{Allahverdyan2004}, is appropriate in our context since we are interested in systems that deliver the energy coherently.
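In operational terms, the ergotropy of a state $\rho$ with Hamiltonian $H$ is $\tr[H\rho]$ minus the energy of the corresponding passive state, obtained by pairing the populations of $\rho$, sorted in decreasing order, with the energy levels sorted in increasing order \cite{Allahverdyan2004}. A minimal numpy sketch (our illustration, not code from this work):

```python
import numpy as np

def ergotropy(rho, H):
    """Maximum energy extractable from rho by a cyclic unitary process:
    tr(H rho) minus the passive-state energy, obtained by pairing the
    largest populations with the lowest energy levels."""
    energies = np.linalg.eigvalsh(H)                       # ascending
    populations = np.sort(np.linalg.eigvalsh(rho))[::-1]   # descending
    return np.trace(rho @ H).real - populations @ energies

# toy check: a fully inverted qubit with H = diag(0, 1) stores one unit of work
H = np.diag([0.0, 1.0])
rho = np.diag([0.0, 1.0])  # all population in the excited state
print(ergotropy(rho, H))   # -> 1.0
```

A state that is diagonal in the energy eigenbasis with populations already decreasing with energy is passive, and the function returns zero for it.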
Besides focusing on the phase transition, we emphasize the importance of the correlations between the subset of spins and the rest of the chain during the cycle. Indeed, the presence of strong coupling and battery--charger correlations brings out the fact that locally equivalent operations can have very different global manifestations. We observe that a set of phases of the unitary operator that extracts the battery's ergotropy, which are irrelevant for the (reduced) state of the battery, plays a significant role in the reconnecting energy when the battery--charger correlations are taken into account. This provides us with an additional set of parameters that, as we will see in the following, can be tuned so as to further increase the cycle efficiency. \begin{figure}[h] \center \psfrag{ }[ct][ct][1.]{ } \includegraphics[width=8cm]{system.pdf} \caption{Graphic representation of the thermodynamic cycle, made of four strokes. A set of $M$ units (here $M=3$) is disconnected (${\rm I}\to {\rm II}$) from the rest of the chain (represented by the $A$ and $B$ parts). The ergotropy is extracted in ${\rm II}\to {\rm III}$, and finally the exhausted subsystem is reconnected to the chain (${\rm III}\to {\rm IV}$).} \label{fig:cycle} \end{figure} The paper is organized as follows. In section~\ref{sec.cycle}, we first describe the cycle for implementing a quantum battery, and discuss the figures of merit and the role of local phase manipulation in the energetics of the full system. Then in section~\ref{sec.ising}, we introduce the system, a transverse-field 1D spin-1/2 Ising model, in which the cycle is studied, and summarize the statistical properties of the chain in the ground state and the thermal state. Section~\ref{sec:M1} illustrates our results for a single-spin battery in the limit of an infinite chain. Here, we can derive the exact critical exponent characterizing the ergotropy around the critical point.
In section~\ref{sec:MT}, we study numerically larger batteries and finite chargers in initial thermal states. We conclude in section~\ref{sec:conclu}. \section{The working cycle} \label{sec.cycle} The working cycle is depicted in Fig.~\ref{fig:cycle}. We do not consider a specific working substance at this stage: the cycle requires a system (the battery) that is strongly coupled to a bath (the charger) and the ability of an agent to connect and disconnect the system from the bath. As illustrated in Fig.~\ref{fig:cycle}, we consider system units and bath units of the same type. We assume the ability to perform any unitary operation on the battery without affecting the coherence between the battery and the charger, i.e., we treat them as an isolated quantum system as we perform stages I to IV of the cycle. The closing step IV $\to$ I might involve coupling to some external systems, for instance, a weak coupling to a super-bath or, if the battery--charger system is large, internal evolution causing return to equilibrium \cite{Robinson1973, Bach2000, Gogolin2016, Farrelly2017}. Either way, this step is not relevant for the energetic budget of the agent. In this sense, we assume that the initial thermal or ground state, denoted by $\varrho_{\rm I}$, is a resource given to us. The available resources and operations we have described above are very similar to those considered in \cite{Hovhannisyan2020} except for the fact that there, in the reconnecting stage III $\to$ IV, the battery was uncorrelated with the bath (charger). This fundamental difference affects the efficiency but not the ergotropy, as we discuss below. The Hamiltonian of the total battery--charger system is \begin{equation} H_{\rm tot}=H_S+H_R+H_{\rm int} \end{equation} where $H_S$ and $H_R$ are the Hamiltonians of the battery ``system'' $S$ and of the charger $R$ respectively. The interaction Hamiltonian between $S$ and $R$ is $H_{\rm int}$.
In the following we will use the symbol $\varrho$ to indicate the state of the total system, while the symbol $\rho$ will be used for the reduced state of $S$---the battery---alone. The cycle consists of the following four strokes: \begin{itemize} \item In the stroke I $\to$ II, with the battery and charger in the state $\varrho_{\rm I}$, we instantaneously disconnect $S$ from $R$. The energy cost for the quench reads \begin{equation} E_d = - \tr[ H_{\rm int} \varrho_{\rm I}]. \label{Ed:eq} \end{equation} Immediately after the quench, at the beginning of stage II, the system $S$ will be in the reduced state \begin{equation} \rho_{\rm II} = \tr_{R} [\varrho_{\rm I}]. \label{rhoiis:eq} \end{equation} \item Given $\rho_{\rm II}$ and $H_S$ we can extract the ergotropy $\mathcal E$ from $S$ (stroke II $\to$ III), taking $\rho_{\rm II}$ to $\rho_{\rm III}$, where \begin{equation} \rho_{\rm III} = U_{\mathcal E} \, \rho_{\rm II} \, U_{\mathcal E}^\dagger \label{rhoiii:eq} \end{equation} is now the exhausted (or passive) state of $S$ and $U_{\mathcal E}$ is a unitary operator that extracts the ergotropy $\mathcal E$ from the system. Assuming that this step is also instantaneous, from the perspective of the total system, the II $\to$ III transition results in the new state \begin{equation} \label{rho:3} \varrho_{\rm III} = {\mathcal U}_{\mathcal E} \, \varrho_{\rm II} \, {\mathcal U}_{\mathcal E}^\dagger = {\mathcal U}_{\mathcal E} \, \varrho_{\rm I} \, {\mathcal U}_{\mathcal E}^\dagger \end{equation} where the second equality is due to the fact that $\varrho_{\rm II} = \varrho_{\rm I}$, and the total unitary operator reads \begin{equation} {\mathcal U}_{\mathcal E} =U_{\mathcal E} \otimes \mathbbm{I}_R. \end{equation} The identity operator acting on $R$ manifests the assumption of a fast ergotropy extraction and represents a simplification. The important assumption is that the full system evolves unitarily and that there is no control or manipulation of the bath $R$.
The ergotropy thus reads \begin{equation} \mathcal E = \tr[ H_S (\rho_{\rm II} - \rho_{\rm III})] . \label{ergo:eq} \end{equation} \item In the next stroke III $\to$ IV we suddenly reconnect $S$ to $R$. The energy cost of this operation reads \begin{equation} E_c = \tr[H_{\rm int} \varrho_{\rm III}]=\tr[{\mathcal U}_{\mathcal E}^\dag H_{\rm int}{\mathcal U}_{\mathcal E} \varrho_{\rm II}], \label{Ec:eq} \end{equation} but the state is unchanged: $\varrho_{\rm IV} = \varrho_{\rm III}$. \item Lastly, to close the cycle, we may perform the step IV $\to$ I and bring the system back to its initial state $\varrho_{\rm I}$ by, e.g., connecting the full system weakly to a super-bath. In that case, the energy (heat) delivered to the total system will be \begin{equation} E_{\rm th} = \tr[(\varrho_{\rm I}-\varrho_{\rm IV}) H_{\rm tot}], \label{Eth:eq} \end{equation} and it is not a cost for the agent running the cycle. In the case when the total system is left to rethermalize by itself (e.g., when the total system is large and the temperature is $>0$), the energetic cost of this step will of course be zero. Although the system will locally appear thermal, self-rethermalization does affect the global state, and therefore the price of a zero-energy reset is that the energetics of the cycle will be affected in the long run. \end{itemize} In the following, we shall study the maximal work $\mathcal E$ that can be extracted from the battery (the ergotropy) during the cycle, Eq.~(\ref{ergo:eq}), and the cycle efficiency \cite{Hovhannisyan2020}, as given by \begin{equation} \eta=\frac{\mathcal E}{E_c+E_d}. \label{eta:def} \end{equation} We notice that in this expression both $\mathcal E$ and $E_c$ are determined by the ergotropy-extracting unitary operator $U_{\mathcal E}$ appearing in Eq.~(\ref{rhoiii:eq}).
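As an illustration, the four strokes and their energy bookkeeping can be sketched numerically. The following is a minimal sketch for a hypothetical toy model (one battery spin coupled to one charger spin, with an extra symmetry-breaking field $h$ on the battery); the model and all parameter values are illustrative assumptions, not the working substance studied below:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Hypothetical toy model: battery spin S + charger spin R.
f, h = 0.3, 0.2
hS = -f * sz - h * sx                      # bare battery Hamiltonian (toy)
H_S = np.kron(hS, I2)
H_R = -f * np.kron(I2, sz)
H_int = -(1 - f) * np.kron(sx, sx)
H_tot = H_S + H_R + H_int

# Stroke I: the resource state is the ground state of H_tot.
psi = np.linalg.eigh(H_tot)[1][:, 0]
rho_I = np.outer(psi, psi.conj())

# Stroke I -> II: disconnection cost, Eq. (Ed:eq).
E_d = -np.trace(H_int @ rho_I).real
# Reduced battery state, Eq. (rhoiis:eq).
rho_II = np.trace(rho_I.reshape(2, 2, 2, 2), axis1=1, axis2=3)

# Stroke II -> III: pair the descending populations of rho_II with the
# ascending spectrum of hS (ergotropy extraction, all phases set to 0).
Rv = np.linalg.eigh(rho_II)[1][:, ::-1]    # descending populations
Ev = np.linalg.eigh(hS)[1]                 # ascending energies
U = Ev @ Rv.conj().T                       # U_E of Eq. (rhoiii:eq)
rho_III = U @ rho_II @ U.conj().T
ergotropy = np.trace(hS @ (rho_II - rho_III)).real

# Stroke III -> IV: reconnection cost, Eq. (Ec:eq).
rho_IV = np.kron(U, I2) @ rho_I @ np.kron(U, I2).conj().T
E_c = np.trace(H_int @ rho_IV).real

eta = ergotropy / (E_d + E_c)              # Eq. (eta:def)
```

In this sketch $E_d+E_c\geq\mathcal E\geq 0$, and hence $\eta\leq 1$, follows from the passivity of the initial ground state, as discussed in the remainder of this section.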
\subsection{The unitary $U_{\mathcal E}$} \label{subsecUgen} The ergotropy-extracting operator $U_{\mathcal E}$, appearing in Eq.~(\ref{rhoiii:eq}), is a unitary that achieves the minimization of the final energy, defining \cite{Allahverdyan2004} the ergotropy: \begin{eqnarray} \nonumber \mathcal E=\Tr[H_S\rho_{\rm II}]-\min_U \Tr[H_SU\rho_{\rm II} U^\dag]. \end{eqnarray} For $U_\mathcal E$, an explicit expression can be found in terms of the normalized eigenvectors of $\rho_{\rm II}$ and $H_S$ \cite{Allahverdyan2004}. Consider the spectral decompositions of $\rho_{\rm II}$ and $H_S$ \begin{eqnarray} \rho_{\rm II} &=& \sum_{\alpha = 1}^{2^M} r_\alpha^\downarrow \ket{r_\alpha^\downarrow} \bra{r_\alpha^\downarrow}, \\ H_S &=& \sum_{\alpha = 1}^{2^M} \epsilon_\alpha^\uparrow \ket{\epsilon_\alpha^\uparrow} \bra{\epsilon_\alpha^\uparrow}, \end{eqnarray} where $\downarrow$ and $\uparrow$ indicate that the eigenvalues are ordered, respectively, decreasingly and increasingly. We can thus write $U_{\mathcal E}$ in Eq.~\eqref{rhoiii:eq} as \begin{equation} U_{\mathcal E}[\, \vec{\theta} \, ] := \sum_\alpha e^{i \theta_\alpha} \ket{\epsilon_\alpha^\uparrow} \bra{r_\alpha^\downarrow}, \label{ugeneral} \end{equation} where $\vec{\theta} = \{ \theta_\alpha \}_\alpha$ is a $2^M$-tuple of arbitrary real numbers, reflecting the phase arbitrariness of the normalized eigenvectors $\ket{\epsilon_\alpha^\uparrow}$ and $\ket{r_\alpha^\downarrow}$. Note that one of these numbers determines a global phase, and thus we can reduce their number to $2^M-1$. Usually, these phases are omitted (i.e., one takes $\theta_\alpha=0, \,\forall \alpha$) since $\rho_{\rm III} = U_{\mathcal E} \rho_{\rm II} U_{\mathcal E}^\dag = \sum_\alpha r_\alpha^\downarrow \ket{\epsilon_\alpha^\uparrow} \bra{\epsilon_\alpha^\uparrow}$ and thus neither the final passive state $\rho_{\rm III}$ nor the ergotropy $\mathcal E$ depend on them.
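The construction of Eq.~\eqref{ugeneral}, and the $\vec\theta$-independence of the local quantities, can be checked numerically. Below is a minimal sketch for a randomly generated battery state and Hamiltonian of dimension $d=4$ (all inputs are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4  # 2^M levels with M = 2

# Random battery state rho and Hamiltonian H_S (illustrative inputs).
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
rho = A @ A.conj().T
rho /= np.trace(rho).real
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H_S = (B + B.conj().T) / 2

def U_ergotropy(rho, H_S, thetas):
    """U_E[theta] = sum_a e^{i theta_a} |eps_a^up><r_a^down|, Eq. (ugeneral)."""
    Rv = np.linalg.eigh(rho)[1][:, ::-1]   # descending populations
    Ev = np.linalg.eigh(H_S)[1]            # ascending energies
    return Ev @ np.diag(np.exp(1j * np.asarray(thetas))) @ Rv.conj().T

E_initial = np.trace(H_S @ rho).real
passive_states, ergotropies = [], []
for _ in range(5):
    U = U_ergotropy(rho, H_S, rng.uniform(0, 2 * np.pi, size=d))
    rho_f = U @ rho @ U.conj().T
    passive_states.append(rho_f)
    ergotropies.append(E_initial - np.trace(H_S @ rho_f).real)
```

Every random choice of $\vec\theta$ yields the same passive state and the same (nonnegative) ergotropy, while the global state ${\mathcal U}_{\mathcal E}\,\varrho_{\rm I}\,{\mathcal U}_{\mathcal E}^\dagger$ would differ, as discussed next.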
We note here, for later convenience, that $\rho_{\rm III}$ is diagonal in the energy basis $\left\{\ket{\epsilon_\alpha^\uparrow}\right\}$ and the population decreases as the energy increases; these are the so-called passive states \cite{Pusz1978, Lenard1978}, characterized by the property that no energy can be extracted from them through a cyclic unitary process. Owing to the phase freedom in Eq.~\eqref{ugeneral}, the operator $U_{\mathcal E}$ is not unique even if the spectra of $\rho_{\rm II}$ and $H_S$ are non-degenerate. However, while this freedom is irrelevant for any observable property of $S$, the choice of $\vec{\theta}$ will affect the global state $\varrho_{\rm III}$ [cf. Eq.~\eqref{rho:3}]. As a result, it will affect $E_c$; see Eq.~\eqref{Ec:eq}. In order for this effect to occur, it is essential that the coherence, manifested by the unitary evolution of the full chain, and the correlations between the battery and the charger are maintained during the steps I $\to$ IV. Indeed, had we considered a different setup such that after the stroke II $\to$ III the correlations between $S$ and $R$ were lost, we would have obtained another state $\varrho_{\rm III}' = \rho_{\rm III} \otimes \omega_R$ at the end of that stroke. The state of the charger $\omega_R$ before the reconnection stroke (III $\to$ IV) could be the reduced state of the charger after stroke II $\to$ III or another ``fresh'' charger state, as in Ref.~\cite{Hovhannisyan2020}. The reconnecting energy $\Tr[H_{\rm int} \rho_{\rm III}\otimes\omega_R]$ would then be independent of $\vec{\theta}$. To summarize this section: when the coherence and the battery--charger correlations are preserved during the first three strokes, the phases $\vec\theta$ relating the eigenstates of $\rho_{\rm II}$ and those of $H_S$ influence the connecting energy $E_c(\vec{\theta})$ and thus the efficiency of the cycle $\eta$.
Such phases can in principle be manipulated by the agent extracting the ergotropy, and in the following we investigate the effect of these phases on the cycle and use them as free parameters to optimize the performance of a battery--charger system made of spin-1/2 particles. \subsection{Remarks on the thermodynamics of the cycle} As a result of the first three strokes, the state of the full system evolves unitarily: $\varrho_{\rm I} \to \varrho_{\rm IV}={\mathcal U}_{\mathcal E}\varrho_{\rm I}{\mathcal U}_{\mathcal E}^\dag$; whereas the total Hamiltonian is changed cyclically---$H_{\rm tot}^{\rm (IV)} = H_{\rm tot}^{\rm (I)}$. Since the initial state of the total system is passive, being either a Gibbs state or a ground state \cite{Pusz1978, Lenard1978}, such a cyclic process can only increase its average energy: \begin{eqnarray} \Tr[H_{\rm tot}(\varrho_{\rm IV}-\varrho_{\rm I})]\geq 0. \label{secondlaw:eq} \end{eqnarray} In other words, one can perform only positive work on the total system. Therefore, since the work performed on the total system during the first three strokes is $E_d - \mathcal E + E_c$, we have that $E_d + E_c \geq \mathcal E$. Since, by definition, $\mathcal E \geq 0$, the definition of efficiency in Eq.~\eqref{eta:def} is indeed meaningful and, moreover, $\eta \leq 1$. Finally, if the closing step IV $\to$ I is achieved by coupling the system weakly to a bath, the dissipation (entropy production) for the cycle will be $-E_{\rm th}/T$, where $T$ is the temperature of the bath. In view of Eqs.~\eqref{Eth:eq} and \eqref{secondlaw:eq}, $-E_{\rm th}/T\geq 0$, as one would expect from the second law. In our previous work \cite{Hovhannisyan2020}, additional sources of dissipation included the change $\varrho_{\rm III}\to\rho_{\rm III}\otimes\omega_R$ modeling the loss of correlation between the battery and the charger, plus the ``refreshing'' of the charger; those are absent in the present setup.
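The identity $E_d-\mathcal E+E_c=\Tr[H_{\rm tot}(\varrho_{\rm IV}-\varrho_{\rm I})]$ underlying the argument above can be verified numerically. The sketch below uses a hypothetical two-spin toy model prepared in a Gibbs state, with an arbitrary extraction phase; all parameters are illustrative assumptions:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

# Toy model: one battery spin + one charger spin (illustrative parameters).
f, h, T = 0.3, 0.2, 0.5
hS = -f * sz - h * sx
H_int = -(1 - f) * np.kron(sx, sx)
H_tot = np.kron(hS, I2) - f * np.kron(I2, sz) + H_int

# Gibbs state exp(-H_tot/T)/Z via the spectral decomposition.
w, V = np.linalg.eigh(H_tot)
g = np.exp(-(w - w.min()) / T)
rho_I = (V * (g / g.sum())) @ V.conj().T

rho_II = np.trace(rho_I.reshape(2, 2, 2, 2), axis1=1, axis2=3)
E_d = -np.trace(H_int @ rho_I).real

# Ergotropy extraction with an arbitrary phase (theta_2 = 1.3).
Rv = np.linalg.eigh(rho_II)[1][:, ::-1]
Ev = np.linalg.eigh(hS)[1]
U = Ev @ np.diag(np.exp(1j * np.array([0.0, 1.3]))) @ Rv.conj().T
ergotropy = np.trace(hS @ (rho_II - U @ rho_II @ U.conj().T)).real

U_tot = np.kron(U, I2)
rho_IV = U_tot @ rho_I @ U_tot.conj().T
E_c = np.trace(H_int @ rho_IV).real

work_on_total = E_d - ergotropy + E_c
energy_change = np.trace(H_tot @ (rho_IV - rho_I)).real  # Eq. (secondlaw:eq)
```

The two quantities coincide to machine precision, and both are nonnegative because the Gibbs state is passive with respect to $H_{\rm tot}$.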
\section{The working substance: transverse spin-1/2 Ising chain} \label{sec.ising} We introduce now the specific working substance we use to study the cycle depicted in Fig.~\ref{fig:cycle}---the transverse spin-1/2 quantum Ising chain described by the Hamiltonian \begin{equation} H_{\rm tot} = -(1-f)\sum_{i=0}^{N-1} \sigma^x_i \sigma^x_{i+1} - f \sum_{i=0}^{N-1} \sigma^z_i, \label{HN:def} \end{equation} with periodic boundary conditions (PBC) $\sigma^{\alpha}_{N}=\sigma^{\alpha}_{0}$, the latter ensuring translation symmetry and reflection symmetry around any site. Some (completely different) thermodynamic aspects of subsystems of the quantum Ising chain were studied in Ref.~\cite{Campisi2010}. The battery $S$ consists of $M$ consecutive nodes of the chain, and the charger $R$ consists of the remaining nodes; see Fig.~\ref{fig:cycle} for an illustration. The interaction Hamiltonian between $S$ and $R$ is \begin{eqnarray} H_{\rm int}= - (1-f)( \sigma^x_{i-1} \sigma^x_{i} + \sigma^x_{i+M-1} \sigma^x_{i+M}) \label{Hint}, \end{eqnarray} with $i$ arbitrary given the PBC. The bare Hamiltonians $H_S$ and $H_R$ of $S$ and $R$, respectively, read \begin{eqnarray} H_X &=& -(1-f) \sum_{\{j\} \cup \{j+1\} \subseteq X} \, \sigma^x_j \sigma^x_{j+1} - f \sum_{j \in X} \sigma^z_j, \end{eqnarray} with $X$ being either $S$ or $R$. When $N\to \infty$, the system described by the Hamiltonian (\ref{HN:def}) presents a quantum phase transition at $f_c=1/2$. Using some known results about the quantum Ising model \cite{Pfeuty1970, Barouch1970, Barouch1971}, we can study our battery analytically for $M=1$ and partially for $M=2$. For general $M$ and $N$, we will have to analyze the problem numerically. \subsection{Transverse Ising chain in the ground state} Let us review some of the well-known properties of the ground state of the transverse Ising chain \cite{Pfeuty1970, Barouch1970, Barouch1971, Osborne2002}.
As mentioned above, the system presents a quantum phase transition at $f_c = 1/2$: the ground state $\ket{0}$ is not degenerate for $f>1/2$, but becomes doubly degenerate $\ket{0^\pm}$ for $f<1/2$. The longitudinal magnetization $\av{\sigma_i^x}$ changes from a vanishing value for $f \geq 1/2$ to a positive or negative value for $f<1/2$, depending on the ``branch'' of the ground state. Without loss of generality, in the discussion that follows, we choose the system to be in the eigenstate $\ket{0^+}$ for $f<1/2$, so that $\av{\sigma_i^x}\geq 0$. Setting $\lambda=(1-f)/f$, the following formulas for the longitudinal magnetization (the ``order parameter'') hold: \begin{eqnarray} \av{\sigma_i^x}= \left\{ \begin{array}{cc} (1-\lambda^{-2})^{\beta}, & f < 1/2 \; (\lambda >1) \\ 0, & f \geq 1/2 \; (\lambda \leq 1) \end{array} \right. \label{sigmax} \end{eqnarray} with the critical exponent $\beta=1/8$. The transverse magnetization reads \begin{eqnarray} \av{\sigma_i^y}&=&0,\\ \av{\sigma_i^z}&=& \frac{1}{\pi}\int_0^\pi d\phi\frac{1+\lambda \cos\phi}{\sqrt{1+\lambda^2+2\lambda \cos\phi}}. \label{sigmaz} \end{eqnarray} Note that $\av{\sigma_i^z}$ changes smoothly at the transition and is positive for positive $f$. The two-site correlators read \cite{Pfeuty1970}: \begin{eqnarray} \label{corrxy} \av{\sigma_i^x\sigma_{i+1}^y}&=& 0, \\ \av{\sigma_i^y\sigma_{i+1}^z}&=& 0, \\ \label{corrxx} \av{\sigma_i^x\sigma_{i+1}^x}&=&\frac{1}{\pi}\int_0^\pi d\phi\frac{\cos\phi+\lambda}{\sqrt{1+\lambda^2+2\lambda\cos\phi}}, \\ \av{\sigma_i^y\sigma_{i+1}^y}&=& \frac{1}{\pi}\int_0^\pi d\phi\frac{\cos\phi+\lambda\cos 2\phi}{\sqrt{1+\lambda^2+2\lambda\cos\phi}}, \\ \av{\sigma_i^z\sigma_{i+1}^z}&=&\av{\sigma_i^z}^2-\av{\sigma_i^x\sigma_{i+1}^x}-\av{\sigma_i^y\sigma_{i+1}^y}. 
\label{corr0} \end{eqnarray} An analytic expression for $\av{\sigma_i^z\sigma_{i+1}^x}$ has not been found, since after the Jordan--Wigner transformation the corresponding operator still contains nonlocal terms \cite{Pfeuty1970, Osborne2002}. As we will see, $\av{\sigma_i^z\sigma_{i+1}^x}$ determines the reconnecting energy $E_c$ (\ref{Ec:eq}), and thus we resort to two different numerical approaches to evaluate it: direct diagonalization of the Hamiltonian (\ref{HN:def}) for a finite number $N$ of spins, and a density matrix renormalization group (DMRG) algorithm. We anticipate that the results are not significantly different in the region of interest, apart from a moderate discrepancy for $f$ close to $f_c$; see \ref{num:app} for further details on the numerical methods. When the spin chain is in the thermal state $\sim e^{-H_{\rm tot}/k_BT}$, similar expressions can be obtained for the average magnetization and correlations \cite{Barouch1970}; we list them in \ref{App:ChainfiniteT}. \section{Single spin battery $(M=1)$ in the ground state} \label{sec:M1} Of particular interest is the case where only one spin is disconnected from an infinite chain ($M=1,N=\infty$). Besides its pedagogical relevance, this case allows all but one of the thermodynamic quantities of the single-spin battery to be expressed in analytic form, as discussed below. We assume that at the beginning of the cycle the spin chain is in the ground state $\varrho_{\rm I}=\ket{0^+}\bra{0^+}$. Since the chain is translation-invariant, the choice of the battery site is arbitrary; for definiteness, we choose it to be the zeroth site. In the stroke I$\to$II, the battery spin is instantaneously disconnected with a work cost \begin{eqnarray} E_d=2(1-f)\av{\sigma_0^x\sigma_1^x}, \label{EdM1:eq} \end{eqnarray} where $\av{\dots} $ is the expectation value calculated over the initial state $\varrho_{\rm I}$. Note that we have exploited the translation-invariance of the chain for the nearest-neighbor correlators.
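The averages entering Eq.~(\ref{EdM1:eq}) follow from the closed forms (\ref{sigmax})--(\ref{corrxx}) by numerical quadrature; a minimal sketch (the value of $f$ is an arbitrary illustrative choice):

```python
import numpy as np
from scipy.integrate import quad

def chain_averages(f):
    """Ground-state averages from Eqs. (sigmax), (sigmaz) and (corrxx)."""
    lam = (1 - f) / f
    den = lambda p: np.sqrt(1 + lam**2 + 2 * lam * np.cos(p))
    mx = (1 - lam**-2) ** 0.125 if lam > 1 else 0.0   # order parameter
    mz = quad(lambda p: (1 + lam * np.cos(p)) / den(p), 0, np.pi)[0] / np.pi
    xx = quad(lambda p: (np.cos(p) + lam) / den(p), 0, np.pi)[0] / np.pi
    return mx, mz, xx

f = 0.3
mx, mz, xx = chain_averages(f)
E_d = 2 * (1 - f) * xx   # Eq. (EdM1:eq)
```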
Moreover, it is easy to see that Eq.~\eqref{EdM1:eq} is independent of the number of battery sites $M$. The reduced state of the single spin (\ref{rhoiis:eq}) after the disconnection quench thus reads \begin{eqnarray} \rho_{\rm II}=\frac{1}{2}(\mathbbm{I}_2+\av{\sigma_0^x}\sigma_0^x+\av{\sigma_0^z}\sigma_0^z)=\frac{1}{2}(\mathbbm{I}_2+{\bf a} \cdot {\boldsymbol\sigma}_0), \label{rho0:eq} \end{eqnarray} represented by the vector ${\bf a}=(\av{\sigma_0^x},0,\av{\sigma_0^z})$ in the Bloch sphere, in terms of \eqref{sigmax} and \eqref{sigmaz}. In the stroke II$\to$III, a unitary operation extracts the ergotropy of the state $\rho_{\rm II}$. Since the Hamiltonian $H_S=-f\sigma_0^z$ and the exhausted (or passive) state of the disconnected spin commute, the Bloch vector of the exhausted state must point in the $\hat{\bf z}$ direction. Unitary transformations on the spin correspond to rotations of the Bloch vector ${\bf a}$. Thus, the unitary that extracts the ergotropy rotates the Bloch vector from ${\bf a}$ to $\bar{{\bf a}}=(0,0,\bar{\sigma}_0^z)$, where \begin{eqnarray} \bar{\sigma}_0^z&=&\sqrt{\av{\sigma_0^x}^2+\av{\sigma_0^z}^2}\label{sz1}. \end{eqnarray} That is, the passive state (\ref{rhoiii:eq}) is \begin{eqnarray} \rho_{\rm III}&=&\frac{1}{2}(\mathbbm{I}+ \bar{\sigma}_0^z\sigma_0^z). \label{rho01} \end{eqnarray} It is simple to see that the Bloch vector ${\bf a}$ is rotated by an angle $2\alpha$ about the $\hat{\bf y}$ axis, where \begin{eqnarray} \label{Eq19} \sin 2\alpha=\frac{\av{\sigma_0^x}}{ \bar{\sigma}_0^z},\\ \cos 2\alpha=\frac{\av{\sigma_0^z}}{ \bar{\sigma}_0^z}. \label{Eq20} \end{eqnarray} The ergotropy (\ref{ergo:eq}) of the single spin will thus be given by \begin{eqnarray} \mathcal E=\frac{f}{2}\Tr[\sigma_0^z( \bar{\sigma}_0^z\sigma_0^z-\av{\sigma_0^x}\sigma_0^x-\av{\sigma_0^z}\sigma_0^z)] =f( \bar{\sigma}_0^z-\av{\sigma_0^z}).
\label{ergo:theo} \end{eqnarray} Comparing the last expression with Eq.~(\ref{sz1}), we conclude that we need $\av{\sigma_0^x } \neq 0$, i.e., $f<1/2$, for the ergotropy to be nonvanishing. In other words, the battery is charged only in the ordered phase. The magnetization along $x$, given by Eq.~(\ref{sigmax}), grows smoothly from zero at $f_c=1/2$ as $f$ decreases. By expanding (\ref{ergo:theo}) to leading order, we can thus obtain the critical behaviour of the ergotropy \begin{equation} \mathcal E\sim (f_c-f)^{1/4}+O ((f_c-f)^{1/2}). \label{erg:crit} \end{equation} Thus we find that the ergotropy critical exponent is $2\beta$, where $\beta$ is the critical exponent for the order parameter $\av{\sigma_i^x}$: the two critical exponents are not independent, akin to the scaling relations in critical systems \cite{Chaikin}. Eq.~(\ref{ergo:theo}), together with Eq.~(\ref{erg:crit}), represents the first relevant result in this section. Finally, we note that the ergotropy as a function of $f$ vanishes at $f=0$ and for $f\geq 1/2$; noticing that $\mathcal E/f= \bar{\sigma}_0^z-\av{\sigma_0^z}$ decreases monotonically from its maximum $\mathcal E/f=1$ at $f=0$ to $\mathcal E/f=0$ at $f=1/2$, we conclude that $\mathcal E$ must have a single maximum in the interval $0<f<1/2$. This is confirmed by inspection of Fig.~\ref{fig:ergo}, where a plot of the ergotropy is shown. \begin{figure}[h] \center \includegraphics[width=8cm]{ergo_single_spin_N11_ground_pap.pdf} \caption{Full line: Ergotropy of the single spin battery ($M=1$) in the ground state as a function of $f$, as given by Eq.~(\ref{ergo:theo}). Points: numerical approximation of the ergotropy as obtained by diagonalising the Hamiltonian (\ref{HN:def}) with $N=11$ spins. Inset: zoom of the plot in the critical region.} \label{fig:ergo} \end{figure} To compute the ergotropy (\ref{ergo:theo}) we did not write explicitly the unitary operator $U_{\mathcal E}$ introduced in Eq.~(\ref{rhoiii:eq}).
We noticed that it corresponds to a rotation around the $\hat{\bf y}$ axis bringing the ${\bf a}$ vector to the $\hat{\bf z}$ direction. On the Hilbert space of the single spin, the corresponding unitary operator is $e^{i\alpha\sigma_0^y}$, where $2\alpha$ [see Eqs.~\eqref{Eq19} and \eqref{Eq20}] is the angle of rotation on the Bloch sphere \cite{Haroche}. Once the Bloch vector points towards the $\hat{\bf z}$ direction, an arbitrary rotation around that axis leaves the state invariant, i.e., given $\alpha$, for any $\theta$, the unitary operator \begin{eqnarray} \label{Using:theta} U_{\mathcal E}(\theta)=e^{i\theta\sigma_0^z}e^{i\alpha\sigma_0^y} \end{eqnarray} extracts the ergotropy. In~\ref{appendixUM1}, we derive the same expression for $U_{\mathcal E}(\theta)$ starting from Eq.~\eqref{ugeneral} with $M=1$. As discussed in section \ref{subsecUgen}, neither the value of the ergotropy as given by Eq.~(\ref{ergo:eq}) nor $\rho_{\rm III}$ in Eq.~(\ref{rhoiii:eq}) depend on the phase $\theta$, while $\varrho_{\rm III}$ and $E_c$ do. We are now in a position to calculate the reconnecting work $E_c$, Eq.~\eqref{Ec:eq}, after the ergotropy extraction (step III $\to$ IV in Fig.~\ref{fig:cycle}). Such a quantity for the $M=1$ case reads: \begin{eqnarray} E^{(1)}_c &=& 2 (1-f) \cos 2 \theta (\sin 2 \alpha \av{\sigma^x_0\sigma^z_{1}}- \cos2\alpha\av{\sigma^x_0\sigma^x_{1}} ), \label{Wc:def} \end{eqnarray} where we have used the fact that the correlations do not depend on the specific site, that $\av{\sigma^x_0\sigma^y_{1}}=0$, see Eq.~(\ref{corrxy}), and that the equality $\av{\sigma^x_i\sigma^z_{i+1}}=\av{\sigma^z_i\sigma^x_{i+1}}$ holds due to the inversion and translation symmetry.
By using Eqs.~(\ref{Eq19})--(\ref{Eq20}), or the more explicit expressions Eqs.~(\ref{Eq19b})--(\ref{Eq20b}), the connecting energy Eq.~(\ref{Wc:def}) can be written as \begin{equation} E^{(1)}_c=2(1-f) \frac{\cos 2 \theta}{ \bar{\sigma}_0^z}(\av{\sigma_0^x} \av{\sigma^x_0\sigma^z_{1}}-\av{\sigma_0^z} \av{\sigma^x_0\sigma^x_{1}}). \label{Wc:defp} \end{equation} Eq.~(\ref{Wc:defp}) is the second relevant result of this section: we find that, while the other two energies involved in the cycle, namely the disconnecting energy $E_d$ and the ergotropy $\mathcal E$, are independent of the arbitrary phase $\theta$ appearing in Eq.~\eqref{Using:theta}, the reconnecting energy does depend on this phase. In particular, one can tune it so as to minimize $E^{(1)}_c$, and by noticing that $(\av{\sigma_0^x} \av{\sigma^x_0\sigma^z_{1}}-\av{\sigma_0^z} \av{\sigma^x_0\sigma^x_{1}})\le0$ (numerically checked, data not shown), we conclude that $\theta=0$ corresponds to the minimal value of $E_c$ for any $f$. The connecting and disconnecting energies are plotted in the left panel of Fig.~\ref{best:DMRG} as functions of $f$ and for different values of the phase $\theta$: the effect of the phase on $E_c$ is clearly visible in the figure. Having derived the expression for the cycle output energy, the ergotropy [Eq.~(\ref{ergo:theo})], and the input energy \begin{eqnarray} E_d+E_c=2(1-f)\left\{\frac{\cos 2 \theta}{ \bar{\sigma}_0^z}\av{\sigma_0^x} \av{\sigma^x_0\sigma^z_{1}}+\left(1-\frac{\av{\sigma_0^z}\cos 2 \theta}{ \bar{\sigma}_0^z}\right) \av{\sigma^x_0\sigma^x_{1}}\right\} \label{eimput} \end{eqnarray} [Eqs.~(\ref{EdM1:eq}) and (\ref{Wc:defp})], we can proceed to study the efficiency (\ref{eta:def}) of the cycle for a single spin ($M=1$), which is maximized for $\theta=0$. This behaviour is confirmed by inspection of the right panel in Fig.~\ref{best:DMRG}.
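The $f$- and $\theta$-dependence of the single-spin efficiency can be reproduced in a few lines. The sketch below combines the closed forms above with a crude finite-chain estimate of $\av{\sigma^x_0\sigma^z_1}$, obtained by diagonalizing Eq.~(\ref{HN:def}) for a small $N$ with a tiny longitudinal bias selecting the $\ket{0^+}$ branch; the chain size, bias and parameter values are illustrative assumptions, not the $N=11$/DMRG data of the figures:

```python
import numpy as np
from scipy.integrate import quad

px = np.array([[0, 1], [1, 0]], dtype=complex)
pz = np.array([[1, 0], [0, -1]], dtype=complex)

def op(o, i, N):
    """Single-site operator o acting on site i of an N-spin chain."""
    out = np.array([[1.0 + 0j]])
    for j in range(N):
        out = np.kron(out, o if j == i else np.eye(2))
    return out

def sxsz_corr(f, N=8, bias=1e-3):
    """<s^x_0 s^z_1> from a small biased chain (finite-size stand-in)."""
    H = sum(-(1 - f) * op(px, i, N) @ op(px, (i + 1) % N, N)
            - f * op(pz, i, N) - bias * op(px, i, N) for i in range(N))
    psi = np.linalg.eigh(H)[1][:, 0]
    return (psi.conj() @ op(px, 0, N) @ op(pz, 1, N) @ psi).real

def efficiency(f, theta):
    """eta = E/(E_d + E_c) from Eqs. (ergo:theo) and (eimput)."""
    lam = (1 - f) / f
    den = lambda p: np.sqrt(1 + lam**2 + 2 * lam * np.cos(p))
    mx = (1 - lam**-2) ** 0.125 if lam > 1 else 0.0
    mz = quad(lambda p: (1 + lam * np.cos(p)) / den(p), 0, np.pi)[0] / np.pi
    xx = quad(lambda p: (np.cos(p) + lam) / den(p), 0, np.pi)[0] / np.pi
    sbar = np.hypot(mx, mz)
    ergo = f * (sbar - mz)
    xz = sxsz_corr(f)
    E_in = 2 * (1 - f) * (np.cos(2 * theta) / sbar * mx * xz
                          + (1 - mz * np.cos(2 * theta) / sbar) * xx)
    return ergo / E_in

eta_0 = efficiency(0.35, 0.0)
```

Consistently with the discussion above, $\theta=0$ yields a larger efficiency than, e.g., $\theta=\pi/2$ in this sketch.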
We also notice that the maximum of the efficiency is achieved for values of $f$ just below the critical value, and that the efficiency decreases abruptly as $f$ approaches it. One can outline an analysis of the scaling behaviour of the efficiency $\eta=\mathcal E/(E_d+E_c)$ near the critical point. We notice that for $f\to f_c$, both $\av{\sigma_0^z}$ and $\av{\sigma_0^x\sigma_1^x}$ [Eqs.~(\ref{sigmaz}) and (\ref{corrxx}), respectively] go to $2/\pi$, and thus $E_c+E_d\to 4\sin^2\theta/\pi$ if $\theta \neq k \pi$ with $k$ an integer, as follows from Eq.~\eqref{eimput}. Therefore, the efficiency exhibits the same critical scaling as the ergotropy [see Eq.~\eqref{erg:crit}]: \begin{equation} \eta^{(1)}\sim \av{\sigma_0^x}^2 \simeq (f_c-f)^{1/4}, \quad(\theta \neq k \pi), \end{equation} with exponent $2\beta$. For $\theta=k \pi$, both $\mathcal E\to 0$ and $E_c+E_d\to 0$ as $f\to f_c$ and one finds that, to the left of the critical point ($f\lesssim f_c$), \begin{equation} \eta^{(1)}\sim \frac{\av{\sigma_0^x}}{\av{\sigma^x_0\sigma_{1}^z}}, \quad(\theta = k \pi). \label{eta1:1} \end{equation} Therefore, it is not possible to derive a critical exponent for the efficiency when $\theta = k \pi$, because the expression for $\av{\sigma_i^x\sigma_{i+1}^z}$, as discussed above, is not available. However, the numerical results reported in \ref{num:app} clearly show that the correlation $\av{\sigma_i^x\sigma_{i+1}^z}$ vanishes for $f>f_c$, similarly to $\av{\sigma_i^x}$. Making the physically reasonable assumption that $\av{\sigma_i^x\sigma_{i+1}^z}$ goes to zero continuously as $f\to f_c^-$ with a scaling $\av{\sigma_i^x\sigma_{i+1}^z}\sim(f_c-f)^\delta$, for thermodynamic consistency of Eq.~(\ref{eta1:1}) the inequality $\delta \le \beta$ must hold. Thus, for $\theta =k \pi$ the overall critical exponent for $\eta^{(1)}$ is $\beta-\delta<\beta$, which explains the abrupt drop of the curve for $\theta=0$ as $f\to f_c^-$ in Fig.~\ref{best:DMRG}.
The numerical results in \ref{num:app} show that $\av{\sigma_i^x\sigma_{i+1}^z}$ is well fitted with $\delta=\beta/2$ as $f\to f_c^-$. Thus, we find a scaling law for the critical exponent of the efficiency too: its value is determined by a combination of the critical exponents of the order parameter and of the correlations, and is close to $\beta/2$. \begin{figure}[h] \center \psfrag{ }[ct][ct][1.]{ } \includegraphics[width=7.5cm]{WdWC_pap_DMRG.pdf} \includegraphics[width=7.5cm]{eta_pap_with_zoom.pdf} \caption{Left: disconnecting and connecting energies as functions of $f$, for the single spin battery ($M=1)$ in the ground state of (\ref{HN:def}), as given by Eqs.~(\ref{Ed:eq}) and (\ref{Wc:defp}), respectively. Right: Efficiency of the single spin battery, $\eta=\mathcal E/(E_d+E_c)$, with $\mathcal E$ from Eq.~(\ref{ergo:theo}) and $E_d+E_c$ from Eq.~(\ref{eimput}). In both panels $\av{\sigma^x_i\sigma^z_{i+1}}$ and thus $E_c(\theta)$ are obtained numerically with the DMRG algorithm (full symbols) or by direct diagonalization of the Hamiltonian (\ref{HN:def}) (empty symbols), for a finite system with $N=11$ spins. Inset: zoom of the plot with $\theta=0$ in the critical region.} \label{best:DMRG} \end{figure} \section{Case $T>0$} \label{sec:MT} Given that in a thermal state with $T>0$ the spin chain (\ref{HN:def}) is always in the paramagnetic phase, $\av{\sigma_i^x}=0$ \cite{Osborne2002}, Eq.~\eqref{ergo:theo} implies that no ergotropy can be extracted when a single spin is disconnected ($M=1$) from a thermal state. Therefore, in order to study the thermodynamic properties of the cycle in Fig.~\ref{fig:cycle} at finite temperature, we will consider here the case where $M\geq 2$ spins are disconnected from the chain. We present a few results for $M=2$ in \ref{appendixM2}. Obtaining the reduced state (\ref{rhoiis:eq}) and the exhausted state (\ref{rhoiii:eq}) becomes a daunting task as $M$ increases.
Therefore, in the following we will resort to numerical analysis to obtain the thermodynamic quantities of interest. In the previous section, we have seen that direct diagonalization of the Hamiltonian (\ref{HN:def}), or the DMRG algorithm, give results very close to the exact ones, when the latter are available. The initial state of the cycle in Fig.~\ref{fig:cycle} is now $\varrho_{\rm I}=\exp(- H_{\rm tot}/k_B T)/Z_{\rm tot}$. To compute the ergotropy and the post-ergotropy state $\varrho_{\rm III}$ in Eq.~(\ref{rho:3}), one must choose the phases in $U_{\mathcal E}$ introduced in Eq.~(\ref{rhoiii:eq}). For the following results, we choose them to minimize the reconnection energy. Thus, we proceed as follows. Let us introduce \begin{eqnarray} \mathcal{U}_\alpha = \ket{\epsilon_\alpha^\uparrow} \bra{r_\alpha^\downarrow} \otimes \mathbbm{I}_R, \label{Ucall} \end{eqnarray} so that ${\mathcal U}_{\mathcal E} = \sum_\alpha e^{i \theta_\alpha} {\mathcal U}_\alpha$, and rewrite Eq.~\eqref{Ec:eq} accordingly: \begin{eqnarray} E_c[\, \vec{\theta} \,] = \sum_{\alpha, \gamma} e^{i(\theta_\alpha - \theta_\gamma)} \tr [H_{\rm int} \mathcal{U}_\alpha \varrho_{\rm I} \mathcal{U}_\gamma^\dagger]. \end{eqnarray} Further introducing \begin{eqnarray} A_{\alpha, \gamma} := \big\vert \tr [H_{\rm int} \mathcal{U}_\alpha \varrho_{\rm I} \mathcal{U}_\gamma^\dagger] \, \big\vert \qquad \mathrm{and} \qquad \phi_{\alpha, \gamma} := \arg \tr [H_{\rm int} \mathcal{U}_\alpha \varrho_{\rm I} \mathcal{U}_\gamma^\dagger], \label{Aag} \end{eqnarray} which are quantities that do not depend on $\vec{\theta}$, and noting that $\phi_{\gamma, \alpha} = - \phi_{\alpha, \gamma}$, we finally obtain \begin{eqnarray} E_c[\, \vec{\theta} \,] = \sum_\alpha A_{\alpha, \alpha} + 2 \sum_{\alpha < \gamma} A_{\alpha, \gamma} \cos (\theta_\alpha - \theta_\gamma + \phi_{\alpha, \gamma}).
\end{eqnarray} The problem of minimization of $E_c$ can thus be formulated as \begin{eqnarray} E_c^{\min} = \sum_\alpha A_{\alpha, \alpha} + 2 \; \min_{\vec{\theta} \in [0, 2 \pi) \times \cdots \times [0, 2 \pi)} \; \sum_{\alpha < \gamma} A_{\alpha, \gamma} \cos (\theta_\alpha - \theta_\gamma + \phi_{\alpha, \gamma}). \label{eq:ecminta} \end{eqnarray} We immediately see that, when $[H_{\mathrm{int}}, H_S \otimes \mathbbm{I}_R] = 0$ and $H_S$ has a nondegenerate spectrum, $A_{\alpha, \gamma} = 0$ whenever $\alpha \neq \gamma$, meaning that all the $\vec{\theta}$-dependent terms vanish, and hence there is no room for optimizing $E_c$. Indeed, plugging Eq.~\eqref{Ucall} into Eq.~\eqref{Aag} and keeping in mind that, by definition, $H_S \ket{\epsilon_\alpha^\uparrow} = \epsilon_\alpha^\uparrow \ket{\epsilon_\alpha^\uparrow}$, we can write \begin{eqnarray} \nonumber \left\vert \epsilon_\alpha^\uparrow \right\vert A_{\alpha, \gamma} &=& \left\vert \tr \big[ \mathbbm{I}_R \otimes \ket{r_\gamma^\downarrow} \bra{\epsilon_\gamma^\uparrow} H_{\mathrm{int}} (H_S \ket{\epsilon_\alpha^\uparrow}) \bra{r_\alpha^\downarrow} \otimes \mathbbm{I}_R \, \varrho_{\mathrm{I}} \big] \right\vert \\ \nonumber &=& \left\vert \tr \big[ \mathbbm{I}_R \otimes \ket{r_\gamma^\downarrow} \bra{\epsilon_\gamma^\uparrow} (H_S \otimes \mathbbm{I}_R) H_{\mathrm{int}} \ket{\epsilon_\alpha^\uparrow} \bra{r_\alpha^\downarrow} \otimes \mathbbm{I}_R \, \varrho_{\mathrm{I}} \big] \right\vert \\ \nonumber &=& \left\vert \epsilon_\gamma^\uparrow \, \tr \big[ \mathbbm{I}_R \otimes \ket{r_\gamma^\downarrow} \bra{\epsilon_\gamma^\uparrow} H_{\mathrm{int}} \ket{\epsilon_\alpha^\uparrow} \bra{r_\alpha^\downarrow} \otimes \mathbbm{I}_R \, \varrho_{\mathrm{I}} \big] \right\vert \\ \nonumber &=& \left\vert \epsilon_\gamma^\uparrow \right\vert A_{\alpha, \gamma}, \end{eqnarray} from which, in view of the assumed nondegeneracy of the spectrum of $H_S$, the statement follows immediately.
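The minimization in Eq.~\eqref{eq:ecminta} is a standard global-optimization task; below is a minimal sketch using SciPy's differential evolution on randomly generated amplitudes $A_{\alpha\gamma}$ and phases $\phi_{\alpha\gamma}$, which are illustrative stand-ins for the actual traces of Eq.~\eqref{Aag}:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(1)
d = 4  # 2^M levels with M = 2

# Random stand-ins for A_{ag} = |Tr[...]| and phi_{ag} = arg Tr[...].
A = np.abs(rng.normal(size=(d, d)))
A = (A + A.T) / 2                       # A_{ga} = A_{ag}
phi = np.triu(rng.uniform(-np.pi, np.pi, size=(d, d)), k=1)
phi = phi - phi.T                       # phi_{ga} = -phi_{ag}
iu = np.triu_indices(d, k=1)

def cosine_sum(thetas):
    """sum_{a<g} A_{ag} cos(theta_a - theta_g + phi_{ag})."""
    t = np.asarray(thetas)
    return np.sum(A[iu] * np.cos((t[:, None] - t[None, :] + phi)[iu]))

res = differential_evolution(cosine_sum, bounds=[(0, 2 * np.pi)] * d, seed=0)
E_c_min = np.trace(A) + 2 * res.fun     # Eq. (eq:ecminta)
```

Since there are $d(d-1)/2=6$ pairs but only $d=4$ angles, the naive bound $-\sum_{\alpha<\gamma}A_{\alpha\gamma}$ on the cosine sum is generally not attained, consistently with the counting argument below.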
Note that, once $M \geq 2$, the number of pairs with $\alpha < \gamma$ is strictly larger than the number of phases $\theta_\alpha$; therefore, it will not generally be possible to choose $\vec{\theta}$ so that $\min \sum_{\alpha < \gamma} A_{\alpha, \gamma} \cos (\theta_\alpha - \theta_\gamma + \phi_{\alpha, \gamma}) = - \sum_{\alpha < \gamma} A_{\alpha, \gamma}$. The numerical minimization can be carried out by the ``differential evolution'' method, e.g., in \textsc{Python}. The results for the ergotropy, the disconnecting energy, the minimal connecting energy and the efficiency for a finite chain with $N=8$ are shown in Fig.~\ref{fig:Ec}. \begin{figure}[h] \center \includegraphics[width=6cm]{erg_N8_M246_T01} \includegraphics[width=6cm]{erg_N8_M246_T1} \includegraphics[width=6cm]{Ecmin_N8_M246_T01} \includegraphics[width=6cm]{Ecmin_N8_M246_T1} \includegraphics[width=6cm]{Ed_N8_M246_T01} \includegraphics[width=6cm]{Ed_N8_M246_T1} \includegraphics[width=6cm]{eff_N8_M246_T01} \includegraphics[width=6cm]{eff_N8_M246_T1} \caption{Figures of merit of all possible battery--charger configurations of an $8$-node quantum Ising chain with periodic boundary conditions. The first column is for $T=0.1$ and the second is for $T=1$. $M$ is the size of the battery. In the last row the efficiency is calculated as $\eta=\mathcal{E}/(E_d+E_c^{\min})$. The abrupt jumps in the efficiency are likely to be related to the error in the multiparameter optimization, which is complicated by the presence of multiple local optima (the target function is a sum of cosines, and the optimization is carried out over 16 angles for $M=4$ and 64 angles for $M=6$; see Eq.~\eqref{eq:ecminta}).} \label{fig:Ec} \end{figure} In the first row of Fig.~\ref{fig:Ec} we can see that the ergotropy is still maximum around the value $f_c=1/2$, but it remains nonzero for $f>f_c=1/2$, because the reduced state of the disconnected $M$ spins is ``charged'' (or active).
The bumps on the curves for $E_c^{\min}$ for $T=1$, $M=4,\, 6$, signal possible numerical errors in the multiparameter optimization ($2^M$ angles $\theta_\alpha$). Furthermore, as $f$ approaches and exceeds 1/2 the sum $E_c^{\min}+E_d$ becomes quite small. These combined factors result in rather imprecise curves for the efficiency, in particular for $M=4$ and $6$. At least for $M=2$, the efficiency becomes large in the $f \to 1$ regime. This is the weak coupling limit in our model, and, along with the ergotropy, also $E_d$ and $E_c$ vanish, maintaining a finite ratio in Eq.~\eqref{eta:def}. The maximization of the efficiency in the limit of zero output is a type of power--efficiency tradeoff akin to the similar tradeoff for ordinary heat engines \cite{Sekimoto2000, Allahverdyan2013, Shiraishi2016}. In a sense, this limit corresponds to the regime of reversible, quasi-static operation of the device, which, in analogy with the Carnot cycle, is marked by high efficiency at the expense of vanishing output power. In contrast, Fig.~\ref{fig:Ec} shows that, when the temperature is low ($T = 0.1$), the efficiencies of $4$- and $6$-spin batteries peak around $f = 1/2$, thereby breaking the tradeoff. Although the evidence for this is only partial---the numerical optimization of $E_c$ for $M = 4, \, 6$ could not be carried out precisely enough---we observe in Figs.~\ref{fig:ergo} and \ref{best:DMRG} a similar behavior (efficiency and ergotropy both peaking around $f = 1/2$) for a single-spin battery when the chain is in the ground state $\ket{0^+}$. In both cases, the violation of the power--efficiency tradeoff is due to the proximity of the system to criticality. In this context, it is worth noting that critical systems are responsible for breaking the power--efficiency tradeoff also in ordinary heat engines \cite{Campisi2016, Abiuso2020}, albeit by a completely different mechanism and in a completely different setting. 
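The phase optimization just described can be sketched in stand-alone form. The snippet below (Python/SciPy) minimizes a sum of pairwise cosines of the form appearing in Eq.~\eqref{eq:ecminta}; the amplitudes $A_{\alpha,\gamma}$ and phases $\phi_{\alpha,\gamma}$ are random placeholders, not the model's actual matrix elements, and $n = 4$ angles stand in for the $2^M$ angles of an $M$-spin battery:

```python
import numpy as np
from scipy.optimize import differential_evolution

rng = np.random.default_rng(0)
n = 4                                        # number of angles theta_alpha (2^M in the paper)
A = np.abs(rng.normal(size=(n, n)))          # placeholder amplitudes A_{alpha,gamma} >= 0
phi = rng.uniform(0.0, 2.0 * np.pi, (n, n))  # placeholder phases phi_{alpha,gamma}

def connecting_term(theta):
    """sum_{alpha < gamma} A_{ag} * cos(theta_a - theta_g + phi_ag)."""
    total = 0.0
    for a in range(n):
        for g in range(a + 1, n):
            total += A[a, g] * np.cos(theta[a] - theta[g] + phi[a, g])
    return total

# The landscape has many local optima, hence a global (population-based) method.
result = differential_evolution(connecting_term, [(0.0, 2.0 * np.pi)] * n, seed=1)

# The naive lower bound -sum A_{ag} is generally unattainable once there are
# more (alpha < gamma) pairs than angles.
lower_bound = -sum(A[a, g] for a in range(n) for g in range(a + 1, n))
```

The minimized value `result.fun` sits between `lower_bound` and the value at any trial point, which is the behavior exploited in the efficiency analysis above.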
\section{Conclusions} \label{sec:conclu} In this paper, we have studied a four-stroke thermodynamic cycle representing the operation of a quantum battery and its charger. The total system consisting of the battery and the charger is initially either in a ground state or in a thermal equilibrium state, thus protecting the battery's charged state, which is only accessible when disconnected from the charger. Here we have expanded our previous work \cite{Hovhannisyan2020} by considering a fully coherent manipulation in the first three strokes of the cycle. Moreover, we consider a battery--charger system that exhibits a quantum phase transition. We have shown that the phases of the eigenstates of the battery's reduced state can be manipulated by the energy-extracting protocol to increase the efficiency of the process without compromising the extracted energy. This aspect highlights an important general point that, when manipulating a subsystem of a strongly interacting system, locally irrelevant phases may have a nontrivial effect on the global energetics of the system. This is a purely quantum effect brought about by correlations and noncommutativity. By operating the working fluid on the verge of the quantum phase transition, all the figures of merit of the device can be further increased. In particular, we found that if the charger--battery system is in the ground state, the single-spin device only works in the ordered phase, and the critical exponents characterizing the phase transition manifest in the properties of the device close to criticality. Moreover, the arbitrary phase $\theta$ of the unitary that extracts the ergotropy can change the scaling exponent of the efficiency from $2\beta$ to a value close to $\beta/2$. For a battery with $M = 2$ and initially in the thermal ground state $(\ket{0^+} \bra{0^+} + \ket{0^-} \bra{0^-}) / 2$, the ergotropy does not present a critical behavior akin to Eq.~\eqref{erg:crit}. 
However, the ergotropy does show a special behavior: $d \mathcal{E} / d f$ diverges at the critical point; see \ref{appendixM2}. When the ground state is pure (e.g., $\ket{0^+}$), the critical behavior of $\av{\sigma_i^x}$ given by Eq.~\eqref{sigmax}, and the expected critical behavior of $\av{\sigma_i^x \sigma_{i+1}^z}$ suggested by Fig.~\ref{fig:sigmaxz}b, imply that any expectation value calculated on the reduced state \eqref{uglystate} will have a discontinuous derivative at $f_c$, also signaling a special behavior at criticality. When the charger--battery system is in a thermal state, a minimum of two spins are needed for the battery to deliver energy. As this size increases, the optimization of the relative phases to increase the efficiency becomes nontrivial. Overall, our results highlight that collective phenomena can be fruitfully exploited in order to enhance the thermodynamic performance of quantum many-body devices. \section*{Acknowledgements} F. B. thanks Fondecyt project 1191441 and the Millennium Nucleus ``Physics of active matter'' of ANID (Chile). K. V. H. acknowledges support by the University of Potsdam startup funds.
Global Mission Awareness CC

Interfaith harmony for sustainable peace and human development

"Our purpose: to create interfaith harmony among religions for human development; to promote a culture of peace, human rights and justice; to celebrate the International Day of Peace, Interfaith Harmony Week, Environment/Earth Day, International Women's Day, Mother's Day and Father's Day; to provide formal education to underprivileged communities; to provide technical skills, vocational training for women and girls, and computer skills for the youth and oppressed communities; to open a Music Academy and College for Youth; to design projects for poverty alleviation; and to start a TV program for promoting religious harmony and broadcasting peace and religious events and festivals."

Action areas: Education, Media, Peacebuilding and Conflict Transformation
Religious traditions: Christianity, Hinduism, Islam
Joined URI Network
https://www.facebook.com/Global-Mission-Awarness-552334004854934/?rc=p

Global Mission Awareness has been in contact with URI in Pakistan since even before the URI Charter signing. Members participated in the "Journey for Peace" from Karachi to Khyber to welcome the new millennium, as well as in IDP celebrations, International Women's Day, Rural Women's Day, interreligious Christmas celebrations and Iftar parties. By becoming a Cooperation Circle (CC), Global Mission Awareness plans to be in contact with other URI CCs around the globe to learn more about improving the environment, creating space for religious harmony in society, and human development. Global Mission Awareness and 7Star TV organized the 15th Annual Grand Christmas Rejoicing Program on December 13, 2013, in Lahore, Pakistan. This program was unique in that it was an interfaith Christmas celebration: many Muslim and Hindu religious leaders participated, along with famous national Muslim TV and radio artistes, who rejoiced in Christmas songs. CC members are also planning to open a Music Academy and College for Youth. 
* 1. How easy was it to download the tutorial?
* 2. How often does the tutorial freeze or crash?
* 3. The main purposes of this tutorial are to increase public librarian competency and confidence in responding to user requests for health information. How well do you feel this tutorial met its intended goals?
* 4. How would you rate the presenters' subject knowledge?
* 5. How would you rate the presentation skills of the presenters in this tutorial?
* 6. Did you consider the time it took you to complete the tutorial reasonable?
* 7. How can we improve this tutorial?
* 8. Overall, are you satisfied with the Health InfoNet of Alabama program, dissatisfied with it, or neither satisfied nor dissatisfied with it?
* 9. How likely are you to recommend this tutorial to others?
Q: Python datetime.strptime month specifier doesn't seem to work

The month format specifier doesn't seem to work.

    from datetime import datetime
    endDate = datetime.strptime('10 3 2011', '%j %m %Y')
    print endDate
    2011-01-10 00:00:00
    endDate = datetime.strptime('21 5 1987', '%j %m %Y')
    print endDate
    1987-01-21 00:00:00

Now, according to the manual: %m = Month as a decimal number [01,12]. So, what am I missing, other than the hair I've pulled out trying to understand why my django __filter queries return nothing (the dates going in aren't valid!)? I've tried 03 and 05 to no avail.

Versions of things, platform, architecture et al:

    $ python --version
    Python 2.7
    $ python3 --version
    Python 3.1.2
    $ uname -r
    2.6.35.11-83.fc14.x86_64

(that's Linux/Fedora 14/64-bit).

A: You can't mix %j with other format codes like %m, because, as the table you linked shows, %j is the day of the year as a decimal number [001,366]. So 10 corresponds to the 10th day of the year, which is January 10. You only need to write:

    >>> datetime.strptime('10 2011', '%j %Y')
    datetime.datetime(2011, 1, 10, 0, 0)

If instead you wanted to use 10 as the day of the month, you should do:

    >>> datetime.strptime('10 3 2011', '%d %m %Y')
    datetime.datetime(2011, 3, 10, 0, 0)

A: Isn't %j the "day of year" parser, which may be forcing strptime to choose January 21, overriding the %m rule?

A: %j specifies a day of the year. It's impossible for the 10th day of the year, January 10, to occur in March, so your month specification is being ignored. Garbage In, Garbage Out.
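A quick Python 3 restatement of the fix: in CPython, when %j is present the resulting date is derived from the day-of-year field, so a conflicting %m is silently ignored rather than raising an error (behavior worth re-checking on your own interpreter version):

```python
from datetime import datetime

# %j (day of year) wins: '10' means January 10, regardless of the '3'
assert datetime.strptime('10 3 2011', '%j %m %Y') == datetime(2011, 1, 10)

# For "day month year" input, %d is the right directive
assert datetime.strptime('10 3 2011', '%d %m %Y') == datetime(2011, 3, 10)
assert datetime.strptime('21 5 1987', '%d %m %Y') == datetime(1987, 5, 21)
```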
INTERPOL has decided to establish an Expert Working Group to assess various proposals devoted to the exchange of financial information and to the tracing and recovery of criminal assets. This group will consider various proposals including reviving the concept of a "Silver Notice" devoted to the tracing and recovery of criminal assets. The idea of a Silver Notice was first proposed at the 84th INTERPOL General Assembly in Kigali in 2015. This notice was intended to allow INTERPOL to "track the offenders, terrorist financers and others who are using virtual currencies like bitcoin to move and store illicit funds, out of the reach of law enforcement and other authorities". The General Secretariat was asked to create a template of the new notice, to draft general specifications, and to produce by 2016 a precise cost estimate for the development and implementation of the new tool. However, the idea was not implemented at the time and has remained dormant until now. INTERPOL has recently taken a greater interest in acting on financial crime and corruption, instituting the INTERPOL Financial Crime and Anti-Corruption Centre (IFCACC) in January 2022. At the 90th General Assembly in New Delhi this year further resolutions were passed in this area, including the formation of the Expert Working Group. Prefect Vittorio Rizzi, who led the Italian delegation at the 90th General Assembly said "As a police force we have a responsibility to the future of the young generation and we must fight together against organized crime which endangers a healthy economy, honest entrepreneurs and the safety of our communities." Australian authorities have also backed the creation of the new notice, with Australian Federal Police commissioner Reece Kershaw stating that "Crime is borderless and so is the illicit money that underpins criminality … targeting the wealth of criminals is a key strategy of the AFP". 
The Expert Working Group will submit a report on the outcome of its activity and a proposal for adoption at the 91st General Assembly in Vienna next year.
# Integral of Integrable Function is Homogeneous

## Theorem

Let $\left({X, \Sigma, \mu}\right)$ be a measure space.

Let $f: X \to \overline{\R}$ be a $\mu$-integrable function, and let $\lambda \in \R$.

Then:

$\displaystyle \int \lambda f \, \mathrm d \mu = \lambda \int f \, \mathrm d \mu$

where $\lambda f$ is the pointwise $\lambda$-multiple of $f$.
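A standard proof sketch for the theorem above (this is the usual textbook argument, not taken from the source page):

```latex
% Step 1: for \lambda \ge 0 and measurable g \ge 0, the map s \mapsto \lambda s
% is a bijection of the simple functions dominated by g onto those dominated by
% \lambda g (the case \lambda = 0 is trivial), so
%   \int \lambda g \, \mathrm d \mu = \lambda \int g \, \mathrm d \mu.
% Step 2: split f = f^+ - f^-; for \lambda \ge 0,
%   \int \lambda f \, \mathrm d \mu
%     = \lambda \int f^+ \, \mathrm d \mu - \lambda \int f^- \, \mathrm d \mu
%     = \lambda \int f \, \mathrm d \mu.
% Step 3: for \lambda = -1, note (-f)^+ = f^- and (-f)^- = f^+, hence
%   \int (-f) \, \mathrm d \mu = -\int f \, \mathrm d \mu;
% the general case \lambda < 0 follows by combining Steps 2 and 3.
```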
My Math Forum — Linear Algebra

Orthogonal matrix

(Posted April 28th, 2017)

Let v = $(1,2,4)^{T}$, u = $(1,1,1)^{T}$ and w = $(2,1,-1)^{T}$. Verify that $\langle v, w \rangle = 0$ and give a description of all vectors z with $orth_{v}$z = w.

I have shown that $\langle v, w \rangle = 0$, but what about the description? All I can tell is that if $Proj_{v}$z = 0, then $orth_{v}$z = z = w. Is that correct, or is there a different description?
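The question above can be checked numerically. A NumPy sketch (assuming the inner product stripped from the post is the standard dot product): since $\langle v, w \rangle = 0$, adding any multiple of v to w changes only the component along v, so every z on the line w + c·v satisfies $orth_{v}$z = w, and conversely $orth_{v}$z = w forces z = w + $Proj_{v}$z, i.e., z lies in w + span{v}.

```python
import numpy as np

v = np.array([1.0, 2.0, 4.0])
w = np.array([2.0, 1.0, -1.0])

def proj(v, z):
    """Orthogonal projection of z onto v."""
    return (v @ z) / (v @ v) * v

def orth(v, z):
    """Component of z orthogonal to v: z - proj_v(z)."""
    return z - proj(v, z)

assert v @ w == 0.0  # <v, w> = 1*2 + 2*1 + 4*(-1) = 0

# every z = w + c*v gives orth_v(z) = w
for c in (-2.0, 0.0, 3.5):
    assert np.allclose(orth(v, w + c * v), w)
```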
Promachus aequalis is a species of fly described by Friedrich Hermann Loew in 1858. Promachus aequalis belongs to the genus Promachus and the family of robber flies (Asilidae). No subspecies are listed in the Catalogue of Life.
11948 Justinehénin, provisional designation , is a Themistian asteroid from the outer region of the asteroid belt, approximately 12 kilometers in diameter. The asteroid was discovered on 18 August 1993, by Belgian astronomer Eric Elst at CERGA () in Caussols, southeastern France. It was named for tennis player Justine Henin. Orbit and classification Justinehénin orbits the Sun in the outer main-belt at a distance of 2.8–3.6 AU once every 5 years and 9 months (2,091 days). Its orbit has an eccentricity of 0.12 and an inclination of 2° with respect to the ecliptic. The first identification was made at Crimea–Nauchnij in 1973, extending the asteroid's observation arc by 31 years prior to its discovery. Physical characteristics Diameter and albedo Based on an absolute magnitude of 13.2, Justinehénin potentially measures between 6 and 14 kilometers in diameter, assuming an albedo in the range of 0.05 to 0.25. Since asteroids in the outer main-belt are mostly of a carbonaceous rather than of a silicaceous composition, with low albedos, typically around 0.06, Justinehénins diameter might be on the upper end of NASA's published conversion table, as the lower the body's reflectivity (albedo), the larger its diameter at a constant absolute magnitude (brightness). Lightcurves As of 2017, the asteroid's effective size, its composition and albedo, as well as its rotation period and shape remain unknown. Naming This minor planet was named for Belgian former professional tennis player Justine Henin (born 1985). Although her name (usually) contains no acute accent, the asteroid's official name does. The official naming citation was published by the Minor Planet Center on 10 September 2003 (). 
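The quoted 6–14 km range follows from the standard conversion between absolute magnitude and diameter, D = 1329 km · p_V^(−1/2) · 10^(−H/5); a sketch (the function name is ours):

```python
import math

def diameter_km(abs_magnitude, geometric_albedo):
    """Standard asteroid size estimate: D = 1329 / sqrt(p_V) * 10^(-H/5) km."""
    return 1329.0 / math.sqrt(geometric_albedo) * 10.0 ** (-abs_magnitude / 5.0)

# H = 13.2 with a bright (0.25) and a dark (0.05) albedo brackets the range above
bright = diameter_km(13.2, 0.25)   # roughly 6 km
dark = diameter_km(13.2, 0.05)     # roughly 14 km
```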
References

External links
 Asteroid Lightcurve Database (LCDB), query form (info)
 Dictionary of Minor Planet Names, Google books
 Asteroids and comets rotation curves, CdR – Observatoire de Genève, Raoul Behrend
 Discovery Circumstances: Numbered Minor Planets (10001)-(15000) – Minor Planet Center
From Henrietta Vinton Davis to Paul Robeson to The Harlem Shakespeare Festival, African American Shakespeare has a long rich history. Debra Ann Byrd, the delightful HSF Founding Artistic Director, continues the tradition with an all-female production of Othello that hit the stage this November. She shares with us her inspiration for taking on this most racially and sexually charged of Shakespeare's plays, and giving it such a radical twist.
MathSciNet bibliographic data: MR590431 (32H25). Molzon, Robert E. Capacity and equidistribution for holomorphic maps from ${\bf C}^{2}$ to ${\bf C}^{2}$. Proc. Amer. Math. Soc. 71 (1978), no. 1, 46–48.

For users without a MathSciNet license, Relay Station allows linking from MR numbers in online mathematical literature directly to electronic journals and original articles. Subscribers receive the added value of full MathSciNet reviews.
\section{Introduction} \label{sec:2mass_intro} In recent years several large ``blind" 21 cm surveys for galaxies have been conducted providing a measurement of the gas in galaxies in the local universe (Meyer et al. 2004; Rosenberg \& Schneider 2000; Zwaan et al. 1997, Spitzak \& Schneider 1998). These surveys have revealed many galaxies with unusual characteristics: some without a definite optical counterpart and others with very high HI to stellar mass ratios, for example. These galaxy samples give us an opportunity to explore how the stellar properties of gas-rich galaxies differ from those of optically luminous systems. Our understanding of the relationship between the global properties of gas and stars in galaxies has mostly been driven by studies of optically-selected, high surface brightness galaxies (e.g. Scodeggio \& Gavazzi 1993; Huchtmeier \& Richter 1985; Fisher \& Tully 1981). Various efforts have been made to extend these studies to lower surface brightness galaxies, (e.g., Galaz et al. 2002, McGaugh et al. 2000, O'Neil et al. 2000, Sprayberry et al. 1995) revealing a great diversity of properties outside of the traditional ``norms" defined by the high surface brightness samples, but such surveys remain tied to the requirement that the galaxies have formed stars in sufficient numbers and surface densities to be detected optically. Extragalactic HI surveys provide one of the few ways to probe the galaxy population independent of their luminosity and surface brightness. HI galaxy selection is also complementary to optical galaxy selection with respect to star formation history since the conversion of gas to stars renders a galaxy more visible optically, but less visible in an HI survey. The blind HI surveys conducted with the Arecibo radio telescope remain among the deepest to date sampling galaxies with HI fluxes almost an order of magnitude smaller than the very large, but shallow HI Parkes All-Sky Survey (HIPASS, Meyer et al. 
2004) and probing the lowest mass galaxies over a wider range of environments. The stellar properties of the Arecibo samples of Zwaan et al (1997) and Spitzak \& Schneider (1998) have been studied at optical wavelengths. These studies have provided a picture of the gas-rich galaxies in the universe as containing a sub-sample of low-luminosity and low-surface-brightness galaxies (Spitzak \& Schneider 1998) in higher proportion than found in optical surveys. Infrared observations more nearly reflect the total stellar mass of galaxies, since they are less sensitive to the star formation rate and history and are less affected by dust than optical observations. In this paper we use the data from the 2 Micron All-Sky Survey (2MASS) to study the infrared properties of galaxies from the Arecibo Dual-Beam Survey (Rosenberg \& Schneider 2000; RS) and from the Slice Survey (Spitzak \& Schneider 1998; SS). Both of these surveys are ``blind" HI surveys that probed the gas-rich galaxy populations in the local universe. The 2MASS observations of this galaxy sample are useful for examining the relationship between gas and stars in gas-rich galaxies in the local universe. While 2MASS provides information about the stellar mass in these galaxies, it suffers from a lack of surface brightness sensitivity which limits the galaxy detection rate. Nevertheless, over 85\% of the galaxies in each sample were detected. We discuss the HI and 2MASS data used in this study in \S 2 along with details about how some of the galaxies were measured from the 2MASS images. In \S 3 we discuss the relationship between the gaseous and stellar properties of the galaxies and discuss the surface densities of gas and stars in \S 4. In \S 5 we discuss the 2MASS images of the galaxies and in \S 6 we summarize our results. \section{DATA} \label{sec:2mass_samples} \subsection{Sample Selection - The Arecibo Dual-Beam and Slice Surveys} We examine the stellar properties of two HI-selected galaxy samples. 
Both the RS and SS surveys are ``blind" 21 cm surveys carried out with the Arecibo 305 m telescope prior to the Gregorian upgrade. We have used these surveys to identify galaxies purely based on their gas content out to 7977 \kms\ and 8340 \kms\ respectively. The RS survey covered $\sim$ 430 deg$^2$ in the main beam and detected 265 galaxies while the SS survey covered $\sim$ 55 deg$^{2}$ and detected 75 galaxies. The selection functions and HI mass functions have been studied in detail and are presented in Rosenberg \& Schneider (2002) and Schneider, Spitzak, \& Rosenberg (1999) for the RS and SS surveys respectively. The details of the calculations of line width, velocity, distance, and HI mass for the entire sample are presented in RS. The SS survey additionally includes broadband B, R and I optical data. There are three galaxies from the RS survey for which the RS line widths differed from the literature values because one horn of the velocity profile was missed in the survey measurement. In these cases (RS 17, RS 109, and RS 189) we use the values from the Huchtmeier \& Richter catalog (1985) when using line widths to calculate the dynamical mass (see \S3). \subsection{The 2-Micron All Sky Survey Data} The 2 Micron All-Sky Survey (2MASS) provides simultaneous J, H, and K$_s$-band observations of the entire sky; the 2MASS project processed the data to generate a point source catalog and an extended source catalog. We use the extended source catalog data for the RS and SS galaxies in the following analyses and also supplement this with our own analysis of the images for a number of galaxies that were undetected by the standard processing procedures (discussed in \S 2.2.1). 
A full description of the galaxy detection algorithm and the resulting catalogs are available in the Explanatory Supplement to the 2MASS All Sky Data Release\footnote{http://www.ipac.caltech.edu/2mass/releases/allsky/doc/explsup.html by Cutri, R.M., Skrutskie, M.F., Van Dyk, S., Beichman, C.A., Carpenter, J.M., Chester, T., Cambresy, L., Evans, T., Fowler, J., Gizis, J., Howard, E., Huchra, J., Jarrett, T., Kopan, E.L., Kirkpatrick, J.D., Light, R.M, Marsh, K.A., McCallon, H., Schneider, S., Stiening, R., Sykes, M., Weinberg, M., Wheaton, W.A., Wheelock, S., Zacharias, N.} (Cutri et al.). The 2MASS extended source catalog has undergone several iterations and we use results from both the Version 2 and Version 3 catalogs in these analyses. We use Version 2 in addition to Version 3 because this earlier version of the software used previous galaxy catalog positions to seed the search algorithms (in addition to doing an independent automated search) and therefore measured a number of fainter sources that Version 3 did not detect, since it used only the automated algorithms. In addition, there are some large galaxies for which we report the Version 2 values rather than the Version 3 values because some of the information is missing in the Version 3 catalog. The J-band observations are the most sensitive of the three 2MASS bands and our HI-selected sources tend to be blue on average, so we use these data in our analyses. The catalog data were used for 45 (4 from Version 2, 41 from Version 3) of the 75 SS galaxies and 180 (50 from Version 2, 130 from Version 3) of the 265 RS galaxies. Some of the galaxies that were not in the extended source catalogs can be identified on the full resolution 2MASS images; in these cases we measured the galaxy from the image. Details of our image analysis are described below. There were an additional 19 SS and 47 RS galaxies that we measured from the images. 
For 11 of the SS galaxies and 38 of the RS galaxies there was no cataloged detection and we were not able to measure the galaxy from the images. For most of the galaxies that were not measured, the source was just too faint to be detected in the short 2MASS exposures, but some exceptions are described in the specific galaxy notes below. The near infrared measurements for the galaxies in both the RS and SS surveys are given in Tables 1 and 2 in the Appendix. The tables also list the source for the measurements, i.e., whether the data come from Version 2 or 3 of the 2MASS catalog, were measured from the image, or if the source was not detected. \subsubsection{2MASS Measured Data} To measure the galaxies on the 2MASS images we have used the ELLIPSE package in IRAF. The brighter galaxies were interactively fit with ellipses such that the ellipse parameters were allowed to change in the highest surface brightness regions but when the fits became uncertain in the outer regions, the ellipse parameters were held fixed while additional steps in radius were taken. For the lowest surface brightness galaxies, even the central regions had uncertain parameter fits, so only fixed circular apertures were used for the photometry. ELLIPSE was run at least twice for each galaxy so that the outer ``background" ellipses could be used for background subtraction. Then the fitting was rerun on the background-subtracted image. The background subtraction was repeated if an adequately flat background had not been obtained on the first pass. In all cases, the point sources in the image were masked prior to the ellipse fitting. \subsubsection{Notes About Selected Galaxies} We provide information about the near infrared measurements for a few galaxies that warrant a comment. The numbers refer to the entry number in the RS survey catalog (see Table 1 for the ADBS names). \begin{itemize} \item{{\bf RS 50}: The HI detection of this system includes an interacting pair of galaxies. 
Since we do not separate the pair in the HI observation we do not try to correlate the HI with a near infrared measurement of one galaxy or the other.} \item{{\bf RS 73}: We do not report an infrared measurement for this galaxy because it is too contaminated by a bright star.} \item{{\bf RS 79}: We use the Version 2 measurement of the brighter galaxy in the interacting pair. Both galaxies are probably contributing to the HI, but including the fainter of the two galaxies would change the J-band magnitude by less than 0.3 mag.} \item{{\bf RS 184}: This galaxy is in the Version 3 extended source catalog. However, we use measurements from the image because the center of this very low surface brightness galaxy was missed in favor of a field star by the galaxy detection algorithm.} \end{itemize} \subsection{Using the 2MASS data as a Measure of Stellar Mass} As a basis for understanding the properties of these HI-selected galaxies we want to compare the gas mass and the stellar mass in these systems. The 2MASS data are particularly good for measuring the stellar mass as they are more sensitive to low mass stars and less sensitive to dust and star formation than optical observations. However, there is substantial scatter in the relationship between luminosity and stellar mass even in the infrared. The scatter can be decreased by using information about the galaxy's color (Bell \& de Jong 2001). Based on stellar population synthesis models, a variety of estimates have been made for various stellar initial mass functions and star-formation histories. Bell \& de Jong (2001) present a range of models that yield realistic colors and surface brightnesses for present-day galaxies. 
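As a minimal numerical sketch (not the survey pipeline) of the two conversions used in this section — the color-based J-band mass-to-light calibration given below as Equation 1, and the standard HI mass relation $M_{HI}=2.36\times10^5 D^2 I_{HI}$ — with function names of our own choosing:

```python
import math

def stellar_mass_to_light_J(b_minus_r):
    """Eq. 1 (Bell & de Jong-style): log10(M_star / L_J) = 0.552 (B-R) - 0.724."""
    return 10.0 ** (0.552 * b_minus_r - 0.724)

def hi_mass_solar(distance_mpc, integrated_flux_jy_kms):
    """M_HI = 2.36e5 * D^2 * I_HI, in solar masses (D in Mpc, I_HI in Jy km/s)."""
    return 2.36e5 * distance_mpc ** 2 * integrated_flux_jy_kms

# sanity checks against the text: a blue galaxy with (B-R) = 0.6 gives
# M*/L_J ~ 0.4, and the SS-sample average (B-R) = 1.0 gives M*/L_J ~ 0.7
```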
Bell (private communication) has provided us with J-band values for these same models and finds a tight correlation between (B-R) color and the ratio of stellar mass to J-band luminosity: \begin{equation} \log(M_{star}/L_J) = 0.552 (B-R) - 0.724 \end{equation} The value of $M_{star}/L_J$ varies from $\sim0.4$ for blue galaxies [$(B-R)\sim0.6$] to $\sim1.2$ for red galaxies [$(B-R)\sim1.6$]. Including differences between models and measurement uncertainties, there is thus a range of almost a factor of 4 in the relationship between stellar mass and J-band luminosity for the normal range of galaxy colors. When the color is known, the total range reduces to about a factor of 1.5. These results appear to be consistent with alternative models carried out for I and H bands (cf. McGaugh \& de Blok 1997; McGaugh et al. 2000). The SS galaxies were previously observed in broadband colors, which allows us to estimate their mass-to-light ratios more precisely. Their average (B-R) color was 1.0, so their average M$_{star}$/L$_J$ value is 0.7. Figure \ref{fig:MLcomp2} shows the relationship between M$_{star}$/L$_J$ for the SS galaxies calculated using Equation 1 and their ratio of HI mass to J-band luminosity, M$_{HI}$/L$_J$. The total neutral hydrogen mass was calculated from the published data using the well-known relationship M$_{HI}=2.36\times10^5 D^2 I_{HI}$, where $D$ is the distance in Mpc and $I_{HI}$ is the integrated flux in Jy km s$^{-1}$. The standard deviation around the line is 0.15 while the standard deviation around the average value of $\log(M_{star}/L_J) = -0.15$ is 0.19. The correlation is not extremely tight, but it allows us to improve our estimate of the mass-to-light ratios for the galaxies that do not have broadband colors: the RS galaxies and the faintest SS galaxies. \begin{figure}[ht] \plotone{Rosenberg.fig1.eps} \caption{The relationship between the mass-to-light ratio calculated using Equation 1 and the M$_{HI}$/L$_J$ for the SS data.
The fit to these data is used to improve the estimate of M$_{star}$/L$_J$ for the data for which B-R measurements are not available.} \label{fig:MLcomp2} \end{figure} In addition to the intrinsic and measurement uncertainties in determining M$_{star}$/L$_J$, an additional source of scatter is introduced in the determination of stellar mass because the 2MASS data are not very deep. Uncertainties in the J-band flux are caused by the difficulty in measuring the isophotal sizes near the limit of the survey surface brightness sensitivity, particularly for the lowest surface brightness galaxies. We compare the isophotal measurements of galaxy magnitudes at 21 mag arcsec$^{-2}$ to the measurements in the largest aperture before the noise takes over (i.e., before the values start oscillating between fainter and brighter values due to noise). For the 2MASS cataloged values, we use the brightest aperture value reported. To be consistent with the catalog measurements, we restrict our own measured apertures to those used in the catalog (i.e., 5, 7, 10, 15, 20, 25, 30, 40, 50, 60, or 70 arcsec). However, 19 of the RS galaxies are significantly larger than 70$\arcsec$, so the largest aperture is not a good measurement of their sizes or fluxes. For these galaxies, we use the extrapolated J-band magnitude so that the size and flux are not severely underestimated. Table 1 indicates for which galaxies the extrapolated values are used. There are no SS galaxies that are significantly larger than the 70$\arcsec$ aperture. \begin{figure}[ht] \plotone{Rosenberg.fig2.eps} \caption{The relationship between aperture and 21 mag arcsec$^{-2}$ isophotal magnitudes from the RS and the SS samples. The open symbols indicate magnitudes that were obtained from the 2MASS extended source catalog, Version 2 or 3.
The closed symbols indicate magnitudes that were measured from the images.} \label{fig:magcomp} \end{figure} Figure \ref{fig:magcomp} shows that the magnitudes for most of the faint galaxies are severely underestimated if the isophotal value is used. For all of the J-band luminosities in this paper we use the aperture measurement values. Additionally, we adopt a value of M$_\odot$(J) = 3.73 (Johnson 1966; Allen 1973) for our conversion to solar luminosities. \section{Stars and Gas in an HI-Selected Sample} By using the RS and SS data, we have selected a wide range of galaxy types, the only criterion being that they contain neutral hydrogen. These systems span the range from dwarfs and irregulars to early-type spirals and interacting systems. Appendix A shows the 2MASS near infrared images for these galaxies; their morphology is discussed in greater detail in \S 5. \begin{figure}[ht] \plotone{Rosenberg.fig3.eps} \caption{The relationship between the average J-band surface brightness and the ratio of HI mass to J-band luminosity for the RS galaxies (filled circles) and the SS galaxies (open circles).} \label{fig:sbcomp} \end{figure} Figure \ref{fig:sbcomp} shows the relationship between the J-band surface brightness and M$_{HI}$/L$_J$ for the RS galaxies (filled circles) and for the SS galaxies (open circles). The J-band surface brightness is determined within the aperture radius (or extrapolated radius for the largest galaxies) determined as discussed in the previous section. The surface brightness that we derive is, roughly, a surface brightness at a slightly fainter isophote than J=21 mag arcsec$^{-2}$. This figure illustrates that, in general, the gas-rich galaxies (those with high values of M$_{HI}$/L$_J$) are the lower surface brightness ones, indicating that HI selection is a good way to find low surface brightness galaxies. Although our measure of surface brightness is not very precise, the correlation with M$_{HI}$/L$_J$ is reasonably good.
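The two conversions used above, Equation 1 and the standard HI mass relation, are simple to evaluate; as a minimal Python sketch (function names are ours):

```python
def stellar_mass_to_light_J(b_minus_r):
    """Eq. 1 (Bell, priv. comm.): log10(M_star/L_J) = 0.552 (B-R) - 0.724,
    returning M_star/L_J in solar units."""
    return 10.0 ** (0.552 * b_minus_r - 0.724)

def hi_mass(distance_mpc, flux_jy_kms):
    """M_HI = 2.36e5 D^2 I_HI, in solar masses, with D in Mpc and the
    integrated flux I_HI in Jy km/s."""
    return 2.36e5 * distance_mpc ** 2 * flux_jy_kms
```

For the SS sample's average color, (B-R) = 1.0, the first function gives M$_{star}$/L$_J \approx 0.67$, consistent with the value of 0.7 quoted above.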
The typical (B-J) color for a spiral galaxy is between 2 and 3. Using (B-J) = 2 as a conservative value, the average surface brightnesses of these HI-selected galaxies range from $\Sigma_J$ = 18 to 25 mag arcsec$^{-2}$. There are a variety of different definitions of what constitutes a low surface brightness galaxy. One definition refers only to the central surface brightness of the disk component after a bulge-disk decomposition has been carried out. By this definition, many galaxies with high surface brightness bulges are categorized as LSB because the disk component is faint. Another common definition uses the mean blue surface brightness within the $\mu_{B_0} = $ 25.0 mag arcsec$^{-2}$ isophote, giving the label of LSB to galaxies with inclination-corrected mean surface brightnesses $\langle\mu_{B_0}\rangle > $ 25.0 mag arcsec$^{-2}$. Since we cannot perform bulge-disk decompositions with the present data, our mean J-band surface brightnesses within the $\mu_{J} = $ 21.0 mag arcsec$^{-2}$ isophote are most readily comparable to the latter definition, although for typical (B-J) colors we measure the mean within a brighter isophote. It appears, though, that analogous to the B-band definition of LSB, we might call galaxies with $\langle\mu_{J}\rangle > $ 21.0 mag arcsec$^{-2}$ infrared LSBs. This cutoff for LSBs also marks the approximate point at which the total stellar luminosity becomes smaller than the HI mass (in solar units). The total baryonic mass was estimated from the HI and J-band emission using: \begin{equation} M_{bar} = M_{star} + 1.4\times M_{HI} . \end{equation} where M$_{star}$ is the total stellar mass, derived from the J-band luminosity and the mass-to-light ratio discussed in the previous section, and the total gas mass is estimated from M$_{HI}$ multiplied by 1.4 to account for helium and metals. Obviously, including an estimate of the molecular and ionized gas would increase this further.
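Combining Equation 2 with the color-dependent mass-to-light ratio of Equation 1, the baryonic mass estimate can be sketched as (all quantities in solar units; the function name is ours):

```python
def baryonic_mass(l_j, b_minus_r, m_hi):
    """Eq. 2: M_bar = M_star + 1.4 * M_HI, with M_star = L_J * (M/L)_J
    from the color-based relation of Eq. 1.  The factor 1.4 accounts for
    helium and metals; molecular and ionized gas are not included."""
    m_star = l_j * 10.0 ** (0.552 * b_minus_r - 0.724)
    return m_star + 1.4 * m_hi
```

For a galaxy with $L_J = 10^9\,L_\odot$, (B-R) = 1.0, and $M_{HI} = 10^9\,M_\odot$, this gives $M_{bar} \approx 2.1\times10^9\,M_\odot$.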
The stellar and gas masses both have significant uncertainties associated with them (e.g., due to the metallicity of the stars and the amount of molecular gas present), but the effect of internal extinction on the estimate is small compared to its effect at optical wavelengths since the extinction at J is only about 20\% of that at B. Furthermore, the additional gas mass is likely to be largest for earlier type galaxies (Young \& Knezek 1991), which are dominated by their stellar content, and the uncertainties in the stellar mass are largest for the dim galaxies that are generally dominated by their neutral hydrogen mass, so the uncertainties should have little impact on the total baryonic mass estimates. \begin{figure}[ht] \plotone{Rosenberg.fig4.eps} \caption{The dependence of stellar luminosity, HI mass, total baryonic mass, and dynamical mass upon the ratio of gas to stars. The distance independent quantities of the ratio of dynamical mass to J-band luminosity and the ratio of dynamical mass to baryonic mass are also plotted as a function of the gas to stars ratio. The filled circles are the RS galaxies, the open circles are the SS galaxies. The triangles plotted in panels containing dynamical mass are the galaxies for which b/a$>$0.4 for which the dynamical mass measurements may not be reliable.} \label{fig:mlcomp} \end{figure} Figure \ref{fig:mlcomp} shows how the various measured and derived values of mass and luminosity vary with M$_{HI}$/L$_{J}$. We find that the HI mass is essentially uncorrelated with the ratio for our HI-selected sample; consequently, the stellar luminosity declines roughly in inverse proportion to the M$_{HI}$/L$_{J}$ ratio. An indication of dynamical mass was calculated for our galaxies using their HI line widths (see RS and SS for a discussion of the measurements of line width) and optical dimensions to estimate the inclination and rotation speed, using the equation: \begin{equation} M_{dyn} = v_{rot}^2 r_{opt}/G . 
\end{equation} Here $v_{rot}$ is the inclination-corrected line width ($v/\sin i$). We applied the inclination correction only for inclinations $i>66^\circ$ (b/a $\le$ 0.4). This calculation of dynamical mass will not accurately estimate the masses of dwarf systems that are not primarily rotationally supported, and this mass estimate is directly dependent on the limits to which starlight is seen, but it will, nevertheless, give an indication of the total mass inside the faintest detected isophote. We have chosen to use the optical size in this calculation, despite the fact that there is often significant HI mass outside of this radius, because HI sizes are only available for a small subset of the RS data and are not available for any of the SS data. The optical sizes of the RS galaxies are derived from Palomar Sky Survey images and represent roughly the B-band 25 mag arcsec$^{-2}$ isophotal size. In Figure \ref{fig:mlcomp} the galaxies with b/a$>$0.4 (more face-on systems for which the determination of dynamical mass is much less reliable) have been plotted as triangles. Even though there are some clear trends in the properties of the galaxies in these surveys, Figure \ref{fig:mlcomp} also illustrates an important diversity. The width of the distribution of masses, luminosities and mass-to-light ratios at any given value of M$_{HI}$/L$_{J}$ is many times larger than might be caused by measurement uncertainties alone. The spread is probably even larger than these figures show since the lower-mass sources were only detectable to a limited distance within our search volume. What Figure \ref{fig:mlcomp} shows is that galaxies of any given HI mass span the entire range of gas-to-star ratios -- we see galaxies that are low in M$_{HI}$ because they have turned most of their gas into stars and some that have low M$_{HI}$ because they are low mass systems with little of their mass in stars.
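Equation 3 and the b/a $\le$ 0.4 cut can be sketched in Python. The thin-disk approximation $\cos i = $ b/a and the value of $G$ in astronomer's units are assumptions of this sketch; the paper does not spell out its inclination formula:

```python
import math

G = 4.301e-6  # gravitational constant in kpc (km/s)^2 / M_sun

def dynamical_mass(line_width_kms, b_over_a, r_opt_kpc):
    """Eq. 3 sketch: M_dyn = v_rot^2 * r_opt / G, with v_rot = v / sin(i)
    as in the text.  The inclination cos(i) = b/a is our assumption, and
    the estimate is only made for b/a <= 0.4, i.e. i > 66 deg."""
    if b_over_a > 0.4:
        return None  # too face-on: dynamical mass considered unreliable
    inclination = math.acos(b_over_a)
    v_rot = line_width_kms / math.sin(inclination)
    return v_rot ** 2 * r_opt_kpc / G
```

A 200 km s$^{-1}$ line width with b/a = 0.2 and a 10 kpc optical radius gives roughly $10^{11}\,M_\odot$, while a b/a = 0.6 system is flagged as unreliable.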
We note the scarcity of galaxies with M$_{HI}$/L$_{J}$ \ifmmode \buildrel < \over {_\sim} \else $\buildrel <\over {_\sim}$\fi 0.03 in Figure \ref{fig:mlcomp}. It is not clear whether this scarcity is a selection effect or a true deficit of galaxies. Presumably, some ellipticals and lenticulars would fall in this range, and some early-type systems {\it were} detected in these surveys. The total contribution of early type galaxies containing HI is uncertain. At such low M$_{HI}$/L$_{J}$ values, the HI selection criteria only permit us to detect high stellar luminosity sources, and even those only out to relatively small distances. HI data for an unbiased optical sample are needed to determine their true statistical contribution. \begin{figure} \plotone{Rosenberg.fig5.eps} \caption{The relationship between the J-band luminosity and HI mass for the RS (filled circles) galaxies and the SS (open circles) galaxies. The dashed lines indicate M$_{HI_{\ast}}$ (Rosenberg \& Schneider 2002) and L$_{J_{\ast}}$ (from the 2dF sample using 2MASS measurements; Cole et al. 2001). The solid line indicates a one-to-one relationship between the parameters.} \label{fig:lcomp} \end{figure} The large range of galaxy properties and M$_{HI}$/L$_{J}$ values for the RS and SS surveys indicates that there is not an easy conversion between the HI mass function and the J-band luminosity function. Figure \ref{fig:lcomp} further demonstrates the problem with using one of these parameters to predict the other. It is not clear that there is a linear relationship between L$_J$ \ and M$_{HI}$ \ at all masses and luminosities, and the spread around the correlation between the quantities is large. As we noted in our earlier paper (Rosenberg \& Schneider 2002), at a given HI mass, there may be several orders of magnitude variation in the J-band luminosity, making HI mass a poor predictor of stellar content, and vice versa.
Both the baryonic and dynamical mass estimates are higher for sources with smaller ratios of M$_{HI}$/L$_{J}$. As we show in Figure \ref{fig:mlcomp}, however, the dynamical mass-to-light ratio is higher on average for the gas-rich galaxies. This figure also shows that the ratio of dynamical mass to baryonic mass is relatively flat as a function of M$_{HI}$/L$_{J}$, with a median value of 3.8 for galaxies with b/a $<$ 0.4. The range in values that we find for M$_{dyn}$/M$_{bar}$ is comparable to the range for disk galaxies (Zavala et al. 2003). Even excluding the more face-on systems for which the dynamical mass calculation is problematic, there are 5 galaxies with dynamical masses that are smaller than their baryonic mass estimate. For 2 of the RS cases, the measurement of inclination is for the bright central region, but there is a more diffuse light distribution that may indicate that these systems are more face on. For the other 3 systems it is not clear why the dynamical mass is much smaller than the baryonic mass, except that they seem to have low rotation velocities for such edge-on systems, which might indicate that the line width has not been properly measured (e.g., missing one horn of a double horned HI profile). \begin{figure} \plotone{Rosenberg.fig6.eps} \caption{The baryonic mass for the ADBS galaxies (filled circles) and the AS galaxies (open circles) plotted against the dynamical mass, rotational velocity, and J-band size. The triangles in the dynamical mass and rotational velocity plots indicate the galaxies for which b/a$\ge$0.4 and are, therefore, less reliable.
The solid lines in the plots show the average of forward and backward least squares fits to the data excluding the galaxies with b/a$\ge$0.4.} \label{fig:mrcomp} \end{figure} Figure \ref{fig:mrcomp} shows the relationship between our estimate of the baryonic mass and a variety of other measurements that relate to the overall mass of a galaxy: the dynamical mass (as described in \S 3), the rotation speed, and the radius. Note that we have marked the face-on (b/a$\ge$0.4) galaxies with triangles and have not included them in the fits. \begin{figure} \plotone{Rosenberg.fig7.eps} \caption{The scatter of the baryonic masses around the least squares fits to each of the quantities shown in Figure 6. The rms dispersion of the fit is noted at the top of each panel.} \label{fig:rathist} \end{figure} It is apparent by eye that the scatter is significantly smaller for galaxy radius than for other more traditional estimators of mass, although when we remove the more face-on galaxies from our calculation, the difference between the dynamical mass and J-band size is greatly reduced. We averaged forward and backward linear least squares fits to these data and display the distribution around the fit and the calculated sigma in Figure \ref{fig:rathist}. Thus, the size of a galaxy makes a good predictor of the mass present. While galaxy models do indicate that size might be the third parameter in a spiral galaxy fundamental plane relationship (Shen, Mo, \& Shu 2002), the physics behind this correlation is unclear. \section{The Surface Density of Matter in Galaxies} Despite the diverse range in galaxy properties discussed in the previous sections, there is a tight correlation between HI mass and HI size (Rosenberg \& Schneider 2003) as well as a good correlation between J-band luminosity and J-band size. 
We examine the mass surface densities of gas, stars, and baryons since the Schmidt Law associates the surface density of gas in a galaxy with the star formation rate (Kennicutt 1998) and seems to imply a regulation mechanism between the gas and the star formation that might help explain the relationship between mass and size for these two populations. \begin{figure} \plotone{Rosenberg.fig8.eps} \caption{The J-band luminosity for the RS galaxies (filled circles) and the SS galaxies (open circles) plotted against the dynamical mass, rotational velocity, and J-band size. The triangles in the dynamical mass and rotational velocity plots indicate the galaxies with b/a$\ge$0.4. The solid lines show the forward and backward least squares fits to the data excluding galaxies with b/a$\ge$0.4.} \label{fig:lrcomp} \end{figure} Figure \ref{fig:lrcomp} shows the relationship between luminosity and dynamical mass, rotational velocity, and J-band size for these galaxies. Forward and backward linear least squares fits to these data are plotted. The standard deviations around these fits are 0.49, 0.69, and 0.37, respectively, providing an indication of the significance of the R$_J$ versus L$_J$ correlation. For the dynamical mass and rotational velocity correlations we have plotted the galaxies with axis ratios greater than 0.4 as triangles (galaxies with axis ratios of 0.4 or less are plotted as circles), and we have only included the galaxies with axis ratios of 0.4 or less in these two fits. The relationship between J-band luminosity and J-band size is tighter than the Tully-Fisher relationship (Tully \& Fisher 1977) for this sample, which includes irregulars, interacting systems, and dwarfs as well as spirals.
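The averaged forward and backward least-squares fitting used for these relations can be sketched with numpy. Whether the authors averaged the slopes and intercepts exactly this way is our assumption; the sketch averages the forward fit of $y$ on $x$ with the inverted backward fit of $x$ on $y$:

```python
import numpy as np

def averaged_forward_backward_fit(x, y):
    """Return (slope, intercept) averaging the forward least-squares fit
    of y on x with the inverted backward fit of x on y."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    m_f, b_f = np.polyfit(x, y, 1)      # forward: y = m_f * x + b_f
    m_inv, b_inv = np.polyfit(y, x, 1)  # backward: x = m_inv * y + b_inv
    m_b = 1.0 / m_inv                   # invert to y = m_b * x + b_b
    b_b = -b_inv / m_inv
    return 0.5 * (m_f + m_b), 0.5 * (b_f + b_b)
```

For scattered data the forward fit underestimates the slope and the backward fit overestimates it, so the average is less sensitive to which variable is treated as independent.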
Unfortunately, since luminosity and size scale the same way with distance, the tight correlation does not offer any possibilities for distance estimation. The Tully-Fisher relation (Tully \& Fisher 1977) has provided an important test of galaxy formation simulations and has been very hard to reproduce in detail; most of the galaxies produced in simulations lose too much of their angular momentum without large amounts of ad hoc energy injection into the system. Governato et al. (2004) have managed to produce one galaxy with the correct dynamical properties by going to higher resolution in their simulations, but that galaxy still has an excess of massive satellites and a paucity of cold gas. Even as this work shows a success in producing the dynamical properties of a galaxy, it highlights how much is left to be done. The Tully-Fisher relation seems to indicate that there is a physical connection between the dark matter component and the baryonic component in spiral galaxies. For these HI-selected galaxies, many of which are dwarfs or irregulars rather than spirals, we find less of a correlation between V$_{rot}$ or M$_{dyn}$ and M$_{bar}$ than is usually found for spiral galaxies (e.g., Giovanelli et al. 1997), even after culling out the systems with axis ratios greater than or equal to 0.4. However, even for this disparate group of galaxies, the average surface density of gas (Rosenberg \& Schneider 2003) has a very small range relative to the variation in the stellar surface density. The surface density of the gas is determined as the HI mass divided by the HI area defined by the size at 2$\times 10^{20}$ cm$^{-2}$. \begin{figure} \plotone{Rosenberg.fig9.eps} \caption{The distribution of mass surface densities of gas, stars, and baryons. The gas surface densities, indicated by the dark-shaded histogram, come from the RS sample galaxies with measured HI sizes.
The open histogram shows the stellar mass surface densities of the RS and SS galaxies using the J-band luminosity to estimate stellar mass, and the J-band sizes. The total baryonic mass surface density (cross-hatched histogram) is estimated using the sum of gas and stars and the optical size of each galaxy.} \label{fig:sd} \end{figure} Figure \ref{fig:sd} shows the HI surface densities for the RS galaxies with measured HI sizes. Rosenberg \& Schneider (2003) showed that the average surface density of the gas in these galaxies was nearly constant, and this result is evident in the small range in mean HI surface densities: the values range from 3 to 24 M$_{\odot}$ pc$^{-2}$ with only 4 of the 50 galaxies having values above 10. We do not discuss the surface densities of the SS sample because HI sizes are not available for these galaxies. In contrast, the stellar surface densities of the RS and SS samples cover a broad distribution ranging from 0.8 to 660 M$_{\odot}$ pc$^{-2}$. The notable feature of this plot is that the dispersion in the HI surface density is only 0.46 while the dispersion around the stellar mass surface density is 4.5. The baryonic mass surface density is the combination of the HI and stellar masses as given in Equation 2 and has a standard deviation around the mean surface density (averaged over the HI size) of 5.8. These mass surface densities may provide an additional constraint for the simulations. The larger dispersion in the stellar mass surface density of the RS and SS samples is, at least in part, due to the large uncertainties in the J-band sizes. Also, since the diameters were measured at a high surface brightness, the mean surface densities are higher than they would have been had we measured out to as low a surface density as the gas. The median value for the distribution is 19.3 M$_{\odot}$ pc$^{-2}$ relative to 5.9 M$_{\odot}$ pc$^{-2}$ for the HI mass density.
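The mean surface densities quoted above are simply mass over projected area; a minimal sketch, assuming a circular area and the unit conventions labeled below:

```python
import math

def mean_surface_density(mass_msun, radius_kpc):
    """Mean mass surface density in M_sun pc^-2 for a mass spread over a
    circular area of the given radius (1 kpc = 1000 pc)."""
    radius_pc = radius_kpc * 1.0e3
    return mass_msun / (math.pi * radius_pc ** 2)
```

For example, $10^9\,M_\odot$ of HI inside a 5 kpc radius corresponds to a mean surface density of about 13 M$_\odot$ pc$^{-2}$, near the top of the observed HI range.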
We note that the median value of the mass surface density is generally higher for the stars than for the gas. This corresponds well with the observation that many gas-rich galaxies have gas distributions extending well beyond the optical (or near infrared) light distributions. However, we do note that there are still some systems in the sample that have very low stellar mass surface densities, providing an indication that we have detected some very low surface brightness systems. \section {The Morphology of HI-Selected Galaxies} Figures \ref{fig:adbs1} and \ref{fig:as1} in the Appendix show the 2MASS J-band images for the RS and SS galaxies with measured values of baryonic mass, respectively. The galaxies have been placed in order of their baryonic mass from the highest baryonic mass systems to the lowest (within each survey). Figures \ref{fig:adbs2} and \ref{fig:as2} show the galaxies for which we do not measure baryonic mass. The images of these galaxies illustrate the diverse population as discussed in \S 3. The sample covers the range from nearby bright spiral galaxies like RS 189, NGC 4565, to bulge-less low surface brightness smudges like RS 184 (Figure \ref{fig:adbs1} in the middle of the second to last line on the second page). There are also early type galaxies in the sample like SS 19 (Figure \ref{fig:as1} in the middle of the last line), which is NGC 7712, an elliptical galaxy (Scodeggio et al. 1995), and close interacting pairs like RS 50 (second to last image in Figure \ref{fig:adbs2}). The SS98 optical images of the SS sample indicate that a substantial fraction of the galaxies are low surface brightness. As discussed in \S 3, these 2MASS data also indicate that there is a substantial population of low surface brightness galaxies among the RS and SS samples even though the surface brightness cannot be directly compared with the usual low surface brightness definition.
Comparing the 2MASS images with the SS98 optical images, we find that nearly all of the low surface brightness galaxies are not detected or are barely detected in the 2MASS data. This is largely because 2MASS is a shallow survey, but there is no evidence for bulges that are bright in the infrared but not in the optical, as was found for low surface brightness galaxies by Galaz et al. (2002). Visual inspection of the RS galaxy images seems to indicate that most of the highest baryonic mass galaxies are spirals with prominent bulge components, while the lower baryonic mass galaxies show more variation in galaxy type and bulge size, down to galaxies like RS 184 where the brightness distribution appears fairly flat across the J-band detected stellar disk. This same trend is not apparent for the SS galaxies, but the sample size is significantly smaller, which might account for the difference. \section{Summary} We have examined the stellar properties of two HI-selected galaxy samples and found that they show a large range of stellar properties. We find that the galaxies cover a wide range in mass-to-luminosity ratio and that the ratio is uncorrelated with the HI mass of the system. The range suggests that star formation does not proceed uniformly in all galaxies: some have funneled a large fraction of their gas mass into stars by the present day, while others are still largely dominated by their gas content. Because of these differences, one cannot infer the gas properties of a galaxy from its stellar properties and vice versa. The different proportions of gas and stars in these galaxies may be providing us a glimpse of galaxies in different stages of evolution. Despite the wide range of gas-to-star ratios for galaxies in these samples, there are some surprising correlations. There is a small range in the average HI surface density in these galaxies.
While the average stellar surface density distribution is not as tight, it is also a fairly narrow distribution given all of the inaccuracies in its measurement. The baryonic matter appears to average to about one quarter of the dynamical mass (within the optical dimensions of the galaxy) across a wide range of galaxy types. Most notably, the size of the stellar disk appears to be a very good predictor of the total baryonic content of galaxies. The physical causes of this correlation are not immediately apparent, but the correlation spans over three orders of magnitude in galaxy mass. \acknowledgements We would like to thank 2MASS for the effort that has gone into this catalog and for providing financial support for this research. We would particularly like to thank Roc Cutri for all of the help in obtaining the full resolution images that were needed for this paper to be possible. We would also like to thank the Arecibo and VLA staffs for their assistance with the HI observations. Thanks also to Eric Bell for the information on J-band M/L ratios. We also appreciate John Spitzak's work on the program to display and print the galaxy images. Thanks also go to the anonymous referee for helpful suggestions. JLR acknowledges support from the National Science Foundation under grant AST-0302049. The Digitized Sky Surveys were produced at the Space Telescope Science Institute under U.S. Government grant NAG W-2166. The images of these surveys are based on photographic data obtained using the Oschin Schmidt Telescope on Palomar Mountain and the UK Schmidt Telescope. The plates were processed into the present compressed digital form with the permission of these institutions.
package benchmarks

import (
	"github.com/jasonsoft/log"
)

func fakeJasonLogFields() log.Context {
	return log.
		Int("int", _tenInts[0]).
		Ints("ints", _tenInts).
		Str("string", _tenStrings[0]).
		Strs("strings", _tenStrings).
		Time("time", _tenTimes[0]).
		Times("times", _tenTimes).
		Interface("user1", _oneUser).
		Interface("user2", _oneUser).
		Interface("users", _tenUsers).
		Err(errExample)
}
\section{introduction}\label{S:intro} It is well known (see, for instance, \cite{jZ81} - \cite{aS04}) that the Jeans' instability forms the basis of our understanding of gravitational condensation. In particular, the Jeans' mass criterion is invoked in astrophysical theories of the formation of stars, gaseous clouds, etc. Usually gravitational instability is analysed in terms of the Jeans' wavelength \cite{mK99} \begin{equation} \label{eq:jwl} \lambda_J = \sqrt{\frac{\pi c_s^2}{G \rho_0}} \,, \end{equation} or, equivalently, in terms of the Jeans' mass $M_J \sim \rho_0 \lambda_J^3$. In this formula $G$ is the gravitational constant, $\rho_0$ is the unperturbed mass density and $c_s$ is the adiabatic sound speed. As is now widely known, perturbations in a homogeneous fluid with mass greater than a critical value $M_J$ may grow, producing gravitationally bound structures. In the process of their evolution these structures can achieve states of hydrodynamic equilibrium, like stellar polytropes or gas clouds, in which the pressure gradient balances the gravitational force. In this paper we investigate the hydrodynamic equilibrium and stability of a finite self-gravitating fluid mass with inhomogeneous distributions of mass density, pressure and temperature along the radius. In the linear approximation we obtain a Schr\"odinger-like \cite{lL74} equation whose eigenvalues and eigenfunctions give the growth rates and profiles of the disturbances. \section{basic formalism}\label{S:base} Consider a spherically symmetric fluid body with radius $R$ and mass $M$. We assume that the system is non-rotating and non-expanding. It can be a star, a gaseous cloud, etc.
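Equation (\ref{eq:jwl}) is straightforward to evaluate numerically. As an illustration in SI units (the parameter values in the usage example are ours, not from the text):

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def jeans_length(c_s, rho0):
    """Jeans wavelength, Eq. (1): lambda_J = sqrt(pi c_s^2 / (G rho0)),
    with the sound speed c_s in m/s and density rho0 in kg/m^3."""
    return math.sqrt(math.pi * c_s ** 2 / (G * rho0))

def jeans_mass(c_s, rho0):
    """Order-of-magnitude Jeans mass, M_J ~ rho0 * lambda_J^3, in kg."""
    return rho0 * jeans_length(c_s, rho0) ** 3
```

For a cold cloud with $c_s = 200$ m s$^{-1}$ and $\rho_0 = 10^{-18}$ kg m$^{-3}$, this gives $\lambda_J \sim 4\times10^{16}$ m and $M_J$ of a few tens of solar masses.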
The evolution of a self-gravitating fluid is described by the conservation equations for mass, momentum and specific entropy, coupled with the Poisson equation \begin{equation} \label{eq:cont} \frac{\partial \rho}{\partial t} + \mathbf{\nabla} \rho \mathbf{v} = 0 \end{equation} \begin{equation} \label{eq:eul} \rho \frac{\partial \mathbf{v}}{\partial t} + \rho (\mathbf{v}\mathbf{\nabla}) \mathbf{v} = - \mathbf{\nabla} p - \rho \mathbf{\nabla} \varphi \end{equation} \begin{equation} \label{eq:ent} \frac{\partial s}{\partial t} + (\mathbf{v}\mathbf{\nabla}) s = 0 \end{equation} \begin{equation} \label{eq:newt} \nabla^2 \varphi = 4\pi G \rho \,, \end{equation} where $\mathbf{v}$ is the velocity, $p$ is the pressure, $\rho$ is the mass density, $s$ is the specific entropy and $\varphi$ is the gravitational potential. In the linearization procedure the local state variables deviate from their equilibrium values through linear fluctuations, namely \begin{equation} \label{eq:lin} \begin{split} p =& p_0(r) + p_1(t,\mathbf{r})\,, \quad \rho = \rho_0(r)+\rho_1(t,\mathbf{r})\,, \\ s =& s_0(r) + s_1(t,\mathbf{r})\,,\quad \varphi = \varphi_0(r)+\varphi_1(t,\mathbf{r})\,. \end{split} \end{equation} The velocity $\mathbf{v}$ is itself infinitesimal.
Substitution of (\ref{eq:lin}) in (\ref{eq:cont}) - (\ref{eq:newt}) yields the equations for the equilibrium state (\cite{jZ81}) \begin{equation} \label{eq:eul0} -\frac{1}{\rho_0(r)}\mathbf{\nabla} p_0(r) - \mathbf{\nabla} \varphi_0(r)=0 \end{equation} \begin{equation} \label{eq:newt0} \nabla^2 \varphi_0(r) = 4\pi G \rho_0(r) \end{equation} and for the perturbed parameters \begin{equation} \label{eq:cont1} \frac{\partial \rho_1}{\partial t} + \mathbf{\nabla} \rho_0\mathbf{v} =0 \end{equation} \begin{equation} \label{eq:eul1} \rho_0\frac{\partial \mathbf{v}}{\partial t}= -\mathbf{\nabla} p_1 - \rho_1\mathbf{\nabla} \varphi_0 - \rho_0\mathbf{\nabla} \varphi_1 \end{equation} \begin{equation} \label{eq:ent1} \frac{\partial s_1}{\partial t} + \mathbf{v} \mathbf{\nabla} s_0 = 0 \end{equation} \begin{equation} \label{eq:newt1} \nabla^2 \varphi_1 = 4\pi G \rho_1 \,. \end{equation} These equations must be coupled with the proper equation of state. To simplify the analysis and exclude buoyancy forces (\cite{lL88},\cite{lR02}) we take it to be adiabatic \begin{equation} \label{eq:pres} p \sim \rho^{\gamma} \,, \end{equation} where $\gamma$ is the adiabatic exponent. Thus we get no entropy disturbances \begin{equation} \label{eq:ent2} s(t,\mathbf{r}) = s_0(r) = \text{const} \end{equation} and the pressure and mass density disturbances are bound together by the adiabatic equation \begin{equation} \label{eq:pres1} p_1 = c_s^2 \rho_1 \,. \end{equation} Equations (\ref{eq:eul0}), (\ref{eq:newt0}) and (\ref{eq:pres}) for the equilibrium state are solved by the well-known Emden functions (\cite{jZ81}) of the polytropic model. For further reference we summarize here the basic results of the polytropic theory of an ideal gas with polytrope exponent $n = 1/(\gamma - 1)$.
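The constants quoted below, $\xi_{3/2} = 3.65$ and $\xi_{5/2} = 5.36$, are the first zeros of the Lane-Emden function of index $n$, and can be checked by integrating the standard Lane-Emden equation $\theta'' + (2/\xi)\theta' + \theta^n = 0$ with $\theta(0)=1$, $\theta'(0)=0$ out to the first zero of $\theta$. A plain RK4 sketch, starting from the series expansion near the center:

```python
def lane_emden_first_zero(n, h=1e-4):
    """Integrate the Lane-Emden equation of index n with a fixed-step RK4
    scheme and return the first zero xi_1 of theta."""
    # Series start near the center: theta ~ 1 - xi^2/6, theta' ~ -xi/3
    xi = 1e-3
    theta = 1.0 - xi ** 2 / 6.0
    dtheta = -xi / 3.0

    def rhs(xi, theta, dtheta):
        # max(theta, 0) guards against fractional powers of tiny negatives
        return dtheta, -2.0 / xi * dtheta - max(theta, 0.0) ** n

    while theta > 0.0:
        k1 = rhs(xi, theta, dtheta)
        k2 = rhs(xi + h / 2, theta + h / 2 * k1[0], dtheta + h / 2 * k1[1])
        k3 = rhs(xi + h / 2, theta + h / 2 * k2[0], dtheta + h / 2 * k2[1])
        k4 = rhs(xi + h, theta + h * k3[0], dtheta + h * k3[1])
        theta += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
        dtheta += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
        xi += h
    return xi
```

Running this for $n = 3/2$ and $n = 5/2$ reproduces the values 3.65 and 5.36 quoted in the text to the precision of the step size.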
In this case the radial dependence of the equilibrium parameters is \begin{equation} \label{eq:poly1} \begin{split} p_0(r) & = p(0) \Theta_n^{n+1}(\xi) \\ \rho_0(r) & = \rho(0) \Theta_n^{n}(\xi) \\ c_s^2(r) & = c_s^2(0) \Theta_n(\xi) \,, \end{split} \end{equation} where $\xi = r/R$ and $\Theta_n$ are the non-dimensional radius and temperature respectively. To satisfy the equilibrium equations (\ref{eq:eul0}), (\ref{eq:newt0}) and the boundary conditions $\Theta(0) = 1$, $\Theta(1) = 0$, the parameter \begin{equation} \label{eq:xi1} \xi_n^2 = \frac{4 \pi G \rho(0) R^2}{n c_s^2(0)} \end{equation} must take a unique value for each $n$; for example, $\xi_{3/2} = 3.65$ for $n = 3/2$ and $\xi_{5/2} = 5.36$ for $n = 5/2$ (\cite{jZ81}). Equations (\ref{eq:cont1}), (\ref{eq:eul1}), (\ref{eq:newt1}) and (\ref{eq:pres1}) with proper boundary conditions form the basis of our treatment of stability. The boundary conditions are discussed later; first we transform the perturbed equations into a more convenient form. Substituting (\ref{eq:newt1}) into (\ref{eq:cont1}) gives \begin{equation} \label{eq:cont2} \mathbf{\nabla}\left( \frac{1}{4\pi G} \mathbf{\nabla} \frac{\partial \varphi_1}{\partial t} + \rho_0\mathbf{v} \right) =0 \,. \end{equation} Taking into account that $\text{div rot}\mathbf{\Psi} \equiv 0$ we get from (\ref{eq:cont2}) \begin{equation} \label{eq:vel1} \rho_0 \mathbf{v} = - \frac{1}{4\pi G} \mathbf{\nabla} \frac{\partial \varphi_1}{\partial t} + \text{rot} \mathbf{\Psi} \,, \end{equation} where $\mathbf{\Psi}$ is the vector potential of the flow $\rho_0 \mathbf{v}$. As we can see from (\ref{eq:vel1}), the vector potential $\mathbf{\Psi}$ represents the ``axial part'' of the disturbances and is not directly coupled to the gravitational potential, so we set it to zero.
Next, inserting (\ref{eq:vel1}) with $\mathbf{\Psi} = 0$ and (\ref{eq:pres1}) into (\ref{eq:eul1}), we obtain \begin{equation} \label{eq:eul2} - \frac{1}{4\pi G} \mathbf{\nabla} \frac{\partial^{2} \varphi_1}{\partial t^2} + \rho_0\mathbf{\nabla} \varphi_1 = -\mathbf{\nabla} c_s^2 \rho_1 - \rho_1\mathbf{\nabla} \varphi_0 \,. \end{equation} A harmonic time dependence $\sim \exp(-i\omega t)$ of the perturbations, with constant complex frequency $\omega$, can now be assumed, so that equation (\ref{eq:eul2}) after simple transformations becomes \begin{equation} \label{eq:eul3} \mathbf{\nabla} \varphi_1 = -4\pi G \frac{\mathbf{\nabla} c_s^2 \rho_1 + \rho_1\mathbf{\nabla} \varphi_0} {\omega^2 + 4\pi G \rho_0} \,. \end{equation} Taking the divergence of equation (\ref{eq:eul3}) and using (\ref{eq:newt1}), we finally obtain an equation for the mass density disturbance $\rho_1$ alone \begin{equation} \label{eq:main1} \mathbf{\nabla} \left( \frac{\mathbf{\nabla} c_s^2 \rho_1 + \rho_1\mathbf{\nabla} \varphi_0}{\omega^2 + 4\pi G \rho_0} \right) + \rho_1 = 0 \,. \end{equation} Before discussing equation (\ref{eq:main1}) we make one last simplification by introducing the auxiliary function \begin{equation} \label{eq:chi} \chi(r)= \int^r_0 \frac{dr}{c_s^2 (r)}\frac{d}{dr} \left( \varphi_0(r) + c_s^2 (r) \right) \end{equation} and replacing $\rho_1$ by the new function $u$ \begin{equation} \label{eq:rho1} \rho_1= \exp(-\chi)u \,. \end{equation} This yields the final equation for the unknown function $u$ \begin{equation} \label{eq:main2} \exp(\chi)\mathbf{\nabla}\left( \frac{\exp(-\chi) c_s^2}{\omega^2 + 4\pi G \rho_0} \mathbf{\nabla} u \right) + u = 0 \,. \end{equation} Equation (\ref{eq:main2}) is a modified Sturm--Liouville eigenvalue problem and, together with suitable boundary conditions, provides the eigenvalues and eigenfunctions of the perturbations. The boundary conditions are summarized as follows.
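As a cross-check of the reduction from (\ref{eq:main1}) to (\ref{eq:main2}), the substitution (\ref{eq:chi})--(\ref{eq:rho1}) can be verified symbolically. The sketch below (Python with SymPy) uses arbitrary illustrative profiles $c_s^2 = 1 + r^2$, $\varphi_0 = r^2$, $\rho_0 = 2 + r$ and a trial function $u = \sin r$ (these choices are assumptions made only for the check, not taken from the model), and confirms that the left-hand side of (\ref{eq:main1}) equals $\exp(-\chi)$ times the left-hand side of (\ref{eq:main2}):

```python
import sympy as sp

r, w2, G = sp.symbols('r omega2 G', positive=True)
u = sp.sin(r)           # arbitrary trial perturbation (illustrative only)
cs2 = 1 + r**2          # arbitrary smooth equilibrium profiles (illustrative only)
phi0 = r**2
rho0 = 2 + r
D = w2 + 4*sp.pi*G*rho0

# chi(r) as defined in eq. (chi); elementary for these profiles
chi = sp.integrate((sp.diff(phi0, r) + sp.diff(cs2, r))/cs2, r)
rho1 = sp.exp(-chi)*u   # the substitution rho_1 = exp(-chi) u

# radial left-hand side of eq. (main1)
lhs_main1 = sp.diff(r**2*(sp.diff(cs2*rho1, r) + rho1*sp.diff(phi0, r))/D, r)/r**2 + rho1
# radial left-hand side of eq. (main2)
lhs_main2 = sp.exp(chi)*sp.diff(r**2*sp.exp(-chi)*cs2/D*sp.diff(u, r), r)/r**2 + u

# (main1) must equal exp(-chi) * (main2) identically
residual = sp.simplify(lhs_main1 - sp.exp(-chi)*lhs_main2)
assert residual == 0
```

Since the identity rests only on $\chi' c_s^2 = \varphi_0' + (c_s^2)'$, it holds for any profiles satisfying the definition of $\chi$; the particular choices above are immaterial.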
On the freely moving surface of the body the pressure must vanish \begin{equation} \label{eq:marg1} p_1(t,r=R) = 0\,. \end{equation} The boundary conditions are stated at the unperturbed surface, since small deviations of the surface position lead to corrections of second order in the perturbations. If the local sound velocity at the body surface is nonzero, Eqs. (\ref{eq:pres1}) and (\ref{eq:marg1}) imply that the mass density perturbation vanishes there \begin{equation} \label{eq:marg2} \rho_1(t,r=R) = 0\,, \quad \text{if } \quad c_s(r=R) \neq 0 \,. \end{equation} But if, as in polytropic models, $c_s(r=R) = 0$, then the boundary condition (\ref{eq:marg1}) is satisfied with arbitrary $\rho_1(r=R)$. To find the boundary condition for the mass density in this case we integrate (\ref{eq:cont1}) over the unperturbed volume. Using the Gauss theorem we get \begin{equation} \label{eq:mas1} \frac{d}{dt}\int \rho_1 \,dV + \int \rho_0 \mathbf{v}\, d \mathbf{S} = 0\,, \end{equation} where $d \mathbf{S}$ is the body surface element. Since the mass density $\rho_0$ vanishes at $r = R$, the second term in (\ref{eq:mas1}) is zero and the mass conservation law (\ref{eq:mas1}) reduces to \begin{equation} \label{eq:mas2} \int \rho_1 \,dV = 0\,, \quad \text{if } \quad c_s(r=R) = 0 \,. \end{equation} A simplification can now be introduced to handle the non-trivial angular dependence of the perturbation in (\ref{eq:main2}). It can be rewritten after decomposing the perturbation into spherical harmonics $Y_{lm}$ \begin{equation} \label{eq:harm1} u(\mathbf{r}) = \sum_{l,m} U_{lm}(r)Y_{lm}(\theta,\phi) \,. \end{equation} Perturbations with quantum number $l > 0$ usually have small growth rates, so we take into account only the spherically symmetric perturbations with $l = 0$. In this case the main equation in spherical coordinates is \begin{equation} \label{eq:main3} Lu=\frac{\exp(\chi)}{r^2}\frac{d}{dr}\left( r^2 \frac{\exp(-\chi) c_s^2}{\omega^2 + 4\pi G \rho_0} \frac{du}{dr} \right) + u = 0 \,.
\end{equation} The operator $L$ has some important features. First of all, we prove that it can have only real eigenvalues $\omega^2$. Multiplying (\ref{eq:main3}) by the complex conjugate $u^*$ and by $r^2 \exp(-\chi)$ and integrating over the radius, we get \begin{equation} \label{eq:int1} -\int_0^R \frac{\exp(-\chi) c_s^2}{\omega^2 + 4\pi G \rho_0} |\frac{du}{dr} |^2 \,r^2 dr + \int_0^R \exp(-\chi)|u|^2\,r^2 dr = 0. \end{equation} The first term in (\ref{eq:int1}) is obtained by integrating by parts and using the fact that $c_s^2 u$ vanishes on the body surface. Subtracting from (\ref{eq:int1}) its complex conjugate, after a simple transformation we get \begin{equation} \label{eq:int2} (\omega^2-\omega^{2*}) \int_0^R \frac{\exp(-\chi) c_s^2}{\lvert\omega^2 + 4\pi G \rho_0 \rvert^2} |\frac{du}{dr}|^2\,r^2 dr = 0 . \end{equation} The integral in equation (\ref{eq:int2}) is obviously positive, so the imaginary part of $\omega^2$ must vanish: $\text{Im}~ \omega^2 = 0$. Hence there are two types of oscillation modes: if $\omega^2 > 0$ the eigenvalues $\omega$ are real and correspond to sound-like oscillations, while if $\omega^2 < 0$ the eigenvalues are purely imaginary and the branch with positive imaginary part corresponds to monotonic growth of the perturbation. Next, again multiplying equation (\ref{eq:main3}) by $r^2 \exp(-\chi)$ and integrating over $r$, we get \begin{equation} \label{eq:mas3} \left( r^2 \frac{\exp(-\chi) c_s^2}{\omega^2 + 4\pi G \rho_0} \frac{du}{dr} \right) \bigg|_0^R + \int_0^R \exp(-\chi) u r^2 \, dr = 0 \,. \end{equation} If $c_s(R) = 0$, the first term in (\ref{eq:mas3}) vanishes and equation (\ref{eq:main3}) automatically conserves the total mass (cf. \ref{eq:mas2}, \ref{eq:rho1}) \begin{equation} \label{eq:mas4} \int_0^R \exp(-\chi) u r^2 \, dr = 0 \,. \end{equation} To find the conditions for the existence of instability, suppose that $\omega^2 < 0$ and that the mass density $\rho_0(r)$ and sound velocity $c_s(r)$ decrease monotonically from their maxima at $r = 0$ to zero at $r = R$.
Introduce also the wave-number function \begin{equation} \label{eq:k1} k(r) = \sqrt{\omega^2 + 4\pi G \rho_0(r)}/c_s(r) = \sqrt{4\pi G \rho_0(r) - \lvert\omega\rvert^2 }/c_s(r)\,. \end{equation} In this notation equation (\ref{eq:main3}) can be rewritten as \begin{equation} \label{eq:main4} \frac{\exp(\chi)}{r^2}\frac{d}{dr}\left( r^2 \frac{\exp(-\chi)}{k^2} \frac{du}{dr} \right) + u = 0 \,. \end{equation} The wave number $k(r)$ vanishes at the radius $r_0$ where \begin{equation} \label{eq:k2} 4\pi G \rho_0(r_0) - \lvert\omega\rvert^2 = 0 \,. \end{equation} At this point equation (\ref{eq:main4}) has a singularity, and the eigenfunction $u$ must have zero derivative there \begin{equation} \label{eq:u2} \frac{du}{dr}(r=r_0) = 0\,. \end{equation} We now construct the eigenfunctions of (\ref{eq:main4}) by a method analogous to the quasiclassical approach of quantum mechanics (\cite{lL74}). A matching (``sewing'') condition at the point $r = r_0$ then gives the dispersion relation for the unstable modes. The quasiclassical eigenfunction in the region $r < r_0$ that is finite at $r = 0$ is \begin{equation} \label{eq:ul} u = \frac{\exp(\chi / 2)k^{1/2}}{r} \sin \left( \int_0^r k(r)\,dr \right) \quad \text{ $r \ll r_0$, $k^2>0$.} \end{equation} As we can see from (\ref{eq:ul}), in this region the perturbation oscillates with radius, while in the region $r > r_0$ it decreases exponentially \begin{equation} \label{eq:ur} u = \frac{C \exp(\chi / 2)|k|^{1/2}}{2r} \exp \left( - \int_{r_0}^r |k(r)|\,dr \right) \quad \text{ $r \gg r_0$, $k^2<0$}, \end{equation} where $C$ is a constant. The matching condition is obtained by analytic continuation of expression (\ref{eq:ur}) into the region $r < r_0$ through the upper and lower half-planes of the complex variable $r-r_0$ (\cite{lL74}). This procedure yields \begin{equation} \label{eq:sew2} \frac{C \exp(\frac{\chi}{2})|k|^{1/2}}{2r}\exp \left(-\int_{r_0}^{r} |k(r)|\,dr \right) \longrightarrow \frac{C \exp(\frac{\chi}{2})k^{1/2}}{r} \sin \left( \int_{r}^{r_0} k(r)\,dr + \frac{3\pi}{4} \right).
\end{equation} The right-hand side of (\ref{eq:sew2}) must be equal to the right-hand side of (\ref{eq:ul}) \begin{equation} \label{eq:sew3} C \sin \left( \int_{r}^{r_0} k(r)\,dr + \frac{3\pi}{4} \right) = \sin \left( \int_{0}^{r} k(r)\,dr \right) \,. \end{equation} Writing the integral on the right-hand side of (\ref{eq:sew3}) in the form \begin{equation} \label{eq:sew4} \int_{0}^{r} k(r)\,dr = \int_{0}^{r_0} k(r)\,dr - \int_{r}^{r_0} k(r)\,dr \, \end{equation} we obtain the matching condition \begin{equation} \label{eq:disp1} \begin{split} \int_{0}^{r_0} k(r)\,dr &= \frac{\pi}{4} + n \pi \\ n &=0,1,2, \ldots \\ C &= (-1)^n \,. \end{split} \end{equation} Expressions (\ref{eq:disp1}) define a discrete set of unstable modes with increments $\omega_n$ ($\omega_n^2 < 0$) and eigenfunctions of the type (\ref{eq:ul}), (\ref{eq:ur}). The eigenfunction $u_n$ has exactly $n$ nodes in the interval $0<r<R$. But the mode with $n=0$ cannot exist, since it does not change sign and therefore cannot satisfy the mass conservation law (\ref{eq:mas4}). Finally, the dispersion relation for the unstable modes takes the form \begin{equation} \label{eq:disp2} \int_{0}^{r_0} \frac{\sqrt{4\pi G \rho_0(r) - |\omega_n|^2}}{c_s(r)}\,dr = \frac{\pi}{4} + n \pi, \quad n = 1,2,3 \ldots \,. \end{equation} A simple criterion of instability follows immediately from expression (\ref{eq:disp2}). If we neglect $|\omega_n|^2$ and extend the integration from $r_0$ to $R$, then for $n = 1$ we obviously obtain the condition for the existence of instability in the form \begin{equation} \label{eq:crit1} \alpha = \frac{4}{5\pi} \int_0^R \sqrt{\frac{4\pi G \rho_0(r)}{c_s^2(r)}} \,dr > 1 \,. \end{equation} \section{Conclusion}\label{S:frem} Now we can apply criterion (\ref{eq:crit1}) to models of massive gaseous clouds consisting mainly of molecular hydrogen. At temperatures $\lesssim 90\,\text{K}$ the rotational degrees of freedom are frozen out (\cite{lL64}) and the gas behaves as a monatomic ideal gas with adiabatic exponent $\gamma = 5/3$ and polytrope index $n = 3/2$.
At temperatures $> 90\,\text{K}$ the adiabatic exponent has the standard value $\gamma = 7/5$ and $n = 5/2$. Criterion (\ref{eq:crit1}) may be rewritten in terms of the polytropic model as follows (cf. \ref{eq:poly1}, \ref{eq:xi1}) \begin{equation} \label{eq:crit2} \alpha_n = \frac{4}{5\pi} \xi_n \sqrt{n}\int_0^1 \sqrt{\Theta_n^{n-1}(\xi)} \,d\xi \,. \end{equation} Numerical integration in expression (\ref{eq:crit2}) gives for $\gamma = 5/3$ \begin{equation} \label{eq:crit3} \alpha_{3/2} = 0.92 \,, \end{equation} and for $\gamma = 7/5$ \begin{equation} \label{eq:crit4} \alpha_{5/2} = 1.1 \,. \end{equation} This result is quite interesting. It follows from equations (\ref{eq:crit3}) and (\ref{eq:crit4}) that a cold hydrogen cloud may be stable against gravitational instability, but if the mean temperature of the cloud is high enough, it may be subject to gravitational instability. We also remark that our result for $\gamma = 5/3$ is applicable to a degenerate electron gas (white dwarfs), and our proof of its stability is in accordance with the well-known results of the theory of white dwarfs (\cite{jZ81},\cite{sW75},\cite{lL64}).
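The polytropic numbers used above are straightforward to reproduce: integrating the Lane-Emden equation $\Theta'' + (2/x)\Theta' + \Theta^n = 0$ from the centre to its first zero recovers the radii $\xi_{3/2}$, $\xi_{5/2}$ quoted after Eq. (\ref{eq:xi1}), and accumulating the integral of (\ref{eq:crit2}) along the solution (rescaled with $x = \xi_n \xi$) recovers $\alpha_n$. A minimal pure-Python sketch; the Runge-Kutta step size and the series start near the centre are implementation choices, not taken from the text:

```python
import math

def lane_emden(n, h=1e-3):
    """Integrate Theta'' + (2/x) Theta' + Theta^n = 0 outward from the centre
    (Theta(0) = 1, Theta'(0) = 0) until Theta first crosses zero.
    Returns (xi_n, I): xi_n is the first zero, and
    I = int_0^{xi_n} sqrt(Theta^(n-1)) dx is the criterion integral."""
    def rhs(x, th, dth):
        return dth, -max(th, 0.0)**n - 2.0*dth/x
    x = h                                   # series start avoids 2/x at x = 0
    th = 1.0 - x*x/6.0 + n*x**4/120.0
    dth = -x/3.0 + n*x**3/30.0
    integral = x                            # integrand ~ 1 near the centre
    while th > 0.0:
        f = th**((n - 1.0)/2.0)
        k1 = rhs(x, th, dth)
        k2 = rhs(x + h/2, th + h/2*k1[0], dth + h/2*k1[1])
        k3 = rhs(x + h/2, th + h/2*k2[0], dth + h/2*k2[1])
        k4 = rhs(x + h, th + h*k3[0], dth + h*k3[1])
        x_prev, th_prev = x, th
        th = th + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dth = dth + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        x += h
        integral += 0.5*h*(f + max(th, 0.0)**((n - 1.0)/2.0))  # trapezoid
    xi_n = x_prev + th_prev*h/(th_prev - th)   # interpolate the zero crossing
    return xi_n, integral

for n in (1.5, 2.5):
    xi_n, I = lane_emden(n)
    alpha = 4.0/(5.0*math.pi)*math.sqrt(n)*I   # eq. (crit2) after rescaling
    # the text quotes xi_{3/2} = 3.65, xi_{5/2} = 5.36 and alpha = 0.92, 1.1
    print(n, round(xi_n, 3), round(alpha, 2))
```

The qualitative conclusion, $\alpha_{3/2} < 1 < \alpha_{5/2}$, is insensitive to the step size; halving $h$ changes the printed values only in the last digit.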
Malema on reelection: 'I am not a dictator. I'm a hard worker, I come from nothing'

Newly reelected EFF leader Julius Malema has ferociously denied claims that he is a dictator in the EFF, adding that he is where he is because of hard work. Malema was responding to questions by journalists suggesting that he had surrounded himself with individuals who do not challenge his authority.

"My simple thing of being elected unopposed is very simple, I work very hard. I don't take anything for granted. You give it to me, I will work on it and perfect it. I pay attention to the smallest detail. I know everything that is happening in this conference, even if you want to hate me you will come to accept… I am not a dictator, I am a hard worker. I come from nothing."

Malema added that it was his detractors and those who couldn't defeat him in conferences that had perpetuated this narrative. "When it comes to conferences, I will hit you until you get mad. I'll hit you until you go home… slaughter a cow thinking your ancestors have forsaken you."

He argued that he could not have been a dictator in his days in the ANC Youth League, saying that he was working with older, more experienced cadres.

EFF president Julius Malema says delegates must not be excited. DOR is trying to protect them. This after delegates complained about the security team. On Saturday night at least five women were caught in a stampede and pepper sprayed. #EFFNPA2019 (@lizTandwa) pic.twitter.com/RP603q8uC9 — Team News24 (@TeamNews24) December 15, 2019

Meanwhile, on Saturday, EFF security clashed with party delegates, using pepper spray. Some female delegates who were seen shouting at security out of frustration told the media that the party's security detail, the Defenders of the Revolution, had a tendency of using force against women. Malema firmly denied this, saying that there was no such treatment of women in the EFF, and accused the media of making the claims up.
"We have been sitting here with our women, why would our women go and complain to you and not to us? There is no one who can tell us about our members. They are our members. No one who can abuse women and remain unpunished. Our delegates get the best of the best treatment ever."
\section{Introduction} \label{sect:intro} \subsection{The Solar Abundance Problem} \label{sect:abundprob} Before 2005, we believed we knew very well how to model the evolution and interior structure of the sun. Evolved solar models with the latest input physics (including diffusive helium and element settling and without tachocline mixing) reproduced the sound speed profile determined from seismic inversions to within 0.4$\%$, as well as the seismically-inferred convection zone depth and convection zone helium abundance. Solar interior modelers had little impetus to progress beyond one-dimensional spherical models of the sun, with perhaps the exception of introducing some additional mixing below the convection zone (hereafter CZ) to deplete surface lithium and reduce the small remaining sound-speed discrepancy at the CZ base. For the \citet[][hereafter GN93]{GN93} or \citet[][hereafter GS98]{GS98} abundances, the simplest `spherical sun' assumptions appeared nearly adequate for solar modeling. These include one-dimensional zoning (concentric shells in hydrostatic equilibrium), initial homogeneous composition, negligible mass loss or accretion, neglecting rotation and magnetic fields, simple surface boundary conditions, mixing-length theory of convection (e.g., \citet{Bohm_1958}), and no additional mixing or structural changes from convective overshoot, shear from differential rotation, meridional circulation, waves, or oscillations. However, new analyses of solar spectral lines revise downward the abundances of elements heavier than hydrogen and helium, particularly the abundances of carbon, nitrogen, and oxygen that contribute to the opacity just below the CZ. See Table \ref{table:abund} for a summary of some of the major abundance revisions over the last twenty years. 
\begin{table*} \caption{Mass fractions of metals in the present-day photosphere, Z, and ratio of metals to hydrogen mass fraction, Z/X, evaluated over the last twenty years.} \label{table:abund} \begin{center} \begin{tabular}{llll} \hline Year & Source & Z & Z/X \\ \hline 1989 & \citet{AG89} &$ 0.0201 $ & $0.0274 $ \\ 1993 & \citet{GN93} &$ 0.0179 $ & $0.0244 $ \\ 1998 & \citet{GS98} &$ 0.0170 $ & $0.0231 $ \\ 2005 & \citet{AGS05} &$ 0.0122 $ & $0.0165 $\\ 2009 & \citet{AGSS09} &$ 0.0134 $ & $0.0181 $\\ 2009 & \citet{Ludwig_2009} &$0.0154 $ & $0.0209 $\\ \hline \end{tabular} \tablenotetext{a}{Values for Z are inferred from Z/X assuming Y$=$0.248.} \tablenotetext{b}{Uncertainties are $<$10$\%$.} \end{center} \end{table*} Compared to the older GS98 and GN93 abundances, the \citet[][hereafter AGS05]{AGS05} abundance of C is lower by 35$\%$, N by 27.5$\%$, O by 48$\%$, and Ne by 74$\%$. The abundances of elements from Na to Ca are lower by 12 to 25$\%$, and Fe is decreased by 12$\%$. For the GS98 abundances, the ratio of the element to hydrogen mass fraction Z/X = 0.023, and the heavy element abundance Z $\sim$ 0.018, while, for the new abundances, Z/X = 0.0165, and Z$\sim$0.0122. Models evolved with the AGS05 abundance mixture give worse agreement with helioseismic constraints; the sound-speed discrepancy is 1.4$\%$ below the CZ base, and the CZ depth is shallow and CZ helium abundance is low compared to those derived from seismic inversions. Recently, \citet[][hereafter AGSS09]{AGSS09} re-evaluated the spectroscopic abundances, carefully considering the atomic input data and selection of spectral lines and using improved radiative transfer and opacities. They revised the heavy element abundance to Z/X = 0.0181 and Z = 0.0134. This slight increase over the AGS05 values yields some improvement in agreement with seismic constraints \citep[see][]{Serenelli_2009}, although the higher abundances of GN93 or GS98 still provide the best agreement. 
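The footnote to Table \ref{table:abund} states that Z is inferred from Z/X assuming Y = 0.248. Since the mass fractions satisfy X + Y + Z = 1, the conversion is Z = (Z/X)(1 - Y)/(1 + Z/X). A quick consistency check of the tabulated values (a sketch, with the table rows hard-coded):

```python
def z_from_zx(zx, y=0.248):
    """Metal mass fraction Z from the ratio Z/X, using X + Y + Z = 1:
    Z = (Z/X) * X = (Z/X) * (1 - Y) / (1 + Z/X)."""
    return zx * (1.0 - y) / (1.0 + zx)

# (year, Z/X, quoted Z) rows of the abundance table
rows = [(1989, 0.0274, 0.0201), (1993, 0.0244, 0.0179),
        (1998, 0.0231, 0.0170), (2005, 0.0165, 0.0122),
        (2009, 0.0181, 0.0134), (2009, 0.0209, 0.0154)]
for year, zx, z_quoted in rows:
    assert abs(z_from_zx(zx) - z_quoted) < 5e-4, (year, zx)
```

Every row reproduces the quoted Z to the stated precision, consistent with the table's assumption of a common helium mass fraction.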
In addition, the Cosmological Impact of the FIrst STars Team and its collaborators used the 3D model atmosphere code CO$^5$BOLD to perform an independent investigation of the solar abundances \citep[][hereafter CO$^5$BOLD]{Caffau_2008a,Caffau_2008b,Caffau_2009,Ludwig_2009}. They derived heavy element abundances of Z/X = 0.0209 and Z = 0.0154, in between the AGSS09 and GN93 values. Spectroscopic determinations measure the photospheric abundances. The continuous convective overshoot into the photosphere should leave the photosphere with the same abundances as the convection zone. In addition, convective timescales are much shorter than element diffusion and evolution timescales, making the convection zone well mixed and homogeneous. Therefore the spectroscopic abundances should be indicative of the abundances throughout the convection zone. However, we are reluctant to dismiss the recent abundance re-analyses because of the many improvements in the physics and models included, namely 3D dynamical atmosphere models, non-local thermodynamic equilibrium corrections for important elements, and updated atomic and molecular data. Line profile shapes now agree nearly perfectly with observations. Also, it is impressive that abundances derived from several different atomic and molecular lines for the same element now are consistent. For this paper, we will focus on the AGS05 abundances as the most extreme example of the lower abundance determinations. However, we will devote section \ref{sect:COBOLD} to a preliminary exploration of the CO$^5$BOLD abundances. We first review the results of helioseismic tests using the old and new abundances and ongoing attempts to resolve these discrepancies (Section \ref{sect:intro}). 
We discuss in detail the following three mitigation attempts: an early mass-loss phase in solar evolution (Section \ref{sect:massloss}), accretion of low-Z material early in the sun's lifetime (Section \ref{sect:accret}), and extending the CZ below the depth inferred by helioseismology (Section \ref{sect:overshoot}). We then examine including some of these adjustments in models with the higher metallicity of CO$^5$BOLD (Section \ref{sect:COBOLD}). The models used to explore all of these changes are described in Section \ref{sect:models}. \subsection{Helioseismology and solar models} \label{sect:mod_helio} Helio- and asteroseismology have turned out to be excellent tools to test the physics of stellar models. \citet{BA04a, BP04, Turck_2004} authored some of the first papers to examine the effects of the new lower abundances on solar models, demonstrating that the new abundances lead to greater discrepancy with seismic inferences. This is demonstrated in Table \ref{table:ZYR} which compares our calibrated evolution models using the GN93 and AGS05 mixtures (details of our models can be found in Section \ref{sect:models}). For the AGS05 model, the CZ helium abundance Y is low (0.22730) and the CZ base (R$_{\rm{CZB}}$ = 0.72944 R$_{\odot}$) is shallow compared to the seismically inferred CZ Y abundance of 0.248$\pm$0.003 and CZ base radius of 0.713$\pm$0.001 R$_{\odot}$ from \citet{BA04a}. 
\begin{comment} \begin{table} \caption{Properties of our calibrated standard solar models \citep{GWC04,GWC05}.} \label{table:standardmodels} \begin{center} \begin{tabular}{lll} \hline Model Property & GN93 Mixture & AGS05 Mixture \\ \hline Y$_{\rm{o}}$ &$ 0.26930 $ & $0.25700 $ \\ Z$_{\rm{o}}$ &$ 0.0197 $ & $0.01350 $ \\ $\alpha$ &$ 2.0379 $ & $2.0004 $ \\ Z$_{\rm{CZ}}$ &$ $ & $ $\\ Y$_{\rm{CZ}}$ &$ 0.24124 $ & $0.22730 $\\ R$_{\rm{CZB}}$ (R$_{\odot}$ ) &$ 0.71254 $ & $0.72172 $ \\ \hline \end{tabular} \end{center} \end{table} \end{comment} Figure \ref{fig:c_standard} shows the differences between inferred and calculated sound speed for calibrated evolved models using the old and new abundances. The inferred sound speed is from \citet{BPB00}. The uncertainties in sound speed inversions are much smaller than the differences between these curves, at most a few widths of the plotting line. Figure \ref{fig:O-C_standard} shows the observed minus calculated frequency differences for modes of angular degree $\ell$ = 0, 2, 10, and 20 that propagate into the solar interior below the convection zone. The calculated frequencies were computed using the \citet{Pesnell_1990} non-adiabatic stellar pulsation code. The observational data are from BiSON \citep{Chaplin_2007}, LowL \citep{ST96}, or GOLF \citep{Garcia_2001}. The observational uncertainties for the modes are less than 0.1 $\mu$Hz, much smaller than the model discrepancies. The O-C trends for the model with the \citet{Fer05} low-temperature opacities are flatter for higher frequencies that are more sensitive to the solar surface. These newer opacities include an improved treatment of grains, finer wavelength spacing, and additional molecular lines, and are higher by 12$\%$ up to a factor of three. As illustrated by both of these plots, the discrepancy with the new abundances is much larger than with the old abundances. \begin{figure}[t!] 
\resizebox{\hsize}{!}{\includegraphics{sound_standard.eps}} \caption{\footnotesize Difference between inferred and calculated sound speeds with error bars for models with the GN93 and AGS05 abundances. The sound speed inversion is from \citet{BPB00}. The seismically inferred convection zone base at R = 0.713 R$_{\odot}$ \citep{BA04a} is shown with the vertical line.} \label{fig:c_standard} \end{figure} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{ominc_standard.eps}} \caption{\footnotesize Observed minus calculated versus calculated frequency for our models with the GN93 mixture (triangles or circles), and the AGS05 mixture (diamonds) for modes of degree $\ell$ = 0, 2, 10, and 20. The models producing the triangle and diamond points use the newer \citet{Fer05} low-temperature opacities, while the model with the circle points uses 1995 Alexander \& Ferguson opacities. The calculated frequencies were computed using the \citet{Pesnell_1990} non-adiabatic stellar pulsation code, and the data are from \citet{Chaplin_2007,ST96,Garcia_2001}.} \label{fig:O-C_standard} \end{figure} Helioseismology has also been used to investigate heavy element abundances. \citet{Lin_2005} find that a reduction in carbon abundance, in the direction of the AGS05 abundances, can improve sound speed inference. However, \citet{Lin_2007} show that lower Z increases the discrepancy with adiabatic index inversions; \citet{AB06} use the ionization signature in the sound speed derivative to infer Z$_{\rm{CZ}}$ = 0.0172$\pm$0.002. These results favor the old, higher abundances. In addition, \citet{Chaplin_2007} use small frequency separations between low-degree modes that are sensitive to the core structure to constrain the core Z abundance. They find Z$_{\rm{core}}$ = 0.0187-0.0239. 
\citet{Zaatri_2007} find that the mean low-degree frequency spacings of a model using AGS05 abundances are incompatible with those determined from the GOLF measurements of \citet{Lazrek_2007} and \citet{Gelly_2002}. Similarly, \citet{Basu_2007} find that models constructed with low metallicity are incompatible with the small frequency spacing and frequency separation ratios calculated from BiSON data \citep{Chaplin_1996}. These results do not rule out accretion, enhanced diffusion, or other options that can retain high core Z. However, they do disfavor the prospect that the new, lower abundances were present initially throughout the sun. \subsection{Attempts to restore agreement through solar model modifications} \label{sect:attempts} Here we briefly review recent attempts to adjust solar models in order to mitigate the discrepancy with seismic constraints for the new abundances. For a more detailed review of previous attempts, see, for example, \citet{BA08}. In the following sections, we discuss increased opacities below the CZ ($11-30\%$); increased neon abundance ($\times$$\sim$4); increased abundances (within uncertainty limits, or using alternative determinations); enhanced diffusive settling rates ($\times$1.5 or more); accretion of lower-Z material early in the sun's lifetime; structure modification below the CZ base due to radiative damping of gravity waves; tachocline mixing (also used with old abundances); convective overshoot; and combinations of the above. The conclusion has been that it is difficult to match simultaneously the new Z/X and helioseismic constraints for CZ depth, sound speed and density profiles, and CZ helium abundance by applying these changes. \subsection{Opacity increases} \label{sect:op_inc} Heavy-element abundances primarily affect solar structure through their effect on opacity, which affects the structure of the radiative zone and the location of the CZ base. 
The structure of most of the convection zone is essentially independent of the opacity. \citet{JCD09} determined the change in opacity required to restore the sound-speed agreement of a solar model using the AGS05 abundances to the level of success originally attained with the GN93 abundances. They find that opacity would need to be increased ranging from about 30$\%$ below the CZ to a few percent in the core. Although improvements to the solar model can be made by {\it ad~hoc} opacity increases, there is little justification for such large enhancements. The presently available opacity tables from three separate projects (OPAL, OP, and LANL T-4) for conditions below the CZ differ by only a few percent \citep{Neuforge_2001,Badnell_2005}, making it difficult to justify such large opacity enhancements. Using the Los Alamos National Laboratory T-4 opacity library data \citep{Magee_1995, Hakel_2006}, \citet{GKK09} find that, to obtain a 30$\%$ opacity increase with the new abundances, the contribution from oxygen alone would need to increase by a factor of two to three. Alternatively, the iron absorption contribution would need to increase by a factor of three. Including additional elements has a negligible effect on Rosseland mean opacities for solar interior conditions. The Lawrence Livermore OPAL opacities for the AGS05 mixture included 17 of the most abundant elements. With the LANL T-4 opacity library, \citet{GKK09} find that including in the mixture all of the elements up to atomic number Z=30 increases the mixture opacity by only 0.2$\%$ for solar interior conditions. Including additional elements in the mixture from 30 $<$ Z $<$ 93, an 83-element mixture, further increases the mixture opacity by less than 0.1$\%$. 
An as-yet unidentified error in calculations of the line wings for K-shell transitions in O, C, N, and Ne at energies around 800 eV, the peak of the weighting of the Rosseland mean opacity for the temperature conditions at the CZ base, could provide some increase in the calculated Rosseland mean opacities \citep{GKK09}. In order to shed light on the issue, \citet{Turck_2009} suggest experimental investigations of opacity coefficients for the radiative zones of solar-like stars. The coming large laser facilities (LIL +PETAL, OMEGA EP, FIREX II, LMJ, NIF) have the potential to attain sufficiently high temperatures and high densities at LTE along with the precise diagnostics required for stellar opacity measurements. Experiments to investigate the opacities relevant for stellar interiors are being conducted at Sandia's Z facility by \citet{Bailey_2009}. \subsection{Neon and other element abundance increases} \label{sect:abund_inc} For a while, it was thought that an increase in the solar neon abundance provided the most plausible resolution to this problem. Neon is not measured in the photosphere due to a lack of suitable spectral lines. Instead, its abundance is determined relative to oxygen using lines formed in the solar corona, XUV and gamma ray spectroscopy of quiet and active regions, and solar wind particle collections. AGS05 adopt a Ne/O abundance ratio of 0.15, and apply this ratio to the photospheric oxygen abundance to derive the neon abundance. The neon abundance has been revised downward by $74\%$ from the GS98 value, for the most part due to the oxygen abundance reduction. Several groups explored solar models with enhanced neon. Some improvement in agreement was shown for Ne increases ranging from 0.5-0.67 dex \citep{AB05, BBS05, TCP05, Zaatri_2007, DP06}. However, \citet{Lin_2007} find that increasing Ne alone actually increases the discrepancy in the adiabatic exponent in the region 0.75-0.9 R$_{\odot}$. 
They find that the discrepancy is reduced if only C, N, and O abundances are increased. More modest Ne enhancements, combined with increases in the other element abundances of $\sim$0.05 dex, at the limit of the AGS05 uncertainties, have also been considered. The best model of \citet{BBS05} has Ne enhanced by 0.45 dex (2.8$\times$), Ar by 0.4 dex, and C, N, and O by 0.05 dex. This model produces reasonably good agreement with the inferred sound speed and density profile, and has CZ base radius 0.715 R$_{\odot}$ and acceptable CZ Y = 0.2439. Of course, any increase in abundances from the AGS05 value will mitigate the problem by increasing opacities below the CZ. For examples, see \citet{Zaatri_2007} or \citet{Turck_2004}. \subsection{Enhanced diffusion} \label{sect:diff} Several groups, e.g., \citet{BA04a}, \citet{Montalban_2004}, \citet[][hereafter GWC05]{GWC05}, and \citet{YB07} considered the effects of enhanced diffusion. At first this idea might seem promising, because the solar interior could have higher abundances that give good sound speed agreement, while the CZ elements could be depleted to the AGS05 photospheric values. In practice, the required diffusion increases are quite large (factors of 1.5 to 2 on absolute rates), and enhanced diffusion also depletes the CZ Y abundance to well below the seismic determination and leaves the CZ too shallow. GWC05 investigate the enhancement of thermal diffusion for elements and He by different amounts. A model with resistance coefficients $\times$1/4 for C, N, O, Ne, and Mg and $\times$2/3 for He shows some improvement in the sound-speed discrepancy, but the CZ depth is still a little shallow (0.718 R$_\odot$), and the CZ Y is still a little low (0.227). Moreover, there is no justification for these {\it ad~hoc} changes in thermal diffusion coefficients. The diffusion coefficients themselves should not be in error by such a large factor.
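Several of the abundance enhancements in Section \ref{sect:abund_inc} above are quoted in logarithmic (dex) units; converting them to linear factors is one line of arithmetic, factor $= 10^{\rm dex}$. A minimal Python sketch (the dex values are those quoted above):

```python
# Sketch: converting the logarithmic (dex) abundance enhancements quoted
# above to linear multiplicative factors, factor = 10**dex.
for dex in (0.05, 0.4, 0.45, 0.5, 0.67):
    print(f"+{dex} dex -> x{10**dex:.2f}")
# e.g. +0.45 dex -> x2.82, consistent with the "2.8x" quoted for Ne
```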
\subsection{Gravity waves and dynamical effects} \label{sect:g_waves} Arnett, Meakin, \& Young (2006, private communication) have been investigating, following \citet{Press_1981} and \citet{PR81}, the effects of gravity waves excited and launched inward at the CZ base. The radiative damping of these waves as they travel inward deposits energy and changes the solar structure in the same way as would an opacity enhancement. The amount of damping and distance that the waves propagate depend on the initial amplitudes and the degree of the mode, with low-frequency, high-degree waves damped more heavily, after traveling a shorter distance. The expected wave spectrum and amplitudes still need to be worked out, but could remove as much as one third of the sound-speed discrepancy. More recently, \citet{Arnett_2009} have re-examined convection at the surface and sub-surface layers of the sun, proposing a way to eliminate astronomical calibration from stellar convection theory. By choosing characteristic lengths that are determined by the flow, they eliminate the need for the free parameters traditionally used in mixing length theory. They show that some of the discrepancy between the new abundances and helioseismic inferences may result from the neglect of hydrodynamic processes in the standard solar model. \subsection{Combinations of effects} \label{sect:combos} In addition to single changes to solar models, several groups considered combinations of changes, such as diffusion, opacity, and abundance enhancements. See, for example, \citet{BA04a}, \citet{Montalban_2004}, and \citet{BSB06}. Although the above modifications to the input physics have achieved some success in restoring agreement between seismic constraints and models that use the new abundances, agreement is not fully restored and there is little physical justification for the proposed changes. 
In this paper, we discuss the motivation for and the results of three additional attempts to restore agreement: an early mass loss phase, accretion of low-Z material, and convective overshoot. \section{Solar evolution models} \label{sect:models} Solar models require input data for opacities such as OPAL \citep{IR96} or OP \citep{SB04} supplemented by low-temperature opacities \citep[e.g.,][]{Fer05}; equation of state such as OPAL \citep{RSI96}, MHD \citep{MHD}, or CEFF \citep{CEFF}; nuclear reaction rates \citep[e.g.,][]{Angulo_1999}; and diffusive element settling \citep[e.g.,][]{Burgers_1969, CGK89, TBL94}. The solar models produced by the Los Alamos group shown here are evolved from the pre-main sequence using an updated version of the one-dimensional evolution codes described in \citet{Iben_1963, Iben_1965a, Iben_1965b}. The evolution code uses the SIREFF EOS \citep[see][]{GS97}, Burgers' diffusion treatment as implemented by \citet{CGK89}, the nuclear reaction rates from \citet{Angulo_1999} with a correction to the $^{14}$N rate from \citet{Formicola_2004}, and the OPAL opacities \citep{IR96} supplemented by the \citet{Fer05} or Alexander \& Ferguson (private communication, 1995) low-temperature opacities. The \citet{Fer05} low-temperature opacities include an improved treatment of grains, finer wavelength spacing, and additional molecular lines, and are higher than the Alexander \& Ferguson (1995) low-temperature opacities by amounts ranging from 12$\%$ up to a factor of three. As discussed in Section \ref{sect:mod_helio}, the observed minus calculated frequency trends for a model with the newer low-T opacities are flatter for higher frequencies that are sensitive to the solar surface, as seen in Figure \ref{fig:O-C_standard}.
The models are calibrated to the present solar radius 6.9599$\times10^{10}$ cm \citep{Allen_1973}, luminosity 3.846$\times 10^{33}$ erg/s \citep{Willson_1986}, mass 1.989$\times 10^{33}$ g \citep{CT86}, age 4.54$\pm$0.04 Gyr \citep{Guenther_1992}, and adopted photospheric Z/X ratio. For evolution models, the initial helium abundance Y, initial element mass fraction Z, and mixing length to pressure-scale-height ratio $\alpha$ are adjusted so that the final luminosity, radius, and surface Z/X match the observational constraints to within uncertainties. See \citet{GWC05} for references and a description of the physics used in the evolution and pulsation codes and models. \begin{comment} Note for future use: luminosity 3.8418$\times 10^{33}$ erg/s \citep{Frohlich_1998, Bahcall_1995} age 4.57 Gyr \citep{Bahcall_1995} \end{comment} \section{Mass loss} \label{sect:massloss} \subsection{Motivation and method} \label{sect:massloss_meth} \citet{Willson_1987} explored the possibility that significant mass loss could occur during the early part of the main-sequence for $\sim$1-2.5 M$_{\odot}$ stars. They considered mass-loss rates ranging from $10^{-9}$ to $10^{-8}$ M$_{\odot}$/yr that would remove a substantial fraction of mass from a star before it evolves off the main sequence. Mass loss in these stars, possibly including the early sun, would be driven by pulsation, which provides the necessary mechanical energy flux, and facilitated by rapid rotation. The mass-loss rate would diminish upon the development of a surface convection zone, which channels mechanical energy away from pulsation, and of magnetic fields which provide angular momentum transfer and rotational braking.
\citet{GWB87} showed that mass-losing solar models have steeper molecular weight gradients, shorter main-sequence lifetimes, higher $^8$B neutrino fluxes, deeper surface convection, higher surface $^3$He abundances, and earlier, more pronounced dredge-up of CN cycle processed material in the post-main-sequence phase compared to the standard model. In addition, mass-losing models predict complete destruction of protosolar Li and Be, requiring a mechanism, such as production in spallation reactions or flares, for partial replenishment to the observed surface abundances. Such mass loss in other stars could potentially explain blue stragglers and the earlier-than-predicted dredge-up of carbon and nitrogen in solar-mass stars ascending the first red giant branch \citep[see][]{Guzik_1988}. However, there are also drawbacks. If the sun remains at too high a mass for too long, all surface Li in the subsequent solar model at present age is destroyed, too much surface ${}^3\rm{He}$ is produced, and discrepancies with the inferred sound speed arise \citep{Guzik_1988}. In addition, \citet{GC95} compare observed and calculated $p$-mode oscillation frequencies to test the structure of solar models including early main-sequence mass loss. They show that extreme solar mass loss has a significant effect on solar structure and can be ruled out by the $p$-mode oscillation frequencies. While more extreme early main-sequence mass loss has not been observed, a smaller amount of mass loss in the sun, of about 0.1 M$_\odot$, looks promising as a way to solve a number of problems. The advantages of an early mass-loss phase in solar evolution include solving the faint early sun problem and explaining early liquid water on Mars, early inner solar-system bombardment, and solar lithium destruction. Models with early mass loss using the older, higher element abundances were explored previously by \citet{GWB87}, \citet{SF92}, \citet{GC95}, and \citet{SB03}.
In addition, \citet{MM07} recently assessed consequences for Earth climate and solar system formation. Here we re-investigate solar mass loss in light of the new abundances. In our models, we use the mass-loss treatment implemented by \citet{Brunish_1981} in the \citet{Iben_1963, Iben_1965a, Iben_1965b} code. We evolve two models with initial masses 1.3 and 1.15 $\rm{M}_{\odot}$, having exponentially-decaying mass-loss rates with an e-folding time of 0.45 Gyr. Following \citet{GWB87}, we adopt this exponential mass-loss prescription because it is simple and decreases smoothly with time. This is a physically plausible description, as the mass-loss rate should be highest when a rotating star arrives on the main sequence within the instability strip, where pulsation and rotation can facilitate mass loss. The mass loss should then decrease as the star moves out of the instability strip, ceases to pulsate, spins down, and develops a surface convection zone. The present solar mass-loss rate is 2 $\times10^{-14}$ $\rm{M_{\odot}/yr}$ \citep[e.g.,][]{Feldman_1977}, too small to affect the sun's evolution. The initial mass-loss rates for the two models are 6.55 and 3.38 $\times10^{-10}$ M$_{\odot}/$yr, respectively. \subsection{Results} \label{sect:massloss_results} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{Luminosity_massloss.eps}} \caption{\footnotesize Luminosity versus time for standard one solar-mass models using the GN93 and AGS05 abundances and for two mass-losing models using the AGS05 abundances with initial mass 1.3 and 1.15 M$_{\odot}$. Mass-loss rates are exponentially decaying with e-folding time 0.45 Gyr.} \label{fig:lum_massloss} \end{figure} \begin{figure}[t!] 
\resizebox{\hsize}{!}{\includegraphics{sound_massloss.eps}} \caption{\footnotesize Inferred minus calculated sound speed differences for calibrated standard one solar-mass models using the GN93 and AGS05 abundances and for models with AGS05 abundances and initial mass 1.3 and 1.15 M$_{\odot}$ including early mass loss.} \label{fig:c_massloss} \end{figure} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{ominc_massloss.eps}} \caption{\footnotesize Observed minus calculated versus calculated frequencies for calibrated standard one solar mass models using the GN93 and AGS05 abundances, and for models with AGS05 abundances with initial mass 1.3 and 1.15 M$_{\odot}$ including early mass loss. Frequencies compared are for modes of angular degrees $\ell$ = 0, 2, 10, and 20. The data are from \citet{Chaplin_2007}, \citet{ST96}, and \citet{Garcia_2001}. The calculated frequencies were computed using the \citet{Pesnell_1990} non-adiabatic stellar pulsation code.} \label{fig:O-C_massloss} \end{figure} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{Temp_massloss_smooth.eps}} \caption{\footnotesize Temperature experienced by the present-day solar surface layer as a function of time for the mass-losing and standard models. For the mass-losing phase, the lithium-destroying temperatures are attained because the layer that is now at the surface once resided deeper inside the sun. In the post-mass-loss phase, the relevant temperatures are attained by envelope convection which mixes surface layers downward, exposing the surface material to the temperature at the CZ base. 2.8 million K is the temperature required for relatively rapid Li destruction.} \label{fig:temp_massloss} \end{figure} Table \ref{table:ZYR} summarizes the initial Y and mixing-length parameter needed to calibrate each model and the final CZ Y and CZ base radius. 
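As a quick check on the rates adopted in Section \ref{sect:massloss_meth}: an exponentially decaying rate $\dot{M}(t)=\dot{M}_0\,e^{-t/\tau}$ integrates to a total shed mass of $\dot{M}_0\tau$, so the chosen initial rates remove roughly 0.3 and 0.15 M$_\odot$, bringing both models to about one solar mass today. A minimal Python sketch (the evolution code applies the rate stepwise; this is only the closed-form budget, using the values quoted above):

```python
# Sketch: total mass shed by an exponentially decaying mass-loss rate.
# dM/dt = mdot0 * exp(-t/tau) integrates to mdot0 * tau over the full evolution.
# Initial masses and rates are the two AGS05 mass-losing models from the text.
tau = 0.45e9  # e-folding time, yr

for m_init, mdot0 in [(1.30, 6.55e-10), (1.15, 3.38e-10)]:  # M_sun, M_sun/yr
    dm = mdot0 * tau  # total mass lost, M_sun
    print(f"M0 = {m_init:.2f} M_sun: dM = {dm:.3f} -> final {m_init - dm:.3f} M_sun")
# -> M0 = 1.30 M_sun: dM = 0.295 -> final 1.005 M_sun
# -> M0 = 1.15 M_sun: dM = 0.152 -> final 0.998 M_sun
```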
The seismically-inferred CZ Y abundance and CZ base radius are 0.248 $\pm$ 0.003 and 0.713 $\pm$ 0.001 R$_{\odot}$, respectively \citep{BA04a}. Figure \ref{fig:lum_massloss} shows the luminosity versus time for these models, as well as for two calibrated constant-mass (1 M$_{\odot}$) models. Figure \ref{fig:c_massloss} shows the inferred minus calculated sound speed for these models. For the models with the AGS05 abundances, the sound speed agreement is considerably improved by including early mass loss. For the model with initial mass 1.3 M$_{\odot}$, sound-speed agreement is almost restored near the CZ base, but the agreement is not as good in the more H-depleted core. Unfortunately, while the model with initial mass 1.15 M$_{\odot}$ has slightly better sound-speed agreement in the central 0.1 R$_{\odot}$, the improvement is not as pronounced for the region below the CZ. Figure \ref{fig:O-C_massloss} shows the observed minus calculated versus calculated nonadiabatic frequencies for modes of angular degrees $\ell$ = 0, 2, 10, and 20 that propagate into the solar interior below the convection zone. Including mass loss improves agreement, but models with old abundances and no mass loss still give the best agreement. The mass-losing models described here would destroy all of the observed surface lithium. Lithium is destroyed relatively rapidly in the solar interior at temperatures $\ge$ 2.8 million K. For standard models, on the main sequence the surface layers are never mixed to high enough temperatures to deplete Li by the observed factor of 150 from the initial solar system abundance \citep{AGSS09}, and additional mixing mechanisms must be invoked. However, with mass loss, layers that are now at the surface were initially in the interior at temperatures high enough to quickly destroy Li. Figure \ref{fig:temp_massloss} shows the temperature experienced by the surface layer throughout the evolution of each model.
During the mass-losing phase, the high temperatures that depleted the lithium in the current surface layer were experienced when the material that is at the surface of the now 1 M$_\odot$ sun was deeper, before the previous surface layers were lost. After the mass-loss phase, the temperature that affects the surface layer is that of the CZ base, since the material currently at the surface is continually mixed through the CZ. The mass-losing models also produce more ${}^3\rm{He}$ at the surface as the now-surface layers were once processed at higher interior temperatures where ${}^3\rm{He}$ builds up to higher equilibrium values. For the 1.3 M$_{\odot}$ initial mass model, the surface ${}^3\rm{He}$ mass fraction is enhanced from its initial value of 5.0 $\times10^{-5}$ to 9.0 $\times10^{-5}$, while for the 1.15 M$_{\odot}$ initial mass model, the surface ${}^3\rm{He}$ mass fraction is only slightly enhanced from its initial value of 5.0 $\times10^{-5}$ to 5.1 $\times10^{-5}$. The final ${}^3\rm{He}/{}^4\rm{He}$ abundance ratios for the 1.3 and 1.15 M$_{\odot}$ initial mass models are 3.9 $\times10^{-4}$ and 2.2 $\times10^{-4}$, respectively, enhanced from an initial value of 2.0 $\times10^{-4}$. \section{Accretion of low-Z material} \label{sect:accret} \subsection{Motivation and method} \label{sect:acc_meth} As a way to keep the solar interior more like models obtained using higher abundances, \citet[][hereafter GWC04]{GWC04} and GWC05 proposed accretion of material depleted in heavier elements early in the sun's lifetime. In this scenario, the pre-main sequence sun would have $\sim$98$\%$ of its present mass and a higher Z with a mixture similar to the GN93 or GS98 abundances. After the sun begins core hydrogen burning and is no longer fully convective, the remaining $\sim$2$\%$ of material accreted would have lower Z providing a convection-zone abundance similar to the current photospheric abundances of AGS05 or AGSS09. 
Possible justifications for this scenario are discussed by \citet{Nordlund_2009} and \citet{Melendez_2009}. One plausible explanation is that planet formation removes some high-Z elements from the solar nebula, leaving lower-Z material to be accreted after the sun is no longer fully convective. \citet{Winnick_2002} explored the accretion of metal-rich material in models with the older GS98 abundances. They show that some solar models with enhanced metallicity in the convection zone might be viable as small perturbations to the standard GS98 model. \citet{Haxton_2008} discuss the question of accretion of metal-depleted gas onto the sun as a motivation for future experiments to measure CN-cycle neutrinos. The flux of these neutrinos should depend nearly linearly on the initial core abundance of C and N. A successful measurement of the CN-cycle solar neutrino flux would therefore place constraints on possible accretion by determining the metallicity of the solar core. To test the possibility of low-Z accretion in the early sun, a model is evolved starting with $Z=0.0197$ on the zero-age main sequence, and material is progressively added to reduce the CZ Z by $0.001$ in each of six steps of about six million years. After each step, the model is given time to equilibrate to a new shallower CZ depth, leaving behind the higher-Z composition gradient \citep{Guzik_2006}. The final accretion episode leaves the CZ with $Z=0.0137$. After 36 million years of low-Z accretion, the model is evolved normally (including diffusive settling) and calibrated as usual to the observed luminosity, radius, and AGS05 $Z/X$ value. \subsection{Results} \label{sect:acc_results} Figure \ref{fig:Z_accret} shows the heavy element abundance throughout the sun in the accretion model, and Figure \ref{fig:Z_accret_Y} shows the Y abundance. As intended, the abundance in the interior is similar to the GN93 model while the CZ abundance matches the new, lower Z from photospheric observations. 
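The accretion schedule just described amounts to simple bookkeeping. A minimal Python sketch (using only the step size, step count, and timing quoted above) reproduces the quoted final CZ Z and total duration:

```python
# Sketch of the stepwise low-Z accretion schedule described in the text:
# starting from CZ Z = 0.0197 on the ZAMS, six accretion episodes each
# reduce Z by 0.001 over ~6 Myr, after which normal evolution resumes.
z, t = 0.0197, 0.0
for episode in range(6):
    z -= 0.001  # dilution of the CZ by accreted low-Z material
    t += 6.0    # approximate duration of each episode, Myr
print(f"CZ Z = {z:.4f} after {t:.0f} Myr")  # -> CZ Z = 0.0137 after 36 Myr
```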
\begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{Z_accret_new.eps}} \caption{\footnotesize Heavy element abundance profiles for the GN93 mixture model and the low-Z accretion model.} \label{fig:Z_accret} \end{figure} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{Y_accret_new.eps}} \caption{\footnotesize Helium abundance profiles for the GN93 mixture model and the low-Z accretion model.} \label{fig:Z_accret_Y} \end{figure} Figure \ref{fig:c_accret} shows the relative sound-speed differences for models with the GN93 mixture, the AGS05 mixture, and low-Z accretion. The accretion model shows improvement in sound-speed agreement in the interior where Z is similar to the GN93 mixture. However, discrepancy remains near the CZ base. Compared to the AGS05 model, the accretion model has a less shallow CZ base radius of 0.7235 R$_{\odot}$ and a nearly acceptable CZ Y abundance of $0.2407$. Figure \ref{fig:O-C_accret} shows the observed minus calculated versus calculated non-adiabatic frequencies for modes of angular degrees $\ell$ = 0, 2, 10, and 20 that propagate into the solar interior below the CZ. Including accretion in a model using the new abundances improves agreement with this frequency data. \citet{CVR07} also approximated an accretion model by instantaneously decreasing the Z abundance gradient in the CZ in an early main-sequence model (age 74 My). They do not find an improvement in the CZ depth, as we did for our model, but find about the same CZ Y abundance, $0.240$, and improved sound-speed agreement below 0.5 R$_\odot$. \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics[width=150mm]{sound_accret.eps}} \caption{\footnotesize Relative difference between inferred and calculated sound speeds for models with the GN93 and AGS05 abundances and with low-Z accretion.} \label{fig:c_accret} \end{figure} \begin{figure}[t!] 
\resizebox{\hsize}{!}{\includegraphics{ominc_accret.eps}} \caption{\footnotesize Observed minus calculated frequency versus calculated frequency of GN93, AGS05, and accretion models for degree $\ell$=0, 2, 10, and 20 modes. The calculated frequencies were computed using the \citet{Pesnell_1990} non-adiabatic stellar pulsation code. The data are from \citet{Chaplin_2007}, \citet{ST96}, and \citet{Garcia_2001}. The accretion model shows improved, though not acceptable, O-C agreement.} \label{fig:O-C_accret} \end{figure} \section{Convective overshoot} \label{sect:overshoot} \subsection{Motivation and method} \label{sect:over_meth} It is possible that the CZ depth predicted using standard mixing-length theory is too shallow and convective motions extend the nearly adiabatically stratified part of the CZ to the depth inferred seismically. \citet{Rempel_2004} presents a semi-analytical model of overshoot. This approach facilitates the understanding of the relation between numerical simulations and classical overshoot theories in terms of physical parameters. \citet{Montalban_2006} developed solar models that include convective overshoot. By adopting an overshoot parameter of the order of 0.15 times the pressure scale height and increasing the opacity by $\sim$7$\%$ (within the uncertainty limits of the abundances), they were able to reproduce the seismically inferred CZ base and Y$_{CZ}$. However, large sound-speed discrepancies remain in the radiative region of their model. To explore the possibility of convective overshoot, we evolve models with AGS05 abundances but extend the CZ that follows the adiabatic gradient to a depth that optimizes agreement with the sound speed inversions. We hoped that a deeper CZ would also inhibit diffusion and keep the CZ Y abundance higher.
\subsection{Results} \label{sect:over_results} Figure \ref{fig:c_over1} shows the relative sound-speed differences for models with the GN93 mixture, the AGS05 mixture, and the first convective overshooting model. For the first overshoot model, the CZ depth is 0.704 R$_{\odot}$, deeper than inferred seismically. The sound speed agreement is improved only within the CZ but not much below it. The deeper CZ does not inhibit Y diffusion as we had hoped. In the second model, we extend the CZ even deeper, to 0.64 R$_{\odot}$. Figure \ref{fig:c_over2} shows the relative sound-speed difference for the second convective overshooting model and standard models with the GN93 and AGS05 mixtures. The sound speed gradient at the base of this adiabatically stratified CZ clearly does not agree with the seismically inferred one, and sound speed agreement in the central 0.2 R$_{\odot}$ is much worse than in any of the other models. We therefore omit the second overshoot model from further analysis. Figure \ref{fig:O-C_over} shows the observed minus calculated frequency for the first overshoot model compared to the GN93 and AGS05 models. The AGS05 model and the overshoot model where the core Z is low do not agree as well with the data as the GN93 model does. It appears that overshooting alone is not a solution to this problem. \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{sound_over.eps}} \caption{\footnotesize Relative difference between inferred and calculated sound speeds for models with the GN93 and AGS05 abundances and the first convective overshoot model that extends the adiabatically stratified layer to 0.704 R$_\odot$.} \label{fig:c_over1} \end{figure} \begin{figure}[t!] 
\resizebox{\hsize}{!}{\includegraphics{sound_overtoomuch.eps}} \caption{\footnotesize Relative difference between inferred and calculated sound speeds for models with the GN93 and AGS05 abundances and the second convective overshoot model that extends the adiabatically stratified layer to 0.64 R$_\odot$.} \label{fig:c_over2} \end{figure} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{ominc_over.eps}} \caption{\footnotesize Observed minus calculated frequency versus calculated frequency for models with the GN93 and AGS05 abundances and for the overshoot model for degree $\ell$=0, 2, 10, and 20 modes. The calculated frequencies were computed using the \citet{Pesnell_1990} non-adiabatic stellar pulsation code, and the data are from \citet{Chaplin_1998}, \citet{ST96}, and \citet{Garcia_2001}.} \label{fig:O-C_over} \end{figure} \section{Summary of mitigation attempts} \label{sect:summary} \begin{table*} \caption{Initial mass and surface abundances, mixing length parameter, and final abundances and CZ base for our solar models.} \label{table:ZYR} \begin{center} \begin{tabular}{llllllllll} \hline Model & GN93 & AGS05 & ML 1 & ML 2 & Accretion & Overshoot & C 1 & C 2 & C 3 \\ \hline M$_{\rm{o}}$/M$_\odot$& 1.00 & 1.00 & 1.30 & 1.15 & 0.98 & 1.00 & 1.00 & 1.00 & 1.10 \\ Y$_{\rm{o}}$ & 0.26930 & 0.25700 & 0.24659 & 0.25279 & 0.26927 & 0.25699 & 0.26370 & 0.27780 & 0.27530 \\ Z$_{\rm{o}}$ & 0.01970 & 0.01350 & 0.01351 & 0.01351 & 0.01973 & 0.01351 & 0.01740 & 0.01700 & 0.01700 \\ $\alpha$& 2.0379 & 2.0004 & 2.0571 & 2.0104 & 1.8958 & 1.9962 & 1.9918 & 2.0635 & 2.0652 \\ Z/X & 0.0240 & 0.0163 & 0.0178 & 0.0171 & 0.0162 & 0.0164 & 0.0209 & 0.0208 & 0.0219 \\ Y$_{\rm{CZ}}$& 0.2412 & 0.2273 & 0.2388 & 0.2349 & 0.2402 & 0.2292 & 0.2349 & 0.2473 & 0.2551 \\ R$_{\rm{CZB}}$ /R$_\odot$& 0.7125 & 0.7294 & 0.7217 & 0.7264 & 0.7241 & 0.7038 & 0.7186 & 0.7190 & 0.7181 \\ \hline \end{tabular} \tablenotetext{a}{Seismically inferred values from \citet{BA04a}: Y$_{\rm{CZ}}$ = 
$0.2485 \pm 0.0035$, R$_{\rm{CZB}}$ /R$_\odot$ = $0.713 \pm 0.001$.} \tablenotetext{b}{Models ML 1 and ML 2 are the AGS05 models including mass loss. Models C1, C2, and C3 are the CO$^5$BOLD models with GN93 opacities, AGS05 opacities, and AGS05 opacities including mass loss.} \end{center} \end{table*} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{sound_all.eps}} \caption{\footnotesize Relative difference between inferred and calculated sound speeds for models with the GN93 and AGS05 abundances and models with mass-loss, accretion, and convective overshoot.} \label{fig:c_all} \end{figure} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{ominc_all.eps}} \caption{\footnotesize Observed minus calculated frequency versus calculated frequency for degree $\ell$=0, 2, 10, and 20 modes in GN93 and AGS05 models and in models with mass-loss, accretion, and convective overshoot. The calculated frequencies were computed using the \citet{Pesnell_1990} non-adiabatic stellar pulsation code. The data are from \citet{Chaplin_2007}, \citet{ST96}, and \citet{Garcia_2001}.} \label{fig:O-C_all} \end{figure} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{smallsep_all.eps}} \caption{\footnotesize Difference between calculated and observed small separations for $\ell$=0 and 2 modes for the Guzik et al. models. The data are from \citet{Chaplin_2007}.} \label{fig:small_sep_all} \end{figure} Table \ref{table:ZYR} summarizes the CZ Y, CZ base radius, and photosphere Z/X for the models examined here. Figure \ref{fig:c_all} shows the relative sound-speed differences of our models with the GN93 mixture, the AGS05 mixture, and models with mass-loss, accretion, and convective overshoot. The observed minus calculated frequencies of these models are shown in Figure \ref{fig:O-C_all}. As seen in Table \ref{table:ZYR} and Figures \ref{fig:c_all} and \ref{fig:O-C_all}, the AGS05 model and overshoot model do not agree well with the data. 
The GN93 model, the 1.3 M$_\odot$ mass loss model, and the accretion model show better agreement, though no model matches the data perfectly. Figure \ref{fig:small_sep_all} shows the small frequency separation differences of the Guzik et al. models minus the solar-cycle corrected frequency differences from the BiSON group \citep{Chaplin_2007} for $\ell$=0 and 2 modes, which are sensitive to the structure of the core. This plot illustrates that including low-Z accretion in the model retains the core structure of the GN93 model. The overshoot model and AGS05 model do not agree as well with the data as the models with higher core Z. The mass-losing model with the best sound-speed and observed minus calculated frequency agreement (M$_0$ = 1.3 M$_{\odot}$) shows worse agreement with these small separations than any of the other models. However, mass loss does change the value of the small frequency separation in the right direction to correct for the discrepancy seen with the new abundances. Perhaps the over-compensation indicates that this model has too much mass loss; a model with a smaller initial mass would reduce the disagreement, as seen with the M$_0$ = 1.15 M$_{\odot}$ model. Mass-losing models can improve seismic agreement for the new abundances, but they do not fully restore agreement. In addition, the destruction of too much Li and the production of too much surface ${}^3\rm{He}$ make the two models considered here unlikely. A smaller amount of mass loss that leads to destruction of some, but not all, of the initial Li could provide a plausible partial mitigation of the solar abundance problem. The accretion model allows for a solar interior that is similar to models developed with the higher abundances and therefore agrees nicely with seismic inferences in the central 0.5 R$_{\odot}$ of the sun. The model still shows poor agreement near the CZ base.
The very steep Z abundance gradient developed at the CZ base seen in Figure \ref{fig:Z_accret} \citep[see][]{Guzik_2006} might have a detectable signature in the seismic frequencies. \citet{Basu_1997} finds that inversions appear to rule out such steep composition gradients at the CZ base. The convective overshoot model with a CZ depth of 0.704 R$_{\odot}$ improves sound speed agreement slightly within the CZ but not much below it. Extending the CZ depth to 0.64 R$_{\odot}$ results in even worse agreement. In addition, Y diffusion is not inhibited by the deeper CZ, as we had hoped. \section{CO$^5$BOLD abundances} \label{sect:COBOLD} At the suggestion of both the referee and a colleague, P. Bonifacio, we also include here a preliminary exploration of solar models using the CO$^5$BOLD abundances with a Z/X of 0.0209 and Z of 0.0154, intermediate between the AGS05 and AGSS09 abundances, but lower than the GN93 abundances. Because the abundances of only 12 elements have been re-evaluated at this time by the CO$^5$BOLD group \citep{Ludwig_2009}, it is a little premature to create new opacity tables for the CO$^5$BOLD mixture, which can, in principle, be done using the Lawrence Livermore OPAL web request at http://opalopacity.llnl.gov. In addition, we do not have low-temperature opacities for a mixture representative of the CO$^5$BOLD abundances available to us. Therefore, we decided here to calibrate standard models to the Z/X of CO$^5$BOLD using opacity tables based on the GN93 and AGS05 mixtures. We observe that the O/Fe (oxygen to iron) mass fraction ratio of CO$^5$BOLD is 4.98, intermediate between the O/Fe mass ratio of 6.56 for the GN93 mixture and 4.65 for the AGS05 mixture, so our two standard models should bracket results that use the CO$^5$BOLD mixture in the opacity tables. We did update abundances to the CO$^5$BOLD values for our in-line equation of state calculation and for tracking the diffusion of the major elements.
The initial Y, Z, and mixing length to pressure-scale-height ratios needed to calibrate to the CO$^5$BOLD Z/X using either opacity set are listed in Table \ref{table:ZYR}; Figures \ref{fig:c_Caf}, \ref{fig:smallsep_Caf}, and \ref{fig:O-C_Caf} show the results for sound-speed differences, small separations between calculated $\ell$=0 and $\ell$=2 modes, and observed minus calculated frequencies for $\ell$ = 0, 2, 10, and 20. The sound speed discrepancy is reduced to only about 0.6$\%$ at the convection zone base for the CO$^5$BOLD abundances, compared to 1.4$\%$ for the AGS05 abundances and 0.4$\%$ for the GN93 abundances. The results for either opacity set are identical above 0.6 R$_\odot$, but differ below this where oxygen is the main opacity contributor, and in the core where iron is the main opacity contributor, as expected. The model using the AGS05 opacity mixture requires a higher Y abundance to compensate for the relatively higher Fe opacity contribution in the core. The small separations and observed minus calculated frequencies are slightly higher on average than found for a model calibrated to the GN93 abundances, but not as high as for a model calibrated to the AGS05 Z/X. As also surmised by the referee, since the CO$^5$BOLD sound speed differences are closer to the observed values, improvement could be obtained with a smaller amount of mass loss. Here we have calculated an additional mass-loss model with initial mass 1.1 M$_\odot$ and an initial mass-loss rate of 2.25 $\times$ 10$^{-10}$ M$_\odot$/yr, exponentially decaying with e-folding time 0.45 Gyr. This model was calibrated to the CO$^5$BOLD Z/X using the AGS05 opacities that have O/Fe abundance closer to that of the CO$^5$BOLD O/Fe.
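Since this rate again decays exponentially, its time integral $\dot{M}_0\tau$ gives the total mass shed. A minimal Python sketch of the mass budget, using only the values just quoted:

```python
# Sketch: mass budget of the CO5BOLD mass-losing model quoted in the text
# (initial mass 1.1 M_sun, initial rate 2.25e-10 M_sun/yr, tau = 0.45 Gyr).
mdot0, tau = 2.25e-10, 0.45e9
dm = mdot0 * tau  # integral of mdot0*exp(-t/tau) from t=0 to infinity
print(f"mass shed ~ {dm:.2f} M_sun -> final mass ~ {1.1 - dm:.2f} M_sun")
# -> mass shed ~ 0.10 M_sun -> final mass ~ 1.00 M_sun
```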
Previous work \citep[e.g.][]{SF92,GC95,SB03} indicated that an initial mass of 1.1 M$_\odot$ or less and a relatively short mass-loss phase (less than 0.2-0.5 Gyr) could deplete the lithium to the present-day observed values from the initial solar-system abundance without completely destroying the lithium or building up too much $^3$He. The sound speed agreement (Figure \ref{fig:c_Caf}) shows considerable improvement with this smaller amount of mass loss; however, the agreement for the solar core is not as good as for the non-mass-losing models, as can be seen more clearly in the small separations (Figure \ref{fig:smallsep_Caf}). This mass-losing model restores the level of agreement attained with the GN93 abundances for the observed minus calculated frequencies (Figure \ref{fig:O-C_Caf}). Figure \ref{fig:lum_CafML} shows the luminosity versus time for this model, as well as for the standard solar models evolved with GN93 and AGS05 abundances. Figure \ref{fig:temp_CafML} shows the effective temperature experienced by the surface layer throughout the evolution of the mass-loss model compared to that experienced by the standard models. During the mass-losing phase, the high temperatures that depleted the lithium in the current surface layer were experienced when the material that is at the surface of the now 1 M$_\odot$ sun was deeper, before the previous surface layers were lost. After the mass-loss phase, the temperature that affects the surface layer is that of the CZ base, since the material currently at the surface is continually mixed through the CZ. We see that including mass loss in the CO$^5$BOLD model exposes the surface layers to high enough temperatures to deplete Li early in the evolution (2.8 million K is the temperature required for relatively rapid Li destruction). It is not clear that the advantages of mass loss (e.g.
Li depletion and better sound speed agreement in the outer 80$\%$ of the solar radius) can be retained while at the same time not creating a discrepancy in the inner 20$\%$. \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{sound_cobold.eps}} \caption{\footnotesize Relative difference between inferred and calculated sound speeds for models using the CO$^5$BOLD abundances created with either the GN93 or AGS05 opacities and a model created using the CO$^5$BOLD abundances with the AGS05 opacities and including mass loss.} \label{fig:c_Caf} \end{figure} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{smallsep_cobold.eps}} \caption{\footnotesize Difference between calculated and observed small separations for $\ell$=0 and 2 modes for the Guzik et al. models using the CO$^5$BOLD abundances with either the GN93 or AGS05 opacities and with the AGS05 opacities and mass loss. The data are from \citet{Chaplin_2007}.} \label{fig:smallsep_Caf} \end{figure} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{ominc_cobold.eps}} \caption{\footnotesize Observed minus calculated frequency versus calculated frequency for degree $\ell$=0, 2, 10, and 20 modes in models using the CO$^5$BOLD abundances with either the GN93 or AGS05 opacities and with the AGS05 opacities and mass loss. The calculated frequencies were computed using the \citet{Pesnell_1990} non-adiabatic stellar pulsation code. The data are from \citet{Chaplin_2007}, \citet{ST96}, and \citet{Garcia_2001}.} \label{fig:O-C_Caf} \end{figure} \begin{figure}[t!] \resizebox{\hsize}{!}{\includegraphics{Luminosity_CAFML.eps}} \caption{\footnotesize Luminosity versus time for standard one solar-mass models using the GN93 and AGS05 abundances and for a mass-losing model using the CO$^5$BOLD abundances with the AGS05 opacities with initial mass 1.1 M$_{\odot}$. Mass-loss rates are exponentially decaying with e-folding time 0.45 Gyr.} \label{fig:lum_CafML} \end{figure} \begin{figure}[t!] 
\resizebox{\hsize}{!}{\includegraphics{Temperature_CafML_smooth.eps}} \caption{\footnotesize Temperature experienced by the present-day solar surface layer as a function of time for the standard models and the mass-losing model with CO$^5$BOLD abundances and AGS05 opacities. For the mass-losing phase, the lithium-destroying temperatures are attained because the layer that is now at the surface once resided deeper inside the sun. In the post-mass-loss phase, the relevant temperatures are attained by envelope convection which mixes surface layers downward, exposing the surface material to the temperature at the CZ base. 2.8 million K is the temperature required for relatively rapid Li destruction.} \label{fig:temp_CafML} \end{figure} \section{Conclusions and future work} \label{sect:conc} In spite of the seismic evidence in favor of the old abundances, the new abundances cannot be easily dismissed. The improvements in the physics of the atmospheric models used to determine the abundances, the success achieved in line-profile matching, and the self-consistency of the abundance determinations provide great credibility to the new lower abundances. However, solar models developed with the new abundances remain discrepant with helioseismic constraints, even with a variety of (often unjustified) changes to the input physics. Adjustments to the evolution of solar models, such as the early mass loss and low-Z accretion discussed here, show some promise but do not fully restore agreement. Any single adjustment to solar models does not fully resolve the problem. Combinations of changes might provide better agreement but seem contrived. A resolution to the solar abundance problem (or the solar model problem) remains elusive. In the future, a more comprehensive exploration of parameter space, including opacity variations, different mass-loss or accretion prescriptions, diffusion, and perhaps even combinations of these effects could be useful. 
In particular, models with AGS05 abundances and a smaller amount of mass loss than explored here may provide a way to retain the core structure without completely destroying Li or creating too much $^3$He build-up. Further examination of the CO$^5$BOLD models with revised abundances for every element and opacity tables (including low-T opacities) adjusted for a new mixture would also be enlightening. \\ \\ \acknowledgements The authors thank David Arnett, Alfio Bonanno, Piercarlo Bonifacio, Sarbani Basu, Joergen Christensen-Dalsgaard, Wick Haxton, Ross Rosenwald, Sylvaine Turck-Chi\`eze, and our anonymous referee for helpful discussions. We thank Scott Watson for code improvement and earlier versions of the AGS models. \begin{comment} *************** Notes: To do list:\\ - add uncertainties to sound speed plot sound.eps\\ \\ References to update:\\ - Lazrek et al 2006\\ - Nordlund 2009\\ - Arnett et al 2009\\ *************** \end{comment} \bibliographystyle{aa}
Abba Yusuf, governorship candidate of the Peoples Democratic Party (PDP) in Kano State, says the supplementary election held in Kano was the "most horrific ever", and he will challenge the victory of Governor Abdullahi Ganduje at the Election Petitions Tribunal. He made this known on Monday following the announcement of Ganduje as the winner of the election. He said: "In the political history of Kano, we have undergone the most horrific election ever, where the ruling APC and the Kano State Government deployed all mechanisms to orchestrate violence against the citizens." INEC had declared the Kano State election inconclusive and fixed Saturday, March 23, 2019, for a supplementary election. The supplementary election was marred by reports of harassment of residents and journalists. Speaking on these incidents, Yusuf said he was shocked by what he saw on March 9 and March 23, when voting was disrupted in many polling units. In the final results declared by Bello Shehu, Ganduje scored 1,033,695 votes to win the election, while Yusuf came second with 1,024,713 votes. However, the PDP candidate said he was disappointed that top Police officers deployed in the state watched as acts of lawlessness took place, only to appear publicly to say that the process was peaceful. "We condemn this fraud in its entirety. We have decided to take legal action through the Election Petitions Tribunal with overwhelming evidence that has been gathered, and Insha Allah, the mandate of the good people of Kano State shall be reclaimed," he added.
require "graphql"
require "kanji/import"

module Graph
  class Query
    # Inject the database handle (`db`) from the application container.
    include Kanji::Import["persistence.db"]

    # Execute a GraphQL query against the given schema, passing the db
    # handle to resolvers through the query context.
    def call(schema:, query:, variables: {}, context: {})
      GraphQL::Query.new(
        schema.call,
        query,
        variables: variables,
        context: context.merge(db: db)
      ).result
    end
  end
end
Vikings in America Arturo Rubio Issue: 27 August 2001 After a long journey, the weary European explorers catch a glimpse of land, far on the horizon. The men grow restless, as their ships slowly sail toward the coast. Images of rich lands and adventure race through their minds. Finally they disembark and set foot on America's pristine land for the first time in history. Two continents have made contact. Yet these are not Spaniards, commanded by an Italian sailor named Christopher Columbus. These are Vikings, guided by Leif Eriksson, arriving at American shores almost five hundred years before Columbus' momentous "discovery." This is the story of the first Europeans who bridged the gap dividing two continents. These explorers, known as Vikings, were part of a rich and complex culture. There is much to be learned behind the facade of pirates and barbarians that has commonly been attached to them. More interesting is to learn about their way of life, their prowess at sea and exploration, and the way they, in the long run, enriched European history. The Vikings were native to the land that today is Norway, Sweden, and Denmark. The populations living in these three territories were very independent from each other. Each country was a society composed of a King, a noble class -- known as Jarls or Earls -- and commoners. The country was divided into districts, each holding a yearly assembly -- known as a Thing -- in which Vikings discussed matters of common interest. All men were equal in the Thing; any man had the right to demand the settling of a dispute or whatever problem afflicted him. Each country had its own sphere of influence. Vikings from Norway, known by many as Normands, would travel to northern England, Scotland, Ireland, and the archipelagos farther to the Northwest. Vikings from Denmark, known as Danes, journeyed through southern England, the European mainland, its coasts, and the Mediterranean. 
Vikings from Sweden, known as Rus by Slavs, roamed parts of eastern Europe, even venturing as far away as the Caspian Sea. Putting aside the image of murdering barbarians, we now know that they were skilled farmers, traders, navigators, explorers, and settlers. They were very good storytellers, too. History was passed on from generation to generation by way of Sagas. These stories were memorized and told to others. Elders, through them, narrated the adventures of kings, heroes, and prominent families. Many of these Sagas were written down by Icelanders in the fourteenth century, in an effort to preserve the Viking's history, which would otherwise have been forgotten. Thanks to these Sagas we know many things about them, as well as their voyages. But other cultures who came in contact with Vikings also documented their way of life. In 921 the Arab chronicler Ibn Fadlan met Rus traders of Swedish origin, near the Middle Volga. Impressed by their appearance he described them as "perfect physical specimens, tall as date palms, blonde and ruddy." Of their women he wrote that "each one wears on either breast a box of iron, silver, copper or gold; the value of the box indicates the wealth of the husband. Each box has a ring from which depends a knife. The women wear neck rings of gold and silver, one for each 10,000 dirhems which her husband is worth; some women have many." It is interesting to note that Viking women had an important role in society, compared to other cultures at the time. A woman had complete authority over the farm when the husband was off on a raid or trading trip. She could own land and had the right to demand divorce if she no longer wanted to be by her husband's side. Ibn Fadlan was appalled by their apparent lack of hygiene as well as their uninhibited sexual practices. "They are the filthiest of God's creatures. 
They have no modesty in defecation and urination, nor do they wash after pollution from orgasm, nor do they wash their hands after eating. With them are pretty slave girls destined for sale to merchants: a man will have sexual intercourse with his slave girl while his companion looks on. Sometimes whole groups will come together in this fashion, each in the presence of others. A merchant who arrives to buy a slave girl from them may have to wait and look on while a Rus completes the act of intercourse with a slave girl," he added. He also took note of how the Vikings honored and bid their dead farewell. According to both Ibn Fadlan and the Sagas, when a wealthy Viking died, he was buried along with his ship. Ibn Fadlan witnessed one of these burials and narrated in great detail the specifics of the event. According to him, the deceased's ship was dragged out of the water and taken to where the burial would take place. It was propped up on four wooden stakes inside a pit that had previously been dug. A tent was then constructed in the middle of the ship and more wood was set underneath it. The corpse, dressed in fine clothes, was put inside the tent along with different objects he would need in the afterlife. First they deposited fruit, intoxicating drinks, and fragrant plants beside him. Then bread, meat, and onions were placed before him. After that a dog was brought, cut into two pieces and placed inside the ship. His weapons were then placed by his side. Two horses were dismembered and also put inside the ship. Both a rooster and hen were also sacrificed and placed inside. In the end, a female slave who had volunteered to join her master in death, was killed and deposited in the ship. The vessel was then set on fire and the remains covered with a mound of soil. Finally on top of this mound the Vikings placed a wooden post; on it they wrote the man's name and the name of his king, and then they departed. This was typical of a wealthy man's burial. 
In the case of a poor man, a small boat was constructed. He was placed inside, set on fire, and then buried. Viking life revolved around farming and trade, yet every single man was proficient in the use of weapons. The basic battle gear of a Viking was a long sword, an axe, and a small knife. A wealthy man could also have a pike and a bow and arrows. For protection he carried a round shield and a coat of chain mail, as well as a metal helmet. This brings us to another misconception, the image of a Viking wearing a horned helmet. There is no evidence that Vikings ever wore this type of headgear. Actually, a conical metal helmet with a simple rectangular nose guard was commonly used in battle. There was a small group of elite warriors though, known as Berserks, whose only purpose in life was to fight. It is thought that they engaged in rites honoring Odin, the god of war. The Sagas portray Berserks as fierce warriors, possessors of superhuman strength, and literally invincible. According to many accounts they would go into battle in some sort of trance, striking down everything that moved, even while they were severely wounded. They would carry on in this manner for many hours until the effects of the trance wore off. After this they would fall into a deep stupor, needing days to recover. This leads us to believe that they consumed some sort of hallucinogenic drug or herb before going into battle, which produced the effect of a seemingly endless supply of energy and immunity to pain, and finally caused the symptoms of withdrawal. Given that they were difficult to control -- frequently attacking even their comrades in arms -- they were outlawed before the end of the Viking era. Being native to lands with an abundance of fjords, rivers, and lakes, it was easier for Vikings to travel by ship than by land. The design of their ships was remarkable, thanks to knowledge acquired and passed on for many generations. 
At sea this afforded them a clear advantage over other cultures. Their ships had many variations, but there were two main types: warships and transports. Their warship was known as the Drakkar, and it was perfectly suited for incursions, being fast and easy to steer. It was commonly between 17 and 27 meters long and 2.5 to 5 meters wide at the midship. Space was at a premium onboard, so each Viking carried only a chest where he kept his possessions. This also meant there wasn't any type of cover, even in foul weather, which says a lot about the Vikings' ability to withstand less than comfortable living conditions. A dismountable mast and rectangular sail were used whenever possible. When wind was lacking, or when navigating near a coast or traveling up a river, the travelers took turns rowing. Depending on the Drakkar's size, it needed anywhere from 20 to 50 oarsmen. Fully loaded, the Drakkar would draw less than one meter of water, which gave the Vikings the ability to strike practically any coast, without the requirement of a port. And it was even light enough to be dragged over land in order to circumvent a blockaded river or to navigate across to a different one. The Knöörr was a bigger ship, suitable for transporting goods and even entire families on colonizing voyages. It had a central platform where animals, wood, or other necessities could be transported. One interesting characteristic shared by all Viking ships was an identical bow and stern. This meant that if they needed to turn back, they simply rowed in the other direction. From the Sagas, we conclude that only wealthy Vikings had sufficient capital for the construction of Drakkars and Knöörrs. Thus, ship owners usually were nobles, or commoners who had amassed great fortunes through trading or raids. The size, quality, and quantity of ships depended on the Viking's level of wealth. Accordingly, Vikings of less stature could only afford smaller ships suitable for fishing or short voyages. 
The Viking Era Begins The year 793 saw the first documented pillage by Scandinavian warriors. The monastery of Lindisfarne, on the eastern coast of England, was ransacked by Vikings sailing out of Norway. Setting the tone for many incursions to come, the Vikings rowed their ships onto the beach, taking the area by surprise. The monastery was overrun; anyone standing in their way was promptly slaughtered. Others were taken prisoners to be sold as slaves. The attackers took anything of value they could find and quickly rowed away. This incident is commonly regarded as the beginning of the Viking era, as historians refer to the time period ranging from 793 to 1100. During this time the Viking culture expanded into areas surrounding the Scandinavian states, even making contact with regions as far away as the Caspian Sea in the east and the coasts of North America in the west. Raids, such as the one that took place at Lindisfarne, were organized during assemblies. One Viking usually organized the whole affair, brought together the ships necessary for the raid, and recruited the right number of men. These raids usually took place during the summer, when the weather was more favorable for navigation. Before sailing every Viking was required to swear loyalty and complete obedience to the leader of the raid. Upon returning, the loot was divided, half for the organizer, half for the crew. Thus, raids and exploration voyages could be organized in this manner by any Viking, provided he could bring together enough ships, supplies, and men. But there were also many cases in which noblemen, acting on their king's orders, mounted enormous raids. One was in 968, when Jarl Gundraed, in command of 8000 men and 100 ships, led a Danish expedition into Spain. Obviously, the number of casualties left behind and booty taken were many times larger than those of the typical small, hit and run operations. 
In the years following the Lindisfarne incident, Norwegian Vikings dominated parts of northern England, Scotland and Ireland, while the southern English coasts were harassed by Vikings based in Denmark. Dublin and York became important Viking trade centers. It was only natural that after settling in the British Isles these explorers would travel to other areas, always in search of new lands. Thus, sailing to the northwest, Vikings discovered, and permanently settled, Iceland. Around the year 980 Erik Thorvaldsson, better known as Erik the Red, having been temporarily exiled from Iceland for the murder of two fellow Vikings, sailed west along with his family. He searched for an unexplored island someone had seen in the past. He found it, named it Greenland, and promptly built a farm in an area he called Bratthalid, near present day Julianehab. He remained there for three seasons and upon returning to Iceland told his fellow Vikings about the discovery. He described endless rolling green pastures, perfect for raising cattle, as well as an abundance of fish, whales and seals. Hundreds of fellow Vikings went back with him and settled there. In the year 1000, a Viking traveling from Iceland to Greenland was thrown off course by a storm. He ended up in the vicinity of an unknown land farther west. When he finally arrived in Greenland he narrated his ordeal and described the territory he had seen. Leif Eriksson, son of Erik the Red, decided to explore this new land. He took a ship and a crew of 35 men and sailed west. Following the directions previously given to him, he found this new land and traveled down along the coast. He named three areas according to their predominant elements: Helluland (Rocky land), Markland (Land of forests) and Vinland (Land of grapes). He disembarked in this last area and settled there temporarily. A large house and a few other structures were built by Leif and his men. 
According to the Sagas this land was fertile, had good weather and plenty of wildlife. Its rivers and lakes were teeming with salmon and other species of fish. Shortly afterward, Leif and his men returned home, with their ship loaded with wood, which was scarce in Greenland. A year later his father, Erik the Red, died. Leif took over the administration of the farm, and was never able to return to Vinland. Two years later his brother, Thorvald, organized a second expedition to the newly discovered land. He and his men spent two years exploring the coasts of the surrounding area. They also constructed more dwellings. On one occasion they stumbled upon a group of natives, whom the Vikings named skraeling, and a skirmish ensued. Thorvald was mortally wounded and became the first European to be buried in America. His men shortly returned to Greenland carrying a full load of wood and grapes. A third expedition was later organized by another of Erik the Red's sons, Thorstein. Sadly, their ship was thrown off course by a storm and all on board, except for a woman, perished. A fourth expedition was organized by another Viking by the name of Thorfinn Karlsefni. Traveling in two ships, this group stayed for three years in the same dwellings Leif Eriksson and his crew had built. On one occasion they were approached by natives who attempted to exchange furs for Viking swords. Apparently the Vikings refused and had some problems as a result, although not as severe as in Thorvald's case. During their stay in Vinland, Snorri, son of Thorfinn and his wife Gudrid, was born. This is the first documented birth of a European in America. Later, Thorfinn and his group returned to Greenland, again with their respective cargo of wood. The fifth and last documented voyage to Vinland was organized by Freydis, Leif's sister. They traveled in two ships, one carrying Vikings from Greenland, the other from Iceland.
Their one-year stay was not disturbed by the visit of natives, although it was far from uneventful. Apparently Freydis created a hostile climate between Greenlanders and Icelanders. Quarrels over unimportant issues between the two groups were common. In the end she convinced her husband and crew to get rid of the Greenlanders. According to the Sagas she single-handedly took care of the opposing group's women, chopping them to pieces with an axe. They then took both ships, with their complement of wood, and returned to Greenland. Apparently, the Vikings never returned to America after the fifth voyage. The era of Viking expansionism was at an end. Their pillaging incursions became less frequent; the fact that Christianity and its ideals quickly enveloped the Viking culture may be an important factor in this change of attitude. Trading centers in England and Ireland were abandoned, along with the settlements in Greenland. Many of the early invaders settled in parts of France, Finland, and Russia, mixing with the local population. Most of their pagan culture and language were forgotten in time. Only in Iceland, where the Sagas are still read without requiring translation, does the original Nordic language survive. The Archaeological Discoveries Through archaeology we are still learning many things about the Viking culture. Evidence of settlements has been discovered in their homelands, as well as England, Ireland, Iceland, and Greenland. Remains of dwellings and everyday objects have been found in numerous sites. But the most exciting discoveries are the remains of buried ships. In 1867 the remains of a twenty-meter long ship were unearthed in Tune, Norway. According to recent analysis it was built around 890. In 1880 the remains of another ship were found in Gokstad, Norway. It was also built in 890 and measured twenty-four meters in length. The year 1906 saw the discovery of another ship in Oseberg, Norway. 
It measured 22 meters in length, was built around the year 820 and apparently buried in 834. Coins, weapons and other valuable objects were found inside the ships, confirming the tales of Viking funerals. In 1960 a group of Norwegian archaeologists discovered the remains of eight long houses on the Canadian island of L'anse aux Meadows. They were proven to be of Nordic design. Other typical Viking objects were also found, such as pins, stone lamps, and some carved wooden pieces believed to be ship fittings. Further excavations -- from 1973 to 1976 -- uncovered even more utensils and about 2000 pieces of worked wood. It was mostly debris from smoothing and trimming logs, as the Vikings prepared wood to be taken back to Greenland. The Canadian Government reconstructed three of the Viking buildings, and the locale was declared a UNESCO World Heritage Site in 1978. To this day it is still unclear just why the Viking culture literally spilled over into neighboring countries from the eighth century onwards. Some scholars believe a growing population demanded the search for new territories. Others think that a divided and unstable Europe proved fertile ground for Viking raids. Yet there are those who believe that the superiority of Viking maritime technology and tactics gave them a distinct advantage over other cultures, prompting such raids. We may never know for sure. Arturo Rubio is a freelance writer from Tijuana, Mexico. He enjoys writing about history, international affairs and computers. Currently, he is working on a series of articles about the Middle East. Scientific American article on Viking ships The Lief Ericson Vikingship of Philadelphia The Viking Navy The Rus Project (Finnish Viking ship) The medieval Viking ship Helga Holm All of Google's resources on Viking ships © Copyright 2001 Arturo Rubio About Arturo Rubio
Ampasina Maningory is a rural municipality in Madagascar, located along the Maningory River, a few kilometers from the Indian Ocean. It belongs to the district of Fenerive Est, which is a part of Analanjirofo Region. The population of the commune was estimated to be approximately 36,000 in the 2001 commune census. Primary and junior-level secondary education are available in town. The majority (95%) of the population of the commune are farmers. The most important crop is cloves, while other important products are coffee and rice. Services provide employment for 2% of the population. Additionally, fishing employs 3% of the population.

Rivers
The Maningory River crosses Ampasina Maningory and flows into the Indian Ocean east of the town.

Roads
The National road No. 5 goes through Ampasina Maningory. It is located north of Fenoarivo Atsinanana (Fénérive Est) and north of Toamasina.
Hsiung Feng III (HF-3) is a supersonic land-attack and anti-ship missile developed for the Republic of China Navy by the Taiwanese institute CSIST (Chungshan Institute of Science and Technology). The missile can be launched from a ground-based launcher or from warships.

Development
Taiwan's defense strategy envisages the use of anti-ship missiles against a possible invasion from mainland China. However, because the country has long had difficulty acquiring military equipment from abroad, it had to develop its own missile types. The first-generation subsonic missile Hsiung Feng I (HF-1) was followed in 1993 by the improved Hsiung Feng II (HF-2). Around 2002, development began on the far more advanced supersonic anti-ship missile HF-3. In 2005 it was first made public that the missile had carried out a successful live firing. The missiles were first presented to the public at a parade on 10 October 2007. Series production began in 2007. In 2008 the first HF-3 missiles were installed on the Cheng Kung-class frigates. The Jing Chiang-class missile boats (some sources use the transcription Jin Chiang) were second in line to receive them. They were later followed by the Kang Ding-class frigates and the Tuo Chiang-class corvettes. The older HF-2 missiles usually complement them. In 2013, a land-based version of the HF-3 system was presented at the TADTE 2013 (Taipei Aerospace & Defense Technology Exhibition) arms fair. A single six-wheeled vehicle carries a total of four HF-3 missiles.

Design
The missile is launched with the aid of two solid-propellant rocket boosters and is subsequently propelled by a ramjet engine. Its range is estimated at up to 300 km. The missile's speed is estimated at Mach 2.5 to 3.

Accidents
On 1 July 2016, the crew of the Jing Chiang-class missile boat ROCS Jin Chiang (PG-610), moored at a naval base, accidentally launched an HF-3 missile. The missile struck a fishing boat sailing 40 miles away, killing its captain and injuring three other people. The accident was caused by human error: during an exercise, the operator did not follow procedure and mistakenly switched the system from simulation mode to combat mode.
\section{\label{sec:intro}Introduction}

In hydrodynamics, \emph{vortex lines} are field lines of the {\it vor\-ti\-ci\-ty} vector field $\boldsymbol \omega$, which is the curl of the velocity field $\mathbf v$. A \emph{vortex tube} is a surface made of vortex lines passing through each point of a transversal circuit (so that the circuit then encircles the tube). It was as early as 1858 (see Ref.~\onlinecite{Helmholtz1858} and Refs.~\onlinecite{Truesdell1954, Saffman1992, Batchelor2002, WuMaZhou2006, ThorneBlandford2017}) that Helmholtz proved that, in the case of an ideal and barotropic fluid that is only subject to conservative forces,

- vortex lines ``move with the fluid'' (the same fact is sometimes expressed by saying that the lines are ``frozen into the fluid'' or that ``vortex lines are material lines'') and that

- the strength of a vortex tube is the same at all cross-sections.

Here the strength is defined as the flux of the vorticity field $\boldsymbol \omega$ through the cross-section itself or, via the Stokes theorem, as the circulation of the velocity field $\mathbf v$ round the circuit cut by the cross-section. \emph{Geometrical} (and even topological) language has proved very effective in hydrodynamics for a long time. In particular, for obtaining and classifying \emph{conserved quantities}, one can use the \emph{Hamiltonian} structure of the hydrodynamic equations or the interconnection of \emph{symmetries} and conserved quantities, see e.g. Refs.~\onlinecite{ArnoldKhesin1998, Anco2013, AncoDarTufail2015, BesseFrisch2017}. When treated geometrically, the Helmholtz statements may acquire a specific meaning. For example, Arnold succeeded in showing (see Ref.~\onlinecite{Arnold1966}) that the Euler equation for an incompressible fluid on an $n$-dimensional Riemannian manifold has an elegant formulation as the geodesic equation on the Lie group of volume-preserving diffeomorphisms of the given manifold.
(This is in strong analogy with the much simpler, finite-dimensional, description of a rotating top, where the Lie group is $SO(3)$.) In this approach, the Helmholtz theorem stems from the invariance of \emph{coadjoint orbits} with respect to the dynamics. (For subsequent work in this direction, see Refs.~\onlinecite{GuilleminSternberg1980, MarsdenWeinstein1983, MarsdenRatiuWeinstein1984, Novikov1982, KhesinChekanov1989} and, in particular, the monograph~\onlinecite{ArnoldKhesin1998}.) The point of view this paper is based on starts from regarding the hydrodynamics of an ideal fluid as an application of the theory of \emph{integral invariants} due to Poincar\'e and Cartan (see Refs.~\onlinecite{Poincare1899, Cartan1922}, Ref.~\onlinecite{Gantmacher1975} or, in modern presentation, Refs.~\onlinecite{Arnold1989, LibermannMarle1987, Kiehn1975, Fecko2013}). The original Poincar\'e version of the theory refers to \emph{stationary} (time-independent) flow, described by the stationary Euler equation, whereas Cartan's extension embodies the full, possibly time-dependent, situation. Let us remark that although the integral invariants due to Poincar\'e and Cartan are mostly known from classical Hamiltonian mechanics, see e.g. Ref.~\onlinecite{LandauLifshitz1995}, their realm of applications is wider (see Refs.~\onlinecite{Cartan1922, Gantmacher1975}). The idea of a proof of the Helmholtz theorem on vortex lines might go, within the integral invariants setting, as follows (for details, see below). First, vortex lines are identified with integral surfaces of a 1-dimensional integrable distribution, defined in terms of an appropriate 2-form. Second, the structure of the (Euler) equation of motion immediately reveals that the 2-form is \emph{Lie-invariant} w.r.t. the flow of the fluid. So, third, the corresponding distribution is invariant w.r.t. the flow and, consequently, its integral surfaces are invariant w.r.t. the flow of the fluid. But this is exactly what the Helmholtz statement says.
Now, it turns out that the same reasoning may be repeated within the \emph{general integral invariant} setting (so beyond even the ``$n$-dimensional Riemannian hydrodynamics''). What differs is that there we have an integrable distribution based on a possibly \emph{higher-degree} Lie-invariant differential form. In particular, the distribution may be \emph{higher-dimensional} and, consequently, its integral surfaces then become higher-dimensional, too. Nevertheless, they still obey the Helmholtz-like rule of ``moving with the fluid'' (i.e. the \emph{abstract} flow in the general theory translates the integral surfaces into one another). Concerning the Helmholtz theorem on vortex \emph{tubes}, the proof of the original statement is very easy and the corresponding generalization to the integral invariants setting is almost self-evident. The structure of the paper is as follows. In Section \ref{subsec:poincare}, in order to make the text self-contained, we briefly remind the reader, in modern language, of the \emph{Poincar\'e} theory of integral invariants. Then, in Section \ref{subsec:eulertimeindependent}, we present the \emph{stationary} Euler equation rewritten in a form needed for profiting from the Poincar\'e theory. Sections \ref{subsec:helmholtztimeindependent} and \ref{subsec:helmholtztubestimeindependent} then show how (easily) one obtains the Helmholtz results within this scheme. The same program is then repeated, for the case of the time-\emph{dependent} Euler equation (based on \emph{Cartan}'s extension of the theory of integral invariants), in Sections \ref{subsec:cartan}, \ref{subsec:eulertimedependent}, \ref{subsec:helmholtztimedependent} and \ref{subsec:helmholtztubestimedependent}.
Finally, as the principal topic of the paper, general, possibly higher-dimensional \emph{surfaces} moving with the (abstract) fluid in the phase space of a system are studied in Sections \ref{subsec:surfacespoincare} (stationary case) and \ref{subsec:surfacescartan} (time-dependent case; here also the \emph{extended} phase space plays a role). \section{\label{sec:timeindependent}Time-independent flow} \subsection{\label{subsec:poincare}Poincar\'e integral invariants} Consider a manifold $M$ endowed with dynamics given by a \emph{vector field} $v$ \begin{equation} \label{dynamicsgivenbyv} \dot \gamma = v \hskip 1.5cm {\dot x}^i = v^i(x) \end{equation} The field $v$ generates the dynamics (time evolution) via its flow $\Phi_t \leftrightarrow v$. We will call the structure \emph{phase space} \begin{equation} \label{defphasespace} (M,\Phi_t \leftrightarrow v) \hskip 1cm \text{\emph{phase space}} \end{equation} In this situation, let us have a $k$-form $\alpha$ and consider its integrals over various $k$-chains ($k$-dimensional surfaces) $c$ on $M$. Due to the flow $\Phi_t$ corresponding to $v$, the $k$-chains flow away, $c \mapsto \Phi_t (c)$. Compare the integral of $\alpha$ over the original $c$ with the integral over $\Phi_t (c)$. If, {\it for any chain} $c$, the two integrals are equal, it reflects a remarkable property of the form $\alpha$ with respect to the field $v$. We call it an integral invariant: \begin{equation} \label{integrinvariant} \int_{\Phi_t (c)} \alpha = \int_c \alpha \hskip .5cm \Leftrightarrow \hskip .5cm \int_c \alpha \ \ \ \text{is \emph{integral invariant}} \end{equation} For {\it infinitesimal} $t\equiv \epsilon$ we have \begin{equation} \label{odtec1} \int_{\Phi_\epsilon (c)} \alpha = \int_c \alpha + \epsilon \int_c \mathcal L_v\alpha \end{equation} (plus, of course, higher order terms in $\epsilon$; here $\mathcal L_v$ is the \emph{Lie derivative} along $v$).
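As a concrete sanity check of the expansion (\ref{odtec1}), the following sympy sketch (it is ours, not part of the original text; the rotation field and the chosen 1-form are illustrative assumptions) takes $v = (-y, x)$ on $\mathbb R^2$ and $\alpha = -y\,dx + x\,dy$ and finds $\mathcal L_v\alpha = 0$, so the first-order term in (\ref{odtec1}) vanishes and the integrals over $c$ and $\Phi_\epsilon(c)$ agree:

```python
# A minimal sympy sketch (illustrative names, not from the paper): for the
# rotation field v = (-y, x) on R^2, the 1-form alpha = -y dx + x dy is
# Lie-invariant, L_v alpha = 0, so integrals of alpha over chains carried
# by the flow are conserved.
import sympy as sp

x, y = sp.symbols('x y')
coords = (x, y)
v = (-y, x)            # components of the vector field v
alpha = (-y, x)        # components alpha_i of the 1-form alpha

def lie_derivative_1form(v, a, coords):
    """(L_v a)_i = v^j d_j a_i + a_j d_i v^j (standard coordinate formula)."""
    n = len(coords)
    return tuple(
        sum(v[j] * sp.diff(a[i], coords[j]) for j in range(n))
        + sum(a[j] * sp.diff(v[j], coords[i]) for j in range(n))
        for i in range(n)
    )

L = lie_derivative_1form(v, alpha, coords)
print([sp.simplify(c) for c in L])   # -> [0, 0]: the integral is invariant
```

The helper implements the standard coordinate formula for the Lie derivative of a 1-form, so any other field--form pair can be substituted to test (non-)invariance.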
Since (\ref{integrinvariant}) is to be true {\it for each} $c$, we get from (\ref{odtec1}) \begin{equation} \label{podmienka2} \mathcal L_v\alpha = 0 \end{equation} This is the \emph{differential version} of the statement (\ref{integrinvariant}). In specific situations, it may be enough that some integral only behaves invariantly when restricted to an important sub-class of $k$-chains, namely $k$-\emph{cycles}. These are chains whose boundary vanishes: \begin{equation} \label{defcyklu} \partial c = 0 \hskip 2cm c = \ \text{\emph{cycle}} \end{equation} If this is the case, the condition (\ref{podmienka2}) is overly strong. It can be weakened to \begin{equation} \label{podmienka3} \mathcal L_v\alpha = d\tilde \beta \end{equation} for some form $\tilde \beta$. (The form $\mathcal L_v\alpha$ may just be \emph{exact} rather than vanish.) Indeed, in one direction, Eqs. (\ref{defcyklu}) and (\ref{podmienka3}) then give \begin{equation} \label{podmienka33} \int_c \mathcal L_v\alpha = \int_c d\tilde \beta = \int_{\partial c} \tilde \beta = 0 \end{equation} so that (\ref{integrinvariant}) \emph{is} fulfilled. In the opposite direction, if (\ref{integrinvariant}) is to be true for \emph{each cycle}, the form under the last integral sign in (\ref{odtec1}) has to be \emph{exact} due to the \emph{de Rham theorem}, so (\ref{podmienka3}) holds. According to whether the integrals of forms are invariant for arbitrary $k$-chains or just for $k$-cycles, integral invariants are known as either \emph{absolute} invariants (for any $k$-chain) or \emph{relative} ones (just for $k$-cycles; notice that $\mathcal L_v(d\alpha) = 0$ holds from (\ref{podmienka3}), so whenever $\alpha$ gives a relative invariant, $d\alpha$ already gives an \emph{absolute} one). Now, let us see what we can say about \emph{relative} integral invariants.
The condition (\ref{podmienka3}) may be rewritten, using Cartan's formula \begin{equation} \label{cartanmagic} i_vd+di_v = \mathcal L_v \end{equation} as \begin{equation} \label{ivdalphajeexaktna} i_vd\alpha = d\beta \end{equation} (where $\beta = \tilde \beta - i_v\alpha$). Therefore the following main statement on relative invariants is true: \begin{equation} \label{jetotoiste2} i_vd\alpha = d\beta \hskip .5cm \Leftrightarrow \hskip .5cm \oint_c\alpha = \ \ \text{\emph{relative} invariant} \end{equation} So we can identify the presence of a relative integral invariant \emph{in differential version}: on phase space $(M,v)$, we find a form $\alpha$ such that the l.h.s. of Eq. (\ref{ivdalphajeexaktna}) is exact. \subsection{\label{subsec:eulertimeindependent}Stationary Euler equation} The \emph{Euler equation} for an ideal (inviscid) fluid \begin{equation} \label{eulernorm} \rho \left(\partial_t \bold v + (\bold v \cdot \boldsymbol \nabla) \bold v \right) = - \boldsymbol \nabla p -\rho \boldsymbol \nabla \Phi \end{equation} (see, e.g. Refs.~\onlinecite{LandauLifshitz1987, Batchelor2002}) reduces, for \emph{stationary} flow, to \begin{equation} \label{eulerstatnorm1} (\bold v \cdot \boldsymbol \nabla) \bold v = - \frac{1}{\rho} \boldsymbol \nabla p - \boldsymbol \nabla \Phi \end{equation} Here the mass density $\rho$, velocity field $\bold v$, pressure $p$ and potential $\Phi$ of the volume force field are functions of $\bold r$. In general, the equation of state of the fluid may be written as \begin{equation} \label{generalfluid} p=p(\rho, s) \hskip 1cm \text{general fluid} \end{equation} where $s$ is (specific) entropy (i.e. entropy per unit mass).
However, one can think of an important model, in which the pressure depends \emph{on} $\rho$ \emph{alone}: \begin{equation} \label{barotropicfluid} p=p(\rho) \hskip 1.3cm \text{\emph{barotropic} fluid} \end{equation} In this case, there exists $P(\bold r)$, called the specific \emph{enthalpy}, such that \begin{equation} \label{defP} \frac{1}{\rho} \boldsymbol \nabla p = \boldsymbol \nabla P \end{equation} and (\ref{eulerstatnorm1}) takes the form \begin{equation} \label{eulerstatnorm2} (\bold v \cdot \boldsymbol \nabla) \bold v = - \boldsymbol \nabla (P + \Phi) \end{equation} Now it turns out (check in Cartesian coordinates) that Eq. (\ref{eulerstatnorm2}) may be rewritten in the form of Eq. (\ref{ivdalphajeexaktna}) for the particular choice $\alpha = \tilde v$ and $\beta = -\mathcal E$, i.e. as \begin{equation} \label{eulerstatform3} i_v d\tilde v = - d\mathcal E \hskip 1cm \text{\emph{Euler equation}} \end{equation} (stationary and barotropic), where \begin{equation} \label{deftildev} \tilde v := \bold v \cdot d\bold r \hskip .5cm (\equiv \flat_g v \ \equiv g(v, \ \cdot \ )) \end{equation} is the covector (= 1-form) associated with the velocity vector field $v = v^i\partial_i$ in terms of ``lowering of index'' ($\equiv \flat_g$ procedure) and \begin{equation} \label{deffunctionE} \mathcal E := v^2/2 + P + \Phi \hskip 1cm \text{\emph{Bernoulli function}} \end{equation} The \emph{vorticity 2-form} $d\tilde v$, present in Eq. (\ref{eulerstatform3}), is of crucial importance for us. We have \begin{eqnarray} \label{velocityform} \tilde v &=& \bold v \cdot d\bold r \\ \label{vorticityform} d\tilde v &=& (\curl \bold v) \cdot d\bold S \ \equiv \ \boldsymbol \omega \cdot d\bold S \\ \label{vortexlinesexpr} i_{\gamma'}d\tilde v &=& (\boldsymbol \omega \times {\bold r}') \cdot d\bold r \end{eqnarray} (see, e.g.
\S 8.5 in Ref.~\onlinecite{Fecko2006}) so that, first, $d\tilde v$ indeed encodes complete information about the vorticity vector field $\boldsymbol \omega$ and, second, the equation \begin{equation} \label{vortexlinesexpr2} i_{\gamma'}d\tilde v =0 \hskip 1cm \text{\emph{vortex line equation}} \end{equation} expresses the fact that $\gamma (\lambda) \leftrightarrow \bold r(\lambda)$ corresponds to a vortex line (the prime symbolizes the tangent vector w.r.t. the parameter $\lambda$; the particular parametrization is, however, irrelevant). The form (\ref{eulerstatform3}) of the Euler equation turns out to be very convenient. Short illustration: 1. Application of $i_v$ on both sides gives \begin{equation} \label{bernoulli1} v\mathcal E =0 \hskip 1cm \text{\emph{Bernoulli equation}} \end{equation} (saying that $\mathcal E$ is constant along \emph{stream}lines). 2. Application of $i_{\gamma'}$ on both sides (where $\gamma'$ is from (\ref{vortexlinesexpr2})) gives \begin{equation} \label{bernoulli11} \gamma'\mathcal E =0 \end{equation} (saying that $\mathcal E$ is constant along \emph{vortex}-lines). 3. Putting $d\tilde v =0$ (\emph{irrotational} flow) leads to \begin{equation} \label{bernoulli2} \mathcal E = \ \text{const.} \end{equation} (a version of the Bernoulli equation saying that $\mathcal E$ is, then, constant throughout the fluid). 4. Just looking at (\ref{jetotoiste2}), (\ref{eulerstatform3}) and (\ref{deftildev}) one obtains \begin{equation} \label{Kelvin_stat1} \oint_c \bold v \cdot d\bold r = \text{const.} \hskip 1.5cm \text{\emph{Kelvin's theorem}} \end{equation} (the velocity circulation is a conserved quantity). 5. Application of $d$ on both sides gives the \emph{Helmholtz theorem} (see the next Section \ref{subsec:helmholtztimeindependent}).
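Point 1 above can be verified symbolically for a concrete flow. The following sympy sketch (our illustration, not an example from the text) takes the rigid rotation $\bold v = (-\omega y, \omega x, 0)$; the stationary Euler equation then forces $P + \Phi = \omega^2(x^2+y^2)/2 + \text{const}$, and one checks that $v\mathcal E = 0$, i.e. the Bernoulli function is constant along the (circular) streamlines:

```python
# A small sympy check (the rigid-rotation flow is our illustrative choice,
# not taken from the text): for v = (-w*y, w*x, 0) the stationary Euler
# equation forces P + Phi = w^2 (x^2+y^2)/2 + const, and the Bernoulli
# function E = v^2/2 + P + Phi then satisfies v(E) = 0.
import sympy as sp

x, y, z, w = sp.symbols('x y z w')
X = (x, y, z)
v = sp.Matrix([-w*y, w*x, 0])

# convective term (v . grad) v, componentwise
conv = sp.Matrix([sum(v[j]*sp.diff(v[i], X[j]) for j in range(3))
                  for i in range(3)])

PplusPhi = w**2*(x**2 + y**2)/2          # solves (v.grad)v = -grad(P+Phi)
grad = lambda f: sp.Matrix([sp.diff(f, s) for s in X])
assert sp.simplify(conv + grad(PplusPhi)) == sp.zeros(3, 1)

E = v.dot(v)/2 + PplusPhi                 # Bernoulli function
vE = sum(v[i]*sp.diff(E, X[i]) for i in range(3))
print(sp.simplify(vE))   # -> 0, i.e. E is constant along streamlines
```

Here $\mathcal E = \omega^2(x^2+y^2)$ depends only on the distance from the axis, so it is indeed constant on each circular streamline.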
\subsection{\label{subsec:helmholtztimeindependent}Helmholtz statement on vortex lines - stationary case} Application of $d$ on both sides of (\ref{eulerstatform3}) and using (\ref{cartanmagic}) results in \begin{equation} \label{vorticityisinv1} \mathcal L_v (d\tilde v) = 0 \end{equation} This is, however, nothing but the infinitesimal version of the statement \begin{equation} \label{vorticityisinv2} \Phi_t^* (d\tilde v) = d\tilde v \hskip 2cm \Phi_t \leftrightarrow v \end{equation} or, in words, that the vorticity 2-form $d\tilde v$ is \emph{invariant} w.r.t. the flow of the fluid. Now, we can define a \emph{distribution} $\mathcal D$ in terms of $d\tilde v$: \begin{equation} \label{distributiondef} \mathcal D := \{ \text{vectors} \ w \ \text{such that} \ \ i_w d\tilde v = 0 \ \ \text{holds} \} \end{equation} Due to the Frobenius criterion the distribution is integrable. Indeed, let $w_1,w_2\in \mathcal D$. Then, because of the identity \begin{equation} \label{identity1} i_{[w_1,w_2]} = [\mathcal L_{w_1}, i_{w_2}] \equiv \mathcal L_{w_1} i_{w_2} - i_{w_2} \mathcal L_{w_1} \end{equation} (see, e.g., Ch.5.Ex.21 in Ref.~\onlinecite{CrampinPirani1986} or \S 6.2 in Ref.~\onlinecite{Fecko2006}) plus (\ref{cartanmagic}) one immediately sees that \begin{equation} \label{Disintegrable} i_{[w_1,w_2]} d\tilde v = 0 \end{equation} i.e. $[w_1,w_2]\in \mathcal D$, too. So $\mathcal D$ is integrable. From (\ref{vortexlinesexpr}) and (\ref{vortexlinesexpr2}) we see that the distribution is 1-dimensional (at those points where $\boldsymbol \omega \neq 0$) and that its integral surfaces are exactly vortex lines. But this means that the Helmholtz statement is true: because of (\ref{vorticityisinv2}) and (\ref{distributiondef}) the distribution $\mathcal D$ is invariant w.r.t. $\Phi_t \leftrightarrow v$ and, consequently, its integral surfaces (i.e. vortex lines) are invariant w.r.t. $\Phi_t \leftrightarrow v$, too.
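The 1-dimensionality of $\mathcal D$ and its direction can be made concrete. In the following sympy sketch (the ABC flow is our test field, not one used in the text), the vorticity 2-form is represented by the antisymmetric matrix $F_{ij} = \partial_i v_j - \partial_j v_i$ and one checks that $\boldsymbol\omega = \operatorname{curl} \bold v$ annihilates it, $i_{\boldsymbol\omega}d\tilde v = 0$, in accordance with (\ref{vortexlinesexpr}): the kernel of $d\tilde v$ is spanned by the vortex-line direction.

```python
# Sketch (sympy; the ABC flow is an illustrative test field): the vorticity
# 2-form d~v has components F_ij = d_i v_j - d_j v_i, and omega = curl v
# annihilates it, i_omega d~v = 0, so the 1-d kernel distribution D is
# spanned by the vortex-line direction.
import sympy as sp

x, y, z, A, B, C = sp.symbols('x y z A B C')
X = (x, y, z)
v = sp.Matrix([A*sp.sin(z) + C*sp.cos(y),
               B*sp.sin(x) + A*sp.cos(z),
               C*sp.sin(y) + B*sp.cos(x)])        # ABC flow

F = sp.Matrix(3, 3, lambda i, j: sp.diff(v[j], X[i]) - sp.diff(v[i], X[j]))
omega = sp.Matrix([sp.diff(v[2], y) - sp.diff(v[1], z),
                   sp.diff(v[0], z) - sp.diff(v[2], x),
                   sp.diff(v[1], x) - sp.diff(v[0], y)])   # curl v

# (i_omega d~v)_j = omega^i F_ij = (F^T omega)_j
print(sp.simplify(F.T * omega))   # -> Matrix([[0], [0], [0]])
```

In 3D this holds for \emph{any} velocity field, since $F_{ij} = \varepsilon_{ijk}\omega_k$ and $\varepsilon_{ijk}\omega_j\omega_k = 0$; the ABC flow just makes the cancellation nontrivial to watch.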
\subsection{\label{subsec:helmholtztubestimeindependent}Helmholtz statement on vortex tubes - stationary case} This statement is purely kinematical: it concerns the concept of vorticity itself. It holds for \emph{arbitrary} velocity fields $v$, even those which do not satisfy the equations of motion (and so cannot actually occur). \begin{figure}[tb] \begin{center} \includegraphics[scale=0.30]{fig1a.eps} \caption{Vortex tube $\Sigma$ is made of vortex lines emanating from (all points of) the circuit $c_1 = \partial S_1$ and entering the circuit $c_2 = \partial S_2$. Equation (\ref{jeabsolutny}) says that the strength (vorticity flux) for the cross-section $S_1$ is the same as the strength for the cross-section $S_2$.} \label{helmholtztube} \end{center} \end{figure} Let $u$ be a vector \emph{field} defined by $i_ud\tilde v = 0$, i.e. a field tangent, at each point, to the vortex line passing through the point (see Eq. (\ref{vortexlinesexpr2})). Notice that any vortex line may be created from a single point of it by the flow $\Phi_s$ of $u$ and the same holds (using the evident freedom $u\mapsto fu$, $f$ being a function) for the vortex \emph{tube} bounded by fixed circuits $c_1$ and $c_2$ (boundaries of fixed cross-sections $S_1$ and $S_2$, see Fig. \ref{helmholtztube}). Consider the artificial (!) ``dynamics'' given by $u$. Then the equation $i_ud\tilde v = 0$ may be regarded as a particular case of the basic equation (\ref{jetotoiste2}) from the general theory of Poincar\'e integral invariants (with $v\mapsto u$, $\alpha \mapsto \tilde v$ and $\beta \mapsto 0$). So, \begin{equation} \label{jerelativny} \oint_c \tilde v \equiv \oint_c \bold v \cdot d\bold r \hskip .5cm \text{is a relative invariant} \end{equation} and, consequently, \begin{equation} \label{jeabsolutny} \int_S d\tilde v \equiv \int_S \boldsymbol \omega \cdot d\bold S \hskip .5cm \text{is an absolute invariant,} \end{equation} both of them w.r.t.
our ``\emph{artificial} dynamics'' generated by $u$ (as opposed to the \emph{real dynamics} generated by the fluid velocity field $v$). Then, however, Eq. (\ref{jeabsolutny}) exactly says that the vorticity flux does not depend on the particular choice of cross-section $S$ cutting the tube. Alternatively, one can use the proof of Eq. (\ref{jetotoiste4}) given in Appendix \ref{appproofintinv} (with $\xi \mapsto u$ and $\sigma \mapsto \tilde v$). \section{\label{sec:timedependent}Time-dependent flow} \subsection{\label{subsec:cartan}Cartan integral invariants} Cartan proposed, as a first step, to study the dynamics given in (\ref{dynamicsgivenbyv}) and (\ref{defphasespace}) on $M \times \mathbb R$ (\emph{extended} phase space; a \emph{time} coordinate is added) rather than on $M$. Using the natural projection \begin{equation} \label{projectiononm} \pi:M \times \Bbb R \to M \hskip .4cm (m,t)\mapsto m \hskip .4cm (x^i,t)\mapsto x^i \end{equation} the forms $\alpha$ and $\beta$ (from the Poincar\'e theory) may be pulled back from $M$ onto $M \times \mathbb R$ and then combined into a single $k$-form \begin{equation} \label{definiciasigma1} \sigma = \hat \alpha + dt \wedge \hat \beta \end{equation} (Here we denote $\hat \alpha = \pi^*\alpha$ and $\hat \beta = \pi^*\beta$.) In a similar way, define a vector field \begin{equation} \label{definiciaxi1} \xi = \partial_t +v \end{equation} Its flow clearly consists of the flow $\Phi_t \leftrightarrow v$ on the $M$ factor combined with the trivial lapsing of time in the $\mathbb R$ factor. Now a simple check (for which Appendix \ref{appdecomp} might come in handy) reveals that the equation \begin{equation} \label{zakladnarovnica1} i_\xi d\sigma = 0 \end{equation} is equivalent to (\ref{ivdalphajeexaktna}).
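A familiar special case (classical Hamiltonian mechanics, mentioned in the Introduction) can serve as a test of Eq. (\ref{zakladnarovnica1}). The sympy sketch below (our own check, with a generic Hamiltonian $H(q,p)$) takes $\sigma = p\,dq - H\,dt$ on the extended phase space with coordinates $(q,p,t)$ and $\xi = \partial_t + X_H$, and verifies $i_\xi d\sigma = 0$ componentwise:

```python
# Sympy sketch of the classic Poincare-Cartan example (ours, for a generic
# Hamiltonian H(q,p)): sigma = p dq - H dt, xi = d_t + X_H, then
# i_xi d(sigma) = 0, which is the basic equation with alpha = p dq, beta = -H.
import sympy as sp

q, p, t = sp.symbols('q p t')
H = sp.Function('H')(q, p)                 # generic Hamiltonian
Hq, Hp = sp.diff(H, q), sp.diff(H, p)

# components of d(sigma) = dp ^ dq - Hq dq ^ dt - Hp dp ^ dt as an
# antisymmetric matrix O_ij in coordinates (q, p, t)
O = sp.Matrix([[0,  -1, -Hq],
               [1,   0, -Hp],
               [Hq, Hp,   0]])
xi = sp.Matrix([Hp, -Hq, 1])               # (qdot, pdot, 1), Hamilton's eqs

# (i_xi d sigma)_j = xi^i O_ij = (O^T xi)_j
print(sp.simplify(O.T * xi))               # -> Matrix([[0], [0], [0]])
```

So the conservation of $\oint_c (p\,dq - H\,dt)$ along tubes of solutions is literally the hydrodynamical statement with $(M, v)$ replaced by phase space and the Hamiltonian vector field.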
And the main statement (\ref{jetotoiste2}) takes the form \begin{equation} \label{jetotoiste4} i_\xi d\sigma = 0 \hskip .5cm \Leftrightarrow \hskip .5cm \oint_c\sigma = \ \ \text{\emph{relative} invariant} \end{equation} Here the meaning of the r.h.s. of (\ref{jetotoiste4}) is as follows: take a cycle $c_1$ located in the hyper-plane $t=t_1$ and its image $c_2$ w.r.t. the flow of $\xi$ (it is located in the hyper-plane $t=t_2$). Then the integrals of $\sigma$ over $c_1$ and $c_2$ give the same number. (Notice that the $dt \wedge \hat \beta$ part of $\sigma$ does not contribute, since $dt$ vanishes on the hyper-planes.) So, indeed, statements (\ref{jetotoiste2}) and (\ref{jetotoiste4}) are, in this interpretation, equivalent. The first \emph{new} result by Cartan (w.r.t. Poincar\'e) is the observation that a more general interpretation of (\ref{jetotoiste4}) is possible. Namely, take \emph{any} two cycles in $M \times \mathbb R$ which encircle a common \emph{tube of solutions} (here ``solutions'' mean integral curves of $\xi$, i.e. solutions of the dynamics as seen from $M \times \mathbb R$). Then, \emph{still}, the integrals of $\sigma$ over $c_1$ and $c_2$ give the same number. See a proof in Appendix \ref{appproofintinv}. Cartan's further generalization, however, is much more interesting for us. Recall that (\ref{definiciasigma1}) might also be regarded as a decomposition of the \emph{most general} $k$-form $\sigma$ on $M \times \mathbb R$, see Appendix \ref{appdecomp}. In this case, $\hat \alpha$ and $\hat \beta$ need not be obtained by pull-back from $M$. Rather, they are the most general \emph{spatial} forms on $M \times \mathbb R$. One easily sees that, in comparison with just pull-backs, they may be \emph{time-dependent}, i.e. it \emph{may happen} that \begin{equation} \label{mayhappen} \mathcal L_{\partial_t}\hat \alpha \neq 0 \hskip 1cm \mathcal L_{\partial_t}\hat \beta \neq 0 \end{equation} (In coordinate presentation, their \emph{components} may depend on time.)
Recall that the proof of (\ref{jetotoiste4}) from Appendix \ref{appproofintinv} did not use any details of the decomposition. The structure of the equation (\ref{zakladnarovnica1}) is all one needs. Notice, however, that the equivalence of (\ref{zakladnarovnica1}) and (\ref{ivdalphajeexaktna}) is no longer true when (\ref{mayhappen}) holds. Instead, one easily computes (with the help of Appendix \ref{appdecomp}) that \begin{equation} \label{isequivalent} i_\xi d\sigma = 0 \hskip .7cm \Leftrightarrow \hskip .7cm {\mathcal L}_{\partial_t} \hat \alpha +i_v\hat d \hat \alpha = \hat d\hat \beta \end{equation} (the term ${\mathcal L}_{\partial_t} \hat \alpha$ is new). So, the equation \begin{equation} \label{ixidesigma1} {\mathcal L}_{\partial_t} \hat \alpha +i_v\hat d \hat \alpha = \hat d\hat \beta \end{equation} is \emph{the} equation that the \emph{time-dependent} forms $\hat \alpha$ and $\hat \beta$ have to satisfy in order that the integral of $\sigma$ be a relative integral invariant. \subsection{\label{subsec:eulertimedependent}Non-stationary Euler equation} Let us retell Cartan's results from the last section in the context of hydrodynamics, i.e. for the particular choice (see Eq. (\ref{eulerstatform3})) \begin{equation} \label{definiciasigmahydro} \sigma = \hat v - \mathcal E dt \end{equation} where, in usual coordinates $(\bold r,t)$ on $E^3\times \mathbb R$, \begin{equation} \label{definiciahatv} \hat v := \bold v \cdot d\bold r \equiv \bold v (\bold r,t) \cdot d\bold r \end{equation} From (\ref{isequivalent}) we get \begin{equation} \label{hydro1} i_\xi d\sigma = 0 \hskip .7cm \Leftrightarrow \hskip .7cm {\mathcal L}_{\partial_t} \hat v +i_v\hat d \hat v = - \hat d\mathcal E \end{equation} One easily checks (e.g. in Cartesian coordinates $(\bold r,t)$) that \begin{equation} \label{hydro2} {\mathcal L}_{\partial_t} \hat v +i_v\hat d \hat v = - \hat d\mathcal E \end{equation} is nothing but the complete, time-dependent, Euler equation (\ref{eulernorm}).
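The check mentioned above (``One easily checks (e.g. in Cartesian coordinates)'') amounts to the componentwise identity $(i_v \hat d\hat v)_j = \left((\bold v\cdot\boldsymbol\nabla)\bold v\right)_j - \partial_j(v^2/2)$, which turns ${\mathcal L}_{\partial_t}\hat v + i_v\hat d\hat v = -\hat d\mathcal E$ with $\mathcal E = v^2/2 + P + \Phi$ into (\ref{eulernorm}). A small sympy sketch (ours, with generic velocity components) verifies the identity:

```python
# Symbolic check (sympy, generic components u_i(x,y,z,t)): in Cartesian
# coordinates (i_v d^v^)_j = ((v.grad)v)_j - d_j(v^2/2), so the gradient of
# the kinetic term in E cancels and the classical Euler equation results.
import sympy as sp

x, y, z, t = sp.symbols('x y z t')
X = (x, y, z)
u = [sp.Function(f'u{i}')(x, y, z, t) for i in range(3)]   # velocity components

# (i_v d^v^)_j = u^i (d_i u_j - d_j u_i)
ivdv = [sum(u[i]*(sp.diff(u[j], X[i]) - sp.diff(u[i], X[j])) for i in range(3))
        for j in range(3)]

conv = [sum(u[i]*sp.diff(u[j], X[i]) for i in range(3)) for j in range(3)]  # (v.grad)v
kin = [sp.diff(sum(c**2 for c in u)/2, X[j]) for j in range(3)]             # d_j(v^2/2)

diffs = [sp.simplify(ivdv[j] - (conv[j] - kin[j])) for j in range(3)]
print(diffs)   # -> [0, 0, 0]
```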
Therefore the time-dependent Euler equation may also be written in a remarkably succinct form as \begin{equation} \label{hydro3} i_\xi d\sigma = 0 \hskip 1cm \text{\emph{Euler equation}} \end{equation} The form (\ref{hydro3}) of the Euler equation turns out to be very convenient for analyzing some of its consequences. Two examples: 1. Just looking at (\ref{jetotoiste4}), (\ref{hydro3}) and (\ref{definiciasigmahydro}) one obtains \begin{equation} \label{Kelvin_stat2} \oint_c \bold v \cdot d\bold r = \text{const.} \hskip 1.5cm \text{\emph{Kelvin's theorem}} \end{equation} (the two loops $c_1$ and $c_2$ are usually in constant-time hyper-planes $t=t_1$ and $t=t_2$). 2. Application of $d$ on both sides gives the \emph{Helmholtz theorem} very quickly (see the next Section \ref{subsec:helmholtztimedependent}). \subsection{\label{subsec:helmholtztimedependent}Helmholtz statement on vortex lines - general case} Application of $d$ on both sides of (\ref{hydro3}) and using formula (\ref{cartanmagic}) results in \begin{equation} \label{dsigmajeinvar1} \mathcal L_{\xi}(d\sigma) = 0 \end{equation} This is, however, nothing but the infinitesimal version of the statement \begin{equation} \label{dsigmajeinvar2} \Phi_t^*(d\sigma) = d\sigma \hskip 2cm \Phi_t \leftrightarrow \xi \end{equation} or, in words, that $d\sigma$ is \emph{invariant} w.r.t. the flow of the fluid (regarded as the flow of $\xi$ on $M\times \mathbb R$). Now we want to see an integrable distribution behind vortex lines again. Define the distribution $\mathcal D$ in terms of annihilation of as many as \emph{two} exact forms: \begin{equation} \label{newdistributiondef} \mathcal D \hskip .4cm \leftrightarrow \hskip .4cm i_w d\sigma = 0 = i_w dt \end{equation} By repeating the reasoning from (\ref{identity1}) and (\ref{Disintegrable}) one concludes that $\mathcal D$ is \emph{integrable}. The distribution $\mathcal D$ is, however, also \emph{invariant} w.r.t. the flow of the fluid.
(Because of (\ref{dsigmajeinvar1}) and the trivial fact that $\mathcal L_{\xi}(dt) = 0$.) So, integral submanifolds (surfaces) \emph{move with the fluid}. What do they look like? Although perhaps not visible at first sight, they are nothing but vortex lines. Indeed, making use of the general formula (\ref{dongeneral}) from Appendix \ref{appdecomp} and the form (\ref{hydro2}) of the Euler equation we can write \begin{eqnarray} \label{dsigmavzdyvseob2} d\sigma &=& \hat d \hat v + dt \wedge ({\mathcal L}_{\partial_t} \hat v +\hat d \mathcal E) \hskip .8cm \text{always} \\ \label{dsigmanariesenivseob2} &=& \hat d \hat v +dt\wedge (-i_v\hat d\hat v) \hskip 1.4cm \text{\emph{on solutions}} \end{eqnarray} Let us now contemplate Eq. (\ref{newdistributiondef}). It says that the distribution consists of \emph{spatial} vectors (i.e. those with vanishing \emph{time} component, therefore annihilating $dt$) which, in addition, annihilate $d\sigma$. Let $w$ be an arbitrary \emph{spatial} vector. Denote, for a while, $i_w\hat d \hat v =:\hat b$ (it is a \emph{spatial} 1-form). Then, from (\ref{dsigmanariesenivseob2}), \begin{equation} \label{iwdsigma} i_wd\sigma = \hat b -dt\wedge i_v\hat b \end{equation} from which we immediately get \begin{equation} \label{iwdsigmaiszero} i_w(d\sigma) = 0 \hskip 1cm \Leftrightarrow \hskip 1cm \hat b \equiv i_w\hat d \hat v = 0 \end{equation} This says that we can, alternatively, describe the distribution $\mathcal D$ as consisting of those \emph{spatial} vectors which, in addition, annihilate $\hat d \hat v$ (rather than $d\sigma$, as it is expressed in the definition (\ref{newdistributiondef})). But Eqs.
(\ref{definiciahatv}) and (\ref{vorticityform}) show that \begin{equation} \label{vorticityform2} \hat d \hat v = \boldsymbol \omega \cdot d\bold S \equiv \boldsymbol \omega (\bold r,t) \cdot d\bold S \end{equation} so that $\hat d \hat v$ is nothing but the \emph{vorticity 2-form} and, therefore, the integral surfaces of $\mathcal D$ may indeed be identified with vortex lines. So, the Helmholtz statement is also true in the general, time-dependent, case. (Notice that the system of vortex lines looks, in general, different at different times. This is because its generating object, the vorticity 2-form $\hat d \hat v$, depends on time.) \subsection{\label{subsec:helmholtztubestimedependent}Helmholtz statement on vortex tubes - general case} A vortex tube is a genuinely spatial concept and the statement concerns a purely kinematical property of \emph{any} velocity field at a single time (see the beginning of Sec. \ref{subsec:helmholtztubestimeindependent}). So, no (change of) dynamics has any influence on it. If the statement were true before, it remains true now. \section{\label{sec:surfaces}Generalization to surfaces} In this section we present details concerning the surfaces mentioned in the Introduction. By now it is easy, since we already know all the relevant ideas from the hydrodynamics parts. All symbols which occur here refer to objects mentioned in the \emph{general theory of integral invariants} (due to Poincar\'e and Cartan, respectively, i.e. objects from Sections \ref{subsec:poincare} and \ref{subsec:cartan}) rather than to their special instances used in hydrodynamics (including the $n$-dimensional case). \subsection{\label{subsec:surfacespoincare}Time-independent (Poincar\'e) case} We apply $d$ on both sides of (\ref{ivdalphajeexaktna}) or (\ref{podmienka3}) and get \begin{equation} \label{Lvdalphanula} \mathcal L_v (d\alpha) = 0 \end{equation} So, the $(k+1)$-form $d\alpha$ is invariant w.r.t. the flow generated on $M$ by $v$.
Now, define a distribution $\mathcal D$ given by annihilation of the form $d\alpha$: \begin{equation} \label{distributiondalpha} \mathcal D := \{ \text{vectors} \ w \ \text{such that} \ \ i_w d\alpha = 0 \ \ \text{holds} \} \end{equation} Its dimension is therefore \begin{eqnarray} \label{dimofDdalpha1} \text{dim} \ \mathcal D &=& \text{dim} \ M - \text{rank} \ d\alpha \\ \label{dimofDdalpha2} &\le& \text{dim} \ M - (k+1) \end{eqnarray} (if $\alpha$ is a $k$-form, see Appendix \ref{appdimensionfofD}; the rank of $d\alpha$ is expected to be constant). The distribution $\mathcal D$ has the following two properties. First, it is \emph{invariant} w.r.t. the flow generated on $M$ by $v$. (This is because of (\ref{Lvdalphanula}).) Second, with the help of (\ref{identity1}) and (\ref{cartanmagic}) we see that \begin{equation} \label{Dalphaisintegrable1} i_{w_1} d\alpha = 0 = i_{w_2} d\alpha \hskip .5cm \Rightarrow \hskip .5cm i_{[w_1,w_2]} d\alpha = 0 \end{equation} i.e. \begin{equation} \label{Dalphaisintegrable2} w_1, w_2 \in \mathcal D \hskip .5cm \Rightarrow \hskip .5cm [w_1, w_2] \in \mathcal D \end{equation} So, due to the Frobenius criterion, $\mathcal D$ is \emph{integrable}. Putting the two properties together, we see that \emph{integral surfaces} (submanifolds) of the distribution \emph{move with the} (abstract) \emph{fluid}, exactly in the spirit of the Helmholtz theorem on vortex lines. (Notice that this behavior equally holds for any surface of \emph{smaller} dimension which resides within the maximal-dimension one.) \begin{figure}[tb] \begin{center} \includegraphics[scale=0.30]{fig2a.eps} \caption{Higher-dimensional analog of vortex tube, $\Sigma$. It is bounded by a $k$-dimensional boundary $c_1$ of a transversal $(k+1)$-dimensional surface $S_1$ from the left and similarly by $c_2 \equiv \partial S_2$ from the right.
Here $S_2 \equiv \Phi_s(S_1)$ for some $s$.} \label{c1c2encircleSigma2} \end{center} \end{figure} Now consider a vector \emph{field} $W\in \mathcal D$ (so it satisfies $i_Wd\alpha =0$; this is an analog of the field $w$ directed along vortex lines, discussed in Sec. \ref{subsec:helmholtztubestimeindependent}). Application of its flow $\Phi_s$ to the $k$-dimensional boundary $c_1 \equiv \partial S_1$ of a transversal $(k+1)$-dimensional surface $S_1$ gives a $(k+1)$-dimensional analog of a vortex tube, $\Sigma$ (see Fig.~\ref{c1c2encircleSigma2}). So \begin{equation} \label{boundaryofV} \partial \Sigma = c_1 - c_2 \end{equation} Repeating either the reasoning from Appendix \ref{appproofintinv} or that from Sec. \ref{subsec:helmholtztubestimeindependent} we show that the (analog of the) \emph{strength} of the tube is constant along the tube \begin{equation} \label{generaltubeHelmholtz} \int_{S_1} d\alpha = \int_{S_2} d\alpha \end{equation} This is an analog of the Helmholtz theorem on vortex tubes. \subsection{\label{subsec:surfacescartan}Time-dependent (Cartan) case} We apply $d$ on both sides of (\ref{zakladnarovnica1}) and get \begin{equation} \label{Lxidsigmanula} \mathcal L_{\xi} (d\sigma) = 0 \end{equation} So, the $(k+1)$-form $d\sigma$ is invariant w.r.t. the flow generated on $M\times \mathbb R$ by $\xi$. Now, define a distribution $\mathcal D$ given by \emph{spatial} vectors which annihilate the form $d\sigma$: \begin{equation} \label{distributiondsigmadt} \mathcal D := \{ \text{\emph{spatial} vectors} \ w \ \text{such that} \ \ i_w d\sigma = 0 \} \end{equation} Put another way, it is defined as \begin{equation} \label{newdistributiondef2} w\in \mathcal D \hskip .4cm \Leftrightarrow \hskip .4cm i_w d\sigma = 0 = i_w dt \end{equation} The distribution is \emph{invariant} w.r.t. the flow generated on $M\times \mathbb R$ by $\xi$ (since \emph{both} its generating forms, $d\sigma$ as well as $dt$, are Lie-invariant w.r.t. $\xi$).
In addition, due to the Frobenius criterion, the distribution is \emph{integrable}. (One just applies (\ref{Dalphaisintegrable1}) to \emph{both} $d\sigma$ and $dt$.) Putting the two properties together, we see that integral submanifolds (surfaces) of the distribution \emph{move with the} (abstract) \emph{fluid} in the spirit of the Helmholtz theorem on vortex lines. Finally, notice that, \emph{on solutions} of Eq. (\ref{zakladnarovnica1}), the distribution $\mathcal D$ generated by the pair of forms $(d\sigma,dt)$ coincides with that generated by the pair $(\hat d\hat \alpha,dt)$. (Just repeat the argumentation in (\ref{dsigmavzdyvseob2}) - (\ref{iwdsigmaiszero}), replacing $\hat v \mapsto \hat \alpha$, $\mathcal E \mapsto -\hat \beta$ and Eq. (\ref{hydro2}) $\mapsto$ Eq. (\ref{ixidesigma1}).) So it consists of \emph{spatial} vectors annihilating $\hat d \hat \alpha$. Therefore, the statement about surfaces moving with the (abstract) fluid here, in Sec. \ref{subsec:surfacescartan}, is a natural generalization (namely to time-\emph{dependent} flow) of the corresponding statement mentioned in Sec. \ref{subsec:surfacespoincare}. Concerning the ``vortex tube'' Helmholtz theorem, it has nothing to do with dynamics and therefore it is trivially true also here (see Sec. \ref{subsec:helmholtztubestimedependent}). \section{\label{sec:conclusions}Conclusions} The main point discussed in this paper is a statement concerning the \emph{general setting} of the theory of \emph{integral invariants} (rather than the ``ideal hydrodynamics on Riemannian manifolds'' or ``higher-dimensional hydrodynamics'' discussed, e.g., in Ref.~\onlinecite{ArnoldKhesin1998} and in the numerous papers mentioned in references therein). Namely, in the theory of integral invariants, in both the time-independent version of Poincar\'e and the extended, time-dependent version of Cartan, one can find specific \emph{surfaces} which \emph{move with the} (abstract) ``\emph{fluid}''.
When the theory is applied to the 3D hydrodynamics of an ideal and barotropic fluid subject only to potential forces, the surfaces become 1-dimensional and reduce to the well-known and useful concept of \emph{vortex lines}. Their property of moving with the fluid (now the real one) becomes the celebrated \emph{Helmholtz theorem} from 1858. So, in this sense, the surfaces may be regarded as a generalization of the vortex lines. One can also define, in the general higher-dimensional case, an analog of the hydrodynamical concept of \emph{vortex tubes} and check that (an analog of) the Helmholtz theorem on the \emph{strength} of the tubes is still true.

\begin{acknowledgments}
I acknowledge support from grant VEGA 1/0985/16.
\end{acknowledgments}
#include "vice.h"

#include <Alert.h>
#include <Application.h>
#include <DirectWindow.h>
#include <FilePanel.h>
#include <Locker.h>
#include <MenuItem.h>
#include <string.h>

#include "vicemenu.h"

extern "C" {
#include "constants.h"
#include "log.h"
#include "mouse.h"
#include "lib.h"
#include "machine.h"
#include "platform.h"
#include "resources.h"
#include "statusbar.h"
#include "ui.h"
#include "ui_file.h"
#include "util.h"
#include "vicewindow.h"
#include "video.h"
#include "videoarch.h"
}

/* #define DEBUG_UI */

#ifdef DEBUG_UI
void print_rect(const char *view, BRect r)
{
    log_debug("%s (Width: %f, Height: %f) (Top: %f, Bottom: %f)", view, r.Width(), r.Height(), r.top, r.bottom);
}
#define DBG_RECT(_x_) print_rect _x_
#define DBG_MSG(_x_) log_debug _x_
#else
#define DBG_RECT(_x_)
#define DBG_MSG(_x_)
#endif

/* FIXME: some stuff we need from the ui module */
extern ViceWindow *windowlist[];
extern int window_count;

void ViceWindow::Update_Menu_Toggles(ui_menu_toggle *toggle_list)
{
    int i, value;
    BMenuItem *item;

    if (!toggle_list) {
        return;
    }

    for (i = 0; toggle_list[i].name != NULL; i++) {
        resources_get_int(toggle_list[i].name, &value);
        if ((item = menubar->FindItem(toggle_list[i].item_id))) {
            item->SetMarked(value ? true : false);
        }
    }
}

void ViceWindow::Update_Menu_Value_Lists(ui_res_value_list *value_list)
{
    int i, j;
    int value, result;
    BMenuItem *item;

    if (!value_list) {
        return;
    }

    for (i = 0; value_list[i].name != NULL; i++) {
        result = resources_get_int(value_list[i].name, &value);
        if (result == 0) {
            for (j = 0; value_list[i].vals[j].item_id != 0; j++) {
                if (value == value_list[i].vals[j].value) {
                    /* the corresponding menu is supposed to be in RadioMode */
                    if ((item = menubar->FindItem(value_list[i].vals[j].item_id))) {
                        item->SetMarked(true);
                    }
                }
            }
        }
    }
}

void ViceWindow::Update_Menu_String_Lists(ui_res_string_list *string_list)
{
    int i, j;
    int result;
    const char *str;
    BMenuItem *item;

    if (!string_list) {
        return;
    }

    for (i = 0; string_list[i].name != NULL; i++) {
        result = resources_get_string(string_list[i].name, &str);
        if (result == 0) {
            for (j = 0; string_list[i].strings[j].item_id != 0; j++) {
                if (!strcasecmp(str, string_list[i].strings[j].string)) {
                    /* the corresponding menu is supposed to be in RadioMode */
                    if ((item = menubar->FindItem(string_list[i].strings[j].item_id))) {
                        item->SetMarked(true);
                    }
                }
            }
        }
    }
}

/* the view for the emulators bitmap */
class ViceView : public BView {
    public:
        ViceView(BRect rect);
        virtual void Draw(BRect rect);
        virtual void MouseDown(BPoint point);
        virtual void MouseUp(BPoint point);
};

ViceView::ViceView(BRect rect)
    : BView(rect, "view", B_FOLLOW_LEFT | B_FOLLOW_TOP, B_WILL_DRAW)
{
}

void ViceView::Draw(BRect rect)
{
    ViceWindow *wnd = (ViceWindow *)Window();

    if (wnd->bitmap && !wnd->use_direct_window) {
        DrawBitmap(wnd->bitmap, rect, rect);
    }
}

/* some hooks for the 1351 mouse emulation */
void ViceView::MouseDown(BPoint point)
{
    BMessage *msg;
    int32 buttons;

    if (!_mouse_enabled) {
        return;
    }

    msg = Window()->CurrentMessage();
    msg->FindInt32("buttons", &buttons);
    if (buttons & B_PRIMARY_MOUSE_BUTTON) {
        mouse_button_left(1);
    }
}

void ViceView::MouseUp(BPoint point)
{
    if (!_mouse_enabled) {
        return;
    }

    mouse_button_left(0);
}

ViceWindow::ViceWindow(unsigned int width, unsigned int height, char const *title)
    : BDirectWindow(BRect(0, 0, 300, 100), title, B_TITLED_WINDOW, B_NOT_ZOOMABLE | B_NOT_RESIZABLE | B_ASYNCHRONOUS_CONTROLS)
{
    BRect r;

    /* create the menubar; key events reserved for the emu */
    menubar = menu_create(machine_class);
    AddChild(menubar);
    DBG_RECT(("menubar", menubar->Frame()));
    menubar_offset = (int)menubar->Frame().Height() + 1;
    SetKeyMenuBar(NULL);

    /* create the File Panel */
    filepanel = new ViceFilePanel(B_OPEN_PANEL, new BMessenger(this), NULL, B_FILE_NODE, false);

    /* create the Save Panel */
    savepanel = new ViceFilePanel(B_SAVE_PANEL, new BMessenger(this), NULL, B_FILE_NODE, false);

    /* the view for the canvas */
    r = Bounds();
    r.top = menubar_offset;
    view = new ViceView(r);
    AddChild(view);
    DBG_RECT(("view", view->Frame()));

    /* bitmap is NULL; will be created by video_canvas_resize() */
    bitmap = NULL;

    /* statusbar is NULL; will be created in Resize() */
    statusbar = NULL;

    /* the canvas is set by video_canvas_create */
    canvas = NULL;

    /* register the window */
    windowlist[window_count++] = this;

    /* stuff for direct drawing */
    fconnected = false;
    fconnectiondisabled = false;
    locker = new BLocker();
    fclip_list = NULL;
    fcliplist_count = 0;

    /* use the resource to initialize stuff */
    resources_get_int("DirectWindow", &use_direct_window);
    if (!SupportsWindowMode() || CheckForHaiku()) {
        use_direct_window = 0;
    }
    resources_set_int("DirectWindow", use_direct_window);

    /* finally display the window */
    if (width > 0 && height > 0) {
        Resize(width, height);
    }
    MoveTo(window_count * 30, window_count * 30);
    Show();
}

ViceWindow::~ViceWindow()
{
    BView *vsid = FindView("vsid");

    fconnectiondisabled = true;
    Hide();
    Sync();

    if (bitmap) {
        delete bitmap;
    }
    if (vsid) {
        RemoveChild(vsid);
        delete vsid;
    }
    RemoveChild(menubar);
    delete menubar;
    RemoveChild(view);
    delete view;
    RemoveChild(statusbar);
    delete statusbar;
    delete filepanel;
    delete savepanel;
    delete locker;
    lib_free(fclip_list);
    fclip_list = NULL;
}

bool ViceWindow::QuitRequested()
{
    /* send an exit request to ui's event loop but don't close the window here */
    BMessage msg;

    msg.what = MENU_EXIT_REQUESTED;
    ui_add_event(&msg);
    return false;
}

void ViceWindow::MessageReceived(BMessage *message)
{
    /* FIXME: sometimes the menubar holds the focus so we have to delete it */
    if (CurrentFocus()) {
        CurrentFocus()->MakeFocus(false);
    }

    ui_add_event(message);

    switch (message->what) {
        default:
            BWindow::MessageReceived(message);
            break;
    }
}

void ViceWindow::Resize(unsigned int width, unsigned int height)
{
    BRect statusbar_frame;

    if (BWindow::Lock()) {
        view->ResizeTo(width - 1, height - 1);
        DBG_RECT(("view after resize", view->Frame()));

        if (statusbar) {
            RemoveChild(statusbar);
            delete statusbar;
            statusbar = NULL;
        }
        statusbar_frame.top = view->Frame().bottom + 1;
        statusbar_frame.bottom = view->Frame().bottom + 67;
        statusbar_frame.left = 0;
        statusbar_frame.right = view->Frame().right;
        statusbar = new ViceStatusbar(statusbar_frame);
        AddChild(statusbar);
        DBG_RECT(("statusbar", statusbar->Frame()));
        ui_statusbar_update();
        DBG_MSG(("statusbar_frame.bottom = %f\n", statusbar_frame.bottom));

        BWindow::ResizeTo(width - 1, statusbar_frame.bottom);
        BWindow::Unlock();
    }
}

void ViceWindow::CreateBitmap(unsigned int width, unsigned int height, unsigned int depth)
{
    color_space use_colorspace;

    if (bitmap) {
        delete bitmap;
        bitmap = NULL;
    }

    switch (depth) {
        case 8:
            use_colorspace = B_CMAP8;
            break;
        case 16:
            use_colorspace = B_RGB16;
            break;
        case 32:
        default:
            use_colorspace = B_RGB32;
    }

    bitmap = new BBitmap(BRect(0, 0, width - 1, height - 1), use_colorspace, false, true);
}

void ViceWindow::DrawBitmap(BBitmap *bitmap, int xs, int ys, int xi, int yi, int w, int h)
{
    if (BWindow::Lock()) {
        view->DrawBitmap(bitmap, BRect(xs, ys, xs + w, ys + h), BRect(xi, yi, xi + w, yi + h));
        BWindow::Unlock();
    }
}

void ViceWindow::DirectConnected(direct_buffer_info *info)
{
    bool isdirty = false;

    if (!fconnected && fconnectiondisabled) {
        return;
    }

    locker->Lock();
    switch (info->buffer_state & B_DIRECT_MODE_MASK) {
        case B_DIRECT_START:
            fconnected = true;
            /* fall through: on connect, also grab the clip list and buffer info */
        case B_DIRECT_MODIFY:
            lib_free(fclip_list);
            fclip_list = NULL;
            fcliplist_count = info->clip_list_count;
            fclip_list = (clipping_rect *)lib_malloc(fcliplist_count * sizeof(clipping_rect));
            if (fclip_list) {
                memcpy(fclip_list, info->clip_list, fcliplist_count * sizeof(clipping_rect));
            }
            fbits = (BYTE *)info->bits;
            fbytes_per_row = info->bytes_per_row;
            fbits_per_pixel = info->bits_per_pixel;
            fbounds = info->window_bounds;
            isdirty = true;
            break;
        case B_DIRECT_STOP:
            fconnected = false;
            break;
    }
    locker->Unlock();

    if (isdirty && use_direct_window && canvas != NULL) {
        video_canvas_refresh_all((struct video_canvas_s *)canvas);
    }
}
\section{Introduction}

Feedback processes play a crucial role in galaxy formation and evolution. In particular, radiation pressure from the continuum absorption and scattering of starlight on dust grains has been proposed as an important mechanism for driving supersonic turbulence in the interstellar medium (ISM), hampering gravitational collapse, and launching large-scale galactic winds in starbursts and rapidly star-forming galaxies. One-dimensional analytic models show that dusty winds can be driven by radiation pressure in rapidly star-forming environments, such as luminous infrared galaxies (LIRGs) and ultraluminous infrared galaxies (ULIRGs) (e.g., \citealt{Thompson05}; \citealt{Murray05}; \citealt{Murray11}; \citealt{Zhang12}). However, these simplified galactic wind models contain some uncertainties. A key question that cannot be addressed by analytic models is how much momentum is coupled between the radiation and the dusty gas. In the single scattering limit, i.e., when the system is optically thick to the UV photons but optically thin to the re-radiated infrared (IR) emission from dust grains, each photon is absorbed or scattered once, and the radiation transfers a momentum flux of $L/c$ to the gas, where $L$ is the luminosity of the radiation source. However, it is uncertain how much momentum is transferred from radiation to gas if the system is optically thick to its own infrared emission. It has been argued that the rate of momentum deposition will never exceed a few times $L/c$ (\citealt{Krumholz09}), or that it approaches $\tau_{\rm IR} L/c$, where $\tau_{\rm IR}\gg1$ is the mean IR optical depth of the system (\citealt{Thompson05}; \citealt{Murray10}; \citealt{Andrews11}; \citealt{Thompson15}). In order to understand the dynamics of the radiation-gas interaction, multidimensional radiation hydrodynamics simulations have been carried out recently.
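For scale, the two limits quoted above bracket the available momentum injection rate. A minimal sketch (the ULIRG-like luminosity and $\tau_{\rm IR}$ below are illustrative assumptions, not values from this paper):

```python
# Momentum injection rate from radiation in the two limiting regimes.
L_sun = 3.839e33          # erg s^-1
c     = 2.9979e10         # cm s^-1
yr    = 3.156e7           # s
M_sun = 1.989e33          # g

L      = 1e12 * L_sun     # assumed ULIRG-like luminosity
tau_IR = 10.0             # assumed mean IR optical depth

pdot_single  = L / c                  # single-scattering limit [dyn]
pdot_trapped = tau_IR * pdot_single   # fully trapped limit [dyn]

# expressed as (M_sun km/s) per yr of momentum supplied to the wind
to_wind_units = yr / (M_sun * 1e5)
print(pdot_single * to_wind_units)    # ~2e4 M_sun km/s per yr
print(pdot_trapped * to_wind_units)
```

The order-of-magnitude gap between the two limits is exactly the factor the simulations discussed below try to pin down.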
Krumholz \& Thompson (2012, hereafter KT12) used a 2-dimensional (2D) model to investigate the efficiency of momentum transfer in IR optically thick ULIRGs, modeling a dusty atmosphere with a vertically stratified gravity. Using a 2D grey flux-limited diffusion (FLD) approximation in the \textsc{orion} code \citep{Krumholz07}, KT12 showed that the radiation-gas interaction gives rise to the radiative Rayleigh-Taylor instability (RTI), driving supersonic turbulence and limiting momentum transfer from the radiation to the gas to $\sim L/c$. In the regime that is initially sub-Eddington for dust, the radiation momentum deposition is not sufficient to drive an unbound wind, and most of the gas eventually settles into a turbulent steady state confined near the base of the system. \citet{Skinner2015} reached a similar conclusion in a study of radiative feedback from a protocluster on a surrounding molecular cloud using their M1 closure method.

Using the variable Eddington tensor (VET) algorithm implemented in the \textsc{athena} code (\citealt{Stone08}; \citealt{Davis12}; \citealt{Jiang12}), Davis et al. (2014, hereafter D14) revisited the results of KT12 with the same 2D setup (and extended it to 3D). The VET algorithm calculates the local Eddington tensor by solving the radiative transfer equation with the method of short characteristics (\citealt{Davis12}). In contrast to KT12, D14 found a stronger momentum coupling between radiation and dusty gas. Although the radiative RTI develops and limits the radiation-gas interaction, the gas can be heated and accelerated upward by radiation, producing an unbound outflow even from an initially sub-Eddington system. D14 showed that the significant difference between the outcomes of the simulations in KT12 and D14 resulted from limitations of the diffusion-based FLD scheme.
The FLD and VET schemes agree well in dense gas with optical depth $\tau_{\rm IR}\gg 1$, but the FLD approximation becomes inaccurate in modeling how the radiation field responds to structure in the gas distribution in systems with $\tau_{\rm IR}\lesssim$ a few. \cite{Rosdahl15} simulated the same problem as KT12 and D14 using their new \textsc{ramses-rt} code with the M1 closure for the Eddington tensor. The M1 results show that the gas receives a larger acceleration than in the FLD calculations and reaches a larger height, but this is ultimately insufficient to overcome gravity, and the gas eventually settles down into a marginally bound system, similar to the FLD results. Hence, their results are qualitatively closer to those obtained with the FLD rather than with the VET method. On the other hand, more recent simulations based on an implicit Monte Carlo radiation transfer scheme are more consistent with D14 (\citealt{Tsang15}). Both the M1 closure and FLD schemes impose artificial constraints on the radiation flow in optically thin regions, while Monte Carlo and VET directly model the angular distribution of the radiation field. The agreement between the VET and Monte Carlo algorithms, along with the D14 analysis of how the FLD algorithm breaks down in optically thin regimes, suggests that these algorithms give the most accurate representation of the flow for this problem setup. Note that in the previously mentioned simulations of the radiation-gas interaction, the size of the computational box is only about $\sim0.3$ pc$\times 1.3$ pc, with a resolution of $\Delta x \simeq 3.2\times 10^{-4}\,$pc, so that one can resolve the sound crossing timescale and the scale of gas turbulence.
In order to investigate the efficiency of momentum coupling and wind propagation on a larger scale, Krumholz \& Thompson (2013, hereafter KT13) assumed that a wind is initially launched at the base of the galactic atmosphere by super-Eddington radiation forces or other mechanisms, and turned off gravity to study the maximum velocity the gas can gain from radiation. Using also \textsc{orion} and the FLD scheme, KT13 found that after wind acceleration begins, the RTI forces the gas into a configuration that reduces the rate of momentum transfer from the radiation field to the gas by a factor of $\sim10-100$ compared to an estimate based on the optical depth at the base of the atmosphere; the momentum transfer to the gas is only a few times $L/c$, without significant amplification by radiation trapping. They concluded that radiation pressure on dust is unlikely to be able to drive winds and ejecta from star-forming clusters and galaxies. So far, no other simulations have been done for this wind-gas interaction problem. Given the previous discrepancies, it is important to re-examine the results of KT13 using the VET method.

This paper is organized as follows. In Section \ref{setup} we briefly summarize the equations and the simulation setup. The initial conditions of the gas are given by the end states of the simulations from D14. In Section \ref{results} we show our simulations with various parameters and summarize our results. The astrophysical implications are discussed in Section \ref{sec_discussion}. Conclusions are given in Section \ref{conclusions}.

\section{Equations and Simulation Setup}\label{setup}

\subsection{Equations}

As in D14, we solve the equations of radiation hydrodynamics using \textsc{athena} with the built-in radiation module (\citealt{Davis12}; \citealt{Jiang12}).
The equations of mass, momentum, energy, radiation energy and radiation momentum conservation are
\begin{eqnarray}
&&\pdif{\rho}{t} + \mathbf{\nabla} \cdot \left(\rho \mathbf{v} \right) = 0, \\
&&\pdif{\left(\rho\mathbf{v}\right)}{t} + \mathbf{\nabla} \cdot \left( \rho\mathbf{v} \mathbf{v} + {\sf P}\right) = \rho \mathbf{g} - \mathbf{S}_r(\mathbf{P}), \\
&&\pdif{E}{t} + \mathbf{\nabla} \cdot \left(E \mathbf{v} + {\sf P} \cdot \mathbf{v}\right) = \rho \mathbf{g} \cdot \mathbf{v} -c S_r(E), \\
&&\pdif{E_r}{t} + \mathbf{\nabla} \cdot \mathbf{F}_r=cS_r(E), \\
&&\frac{1}{c^2}\pdif{\mathbf{F}_r}{t}+\mathbf{\nabla} \cdot{\sf P}_r=\mathbf{S}_r(\mathbf{P}),
\end{eqnarray}
where $\rho$, $\mathbf{v}$, $\mathbf{g}$ are the gas density, fluid velocity and gravitational acceleration, ${\sf P}=p{\sf I}$ is the pressure tensor, $p=\rho k_{B}T_g/\mu m_{\rm H}$ is the gas pressure, ${\sf I}$ is the identity matrix, and $E=p/(\gamma-1)+\rho v^{2}/2$ is the total fluid energy density. The radiation momentum and energy source terms $\mathbf{S}_r(\mathbf{P})$ and $S_r(E)$ are given by (\citealt{Lowrie99})
\begin{eqnarray}
\mathbf{S}_r(\mathbf{P}) & = &-\frac{\sigma_{F}}{c}\left[\mathbf{F}_r- \left(\mathbf{v} E_r+\mathbf{v} \cdot {\sf P}_r\right)\right] \nonumber \\
& &+\frac{\mathbf{v}}{c}(\sigma_{\rm P}a_rT^4-\sigma_{E}E_r),\label{source1}
\end{eqnarray}
\begin{eqnarray}
S_r(E) & = & (\sigma_{\rm P}a_rT^4-\sigma_{E}E_r) \nonumber \\
& &+\sigma_{F}\frac{\mathbf{v}}{c^2}\cdot\left[\mathbf{F}_r- \left(\mathbf{v} E_r+\mathbf{v}\cdot{\sf P} _r\right)\right],\label{source2}
\end{eqnarray}
where $E_r$ and $\mathbf{F}_r$ are the radiation energy density and radiation flux, $T$ is the gas temperature, $\sigma_F$, $\sigma_{\rm P}$ and $\sigma_{E}$ are the flux mean, Planck mean and energy mean opacities, respectively, and $a_r$ is the radiation constant.
For simplicity, we assume that the gas and the dust share a common temperature $T$, and use the Planck $\kappa_{\rm P}$ and Rosseland $\kappa_{\rm R}$ mean opacities (KT12, KT13, D14)
\begin{eqnarray}
(\kappa_{\rm P}, \kappa_{\rm R}) = (10^{-1},10^{-3/2}) \left( \frac{T}{10 \; \textrm{K}}\right)^2 \; {\rm cm^2 \; g^{-1}}\label{opacity}.
\end{eqnarray}
Equation (\ref{opacity}) is a good approximation for a dusty gas at $T\lesssim 150\,$K (\citealt{Semenov03}). We set $\sigma_F=\rho \kappa_R$ in equations (\ref{source1}) and (\ref{source2}).

The thermal and dynamical behaviors of dust and gas have been discussed in KT13 (see their Appendix A). In the parameter space we are concerned with, the rate of dust-radiation energy exchange is higher than the rate of dust-gas energy exchange; therefore we expect the dust to be in thermal equilibrium with the radiation field, so that the dust temperature $T_{\rm dust}\simeq T_r = (E_{r}/a_{r})^{1/4}$, where $T_r$ is the characteristic radiation temperature. On the other hand, the gas may have a different temperature. \cite{Goldsmith01} showed that the dust thermally couples with the gas and has the same temperature only if the gas density exceeds $\sim10^{4}-10^{5}$ cm$^{-3}$. As the ISM material is accelerated and spreads out, the density of the gas drops quickly, and the gas no longer maintains the same temperature as the dust, although they are still dynamically well coupled (\citealt{Murray05}; KT13). However, even in this case, the assumption $T_{\rm gas}\simeq T_{\rm dust}$ still provides a reasonable approximation. Since the gas is highly supersonic in the outflow, the thermal pressure of the gas is much weaker than the ram pressure, so changing the gas temperature is unlikely to significantly affect the dynamics of the gas. For simplicity we therefore assume $T_{\rm gas}=T_{\rm dust}=T$ in our work.
As discussed in D14, alternative simulations with $\kappa_{\rm R,P} \propto T_r$ were run for the problem setup with non-zero gravity, and the results agreed very closely with the simulations with $\kappa_{\rm R,P}\propto T$. Given that setting gravity to zero is the only significant change in the current setup, we assume this will still hold. As the dusty neutral gas is accelerated farther from its origin, it is likely to become more diffuse, some of the neutral gas will become more highly ionized, and the dust will be sublimated. Hence, our results here apply to an earlier, neutral, and dynamically well-coupled phase of the outflow.

The radiation pressure ${\sf P}_r$ is given by ${\sf P} _r = {\sf f}E_r$, where ${\sf f}$ is the eponymous VET, which is calculated directly as
\begin{equation}
{\sf f}=\frac{{\sf P}_r}{E_r}=\frac{\int I(\hat{n}) \mu_i \mu_j d\Omega}{\int I(\hat{n}) d\Omega},
\end{equation}
where $d\Omega$ is the differential of solid angle, and $\mu_i \equiv \hat{n} \cdot \hat{x}_i$ is the cosine factor. The specific intensity of the radiation field $I$ is obtained by solving the radiative transfer equation
\begin{equation}
\hat{n} \cdot \nabla I = \sigma_{F} \left(\frac{a_r c}{4\pi} T^4 - I\right). \label{eq:radtrans}
\end{equation}
This equation is solved using the short characteristics method, as described in detail in \cite{Davis12}.

\subsection{Dimensionless Units}

We define a constant flux $F_*$ as the source of the radiation field in the system, injected at the lower boundary, and
\begin{equation}
T_*=\left(\frac{F_*}{a_r c}\right)^{1/4}
\end{equation}
is the characteristic temperature. Here, we follow the KT12 convention of denoting fiducial quantities with a ``*''. Following KT12 and D14, we choose $T_*=82$\,K, which corresponds to $F_*=2.54\times10^{13}\,L_{\odot}\,$kpc$^{-2}$, and $\kappa_{\rm R,*}=\kappa_{\rm R}(T_*)=2.13$ cm$^{2}$ g$^{-1}$. These values are chosen to be in reasonable agreement with a ULIRG disk.
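As a sanity check, the quoted fiducial values of $F_*$ and $\kappa_{\rm R,*}$ follow directly from $T_*=82\,$K and the opacity law above; a minimal sketch (constants in cgs, values rounded):

```python
# Check the fiducial scales quoted above from their definitions:
# F_* = a_r c T_*^4 and kappa_R(T) = 10**-1.5 * (T / 10 K)**2 cm^2 g^-1.
a_r   = 7.5657e-15      # radiation constant [erg cm^-3 K^-4]
c     = 2.9979e10       # speed of light [cm s^-1]
L_sun = 3.839e33        # solar luminosity [erg s^-1]
kpc   = 3.086e21        # kiloparsec [cm]

T_star = 82.0                                  # characteristic temperature [K]
F_star = a_r * c * T_star**4                   # [erg s^-1 cm^-2]
kappa_R_star = 10**-1.5 * (T_star / 10.0)**2   # Rosseland mean opacity at T_* [cm^2 g^-1]

print(F_star * kpc**2 / L_sun)   # ~2.5e13 L_sun kpc^-2
print(kappa_R_star)              # ~2.13 cm^2 g^-1
```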
As shown in KT12, a system with gravity is characterized by two dimensionless numbers, i.e., the dimensionless Eddington ratio
\begin{equation}
f_{\rm E,*} = \frac{\kappa_{\rm R,*} F_*}{g c},
\end{equation}
and the optical depth
\begin{equation}
\tau_* = \Sigma \kappa_{\rm R,*}.
\end{equation}
Physically, the system is initially set to have a temperature of $T_*$ everywhere; $f_{\rm E,*}$ and $\tau_*$ are the initial Eddington ratio and optical depth of the system. The characteristic sound speed is defined by
\begin{equation}
c_{s,*}^{2}= \frac{k_{\rm B}T_*}{\mu m_{\rm H}}.
\end{equation}
The scale height $h_*$, density $\rho_*$ and time $t_*$ are
\begin{equation}
h_* = \frac{c_{s,*}^{2}}{g}, \qquad \rho_* = \frac{\Sigma}{h_{*}}, \qquad t_* = \frac{h_*}{c_{s,*}}.
\end{equation}
The VET and Monte Carlo results imply that a wind can be launched from an initially sub-Eddington system with $f_{\rm E,*}\sim 1$ (i.e., an Eddington factor less than, but near, unity), while the FLD and M1 closure methods imply that a dusty wind can be launched only for $f_{\rm E,*}>1$. Regardless, if a dusty wind has been launched by radiation pressure or other mechanisms, it has already overcome its gravitational potential at the base of the system. Following KT13, we focus on the limit $g\rightarrow 0$, i.e., $f_{\rm E,*}\rightarrow \infty$, in our simulations. In this case an accelerating wind without gravity gives an upper limit on the momentum transfer between radiation and dusty gas. For this we need to define another set of natural units for the gravity-free system. We use a characteristic acceleration
\begin{equation}
g_{a}=\frac{\kappa_{\rm R,*}F_*}{c},
\end{equation}
which parameterizes the radiation force on the dust.
The units of length, time and density are defined using $g_{a}$ instead of $g$:
\begin{eqnarray}
&&h_{a}=\frac{c_{s,*}^{2}}{g_{a}}=\frac{h_*}{f_{\rm E,*}} \qquad t_{a}=\frac{h_{a}}{c_{s,*}}=\frac{t_{*}}{f_{\rm E,*}}\nonumber\\
&&\rho_{a}=\frac{\Sigma}{h_a}=f_{\rm E,*}\rho_*.
\end{eqnarray}
Note that the definitions of $h_a$, $t_a$ and $\rho_a$ differ from those in KT13. In KT13, $h_a$, $t_a$ and $\rho_a$ are functions of $\tau_*$, but we set these variables to be independent of $\tau_*$, which provides common time and length scales for any choice of $f_{E,*}$.

\subsection{Initial Conditions}

Since the gravitational field is set to $g=0$ ($f_{\rm E,*}\rightarrow \infty$), the simulation results depend only on $\tau_*$. In this paper, we run four 2-dimensional simulations in the $(x,y)$ plane with three values of $\tau_*$. Two types of boundary conditions, hydrodynamic and radiative, are set up for all simulations. Periodic boundary conditions are imposed in the horizontal direction ($x$-direction) on both the radiation and hydrodynamic variables. Reflecting and outflow boundary conditions are used on the hydrodynamic variables at the bottom and the top of the vertical direction ($y$-direction), respectively, while inflow and vacuum radiation boundary conditions are set up at the bottom and the top, respectively. Table \ref{tab_parameter1} summarizes the simulation parameters for our runs. T3H and T3L correspond to $\tau_*=3$, T10 corresponds to $\tau_*=10$, and T1 corresponds to $\tau_*=1$. We run T3H and T10 with a high resolution $\Delta x/h_a=0.25$, and T1 and T3L with a low resolution $\Delta x/h_a=0.512$. In D14 (see also \citealt{Rosdahl15} and \citealt{Tsang15}), the isothermal dusty atmosphere is initialized with density perturbations, which seed the growth of the RTI and turbulence. It is reasonable to assume that a wind launched at the base of a galaxy has already reached a fully turbulent state with small-scale structures.
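To connect these dimensionless units to physical scales, they can be evaluated for the fiducial $T_*=82\,$K; a sketch, assuming a mean molecular weight $\mu = 2.33$ (a typical molecular-gas value; $\mu$ is not quoted in this section):

```python
# Evaluate the gravity-free characteristic units g_a, h_a, t_a in cgs.
a_r = 7.5657e-15    # radiation constant [erg cm^-3 K^-4]
c   = 2.9979e10     # speed of light [cm s^-1]
k_B = 1.3807e-16    # Boltzmann constant [erg K^-1]
m_H = 1.6726e-24    # hydrogen mass [g]
pc  = 3.086e18      # parsec [cm]
yr  = 3.156e7       # year [s]

T_star = 82.0
mu = 2.33                                      # ASSUMED mean molecular weight
F_star = a_r * c * T_star**4                   # F_* = a_r c T_*^4 [erg s^-1 cm^-2]
kappa_R_star = 10**-1.5 * (T_star / 10.0)**2   # kappa_R(T_*) [cm^2 g^-1]

g_a = kappa_R_star * F_star / c    # characteristic acceleration [cm s^-2]
cs2 = k_B * T_star / (mu * m_H)    # isothermal sound speed squared [cm^2 s^-2]
h_a = cs2 / g_a                    # length unit [cm]
t_a = h_a / cs2**0.5               # time unit [s]

print(h_a / pc, t_a / (1e3 * yr))  # h_a in pc, t_a in kyr
```

Under this assumption $h_a$ is roughly $10^{-3}$ pc and $t_a$ a couple of kyr, so a box of $4096\,h_a$ spans $\sim 5$ pc and $t\sim 60\,t_a$ corresponds to $\sim 10^2$ kyr, in line with the scales quoted later in the paper.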
The initial conditions for the gas-wind interaction in KT13 are chosen from the end states of the simulations in KT12. Similarly, in this paper we choose the initial conditions from the end states in D14.

\begin{table}
\begin{center}
Simulation Parameters
\begin{tabular} {lcccc}
\hline\hline
Run & IC & $[L_x \times L_y]/h_a$ & $N_x \times N_y$ & $\Delta x/h_a$\\
\hline
T3H & T3\_F0.5 & $256\times 4096$ & $1024 \times 16384$ & 0.25\\
T10 & T10\_F0.5 & $256\times 4096$ & $1024 \times 16384$ & 0.25\\
T1 & T1\_F0.5L & $256\times 8192$ & $500 \times 16000$ & 0.512 \\
T3L & T3\_F0.5L & $256\times 8192$ & $500 \times 16000$ & 0.512 \\
\hline \hline
\end{tabular}
\end{center}
\caption{The initial conditions (IC) column shows the corresponding runs with gravity in D14. $L_x \times L_y$ is the size of the computational box in units of $h_a$. $N_x \times N_y$ is the number of zones in the box. $\Delta x/h_a=0.25$ for the high resolution, and $\Delta x/h_a =0.512$ for the low resolution.}\label{tab_parameter1}
\end{table}

D14 considered various runs with gravity over a range of $\tau_*$ and $f_{\rm E,*}$. In particular, we focus on three runs: T10\_F0.5 ($\tau_*=10$, $f_{\rm E,*}=0.5$), T3\_F0.5 ($\tau_*=3$, $f_{\rm E,*}=0.5$), and T1\_F0.5 ($\tau_*=1$, $f_{\rm E, *}=0.5$) in D14. The size of the box for these three runs is $[L_x \times L_y]/h_*=512\times1024$, with a resolution of $L_{(x,y)}/N_{(x,y)}=0.5 h_*$. We take the simulation results from T3\_F0.5 and T10\_F0.5. The gas in the atmosphere is accelerated by radiation, and eventually reaches the top of the box as an unbound outflow. The gas in T3\_F0.5 is accelerated to $\sim 1024 h_*$ at $t \sim 80 t_*$, and turbulence is well developed within the gas. We take the gas at $t = 80 t_*$ as the initial state for run T3H. On the other hand, the gas in T10\_F0.5 reaches the top of the box much earlier than that in T3\_F0.5 due to a higher optical depth and a larger radiation force on the gas.
We take the gas at $t = 37.5 t_*$ as the initial state for run T10. Note that the resolution for T3H and T10 is the same as for T3\_F0.5 and T10\_F0.5, but the length unit has been changed from $h_*$ to $h_a$ in this paper. We also run two simulations, T1\_F0.5L and T3\_F0.5L, which have the same setup as T1\_F0.5 and T3\_F0.5 in D14, respectively, but use a lower resolution $L_{(x,y)}/N_{(x,y)}=1.024 h_*$. In contrast to the other cases with $\tau_* >1$, the gas in T1\_F0.5 is only accelerated to a maximum height of $\sim 200 h_*$ at $t \sim80 t_*$, then falls back to the base of the system, and eventually reaches a quasi-steady state at $t\sim 125 t_*$. We take the gas at $t = 80 t_*$, when the gas reaches its maximum height, as the initial state for T1. T3\_F0.5L gives a very similar result to T3\_F0.5 in D14, but with a slightly slower acceleration by radiation because of the lower resolution (see D14 for more discussion of the effects of spatial resolution). We choose the initial state of T3L at $t = 90 t_*$ from T3\_F0.5L. Note that T1 and T3L have the same resolution as T1\_F0.5L and T3\_F0.5L. In all simulations, we expand the vertical direction of the domain box and initialize zones beyond the simulation domains of D14 with a uniform background temperature $T_* = 82\,$K and density $\rho =10^{-10} \rho_*$. Low resolution runs (T1 and T3L) were carried out on the \textit{Rivanna} cluster at the University of Virginia, and high resolution runs (T3H and T10) were carried out on the TACC cluster Stampede.
\begin{figure}[t]
\begin{center}
\includegraphics[width=7.6cm]{f1.pdf}
\end{center}
\caption{Density distribution $\rho$ for five snapshots from run T3H.}\label{T3H}
\end{figure}

\begin{figure}[t]
\begin{center}
\includegraphics[width=7.6cm]{f2.pdf}
\end{center}
\caption{Density distribution $\rho$ for five snapshots from run T10.}\label{T10}
\end{figure}

\begin{figure}[t]
\begin{center}
\includegraphics[width=7.6cm]{f3.pdf}
\end{center}
\caption{Density distribution $\rho$ for five snapshots from run T1.}\label{T1}
\end{figure}

\begin{figure*}[t]
\centerline{ \includegraphics[width=9.5cm]{f4a.pdf}\includegraphics[width=9.5cm]{f4b.pdf}}
\caption{Left: mean gas velocity as a function of time for the three runs T1 (black), T3H (red dashed) and T10 (blue dash-dotted). The solid black line shows the gas acceleration in T1 in a box with the same height ($4096 h_a$) as in T3H and T10; the dotted black line shows the gas acceleration in T1 up to a height of $8192 h_a$. Right: mean gas velocity as a function of time for T3H (solid) and T3L (dotted). The gas in T3L is accelerated up to a height of $8192 h_a$.}\label{velocity1}
\end{figure*}

\begin{figure}
\begin{center}
\includegraphics[width=7.6cm]{f5.pdf}
\end{center}
\caption{Comparison of the density distribution $\rho$ between T3L and T3H.}\label{compare2}
\end{figure}

\section{Results}\label{results}

\subsection{Wind Properties}\label{sec_densityprofile}

We first consider the T3H run. Figure \ref{T3H} shows five snapshots of the density field from this run. Without gravitational confinement, the gas moves upward and expands in the vertical direction, with the initial filamentary structure stretching out in the radiation field. At $t \sim 59 t_a$, the dense gas hits the upper boundary of the domain, and the gas expands to a thickness of $\sim 1300 h_a$, which covers $\sim 35\%$ of the box.
Most of the gas is in a few filaments with $\rho \sim 10^{-3} - 10^{-2} \rho_*$; in between the filaments the volume is filled with gas of $\rho \gtrsim 10^{-5}\rho_*$. This result is different from KT13 (their Figure 2), in which a more extended filamentary structure is driven by the radiative RTI, and the vertical extent of the gas eventually becomes comparable to the vertical size of the entire computational box.

Figure \ref{T10} shows density snapshots for the run T10. Due to a higher initial optical depth, the gas is accelerated faster than that in T3H. The initial turbulent filamentary structure is stretched along the direction of motion by the differential acceleration in the radiation field, and the gas has a larger velocity dispersion than the gas in T3H. Between the filaments of dense gas, the volume has a lower density $\rho \lesssim 10^{-6}\rho_*$. After $t=30\,t_a$, the relative velocities at some shock fronts in the T10 run become very high (Mach numbers of $\lesssim 100$). In regions where shock fronts cross obliquely, the temperature spikes in low density zones adjacent to the shock front. The algorithm compensates on the following timesteps by generating a large radiation flux that then artificially heats neighboring optically thick zones. From this point on, energy conservation is violated at the few percent level. By itself, this modest violation of energy conservation might not be troubling, but this uncontrolled heating produces artificially elevated temperatures at the interfaces between low density channels and high density filaments. Due to the $T^2$ temperature dependence of the opacity, the radiation force on the edges of the low density channels causes them to expand, creating voids that are not seen in other simulations or at earlier times in this simulation. Reducing the timestep reduces the temperature jumps, altering the subsequent evolution.
However, running for an extended time with a significantly lower timestep would be prohibitively computationally expensive, so we halt this run at $t=30\,t_a$. Figure \ref{T1} shows gas acceleration for $\tau_* =1$. Simulations in D14 show that gas in a gravitational field with $\tau_*\leq 1$ and $f_{E,*} \leq 0.5$ should fall back to the bottom of the system and maintain a quasi-steady state. In the absence of gravity the gas is accelerated and becomes unbound, and we study the behavior of the unbound gas. It reaches the top of the box at $t\sim 68 t_a$ and spreads over a vertical height of $\sim 500 h_a$, smaller and with lower velocity dispersion than the gas in T3 and T10. Gas acceleration for various optical depths can be clearly seen in the mass-weighted mean velocity, which is given by \begin{equation} \langle \mathbf{v} \rangle = \frac{1}{M}\int \rho \mathbf{v} dV, \end{equation} and the mass-weighted velocity dispersion \begin{equation} \sigma^2_{i} = \frac{1}{M} \int \rho (v_i-\langle v_i\rangle)^2 dV, \end{equation} respectively, where $M=\int \rho dV$ is the total mass in the atmosphere. The left panel of Figure \ref{velocity1} shows the mean velocity in the $y$ direction $\langle \mathbf{v}_y \rangle$, and the total velocity dispersion $\sigma=\sqrt{\sigma_x^{2}+\sigma_y^{2}}$ for the T1, T3H and T10 runs. For convenience, we convert the dimensionless units in the simulations to cgs units. The velocity $\langle \mathbf{v}_y \rangle$ increases almost linearly with time. The timescale for gas acceleration to 50 km s$^{-1}$ is $\sim 100\,$kyr, which is comparable to the time to launch a wind from the base of the system. Gas with lower initial optical depth $\tau_*$ experiences slower acceleration. For example, $\langle \mathbf{v}_y \rangle$ reaches 50 km/s at $t\sim 150\,$kyr in T1, but reaches the same velocity in a shorter time, $t\sim110\,$kyr, in T3H. Also, lower $\tau_*$ leads to lower velocity dispersion.
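The mass-weighted diagnostics defined above are straightforward to evaluate on a uniform grid. A minimal numpy sketch (function and array names are illustrative, not from the actual analysis pipeline; a uniform cell volume is assumed):

```python
import numpy as np

def mass_weighted_stats(rho, vy, dV=1.0):
    """Mass-weighted mean velocity <v_y> and dispersion sigma_y
    on a uniform grid with cell volume dV."""
    M = np.sum(rho) * dV                       # total mass M = ∫ rho dV
    v_mean = np.sum(rho * vy) * dV / M         # <v_y> = (1/M) ∫ rho v_y dV
    sigma2 = np.sum(rho * (vy - v_mean)**2) * dV / M
    return v_mean, np.sqrt(sigma2)
```

The same reduction applies per velocity component, after which $\sigma=\sqrt{\sigma_x^2+\sigma_y^2}$ combines the components.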
Velocity dispersion $\sigma$ in T1 grows from $\sim2$ km/s to 3.5 km/s at $t\sim 200\,$kyr, while $\sigma$ in T3H increases from 5 km/s to 7 km/s at $t\sim 80$\,kyr, then oscillates at $\sim6-7$ km/s at later times. Note that the velocities obtained by radiation-pressure acceleration in Figure \ref{velocity1} are far below the observed velocities of cold clouds in nearby starbursts such as M82 (\citealt{Walter02}; \citealt{Leroy15}), NGC 253 (\citealt{Bolatto13}; \citealt{Walter17}), Mrk 231 (\citealt{Rupke11}; \citealt{Gonzalez14}; \citealt{Feruglio15}), or other star-forming galaxies (e.g., \citealt{Heckman00}; \citealt{Veilleux05}; \citealt{Rupke02, Rupke05a, Rupke05b, Rupke05c}; \citealt{Martin05}; \citealt{Weiner09}; \citealt{Chen10}; \citealt{Erb12}; \citealt{Kornei13}), which can reach hundreds or even thousands of km s$^{-1}$. However, in our simulations we only study wind propagation within a vertical height of $\sim 5-10\,$pc. An estimate of the momentum transfer from the radiation to the gas on larger scales is discussed in Section \ref{sec_discussion}. \subsection{Spatial Resolution}\label{sec_resolution} We have performed simulations with the same initial conditions $\tau_*=3$, with a high resolution in the T3H run, and a low resolution in the T3L run. Since the low resolution run is less expensive, we run T3L somewhat longer than T3H. We run T3L in a box with a vertical height of $8192 h_a$, twice that of the higher-resolution T3H. The right panel of Figure \ref{velocity1} compares $\langle \mathbf{v}_y \rangle$ and $\sigma$ in T3H and T3L. Although the initial conditions from T3\_F0.5 and T3\_F0.5L are slightly different, the two runs with different initial inputs show very similar acceleration.
The velocity dispersion $\sigma$ increases more quickly in T3H at $t\lesssim 80\,$kyr, but both values of $\sigma$ flatten at $80\,$kyr $\lesssim t \lesssim\,$140 kyr; then $\sigma$ in T3L slightly increases to $9\,$km s$^{-1}$ by the end of the simulation. Figure \ref{compare2} shows snapshots of the density distribution $\rho$ from T3L and T3H at the same time $t=40 t_a$. The gas in T3L is accelerated more quickly than that in T3H. The front of the gas in T3L reaches a height of $h\sim2600 h_a$, while the gas in T3H only reaches $h\sim 2300 h_a$. The structure at the top of the gas differs between the two runs: T3L shows a slightly denser leading front than T3H. The similar acceleration in the two cases suggests that resolving $h_*$ might not be essential for obtaining the correct value for the bulk radiative acceleration of the outflow. This should bode well for larger scale simulations of radiative outflows where resolution of $h_*$ would require prohibitively high resolution. \begin{figure*} \begin{center} \includegraphics[width=9.7cm]{f6.pdf} \end{center} \caption{Trapping factors as a function of time for four runs: T1 (black solid), T3L (red dotted), T3H (red dashed) and T10 (blue dash-dotted).}\label{trapping} \end{figure*} \begin{figure*} \begin{center} \includegraphics[width=9.7cm]{f7.pdf} \end{center} \caption{Correlation between density and flux $\sigma_{\rho F}$, which is averaged over grid zones with $\rho \geq 10^{-6}\,\rho_{*}$. The lines have the same meaning as in Figure \ref{trapping}.}\label{correlation} \end{figure*} \begin{figure*}[t] \centerline{ \includegraphics[width=12.0cm]{f8.pdf}} \caption{Velocity distribution functions for run T1, T3L, T3H and T10 at $t/t_a=0$ (solid), 15 (dotted), 30 (dashed) and 45 (dash-dotted).}\label{Vdistribution} \end{figure*} \begin{figure}[t] \begin{center} \includegraphics[width=9.0cm]{f9.pdf} \end{center} \caption{Trapping factors as a function of optical depth for four runs.
The points show the time-averaged values, and the error bars are the standard deviations. The black dashed line is a linear fit to the points, and the red dashed line is $f_{\rm trap}=\tau_*/(f_{E,*})_{0}-1$. }\label{fitting} \end{figure} \subsection{Trapping Factor}\label{sec_trapping} We study the momentum coupling between the infrared radiation field and the gas. Without gravity, the $y$-component of the gas momentum equation is \begin{equation} \frac{d \langle v_y \rangle}{d t}=f_{\rm rad},\label{momentum1} \end{equation} where $f_{\rm rad}$ is defined as the mean radiation force per unit mass (acceleration) \begin{equation} f_{\rm rad}=\frac{1}{c}\frac{\langle \kappa_{\rm R}\rho F_{ry} \rangle}{\langle \rho \rangle}. \end{equation} Following KT13, we define the trapping factor $f_{\rm trap}$ in a gravity-free field by \begin{equation} 1+f_{\rm trap} = \frac{f_{\rm rad}}{f_{\rm rad,dir}}\label{trapping1} \end{equation} where $f_{\rm rad,dir}$ is the momentum flux per unit mass of the directly injected radiation field. We have $f_{\rm rad,dir}=F_{ry}/(c\langle \rho \rangle L_y)=F_*/(c\langle \rho \rangle L_y)$, where $L_y$ is the vertical height of the computational domain. Thus, equation (\ref{momentum1}) can be re-written as \begin{equation} f_{\rm trap}=\frac{t_a \tau_*}{c_{s,*}}\frac{d \langle v_y \rangle}{d t}-1.\label{momentum2} \end{equation} The trapping factor $f_{\rm trap}$ measures the momentum transfer from the radiation to the gas. Analytic models adopt the upper limit $f_{\rm trap}\sim \tau_{\rm IR}$, where $\tau_{\rm IR}$ is the infrared optical depth of the system.
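Equation (\ref{momentum2}) converts the measured mean acceleration into a trapping factor. A hedged sketch in code units (assuming $t_a=c_{s,*}=1$ and $\langle v_y\rangle$ sampled at discrete times; the names are illustrative, not from the simulation code):

```python
import numpy as np

def trapping_factor(t, v_mean, tau_star, t_a=1.0, cs_star=1.0):
    """f_trap = (t_a * tau_star / c_{s,*}) * d<v_y>/dt - 1, with the
    acceleration estimated by finite differences of the time series."""
    dvdt = np.gradient(v_mean, t)              # d<v_y>/dt
    return (t_a * tau_star / cs_star) * dvdt - 1.0
```

For a linearly growing $\langle v_y\rangle$ this returns a constant $f_{\rm trap}$, consistent with the nearly flat curves in Figure \ref{trapping}.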
In our simulations, the initial $f_{\rm trap}^{0}$ is obtained from the end state of the gas at the base of the system with gravity (KT12 and D14) \begin{equation} 1+f_{\rm trap}^{0}=L_y \frac{\langle \kappa_R \rho F_{ry}\rangle}{\langle F_{ry} \rangle}=\frac{f_{\rm rad}\tau_*}{g f_{E,*}}=\frac{\tau_* f_{E,V}}{f_{E,*}}, \end{equation} where $f_{E,*}$ is the fiducial Eddington ratio in the simulation with gravity ($f_{E,*}=0.5$), and \begin{equation} f_{E,V}=\frac{f_{\rm rad}}{g} \label{fEv} \end{equation} is the Eddington ratio computed using the initial gravity $g$. According to KT12 and D14, $f_{E,V} \sim 1$ due to the radiative RTI regulation, therefore, \begin{equation} f_{\rm trap}^{0} \simeq \frac{\tau_*}{f_{E,*}}-1.\label{ftrap0} \end{equation} We have $f_{\rm trap}^{0} \simeq 1$ for $\tau_*=1$, $f_{\rm trap}^{0} \simeq 5$ for $\tau_*=3$, and $f_{\rm trap}^{0} \simeq 19$ for $\tau_*=10$. KT13 found that $f_{\rm trap}$ without gravity significantly decreases from $f_{\rm trap}^{0}$ to a smaller value, which they attributed to the radiative RTI. Figure \ref{trapping} shows the trapping factor as a function of time in our simulations. In contrast, we do not see any significant evolution of the trapping factor. Comparison of $f_{\rm trap}$ for T3L and T3H suggests that the trapping property is insensitive to the resolution. The values of $f_{\rm trap}$ are largely consistent with the values $f_{\rm trap}^0$ inferred from the D14 runs with gravity. Thus, it is perhaps somewhat surprising that the runs performed here with $g=0$ see little evolution of the trapping factor. One possibility is that the RTI has little to no effect on the trapping factor in these runs with $g=0$, and the simulations simply retain knowledge of their initial density and flux distributions.
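As a quick numerical check, equation (\ref{ftrap0}) reproduces the initial trapping factors quoted above (a trivial sketch; the function name is illustrative):

```python
def f_trap0(tau_star, f_edd=0.5):
    """Initial trapping factor inherited from the g > 0 end state,
    f_trap^0 = tau_*/f_{E,*} - 1, with the fiducial f_{E,*} = 0.5."""
    return tau_star / f_edd - 1.0
```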
If present, the RTI is expected to largely shape the trapping factor through its effect on the flux -- density relationship, so we calculate the correlation between density $\rho$ and the vertical component of the radiation flux $F_{ry}$ \begin{equation} \sigma_{\rho F}=\frac{\langle (\rho-\langle \rho \rangle)(F_{ry}-\langle F_{ry} \rangle)\rangle}{\sqrt{\langle (\rho-\langle \rho \rangle)^{2}\rangle} \sqrt{\langle (F_{ry}-\langle F_{ry} \rangle)^{2}\rangle}}. \end{equation} We compute $\sigma_{\rho F}$ over the whole simulation domain, but only include grid zones with $\rho \geq 10^{-6}\rho_*$. The density floor in the correlation excludes the background region where the density is low and the flux is near the fiducial value. If this region is not excluded, it skews the correlation due to the large fraction of the simulation volume at these low densities. Since density and flux are anti-correlated, we find a negative value for $\sigma_{\rho F}$, with variations on shorter timescales but no long term evolution in any run. We also find that $\sigma_{\rho F}$ is, on average, higher for larger $\tau_*$. The shorter timescale variation of $\sigma_{\rho F}$ with time does not closely track the variation of $f_{\rm trap}$ for any of the runs, suggesting that effects other than the flux -- density correlation are impacting the trapping factor. The behaviors of $f_{\rm trap}$ and $\sigma_{\rho F}$ in the VET simulations are different from those in the FLD simulation (see Appendix \ref{section_FLD}, Figure \ref{fig_FLD}). Similar to KT13, we find that the trapping factor $f_{\rm trap}$ drops with time in the FLD run. This decrease of $f_{\rm trap}$ matches a trend towards increasingly negative (more anti-correlated) $\sigma_{\rho F}$ with time.
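The diagnostic $\sigma_{\rho F}$ defined above amounts to a Pearson correlation coefficient evaluated over zones above the density floor. A minimal sketch (illustrative names; densities in units of $\rho_*$):

```python
import numpy as np

def flux_density_correlation(rho, Fry, floor=1e-6):
    """Pearson correlation of rho and F_ry over zones with rho >= floor,
    mirroring the sigma_rhoF definition with the low-density background
    excluded so it does not skew the statistic."""
    m = rho >= floor
    dr = rho[m] - rho[m].mean()
    dF = Fry[m] - Fry[m].mean()
    return np.sum(dr * dF) / np.sqrt(np.sum(dr**2) * np.sum(dF**2))
```

Anti-correlated fields (flux escaping through low-density channels) give values near $-1$.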
As in the VET runs, there is no clear correspondence between variations in $\sigma_{\rho F}$ and $f_{\rm trap}$ on shorter timescales, but the overall downward trend is suggestive that the simulation is allowing a larger fraction of the radiation flux to escape through low density channels as the run progresses. These results suggest that radiative RTI has relatively little impact on the long term evolution of the flux--density correlation or trapping factor in our VET simulations. Analysis of the linear instability to radiative RTI \citep{Jacquet11} in optically thin and adiabatic limits suggests these flows should be linearly stable when $g \rightarrow 0$. This does not preclude non-linear interactions that cause channels to develop or widen, but neither is there a clear motivation for radiative RTI to have a strong impact on the density structure and resulting momentum coupling in this limit where $g \rightarrow 0$. \subsection{Velocity Distribution} Figure \ref{Vdistribution} shows mass-weighted velocity probability distribution functions (PDFs) in the $y$-direction for four runs: T1, T3L, T3H and T10. Since the initial condition for T1 is quasi-steady, the velocity distribution is nearly symmetric about $v_y=0$. On the other hand, the initial velocity distributions for T3L, T3H and T10 are asymmetric with a tail extending to $v_y\sim 20-30$ km s$^{-1}$, indicating that most of the gas has already been accelerated at the base of the system. As time evolves, the PDFs for all runs shift to higher $v_y$. Higher $\tau_*$ gives a higher acceleration, and a larger spread of velocities. This is consistent with Figure \ref{velocity1}, in which larger $\tau_*$ leads to faster acceleration and larger velocity dispersion. Also, the panels for T3L and T3H show that resolution does not change the PDF qualitatively.
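A mass-weighted velocity PDF of the kind shown in Figure \ref{Vdistribution} can be built by weighting a velocity histogram by density (proportional to mass on a uniform grid). A sketch with illustrative names, not taken from the actual analysis scripts:

```python
import numpy as np

def velocity_pdf(vy, rho, bins=64):
    """Mass-weighted PDF of v_y: histogram of cell velocities weighted
    by cell density, normalized to unit integral. Returns the PDF and
    the bin centers."""
    pdf, edges = np.histogram(vy, bins=bins, weights=rho, density=True)
    return pdf, 0.5 * (edges[:-1] + edges[1:])
```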
\section{Discussion}\label{sec_discussion} \subsection{Momentum Transfer Between Radiation and Gas} In KT13, a linear fit of $f_{\rm trap}$ gives $f_{\rm trap}\approx 0.5$ in the limit of $f_{E,*}\rightarrow \infty$. They adopt an interpolation for $f_{\rm trap}$ as a function of $\tau_*$ and $f_{E,*}$, and conclude that winds can only be produced from systems with $f_{E,*}\gtrsim 1$ (super-Eddington limit). However, using the VET method we reach different conclusions. In this section we reintroduce gravity to estimate the wind acceleration by radiation in a gravitational field. Note that Figure \ref{trapping} shows that $f_{\rm trap}$ is approximately constant in the absence of gravity ($f_{E,*}\rightarrow \infty$). Recall the relation that \begin{equation} f_{E,V}=(1+f_{\rm trap})\frac{f_{E,*}}{\tau_*}\label{fEV_relation}. \end{equation} For $f_{E,*}\lesssim 1$ (but not $f_{E,*}\ll 1$), the radiative RTI regulates equilibrium between infrared radiation and gravity. Thus, we have $f_{E,V}\sim 1$, and $f_{\rm trap}\simeq \tau_*/f_{E,*}-1$. On the other hand, Figure \ref{fitting} shows the time-averaged values of $f_{\rm trap}$ as a function of $\tau_*$ in the limit of $f_{E,*}\rightarrow \infty$ and a linear fit to the points. We find that the estimate $f_{\rm trap}\sim f_{\rm trap}^{0}\simeq \tau_*/(f_{E,*})_{0}-1$ holds, where $(f_{E,*})_{0}$ is the initial Eddington ratio where the wind is launched. Including gravity, equation (\ref{momentum1}) can be written as \begin{equation} \frac{d \langle v_y \rangle}{d t}=f_{\rm rad}-g\label{dvdt}. \end{equation} Combining equations (\ref{trapping1}), (\ref{fEv}), (\ref{fEV_relation}) and (\ref{dvdt}) yields the equation for the net rate of momentum coupling \begin{eqnarray} \frac{d p_{\rm wind}}{d t}&=&(1+f_{\rm trap})\left(1-\frac{1}{f_{E,V}}\right)\frac{L}{c}\label{dvdt2}. \end{eqnarray} Here $d p_{\rm wind}/dt$ is the momentum injection as a combination of both infrared radiation acceleration and gravitational deceleration.
Note that $f_{E,V}/f_{E,*} \sim (f_{E,V})_{0}/(f_{E,*})_{0}$, or $f_{E,V}\simeq f_{E,*}(f_{E,V})_{0}/(f_{E,*})_{0}$, and $1+f_{\rm trap} \simeq 1+(f_{\rm trap})_0 = \tau_* (f_{E,V})_0 /(f_{E,*})_0$ according to our simulations, and since $(f_{E,V})_{0}\gtrsim 1$, from equation (\ref{dvdt2}) we obtain \begin{equation} \frac{d p_{\rm wind}}{d t} \simeq \frac{\tau_*}{(f_{E,*})_{0}}\left[1-\frac{(f_{E,*})_{0}}{f_{E,*}}\right]\frac{L}{c}. \end{equation} For an infrared optically thick disk, the gravitational force initially drops faster with height than the radiation force; thus $f_{E,*}$ increases monotonically with height above the disk, and the Eddington ratio above the disk is higher than that at the base of the system, i.e. $f_{E,*}>(f_{E,*})_{0}$ (e.g., \citealt{Zhang12}). If we include the direct radiation from the stellar UV light, we have the total momentum injection from radiation \begin{equation} \frac{d p_{\rm wind}}{d t} \sim \left[1+\frac{\tau_*}{(f_{E,*})_{0}}\right]\frac{L}{c}. \end{equation} This is consistent with the result obtained at the base of the system (see Section 5.4 in D14). According to D14, $\tau_*/(f_{E,*})_{0}$ represents the effective infrared optical depth for momentum transfer, which is slightly lower than $\tau_{\rm IR}$ of the system. Here $\tau_{\rm IR}$ can be estimated by the volume-weighted mean optical depth \begin{equation} \tau_V = L_{y}\langle \kappa_R \rho \rangle. \end{equation} We average $\tau_V$ in our simulations and find $\langle \tau_V \rangle=1.8$ in T1, 7.9 in T3L, 8.5 in T3H and 48.3 in T10; thus, we can define the efficiency $\eta$ such that $\eta\tau_{\rm IR}$ is equivalent to $\tau_*/ (f_{E,*})_{0}$. We find $\eta=0.90$ in T1, 0.71 in T3L, 0.69 in T3H, and 0.47 in T10. Therefore, we conclude that for $f_{E,*}> (f_{E,*})_{0}$, radiation pressure on dust is able to drive an unbound wind.
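Under the approximations above, the net momentum rate in units of $L/c$ reduces to a one-line function (an illustrative sketch; it neglects the direct UV term and assumes the trapping estimate $1+f_{\rm trap}\simeq\tau_*/(f_{E,*})_0$):

```python
def momentum_boost(tau_star, fE0, fE):
    """Net wind momentum rate dp/dt in units of L/c:
    (tau_*/fE0) * (1 - fE0/fE). Positive only when fE > fE0,
    i.e. when the Eddington ratio rises above its base value."""
    return (tau_star / fE0) * (1.0 - fE0 / fE)
```

For example, $\tau_*=3$ with $(f_{E,*})_0=0.5$ and $f_{E,*}=1$ above the disk gives a boost of $3\,L/c$, while $f_{E,*}=(f_{E,*})_0$ gives zero net coupling.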
The momentum transfer from the radiation field to the gas is amplified by a factor of $\eta \tau_{\rm IR}$ with $\eta\sim0.5-0.9$; the total amplification increases with the optical depth in the atmosphere. \subsection{Rapidly Star-Forming Galaxies and Starbursts} Since $\tau_*$ and $f_{E,*}$ are the most important parameters in the simulations, it is worthwhile to estimate them in real rapidly star-forming galaxies and starbursts. KT13 calculated $\tau_*$ and $f_{E,*}$ analytically using a mass-to-light ratio motivated by the starburst99 model (\citealt{Leitherer99}). We estimate $\tau_*$ and $f_{E,*}$ using recent observational data. We take the gas surface density in a galactic disk to be $\Sigma_{\rm g} = 10^{4}\Sigma_{\rm g,4}\,M_\odot$ pc$^{-2}$ and the infrared flux to be $F_{\rm IR}=10^{13}F_{\rm IR,13}\,L_{\odot}$ kpc$^{-2}$, where $10^{4}\,M_{\odot}$ pc$^{-2}$ (2.1 g cm$^{-2}$) and $10^{13}\,L_{\odot}$ kpc$^{-2}$ are the typical surface densities and fluxes in LIRGs/ULIRGs (e.g., \citealt{Thompson05}). The characteristic temperature in the atmosphere is given by $T_* = (F_{\rm IR}/a_{r}c)^{1/4}$, and the surface gravitational force is $g=2\pi G \Sigma_{\rm g}f_{g}^{-1}$, where $f_{g}=0.5f_{g,0.5}$ is the mass fraction of the gas. Thus, we have \begin{eqnarray} &&\tau_*^{\rm max}= 2.8\,\Sigma_{\rm g,4} F_{\rm IR,13}^{1/2},\label{obs1}\\ &&f_{E,*}=0.10 f_{\rm g,0.5} F_{\rm IR,13}^{3/2}(\Sigma_{\rm g,4})^{-1}\label{obs2}. \end{eqnarray} Here, equation (\ref{obs1}) gives an upper bound on the infrared optical depth in the atmosphere of the galaxy. We estimate these two values using the most recent observations of LIRGs and ULIRGs measured and compiled by Barcos-Mu\~{n}oz et al. (2016, submitted). They observed 22 local LIRGs and ULIRGs with the Very Large Array. We take the LIRGs/ULIRGs in their work as a sample.
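Equations (\ref{obs1}) and (\ref{obs2}) are easy to evaluate for individual sources. A sketch (illustrative function names), checked against Arp 220-like values from the Barcos-Mu\~{n}oz et al. sample, $\Sigma_{\rm g,4}\simeq4.9$ and $F_{\rm IR,13}\simeq6.1$:

```python
def tau_max(Sigma_g4, F_IR13):
    """Upper bound on the IR optical depth, eq. (obs1):
    tau_*^max = 2.8 * Sigma_g,4 * F_IR,13^(1/2)."""
    return 2.8 * Sigma_g4 * F_IR13**0.5

def f_edd_star(Sigma_g4, F_IR13, f_g05=1.0):
    """Fiducial Eddington ratio, eq. (obs2):
    f_E,* = 0.10 * f_g,0.5 * F_IR,13^(3/2) / Sigma_g,4."""
    return 0.10 * f_g05 * F_IR13**1.5 / Sigma_g4
```

These reproduce the quoted Arp 220 values of $f_{E,*}\sim0.3$ and $\tau_*^{\rm max}\sim30$.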
Since molecular gas is presumably the dominant component in LIRGs and ULIRGs, especially in the central regions, we use their molecular gas density $\Sigma_{\rm mol}$ as our estimate for $\Sigma_g$\footnote{Note that the measurements of $\Sigma_{\rm mol}$ are uncertain, depending on the assumed conversion factor of CO to H$_2$, and the assumption that the emitting area is well-characterized by the 33 GHz emission. More discussion of these assumptions is given in Barcos-Mu\~{n}oz et al. (2016).}. Using the data in Barcos-Mu\~{n}oz et al. (2016), we find that most LIRGs/ULIRGs have $f_{E,*}<1$, and about one fourth of them have $f_{E,*}\sim 0.1-1$. The values of $\tau_*^{\rm max}$ are typically large, $\tau_*^{\rm max} \gtrsim 1-10$, suggesting a large $\tau_* \gtrsim 1$ in the atmosphere is possible. For example, Arp 220 has molecular gas density $\Sigma_{\rm mol}\sim4.9\times10^{4}\,M_{\odot}$ pc$^{-2}$ and a flux of $F_{\rm IR}\sim 6.1\times 10^{13}\,L_{\odot}$ kpc$^{-2}$, corresponding to $f_{E,*}\sim 0.3$ and $\tau_* ^{\rm max}\sim 30$. Although $\tau_*$ of the atmosphere is much less than the total $\tau_*$, we still expect $\tau_* \gtrsim 1$. Our simulations suggest that the infrared radiation may launch dusty gas out of the galaxy: as the effective gravity drops above the galactic disk and $f_{E,*}$ rises, the gas may be accelerated and become unbound. Moreover, an extreme case is given by the ULIRG Mrk 231 (UGC 08058) with $\Sigma_{\rm mol}\sim 1.7\times 10^{5}\,M_{\odot}$ pc$^{-2}$ and $F_{\rm IR}\sim 2.6\times 10^{14}\,L_{\odot}$ kpc$^{-2}$, corresponding to $f_{E,*} \sim 0.8$ and $\tau_{*}^{\rm max}\sim 2.3\times 10^{2}$. Although Mrk 231 hosts active galactic nucleus activity (e.g., \citealt{Rupke13}), these results suggest that infrared radiation alone could drive a powerful dusty wind in Mrk 231. An important caveat to the above analysis is that equations (\ref{obs1}) and (\ref{obs2}) implicitly assume $\kappa_{\rm R} \propto T^2$.
This relation approximately holds only for $T \lesssim 150$K and can lead to a possible overestimation of $f_{E,*}$ and $\tau_*$ for higher temperatures. For example, the value of $f_{E,*} \sim 0.8$ comes about because $T_* \simeq 150$K and yields $\kappa_{\rm R,*} \simeq 6.8 \rm \, cm^2/g$. Becoming super-Eddington requires $\kappa_{\rm R}$ to increase to $8.6 \rm \, cm^2/g$. It is unclear whether such high infrared dust opacities are obtained in these systems. These values depend on both the dust distribution and the dust to gas ratio. One could alternatively follow \citet{Skinner2015} and formulate a bound on the Eddington ratio in terms of the light-to-mass ratio and an assumed maximum opacity. Their equation (12) shows that super-Eddington fluxes require \begin{equation} \kappa_{R} > 15 {\, \textrm{cm}^2 \, \textrm{g}^{-1}}\left(\frac{\Psi}{1700 \ \rm erg \ s^{-1} \ g^{-1}}\right)^{-1}, \end{equation} where $\Psi$ is the light-to-mass ratio. These results lead one to conclude that both a very large light-to-mass ratio and a high maximum dust opacity are required for radiation pressure alone to drive outflows. In general, we find many of the systems in the Barcos-Mu\~{n}oz sample have $f_{E,*} \lesssim 1$ and $\tau_* >1$. Since gas can be launched by radiation from an initially marginally sub-Eddington system, and since $f_{E,*}$ increases with height above the ULIRG disk, gas may potentially be accelerated to the observed velocities. If the dust opacities and light-to-mass ratios are sufficiently large, radiation may be able to play a dominant role in driving outflows, but this is most likely only the case in a subset of the most extreme star-forming galaxies. Radiation pressure may also operate in concert with other driving mechanisms (e.g., supernovae, cosmic rays) in less extreme systems. The key result of our analysis is that it does not seem that RTI alone fundamentally prevents radiative acceleration of outflows.
\section{Conclusions}\label{conclusions} We study the dusty winds driven by radiation pressure in the atmospheres of rapidly star-forming environments. Krumholz \& Thompson (2013) (KT13) applied a flux-limited diffusion algorithm to a two-dimensional problem modeling the radiation hydrodynamics (RHD) of a column of gas that is accelerated by a constant infrared radiation flux. We apply the more sophisticated variable Eddington tensor (VET) algorithm to re-examine the problem in KT13. In the absence of gravity, the system, which is characterized by the initial optical depth ($\tau_*$) of the gas and the initial conditions, gives an upper limit on momentum transfer between radiation and gas. We carry out four runs with different $\tau_*$ and varying resolutions. In each simulation, the initial state of the gas is given by the end state of the simulation in D14 with the same $\tau_*$ and resolution, but including gravity ($f_{E,*}=0.5$). In D14 the gas evolves only at the base of the system. We extend the vertical direction of the computational domain, and study the wind-gas interaction and momentum coupling between the radiation field and the gas. We find that the gas spreads out along the height of the box with increased mean velocity and velocity dispersion, due to the interaction of the dusty gas with the radiation field. However, the radiative RTI does not seem to be limiting momentum transfer as in KT13. We find that the momentum coupling between gas and radiation in the absence of gravity is similar to that with gravity. The trapping factor $f_{\rm trap}$, which measures the momentum transfer from the radiation to the gas (see equation [\ref{trapping1}]), retains, to within a factor of two or less, its value at the base of the system. Combining the results in D14, we conclude that dusty gas can be accelerated by radiation even in an initially sub-Eddington system $f_{E,*}<1$, and the momentum from the radiation couples well with the gas during the wind propagation.
For $f_{E,*}$ increasing along the height of the system, the momentum transfer from radiation to gas is approximately \begin{equation} \frac{d p_{\rm wind}}{d t} \simeq \left\{1+\eta \tau_{\rm IR}\left[1-\frac{(f_{E,*})_{0}}{f_{E,*}}\right]\right\}\frac{L}{c}, \end{equation} where $(f_{E,*})_{0}$ is the Eddington ratio at the base of the system, $\tau_{\rm IR}$ is the integrated infrared optical depth through the dusty gas, and the efficiency $\eta$ is estimated to range from $\sim0.9$ for $\tau_*=1$ to $\sim0.5$ for $\tau_*=10$. Thus, the momentum transfer from the radiation to the wind is not merely $\sim L/c$, but is amplified by a factor of $\eta \tau_{\rm IR}$. Therefore, we conclude that radiation pressure may still be an important mechanism to drive winds in rapidly star-forming galaxies and starbursts. \acknowledgments We thank the anonymous referee for helpful comments that improved this manuscript. We thank Yan-Fei Jiang for helpful discussions and technical assistance. We thank James Stone, Eve Ostriker, Norm Murray, and Loreto Barcos-Mu\~{n}oz for stimulating discussions and/or detailed comments. D. Z. also thanks Todd Thompson, Mark Krumholz, Evan Scannapieco, Chris Hayward, Nahum Arav, Mike McCourt, Eliot Quataert, Renyue Cen, Mordecai-Mark Mac Low, Greg Bryan, Kohei Inayoshi, Yong Zheng, Zhi-Yun Li, Alberto Bolatto, Sylvain Veilleux, Francesco Tombesi, and Karen Yang for helpful discussions. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation (NSF) grant No. ACI-1053575. This work also used the computational resources provided by the Advanced Research Computing Services (ARCS) at the University of Virginia. S. W. D. acknowledges support from NSF grant AST-1616171 ``The Physics of Star Formation Feedback and Molecular Cloud Destruction" and an Alfred P. Sloan Research Fellowship.
CNN Suspends Anchor Chris Cuomo Indefinitely

Anderson Cooper will anchor the 9 p.m. slot on Tuesday, the CNN spokesperson said.

Embattled CNN anchor Chris Cuomo was suspended indefinitely on Tuesday in the wake of documents released by the New York Attorney General's office that point to his attempts to help his brother Andrew Cuomo cover up allegations of sexual misconduct. The evidence includes text messages between Chris Cuomo and Melissa DeRosa, a then-top aide to Gov. Andrew Cuomo. The messages revealed that Cuomo was willing to use his sources to smear the reputations of the former governor's accusers.

"When Chris admitted to us that he had offered advice to his brother's staff, he broke our rules and we acknowledged that publicly," the network said in a statement. "But we also appreciated the unique position he was in and understood his need to put family first and job second."

In August, Cuomo publicly denied serving as an advisor to his brother. The network reportedly warned Cuomo to stop communicating with his brother's aides. According to Ballotpedia, New York Gov. Andrew Cuomo (D) resigned on August 24. He first announced his resignation on August 10, saying, "Given the circumstances, the best way I can help now is if I step aside and let the government get back to governing."

"I never attacked nor encouraged anyone to attack any woman who came forward. I never made calls to the press about my brother's situation," Chris Cuomo said in August.

"The New York Attorney General's office released transcripts and exhibits Monday that shed new light on Chris Cuomo's involvement in his brother's defense," a CNN spokesperson said Tuesday evening. "The documents, which we were not privy to before their public release, raise serious questions."
Big Lonely Doug
The Story of One of Canada's Last Great Trees

Harley Rustad

Copyright © 2018 Harley Rustad
Published in Canada in 2018 and the USA in 2019 by House of Anansi Press Inc.
www.houseofanansi.com

All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Distribution of this electronic edition via the Internet or any other means without the permission of the publisher is illegal. Please do not participate in electronic piracy of copyrighted material; purchase only authorized electronic editions. We appreciate your support of the author's rights.

Library and Archives Canada Cataloguing in Publication

Rustad, Harley, author
Big Lonely Doug / Harley Rustad.
Issued in print and electronic formats.
ISBN 978-1-4870-0311-1 (softcover). — ISBN 978-1-4870-0312-8 (EPUB). — ISBN 978-1-4870-0313-5 (Kindle)

1. Old growth forest ecology—British Columbia. 2. Old growth forest conservation—British Columbia. 3. Logging—British Columbia. 4. Ecotourism—British Columbia. I. Title.

QH106.2.B7R87 2018   577.309711   C2018-900673-0   C2018-900674-9

Library of Congress Control Number: 2018943835

Book design: Alysia Shewchuk
Map of Vancouver Island: Mary Rostad
Cover design: Alysia Shewchuk • Cover image: TJ Watt — Ancient Forest Alliance

_We acknowledge for their financial support of our publishing program the Canada Council for the Arts, the Ontario Arts Council, and the Government of Canada._

For Dad, who taught me how to name the trees

Contents

Prologue: A Seed
Chapter 1: The Ribbon
Chapter 2: Evergreen
Chapter 3: A Tree of Many Names
Chapter 4: Green Gold
Chapter 5: War for the Woods
Chapter 6: A Forest Alliance
Chapter 7: The Logger
Chapter 8: Last Tree Standing
Chapter 9: Growing an Icon
Chapter 10: Big Tree Hunting
Chapter 11: Tall Tree Capital
Chapter 12: A New Ecosystem
Epilogue: A Giant
_Notes_
_Acknowledgements_
_Index_

Prologue

A Seed

A calm wind ruffled the branches of some of the largest trees in the world. It twisted and turned through the forest, picking up scents of cedar and fir and spruce — even a faint tinge of salt, this close to the Pacific Ocean. Late afternoon sun had burned off any lingering mist, leaving a clear blue sky. Nearly every branch on nearly every tree held cones that dangled like ornaments. On one tree, a Douglas fir growing in a valley on Vancouver Island, a cone shook and bounced in the breeze. It began to open. The warm season had caused the cone's colour to gradually turn from green and sticky with sap to brown and papery dry, its thumbnail-shaped scales to separate, and the species' telltale trident-like bracts to curl — the final stage in the cone's year-and-a-half cycle to maturation. As the temperature fluctuated between the early autumn's hot days and cool nights, the cone responded accordingly, opening and closing so slightly it would be nearly imperceptible to the eye.
One degree of seasonal difference could spell disaster for the precious seeds held within the cone: too hot and they might dry out; too cold and wet and they might rot. As the sun began to drop behind the forested hills, and when the moisture in the air was just right, a seed dislodged from between the scales and began tumbling earthwards alongside the great trunk of its parent tree. Its feathery tail twirled slightly in the freefall towards a dense undergrowth of salal, sword fern, and huckleberry — a fall where the randomness of nature would determine its fate. The vast majority of the fifty thousand seeds that fell from each tree that year would die. They would be eaten by birds or squirrels or would simply not be lucky enough to find the optimal conditions to sprout. But this one survived. This one landed softly on a patch of moist, green moss growing on the rotting bark of a tree that had been blown over by a fierce wind a century before. Feeding off nutrients in the log, the seed pushed through the moss and into the light. The seedling, barely an inch tall, spread its first pair of glossy green needles. In time, the seedling would enter an exalted arboreal pantheon, which included some of Canada's biggest trees: western red cedars so wide that it would take ten people holding hands in a chain to encircle their bases; Sitka spruces so tall that their tops would rival towers of a city core; and Douglas firs so old they would outlive more than a dozen human generations. In the wet valleys would grow the epitomes of their respective species — great, hulking masses of nature. These trees would come to attract the attention of loggers, who would put axe and saw to trunk to harvest the warm wood that could be cut and manipulated for innumerable uses. These trees would be surrounded by protestors fighting for their protection, seeing more value in keeping them alive than in their immediate utility. 
And these trees would attract visitors who wanted little more than to feel awe and wonder in the shadow of one of nature's giants.

The seedling grew into a sapling — and then it grew into a tree.

Chapter 1

The Ribbon

On a cool morning in the winter of 2011, Dennis Cronin parked his truck by the side of a dirt logging road, laced up his spike-soled caulk boots, put on his red cargo vest and orange hard hat, and stepped into the trees. He had a job to do: walk a stand of old-growth forest and flag it for clear-cutting.

In many ways, this patch of forest was unremarkable. Cronin had spent four decades traipsing through tens of thousands of similar hectares of lush British Columbia rainforest, and had stood under hundreds of giant, ancient trees. Over his career in the logging industry, he had seen the seemingly inexhaustible resource of big timber continue to dwindle, and the unbroken evergreen that once covered Vancouver Island reduced to rare and isolated groves.

Known as cutblock number 7190 by his employer, one of the largest timber companies operating on the island, the twelve hectares represented a small sliver — around the size of twelve football fields — of the kind of old-growth forest that once spanned the island nearly from tip to tip and coast to coast. But this small patch of trees fringing the left bank of the Gordon River, just north of the small seaside town of Port Renfrew, was a prime example of an endangered ecosystem. Black bears and elk, wolves and cougars passed quietly under its canopy. Red-capped woodpeckers knocked on standing deadwood; squirrels and chipmunks nibbled on cones to extract seeds; and fungi the size of dinner plates protruded from the trunks of some of the largest trees in the world.

Cronin brushed through the salal and fern undergrowth, his jeans wet with dew that even during a hot summer forms every morning in these forests of perpetual damp.
Underfoot, mounds of moss covering a thick bed of decaying tree needles were soft and spongy. Sounds don't linger in these forests, arriving and dissipating quickly — absorbed by thicket and peat and mist before they're allowed to swell. For now, the forest was still. Cronin began the survey along the low edge of cutblock 7190, where he could hear the Gordon River thundering on the other side of a steep gorge. Come spring, salmon fry would be wriggling free of the pebbled river bottom and making their first swim downstream to open water; come fall, mature fish would hurl themselves upstream to spawn. The ancient trees, with their dense tangle of roots growing along the banks of the river, would filter out sediment and loose soil so that even during a rainstorm the forest kept the waters running clear. As a forest engineer, Cronin's job involved walking the contours of the cutblock, taking stock of the timber, and producing a map for the fallers to follow. At regular intervals of a couple dozen metres or so, he reached into his vest pocket for a roll of neon orange plastic ribbon and tore off a strip. The colour had to be bright to catch the eye of the fallers who would follow in the weeks or months to come. He tied the inch-wide sashes around small trees or the low-hanging branches of hemlocks or cedars to mark the edges of the cutblock. "FALLING BOUNDARY" was repeated along each ribbon. Timber companies in the province follow a forestry code stipulating that forest engineers must leave an intact buffer of fifty metres of forest up from a river, especially one that is known to be a spawning ground for salmon. Some engineers keep tight to those regulations to try to extract as much timber as possible from a given area. Known as "timber pigs," they work the bush under a singular mantra: log it, burn it, pave it. The sentiment is twofold: ecology is secondary to economics, and these forests exist to be harvested. 
But Cronin was often generous with these buffer zones, leaving sixty to seventy-five metres — as much as he could without drawing the ire of co-workers or bosses.

There were trees of every age: a handful of exceptionally large cedars and firs, many younger and thinner hemlocks, and saplings filling in the gaps. The sun broke through the canopy in long beams that spotlit sword ferns and huckleberry bushes on the forest floor. Patches of lime-green moss turned highlighter-fluorescent in the sun. Scattered clouds broke an unusually clear blue sky; Cronin was more used to working amid thick mist and showers on winter days, emerging from a forest soaked and chilled.

Once the boundary of the twelve hectares was flagged with orange ribbon, Cronin criss-crossed the cutblock, surveying the pitches and gradients of the land. It was a slow task, clambering over slippery fallen logs and through thickets of bush. At one point, he climbed up onto a log to determine where a road could be ploughed into the forest. In many cutblocks, the first step in harvesting the timber is to construct a road — a channel through the bush where logs can be hauled, loaded onto trucks, and transported to a mill. It takes a specific skill to see through dense forest and haphazard undergrowth and plot a sure course that will allow for the safest and easiest extraction of logs. Maneuvering over undulating land layered with deadfall and vegetation, Cronin marked a direct line through the forest with strips off another roll of ribbon, this one hot pink and marked with the words "ROAD LOCATION." He traversed any creek he came across and flagged it with red ribbon. When the flagging was done, the green-and-brown grove was lit up with flashes of foreign colour.

As Cronin waded through the thigh-high undergrowth, something caught his eye: a Douglas fir, larger than the rest, with a trunk so wide he could have hidden his truck behind it.
He scrambled up the mound of sloughed bark and dead needles that had accumulated around the base of the giant tree. Dennis Cronin looked up. The tree dominated the forest — a monarch of its species. Its crown of dark green, glossy needles flitted in the breeze well above the canopy of the forest. Like many of the oldest Douglas firs he had come across in his career, the tree's trunk was limbless until a great height. The species often loses the lower branches that grow in the shadow of the forest's canopy. Many of these large and old Douglas firs have clear marks of disease, with trunks that are twisted and gnarled. This tree's trunk sported few knots and a grain that appeared straight: it was a wonderful specimen of timber, Cronin thought. With his hand-held hypsometer, a device to measure a standing tree's height using a triangulation of measurements, Cronin took readings from the base and the top of the tree and estimated its stature at approximately seventy metres — around the height of a twenty-storey apartment building. Using a tape, he measured the tree's circumference at 11.91 metres, and calculated the diameter to be 3.79 metres; if felled and loaded onto a train, the log would be wider than an oil tank car. The tree appeared just shy of the Red Creek Fir, the largest Douglas fir in the world, located a couple of valleys away. Cronin didn't know it then, but he had not only stumbled upon one of the largest trees he had ever seen in his career — he had found one of the largest trees in the country. It was surely ancient as well, Cronin knew. A Douglas fir of such height and girth, growing in a wet valley bottom on Vancouver Island, could easily prove half a millennium in age. But to the experienced forester, this one looked much older. _A thousand years?_ he wondered. The logger could have moved on. He could have brushed his broad shoulders past yet another broad trunk and continued through the forest, leaving the giant fir to its fate. 
He could have walked through the undergrowth, across log and stream, to finish the job of mapping and flagging the cutblock. Fallers would have arrived; the tree would have been brought down in a thunderclap heard kilometres away, hauled from the valley, loaded onto logging trucks, and taken to a mill to be broken down into its most useful and most valuable parts.

Over forty years working on timber hauling crews and as a forest engineer, Cronin had accrued countless days working in the forests of Vancouver Island — he had encountered thousands of enormous trees over his career. But under this one, he lingered. He walked around its circumference, running his hand along the tree's rough and corky bark. He looked up at a trunk so broad and straight it would hold some of the finest and most valued timber on the coast.

Instead of moving on, Cronin reached into his vest pocket for a ribbon he rarely used, tore off a long strip, and wrapped it around the base of the Douglas fir's trunk. The tape wasn't pink or orange or red but green, and along its length were the words "LEAVE TREE."

Chapter 2

Evergreen

The western coastline of Vancouver Island ripples like the scalloped blade of a serrated knife, with hundreds of bays, harbours, and estuaries plunging deep into the island. Along the outermost fringe of the craggy shoreline, precariously perched on rocky points, little trees eke out an existence with roots in a crack of soil, bearing the full brunt of the near-constant lashing of storms off the Pacific Ocean. They cling to a life that never allows them to realize their full potential. Often, the side of the tree facing the turbulent water is entirely devoid of branches, or the trunk leans back from being relentlessly pushed by wind and spray. Even on a calm, clear day these small trees appear to shy away from the ocean — as if permanently wincing from the punch they know will come.
Behind these stunted specimens grow sentinels that guard their backs — trees both tall and broad, stoically awaiting the coming storms. They benefit from fertile soils, temperate climates, and nourishing rain. When tempests arrive, they do so with relentless force, battering and drenching without respite. But this wall of wood extending from the northern tip of the island to the southern rim endures — as it has for millennia. Here, seasons aren't marked by sudden changes of colour or temperature; instead they blend together seamlessly and subtly with demarcations disappearing in the mist. Even at the height of summer, while parts of British Columbia's West Coast glisten like a temperate California, the Pacific rim of Vancouver Island can remain enveloped in grey — locals affectionately call the month after July "Fogust." Waves that crash upon this coast don't immediately roll back to hammer it once again but are left vaporized — suspended in the air as thick banks of mist whose hoary tendrils penetrate deep into the forests. As the trees are draped in this damp, dense white, their spiky forms soften into green-blue silhouettes that gradually fade into nothing. The valleys of Pacific temperate rainforest can feel both Edenically inviting and primordially ominous. There is alluring comfort among these great trees that embraces your presence and softens your footsteps. What lies beyond the curtain of mist and trees are unknowns: great treasures to be found, or great dangers lurking. One of the largest trees in the country could be hidden a few dozen metres away, obscured in the fog, but so could a bear, a cougar, or a wolf. The canopy above disappears into a grey ceiling and the forests begin to appear manageable. Everything feels within reach. When the conditions are right, this coastline offers a spectacular sight. 
It is a brilliant culmination of three of the West Coast's most iconic and characteristic elements — sun, fog, and trees — meeting in perfect unison and producing a result that can be as awesome as a fireworks display or as haunting as an aurora borealis. On a cool, misty morning with a warm, clear forecast, the sun will rise behind the shoreline's forest wall as tree branches bifurcate the rays into hundreds of filigreed beams — each one illuminated by the lingering coastal fog — creating a natural laser light show. While the fog is the texture of this coastline, it is the rain that is the driving force of life. Vancouver Island rises from the Pacific Ocean like the back of a grey whale breaching the surface of the water. Storms brew in the ocean before hammering the island's west coast with wind, enveloping it in fog, and disgorging rain as the systems hit the Vancouver Island Ranges, mountains and hills that run like a spine down the island's length. This unsheltered rim can receive more than ten times the annual rainfall of the eastern side of the island — and gives truth to the name Canada's Wet Coast. Nourished by a near-constant supply of rain and sustained by a climate that rarely peaks above thirty degrees Celsius or dips below minus ten, the forest ecosystem of Vancouver Island is never offered a dormant season. For thousands of years, leading up to the arrival of European settlers, the island continuously produced an unbroken evergreen of tangled forest that filled almost every corner of the 31,285-square-kilometre island, and that formed part of a solid band of Pacific temperate rainforest ecosystem that fringed the northwestern edge of the continent. Half of the world's temperate rainforest grows along the west coast of Canada and the United States, from Alaska in the north, through British Columbia, and ending in northern California. 
The rest is found in small pockets around the world, in countries including Norway, Chile, Ireland, Japan, and New Zealand.

While "Pacific temperate rainforest" is a designation recognized by international organizations such as the World Wildlife Fund, for forest and wildlife management purposes British Columbia has, since 1976, separated itself into fourteen biogeoclimatic zones — depending on a region's climate, geography, and natural characteristics. These include the higher-altitude Alpine Tundra zone, the interior Sub-Boreal Spruce zone, and the rare Coastal Douglas Fir zone, found in small pockets of Vancouver Island and mainland B.C. Most of the coastal regions of the province from Haida Gwaii to Victoria fall into the Coastal Western Hemlock zone — named after the species of tree found most commonly throughout its range. This region is the wettest zone of the entire province, with annual rainfall of up to 4,500 millimetres. Here, mounds of moss growing on tree branches twenty storeys high hold moisture long after rains have ceased. Even when the sun has taken the place of the clouds, water can still fall like rain in these forests.

Nearly the entirety of Vancouver Island is in the Coastal Western Hemlock zone, with the exception of patches of higher-elevation alpine regions and a thin band of a Coastal Douglas Fir zone that runs along the southeastern coast down to Victoria. It is here that the trees, when left for centuries, can achieve truly tremendous growth — wider than an SUV is long, and taller than two blue whales stacked nose to tail.

* * *

The words "old-growth forest" evoke a Tolkien-esque grove of trunk-to-trunk behemoths separated by flat patches of easily traversable mossy ground. But in Pacific temperate rainforests, like those found on Vancouver Island, the reality is much more complex. There is no order in these forests: trees of every size and age grow here, and windfall litters the ground in various stages of decomposition.
Under a canopy of dark green foliage, thick salal bushes make one section impenetrable to pedestrians, next to another that opens into a small clearing. Some trees appear painted in moss, while grey lichen known colloquially as "old man's beard" droops from branches of the older trees like tinsel left long after Christmas. The largest trees pierce the canopy, allowing long beams of light to penetrate the forest floor. The term "old growth" had been used casually by forestry professionals in British Columbia's timber industry throughout the twentieth century. But in the 1970s, it began to be employed by ecologists and scientists as a loose definition of any forest undisturbed by significant human impact. Predominantly, the definition has come to encapsulate any forest untouched by commercial logging. But while historical signs of human impact within these forests are less obvious than a clear-cut, these ecosystems do show the scars of human presence. Before the arrival of European settlers, Vancouver Island's forests were not "untouched" or "unspoilt" landscapes, as they are often referred to. For as long as there have been humans on the island, trees have been felled, bark harvested, and innumerable aspects of the forests used. More than fifty First Nations have inhabited the island, collectively in three Indigenous peoples: the Kwakwaka'wakw to the north, the Coast Salish to the south, and the Nuu'chah'nulth along the west-central coast of the island. There are dozens of nations among them. Along the coast, many of the larger nations, each with populations numbering in the thousands, became fragmented after a great flood in 1700 — the result of a cataclysmic earthquake and tsunami that forced salt water nearly half a kilometre inland. 
Along the island's southwest coast, one nation — the Pacheedaht, or the "people of the sea foam" — rebuilt at two estuary sites around the mouth of the Diitiida River, also now known as the Jordan River, and within a large bay that centuries later would be named Port San Juan by Spanish colonialists. Throughout the twentieth century, as development and industry — mining, logging, and hydroelectric operations — increased around the Jordan River, the Pacheedaht saw the salmon stocks, which would run up the river to spawn every fall, begin to dwindle. The volume of fish became so depleted that the Diitiida community relocated north up the coast, to the head of Port San Juan. The Pacheedaht can trace their history along this coastline for millennia. Their recorded presence is more often found not by uncovering anything constructed — houses eventually crumble and rot away — but within the region's forests, within the very trees themselves that bear the scars of harvesting or logging. Wherever patches of Pacific temperate rainforest grow, archaeological evidence of the Indigenous population can be found. On Vancouver Island, these marks — a strip of bark peeled from a live trunk; a "window" carved into a tree to test its solidity for canoe building; an entire plank removed from the side of a tree — are most commonly found in western red cedars. These culturally modified trees (CMTs), as they are known, bear evidence of procedures that were done carefully, to remove a part of the tree without killing it, and have been documented across the island — on mountainsides and in valleys, along the coast and in the interior — and provide clues about how forests were used before the arrival of European settlers. Many archaeological finds, be they a fragment of pottery or a stone wall, point to a vague date range, a decade or two at best, but a CMT can offer a specificity of age down to the year, simply by a researcher counting the rings of the tree. 
In 1996, the British Columbia government issued a directive regulating how culturally modified trees should be handled and protected: any site that predates 1846 is protected under the Heritage Conservation Act. In 2001, a logger named George Halpert was the first person charged with cutting down protected CMTs — the trees, located near Terrace, British Columbia, dated back to the 1600s. He was sentenced to six months' probation and required to make a formal apology to the Kitsumkalum First Nation. When CMTs are found, often by timber workers, they must be recorded in their company site plans and the local First Nation notified, but CMTs dating to after 1846 hold no formal protection. Often, timber companies incorporate them in a riparian area — a buffer of trees left standing around a river, lake, or wetland — and exclude the historic trees from their cutting plan. The company must apply for a permit to alter the site plan if they want to cut down the CMT, which has to be agreed upon by the First Nation. But a First Nation's desire to understand its past can supersede the push to preserve every tree possible. A CMT left standing can only offer so much. It can show how a bark strip was cut away from a cedar to be turned into baskets or clothing. It can show how Indigenous peoples probed a live tree to determine whether or not it was fit to be turned into a canoe. But a felled tree can reveal much more. A cut log can be accurately dated through its rings, to reveal new information about the age, scope, or reach of a First Nation. In particular, was the CMT pre-contact or post-contact? In a forest south of Port Renfrew, the Pacheedaht culturally modified tree crew, which works in conjunction with regional timber companies, found 180 CMTs within a single thirty-hectare patch. The density was surprising, but it was the location — far inland from the seaside hamlet of Jordan River — that helped redefine the nation's understanding of their historical range. 
Culturally modified trees are as much a part of an old-growth forest as any great tree that has been left unmolested to grow for hundreds of years or any grove that has remained predominantly undisturbed by human impact. Rather than being "original" or "untouched" or "virgin," as these forests are often described, old-growth forests are complete, and every stage of growth — from seedling to skyscraper — is represented. Old-growth forests can be found across Canada — in the stubby spruces of the Maritimes, in the vast expanse of the northern boreal, and in the squat, high-altitude stands of the Rocky Mountains. The term can be appropriately applied to each forest, despite the radical differences in appearance. Even within Vancouver Island, forests that are considered old growth range from scraggly stands clinging to a mountainside to stunted forests growing out of bogs — as well as the more identifiable, high-productivity valleys that produce the country's biggest trees. The size of a grove's trees is not the principal factor in an old-growth forest. In drier climates, forests are kept youthful by fire that constantly scrapes the land clear and creates a blank slate for new growth. But along the western half of Vancouver Island, in the lee of the interior mountains, plunging valleys of deep green are rarely touched by the ravages of fire. Compared to the interior of the province, or even the eastern side of the island, lightning doesn't present much of a threat. There are groves that have never seen fire in their entire existence — ever since tree seedlings first sprouted out of glacial sediment as the last ice age retreated northwards around twelve thousand years ago. Without the refreshing cleanse of fire, the forests in these wet valleys are allowed to continuously grow with little disturbance. As a result, Pacific temperate rainforests hold the largest biomass — the total amount of flora and fauna, alive and dead — of any ecosystem on the planet. 
They even hold more biomass than forests found in the tropics, where the greater heat breaks down dead matter more quickly, in a rapid churn from life to death to life again. But in the rainforests of Vancouver Island, this cycle is decelerated — a fallen cedar log will gradually decompose, but can remain virtually intact for well over a century — and biomass is allowed to accumulate. Each discernible characteristic of an old-growth forest — the biodiversity, the complexity of structure, the presence of both live and dead matter — is a product of a singular unhalting force: time. In British Columbia's drier interior, a forest is considered old growth when it is more than approximately 150 years old. On the province's coast and islands, it is when a forest is more than 250 years old. But apart from age, neither the province of British Columbia nor timber companies have agreed upon a formal and universally accepted definition of what constitutes an old-growth forest. Various ecosystems around the province, and indeed the country, may satisfy the age requirement but look wholly different whether found in the wet valleys of western Vancouver Island or in the high-elevation coastal mountains or in the dry interior. Timber companies have used this imprecision to their advantage. They often speak of forests that have partially succumbed to the destructive natural forces of fire or wind within the 250-year window not as an "old-growth forest" but as a forest that may hold "old-growth characteristics," with several "veteran trees." It is a classification that suits their purposes: they aren't cutting tracts of old-growth forest, it is often presented; they are cutting younger, second-growth stands with a handful of old-growth trees. Clear definitions are crucial, for both environmental activists trying to protect this precarious ecosystem and timber companies trying to extract hard value from it. 
Monitoring deforestation is challenging when parameters and interpretations vary between local organizations and companies, and also around the world. On a global scale, according to the United Nations Framework Convention on Climate Change, what constitutes a forest can include an ecosystem with as low as 10 percent tree cover. There exist more than eight hundred definitions of "forest," based upon a number of factors, including location, climate, temperature, soil condition, and the presence of human activity.

On a local scale, the lack of a clear definition and set of parameters means environmental groups argue that there is very little old-growth forest left, while timber companies maintain that there is plenty, by including old-growth forests growing in bogs or in high-alpine regions, where trees are often stunted and difficult to access and are therefore of little timber value. Environmental activists become frustrated with this inclusion. To them, the immediate focus lies on the high-productivity areas that offer the most ideal conditions for trees to grow big. There lies a different kind of value: one that can be extracted not in terms of cubic metres of cut timber but in terms of cultural, social, and environmental returns.

* * *

On the floor of Vancouver Island's old-growth forests, life teems in every square metre. One researcher calculated that when he goes walking in these coastal forests, eighteen thousand invertebrates wriggle within the column of soil under each step of his size 9.5 shoes. In this verdant and lively layer, sword ferns erupt out of the damp, peaty earth as curly fiddleheads, before growing into waist-high thickets of bracken. Salmonberry bushes form tangled and impenetrable walls, while delicate huckleberries sprout from moss-covered trunks. Mushrooms unfurl overnight, revealing caps as pure white as fresh snow or as glossy black as obsidian.
Among it all, black bears bound through the bush to make their dens in hollow trees, while elk rub their antlers against the tree trunks and deer nibble on new shoots. Squirrels and chipmunks drop detritus from snacking on seeds into piles on the forest floor. And above, great trees grow so large they block out the sun.

The rainforests of Vancouver Island are one of the few environments on the planet that hold some of the world's biggest trees alongside large carnivores including mountain lions, wolves, and bears, and ungulates such as elk and deer. These forests are home to species that depend entirely on ancient characteristics. The marbled murrelet, a seabird that migrates along the coast, doesn't nest on cliffs but builds its nests out of lichen and moss in the very tops of old-growth trees — often only on Douglas firs that are more than 150 years old. The Queen Charlotte goshawk, a yellow- or red-eyed raptor, lives and nests in older forests along the coast from Vancouver Island to Haida Gwaii — and typically in the tallest trees. The hawk is classified as "threatened" under the federal Species at Risk Act, whose registry states that "continued logging of low-elevation, old growth coniferous forest" is the most significant danger to the bird's survival.

But this ecosystem is not defined by its black bears or elk, nor its tens of thousands of species of invertebrates, insects, and birds. These are forests, after all, where every aspect — the nourishing rain, the moderate seasons, and the supporting biomass — contributes to the gargantuan growth of this environment's signature feature. The western red cedar, the Sitka spruce, the Douglas fir — within this ecosystem, these trees grow not just voraciously but continuously, into the planet's largest expressions of their respective species.
The tallest Douglas fir ever measured anywhere in the world was a 126.5-metre behemoth — the size of one and a quarter football fields — found in 1902 in Lynn Valley, on the North Shore of Vancouver. To encounter a great tree in a forest, one that is three metres in diameter and a hundred metres tall, is to come face to face with one of nature's grandest creations. There are few things on the planet that have been growing and thriving for a millennium. To some, stepping into an ancient forest can evoke a sense of religious or spiritual awe, as if entering a church or mosque or temple — columns replaced with trunks, marble floors and pews with soft soil and leafy undergrowth, and altars with trees. Old-growth Pacific temperate rainforests are cathedrals of nature, awesome in their grandeur and yet humbling in their structure. Throughout history, such aged forests have represented something dark and mysterious, dangerous and unknown. But individual trees have always held an allure that provokes curiosity. They have been sources of wonder and magic, and of perceived wisdom through age. To see and touch these ancient trees is to confront centuries of history, technological progress, and social and cultural evolution — the light and the dark of human development. To stand next to a tree that has withstood everything nature and humans have thrown at it — fire, storm, industry, climate change — is to be reminded of our capacity to nurture as well as our capacity to destroy. To be dwarfed by a tree that would dominate most city blocks is to have any form of hubris quashed. Hemlocks, ubiquitous and opportunistic, sprout out of a dead log seemingly the hour after it falls to the forest floor. They grow ready and able to survive and thrive in nearly any condition, filling in the gaps. While not the most outwardly grand, they are sturdy and applicable, recognizable by their delicate tops that arch over as if bowing subserviently to the larger trees. 
Among the conifers grow the odd deciduous trees: alders, often the first to regrow in a clear-cut, spring up to attention like a company of soldiers with their spear-shaped leaves; and the colourful and colour-shifting maples ignite the green with flashes of fire and movement. Magnanimous and regal at the top are three species that make these forests Brobdingnagian in scale. A grove of Sitka spruces, with their columnar trunks and finely scaled bark, grow into natural pillars as true as any stonemason's creation. Cedars, sporting their multi-pronged crowns and fine bark, conceal warm-coloured wood, ready to be transformed into boats, baskets, building materials, and instruments. And Douglas firs, with their wide and hulking trunks, cracked bark, and dominant forms, protrude through the canopy like towers. For millennia, the Indigenous people on Vancouver Island have held the cedar in exalted status. They used the versatile tree — in its yellow and western red species — in innumerable ways. They split its bark and wove it into baskets and clothing; they harnessed the wood's flexibility and steamed it into boxes and containers so tightly fitted they could hold water; and they sought out the prime specimens with which to build houses and carve canoes. A single tree, with its light and rot-resistant wood, could produce products that spanned a spectrum of uses. The eighteenth- and nineteenth-century British settlers arrived with a more brutish approach. They brought with them smallpox, which wiped out nearly a third of the province's Indigenous population, and they brought people to build towns and cities, as well as a view that forests were to be managed, nature was to be controlled, and the wild was to be tamed. This view was reflected not in the soft and malleable wood of the cedar, but in the hard and sturdy wood of another tree. 
While this tree could be found across the province, it was along the coasts and on the islands where it achieved the pinnacle of its growth. Since its inception, around the middle of the nineteenth century, British Columbia's commercial timber industry has been dominated by one species, the tree of a thousand uses: the Douglas fir.

Chapter 3

A Tree of Many Names

On March 29, 1778, Captain James Cook sailed his ships, HMS _Resolution_ and HMS _Discovery_, into a broad inlet two-thirds up the west coast of Vancouver Island. The region would come to be known as Nootka Sound, based either on an anglicization of Nuu'chah'nulth, the name for the local First Nation, or on an Indigenous word meaning to "go around." The mountain Cook had seen at the head of the inlet was in fact a large island. The British vessels were in need of repairs, with broken masts and spars from their crossing of the Pacific via Hawaii, known then as the Sandwich Islands. The crew immediately went ashore in search of timber. "I raised my eyes to the sky and could see nothing but the worthless timber that covered everything," one British man remarked. Furs were the primary target, highly prized for their low weight and high value — as ships began to be used not only to transport explorers but also to return with valuable commodities in their holds — but it wasn't long before expedition financiers began to see value in the endless forests. Timber could be strapped to the deck of a ship and fetch a significant price in places like Western Europe, where trees of the magnitude found on Vancouver Island were the stuff of legend. When British captain and fur trader John Meares left Nootka Sound in 1788 with a ship laden with raw timbers, he remarked, "Indeed the woods of this part of America are capable of supplying, with these valuable materials, all the navies of Europe."
After several decades of burgeoning colonial settlement along the coast, in the spring of 1825 the Hudson's Bay Company ship _William and Ann_ sailed into the mouth of the Columbia River, the largest river that spills into the Pacific Ocean, located in modern-day Washington state. The vessel had left England nine months prior with a mission to resupply the forts and trading posts along this coast, one of the most remote corners of the company's expansion. On board, a man named David Douglas gazed across the tranquil waters to the impenetrable-looking forests that fringed the riverbanks. There lay trees taller and larger than any he had seen in his career as a budding botanist. His mission was simple in its goal but challenging in practice. A year before leaving England, Douglas had quit his job as gardener at the Glasgow Botanic Garden to accept a position at the Horticultural Society of London. Established in 1804, the society was beginning to expand and form a mandate: to promote the study, discussion, and discovery of new plant species. The society's burgeoning gardens had been created by the samples that had been collected by roving botanists sent around the world. There, in his first year tending the gardens, Douglas mastered the care of plants and trees and began pushing himself to learn more experimental techniques in breeding, cloning, and propagation. But Douglas had grown up tramping the highlands and moors around his hometown of Scone, Scotland, and his ambition could not be contained to the orderliness of a city. He set his sights on one of the coveted positions of "society explorer," a post that would allow him to leave the greenhouse and garden. These intrepid envoys were dispatched on botanizing expeditions to document previously unknown species of plant growing in the farthest reaches of the British Empire — South Asia, East Africa, the Far East, Australia, and the Americas. 
They returned with drawings, paintings, and descriptions that delighted and enthralled naturalists. More important to the society, however, was to return, if possible, with not just drawings but samples — seeds that could be sown and nurtured in the greenhouses of London, and eventually studied, classified, and propagated. With glowing recommendations from some of the field's most respected members, Douglas was deemed by the Horticultural Society a keen and ideal candidate to join a ship to the northwest coast of North America. The society bestowed on him an ambitious assignment: after acquiring a brief taste of the natural bounty that grew along this coastline — unusual flowers and trees of unimaginable heights — it now needed to confirm the documentation of species amassed on previous expeditions that it held in its archives. For an experienced botanist, collecting plants and seeds in a location like the west coast of North America was relatively simple, especially for a Scot working in a familiar climate. This was not the humid tropics, after all. But ensuring the survival of the samples during the three-month return voyage around Cape Horn to England was a complicated gamble. Months, if not years, of work could be destroyed in an instant. Moisture meant ruin, so samples, cones, and seeds were kept as dry as possible in order to stave off rot or mould. In one doomed instance, another Scottish botanist, Robert Fortune, had laboured over the collection of tea seeds from deep in China's interior. But his specimens, which he had planted in glass terrariums known as Wardian cases, were ruined after being opened upon arrival in muggy Calcutta, India, rather than the cool climate of hilly Bengal, which would have been similar to their origin. He had to replicate the whole ordeal before the plantations in Darjeeling could be started — and eventually produce world-famous tea. 
The European botanizing missions of the late eighteenth and early nineteenth centuries weren't purely scientific — sending out seemingly benign botanists into the forests and fields was also the act of a colonizer. David Douglas's job was to accurately describe the terrain and its potential — floral, faunal, and mineral — for the advancement of science as well as for the possible development of resource extraction. Along this northwest coast of North America, the borders between British, American, and Spanish conquest were just being drawn — erasing those that had been adhered to by Indigenous people for centuries — and the wealth that lay both below and above ground was beginning to be realized. As Douglas sailed up the Columbia River, he was overwhelmed with excitement at the possibilities on offer. "The scenery," Douglas wrote in his journal, "round this place is sublimely grand — lofty, well-wooded hills, mountains covered with perpetual snow, extensive natural meadows, and plains of deep, fertile, alluvial deposit, covered with a rich sward of grass, and a profusion of flowering plants." On his botanizing missions he travelled throughout the Columbia region, which covered what is now northern Washington State and southern British Columbia, with near free rein. Within six months of landing, Douglas had collected 499 species of flora, which he pressed and dried between sheets of paper and described in remarkable detail in his journal. For the species that were known to the botanists of the Horticultural Society, he added more detail or more accurate information. But many he documented were hitherto unknown to British botanists, including some species that are now iconic of the natural landscape of the North American coast, like the orange California poppy and several species of the multicoloured lupin; and shrubs such as salal, ocean spray, and Oregon grape. 
But it was the trees — including the peely-barked arbutus and the columnar Sitka spruce — that fascinated him deeply, in this land of never-ending giants. When the _William and Ann_ embarked on its return journey to England, within her hull were boxes and crates of Douglas's acquisitions: sixteen large bundles of dried plants, as well as preserved samples of birds and mammals. But the most significant chest contained more than a hundred varieties of seeds. The Scottish botanist was also a cautious man — he retained a small collection of seeds from some of his most prized species, which he intended to carry personally by land across North America as a precautionary measure in case the ship was lost at sea. He sent so many conifer seeds and samples back to England that he remarked to William Jackson Hooker, his mentor at the Horticultural Society, "You will begin to think that I manufacture pines at my pleasure." Some of the species David Douglas encountered were uncommon, like the purple wild hyacinth, and some otherworldly, like the sequoia — the tallest tree species in the world. While Douglas undoubtedly came across specimens of this species that were taller than any building in existence anywhere around the world, he returned time and again to another species of tree that he encountered throughout his travels and that grew in great quantities and to great heights — a large conifer with thick bark and dark green needles. It was a tree that the botanist would be best known for, and would eventually colloquially bear his name: the Douglas fir. 
He found the species growing in the two most common climates of this region: along the wet coasts and throughout the drier inland hills:

The trees which are interspersed in groups or standing solitary in dry, upland, thin, gravelly soils or on rocky situations, are thickly clad to the very ground with widespreading pendent branches, and from the gigantic size which they attain in such places and from the compact habit uniformly preserved they form one of the most striking and truly graceful objects in Nature. Those on the other hand which are in the dense gloomy forests, two-thirds of which are composed of this species, are more than usually straight, the trunks being destitute of branches to the height of 100 to 140 feet, being in many places so close together that they naturally prune themselves, and in the almost impenetrable parts where they stand at an average distance of five square feet, they frequently attain greater height . . . In such places some arrive at a magnitude exceeded by few if any trees in the world . . .

Douglas described one sixty-nine-metre-tall specimen he came across — one that he remarked was exemplary for its girth: 14.6 metres in circumference at its base. While walking in the fir forests that surrounded Mount St. Helens, in modern-day Washington State, Douglas noted: "A forest of these trees is a spectacle too much for one man to see." The collecting of cones from this tree proved difficult, due to its great height and lack of low branches. As a botanist, Douglas was unequipped to fall such a tree — or even one much smaller than the one he stood before — possessing neither the equipment (he carried only a small hatchet) nor the will to climb one. He tried using his gun to shoot at the high branches in an attempt to dislodge a cone, but the buckshot he had brought for hunting birds and ducks proved ineffective. He resigned himself to collecting cones from much smaller examples of the species.
But while the specimens Douglas encountered along the Columbia River were grand, the ideal climate for the tree was farther north — in the wet, lush rainforests of Vancouver Island, where the species had first been documented by another Scottish man. Botanist Archibald Menzies had tracked the island's forests — first in 1786 with the crew of the _Prince of Wales_ and again in 1792 under the captainship of George Vancouver aboard the _Discovery_. Near Nootka Sound, Menzies traversed Vancouver Island collecting samples of tree, flower, and plant. He described in his journal, and collected seeds from, one tree previously unknown to botanists in Britain. But the seeds Menzies sent to London never arrived; it wasn't until April 1826 that the first samples and seeds of this great conifer, sent by David Douglas, were successfully delivered to the Horticultural Society of London. Douglas returned to England in October 1827, but after two years, he boarded another ship and returned to the Columbia River. In all, he set a record for the most species ever introduced by a society explorer. "The botanical world was literally startled by the number and importance of his discoveries," wrote a biographer. He was admitted to the Linnean Society, the Zoological Society of London, and the Geological Society of London. On January 1, 1826, during his first visit to the northwest, Douglas wrote in his journal:

Commencing a year in such a far removed corner of the earth, where I am nearly destitute of civilized society, there is some scope for reflection. In 1824, I was on the Atlantic on my way to England; 1825, between the island of Juan Fernandez and the Galapagos in the Pacific; I am now here, and God only knows where I may be next. In all probability, if a change does not take place, I will shortly be consigned to the tomb. I can die satisfied with myself. I never have given cause for remonstrance or pain to an individual on earth. I am in my 27th year.
David Douglas died eight years later, on June 12, 1834, while hiking a volcano in Hawaii in search of new plants. A coastal Douglas fir, originally from the west coast of North America, is now the tallest conifer in Europe. The tallest tree growing anywhere in the United Kingdom is a Douglas fir that was planted in the 1880s in Reelig Glen, a grove in Scotland that lies two and a half hours from the birthplace of David Douglas.

* * *

For nearly two hundred years, what is now commonly called the Douglas fir held numerous taxonomic names. In the early 1800s, years after Archibald Menzies had sent back drawings and descriptions to Britain, the tree was classified as a pine and given the name _Pinus taxifolia_. Throughout the nineteenth century, the tree was bounced from classification to classification, including _Abies_ as a fir, _Tsuga_ as a hemlock, and _Pinus_ as a pine. During the expeditions of David Douglas in the 1820s and '30s, the tree went by several names, including _Pinus douglasii_ — which Douglas himself used in his journals, a bit self-servingly. While his name didn't stick, the specimens Douglas collected and shipped back to the Horticultural Society of London helped reveal a surprise about the tree — that the Douglas fir, the king of the firs, was not really a fir at all. In 1867, it was proposed to rename the tree _Pseudotsuga douglasii_ — _pseudo_, Greek for "false," and _tsuga_, Japanese for "hemlock" — to mark that, nearly eight decades after it was first documented, the tree had been found to be an imposter that had fooled many early botanists. (Today, the tree's name is often written hyphenated, Douglas-fir, as a signal of its outlier status.) The tree's cones hang below its needled branches, unlike those of true firs, which stand above. The species stands as an example of the trials and uncertainties of taxonomy. The confusion may have arisen because the Douglas fir appeared to have several varieties.
At higher elevations and in rockier soils, the tree grows to a considerably more stunted version than the gargantuan specimens found in damp valley bottoms. Along the coasts, salt-laden spray and air tinge the tree's needles a noticeably bluer hue than those of trees that grow away from the ocean, which appear a truer dark green. The smell of the foliage of those found in dense, lush valleys has a distinct citrus tone, whereas those that grow closer to the sea or on more exposed hillsides offer a more pungent, turpentine odour. These inconsistencies, among others, confused botanists and taxonomists for decades. In 1892, the _Journal of the Royal Horticultural Society_ highlighted the complicated issue of taxonomic naming with regard to David Douglas's work in documenting the Douglas fir:

It is unfortunate, and it seems unjust, that the discoverer of an object in natural history — one who, like Douglas, has the energy and daring to explore, the intelligence to comprehend when he has an object in sight that is new to science, and, moreover, the ability to describe and name it correctly, referring it to the proper genus in vogue at the time of publishing — it seems unjust that such a namer should subsequently lose the honours of discovery and of authorship, because, forsooth, another view of the relative importance of groups places the object in another category, and therefore another person, to wit, the one who so places it, becomes the author of the species. Such is the latest usage, however, based upon lately revived ancient laws of nomenclature; and, in the long run, it works less mischief than would a reverse rule, whereby pseudo-scientists could air their vanity by foisting upon us a host of unfounded terms at will.

In 1950, David Douglas was officially stripped of recognition for his work in helping to classify the species.
The tree was renamed _Pseudotsuga menziesii_ in honour of Archibald Menzies, who had been the first European botanist to document the tree when he encountered it on Vancouver Island. Still, in the end Douglas came out on top, in perhaps what matters most: vernacular rather than technical usage. At times, the tree has been colloquially called Oregon pine or red fir, but most people today know of the species by one name: the Douglas fir — the fir of Douglas.

* * *

Various large trees dominate the forests of Vancouver Island — western red cedars, Sitka spruces, bigleaf maples, western and mountain hemlocks — but the Douglas fir is the grand, albeit humble, icon of coastal British Columbia. Douglas firs can be found in B.C. in two regions, with geography and climates producing variations among the species. In the province's interior, the drier environment produces trees that are stubbier and shorter, appearing in a more classic Christmas tree form, with branches of needles growing from near the base to the very top. This variation is more resistant to frost and cold, as temperatures in the Rocky Mountains often dip well below freezing. Along the province's coast, by contrast, grows a variety of Douglas fir that thrives in wetter environments such as the deep and damp valleys of Vancouver Island. Because of the density of these forests, the older examples of coastal Douglas firs often shed their lower branches below the forest's canopy level, creating a clean trunk with a crown of branches and needles and resembling a Corinthian column. The combination of more stable climates, plentiful rainfall, and nutrient-rich soils produces specimens of more than double the size of their interior cousins. It is along the coastline and on the islands of British Columbia where the Douglas fir earns its place as one of the largest trees in North America, with historical records of some pushing forty storeys tall.
In 1895, a logger named George Cary felled a gargantuan Douglas fir outside Vancouver. The tree was said to have been nearly 8 metres in diameter at its base and 127 metres tall — about one-quarter the height of Toronto's CN Tower. The Cary Fir, as it became known, remained little more than a story of great accomplishment told among timber workers — for bringing down a tree of such proportions quickly became lore. Then, in 1922, a photograph supposedly of the legendary fir graced the August cover of _Western Lumberman_. The image depicted an enormous log lying on its side, upon which six men, two women, six children, and a baby sat or stood. One man, balanced on the sixth rung of a ladder leaning up against the log — apparently George Cary — was still several metres from reaching the top. After publication, however, doubts were raised as to the authenticity of the photograph. Some claim the image is not of a Douglas fir but of a coast redwood, commonly known as a sequoia. Many of these ochre-coloured giants are located in Redwood National Park in northern California — including Hyperion, the tallest living tree in the world at more than 115 metres in height. But foresters and experts were uncertain, and the debate about the species of the photographed tree raged, to no universally agreed-upon conclusion. Ecologists and silviculturists also disagreed on whether the standing trees behind the log appeared to be those of a Douglas fir forest found along British Columbia's coast or of a forest found in northern California. Rumours also bubbled that the image was simply a fake, created by superimposing out-of-scale human figures onto a photograph of a large log, and used as a tool by British Columbian businessmen to lure American investors to their province's timber ventures — an attempt at manipulation akin to evoking a nineteenth-century "strike it rich" frenzy with an image of someone holding a gold-painted rock the size of a grapefruit and calling it a nugget.
However, experts have concluded that any image manipulation done with the technology of the period would have been detectable. The source of the image remains a mystery, but the story, to many, was plausible. There are countless other trees of truly tremendous heights — well documented with photographs and anecdotes — that had been felled, having grown in ideal conditions in the valley bottoms of coastal British Columbia. Despite being moved by the scale, grandeur, and uniformity of the large conifer that would bear his name, David Douglas also recognized that underneath the thick bark was immense value. "The wood," he wrote in his journal, "may be found very useful for a variety of domestic purposes: the young slender ones exceedingly well adapted for making ladders and scaffold poles, not being liable to cast; the larger timber for more important purposes; while at the same time the rosin [resin] may be found deserving attention." In the winter of 1847, tests were conducted in the dockyard of Portsmouth, on the south coast of England, to determine if the Douglas fir logs from Vancouver Island were stronger and better suited as spars than those that shipbuilders had been importing from the shores of the Baltic Sea. The North American fir proved superior, and the British Admiralty placed an order paying up to a hundred pounds (around $12,000 today) for a single twenty-one-metre log, sixty centimetres in diameter. Throughout the nineteenth century, the Douglas fir was prized by the settlers who built along the western Canadian and American coast. In his 1918 book, _Steep Trails_, the Scottish-American naturalist John Muir, renowned environmentalist and father of the U.S. National Parks, praised the species as "tough and durable and admirably adapted in every way for shipbuilding, piles, and heavy timbers."
Loggers and millers found the wood dimensionally stable — it doesn't twist or warp when drying — while consumers prized its pronounced grain and warm colour, which made it ideal for flooring, doors, windows, and beams. Because of its resistance to fire, the timber was advertised to early twentieth century builders as preferable to steel, which would bend and buckle. The Douglas fir, by contrast, would char but remain intact. Many living veterans of the species bear the black scars of a fire that once raged through the forest. Streets were even paved with Douglas fir. Over the course of the nineteenth century, roads in towns and cities from Victoria to San Francisco were laid with wooden planks. In 1908, Waddington Alley — a narrow passageway connecting Yates and Johnson Streets in downtown Victoria — was paved with creosote-soaked blocks of Douglas fir, stacked with its strong edge-grain facing upwards. The alley underwent a full renovation in 1992, and continues to be maintained with wooden cobbles from Douglas fir trees harvested on Vancouver Island. The species grew to such iconic status that at Expo 67, the World's Fair held in Montreal in Canada's centenary year, the Western Provinces pavilion featured Douglas fir trees so tall their tops protruded out of the roof of the structure. Visitors passed under their branches and around a genuine logging truck fully loaded with wood, while sounds of a timber camp — chainsaws, falling trees — played through the speakers. As British Columbia's logging industry expanded, the species grew to become its number-one resource, with coastal and interior varieties of Douglas fir producing more timber than any other tree in North America. The coastal Douglas fir ecosystem is one of the most threatened in the country, in the hallowed company of the "Pocket Desert" in British Columbia, the Tall Grass Prairie in Manitoba, and the Carolinian Forest in Ontario. 
Today, 99 percent of the original Douglas firs on Vancouver Island and British Columbia's south coast have been logged.

Chapter 4

Green Gold

While driving the logging roads offers an intimate portrait of the state of Vancouver Island's forested landscape, the scope of timber harvesting is best realized from the air. Looking out of a plane window at a thousand feet up, the southern half of Vancouver Island appears as a patchwork quilt, simultaneously ragged and ordered from industrial logging. Some hillsides appear as if shaved by a fifteen-year-old boy with his first razor: small tufts and patches here and there, often in the most inaccessible places. Others are puzzle-like in the uniformity of the clear-cuts. Cutblocks are easily discernible in their various stages of use: freshly cut blocks are orange, recently cut are grey, regrowing are light green, and re-established are darker green. At first glance, many areas of the island appear covered in trees. But with a keen eye, the reality comes into focus: nearly every tree has been planted by human hands. There's a saying among West Coast ecologists: in a second-growth forest, a deer would have to pack a lunch. There just isn't enough to eat. Even at high noon, a replanted forest is a dark place: the uniform canopy formed by even-aged trees creates a thick barrier that blocks most sunlight from penetrating to the forest floor, resulting in an environment often bare of the substantial and complex undergrowth found in old-growth forests. The biodiversity of plant species is replaced with a monocrop of trees growing closely together and at the exact same rate, in unison, as grass does in a lawn. The complexity of structure is lost without the benefit of time and death. Second-growth forests are grown not to be self-regenerative or as a replacement for original stands — they are grown to be harvested. Every clear-cut will regrow, whether naturally over time or with the assistance of a silviculture program.
But many questions remain: What will a regrown 250-year-old forest look like? Will it have the same biodiversity or the same depth of biomass as one never touched by commercial logging? Will it have the same complex structure and interwoven networks? We have yet to arrive at a point where any commercial clear-cut has regrown long enough to tell. When a patch of boreal — the forest that covers much of Canada's sub-Arctic north — is harvested, what grows back will look relatively similar to its original form in around a century. In West Coast forests, however, estimates project that a replanted cutblock will begin looking as it once did in closer to half a millennium. On Vancouver Island, second-growth forests are allowed to grow for only fifty or sixty years before they are logged once again. For the vast majority of replanted regions, the plan is never to regrow forests like those that once stood. From the ground or from the air, it takes an even keener eye to see the vestiges of original forest on Vancouver Island. Often, they appear as a small patch at the top of a mountain or down a steep hillside — places more difficult for loggers to access. Provincial parks, with their protected trees, stand out of the landscape like Central Park does in an aerial view of New York City. But if hikers and visitors were to walk towards the edge of a provincial park, they would meet the end of the green. Sky would appear through the trees, and the reality of the extent of forest loss would become shockingly clear as they stepped into a clear-cut.

* * *

On November 24, 1848, a few kilometres west of the fledgling British colony known as Fort Victoria on the southern tip of the island, a waterwheel-powered sawmill began operations. It was the first mill in the territory that nearly a quarter of a century later would become the province of British Columbia.
From this Millstream facility, the first commercial shipment of timber ever sent from Vancouver Island across the channel to the mainland reached the Hudson's Bay Company outpost of Langley. In Nanaimo, a hundred kilometres north along the eastern coast of the island, the Hudson's Bay Company opened a second mill in 1854. While most of the mill's logs were cut by settlers, some were traded by local Indigenous communities. At that time, eight logs — each four and a half metres long, and forty centimetres in diameter at the narrow end — would fetch one Hudson's Bay blanket. The most famous mill was the Anderson sawmill opened by Captain Edward Stamp, a British lumberman, in 1861 along the Alberni Inlet. Within its first year, it was producing fourteen thousand board feet (a unit for measuring timber, twelve inches by twelve inches by one inch thick) of lumber every day, which was being shipped as far abroad as Australia and Peru. But it was the discovery of gold that ignited the region's timber trade. Stories of riches in the Cariboo in the Fraser Valley during the late 1850s, and in the Yukon's Klondike region in the final years of the century, fuelled the need for timber in order to turn backwater outposts and fledgling colonies into bustling towns with general stores, saloons, and hotels in support of the prospectors going north. When word of the initial discoveries of gold along the North American coast crossed the Atlantic, British botanists at the Horticultural Society of London remembered something they had seen in a shipment of trees they had received years prior. When they examined the collection of pines from California that David Douglas had sent, within each sample's bundled-up mass of root and soil were flakes of gold. At the time, the London botanists who received Douglas's samples had ignored the glittering flakes tangled in the roots of the seedlings.
They weren't interested in the potential riches that they could have exploited years before prospectors flooded the river valleys. Instead, they saw value in the fragile seeds and seedlings they held in their hands. But as the California Gold Rush grew, and news of the riches being earned began to circulate, both the botanists who had been in the field and their colleagues back in England who had received the gold-laden samples seventeen years prior became the target of blame. Their omission is understandable, considering small samples of gold had been uncovered across California throughout the decades leading up to the 1848 rush. Still, the oversight shows how focused these men were on floral rather than mineral discoveries; they couldn't even be distracted from their goals by the most glittering and beguiling of natural treasures. And it's unlikely Douglas himself realized the magnitude of his discovery when he was making his collections — whether he deemed his accidental mineral-finding insignificant or whether he was simply too preoccupied with documenting new species of tree and flower. His journals are noticeably absent of mentions of hitting pay dirt of that kind. Naturalist explorers of his ilk and era were discouraged from scouring the creeks, rivers, and caves for gold; they were botanists, after all, with a scientific mission and a mindset of gradual, rather than immediate, discovery. Decades later, in 1935, an American magazine quietly ran a tourism advertisement titled "More Curious Facts About Southern California," highlighting the counterintuitive discovery. Chief among these facts was a note that read: "First discovery of California gold was made _in England_ in 1831. 
(Found on the roots of trees sent back by a Scottish botanist.)" Douglas and Menzies saw value in the great trees that grew along the coast not solely as a resource or commodity or product, but in the details of their seeds and bracts, in the specific formation of their needles and the varying textures of their bark. But by the middle of the nineteenth century, eyes had begun to fall on Vancouver Island's trees in earnest. Once again, the search for gold led the initial exploratory push. "So exciting is gold hunting that men are willing to leave the certainty of good wages to take the uncertainty of poor ones, led away by the hopes of striking large ones," wrote botanist and explorer Robert Brown in an 1864 resource survey of the island. In the Nitinat Valley, one of the largest watersheds on the west coast of the island, just up the coast from Port Renfrew, he remarked how the terrain was rough but the vast quantity and quality of forest he encountered held standing wealth beyond the uncertainty of a gold rush. "The timber was however of the most magnificent description," Brown wrote. "Spars of Douglas pine and hemlock 100 to 150 feet in height & even higher, & from 2 to 3 feet in diameter, without a twig for 80 to 100 feet were shady in every direction, and the difficulty would not be in getting good ones, but in selecting among so many magnificent sticks . . . The timber alone would be a certain fortune." Interest was budding, but the vast tracts of big timber had yet to be commercially exploited on a great scale. Trees larger than colonists had ever seen were useless without a method of extraction. What was principally needed was a means of transporting logs from the remote valleys and mountainsides to the coast, where they could be processed at mills or loaded onto ships. The job of constructing and maintaining the island's railroad fell to the Esquimalt & Nanaimo Railway Company. 
As compensation for the task, in 1884 the province of British Columbia handed over more than 750,000 hectares of land to the company, which began constructing lines and trestles into the heart of the island. Empty railcars went in and returned laden with logs. Thirteen years after the British colony had become the sixth province of Canada, British Columbia was beginning to realize the magnitude of wealth that could be exploited in its forests. In 1905, the province began selling timber licences (TLs) — one square mile of forest for ten dollars apiece — to prospectors or "cruisers." These leases allowed holders to cut, process, and sell any timber harvested off their TL, but once the trees were gone, the land would revert to the province. In the century since, this relationship has remained virtually the same: approximately 95 percent of British Columbia is publicly owned or Crown land, with leases granted to companies or individuals through a tenure agreement managed by the provincial government. A Victoria-based timber operator named H. H. Jones wrote in _British Columbia Magazine_: "It was in 1906, when the timber fever was at its height! Cruisers, many of them of the tenderfoot order, were everywhere staking land, rock or water — anything that could be placed on paper, for the buyers were mostly of the same class as the cruisers: taking everything in sight, or, rather, out of sight, so long as it was called timber." Even as early as 1912 there was concern over the rate of harvest, marked by British Columbia's minister of lands, William Roderick Ross, advocating for the passing of the Forest Act on the floor of the provincial legislature: An epoch, sir, is drawing to a close — the epoch of reckless devastation of the natural resources with which we, the people of this fair young Province, have been endowed by Providence — those magnificent resources of which the members of this Government and this Assembly are but the temporary trustees.
That rugged rudimentary phase of pioneer activity is doomed to end. The writing is on the wall; the writing — to put the simple fact — is in this Forest Bill. Armed with that weapon, as forged by this honourable Assembly, the Government of British Columbia will undertake the work of forest conservation. The Forest Act appeared to signal an end to the Wild West of timber cruising in British Columbia, an era of "cut and get out." Ross spoke of "a past epoch condemned" and "a new epoch inaugurated" in terms of how British Columbians were going to see and value their forest lands. "We glance down the vista of the years to come, and, turning from that vision of the future, we call the world to witness that we legislate today," the minister concluded, "not only for ourselves and for the needs of this day and this generation, but also, and no less, for our children's children, and for all posterity — that we may hand down to them their vast heritage of forest wealth, unexhausted and unimpaired." While Ross spoke of conservation, he was actually more concerned with economics — with reinventing a forestry system that had led to a commercial shortfall for the provincial government, and therefore the public, for years. To spur economic growth, the Forest Act attached strict requirements to the timber company holding a licence. "All timber cut on Crown lands . . . shall be used in this Province or be manufactured in this Province into boards, deal, joints, lath, shingles, or other sawn lumber," the original 1912 Forest Act stated, noting a few exceptions such as telephone poles. What were known as "appurtenancy clauses" required some licence holders to invest not just in the mechanisms for resource extraction but in communities themselves. Timber companies were required to saw or pulp their logs at mills within the very area that was being logged. A tree cut in the town of Lake Cowichan would have to be milled in Lake Cowichan.
These rules led to a decades-long employment boom across Vancouver Island, turning backwater communities into thriving timber towns. Throughout the 1970s, Port Alberni had one of the highest per-capita incomes of any community in British Columbia — based primarily on the region's valley-bottom big timber. To manage the resource and develop its extraction and processing, the provincial government created an institution alongside the 1912 Forest Act, the Forest Service. It also began collecting "stumpage fees" — a form of tax paid by timber companies to the government. Initially determined by the number of trees cut, "per stump," the fee became based on the volume of timber cut off a company's leases, measured in cubic metres or board feet. By measuring a tree's circumference with a tape and its height with a hypsometer, forest engineers could estimate the volume of wood held within to assess the total value of a stand. Stumpage fees created economic incentives for the government to support its timber industry, even when cries of concern arose over both the depletion of a resource and the degradation of the environment. In 1918, the Commission of Conservation in British Columbia published a report of the province's forest inventory. Even then, the commission recognized a dire state: "When one considers that the total stand of saws material in the whole Dominion probably does not greatly exceed this amount now, the seriousness of this loss, which can be attributed very largely to public carelessness, becomes apparent." But the "loss" in question came not from logging but from fire, and it highlights the foundational principle for Canadian forestry at the beginning of the twentieth century: cut it before it burns. Forest fire was an unpredictable force but a known entity. It would return, to some degree, each hot summer — devouring what was becoming the province's most valuable resource.
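The volume estimate behind a stumpage assessment can be sketched in a few lines. This is a minimal illustration only, not the Forest Service's actual cruising tables: the conifer "form factor" of 0.42 and the sample measurements below are assumptions for demonstration.

```python
import math

def estimate_stem_volume(circumference_m: float, height_m: float,
                         form_factor: float = 0.42) -> float:
    """Rough standing-tree volume in cubic metres.

    Treats the stem as a cylinder of the measured girth, scaled down by a
    'form factor' to account for taper — a standard forestry shortcut.
    The 0.42 default is a typical conifer value, assumed for illustration.
    """
    radius = circumference_m / (2 * math.pi)
    basal_area = math.pi * radius ** 2          # cross-section of the trunk
    return basal_area * height_m * form_factor  # cylinder volume scaled by taper

# A hypothetical large Douglas fir: 6 m around, 60 m tall.
volume_m3 = estimate_stem_volume(6.0, 60.0)         # roughly 72 cubic metres

# Approximate conversion to the era's other unit: about 424 board feet per m3.
board_feet = volume_m3 * 424
```

Multiplying such per-tree estimates across a cruised stand gave the total volume on which the fee was levied.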
To lead the Forest Service, the government appointed as its first chief forester Harvey Reginald (or "H. R.") MacMillan. The Ontarian would go on to become one of the pioneers of private timber companies in British Columbia with the establishment of H. R. MacMillan Export Company Ltd. — a precursor to MacMillan Bloedel, one of the foremost timber companies working up and down the coast from 1951 to when it was sold in 1999. But in 1912, MacMillan began his career on the political side of timber with a report to the provincial legislature encouraging further and urgent development of the industry: The annual growth of the forests of British Columbia is even now, before they are either adequately protected from fire or from waste, certainly not less than five times the present annual lumber cut . . . It is not merely advisable to encourage the growth of our lumber industry until it equals the production of our forests — it is our clear duty to do so, in order that timber which otherwise will soon rot on the ground may furnish the basis for industry, for reasonable profits to operator and Government, for home-building and, in the last analysis, for the growth of British Columbia. This "clear duty," as MacMillan called it, framed much of British Columbia's perspective towards logging through the twentieth century — that it was the province's obligation to cut and use its primary resource before it was too late. The end of the First World War saw a rapid expansion of timber harvesting on Vancouver Island and across the province. During the first four decades of the twentieth century, whoever held leases on land could cut whatever was desired at a rate regulated entirely by what they could sell. As long as a licence had been granted by the provincial government, nearly any tree, or any amount of trees, was up for grabs. What held the period in relative check wasn't forethought or restraint or resource management, but technology. 
Logs were not nuggets of gold that could be transported with ease. The process of felling a tree and moving a log to mill was a tedious and tiresome act. * * * Cutting a tree began as an intimate task: one man and an axe. While pairs or teams often worked to take down a large tree, every swing and blow of the axe was felt and every chunk removed was hard-earned. But throughout the twentieth century, technological evolution and development in machinery refined the process of falling a tree from a plodding chore to a swift act. Photographs of loggers of the late 1800s and early 1900s, first with axes and then with crosscut handsaws, depict men balancing on springboards — planks of wood wedged into notches carved into a tree's trunk as high as three metres off the ground — slowly chipping away or gradually cutting into a behemoth fir or cedar or spruce. When the tree came down, it would have been the culmination of days of work — marked by a soft crack as the trunk finally gave way, a silence as the tree floated weightless for a moment, and an earth-shattering boom as the log struck the ground. The 1970s saw the development of feller bunchers — backhoe-like vehicles with extendable saw arms capable of chopping, de-limbing, and cutting trees to length. These efficient machines cut forest like a combine harvester cuts wheat, and made it possible for loggers to clear an entire hectare of trees on level ground within a day. But in the Pacific temperate old-growth forests of coastal British Columbia, there is no machine capable of felling trees on such uneven ground. And more simply, the trees are just too big. Every great tree growing on Vancouver Island is brought down by hand — by a logger standing beside a colossal trunk with a saw. Over the course of the twentieth century, the falling of a tree changed dramatically. In 1905, Samuel J. Bens of San Francisco, California, began experiments with mechanizing the laborious task of felling a tree by crosscut saw. 
His goal was to create a machine — an "endless chain saw" — capable of taking down his state's giant redwoods. Over the next two decades, various versions of a gasoline-powered small-engine saw with a sharp-toothed chain were tested. One, developed in 1918, weighed 210 pounds and was mounted on a 4.5-by-6-foot frame. Still, it wouldn't be until after the Second World War that anything resembling a modern chainsaw began to appear on the Canadian market, and by 1960 it still weighed more than twice as much as a twenty-first-century model. Each development in technology allowed for more timber to be more effectively harvested — turning what might have taken a full day's work into a task accomplished in mere minutes. While it could take five hundred years for a tree to reach fifty metres in height and two metres in width, it could take five minutes for a skilled faller with a chainsaw to bring it down. Falling big timber is an act similar to hunting big game. There is a quest to locate the prime of the species; there is a gradual approach towards a calm moment when the trigger is pulled; and there is a rumble and crashing to earth when the great beast is bagged. Except, on the open savannah, an elephant or a rhino can flee from hunters, possibly even fight back. In timber there is no chase — just a search and a kill. The men who stalked the forests of British Columbia in search of big timber weren't the legendary lumberjacks of eastern Canadian folklore, magnanimous in iconic plaid shirts while running logs down the river. The lumbermen of the west were _fallers_ — who lived and breathed the bush, without glamour or glory. They were rough-and-tumble men of work who burned their years with hands perennially coated in pitch, hair smelling of cedar, and burned their earnings on whiskey and sex in the saloons of Vancouver and Victoria. Companies capitalized on this machismo.
In the 1970s, one of the largest chainsaw manufacturers, Stihl, produced an advertisement featuring hard-hatted fallers standing on an enormous cedar stump while cutting into another tree. The tagline read: "We Came. We Saw. We Conquered." Despite the technological limitations of the early twentieth century, by 1920 British Columbia was producing half of all timber in the country, surpassing production in every other province. Locating the towering stands of timber took great effort, and falling trees was laborious; but the greatest challenge was engineering a way to move a log weighing thousands of kilograms out of the forest. At every level, working in timber has been one of the most dangerous jobs, with forestry still recording a higher percentage of work-related deaths from injury than any other sector in British Columbia. One of the first jobs for someone entering the industry was that of chokerman, whose responsibility was to set (or "choke") cables around logs so that they could be pulled out of the forest. In charge of a team of "chokers" was a hooktender who oversaw the crew working the lines. Communication between workers was done by a series of commands shouted over hundreds of metres through the forest. Logs would be rigged up with cables and hauled by horse trains to a centralized collecting point and then loaded onto railcars. Because of the rough terrain and roads often strewn with slash, horses wore shoes studded with hobnails, just like a logger's caulk boots. In 1897, the first steam-powered engine — called a "donkey engine" — was introduced to West Coast logging, replacing the animal trains as the primary means of hauling logs. It was more reliable, more efficient, and didn't need to be fed. A simple engine that turned a spool to recoil a cable, the donkey engine transformed the ease with which logs were acquired from previously inaccessible locations.
But if a log became stuck or pinched in the rough terrain, disaster could strike when it finally broke free and the tension in the cables propelled the log like a multi-tonne bullet through the forest. To alleviate this danger, timber crews looked to the trees themselves. Sometimes, crews employed the strength and stability of large standing trees to help haul logs from a cutblock, by wrapping a cable around the trunk to use it as an anchor or yarding point. Standing trees used in logging operations needed to be sound and secure, healthy, and large enough to withstand the stress. The most dangerous, and thus highest-paid, position was that of a high rigger. Using specialized boots with metal spurs attached at the inner ankle, and a loop of cord around the trunk, a high rigger would ascend a selected tall tree. With a hand axe and a single-person crosscut saw, the rigger would denude the tree of branches and chop off its top to create a freestanding pole — a "spar" — that was secured to the earth by its own natural root system and stabilized through a set of guy-lines. A series of cables and pulleys were rigged to the spar, allowing logs to be lifted into the air. This "high lead" system was much more stable and controlled than dragging a log along the ground, and removed any danger of one becoming caught on debris or rough terrain. Increased mechanization brought heavy machinery, namely the yarding tower, in which up to four cables could haul logs simultaneously to an extraction point. At times the process of falling timber can appear clumsy, with trees tumbling this way and that, one on top of another, to the ground. At times it can appear indiscriminate, as if a bulldozer and wrecking ball would suffice. But there is a method, in which minuscule adjustments can be employed by the faller so that a tree lands exactly where intended.
At other times the network of cables and pulleys extracting logs from the base of a mountain slope to a road appears as if part of a high-wire act in a big top tent. Cables known as "skylines" can span a valley, with logs attached to lines dangling off a mainline and gradually carried out of a cutblock that can be hundreds of metres away. It is a delicate process, from falling to extraction, where mistakes at any stage in the operation could lead to death or maiming. Fallers need to be cautious of dead treetops or limbs — known in the industry as "widowmakers" — that could break off and plummet earthwards. For choker crews, in attaching and managing cables there are risks of logs twisting or rolling, of one becoming loose and tumbling free. Alongside the twentieth century's technological evolutions of how trees were felled and collected were developments in transportation — how to move a log from a deep valley at the heart of Vancouver Island to the mills on the coast. Even before the modern chainsaw revolutionized the industry, falling a tree was a comparatively straightforward task. But a fallen log in the middle of a forest is of no use to any timber company. Along the coasts of Vancouver Island, companies would seek out stands growing on hillsides where the trees, once felled, would tumble down by their own great momentum into the ocean. They would then be rafted together into large booms and manoeuvred around the island to the mainland mills. But the process was a delicate one: at any moment the log could be set loose, and in a flash begin its violent slide. A logger would have to keep an eye out for an unexpected shift, and deftly leap to safety or else be caught by a passing branch. When a log is loose, it "runs" down the hillside with such power and such force it can shatter another large tree if hit head-on or plough through smaller trees, creating a wake of destruction similar to a jetliner crash-landing in the forest.
Groves deeper in the interior of the island were accessed by railcar. The area around Shawnigan Lake, just outside Victoria, holds one of the most spectacular remnants of twentieth-century logging engineering anywhere in the world. In 1920, Canadian National Railways completed the longest wooden railway trestle not just anywhere in the country but in the entire Commonwealth, at 188 metres. It was also one of the tallest in the world, built 44 metres above the river. The Kinsol Trestle saw millions of logs of some of Vancouver Island's finest old growth shipped by railcar across its breadth, until it fell out of use in the 1950s. Following the Second World War, many deactivated military trucks were sold cheaply to timber companies and converted into logging trucks that soon replaced the railcar as the primary means of transporting logs. This flexibility allowed companies to access groves that were previously unreachable. As the forests of the eastern half of Vancouver Island began to dwindle, and with new technology and methods of harvesting, timber companies began to slowly expand into the lush rainforests of the western edge, where some of the province's biggest and most valuable timber was found. Increased mechanization in falling, and a transition away from more cumbersome means of transportation, turned British Columbia's timber industry into a commercial harvest that was the driving force of the province's economy. At the industry's pinnacle in 1966, B.C. produced nearly three-quarters of all sawn lumber in Canada. Despite ups and downs over the subsequent decades, by the end of the twentieth century, forest products accounted for 30 percent of all British Columbia's exports, the industry was producing more than $10 billion in total revenue, and one out of every ten jobs in the province was related to timber.
Every old-growth cedar, spruce, and fir was vital to the industry, but each ancient tree's value extended well beyond what could be felled, milled, and sold — into the ground below and the air above. * * * At first glance, the old-growth forests of Vancouver Island seem defined by life: deer and elk browsing on the tender tips of forest grasses, red-and-white spotted mushrooms erupting from the earth, and towering trees trembling in the wind. But it is death that makes these forests complex. "The woods are full of dead and dying trees, yet needed for their beauty to complete the beauty of the living . . . How beautiful is all Death!" wrote John Muir in his journals. That is the positive death of the coastal old-growth forests of British Columbia, the kind that feeds the next round of inhabitants, both floral and faunal. The churning in these forests is nearly imperceptible. It would take years of patient study to notice the movement. But it is there — turning and folding and regenerating without respite. While some of the giant trees live for many centuries, even a millennium, they will fall. And when a structure as tall as an apartment building comes down, shattered by lightning or forced over in a gale, it crashes to the forest floor with thunderous applause — for the metre-tall saplings growing in the dappled shadow welcome the sunlight that beams through the new gap in the canopy. Every tree that falls naturally in an old-growth forest remains. Their hulking corpses sometimes break and shatter, while others hold nearly intact from root to tip. Instantly, after a fall, the moment the forest returns to silence, that log becomes a feeding ground for the pileated woodpecker and the red-tailed chipmunk, and home to the black bear and the marten. It also becomes a "nurse log" — a rotting tree that offers its tonnes of nutrients to opportunistic seedlings. 
A common sight in an old-growth forest is a dead cedar hosting hundred-plus-year-old hemlock trees whose roots grow out of and hug the log, the cedar's natural preservative — an anti-fungal and anti-bacterial chemical called thujaplicin — keeping the log nearly entirely intact for more than a century. Each natural death begets life. But these forests also play host to another kind of mortality. Unlike those trees that tumble naturally and become part of the biomass, those felled by human hands are hauled away. Not only is the life lost, but so is the life-giving death. Since the early days of logging in British Columbia, one sentiment has been common in the timber industry: that old-growth forests are "decadent" and have a shelf life with an expiry date. If not redeemed, the value of the forests will be lost. By the middle of the twentieth century, this view became firmly entrenched as the prevailing perspective of timber companies. In 1949, a classification system was proposed to determine "cull factors" for older trees. Features including broken tops, swollen knots, burls, and trunk cracks were used as examples that a tree was rapidly losing its value. The governing ideology was that old-growth forests were diseased and dying ecosystems that needed to be converted into fresh, lively, and vibrant new stands through harvesting and replanting. While words like "decadent" and "over-mature" eventually faded from common use, in practice the perspective remains. That argument has often been presented by members of the logging community — that the big, old trees are not as ecologically valuable as younger ones. On the surface, new seedlings planted in a clear-cut appear to grow more vigorously than the grandparents that once stood in their place. However, these surface-level conclusions are based on little more than casual observations, as opposed to scientific fact.
Through the latter half of the twentieth century, scientists began looking closely at the ecological mechanisms and forces at play in these old-growth forests, making discoveries that would eventually redefine how we understand the ecosystems. In a global study of 403 temperate and tropical tree species, including plots of Douglas fir, Sitka spruce, western red cedar, and western hemlock planted as early as the 1930s, researchers found tree growth to accelerate with age, rather than slow down, in 97 percent of the species examined — busting the myth that ancient trees are ecologically decadent. This rapid growth means that the oldest trees sequester an increasing amount of carbon with every passing year, becoming more and more important as sentinels against climate change. Replanted, second-growth forests, therefore, cannot match the productivity and ecological value of an unmanaged forest. Even on a miniature scale these big trees play a crucial role. Researchers uncovered species of plant and insect that are endemic to the forest canopies of old-growth trees, living only in suspended soil on tree branches hundreds of feet above the ground. From moss on the forest floor to the tip of the tallest tree, this layer teems with obvious life. And here also lies the value: timber, pulp, and fibre for the loggers; or trees and forests that could be protected for environmentalists. But underground — supporting this layer of giants — is a structure as complicated, vast, and crucial as anything sprouting or frolicking above ground. In 1997, a professor of forest ecology at the University of British Columbia named Suzanne Simard published a study that popularized a radical notion of the depth of ecological relationships in forests. What started as a chance observation ended up revealing a profound interconnectivity between trees. It began when she was a child, on holiday in British Columbia's interior rainforests, when her family dog, Jiggs, fell into an outhouse. 
A messy rescue ensued, with members of the family putting mattock and shovel to earth in a mad dash to free their beloved beagle. As the pile of dirt grew, Simard's attention turned to the multicoloured layers of excavation and the dense mass of tree roots. Jiggs was eventually freed, but Simard focused on a filigree of white strands running through the soil. The strands — a kind of fungi — were about the diameter of a human hair and exceptionally fragile. The moment of panic had turned to inspiration for Simard, who went on to complete a degree in forestry at the University of British Columbia and would eventually teach and work in silviculture across the province — in a job assessing the successes and failures of reforestation after logging. Simard's fascination with root networks led to work at the university, studying the relationship between fungi and trees. Fungi were once considered parasitic to a tree, but experiments by Simard and her team demonstrated a complex symbiotic relationship, called mycorrhiza. While the term had been coined in 1885 — from the Greek _mykós_ for "fungus" and _rhiza_ for "root" — and associations between fungi and trees had been documented, it wasn't until the mid-1990s that the depth of this relationship began to be realized. In an experiment, Simard and her colleagues set out to map these mycorrhizal networks. In the relationship between fungus and tree, thin strands of mycorrhizal fungi attach on a tree's roots and spread throughout the forest to connect with other fungi that have colonized other trees. After injecting a tree with a harmless radioactive isotope, they were able to trace the isotope using a Geiger counter, as the tree photosynthesized carbon dioxide into sugars. As the sugars descended the tree's trunk, so too did the isotope — into the ground, into the network of mycorrhizal fungi, and up into neighbouring trees. 
The strands of fungi were in fact tubes of a superhighway tunnel system, a massive underground network that connected trees together. Both organisms benefit from this relationship: the trees provide much-needed sugars to the fungi, and the fungi absorb nutrients, including nitrogen and phosphorus, from the soil that they provide to the trees. Nitrogen, in particular, is a key building block for trees to grow big; without the mycorrhizal fungi, the trees of the Pacific temperate rainforests of Vancouver Island would never achieve such great heights. The interconnectivity extends throughout the entire forest column, below ground and above, best illustrated in the relationship between the region's top three natural icons: trees, bears, and salmon. Along the coast, streams and rivers connect the Pacific Ocean to their mountain sources and provide the breeding grounds for salmon to lay their eggs. Old-growth forests that fringe these rivers and estuaries are key to a successful salmon run, by stabilizing the banks with their root networks and filtering meltwater that trickles down from the mountains or rainwater that falls throughout the forest. With healthy salmon populations come healthy bear populations. But when the salmon are plentiful — leaping in great numbers against the current to reach the cool, calm pools in which they lay their eggs — the bears become selective. It is common to see a bear catch a fish in its mouth, carry it ashore, and feed only on the richest and fattiest part: its brain. After a feed, corpses of headless salmon lie scattered along riverbanks and become a source of food for scavengers such as ravens and crows. But the protein-rich flesh also decomposes into the soil. 
Typically, the nutrient injection of the salmon's natural death cycle benefits plant growth within a thin riparian zone around a river, but bears have been seen carrying fish nearly a kilometre into the forest — as if they were gardeners dumping fertilizer directly onto the bases of trees. And because salmon can travel rivers to return to spawning grounds up to a thousand kilometres inland, this relationship can penetrate far from the immediate coastline. Scientists could not only imagine a benefit to tree growth, but were also able to document a particular nitrogen isotope found in salmon within the very rings of the trees themselves. As the salmon decomposed, the mycorrhizal fungi absorbed the nitrogen and fed it to the trees. Not only did this process provide a historical record of which years saw a salmon boom, but it also revealed a measurable and profound connection between three key features of coastal forests: the bears eat the salmon; the decomposing carcasses of the salmon feed the trees; and the trees stabilize the habitat for the salmon and provide homes for the bears. At the pinnacle of this triangle are the large Sitka spruce, western red cedar, and Douglas fir. The level of connection extends beyond resource-sharing. In times of drought or seasonal change, trees can use their mycorrhizal network as a storehouse for sugars accumulated during growth seasons, until they are needed. The networks have also been found to be used for a kind of arboreal 911 call between trees. When a tree is attacked by an insect, it can send a chemical signal through the mycorrhizal network to its neighbours, triggering them to release a defence mechanism such as a volatile organic compound that is harmful to the insect.
The concept of a whole ecosystem with organisms dependent on one another was not new, but research by scientists such as Suzanne Simard helped change how forests had previously been seen: as clusters of trees growing independently and even competing for resources, space, and light. There is less a life-or-death race to the top than a collaborative effort for success. What became clear was that the largest trees were the nuclei of this network — drawing nutrients from their great height to sustain those growing in the shadows below. Over time, Simard discovered that the largest and oldest trees in the forest contained the most expansive networks of mycorrhizal connections. She found one Douglas fir to be linked with forty-seven other trees in its neighbourhood. "Although trees from all cohorts were linked, large mature trees acted as hubs with a higher degree of connectivity," Simard and her colleagues wrote in their cleverly titled follow-up study "Architecture of the wood-wide web" in 2009. The largest mature trees had the most-developed root systems and therefore the deepest networks, and "they accounted for most of the connectivity and centrality among nodes in the network." Also critical to the optimal functionality of this network is a range of ages among the trees. A replanted, second-growth forest composed of single-age trees does not benefit as much as one with a spectrum of generations. When an old-growth forest is clear-cut, more than the trees disappear. Without the trees providing sugars, the mycorrhizal fungi die — and it can take years, if not decades, after a cutblock is replanted for the underground network to re-form. Over time, fungi may eventually creep in from neighbouring forests, but the young seedlings are on their own below ground as well as above. They are tasked with not only regrowing into a forest, but also helping to re-establish a subterranean network critical to the health and sustainability of the broader ecosystem.
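Simard's hub-tree finding is, at heart, a statement about degree centrality in a graph: trees are nodes, shared fungal connections are edges, and the "hubs" are simply the nodes with the most edges. A minimal sketch of that idea in Python — every tree name and link below is hypothetical, invented purely to illustrate the computation:

```python
# Sketch of the "wood-wide web" as a graph: trees are nodes, shared
# mycorrhizal fungi are edges. The names and links are hypothetical,
# invented only to illustrate how a "hub" tree would be identified.
from collections import defaultdict

links = [
    ("old_fir", "seedling_1"), ("old_fir", "seedling_2"),
    ("old_fir", "mature_cedar"), ("old_fir", "seedling_3"),
    ("mature_cedar", "seedling_2"), ("seedling_1", "seedling_3"),
]

# Count how many connections each tree participates in (its degree).
degree = defaultdict(int)
for a, b in links:
    degree[a] += 1
    degree[b] += 1

# The most-connected node plays the "hub" role Simard's study describes.
hub = max(degree, key=degree.get)
print(hub, degree[hub])  # the old fir, with four connections
```

In the actual study the networks were mapped by genotyping fungal and tree DNA, not by listing edges; the graph model is just a convenient abstraction for the "hubs with a higher degree of connectivity" claim.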
Simard's 2009 study concludes: "To ensure that old-growth Douglas fir forests remain resilient and self-regenerative following disturbance, our findings support a management approach that conserves large trees or groups of trees and their mycorrhizal fungal associates." When left standing, the oldest and largest trees of these coastal forests play perhaps the most critical role: as stewards, they ensure the viability of the forest, both in the present and in the future. The argument to protect the largest trees isn't purely a sentimental one. They aren't simply the last of their kind or an example of a species that we will never see again if completely harvested — these big trees are vital to the stability of our coastal forest ecosystems. Their vast networks of roots bind the landscape together and offer the foundation on which every kind of smaller life — mammals, fish, insects, other trees — can thrive. In 1954, the British Columbia Ministry of Forests inventoried Vancouver Island's forests and produced a map of the existing supply. Its assessment placed the number of hectares of old-growth forest — those untouched by commercial logging — at 1.69 million, or approximately half of the island's total area. Over the following four decades, 24,000 hectares of the island's old-growth forests were cut annually — an amount equal to sixty Stanley Parks, Vancouver's iconic urban green space. By 1990, only 829,000 hectares of old-growth forest remained; more than half had been logged. And in the southern half of the island, where the hub of the region's timber history had buzzed for a century, it was estimated that only 25 percent of the original forests that had been standing in 1954 remained. In the early 1990s, environmental groups were estimating that if the annual cut continued at the rate and volume it had maintained over the preceding four decades, Vancouver Island's unprotected old growth would be eliminated by 2022.
All that would remain would be the few patches in provincial parks and recreation sites. The rest would be replanted clear-cuts in various stages of regrowth. Between 1990 and 2015, the island saw its remaining old-growth forests decline by approximately 30 percent. By comparison, the Food and Agriculture Organization of the United Nations found that over the same twenty-five-year period, primary forests — those that are "globally irreplaceable with unique qualities that make significant contributions to biodiversity conservation, climate change mitigation, and sustainable livelihoods" — located in tropical countries declined by only 10 percent. While deforestation in Latin America and Southeast Asia often attracts attention, the forests of Vancouver Island are disappearing at a faster rate. In 2016, the British Columbia chapter of the Sierra Club, the U.S.–based environmental advocacy organization founded in 1892 by John Muir, announced that Vancouver Island's old growth was in a "state of ecological emergency." The organization warned that catastrophic ecological damage, including species loss, was imminent if the timber industry was left unchecked. The Sierra Club B.C. subsequently released a map of the island's remaining old-growth stands, calling them "as rare as white rhinos." In the island's highly productive valleys, decades of commercial logging had reduced this specific slice of old growth — where the oldest, largest, and consequently rarest trees grow — to less than 7 percent of what originally stood. What had occurred was a century of furious harvesting on the island, fed by an overarching notion that these trees, a seemingly inexhaustible resource, would never be depleted. To sail along the serrated west coast of the island, watching the wall of grand trees that buffer the storms pass by, is to be misled.
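The inventory figures quoted above are internally consistent, which a few lines of arithmetic confirm. This is a back-of-the-envelope sketch using only the numbers stated in the text, not an official forestry calculation:

```python
# Quick consistency check of the published figures (all inputs are the
# numbers quoted in the text; the calculation itself is illustrative).

OLD_GROWTH_1954 = 1_690_000   # hectares of old growth inventoried in 1954
ANNUAL_CUT = 24_000           # hectares cut per year over the following decades
OLD_GROWTH_1990 = 829_000     # hectares reported remaining in 1990

# 36 years of cutting at the reported rate:
projected_1990 = OLD_GROWTH_1954 - ANNUAL_CUT * (1990 - 1954)
print(projected_1990)  # 826000 hectares — within 1% of the reported 829,000

# Share of the 1954 supply gone by 1990:
share_logged = 1 - OLD_GROWTH_1990 / OLD_GROWTH_1954
print(round(share_logged, 2))  # about 0.51, i.e. "more than half had been logged"

# At the same rate, the year the remainder would be gone:
years_left = OLD_GROWTH_1990 / ANNUAL_CUT
print(1990 + round(years_left))  # mid-2020s, close to the activists' 2022 estimate
```

The small gaps between the projected and reported values reflect rounding in the published figures; the "eliminated by 2022" estimate also applied only to the unprotected portion of the remaining old growth, which is why it lands a few years earlier than the naive extrapolation.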
These are among Vancouver Island's finest forests, but they are little more than a mirage — a thin fringe of lush, complex rainforest that obscures a harsh reality. Behind that wall of green gold lies the truth of Vancouver Island's forest legacy.

Chapter 5

War for the Woods

In 1945, a commission was held to assess the future of British Columbia's forestry industry. The Sloan Commission, named after provincial chief justice Gordon Sloan, brought one issue into focus: the management and sustainability of harvesting the highest-value timber reserves, the old-growth forests. Sloan's report marked the first earnest push to change forest policy in British Columbia: "At present our forest resources might be visualized as a slowly descending spiral," it read. "That picture must be changed to an ascending spiral. Differently phrased, we must change over from the present system of unmanaged and unregulated liquidation of our forested areas to a planned and regulated policy of forest management, leading eventually to a programme ensuring a sustained yield from all our productive land area." The consensus was that forests must be seen "as the source of renewable crops and not as a mine" — in other words, a resource that can be managed and replenished rather than drained. Based on Sloan's recommendations, the Forest Act was amended in 1947 to create a form of tenure known as tree farm licences (TFLs), large blocks of Crown land leased to timber companies on a long-term, renewable basis. These blocks are broken up into individual cutblocks of one or two dozen hectares. TFL 33 surrounds Kamloops in the province's interior. TFL 39 encompasses much of the northern half of Haida Gwaii. And TFL 46 runs north of Port Renfrew and includes some of the most productive valleys on Vancouver Island.
To manage the rate of harvest, the Sloan Commission suggested that the province's forest harvesting be regulated by an allowable annual cut (AAC) — a maximum volume of timber that can be extracted in a given year, as set by the chief forester. The AAC was meant to serve as a regulatory measure to limit the volume of timber that companies could cut, to avoid overharvesting, but it still allowed for controversial techniques, including clear-cutting. The focus of the commission was more on managing and re-establishing the resource than on any kind of environmental degradation. Out of fear that the harvest was unsustainable long term, Sloan recommended an increase in the rate of tree planting, as well as a greater diversity of species planted. Of the seven million seedlings that were planted in 1955, the vast majority were Douglas fir that companies had been casually planting after cutting, as a means to ensure future supply. Sloan's objective of 38.4 million seedlings planted annually was never met, and it wasn't until 1987 that timber companies were required by law to replant their cutblocks. Sloan's vision was for planted second growth to eventually replace the original old growth, so that as the latter dwindled, the former would sustain the timber supply. But there was concern that by the time all the old growth was cut, the second growth wouldn't be ready, and this gap would lead to a "fall down" effect — a social and economic collapse. Amid burgeoning economic tensions and a rise in North American environmentalism (precipitated in 1962 by the publication of American biologist Rachel Carson's _Silent Spring_), the furious pace of resource extraction on Vancouver Island was met by a gradual rise in anti-logging activism. Environmentalists — often starting out as recreational hikers — began to delve deeper along logging roads and came to realize the extent to which the timber companies had cut away the forests.
Both activists and loggers were hunting for the same thing: the island's lush valley bottoms, where trees of the Pacific temperate rainforest not only grow well, they grow big. The valley bottoms held great value for both parties. To the loggers, each great tree, if felled, represented tens of thousands of dollars in prized timber. And to the environmental activists, the groves, if left standing, could be turned into a park or recreation zone for tourists and hikers. In the spring of 1988, environmental activist Randy Stoltmann went looking for Canada's tallest tree. Rumours had been swirling for years that a giant Sitka spruce had been identified in the 1950s in one of the major watersheds on the southern half of Vancouver Island — in the Carmanah Valley. Few places on the entire coast are quite as sublime an exemplar of Pacific temperate old-growth forest as the one that grows along the banks of Carmanah Creek. The valley is broad and flat, with rich silt banks, accumulated over centuries of flooding, that offer the ideal canvas to grow a forest. "It was an incredibly inspiring place; a living cathedral. None of us had seen groves of trees that tall in B.C.," wrote Paul George in his history of the Western Canada Wilderness Committee (WCWC), an environmental education and activism organization he co-founded in 1980. Legends of the giant tree date back to 1956, when Mike Gye, a twenty-nine-year-old timber surveyor for MacMillan Bloedel, was working in the lower reaches of the Carmanah Valley. Around two kilometres inland from the Pacific Ocean, where the West Coast Trail would eventually be formalized by the establishment of the Pacific Rim National Park Reserve in 1970, Gye stumbled upon an enormous tree. He measured the Sitka spruce a couple of times, concluding that the tree exceeded ninety metres in height. If confirmed, it would not only be the country's tallest tree but the tallest known Sitka spruce in the world.
Over the following decades, MacMillan Bloedel turned its attention away from the forests of Carmanah to those more easily accessed elsewhere on Vancouver Island. The record-breaking spruce wasn't even entered into the company's official inventory; in the mid-1950s there were still many trees of similar stature. Gye's spruce faded from story to myth: Canada's tallest tree was out there, somewhere within a dense forest spanning thousands of hectares, waiting to be found once again. Based on everything Stoltmann knew about the geography and ecology of Carmanah, if the country's tallest tree was growing anywhere, it would be there, in the wet trough of the valley. As a member of the WCWC and an avid big-tree hunter, Stoltmann had explored much of Vancouver Island's forests. And the spring of 1988 wasn't the first time he had gone looking for the legendary tree of Carmanah. Six years earlier he had been dropped by helicopter on one of the gravel bars along the creek. He'd hiked up and down the valley, and while he never located the record-breaker he did document some towering Sitka spruces. In his 1987 book _Hiking Guide to the Big Trees of Southwestern British Columbia_, he described how after only eight hours in the valley he was left stunned by what he had found, calling it "perhaps the finest remaining stand of virgin Sitka spruces in Canada." The size of the trees was remarkable, but so was the density of the grove: hundreds of Sitka spruces, with their column-straight trunks covered in scale-like bark, appeared at every turn through the lush undergrowth. Still, among pages of descriptions, directions, and hand-drawn maps dedicated to the great trees and forests of the island, Stoltmann mentioned the forests around Carmanah Creek only in an appendix. While tens of thousands of hikers walking the West Coast Trail crossed the creek as it spilled into the Pacific Ocean, few ventured inland.
For decades, even timber companies had focused their operations elsewhere. When Stoltmann returned for his second excursion into the valley, with fellow activist Clinton Webb, he noticed that the great trees of Carmanah Valley were under imminent threat. Hundreds of hectares of forest around the valley had already been clear-cut, and a road had been constructed right to the edge — directly above the grove of huge Sitka spruces he had previously strolled through. And when he hiked down into the valley itself, he found flagging and spray paint on trees. To the activists it was clear: MacMillan Bloedel, the company that owned the tree farm licence for that area, was hoping to log the towering trees of Carmanah before anyone noticed.

* * *

After Stoltmann discovered the logging road and flagging, he went to check on MacMillan Bloedel's five-year cutting plan. In December 1984, the company had received approval from the provincial Forest Service for their proposal, which included every region that the company intended to log within tree farm licence 44, a massive 450,000-hectare holding in southern Vancouver Island. There was no indication that Carmanah was part of that five-year plan. Stoltmann found out, however, that shortly after receiving approval following a public review period, MacMillan Bloedel had adjusted the boundaries of its plan. The company had made the modification with the consent of the provincial government, which agreed to approve logging in Carmanah as early as 1989. The move placed the lower portions of the valley — that "living cathedral," as Paul George would call it — in imminent danger of being cut. The _Vancouver Sun_ ran an article shortly thereafter titled "Tree hunter's claim of forest giants sparks preservation plea," in which a MacMillan Bloedel spokesperson said that he "would be surprised if we can't find spruce of equal size already preserved," in an attempt to downplay the ecological significance and rarity of Carmanah's forests.
To the activists, it became clear not only that Carmanah would be a new battleground but that it could also be turned into their flagship campaign. While it was less than a decade old, the Western Canada Wilderness Committee had already achieved success in activism campaigns focused on protecting biodiversity and wilderness areas. In the mid-1980s, protests led by the WCWC on South Moresby on Haida Gwaii, then known as the Queen Charlotte Islands, included dozens of arrests for violating an injunction against blocking logging roads, but ultimately led to the establishment of the South Moresby National Park Reserve. In May 1988, amid growing pressure, MacMillan Bloedel agreed to a month-long standstill and temporarily halted the construction of three logging roads around Carmanah. The decision was made to avoid confrontation with protestors, but also so the company could conduct a thorough assessment of the timber value in the valley. It wanted to know exactly how much money was at stake. The WCWC rushed to do the same — not to determine the monetary value but to gauge the potential usefulness to their cause. They wanted to find and measure as many giants as possible — the bigger the trees, the bigger the public outcry. The organization began by sending groups, spearheaded by Stoltmann, to build a trail into the valley. The protestors built two base camps. One, down near the river in the middle of the old growth, was known as Camp Heaven, and another, out of the valley alongside the muddy logging road in a clear-cut, was dubbed Camp Hell. While remnants or traces of each have long since been cleaned up or overgrown by the forest, the first is memorialized by one of the broadest Sitka spruces in the valley, known as Heaven Tree, with a base diameter of 3.5 metres and a height of 77 metres. Quietly, both activists and timber workers were also looking for the fabled giant Sitka spruce.
Since Mike Gye had stumbled upon the tree in 1956, it had only continued to grow. But in the spring of 1988, for both groups, Gye's fabled spruce became a Holy Grail. While the environmentalists were hiking the valley searching for big trees on foot, MacMillan Bloedel was buzzing above the canopy in a helicopter. From their aerial view, the magnitude of the grove of Sitka spruces became clear. It was a collection of such density that it rivalled the great groves of old that early twentieth-century loggers had put their axes to. After a reconnaissance flight, a MacMillan Bloedel forester remarked that just one of these spruces could fetch $40,000; but processed and sold as higher-market products to companies that build furniture or guitars, it could be worth double that. With each venture farther up and down the valley, more enormous trees were identified and recorded. They found them in dense groves that, if uprooted and planted in downtown Victoria, would be landmarks seen from anywhere in the city. Still, they hunted for the big one, the rumoured Sitka that would shatter records. One MacMillan Bloedel forester commented at the time: "We hope that these trees aren't the biggest or the tallest, so that we can just come in here and log them." But in early June, a MacMillan Bloedel helicopter circled over a portion of the Carmanah Valley just inland from where the creek trickles into the Pacific Ocean. The team located one tree with a delicate top well above the forest canopy, dropped a chain, and were stunned when it touched the ground: ninety-five metres. It was the tallest Sitka spruce in the world. They had found Gye's legendary tree. It was members of the timber company who gave the tree a name — the Carmanah Giant — but it was activists who were most excited by the confirmation. The tree, estimated to be between five hundred and seven hundred years old, became a rallying point for the entire anti-logging protest. Public attention intensified.
Nearly every news article about the conflict included mention of the record-breaking tree. It was called the "king of the valley" and a "national treasure." It was a single tree that people could unite around and that petitions could be written about. It had a legendary story, a name, and a superlative. _How could we possibly cut down Canada's tallest tree?_ In the face of mounting pressure, MacMillan Bloedel needed to relieve the tension. "It's very unlikely it would be cut," Dennis Bendickson, a manager for the timber company, told the _Vancouver Sun_ shortly after the Carmanah Giant had been found. "It's a significant tree and our policy always has been to protect trees like that." It was an admission that gave few assurances to activists watching trucks continue to haul logs out of the region's forests. The Western Canada Wilderness Committee recognized the potential this one tree held as a symbol. "You have to make a poster of that tree and get it out right away," one member of the organization told Paul George. "It'll become the icon that saves the whole valley." The WCWC hired a helicopter for a photographer to shoot the tree and turn the image into a poster. George had high hopes, but admitted that when the photographer's images were developed he wasn't impressed — the tree looked like a "giant shrub." The tallest tree in Canada didn't look that tall nestled in a thick forest along a riverbank. George was hoping for awe, but was left underwhelmed. And even Randy Stoltmann, who was photographed at the tree's base to provide scale, didn't look as small as the organization was hoping for. The poster project cost the organization $4,000 and was never printed for public sale. Still, even without an evocative image, the story of the Carmanah Giant spread well beyond the confines of the valley — becoming a legendary beacon for visitors to the area. By the end of the summer, the trail from Camp Heaven to Canada's tallest tree would be completed. 
* * *

Up until the summer of 1988, timber companies across Vancouver Island had enjoyed near-unchecked reign. There had been minor protests, mere blips in their relentless harvest of big timber, but it wasn't until Carmanah Valley that the battle over the island's old-growth forests became a national and international issue. But pressure within the logging industry was also mounting. Mills were beginning to close around the island and timber workers were starting to feel a change in the air — one that with enough momentum could threaten their employment. Giving up even a few hundred hectares to environmental activists didn't just represent a concession; it represented the snowflake that could cause the avalanche. One tree didn't just represent several hundred cubic metres of timber; it represented a job. In late June, MacMillan Bloedel put forward the first of several offers to placate the activists. The proposed nine-hectare protected zone around the Carmanah Giant and a ninety-hectare one in the valley — together representing just 1.4 percent of the entire Carmanah watershed — were rejected by the Western Canada Wilderness Committee outright. In October, the timber company offered to increase the protected zone to 2 percent, and then in January 1989 to 7 percent. Both were dismissed by the environmental group as token offers that would allow the vast majority of the valley to be cut. As the WCWC fought MacMillan Bloedel in the courts, activists doubled their efforts to begin turning Carmanah Valley into a park — in appearance if not in title. In part because people wanted to participate in the anti-logging protests, and in part to stand among the valley's legendary trees, visitors began flocking to Carmanah. To accommodate them, and lessen the environmental impact of hundreds of people traipsing through the forest, activists began formalizing the trail that connected Camps Heaven and Hell, activity that MacMillan Bloedel officials strenuously opposed.
Over the summer of 1988, volunteers continued to carve out trails to create a network leading to the most significant and largest trees. In August, protestors focused on blazing a path to the West Coast Trail, one of the most famous hiking trails in the country, as well as connecting Carmanah with Walbran Valley to the south. In one steep section of the ravine, they used a chainsaw to cut steps into a fallen cedar log to create a dramatic natural staircase into the valley. Constructing path networks in proposed logging sites became an effective tactic of environmental groups. Activists recognized that if they could establish even informal recreation sites in Carmanah, Walbran, and elsewhere, hikers and campers frequenting the area would act as a deterrent to timber companies. Promoting a forest for its tourism potential, in a bid to establish a near-permanent presence of visitors, would in theory force a timber company to reassess its cutting plans. Or, more simply, it would establish a platform on which the value of a particular forest could be conveyed to the greatest number of people. But the activists knew the clock was ticking, and they opted not to wait for official government permission to build trails on Crown land, instead organizing numerous expeditions into Carmanah Valley to clear paths to many of the watershed's most spectacular features: trees, pools, waterfalls. After just one year in the news, Carmanah was called "one of Canada's most popular wilderness destinations." Recognizing that an influx of visitors would bring greater public awareness of its plan to fell some of the country's largest trees, MacMillan Bloedel called the trail "dangerous to hike" in an attempt to discourage people from visiting.
On one expedition, a caravan of 150 WCWC trail builders, activists, and supporters ran into a locked gate, as well as a piece of heavy machinery and a pile of logs blocking the road several kilometres before the trailhead to the valley. Paul George saw it as an ideal opportunity: they had travelled with a camera crew to gather footage for a documentary called _Carmanah Forever_. In the film, an activist presents George with a letter from MacMillan Bloedel, which he reads aloud: "Please be advised that persons engaged in unauthorized construction activity, including trail construction within the vicinity of Carmanah Creek, are to cease and desist immediately. These activities are unauthorized and therefore illegal under the current management working plan and under the current cutting permit for tree farm licence 44." The shot took several takes, but in the end was a perfectly choreographed moment that demonstrated the tension between the two groups and their fight for the valley. To narrate _Carmanah Forever_, the WCWC turned to David Suzuki, renowned environmental activist, author, and host of CBC's hit documentary series _The Nature of Things_. The iconic — and familiar — face and voice described scenes of dusty clear-cuts juxtaposed with those from the verdant valley bottom. Suzuki called the region "irreplaceable" and argued that "preserving a single tree or isolated grove will not ensure the forest's survival." The film was a success as a tool for activism as well as for education, being shown in classrooms across British Columbia. But most people around the country, even many who lived in Victoria and Vancouver, had never stood beside a tree with a trunk wider than their vehicle or taller than their office building. Without standing under them and tracing a long gaze from root to tip, it was hard to grasp the size of the trees.
The WCWC campaign in Carmanah needed to show people who would never make the journey, which included a three-hour drive on bumpy logging roads, the scale of these trees. Paul George commissioned nature photographer Adrian Dorst to photograph Carmanah for a poster. "As luck would have it," George wrote in his history of the WCWC, "on the same day Adrian arrived to pick up the camera and film, a petite biology student from eastern Canada serendipitously walked into our office." She became the model, the point of scale that would demonstrate the size of Carmanah's behemoth trees. The result was a simple but evocative image that depicted a young woman dwarfed by a towering grove of trees. With the tagline "Carmanah: Big Trees not Big Stumps," the poster was a hit: it was reprinted many times and sold tens of thousands of copies. Carmanah became a hotbed for biologists interested in studying the canopy ecosystems of Pacific temperate rainforests. To raise funds to build and maintain a research station in the treetops of a seventy-five-metre-tall Sitka spruce — in what would become the first research platform in the canopy of a temperate rainforest anywhere in the world — the WCWC began an adopt-a-tree campaign. Thousands around the country mailed in twenty-five dollars with a chosen name and received a certificate in return. The main argument for protecting Carmanah was emotional, centring on the rarity of the trees and the rarity of the grove. It sparked an internal confrontation within many Canadians, between the kind of country Canada had always been — rich through its resources — and what it was working to become: environmentally progressive. Carmanah epitomized this strife. To stand under its towering trees and look up was to be forced to grapple with the country's dark past in resource extraction.
It was to imagine two possible futures: one where Canadians will never again experience a feeling of awe at some of nature's biggest creations, and another where these rare trees are recognized for their enduring value and protected. Over the summer of 1989, four expeditions made up of some of Canada's most renowned artists ventured into the Carmanah Valley. Acclaimed Canadian painters Robert Bateman, Jack Shadbolt, and Gordon Smith were among the approximately one hundred artists who camped along the riverbank to paint, photograph, and sketch. The artists focused on a variety of aspects of the Pacific temperate rainforest: the massive trees, the subtle details in the undergrowth, the scope of the forested hills, the contrast in a clear-cut. In an interview at the time, Jack Shadbolt summarized the experience: "Standing with my back to this big tree that's behind here, how can I help but feel something of a tremendous grandeur of natural growth — that the world has a certain kind of meaning? . . . I like to live somewhere near that kind of feeling as an antidote to all the practical things I have to do in life just to survive." Robert Bateman, known for his realistic paintings of animals and landscapes, found himself drawn to the clear-cuts — it was as if he were looking at piles of bones. The work he produced that summer depicted a grey and scattered cutblock above a panel of intact forest. Artists had always been drawn to these forests, to capture the wilds of British Columbia. Chief among them was Emily Carr. Born in Victoria in 1871, Carr built a career on condensing the scale and depth of the province's expansive nature onto her canvases. Her paintings plunged viewers into the heart of the forest, revealing the untamed and impenetrable aspects of the natural world as being both ethereal and inviting. In the darkest places, she found light: a beam through a canopy, or a highlighted branch. Carr also documented the destruction of these forests.
In her 1966 book, _Hundreds and Thousands_, she wrote:

Yesterday I went into a great forest, I mean a portion of growth undisturbed for years and years. Way back, some great, grand trees had been felled, leaving their stumps with the ragged row of "screamers" in the centre, the last chords to break, chords in the tree's very heart. Growth had repaired all the damage and hidden the scars. There were second-growth trees, lusty and fine, tall-standing bracken and sword ferns, salal, rose and blackberry vines, useless trees that nobody cuts, trees ill-shaped and twisty that stood at the foot of those mighty arrow-straight monarchs long since chewed by steel teeth in the mighty mills, chewed into utility, nailed into houses, churches, telephone poles, all the "woodsyness" extracted, nothing remaining but wood.

In 1931, Emily Carr painted _Scorned as Timber, Beloved of the Sky_ — an image of a single slender tree with a hat-like crown, standing in a clear-cut surrounded by stumps. Before Carmanah, campaigns to save British Columbia's old-growth forests rarely extended beyond the borders of the province. In the summer of 1988, several dozen activists were arrested for blockading roads in Clayoquot Sound, a mountainous watershed and forested archipelago on the west coast of Vancouver Island near Tofino, and were sentenced to between three and forty-five days in jail. The pattern was repeated in numerous other skirmishes across coastal British Columbia, including the Lower Tsitika Valley in 1990 and the Slocan Valley in 1991. But Carmanah, in part because of the celebrity artists and high-profile activists like David Suzuki, made the issue international. Canadian singer Bryan Adams took a tour of the valley and held a benefit concert in support of the Western Canada Wilderness Committee's campaign.
Companies offered support as well: Mountain Equipment Co-op, the outdoor retailer founded in Vancouver in 1971, gave the activist organization access to its mailing list to use for a newsletter campaign, only to receive numerous complaints from people who were gear consumers but not preservationists. The original artwork produced during the summer of 1989 was auctioned off to raise money for the legal battles over protecting the Carmanah Valley, and was later turned into a book, _Carmanah: Artistic Visions of an Ancient Rainforest_. "Campfire conversations would often turn to the stark contrast between the haunting beauty of Carmanah's virgin forest and the slash-choked, burned and blackened clear-cuts that lie just outside the watershed," the WCWC wrote in their newsletter that autumn. "The artists spoke of how the distant growl of heavy logging equipment, carried by the wind from the next valley, affected them . . . a constant reminder as they sketched and painted, of why they were there."

* * *

In the fall of 1990, a group of loggers blockaded the entrance to the Carmanah Valley, stopping the activists from reaching their treetop research station for two days. When the loggers relented, the activists found the Western Canada Wilderness Committee's research tent near Camp Heaven had been burnt to the ground, boardwalks had been axed, and bridges had been toppled into the creek — damage the WCWC estimated at more than $30,000. But the battle for British Columbia's trees was waged not only along the dusty logging roads or deep in the forests of Vancouver Island, but on the airwaves and in advertising that could offer greater reach. While activist organizations marketed the value of protecting these big trees, logging companies launched campaigns that extolled the virtues of bringing them down. In the 1970s, MacMillan Bloedel produced _The Incredible Forest_, a film that rhapsodized about British Columbia's timber industry.
"This is the age of a new breed of fallers, buckers, skinners, choker-setters, and he comes with an army of foresters, logging engineers, cruisers, tree markers, and other scientists — armed with barometers, binoculars, microscopes, surveying instruments, thermometers, and test tubes," reads the narrator over romantic scenes of a logger hiking through a forest and felling a large Sitka spruce with his chainsaw. "Timber!" the man shouts as the tree falls into a clear-cut. "These are the loggers of today," the narrator continues, "living with and caring for the forests of tomorrow." In another film produced by MacMillan Bloedel, titled _The Managed Forest_, the narrator calmly reads: "Forest managers know that seeing a freshly logged site can be a distressing emotional experience. The site looks utterly devastated. But forest managers also know clear-cutting is not only ecologically sound but also the safest and most economical way of logging B.C.'s coastal forest." The most significant investment in marketing timber came in the late 1980s, leading up to the eruption of the movements to protect Carmanah Valley and Clayoquot Sound. In fear of losing the public relations battle, the Council of Forest Industries in British Columbia invested $1.5 million in an advertising campaign called "Forests Forever." The commercials featured plaid-wearing timber workers strolling happily through the forest, and hard-hatted children holding Douglas fir seedlings, while the voiceover praised the industry's responsible forest management and care for the environment. One even included what appeared to be a stuffed deer and a fake eagle. Many of the public — not just environmental activists — weren't convinced. They saw the advertisements as simple greenwashing: timber companies attempting to glamorize their work and paint over its faults. Soon, it became apparent that the campaign had produced the opposite of its intended effect: even less trust in the forestry industry.
As public opinion was beginning to shift, advertisements were created to counter the "Forests Forever" campaign, including one featuring an animated young sapling asking his "grandfather" — who appears to be an aged spruce — if he will grow up to be just as big and strong. The old tree expresses doubt: "Unless something is done soon, big old trees like me will be nothing but a memory . . . " "What would the forest be without old ones like you?" the little sapling asks. "I think they call it a tree farm, son." The tagline read: "A Tree Farm is Not a Forest." The so-called subvertisement, however, never saw airtime after being rejected by networks; it had been produced by Kalle Lasn, who shortly thereafter, in 1989, co-founded the Vancouver-based social activist media company Adbusters, in part over conservation battles with B.C. timber companies. In 1990, a consortium of timber companies hired the New York public relations firm Burson-Marsteller, which had represented Union Carbide six years prior, after the gas leak in Bhopal, India, that caused between two thousand and four thousand immediate deaths and tens of thousands of long-term health issues. The firm formalized the consortium into the B.C. Forest Alliance and began a widespread marketing campaign to re-establish the timber industry's dominant and proud allure of decades past — when communities were expanding, jobs were plentiful, and resource tensions were nonexistent — and to undercut the growing environmentalist wave. Millions of dollars were invested in advertisements in print and on TV. On the ground, timber companies tried to break the image of the peace-loving activist by driving the narrative of the radicalized anarchist. Protestors were called "eco-terrorists" in Carmanah, and likened to "anti-abortion protesters" in Clayoquot. Some tactics employed in Carmanah, Clayoquot, and Walbran were simply an attempt to stop loggers from accessing the trees.
"Tree sitting" involved protestors climbing into the branches and remaining there for days or weeks on end as a human shield. In Carmanah, a twenty-year-old protester had to be medically evacuated by helicopter after sustaining injuries to her back and leg from falling out of her treetop perch. But the tactic that caused furor among timber communities and companies was much more assertive. "Tree spiking" involved someone hammering iron spikes into the base of large trees. These spikes would likely be harmless to such an enormous tree, but could be fatal to a faller who might hit a four-inch bolt of metal with their chainsaw. If the machine's chain, moving at more than eighty-five kilometres per hour through a soft cedar trunk, were to impact a solid piece of metal, the violent kickback could potentially lead to the amputation of a faller's arm, or even death. In September 1991, a MacMillan Bloedel faller in the Walbran Valley narrowly escaped disaster when his chainsaw hit one. "It's like planting landmines or leaving little time bombs around," MacMillan Bloedel forester Gord Eason told the _Vancouver Sun_. While tree spiking was roundly denounced by less radical activists, the practice continued regardless. Accusations about who exactly was spiking trees were thrown across the protest lines: loggers accused the activists, while Syd Haskell, president of the Carmanah Forestry Society, was certain that the timber workers planted the spikes themselves in an effort to paint the environmentalists as sinister. "I'm alleging that someone sympathetic to timber-cutting in the Walbran did this in order to discredit our image," he told the _Vancouver Sun_ in the fall of 1991. "If there are trees being spiked, I have no doubt where they are coming from."
In the spring of 1992, after eighty-five spikes were discovered embedded in trees in the Walbran Valley, Joe Foy, campaign director of the Western Canada Wilderness Committee, condemned the act, calling it "a form of terrorism" and offering a $1,000 reward for information that would lead to arrests. His denunciation of this form of protest raised the ire of more radical anti-logging activists. Weeks after Foy's statement, someone filled the locks of the WCWC's downtown Vancouver office with glue and attached a poster to the door that bore his face and flipped his $1,000 reward onto Foy himself. "Responsible environmentalists work for the Earth, not for the police," read the poster. "Which side are you on? Remove the bounty." Foy was unmoved. "You learn the most important things in the sandbox of a playground," the activist told the _Vancouver Sun_. "You don't hurt people and you don't put people at risk. Tree spiking creates fear and unnecessary stress for forest workers and their families." The early 1990s were a period of intense fervour over the future of Vancouver Island's forests, but they ultimately ended with thousands of hectares of old growth off limits to logging. In June 1990, the province tabled a bill to establish Carmanah Pacific Park, removing the entirety of the valley from MacMillan Bloedel's tree farm licence. Five years later, portions of the neighbouring Walbran Valley were incorporated, forming the 16,365-hectare Carmanah Walbran Provincial Park. But the timber companies didn't leave Carmanah and Walbran empty-handed. The government of British Columbia paid MacMillan Bloedel $83.75 million for income lost. Today, the Carmanah Giant still stands, but the trail along Carmanah Creek to the tree quickly became overgrown once the activists had achieved their goal of protecting the valley. Ferns and brambles reclaimed the track. 
The tree is only accessible along a two-kilometre detour off the West Coast Trail, far enough that few hikers opt for the diversion as they trudge along the coast. No activists or loggers lurk around its trunk, no helicopters hover at its top — the tallest tree in Canada, the Sitka spruce of legend, grows quietly once again.

* * *

Carmanah was the spark for forest activism on Vancouver Island that reached full flame farther up the coast, near the world-renowned surfer town of Tofino. The logging of Clayoquot Sound, which had been spurring minor protests since the early 1980s, turned the battle for Vancouver Island's old-growth forests into what was called the War in the Woods. The head-to-head between activists and timber workers culminated in the summer of 1993, with two hundred litres of excrement being dumped near the Western Canada Wilderness Committee's staging site, and around 950 protesters being arrested and 850 convicted of defying a court injunction against blockading logging roads. The protests were one of the largest acts of civil disobedience in the country's history. Greenpeace pushed for a boycott of forest products from British Columbia to pressure the industry to back down. In one of its more international campaigns, the WCWC dug up a nearly four-hundred-year-old cedar stump, loaded it onto a flatbed truck, named it Stumpy, and toured it from the B.C. legislature in Victoria across the country to Ottawa under the banner "Clayoquot Sound NOT Clearcut Sound." It was then loaded onto a ship and toured England and Germany. In 1995, Clayoquot Sound was protected by provincial order, and in 2000 it was designated a UNESCO biosphere reserve. For years, Carmanah and Clayoquot remained in the memories of both activists and timber workers. To activists, these battlegrounds became legendary as examples of how a handful of plucky environmentalists can stand up to Big Timber, how a war can be fought and won.
Not only were trees saved, but the actions forced the Ministry of Forests to re-examine its policies. The same year as the formation of Carmanah Walbran Provincial Park, the Forest Practices Code of British Columbia Act became law, establishing new regulations for logging companies, reforestation policies, road construction, and the treatment of wildlife habitats and watersheds. In 1991, the Forest Resources Commission had released _The Future of Our Forests_, a report that made it clear that B.C.'s timber industry was approaching a cliff edge. Forestry practices were focused on short-term returns, without considering long-term consequences or how these forests might offer sources of value other than planks and pulp. For the fallers, hauling crews, and truck drivers, Carmanah represented a dark scar on British Columbia's timber history — a significant concession to appease environmentalists, one that took available resources off the table and therefore affected jobs. After decades working in the industry on Vancouver Island, one timber worker summarized that time of tension: "Boy, we lost that war."

Chapter 6

A Forest Alliance

While the 1990s was a decade of feverish activity around Vancouver Island's forests, the 2000s saw a dip in attention. The pro-forest focus began to migrate north up the coast, towards the region known as the Great Bear Rainforest. There, the elusive and mysterious "spirit bear," a subspecies of black bear with white fur, became a symbol of the rarity of these forests and helped galvanize the public to protect the region. At the same time, the logging industry began to make its move.
In the early 2000s, some of the province's largest timber companies — including Weyerhaeuser (which had purchased MacMillan Bloedel), Interfor, and TimberWest — promised that if the government agreed to certain changes in the Forest Practices Code of British Columbia Act, they would invest a billion dollars in the industry by building mills and investing in the research and development of value-added products, including engineered timbers (beams made from lower-value wood glued together). In 2002, the provincial government held up its end of the bargain with the most significant amendments to forestry regulations in half a century, with the reformation of the Forest Practices Code into the Forest and Range Practices Act. After a series of clarifications and amendments, the act came into effect in 2004 and heralded a period of deregulation for the province's forestry industry, where the onus was placed on the individual forester or forestry company to follow regulations. "It was like putting a fox in a chicken coop and saying 'only take one,'" as a long-time forest engineer put it. Despite these changes to the code, only a small fraction of the $1 billion pledged by the logging companies has materialized. The changes also removed the appurtenancy clauses, which required timber cut on Crown lands to be "used" and "manufactured" in the province, clauses that had been in place since 1947, when the last major amendments were made to the 1912 Forest Act. Companies that held a licence to cut in a specific region had been required to invest in the construction and maintenance of mills, forcing companies to invest heavily in communities. After 2004, timber companies were no longer required, nor penalized for failing, to maintain their existing mills or to upgrade and retool them to accommodate smaller second-growth logs, a necessary condition for moving away from old-growth logging.
Between 1997 and 2001, twenty-seven mills had closed across the province; between 2001 and 2011, seventy more were shuttered. Thousands of jobs were lost. In 2001, TimberWest closed its mill in Youbou, near Dennis Cronin's hometown of Lake Cowichan, which had been in operation since 1913. The year before, the company announced it was increasing its raw log exports by 85 percent. Raw logs — trees felled, limbed, and loaded onto a truck or ship for export without processing — are the most basic product that can be harvested from a forest. Such a base form of a resource holds the potential to have many more value-added layers. Processing the log into dimensional lumber is one; turning the waste into pulp products is another; and finally, the wood can be manufactured into high-value goods like furniture and guitars. Though some raw logs have been shipped abroad for as long as there has been commercial logging on Vancouver Island, throughout the twentieth century the majority of trees were processed at local mills into timber products that were then sold domestically and internationally. Some companies — such as Teal Jones, which processes what it cuts on Vancouver Island at a mainland mill in Surrey, near Vancouver — have resisted the export of raw logs. Many other companies have not. Any log removed from Crown land has to pass a surplus test: if the harvests exceed the needs of the province, then those surplus logs can be legally exported. The logs have to be put up for sale to provincial mills first, but if timber companies are no longer legally required to erect or maintain mills, there will be fewer places locally to buy and process logs. More and more wood, therefore, becomes surplus. To Arnie Bercov, it's a "self-fulfilling prophecy." 
Bercov worked as a chokerman on a logging crew early in his career, before transitioning to work at a mill near Nanaimo and becoming president of the Public and Private Workers of Canada (PPWC), a union once known as the Pulp, Paper and Woodworkers of Canada. While some have blamed environmental movements for job losses and mill closures, Bercov has laid the blame on government changes that have been undermining the province's most valuable industry. Instead of employing thousands of mill workers and running dozens of production facilities, companies can simply export the wood and export the job. In 2016, the volume of raw logs exported from British Columbia had risen to 6.3 million cubic metres, which means that roughly one out of every three trees cut was shipped abroad — predominantly to China, Japan, and the United States — without any value added locally. Despite its timber being among the most renowned in the world, for every dollar of British Columbia timber the province adds approximately thirty cents of value, whereas Ontario and Quebec add $1.50. The province that once lured timber workers from across the country and turned remote communities into thriving towns has become one that places little value in the full potential of its resource. But policies and practices of forestry companies themselves have also faced blame, above strictly environmental concerns. One Vancouver Island timber worker pointed to a flaw in the stumpage fee, the tax the provincial government levies based on volume of timber cut off Crown land. Some avaricious timber companies, when negotiating cutblocks with the provincial government, have been known to combine stands of valley-bottom old growth with those of a much lower value with no intention of cutting anything but the biggest and best.
The government then calculates and charges a stumpage fee based on a considerably lower total average — and the company never cuts the less valuable stand, deceiving the government and maximizing profits.

* * *

As timber companies and governments shifted the values they placed on British Columbia's forests, so too did the environmental movements. In the early 2000s, one environmental story dominated headlines: climate change. Activists were struggling to draw attention to massive global forces affecting the planet — the deterioration of the ozone layer, atmospheric carbon dioxide, rising ocean temperatures, acid rain — and local battles were dwarfed. On the coast, Western Canada Wilderness Committee activist Ken Wu watched as each environmental issue splintered and support began to be stretched thin. Wu felt that the ideological, social, or political needle towards ending old-growth forest logging wasn't moving. Wu began his work as an activist canvassing in Vancouver for the WCWC during the Clayoquot Sound movement in the early 1990s. He was an ideological advocate for civil disobedience and blockades — a "serial protester," by his own definition. But a conversation during a car ride with WCWC co-founder Paul George lit a spark. George told him that while direct action — protesting, barricading logging roads, rallying — was an important component of environmental campaigns, the most crucial aspect was curating an educated and motivated public. Direct action can play a role, he told Wu, if it focuses the issue and drives people to action, but to change legislation requires considerable momentum and stamina in order to exert pressure on government. Their work wasn't just about bringing activists or even tourists into the forests, but about convincing people — across the country and across the world — to care. After two decades working as part of West Coast environmental movements, Wu found himself preaching to the converted and attracting few new acolytes.
The organization's base was firmly established: a left-leaning, CBC-listening, Green Party–voting, environmentally conscious public. With the WCWC focused on shoring up its base and spreading its reach to cover new causes, including the expansion of oil pipelines to the coast and the proliferation of tankers, Wu saw an opportunity to expand into new demographics. He left the Western Canada Wilderness Committee to launch a new, forest-first organization. On January 19, 2010, the Ancient Forest Alliance (AFA) was born. "The new organization will undertake expeditions to document the endangered ancient forests, heritage trees, and clear-cuts destroying the remaining old-growth forests on Vancouver Island and in southern B.C., and work to undertake public education and mobilization campaigns to ensure their protection," the organization's first statement read. The AFA was initially founded on four principal platforms: establish a provincial strategy to inventory the remaining old-growth forests; promote sustainable second-growth logging, including the retooling of mills to handle these logs; end the international export of raw logs to ensure local jobs are maintained; and support any Indigenous communities' land-use plans that focus on protecting old-growth forests. The AFA was registered as a provincial non-profit society instead of a national charity, which allows it to openly support political parties and politicians who advocate old-growth protection — or condemn those who don't. The Western Canada Wilderness Committee, by contrast, holds charitable status, which allows it to speak in favour of or against policies but not vocally support parties or candidates. In effect, it means the Ancient Forest Alliance can be more overtly political. In its initial mandate, the organization stated it would "not be constrained by charitable status that forbids organizations from rejecting or endorsing politicians and parties due to their stances on important issues."
Wu's focus began to shift towards mobilizing a broader demographic. He started with an ambitious goal of expansion: to convince those British Columbians who typically put business or social interests above environmental ones to care about old-growth forest protection. He knew that he needed to break ground with three key groups: business owners, people of faith, and those of multicultural backgrounds. The rapidly growing Chinese and Indian communities in the Lower Mainland presented an opportunity for Wu to captivate a new generation of Canadians, many of whom became enamoured with British Columbia's big nature. Wu, who is of Taiwanese descent, began offering big-tree tours in Mandarin. Likewise, he figured people of faith who were part of the growing trend away from structured religion to a broader spirituality might find resonance within the forests. Activities including "nature therapy" and "forest bathing" — immersing oneself in a forest as a tool for healing, stress relief, and mindfulness — were on the rise. Taking a walk in the woods became a spiritual act, a way to connect with forces greater than the individual. But it was the business groups that posed the greatest challenge and the greatest reward. Wu saw that if a movement is purely based on ideals that are divorced from the economy, it will never be seen as anything other than an echo chamber. But if he could connect the two, if people's livelihoods were at stake, they would fight as hard and as passionately as the ideologues and idealists. The obstacle was changing the minds of people who have for generations relied, culturally and financially, on timber rather than trees. For many British Columbians, the battles for Carmanah and Clayoquot felt like a lifetime ago and an issue more or less settled — the wars were won and the old growth was saved. The cause was fading from media and public attention.
Being in emergency mode all the time is not only exhausting — for activists as well as supporters — but unsustainable. This palpable sense of fatigue gave rise to one of the more fundamental shifts, both organizationally and personally, for Wu. The Ancient Forest Alliance needed to focus not just on the negatives — clear-cutting, job losses, ecological impacts — but on the positives. Wu realized he needed to focus on what he and the AFA were in favour of, rather than hammering on about what they stood against. There had to be green among the grey. His fledgling organization needed to find and document the remaining exceptionally large trees and intact stands, and bring evidence of what was at stake into people's homes. These remaining trees and groves, rather than stumps and clear-cuts, would be the spark that would reignite the movement. His first hire was twenty-five-year-old photographer TJ Watt, whom Wu had previously contracted to take pictures of protests and forests for WCWC campaigns. Watt was born in Metchosin, a leafy coastal community just outside Victoria, close to the location of some of the island's first logging mills. When he was a kid, he would climb a large cedar in the backyard of his house until he could see over the rooftops. Watt's father told him that the forest behind the house, which looked so wild, had once been logged. It took a moment, but then Watt saw it: springboard notches and an old logging road. It was the first time he saw historical layers in a forest. In high school, Watt grew interested in photography, buying disposable cameras at the gas station with Petro-Points. After earning a diploma in professional photography at the Western Academy of Photography in Victoria, he joined the AFA as a campaigner and photographer with the principal job of seeking out old-growth forests, big trees, and recent clear-cuts.
Photography was a tried-and-tested method of raising awareness for an environmental cause, but there were new tools that had appeared on the scene since Ken Wu had campaigned in Carmanah and Clayoquot, including social media. "You can find the trees," Wu said, "but you have to know how to market them." Half of Ken Wu's job at the AFA is spent trying to draw attention, trying to sell the forests and sell the trees. He hired Watt to find them.

* * *

In December 2009, a month before the Ancient Forest Alliance was established, TJ Watt grabbed his camera and headed into the forests of southern Vancouver Island. He went to photograph the Walbran Valley, a region that had been a focal point of environmental activism since the early 1990s. As a self-proclaimed "big-tree hunter," even while not on assignment for the AFA he would still spend free weekends hiking and exploring. Watt had been to the Walbran before, but this excursion was his first of many into the bush with the express purpose of locating old-growth forests that stood awaiting the saw. After a night sleeping in his Subaru with a friend, with temperatures dropping so low that his socks froze to the windows, he decided to take the backroads south in the direction of Port Renfrew. Hillsides were patched with clear-cuts, some containing enormous cedar stumps, and most of the forests he passed — even stands that towered above his vehicle — were second growth. He checked his map, noticing that he was in the Gordon River Valley just outside of town. As the winter sun was beginning to set, something caught Watt's eye: grey spikes sticking out above the canopy of the forest along the logging road. While many forests in central and eastern Canada undergo a radical and all-consuming colour shift, turning a riotous spectrum of reds, oranges, and yellows in the fall, the forests of Vancouver Island remain predominantly green year round.
But within this unwavering colour, hills of old-growth Pacific temperate rainforest appear variegated and motley. The canopy is dappled: dark green for the conifers (the firs and hemlocks and cedars), and lighter for the deciduous (the maples and alders). From a distance, it is often hard to tell a five-hundred-year-old forest from one that is seventy-five years old. But there is one clear marker of Pacific temperate old-growth forest: the spiky, dead tops of ancient cedars. These multi-tipped crowns — known as candelabra tops — are a characteristic of age. When a cedar is several hundred years old, its fragile tip often breaks off in fierce wind or from a lightning strike. From the fractured top sprout new branches that turn skywards, and after decades these often dry out and die themselves. The spiky crowns become bleached grey in the sun, and stand out from the dark green conifer forest like splintered popsicle sticks. Watt knew what to look for, and here, along the side of the road, multiple candelabra tops emerged from the dark green foliage. He parked his car and scrambled down a slope into the forest. Almost instantly, he came across an enormous, burly cedar and a towering Douglas fir, both with tops protruding through the well-established canopy. For a region that had seen extensive logging for the better part of a century, it shocked Watt that a stand of old growth containing valuable timber not only still stood so close to Port Renfrew, but also alongside a well-used logging road. Within an hour, he had located more than a dozen trees three to four metres wide — some with great twisted forms and burls erupting from their bases. Knowing Ken Wu would be interested, Watt returned to Victoria and went straight to the seasoned activist to tell him of his find. Wu initially didn't believe that a grove of old-growth forest stood fifteen minutes from Port Renfrew. He had to see it for himself. 
Around a month later, when the pair found time to drive up the coast from Victoria to Port Renfrew, Wu was dumbfounded at the size of some of the trees and the density of the grove. But while they were walking through the forest, something jumped out at Watt. Scattered throughout the grove hung the familiar orange "FALLING BOUNDARY" and pink "ROAD LOCATION" ribbons placed by Teal Jones's forest engineers. Large cedars were marked with spray paint identifying the largest trees in the cutblock, or with other markings for the fallers. Within the short period between his visits, timber engineers had been sent to flag the forest and lay out a cutblock map. To Watt and Wu, it was a clear sign that the company intended to return with fallers and trucks and turn this patch of old growth into a familiar grey sight. The Ancient Forest Alliance had found its inaugural ancient forest. Weeks after the organization's founding, the AFA issued a press release that announced a new battleground for the fight to protect old-growth forests on Vancouver Island. They called it Avatar Grove, after the James Cameron sci-fi epic that had been released just months prior and was already starting to break box-office records. Beneath the flair of 3-D filmmaking was a not so thinly veiled message: it is possible to fight back against a company that is exploiting land and extracting resources without regard for Indigenous peoples or the environment. Twentieth Century Fox had been taken aback by the strong ecological message in the draft script of _Avatar_. "When they read it, they sort of said, 'Can we take some of this tree-hugging, _FernGully_ crap out of this movie?'" director James Cameron said in an interview. "And I said, 'No, because that's why I'm making the film.'" In addition to identifying with the film's environmental message, the AFA likened the unusually shaped cedars found in Avatar Grove to some of the alien trees growing on Cameron's fictional moon Pandora.
It was also a catchy name — one that would resonate with the public and attract the attention of the media. To further link _Avatar_'s environmental message to their cause, the AFA held a rally in Vancouver where dozens of participants painted their bodies blue in emulation of the Na'vi, Cameron's forest-dwelling aliens. The organization even invited the famous director to attend, but he didn't show up. It was the second time Ken Wu had used pop culture as a conduit for a pro-environment protest. In 2004, while working for the Western Canada Wilderness Committee, he organized a rally at the B.C. legislature in Victoria — the largest at that location since the Clayoquot Sound protest in 1993 — where participants dressed in papier-mâché costumes of Ents, the giant tree creatures from J. R. R. Tolkien's _Lord of the Rings_ trilogy. The protestors acted out a battle against dark forces that were destroying the forests. During the wave of West Coast forest activism that began in the 1980s, names for big trees and groves were bestowed principally based on location. In Randy Stoltmann's pioneering book, _Hiking Guide to the Big Trees of Southwestern British Columbia_, his descriptions of some of the province's grandest groves shy away from flash or glamorous identification. Trees held names such as the San Juan Spruce, Red Creek Fir, or Lynn Creek Cedar, as well as many with Indigenous roots such as Carmanah, Koksilah, and Cheewhat. More recent activist organizations have felt that aggressive marketing is needed in order to turn these trees into symbols. Organizations such as the Ancient Forest Alliance don't have the luxury of being timid. They must create a splash. "Avatar Grove would probably be a sea of stumps right now had we called it Gordon River Valley Grove," said Ken Wu. "If you don't know how to build a communications campaign around them, then they're just another big tree, ultimately."
But not everyone views the marketing of these big trees and old-growth groves as positive. One central point of tension is the language used in some environmental activist campaigns, primarily around the word "discovery." Indigenous people point to their history on the land — long before any timber worker or activist — and the evidence that can be found in the markings and remnants of culturally modified trees. Activist organizations often defend the language by saying that their "discovery" is not to say they were the first to ever see the tree or walk the grove, but that they are the first to recognize the tourism or recreation potential and significance of the trees. The naming of places has always been a fraught process in Canada, where thousands of years of Indigenous history and presence have been erased by placing a single word on a map. It is a subject that splits Vancouver Island's environmental activist community, with each organization trying to push forcibly for results while being cautious not to step on cultural toes. For some, naming groves or trees after Western movies or literature applies yet another layer of Western presence on Indigenous land. The text on the wooden sign at the trailhead to Avatar Grove includes "T'l'oqwxwat" — the Pacheedaht name for the site of a long-time summer fishing camp along the Gordon River. But the Indigenous word has never found its way into colloquial usage. "They were 'finding' something that's well documented within Pacheedaht history and likely has a name and likely has uses," said Kristine Pearson, a representative for the First Nation. And the propensity to label the forests with a Western name has never sat well either. "It would be one thing if you came to the nation first and asked about the history," she said. "It's a very colonial attitude to come in and rename an area." 
Avatar Grove was more than just marketing: the forest contained some three- and four-metre-wide cedars that were easily several hundred years old. The AFA declared it "the most accessible and finest stand of ancient trees left in a wilderness setting on the South Island." To Ken Wu and TJ Watt, the forest held everything they needed to create a tourist destination. Places like Carmanah or Walbran or the islands of Clayoquot Sound are the cream of Vancouver Island's old-growth forests, but their remote locations, down dozens of kilometres of logging roads, act as a deterrent to most. People are less inclined to visit one — even to view the largest tree in the country — if it's a three-hour drive down bumpy, tire-flattening roads, or if it requires bushwhacking on foot or taking a boat across a choppy channel. Avatar Grove lay just beyond where the pavement ends, an easy drive outside Port Renfrew. With minimal effort, visitors could explore a prime example of untouched Pacific temperate rainforest. They could also see first-hand what is in danger of being lost, without having to delve deep into Vancouver Island's interior. The value in Avatar Grove — not only from an ecological or tourism perspective, but for the AFA's cause — was as staggeringly obvious as the towering trees. The subsequent press releases caught the attention of the media, who were eager for stories about a new war in the woods, or at least a fresh skirmish. With British Columbia's history of timber and forest activism, a rapt audience was guaranteed. "I know that we would've succeeded in building a powerful movement anyways, but Avatar Grove was rocket fuel," Wu said. On a spring day, after showing reporters the forest, Watt and Wu decided to check out the north side of the road, where another patch of old growth extended up a hill. As they hiked through the salal undergrowth, they passed even more giant cedars, one after another. 
Then, as they crossed Baird Creek, a seasonal trickle of water that flows into the Gordon River, Watt spotted something up a hill: an enormous, stout, burly cedar. While western red cedars can grow straight and branchless, with a grain eloquent enough for guitar making or true enough for canoe building, the aged examples of the species are known for more erratic and misshapen growth. After hundreds of years, the centres of cedar trunks often rot away, leaving hollow cavities that offer ideal dens for a family of black bears. On some cedars, the grain twists and turns, creating folds and mounds in the bark that from some angles make the trees look like sitting Buddhas meditating in a forest. Most of the ancient cedars in Avatar Grove were impressive in their girth and stature, but this one tree was so unusual in its shape that the activists knew they had found this forest's protagonist. The cedar appeared squashed by some unseen force from above that was pushing bulbous lumps out from its base. A few metres off the ground, a burl the size of a small car protruded from the trunk like a giant's goitre. Cedar burls were once thought to be an infection in the bark or some kind of arboreal tumour, but recently the protruding lumps have been thought to be stores of regenerative cells that a tree can access when it is damaged in the wind — when its top is broken or its trunk split. Some burls swell and shrink over time. The cedar growing on the slope in Avatar Grove wasn't the largest or tallest, but it broke the mould for how a tree should look. So the AFA presented the specimen to the world as "the gnarliest tree in Canada." The organization suggested having an online vote or competition to name the unusual tree, but "Canada's Gnarliest Tree" — or more colloquially, "the Gnarly Tree" — stuck. The campaign to save Avatar Grove spread like wildfire. To TJ Watt, the movement was beginning to feel like a "mini Carmanah."
Everything was coming together: the exceptionality of the stand of giant trees, the accessibility of the short drive from Port Renfrew, and the fact that an environmentally minded blockbuster movie was breaking box-office records in theatres. It was the perfect moment to launch a powerful pro–old-growth forest campaign.

* * *

The Ancient Forest Alliance led its first hike through Avatar Grove that same spring, guiding nearly a hundred people through the rain and bush. The organization set a goal of leading at least one hike every month for a year. People who could see first-hand the size and rarity of these trees were key to spreading the word and furthering the cause. Still, every time TJ Watt drove over the bump separating pavement and dirt logging road on a visit to Avatar Grove, he felt anxiety build in him. He didn't know if one day he would turn his van around the final corner and find the trees he was working to protect had been cut and hauled away. They needed to find a way to ensure this patch of forest was officially off limits to logging. At first, the move to protect a small patch of old-growth forest was met with a familiar tepid response from the provincial government. "I think it is important to mention that not all old-growth forests can be protected," wrote Pat Bell, the British Columbia minister of forests and range, in response to a letter from the Sooke Region Tourism Association and Port Renfrew Chamber of Commerce, which extolled the broader economic benefits of leaving old-growth forests that held recreation values. "A certain amount must be harvested to provide a viable and sustainable wood supply to the forest industry, which is an important component of the provincial economy." As news began to circulate more widely about Avatar Grove, writer and former environmental auditor Hans Tammemagi drove out to see the forest in the summer of 2010. He was shocked that an old-growth forest or a five-hundred-year-old tree held no formal protection.
Those within parks or protected areas did, but the trees carried no weight in and of themselves. Tammemagi continued farther up the Gordon River Valley to a recent clear-cut that contained several cedar stumps approximately three metres wide. One stump was nearly five metres in diameter — nearly wide enough for three people of average height to lie head to toe across its cut. Harvested that spring, cutblock 7184 lay just down the road from 7190, which was awaiting a pair of forest engineers to lay out its cutting map. For Tammemagi, who had worked in and around forestry issues in Canada for decades, the contrast between the intact forest and recent clear-cuts was staggering. When he called the office of the logging company that held the tree farm license, Teal Jones, they were adamant that what they had done was entirely legal. They had submitted a cutting application to the ministry, received approval, and set to work. There existed no legal mechanism or requirement for timber companies to save or exclude big trees. They may set aside a bear den here or there, or extend a riparian zone, but it is done at the discretion of the forest engineers. In June 2010, Tammemagi filed a complaint to the Forest Practices Board (FPB), the province's independent watchdog for forest and forestry issues. Formed in the wake of the Forest Practices Code of British Columbia Act, which came into effect in 1995, the board's responsibilities included ensuring the act was being followed; reviewing complaints and conducting audits into forest and range practices; and issuing reports and recommendations to the provincial government. The FPB received nineteen complaints within the first year of its establishment. Tammemagi's demands were threefold: a full stop on cutting the most "ancient" trees, a long-term strategy to protect old growth, and the immediate safeguarding of Avatar Grove from logging. 
In their report, released in February 2011, the Forest Practices Board highlighted a number of problematic issues, including the fact that forest policy "does not classify old growth in a sufficiently refined way to capture the full range of old forest values." A forest that is 250 years old is treated the same way as one that is more than 500 years old. Similarly, a tree that is 250 years old is treated the same way as one that is a thousand years old. If they're not protected in a provincial park, wildlife habitat zone, or old-growth management area, the most ancient and rare of Vancouver Island's trees can simply be cut. The FPB recognized that "certain individual, or small groups of, exceptional trees on the timber harvesting land base may provide a higher social and economic value if they are treated as a special resource feature and excluded from timber harvesting." Teal Jones responded to the board that these larger trees are often dying or rotten and are consequently felled not for timber value but as a safety measure for its employees working in the field. In its recommendations, the FPB encouraged "government, forest professionals, and forest licensees to seek creative means to conserve trees of exceptional size or form, age or historical significance and, where appropriate, the forest stands that contain them." It was a move that excited pro–old-growth environmentalists. The Forest Practices Board submitted their report to minister of forests and range Pat Bell, who requested a review of the government's existing legal mechanisms to protect big trees. The review determined that the tools and processes — the formation of old-growth management areas or recreation sites, for example — were only sufficient when the big trees were identified prior to the onset of logging operations in the area. No legal process required timber companies to set aside a thousand-year-old tree, for example, once harvesting had begun. 
Nor was there a mechanism for the provincial government to intervene once operations were underway. The pressure was placed on the public — activists, environmentalists, hikers, and the like — to find and identify these monumental trees before the process was initiated. When the province opened up a public review process, 232 out of 236 comments submitted were in favour of protecting Avatar Grove. Under public pressure, Minister Pat Bell called Ken Wu and set up a meeting. Bell made it clear to the activists that broad legislation to halt old-growth logging was not on the table. But setting aside Avatar Grove — the small patch just outside Port Renfrew — was. The AFA's request was straightforward: a small protected area along Baird Creek could be extended to encompass the entire grove. The thin band had been designated an old-growth management area (OGMA), a patch of forest recognized by the provincial government to contain old-growth attributes. Some OGMAs are intact groves untouched by commercial logging, while others are younger second-growth stands that are off limits to logging and maintained to achieve old-growth characteristics. There are more than fifty thousand OGMAs around the province, representing nearly four million hectares of forested land, but an investigation in 2010 by the Forest Practices Board found that approximately 30 percent of these OGMAs had protection by a government order, while 70 percent held no legal status — likely a result of government land-use plans acting not as legal requirements but as guidelines. Timber companies could build roads through or even harvest two-thirds of OGMAs. However, in their investigation, the board found that most licensees tended to avoid all OGMAs, even though they were not legally required to. Still, the fact that the responsibility to protect these forests was left largely to the discretion of the logging industry made activists uneasy.
Following negotiations, the minister agreed to extend the protected area to cover the entirety of what the activists were asking for, creating a fifty-nine-hectare old-growth management area. In February 2012, two years to the month after the AFA announced its identification of the old-growth stand, Avatar Grove was officially given protected status. The gnarly cedars within this patch of forest had found their shield — but safety came at a cost. To appease Teal Jones, which was now out of a lease to cut approximately sixty hectares of forest, Pat Bell offered compensation in exchange, by adjusting the borders of existing OGMAs to open them up for logging. Half of this compensation consisted of forest more than 250 years old, and half of older second growth, including one area of approximately one-hundred-year-old Douglas firs — a rarity on Vancouver Island. Just outside Lake Cowichan, a three-thousand-person town northeast of Port Renfrew in the middle of Vancouver Island, Mark Carter was running Teal Jones's operations for tree farm licence 46 out of a small trailer office, managing the company's forest engineers, including Dennis Cronin. Carter described the deal with Avatar Grove as an "easy call." Even though the Ancient Forest Alliance said in early 2010 that "the Grove is slated for logging any day now," representatives for Teal Jones have tempered this claim. When the company sent in their timber cruisers to do a value assessment, the cutblock didn't register as anything exceptionally significant — particularly in a region that contained sections of forests of much higher value. There were large cedars, but they were old, misshapen, or broken. The land was uneven, too, sloping down a mountainside with many gullies and depressions, which would make it more difficult to fall and extract than other patches in the valley.
Carter called the deal a "win-win," where the AFA received a grove of old growth within a short drive from Port Renfrew that they could market and publicize to tourists, and the logging company received stands of higher-commercial-value timber farther from town. Still, the deal was a tough pill to swallow for the activists, and a precedent Ken Wu felt uncomfortable setting. With so little of Vancouver Island protected in parks or old-growth management areas, and so much of it fragmented and disconnected, every stand of old growth or older second growth counts.

* * *

In the two years after Avatar Grove was first announced, thousands of people hiked the forest, creating two paths: one leading to TJ Watt's Gnarly Tree in the upper half of the grove, and another in a loop around the largest trees of the lower half. But the only way into the grove was for visitors to haul themselves up a slippery, often muddy slope, using a rope. It was a less than ideal — or safe — entry point to what was becoming the West Coast's new _it_-forest. The need to establish a more official trail system grew. In the summer of 2013, the Ancient Forest Alliance received confirmation from the B.C. Ministry of Forests, Lands and Natural Resource Operations that Avatar Grove would be an official recreation site, and permission was given to begin constructing a formal trail. The Pacheedaht First Nation donated cedar planks for the first phase of a boardwalk, which was expanded to include a number of viewing platforms and benches. But the First Nation was also not without concerns, pointing to the lack of toilet and refuse facilities, which are often found in provincial parks and managed by parks services. More importantly, Jeff Jones, chief of the Pacheedaht, pointed to a missed or yet-to-be-developed opportunity: there is nobody more experienced and knowledgeable about these forests than the Indigenous people who have lived, used, and worked within them for millennia.
He said the Ancient Forest Alliance initially suggested that during the summer high season, a permanent representative of the First Nation would offer guided walks through Avatar Grove, providing information both historical and ecological, and diminishing the impact of tourists who might otherwise wander off trail. With limited funding, guides have yet to be hired. The AFA has also received criticism from some who point to the thousands of tourists traipsing through a once-pristine old-growth forest as a mark of hypocrisy. If these organizations are truly for the protection of sensitive and dwindling forests, then why allow hordes of tourists to trample the undergrowth and clamber up the burls of trees to take photographs? It is an argument that Ken Wu easily dismisses: a thin trail through the forest is a small measure of impact compared to what might have befallen the grove. And with time, the impact of the initial hikers and the construction workers will fade. Some of the first boardwalks built in Carmanah Walbran Provincial Park in the mid-1990s have slowly been enveloped by the forest green, to the point where traversing the walkways in Carmanah feels like strolling on top of a cloud of undergrowth without disturbing anything living. Given time to heal, nature will reassert its dominance. Following Avatar Grove, the AFA announced the identification of two other groves in the region that also held potential as tourist destinations. The first was a section of old-growth forest, seventy hectares of which was a protected wildlife habitat, while another sixty hectares lay within a tree farm licence with no protection. Off Highway 14, just south of Port Renfrew on the road to Victoria, down a path frequented by in-the-know surfers heading to a secluded beach, was a grove of ancient cedars, firs, and spruces tightly clustered like the monoliths of Stonehenge. 
When it came to adding a name, Ken Wu knew exactly what to call it — a name that he had been looking to attach to a stand of old growth for years: Jurassic Grove. Plus, as the organization stated, if the British Columbia government decided to expand nearby Juan de Fuca Provincial Park to include the grove, the area could be renamed Jurassic Park. The second location featured centuries-old Sitka spruces rather than the bulbous western red cedars of Avatar Grove. One spruce was nearly four metres in diameter, almost big enough to break into the top-ten widest known Sitka spruces in the province. TJ Watt called the grove the "Serengeti of Vancouver Island" because of its biodiversity of fauna — elk, black bears, and wolves. The AFA named it FernGully Grove, after _FernGully: The Last Rainforest_ — the 1992 animated film that centres on an alliance between fairies and animals as they fight to protect their forest from destruction by loggers and an evil entity bent on its eradication. Still, not every campaign took off like Avatar Grove. A few kilometres farther into the Gordon River Valley lay a section of old-growth forest that rivalled any in the region. It held big trees and bear dens, bubbling streams and waterfalls. The AFA nicknamed it the Christy Clark Grove, after the then premier of British Columbia who, after being elected in 2011, had shown little interest in shifting policy away from old-growth logging. The organization even named one of the grove's largest Douglas firs the Clark Giant, and a burly western red cedar the Gnarly Clark, thinking that the premier couldn't let trees named after her be cut down. It was a bid to draw the attention of the province's highest politician to these vanishing forests, but the campaign never got off the ground. The name confused some left-leaning supporters, who accused Ken Wu of honouring the British Columbia Liberal premier instead of singling the politician out.
The AFA eventually renamed the premier's unwanted eponymous forest the more tame and apolitical Eden Grove. It wasn't the first time an environmental activist had tried this tactic with little success. In the summer of 1988, Randy Stoltmann found a Sitka spruce in the Carmanah Valley that had one of the largest circumferences he had ever come across. The tree was dead, a standing snag, so he named it the Dave Parker Tree after the minister of forests who had called the forests of Carmanah "over-mature" and therefore of little value and in need of immediate harvesting. The name never stuck. Public attention on its own can rarely lead to formal protection for these forests. For each of the AFA's identified ancient groves near Port Renfrew, Ken Wu tried to court the Pacheedaht First Nation, who he said has held the "trump card on the issue" of pushing for protection. In some instances, support was forthcoming — from supplying wood for boardwalks to lobbying the government — while in others the Pacheedaht have been cautious about wholly taking the activists' side. Support from the nation meant that the activists wouldn't have to engage in any direct-action forms of protest or hold rallies. Instead, the Pacheedaht could push for the groves to be turned into old-growth management areas in their negotiations with timber companies. That way, the AFA wouldn't have to make concessions to the British Columbia government as they did with Avatar Grove, trading nine cutblocks to save one.

* * *

Avatar Grove arrived at a time of widespread skepticism and doubt within the environmental community in Canada. Three months after the AFA's flagship forest was formally protected, the country's elder of environmentalism, David Suzuki, was struck with defeatism. "Environmentalism has failed," he boldly declared in May 2012.
He cited many successes, but noted that "we were so focused on battling opponents and seeking public support that we failed to realize these battles reflect fundamentally different ways of seeing our place in the world." But within this handful of hectares of old-growth forest near Port Renfrew was an optimistic model of collaboration among activists and timber workers, Indigenous groups and businesses. Still, not everyone immediately swooned at the tourism potential of marketing Vancouver Island's big trees. Greg Klem, who moved to Port Renfrew from Kitchener, Ontario, in the mid-1990s, had driven by what became Avatar Grove many times while working in tree planting, up and down Vancouver Island — part of the hordes of seasonal workers contributing to the 200 million seedlings that are planted every year across the province. Klem was surprised by the attention Avatar Grove was receiving. In his estimation, the stand wasn't even that special. He had walked through dozens of old-growth groves that were grander and more varied than the one attracting the media and the public. The forest, according to him, is based on a lie — he claims it is not entirely old growth but a handful of ancient cedars interspersed with much younger hemlocks, which led him to dub it "Avafraud Grove." In an opinion article for the _Sooke News Mirror_, a local paper, in the spring of 2011, Klem wrote, "Unfortunately, much of the campaign has been based on misinformation, falsehoods and 'spin.' The 'Avatar Grove' is neither ancient nor endangered. The handful of damaged, old survivors are surrounded by 100-year-old diseased hemlock that grew after a major windstorm." He wrote that there were other forests in the area that were more spectacular than Avatar but were logged without fanfare or opposition, noting that "some trees must be more equal" than others, in a reference to George Orwell's _Animal Farm_.
The Ancient Forest Alliance defended their campaign by focusing on Avatar Grove's location, saying it is "particularly valuable because it is the easiest to access monumental stand of ancient trees near Port Renfrew. Other old-growth stands are farther away along rough logging roads, on steep slopes." While forests untouched by commercial logging may not bear the scars of chainsaws or heavy machinery, they still demonstrate natural wounds. No old-growth forest on Vancouver Island stands utterly unblemished, with every tree being allowed to grow unmolested. There are always young trees alongside the ancients. But Klem's frustration had a deeper cause. These forests had formed the foundation of an eco-tour company that he had casually established a decade before Avatar Grove was brought into the limelight. He would explain logging practices and history to visitors, all the while driving the bumpy backroads more often frequented by hulking trucks laden with logs than tourists. He would point out big trees, but also big clear-cuts. He would guide groups through old growth while explaining the problems with today's timber harvesting. He would show the grey as well as the green. The centre point of his tour was a massive, twisted cedar that he nicknamed "Lumpy," a tree whose location he has kept a relative secret — rather than publicizing the tree for anyone else or any organization to use for their benefit. On the side of his white pickup truck he scrawled in green paint his email address, in case a passerby might be interested in a Lumpy Tour. But Klem's exasperation boiled as he saw the attention on TJ Watt's "Gnarliest Tree in Canada" grow from local to national — with hundreds of visitors on busy summer weekends clambering into the forest to see the unusual tree. "It's not even the twelfth-gnarliest in the district," Klem said, without offering examples or recognizing the complete subjectivity of the designation. 
To make a point, Klem retrieved his can of green paint, and along the tailgate of his truck wrote a new slogan for his Lumpy Tours: "Bigger. Better. Knarlier [_sic_]." Once Avatar Grove started to pick up momentum as a focus for tourists — and the AFA began giving free hiking tours — Klem found it harder to compete, to the point where his business dried up. He has no Instagram page or fancy website; his only advertising is a small listing in the business directory in the Port Renfrew Chamber of Commerce's brochure. Rather than polished activist-speak, he'll give you a reality check from someone not shy with the contradictions: that some of the highest-productivity forest regions are also where we have built our communities; that a gradual weaning away from old-growth logging is more likely than a cold-turkey full stop; and that environmentalists will always look to polish a feature in order to market their cause. Klem has been imagining a reality show based on the premise of hunting Canada's next biggest trees. "More than likely you'll find it right on Mount Edinburgh, in that goshawk preserve," he said, speaking about the mountain that rises above the Gordon River Valley. "There's giants in there." The elusive big trees that may stand somewhere within a remote valley on Vancouver Island have always inspired Klem to seek them out. There is power in stumbling upon a natural skyscraper in a dark forest — maybe not power to push change, but at the very least power to inspire. "Fewer people are going to church anymore, so they're looking for something to grasp on to. Forests is the new cause for them." For the Ancient Forest Alliance, encouraging tourists to visit Avatar Grove grew out of the hope that each person would come away with an impression — awe at the very least, vexation that these forests are still being cut at best, and ideally something approaching anger that could be harnessed into action. But mostly, people come to see the trees.
They come to wander in the woods and look up at the towering Douglas firs and take photographs beside the plump western red cedars, their tiny frames juxtaposed with some of the largest examples of these species on the planet. It is the story of the trees that people are drawn to. While the organization was writing press release after press release and feverishly petitioning the provincial government to protect Avatar Grove, timber workers continued to delve deeper into the valleys of Vancouver Island, flagging and cutting hundreds of other patches of old-growth forest. In late January 2011, a few kilometres down the dirt road from Avatar Grove and across a bridge high over the Gordon River, Dennis Cronin stepped out of his truck, quietly put on his caulk boots and hard hat, and began preparations to bring down cutblock 7190.

Chapter 7

The Logger

Dennis Cronin stepped back from the giant Douglas fir he had just flagged with green ribbon, and continued on through the forest. As he was marking the cutblock with orange and pink and red ribbons, he noticed he was being followed. It was common to encounter a bear in these remote valleys; cougars and wolves were more rare. But this time it was a bird. Wherever he went in the cutblock, a blue and black Steller's jay — the official bird of British Columbia — took particular interest in his work. "He would follow me around like a dog," Cronin later said. "I would be traversing creeks, taking my measurements and bearings, and he's hopping behind me, picking up the bugs as I stirred them up." The bird would stop when he stopped, cock its crested head to the side, and follow along. Even when he returned the following day to finish the site plan, at some point the jay would appear and Cronin would toss a piece of his peanut butter sandwich to the bird. But when Cronin and his partner, Walter Van Hell, found a way over the creek that acted as the boundary of 7190 to flag a neighbouring section of old growth, the jay stopped.
As the pair completed their work in the two cutblocks near the base of Edinburgh Mountain in the Gordon River Valley, traversing back and forth across the creek boundary, the bird always remained in 7190. "He would never cross that creek. We would pick him up again when we crossed back," Cronin said. At the Teal Jones office, a teal-coloured building located fifty kilometres away from cutblock 7190, near Lake Cowichan, Cronin and Van Hell transcribed their field notes of the forest's features and contours on a computer map of the cutblock. They added thin red lines for the creeks and rivers, to mirror the red flagging they had placed in the forest. They marked where an access point should be built, where a cable yarder could be positioned to haul the logs to the road. And they calculated the merchantable cubic metres of wood within the cutblock. At roughly the size of twelve football fields, cutblock 7190 was a tiny sliver of the great forests that had once covered the island. But it held some towering and valuable trees. The price of timber fluctuates every year, depending on species and market, but that year, old growth was fetching between $80 and $100 per cubic metre of wood. (One cubic metre is roughly the size of a telephone pole.) West Coast old-growth forests produce between 800 and 1,200 cubic metres of wood per hectare, roughly twice as much timber as second growth. The wood in this one cutblock could yield a gross value of approximately one million dollars.

* * *

Dennis Cronin spent the majority of his life walking through old-growth forests, under the canopies of some of the largest trees in the country. He was born in the spring of 1954 in Toronto, and his family moved out of the city when he was five. In 1972, when he was eighteen years old and living in the small farming town of Whitby, Ontario, he headed west, where he had a choice of towns on Vancouver Island which were booming under the banner of falling trees.
At the time, the West Coast timber industry was raging, with big money to be made. Unlike other resource surges across the country — in oil production, mining, or seasonal fishing — the move to work in the forests was more often than not a permanent one. Towns across the island had sprouted out of the sawdust of timber mills. Work camps situated in remote locations in the bush had evolved into communities with schools, shops, post offices, and hospitals. Holding everything together was the local timber company, which provided jobs and incomes to keep families not only afloat but flourishing. Some towns, such as Mill Bay on the east coast of the island, bear names that reflect their timber history. For Cronin, a secure job was only half the draw. He wanted to work in the great outdoors of British Columbia, with pitch on his hands and mud on his jeans. He settled in the tranquil community of Lake Cowichan. The town was not only located in the heart of southern Vancouver Island's forested hills but for decades had been one of the region's most important timber hubs. For a decade and a half, through the heyday of West Coast logging, Cronin walked the forests as a hooktender, leading a crew that hauled logs out of cutblocks. "It was continuous clear-cut back then," Cronin said. "You just cut everything down. If it was there, you mowed 'er all down." As the eastern half of Vancouver Island began to run out of the high-value, old-growth forests so coveted by timber companies, operations started delving into the plunging and wet valleys of the island's west coast. "I logged some big dough in the valleys," said Cronin, with both a touch of pride and a touch of regret. "You'd only be six hundred feet from the landing and there'd be just monsters." In the sluice flats of the Nitinat Valley, west of Cowichan Lake and up the coast from Port Renfrew, he remembered Sitka spruces so big they defied standard operation. "You'd have to get low beds to come in," he said. 
"You couldn't get the logs high enough to get them onto a logging truck. You'd need two machines picking it up at the same time." In the late 1980s, after years of back-breaking work hauling logs, Cronin wanted a change. The B.C. government was formalizing the role of forest engineers, and recognized his experience, like that of many others, counting it as training. It was a calmer job, and one less taxing on the body. Forest engineers are often the first wave of loggers to enter a cutblock. Their job is to survey the land, design where roads should go, mark any unusual features, and build a layout map for the fallers. "We might walk around for three days scratching our heads, looking at the ground, looking at the trees," Cronin said. In this role, he began seeing trees differently. "Fallers see them lying on the ground, not standing up," he said. "So it's quite a difference being the first ones in." In his previous role, he worked in cutblocks that had been clear-cut, but as an engineer he worked in intact forests untouched by commercial logging. In the bush, Cronin looked every bit the West Coast logger. When he went to work, he wore jeans and a plaid or work shirt, with the sleeves rolled up when the weather was warm. He wore a hard hat and a timberman's spike-soled caulk boots so he could traverse the forest with ease. He shaved clean, except for a bushy moustache. He never left for work without a loaf of bread and a jar of peanut butter for his lunch. For fifteen years, he had one main partner while working in the woods of Vancouver Island, forest engineer Walter Van Hell. The pair became so comfortable in the bush that when they came across a bear den in a hollow cavity of a large cedar tree, they would reach inside and feel around, or even poke their heads in, without any fear that a bear would come tearing out or tear something off. Cronin didn't just work the forest; he lived and breathed the bush.
Few weekends passed without at least one excursion into the vast network of unpaved logging roads around Lake Cowichan or Port Renfrew. He would go hunting up the mountains with friends, or fishing with his two sons along the hundreds of creeks and rivers that drain into the Pacific. One of his favourite activities was shed-antler hunting, where he would hike around looking for deer or elk racks that the animals would naturally drop in early spring. Or he would simply wake up on a Saturday morning and say to his wife, Lorraine, "Come and see my new patch," which referred to either a grove of old growth he had recently flagged or a forest that had recently been cut. They would often hop in their truck and head out for the day, to hike through a grove Cronin had been working or to a point of interest — maybe it was a sliver of Pacific Ocean that had recently been exposed after loggers had done work to a cutblock. Maybe it was a bear den. Or maybe it was a tree he deemed unusual. Over decades, Cronin developed a deep understanding of these forests. There were some who just went to work and got the job done, but Cronin wanted to know the details. He could recite the names of every species in the rainforest and the regulations within the governing codes. When his co-workers had a question, they would seek out Dennis. Cronin had seen hundreds of giants, but this one Douglas fir in cutblock 7190 stood above the rest. "When I walked up to it, I passed some big firs and some really big cedars — twelve-footers, maybe," Cronin said, referring to the diameter of neighbouring trees. But this one fir dominated the rest. "He towered above the forest. He stuck out like a sore thumb." Douglas firs and western red cedars are the two species in this area that are the most wind resistant, and so are often stable enough to outlast storms and continue to grow through several iterations of a forest over a millennium.
Still, many of the larger, centuries-old examples of these two species break off at their more fragile tops, and over time their centres fill with water and rot. They become unstable and prone to blowdown, and the timber inside slowly begins to lose its value. After decades as a timberman, Cronin could tell by looking at a tree's bark and the knots along its trunk if there was rot inside. The big Douglas fir held just the faintest twist in its trunk, which was free of limbs or blemishes up to its crown. When Cronin wrapped the green "LEAVE TREE" ribbon around its base, he secured it tightly with a knot. Over the course of his career, Cronin had flagged other trees with green ribbon, but they were ones that he considered to hold non-merchantable wood: their trunks were too twisted or too flawed. When he laid eyes on the big Douglas fir in cutblock 7190, he could see immense timber value. "I'm a logger and I've taken out millions of trees. But I was impressed." He couldn't know with 100 percent certainty — "You don't know until you put a saw into it and by that point it's too late" — but the tree exhibited few of the telltale signs of rot or disease. He had an encyclopedic knowledge of these forests, but could also see beyond a tree's rough bark to the dollar value of the timber within. "I can look at a tree and tell if it's got value or not. If it's not twisted, if the bark is healthy, if the limbs are healthy," Cronin said. "That one had value." Encased within the deeply crevassed bark of this Douglas fir lay enough wood to fill four logging trucks to capacity, with some to spare. If milled into dimensional lumber — two-by-fours, two-by-sixes, and the like — it could be used to frame five 2,000-square-foot houses. At first glance, he assessed the single tree's unprocessed log value at around $20,000.
But since it was a Douglas fir, with its coveted warm colour and pronounced grain, the tree could be turned into higher-priced beams and posts for houses in Victoria and Vancouver. This single tree could fetch more than $50,000. A site plan for the fallers had already been drawn, but at Cronin's insistence it was redone to take into account the Douglas fir he had flagged in the middle of the forest. It cost Teal Jones around $1,000 to redraw the site plans alone. In the middle of the map, Cronin and Van Hell dropped an icon in the shape of a single tree, marking the location of the designated Douglas fir. The falling crew would be forced to honour this map: a single icon on a page, and a thin, tearable ribbon around a broad trunk — that would prove the strongest form of protection.

* * *

Less than a year after Cronin wrapped the green flagging around the big Douglas fir, the trees of cutblock 7190 were gone. Throughout the summer of 2011, the grove of old-growth forest stood awaiting its fate. When the October rains turned heavy, a sound erupted in the cool morning air: fallers, contracted by Teal Jones, were starting up their chainsaws. Following Dennis Cronin's ribbon markers and the map drawn by Walter Van Hell, the fallers began bringing down the trees. The teeth of the saws bit into half-a-millennium-old trunks, casting arcs of sawdust that settled over sword fern and moss. The cut conifer quickly filled the air with a thick, woodsy perfume. The giant cedars and firs hit the forest floor with thunderous thuds, but the trees might as well have made no sound at all. A crew of hooktenders wrapped cables around the trunks of the fallen trees, attaching the lines to a cable yarder positioned on the road above the clear-cut.
One by one the logs were hauled and loaded onto trucks, driven across the bridge over the Gordon River, past a group of anti-logging activists standing next to a grove of old-growth forest, and across the island to the town of Lake Cowichan, where Dennis Cronin lived. From there, the logs were trucked up-island to Nanaimo, where they were dropped into the ocean and incorporated into a boom. Tugboats hauled the boom across the Strait of Georgia, under the bridges of Vancouver, and up the Fraser River to the Teal Jones mill on the mainland. Unlike many logs that are exported whole, or raw, for processing and manufacturing, those of cutblock 7190 remained in the province. They were de-barked and run through a milling machine, which dissected them into timbers of various lengths and dimensions. There are beams of houses or pieces of furniture, windows or doorframes, guitars or works of art that are made from the wood harvested from cutblock 7190. After a few months, silence returned to the base of Edinburgh Mountain. The fallers had long since packed up their chainsaws and gear; the trucks, laden with logs, had departed. A faint dusting of snow fell onto the clear-cut. As spring came, any remaining mounds of moss and bushes of salal crackled and dried up in the unfiltered sun. Bears that had called this patch of forest home found other hollows to den, while birds sought other branches to roost. Every wiry cedar, every droopy-topped hemlock, and every great fir that once made up this rainforest grove was gone — every tree, except one.

Chapter 8

Last Tree Standing

Dennis Cronin's big Douglas fir swayed quietly on its own in the middle of cutblock 7190. Winds swirled, grey mist rolled off the Pacific to fill the valley, and the sun rose and set. But the tree stood. One morning, the sun rose behind Edinburgh Mountain, rays fragmenting through the trees that cap its ridge. In the valley below, near the mountain's base, a single tree stood in darkness.
Across the Gordon River, sunlight hit the tops of the hills before slowly descending down the slopes. Then, after most of the hills across the river were warmed with an orange glow, the broken top of a towering tree in the middle of a clear-cut was illuminated, like the lighting of a solitary candle. The sun climbed higher above the mountain until the entire great Douglas fir was gradually revealed from under the mountain's shadow. Along the rutted, principal logging road that ran through the Gordon River Valley, TJ Watt navigated his blue, right-side-drive Mitsubishi Delica, scanning the hills on either side through the windows. The tall van bumped this way and that, over a road that in parts had been packed smooth by heavy logging trucks laden with timber, while other areas were washed-out rough, as if paved with petrified loaves of bread. The hillsides in the Gordon River Valley were a patchwork quilt of cutblocks in various stages of regrowth. Some hills appeared cartoonish, as if drawn in a child's scribble book, with canopies of replanted saplings growing in unison to form a single layer. From a distance, the second growth looked less like forests than fields of even-aged wheat. There were fresh cutblocks, too, with stumps and scraps of cedar and fir, bright orange and ochre as if still warm from the chainsaws and machines that had cut them down. There was little remaining in these patches — a few fragments and splinters left behind after the logs had been hauled away. And there was old growth, when Watt looked closely, clinging to the very tops of a steep mountainside or down the plunging bottom of a gorge. These were the inaccessible trees, too far or too difficult or too costly to access by a timber company. It was a cool day in February 2012 as Watt approached Avatar Grove. The forest he had helped protect was drawing tourists from afar. 
This time he kept going past the grove, farther into the spiderweb of dirt logging roads that covers much of the southern half of Vancouver Island. Watt had grown used to seeing trees disappear. In his role as campaigner and photographer for the Ancient Forest Alliance, he had driven thousands of kilometres of logging roads looking for the island's dwindling old-growth forests. Over the years, his expeditions to find groves untouched by commercial logging had forced him to delve deeper, along the rough backroads of the island, up mountainsides and down valleys, in search of Canada's last great trees. More often than not, what Watt found was not intact forests but fresh clear-cuts. Driving along these roads felt like peering into a post-apocalyptic future: dry, dusty, barren — a wasteland of destruction. But every so often, at the end of a road, he found a glimpse of a glimmering and verdant past — a remnant of a forest that had been left largely undisturbed for millennia. When he spotted the telltale signs of large, ancient trees emerging from a canopy, he would park his vehicle alongside the dirt road and head into the tangled forest on foot. It was no easy job to traverse some of the densest forest ecosystems in Canada, where an hour can pass, and you've advanced only a couple hundred metres, where undergrowth forms impenetrable barriers of bracken and bush, and where wild animals of tooth and antler lurk. But possibility compelled him farther, up hill and over creek, in the hope of finding some of the largest trees in the world — placid leviathans waiting in the forest. With each kilometre he drove and every ramble he took, the clock kept ticking. Logging companies continued to build new roads in a feverish bid to access new groves. Watt was trying to find them before a logger did. With each expedition into the bush, he could feel the race to locate, and hopefully protect, a small fraction of the province's arboreal legacy before it was permanently cut away.
His goal was to bring back evidence not only that clear-cutting old growth continues to occur, but that there are still forests that can be saved from the saw. If you aren't familiar with the roads and terrain here, it is easy to become lost. Take one wrong turn and you can drive for hours, switchbacking up and down hills before arriving at one of the thousands of dead ends that mark the extremes of a logging company's reach. But Watt was familiar with this area. He had explored the valley that follows the Gordon River dozens of times, and he knew where he was going: to a patch of forest at the base of Edinburgh Mountain that was part of one of the largest continuous unprotected tracts of old-growth forest on the island. Located alongside the river, on a gentle slope, it was a prime candidate for producing big trees. Out the window to his right, something caught his eye: the unmistakable orange of a fresh clear-cut. He knew the road would lead to the stumps, to where he had been hoping to find trees. After turning onto a spur road he was forced to stop at a locked gate, a clear sign that there was current logging activity in the area. Watt grabbed his camera and continued on foot, across a single-lane wooden bridge. A hundred feet below, the emerald-green waters of the Gordon River thundered towards the Pacific Ocean a few kilometres away. On either side of the road grew young alders, often the first species to regrow after a cut. The area had seen much logging over the years, with replanted forests filling in the blanks. Farther down the road, the smell of conifer grew stronger, of cut wood and glossy needles releasing their oils into the air. He rounded a bend, glanced to his right, and stopped. The patch of old growth he had come to hike through was gone — a bite had been taken out of the forest. It was a familiar feeling for Watt, to return to photograph a lush ancient forest only to find it levelled. 
If you make enough trips off the island's main roads, the excursions begin to feel like surprise funerals. Watt often returned home from a weekend to compare his photographs of a recent clear-cut with images he had taken only months previous. It was jarring to witness: before and after, green and grey. Before him, this time, was a scene altogether different from any he had ever photographed. It wasn't a forest or a clear-cut; it wasn't an unblemished ecosystem or the scarred remains of an industrial harvest, but something he had never seen. What stood out to Watt wasn't the fact that yet another section of old-growth forest had been decimated, but that in the middle of the cutblock a single tree remained standing. It was a Douglas fir — and it was enormous. The tree was limbless from its base to 80 percent of its height, where a crooked crown of branches held dark green needles that ruffled gently in the breeze. One of the branches — which bent down and then up like a flexed arm — could have been a tree in and of itself. He brought his camera to his eye. Through the viewfinder, he framed an image unlike any he had taken before. In the middle of the clear-cut, the giant fir stood like an obelisk in a desert.

* * *

From the road above the cutblock, the scene looked like the aftermath of a nuclear detonation: a blast of destruction that ended abruptly at the shockwave's farthest point. Yet at the centre was not a crater but a single tree. The clear-cut was fresh: branches that had been cut off from logs still held their green needles, and fractured remnants of hundreds of firs, cedars, and hemlocks had yet to turn from warm orange and yellow to sun-bleached grey. The clear-cut was scattered with trunks, branches, and shattered wood — anything deemed of little or no value to the timber company that had come and gone.
An excavator had been left within the cutblock and a cable yarder on the slope above, where the clear-cutting extended across the road and up the hillside. Cut and branchless logs lay in haphazard piles, the scene like a game of pick-up-sticks abandoned by a giant. A fresh cutblock is a jarring sight to behold. Along each colossal stump runs a ridge of splintered wood, marking how far the chainsaw entered the trunk and where the tree fractured as it fell. Emily Carr, in her wanderings of Vancouver Island in search of landscapes to paint, called these remains "screamers." They are "the cry of the tree's heart," she wrote, "wrenching and tearing apart just before she gives that sway and the dreadful groan of falling, that dreadful pause while her executioners step back with their saws and axes resting and watch. It's a horrible sight to see a tree felled, even now, though the stumps are grey and rotting. As you pass among them you see their screamers sticking up out of their own tombstones, as it were. They are their own tombstones and their own mourners." This was part of what the Ancient Forest Alliance was calling the Christy Clark Grove, a campaign that wasn't resonating with the public the way Avatar Grove had. The reality was settling upon TJ Watt. He had driven along this very logging road many times in the previous two years. He had crossed the bridge high above the Gordon River, followed the bumpy track flanked by second growth, and driven between two towering groves of old growth at the base of Edinburgh Mountain. He had passed cedars and Douglas firs that flanked the road as if he were driving between skyscrapers of a downtown core. He had hiked down into the old growth, taking pictures of any big tree he came across. If he had only crossed a small creek and continued through the bush for a hundred metres or so, he would have found himself standing under the second-largest Douglas fir in the country.
Or he might have walked close but never seen it, as he focused on not breaking an ankle in a crevasse made by moss and root, or as he trudged around impenetrable barriers of undergrowth and deadfall. He could have been within a few dozen metres of such a gargantuan tree and walked right past it — the forest forcing him to follow its own paths, which may lead to danger or discovery. Watt had been so close, and yet he might as well have been another valley away. As he stared across the ruin of a forest, a familiar feeling of frustration and anger set in — at the logging company for harvesting yet another old-growth grove, and at himself for not identifying it or protecting it in time. But he could only do so much. The rate at which forests were being cut far exceeded the ground a few eager activists could cover. Dispirited, Watt continued down the road to hike in the adjacent grove of intact old growth. He wanted to feel the soft, spongy earth under his boots, smell the conifers and peat, and hear the creeks babbling between the trees. He wanted the comfort of being in an intact forest. When Watt returned to Victoria, he mentioned the clear-cut with the solitary big tree to Ken Wu. But with the campaign to protect Avatar Grove reaching its climax, and an announcement imminent, it was shrugged off. Wu knew that timber workers sometimes leave individual trees or patches of trees, so it didn't sound that unusual. All attention was on developing Avatar into a premier tourist destination. A month later, while accompanying a documentary-filmmaking student who was interested in filming clear-cuts, Watt finally descended into cutblock 7190 to stand under the towering Douglas fir, which stood on a flat plateau at the bottom of a slope. Dried twigs and slash snapped loudly under his boots. He scrambled over the jumble of deadwood and past three-metre-wide cedar stumps. 
He tiptoed along what had once been a nurse log, now scrubbed of the miniature forest of seedlings that had been growing along its length. With every step farther down into the clear-cut, the tree kept getting bigger and bigger and bigger until it towered above him, blocking out the sun. TJ Watt looked up. In that moment, he knew he had stumbled upon something significant. To his eye, as someone who had spent years documenting Vancouver Island's big trees, this fir looked to be one of the largest in Canada. He had visited the largest known Douglas fir in the world many times — the Red Creek Fir, a giant 73.8 metres tall and 4.2 metres in diameter, located an hour away down several twisty turns of logging road. This one appeared roughly the same size. Watt's initial photographs of the tree had no point of reference, and without a forest to compare it to, no scale. But this time, Watt returned to Ken Wu with a photograph of him standing on a cut hemlock stump adjacent to the enormous Douglas fir, leaning in and touching the tree's broad trunk. The photograph sent tingles down Wu's spine. The scale was key. Seeing a human dwarfed by the tree made all the difference. Wu had to see it for himself. "This could be the biggest Douglas fir in the country!" Wu said to Watt, after the pair made the trip to Port Renfrew to stand in the middle of cutblock 7190 and look up at the solitary tree. In that moment, the two activists realized that this tree presented a different opportunity than an intact old-growth forest. The pair stumbled around the cutblock. Among the discarded branches were the stumps of once-ancient cedars and firs. Wu and Watt climbed on top to examine each one's rings. Some trees, they estimated, had been around five hundred years old. It is a challenge for any environmental activist to motivate the public into action — to write to a politician, to join a protest, or simply to vote in an election with an issue such as old-growth protection in mind. 
For Wu, activists need something for the public to rally around: a point of tension, a symbol, an icon. The general concept of protecting old growth can never resonate as much as a place that people can walk through, touch, and see. Environmental photographers can help bring what is often a remote issue into the home. But images of a babbling creek surrounded by forest, of a bear being dwarfed by a tree, of an eagle soaring over a valley all tend to blend together. They can be beautiful, but they are rarely effective. The challenge, for an activist pushing a cause, is to find an image — a symbol — that transcends nature and starts making people think. There, in the middle of cutblock 7190, stood something different. Hope amid devastation. Life enduring against the odds. This tree provided exactly what the Ancient Forest Alliance needed: an image that symbolized its cause. What TJ Watt hadn't fully recognized on his first visit now became clear: this was an opportunity.

Chapter 9

Growing an Icon

On March 21, 2014 — timed to the International Day of Forests — the Ancient Forest Alliance issued a press release titled "Canada's Most Significant Big Tree Discovery in Decades!" Attached was the self-portrait of TJ Watt leaning against the tree, and the claim that it was possibly the second-largest Douglas fir in the country, just behind the Red Creek Fir, measured by calculating its total volume. The tree was perfect. It was a near record-breaker. It was close to Port Renfrew, a town humming with activity around big-tree tourism after Avatar Grove. And it was alone. It was a site that could be visited by tourists, whose photographs wouldn't even need a caption. The tree summarized the entirety of the AFA's old-growth forest conservation issue in a single staggering blink. The Ancient Forest Alliance gave it a name: Big Lonely Doug.
"The days of colossal trees like these are quickly coming to an end as the timber industry cherry-picks the last unprotected, valley-bottom, lower-elevation ancient stands in southern B.C. where giants like this grow," Ken Wu stated in the release. "It's time for the B.C. government to stop being more enthusiastic about big stumps than big trees, and for them to enact forest policies that protect our last endangered ancient forest ecosystems," TJ Watt noted, hoping this single tree might push a change in legislation. Big Lonely Doug instantly became a celebrity. This wasn't just any tree in a forest. This was a sole survivor standing amid ruin. And its anthropomorphization resonated with people. It had a name, and a sad one, too. The press release remarked that the tree's trunk bore a scar in its bark around the base. Throughout Vancouver Island's logging history, large Douglas firs have often been used for their strength and stability as an aid in hauling felled logs from a cutblock. A crew might wrap cables around a prominent trunk to use it as an anchor for hauling logs. The cable would dig into the thick bark of the tree as a steam donkey or machine hauled the logs. The scar around the base of Big Lonely Doug wasn't there when Dennis Cronin wrapped green ribbon around its trunk. If the tree had been used as a yarding point by the hauling crew of cutblock 7190, that hadn't been Cronin's intention. But the logging crew saw strength in the tree's size and girth that could be employed. In its first days as a solitary tree, it was turned into a spar to haul logs from the cutblock. The image of the tree presented by the AFA was not only that of a survivor but of a victim — forced to bear witness to the razing of its forest, while simultaneously being used as an aid in its destruction. The story rippled through the media, with the _Globe and Mail_ calling the tree "sad" and "perhaps the loneliest tree in Canada."
Many timber workers, including some of Dennis Cronin's co-workers, met the media attention with little more than eye rolls. Fallers and forest engineers immediately questioned the "second-largest Douglas fir" designation given to Big Lonely Doug. To many people in the industry, "tallest," "largest," "widest," and perhaps especially "gnarliest" are little more than monikers that help promote an activist cause and attract attention — just another way of commercializing the trees. Activists may not sell the timber, but they sell the trees. During more than a century of commercial logging on Vancouver Island, timber workers have encountered hundreds if not thousands of trees larger than Big Lonely Doug. Nearly every single one has come down. Mike Pegg, who worked with Cronin at Teal Jones, noted another Douglas fir, just off a spur road and up a hillside nearby in the Gordon River Valley. He said it was bigger than Big Lonely Doug. But the tree had been blown over by the wind. There have existed much wider and taller Douglas firs, but apart from a handful, including the Red Creek Fir, none of them are still standing. They've been felled by chainsaw or axe, or have succumbed to a vicious storm. Regardless of their size, they have fallen. A tree cannot be a record holder if it no longer exists. The oldest person in the world does not retain her crown when she dies. The difference is one of perspective: to those in the industry, a record tree is a record tree, regardless of whether it is alive and standing or fallen and dead. But to activists and ecologists, the value in these trees isn't finite. The return on these forests doesn't have to end when the wall of a house is erected. As soon as Big Lonely Doug hit the media, questions began to surface about its survival. Amid the wonder and awe at such an unusual sight was concern for the tree now that its forest buffer had been cut. 
"The fact that all of the surrounding old-growth trees have been clear-cut around such a globally exceptional tree, putting it at risk of being damaged or blown down by windstorms, underscores the urgency for new provincial laws to protect B.C.'s largest trees, monumental groves, and endangered old-growth ecosystems," Ken Wu said in the press release. "Lonely Doug is far more susceptible to blow-down in a serious wind now that his forest mates are gone. There's a metaphor there for us on the planet," one commenter posted under a news article about the tree. "If there is a major storm in summer or winter, sadly this great tree that has seen history could keel over," another wrote. These gargantuan trees, despite having endured for centuries, do fall. On New Year's Day 1997, a fierce windstorm tore through MacMillan Provincial Park, home to Cathedral Grove — a stand of easily accessible ancient Douglas firs situated alongside the narrow cross-island highway to Tofino. The wind knocked down some of the grove's largest trees and reshaped the structure of the park. In 2003, a sixty-metre-tall Douglas fir in Cathedral Grove came crashing down onto a parked car, killing two people inside. Along the Koksilah River, less than an hour's drive north of Victoria, stood a Douglas fir that had been left by timber workers who proceeded to cut most of the surrounding forest. The seven-hundred-year-old tree blew down in a storm in 1979 because, ecologists asserted, it was without its protective buffer. At nearly four metres in diameter, the Koksilah Tree was one of the largest Douglas firs ever documented, and at the time held the record of being the second-largest Douglas fir in Canada, at 69.2 metres tall. Its fallen log became incorporated into a nature trail where hikers could walk along its length, gripping the furrowed bark of a ruined tower of old growth with their boots. 
Many activists point to the Koksilah Tree as an example of why saving individual trees — whether by activists or by loggers — is a short-sighted approach to protecting old growth. The projection that Big Lonely Doug would suffer the same fate appeared on the surface to make logical sense, but it overlooked several less-obvious ecological forces at work. Storms off the Pacific Ocean have hammered southern Vancouver Island for millennia. One sudden and riotous wind can be found just up the coast from Port Renfrew, where high pressure on the west side of the island forces wind through the Alberni Valley and across the entire island. The system is known as a Qualicum, after the beach and surf town on the island's eastern coast where the wind disgorges into the Strait of Georgia. But ferocious winds are known to race up every valley that runs perpendicular to the Pacific, battering the trees that stand dozens of kilometres from the coast. Over time, this strength training creates robust root systems, and while branches may occasionally blow off, those trees that endure grow thicker with each passing storm. Wind also acts as a form of brute natural selection, picking off the weaker and older trees with rot in their centres, or those that never had strong root systems to begin with. Wind is a relentless force along the coast, and if a tree cannot withstand the torture, it falls. Those that can, survive. Many of the large trees that have fallen in storms have blown down not because of exposure but because of age. While these giants may seem to be god-like eternal beings that have survived a thousand years, and should therefore survive a thousand more, they are impermanent, with an inevitable death — just like anything crawling or growing or lurking in a forest, regardless of size or stature. Big Lonely Doug had endured strong winds for as long as its apple-green needles had protruded above the forest canopy.
Before cutblock 7190 was felled, the tree's crown stood well above the treetops, where it bore the full brunt of winds that coursed through the valley every year. The tree bears several scars from the wind that predate the loss of its forest buffer, including a broken top — as many of the largest Douglas firs do — like the chipped turret of a castle constantly under siege. When Dennis Cronin first walked the stand, among the dozen or so exceptionally large cedars and firs he noticed something unusual about this particular patch of forest: there was a significant gap in the age range between the largest trees — some three metres wide — and the rest of the grove. There was a collection of large cedar and fir stumps, around or greater than five hundred years old, but the majority of the stumps were from hemlocks of about one hundred years old. To an experienced forester, it was a clear sign that the majority of the forest had grown back after some kind of hurricane-force gale had torn through the valley and knocked down the weaker and less-established trees. The great trees, including Big Lonely Doug, withstood the storm. "Ninety-nine percent of that forest would've been flattened right at the turn of the century," Cronin said. He could see it in the forest as clearly as seeing grandparents among a group of children. "But that tree," he added, referring to the Douglas fir he had flagged, "it probably lived through four or five rotations of the forest in the time that it was alive." When forest ecologist Andy MacKinnon saw a picture of Big Lonely Doug, his first thought was that the tree would not remain standing for long with its companions gone and its location in a notoriously windy place. But when he visited the cutblock, he noticed a pattern in the age range as Cronin had, and one that he could measure in the rings of the remaining stumps. 
What both men observed, before and after the clear-cut, was that a cataclysmic storm had ripped across southern Vancouver Island perhaps a century ago. It was a storm that the Pacheedaht remembered, that ecologists could see evidence of in the forest, and that Teal Jones had noted on its maps when estimating the dominant age of the stand. But the specifics — the first-hand written account — had long been forgotten. In 1906, the Victoria-based timber operator H. H. Jones was hired by a businessman out of Minneapolis named T. W. Welter to locate some fine timber on southern Vancouver Island. Welter was one of many Americans who saw untapped wealth within the forests of British Columbia and began cruising for land claims. Jones knew of some stands of big timber that lay on the way to the headwaters of the Gordon River, most easily accessible at that time via the interior of the island rather than the coast. He enlisted one of Welter's timber cruisers, a man named John McClure, who would assess the value of the trees for his boss, and formed a group with an Indigenous man named Fred whom they met in Duncan and a Swedish man named Henry. After days of trekking through the forest with their gear, the foursome made camp along the Gordon River and set about surveying the timber. Before crawling into their tent for the night, they laid out "forty-five sections of as fine timber as ever grew," as H. H. Jones wrote in "A Cyclone Among the Timber Titans," an article for a 1911 issue of _British Columbia Magazine_. As darkness fell, the weather turned. The air was still; in fact, there was no sound, save the cry of a timber wolf or the thud of a lump of soft snow dropping from its perch high in the tree-tops to the earth beneath, breaking the silence. But a storm was coming in from the Pacific — a storm without a precedent in the centuries in which those gigantic specimens of forest trees had made their growth, and one not likely to be repeated for centuries to come.
A runaway from its natural course was upon us. It had no introduction — and certainly required none. I have been in some very bad storms; have seen houses swing from their foundation, roofs removed, trees shattered, and have witnessed the death of both man and beast during terrible storms, but I never knew of one which had not given some warning of its approach. The timber workers were stuck, precariously situated with little more than a canvas tent as protection, as trees crashed to the earth around them. The trunks hit the ground like blasts of dynamite, which Jones likened to the shocking spontaneity of a fireworks display. In three gargantuan waves the storm hammered the forest, until finally it subsided. When the same storm passed south over Washington State, it claimed the lives of three men who were killed when a tree fell on their shack. "It would be equally difficult to estimate the velocity of the fiend which laid waste so much wealth in its mad frolic," Jones wrote. "Had it struck the wind gauge at the meteorological office, it certainly would have heated the bearing of that instrument." In the morning, Jones and his companions rushed outside to assess the destruction. The storm, on a line from west to east, running within ten feet of our tent, had cut every tree and left them piled in a tangled mass in places fifty feet high. They were not uprooted, but broken off from ten to thirty feet above the ground. Trees from three to five feet in diameter were smashed as if but twigs. The mighty rush of the storm allowed no chance for the forest giants to sway and loosen their roots. They were pushed forward with one mighty strain until they broke. But the storm had not destroyed every tree. Some of the oldest and largest trees — western red cedars with broad bases, Douglas firs with deep roots, and Sitka spruces with columnar trunks — remained. 
Dennis Cronin and Andy MacKinnon, from their two perspectives as forest engineer and ecologist respectively, had found evidence in the Gordon River Valley of a legendary great wind that tore across southern Vancouver Island — in some places devastating entire forests. After taking into account the forest's ecological history, and walking the cutblock himself, MacKinnon quickly changed his perspective on Big Lonely Doug's situation — it was not as bleak as he had originally thought. Cronin never had doubts about the stability of such a healthy, substantial Douglas fir that had withstood storm after storm battering its branches. "He's used to the wind," he said, "so he's got a chance." Further evidence of Big Lonely Doug's survival can be found even deeper in the dendrochronology of this particular patch of the Gordon River Valley. The tree is estimated to be approximately a thousand years old, but the stumps of the next-oldest trees in the cutblock were dated to around five hundred years. The evidence suggests that these trees — around a dozen western red cedars and Douglas firs — sprouted through the wreckage of another hurricane-force storm that lashed the region half a millennium ago. But amid the ruin, one tree had survived. It wasn't the only time the tree that would be known as Big Lonely Doug would stand alone. * * * In the spring of 2014, Dennis Cronin was at home watching TV when a news program came on. The screen flashed with an image of the misty hills of the Gordon River Valley that he knew so well, of a clear-cut, and of a single enormous Douglas fir towering above someone standing at its base. He started laughing and called his wife, Lorraine, into the room. "There's my tree!" the logger exclaimed. He was shocked but not surprised. After cutblock 7190 was harvested, he had returned to the tree and noticed bootprints in the mud around its base — possibly from wood salvagers but more likely from activists.
It was only a matter of time, he'd thought, before the tree was found by an organization like the Ancient Forest Alliance. On screen came TJ Watt and Ken Wu, talking about the last remaining old-growth forests in the region. Wu pointed to a large branch, nearly a foot thick, lying on the ground at the base of the Douglas fir, saying that it had been ripped off of Big Lonely Doug by a recent storm due to the loss of its forest buffer. "And potentially the tree itself could be blown down," Wu said as he was being interviewed near the tree. "To lose Big Lonely Doug would be a tragedy," Watt reiterated in the news clip. "It's a sad enough scene as it already is." It was a claim that struck Cronin as he watched. The excitement over seeing the tree he'd saved on TV soon turned to resentment. He was one of the few people to have walked the forest before it was cut and knew that particular tree had grown well above the canopy, feeling the full brunt of winds against its branches for centuries. To him, the claim that it was now more vulnerable was just another example of activist "doom and gloom-ers" using "scare tactics," as he put it, to galvanize the public into action. Trees lose branches every blustery season or succumb to the wind entirely, even when they are standing in an intact forest. The broken branch lying at Big Lonely Doug's base was not a consequence of loggers isolating this one tree, Cronin maintained, but simply an occurrence within the natural cycle of these forests. He had met many environmental activist groups over the years. Like other timber workers — fallers, truck drivers, engineers — he would pass them in his truck on his way to a cutblock while they were out looking for old growth. Maybe they would exchange a wave; maybe not. Still, Cronin had a job to do, and that job, at least superficially, was at odds with what the activists were trying to achieve.
He disagreed with many of their tactics, remembering the fear he'd felt when working in the Carmanah region during the early 1990s and the worry that someone he knew, one of his co-workers and friends, might hit a spiked tree with their chainsaw and be seriously hurt or even killed. Elsewhere, including in the Walbran Valley, he had found that activists had tampered with the loggers' work, painting over their spray paint with brown in an attempt to camouflage the markings. Or they would remove the brightly coloured flagging tape — sometimes even retying the pink "ROAD LOCATION" tape to branches that would steer timber workers down an errant path through the forest, eventually leading to the edge of a cliff. There was little Cronin could do amid an increasingly negative climate that was turning loggers into villains. He would just keep working. Cronin had figured he would hear about the big fir in cutblock 7190 eventually; he just hadn't expected it to be presented as a new "discovery" by an organization ostensibly at odds with his work. It was he who had wrapped the green ribbon around its base and pushed his bosses to set it aside. Without him, the Douglas fir would be planks and boards and beams. Lorraine was keen to set the record straight, and emailed the Ancient Forest Alliance to let them know that the tree wouldn't be standing if not for her husband — a logger. But it was too late: Big Lonely Doug had become an unwitting mascot for an environmental cause.

Chapter 10

Big Tree Hunting

For thousands of years the residents of Vancouver Island have hunted big timber. It began with the coastal First Nations, who sought out large cedars deep in the forests, carefully selecting ideal specimens of western red cedar from which to carve their canoes. Then, Scottish botanists headed into uncharted bush with notebook and pencil to track down, document, and collect samples of some of the biggest trees in the world.
Next, as the forest became a commercial resource, settlers delved deeper into the island's heart to locate the highest-value stands and brilliantly engineered how to extract the mammoth trees. And when environmental activists of the 1980s and '90s began to realize the scope of what was being logged — and of what remained — they found immense groves, like those in Carmanah and Clayoquot, and singular specimens to be at the centre of their campaigns. Now, tourists are going off the well-trodden paths to find the latest record-breaking tree. In the mid-1980s, as eyes began falling on valuable regions of old-growth forest on Vancouver Island such as Carmanah and Clayoquot and Walbran, the question of how much remained arose. After decades of timber harvesting, there was no universally accepted record of forest untouched by commercial logging, nor of remaining big trees. The most comprehensive archive had been casually collected by Randy Stoltmann, the activist who had first raised the alarm about the threat of logging in Carmanah and who first began documenting the large trees around his home as a high school student in West Vancouver. By the age of twenty-four, he had personally visited, searched out, and collected information on many of the remaining significant trees in British Columbia. Stoltmann's records and notes formed the foundation of the province's first prominent inventory in 1986, in partnership with the B.C. Forestry Association. The B.C. BigTree Registry's goal was to encourage outdoor and environmental enthusiasts to locate, describe, and catalogue the largest trees of each species, and to mail in their findings, "to produce an official register, and to provide protection for these special trees," read a WCWC pamphlet. But on May 21, 1994, Stoltmann died in an avalanche while ski-mountaineering.
Recognizing his efforts to protect the Carmanah Valley, the provincial government renamed Heaven Grove, the patch of Sitka spruces once the location of Camp Heaven in Carmanah Walbran Provincial Park, the "Randy Stoltmann Commemorative Grove." At Stoltmann's funeral, his friend and fellow activist Clinton Webb, who had been with Stoltmann the day the pair stumbled upon evidence that MacMillan Bloedel was moving towards logging Carmanah, concluded his eulogy: "Let us make sure that in the falling of a great tree to the earth, the hole in the forest canopy is soon filled with the vigorous growth of many saplings." After Stoltmann's sudden death, the B.C. BigTree Registry fell from priority, and some of his handwritten records, research, and maps went missing. Some, however, he had copied into a report for the B.C. Conservation Data Centre, which was passed to the Ministry of Forests and Range, and then, in 2010, to the University of British Columbia's faculty of forestry. In October 2014, seven months after Big Lonely Doug was presented to the public by the Ancient Forest Alliance, the registry was launched online, becoming a searchable database of record for the province's largest, tallest, and widest trees. True to its original ethos, the registry remains open to the public for additions. Newly identified trees can be submitted online with measurements, descriptions, and photographs, which are assessed, confirmed, and added to the registry. Trees that make it to the registry are approved based on certain superlatives — tallest, largest base circumference, or largest in total volume — and ordered in top-ten lists. Using a method devised by an American forester named Fred Besley in 1925, each tree is awarded a score based on tree height, circumference, and crown spread, with the greatest appointed a "champion." Vancouver Island and the Gulf Islands boast eight champions. 
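The point system the registry inherited from Besley can be illustrated with a short calculation. This is a minimal sketch, assuming the modern American Forests form of the formula — trunk circumference in inches, plus height in feet, plus one quarter of the average crown spread in feet — which descends from Besley's 1925 method; the metric conversions are standard, but the crown-spread figure used for Big Lonely Doug below is an invented placeholder, not a measurement from the text.

```python
def big_tree_score(height_m: float, circumference_m: float, crown_spread_m: float) -> float:
    """Score a tree with the big-tree point formula descended from Fred
    Besley's 1925 method: circumference (inches) + height (feet) +
    1/4 of average crown spread (feet). Inputs are metric and converted."""
    M_TO_FT = 3.28084   # metres to feet
    M_TO_IN = 39.3701   # metres to inches
    return (circumference_m * M_TO_IN) + (height_m * M_TO_FT) \
        + 0.25 * (crown_spread_m * M_TO_FT)

# Illustrative only: height matches the tree's confirmed 66 m; the
# circumference and crown spread are hypothetical stand-ins.
score = big_tree_score(height_m=66.0, circumference_m=11.9, crown_spread_m=18.0)
```

Under this scheme a squat, massive-trunked cedar can out-point a taller but slimmer fir, which is why registries rank "champions" by total points rather than by any single dimension.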
The largest shore pine grows in Esquimalt, just outside Victoria, and the largest Pacific dogwood flowers every spring on Salt Spring Island. But the forests around Port Renfrew hold the region's most impressive trees. An hour east of town, down several twists and turns of logging roads flanked by clear-cuts, grows the Red Creek Fir, the world's largest Douglas fir. The Cheewhat Giant grows off the logging road to Carmanah Walbran Provincial Park, and is the largest western red cedar as well as the largest tree by volume in the country. And the Carmanah Giant, not only Canada's tallest Sitka spruce but also the country's tallest tree at more than ninety-five metres, grows just up the coast from Port Renfrew. Many of the province's most significant trees are growing on Crown land — and possibly available to timber companies to cut. They were also likely found and nominated to the BigTree Registry by activists, environmentalists, or people who have an interest in protecting the trees and the forests. Since its inception, the registry has meant not just to be a record but also a tool for conservation. For a tree like Big Lonely Doug, its "second-largest Douglas fir in Canada" moniker is only true as long as it remains undisputed. In all likelihood, at the bottom of a valley just beyond the furthest logging road's reach lies a record-breaking tree somewhere on southern Vancouver Island: a Douglas fir with a total size greater than the Red Creek Fir, a Sitka spruce even taller than the Carmanah Giant, or a western red cedar with a gnarly and twisted base even wider than any tree already identified. The wind is tickling their fragile tops while mist enshrouds their trunks. There may be record-breakers that reshape our understanding of their growth and their role in the forest. They may have been passed by Indigenous peoples many times over, but they have yet to be assessed with an eye for either commercial logging or large-scale protection.
For TJ Watt, the possibility of finding more giant trees spurs him on — to drive to the very end of a rocky logging road and continue on farther into the bush on foot. He has identified many; he could identify more. But Watt recognizes that the people who spend the most time in the remotest forests of the island aren't activists; they're loggers: "Often the first and last people who are seeing these forests are the people who are cutting them down." But for Watt, it doesn't have to be. To find a giant is one step, to recognize its rarity and ecological value is another. But to turn the trees themselves into destinations would offer the greatest return; it would not only excite the new generation of environmentalists but would turn tourists into activists, hikers into big-tree hunters. * * * Two months after the Ancient Forest Alliance introduced Big Lonely Doug to the world, TJ Watt stepped into a climbing harness on a sunny spring day, buckled a blue helmet under his chin, slung the strap of his bulky DSLR camera over his shoulder, and looked up. Cotton balls of white cloud rolled across a blue sky. He was about to climb one of the largest trees in the country. Watt clipped into an ascender — a mechanism used to facilitate climbing by gripping and locking onto a rope — slipped his foot into a sling, and hauled himself up off the ground. Inch by inch, he slowly but surely ascended. The apparatus and technique were meant to summit a mountain, but were ideal for climbing a tree. The high-rig loggers of old would have used spiked boots and a sling wrapped around a tree to shimmy their way to the top, but the ascender allowed for the barest minimum of impact upon the tree. Up Watt went. From a distance, he was a spider climbing a thread of silk beside a telephone pole. The wide trunk blocked out the sun like the moon in a solar eclipse. Even as he neared the crown of branches, despite tapering slightly the trunk still loomed beside him. 
The bark was close enough to touch. It took Watt around fifteen minutes of hard work to reach the first branches. There, inside the canopy, he met Matthew Beatty — a co-founder of the Arboreal Collective, an informal network of like-minded professional arborists who advocate for the protection of old-growth forests through climbing big trees. The collective was one of several that had emerged in British Columbia, Washington, and Oregon — with names like Expedition Old Growth and Ascending the Giants — that aimed to add big-tree climbing to the roster of adventure tourism. To these tree climbers, seeking out and climbing the largest western red cedars, Sitka spruces, and Douglas firs on Canada's West Coast and in the United States' Pacific Northwest is to experience nature in a more intimate way. There lies great tourism potential, Beatty has seen, in bringing into the treetops people typically disconnected from the wilderness, from these trees, and from the issues surrounding the protection of old-growth forests. It isn't standing at the base of a mountain looking up, but at the top of a mountain looking down. There exists a deep history of activists around the world using sit-ins in trees as a protest tool. In 1971, in what was known as the Elm Conflict, people in Stockholm, Sweden, climbed into the treetops of several urban elms to protest the proposal to cut them down to make way for a subway expansion. In 1985, an activist named Mikal Jakubal climbed a Douglas fir in Oregon's Willamette National Forest to protest clear-cutting — sparking a series of similar tree sit-ins along the U.S. coast. The longest and most notorious was by Julia "Butterfly" Hill who, beginning in 1997, spent 738 days in the top of a fifty-five-metre-tall coast redwood in California to save it from being felled. Her action led to the timber company placing the estimated 1,500-year-old tree, named Luna, and a buffer of forest around it off limits to logging.
Matthew Beatty sees tree-climbing organizations in small part as an extension of that activist ethos, but more as a means of connecting the public with these trees emotionally. Beatty had brought his team of experienced tree climbers to Port Renfrew to climb Big Lonely Doug, to accurately measure its height and to create a promotional package of video and photography of the tree for the Ancient Forest Alliance. For Beatty, there was urgency to climb such an unusual tree as well. He, too, worried that Big Lonely Doug's isolated existence might not last long, that its lack of forest buffer would eventually prove fatal for the tree. Standing at the base, Beatty had pulled hard on a high-powered slingshot and let it fly. A beanbag attached to a thin line sailed more than fifty metres up and over one of Big Lonely Doug's thick branches. Using the line, he had hauled up a climbing rope and set the rigging to climb the tree. Taking care to minimize impact on the tree itself, the tree climbers employed a stationary rope system, where one end is anchored to avoid friction on the branch. A woodpecker landed on one of the tree's limbs, before flitting off, confused by the newcomers in the treetops. Beatty and Watt hung suspended among the branches. The two could see the entire valley. They could see the old-growth trees of Eden Grove, and the other cutblock flagged by Dennis Cronin and Walter Van Hell. They could see patches of replanted second growth, light green and verdant and uniform, around the valley. Dozens of storeys in the air, even the light breeze caused the tree to sway from side to side; they could feel the tree twist as it moved. Big Lonely Doug trembled as the valley stirred, but the breeze was a mere whisper of the great winds that had once knocked down the entire forest, leaving only a few aged giants behind. Watt snapped photographs of small ferns and young honeysuckle bushes growing out of a moss-covered branch.
But the ferns looked crinkled and dry without the moisture emanating from a rainforest below. He turned his camera down, to take the kind of photograph that he often uploaded to his website and Instagram page. Below him, there was only one thing that caught a shadow. Stretching across cutblock 7190 to nearly touch the patch of dark green old-growth forest next door was a long silhouette of a giant tree — like the shadow of a great sundial ticking and ticking around the clear-cut. After months of speculation, Watt would finally have confirmation of the height of Big Lonely Doug. He had watched as another tree climber ascended higher through the canopy to reach the fractured tip that had broken off years, maybe decades, before. Only a twisted burl remained — bleached grey in the sun. Still, a few small huckleberries had sprouted in the fold of deadwood. Even at the pinnacle of the tree, twenty storeys above ground, death had fostered life. The climber had shimmied his way up to the very top, steadied himself, and dropped a yellow measuring tape down alongside the trunk. Big Lonely Doug's height — from the tree's point of germination several metres under the mound of needles and shed bark up to its broken top — was confirmed at sixty-six metres tall, just shy of Dennis Cronin's estimation with his hypsometer the day he flagged the tree. Climbing a skyscraper-sized tree requires some technical skill and experience, but the method to accurately measure its height is straightforward. Determining age, however, is a more challenging task. Dendrochronology, the study of a tree's age based on its growth rings, can be accurate to the year. Much can be learned by examining the rings of a tree — primarily any major environmental events that affected the tree's growth. A temperate and wet year during which the tree grew rapidly will form a thicker ring, while a drier year with more extreme seasons will produce a thinner ring. 
The thin lines that appear on a stump or cut log are in reality variations in density and colour that form the tree's distinctive, countable rings. To age a living, standing tree, however, is much more difficult. Using a technique called "core sampling," dendrochronologists employ a drill that bores into a trunk to harmlessly remove a pencil-thin column of the tree's core. This method was used to date what for decades was thought to be the oldest known tree, determined to be nearly five thousand years old. It sprouted through the earth around the same time as the bricks of the Great Pyramid of Egypt were being methodically stacked. The bristlecone pine, nicknamed Methuselah, grows somewhere within Inyo National Forest, California, but its exact location has never been made public out of concern that it might be assaulted by trophy hunters keen on pilfering a branch of the ultimate record-breaker. It's a legitimate fear: in 1964, a graduate student in dendrochronology cut down the oldest known tree at the time, named Prometheus and also a bristlecone pine, when, reportedly, his core-sampling bit failed. He determined the tree's age by killing it. In 2013, another example of the species was assessed to be older than Methuselah — more than five thousand years old — earning the crown as the oldest known tree in the world. The tree has yet to be given a name. These trees grow slowly in the harsh, high-elevation mountains, which is reflected in their dwarfed size compared to the great towers growing in the lush valleys of the Pacific temperate rainforests. For trees like Big Lonely Doug, with its nearly four-metre diameter, this method of dating is simply not possible; no bit is long enough to core through its enormous girth. When Watt first stumbled upon Big Lonely Doug, he and several ecologists estimated the tree to be approximately a thousand years old, after comparing its width to the stumps of five-hundred- and six-hundred-year-old Douglas firs nearby.
It would have been a seedling around the time the Viking Leif Erikson first landed on the east coast of North America and began building sod houses at L'Anse aux Meadows in what is now Newfoundland. It is nearly seven times as old as Canada itself. It would have been seven hundred years old, already a titan of the forest, when the great flood of 1700 surged along the coast. Dennis Cronin had stood on more stumps than most, after decades working in the timber industry. He knew that on a rich, well-draining plateau in the lee of a mountain, Douglas firs grow well beyond how they would on a rockier, more arid slope. He maintained that the tree could easily prove to be more than a thousand years old. But until the tree falls — blown over by the wind when it finally becomes too old to repel the storms — how many times its shadow has been cast across the land will remain a mystery. Only with the fall of a giant will its impact really become known. Only death will reveal how long Big Lonely Doug has lived. The image of a single surviving giant tree standing in the middle of a clear-cut began drawing tourists away from the beaches and hiking trails along the famed West Coast, and into logging country at the heart of Vancouver Island. Big Lonely Doug captivated people not because of a catchy cultural reference but because it held emotion. Visitors began asking at Port Renfrew's tourist-office-cum-community-centre for directions to the tree, wanting to "keep him company." People would go to hug the tree. They would go to sit underneath its canopy and look across the empty clear-cut. They would go to scramble over the scraps of forest for a picture where they appear the size of an ant. The tree, and its name, had become the Ancient Forest Alliance's new Avatar Grove — the hook that drove attention to the organization and to the cause.
To attract donations, the AFA began an Adopt-an-Ancient-Tree program, in which supporters could choose between eight individual trees — including the Red Creek Fir, the Cheewhat Giant, and Canada's Gnarliest Tree — or six groves and pay a minimum of fifty dollars. Anyone who selects Big Lonely Doug, the campaign's spokesperson, receives a dedicated colour certificate marking him or her as "an adoptive guardian of Canada's 2nd largest Douglas fir tree, Big Lonely Doug" and someone "helping to support the Ancient Forest Alliance's campaign to protect British Columbia's endangered old-growth forests." The certificate is printed with a photo of the giant tree standing "lonely as ever" in a clear-cut. More generous donors are bestowed with a title: "Ancient Forest Defender" ($100) or "Ancient Forest Protector" ($200). It was a tried-and-tested marketing tactic by environmental non-profit and for-profit organizations to encourage participation in a cause. The World Wildlife Fund has been offering a similar "adoption" program for decades, where the donor receives a plush stuffed animal — a panda, a snow leopard, an orca — in return for their contribution. Big Lonely Doug also lies in the ideal location for tree-hunting visitors who don't want to drive for hours through a warren of logging roads or trek for kilometres through thick bush to take in the sight. And the tree stands far enough off the main paved road that finding it feels like an adventure — a mini-expedition just off the beaten track. It feels like a search for an endangered beast that was thought to have died out long ago. When the tree comes into view, there is as much relief as there is awe: Big Lonely Doug is still there. It is still standing. Big Lonely Doug began to appear in the marketing campaigns of a variety of organizations and companies. Expectedly, numerous environmental advocacy groups used photographs of the tree to elicit donations. Businesses saw value in the tree as well.
Sitka, a Victoria-based clothing company, started a funding campaign to raise money to improve the trail through the cutblock and construct a viewing platform around the tree to protect its roots, "now that people are coming to visit Big Lonely Doug to keep him company." They used a photo of the tree in the clear-cut, writing, "Doug is lonely because his old-growth friends were clear-cut all around him in 2012." The campaign also helped raise $4,000 towards the construction of the boardwalks through Avatar Grove. Similarly, when the American outdoor gear company Patagonia opened a Victoria location, it decided to commit 1 percent of the store's sales — which the company typically donates to environmental non-profits — to the Ancient Forest Alliance. On the wall of its store hung a photo of Big Lonely Doug. Perhaps less expectedly, the feminine hygiene company o.b. released a social media advertisement promoting its more environmentally friendly, applicator-free tampons. "A woman uses 10,000 of these in her lifetime," the ad read, showing a graphic of an applicator. "That's 18x the height of Big Lonely Doug in British Columbia." And there it was, the unmistakable big tree, with its branch that looks like a flexed arm, silhouetted in white over o.b.'s iconic teal branding. The ad concluded with o.b.'s trademarked catchphrase: "Only what you need, nothing you don't." The commercial wasn't met with universal praise. "How much bleach and chemicals go into your tampons? How much plastic? What are you doing for reforestation efforts? Do you use tree farms or old growth trees?" one commenter questioned under the ad. "DIVA CUP all the way!!!!! Why not save ALL the trees," another posted, referring to the reusable silicone alternative. Another commenter was simply shocked that this single tree standing in a valley near Port Renfrew had been used in such an unlikely marketing campaign: "I can't believe lonely Doug was just featured in a tampon commercial." 
To some activists, there is a danger in focusing on a single tree and ignoring the forest. These charismatic arboreal protagonists can become so big on their own that they cast the issue they represent into shadow. People become tree-centric — focusing on individual trees and not the entire ecosystem. Tourists journey to see the record-breakers while driving past a clear-cut or a second-growth forest or even an old-growth forest without stopping. In 2015, the AFA applied to the South Coast Recreation District branch of the Ministry of Forests, Lands and Natural Resource Operations to turn Big Lonely Doug into a recreational reserve. The organization needed approval if it wanted to construct a wooden viewing platform around the tree to protect its roots and base from visitors. Recreational reserve applications often come from environmental activist groups, but also from non-commercial recreation clubs hoping to build a dirt bike trail or an informal ski run, or from regional districts looking to increase tourist opportunities in their communities. They must show they will manage the site, repair boardwalks and trails, and oversee any facilities. Unlike the Parks Canada model, the recreation site system is not a conservation model but one that works in conjunction with various resource players. Big Lonely Doug was approved as a recreational reserve, meaning that if a timber application, a mineral claim, or a hydroelectric proposal for the tree's immediate area is ever submitted, the recreation officer will be notified. While the designation affords little in the way of formal protection, promotion to a full recreational site would allow an organization like the Ancient Forest Alliance to legally begin constructing trails and boardwalks. The organization had received approval in 2012 for Avatar Grove to be promoted from a recreational reserve to a full recreational site, and soon began to enjoy marketing by the Ministry of Tourism.
When the application for a recreational reserve around Big Lonely Doug was circulated, representatives of the Pacheedaht First Nation approved, but Teal Jones expressed concern about the bridge over the Gordon River leading to the tree. The company had moved its operations elsewhere in the valley, stating it had no immediate plan of returning to the spur road along the river, and so it would not be maintaining the bridge. They posted bright yellow signs informing visitors that they assume all liability for using the road and bridge. Their concern was that if Big Lonely Doug were turned into a formal recreational site, where it would be developed with a trail and viewing platform and benefit from official provincial advertising, the bridge wouldn't be stable enough for more tourists, despite being built to carry fully loaded logging trucks. The short guardrail would need to be updated. Some saw the move by the company as a tactic to discourage people from making the pilgrimage to see the solitary tree. The application revealed a paradox: the timber company would only update and manage the bridge if there were plans for them to return to work in the neighbouring cutblocks, including the grove of old growth next to Big Lonely Doug. The only way the tree could be granted full provincial protection in becoming a formal recreational site was if the timber company returned to clear-cut more old-growth forest at the base of Edinburgh Mountain.

Chapter 11

Tall Tree Capital

In Port Renfrew, with the successful launch of their Big Lonely Doug campaign, resulting in a furor of local interest and tourism, TJ Watt and Ken Wu began to see a movement building. They noticed two factors were resonating most strongly with the public: emotion and money. Civil disobedience can often be swept aside by government injunction or dismissed by the conservative end of the public spectrum, but when environmental issues are entrenched in business, there exists economic incentive for change.
These trees could be transformative, they thought, not just for their cause but for an entire town and region, as a "first-rate potential destination" for tourists. Port Renfrew is a place where two rivers meet — the San Juan from the east and the Gordon from the north — and spill their melt and rain water, carried from deep inside the island, into the plunging harbour of Port San Juan. In the fall, salmon return to the harbour's head and hurl themselves into the air on their soldier-like march upriver to spawn and die. Black bears feed along the shores. Bald eagles survey the coastline from atop droopy cedars. Elk and deer graze in the grassy marshes and estuaries. Cougars and wolves lurk in the forests. But the soggy and wind-beaten town would not exist if not for its bounty of big trees. The region's first colonial logging activity was carried out in the 1880s by Alfred Deakin, who cut trees in the Gordon River Valley and shipped timber from the port. The region's logging industry was a modest venture until 1914, when the British Columbia Lumber Company and a crew of 125 opened a camp not far from present-day Avatar Grove. In 1929, a shingle mill began operation in town, and ever since, Port Renfrew has revolved around timber — every one of its residents in some way connected to bringing trees down. For a century, all focus was on harvesting the great trees and shipping timber from its port. Planks, beams, posts, and raw logs were loaded onto ships bound for mills and markets in urban hubs, including Victoria, Vancouver, and Seattle. In the 1930s, a rail line was built that extended twenty-two kilometres to access the heart of the region's finest timber stands. One rail trestle over Bear Creek was used so frequently that uneasy train crews would disembark, send one conductor across the bridge, and let the train pass unmanned — fearful that the bridge might give way and the load would topple into the river. 
Numerous logging camps were erected around Port Renfrew: along Bear Creek, Harris Creek, and the Gordon River. For decades, the town remained a backwater, a bustling but isolated community. But through most of its history, the town existed in a liminal space. When its first post office was established in 1895, mail addressed to Port San Juan — the original name of the settlement — was erroneously being delivered to the San Juan Islands, an archipelago belonging to the United States, southeast of Vancouver Island. At the urging of the perturbed postmaster, the settlement was renamed after Baron Renfrew, one of the titles held by the Prince of Wales. It wasn't until 1958 that a road was extended north from Jordan River, pushing through the coastal forest to finally connect Port Renfrew with Victoria. Up until then, residents relied on weekly ships for supplies. Still, for decades the road remained a treacherous track — weaving along the jagged coast, up and down gullies, and across rickety bridges over creeks and rivers until finally the port came into view. Drivers would pass a concrete guardrail on which someone had painted: "Hang on to your beer!" While the town remained a timber hub, most outsiders came to Port Renfrew to hike along one of the most famous trails in the world. Across the bay is the southern terminus of the West Coast Trail, a seventy-five-kilometre hike that draws thousands of visitors every year. The path was originally a trade and travel route used by coastal First Nations, and was adopted by early colonists as a telegraph trail to assist survivors of shipwrecks. This stretch of coastline was known as the Graveyard of the Pacific because of the frequency of ships hitting rocky reefs in the mist. In 1973, the hiking trail was incorporated into Pacific Rim National Park.
Port Renfrew either bids good luck to trekkers setting forth, smelling fresh and with an eager spring in their step, or welcomes them out of the trail at the conclusion of their week-long hike — sore, wet, muddy, hunched, and hungry. Here, in one of the wettest places in Canada, rain falls two out of every three days a year. The region was also known for attracting people looking to live freely off the land. Since the 1960s, Sombrio Beach, just south of Port Renfrew and one of the coastline's premier surfing destinations, had been a draw for squatters, back-to-the-landers, and free-spiriters looking for a co-operative but disengaged-from-the-world way of living. What started as a few ramshackle huts grew into a small community along one of the most postcard-perfect crescents of sand anywhere on the island — people living off Crown or unused private lands for free. One couple raised eleven children along Sombrio. But in 1996, with the establishment of the Juan de Fuca Provincial Park and a sister route to the West Coast Trail south of Port Renfrew, the Juan de Fuca Trail, the squatters were evicted. Those tied to the area moved to Port Renfrew. But throughout the coastal community, the lure of escaping still hangs in the air: you can disappear among the misty forests or at least have your unwanted past carried away by the waves and wind. By the middle of the 1980s, the B.C. timber industry was worth more than $20 billion per year, but increasingly this money was being centralized into a handful of companies. Throughout the 1980s and '90s, smaller companies merged to form conglomerates that controlled increasingly large tree farm licences across the island. Like many once-bustling timber towns, some with mills that supported near-entire communities, Port Renfrew saw its jobs dry up and its population shrink to just a few hundred people. An hour down coast, the seaside community of Jordan River met a similar fate. 
It had been a bustling logging community since it was established in the 1880s, around seventy kilometres up the coast from the Hudson's Bay trading post turned provincial capital of Victoria. With upwards of a thousand residents, Jordan River flourished up until the 1970s, keeping as its mainstay the shipping of logs south to the mills that had opened in Victoria and Vancouver. But Jordan River also began to dwindle in the 1980s, to approximately one-tenth its size — eventually becoming known more for surfing than logging. The area has since been deemed the most seismic-prone region in British Columbia, with the so-called Big One expected along the West Coast. Out of fear that the hydroelectric project upriver, built by the Vancouver Island Power Company in 1911, would breach if a magnitude 9.0 earthquake struck the region, BC Hydro bought out nearly all of the remaining residents, effectively turning the community into a ghost town. While some timber jobs in Port Renfrew held on in certain sectors, such as contract falling, the town started seeing an upturn in a different form of commercial interest beginning around 2010. A new wave of business erupted in town — one centred around the value not of trees lying on the ground but of the ones left standing. "The trees give name to Port Renfrew," said Dan Hagar, who was elected president of the town's chamber of commerce in 2013. "People are coming not just for the trees, but also the reason they know about Renfrew is because of the trees." Over his time as chamber president, he noticed thousands of tourists making the drive north along Highway 14, not just to hike the famous West Coast Trail or fish the shoals off coast, but to head inland to stand under some of the largest trees in the world. Since the boom began, Port Renfrew has been heralded as "the next Tofino," a nod to the thriving surfing destination up coast that shot to global recognition following the protests in Clayoquot Sound.
For Ken Wu, Tofino became a model for turning a former timber town into a tourist destination in the wake of an environmental movement, where "with every arrest, the community's GDP went up." In marketing Port Renfrew, however, the Ancient Forest Alliance found that their early supporters were people already inclined to be voices for old-growth forest protection. So, rather than repeating the tired adage of trees versus jobs, the AFA took a different approach to court the skeptical. The organization held talks and slide shows at restaurants and cafés in town, where timber workers and environmentalists, residents old and new, met face to face. So often these duelling groups only see each other through the windshield of a logging truck as it speeds past hauling a load of timber, while activists are out searching for big trees or embarking on trail-building expeditions. Instead, the AFA positioned themselves not as the truck-blocking tree-huggers of old, but as an organization that was concerned with the future of the trees as well as the people and communities that depended on them. But to many in Port Renfrew, including Dan Hagar, "the next Tofino" moniker is despised. For him, a more appropriate comparison for the rapid rise in development and attention in Port Renfrew lies on the mainland, not Tofino but Whistler. Tofino is a five-hour drive from Victoria, or a three-hour drive from the nearest mainland-connected ferry. Whistler — one of the most renowned ski and snowboard destinations in the world — is easily accessible to Vancouver urbanites, just an hour-and-a-half drive north of the city. For those looking for outdoor recreation — skiing and snowboarding in the winter; hiking and climbing in the summer — Whistler is far enough away for an escape, but close enough to reach after work on a Friday evening. 
Hagar started seeing Port Renfrew, just a two-hour drive north along the coastal highway from Victoria, in roughly the same place where Whistler was in the 1980s: a beautifully set location that is both close to a major urban centre and well connected by multiple access roads, and somewhere that offers a bounty of outdoor activities and growing amenities. In 2010, Port Renfrew launched the Tall Tree Music Festival, a weekend-long, early summer gathering of local and international bands and musicians, who perform on a stage erected in a clear-cut now just down the road from Avatar Grove and Big Lonely Doug. Born on the Saskatchewan prairies, Hagar moved to the West Coast and purchased a single cottage in a development that was being built out of an old campground in the summer of 2010. By 2012, he had purchased four and started a business, Handsome Dan's, which manages rental units and cottages. Many of the sea-view cottage owners live out of town — a niche in property management that Hagar stumbled upon by chance. He now runs the logistics — booking, cleaning, servicing — of more than forty rental properties, and has seen the town's exponential growth first-hand. His revenue in 2016 was ten times what it was in 2012. Thanks in part to the popularity of the big trees and the activist campaigns creating interest, local hotels and bed and breakfasts have seen a surge in demand and revenue. "Think about how much money we would have had to spend in order to get the advertising we got as a result of Avatar Grove, Big Lonely Doug, the controversy around logging in the Walbran," Hagar said. "It was probably in the hundreds of millions of dollars in the amount of advertising that we got for Port Renfrew organically." Two days after the Ancient Forest Alliance issued the press release announcing Big Lonely Doug, Hagar registered biglonelydoug.com and mapped the domain so that any clicks redirected to the webpage of his cottage rental management business.
* * *

In December 2015, the Port Renfrew Chamber of Commerce called for a moratorium on logging old growth in the region, citing the business and tourism potential of keeping the big trees standing. The statement, clear and direct, came as a paradox to some residents: the chamber is meant to support businesses, the largest of which — in thousands of small towns up and down Vancouver Island and across British Columbia — has always been timber. But the statement signalled the beginning of a shift across the province. The town of Port Renfrew stood tall, and British Columbia followed. Six months later, the B.C. Chamber of Commerce, representing 36,000 businesses across the province, passed a resolution calling on the provincial government to increase old-growth protection, stating, "The local economies stand to receive a greater net economic benefit over the foreseeable future by keeping their nearby old-growth forests standing." They cited Big Lonely Doug as an example. The chamber also noted an economic analysis conducted by a kayaking company located in the Discovery Islands, between Vancouver Island and the mainland. When, in 2012, a logging company expressed interest in cutting sixty hectares of old-growth forest, which would have negatively impacted the tourism industry, the kayaking company crunched the numbers on this particular plot of trees. If logged, the sixty hectares would initially produce a timber value of $3,600,000 — or $60,000 per year over the sixty-year regeneration cycle until the forest could be harvested again. The kayaking company, however, was earning $416,000 per year off of operations around the un-logged islands, which would amount to $24,960,000 over the same sixty-year period. To harvest the sixty hectares, the logging company would provide three hundred full-time days of employment, while the kayaking company would provide 20,160 days of employment if the trees were left standing.
In addition, the numbers cited were for just one tourism company — and forty were in operation in the region. The B.C. Chamber of Commerce recommended that the provincial government "support the increased protection of old-growth forests in areas of the province where they have or can likely have a greater net economic value for communities if they are left standing for the next generation and beyond" and "protect endangered old-growth forests by enacting new regulations such as an Old-Growth Management Area, Wildlife Habitat Area, or Land Use Order, with the intent to eventually legislate permanent protection for areas through provincial park or conservancies." To activists, it was seen as an enormous win to gain the support of a significant provincial business body. Still, before the year was out, timber workers and companies protested, forcing the B.C. Chamber of Commerce to issue a follow-up release titled, "B.C. Chamber Does Not Support Ban on Old-Growth Logging." To "clarify its policy position on conservation of old-growth forests," it stated that while the chamber maintained support for conservation in communities where the tourism potential is high, it also supported the province's "vibrant forestry industry," which creates jobs, powers the economy, and is "world-renowned for its sustainable forest management practices." The win, such as it was, came and went swiftly. Dan Hagar and the Port Renfrew Chamber of Commerce decided on another approach, by redesigning the town's tourist brochure to highlight the one feature that was rapidly becoming the region's principal draw. They came up with a moniker for the town: Canada's Tall Tree Capital. The brochure included driving directions and a map, developed by the AFA, to Avatar Grove and the area's largest trees, including Big Lonely Doug. On the back cover was a picture, taken by TJ Watt, of the solitary Douglas fir.
All the big trees near Port Renfrew — and all the other great firs, cedars, and spruces across Vancouver Island, in the forests of Carmanah and Walbran and Avatar Grove — grow within intact forests. Their trunks appear smaller when surrounded by other trees, and their heights shorter with undergrowth growing around them. Their tops are often obscured by a canopy. None of these trees, as tall or as wide or as gnarly as they are, creates the stark contrast that sets Big Lonely Doug apart: one of Canada's largest trees standing alone in a clear-cut. Sold at the small gift shop in Port Renfrew, novelty T-shirts read, "Port Renfrew: a drinking town with a fishing problem." While some of the West Coast's best fishing can be found along the rocky shores, a more fitting shirt for the future of Port Renfrew might be: "A logging town with a big-tree problem." Port Renfrew and the valleys that extend inland, including the San Juan, the Gordon, and the Walbran, were becoming ground zero for the new battle over Vancouver Island's remaining old-growth forests. The region had the most to gain, and the most to lose.

* * *

While many families and businesses have profited from the rebranding of Port Renfrew as the Tall Tree Capital, the benefits have helped the Pacheedaht First Nation only in small measure. Years before Dennis Cronin flagged cutblock 7190, Bear Charlie walked that forest along the Gordon River looking for CMTs — a bark strip tree or maybe even the remnants of a dugout canoe. Originally from Ahousaht First Nation in Clayoquot Sound, he moved to the Port Renfrew area to work on the Pacheedaht's culturally modified tree crew, and was hired by Teal Jones as part of the company's CMT requirement before clear-cutting began in cutblock 7190. He undoubtedly walked under the branches of the second-largest Douglas fir in the country, but the great tree didn't register as anything special to him or his crew partner. "When we go in, we go more for the cedar content," he said.
He found several CMTs within the cutblock. The Pacheedaht have been managing their forest resources since long before the arrival of Europeans, but amid pressure from timber companies and activists they have remained primarily concerned with the prosperity of their people. In the spring of 2017, the First Nation opened a sawmill near Port Renfrew, minutes down the road from Avatar Grove. Jeff Jones, chief of the Pacheedaht First Nation, asked Ken Wu and the Ancient Forest Alliance if they wanted to go on a tour. Wu expected that they might head out to walk through an old-growth forest. Instead, Jones wanted to show him their mill. "It was the most spectacular ancient cedars that they're milling," Wu said. "It was essentially Avatar Grove laying down on its side in their yard." The mill buys logs from local tree farm licence holders, including Teal Jones, as well as private landholders. Rather than two-by-fours, pulpboard, shakes, or other low-value lumber products, the Pacheedaht mill is turning the old-growth cedar logs into larger dimensional timbers that are sold to a supplier on Vancouver Island. They are making the best use they can out of their wood. The tour was eye-opening for Wu, not only to see such large trees lying waiting for the saw, but because it showed that First Nations can simultaneously be advocates and allies for old-growth forest protection while also profiting from its timber. Ken Wu sees a simple way forward where both values can be protected: taking a portion of the "stumpage fees" that timber companies pay to the provincial government, and redirecting the money to non-timber revenue generation for First Nations. Wu has been proposing the option for years. But the mill offered the Pacheedaht something tangible and immediate that the activists didn't: jobs. Seasonal employment in tourism may help a few, but their mill provided year-round jobs for 10 percent of the entire nation. 
Jones has tried to remain "neutral" when the Pacheedaht have been placed between environmental activists and timber companies. "We as a nation are trying to benefit from the resource itself by providing stable jobs, even if that has to do with harvesting old growth," he has said. The area that became Avatar Grove was well known to the Pacheedaht for centuries before Watt noticed the candelabra tops in the winter of 2009; their seasonal fishing camp had been located at the site long before either activists or loggers arrived. While the nation supports the desire to create more recreation sites, there is also a degree of caution. Bear Charlie would chuckle at some of the activists' tactics. In one location in the Walbran Valley, activists had camped out in the trees in a forest that the logging company had already set aside for the Pacheedaht to manage. "They were protesting something that was already saved," he said. The logging trucks just drove by, knowing that their company had no intention of entering that particular stand. But Charlie also has had serious concerns: "If you're not going to monitor your own Avatar Grove, you're destroying it just as much as a logging company — just in different ways." He has seen tourists walking off trail, relieving themselves in the forest, and camping at the site. "You're not doing the same damage that a logging company does, but at the same time the company is coming back later to replant when they cut." He heard talk about the AFA hiring Pacheedaht guides for Avatar Grove — in a similar way that Parks Canada has hired Indigenous people to be guardians of the West Coast Trail — but it never materialized. To Jeff Jones, there's a more fundamental issue for his nation than squabbling over individual trees or specific groves. "Our vision here is to get as much control of our territory as possible, by either management or ownership," he said. "That's our ultimate goal." 
Activists and loggers have been fighting over the forests for a few decades, but the Indigenous peoples of Vancouver Island have watched these forests ebb and flow for millennia. Many see the trees on a much longer time-continuum. In 2005, the Pacheedaht launched a four-hundred-year cedar conservation plan, where a percentage of tree farm licence holdings of existing old growth, or replanted forests that are at least ninety years old, is set aside by timber companies for the Pacheedaht. One of the biggest contributors of old growth and second growth to this conservation plan has been Teal Jones, the holder of TFL 46. The companies started by setting aside individual trees, but the nation wanted forests — small areas with good cedar of various ages that would be ideal for the nation's future use. To some on the outside, opening a mill was self-interested, without regard for supporting old-growth protection; but for more than a century the nation has seen timber companies harvest trees off their traditional lands with little or no say. Jones is looking ahead — at the urgent needs of his people, as well as hundreds of years in the future and the generations to come. The immediate goal is to keep the mill operating, even if stalwart environmentalists might not entirely agree with the kinds of trees that are sawn. Jones has seen the benefits that increased big-tree tourism can bring to the Pacheedaht-run campground that spans a crescent beach at the head of Port San Juan, and to the restaurants and services in town, but he has been cautious about blindly supporting Victoria-based activists who market a portion of forest, encourage people to visit, promote their cause, and then leave the tourists unmanaged. There have never been official guides at Avatar Grove, no signs with rules or historical information, and no washrooms. The nation has not opposed the creation of recreational sites, but questions who benefits. 
"There is a fine line between conservation and economics," said Pacheedaht representative Kristine Pearson. "The activists do really well as a non-profit, and individually they make careers out of the issue." For Jones and others in the nation, the phrase "old-growth forest" is a construction that holds less weight than it does for activists or timber companies or governments. It is a recent phrase as well, when looking at a forest through a lens of at least four hundred years. Even a replanted cutblock of knee-high seedlings will eventually return to resemble those untouched by commercial logging, given enough of the one element that has always defined these forests: time.

Chapter 12

A New Ecosystem

Among the black bears and towering trees, the ferns and fungi, a new ecosystem has emerged from the forests of Vancouver Island. There are forces strong and weak, cataclysmic movements and hidden repercussions. There are threads that form connections that could be severed in an instant, or gradually eroded over the near-imperceptible passing of time. This ecosystem includes the rights of Indigenous peoples to monitor and manage their lands and resources. It includes timber workers concerned with getting their jobs done, providing for their families, and keeping their communities afloat. It includes activists and environmentalists who fight to protect rapidly dwindling habitats and species, and who seek a compromise with an industry that has enjoyed an unchecked reign for nearly all of its existence. This new ecosystem also includes businesses looking to the forests for new sources of revenue; tourism companies using the icon of the tree to promote resilience, determination, and strength; and towns rebranding, transforming themselves from places that value their trees cut and horizontal to places that value forests left intact and standing. At the heart of this ecosystem stands Big Lonely Doug.
It has been rare that individuals in nature transcend their ecosystem. For our oceans there was Moby Doll, an orca harpooned off the Gulf Islands with the intention of being killed and used as the model for an exhibit at the Vancouver Aquarium, but which instead survived and was dragged across the Strait of Georgia to the city. Over less than two months in 1964, tens of thousands of visitors came to see the "blackfish" — a beast from the depths — struggle to survive in a makeshift dockyard enclosure. But in its brief time as one of the first orcas in captivity, the whale came to symbolize our quest to capture and train these animals for show and profit, as well as to represent the spark to further understand and protect them. For our north there was Knut, the polar bear that never walked free. In 2006, a cub was born in the Berlin Zoological Garden that was rejected by its mother. The animal quickly became a media sensation, with approximately four hundred reporters covering the cub's public unveiling. But when an animal rights activist suggested that Knut should have been put down rather than raised by humans, protests erupted. At age four, the first polar bear to survive past infancy at the facility died, with the zoo stating that Knut's untimely passing was due to "significant changes to the brain, which could be seen as the reason for the sudden death," and PETA claiming the animal had gone "crazy." Over his short life, Knut not only became beloved but was registered as a trademark by the Berlin zoo, generated more than $7 million in revenue, and was photographed by Annie Leibovitz for _Vanity Fair_'s Green Issue. For our forests there is Big Lonely Doug, a survivor standing resolute. No matter the storm that courses through these valleys, whether nature's wrath or human greed, life can endure if given a chance.

* * *

Many people speculated as to why a logger, whose job it was to extract as much monetary value from the forests as possible, would save such an enormous tree.
The most common assumption is that Big Lonely Doug was left as a wildlife tree, a specimen of great age set aside by timber companies to help reseed a cutblock. The Ancient Forest Alliance's initial press release stated that Teal Jones likely left the tree in order to satisfy requirements for "variable retention" timber harvesting, a practice where individual trees or clumps of trees are left standing in a cutblock in order to maintain diversity of species and age. If a few of the older trees are kept, it is thought, the forest will at least carry on some of its ecological heritage. The practice was introduced in 1995 by the Clayoquot Scientific Panel in the wake of the War in the Woods protests. According to the B.C. Ministry of Forests, "The broader focus of retaining structure within the stand results in the maintenance of a much wider variety of forest values, including wildlife habitat and aesthetics. In short, the retention system shifts the management focus from what can be removed to what can be retained." It is a timber harvesting practice that has never sat well with activists. The AFA wondered if Teal Jones had left Big Lonely Doug in an attempt to absolve themselves of the "clear-cutting" label. Variable retention harvesting, however, has set restrictions, chief among them being that at least half of a cutblock's trees must be retained, each within a tree height of each other. If the trees are approximately thirty metres tall, for example, that much space must be left between. Leaving one or a few scattered individual trees in a cutblock would not qualify. While the terms are not synonymous, "variable retention" is what most people imagine when they hear the words "selective logging," which is where loggers remove only some of the trees in a forest and leave a substantial portion remaining to reseed, maintain age diversity, and retain at least some of the structural integrity of the forest. 
In 1995, the provincial government also released the _Biodiversity Guidebook,_ a lengthy set of recommendations for forest engineers, planners, and managers to meet ecological goals outlined in the Forest Practices Code of British Columbia Act. The guidebook was meant to be exactly that, a guide with practices "designed to reduce the impacts of forest management on biodiversity, within targeted social and economic constraints." It defines a "wildlife tree" as any standing live or dead tree with special characteristics that provide valuable habitat for conservation or enhancement of wildlife. These trees have characteristics such as large size (diameter and height) for site, condition, age, and decay stage; evidence of use; valuable species types; and relative scarcity. They serve as critical habitat (for denning, shelter, roosting, and foraging) for a wide variety of organisms such as vertebrates, insects, mosses, and lichens. The spectrum of nine categories described in the _Biodiversity Guidebook_ runs from a dead, branchless tree — commonly called a "standing snag" — that offers habitats for insects and amphibians, to a healthy tree with no decay or rot that is ideal for reseeding a cutblock. Above all, the guidebook marks the oldest and largest trees as the best candidates for retention and recommends that they be left at the edges of a cutblock, incorporated into a riparian buffer. Timber companies prefer to leave trees along the perimeters as well. Loggers have to work around a wildlife tree standing in the middle of a cutblock, and it can often be damaged during the process. While companies may not suppress the inclination to save certain trees, they are far from encouraging the practice. Forest workers are under no requirement to set aside a particular number or percentage of individual or clusters of trees within a cutblock. It is left to the discretion of the logger.
When Dennis Cronin wrapped a green ribbon around the big fir in cutblock 7190, it was not without precedent. Like many timber workers, Cronin was not immune to pressure once the confrontations between environmental activists and loggers that had occurred predominantly on Haida Gwaii, then the Queen Charlotte Islands, in the late 1980s, began expanding to Vancouver Island — especially when conflict erupted near his hometown of Lake Cowichan. "Everybody was trying to get dirt on you all the time," he said. "They had cameras on you." The job became scrutinized by the media and by the public. Cronin looked back at Carmanah and the War in the Woods as a pivotal moment that sparked much-needed change. The industry needed a shakeup, he said. In the wake of these movements, Cronin and his partner Walter Van Hell began saving more trees, usually ones on the edges of a cutblock that held little value to their company. On one job in the Cowichan Valley, the pair left a patch of trees surrounding a Douglas fir nearly as large as Big Lonely Doug. On another, he worked in a small cutblock that held more than fifteen bear dens, including some several metres off the ground in slits in the side of large hollow cedars. It was impossible not to be affected by the realization that after leaving a cutblock, the inevitability of logging would set in. Just outside Port Renfrew, Van Hell helped promote the formal protection of an approximately eighty-metre-tall Sitka spruce growing within a thin sliver of forest sandwiched between Harris Creek and the road that connects the town to Lake Cowichan. In his office and among friends, the tree was known as the Van Hell Spruce, but it was eventually named the Harris Creek Spruce. Dennis Cronin and Walter Van Hell weren't alone in their desire to protect a few exceptional trees. Just outside Victoria, along the Koksilah River grew a stand of Douglas firs that held some towering and ancient specimens. 
Without the damp and fruitful conditions of the west coast of the island, the trees on the southeastern rim of Vancouver Island grow more slowly. One Douglas fir near the river, while only forty-five metres tall, was deemed by MacMillan Bloedel as worthy of a bronze plaque. Affixed on October 4, 1957, it noted the Douglas fir "is believed to be the oldest living tree of its kind in Canada." The tree blew down in the winter of 1985, allowing it to be dated at 1,340 years old. The plaque also mentioned that "the area is now set aside to remain in its natural state." Later reports stated the plaque could "no longer be found." By the late 1980s, patches of untouched coastal Douglas fir forest similar to Koksilah Grove were rapidly dwindling across Vancouver Island. But the value remained. In the spring of 1989, the timber company that had assumed timber rights in the Shawnigan Lake region sent two of its fallers into what had become known as Koksilah Grove. But the grove growing alongside the Koksilah River was supposed to have already been protected. Apart from MacMillan Bloedel's bronze plaque, a forest engineer named Don McMullan had recommended two years prior that a small patch of forest around the largest trees be set aside, but the recommendation was supposedly misplaced. And so the timber company, Fletcher Challenge, sent in its fallers. Don Hughes and Louie Van Beers were immediately struck not only by the grandeur of the forest but by the rarity of such old Douglas firs. Defiantly, the two men put down their saws and refused to cut the stand. "You don't find old-growth timber like this anymore," Van Beers told the _Times Colonist_. "There are old firs seven to eight feet through, and some cedar. It is very accessible to the public and alongside the river. And we both felt they could put aside a little piece of that." With mounting public pressure, Fletcher Challenge agreed not to cut four hectares of trees along the river and mark it as a land reserve. 
It was a company designation that was little more than a promise not to harvest that particular area. For two decades, timber companies paid little attention to the patch of great firs growing along the Koksilah River. Locals continued to enjoy recreating along the river and under the trees, until 2007 when a hiker, to his surprise, noticed a logging road that appeared to have been recently laid out, and trees flagged with ribbon and sprayed with light blue paint — clear signs of imminent logging activity. Once again, another company was pushing forward plans to log Koksilah. Amid renewed public and media pressure, the company relented — agreeing to set aside the patch of forest — and the provincial Ministry of Forests placed the Koksilah Grove on its list for park acquisition. Modern environmental activists often point to the story of the Koksilah Grove as a cautionary tale of why timber companies cannot be trusted with policing or protecting the forests. Their word, or even a company designation such as "wildlife zone" or "wildlife tree," holds no formal protection and offers no assurance to those fighting for their conservation. A plaque can disappear in a windstorm, paperwork can be misplaced in the turmoil of acquisition and merger, and the story of the dissenting act of two timber workers can fade from memory. Dennis Cronin didn't flag the big Douglas fir in cutblock 7190 to satisfy a code or management policy for the company he worked for. It wasn't a wildlife tree in his eyes. In the end, it may help repopulate the clear-cut, dropping its feather-tail seeds from its branches, but to him the tree didn't hold any kind of future utility that could be exploited. It didn't tick a box on a form. "It's like a legacy, ya know?" Cronin said, four years after he saved the tree. "You're saving something special. Even though I'm a logger and I've taken out millions of trees, you won't see anything like these trees again." 
In a March 18, 1923, article in the _New York Times_, a reporter asked British climber George Mallory, after two unsuccessful expeditions to attempt to climb Mount Everest, why he wanted to try again the following year — why the alpinist felt compelled to summit the tallest mountain in the world. "Because it's there," Mallory is quoted as saying. The now-legendary retort has been called "the most famous three words in mountaineering," and reduced the world's greatest sporting feat to its fundamental motivations. The climber didn't need a grand reason. He didn't need to make a point or push a cause. He didn't need to puff his chest or inflate his accomplishments. He just had a job to do: put on his spiked boots and step into the mountains. A foreman on the crew that was hauling the logs from cutblock 7190 asked Dennis Cronin why he saved that particular tree. Cronin offered a response with a similar rationale. "Because I liked it," he said.

* * *

Throughout his career, Dennis Cronin stumbled upon other unusual finds while working in the forests of Vancouver Island. He noticed countless examples of Indigenous people using the trees as a resource: holes that had been drilled into cedars to test their density; or stacks of cedar shakes, split and ready to use to construct houses. On one occasion, he found an unfinished ocean canoe partially dug out of a felled cedar located two and a half kilometres from the coast. The Indigenous carver had sought out the most ideal piece of timber, even if it would produce a canoe that would eventually have to be hauled through thick forest for days to be launched at the shoreline. But the canoe had been abandoned — likely from a defect that had appeared in the wood — more than a century ago, Cronin estimated, judging by the one-hundred-year-old tree that was growing out of the log. "They were just starting to carve it out, but left it," Cronin said. "It was _exactly_ ten metres."
He uncovered pre–European-contact stone tools and hundreds of culturally modified trees. On October 23, 2013, while surveying a patch of forest on a mountainside overlooking Port Renfrew, he stumbled upon a remarkable archaeological find: the wreckage of an airplane. Cronin and his partners had just laid out a cutblock near the top of a mountain, which held less timber than they were expecting, when they began cruising down the slope in search of a high-value stand that could be incorporated. As Cronin scrambled down through the salal bushes, he spotted bright yellow among the green and brown. He picked up a piece of twisted aluminium and called to his co-workers, including his partner Walter Van Hell. "There's an airplane here!" Cronin yelled out. One piece of metal led to another, and another even larger still, until the timber workers were surrounded by fragments of a fuselage, wings, and two distinct propeller engines. The wreck looked old, overgrown as if it had been there for decades. By happenstance Cronin had solved a decades-old mystery: the disappearance of Avro Anson L7056, a Second World War–era British training aircraft. Just after 9 a.m. on October 30, 1942, the plane had taken off from RCAF Station Patricia Bay, now the site of Victoria International Airport, on a three-hour navigational training flight. It never returned. A number of other planes had been lost at sea during similar exercises; the same was assumed to have happened to L7056, but Cronin's discovery proved otherwise. The pilot had likely become disoriented in the thick fog that often forms a bank along the coastline and crashed into the forested mountains just inland. The aircraft would have entered the forest like a lightning bolt, carving a line through the trees before disintegrating. Four airmen, two of them just twenty-one years old, died.
Within a sprawling debris field spread throughout the forest, the three timber workers found a leather boot, a first-aid kit, and what looked like a bomb protruding from the earth. The men left the wreck and called the RCMP. Seventy-one years to the day after the plane crashed, they led members of the Canadian Forces to the site, which had been kept from the public to ensure artefact hunters wouldn't prowl the wreckage looking for trinkets, and closed the chapter on an enduring mystery. But of all Cronin's findings, it was the big Douglas fir in cutblock 7190 that stood out. During that sunny winter day in 2011, he unintentionally created a monument that is drawing pilgrims away from the famed coastlines and over to the frontlines of old-growth logging in the heart of Vancouver Island. "Back in the day, that tree would've been cut down," Cronin said. "I'm glad it grabbed everybody's attention. Nobody would have ever seen it if we hadn't logged that piece." It is a statement — that logging was responsible for revealing the second-largest Douglas fir in the country — that is hard to hear for activists like TJ Watt, who continue to spend weekend after weekend in the hope of finding and saving not just the big trees but the forests around them. And yet if Big Lonely Doug was a twenty-metre-tall fir standing alone in a cutblock, it would not have attracted as much attention; if it was growing at the edge of a clear-cut, it would not have offered such a stark image; if it was found deep in the hills of Vancouver Island, far from a town like Port Renfrew, it would never have brought so many visitors to stand at its broad base; and if the tree was found in an already-protected forest, never in danger of being cut down, it would never have been given its name nor made headlines. The tree that is known as Big Lonely Doug is a product of many factors that began when a logger stood beside its trunk and looked up. 
* * *

Big Lonely Doug is one of the last remaining great specimens of an endangered species. If it had the face and white fur of the spirit bear, it would have governments partnering with environmental groups to protect its habitat. If it swam in pods and leapt from the ocean like an orca, it would have documentaries made about its plight that sparked public outcry and protesters outside aquariums. But Big Lonely Doug is endangered nonetheless. It is one of the last of its kind — the great trees of Vancouver Island; an example of natural grandeur and history that will soon only be found in a few protected zones and seen by only the most intrepid among us. For now, Big Lonely Doug stands tall. The tree's thick roots, as wide as a person, draw groundwater from deep underground and up seventy metres to nourish its crown of dark green needles and the mosses, lichens, and ferns that cling to its high branches. In the years since cutblock 7190 was logged, life has slowly returned to the barren twelve hectares alongside the Gordon River. Around the base of the great tree, huckleberry and salmonberry bushes bristle through sun-bleached fragments and dead branches of the great cedars, hemlocks, and firs that once stood shoulder to shoulder in this valley. Trees are growing there, too. The replanted seedlings are inching upwards, filling the blank space and returning green to the cutblock with every year that passes. Because life is opportunistic. The network of underground fungi will eventually return to connect the great fir's broad roots to those just starting out. Water and nutrients will begin to flow through the subterranean network to once again connect the trees. There are other knee-high trees growing in cutblock 7190 that are undoubtedly the offspring of Big Lonely Doug. When a seed falls from a tree growing in an intact forest, it tumbles directly down through the canopy, protected from winds that otherwise can carry it afar.
But for an isolated and exposed tree bearing the full brunt of the winds, its seeds may well be caught up in a torrential updraft and carried as far as a kilometre away. There are still days every autumn when little more than a cool breeze will enter the valley and ruffle the branches and cones of the lone tree in the cutblock, dislodging seeds that tumble the sixty-six metres to the ground. Most of the tens of thousands of seeds will never sprout, finding the ground inhospitably dry or overexposed, but some will find a niche of tolerable conditions and thrive. A forest will return to cutblock 7190. It will take decades — maybe close to a century — for the seedlings to become saplings and eventually grow into trees that will begin to fill in the blank space around Big Lonely Doug. The forest that became a clear-cut will become a crop — and the trees that will eventually surround the single towering fir will never be the same as those that once stood. The majority of the seedlings, all planted at once, will grow in unison and create an even-aged canopy that blocks the beams of sunlight that so commonly penetrate the variegation of old-growth forests. Moss and lichen and undergrowth will struggle to fully establish in the drier and darker conditions. In several decades, these twelve hectares along the northeast bank of the Gordon River will look much like the rest of the second-growth forests found across Vancouver Island. Eventually, the replanted seedlings will mature into trees substantial enough to once again draw the attention of the timber industry. We won't let this stand grow for centuries and centuries until it begins to resemble what it once did — natural deadfall becoming nurse logs for the next generation of trees, mounds of moss and thickets of salal covering the forest floor, lichens dripping from branches — with all the depth of character that can be achieved only with time. Instead, our impatience will overrule once again. 
The next generation of foresters will be sent to the valley with their orange and pink flagging, to plot out the boundaries of a new cutblock and create a map depicting how the logs can be extracted. The cruisers will arrive next to assess the value of the stand, now ripe to be cut. The dollar figure will be much less than any patch of old growth that once stood on Vancouver Island, but it will be all the industry can get by that point. Machines will come next — hulking loaders and trucks will be driven across the bridge high above the Gordon River and along the road at the base of Edinburgh Mountain to their next job site. Under a new name, cutblock 7190 will be logged once again. The trees will come down more easily than the ancients that stood before, most likely with the help of a machine that can saw through the narrow trunks with ease. But before the regrown patch of forest disappears, under the canopy will walk a logger — an engineer, a timber cruiser, a faller — with a job to do. Each tree they weave around will be identical to the last, looking like stalks of giant corn growing uniformly in a field. But they will come upon one, a Douglas fir with a girth and height that dwarfs its neighbours; it will protrude from the uniform canopy like a monolith. The logger will stand under the big tree and stop — and gaze from broad base to broken top. What kind of value will that logger see?

* * *

Like the great fir he saved, Dennis Cronin was the last of his kind. If the remaining old growth is eventually brought down, the generations of loggers who put axe and chainsaw to trunk will have no more great trees to cut. The West Coast fallers who stalked the forests of Vancouver Island in search of big timber will find only small trees left to cut. The shift to cutting second growth exclusively will arrive — if not by choice then by necessity — and with it a continued shift towards mechanized falling.
When that happens, Vancouver Island's old-growth legacy — save for a few scattered parks and protected areas — will be lost, and with it, any potential for communities similar to Port Renfrew to build new economies out of groves left intact and trees left vertical. And so the race continues to find Vancouver Island's last great trees. Down kilometres of logging roads, far from public view, timber workers search for pockets of dense green worth millions. Activists are on the hunt, too, for the same trees but with a different vision. They hope a few can be saved as small groves or larger forest tracts, to be preserved, enjoyed, and appreciated for centuries to come. Over his career, Cronin saw thousands of hectares of forest up and down Vancouver Island disappear — ancient trees in great stands felled, limbed, hauled, and loaded onto trucks destined for the mill. He walked some of the grandest forests on the West Coast, and loved every second of it. After he flagged the big Douglas fir in cutblock 7190, Cronin would often return to stand beside its wide trunk and under its crooked canopy. One weekend, shortly after the clear-cut, he took his wife, Lorraine, and their friends Joe and Karen Simpson to the tree. They drove to Port Renfrew and out onto the bumpy logging roads, with Cronin describing the details of the landscape they passed and where he had worked. "He was a master of the backroads," Joe Simpson said, remembering the day. "I can't tell a rose from a thorn, but he knew all the plants, all the trees, and all the flowers." Cronin spotted a small herd of elk grazing in a marshy meadow before anyone else did. They parked on the road and walked down into cutblock 7190, past stumps of giant cedars and firs. Cronin was proud, Karen Simpson remembered, to show them the tree that was still standing. He told his friends how there are only a few of these exceptionally large and old Douglas firs remaining on Vancouver Island.
Originally from Ottawa, the Simpsons were astounded — standing back and looking up at a thousand-year-old tree more than two-thirds the height of the Peace Tower on Parliament Hill in their hometown. "These guys that work in the lumber industry see all sorts of trees, but Dennis obviously recognized this one as a very, very special tree that should never be cut down," Joe Simpson said. The four of them joined hands to try to encircle the tree. They came up several people short. Sometimes Cronin and his wife took the drive from their home in Lake Cowichan to visit Big Lonely Doug. Sometimes they brought their sons. But Cronin would always remember to pack a bag of bread for the Steller's jay. After cutblock 7190 was harvested, the jay that followed him around like a dog moved to the grove of old growth next door. The bird finally flew over the creek it would never cross — by force rather than by will — after the trees it had called home disappeared. Cronin would stand on the road alongside the droopy limbs of hemlocks and cedars, and the bird would fly out of the forest and eat from his hand. After four decades working in the forests of Vancouver Island — first as a hooktender for a crew and then as a forest engineer — Cronin's career ended abruptly. On September 5, 2012, seven months after tying the green "LEAVE TREE" ribbon around the base of the second-largest Douglas fir in the country, he was diagnosed with colon cancer. He stopped work the following week. Like many cancer treatments, it came with ups and downs. Positive results were met with optimism; negative ones with increasing concern. He tried conventional treatments as well as non-traditional ones. As the chemicals that were fighting his disease coursed through his body, the man grew thinner. But it was a seemingly minor consequence of his disease and treatment that hit him hard — Cronin's moustache, which he hadn't shaved since he was old enough to grow one, began to fall out. He was devastated. 
The town of Lake Cowichan, despite losing its mill located up the lake in the small community of Youbou in 2001, had remained a logging town through and through. The town rallied to help the Cronins. A bottle drive, where people donated their refundables, and a hot dog sale were held to raise money to help the family cover costs. Timber workers filled their trucks with scrap wood gathered from cutblocks around the Lake Cowichan and Port Renfrew area that was then split and sold as firewood to help fundraise. Through it all, Cronin wanted desperately to return to work, to be healthy and back in the bush with his co-workers, his forests, and his peanut butter sandwiches. When treatments worked, he returned to work; it was during one of these returns, when his cancer had gone into remission, that he stumbled upon the wreck of the Avro Anson plane. But when the cancer came back, Cronin could delay his official retirement for only so long. He finally accepted that the odds were against him and that he might never return to work in the forests of Vancouver Island. He retired in the spring of 2015. Less than a year later, on April 12, 2016, Dennis Cronin died in the living room of his home in Lake Cowichan. His spike-soled logger's caulk boots and red vest lay at the ready. The valleys of Vancouver Island can be ruthlessly windy places and the Cowichan Valley is no different. That spring, a cool wind swirled in the valley, churning up white-capped waves on the long lake. The wind tore southwest over the hills — through ancient forest and over fresh clear-cut — towards the Pacific Ocean. It rose over Edinburgh Mountain, where Queen Charlotte goshawks caught the up-currents, twisting and turning in the air above their nests in the tallest trees. The wind rushed down the mountainside, dispersing the morning fog that hugged the trees in the Gordon River Valley before erupting into open space where a forest once stood. 
The wind swirled in the clear-cut and around the trunk of a single tree standing on its own. The tree's glossy green needles ruffled, its broad trunk swayed ever so gently back and forth — another force pulling at its limbs — but the tree stood. Still tied at the base of the great Douglas fir, Dennis Cronin's green ribbon fluttered in the wind.

Epilogue

A Giant

It takes great effort to leave footprints in an old-growth forest in a valley on Vancouver Island, where every mark in the moss and soil from a heavy step is near-instantly absorbed. A simple stroll is always an ordeal. Vines and bramble snag at boots, damp ferns soak through pants, and every apparent way through ends up being blocked by a tree or fallen log or thicket. It is what painter Emily Carr called "perfectly ordered disorder designed with a helter-skelter magnificence." It was a sense of the unknown — what may lie hidden around the next turn — that kept bringing TJ Watt back again and again to Port Renfrew's forests. One grey September day, Watt dipped under damp hemlock branches and into Eden Grove. This forest had always held a particular pull for him. It was where he came so close to stumbling upon the second-largest Douglas fir in the country but arrived too late. It was where a photograph had changed the course of his organization. Scattered groves similar to Eden stand flagged and ready to be razed. They are tucked away down kilometres of remote logging roads across Vancouver Island, far from where the pavement ends. These forests will fall in a quiet thunder, like thousands before them. Vancouver Island has already entered the twilight years of its old-growth logging. Most of the great trees are already gone: cut, hauled, milled, and sold. The magnificent towers of nature broken down and reassembled into great manmade towers in their stead. But the tipping point is coming. It is only a matter of time.
The question of Vancouver Island's timber industry shifting from old growth to second growth is not one of _if_ but _when_. The finite supply of ancient, big trees will, when exhausted, force that change. Both activists and loggers agree that the days of old-growth logging are approaching the horizon. Some say within ten years, while others more optimistically say twenty. Like cutblock 7190, the southern half of Eden Grove was flagged and surveyed by Dennis Cronin and Walter Van Hell; their orange, pink, and red ribbons were still tied to the branches and fluttering in the light breeze. Teal Jones, the licensee for this stand of old growth, holds the power to send in fallers with their chainsaws at any moment. Watt photographed a series of pink "road location" ribbons dangling in a line through the old-growth forest like a trail of ominous breadcrumbs. It was a picture that hinted at what could come next: the thin pieces of ribbon replaced with a road cut through a previously undisturbed forest. There is a stoicism in Watt, a self-assurance from knowing that more can be gained in the fight to protect Vancouver Island's dwindling old-growth forests by gradually and patiently taking steps forward. The best campaigns take time to conceptualize, design, and implement. Clever marketing can be as effective as bullish activism, and there is sometimes more power in a picture than in a protest. "You go to Egypt to see the pyramids, but people are coming here to see the trees," Watt said, standing before a western red cedar nearly as burly and twisted as his Gnarly Tree. After Avatar Grove hit the news, he began receiving calls from tourists from as far as Russia, Australia, and Switzerland asking to hire him as a guide to see Port Renfrew's great trees. The ground under his boots was spongy and sodden; the bushes of salal and huckleberry dripping with dew. 
He tiptoed along a fallen cedar log, slowly rotting but acting as a veritable nursery for hundreds of hemlock seedlings. Watt continued down towards the Gordon River, farther into the forest, until a large cedar came into view through the tangle of undergrowth and trees. A metre-long slit in the tree's trunk created an opening, around which were layers of scratches and claw marks. A bear den. Watt had visited this tree often since his first forays into the forests of this valley. Nearby, a narrow creek rushing water towards the river acted as a demarcation line between the two cutblocks — between intact forest and clear-cut, between what he was fighting to protect and what had fallen, between past and future. He peered through a window in the forest made by drooping branches, and across cutblock 7190, to spot the silhouette of Big Lonely Doug against the grey sky. Watt bent down to remove a field camera he had wired to a small cedar trunk on a previous visit. He was hoping to capture video of the black bear entering her den. He pressed play; the small LCD screen flashed on and the one-minute video began: rain patters down through the canopy, as the mother bear lumbers into the frame next to the giant cedar with the hollow. She is quickly followed by her cub, likely born that spring, which stops suddenly in the forest after noticing the camera, a foreign object, attached at eye level to a tree. The cub approaches, licks the lens, and gnaws for a second on the metal box before realizing it has fallen behind its mother. It bounds off after her. With no more motion in the frame, the video cuts to black. The megafauna that inhabit the Pacific temperate rainforests often leave traces of their existence — scat, claw marks, dens, disturbed earth. But they remain hidden and quiet, lurking just out of sight of the humans who thrash clumsily about. But they are there.
For Watt, ecstatic as he tucked the full memory card into his camera backpack, to see a mother black bear and her cub among these giant trees, their fur wet with rain, was a reminder that these forests hold value for more than just humans. Watt descended farther into the forest, from colossal cedar to towering fir to colossal cedar, weaving through the underbrush as if he were following a well-marked trail, until he reached a pebble-bottomed creek. In a flash of blue and black, a Steller's jay landed on a mossy branch above him, looked down, and cocked its head expectantly.

* * *

As the Ancient Forest Alliance's director, Ken Wu has always seen his job as similar to a filmmaker's. "The big trees are sort of like an actor, but you still have to have the script, the directing, the producing, and the cinematography," he says. A narrative is needed to fill in the blanks, superlatives to provide the hook, and tension to add the drama — but it all needs to be framed around a tree or grove that will evoke awe. "We can't use an encrusted lichen as the most charismatic character." In the narrative of Big Lonely Doug, there was obvious tension, but Wu hopes his campaigns focus on the beauty rather than the destruction. "We have to include enough destruction in there that it's compelling and motivating and urgent," he says. With Avatar Grove, the layer of destruction in the narrative was a looming one — a question of what is at risk of being lost. With Big Lonely Doug, it was a glimpse of the stark reality of timber harvesting on Vancouver Island. Arnie Bercov at the Public and Private Workers of Canada has forged a relationship with the AFA to achieve similar goals: sustainable management of British Columbia's forests as well as development of communities.
Many see the friendly relationship between a labour union representing timber industry employees and an environmental activist group pushing to save the trees as an unlikely alliance — but the two organizations have found common ground at an intersection where clichéd rhetoric is replaced with no-nonsense pragmatism. The relationship has revealed that the age-old image of burly loggers facing off against dreadlocked tree-huggers is largely a construct, and the environment-versus-jobs argument a smokescreen. While the PPWC has been supportive of the Ancient Forest Alliance's call for an increase in old-growth forest protection and a decrease in raw log exports, it is still a labour union — it represents workers and jobs above all else. For Bercov, where the provincial government has failed is not in the forests that we are not cutting but in the forests that we _are_ cutting. "If we're going to make the effort to cut a tree down," Bercov says, "then we better make sure we use it to the fullest extent." Vancouver Island has the potential to emerge as a model not just for British Columbia or Canada but for North America, in which a longstanding industry based on a primary resource not only adapts to meet the changing environment but also looks to the future. It could stand as a model where healthy ecosystems and healthy timber workforces are not mutually exclusive. For Bercov, there lies potential in innovation in sustainable second-growth forestry, in investment in new mills tied to timber leases producing high-value products such as laminated beams made from second-growth trees, and in diverting stumpage fees to support Indigenous communities pursuing more diverse economic models. If properly managed, Vancouver Island's forests stand to be the epitome of a renewable resource. "The industry is a shell of what it should be," Bercov says. "Not what it _could_ be but what it _should_ be."
It should be creating jobs, addressing climate change, providing opportunities for the next generation, working with Indigenous communities, and seeking out alternative sources of income from our forests. It should be valuing every part of a tree — whether cut or standing.

* * *

The value the Ancient Forest Alliance has gleaned from Big Lonely Doug, in awareness and attention, has spread their cause of protecting the region's old-growth forests well beyond Vancouver Island's coastlines. Port Renfrew's big trees have caught the eye of filmmakers and photographers around the world. In 2016, acclaimed filmmaker and artist Kelly Richardson took a hike through Avatar Grove and found herself overwhelmed, physically and emotionally. To celebrate the fiftieth anniversary of IMAX in 2019, she partnered with Christian Kroitor, the grandson of Canadian filmmaker and IMAX co-founder Roman Kroitor, to begin filming a moving-image installation that would highlight not only the size of the region's trees but also their predicament — "why we continue to define progress through the conversion of nature." It was an image of Big Lonely Doug that excited world-renowned environmental photographer Edward Burtynsky. Over a career dating back to the 1970s, Burtynsky, originally from St. Catharines, Ontario, has travelled around the globe producing images that are hauntingly beautiful in their depiction of destruction, corrosion, and consequence. Often shot out of a helicopter or plane window, his photographs take on an otherworldly veneer, where the subject and scene aren't immediately clear. It takes a moment to realize what exactly is in the frame. _Is that water? Is that oil? Are those trees?_ And then, like a Magic Eye autostereogram, the reality comes into focus. His photography has been exhibited at the National Gallery of Canada in Ottawa, the Bibliothèque nationale in Paris, and the Guggenheim Museum in New York City.
Burtynsky starred in the 2006 documentary _Manufactured Landscapes_, set on one of his shoots in China, which took the viewer from an appliance factory to the Three Gorges Dam to highlight the impact of mass industrialization. In 2013, he co-directed _Watermark_, which visited nearly a dozen places around the world to show how water is used, consumed, and tainted by humans. While doing research for possible subjects and locations for his next film and visual project, _Anthropocene_, Burtynsky felt that one of the more pressing issues of humans transgressing the boundaries of the planet is deforestation. He considered the forests cut for palm-oil production in Borneo and the destruction of rainforests in the Amazon — two subjects with plenty of media attention — but ultimately settled on a location closer to home: Canada's West Coast. One image and story stood out: that of a single Douglas fir left standing in the middle of a cutblock near Port Renfrew, British Columbia. Big Lonely Doug offered an opportunity for Burtynsky to find a new way of communicating an image and an issue. While staying true to his well-known ethos — large-scale photographs depicting the often unseen and fraught intersections where humans and environments meet — he employed new technology. Rather than trying to come away with a singular photograph that encapsulated the story, the place, and the issue, he set out to capture the tree in new ways that would highlight its scale and individuality on a level people hadn't seen before. Over several visits to the tree, Burtynsky gathered footage for his documentary but also used a drone to capture high-resolution images of Big Lonely Doug's trunk that can be stitched together. Spanning 1.5 metres high and up to 12 metres long — the length of the tree's circumference — the image will be a life-size representation of the tree's girth that can be printed and displayed flat against the wall of a museum, business, or institution.
"It's a little more conceptual in its origins than trying to be accurately representational," Burtynsky says about the image. "It's more about trying to represent a tree in a different way." Big Lonely Doug also presented an opportunity for Burtynsky to represent the tree in three dimensions. Using hundreds of images he has taken of the tree, he has created an augmented reality object: people can download an app to their smartphone or tablet, stand back, and see through the camera on their device a life-size 3-D virtual image of Big Lonely Doug standing before them on the street. They will be able to walk around its massive trunk and take photos of their friends alongside the second-largest Douglas fir in the country. It could take the issue of Vancouver Island's old-growth destruction and preservation to people anywhere in the world. "It will bring the scale of this tree into public consciousness," Burtynsky says. Big Lonely Doug is one of several of his augmented reality installations, including a mound of automotive parts in a scrapyard in Ghana and a pile of tusks from poached elephants, confiscated by officials and set alight, that Burtynsky photographed in Kenya. People will be able to walk up to and around the thousands of tusks piled twenty feet high. "It's a way to speak about extinction. And cutting down these thousand-year-old trees is the same." He hopes that his projects will inspire change, both in perspective and policy, and help spur a moratorium on old-growth logging in British Columbia. "Big Lonely Doug is a hopeful symbol. It represents that these amazing ecosystems are still among us — and that they are truly our responsibility to preserve." If Big Lonely Doug had never been found, never flagged, and never protected, it would have lived out its natural life, no doubt as one of the last exceptionally large Douglas firs on Vancouver Island. It would never have been climbed and measured, never been added to the B.C.
BigTree Registry alongside its elite brethren. It would never have been turned into a symbol, marketed, and promoted. Big Lonely Doug was erected like a tower. It was a calculated creation to highlight the plight of an entire species — the Douglas fir — of an entire landscape — the hills and valleys of Vancouver Island — and of an entire ecosystem — the Pacific temperate rainforests of Canada's West Coast. And it has worked. Few single trees in Canada have ever enjoyed such a reputation. Still, in a heartbeat TJ Watt would trade the single tree as it stands now for the forest that was razed around it. He still shakes his head when he thinks about how close he came, while hiking through Eden Grove mere months before Dennis Cronin, to saving both: "I went left instead of right." For Ken Wu, so much of the forest industry and the dominant paradigm have been focused on the tree rather than the ecosystem, which has allowed the timber companies to claim that old growth can be easily replaced by plantations, and that leaving Big Lonely Doug is a fundamentally good deed. But Wu still sees value in these individual trees. For years, Wu has been petitioning British Columbia's Liberal government to create a Big Tree Protection Order, a piece of legislation that would shield the province's biggest trees for good. Specimens of a certain diameter would be untouchable to timber companies, in recognition of their superior ecological and cultural value — above and beyond what they are worth as boards and posts. Additionally, each tree that met the size and age requirements would be left with a surrounding buffer of forest. In many areas that hold a high density of large trees, these buffer zones would overlap to effectively place the entire grove off limits to logging. In 2009, British Columbia Minister of Forests and Range Pat Bell tempered the appeal. 
"We're confident these trees won't be harvested," he told the _Vancouver Sun_ about the record-holders listed in the BigTree Registry. "They're tagged, they're named, we know exactly where they are and we're keeping track of them . . . No district manager would dare approve a cutting plan or permit that would allow for the harvesting of any of these trees." But Wu wasn't concerned with those already recognized and named by the public; he was worried about the trees nobody knew about — ones that could be felled before anyone noticed. Two years later, with public pressure mounting following the rise in popularity of Avatar Grove, Bell hinted that the government was considering a legal tool that would protect the largest trees in the province from logging. New optimism bloomed in 2017, when a coalition of the New Democratic Party and the Green Party was elected to replace Christy Clark and the Liberals, to much celebration from West Coast environmentalists. While the new government quickly stated it could not commit to a full moratorium on old-growth logging, it announced "a new policy is being developed to protect iconic trees in B.C." On January 1, 2018, B.C. Timber Sales, the agency that has managed the timber harvested off public land on behalf of the provincial government since 2003, released a best-practices guideline for retaining "legacy trees" — those exceptionally large trees on the coast that "are increasingly supporting the growing ecotourism economy as valuable destinations in and of themselves." The guideline noted minimum diameters for various species including western red cedar, Sitka spruce, and Douglas fir that would place them off limits to logging, but cautioned that "it is up to the judgement of the assessor to use both estimated measurements and quality indicators to determine if a tree qualifies as a legacy tree suitable for retention."
It was a step forward, but one that gave little assurance to activists concerned about placing the onus of protection into the hands of a logger. Just months after the guideline was released, the ninth-largest Douglas fir in British Columbia was felled along with its old-growth grove in the Nahmint Valley on Vancouver Island. In the wake of public outcry, the B.C. government announced that it would review its policy. As governments delay, timber companies continue to cut ten thousand hectares of old-growth forests on Vancouver Island every year — three square metres every second. But Ken Wu can't help but look to one tree as a symbol of hope, the one tree at the centre of this legislative push: "This will be the legacy of Big Lonely Doug." It will also be the legacy of the logger who saved one tree, but in the end might protect many. The way forward through the seemingly impenetrable forest may be sparked not by a protester chaining herself to a logging truck but by the simple act of a logger saving a single tree — and doing more for the protection of old-growth forests than any march or barricade.

* * *

A year after her husband died, Lorraine Cronin made the short drive from her home in Lake Cowichan to visit Big Lonely Doug. It was a cloudy day, overcast and dreary as so many spring days are on the West Coast. But the low cloud and pillows of mist softened the edges and brought the world in close. The last time she had stood under the tree was with Dennis, in his final year. Lorraine took a detour first, up the steep switchbacks to near the top of a mountain where the head of Port San Juan and a few buildings of Port Renfrew came into view in the distance below. She parked her truck along the side of the logging road by a recent clear-cut. The clouds threatened a downpour but held back.
The path to the wreckage of the Avro Anson plane found by her husband and his crew was marked along the scattered clear-cut with small holes in the logs from the spikes of timber workers' caulk boots — some undoubtedly left by Dennis. She moved slowly over the logs, careful not to slip, until she found the forest trail marked by a piece of ribbon dangling from a tree. The first sign of anything unnatural within the forest was flakes of yellow paint, some the size of a fingernail, others as large as a hand. After more than seventy years, the Second World War aircraft appeared to have been well consumed by the undergrowth. But with each step closer to the crash site, bent pieces of metal stuck out among the salal and sword fern. A large cedar broken halfway up its trunk marked the beginning of the wreckage — Dennis Cronin had wondered if the plane had struck the tree when it ploughed through the forest. Then, among the trees: an engine with a bent propeller, parts of the wooden fuselage still intact after three-quarters of a century, an electronic board with circuits and wires, and an unbroken lightbulb. A mound of metal fragments lay collected to one side from when the military team and archaeologists had combed the wreckage, looking for remains or personal effects. Lorraine beamed with pride at the thought that her husband had helped solve the mystery of the missing plane, and helped descendants in Britain find closure. The mist on the mountain grew thicker as Lorraine returned to her truck and began the cautious drive down the steep logging road. She drove past a line of cars parked at the trailhead to Avatar Grove, where hikers and tourists were taking pictures under Canada's Gnarliest Tree. She kept going without pause. After fifteen minutes, she took the first right off the Gordon River Main Line logging road and crossed the bridge high over the churning water. 
"My kids used to call this Daddy's Bridge," she said, explaining how Dennis had been part of the crew that first installed it. She continued on as the road steepened to a grade only manageable by a four-wheel-drive vehicle, past the plantations of second-growth forest, and past the overlook built by Teal Jones road builders to haul the logs out of cutblock 7190. Two vehicles were parked alongside the dirt road near the trail leading down to Big Lonely Doug. The first belonged to the crew of photographer Edward Burtynsky, whose team was operating a drone to capture images of the tree. The second vehicle, a blue Mitsubishi Delica van, belonged to TJ Watt, the Ancient Forest Alliance activist and photographer who had helped launch the tree into the limelight. Lorraine sighed. She was hoping for a quiet moment. She parked her truck, put on her jacket, and started down the trail to the tree. Squatting on a stump halfway down was Watt, who was providing an overview perspective to help the drone operator. "My husband saved that tree," Lorraine said, her voice quivering yet direct. The activists claiming the "discovery" of the tree always made her feel like what Dennis had done was being overshadowed. "We're thankful for it," Watt replied. For years, Watt had wanted to sit down with Lorraine Cronin to have a conversation about how their work isn't meant to be combative or aggressive towards timber workers and their families. On that misty day, the two forces that made Big Lonely Doug — the widow of the logger who flagged it and the activist who promoted it — had collided by happenstance under the great tree's boughs. Lorraine looked across the cutblock to Big Lonely Doug, turned, and headed back up the trail to her truck. Halfway down the path was as close as she could get to the tree her husband had saved. It was a start.
The whir of the drone kicked up, its helicopter-like blades buzzing as the man with the controller navigated it up and down and around the trunk of the enormous tree. The photographs were to be stitched together to form the high-definition, 360-degree image for their augmented reality exhibition. Lorraine shut the door to her truck. The fog had condensed to a light drizzle and tears began welling in her eyes. It was an odd moment for her. Big Lonely Doug is more than just a big tree her husband saved. It has become a monument of sorts — a twenty-storey-tall tombstone to a man who loved the forests. That same month, the South Island Natural Resource District designated Big Lonely Doug a recreational reserve as per the Forest and Range Practices Act. From there, the "minister may order the establishment of Crown land as an interpretive forest site, a recreation site or a recreation trail." It is a designation that has been given to other significant trees in the area, including the Red Creek Fir and the Harris Creek Spruce, which stand outside formal protection areas such as a provincial park. If turned into a recreational site, it would be promoted and advertised formally by the provincial government, luring a new wave of tourists to keep the lonely tree company. The recreation officer gave the reserve the forest file identification number REC230530 — not quite as emotive as the name bestowed by the activists. Despite expressing caution about the bridge leading to the tree, Teal Jones was not devoid of support for turning the tree into a destination — though the timber company's representatives preferred the name "Dennis Cronin Memorial Tree" over "Big Lonely Doug." It took a millennium for this Douglas fir to turn from one of a million seedlings sprouting along the Gordon River near Port Renfrew into one of the largest trees in Canada.
But within the few short years since Dennis Cronin paused under the tree and tied a piece of green ribbon around its base, the tree has grown exponentially in renown. It became Big Lonely Doug — a tourist site and a rallying point for environmental activists; a symbol of the future of logging and the future of Vancouver Island's ancient forests. It went from a tree surrounded by forest to a tree in a wasteland to a tree known around the world. Sitting in the cab of her truck, Lorraine Cronin stared with watery eyes at the Douglas fir in the middle of a clear-cut, watching the branches softly tremble in the breeze. She thought of her husband, Dennis, and shook her head ever so slightly. "It's just a tree."

Notes

Chapter 2: Evergreen

Meidinger, Del, and Jim Pojar, eds. _Ecosystems of British Columbia_. Victoria: B.C. Ministry of Forests, February 1991.
Sierra Club of Western Canada and the Wilderness Society. _Ancient Rainforests at Risk: An interim report by the Vancouver Island Mapping Project_. Victoria: 1991.
Allen, George S., and John N. Owens. _The Life History of Douglas-fir_. Ottawa: Environment Canada, Canadian Forestry Service, 1972. https://doi.org/10.2307/1296578.
"On definitions of forest and forest change." Food and Agriculture Organization of the United Nations, November 2, 2000. http://www.fao.org/docrep/006/ad665e/ad665e00.htm.
_Global Forest Resources Assessment 2010_. Rome: Food and Agriculture Organization of the United Nations, November 2, 2010.
Venkateswarlu, D. "Definition of forests: A review." New Delhi: Teri University. http://www.teriuniversity.ac.in/mct/pdf/assignment/VENKATESWARLU.pdf.
Sexton, Joseph O., Praveen Noojipady, Xiao-Peng Song, Min Feng, Dan-Xia Song, Do-Hyung Kim, Anupam Anand, Chengquan Huang, Saurabh Channan, Stuart L. Pimm, and John R. Townshend. "Conservation policy and the measurement of forests." _Nature Climate Change_, October 5, 2015. https://doi.org/10.1038/nclimate2816.
Centre for Forest Conservation Genetics, University of British Columbia. ClimateBC/WNA/NA program. http://cfcg.forestry.ubc.ca/projects/climate-data/climatebcwna/.
Jones, Charles. _Queesto: Pacheenaht Chief by Birthright_. British Columbia: Theytus Books, 1981.
Arima, E. Y. _The West Coast People: The Nootka of Vancouver Island and Cape Flattery_. Victoria: British Columbia Provincial Museum, 1983.
Finkbeiner, Ann. "The Great Quake and the Great Drowning." _Hakai Magazine_, September 14, 2015. https://www.hakaimagazine.com/features/great-quake-and-great-drowning/.
Tindall, D. B., Ronald L. Trosper, and Pamela Perreault, eds. _Aboriginal Peoples and Forest Lands in Canada_. Vancouver: UBC Press, 2013.
Archaeology Branch, B.C. Ministry of Small Business, Tourism and Culture. _Culturally Modified Trees of British Columbia: A handbook for the identification and recording of culturally modified trees_. British Columbia: Resources Inventory Committee, March 2001. https://www.for.gov.bc.ca/hfd/pubs/docs/mr/mr091/cmthandbook.pdf.
"Man must apologize for cutting old trees." _Canadian Press_, November 16, 2001.
Banner, Allen, and Philip LePage. "Long-term recovery of vegetation communities after harvesting in the coastal temperate rainforests of northern British Columbia." _Canadian Journal of Forest Research_ 38, no. 12 (2008): 3098–3111. https://doi.org/10.1139/X08-145.
Moldenke, Andrew. "Small in Size, Great in Importance: Invertebrates in your soil." _Northwest Woodlands Magazine_ (Summer 2001).
Silva Ecosystem Consultants. _Old Growth Literature Review_. May 1992.
Menary, David. _Great Trees of Canada_. Indianapolis: Blue River Press, 1997.
Nelson, John David. _A Vanishing Heritage: The Loss of Ancient Red Cedar from Canada's Rainforests_. Vancouver: Western Canada Wilderness Committee, 2004.
Morales-Hidalgo, David, Sonya N. Oswalt, and E. Somanathan.
"Status and trends in global primary forest, protected areas, and areas designated for conservation biodiversity from the Global Forest Resources Assessment 2015." _Forest Ecology and Management_ 352 (September 2015): 68–77. https://doi.org/10.1016/j.foreco.2015.06.011.

Chapter 3: Tree of Many Names

Lindsay, Ann, and Syd House. _The Tree Collector: The Life and Explorations of David Douglas_. London: Aurum Press, 2005.
Hooker, W. J. "Companion to the Botanical Magazine." Vol. II, 1836.
"Four of Britain's tallest trees in glen near Inverness." BBC News, March 25, 2014.
Douglas, David. _Journal Kept by David Douglas during his Travels in North America: 1823–1827_. New York: Cambridge University Press, 2011.
Harvey, Athelstan George. _Douglas of the Fir: A Biography of David Douglas, Botanist_. Cambridge: Harvard University Press, 2014.
Nisbet, Jack. _The Collector: David Douglas and the Natural History of the Northwest_. Seattle: Sasquatch Books, 2010.
Newcombe, C. F., ed. _Menzies' Journal of Vancouver's Voyage: April to October, 1792_. Victoria: William H. Cullin, 1923.
"The discovery of gold in California." _The Century Magazine_ 4 (November 1890 to April 1891).
Murchison, Roderick Impey. "Siberia and California." _The Quarterly Review_ 87 (1850): 395–434.
Shakespeare, Mary, and Rodney H. Pain. _West Coast Logging 1840–1910_. National Museum of Man, Mercury Series. Ottawa: National Museums of Canada, 1977.
Davidson, John. _Conifers, Junipers and Yew: Gymnosperms of British Columbia_. London: T. F. Unwin, 1927.
Lauriault, Jean. _Identification Guide to the Trees of Canada_. Markham: Fitzhenry and Whiteside, 1989.
Hansen, Carl. "Pinetum Danicum." _Journal of the Royal Horticultural Society of London_, vol. XIV (1892).
Parminter, John. "A Tale of a Tree." _British Columbia Forest History Newsletter_, no. 45 (January 1996).
Gould, Ed. _Logging: British Columbia's Logging History_. Blaine, WA: Big Country Books, 1975.
Robson, Peter A., Art Walker, and the Working Forest Project. _The Working Forest of British Columbia_. British Columbia: Harbour Publishing, 1995.
Sculland, Keri. "Old forests get protection." _Alberni Valley Times_, August 5, 2010.
Sauder, E. A., and G. V. Wellburn. _Planning Logging: Two Case Studies on the Queen Charlotte Islands, B.C_. Vancouver: Forest Engineering Research Institute of Canada, September 1989. https://www.for.gov.bc.ca/hfd/pubs/docs/mr/Lmr/Lmr059.pdf.
Foster, R. E., G. P. Thomas, and J. E. Browne. "A Tree Decadence Classification for Mature Coniferous Stands." _The Forestry Chronicle_ 29, no. 4 (1953): 359–366.
Silva Ecosystem Consultants. _Old Growth Literature Review_. May 1992.

Chapter 4: Green Gold

Rajala, Richard. _The Legacy and the Challenge: A Century of the Forest Industry at Cowichan Lake_. Lake Cowichan: Lake Cowichan Heritage Advisory Committee, 1993.
Gould, Ed. _Logging: British Columbia's Logging History_. Blaine, WA: Big Country Books, 1975.
Hayman, John, ed. _Robert Brown and the Vancouver Island Exploring Expedition_. Vancouver: UBC Press, 1989.
Jones, H. H. "A cyclone among the Timber Titans." _British Columbia Magazine_, vol. VII (1911).
"British Columbia's Forest Policy: Speech by the Hon. William R. Ross, Minister of Lands, on the second reading of the Forest Bill." Legislative session of 1912.
Saywell, John F. T. _Kaatza: The Chronicles of Cowichan Lake_. Sidney: Cowichan Lake District Centennial Committee, 1967.
Gillis, R. Peter, and Thomas R. Roach. _Lost Initiatives: Canada's Forest Industries, Forest Policy and Forest Conservation_. New York: Greenwood Press, 1986.
Drushka, Ken. _Canada's Forests: A History_. Montreal: McGill-Queen's University Press, 2003.
Rajala, Richard Allan. _Clearcutting the Pacific Rain Forest: Production, Science, and Regulation_. Vancouver: UBC Press, 1998.
Province of British Columbia. _Report of the Forest Branch of the Department of Lands 1913_.
https://www.for.gov.bc.ca/hfd/pubs/docs/mr/annual/ar_1911-30/annual_1913.pdf.
Andrews, Ralph W. _This Was Logging: Drama in the Northwest Timber Country_. Atglen, PA: Schiffer Publishing, 1997.
Mackie, Richard. _Mountain Timber: The Comox Logging Company in the Vancouver Island Mountains_. Winlaw, B.C.: Sono Nis Press, 2009.
Pearce, Peter H. "Evolution of the forest tenure system in British Columbia." Vancouver: February 1992.
Turner, Robert D. _Logging by Rail: The British Columbia Story_. Winlaw, B.C.: Sono Nis Press, 1990.
Wolfe, Linnie Marsh. _John of the Mountains: The Unpublished Journals of John Muir_. Madison: University of Wisconsin Press, 1979.
Köhl, Michael, Prem R. Neupane, and Neda Lotfiomran. "The impact of tree age on biomass growth and carbon accumulation capacity: A retrospective analysis using tree ring data of three tropical tree species grown in natural forests of Suriname." _PLoS ONE_ 12, no. 8 (August 16, 2017). https://doi.org/10.1371/journal.pone.0181187.
Oregon State University. "Oldest trees are growing faster, storing more carbon as they age." _ScienceDaily_, January 15, 2014. https://www.sciencedaily.com/releases/2014/01/140115132740.htm.
S.J. and Jessie E. Quinney College of Natural Resources, Utah State University. "Inequality is normal: Dominance of the big trees." _ScienceDaily_, May 8, 2018. https://www.sciencedaily.com/releases/2018/05/180508155029.htm.
Stephenson, N. L., et al. "Rate of tree carbon accumulation increases continuously with tree size." _Nature_ 507 (March 6, 2014): 90–93. https://doi.org/10.1038/nature12914.
Faculty of Forestry, University of British Columbia. "The Pacific Salmon Ecology and Conservation lab." _Branchlines_ 27, no. 1 (Spring 2016).
Adams, Megan S., Christina N. Service, Andrew Bateman, Mathieu Bourbonnais, Kyle A. Artelle, Trisalyn Nelson, Paul C. Paquet, Taal Levi, and Chris T. Darimont.
"Intrapopulation diversity in isotopic niche over landscapes: Spatial patterns inform conservation of bear–salmon systems." _Ecosphere_ 8, no. 6 (June 2017). https://doi.org/10.1002/ecs2.1843. Babikova, Zdenka, Lucy Gilbert, Toby J. A. Bruce, Michael Birkett, John C. Caulfield, Christine Woodcock, John A. Pickett, and David Johnson. "Underground signals carried through common mycelial networks warn neighbouring plants of aphid attack." _Ecology Letters_ 16, no. 7 (July 2013): 835–843. https://doi.org/10.1111/ele.12115. Beiler, Kevin J., Daniel M. Durall, Suzanne W. Simard, Sheri A. Maxwell, and Annette M. Kretzer. "Architecture of the wood-wide web: Rhizopogon spp. Genets link multiple Douglas-fir cohorts." _New Phytologist_ 185, no. 2 (January 2010): 543–53. https://doi.org/10.1111/j.1469-8137.2009.03069.x. Simard, Suzanne W., Kevin J. Beiler, Marcus A. Bingham, Julie R. Deslippe, Leanne J. Philip, and François P. Teste. "Mycorrhizal networks: Mechanisms, ecology and modelling." _Fungal Biology Reviews_ 26 (2012): 39–60. https://doi.org/10.1016/j.fbr.2012.01.001. Twieg, Brendan D., Daniel M. Durall, and Suzanne W. Simard. "Ectomycorrhizal fungal succession in mixed temperate forests." _New Phytologist_ 176, no. 2 (October 2007): 437–47. Simard, Suzanne W. "Unseen Connections." In _We Discover_ , edited by Marc Guttman. 2016. www.wediscover.net. Sierra Club B.C. "Twenty-five international environmental organizations call for urgent action for Vancouver Island's rainforest and communities." April 10, 2017. https://sierraclub.bc.ca/25-international-environmental-organizations-call-for-urgent-action-for-vancouver-islands-rainforest-and-communities/. Sierra Club B.C. "Sierra Club B.C.'s Google Earth tool shows Vancouver Island old-growth in a state of emergency." March 30, 2016. https://sierraclub.bc.ca/sierra-club-bcs-google-earth-tool-shows-vancouver-island-old-growth-state-emergency/. Chapter 5: War for the Woods Sloan, Gordon McG. 
_Report of the Commissioner, the Honourable Gordon McG. Sloan, Chief Justice of British Columbia, relating to the Forest Resources of British Columbia,_ _1956_. Victoria: Don McDiarmid, 1957. https://www.for.gov.bc.ca/hfd/pubs/docs/mr/rc/rc004/Rc004-1.pdf. Utzig, G. F., and D. L. Macdonald. _Citizens' Guide to Allowable Annual Cut Determinations: How to Make a Difference_. Vancouver: British Columbia Environmental Network Education Foundation, 2000. Hume, Mark. "Tree team tracking giant spruce." _Vancouver Sun_ , May 14, 1988. Western Canada Wilderness Committee. _Carmanah Forever_ (film). 1988. https://www.wildernesscommittee.org/video/1988_04_15_carmanah_forever. George, Paul. _Big Trees Not Big Stumps: 25 Years of Campaigning to Save Wilderness with the Wilderness Committee_. Vancouver: Western Canada Wilderness Committee, 2006. Hume, Mark. "Carmanah road building halted." _Vancouver Sun_ , May 19, 1988. Hume, Mark. "Record spruce elusive, but big ones abound." _Vancouver Sun_ , May 17, 1988. Hume, Mark. "Woodsman spare that tree." _Vancouver Sun_ , June 11, 1988. "Most of the valley productive forest," _The Province_ , April 11, 1990. "Save the Carmanah and save the murrelets," _Vancouver Sun_ , December 2, 1989. Stanbury, William T. _Environmental Groups and the International Conflict over the Forests of British Columbia, 1990 to 2000_. Vancouver: SFU-UBC Centre for the Study of Government and Business, 2000. Salazar, Debra J., and Donald K. Alper, eds. _Sustaining the Forests of the Pacific Coast: Forging Truces in the War in the Woods._ Vancouver: UBC Press, 2000. Western Canada Wilderness Committee. _Visions of Carmanah_ (film). 1989. Carr, Emily. _Hundreds and Thousands: The Journals of Emily Carr_. Vancouver: Douglas & McIntyre, 1966. Western Canada Wilderness Committee. _Suzuki Kids in Carmanah Valley_ (film). 1990. https://www.wildernesscommittee.org/video/1990_05_23_suzuki_kids_carmanah_valley. MacMillan Bloedel. _The Incredible Forest_ (film). 
__ Canadian Forest Industries Films. Montreal: 1976. MacMillan Bloedel. _The Managed Forest_ (film). 1986. Rowell, Andrew. _Green Backlash: Global Subversion of the Environment Movement_. New York: Routledge, 1996. Niosi, Goody. _Magnificently Unrepentant: The Story of Merve Wilkinson and Wildwood_. Surrey, B.C.: Heritage House Publishing, 2001. Meikle, Graham. _Future Active: Media Activism and the Internet_. New York: Routledge, 2002. Wilson, Jeremy. _Talk and Log: Wilderness Politics in British Columbia_. Vancouver: UBC Press, 1998. Bohn, Glenn. "Arrests, injury, and tree spiking escalate battle over Walbran." _Vancouver Sun_ , September 24, 1991. Bohn, Glenn. "Environmentalists spiked for bounty." _Vancouver Sun_ , April 24, 1991. Boei, William. "Clayoquot Sound: 200 litres of human excrement dumped at anti-logging group's information tent." _Vancouver Sun_ , August 4, 1993. Forest Resources Commission _. The Future of Our Forests: Executive Summary_. Victoria: B.C. Ministry of Forests, 1991. Chapter 6: A Forest Alliance Ancient Forest Alliance. "New B.C. organization 'Ancient Forest Alliance' launched to protect B.C.'s old-growth forests and forestry jobs." January 19, 2010. https://www.ancientforestalliance.org/news-item.php?ID=1. Ancient Forest Alliance. "An exceptionally spectacular and accessible stand of newly located old growth redcedars and Douglas firs near Port Renfrew has recently been marked for logging." February 18, 2010. https://www.ancientforestalliance.org/news-item.php?ID=10. Ancient Forest Alliance. "Earth Day media release: Avatar's James Cameron invited by environmental group to visit the endangered 'Avatar Grove' of ancient trees." April 22, 2010. https://www.ancientforestalliance.org/news-item.php?ID=55. "James Cameron: Fox didn't want Avatar's 'treehugging crap,'" _USA Today_ , February 19, 2010. http://content.usatoday.com/communities/greenhouse/post/2010/02/james-cameron-fox-didnt-want-avatars-treehugging-crap/1#.WvsjZjKZP-Y. 
George, Paul. _Big Trees Not Big Stumps: 25 Years of Campaigning to Save Wilderness with the Wilderness Committee_. Vancouver: Western Canada Wilderness Committee, 2006. Ancient Forest Alliance. "The 'gnarliest tree in Canada' found in the endangered 'Avatar Grove' on Vancouver Island in British Columbia." March 25, 2010. https://www.ancientforestalliance.org/news-item.php?ID=33. Lavoie, Judith. "B.C. chops down bid to protect 'Avatar Grove.'" _Vancouver Sun_ , August 5, 2010. Forest Practices Board. _Logging Old-Growth Forest Near Port Renfrew_. Victoria: February 2011. Ancient Forest Alliance. "Breaking News: Avatar Grove might get saved — please write a letter now!!" February 12, 2011. https://www.ancientforestalliance.org/news-item.php?ID=196. Lavoie, Judith. "Island version of Avatar Grove given provincial protection." _Times Colonist_ , February 17, 2012. Gardner, Sheila. "Forest alliance welcomes government announcement to preserve Avatar Grove." CFAX, February 16, 2012. "Protection of Avatar Grove will boost tourism." _Sooke News Mirror_ , February 22, 2012. https://issuu.com/sookemirror/docs/snmn_2012_02_22. "British Columbia: clearcutting the 'Avatar Forest.'" _Pacific Free Press_ , February 19, 2010. Ancient Forest Alliance. "Stunning grove of unprotected old-growth trees located near Port Renfrew." May 11, 2017. https://www.ancientforestalliance.org/news-item.php?ID=1120. Ancient Forest Alliance. "Magnificent Old-Growth Forest found on Vancouver Island — 11 foot wide, near-record size Sitka spruce towers in 'FernGully Grove.'" December 15, 2017. https://www.ancientforestalliance.org/news-item.php?ID=1156. Ancient Forest Alliance. "Christy Clark Grove." April 20, 2012. https://www.ancientforestalliance.org/news-item.php?ID=413. "Ancient grove named for premier." _Sooke News Mirror_ , April 25, 2012. https://www.sookenewsmirror.com/news/ancient-grove-named-for-premier/. Klem, Greg. "Avafraud Grove." _Sooke News Mirror_ , April 13, 2011. 
https://www.sookenewsmirror.com/opinion/avafraud-grove/. Wu, Ken. "Avatar Grove must get saved." _Sooke News Mirror_ , April 20, 2011. https://www.sookenewsmirror.com/opinion/avatar-grove-must-get-saved/. Chapter 9: Growing an Icon Ancient Forest Alliance. "Canada's most significant big tree discovery in decades!" March 21, 2014. https://www.ancientforestalliance.org/news-item.php?ID=753. Hume, Mark. "Canada's loneliest tree still waiting on help." _Globe and Mail_ , June 9, 2014. https://www.theglobeandmail.com/news/british-columbia/canadas-loneliest-tree-around-1000-years-old-still-waiting-on-help/article19064507/. Stoltmann, Randy. _Hiking Guide to the Big Trees of Southwestern British Columbia_. Vancouver: Western Canada Wilderness Committee, 1987. Jones, H. H. "A Cyclone among Timber Titans." _British Columbia Magazine_ , vol. VII (1911). Chapter 10: Big Tree Hunting The University of British Columbia. B.C. BigTree Registry. http://bigtrees.forestry.ubc.ca. Chapter 11: Tall Tree Capital Goldman, Josephine. _Pioneer Days of Port Renfrew_. Privately printed, 1973. Norcross, E. Blanche, and Doris Farmer Tonkin. _Frontier Days of Vancouver Island_. Courtenay, B.C.: Island Books, 1969. Lunman, Kim. "Life at sawmill faces final cut." _Globe and Mail_ , January 20, 2001. https://www.theglobeandmail.com/news/national/life-at-sawmill-faces-final-cut/article1029767/. Parfitt, Ben. _Getting More from Our Forests: Ten Proposals for Building Stability in B.C.'s Forestry Communities_. Vancouver: Canadian Centre for Policy Alternatives, December 2005. Ancient Forest Alliance. "Horgan, Hicks, and Cash join Ancient Forest Alliance on tour of Avatar Grove and to Canada's biggest trees and stumps." September 28, 2010. https://www.ancientforestalliance.org/news-item.php?ID=136. Britten, Liam. "BC Hydro buys out properties below Jordan River dam." CBC News, May 17, 2016. http://www.cbc.ca/news/canada/british-columbia/b-c-hydro-jordan-river-1.3585351. 
Chapter 12: A New Ecosystem Leiren-Young, Mark. _The Killer Whale Who Changed the World_. Vancouver: Greystone Books, 2016. "Berlin zoo: Brain problems led to death of polar bear Knut." _Toronto Star_ , March 22, 2011. https://www.thestar.com/news/world/2011/03/22/berlin_zoo_brain_problems_led_to_death_of_polar_bear_knut.html. B.C. Ministry of Forests and B.C. Ministry of Environment. _Forest Practices Code of British Columbia: Biodiversity Guidebook_. Victoria: 1995. https://www.for.gov.bc.ca/hfd/library/documents/bib19715.pdf. British Columbia Forest Service. "The Retention System: maintaining forest ecosystem diversity." _Notes to the Field_ 7 (March 2002). https://www.for.gov.bc.ca/hfp/publications/00095/note_07.pdf. Stoltmann, Randy. _Hiking Guide to the Big Trees of Southwestern British Columbia_. Vancouver: Western Canada Wilderness Committee, 1987. Lavoie, Judith. "Retired logger ready to renew fight to save fir; magnificent old-growth stand viewed as being under threat despite logging company's denials." _Times Colonist_ , __ May 23, 2007. Bainas, Lexi. "Old-growth grove faces saws yet again." _Cowichan News Citizen_ , May 25, 2007. "Old-growth trees not coming down." _Cowichan News Citizen_ , May 30, 2007. Wilson, Carla. "Fallers persuade logging bosses to spare centuries-old fir grove." _Times Colonist_ , May 4, 1989. "Climbing Mount Everest is work for superman." _New York Times_ , March 18, 1923. Epilogue: A Giant B.C. Timber Sales. "Best Management Practices for Coastal Legacy Trees." https://www.for.gov.bc.ca/ftp/tsg/external/!publish/EMS2/Supplements/TSG-BMP-CoastalLegacyTrees.pdf. Pynn, Larry. "Call to protect B.C.'s 100 top heritage trees." _Vancouver Sun_ , January 31, 2009. 
## Acknowledgements

I am so grateful for all the timber workers, environmental activists, members of the Pacheedaht First Nation, residents of Port Renfrew, ecologists, and experts in their various fields who took the time to speak or take a walk in the woods with me, with special mention to Dennis and Lorraine Cronin, TJ Watt, Jeff Jones, Mark Carter, Ken Wu, Walter Van Hell, Dan Hagar, Greg Klem, Kristine Pearson, Bear Charlie, Arnie Bercov, Torrance Coste, Andy MacKinnon, Hans Tammemagi, Joe and Karen Simpson, Matthew Beatty, and Ray Travers, among many others.

I am thrilled that _Big Lonely Doug_ is the inaugural title in the Walrus Books imprint at House of Anansi Press. Thanks to Shelley Ambrose, executive director and publisher of _The Walrus_, and Sarah MacLachlan, president and publisher of Anansi, for their enthusiasm for this story. Research and reporting for this book simply would not have been possible without support from the Chawkers Foundation Writers Project, for which I am deeply grateful.

A special thanks to Carmine Starnino, deputy editor at _The Walrus_ magazine, for editing my original article that appeared in the October 2016 issue, and for helping me hone the story into one about us — our relationships, our motivations, our emotions — as much as one about a tree. I'm so proud the article resonated with so many people, won a silver National Magazine Award, and was reprinted in _Reader's Digest Canada_ — and I owe a great deal to him for championing the story.

I am exceptionally grateful to Janie Yoon, my editor at House of Anansi, for her vision when it came to expanding this story and for her sharp yet kind editing. I could not have asked to be in better hands for my first book.
And to everyone at Anansi — including managing editor Maria Golikova, for patiently fielding all my extremely basic questions about how a book is made, Alysia Shewchuk for the beautiful cover design, Gemma Wain for her detailed copyediting and vital fact-checking, and Peter Norman for the final proofread.

I have been extremely fortunate to have learned from and worked under some generous and talented journalists and writers early in my career. I would like to thank University of King's College professor David Swick for his mentorship and friendship; Stephanie Nolen for accepting my plea to intern for her at the _Globe and Mail_'s South Asia bureau in New Delhi, India; and Matthew McKinnon, editor and former colleague at _The Walrus_, for his guidance and encouragement in writing and editing. And thanks to author Kevin Patterson for lending me his sailboat, where I managed an important breakthrough in the writing of this book despite never untying from the dock, and for being so relentlessly positive.

Writing this book was often an isolating experience. I was so grateful for a group of friends in Toronto who met in the University of King's College's journalism program — Geoff Lowe, Julia Pagel, Thea Fitz-James, Miles Kenyon, Laura Bain, Kevin Philipupillai, and Laura Armstrong — who have all made that big city with its tiny trees feel like home.

To my sisters, Britta and Clare, for adventuring in the forests together, looking for "watermelon slices" when we were young, and to my mum and dad for constantly surrounding me with books and magazines and newspapers and stories — and for never telling me to climb down from that tree.

## Index

activists vs.
loggers arrests of activists, , , , Avatar Grove value, Carmanah Valley, –, , – Clayoquot Sound, – loggers' cynicism, , , , – meaning of trees, media campaigns, – saving Big Lonely Doug, _See also_ direct action; environmental activists; loggers Adams, Bryan, adopt-a-tree campaign, , alder trees, , allowable annual cut (AAC), Ancient Forest Alliance (AFA) AFA platforms, – Big Lonely Doug recreational reserve application, – broader mobilization, –, –, –, , , –, campaigns with Big Lonely Doug, –, –, , community meetings, – Eden Grove, –, – speculating about Big Lonely Doug, – tourism, – _See also_ Avatar Grove; Watt, TJ; Wu, Ken Anderson sawmill, _Anthropocene_ (film), Arboreal Collective, artists, – augmented reality installations, – _Avatar_ (film), – Avatar Grove AFA finding, – cynicism regarding, –, , – the Gnarly Tree, – managing, naming of, , Pacheedaht people and, –, – protection of, –, Teal Jones and, – as tourist destination, –, – Avro Anson L7056 airplane, –, – Bateman, Robert, B.C. BigTree Registry, –, – B.C. Forest Alliance, B.C. 
Timber Sales, bears, –, – Beatty, Matthew, Bell, Pat, , , , –, – Bens, Samuel J., Bercov, Arnie, , – Big Lonely Doug in 3D, –, – age of, – as anchor, – Burtynsky and, – as captivating, , –, – climbing, – Cronin finding, –, – as endangered, –, –, , height of, – leaving intact, –, , , location and visitors, logging responsible for, marking, as monument, as recreational reserve, –, – root networks, status and, sun catching, as symbol, –, –, –, , – in tampon commercial, Watt finding, , – Wu first view, – Big Tree Protection Order, biodiversity, _See also_ biomass _Biodiversity Guidebook_ , – biogeoclimatic zones, – biomass, botanizing missions, – bristlecone pine trees, – British Columbia, –, – _See also_ Vancouver Island British Columbia Lumber Company, Brown, Robert, burls, Burson-Marsteller company, Burtynsky, Edward, – California Gold Rush, – Cameron, James, candelabra tops, _Carmanah: Artistic Visions of an Ancient Rainforest_ , _Carmanah Forever_ (film), – Carmanah Giant, –, –, –, Carmanah Valley, –, – Carmanah Walbran Provincial Park, , , Carr, Emily, –, Carter, Mark, Cary, George, Cary fir, cedars, , , , , –, – chainsaws, – chambers of commerce, – Charlie, Bear, , Clark, Christy, Clayoquot Sound, , , clear-cutting artists' views of, –, early concern, – inventory on Vancouver Island, – marking trees, –, monitoring, roads, – root networks and, second-growth forests, –, –, , , – timber companies' views of, visual descriptions, , , –, , _See also_ activists vs. loggers; cutblock 7190; cutblocks; environmental activists; loggers; logging industry; old-growth forests; timber companies Coastal Douglas Fir zone, Coastal Western Hemlock zone, Coast Salish people, – colonization, _See also_ botanizing missions; settlers Columbia River, , cones, –, – conservation B.C. 
BigTree Registry, early, –, – and economics, as emotional, –, , for root networks, – _See also_ environmental ­activists Cook, James, – Council of Forest Industries, Cronin, Dennis activists and, – captivated by Big Lonely Doug, , – cutblock 7190 trees, – death of, Eden Grove and, finding Avro Anson L7056, – finding Big Lonely Doug, – finding CMTs, – guessing age of Big Lonely Doug, illness, – as logger, –, – protecting Big Lonely Doug, –, –, , Steller jay and, – Cronin, Lorraine, –, , –, –, –, culturally modified trees (CMTs), –, , , – _See also_ Indigenous Peoples cutblock 7190, –, –, –, –, _See also_ Big Lonely Doug cutblocks, –, , , _See also_ clear-cutting; loggers; logging industry "A Cyclone Among the Timber Titans" (Jones), – Deakin, Alfred, "Dennis Cronin Memorial Tree" _See_ Big Lonely Doug Diitiida River/Jordan River, , direct action against activists, blockades Clayoquot Sound, blockades of Queen Charlotte Islands, confusing loggers, – dismissed, vs. education, – GDP and, Pacheedaht support and, tree sitting, , tree spiking, –, _See also_ activists vs. 
loggers donkey engine, – Dorst, Adrian, – Douglas, David collecting samples, , –, –, , on the Columbia River, , – death of, description of Douglas fir, – as Horticultural Society explorer, –, –, – interest in Douglas fir, , taxonomic name Douglas fir, , value of wood, – vernacular name Douglas fir, Douglas fir description of, –, –, , diameter of, falling, – heartiness of, –, – height of, , , –, –, – lifespan, , locations of, logging industry and, –, and marbled murrelets, oldest in 1957, – photograph controversy, – Red Creek Fir, as roads, root networks, , in shipbuilding, taxonomic names of, – _See also_ Big Lonely Doug economics, –, , –, –, , –, Eden Grove, –, –, Edinburgh Mountain, –, , , –, , _See also_ Big Lonely Doug Elm Conflict, environmental activists broader mobilization, –, ( _see also_ Ancient Forest Alliance) Carmanah Valley importance, Clayoquot Sound, climate change, definitions of old-growth, – as "eco-terrorists," – focus on positives, Indigenous Peoples and, , –, photography as effective, rise of, – as salespeople, Sierra Club B.C. and Vancouver Island, subvertisements, – symbols needed, tree-centric concerns, tree sitting, tree spiking, – unions and, – variable retention and, _See also_ activists vs. 
loggers; Ancient Forest Alliance; Avatar Grove; direct action; Stoltmann, Randy; Watt, TJ; Western Canada Wilderness Committee; Wu, Ken Expo 67, fallers _See_ loggers FernGully Grove, fire, –, –, – Fletcher Challenge, – fog, – Forest Act, –, forest buffers, –, –, , , forest engineers, –, _See also_ Cronin, Dennis forest management, –, – Forest Practices Board (FPB), –, Forest Practices Code of British Columbia Act/Forest and Range Practices Act, , – forestry code, forests death in, – definitions, – as spiritual, Forest Service, , , "Forests Forever" campaign, Fortune, Robert, – Foy, Joe, fungi, , , –, –, _The Future of Our Forests_ (Forest Resources Commission), George, Paul, , , –, , Gnarly Tree, – gold rush, – Gordon River, – Gordon River Valley, , , , _See also_ Avatar Grove; Big Lonely Doug Great Bear Rainforest, Greenpeace, greenwashing, Gye, Mike, Hagar, Dan, –, –, Haida people, Halpert, George, Heaven Tree, hemlock trees, , high riggers, – _Hiking Guide to the Big Trees of Southwestern British Columbia_ (Stoltmann), , Hill, Julia "Butterfly," historic logging early concern, – early falling methods, Indigenous Peoples, –, , , – settlers, , –, – _See also_ loggers; logging industry Hooker, William Jackson, Horticultural Society of London, –, , , –, – Hudson's Bay Company, , Hughes, Don, – _Hundreds and Thousands_ (Carr), Hyperion, _The Incredible Forest_ (film), Indigenous Peoples as activists, activists and, , –, _Avatar_ film and, botanists and, culturally modified trees (CMTs), –, , , – as early loggers, –, passing big trees, settlers and, stumpage fees, , West Coast Trail, _See also_ Nuu'chah'nulth people; Pacheedaht people Interfor, – Jakubal, Mikal, Jones, H. 
H., , – Jones, Jeff, , , –, – Jordan River (town), – _Journal of the Royal Horticultural Society_ (journal), – Jurassic Grove, kayaking, Kinsol Trestle, Kitsumkalum First Nation, Klem, Greg, – Knut (polar bear), – Koksilah Grove, – Kroitor, Christian, Kwakwaka'wakw people, – Lake Cowichan, , Lasn, Kalle, – legacy trees, – lightness/darkness, – loggers antipathy towards activists, , –, , – in British Columbia, – chainsaws, – chokermen jobs, , early tree cutting, ( _see also_ historic logging) faller jobs, –, feller bunchers, – Forest Act and, – high rigger jobs, – hooktender jobs, , job losses, –, – jobs in British Columbia, – media usage, – as protectors, – refusing to cut, – sentiment on old-growth, in storm, – transportation, tree spiking and, –, unions, – work-related deaths, , –, _See also_ activists vs. loggers; clear-cutting; Cronin, Dennis; timber companies; Van Hell, Walter logging industry best-practices guide, conglomerates, deception, deregulation, – evolving communities, – expansion, – expansion and Douglas fir, – expectations of, – historic logging, , , , – impatience, – local processing, – logging as obligation, –, –, loopholes for clear-cutting, mandatory replanting, selective logging, – transportation, –, –, –, – trusting, variable retention, ( _see also_ clear-cutting) _Lord of the Rings_ (Tolkien), Lumpy tree, MacKinnon, Andy, –, – MacMillan, Harvey Reginald (H. R.), MacMillan Bloedel Carmanah Giant and, , –, , Carmanah Valley logging, –, marketing films, oldest Douglas fir and, on tree spiking, _See also_ Weyerhaeuser Mallory, George, _The Managed Forest_ (film), _Manufactured Landscapes_ (film), maple trees, marbled murrelets, McClure, John, – McMullan, Don, measuring trees, , , , –, Menzies, Archibald, , , Methuselah tree, – Moby Doll orca, Muir, John, , , mycorrhiza, –, –, new ecosystem, – Nitinat Valley, Nootka Sound, , nurse logs, Nuu'chah'nulth people, –, o.b. 
company, old-growth forests age of, –, – animals in, –, , – composition of, , –, –, culturally modified trees and, ( _see also_ culturally modified trees) death in, – definitions, , – as disordered, – dwindling, , ( _see also_ clear-cutting) as ecological emergency, inventory on Vancouver Island, –, , legacy trees, – locations of, logging industry sentiment and, pictures evoked, vs. second-growth forests, , as soil filter, sounds in, underground structure, –, – _See also_ rainforests (temperate) old-growth management area (OGMA), – Pacheedaht people activists and, –, –, benefits of Tall Tree Capital, – controlling territory, – as guides, – as loggers, , – protecting Big Lonely Doug, sawmill, –, – Teal Jones and, Pacific Ocean, –, , , , Pacific temperate rainforests _See_ rainforests (temperate) Patagonia company, – Pearson, Kristine, , Pegg, Mike, photography _See_ Burtynsky, Edward; Watt, TJ Port Alberni, Port Renfrew, , , –, – _See also_ Avatar Grove; Big Lonely Doug; cutblock 7190 Port San Juan, Prometheus tree, Public and Private Workers of Canada, – Queen Charlotte goshawk, rain, , , rainforests (temperate) animals in, , artists' views of, Big Lonely Doug and, biomass, canopy of, , as cathedrals, – colours in, – composition of, – historic logging, , –, locations of, – tree cutting processes, – tree growth, , , –, valleys of, –, _See also_ old-growth forests raw logs, – Red Creek Fir, replanting, Richardson, Kelly, root networks, –, – Ross, William Roderick, – salmon, –, , – sawmills, –, , –, _Scorned as Timber, Beloved of the Sky_ (Carr), second-growth forests, –, –, , , – seedlings, – seeds, – _See also_ cones sequoia trees, settlers, –, Shadbolt, Jack, Shawnigan Lake, – Sierra Club, Simard, Suzanne, –, – Simpson, Joe, – Simpson, Karen, – Sitka clothing company, Sitka spruce Carmanah Giant, , , – Cronin and, description of, Douglas and, FernGully Grove, – Heaven Tree, height of, Van Hell Spruce, Sloan, Gordon, – Sloan Commission, – social media, Sombrio 
Beach, – Species at Risk Act, Stamp, Edward, _Steep Trails_ (Muir), Steller jay, – Stoltmann, Randy, , –, , , , – storms/wind, , , , –, –, , stumpage fees, , , Stumpy (tree stump), sun, Suzuki, David, –, Tall Tree Capital _See_ Port Renfrew Tammemagi, Hans, – taxonomy, – Teal Jones Big Lonely Doug bridge and, – compensation to, – culturally modified trees and, logging practices, , , – marking trees, Pacheedaht people and, processing logs, – protecting Big Lonely Doug, –, – _See also_ Cronin, Dennis threatened species, thujaplicin, timber companies definitions of old-growth, – media campaigns, –, pressure within, – stumpage fees, _See also_ clear-cutting; Forest Act; Forest Service; logging industry; MacMillan Bloedel; Teal Jones; TimberWest; Weyerhaeuser timber industry _See_ logging industry timber licences (TLs), TimberWest, –, timber workers _See_ loggers Tofino, tree climbing, – tree farm licences (TFLs), , , , , tree hugging, tree hunting, –, –, –, –, –, –, _See also_ tree registry tree registry, – _See also_ tree hunting trees (general) coastline trees, – wildlife tree, – tree sitting, , tree spiking, , trucks, unions, – Van Beers, Louie, – Vancouver Island coastline of, – fires in, – inventory of old-growth, as model, old-growth decline, Vancouver Island Ranges, – Van Hell, Walter, , , –, , Van Hell Spruce, Victoria, B.C., Waddington Alley, Walbran Valley, , , –, War in the Woods, , , _Watermark_ (film), Watt, TJ about, – climbing Big Lonely Doug, –, discovery and, – dispirited about Big Lonely Doug, – in Eden Grove, –, enthusiasm for Big Lonely Doug, –, on FernGully Grove, finding Avatar Grove, finding Big Lonely Doug, – finding the Gnarly Tree, – meeting Lorraine, – photographing Big Lonely Doug, – as tree hunter, –, –, –, – TV interviews, urging governments, _See also_ Ancient Forest Alliance (AFA) Webb, Clinton, –, Welter, T. W., West Coast Trail, Western Canada Wilderness Committee (WCWC) about, , B.C. BigTree Registry, – B.C. 
legislature protest, – broader mobilization, – Carmanah Giant protests, –, –, –, as charity, War in the Woods, Wu and George, – _See also_ Stoltmann, Randy _Western Lumberman_ (magazine), – western red cedars, , , –, – Weyerhaeuser, – _See also_ MacMillan Bloedel Whistler, wildlife tree, – wind/storms, , , , –, –, , Wu, Ken about, – B.C. legislature protest, – on Big Lonely Doug legacy, Big Tree Protection Order, dismissing hypocrisy, on destruction, Eden Grove and, – as filmmaker, – first view of Big Lonely Doug, – on logging industry practices, meeting with Bell, on Pacheedaht sawmill, – promoting Avatar Grove, –, , , starting AFA, – Tofino and, TV interviews, urging governments, _See also_ Ancient Forest Alliance; Avatar Grove

## The Walrus Books

The Walrus sparks essential Canadian conversation by publishing high-quality, fact-based journalism and producing ideas-focused events across the country. The Walrus Books, a partnership between The Walrus, House of Anansi Press, and the Chawkers Foundation Writers Project, supports the creation of Canadian non-fiction books of national interest. _Big Lonely Doug_ is the first in this series.

thewalrus.ca/books

## About the Author

Harley Rustad is an editor at _The Walrus_ magazine. His articles and photography have been published in _The Walrus_, _Outside_, the _Globe and Mail_, _Geographical_, CNN, and elsewhere. He has reported from India, Nepal, Cuba, and across Canada. Born on Salt Spring Island, B.C., he lives in Toronto.

@hmrustad
harleyrustad.com

## About the Publisher

House of Anansi Press was founded in 1967 with a mandate to publish Canadian-authored books, a mandate that continues to this day even as the list has branched out to include internationally acclaimed thinkers and writers. The press immediately gained attention for significant titles by notable writers such as Margaret Atwood, Michael Ondaatje, George Grant, and Northrop Frye.
Since then, Anansi's commitment to finding, publishing and promoting challenging, excellent writing has won it tremendous acclaim and solid staying power. Today Anansi is Canada's pre-eminent independent press, and home to nationally and internationally bestselling and acclaimed authors such as Gil Adamson, Margaret Atwood, Ken Babstock, Peter Behrens, Rawi Hage, Misha Glenny, Jim Harrison, A. L. Kennedy, Pasha Malla, Lisa Moore, A. F. Moritz, Eric Siblin, Karen Solie, and Ronald Wright. Anansi is also proud to publish the award-winning nonfiction series The CBC Massey Lectures. In 2007, 2009, 2010, and 2011 Anansi was honoured by the Canadian Booksellers Association as "Publisher of the Year."

## Image Gallery

Culturally modified trees, like this one found on Flores Island in Clayoquot Sound, Vancouver Island — where the anti-logging campaign known as the War in the Woods was sparked — are used by many First Nations in coastal British Columbia as records of their historical presence and forest use. (Photograph by Harley Rustad)

The controversial photograph of the Cary Fir, an allegedly 126.2-metre-tall Douglas fir said to be felled near Vancouver in 1895 by logger George Cary, has been widely accepted as a hoax. (Image C-06489 courtesy of the Royal BC Museum and Archives)

Timber workers for A And L Logging Co., circa 1926, using a large Douglas fir on Vancouver Island as a spar tree, an anchor point for cables pulled by a steam "donkey" to haul logs out of a cutblock. (Image D-04875 courtesy of the Royal BC Museum and Archives)

Logger Dennis Cronin beside the large Douglas fir in cutblock 7190 that would come to be named Big Lonely Doug — pictured here the day he wrapped the green "Leave Tree" flagging around its trunk. (Courtesy of Lorraine Cronin)

Big Lonely Doug surrounded by the clear-cut remains of cutblock 7190. Around the cutblock is the old-growth forest known as Eden Grove and the replanted second-growth forests in the Gordon River Valley, near Port Renfrew.
(Photograph by TJ Watt) Activist, photographer, and big tree hunter TJ Watt in Eden Grove, an intact patch of old-growth forest next door to Big Lonely Doug. (Photograph by Björn Hermannes) A tree climber, part of a group of forest activists that accurately measured the height of Big Lonely Doug in May 2014, hauls himself towards the canopy of the second-largest Douglas fir in Canada. (Photograph by TJ Watt) Ancient Forest Alliance founder Ken Wu alongside the stump of a western red cedar that was cut near Big Lonely Doug and that sparked the petition for formally protecting Avatar Grove. (Photograph by TJ Watt) ## Contents 1. Big Lonely Doug 2. Copyright 3. Dedication 4. Contents 5. Prologue 6. Chapter 1 7. Chapter 2 8. Chapter 3 9. Chapter 4 10. Chapter 5 11. Chapter 6 12. Chapter 7 13. Chapter 8 14. Chapter 9 15. Chapter 10 16. Chapter 11 17. Chapter 12 18. Epilogue 19. Notes 20. Acknowledgements 21. Index 22. The Walrus Books 23. About the Author 24. About the Publisher 25. Image Gallery ## Landmarks 1. Cover 2. Body Matter # List of Pages 1. i 2. iii 3. iv 4. v 5. vii 6. xii 7. 8. 9. 10. 11. 12. 13. 14. 15. 16. 17. 18. 19. 20. 21. 22. 23. 24. 25. 26. 27. 28. 29. 30. 31. 32. 33. 34. 35. 36. 37. 38. 39. 40. 41. 42. 43. 44. 45. 46. 47. 48. 49. 50. 51. 52. 53. 54. 55. 56. 57. 58. 59. 60. 61. 62. 63. 64. 65. 66. 67. 68. 69. 70. 71. 72. 73. 74. 75. 76. 77. 78. 79. 80. 81. 82. 83. 84. 85. 86. 87. 88. 89. 90. 91. 92. 93. 94. 95. 96. 97. 98. 99. 100. 101. 102. 103. 104. 105. 106. 107. 108. 109. 110. 111. 112. 113. 114. 115. 116. 117. 118. 119. 120. 121. 122. 123. 124. 125. 126. 127. 128. 129. 130. 131. 132. 133. 134. 135. 136. 137. 138. 139. 140. 141. 142. 143. 144. 145. 146. 147. 148. 149. 150. 151. 152. 153. 154. 155. 156. 157. 158. 159. 160. 161. 162. 163. 164. 165. 166. 167. 168. 169. 170. 171. 172. 173. 174. 175. 176. 177. 178. 179. 180. 181. 182. 183. 184. 185. 186. 187. 188. 189. 190. 191. 192. 193. 194. 195. 196. 197. 198. 199. 200. 201. 202. 203. 204. 205. 
News: In the Philippines, the government calls for proposals from the creative industry sector

Nina Unlay, May 20, 2020

MANILA, PHILIPPINES — The Department of Science and Technology – Philippine Council for Industry, Energy, and Emerging Technology Research and Development (DOST-PCIEERD) is calling for research and development proposals to support the development of creative industries in the country. These proposals must fall under the priority areas of Heritage or Functional Creations. Heritage includes arts and crafts (i.e. furniture, household goods, paper, etc.) as well as design. Functional Creations cover new media such as software and animation, industrial craft, and other creative-related technologies.

Creative industries are considered one of the fastest-growing sectors in the global economy, and developed countries capitalize on them to contribute significantly to their Gross Domestic Product (GDP). The Philippines is among the developing countries with a rich cultural heritage as well as a pool of creative talents that can potentially boost the economy through its creative goods. A PDIS report states that the Philippines supplies and exports a wide range of creative goods including creative services, research and development (R&D), design goods, art crafts, new media, and architecture. While the GDP contribution of creative industries slipped from 13.8 percent in 2006 to 5.44 percent in 2009, Philippine creative goods exports increased from $776 million in 2005 to $915 million in 2015, an 18 percent increase. Over the years, the performance of creative industries shows that they have the ability to support and strengthen different fields to be globally competitive while promoting locally made products and designs.
The intersection of culture, technology, and innovation allows creative economies to succeed. Science and technology research can power the economic development of the country, long believed to have the potential to be an Asian creative hub.

All proposals should be submitted online through dpmis.dost.gov.ph on or before May 31, 2020. For those who are interested to submit proposals, the complete package of the Call can be viewed at the PCIEERD website: bit.ly/C4PForms

#CallForProposals2020 #ScienceForThePeople #DOSTPh
\section{Introduction} The Donaldson-Thomas invariant (DT invariant, for short) is a virtual count of stable sheaves on a smooth projective Calabi-Yau 3-fold $Y$ over $\mathbb{C}$ which was defined as the degree of the virtual fundamental class of the moduli space $\mathfrak{X} $ of stable sheaves (\cite{Tho}). Using microlocal analysis, Behrend showed that the DT invariant is in fact the Euler number of the moduli space, weighted by a constructible function $\nu_{\mathfrak{X} }$, called the Behrend function (\cite{Beh}). Since the ordinary Euler number is the alternating sum of Betti numbers of cohomology groups, it is reasonable to ask if the DT invariant is in fact the Euler number of a cohomology theory on $\mathfrak{X} $. On the other hand, it has been known that the moduli space is locally the critical locus of a holomorphic function, called a local Chern-Simons functional (\cite{JoSo}). Given a holomorphic function $f$ on a complex manifold $V$, one has the perverse sheaf $\phi_f(\mathbb{Q}[\dim V-1])$ of vanishing cycles supported on the critical locus and the Euler number of this perverse sheaf at a point $x$ equals $\nu_{\mathfrak{X} }(x)$. This motivated Joyce and Song to raise the following question (\cite[Question 5.7]{JoSo}). \medskip \noindent {\it Let $\mathfrak{X} $ be the moduli space of simple coherent sheaves on $Y$. Does there exist a natural perverse sheaf $P^{\bullet} $ on the underlying analytic variety $X=\mathfrak{X} _{red}$ which is locally isomorphic to the sheaf $\phi_f(\mathbb{Q}[\dim V-1])$ of vanishing cycles for $f, V$ above?} \medskip The purpose of this paper is to provide an affirmative answer. \begin{theo}\label{thmInt} (Theorem \ref{truemainth} and Theorem \ref{theo4.3.8})\\ Let $\mathfrak{X} $ be a quasi-projective moduli space of simple sheaves on a smooth projective Calabi-Yau 3-fold $Y$ with universal family ${\cal E}$ and let $X=\mathfrak{X} _{red}$ be the reduced scheme of $\mathfrak{X} $. 
Then there exist an \'etale Galois cover $$\rho:X^\dagger\to X=X^\dagger/G$$ and a perverse sheaf $P^{\bullet} $ on $X^\dagger$, which is locally isomorphic to the perverse sheaf $\phi_f(\mathbb{Q}[\dim V-1])$ of vanishing cycles for the local Chern-Simons functional $f$. In fact, for any \'etale Galois cover $\rho:X^\dagger\to X$, there exists such a perverse sheaf $P^{\bullet} $ if and only if the line bundle $\rho^*\det( \Ext_{\pi}^\bullet({\cal E},{\cal E}))$ admits a square root on $X^\dagger$ where $\pi:X\times Y\to X$ is the projection and $ \Ext_{\pi}^\bullet({\cal E},{\cal E})=R\pi_*R{\cal H} om({\cal E},{\cal E})$. \end{theo} We will also prove the same for mixed Hodge modules (Theorem \ref{thmMHM}), i.e. there is a mixed Hodge module $M^{\bullet} $ on $X^\dagger$ whose underlying perverse sheaf is $rat(M^{\bullet} )=P^{\bullet} $. (See \S\ref{sec9}.) Note that the perverse sheaf $P^{\bullet} $ may not be unique because we can always twist $P^{\bullet} $ by a $\mathbb{Z}_2$-local system on $X$. \medskip As an application of Theorem \ref{thmInt}, when $\det \Ext_{\pi}^\bullet({\cal E},{\cal E})$ admits a square root, the hypercohomology $\mathbb{H} ^i(X,P^{\bullet} )$ of $P^{\bullet} $ gives us the \emph{DT (Laurent) polynomial} $$DT_t^Y(X)=\sum_i t^i\dim \mathbb{H} ^i(X,P^{\bullet} )$$ such that $DT_{-1}^Y(X)$ is the ordinary DT invariant by \cite{Beh}. Another application is a mathematical theory of Gopakumar-Vafa invariants (GV for short) in \cite{GoVa}. Let $\mathfrak{X} $ be a moduli space of stable sheaves supported on curves of homology class $\beta\in H_2(Y,\mathbb{Z})$. The GV invariants are integers $n_h(\beta)$ for $h\in \mathbb{Z}_{\ge 0}$ defined by an $sl_2\times sl_2$ action on \emph{some cohomology} of $\mathfrak{X} $ such that $n_0(\beta)$ is the DT invariant of $\mathfrak{X} $ and that they give all genus Gromov-Witten invariants $N_g(\beta)$ of $Y$. 
By Theorem \ref{thmInt}, when $\det \Ext_{\pi}^\bullet({\cal E},{\cal E})$ admits a square root, there exists a perverse sheaf $P^{\bullet} $ on $X$ which is locally the perverse sheaf of vanishing cycles. By the relative hard Lefschetz theorem for the morphism to the Chow scheme (\cite{Sai88}), we have an action of $sl_2\times sl_2$ on $\mathbb{H} ^*(X,\hat{P}^{\bullet} )$ where $\hat{P}^{\bullet} $ is the associated graded of $P^{\bullet} $ with respect to the filtration of $P^{\bullet} $ which is the image of the weight filtration of the mixed Hodge module $M^{\bullet} $ with $rat(M^{\bullet} )=P^{\bullet} $. This gives us a geometric theory of GV invariants which we conjecture to give all the GW invariants $N_g(\beta)$. \bigskip Our proof of Theorem \ref{thmInt} relies heavily on gauge theory. By the Seidel-Thomas twist (\cite[Chapter 8]{JoSo}), it suffices to consider only vector bundles on $Y$. Let ${\cal B}={\cal A}/{\cal G}$ be the space of semiconnections on a hermitian vector bundle $E$ modulo the gauge group action and let ${\cal B}_{si}$ be the open subset of simple points. Let $cs:{\cal B}\to \mathbb{C}$ be the (holomorphic) Chern-Simons functional. Let $\mathfrak{X} \subset {\cal B}_{si}$ be a locally closed complex analytic subspace. We call a finite dimensional complex submanifold $V$ of ${\cal B}_{si}$ a \emph{CS chart} if the critical locus of $f=cs|_V$ is $V\cap \mathfrak{X} $ and is an open complex analytic subspace of $\mathfrak{X} $. By \cite{JoSo}, at each $x\in X$, we have a CS chart $V$ with $T_xV=T_x\mathfrak{X} $, which we call the Joyce-Song chart (JS chart, for short). Thus we have a perverse sheaf $P^{\bullet} |_V$ on $V\cap X$. One of the difficulties in gluing the local perverse sheaves $P^{\bullet} |_V$ is that the dimensions of the JS charts $V$ vary from point to point.
In this paper, we show that there are \begin{enumerate} \item a locally finite open cover $X=\cup_\alpha U_\alpha$; \item a (continuous) family ${\cal V}_\alpha\to U_\alpha$ of CS charts of constant dimension $r$, each of which contains the JS chart; \item a homotopy $\mathbf{V} _{\alpha\beta}\to U_{\alpha\beta}\times [0,1]$ from ${\cal V}_\alpha|_{U_{\alpha\beta}}=\mathbf{V} |_{t=0}$ to ${\cal V}_\beta|_{U_{\alpha\beta}}=\mathbf{V} |_{t=1}$\end{enumerate} where $U_{\alpha\beta}=U_\alpha\cap U_\beta$. We call such a collection \emph{CS data}. (See Proposition \ref{prCSdata}.) From the CS data, we can extract perverse sheaves $P^{\bullet} _\alpha$ on $U_\alpha$ for all $\alpha$ and gluing isomorphisms $\sigma_{\alpha\beta}:P^{\bullet} _\alpha|_{U_{\alpha\beta}}\to P^{\bullet} _\beta|_{U_{\alpha\beta}}$. (See Proposition \ref{plopgl}.) The 2-cocycle obstruction for gluing $\{P^{\bullet} _\alpha\}$ to a global perverse sheaf is shown to be $$\sigma_{\alpha\beta\gamma}=\sigma_{\gamma\alpha}\circ\sigma_{\beta\gamma}\circ\sigma_{\alpha\beta}=\pm 1\in \mathbb{Z}_2$$ which coincides with the 2-cocycle obstruction for gluing the determinant line bundles of the tangent bundles of CS charts. Since the perfect obstruction theory $\Ext^{\bullet} _\pi({\cal E},{\cal E})$ for $\mathfrak{X} $ is symmetric, the determinant of the tangent bundle is a square root of $\det\, \Ext^{\bullet} _\pi({\cal E},{\cal E})$. Therefore the local perverse sheaves $\{P^{\bullet} _\alpha\}$ glue to a global perverse sheaf if and only if there is a square root of $\det\, \Ext^{\bullet} _\pi({\cal E},{\cal E})$ in $\mathrm{Pic}(X)$. (See Theorem \ref{truemainth}.) When $\mathfrak{X} $ is the moduli scheme of one dimensional stable sheaves on $Y$, we show that the torsion-free part of $\det\, \Ext^{\bullet} _\pi({\cal E},{\cal E})$ has a square root by Grothendieck-Riemann-Roch. More generally, Z. Hua (\cite{Hua}) proved that it is true for all sheaves.
In \S9, we simplify his proof and generalize his result to the case of perfect complexes. By taking a spectral cover using a torsion line bundle, we obtain a finite \'etale Galois cover $\rho:X^\dagger\to X$ with a cyclic Galois group $G$ and a perverse sheaf $P^{\bullet} $ on $X^\dagger$ which is locally the perverse sheaf of vanishing cycles of a local CS functional. (See Theorem \ref{theo4.3.8}.) \medskip The layout of this paper is as follows. In \S2, we recall necessary facts about the perverse sheaves of vanishing cycles and their gluing. In \S3, we collect the main results of this paper. In \S4, we prove that there exists a structure which we call preorientation data on $X$. In \S5, we show that preorientation data induce the CS data mentioned above. In \S6, we show that CS data induce local perverse sheaves, gluing isomorphisms and the obstruction class for gluing. In \S7, we prove an analogue of Theorem \ref{thmInt} for mixed Hodge modules. In \S8, we develop a theory of GV invariants. In \S9, we discuss the existence of a square root of $\det\, \Ext^{\bullet} _\pi({\cal E},{\cal E})$. \bigskip An incomplete version of this paper was posted on the arXiv on October 15, 2012 (1210.3910). Some related results were independently obtained by C. Brav, V. Bussi, D. Dupont, D. Joyce and B. Szendroi in \cite{BBDJS}. We are grateful to Dominic Joyce for his comments and suggestions. We thank Martin Olsson for his comments, Zheng Hua for informing us of his paper \cite{Hua} and Yan Soibelman for his comments. We also thank Takuro Mochizuki for answering questions on mixed Hodge modules. \vskip5pt \noindent \textbf{Notations}. A complex analytic space is a locally ringed space which is covered by open sets, each of which is isomorphic to a ringed space defined by an ideal of holomorphic functions on an analytic open subset of $\mathbb{C}^n$ for some $n>0$, and whose transition maps preserve the sheaves of holomorphic functions.
A complex analytic variety is a reduced complex analytic space. We will denote the variety underlying a complex analytic space $\mathfrak{X} $ by $X$. We will use smooth functions to mean $C^\infty$ functions. In case the space is singular, with a stratification by smooth strata, smooth functions are continuous functions that are smooth along each stratum. We use analytic functions to mean continuous functions that locally have power series expansions in the real and imaginary parts of coordinate variables. We will work with analytic topology unless otherwise mentioned. \vskip5pt \section{Perverse sheaves of vanishing cycles}\label{sec2} In this section, we recall necessary facts about perverse sheaves of vanishing cycles. Let $X$ be a complex analytic variety and $D^b_c(X)$ the bounded derived category of constructible sheaves on $X$ over $\mathbb{Q}$. Perverse sheaves are sheaf complexes which behave like sheaves. \begin{defi}\label{def1} An object $P^{\bullet} \in D^b_c(X)$ is called a \emph{perverse sheaf} (with respect to the middle perversity) if \begin{enumerate} \item $\dim \{x\in X\,|\, H^i(\imath_x^*P^{\bullet} )=\mathbb{H} ^i(B_\varepsilon(x);P^{\bullet} )\ne 0\} \le -i$ for all $i$; \item $\dim \{x\in X\,|\, H^i(\imath_x^!P^{\bullet} )=\mathbb{H} ^i(B_\varepsilon(x),B_\varepsilon(x)-\{x\};P^{\bullet} )\ne 0\} \le i$ for all $i$ \end{enumerate} where $\imath_x:\{x\}\hookrightarrow X$ is the inclusion and $B_\varepsilon(x)$ is the open ball of radius $\varepsilon$ centered at $x$ for $\varepsilon$ small enough. \end{defi} Perverse sheaves form an abelian category $Perv(X)$ which is the core of a t-structure (\cite[\S2]{BBD}). An example of a perverse sheaf is the sheaf of vanishing cycles, which is the focus of this paper. \begin{defi}\label{def2} Let $f:V\to \mathbb{C}$ be a continuous function on a pseudo-manifold $V$. We define \[A^{\bullet} _f:=\phi_f(\mathbb{Q}[-1])=R\Gamma_{\{\mathrm{Re} f\le 0\}}\mathbb{Q}|_{f^{-1}(0)}.
\] \end{defi} When $V$ is a complex manifold of dimension $r$ and $f$ is holomorphic, $A^{\bullet} _f[r]$ is a perverse sheaf on $f^{-1}(0)$. (See \cite[Chapter 8]{KaSh}.) Let $X_f$ be the critical set $(df=0)$ of $f$ in $V$. Since $A^{\bullet} _f[r]=\phi_f(\mathbb{Q}[r-1])$ is zero on the smooth manifold $f^{-1}(0)-X_f$, $A^{\bullet} _f[r]$ is a perverse sheaf on $X_f$, called the \emph{perverse sheaf of vanishing cycles} for $f$. The stalk cohomology of $A^{\bullet} _f[1]$ at $x\in f^{-1}(0)$ is the reduced cohomology $\tilde H^{\bullet} (M_f)$ of the Milnor fiber $$M_f=f^{-1}(\delta)\cap B_\epsilon(x) \quad\text{for }0<\delta \ll \epsilon\ll 1.$$ \begin{prop}\label{prop2} Let $f, f_0,f_1:V\to \mathbb{C}$ be continuous functions on a pseudo-manifold $V$.\\ (1) Let $Z$ be a subset of $f^{-1}_0(0)\cap f_1^{-1}(0)$. Suppose $\Phi:V\to V$ is a homeomorphism such that $\Phi|_{Z}=\mathrm{id}_{Z}$ and $f_1\circ\Phi=f_0$. Then $\Phi$ induces an isomorphism $\Phi^*:A^{\bullet} _{f_1}|_Z\mapright{\cong} A^{\bullet} _{f_0}|_Z$ in $D^b(Z)$ by pulling back. \\ (2) Suppose $\Phi_t:V\to V$, $t\in [0,1]$, is a continuous family of homeomorphisms preserving $f$, i.e. $f\circ\Phi_t=f$ for all $t$, such that $\Phi_t|_{Z}=\mathrm{id}_{Z}$ and $\Phi_0=\mathrm{id}_V$. Then the pullback isomorphism $\Phi_1^*:A^{\bullet} _{f}|_Z\mapright{\cong} A^{\bullet} _{f}|_Z$ is the identity morphism. \end{prop} \begin{proof} By the definition of $A^{\bullet} _f$, we have $A^{\bullet} _{f_0}=A^{\bullet} _{f_1\circ\Phi}\cong \Phi^*A^{\bullet} _{f_1}$. Let $\imath:Z\hookrightarrow V$ denote the inclusion. 
Since $\Phi\circ \imath=\imath$, we have the isomorphism $$\Phi^*:\imath^*A^{\bullet} _{f_1}=\imath^*\Phi^*A^{\bullet} _{f_1}\mapright{\cong} \imath^*A^{\bullet} _{f_0}.$$ Because $\Phi_t$ preserves the set $\{ \mathrm{Re} f\le 0\}$, the isotopy $\{\Phi_t\}$ induces a homotopy from the identity chain map $\Phi_0^*=\mathrm{id}_{A^{\bullet} _f|_Z}$ to $\Phi_1^*$ by choosing a flabby resolution $I^{\bullet} $ by the complex of singular cochains. Since homotopic chain maps are equal in the derived category, we find that the induced isomorphism $\Phi_1^*:A^{\bullet} _{f}|_Z\mapright{\cong} A^{\bullet} _{f}|_Z$ is indeed the identity morphism. \end{proof} \begin{exam} \label{nex2.4} Let $q=\sum_{i=1}^ry_i^2$ on $\mathbb{C}^r$. The set $\{\mathrm{Re}\, q>0\}\subset \mathbb{C}^r$ is a disk bundle over $\mathbb{R}^r-\{0\}$ which is obviously homotopic to $S^{r-1}$. From the distinguished triangle \begin{equation}\label{edtpv} R\Gamma_{\{\mathrm{Re}\, q\le 0\}}\mathbb{Q}\to \mathbb{Q}\to R\imath_*\imath^*\mathbb{Q}\end{equation} where $\imath:\{\mathrm{Re}\, q>0\}\hookrightarrow \mathbb{C}^r$ is the inclusion, we find that $A_q^{\bullet} [1]$ is a sheaf complex supported at the origin satisfying $A_q^{\bullet} [1]\cong\mathbb{Q}[-r+1]$, i.e. $$A_q^{\bullet} [r]\cong\mathbb{Q}.$$ Suppose $\Phi:\mathbb{C}^r\to \mathbb{C}^r$ is a homeomorphism such that $q\circ\Phi=q$. Since $A_q^{\bullet} [r]\cong \mathbb{Q}$, the isomorphism $\Phi^*:A_q^{\bullet} [r]\to A_q^{\bullet} [r]$ is either $1$ or $-1$. The sign is determined by the change in the orientation of the sphere $S^{r-1}$ in the Milnor fiber. Since $q$ is preserved by $\Phi$, $d\Phi|_0:T_0\mathbb{C}^r\to T_0\mathbb{C}^r$ is an orthogonal linear transformation with respect to $q$ whose determinant is either $1$ or $-1$. It is easy to see that these two sign changes are identical, i.e. $$\Phi^*=\det(d\Phi|_0)\cdot\id.$$ \end{exam} The following fact about the sheaf $A^{\bullet} _f$ of vanishing cycles will be useful.
\begin{prop}\label{prop1} (1) Let $g:W\to \mathbb{C}$ be a holomorphic function on a connected complex manifold $W$ of dimension $d$ and let $q=\sum_{i=1}^ry_i^2$. Let $V=W\times\mathbb{C}^r$ and $f:V\to \mathbb{C}$ be $f(z,y)=g(z)+q(y)$. Then the summation form of $f$ induces an isomorphism $$A^{\bullet} _f[d+r]\cong pr_1^{-1}A^{\bullet} _g[d]\otimes pr_2^{-1} A^{\bullet} _q[r]\cong pr_1^{-1} A^{\bullet} _g[d]\otimes \mathbb{Q}\cong pr_1^{-1}A^{\bullet} _g[d]$$ of perverse sheaves on the critical set $X_f$ of $f$.\\ (2) Let $\Phi:V\to V$ be a biholomorphic map such that $f\circ\Phi=f$ and $\Phi|_W=\id_{W\times\{0\}}$. Then $\Phi^*:A^{\bullet} _f\to A^{\bullet} _f$ is $\det(d\Phi|_{W\times\{0\}})\,\id_{A^{\bullet} _f}$ and $\det(d\Phi|_{W\times\{0\}})=\pm 1$. \end{prop} \begin{proof} (1) is a result of D. Massey in \cite[\S2]{Massey}; (2) is proved in \cite[Theorem 3.1]{BBDJS}.\end{proof} It is well known that perverse sheaves and isomorphisms glue. \begin{prop}\label{prop3} Let $X$ be a complex analytic space with an open covering $\{X_\alpha\}$.\\ (1) Suppose that for each $\alpha$ we have $P^{\bullet} _\alpha\in Perv(X_\alpha)$ and for each pair $\alpha, \beta$ we have isomorphisms \[ \sigma_{\alpha\beta}:P^{\bullet} _\alpha|_{X_\alpha\cap X_\beta}\mapright{\cong}P^{\bullet} _\beta|_{X_\alpha\cap X_\beta}\] satisfying the cocycle condition $\sigma_{\beta\gamma}\circ\sigma_{\alpha\beta}=\sigma_{\alpha\gamma}$. Then $\{P^{\bullet} _\alpha\}$ glue to define a perverse sheaf $P^{\bullet} $ on $X$ such that $P^{\bullet} |_{X_\alpha}\cong P^{\bullet} _\alpha$ and that $\sigma_{\alpha\beta}$ is induced by the identity map of $P^{\bullet} |_{X_\alpha\cap X_\beta}$.\\ (2) Suppose $P^{\bullet} , Q^{\bullet} \in Perv(X)$ and $\sigma_\alpha:P^{\bullet} |_{X_\alpha}\mapright{\cong}Q^{\bullet} |_{X_\alpha}$ such that $\sigma_\alpha|_{X_\alpha\cap X_\beta}=\sigma_\beta|_{X_\alpha\cap X_\beta}$. 
Then there exists an isomorphism $\sigma:P^{\bullet} \to Q^{\bullet} $ such that $\sigma|_{X_\alpha}=\sigma_\alpha$ for all $\alpha$. \end{prop} See \cite[Theorem 2.5]{BBDJS} for precise references for proofs of Proposition \ref{prop3}. One way to prove Proposition \ref{prop3} is to use the elementary construction of perverse sheaves by MacPherson and Vilonen. \begin{theo}\label{thm1} \cite[Theorem 4.5]{MV} Let $S\subset X$ be a closed stratum of complex codimension $c$. The category $Perv(X)$ is equivalent to the category of objects $(B^{\bullet} ,C)\in Perv(X-S)\times Sh_{\mathbb{Q}}(S)$ together with a commutative triangle \[\xymatrix{ R^{-c-1}\pi_*\kappa_*\kappa^*B^{\bullet} \ar[rr]\ar[dr]_m && R^{-c}\pi_*\gamma_!\gamma^*B^{\bullet} \\ & C\ar[ur]_n }\] such that $\mathrm{ker} (n)$ and $\mathrm{coker} (m)$ are local systems on $S$, where $\kappa:K\hookrightarrow L$ and $\gamma:L-K\hookrightarrow L$ are inclusions of the perverse link bundle $K$ and its complement $L-K$ in the link bundle $\pi:L\to S$. The equivalence of categories is explicitly given by sending $P^{\bullet} \in Perv(X)$ to $B^{\bullet} =P^{\bullet} |_{X-S}$ together with the natural morphisms \[\xymatrix{ R^{-c-1}\pi_*\kappa_*\kappa^*B^{\bullet} \ar[rr]\ar[dr]_m && R^{-c}\pi_*\gamma_!\gamma^*B^{\bullet} \\ & R^{-c}\pi_*\varphi_!\varphi^*P^{\bullet} \ar[ur]_n }\] where $\varphi:D-K\hookrightarrow D$ is the inclusion into the normal slice bundle. \end{theo} See \cite[\S4]{MV} for precise definitions of $K$, $L$ and $D$. Morally the above theorem says that an extension of a perverse sheaf on $X-S$ to $X$ is obtained by adding a sheaf on $S$. Since sheaves glue, we can glue perverse sheaves stratum by stratum. \begin{proof} [Proof of Proposition \ref{prop3}] We stratify $X$ by complex manifolds and let $X^{(i)}$ denote the union of strata of codimension $\le i$. On the smooth part $X^{(0)}$, $P^{\bullet} _\alpha$ are honest sheaves and hence they glue to a sheaf $P^{\bullet} |_{X^{(0)}}$. 
For $X^{(1)}=X^{(0)}\cup S$, we find that since $P^{\bullet} _\alpha$ are isomorphic on intersections $X_\alpha\cap X_\beta$, the sheaves $R^{-c}\pi_*\varphi_!\varphi^*P^{\bullet} _\alpha$ glue and so do the natural triangles \begin{equation}\label{edth19}\xymatrix{ R^{-2}\pi_*\kappa_*\kappa^*(P^{\bullet} |_{X^{(0)}})\ar[rr]\ar[dr]_m && R^{-1}\pi_*\gamma_!\gamma^*(P^{\bullet} |_{X^{(0)}})\\ & R^{-1}\pi_*\varphi_!\varphi^*P^{\bullet} _\alpha.\ar[ur]_n }\end{equation} Hence we obtain a perverse sheaf $P^{\bullet} |_{X^{(1)}}\in Perv(X^{(1)})$. It is obvious that we can continue this way using Theorem \ref{thm1} above until we obtain a perverse sheaf $P^{\bullet} $ on $X$ such that $P^{\bullet} |_{X_\alpha}=P^{\bullet} _\alpha$. The gluing of isomorphisms is similar. \end{proof} Another application of Theorem \ref{thm1} is the following \emph{rigidity property of perverse sheaves}. \begin{lemm}\label{lemidex} Let $P^{\bullet} $ be a perverse sheaf on an analytic variety $U$. Let $\pi:T\to U$ be a continuous map from a topological space $T$ with connected fibers and let $T'$ be a subspace of $T$ such that $\pi|_{T'}$ is surjective. Suppose an isomorphism $\mu:\pi^{-1}P^{\bullet} \cong \pi^{-1}P^{\bullet} $ satisfies $\mu|_{T'}=\id_{(\pi^{-1}P^{\bullet} )|_{T'}}$. Then $\mu=\id_{\pi^{-1}P^{\bullet} }$. \end{lemm} \begin{proof} We first prove the simple case: if we let $C$ be a locally constant sheaf over $\mathbb{Q}$ of finite rank on $Z\subset U$ and $\bar\mu:\pi^{-1}C\to \pi^{-1}C$ be a homomorphism such that $\bar\mu|_{T'\cap \pi^{-1}(Z)}=\id$, then $\bar\mu$ is the identity morphism. Indeed, since the issue is local, we may assume that $Z$ is connected and that $C\cong \mathbb{Q}^r$ so that $\bar\mu:\mathbb{Q}^r\to\mathbb{Q}^r$ is given by a continuous map $\pi^{-1}(Z)\to GL(r,\mathbb{Q})$. By connectedness, this obviously is a constant map which is $1$ along $T'\cap\pi^{-1}(Z)$. We thus proved the lemma in the sheaf case. 
For the general case, we use Theorem \ref{thm1}. As in the proof of Proposition \ref{prop3} above, we stratify $U$ and let $U^{(i)}$ be the union of strata of codimension $\le i$. Since $P^{\bullet} $ is a perverse sheaf, $P^{\bullet} |_{U^{(0)}}[-\dim U]$ is isomorphic to a locally constant sheaf and hence $\mu|_{U^{(0)}}$ is the identity map. For $U^{(1)}=U^{(0)}\cup S$, using the notation of Theorem \ref{thm1}, $C=R^{-1}\pi_*\varphi_!\varphi^*P^{\bullet} $ is a locally constant sheaf and $\mu$ induces a homomorphism $\pi^{-1}C\to \pi^{-1}C$ which is identity on $T'\cap \pi^{-1}(S)$. Therefore $\mu$ induces the identity morphism of the pullback of \eqref{edth19} by $\pi$ to itself and hence $\mu$ is the identity on $U^{(1)}$. Continuing in this fashion, we obtain Lemma \ref{lemidex}. \end{proof} \section{Preorientation data and perverse sheaves}\label{sec3} In this section, we collect the main results of this paper. We first introduce the notion of preorientation data which induces a family of CS charts. This will give us a collection of local perverse sheaves and gluing isomorphisms. We identify the cocycle condition for the gluing isomorphisms as the existence of a square root of the determinant bundle of $\mathrm{Ext}^{\bullet} _\pi({\cal E},{\cal E})=R\pi_*R{\cal H} om({\cal E},{\cal E})$ where ${\cal E}$ denotes the universal bundle over $X\times Y\mapright{\pi} X$. \subsection{Chern-Simons functionals on connection spaces} In this subsection we briefly recall the necessary gauge theoretic background. More details will be provided in later sections. Our presentation largely follows \cite[Chapter 9]{JoSo}. Let $Y$ be a smooth projective Calabi-Yau 3-fold over $\mathbb{C}$, with a Hodge metric implicitly chosen. We fix a nowhere vanishing holomorphic $(3,0)$-form $\Omega$ on $Y$. Let $E$ be a smooth complex vector bundle on $Y$ with a smooth hermitian metric. 
In this paper, a smooth semiconnection is a differential operator $\bar{\partial} :\Omega^0(E)\to \Omega^{0,1}(E)$ satisfying the $\overline{\partial}$-Leibniz rule. We denote by $\Omega^{0,k}(E)$ the space of smooth $(0,k)$-forms on $Y$ taking values in $E$. Following the notation in gauge theory, we denote $ad\,E=E^\vee\otimes E$; thus fixing a smooth semiconnection $\overline{\partial}_0$, all other semiconnections can be expressed as $\overline{\partial}_0+\mathfrak{a} $, with $\mathfrak{a} \in \Omega^{0,1}(ad E)$. We fix a pair of integers $s\ge 4$ and $\ell>6$, and form the completion $\Omega^{0,k}(ad E)_s$ of $\Omega^{0,k}(ad E)$ under the Sobolev norm $L_s^\ell$. ($L_s^\ell$ is the sum of $L^\ell$-norms of up to $s$-th partial derivatives.) We say $\overline{\partial}_0+\mathfrak{a} $ is $L_s^\ell$ if $\mathfrak{a} $ is $L_s^\ell$, assuming $\overline{\partial}_0$ is smooth. We denote by ${\cal G}$ the gauge group of $L_{s+1}^\ell$-sections of $\mathrm{Aut}(E)$ modulo $\mathbb{C}^*$, which is the $L_{s+1}^\ell$ completion of $C^\infty(\mathrm{Aut}(E))/\mathbb{C}^*$. We denote by ${\cal A}$ the space of $L_s^\ell$-semiconnections on $E$. We have an isomorphism of affine spaces, after a choice of smooth $\overline{\partial}_0\in {\cal A}$, via \begin{equation}\label{dbar} \overline{\partial}_0+\cdot: \Omega^{0,1}(ad\,E)_s \longrightarrow {\cal A},\quad \mathfrak{a} \mapsto \overline{\partial}_0+\mathfrak{a} . \end{equation} The gauge group ${\cal G}$ acts on ${\cal A}$ via $g\cdot (\overline{\partial}_0+\mathfrak{a} )=(g^{-1})^*(\overline{\partial}_0+\mathfrak{a} )$. Let ${\cal A}_{si}$ be the ${\cal G}$-invariant open subset of simple semiconnections, i.e. the automorphism groups are all $\mathbb{C}^*\cdot\id_E$. Let $${\cal B}_{si}={\cal A}_{si}/{\cal G}\, \subset\, {\cal A}/{\cal G}:={\cal B}. $$ Then ${\cal B}_{si}$ is a complex Banach manifold. 
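For later reference, it may help to spell out the gauge action in terms of the identification \eqref{dbar}. The following is a standard computation (our sketch, with the convention that $(g^{-1})^*\overline{\partial}$ means $g\circ\overline{\partial}\circ g^{-1}$): for $g\in{\cal G}$ and $s\in\Omega^0(E)$,

```latex
\begin{align*}
\bigl(g\cdot(\overline{\partial}_0+\mathfrak{a} )\bigr)(s)
&= g\bigl(\overline{\partial}_0(g^{-1}s)+\mathfrak{a} \,g^{-1}s\bigr)\\
&= \overline{\partial}_0 s + g(\overline{\partial}_0 g^{-1})s + g\mathfrak{a} g^{-1}s\\
&= \bigl(\overline{\partial}_0 + g\mathfrak{a} g^{-1}-(\overline{\partial}_0 g)g^{-1}\bigr)(s),
\end{align*}
```

using $g(\overline{\partial}_0 g^{-1})=-(\overline{\partial}_0 g)g^{-1}$, which follows from applying the $\overline{\partial}$-Leibniz rule to $gg^{-1}=\mathrm{id}$. In particular the action is affine in $\mathfrak{a} $, and the inhomogeneous term $-(\overline{\partial}_0 g)g^{-1}$ vanishes exactly when $g$ is a holomorphic automorphism of $(E,\overline{\partial}_0)$; for a simple semiconnection these automorphisms are just $\mathbb{C}^*\cdot\id_E$, which is why ${\cal G}$ acts on ${\cal A}_{si}$ with trivial stabilizers.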
An element $\bar{\partial} \in {\cal A}$ is called integrable if the curvature $F^{0,2}_{\bar{\partial} }\!:= (\bar{\partial} )^2$ vanishes. If $\bar{\partial} $ is integrable and $\mathfrak{a} \in \Omega^{0,1}(ad\,E)$, then the curvature of $\bar{\partial} + \mathfrak{a} $ is $F^{0,2}_{\bar{\partial} +\mathfrak{a} }=\bar{\partial} \mathfrak{a} +\mathfrak{a} \wedge\mathfrak{a} $. By the Sobolev inequality, $F^{0,2}_{\overline{\partial}_0+\cdot}$ is a continuous operator from $\Omega^{0,1}(ad E)_s$ to $\Omega^{0,2}(ad E)_{s-1}$, analytic in $\mathfrak{a} $. An integrable smooth semiconnection $\overline{\partial}$ defines a holomorphic vector bundle $(E,\overline{\partial})$ on $Y$. Picking a (reference) integrable $\bar{\partial} \in {\cal A}$, the holomorphic Chern-Simons functional is defined as $$ CS:{\cal A}\to \mathbb{C}, \quad CS(\bar{\partial} +\mathfrak{a} )=\frac1{4\pi^2}\int_Y \tr \left(\frac12 (\bar{\partial} \mathfrak{a} )\wedge\mathfrak{a} + \frac13 \mathfrak{a} \wedge\mathfrak{a} \wedge \mathfrak{a} \right)\wedge \Omega. $$ This is a cubic polynomial in $\mathfrak{a} $ whose quadratic part is $$CS_2(\bar{\partial} +\mathfrak{a} )=\frac1{4\pi^2}\int_Y \tr \bigl(\frac12 (\bar{\partial} \mathfrak{a} )\wedge\mathfrak{a} \bigr) \wedge \Omega. $$ Since the directional derivative of $CS$ at $\overline{\partial}+\mathfrak{a} $ in the direction of $\mathfrak b$ is $$\delta\,CS(\overline{\partial}+\mathfrak{a} )(\mathfrak b)=\frac1{4\pi^2}\int_Y \tr (\mathfrak b\wedge F^{0,2}_{\bar{\partial} +\mathfrak{a} })\wedge\Omega,$$ $\delta\,CS(\overline{\partial}+\mathfrak{a} )=0$ if and only if $\overline{\partial}+\mathfrak{a} $ is integrable. Thus the complex analytic subspace ${\cal A}_{si}^{int}$ of simple integrable smooth semiconnections in ${\cal A}_{si}$ is the critical locus (complex analytic subspace) of $CS$. Let $\mathfrak{V}_{si}={\cal A}_{si}^{int}/{\cal G}\subset {\cal B}_{si}$.
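As a consistency check (our computation, not part of the original exposition), the variation formula for $CS$ follows directly from the cubic expression, using cyclicity of the trace and Stokes' theorem on the compact $Y$:

```latex
\begin{align*}
\delta\,CS(\overline{\partial}+\mathfrak{a} )(\mathfrak{b} )
&=\frac{d}{dt}\Big|_{t=0}CS(\overline{\partial}+\mathfrak{a} +t\mathfrak{b} )\\
&=\frac1{4\pi^2}\int_Y \tr \Bigl(\tfrac12 (\overline{\partial}\mathfrak{b} )\wedge\mathfrak{a}
 +\tfrac12 (\overline{\partial}\mathfrak{a} )\wedge\mathfrak{b}
 +\mathfrak{b} \wedge\mathfrak{a} \wedge\mathfrak{a} \Bigr)\wedge \Omega\\
&=\frac1{4\pi^2}\int_Y \tr \bigl(\mathfrak{b} \wedge(\overline{\partial}\mathfrak{a}
 +\mathfrak{a} \wedge\mathfrak{a} )\bigr)\wedge \Omega
 =\frac1{4\pi^2}\int_Y \tr \bigl(\mathfrak{b} \wedge F^{0,2}_{\overline{\partial}+\mathfrak{a} }\bigr)\wedge \Omega.
\end{align*}
```

Here the cubic term contributes $\tr(\mathfrak{b} \wedge\mathfrak{a} \wedge\mathfrak{a} )$ since the three terms of $\frac{d}{dt}\frac13\tr(\mathfrak{a} \wedge\mathfrak{a} \wedge\mathfrak{a} )$ agree after cyclic permutation ($\mathfrak{a} \wedge\mathfrak{a} $ has even form degree, so no signs appear), and $\int_Y\tr((\overline{\partial}\mathfrak{b} )\wedge\mathfrak{a} )\wedge\Omega=\int_Y\tr(\mathfrak{b} \wedge\overline{\partial}\mathfrak{a} )\wedge\Omega$ by Stokes' theorem: $\tr(\mathfrak{b} \wedge\mathfrak{a} )\wedge\Omega$ is a $(3,2)$-form, so $\overline{\partial}$ of it is $d$ of it and its integral vanishes.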
Since $CS$ is ${\cal G}$-equivariant, it descends to $$cs:{\cal B}_{si}\to \mathbb{C} $$ whose critical locus (complex analytic subspace) in ${\cal B}_{si}$ is $\mathfrak{V}_{si}$. \subsection{The universal family over $X$} \label{sec3.2} Let $\mathfrak{X} \subset \mathfrak{V}_{si}\subset {\cal B}$ be an open complex analytic subspace and denote by $X=\mathfrak{X} _{\text{red}}$ the underlying space of $\mathfrak{X} $ endowed with the reduced structure. We assume that $X$ is quasi-projective and that there is a universal family of holomorphic bundles ${\cal E}$ over $X\times Y$ that induces the morphism $X\to {\cal B}$. Our convention is that ${\cal E}$ is with its holomorphic structure $\overline{\partial}_X$ implicitly understood. For $x\in X$, we denote by ${\cal E}_x={\cal E}|_{x\times Y}$ the holomorphic bundle associated with $x\in X$. We denote by $\overline{\partial}_x$ the restriction of $\overline{\partial}_X$ to ${\cal E}_x$. We use $ad {\cal E}_x$ to denote the smooth vector bundle ${\cal E}_x^\vee\otimes {\cal E}_x$. We fix a hermitian metric $h$ on ${\cal E}$ that is analytic in the $X$ direction (see \S\ref{sec4.3}). For $x\in X$, we denote by $h_x$ the restriction of $h$ to ${\cal E}_x$. Using $h_x$, we form $\Omega^{0,k}(ad {\cal E}_x)_s$, the space of $L_s^\ell$ $ad {\cal E}_x$-valued $(0,k)$-forms. We form the space ${\cal A}_x$ of $L_{s}^\ell$ semiconnections on ${\cal E}_x$, which is isomorphic to $\Omega^{0,1}(ad {\cal E}_x)_s$ via $\overline{\partial}_x+\cdot$ (cf. \eqref{dbar}). We form the adjoint $\overline{\partial}_x^{\ast}$ of $\overline{\partial}_x$ using the hermitian metric $h_x$.
Since $\Aut({\cal E}_x)=\mathbb{C}^{\ast}$, a standard argument in gauge theory shows that the tangent space of ${\cal B}$ at $x$ is \begin{equation}\label{a} T_x{\cal B} \cong \Omega^{0,1}(ad {\cal E}_x)_s/\image(\overline{\partial}_x)_s \cong \mathrm{ker} (\overline{\partial}^{\ast}_x)_s, \quad x\in X, \end{equation} where the first isomorphism depends on a choice of smooth isomorphism ${\cal E}_x\cong E$, while the second is canonical, using the Hodge theory of $({\cal E}_x,\overline{\partial}_x,h_x)$. (We use the subscript ``$s$" to indicate that it is the image in $\Omega^{0,1}(ad {\cal E}_x)_s$.) For any open subset $U\subset X$, we denote $T_U{\cal B}= T{\cal B}|_U$. Since $X\subset {\cal B}$ is a complex analytic subspace, $T_U{\cal B}$ is a holomorphic Banach bundle over $U$. For $x\in X$, we form the Laplacian $$\Box_{x}=\overline{\partial}_{x}\overline{\partial}_{x}^{\ast} +\overline{\partial}_{x}^{\ast} \overline{\partial}_{x}: \Omega^{0,1}(ad {\cal E}_x)_s\to \Omega^{0,1}(ad {\cal E}_x)_{s-2}, $$ and its truncated eigenspace $$\Theta_x(\eps)=\mathbb{C}\text{-span}\,\{\mathfrak{a} \in \Omega^{0,1}(ad {\cal E}_x)_s\mid \overline{\partial}_x^{\ast} \mathfrak{a} =0,\ \Box_{x}\mathfrak{a} =\lambda \mathfrak{a} , \ \lambda< \eps\}\subset \mathrm{ker} (\overline{\partial}_x^{\ast})_s. $$ Note that $T_x\mathfrak{X} \cong \mathrm{ker} (\Box_x)_s^{0,1}\subset \Theta_x(\eps)$ for any $\eps>0$; since $({\cal E}_x,\overline{\partial}_x,h_x)$ are smooth, $\Theta_x(\eps)\subset \Omega^{0,1}(ad {\cal E}_x)$ (i.e., it consists of smooth forms).
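We remark that the second isomorphism in \eqref{a} is induced by the standard $L^2$-orthogonal (Hodge) decomposition
$$\Omega^{0,1}(ad {\cal E}_x)_s=\image(\overline{\partial}_x)_s\oplus \mathrm{ker} (\overline{\partial}^{\ast}_x)_s, $$
the two summands being orthogonal because $\langle \overline{\partial}_x\mathfrak b,\mathfrak{a} \rangle_{L^2}=\langle \mathfrak b,\overline{\partial}^{\ast}_x\mathfrak{a} \rangle_{L^2}=0$ whenever $\overline{\partial}^{\ast}_x\mathfrak{a} =0$.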
\subsection{Preorientation data} We recall the quadratic form $cs_{2,x}$ on $T_x{\cal B}$, $x\in X$, induced by the CS-function $cs:{\cal B} \to \mathbb{C}$, defined explicitly by $$cs_{2,x}(\mathfrak{a} _1,\mathfrak{a} _2)=\frac1{8\pi^2}\int\tr\bigl( \mathfrak{a} _1\wedge \overline{\partial}_{x}\mathfrak{a} _2\bigr) \wedge \Omega, \quad \mathfrak{a} _i\in \Omega^{0,1}(ad {\cal E}_x)_s/ \image(\overline{\partial}_x)_s. $$ Since $T_x\mathfrak{X} $ is the null subspace of $cs_{2,x}$, we have the induced non-degenerate quadratic form \begin{equation}\label{cs-quad} {\cal Q}_x: T_x{\cal B}/T_x\mathfrak{X} \times T_x{\cal B}/T_x\mathfrak{X} \longrightarrow \mathbb{C}. \end{equation} We introduce the notion of orientation bundles and their homotopies. Let $r$ be a positive integer. (The notion of analytic subbundles will be recalled in the next section.) \begin{defi}\label{def1.1} Let $U\subset X$ be open. A rank $r$ \emph{orientation bundle} on $U$ is a rank $r$ analytic subbundle $\Xi\subset T_U{\cal B}$ such that \begin{enumerate} \item there is an assignment $U\ni x\mapsto \eps_x\in (0,1)$, continuous in $x\in U$, such that for every $x\in U$, $\Theta_{x}(\eps_x)\subset \Xi_{x}\!:= \Xi|_x$; \item at each $x\in U$, $Q_x\!:= {\cal Q}_{x}|_{\Xi_x/T_{x}\mathfrak{X} }$ is a non-degenerate quadratic form. \end{enumerate} \end{defi} We also need a notion of homotopy between orientation bundles. \begin{defi} Let $\Xi_a$ and $\Xi_b$ be two orientation bundles over $U$. A \emph{homotopy} from $\Xi_a$ to $\Xi_b$ is a family of orientation bundles $\Xi_{t}$ on $U$ such that \begin{enumerate}\item the family is analytic in $t$, with $\Xi_0=\Xi_a$ and $\Xi_1=\Xi_b$; \item for all $t\in [0,1]$, $\Xi_t$ satisfies (1) of Definition \ref{def1.1} with a single assignment $\eps_\cdot$ independent of $t$. \end{enumerate} \end{defi} The following is one of the key ingredients in our construction of perverse sheaves on moduli spaces.
\begin{defi} We say $X$ is equipped with rank $r$ \emph{preorientation data} if there are \begin{enumerate} \item a locally finite open cover $X=\cup_\alpha U_\alpha$, \item a rank $r$ orientation bundle $\Xi_\alpha$ on $U_\alpha$ for each $\alpha$, and \item a homotopy $\Xi_{\alpha\beta} $ from $\Xi_\alpha|_{U_{\alpha\beta} }$ to $\Xi_\beta|_{U_{\alpha\beta}}$ for each $U_{\alpha\beta} =U_\alpha\cap U_\beta$. \end{enumerate}\end{defi} Since the open cover is locally finite, each $x\in X$ has an open neighborhood $U_x$ contained in every $U_\alpha$ that contains $x$. We can further choose $\eps_x>0$ such that $\Xi_\alpha\supset \Theta_x(\eps_x)$ whenever $x\in U_\alpha$. To simplify the notation, from now on we will suppress $\eps_x$ and write $\Theta_x$ for the subspace $\Theta_x(\eps_x)$. \vskip5pt In \S\ref{secExOr}, we will prove the following. \begin{prop}\label{pExOr} Every quasi-projective $X\subset\mathfrak{V}_{si}$ with a universal family admits preorientation data.\end{prop} \subsection{Families of CS charts and local trivializations} We introduce another key ingredient, called CS charts. \begin{defi}\label{def4} Let $f$ be a holomorphic function on a complex manifold $V$ such that $0$ is the only critical value of $f$. The \emph{reduced critical locus} (also called the critical set) is the common zero set of the partial derivatives of $f$. The \emph{critical locus} of $f$ is the complex analytic space $\mathfrak{X} _f$ defined by the ideal $(df)$ generated by the partial derivatives of $f$. \end{defi} \begin{defi} An $r$-dimensional \emph{CS chart} for $\mathfrak{X} $ is an $r$-dimensional complex submanifold $V$ of ${\cal A}_{si}$ which embeds holomorphically into ${\cal B}_{si}$ under the projection ${\cal A}_{si}\to {\cal B}_{si}$, such that, letting $\imath:V\to {\cal B}$ be the inclusion, the critical locus $\mathfrak{X} _{cs\circ\imath}\subset V$ is an open complex analytic subspace of $\mathfrak{X} \subset \mathfrak{V}_{si}$.
\end{defi} We say the chart $(V,\imath)$ \emph{contains} $x\in X$ if $x\in \imath (\mathfrak{X} _{cs\circ\imath})$. \begin{exam}\label{exn1} By \cite[Theorem 5.5]{JoSo}, for any $x=\bar{\partial} _0\in\mathfrak{V}_{si}$, $$V=\{\bar{\partial} _\mathfrak{a} =\bar{\partial} _0+\mathfrak{a} \,|\, \bar{\partial} _0^*\mathfrak{a} =\bar{\partial} _0^*F_{\bar{\partial} _0+\mathfrak{a} }^{0,2}=0, \parallel\!\mathfrak{a} \!\parallel_s< \varepsilon \}\subset {\cal A}_{si} $$ is a CS chart of $\mathfrak{X} $ containing $x$ of dimension $\dim T_x\mathfrak{X} $, for a sufficiently small $\varepsilon>0$. This chart depends on $x$ and on the choice of a hermitian metric on $E$, on which the adjoint $\overline{\partial}_0^{\ast}$ depends. In this paper, we call this chart the Joyce-Song chart at $x$. We remark that when $\overline{\partial}_0$ and the hermitian metric are smooth, all $\overline{\partial}_0+\mathfrak{a} \in V$ are smooth. \end{exam} \begin{defi}\label{dcfcs} Let $\rho: Z\to X$ be a continuous map from a topological space $Z$ to $X$, and let $r$ be a positive integer. A \emph{family of $r$-dimensional CS charts} (for $\mathfrak{X} $) is a subspace ${\cal V}\subset Z\times {\cal B}$ that fits in a diagram $$\xymatrix{ {\cal V}\ar@{^(->}[r]\ar[d]_\pi & Z\times {\cal B}\ar[dl]^{pr_Z}\\ Z} $$ such that for each $x\in Z$ the fiber ${\cal V}_x:=\pi^{-1}(x)$ lies in ${\cal B}_{si}$ and is an $r$-dimensional CS chart of $\mathfrak{X} $ containing $\rho(x)$. \end{defi} Given a family of CS charts ${\cal V}\subset Z\times{\cal B}$ over $\rho: Z\to X$, we define $\Delta(Z)=\{(x,\rho(x)_x)\mid x\in Z\} \subset Z\times {\cal V}$, where $\rho(x)_x$ is the unique point in ${\cal V}_x$ whose image in ${\cal B}$ is $\rho(x)$.
\begin{defi}\label{dltcf} A \emph{local trivialization} of the family $Z\leftarrow {\cal V}\hookrightarrow Z\times {\cal B}$ consists of an open $U_0\subset Z$, an open neighborhood ${\cal U}$ of $\Delta(U_0)\subset {\cal V}_{U_0}\!:= \pi^{-1}(U_0)$, and a continuous map \begin{equation}\label{Psi} \begin{CD} U_0\times {\cal V}_{U_0}\supset {\cal U} @>{\Psi}>> {\cal V}_{U_0}\times U_0, \end{CD} \end{equation} such that $\Psi$ commutes with the two tautological projections ${\cal U}\subset U_0\times{\cal V}_{U_0}\to U_0\times U_0$ and ${\cal V}_{U_0}\times U_0\to U_0\times U_0$, and that \begin{enumerate} \item letting $U_0\to U_0\times U_0$ be the diagonal, then $\Psi|_{{\cal U}\times_{U_0\times U_0}U_0}=\id$; \item for any $x, y\in U_0$, letting ${\cal U}_{x,y}={\cal U}\cap (x\times {\cal V}_y)$ and $\Psi_{x,y}\!:= \Psi|_{{\cal U}_{x,y}}:{\cal U}_{x,y}\to {\cal V}_x$, then $\Psi_{x,y}$ is biholomorphic onto its image; moreover $\Psi_{x,y}$ restricted to ${\cal U}_{x,y}\cap \mathfrak{X} _{f_y}$ is an open immersion into $\mathfrak{X} _{f_x}\subset {\cal V}_x$, commuting with the tautological open immersions $\mathfrak{X} _{f_x}, \mathfrak{X} _{f_y}\subset \mathfrak{X} $. \end{enumerate} We say that ${\cal V}\subset Z\times{\cal B}$ admits local trivializations if for any $x_0\in Z$ there is a local trivialization over an open neighborhood $U_0$ of $x_0$ in $Z$. \end{defi} \begin{defi}\label{comp-1} Let $Z\subset \mathbb{R}^{n}$ be a (real) analytic subset defined by the vanishing of finitely many analytic functions, and let ${\cal V}\subset Z\times{\cal B}$ be a family of CS charts over $\rho:Z\to X$. A complexification of ${\cal V}$ consists of a complexification $Z^\mathbb{C}\subset \mathbb{C}^n$ of $Z$ (thus having $Z^\mathbb{C}\cap \mathbb{R}^n=Z$), a holomorphic $\rho^\mathbb{C}: Z^\mathbb{C}\to X$ extending $\rho: Z\to X$, and a holomorphic family of CS charts ${\cal V}^\mathbb{C}\subset Z^\mathbb{C}\times{\cal B}$ over $Z^\mathbb{C}$ (i.e.
${\cal V}^\mathbb{C}$ is a complex analytic subspace of $Z^\mathbb{C}\times {\cal B}$) such that $${\cal V}^\mathbb{C}\times_{Z^\mathbb{C}}Z ={\cal V} \subset Z\times {\cal B}. $$ \end{defi} Let $U\subset X$ be an open subset and ${\cal V}\subset U\times{\cal B}$ be a family of CS charts over $U$ which admits local trivializations. Let $U_0\subset U$ be an open subset and $\Psi$ in \eqref{Psi} a local trivialization of ${\cal V}$ over $U_0$. \begin{defi} We say that the local trivialization $\Psi$ is \emph{complexifiable} if for any $x\in U_0$, there is an open neighborhood $ x\in {O_x}\subset U_0$ such that if we let ${\cal V}_{O_x}\subset {O_x}\times{\cal B}$ and $\Psi_{O_x}: {\cal U}_{O_x}\to {\cal V}_{O_x}\times {O_x}$, where ${\cal U}_{O_x}={\cal U}\times_{U\times {\cal V}} {O_x}\times{\cal V}_{O_x}$ and $\Psi_{O_x}$ is the pullback of $\Psi$, the following hold: \begin{enumerate} \item the family ${\cal V}_{O_x}\subset {O_x}\times{\cal B}$ admits a complexification ${\cal V}_{{O_x^\mathbb{C}}}\subset {O_x^\mathbb{C}}\times{\cal B}$ over a complexification ${O_x^\mathbb{C}}$ of ${O_x}$; \item there is a holomorphic local trivialization $\Psi_{{O_x^\mathbb{C}}}: {\cal U}_{O_x^\mathbb{C}}\to {\cal V}_{{O_x^\mathbb{C}}}\times {O_x^\mathbb{C}}$, i.e. ${\cal U}_{{O_x^\mathbb{C}}}\subset {O_x^\mathbb{C}}\times{\cal V}_{{O_x^\mathbb{C}}}$ is open and contains the diagonal $\Delta({O_x^\mathbb{C}})$, such that $\Psi_{{O_x^\mathbb{C}}}$ is holomorphic, $${\cal U}_{{O_x^\mathbb{C}}}\times_{{O_x^\mathbb{C}}}{O_x}={\cal U}_{O_x}\quad{\rm and}\quad \Psi=\Psi_{{O_x^\mathbb{C}}}|_{{\cal U}}: {\cal U}_{O_x}\to {\cal V}_{O_x}\times {O_x}\subset {\cal V}_{{O_x^\mathbb{C}}}\times {O_x^\mathbb{C}}. $$ \end{enumerate} \end{defi} In \S\ref{secCSdata}, we will prove the following. \begin{prop}\label{prCSdata} Let $X\subset {\cal B}_{si}$ be equipped with preorientation data $(\cup U_\alpha,\Xi_\alpha, \Xi_{\alpha\beta} )$. 
Then there are \begin{enumerate} \item a family of $r$-dimensional CS charts ${\cal V}_\alpha\subset U_\alpha\times {\cal B}$ with complexifiable local trivializations at all $x\in U_\alpha$; \item an open neighborhood $U_x$ and a subfamily ${\cal W}_x$ of CS charts in ${\cal V}_\alpha|_{U_x}$ for each $x\in U_\alpha$, i.e. a subbundle ${\cal W}_x$ of ${\cal V}_\alpha|_{U_x}$ which admits compatible complexifiable local trivializations \[\xymatrix{ U_x\times {\cal V}_\alpha|_{U_x}\supset {\cal U} \ar[r]^{\Psi} & {\cal V}_\alpha|_{U_x}\times U_x\\ U_x\times {\cal W}_x|_{U_x}\supset {\cal U}'\ar@{^(->}[u] \ar[r]^{\Psi} & {\cal W}_x|_{U_x}\times U_x \ar@{^(->}[u] }\] \item a family of CS charts $\mathbf{V} _{\alpha\beta} $ parameterized by $U_{\alpha\beta} \times [0,1]$ with $\mathbf{V} _{\alpha\beta} |_{U_{\alpha\beta} \times\{0\}}= {\cal V}_\alpha|_{U_{\alpha\beta} }$ and $\mathbf{V} _{\alpha\beta} |_{U_{\alpha\beta} \times\{1\}}= {\cal V}_\beta|_{U_{\alpha\beta} }$, which has complexifiable local trivializations at all $(x,t)\in U_{\alpha\beta} \times [0,1]$, such that $\mathbf{V} _{\alpha\beta} |_{U_x\times [0,1]}$ contains the subfamily ${\cal W}_x\times [0,1]$ of CS charts over $U_x\times [0,1]$. \end{enumerate} We call the above $({\cal V}_\alpha, {\cal W}_x, \mathbf{V} _{\alpha\beta} )$ \emph{CS data} for $X$. \subsection{Local perverse sheaves and gluing isomorphisms} Given CS data, we can construct perverse sheaves $P^{\bullet} _\alpha$ on each $U_\alpha$ and gluing isomorphisms $\sigma_{\alpha\beta} :P_\alpha^{\bullet} |_{U_{\alpha\beta} }\to P_\beta^{\bullet} |_{U_{\alpha\beta} }$. We will prove the following in \S\ref{seclopgl}. \begin{prop}\label{plopgl} (1) Let $\pi:{\cal V}\to U$ be a family of CS charts on $U\subset X\subset {\cal B}_{si}$ with complexifiable local trivializations at every point $x\in U$.
Then the perverse sheaves of vanishing cycles for $$f_x:{\cal V}_x=\pi^{-1}(x)\subset {\cal B}_{si}\mapright{cs} \mathbb{C}$$ glue to a perverse sheaf $P^{\bullet} $ on $U$, i.e. $P^{\bullet} $ is isomorphic to $A_{f_x}^{\bullet} [r]$ in a neighborhood of $x$.\\ (2) Let ${\cal V}_\alpha$ and ${\cal V}_\beta$ be two families of CS charts on $U$ with complexifiable local trivializations. Let $P^{\bullet} _\alpha$ and $P^{\bullet} _\beta$ be the induced perverse sheaves on $U$. Let $\mathbf{V} $ be a family of CS charts on $U\times [0,1]$ with complexifiable local trivializations such that $\mathbf{V} |_{U\times\{0\}}={\cal V}_\alpha$ and $\mathbf{V} |_{U\times\{1\}}={\cal V}_\beta$. Suppose for each $x\in U$, there are an open neighborhood $U_x\subset U$ and a subfamily ${\cal W}$ of both ${\cal V}_\alpha|_{U_x}$ and ${\cal V}_\beta|_{U_x}$ such that ${\cal W}\times [0,1]$ is a complexifiable subfamily of CS charts in $\mathbf{V} |_{U_x\times [0,1]}$. Then there is an isomorphism $\sigma_{\alpha\beta} :P^{\bullet} _\alpha\cong P^{\bullet} _\beta$ of perverse sheaves.\\ (3) If there are three families ${\cal V}_\alpha, {\cal V}_\beta, {\cal V}_\gamma$ with homotopies among them as in (2), then the isomorphisms $\sigma_{\alpha\beta} , \sigma_{\beta\gamma}, \sigma_{\gamma\alpha}$ satisfy $$\sigma_{{\alpha\beta} \gamma}:=\sigma_{\gamma\alpha}\circ \sigma_{\beta\gamma}\circ \sigma_{\alpha\beta} = \pm \id.$$ \end{prop} In fact, the isomorphism in (2) is obtained by gluing pullback isomorphisms via biholomorphic maps $\chi_{\alpha\beta} :{\cal V}_{\alpha,x}\to {\cal V}_{\beta,x}$ at each $x$. The sign $\pm 1$ in (3) is the determinant of the composition $$T_x{\cal V}_{\alpha,x}\mapright{d\chi_{\alpha\beta} } T_x{\cal V}_{\beta,x}\mapright{d\chi_{\beta\gamma}} T_x{\cal V}_{\gamma,x} \mapright{d\chi_{\gamma\alpha}} T_x{\cal V}_{\alpha,x}$$ where ${\cal V}_{\cdot,x}$ is the fiber of ${\cal V}_{\cdot}$ over $x$. 
By Serre duality, $\det T{\cal V}_{\alpha,x}$ is a square root of $\det \Ext^\bullet_\pi({\cal E},{\cal E})$, and hence the 2-cocycle $\sigma_{\alpha\beta \gamma}$ defines an obstruction class in $H^2(X,\mathbb{Z}_2)$ for the existence of a square root of $\det \Ext^\bullet_\pi({\cal E},{\cal E})$. Combining Propositions \ref{pExOr}, \ref{prCSdata} and \ref{plopgl}, we thus obtain the following. \begin{theo}\label{truemainth} Let $X\subset \mathfrak{X} \subset \mathfrak{V}_{si}$ be quasi-projective and equipped with a universal family ${\cal E}$. Then there is a perverse sheaf $P^{\bullet} $ on $X$ which is locally a perverse sheaf of vanishing cycles if and only if there is a square root of the line bundle $\det \Ext^\bullet_\pi({\cal E},{\cal E})$ in $\mathrm{Pic}(X)$. \end{theo} The theorem obviously also holds for any \'etale cover of $X$. \subsection{Divisibility of the determinant line bundle} In this subsection, we show that for moduli spaces $\mathfrak{X} $ of stable sheaves on $Y$, and for the universal sheaf ${\cal E}$ on $X\times Y$, $\det \Ext_{\pi}^\bullet({\cal E},{\cal E})$ has a square root, possibly after taking a Galois \'etale cover $X^\dagger\to X$. By Theorem \ref{truemainth}, there is then a globally defined perverse sheaf $P^{\bullet} $ on $X^\dagger$ which is locally the perverse sheaf of vanishing cycles. To begin with, using the exponential sequence $$H^1(X,\mathcal{O}_X)\longrightarrow H^1(X,\mathcal{O}_X^{\ast})\mapright{} H^2(X,\mathbb{Z}), $$ we see that \begin{equation}\label{det} \det \Ext_{\pi}^\bullet({\cal E},{\cal E})=\det R\pi_*R{\cal H} om({\cal E},{\cal E}) \end{equation} admits a square root if and only if its first Chern class in $H^2(X,\mathbb{Z})$ is even. We determine the torsion-free part of the first Chern class of \eqref{det} using the Grothendieck-Riemann-Roch theorem: $$\mathrm{ch}(\Ext_{\pi}^\bullet({\cal E},{\cal E})) =\pi_*\bigl( \mathrm{ch}( R{\cal H} om({\cal E},{\cal E}))\mathrm{td}(Y)\bigr).
$$ Since ${\cal E}$ is flat over $X$, $\alpha_i\!:= c_i({\cal E})\in A^{\ast} (X\times Y)_\mathbb{Q}$. Let $r=\rank {\cal E}$. Then one has $$\mathrm{ch}({\cal E})= r+\alpha_1+\frac{1}{2}(\alpha_1^2-2\alpha_2)+\frac{1}{6}(\alpha_1^3-3\alpha_1\alpha_2+3\alpha_3)+\delta_4+\cdots, $$ where $\delta_4\in A^4(X\times Y)_\mathbb{Q}$ and $\cdots$ denotes elements in $A^{>4}(X\times Y)_\mathbb{Q}$. Thus \begin{equation}\label{ch} \mathrm{ch}( R{\cal H} om({\cal E},{\cal E}))=\mathrm{ch}({\cal E})\mathrm{ch}({\cal E}^\vee)\qquad\qquad\qquad\qquad\qquad\qquad\qquad \end{equation} $$ \ \ \qquad\qquad=r^2+((r-1)\alpha_1^2-2r\alpha_2)+(-\frac{\alpha_1^4}{12}+\alpha_2^2-\alpha_1\alpha_3+2r\delta_4)+\cdots. $$ Since $Y$ is a Calabi-Yau three-fold, we have $$\mathrm{td}(Y)=1+\frac{1}{12} c_2(T_Y). $$ Thus by GRR, the torsion-free part of the first Chern class of \eqref{det} is \begin{equation}\label{c1} c_1\bigl(\det \Ext_{\pi}^\bullet({\cal E},{\cal E})\bigr)=\bigl[\mathrm{ch}(\Ext_{\pi}^\bullet({\cal E},{\cal E})) \bigr]_1\qquad\qquad\qquad\qquad\qquad\quad\ \, \end{equation} $$\qquad\qquad\ =\pi_*\bigl( -\frac{\alpha_1^4}{12}+\alpha_2^2-\alpha_1\alpha_3+2r\delta_4+ ((r-1)\alpha_1^2-2r\alpha_2)\frac{1}{12} c_2(T_Y)\bigr) $$ where $[\cdot]_1$ denotes the degree one part in $A^1X_\mathbb{Q}$. We now suppose that $\mathfrak{X} $ is the moduli of one-dimensional sheaves. Then $r=0$ and $\alpha_1=0$. Hence \eqref{c1} reduces to $$c_1\bigl( \det \Ext_{\pi}^\bullet({\cal E},{\cal E})\bigr) =\pi_* (\alpha_2^2). $$ We let $[\alpha_2]$ be the torsion-free part of the image of $\alpha_2$ in $H^4(X\times Y,\mathbb{Z})$. By the K\"unneth formula \cite{Spa}, we have a canonical isomorphism modulo torsion $$ H^4(X\times Y,\mathbb{Z})/\text{tor}\cong [ H^{\ast}(X,\mathbb{Z})\otimes H^{\ast}(Y,\mathbb{Z})]^4/\text{tor}. $$ We write $[\alpha_2]=\sum_{i=0}^4\sum_j a_{i,j}\otimes b_{4-i,j}$, where $a_{i,j}\in H^{i}(X,\mathbb{Z})$ and $b_{4-i,j}\in H^{4-i}(Y,\mathbb{Z})$.
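We single out the elementary mod $2$ identity used in the computation that follows: for cohomology classes $x_k$ of even total degree,
$$\Bigl(\sum_k x_k\Bigr)^2=\sum_k x_k^2+2\sum_{k<l}x_k\wedge x_l\equiv \sum_k x_k^2 \mod 2, $$
since even-degree classes commute and the cross terms therefore pair up. This applies with $x_k=a_{i,j}\otimes b_{4-i,j}$, whose total degree is $4$.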
Then $$[\alpha_2]^2\equiv \sum_{i=0}^4\sum_j (a_{i,j}\otimes b_{4-i,j})\wedge (a_{i,j}\otimes b_{4-i,j})\equiv \sum_{i,j} a_{2i,j}^2\otimes b_{4-2i,j}^2\!\! \mod 2, $$ where the second equivalence holds because squares of odd-degree classes are $2$-torsion. The part of the right-hand side in $\cdot\otimes [Y]^\vee$ is trivial, since $b_{4-2i,j}^2$ has even degree $8-4i\neq 6$. This proves that the torsion-free part of \eqref{c1} is even. More generally, by a similar calculation, it is proved in \S\ref{sec8} that the torsion-free part of \eqref{c1} is even for any perfect complex ${\cal E}$ on $X\times Y$; see Theorem \ref{1301111}. \begin{lemm} Let $\mathfrak{X} $ be a fine moduli scheme of simple sheaves on $Y$. There is a torsion line bundle $L$ on $X=\mathfrak{X} _{red}$ with $L^{\otimes k}\cong \mathcal{O}_X$ for some $k>0$ such that the pullback of $\det \Ext_{\pi}^\bullet({\cal E},{\cal E})$ by the cyclic \'etale Galois cover $X^\dagger=\{s\in L\,|\,s^k=1\}\to X$ admits a square root ${\cal L}$ on $X^\dagger$. \end{lemm} \begin{proof} Since the torsion-free part of $c_1\left(\det \Ext_{\pi}^\bullet({\cal E},{\cal E})\right)$ is even, there is a torsion line bundle $L$ on $X$ such that $c_1\bigl(L\otimes \det \Ext_{\pi}^\bullet({\cal E},{\cal E})\bigr)$ is even, and hence $L\otimes \det \Ext_{\pi}^\bullet({\cal E},{\cal E})$ admits a square root. Note that the pullback of $L$ to $X^\dagger$ is trivial. \end{proof} For a moduli space $\mathfrak{X} $ of stable sheaves with universal bundle, we can apply the Seidel-Thomas twists (\cite{JoSo}) to identify $\mathfrak{X} $ with a complex analytic subspace of ${\cal B}_{si}$. By Theorem \ref{truemainth}, we have the following. \begin{theo}\label{theo4.3.8} If $\mathfrak{X} $ is a fine moduli scheme of stable sheaves on $Y$, there exist a cyclic \'etale Galois cover $X^\dagger\to X$ and a perverse sheaf $P^{\bullet} $ on $X^\dagger$ which is locally the perverse sheaf of vanishing cycles. \end{theo} \section{Existence of preorientation data}\label{secExOr} In this section we prove Proposition \ref{pExOr}. We construct orientation bundles, their homotopies, and their complexifications.
\subsection{The semiconnection space} We continue to use the convention introduced at the beginning of Subsection \ref{sec3.2}. Thus ${\cal E}$ is the universal family of locally free sheaves on $X\times Y$, $({\cal E}_x,\overline{\partial}_x)$ is the restriction ${\cal E}|_{x\times Y}$, and we use $ad{\cal E}_x$ to denote the smooth vector bundle ${\cal E}_x^\vee\otimes {\cal E}_x$. Since $X$ is quasi-projective, $X$ has a stratification according to the singularity types of points in $X$. We fix such a stratification. We say a continuous function (resp. a family) on an open $U\subset X$ is smooth if its restriction to each stratum is smooth. We now choose a smooth hermitian metric $h$ on ${\cal E}$. Since $X$ is quasi-projective, by replacing ${\cal E}$ by its twist via a sufficiently negative line bundle from $X$, for a sufficiently ample ${\cal H}$ on $Y$ we can make ${\cal E}^\vee\otimes p_Y^{\ast}{\cal H}$ generated by global sections, where $p_Y: X\times Y\to Y$ is the projection. Thus for some integer $N$, we have a surjective homomorphism of vector bundles $\mathcal{O}_{X\times Y}^{\oplus N}\to {\cal E}^\vee\otimes p_Y^{\ast}{\cal H}$; dualizing and twisting by ${\cal H}$, we obtain an embedding of vector bundles ${\cal E}\subset p_Y^{\ast}{\cal H}^{\oplus N}$. We then endow ${\cal H}$ with a smooth hermitian metric, ${\cal H}^{\oplus N}$ with the direct sum metric, and $p_Y^{\ast} {\cal H}^{\oplus N}$ with the pullback metric. We define $h$ to be the induced hermitian metric on ${\cal E}$ via the holomorphic subbundle embedding ${\cal E}\subset p_Y^{\ast} {\cal H}^{\oplus N}$. For $x\in X$, we denote by $h_x$ the restriction of $h$ to ${\cal E}_x$. For the integers $s$ and $\ell$ chosen before, we denote the $L_s^\ell$-completion of $\Omega^{0,k}(ad {\cal E}_x)$ by $\Omega^{0,k}(ad {\cal E}_x)_s$. We form the formal adjoint $\overline{\partial}_x^{\ast}$ of $\overline{\partial}_x$ using the hermitian metric $h_x$.
Then $\overline{\partial}_x$ and $\overline{\partial}_x^{\ast}$ extend to differential operators $$\overline{\partial}_x\ (\text{resp.}\ \overline{\partial}_x^{\ast}): \Omega^{0,k}(ad{\cal E}_x)_{s+1-k}\to \Omega^{0,k+1}(ad{\cal E}_x)_{s-k}\quad (\text{resp.}\ \Omega^{0,k-1}(ad{\cal E}_x)_{s-k}). $$ We use $\mathrm{ker} (\overline{\partial}_x)^{0,k}_{s+1-k}$ to denote the kernel of $\overline{\partial}_x$ in $\Omega^{0,k}(ad{\cal E}_x)_{s+1-k}$; likewise for $\mathrm{ker} (\overline{\partial}_x^{\ast})^{0,k}_{s+1-k}$. We form the Laplacian $$\Box_{x}=\overline{\partial}_{x}\overline{\partial}_{x}^{\ast} +\overline{\partial}_{x}^{\ast} \overline{\partial}_{x}: \Omega^{0,k}(ad {\cal E}_x)_{s+1-k}\to \Omega^{0,k}(ad {\cal E}_x)_{s-1-k}. $$ We denote by $\Box_x^{-1}(0)^{0,k}$ the set of harmonic forms (the kernel of $\Box_x$) in $\Omega^{0,k}(ad {\cal E}_x)$.\footnote{If we do not put any subscript on $\Omega^{0,\cdot}(\cdot)$, it means the space of smooth sections.} \vskip5pt It will be convenient to fix a local trivialization of ${\cal E}$. Given $x_0\in X$, we realize an open neighborhood $U_0\subset X$ of $x_0$ as $U_0=X_{f_0}$, where $(V_0,f_0)\!:= (V_{x_0}, f_{x_0})$ is the JS chart. We fix a smooth isomorphism $E\simeq {\cal E}_{x_0}$. Since $V_0$ is a complex manifold, by shrinking $V_0$ around $x_0$ if necessary, we may assume that $V_0$ is biholomorphic to an open subset of $\mathbb{C}^n$ for some $n$. We let $z=(z_1,\cdots,z_n)$ be the induced coordinate variables on $V_0$. By abuse of notation, we also use $z$ to denote a general element in $V_0$. Let $\overline{\partial}_0+\mathfrak{a} _0(z)$ be the family of semiconnections on $E$ of the chart $(V_0, f_0)$. Because the family $\mathfrak{a} _0(z)$ of solutions to the system in Example \ref{exn1} is holomorphic in $z$, $\overline{\partial}_z\mathfrak{a} _0(z)=0$. Let $E_{V_0}=V_0\times E$, as a vector bundle over $V_0\times Y$. Let $\overline{\partial}_z$ be the $\overline{\partial}$-operator along the $z$ direction of the product bundle $E_{V_0}=V_0\times E$.
Then $\overline{\partial}_{V_0}\!:= \overline{\partial}_0+\overline{\partial}_z+\mathfrak{a} _0(z)$ is a semiconnection on $E_{V_0}$. It is known (cf. \cite[Chapter 9]{JoSo}, \cite{Miya}) that $\mathfrak{X} _{f_0}=(F^{0,2}_{\overline{\partial}_0+\mathfrak{a} _0(z)}=0)$, which is the same as $((\overline{\partial}_{V_0})^2=0)$ since $\overline{\partial}_z \mathfrak{a} _0(z)=0$. Using $U_0=X_{f_0}$, the restriction $(E_{V_0}, \overline{\partial}_{V_0})|_{U_0}$ is a holomorphic bundle over ${U_0}\times Y$. By construction, it is biholomorphic to ${\cal E}_{U_0}\!:= {\cal E}|_{U_0}$. Let \begin{equation}\label{local} \zeta: (E_{U_0}, \overline{\partial}_{V_0}|_{U_0})\longrightarrow {\cal E}_{U_0} \end{equation} be a biholomorphism. Since ${\cal E}_{x_0}$ is simple, we can assume that $\zeta$ extends the smooth isomorphism $E\simeq {\cal E}_{x_0}$ we began with. In the following, we call $\zeta$ a framing of ${\cal E}$ over $U_0$. Via this framing, we identify $\Omega^{0,k}(ad{\cal E}_x)_s$ with $\Omega^{0,k}(ad E)_s$; the connection form $\mathfrak{a} _0(z)$ locally has a convergent power series expansion in $z$ everywhere over $V_0$, with coefficients which are smooth $ad E$-valued $(0,1)$-forms. Another application of this local framing is that it gives local trivializations of $T_X{\cal B}$. Using the holomorphic family of semiconnections $\overline{\partial}_0+\mathfrak{a} _0(z)$, we embed $V_0$ into ${\cal B}$ as a complex submanifold. (Since $(V_0,f_0)$ is a CS chart, $V_0\subset {\cal B}_{si}$.) Using ${\cal B}={\cal A}/{\cal G}$, we have an induced surjective homomorphism of holomorphic Banach bundles \begin{equation}\label{surj} V_0\times \Omega^{0,1}(ad E)_s\longrightarrow T_{V_0}{\cal B}. \end{equation} By shrinking $V_0$ around $0$ if necessary, the induced $V_0\times \mathrm{ker} (\overline{\partial}_0^{\ast})^{0,1}_s\to T_{V_0}{\cal B}$ becomes an isomorphism of holomorphic Banach bundles.
Since for $x\in X$, $\overline{\partial}_x^{\ast}:\Omega^{0,1}(ad{\cal E}_x)_s\to \Omega^0(ad{\cal E}_x)_{s-1}/\mathbb{C}$ is surjective, $\mathrm{ker} (\overline{\partial}^{\ast})_X^{0,1}\!:= \coprod_{x\in X} \mathrm{ker} (\overline{\partial}_x^{\ast})_s^{0,1}$ is a smooth Banach bundle. The same holds for $\mathrm{ker} (\overline{\partial})_X^{0,2} \subset \Omega_X^{0,2}(ad {\cal E})_{s-1}$. \subsection{Truncated eigenspaces} We form the (partially) truncated eigenspace $$\Theta_x(\eps)\!:= \mathbb{C}\text{-span}\,\{\mathfrak{a} \in \Omega^{0,1}(ad {\cal E}_x)_s\mid \overline{\partial}_x^{\ast} \mathfrak{a} =0,\ \Box_{x}\mathfrak{a} =\lambda \mathfrak{a} , \ \lambda< \eps\}\subset \mathrm{ker} (\overline{\partial}_x^{\ast})^{0,1} $$ and its $(0,2)$-analogue $$\Theta'_x(\eps)\!:= \mathbb{C}\text{-span}\,\{\mathfrak{a} \in \Omega^{0,2}(ad {\cal E}_x)_s\mid \overline{\partial}_x \mathfrak{a} =0,\ \Box_{x}\mathfrak{a} =\lambda \mathfrak{a} , \ \lambda< \eps\}\subset \mathrm{ker} (\overline{\partial}_x)^{0,2}. $$ Let $U\subset X$ be an open subset. We form $$\Theta_U(\eps)=\coprod_{x\in U}\Theta_x(\eps)\subset \mathrm{ker} (\overline{\partial}^{\ast})^{0,1}_X|_U \quad{\rm and}\quad \Theta'_U(\eps)=\coprod_{x\in U}\Theta'_x(\eps)\subset \mathrm{ker} (\overline{\partial})^{0,2}_X|_U. $$ Recall that each $\Box_x$ has discrete non-negative spectrum. We say an $\eps>0$ separates eigenvalues of $\Box_x$ for $x\in U$ if for any $x\in U$, $\eps$ is not an eigenvalue of $\Box_x$. \vskip5pt Using that all ${\cal E}_x$ in $X$ are simple, we have the following vanishing result. \begin{lemm}\label{no} There is a continuous function $X\ni x\mapsto \eps_x\in (0,1)$ such that for any $x\in X$, $\Box_x$ has no eigenforms in $\mathrm{ker} (\overline{\partial}_x)_s^{0,1}$ and $\mathrm{ker} (\overline{\partial}_x^{\ast})^{0,2}_{s-1}$ of eigenvalues $\lambda\in (0,\eps_x)$.
\end{lemm} \begin{proof} For any $x_0\in X$, we pick a connected open $x_0\in U_0\subset X$ such that the closure $\bar U_0\subset X$ is compact. We show that we can find $\eps_0>0$ so that the first statement holds for all $x\in U_0$ with $\eps_x$ replaced by $\eps_0$. Suppose not. Then we can find sequences $x_n\in U_0$, $\alpha_n\in\mathrm{ker} (\overline{\partial}_{x_n})^{0,1}$ and $\lambda_n>0$ such that $\Box_{x_n}\alpha_n=\lambda_n \alpha_n$ and $\lambda_n\to 0$. Using the spectral theory of self-adjoint operators and Hodge theory, and that $\overline{\partial}_{x_n} \alpha_n=0$, we conclude that $\alpha_n=\overline{\partial}_{x_n}\beta_n$ for some $\beta_n\in \mathrm{ker} (\overline{\partial}^{\ast}_{x_n})^0_{s}$. Since $\Box_{x_n}\alpha_n=\lambda_n \alpha_n$, $\overline{\partial}_{x_n}\bigl( \overline{\partial}_{x_n}^{\ast}\overline{\partial}_{x_n}\beta_n-\lambda_n \beta_n\bigr)=0$. Further, by subtracting a constant multiple of $\id_E$, we can assume $\int_Y \tr\beta_n\ast \!1=0$ for all $n$. Since $\bar U_0$ is compact, by passing to a subsequence we can assume $x_n\to x\in \bar U_0$. Then, after normalizing the $L^\ell$ norm of $\beta_n$ to be one, using elliptic estimates we conclude that (after passing to a further subsequence) $\beta_n$ converges to some $\beta\ne 0$ in $\mathrm{ker} (\overline{\partial}^{\ast}_x)^0_s$ satisfying $\overline{\partial}_{x} \overline{\partial}_{x}^{\ast}\overline{\partial}_{x}\beta=0$ and $\int_Y\tr \beta\ast\! 1=0$. From $\overline{\partial}_{x} \overline{\partial}_{x}^{\ast}\overline{\partial}_{x}\beta=0$, we conclude $\overline{\partial}_x\beta=0$. Because ${\cal E}_x$ is simple (by assumption on $\mathfrak{X} $), $\overline{\partial}_x\beta=0$ and $\int_Y\tr\beta\ast\!1=0$ force $\beta=0$, contradicting $\parallel\!\beta\!\parallel_{L^\ell}=1$. Since $X$ is quasi-projective, we can cover $X$ by countably many open subsets each of which has compact closure in $X$.
Applying the proof to each of these open sets, we conclude that a continuous $\eps(\cdot)$ exists making the first statement of the lemma hold. The proof of the statement on $\mathrm{ker} (\overline{\partial}_x^{\ast})_{s-1}^{0,2}$ is parallel, using that for every $x\in X$, because $Y$ is a Calabi-Yau threefold, $\Box_x^{-1}(0)^{0,3}\cong \Box_x^{-1}(0)^{0}\cong \mathbb{C}$. This proves the lemma. \end{proof} We will fix this function $\eps(\cdot)$ in the remainder of this section. \begin{lemm} \label{cont} Let $U\subset X$ be connected and open, and let $\eps_0\in (0,\eps_x)$ separate the eigenvalues of $\Box_x$ for all $x\in U$. Then $\Theta_U(\eps_0)$ (resp. $\Theta_U'(\eps_0)$) is a smooth subbundle of $\mathrm{ker} (\overline{\partial}^{\ast})^{0,1}_X|_U$ (resp. $\mathrm{ker} (\overline{\partial})^{0,2}_X|_U$). \end{lemm} \begin{proof} Since $\eps_0$ is not an eigenvalue of $\Box_x$ for any $x\in U$, and since $U$ is connected, the spaces $$\tilde \Theta_{x}(\eps_0)=\text{span} \{ \mathfrak{a} \in \Omega^{0,1}(ad E_{x})_s\mid \Box_{x}\mathfrak{a} =\lambda\mathfrak{a} , \ \lambda<\eps_0\}, \quad x\in U, $$ have the same dimension and form a smooth (finite rank) vector bundle over $U$ (cf. \cite{Kato}). Because $\eps_0<\eps_x$ for all $x\in U$, $\tilde \Theta_{x}(\eps_0)\cap \image(\overline{\partial}_{x})=\{0\}$. Therefore $\tilde \Theta_{x}(\eps_0)=\Theta_{x}(\eps_0)$ for all $x\in U$. This proves that $\Theta_{U}(\eps_0)$ is a smooth bundle over $U$, and thus a smooth subbundle of $\mathrm{ker} (\overline{\partial}^{\ast})^{0,1}_X|_U$. The proof for $\Theta'_U(\eps_0)$ is identical. \end{proof} \subsection{Complexification}\label{sec4.3} We recall the notion of analytic families. Let $D\subset \mathbb{C}^n$ be an open subset, with $z=(z_1,\cdots, z_n)$ its induced coordinate variables, and $\mathrm{Re}\, z=(\mathrm{Re}\, z_1,\cdots, \mathrm{Re}\, z_n)$, and similarly for $\mathrm{Im}\, z$.
An analytic function on $D$ is a smooth $\mathbb{C}$-valued function on $D$ that locally has convergent power series expansions in $\mathrm{Re}\, z$ and $\mathrm{Im}\, z$. We denote by ${\mathscr O}_D^{an}$ the sheaf of analytic functions on $D$. An analytic section of ${\mathscr O}_D^{\oplus r}$ is a section of the sheaf ${\mathscr O}_D^{\oplus r}\otimes_{{\mathscr O}_D}{\mathscr O}_D^{an}$. \begin{defi} Let $F\to U$ be a holomorphic vector bundle over a reduced complex analytic subspace $U$. We say a continuous section $s\in C^0(U,F)$ is analytic if at every $x\in U$, there is an open neighborhood $U_0\subset U$ of $x\in U$, a holomorphic trivialization $F|_{U_0}\cong {\mathscr O}_{U_0}^{\oplus r}$ and a closed holomorphic embedding $U_0\subset D$ into a smooth complex manifold $D$ such that $s|_{U_0}$ is the restriction of an analytic section of ${\mathscr O}_D^{\oplus r}$. We say a rank $l$ complex subbundle $F'\subset F$ is analytic if locally $F'$ is spanned by $l$ analytic sections of $F$. In case $F$ is a holomorphic Banach vector bundle over $U$, the same definition holds with ${\mathscr O}_{U_0}^{\oplus r}$ replaced by the local holomorphic Banach bundle trivializations $F|_{U_0}\cong B\times U_0$, for Banach spaces $B$. \end{defi} The purpose of this subsection is to prove \begin{prop}\label{ana} Let the situation be as in Lemma \ref{cont}. Then the bundle $\Theta_U(\eps_0)$ is an analytic subbundle of $T_{U}{\cal B}$. \end{prop} We will prove the proposition after we construct a complexification of the family $\overline{\partial}_x^{\ast}$. Since this is a local study, for any $x_0\in U$, we pick an open neighborhood $U_0\subset U$ and fix an isomorphism $\zeta$ (cf. \eqref{local}) derived from realizing $U_0=X_{f_0}$ for the JS chart $(V_0,f_0)= (V_{x_0}, f_{x_0})$. Let $D\subset V_0$ be an open neighborhood of $0\in V_0$.
As in the discussion leading to \eqref{local}, we denote by $\overline{\partial}_0+\mathfrak{a} _0(z)$ the family of semiconnections on $E_{V_0}=E\times V_0$ over $V_0\times Y$. The connection form $\mathfrak{a} _0(z)$ is an $\Omega^{0,1}(ad E)$-valued holomorphic function on $D$, with $\mathfrak{a} _0(0)=0$. Writing $z_k=u_k+iv_k$, we can view $D$ as an open subset of $\mathbb{R}^{2n}$, where $\mathbb{R}^{2n}$ has coordinate variables $(u_1,\cdots, u_n,v_1,\cdots, v_n)$. By allowing $u_k$ and $v_k$ to take complex values, we embed $\mathbb{R}^{2n}\subset \mathbb{C}^{2n}$, thus embed $D\subset \mathbb{C}^{2n}$ as a (totally real) analytic subset. We call an open $D^\mathbb{C}\subset \mathbb{C}^{2n}$ a complexification of $D$ if $D^\mathbb{C}\cap \mathbb{R}^{2n}=D$. We use $w$ to denote the complex coordinate variables of $\mathbb{C}^{2n}$. \begin{lemm}\label{ex-1} We can find a complexification $D^\mathbb{C}\supset D$ such that the function $\mathfrak{a} _0(z)$ extends to a holomorphic $\mathfrak{a} _0(\cdot)_\mathbb{C}: D^\mathbb{C}\to \Omega^{0,1}(ad E)$. \end{lemm} \begin{proof} The extension is standard. Since $\mathfrak{a} _0(z)$ is derived from the JS chart, it is holomorphic in $z$. Thus for any $\alpha=(\alpha_1,\cdots,\alpha_n)\in D$, $\mathfrak{a} _0(z)$ equals a convergent power series in $(z_k-\alpha_k)$ in a small disk centered at $\alpha$ with coefficients in $\Omega^{0,1}(ad E)$. Letting $\alpha_k=a_k+i b_k$, $a_k, b_k\in \mathbb{R}$, and writing $z_k=u_k+i v_k$, the power series becomes a power series in $(u_k-a_k)$ and $(v_k-b_k)$. Because $u_k$ and $v_k$ are complex coordinate variables of $\mathbb{C}^{2n}\supset \mathbb{R}^{2n}\supset D$, $\mathfrak{a} _0(z)$ extends to a holomorphic $\Omega^{0,1}(ad E)$-valued function in a small neighborhood of $\alpha$ in $\mathbb{C}^{2n}$.
Because the extension of a function defined on an open subset of $\mathbb{R}^{2n}$ to a germ of a holomorphic function on $\mathbb{C}^{2n}$ is unique, the various extensions of $\mathfrak{a} _0(z)$ using power series expansions at various $\alpha\in D$ give a single extension of $\mathfrak{a} _0(z)$ to a holomorphic $\mathfrak{a} _0(w)_\mathbb{C}$ on some complexification $D^\mathbb{C}\supset D$. \end{proof} For $D\subset V_0$ a neighborhood of $0=x_0\in V_0$, we denote $$O_0\!:= D\cap X_{f_0}=D\cap X. $$ For $x\in O_0$, we write $\overline{\partial}^{\ast}_x=\overline{\partial}_0^{\ast}+\mathfrak{a} _0(x)^\dag$. The extension problem for $\mathfrak{a} _0(x)^\dag$ is more delicate because it is not defined away from $O_0$. \begin{lemm} For any $y\in Y$, there is an open neighborhood $S\subset Y$ of $y\in Y$ and an open neighborhood $D\subset V_0$ of $0\in V_0$ so that the hermitian metric $h|_{O_0\times S}$ extends to an $L_{s+2}^\ell$ hermitian metric on $E_{D\times S}\!:= E_D|_{D\times S}$, analytic in $z\in D$. \end{lemm} \begin{proof} Let $S\subset Y$ be an open neighborhood of $y$ so that $S$ is biholomorphic to the unit ball in $\mathbb{C}^3$, and that ${\cal E}|_{O_0\times S}\cong {\mathscr O}_{O_0\times S}^{\oplus r}$ and ${\cal H}|_S\cong {\mathscr O}_S$. We let $k_S$ be the hermitian norm of the section $1$ of ${\mathscr O}_S\cong {\cal H}|_S$ with respect to the hermitian metric of ${\cal H}$ fixed earlier. Then $k_S$ is a smooth positive function on $S$. We let $s_1,\cdots, s_r$ be the standard basis of ${\cal E}|_{O_0\times S}\cong {\mathscr O}_{O_0\times S}^{\oplus r}$. Because ${\cal E}\subset p_Y^{\ast} {\cal H}^{\oplus N}$ is a vector subbundle over $X\times Y$, using ${\cal H}|_S\cong {\mathscr O}_S$, the image of $s_k$ in $p_Y^{\ast}{\cal H}^{\oplus N}|_{O_0\times S}$ has the presentation $s_k=(s_{k,1},\cdots,s_{k,N})$, where $s_{k,j}\in\Gamma({\mathscr O}_{O_0\times S})$.
Then the hermitian metric form of $h$ on ${\cal E}|_{O_0\times S}$ in the basis $s_1,\cdots, s_r$ takes the form \begin{equation}\label{hz} h(s_k,s_l)=k_S\cdot \sum_j s_{k,j}\overline {s_{l,j}}. \end{equation} To extend this expression over $D\times S$, we will modify the semiconnection $\overline{\partial}_{D\times S}\!:= \overline{\partial}_0+\overline{\partial}_z+\mathfrak{a} _0|_{D\times S}$ to an integrable semiconnection $\overline{\partial}_{D\times S}'$ and extend $s_1,\cdots, s_r$ to holomorphic sections of $(E_{D\times S}, \overline{\partial}_{D\times S}')$. Let $\mathfrak{m}\subset {\mathscr O}_D$ be the maximal ideal generated by $z_1,\cdots, z_n$, and let $I\subset {\mathscr O}_D$ be the ideal sheaf of $D\cap\mathfrak{X} \subset D$. Then $F_{\overline{\partial}_0+\mathfrak{a} _0}^{0,2}\equiv 0\!\!\mod I$. We construct $\overline{\partial}_{D\times S}'$ by power series expansion. We let $s'=s+2$, and set $\mathfrak b_0(z)=0$. Suppose we have found $\mathfrak b_k(z) \in \Omega^{0,1}(ad E|_S)_{s'}\otimes_\mathbb{C} I$ such that \begin{equation}\label{k0} \overline{\partial}_0\mathfrak b_k(z)\in \Omega^{0,2}(ad E|_S)_{s'-1}\otimes_\mathbb{C} I\quad{\rm and}\quad F^{0,2}_{\overline{\partial}_0+\mathfrak{a} _0(z)+\mathfrak b_k(z)} \equiv 0\!\!\mod \mathfrak{m}^k\cap I, \end{equation} where $F^{0,2}_{\overline{\partial}_0+\mathfrak{a} _0(z)+\mathfrak b_k(z)}\!:= (\overline{\partial}_0+\mathfrak{a} _0(z)+\mathfrak b_k(z))^2|_{D\times S}$. Then by the Bianchi identity, using $\mathfrak{a} _0(0)=\mathfrak b_k(0)=0$, we have \begin{equation}\label{Bian} \overline{\partial}_0 F^{0,2}_{\overline{\partial}_0+\mathfrak{a} _0(z)+\mathfrak b_k(z)}\equiv (\overline{\partial}_0+\mathfrak{a} _0(z)+\mathfrak b_k(z))F^{0,2}_{\overline{\partial}_0+\mathfrak{a} _0(z)+\mathfrak b_k(z)} \equiv 0\!\!\mod \mathfrak{m}^{k+1}\cap I.
\end{equation} We let $\phi_\alpha\in \mathfrak{m}^k\cap I$, $\alpha\in\Lambda_{k}$, be a $\mathbb{C}$-basis of $(\mathfrak{m}^k\cap I)/(\mathfrak{m}^{k+1}\cap I)$; we write $$F^{0,2}_{\overline{\partial}_0+\mathfrak{a} _0(z)+\mathfrak b_k(z)}\equiv \sum_{\alpha\in\Lambda_k} A_\alpha\phi_\alpha\!\!\mod \mathfrak{m}^{k+1}\cap I. $$ Because $\mathfrak{a} _0(z)$ is from the family on the JS chart, $\mathfrak{a} _0(z)\in \Omega^{0,1}(ad E)\otimes_\mathbb{C} I$ (i.e. is smooth). Thus using \eqref{k0} and \eqref{Bian}, we conclude that $A_\alpha\in\Omega^{0,2}(ad E|_S)_{s'}$, and $\overline{\partial}_0 A_\alpha=0$. Since $S$ is biholomorphic to the unit ball in $\mathbb{C}^3$, it is strictly pseudo-convex with smooth boundary. Applying a result on solving the $\overline{\partial}$-equation with $L_s^\ell$ estimates (cf. \cite[Theorem 6.11]{SCV}), there is a constant $C$ depending only on $S$ such that for each $\alpha\in \Lambda_k$, we can find $B_\alpha\in \Omega^{0,1}(ad E|_S)_{s'}$ such that \begin{equation}\label{est} \overline{\partial}_0 B_\alpha=A_\alpha\quad{\rm and}\quad \parallel\! B_\alpha\!\parallel_{L_{s'}^2(S)}\leq C \parallel\! A_\alpha\!\parallel_{L_{s'}^2(S)}. \end{equation} We define $\delta_{k}(z)=-\sum_{\alpha\in\Lambda_k} B_\alpha\phi_\alpha$, so that the leading term of the curvature error is cancelled, and let $\mathfrak b_{k+1}(z)=\mathfrak b_k(z)+\delta_k(z)$. Then \eqref{k0} holds with $k$ replaced by $k+1$. We consider the infinite sum $\sum_{k=1}^\infty \delta_k(z)$. Using the estimate \eqref{est}, and Morrey's inequality $\parallel\! u\!\parallel_{C^{0,1-6/\ell}(S)}\leq C' \parallel\! u\!\parallel_{L_1^\ell(S)}$ for the domain $S$, a standard power series convergence argument (cf. \cite[Section 5.3(c)]{Kodaira}) shows that possibly after shrinking $0\in D$ and $y\in S$, $\sum_{k=1}^\infty \delta_k|_{D\times S}$ converges to a $\mathfrak b(z)\in \Omega^{0,1}(ad E|_{S})_{s'}\otimes_\mathbb{C} I$.
Then the semiconnection $$\overline{\partial}_{D\times S}'\!:= \overline{\partial}_0+\overline{\partial}_z+\mathfrak{a} _0(z)+\mathfrak b(z) $$ on $E_{D\times S}$ is integrable and is the desired modification. We now extend the metric. Possibly after shrinking $0\in D$ and $y\in S$, we can assume that the subbundle homomorphism ${\cal E}|_{O_0\times S}\to p_Y^{\ast} {\cal H}^{\oplus N}|_{O_0\times S}$ extends to a subbundle homomorphism $$g: (E_{D\times S},\overline{\partial}_{D\times S}') \longrightarrow p_Y^{\ast} {\cal H}^{\oplus N}|_{D\times S}; $$ the sections $s_1,\cdots, s_r$ extend to holomorphic sections $\tilde s_1,\cdots, \tilde s_r$ of $(E_{D\times S},\overline{\partial}'_{D\times S})$ that span the bundle $E_{D\times S}$. Using ${\cal H}|_S\cong {\mathscr O}_S$, and writing $g(\tilde s_k)=(\tilde s_{k,1},\cdots, \tilde s_{k,N})$, we define \begin{equation}\label{hs} h_{S}(\tilde s_k,\tilde s_l)\!:= k_S \cdot \sum_{j=1}^N \tilde s_{k,j}\overline{\tilde s_{l,j}}, \end{equation} which defines a hermitian metric $h_S$ of $E_{D\times S}$, extending the metric \eqref{hz}. It remains to express the metric $h_S$ in a basis constant along $D$. We let $e_k=s_k|_{0\times S}$; $e_1,\cdots, e_r$ form a smooth basis of $E|_S$. We let $\tilde e_k$ be the pullback of $e_k$ via the tautological projection $E_{D\times S}\to E|_S$. Under this basis, $\mathfrak{a} _0(z)+\mathfrak b(z)$ becomes an $r\times r$-matrix whose entries are $\Omega^{0,1}(S)_{s'}$-valued holomorphic functions over $D$. Let $c_{kj}$ be functions so that $\tilde s_k=\sum_j c_{kj}\tilde e_j$. Because $\tilde s_k$ are $\overline{\partial}_{D\times S}'$ holomorphic, using $\overline{\partial}_z \tilde e_k=\overline{\partial}_0\tilde e_k=0$, we have $$\overline{\partial}_z c_{kj}+(\mathfrak{a} _0(z)+\mathfrak b(z))_{ki}c_{ij}+\overline{\partial}_0 c_{kj}=0.
$$ Since only the term $\overline{\partial}_z c_{kj}$ takes values in $(0,1)$-forms of $D$ (the others take values in $(0,1)$-forms of $S$), we have $\overline{\partial}_z c_{kj}=0$. Therefore, $c_{kj}$ are holomorphic in $z$. This proves that the hermitian metric form of $h_S$ under the basis $\tilde e_1,\cdots,\tilde e_r$ is real analytic in $(\text{Re}\,z, \text{Im}\, z)$. Finally, we add that $\tilde s_i$ and $\tilde e_j$ are $L_{s'}^\ell$, and thus the $c_{kj}$ lie in $L_{s'}^\ell=L_{s+2}^\ell$. This proves that the metric $h_S$ is $L_{s+2}^\ell$. \end{proof} \begin{coro}\label{cor3} There is an open neighborhood $0\in D\subset V_0$ and an $L_{s+2}^\ell$ hermitian metric $\tilde h_D$ on $E_D$ such that $\tilde h_D$ is analytic in $z\in D$ and extends $h|_{O_0\times Y}$. \end{coro} \begin{proof} By the previous lemma, for any $y\in Y$, we can find an open neighborhood $D\times S\subset V_0\times Y$ of $(0,y)\in V_0\times Y$ and a hermitian metric $\tilde h_{S}$ on $E_{D\times S}$ that extends $h|_{O_0\times S}$, and is analytic in $z\in D$. Because $Y$ is compact, we can cover $Y$ by finitely many such open sets $S_a$, $a=1,\cdots, l$, paired with $0\in D_a\subset V_0$. Let $D_0=\cap D_a$. Then $0\in D_0\subset V_0$ is open and $\tilde h_{S_a}$ are defined over $D_0\times S_a$. We then pick a smooth partition of unity $\sum_{a=1}^l \chi_a=1$ with $\chi_a: Y\to [0,1]$ such that the closure $\overline{(\chi_a>0)}$ lies in $S_a$. Then $\tilde h_D=\sum_{a=1}^l \chi_a\cdot \tilde h_{S_a}$ is an $L_{s+2}^\ell$ hermitian metric on $E_{D_0}$ that is analytic in $z\in D_0$, and extends $h|_{O_0\times Y}$. \end{proof} \begin{lemm}\label{comp2} Let the hermitian metric $\tilde h_D$ on $E_D$ be given by Corollary \ref{cor3}, and let $\overline{\partial}_z^{\ast}=\overline{\partial}_0^{\ast}+\mathfrak{a} _0(z)^\dag$, $z\in D$, be the formal adjoint of $\overline{\partial}_z$ using the hermitian metric $\tilde h_z\!:= \tilde h_D|_{z\times Y}$.
Then we can find a complexification $D^\mathbb{C}\supset D$ such that the function $\mathfrak{a} _0(z)^\dag$ extends to a holomorphic $\mathfrak{a} _0(\cdot)_\mathbb{C}^\dag: D^\mathbb{C}\to \Omega^0(ad E\otimes_\mathbb{C} T_Y^{0,1})_s$. \end{lemm} \begin{proof} Using the explicit dependence of $\overline{\partial}_z^{\ast}\!:= (\overline{\partial}_0+\mathfrak{a} _0(z))^{\ast}$ on the metric $\tilde h_z$, we see immediately that $\mathfrak{a} _0(z)^\dag\!:= \overline{\partial}_z^{\ast}-\overline{\partial}_0^{\ast} \in \Omega^0(ad E\otimes_\mathbb{C} T_Y^{0,1})_s$ is analytic in $(\text{Re}\,z, \text{Im}\, z)$. Following the proof of Lemma \ref{ex-1}, there is a complexification $D^\mathbb{C}\supset D$ such that $\mathfrak{a} _0(z)^\dag$ extends to $\mathfrak{a} _0(w)_\mathbb{C}^\dag$, defined over $D^\mathbb{C}$ and holomorphic in $w\in D^\mathbb{C}$. Here we have used that $\tilde h_D$ is $L_{s+2}^\ell$ to ensure that $\mathfrak{a} _0(w)_\mathbb{C}^\dag$ are $L_s^\ell$. \end{proof} In the remainder of this section, we fix a complexification $D^\mathbb{C}\supset D$ so that both $\mathfrak{a} _0(z)$ and $\mathfrak{a} _0(z)^\dag$ extend holomorphically to $\mathfrak{a} _0(w)_\mathbb{C}$ and $\mathfrak{a} _0(w)_\mathbb{C}^\dag$ on $D^\mathbb{C}$. We define \begin{equation}\label{OC} O_0^\mathbb{C}=\bigl( F_{\overline{\partial}_0+\mathfrak{a} _0(w)_\mathbb{C}}^{0,2}=0\bigr)_{\mathrm{red}}\subset D^\mathbb{C}. \end{equation} For $w\in D^\mathbb{C}$, we define $\overline{\partial}_w^{\ast}=\overline{\partial}_0^{\ast}+\mathfrak{a} _0(w)^\dag_\mathbb{C}$. \begin{coro}\label{red} We have $(\overline{\partial}_w^{\ast})^2|_{O_0^\mathbb{C}}=0$. \end{coro} \begin{proof} Via the holomorphic map $\eta: D^\mathbb{C}\to V_0$, $w=(u_1,\cdots,u_n,v_1,\cdots, v_n)\mapsto z=(u_1+iv_1,\cdots, u_n+iv_n)$, we see that the pullback $\mathfrak{a} _0(\eta(w))$ is a holomorphic extension of $\mathfrak{a} _0(z)$.
Thus by the uniqueness of holomorphic extension, we have $\mathfrak{a} _0(w)_\mathbb{C}=\mathfrak{a} _0(\eta(w))$. Hence $O_0^\mathbb{C}=\eta^{-1}(O_0)$. In particular, every irreducible component $A\subset O_0$ has its complexification $A_\mathbb{C}=\eta^{-1}(A)$, and vice versa (cf. \cite[Proposition 5.3]{Nar}). Since $(\overline{\partial}_w^{\ast})^2$ is holomorphic and vanishes along $O_0$, by studying its vanishing near a general point of any irreducible component $A$ of $O_0$, and noticing that ${O_0^\mathbb{C}}$ carries the reduced analytic subspace structure, we conclude that $(\overline{\partial}_w^{\ast})^2|_{O_0^\mathbb{C}}=0$. \end{proof} We now complexify $\Theta_D(\eps_0)$ using the span of generalized eigenvectors of the ``Laplacian'' of $\overline{\partial}_w$. We define $$\Box_w=\overline{\partial}_w\overline{\partial}_w^{\ast}+\overline{\partial}_w^{\ast}\overline{\partial}_w: \Omega^{0,j}(ad E)_s\longrightarrow \Omega^{0,j}(ad E)_{s-2}, \quad w\in D^\mathbb{C}. $$ This is a family of second order elliptic operators, holomorphic in $w\in D^\mathbb{C}$, whose symbols are identical to those of $\Box_{x_0}$. \begin{lemm} Let the notation be as before. Suppose $\eps_0>0$ separates the eigenvalues of $\Box_z$ and $\eps_0<\eps(z)$ for all $z\in O_0$. Then we can choose $D^\mathbb{C}\supset D$ such that $\Theta_{O_0}(\eps_0)\subset O_0\times \Omega^{0,1}(ad E)_s$ extends to a holomorphic subbundle $\Theta_{O_0^\mathbb{C}}(\eps_0) \subset O_0^\mathbb{C}\times \Omega^{0,1}(ad E)_s$. \end{lemm} \begin{proof} We extend $\Theta_D(\eps_0)$ to $D^\mathbb{C}$ using the generalized eigenforms of $\Box_w$. Since $\Box_w$ is holomorphic in $w$, and since $1+\Box_z$, $z\in D$, are invertible, by \cite[page 365]{Kato} after shrinking $D^\mathbb{C}\supset D$ if necessary, the family $$(1+\Box_w)^{-1}: \Omega^{0,1}(ad E)_s\longrightarrow \Omega^{0,1}(ad E)_s, \quad w\in D^\mathbb{C}, $$ is a holomorphic family of bounded operators. We now extend $\Theta_D(\eps_0)$.
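Before carrying this out, we record the standard mechanism behind the decomposition used below (a sketch of the Riesz projection underlying \cite[Theorem VII-1.7]{Kato}): if $T_w$ is a bounded operator depending holomorphically on $w$ and $\Gamma\subset\mathbb{C}$ is a contour contained in the resolvent set of $T_w$ for all $w$ in question, then $$P_w=\frac{1}{2\pi i}\oint_{\Gamma}(\zeta-T_w)^{-1}\, d\zeta $$ is a bounded projection commuting with $T_w$ and holomorphic in $w$, whose image is spanned by the generalized eigenspaces of $T_w$ for the part of the spectrum enclosed by $\Gamma$. Applying this with $T_w=(1+\Box_w)^{-1}$ produces holomorphic families of $(1+\Box_w)^{-1}$-invariant subspaces of $\Omega^{0,1}(ad E)_s$.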
First, note that $\lambda$ is in the spectrum of $\Box_z$ if and only if $(1+\lambda)^{-1}$ is in the spectrum of $(1+\Box_z)^{-1}$, and they have identical associated spaces of generalized eigenforms. Since $\Box_z$, $z\in D$, has discrete spectrum (eigenvalues) and $\eps_0$ is not an eigenvalue, for any $x\in D$, we can pick a small open neighborhood $D_x\subset D$ of $x\in D$ and a sufficiently small $\delta$, $\eps_0\gg \delta>0$, so that no eigenvalues of $(1+\Box_z)^{-1}$ for $z\in D_x$ lie in $\big||\lambda|-(1+\eps_0)^{-1}\big|<\delta$. Then by the continuity of the spectrum, we can find an open $D_x^\mathbb{C}\subset D^\mathbb{C}$, $D_x^\mathbb{C}\cap D=D_x$, such that no $(1+\Box_w)^{-1}$, $w\in D_x^\mathbb{C}$, has spectrum in the region $\big||\lambda|-(1+\eps_0)^{-1}\big|<\delta/2$. Applying \cite[Theorem VII-1.7]{Kato}, over $D_x^\mathbb{C}$ we have decompositions $\Omega^{0,1}(ad E)_s=E_{1,w}\oplus E_{2,w}$ such that $E_{1,w}$ and $E_{2,w}$ are holomorphic in $w$, invariant under $(1+\Box_w)^{-1}$, and $T_{i,w}\!:= (1+\Box_w)^{-1}|_{E_{i,w}}: E_{i,w}\to E_{i,w}$ has its spectrum in $|\lambda|>(1+\eps_0)^{-1}$ for $i=1$ and in $|\lambda|<(1+\eps_0)^{-1}$ for $i=2$. For us, the key property is that $E_{1,w}=\Theta_w(\eps_0)$ when $w\in D_x$. For $w\in D_x^\mathbb{C}$, we define $\Theta_w(\eps_0)=E_{1,w}$. Then $\Theta_{D_x^\mathbb{C}}(\eps_0)\!:= \coprod_{w\in D_x^\mathbb{C}}\Theta_w(\eps_0)$ extends $\Theta_D(\eps_0)$ holomorphically to $D_x^\mathbb{C}$. By covering $D$ by open subsets like $D_x$, and using that the holomorphic extensions of $\Theta_D(\eps_0)$ are unique when they exist, we conclude that for a complexification $D^\mathbb{C}\supset D$, $$\Theta_{D^\mathbb{C}}(\eps_0)\!:= \coprod_{w\in D^\mathbb{C}} \Theta_w(\eps_0)\subset D^\mathbb{C}\times \Omega^{0,1}(ad E)_s $$ extends $\Theta_D(\eps_0)$ and is a holomorphic bundle over $D^\mathbb{C}$.
\end{proof} \begin{coro}\label{coro} $\Theta_{U_0}(\eps_0)$ is an analytic subbundle of $T_{U_0}{\cal B}$. \end{coro} \begin{proof} Applying the surjective homomorphism \eqref{surj}, and using the complexification constructed, the conclusion follows. \end{proof} \subsection{The existence of orientation bundles} We prove Proposition \ref{pExOr}. We begin with a rephrasing of the non-degeneracy condition for $cs_{2,x}$. We define a pairing \begin{equation}\label{c} (\cdot,\cdot)_x: \Omega^{0,1}(ad{\cal E}_x)_s\times \Omega^{0,2}(ad {\cal E}_x)_{s-1}\longrightarrow \mathbb{C}, \quad x\in X, \end{equation} via $(\mathfrak{a} _1,\mathfrak{a} _2)_x=\frac{1}{8\pi^2}\int \tr(\mathfrak{a} _1\wedge\mathfrak{a} _2)\wedge\Omega$. It relates to the quadratic form $cs_{2,x}$ via \begin{equation}\label{d} cs_{2,x}(\mathfrak{a} ,\mathfrak b)=(\mathfrak{a} ,\overline{\partial}_x\mathfrak b)_x, \quad \mathfrak{a} ,\mathfrak b\in T_x{\cal B}\cong \mathrm{ker} (\overline{\partial}_x^{\ast})_s^{0,1}. \end{equation} Given a subspace $W\subset T_x{\cal B}\cong \mathrm{ker} (\overline{\partial}_x^{\ast})_s^{0,1}$ that contains $T_x\mathfrak{X} \cong \Box_x^{-1}(0)^{0,1}$, we define its companion spaces by \begin{equation}\label{pri} W'=\Box_x^{-1}(0)^{0,2}\oplus \overline{\partial}_x(W)\quad{\rm and}\quad W^{\prime\prime}\!:= \Box_x^{-1}(0)^{0,1}\oplus \Box_x(W). \end{equation} Recall that $Q_x$ is the descent of $cs_{2,x}$ to $T_x{\cal B}/T_x\mathfrak{X} $. \begin{lemm}\label{perf} Let $W\subset T_x{\cal B}$ be a subspace containing $T_x\mathfrak{X} $, and let $W'$ be its companion space. Then $Q_{x}|_{W/T_x\mathfrak{X} }$ is non-degenerate if and only if the restricted pairing $(\cdot,\cdot)_x: W\times W'\to \mathbb{C}$ is a perfect pairing.
\end{lemm} \begin{proof} Since $Y$ is a Calabi-Yau threefold, by Serre duality, the pairing $(\cdot,\cdot)_x$ restricted to $\Box_x^{-1}(0)^{0,1}\times\Box_x^{-1}(0)^{0,2}$ is perfect. Let $e_{1},\cdots, e_{l}\in W$ be so that $\overline{\partial}_x e_{1},\cdots,$ $\overline{\partial}_x e_{l}$ form a basis of $\overline{\partial}_x(W)$. By Hodge theory, $e_{1},\cdots, e_{l}$ and $\Box_x^{-1}(0)^{0,1}$ span $W$. By \eqref{d}, and the fact that $\Box_x^{-1}(0)^{0,1}$ is orthogonal to $\image(\overline{\partial}_x)$ under $(\cdot,\cdot)_x$, $Q_x$ is non-degenerate on $W/T_x\mathfrak{X} =W/\Box_x^{-1}(0)^{0,1}$ if and only if $(e_{i},\overline{\partial}_x e_{j})_x$ form an invertible $l\times l$ matrix, which is equivalent to the pairing $(\cdot,\cdot)_x$ on $W\times W'$ being perfect. This proves the lemma. \end{proof} \begin{lemm} Let $x\in X$ and $r\ge d_x=\dim T_x\mathfrak{X} $ be an integer. Then we can find an open neighborhood $U\subset X$ of $x$ such that there exists a rank $r$ orientation bundle over $U$. \end{lemm} \begin{proof} We first recall an easy fact. Let $q$ be a non-degenerate quadratic form on $\mathbb{C}^n$. Let $0<l\le n$ be an integer, and let $Gr(l,\mathbb{C}^n)$ be the Grassmannian of $l$-dimensional subspaces of $\mathbb{C}^n$. We introduce \begin{equation}\label{Gr} Gr(l,\mathbb{C}^n)^\circ=\{[S]\in Gr(l,\mathbb{C}^n) \mid q|_S\ \text{is non-degenerate}\}. \end{equation} Since $q$ is non-degenerate, it is straightforward to check that $Gr(l,\mathbb{C}^n)^\circ$ is the complement of a divisor in $Gr(l,\mathbb{C}^n)$, and thus is connected and smooth. We now prove the lemma. We first treat the case $r=d_x$. Since $\Box_x$ has non-negative discrete eigenvalues, there is an $\eps>0$ so that it has no eigenvalues in $(0,2\eps)$. By Corollary \ref{coro}, over an open neighborhood $U\subset X$ of $x$, $\Theta_U(\eps)$ is an analytic subbundle of $T_U{\cal B}$.
To show that it is an orientation bundle, we only need to verify that for any $y\in U$, $Q_y$ restricted to $\Theta_y(\eps)/\Box_y^{-1}(0)^{0,1}$ is non-degenerate. By Lemma \ref{perf}, this is equivalent to the pairing $$(\cdot,\cdot)_y: \Theta_y(\eps)\times \Theta_y(\eps)'\longrightarrow \mathbb{C} $$ being perfect. Because it is perfect at $y=x$, and because being perfect is an open condition, possibly after shrinking $x\in U$ if necessary, it is perfect for every $y\in U$. This proves the case $r=d_x$. In case $l=r-d_x>0$, applying the discussion at the beginning of this proof, we find an $l$-dimensional subspace $W\subset T_x{\cal B}/T_x\mathfrak{X} $ so that $Q_x|_W$ is non-degenerate. We let $\Xi_x$ be the preimage of $W$ under the quotient map $T_x{\cal B}\to T_x{\cal B}/T_x\mathfrak{X} $. To complete the proof, we extend $\Xi_x$ to a neighborhood of $x\in X$ and show that it is an orientation bundle. We pick an open neighborhood $U\subset X$ of $x\in X$ such that the isomorphism $\zeta$ in \eqref{local} has been chosen; we pick a basis of $W$, say $u_1,\cdots, u_l\in \mathrm{ker} (\overline{\partial}_x^{\ast})^{0,1}_s/\Box_x^{-1}(0)^{0,1} =\image(\overline{\partial}_x^{\ast})^{0,1}_s$. We then extend $u_k$ to the constant section of $U\times \Omega^{0,1}(ad E)_s$ and let $\tilde u_k$ be its image section in $T_U{\cal B}$. It is a holomorphic extension of $u_k$. By shrinking $x\in U$ if necessary, $\Theta_U(\eps)$ and the sections $\tilde u_1,\cdots,\tilde u_l$ span a subbundle $\Xi$ of $T_U{\cal B}$. Because $\Theta_U(\eps)$ is an analytic subbundle of $U\times \Omega^{0,1}(ad E)_s$, and because the $\tilde u_k$ are holomorphic, we see that $\Xi$ is an analytic subbundle of $T_U{\cal B}$ and contains $\Theta_U(\eps)$. It remains to check that for any $y\in U$, $Q_y$ restricted to $\Xi_y/T_y\mathfrak{X} $ is non-degenerate.
By the previous lemma, this is equivalent to the fact that the pairing $(\cdot,\cdot)_y: \Xi_y\times\Xi_y'\to\mathbb{C}$ is perfect, where $\Xi_y'$ is the companion space of $\Xi_y$ (cf. \eqref{pri}). Because this pairing is perfect when $y=x$, by shrinking $x\in U$ if necessary, we can make it perfect for all $y\in U$. Therefore, $\Xi$ is an orientation bundle over $U$. \end{proof} \begin{lemm} Suppose $\Xi_\alpha$ and $\Xi_\beta$ are two rank $r$ orientation bundles over an open $U$. Then for any ${x}\in U$, there is an open neighborhood $U_0\subset U$ of ${x}$ such that there is a homotopy from $\Xi_\alpha|_{U_0}$ to $\Xi_\beta|_{U_0}$. \end{lemm} \begin{proof} We begin with an easy fact. For any finite dimensional subspace $W_0\subset {\cal W}\!:= T_x{\cal B}/T_x\mathfrak{X} $, there is a finite dimensional subspace $W\subset {\cal W}$, containing $W_0$, such that $Q_x|_{W}$ is non-degenerate. Indeed, let $N\subset W_0$ be the null-subspace of $Q_x|_{W_0}$. Since $Q_x$ is non-degenerate, we can find a subspace $M\subset {\cal W}$ such that $W_0\cap M=0$ and $Q_x: N\times M\to \mathbb{C}$ is perfect. Then the space $W=W_0\oplus M\subset {\cal W}$ is the desired subspace. We now construct the desired homotopy. We first find a finite dimensional subspace $W\subset {\cal W}$ so that it contains both $\Xi_\alpha|_{x}/T_{x}\mathfrak{X} $ and $\Xi_\beta|_{x}/T_{x}\mathfrak{X} $, and $Q_{x}$ is non-degenerate on $W$. Let $l=r-d_{x}$. We form the Grassmannian $Gr(l,W)$ and its Zariski open subset $Gr(l,W)^\circ$ as in \eqref{Gr} (with $Q_x$ in place of $q$). Then both $[\Xi_\alpha|_{x}/T_{x}\mathfrak{X} ]$ and $[\Xi_\beta|_{x}/T_{x}\mathfrak{X} ]$ are in $Gr(l,W)^\circ$. Because $Gr(l,W)^\circ$ is a smooth connected quasi-projective variety, we can find an analytic arc $[S_t]\in Gr(l,W)^\circ$, $t\in [0,1]$, such that $S_0=\Xi_\alpha|_{x}/T_{x}\mathfrak{X} $ and $S_1=\Xi_\beta|_{x}/T_{x}\mathfrak{X} $. 
We let $\Xi_{t,x}$ be the preimage of $S_t$ under the quotient homomorphism $$\pi_{x}: T_{x}{\cal B}\to T_{x}{\cal B}/T_{x}\mathfrak{X} . $$ Then the $\Xi_{t,x}$ form an analytic family of subspaces of $T_{x}{\cal B}$, interpolating between $\Xi_\alpha|_{x}$ and $\Xi_\beta|_{x}$. We extend this to an analytic family of orientation bundles in a neighborhood of ${x}\in U$. As before, we realize an open neighborhood $U_0\subset U$ of ${x}\in U$ as $U_0=X_{f_0}$, ${x}=0\in V_0$, for the JS chart $(V_0,f_0)$. Then we have the isomorphism of holomorphic Banach bundles $T_{U_0} {\cal B}\cong U_0\times \mathrm{ker} (\overline{\partial}_{{x}}^{\ast})^{0,1}_s$. By choosing $U_0$ sufficiently small, we can find an $\eps>0$ so that $\Theta_{U_0}(\eps)\subset \Xi_\alpha|_{U_0}$, $\Theta_{U_0}(\eps)\subset \Xi_\beta|_{U_0}$, and $\Theta_{{x}}(\eps)=\Box_{{x}}^{-1}(0)^{0,1}$. Next, for $i=\alpha$ and $\beta$, we find $l$ analytic sections $s^i_1,\cdots, s^i_l$ of $T_{U_0}{\cal B}$ such that $\Theta_{U_0}(\eps)$ and $s^i_1,\cdots, s^i_l$ span $\Xi_i|_{U_0}$. Because $s_k^\alpha({x})$ and $s_k^\beta({x})$ all lie in $\pi_{{x}}^{-1}(W)$, we can find arcs $\xi_k(t)\in \pi_{{x}}^{-1}(W)$, analytic in $t\in [0,1]$, such that $\xi_k(0)=s_k^\alpha(x)$ and $\xi_k(1)=s^\beta_k(x)$, and $$\Xi_{t,{x}}=\Theta_{{x}}(\eps)\oplus \mathbb{C}\text{-span}\bigl( \xi_1(t),\cdots, \xi_l(t)\bigr). $$ Using the isomorphism $T_{U_0}{\cal B}\cong U_0\times \mathrm{ker} (\overline{\partial}_{{x}}^{\ast})^{0,1}_s$, we can view $\xi_k(t)$ as analytic arcs in $\mathrm{ker} (\overline{\partial}_{{x}}^{\ast})^{0,1}_s\subset \Omega^{0,1}(ad E)_s$. We then define $$s^t_k(y)=\xi_k(t)+(1-t)(s_k^\alpha(y)-s_k^\alpha(x))+t(s_k^\beta(y)-s_k^\beta(x)). $$ Clearly, the $s^t_k$ are analytic in $t$, and $s^0_k(y)=s^\alpha_k(y)$ and $s^1_k(y)=s_k^\beta(y)$ for all $y\in U_0$.
Therefore, by shrinking ${x}\in U_0$ if necessary, for every $t\in [0,1]$, the sections $s_1^t(y),\cdots s_{l }^t(y)$ and $\Theta_{U_0}(\eps)$ span a rank $r$ analytic subbundle $\Xi_t^{\text{pre}}\subset U_0\times \Omega^{0,1}(ad E)_s$. Because the arcs $\xi_k(t)$ are analytic in $t$, the family $\Xi_t^{\text{pre}}$ is analytic in $t$. Finally, because $[S_t]$ all lie in $Gr(l,W)^\circ$, by shrinking ${x}\in U_0$ if necessary, $$\Xi_t\!:= \text{image of $\Xi_t^{\text{pre}}$ under $U_0\times\Omega^{0,1}(ad E)_s\to T_{U_0}{\cal B}$} $$ form an analytic family of orientation bundles providing the desired homotopy between $\Xi_\alpha|_{U_0}$ and $\Xi_\beta|_{U_0}$. \end{proof} \begin{proof}[Proof of Proposition \ref{pExOr}] We first pick a locally finite cover $U_\alpha$ so that each $U_\alpha$ has an orientation bundle $\Xi_\alpha$. For each $x\in X$, we pick an open neighborhood $U_x$ of $x\in X$ so that (1) $U_x\subset U_\alpha$ whenever $x\in U_\alpha$, and that (2) for every pair $\alpha$, $\beta$ with $x\in U_{{\alpha\beta} }=U_\alpha\cap U_\beta$, we have a homotopy from $\Xi_\alpha|_{U_{x}}$ to $\Xi_\beta|_{U_{x}}$. Then we can pick a locally finite refinement of the covering as follows: For each $x\in X$, we fix any $\alpha(x)$ such that $U_x\subset U_{\alpha(x)}$. Since $X$ is quasi-projective, we have a metric $d(\cdot,\cdot)$ on $X$ induced from projective space. By shrinking $U_x$ if necessary, we may assume $U_x$ is the ball $B(x, 2\eps_x)$ of radius $2\eps_x>0$ centered at $x$. Let $O_x=B(x,\eps_x)$ and $\Xi_x=\Xi_{\alpha(x)}|_{O_x}$. Then $\{O_x\}$ is an open cover of $X$ and $\Xi_x$ is an orientation bundle on $O_x$. Suppose that $O_x\cap O_y\ne \emptyset$. Without loss of generality, we may assume $\eps_x\le \eps_y$. Then $O_x\subset B(y,2\eps_y)=U_y\subset U_{\alpha(y)}$. Also we have $x\in O_x\subset U_x\subset U_{\alpha(x)}$. 
Hence $O_x\subset U_{\alpha(x)}\cap U_{\alpha(y)}$ and thus we have a homotopy from $\Xi_x|_{O_x\cap O_y}$ to $\Xi_y|_{O_x\cap O_y}$ as desired. \end{proof} \section{CS data from preorientation data}\label{secCSdata} In this section we prove Proposition \ref{prCSdata}. We construct CS charts from orientation bundles, their local trivializations, and complexifications. \subsection{Constructing families of CS charts} Let $\Xi$ be an orientation bundle on $U$. We generalize Joyce-Song's construction in \cite{JoSo} to form a $\Xi$-aligned family of CS charts. Given $\Xi$, for any $x\in U$, we view $\Xi_x\subset \mathrm{ker} (\overline{\partial}_x^{\ast})^{0,1}_s$ and let its companion space $\Xi_x^{\prime\prime}\subset \Omega^{0,1}(ad {\cal E}_x)$ be as defined in \eqref{pri} with $W$ replaced by $\Xi_x$. Using condition (1) of Definition \ref{def1.1}, one sees that $\Xi^{\prime\prime}\!:= \coprod_{x\in U}\Xi_x^{\prime\prime}$ is an analytic subbundle of $\Omega_X^{0,1}(ad {\cal E})_{s-2}|_U$. We define the quotient homomorphism of Banach bundles \begin{equation}\label{Px} P: \Omega^{0,1}_X(ad{\cal E})_{s-2}|_U \longrightarrow \Omega^{0,1}_X(ad{\cal E})_{s-2}|_U\big/\Xi^{\prime\prime}, \end{equation} whose restriction to $x\in U$ is denoted by $P_x: \Omega^{0,1}(ad{\cal E}_x)_{s-2} \to \Omega^{0,1}(ad{\cal E}_x)_{s-2}/\Xi_x^{\prime\prime}$. For $x\in U$, we form the elliptic operator \begin{equation}\label{LL} \mathbf{L}_x: \Omega^{0,1}( ad {\cal E}_x)_s \longrightarrow \Omega^{0,1}( ad {\cal E}_x)_{s-2}/\Xi_x^{\prime\prime},\quad \mathbf{L}_x(\mathfrak{a} )=P_x \bigl( \Box_x \mathfrak{a} +\overline{\partial}_x^{\ast}( \mathfrak{a} \wedge\mathfrak{a} )\bigr). \end{equation} For a continuous $\varepsilon(\cdot):U\to (0,1)$ to be specified shortly, we define \begin{equation}\label{Vz} V_x=\{ \mathfrak{a} \in \Omega^{0,1}(ad {\cal E}_x)_s \mid \mathbf{L}_x(\mathfrak{a} )=0, \ \parallel\! \mathfrak{a} \!\parallel_s<\varepsilon(x)\}.
\end{equation} (Here $\parallel\!\cdot\!\parallel_s$ is defined using $h_x$.) Letting $\Pi_x: \Omega^{0,1}(ad {\cal E}_x)_s\to {\cal B}$ be the composite of the tautological isomorphism $\overline{\partial}_x+\cdot: \Omega^{0,1}(ad {\cal E}_x)_s\cong {\cal A}_x$ (cf. \eqref{dbar}) with the tautological projection ${\cal A}_x\to{\cal B}$, we define \begin{equation}\label{VV} {\cal V}_x=\Pi_x(V_x). \end{equation} We comment that ${\cal V}_x$ only depends on $(\Xi_x,h_x,\varepsilon(x))$. Let $f_x: V_x\to\mathbb{C}$ (or $f_x: {\cal V}_x\to\mathbb{C}$) be the composite of $V_x\hookrightarrow {\cal B}$ and $cs: {\cal B}\to\mathbb{C}$. \begin{prop}\label{lem1.2} Let $U\subset X$ be open and $\Xi$ a rank $r$ orientation bundle on $U$. Then there is a continuous $\varepsilon(\cdot): U\to (0,1)$ such that the family ${\cal V}_x$, $x\in U$, constructed via \eqref{Vz} using $\varepsilon(\cdot)$ is a smooth family of complex manifolds of dimension $r$, and such that all $(V_x,f_x)$ are CS charts of $\mathfrak{X} $. \end{prop} \begin{proof} We relate $V_x$ to the JS charts by first showing that $\mathbf{L}_x(\mathfrak{a} )=0$ if and only if \begin{equation}\label{JS-eq} \overline{\partial}_x^{\ast}\mathfrak{a} =0\quad{\rm and}\quad P_x\circ \overline{\partial}_x^{\ast} F_{\overline{\partial}_x+\mathfrak{a} }^{0,2}=0. \end{equation} Indeed, it is immediate that \eqref{JS-eq} implies $\mathbf{L}_x(\mathfrak{a} )=0$. For the other direction, suppose $\mathbf{L}_x(\mathfrak{a} )=0$. Since $\Xi_x^{\prime\prime}\subset \mathrm{ker} (\overline{\partial}^{\ast}_x)_{s-2}^{0,1}$, applying $\overline{\partial}_x^{\ast}$ to $\mathbf{L}_x(\mathfrak{a} )=0$, we obtain $\overline{\partial}_x^{\ast}\overline{\partial}_x\overline{\partial}_x^{\ast}\mathfrak{a} =0$, which forces $\overline{\partial}_x^{\ast}\mathfrak{a} =0$. Having this, we obtain $P_x\circ \overline{\partial}_x^{\ast} F_{\overline{\partial}_x+\mathfrak{a} }^{0,2}=0$. This proves the equivalence.
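For the reader's convenience, we sketch the forcing step (suppressing Sobolev indices; it is the standard simpleness argument): setting $u=\overline{\partial}_x^{\ast}\mathfrak{a} \in \Omega^0(ad {\cal E}_x)$, we have $\overline{\partial}_x^{\ast}\overline{\partial}_x u=0$, and pairing with $u$ and integrating by parts over the (compact) underlying threefold gives
$$0=\langle \overline{\partial}_x^{\ast}\overline{\partial}_x u, u\rangle_{L^2}=\parallel\! \overline{\partial}_x u\!\parallel_{L^2}^2 .$$
Thus $u$ lies in the kernel of $\overline{\partial}_x$ on $\Omega^0(ad {\cal E}_x)$, which consists of the constants by the simpleness of $({\cal E}_x,\overline{\partial}_x)$; since $u=\overline{\partial}_x^{\ast}\mathfrak{a} $ is $L^2$-orthogonal to the constants (as $\langle \overline{\partial}_x^{\ast}\mathfrak{a} ,\mathrm{id}\rangle_{L^2}=\langle \mathfrak{a} ,\overline{\partial}_x\mathrm{id}\rangle_{L^2}=0$), we conclude $u=0$.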
By direct calculation, the linearization of $\mathbf{L}_x$ at $\mathfrak{a} =0$ is $$\delta\mathbf{L}_x=P_x\circ\Box_x: \Omega^{0,1}(ad {\cal E}_x)_s\longrightarrow \Omega^{0,1}(ad {\cal E}_x)_{s-2}/\Xi_x^{\prime\prime}, $$ which is surjective with kernel $\Xi_x$. Since the operators $\mathbf{L}_x$ depend smoothly on $x\in U$, applying the implicit function theorem, for a continuous $\varepsilon(\cdot): U\to (0,1)$ (taking sufficiently small values), the solution spaces $V_x$, $x\in U$, form a family of manifolds of real dimension $2r$. Since the operators $\mathbf{L}_x$ are holomorphic in $\mathfrak{a} $, each solution space $V_x$ is a complex submanifold of $\Omega^{0,1}( ad {\cal E}_x)_s$ and their images in ${\cal B}$ lie in ${\cal B}_{si}$. Finally, because the family $\Box_x$ is smooth in $x\in U$, the family $V_x$ is a smooth family of complex manifolds. Using the local isomorphism $\zeta$ in \eqref{local}, we see that the family ${\cal V}_x$, $x\in U$, forms a smooth family of complex submanifolds of ${\cal B}_{si}$. This proves the first part of the Proposition. \vskip5pt For the second part, we first show that by choosing $\varepsilon(x)$ small enough, ${\cal V}_x\cap\mathfrak{X} $ contains an open complex analytic subspace of $\mathfrak{X} $ containing $x$. For an individual $x\in U$, this follows from the fact that each $V_x$ contains (an open neighborhood of $x$ in) the JS chart $(V_x^{JS}, f_x^{JS})$. However, to prove that we can choose $\varepsilon(x)$ continuously in $x$, we argue directly. As $({\cal E}_x,\overline{\partial}_x)$ are simple, the tautological map $\Pi_x: \mathrm{ker} (\overline{\partial}_x^{\ast})_s^{0,1}\to{\cal B}$ is biholomorphic near $\Pi_x(0)=x$. We let $$\mathfrak{F}_x: \Omega^{0,1}(ad{\cal E}_x)_s\to \Omega^{0,2}(ad {\cal E}_x)_{s-1},\quad \mathfrak{F}_x(\mathfrak{a} )=F^{0,2}_{\overline{\partial}_x+\mathfrak{a} }, $$ be the curvature section.
Then $\mathrm{ker} (\overline{\partial}_x^{\ast})^{0,1}_s\cap (\mathfrak{F}_x=0)$ contains an open complex analytic subspace of $\mathfrak{X} $ containing $x$ (cf. \cite[Chapter 9]{JoSo}, \cite{Miya}). Because $\mathrm{ker} (\overline{\partial}_x^{\ast})_s^{0,1}\cap (\mathfrak{F}_x=0)$ is contained in $(\mathbf{L}_x=0)$, ${\cal V}_x\cap \mathfrak{X} $ contains an open neighborhood of $x \in\mathfrak{X} $. We now prove that for $\varepsilon(x)$ small, $\mathfrak{X} _{f_x}=\Pi_x^{-1}({\cal V}_x\cap \mathfrak{X} )$. We follow the proof of \cite[Prop. 9.12]{JoSo}. Let $x\in U$. We define a subbundle $$R_x\!:= \{(\overline{\partial}_x+\mathfrak{a} ,\mathfrak b)\mid P_x\circ\overline{\partial}_x^{\ast}\mathfrak b=\overline{\partial}_x^{\ast}(\overline{\partial}_x\mathfrak b-\mathfrak b\wedge\mathfrak{a} -\mathfrak{a} \wedge\mathfrak b)=0\} \subset V_x\times\Omega^{0,2}(ad {\cal E}_x)_{s-1}. $$ Applying the implicit function theorem, because the two equations in the bracket are holomorphic in $\mathfrak{a} $, for $\varepsilon(x)$ small enough, $R_x$ is a holomorphic subbundle of $V_x\times\Omega^{0,2}(ad {\cal E}_x)_{s-1}$ over $V_x$. Then the Bianchi identity coupled with the equivalence \eqref{JS-eq} ensures that the restriction of the curvature section to $V_x$, namely $\mathfrak{F}_x|_{V_x}$, is a section of $R_x$. Since $\mathfrak{X} $ locally near $\overline{\partial}_x$ is defined by the vanishing of $\mathfrak{F}_x$, we conclude \begin{equation}\label{eq} \Pi_x^{-1}({\cal V}_x\cap \mathfrak{X} )= V_x\cap (\mathfrak{F}_x|_{V_x}=0). \end{equation} It remains to show that $(\mathfrak{F}_x|_{V_x}=0)=(df_x=0)$. We define a bundle map $$\varphi_x: R_x\to T^\vee V_x, \quad (\overline{\partial}_x+\mathfrak{a} ,\mathfrak b)\mapsto (\overline{\partial}_x+\mathfrak{a} , \alpha_\mathfrak b), $$ where $\alpha_\mathfrak b\in T_{\overline{\partial}_x+\mathfrak{a} }^\vee V_x$ is $\alpha_\mathfrak b(\cdot)=\frac{1}{4\pi^2}\int \tr(\cdot\wedge\mathfrak b)\wedge\Omega$.
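We note, as a sketch to fix constants, how $\varphi_x$ relates to $df_x$: with the normalization $cs(\overline{\partial}_x+\mathfrak{a} )=\frac{1}{4\pi^2}\int \tr\bigl(\frac{1}{2}\overline{\partial}_x\mathfrak{a} \wedge\mathfrak{a} +\frac{1}{3}\mathfrak{a} \wedge\mathfrak{a} \wedge\mathfrak{a} \bigr)\wedge\Omega$ (any other normalization of $cs$ only rescales the computation), the first variation in a direction $\mathfrak c$ is, after an integration by parts,
$$\delta cs_{\overline{\partial}_x+\mathfrak{a} }(\mathfrak c)=\frac{1}{4\pi^2}\int \tr\bigl(\mathfrak c\wedge(\overline{\partial}_x\mathfrak{a} +\mathfrak{a} \wedge\mathfrak{a} )\bigr)\wedge\Omega=\alpha_{\mathfrak{F}_x(\mathfrak{a} )}(\mathfrak c),$$
which is the identity $\varphi_x\circ\mathfrak{F}_x|_{V_x}=df_x$ used below.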
Clearly, $\varphi_x$ is holomorphic and $\varphi_x\circ\mathfrak{F}_x|_{V_x}=df_x$. We show that by choosing $\varepsilon(x)$ small, we can make $\varphi_x$ an isomorphism of vector bundles over $V_x$. We first claim that restricting to $x$ we have \begin{equation}\label{Rx} R_x|_x=\{\mathfrak b\in\Omega^{0,2}(ad{\cal E}_x)_{s-1} \mid P_x\circ\overline{\partial}_x^{\ast}\mathfrak b=\overline{\partial}^{\ast}_x\overline{\partial}_x\mathfrak b=0\}=\overline{\partial}_x(\Xi_x)\oplus \Box_x^{-1}(0)^{0,2}. \end{equation} Indeed, the first identity follows from the definition of $R_x$. We prove the second identity. For any $\mathfrak b\in R_x|_x$, since $\overline{\partial}^{\ast}_x\overline{\partial}_x\mathfrak b=0$, pairing with $\mathfrak b$ gives $\overline{\partial}_x\mathfrak b=0$. Thus we can write $\mathfrak b=\mathfrak b_0+\overline{\partial}_x\mathfrak c$ with $\Box_x\mathfrak b_0=0$. Then $P_x\circ\overline{\partial}_x^{\ast}\mathfrak b=0$ implies that we may take $\mathfrak c\in \Xi_x$. This proves \eqref{Rx}. Applying Lemma \ref{perf}, we know that the pairing $(\cdot,\cdot)_x$ (cf. \eqref{c}) restricted to $T_xV_x\times R_x|_x$ is perfect and thus $\varphi_x|_x$ is an isomorphism. Therefore, by choosing $\varepsilon(x)$ sufficiently small, $\varphi_x$ is an isomorphism over $V_x$ and thus $(V_x,f_x)$ is a CS chart of $\mathfrak{X} $. Note that the choice of $\varepsilon(x)$ is made to ensure that $\varphi_x$ is an isomorphism over all of $V_x$. Since this is an open condition, we can choose $\varepsilon(\cdot): U\to (0,1)$ continuously to ensure it. This proves the proposition. \end{proof} The family of CS charts is canonical. \begin{lemm}\label{can1} Let $\Xi_a$ and $\Xi_b$ be two orientation bundles over $U$. Suppose $\Xi_a\subset \Xi_b$ is a vector subbundle. Let $\varepsilon(\cdot): U\to (0,1)$ be the size function that produces a family of CS charts ${\cal V}(\Xi_b)\subset U\times {\cal B}$ using $\Xi_b$.
Then the same $\varepsilon(\cdot)$ produces a family of CS charts ${\cal V}(\Xi_a)\subset U\times{\cal B}$, and ${\cal V}(\Xi_a)\subset {\cal V}(\Xi_b)\subset U\times{\cal B}$. \end{lemm} \begin{proof} This follows from the construction because the only data needed to construct $V_x$ are $\Xi_x$ and $\varepsilon(x)$. \end{proof} \subsection{Local trivializations} In this subsection, we prove \begin{prop}\label{P-local} The family ${\cal V}\subset U\times{\cal B}$ constructed in Proposition \ref{lem1.2} is locally trivial everywhere. \end{prop} The study is local. For any $x_0\in U$, we pick an open neighborhood $U_0\subset U$ of $x_0\in U$ so that $U_0=X_{f_0}$ for the JS chart $(V_0,f_0)$, coupled with the isomorphism $\zeta$ in \eqref{local}. Using the induced ${\cal E}_x\cong E$ for $x\in U_0$, the CS charts $V_x$ become complex submanifolds of $\Omega^{0,1}(ad E)_s$. We define \begin{equation}\label{Ve} V_{U_0}=\coprod_{x\in U_0} x\times V_x\subset U_0\times\Omega^{0,1}(ad E)_s. \end{equation} By shrinking $x_0\in U_0$ if necessary, we assume that $U_0$ lies in $X_{f_x}\subset X$ for all $x\in U_0$. To keep the notation transparent, for $x, y\in U_0$, we denote by $y_x\in X_{f_x}\subset V_x$ the point whose image in $X$ is $y$, i.e. the point in $X_{f_x}$ associated to $y\in U_0$. We denote by $\overline{\partial}_x+\mathfrak{a} _x(y_x)$ the connection form of $y_x\in V_x$. Because $U_0\subset X_{f_y}$, for any $x,y,z\in U_0$, as $(E,\overline{\partial}_x+\mathfrak{a} _x(z_x))\cong (E,\overline{\partial}_y+\mathfrak{a} _y(z_y))$, there is a unique $g\in {\cal G}$ so that \begin{equation}\label{gg} \overline{\partial}_y+\mathfrak{a} _y(z_y)= g\bigl( \overline{\partial}_x+\mathfrak{a} _x(z_x)\bigr)= \overline{\partial}_x-\overline{\partial}_x g\cdot g^{-1}+ g \mathfrak{a} _x(z_x) g^{-1}. \end{equation} As before, for open ${\cal U}\subset U_0\times V_{U_0}$ and $x,y\in U_0$, we denote ${\cal U}_{x,y}\!:= {\cal U}\cap (x\times V_y)$, viewed as a subset of $V_y$. 
We denote $\Delta(U_0)=\{(x,x_x)\mid x\in U_0\}\subset U_0\times V_{U_0}$. \begin{lemm}\label{lem4.7} We can find an open ${\cal U}\subset U_0\times V_{U_0}$ containing $\Delta(U_0)\subset U_0\times V_{U_0}$ and a smooth $g: {\cal U}\to {\cal G}$ such that the following hold: \begin{enumerate} \item for any $x,y\in U_0$, $g_{x,y}\!:= g|_{{\cal U}_{x,y}}: {\cal U}_{x,y}\to {\cal G}$ is holomorphic and $g_{x,x}(\cdot)=1$; \item for $x,y\in U_0$ and $z\in {\cal U}_{x,y}$, letting $\mathfrak{a} (x,y,z)=g_{x,y}(z)(\overline{\partial}_{y}+\mathfrak{a} _y(z))-\overline{\partial}_{x}$, we have $\overline{\partial}^{\ast}_{x}\mathfrak{a} (x,y,z)=0$ and $\mathfrak{a} (x,x,x_x)=0$. \end{enumerate} \end{lemm} \begin{proof} For $\alpha\in {\cal G}$, and for $x,y\in U_0$ and $z\in V_y$, we form $$\mathfrak c_{x,y,z}(\alpha)\!:= \alpha(\overline{\partial}_y+\mathfrak{a} _y(z))-\overline{\partial}_x, $$ and define $$\mathbf R_{x,y,z}(\cdot)=\overline{\partial}_x^{\ast} \mathfrak c_{x,y,z}(\cdot): {\cal G}_{s+1}\longrightarrow \Omega^0(ad E)_{s-1}/\mathbb{C}. $$ Note that $\mathbf R_{x,x,z}(1)=0$. We calculate the linearization of the operator $\mathbf R_{x,y,z}(\cdot)$ at $(x,x,x_x)$ and $\alpha=1$: $$\delta \mathbf R_{x,x,x_x}|_{\alpha=1}=-\overline{\partial}_x^{\ast}\overline{\partial}_x: \Omega^0(ad E)_{s+1}/\mathbb{C}\longrightarrow \Omega^0(ad E)_{s-1}/\mathbb{C}. $$ Because the $(E,\overline{\partial}_x)$ are simple, the operators $\delta \mathbf R_{x,x,x_x}|_{\alpha=1}$ are isomorphisms. Thus by the implicit function theorem, for an open ${\cal U}\subset U_0\times V_{U_0}$ containing the diagonal $\Delta(U_0)\subset U_0\times V_{U_0}$, we can find a unique smooth $$g_{\cdot,\cdot}(\cdot): {\cal U}\longrightarrow {\cal G} $$ such that $\mathbf R_{x,y,z}(g_{x,y}(z))=0$, and $g_{x,x}(z)=1$. Because the equation $\mathbf R_{x,y,z}(\alpha)=0$ is holomorphic in $z$ and $\alpha$, $g_{x,y}(z)$ is holomorphic in $z\in {\cal U}_{x,y}\subset V_y$. Thus both (1) and (2) of the lemma are satisfied.
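(For the reader, a sketch of the linearization computation: writing $\alpha=\exp(u)$ with $u\in \Omega^0(ad E)_{s+1}$ and using $\mathfrak{a} _x(x_x)=0$, we have
$$\mathfrak c_{x,x,x_x}(e^u)=e^u(\overline{\partial}_x)-\overline{\partial}_x=-\overline{\partial}_x e^u\cdot e^{-u}=-\overline{\partial}_x u+O(u^2),$$
so applying $\overline{\partial}_x^{\ast}$ gives $\delta \mathbf R_{x,x,x_x}|_{\alpha=1}(u)=-\overline{\partial}_x^{\ast}\overline{\partial}_x u$, as claimed.)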
\end{proof} We remark that, by the uniqueness of the solution $g_{x,y}(z)$ and an extension of \eqref{gg} (to the non-reduced case), we have $\mathfrak{a} (x,y,\cdot)|_{{\cal U}_{x,y}\cap \mathfrak{X} _{f_y}}=\mathfrak{a} _y(\cdot)|_{{\cal U}_{x,y}\cap \mathfrak{X} _{f_y}}$. \begin{proof}[Proof of Proposition \ref{P-local}] We take the family $g_{x,y}(z)$ defined on ${\cal U}\subset U_0\times V_{U_0}$ constructed in the previous Lemma. To find a local trivialization $\Psi: {\cal U}\to V_{U_0}\times U_0$, we solve for $\mathfrak b(x,y,z)$ satisfying the system \begin{equation}\label{bb} \mathbf{L}_x\bigl( \mathfrak{a} (x,y,z)+\mathfrak b(x,y,z)\bigr)=0,\quad (\mathfrak b(x,y,z),\nu)_x=0, \quad \forall \nu\in \Xi_x', \end{equation} where $\mathbf{L}_x$ is defined in \eqref{LL}. By the remark at the end of the previous proof, we see that restricting to ${\cal U}_{x,y}\cap \mathfrak{X} _{f_y}$, $\mathfrak b(x,y,\cdot)=0$ are solutions. Also, since $\mathfrak{a} (x,x,z)=\mathfrak{a} _x(z)$ and $\mathbf{L}_x(\mathfrak{a} _x(z))=0$, $\mathfrak b(x,x,z)=0$ are solutions. We denote $(\Xi_x')^\perp=\{\mathfrak b\mid (\mathfrak b,\nu)_x=0, \forall \nu\in \Xi_x'\}\subset \Omega^{0,1}(ad E)_s$. We define $$\mathbf M_{x,y,z}(\cdot)=\mathbf{L}_x(\mathfrak{a} (x,y,z)+\cdot): (\Xi_x')^\perp\longrightarrow \Omega^{0,1}(ad E)_{s-2}/\Xi_x^{\prime\prime}. $$ By our construction, the linearization $\delta\mathbf M_{x,x,x_x}$ at $0$ is $$\delta\mathbf M_{x,x,x_x}|_{0}=P_x\circ \Box_x: (\Xi_x')^\perp\longrightarrow \Omega^{0,1}(ad E)_{s-2}/\Xi_x^{\prime\prime}, $$ which is an isomorphism. Thus applying the implicit function theorem, for an open ${\cal U}'\subset {\cal U}$ containing the diagonal $\Delta(U_0)$, we can solve the system $\mathbf M_{x,y,z}(\mathfrak b(x,y,z))=0$ uniquely and smoothly in $(x,y,z)$. Further, since $g_{x,y}(z)$ is holomorphic in $z$ and the operator $\mathbf M_{x,y,z}$ depends holomorphically on $z$, $\mathfrak b(x,y,z)$ is holomorphic in $z$.
We set $\Psi: {\cal U}'\to V_{U_0}\times U_0$ to be $$\Psi_{x,y}(\overline{\partial}_y+\mathfrak{a} _y(z))=g_{x,y}(z)\bigl( \overline{\partial}_y+\mathfrak{a} _y(z)\bigr)+\mathfrak b(x,y,z)-\overline{\partial}_x. $$ Because $\mathbf M_{x,y,\cdot}(0)|_{{\cal U}_{x,y}\cap \mathfrak{X} _{f_y}}=0$, we conclude that $\Psi_{x,y}$ is an open embedding of ${\cal U}_{x,y}'\cap \mathfrak{X} _{f_y}$ into $\mathfrak{X} _{f_x}\subset V_x$. Therefore, $\Psi$ is the desired local trivialization. \end{proof} The local trivializations are canonical. \begin{lemm} Let $\Xi_a\subset \Xi_b$ be two orientation bundles over $U$, as in Lemma \ref{can1}. Suppose $U_0\subset U$ is an open subset and let $\Psi_i: {\cal U}_i\to {\cal V}(\Xi_i)_{U_0}\times U_0$, $i=a, b$, be the local trivializations constructed above. Let ${\cal V}(\Xi_a)_{U_0}\subset {\cal V}(\Xi_b)_{U_0}$ be the inclusion given by Lemma \ref{can1}. Then $$\Psi_a|_{{\cal U}_a\cap{\cal U}_b}=\Psi_b|_{{\cal U}_a\cap{\cal U}_b}: {\cal U}_a\cap{\cal U}_b\longrightarrow {\cal V}(\Xi_a)_{U_0}\times U_0\subset {\cal V}(\Xi_b)_{U_0}\times U_0.$$ \end{lemm} \begin{proof} This follows from the fact that $(\Xi_{b,x}')^\perp\subset (\Xi_{a,x}')^\perp$, so that $\Psi_a$ is also the solution to the equations for $\Psi_b$ when restricted to ${\cal V}_a\subset {\cal V}_b$. By the uniqueness of the solution, we have the identity. \end{proof} \begin{coro}\label{comp3} Let $\Xi$ (resp. $\Xi_t$) be an orientation bundle (resp. a homotopy of orientation bundles) on $U$. Then $\Xi$ (resp. $\Xi_t$) can be complexified locally everywhere. \end{coro} \begin{proof} The proof is straightforward, knowing that both $\Xi$ and $\Xi_t$ are analytic. \end{proof} \subsection{Complexifications} We construct complexifications of a family of CS charts. \begin{prop} The family of CS charts constructed in Proposition \ref{lem1.2} and the local trivialization constructed in Proposition \ref{P-local} can be complexified locally everywhere.
\end{prop} \begin{proof} Given an orientation $\Xi$ on an open $U\subset X$, for any $x_0\in U$, we pick an open neighborhood $U_0\subset U$ of $x_0\in U$ so that $U_0=X_{f_0}$ for the JS chart $(V_0,f_0)$, where $x_0=0\in V_0$. By Lemma \ref{comp2}, we pick an open $0\in D\subset V_0$ so that $\overline{\partial}_z$ and $\overline{\partial}_z^{\ast}$ entend to holomorphic $\overline{\partial}_w=\overline{\partial}_0+\mathfrak{a} _0(w)_\mathbb{C}$ and $\overline{\partial}^{\ast}_w=\overline{\partial}_0^{\ast}+\mathfrak{a} _0(w)_\mathbb{C}^\dag$ over a complexification $D^\mathbb{C}$ of $D$. By Corollary \ref{comp3}, we extend $\Theta_D(\eps)$ and $\Xi|_{D}$ to holomorphic subbundles $\Theta_{D^\mathbb{C}}(\eps)$ and $\Xi_{D^\mathbb{C}}$ of $T_{D^\mathbb{C}}{\cal B}$. (Here we follow the convention that for any $Z\to {\cal B}$, we denote $T_Z{\cal B}=T{\cal B}\times_{{\cal B}}Z$.) We form the projection $P_w$ as in \eqref{Px}, with ${\cal E}$ (resp. $\Xi_x^{\prime\prime}$) replaced by $E$ (resp. $\Xi_w^{\prime\prime}$). We form $V_w$ using \eqref{Vz}, with ${\cal E}_x$ replaced by $E$ and subscript ``$x$" replaced by the subscript ``$w$". Since the proof that the resulting family ${\cal V}\subset U\times {\cal B}$ is a smooth family of complex manifolds only uses the isomorphism property of the linearization of $\mathbf{L}_x$ and the implicit function theorem, the same study extends to small perturbations of $\mathbf{L}_x$. Thus possibly after shrinking $D^\mathbb{C}\supset D$ if necessary, the family $$V_{D^\mathbb{C}}=\coprod_{w\in D^\mathbb{C}} w\times V_w\subset D^\mathbb{C}\times \Omega^{0,1}(ad E)_s $$ is a smooth family of complex manifolds. Because all $\overline{\partial}_w$, $\overline{\partial}_w^{\ast}$ and $\Xi_{D^\mathbb{C}}^{\prime\prime}$ are holomorphic in $w\in D^\mathbb{C}$, the system $\mathbf{L}_w$ is holomorphic in $w$. Therefore, $V_{D^\mathbb{C}}$ is a complex manifold and is a complex submanifold of $D^\mathbb{C}\times\Omega^{0,1}(ad E)_s$. 
Next we study the issue of being CS charts. Let $\Pi_w: V_w\to {\cal B}$ be defined by $\overline{\partial}_w+\cdot$, and define $f_w=\Pi_w\circ cs: V_w\to \mathbb{C}$. We show that $(V_w,f_w)$ are CS charts for $w\in O_0^\mathbb{C}$, where $O_0^\mathbb{C}$ (cf. \eqref{OC}) is the complexification of $O_0=D\cap X$. Going through the proof of Proposition \ref{lem1.2}, we first need to check that for $w\in O_0^\mathbb{C}$, $\mathfrak{a} \in V_w$ satisfies the system (cf. \eqref{JS-eq}) \begin{equation}\label{JS-2} \overline{\partial}^{\ast}_w\mathfrak{a} =0\quad{\rm and}\quad P_w\circ\overline{\partial}_w^{\ast}(\overline{\partial}_w\mathfrak{a} +\mathfrak{a} \wedge\mathfrak{a} )=0. \end{equation} First because $\Xi_{O_0^\mathbb{C}}$ is the complexification of $\Xi_{O_0}$, and $\Xi_x\subset \mathrm{ker} (\overline{\partial}_x^{\ast})_s^{0,1}$ for $x\in O_0$, we have that $\Xi_w\subset \mathrm{ker} (\overline{\partial}_w^{\ast})_s^{0,1}$ for $w\in O_0^\mathbb{C}$. On the other hand, $\mathfrak{a} \in V_w$ means that $\overline{\partial}_w\overline{\partial}_w^{\ast}\mathfrak{a} + \overline{\partial}_w^{\ast}(\overline{\partial}_w\mathfrak{a} +\mathfrak{a} \wedge\mathfrak{a} )\equiv 0\!\!\mod \Xi_w^{\prime\prime}$. Thus applying $\overline{\partial}_w^{\ast}$ to this relation, we obtain $\overline{\partial}^{\ast}_w\overline{\partial}_w\overline{\partial}^{\ast}_w\mathfrak{a} =0$. Finally, since $\overline{\partial}_x^{\ast}\overline{\partial}_x:\Omega^0(ad E)_s/\mathbb{C}\to \Omega^0(ad E)_{s-2}/\mathbb{C}$ are isomorphisms for $x\in D$, by shrinking $D^\mathbb{C}\supset D$ if necessary, $\overline{\partial}_w^{\ast}\overline{\partial}_w$ are isomorphisms for $w\in O_0^\mathbb{C}$, thus $\overline{\partial}^{\ast}_w\mathfrak{a} =0$. This proves that for all $w\in O_0^\mathbb{C}$, all $\mathfrak{a} \in V_w$ satisfy \eqref{JS-2}. 
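(We have implicitly used that identities such as
$$(\overline{\partial}_w^{\ast})^2=0 \quad{\rm and}\quad \overline{\partial}_w^{\ast}\,\Xi_w^{\prime\prime}=0, \qquad w\in D^\mathbb{C},$$
continue to hold over the complexification: in each case both sides are holomorphic in $w\in D^\mathbb{C}$, and the identity holds on the totally real locus $D\subset D^\mathbb{C}$, hence on all of $D^\mathbb{C}$ by analytic continuation.)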
After this, mimicking the proof of Proposition \ref{lem1.2}, we see that $(V_w,f_w)$ is a CS chart if for the similarly defined subbundle $R_w\subset V_w\times \Omega^{0,2}(ad E)_{s-1}$, the similarly defined vector bundle homomorphism $\varphi_w: R_w\to T^\vee V_w$ is an isomorphism. But this follows from the fact that $\varphi_x$ is an isomorphism for $x\in O_0$, and that being an isomorphism is an open condition. Thus by shrinking $D^\mathbb{C}\supset D$ if necessary, all $\varphi_w$ are isomorphisms for $w\in O_0^\mathbb{C}$. This proves that ${\cal V}_{O_0^\mathbb{C}}$ is a complexification of ${\cal V}_{O_0}$. The proof that the local trivialization $\Psi$ can be complexified is similar, using that all the operators and families used to construct $\Psi$ are holomorphic on $D^\mathbb{C}$, and that the only tool used to construct $\Psi$ is the implicit function theorem. We skip the repetition here. \end{proof} \subsection{Homotopy of family of CS charts} Let $U\subset X$ be open and let $\Xi_t$, $t\in [0,1]$, be a homotopy between the orientation bundles $\Xi_0$ and $\Xi_1$ on $U$. For any $x_0\in U$, we pick an open neighborhood $x_0\in U_0\subset U$ such that $U_0=X_{f_0}$ for the JS chart $(V_0,f_0)$, that $U_0$ has compact closure in $U$, and that there is an $\eps>0$ such that $\Theta_{U_0}(\eps)$ is an orientation bundle contained in $\Xi_t$ for all $t\in [0,1]$. Because of the compactness of the closure of $U_0$ in $U$, we can find a sufficiently small $\varepsilon>0$ (in place of $\varepsilon(\cdot):U_0\to (0,1)$) such that for each $t\in [0,1]$, we have the family of CS charts ${\cal V}_t\subset U_0\times {\cal B}$ by applying Proposition \ref{lem1.2} using $\Xi_t|_{U_0}$ and the size function $\varepsilon(\cdot)=\varepsilon$. We denote $\mathbf{V} _{[0,1]}=\coprod_{t\in [0,1]}{\cal V}_t$, and call it the homotopy of the families ${\cal V}_0$ and ${\cal V}_1$.
Applying Proposition \ref{lem1.2} to the orientation bundle $\Theta_{U_0}(\eps)$, using the same $\varepsilon(\cdot)=\varepsilon$, we obtain the family of CS charts ${\cal W} \subset U_0\times{\cal B}$. \begin{prop}\label{in-e} We have a tautological inclusion $[0,1]\times {\cal W} \,\subset\, \mathbf{V} _{[0,1]}$ as subspaces of $[0,1]\times U_0\times{\cal B}$. Further, this pair can be complexified locally everywhere. \end{prop} \begin{proof} The proof is similar to the complexification of the family of CS charts and their local trivializations studied in the previous subsection. We omit the repetitions. \end{proof} \section{Perverse sheaves from CS data }\label{seclopgl} In this section we prove Proposition \ref{plopgl}. \subsection{A perverse sheaf from a family of CS charts}\label{seclopgl1} In this subsection we prove (1) of Proposition \ref{plopgl}. Let ${\cal V}\subset U\times{\cal B}_{si}$ be a family of CS charts over $U$ and $\Psi: U\times{\cal V}\supset {\cal U}\to {\cal V}\times U$ be a local trivialization. Let \begin{equation}\label{cs-1} f: {\cal V}\mapright{\subset} U\times {\cal B}_{si}\mapright{pr} {\cal B}_{si} \mapright{cs}\mathbb{C} \end{equation} and let $\pi_{\cal V}$ be the projection from $U\times{\cal V}$ or ${\cal V}\times U$ to ${\cal V}$. We need a technical result that there is a homeomorphism interpolating $(f\circ\pi_{\cal V})\circ \Psi$ and $f\circ\pi_{\cal V}$. \begin{prop}\label{prop6.1} There is an open subset ${\cal U}'\subset {\cal U}$ containing the diagonal $\Delta(U)\subset {\cal U}$ and an injective local homeomorphism $\Phi:{\cal U}'\to{\cal U}$ preserving the projections to $U\times U$, such that $$(f\circ\pi_{\cal V})\circ \Psi\circ\Phi=f\circ\pi_{\cal V}: {\cal U}'\longrightarrow \mathbb{C}.
$$ \end{prop} \begin{proof}[Proof of Proposition \ref{plopgl} (1)] For $x\in U$, we can choose a sufficiently small open neighborhood $O_x$ of $x$ in $U$ such that $O_x$ is contained in the critical set $X_{f_z}\cap {\cal U}'\subset {\cal V}_z\cap {\cal U}'$ for any $z\in O_x$. Let $\psi_x: O_x\times{\cal V}_x\to{\cal V}$ be the restriction of $\Psi$ to $O_x\times{\cal V}_x\subset U\times{\cal V}$ (cf. \eqref{12101}). By shrinking ${\cal V}_x$ if necessary and restricting $\Phi$ to $O_x\times {\cal V}_x\subset O_x\times {\cal V}|_{O_x}$ over $O_x\times\{x\}\cong O_x$, we have a homeomorphism $$\lambda_{x}:O_x\times {\cal V}_x\mapright{\Phi} O_x\times {\cal V}_x\mapright{\psi_x} {\cal V}$$ that pulls back $f$ to $f_x\circ pr_2$, where $pr_2$ denotes the projection onto the second factor and $f_x:{\cal V}_x\subset {\cal B}_{si}\mapright{cs}\mathbb{C}$. If we further restrict it to $\{z\}\times {\cal V}_x$ and vary $z$ in $O_x$, we obtain a continuous family of homeomorphisms $\lambda_{x,z}:{\cal V}_x\cong {\cal V}_z$, defined by $\lambda_{x,z}=\lambda_x|_{z}$, that pulls back $f_z$ to $f_x$; thus we have a continuous family of isomorphisms $\lambda_{x,z}^*:A_{f_z}^{\bullet} \cong A_{f_x}^{\bullet} $ by Proposition \ref{prop2}. We have to show that the perverse sheaves $P_x^{\bullet} :=A_{f_x}^{\bullet} [r]$ on $O_x\subset {\cal V}_x$ glue to give us a perverse sheaf $P^{\bullet} $ on $U$. Let $y\in U$ be such that $O_x\cap O_y\ne \emptyset$. For any $z\in O_x\cap O_y$, we have isomorphisms \[ P_x^{\bullet} \mapleft{\lambda_{x,z}^*} P_z^{\bullet} \mapright{\lambda_{y,z}^*} P_y^{\bullet} \] over a neighborhood of $z$. For another $z'$ in the connected component of $O_x\cap O_y$ containing $z$, we choose a path $z_t$ for $t\in [0,1]$ with $z_0=z$, $z_1=z'$. Then $\lambda_{y,z_t}^{-1}\circ\lambda_{x,z_t}:{\cal V}_x\to {\cal V}_y$ is a continuous family of homeomorphisms that pulls back $f_y$ to $f_x$.
By Proposition \ref{prop2}, we have the equality $$\lambda_{y,z}^*\circ(\lambda_{x,z}^*)^{-1}=\lambda_{y,z'}^*\circ(\lambda_{x,z'}^*)^{-1}:P_x^{\bullet} \mapright{\cong} P_y^{\bullet} .$$ By Proposition \ref{prop3}, the isomorphisms $\lambda_{y,z}^*\circ(\lambda_{x,z}^*)^{-1}$ glue to give us an isomorphism $\tau_{xy}:P_x^{\bullet} \cong P_y^{\bullet} $ over $O_x\cap O_y$. Now the cocycle condition for gluing $\{P_x^{\bullet} \}$ is obviously satisfied because $\tau_{vx}\circ\tau_{yv}\circ\tau_{xy}$ at any $z\in O_x\cap O_y\cap O_v$ is $\lambda_{x,z}^*\circ(\lambda_{v,z}^*)^{-1}\circ\lambda_{v,z}^*\circ(\lambda_{y,z}^*)^{-1} \circ\lambda_{y,z}^*\circ(\lambda_{x,z}^*)^{-1}=1$. Therefore the perverse sheaves $P_x^{\bullet} $ glue to give us a perverse sheaf $P^{\bullet} $ on $U$. \end{proof} The proof of Proposition \ref{prop6.1} requires techniques for complex analytic subspaces. For this, we use the complexification of the family of CS charts and its local trivializations. Let $x\in U$. By assumption, we have an open neighborhood $O_x$ of $x$ in $U$, a complex manifold $D^\mathbb{C} _x$ containing $O_x$ as a closed real analytic subset and a \emph{holomorphic} family ${\cal V}_{D^\mathbb{C} _x}\to D^\mathbb{C} _x$ of CS charts such that ${\cal V}_{D^\mathbb{C} _x}|_{O_x}\cong {\cal V}\times_UO_x$ and that there is a holomorphic local trivialization $$D^\mathbb{C} _x\times {\cal V}_{D^\mathbb{C} _x}\supset {\cal U}_{D^\mathbb{C} _x}\mapright{\Psi^\mathbb{C}} {\cal V}_{D^\mathbb{C} _x}\times D^\mathbb{C} _x.$$ By shrinking $O_x$ and ${\cal V}$ around $x$ if necessary, we may assume that ${\cal U}_{D_x^\mathbb{C}}=D^\mathbb{C} _x\times{\cal V}_x$ and $O_x\subset {\cal V}_z$ for all $z\in O_x$.
If we restrict $\Psi^\mathbb{C}$ to $D^\mathbb{C} _x\times\{x\}$, we get a holomorphic map $\psi_x$ fitting into the commutative diagram \begin{equation}\label{12101}\xymatrix{ D^\mathbb{C} _x\times {\cal V}_x\ar[rr]^{\psi_x}_\simeq\ar[dr] & & {\cal V}_{D^\mathbb{C} _x}\ar[dl]\\ &D^\mathbb{C} _x .}\end{equation} which is a fiberwise open embedding. (We use $\simeq$ to mean injective holomorphic maps.) Let $f:{\cal V}_{D^\mathbb{C} _x}\to \mathbb{C}$ be similarly defined as in \eqref{cs-1}, and let $f_x:{\cal V}_x\subset {\cal B}_{si}\mapright{cs}\mathbb{C}$. We let $$\mathbf{f}_0=(f\circ\pi_{\cal V})\circ (\id_{D^\mathbb{C} _x}\times \psi_x) \quad{\rm and}\quad \mathbf{f}_1=(f\circ\pi_{{\cal V}}) \circ \Psi^\mathbb{C}\circ (\id_{D^\mathbb{C} _x}\times \psi_x); $$ both are holomorphic functions $D_x^\mathbb{C}\times D_x^\mathbb{C}\times {\cal V}_x\to\mathbb{C}$. \begin{prop} \label{12102} After shrinking $D^\mathbb{C} _x$ and ${\cal V}_x$ if necessary, there is an open subset ${\cal U}'_\mathbb{C}\subset D_x^\mathbb{C}\times D_x^\mathbb{C}\times{\cal V}_x$ containing the diagonal $\Delta=\{(y,y,y)\mid y\in D_x^\mathbb{C}\}$ and an injective local homeomorphism $\Phi^\mathbb{C} :{\cal U}'_\mathbb{C}\to D^\mathbb{C} _x\times D^\mathbb{C} _x \times {\cal V}_x$ commuting with the projection $pr_{12}: D^\mathbb{C} _x\times D_x^\mathbb{C}\times {\cal V}_x \to D^\mathbb{C} _x\times D^\mathbb{C} _x$ such that $\mathbf{f} _1\circ\Phi^\mathbb{C}=\mathbf{f} _0$. \end{prop} By cancelling the isomorphism $\id_{D^\mathbb{C}_x}\times\psi_x$ and restricting to the real part $O_x\subset D^\mathbb{C}_x$, Proposition \ref{prop6.1} is an immediate consequence of Proposition \ref{12102}. \medskip We now prove Proposition \ref{12102}. Let $$ \mathbf{f} _t=(1-t)\mathbf{f} _0+t\mathbf{f} _1$$ be holomorphic functions on $D^\mathbb{C} _x\times D^\mathbb{C} _x\times {\cal V}_x$.
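Before turning to the lemmas, we sketch the Moser-type mechanism behind the proof (the details are carried out below). Granting that the time dependent vector field $\xi_t$ of \eqref{eq1} below extends across the critical locus and integrates to a fiberwise flow $\phi_t$ with $\phi_0=\id$, and using the convention $d_V\mathbf{f} _t(\overline{\nabla_V \mathbf{f} _t})=|\!|\nabla_V \mathbf{f} _t|\!|^2$ for the metric dual, we compute
$$\frac{d}{dt}\bigl( \mathbf{f} _t\circ\phi_t\bigr)=(\mathbf{f} _1-\mathbf{f} _0)\circ\phi_t+d_V\mathbf{f} _t(\xi_t)\circ\phi_t=(\mathbf{f} _1-\mathbf{f} _0)\circ\phi_t+(\mathbf{f} _0-\mathbf{f} _1)\circ\phi_t=0,$$
so that $\mathbf{f} _1\circ\phi_1=\mathbf{f} _0$, and $\Phi^\mathbb{C}=\phi_1$ will be the desired local homeomorphism.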
Let $(d_V\mathbf{f} _t)$ be the ideal generated by the partial derivatives of $\mathbf{f} _t$ in the direction of ${\cal V}_x$. We will find a homeomorphism $\Phi$ in a neighborhood of $(x,x,x)$ such that $\mathbf{f} _1\circ\Phi=\mathbf{f} _0$ by introducing a vertical vector field along the fibers of $pr_{12}:D^\mathbb{C} _x\times D^\mathbb{C} _x\times {\cal V}_x\to D^\mathbb{C} _x\times D^\mathbb{C} _x$. We need a few lemmas. \begin{lemm}\label{12141} Let $V$ be a smooth complex manifold and $S$ a complex analytic space. Suppose $W_1\subset W_2$ are two closed complex analytic subspaces of $V\times S$ such that the induced projection $W_1\subset V\times S\to S$ is flat, and that there is a closed complex analytic subspace $S_0\subset S$ such that $W_1\times_S S_0=W_2\times_S S_0$. Then there is an open subset ${\cal U}_0\subset V\times S$ that contains $V\times S_0$ such that $W_1\cap {\cal U}_0= W_2\cap {\cal U}_0$ as complex analytic subspaces in ${\cal U}_0$. \end{lemm} \begin{proof} Let ${\cal K}$ be the kernel of the surjection $\mathcal{O}_{W_2}\to \mathcal{O}_{W_1}$. Since $\mathcal{O}_{W_1}$ is flat over $\mathcal{O}_S$, we have an exact sequence \[ 0\longrightarrow {\cal K}\otimes_{\mathcal{O}_S}\mathcal{O}_{S_0} \longrightarrow \mathcal{O}_{W_2}\otimes_{\mathcal{O}_S}\mathcal{O}_{S_0} \mapright{\phi} \mathcal{O}_{W_1}\otimes_{\mathcal{O}_S}\mathcal{O}_{S_0} \longrightarrow 0. \] Since $W_1\times_S S_0=W_2\times_S S_0$, $\phi$ is an isomorphism, thus $ {\cal K}\otimes_{\mathcal{O}_S}\mathcal{O}_{S_0}=0$. As ${\cal K}$ is a coherent sheaf of $\mathcal{O}_{V\times S}$-modules, there is an open ${\cal U}_0\subset V\times S$, containing $V\times S_0$, such that ${\cal K}|_{{\cal U}_0}=0$. This proves that $W_1\cap {\cal U}_0=W_2\cap {\cal U}_0$, as complex analytic subspaces of ${\cal U}_0$.
\end{proof} \begin{lemm}\label{12111} Let $Z_t$ be the complex analytic subspace of $D^\mathbb{C} _x\times D^\mathbb{C} _x\times {\cal V}_x$ defined by the ideal $(d_V\mathbf{f} _t)$. Then $Z_t$ is independent of $t$ in an open neighborhood of $(x,x,x)\in D^\mathbb{C} _x\times D^\mathbb{C} _x\times {\cal V}_x$. \end{lemm} \begin{proof} Let $Z=D^\mathbb{C}_x\times D^\mathbb{C}_x\times \mathfrak{X} _{f_x}$ where $\mathfrak{X} _{f_x}$ is the critical locus of $f_x:{\cal V}_x\subset {\cal B}_{si}\mapright{cs}\mathbb{C}$ in ${\cal V}_x$ defined by the ideal $(df_x)$. We first show $Z_0=Z_1=Z$. Since the critical locus of $\mathbf{f} _i|_{\{z\}\times\{z'\}\times {\cal V}_x}$ is $\mathfrak{X} _{f_x}$ by the definition of local trivialization for $i=0,1$, $Z\subset Z_i$ for $i=0,1$. By Lemma \ref{12141}, $Z=Z_i$ for $i=0,1$. Let $F, G: \mathbb{A} ^1\times D^\mathbb{C} _x\times D^\mathbb{C} _x\times {\cal V}_x\to \mathbb{C}$ be functions defined by $$F(t,z,z',p)=(1-t)\mathbf{f} _0(z,z',p)+t\mathbf{f} _1(z,z',p)\quad{\rm and}\quad G(t,z,z',p)=\mathbf{f} _0(z,z',p).$$ Since $\mathbf{f} _t$ is a linear combination of $\mathbf{f} _0$ and $\mathbf{f} _1$, we have the inclusion $$\mathfrak{X} _F:=(d_VF=0)\supset \mathfrak{X} _{G}:=(d_VG=0)=\mathbb{A} ^1\times Z $$ of analytic subspaces. Also, the restrictions of $F$ and $G$ to $\Gamma=\mathbb{A} ^1\times\{(z,z)\,|\,z\in D^\mathbb{C}_x\}$ coincide, and hence $\mathfrak{X} _F|_{\Gamma}=\mathfrak{X} _G|_\Gamma$. By Lemma \ref{12141}, $\mathfrak{X} _F=\mathfrak{X} _{G}=\mathbb{A} ^1\times Z$ in an open neighborhood of $\Gamma$. Restricting to $t\in [0,1]$, we obtain the lemma. \end{proof} We need another lemma. Let $V$ and $S$ be as before. We assume $V\subset \mathbb{C}^r$ is an open subset. We endow $V$ with the standard inner and hermitian products. \begin{lemm}\label{lem5} Let the notation be as stated. Let $Z\subset V$ be a closed complex analytic subspace.
Suppose $f: V\times S\to \mathbb{C}$ is a holomorphic function with only one fiberwise critical value $0$ such that the vanishing locus (complex analytic subspace) of $d_V f$ is $Z\times S$. Then for any convergent sequence $p_n\to p_0\in V\times S$ such that $d_V f(p_n)\ne 0$ and $d_V f(p_n)\to 0$, we have $$\lim_{n\to \infty}\frac{f(p_n)}{|\!|d_V f(p_n) |\!|}=0. $$ \end{lemm} \begin{proof} We prove the case $S=pt$. The general case is exactly the same. Let $\nu:\tilde V\to V$ be the resolution of the ideal sheaf of $Z$ so that $\nu^{-1}(Z)$ is a normal crossing divisor in $\tilde V$. Let $\tilde p_n$ be the unique lifting of $p_n$ in $\tilde V-\nu^{-1}(Z)$. Suppose $n_k$ is a subsequence so that $\tilde p_{n_k}\to q\in \nu^{-1}(Z)$. We investigate the limiting behavior of $|f(p_{n_k})|/|\!|df(p_{n_k})|\!|$. By our choice of resolution $\nu$, locally near $q\in \tilde V$, the pullback $\nu^*(d f)$ of the ideal $(d f)$ is a principal ideal sheaf generated by some monomial $\varphi=z_1^{k_1}\cdots z_m^{k_m}$ with $k_i>0$, where $z_1,\cdots,z_r$ are holomorphic functions that form local coordinates of $\tilde V$ centered at $q$. Let $w_1,\cdots, w_r$ be local coordinates of $V$ centered at $p_0$. Then $\varphi$ divides $\nu^*\frac{\partial f}{\partial w_i}$ for all $i$. We claim that $\nu^*f$ is divisible by $z_1^{k_1+1}\cdots z_m^{k_m+1}$. Indeed if we expand near $z_1=0$, $\nu^*f=c_{m_1}z_1^{m_1}+c_{m_1+1}z_1^{m_1+1}+\cdots$, where $c_j$ are holomorphic in $z_2,\cdots,z_r$, and $c_{m_1}\ne 0$. Then $\varphi$ divides \[ \sum_i \frac{\partial f}{\partial w_i}\frac{\partial w_i}{\partial z_1}=\frac{\partial}{\partial z_1}\nu^*f=m_1c_{m_1}z_1^{m_1-1}+(m_1+1)c_{m_1+1}z_1^{m_1}+\cdots . \] Hence $m_1\ge k_1+1$ and $z_1^{k_1+1}$ divides $\nu^*f$. Likewise $z_i^{k_i+1}$ divides $\nu^*f$ for each $i$. Therefore $\nu^*(f)\subset \nu^*(d f)\,\sqrt{\nu^*(d f)}$ where $\sqrt{\nu^*(d f)}$ is the radical of $\nu^*(d f)$.
We write $\nu^*f=\sum g_i\cdot \nu^*\frac{\partial f}{\partial w_i}$ with $g_i|_{\nu^{-1}(Z)}=0$. Therefore \[ \nu^*\frac{|f(p_{n_k})|}{|\!|d f(p_{n_k})|\!|}=\frac{|\nu^*f(\tilde p_{n_k})|}{(\sum |\nu^*\frac{\partial f}{\partial w_i}(\tilde p_{n_k})|^2)^{1/2}}\le \left(\sum |g_i(\tilde p_{n_k})|^2\right)^{1/2}\longrightarrow 0. \] Because $\tilde V\to V$ is proper, and because this convergence holds for all convergent subsequences $\tilde p_{n_k}$, we find that $\lim_{n\to \infty}\frac{f(p_n)}{|\!|d f(p_n)|\!|}=0. $ \end{proof} \begin{proof}[Proof of Proposition \ref{12102}] We use the inner product and the hermitian metric on ${\cal V}_x$ via the embedding ${\cal V}_x\cong V_x\subset \Omega^{0,1}(ad {\cal E}_x)_s$. We let $\nabla_V \mathbf{f} _t$ be the relative gradient vector field of $\mathbf{f} _t$ on $D^\mathbb{C}_x\times D^\mathbb{C}_x\times {\cal V}_x$ as the metric dual of $d_V\mathbf{f} _t$, the differential of $\mathbf{f} _t$ in the ${\cal V}_x$ direction. Note that $\nabla_V\mathbf{f} _t$ is a vertical vector field with respect to the projection $pr_{12}:D^\mathbb{C}_x\times D^\mathbb{C}_x\times {\cal V}_x \to D^\mathbb{C}_x\times D^\mathbb{C}_x$ which is differentiable on each fiber. We define a time dependent vector field on the complement of $Z=Z_t=(d_V\mathbf{f} _t=0)$ by \begin{equation}\label{eq1} \xi_t=\frac{\mathbf{f} _0-\mathbf{f} _1}{|\!|\nabla_V \mathbf{f} _t|\!|^2}\ \overline{\nabla_V \mathbf{f} _t}.\end{equation} We claim that it extends to a well defined vector field on some neighborhood of $(x,x,x)$ in $D^\mathbb{C}_x\times D^\mathbb{C}_x\times {\cal V}_x$. It suffices to show that \[ |\!|\xi_t|\!|=\frac{|\mathbf{f} _0-\mathbf{f} _1|}{|\!|\nabla_V \mathbf{f} _t|\!|}=\frac{|\mathbf{f} _0-\mathbf{f} _1|}{|\!|d_V \mathbf{f} _t|\!|}\] approaches zero as a point approaches $Z$. Since $Z_t=Z_0=Z_1$ by Lemma \ref{12111}, we have an inclusion $(d_V\mathbf{f} _t)\supset (d_V\mathbf{f} _i)$ of ideals for $i=0,1$.
Hence we can express the vertical partial derivatives of $\mathbf{f} _{0}$ and $\mathbf{f} _{1}$ as linear combinations of the vertical partial derivatives of $\mathbf{f} _{t}$. Thus in a neighborhood of $(x,x,x)$, we have \[ |\!|d_V \mathbf{f} _{0}|\!|\le C|\!|d_V \mathbf{f} _{t}|\!|\quad{\rm and}\quad |\!|d_V \mathbf{f} _{1}|\!|\le C |\!|d_V \mathbf{f} _{t}|\!|\] for some $C>0$. By Lemma \ref{lem5}, we have \[ |\!|\xi_t|\!|=\frac{|\mathbf{f} _0-\mathbf{f} _1|}{|\!|d_V \mathbf{f} _t|\!|}\le C\left( \frac{|\mathbf{f} _0|}{|\!|d_V \mathbf{f} _0|\!|}+\frac{|\mathbf{f} _1|}{|\!|d_V \mathbf{f} _1|\!|} \right)\to 0, \ \text{ as } d_V\mathbf{f} _0, d_V\mathbf{f} _1\to 0. \] This proves that the vector field $\xi_t$ is well defined in a neighborhood of $(x,x,x)$. Let $\mathbf{x}_t$ for $t\in [0,1]$ be an integral curve of the vector field $\xi_t$, so that $$\frac{d\mathbf{x}_t}{dt}=\dot{\mathbf{x}}_t=\xi_t(\mathbf{x}_t).$$ Since $\xi_t$ is a vertical vector field for $pr_{12}$, $\mathbf{x}_t$ lies in a fiber of $pr_{12}$. Then $\mathbf{f} _t(\mathbf{x}_t)$ is constant in $t$ because \[ \frac{d}{dt}\mathbf{f} _{t}(\mathbf{x}_t)=d_V\mathbf{f} _{t}(\dot{\mathbf{x}}_t)+\mathbf{f} _{1}-\mathbf{f} _{0}=\nabla_V \mathbf{f} _{t}\cdot\dot{\mathbf{x}}_t+\mathbf{f} _{1}-\mathbf{f} _{0}\] \[=\nabla_V \mathbf{f} _t\cdot \frac{\mathbf{f} _0-\mathbf{f} _1}{|\!|\nabla_V \mathbf{f} _t|\!|^2}\ \overline{\nabla_V \mathbf{f} _t}+\mathbf{f} _1-\mathbf{f} _0=0.\] Therefore the flow of the vector field $\xi_t$ from $t=0$ to $t=1$ gives a homeomorphism $\Phi$ of a neighborhood of $(x,x,x)$ into $D^\mathbb{C}_x\times D^\mathbb{C}_x\times {\cal V}_x$ such that $\mathbf{f} _1\circ\Phi=\mathbf{f} _0$. Because $\xi_t=0$ for all $t$ over the diagonal $\{(z,z)\}\subset D^\mathbb{C}_x\times D^\mathbb{C}_x$, $\Phi$ is the identity map over the diagonal. \end{proof} \subsection{Gluing isomorphisms} \label{sec6.2} In this subsection we prove (2) of Proposition \ref{plopgl}.
We have a family $\mathbf{V} $ of CS charts on $U\times [0,1]$ with $\mathbf{V} |_{U\times\{0\}}={\cal V}_\alpha$ and $\mathbf{V} |_{U\times \{1\}}={\cal V}_\beta$. For $x\in U $, there exist an open neighborhood $O_x$ of $x$ in $U$ and a subfamily ${\cal W}\times [0,1]$ of CS charts in $\mathbf{V} |_{O_x\times [0,1]}$. Moreover, we have perverse sheaves $P^{\bullet} _\alpha$ and $P^{\bullet} _\beta$ on $U$ given by ${\cal V}_\alpha$ and ${\cal V}_\beta$ respectively, whose restrictions to $O_x$ are the perverse sheaves of vanishing cycles $A^{\bullet} _{f^\alpha_x}[r]$ and $A^{\bullet} _{f^\beta_x}[r]$ respectively where $f^\alpha_x:{\cal V}_\alpha|_x\subset {\cal B}_{si}\mapright{cs}\mathbb{C}$ and $f^\beta_x:{\cal V}_\beta|_x\subset {\cal B}_{si}\mapright{cs} \mathbb{C}$. Recall that $P^{\bullet} _\alpha$ was obtained by gluing $A^{\bullet} _{f^\alpha_x}[r]$ by the isomorphisms $$A^{\bullet} _{f^\alpha_x}[r]\mapleft{(\lambda_{x,z}^\alpha)^*} A^{\bullet} _{f^\alpha_z}[r]\mapright{(\lambda_{y,z}^\alpha)^*} A^{\bullet} _{f^\alpha_y}[r]$$ for $x,y\in U$ and $z\in O_x\cap O_y$. \begin{lemm}\label{12121} Suppose for each $x\in U$ we have an isomorphism $\chi_x^*:A^{\bullet} _{f^\beta_x}\mapright{\cong} A^{\bullet} _{f^\alpha_x}$ such that for $z\in O_x$ we have a commutative diagram of isomorphisms \begin{equation}\label{12l22}\xymatrix{ A^{\bullet} _{f_x^\alpha} & A^{\bullet} _{f_z^\alpha}\ar[l]_{(\lambda_{x,z}^\alpha)^*} \\ A^{\bullet} _{f_x^\beta}\ar[u]^{\chi_x^*}& A^{\bullet} _{f_z^\beta}\ar[l]_{(\lambda_{x,z}^\beta)^*}\ar[u]^{\chi_z^*}. 
}\end{equation} Then the isomorphisms $\chi^*_x$ glue to give an isomorphism $$\sigma_{\alpha\beta}:P^{\bullet} _\alpha\to P^{\bullet} _\beta.$$ \end{lemm} \begin{proof} If $z\in O_x\cap O_y$, we have a commutative diagram \[\xymatrix{ A^{\bullet} _{f_x^\alpha} & A^{\bullet} _{f_z^\alpha}\ar[l]_{(\lambda_{x,z}^\alpha)^*}\ar[r]^{(\lambda_{y,z}^\alpha)^*} & A^{\bullet} _{f_y^\alpha}\\ A^{\bullet} _{f_x^\beta} \ar[u]^{\chi_x^*}& A^{\bullet} _{f_z^\beta}\ar[u]^{\chi_z^*} \ar[l]_{(\lambda_{x,z}^\beta)^*} \ar[r]^{(\lambda_{y,z}^\beta)^*}&A^{\bullet} _{f_y^\beta} \ar[u]^{\chi_y^*}. }\] By Proposition \ref{prop3} (2), we have an isomorphism $\sigma_{\alpha\beta}$ that restricts to $\chi_x^*$. \end{proof} \begin{lemm}\label{nl4.3.2} Let $W\subset V$ be a complex submanifold in a complex manifold. Let $f:V\to \mathbb{C}$ be a holomorphic function and $g=f|_W$. Suppose the critical loci $\mathfrak{X} _f$ and $\mathfrak{X} _g$ are equal. Let $x$ be a closed point of $\mathfrak{X} _f=\mathfrak{X} _g$. Then there is a coordinate system $\{z_1,\cdots, z_r\}$ of $V$ centered at $x$ such that $W$ is defined by the vanishing of $z_1,\cdots,z_m$ and $$f=q(z_1,\cdots,z_m)+g(z_{m+1},\cdots,z_r)\quad \text{where }q(z_\cdot)=\sum_{i=1}^mz_i^2, \quad{\rm and}\quad g=f|_W. $$ \end{lemm} \begin{proof} We choose coordinates $\{y_1,\cdots,y_r\}$ of $V$ centered at $x$ such that $W$ is defined by the vanishing of $y_1,\cdots,y_m$. Let $I$ be the ideal generated by $y_1,\cdots, y_m$. Since $\mathfrak{X} _f=\mathfrak{X} _g$, i.e. $(df)=(dg)+I$, we have $$\frac{\partial f}{\partial y_i}\Big|_W=\sum_{j=m+1}^r a_{ij}\frac{\partial g}{\partial y_j},\quad i=1,\cdots,m$$ for some functions $a_{ij}$ regular at $x$. 
By calculus, we have $$f=g(y_{m+1},\cdots,y_r)+\sum_{i=1}^m\frac{\partial f}{\partial y_i}\Big|_W\cdot y_i+I^2 $$ $$=g(y_{m+1},\cdots,y_r)+\sum_{j=m+1}^r\frac{\partial g}{\partial y_j}\left( \sum_{i=1}^ma_{ij}y_i\right)+I^2 $$ $$=g(z_{m+1},\cdots, z_r)+\sum_{i,k=1}^m b_{ik}y_iy_k$$ where $z_j=y_{j}+\sum_{i=1}^ma_{ij}y_i$ for $j\ge m+1$ and $b_{ik}$ are some functions holomorphic near $x$. Since the kernel of the Hessian of $f$ at $x$ is the tangent space of $\mathfrak{X} _f=\mathfrak{X} _g\subset W$ at $x$, the quadratic form $q=\sum_{i,k=1}^m b_{ik}y_iy_k$ is nondegenerate near $x$. Hence we can diagonalize $q=\sum_{i=1}^m z_i^2$ by changing the coordinates $y_1,\cdots, y_m$ to new coordinates $z_1,\cdots,z_m$. It follows that $z_1,\cdots,z_r$ is the desired coordinate system. \end{proof} Let ${\cal V}^t=\mathbf{V} |_{U\times\{t\}}$ so that ${\cal V}^0={\cal V}_\alpha$ and ${\cal V}^1={\cal V}_\beta$. Let $\delta_t:U\to {\cal V}^t$ be the tautological section which sends $x\in U$ to $x$ in the fiber ${\cal V}^t_x\subset {\cal B}_{si}$ of ${\cal V}^t$ over $x$. Let $\mathbf{f} _t: {\cal V}^t\subset U\times {\cal B}_{si}\mapright{cs}\mathbb{C}$ be the family CS functional. \begin{lemm} \label{12124} For $x\in U$, there exist an open neighborhood ${\cal U}_0$ of $\delta_0(x)$ in ${\cal V}^0$ and a homeomorphism $\chi$ of ${\cal U}_0\times [0,1]$ into $\mathbf{V} $ such that \begin{enumerate} \item the restriction $\chi_t$ (of $\chi$) to ${\cal U}_0\times\{t\}$ is a holomorphic map to an open set in ${\cal V}^t$; \item $\chi_t|_{{\cal U}_0\cap {\cal W}}$ maps ${\cal U}_0\cap{\cal W}$ into ${\cal W}\subset {\cal V}^t$ and is the identity map; \item $\chi_0:{\cal U}_0\to {\cal V}^0$ is the identity map; \item $\mathbf{f} _t\circ\chi_t=\mathbf{f} _0$.\end{enumerate} \end{lemm} \def\mathbf{g} {\mathbf{g} } \begin{proof} Since $[0,1]$ is compact, it suffices to find such a $\chi$ over an interval $[t_0,t_0+\epsilon]$ at each $t_0\in [0,1]$ with $0<\epsilon\ll 1$.
For $t_0\in [0,1]$, by choosing a complexification of the pair ${\cal W}\times[0,1]\subset \mathbf{V} $ near $(x,t_0)$, we can find coordinate functions $y_1,\cdots,y_r$ of the fibers of $\mathbf{V} \to U\times [0,1]$ at $(x,t_0)$ such that $y_1,\cdots, y_m$ are holomorphic along fibers of $\mathbf{V} \to U\times[0,1]$, and ${\cal W}\times [0,1]$ is defined by the vanishing of $y_1,\cdots, y_m$ (cf. Proposition \ref{in-e}). Let $t$ be the coordinate for $[0,1]$. We can repeat the proof of Lemma \ref{nl4.3.2} with $$\mathbf{f} :\mathbf{V} \hookrightarrow U\times [0,1]\times {\cal B}_{si}\mapright{pr_3}{\cal B}_{si}\mapright{cs}\mathbb{C}$$ and $\mathbf{g} =\mathbf{f} |_{{\cal W}\times [0,1]}$. By Lemma \ref{12141}, $(d_V\mathbf{f} )=(d_V\mathbf{g} )+I$ where $I=(y_1,\cdots,y_m)$ and $(d_V\mathbf{f} )$ (resp. $(d_V\mathbf{g} )$) denotes the ideal generated by the partial derivatives in the fiber direction of $\mathbf{V} \to U\times [0,1]$ (resp. ${\cal W}\times [0,1]\to U\times [0,1]$). Then we obtain a new coordinate system $\{z_i\}$ of $\mathbf{V} $ over $U\times [0,1]$ at $(x,t_0)$ such that $\mathbf{f} =\sum_{i=1}^m z_i^2+\mathbf{g} (z_{m+1},\cdots,z_r)$ with $z_j|_{{\cal W}\times [0,1]}=y_j$ for $j\ge m+1$. Then the coordinate change from $\{z_j|_{t_0}\}$ to $\{z_j|_{t}\}$ defines the desired map $\chi$. \end{proof} Let $f_x^t:{\cal V}^t_x\subset {\cal B}_{si}\mapright{cs}\mathbb{C}$ be the CS functional on the CS chart ${\cal V}^t_x$. \begin{lemm}\label{12125} The isomorphism $\chi_1^*:A^{\bullet} _{f^1_x}\to A^{\bullet} _{f^0_x}$ induced from $\chi_1$ is independent of the choice of $\chi$. \end{lemm} \begin{proof} If $\chi'$ is another such homeomorphism, then $\chi^{-1}_t\circ \chi_t':{\cal V}^0_x\to {\cal V}^0_x$ is a homotopy of homeomorphisms which is $\id$ at $t=0$. By Proposition \ref{prop2}, $(\chi^{-1}_1\circ\chi'_1)^*:A^{\bullet} _{f^0_x}\to A^{\bullet} _{f^0_x}$ is the identity. This proves the lemma. 
\end{proof} \begin{lemm}\label{12129} For each $x\in U$, we have an isomorphism $\chi_x^*:A^{\bullet} _{f^\beta_x}\mapright{\cong} A^{\bullet} _{f^\alpha_x}$ such that \eqref{12l22} holds. \end{lemm} \begin{proof} By Proposition \ref{12102}, we have a homeomorphism $\xi_\alpha:O_x\times {\cal V}_\alpha|_{O_x}\to {\cal V}_\alpha|_{O_x}\times O_x$ that gives the gluing isomorphism $(\lambda^\alpha_{x,z})^*$ over $(x,z)$. By construction, the restriction of $\xi_\alpha$ to $(O_x\times {\cal V}_\alpha|_{O_x})\times_{O_x\times O_x}O_x$, the part over the diagonal $O_x\subset O_x\times O_x$, is the identity map. The composition of homeomorphisms \begin{equation}\label{12127}\xymatrix{ O_x\times {\cal V}_\alpha|_{O_x} \ar[r]^{\id\times\chi} & O_x\times {\cal V}_\beta|_{O_x} \ar[d]^{\xi_\beta} \\ {\cal V}_\alpha|_{O_x}\times O_x \ar[u]_{\xi_\alpha^{-1}} & {\cal V}_\beta|_{O_x}\times O_x\ar[l]_{\chi^{-1}\times \id} }\end{equation} is the identity over the diagonal $O_x\subset O_x\times O_x$. Upon fixing a local trivialization ${\cal V}_\alpha|_{O_x}\cong O_x\times {\cal V}_{\alpha,x}$, we have $A^{\bullet} _{\mathbf{f} _\alpha}\cong pr_2^{-1}A^{\bullet} _{f_x^\alpha}$ because the analytic space $O_x$ is locally contractible. By Lemma \ref{lemidex} for $O_x\times {\cal V}_x\subset O_x\times O_x\times {\cal V}_x\to {\cal V}_x$, we obtain the commutativity of the diagram of isomorphisms \begin{equation}\label{12128}\xymatrix{ pr_1^{-1}A^{\bullet} _{\mathbf{f} _\alpha} & pr_1^{-1} A^{\bullet} _{\mathbf{f} _\beta}\ar[l] \\ pr_2^{-1}A^{\bullet} _{\mathbf{f} _\alpha}\ar[u] & pr_2^{-1}A^{\bullet} _{\mathbf{f} _\beta}\ar[l]\ar[u] }\end{equation} Restricting to the fiber over $(x,z)$, we obtain \eqref{12l22} possibly after shrinking $O_x$.
\end{proof} By Lemma \ref{12121} and Lemma \ref{12129}, we have the desired isomorphism $\sigma_{\alpha\beta}:P^{\bullet} _\alpha\to P^{\bullet} _\beta$ over $U_\alpha\cap U_\beta$ and thus we proved (2) of Proposition \ref{plopgl}. \medskip \subsection{Obstruction class} In this subsection we prove (3) of Proposition \ref{plopgl} and complete our proof of Theorem \ref{truemainth}. Let $x\in U_\alpha\cap U_\beta\cap U_\gamma$. By our construction in \S\ref{sec6.2}, in a neighborhood of $x$, $\sigma_{\gamma\alpha}\circ\sigma_{\beta\gamma}\circ\sigma_{\alpha\beta}$ is given by a biholomorphic map $\varphi:{\cal V}_{\alpha,x}\to {\cal V}_{\alpha,x}$ preserving $f^\alpha_x$ whose restriction to ${\cal W}$ is the identity. By Proposition \ref{prop1}, $\sigma_{\gamma\alpha}\circ\sigma_{\beta\gamma}\circ\sigma_{\alpha\beta}$ is $\det(d\varphi|_x)\cdot\id$ and $\det(d\varphi|_x)=\pm 1$. Thus we obtain a $\mathbb{Z}_2$-valued Cech 2-cocycle $\{\sigma_{\alpha\beta\gamma}\}$ of the covering $\{U_\alpha\}$. One checks directly that the cocycle is closed and its cohomology class $\sigma\in H^2(X,\mathbb{Z}_2)$ is the \emph{obstruction class} for gluing the perverse sheaves $\{P^{\bullet} _\alpha\}$. \begin{prop}\label{prop3.3.5} Given preorientation data $\{\Xi_\alpha\}$ on $X$, the perverse sheaves $\{P^{\bullet} _\alpha\}$ in \S\ref{seclopgl1} glue to give a globally defined perverse sheaf $P^{\bullet} $ on $X$ if and only if the obstruction class $\sigma\in H^2(X,\mathbb{Z}_2)$ vanishes.\end{prop} The cocycle $\sigma_{\alpha\beta\gamma}$ is by definition the cocycle for gluing the determinant line bundles $\det(T{\cal V}_{\alpha,x})$ by the isomorphisms induced from the biholomorphic maps ${\cal V}_{\alpha,x}\cong {\cal V}_{\beta,x}$. In particular, $\det(T{\cal V}_{\alpha,x})$ glue to a globally defined line bundle on $X$ if and only if $\sigma\in H^2(X,\mathbb{Z}_2)$ is zero. 
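To illustrate why $\det(d\varphi|_x)=\pm 1$ (a heuristic we add for the reader, covering only the simplest situation; the general statement is Proposition \ref{prop1}), suppose $x$ is a nondegenerate critical point of $f=f^\alpha_x$ with Hessian $H=\mathrm{Hess}_x(f)$. Comparing second order Taylor expansions of $f$ and $f\circ\varphi$ at the fixed point $x$, where $df(x)=0$ and $\varphi(x)=x$, gives

```latex
% Second order comparison of f and f\circ\varphi at the fixed point x:
\[
(d\varphi|_x)^{T}\,H\,(d\varphi|_x)=H
\qquad\Longrightarrow\qquad
\det(d\varphi|_x)^{2}\,\det H=\det H .
\]
% Nondegeneracy gives \det H\ne 0, hence \det(d\varphi|_x)^{2}=1,
% i.e. \det(d\varphi|_x)=\pm 1.
```

So at least in the nondegenerate case the transition data for $\det(T{\cal V}_{\alpha,x})$ are visibly $\mathbb{Z}_2$-valued.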
Since a neighborhood $O_x$ of $x$ in $\mathfrak{X} $ is the critical locus of the CS functional on ${\cal V}_{\alpha,x}$, we have a symmetric obstruction theory $$ {\cal F}:=[ T_{{\cal V}_{\alpha,x}}\to \Omega_{{\cal V}_{\alpha,x}}]\longrightarrow \mathbb{L}^{\bullet} _{\mathfrak{X} }|_{O_x}.$$ Hence the determinant line bundle $\det {\cal F}$ is the inverse square of the determinant line bundle of the tangent bundle $T_{{\cal V}_{\alpha,x}}$. On the other hand, by \cite{Tho}, ${\cal F}\cong \Ext_{\pi}^\bullet({\cal E},{\cal E})[2]$ on $O_x$ where $\pi:\mathfrak{X} \times Y\to \mathfrak{X} $ is the projection and ${\cal E}$ is the universal bundle. Hence the determinant bundle of $T_{{\cal V}_{\alpha,x}}$ is a square root of the determinant bundle of $\Ext_{\pi}^\bullet({\cal E},{\cal E})[1]$. Therefore if the obstruction class $\sigma\in H^2(X,\mathbb{Z}_2)$ is zero, then $\det\, \Ext_{\pi}^\bullet({\cal E},{\cal E})$ has a square root. Suppose $\det \Ext_{\pi}^\bullet({\cal E},{\cal E})$ has a square root $L$. Then the local isomorphisms $\det T_{{\cal V}_{\alpha,x}}\cong L|_{O_x}$ induce gluing isomorphisms for $\{ \det T_{{\cal V}_{\alpha,x}}\}$. Therefore the obstruction class $\sigma$ to gluing them is zero. This implies the gluing of $\{P^{\bullet} _\alpha\}$ by Proposition \ref{prop3.3.5}. This completes the proof of Theorem \ref{truemainth}. We add that if a square root $L$ of $\det \Ext_{\pi}^\bullet({\cal E},{\cal E})$ exists, then any two such square roots differ by tensoring a $2$-torsion line bundle (i.e. a $\mathbb{Z}_2$-local system) on $X$, and vice versa. \begin{rema} In \cite{KS0}, Kontsevich and Soibelman defined orientation data as choices of square roots of $\det \Ext_{\pi}^\bullet({\cal E},{\cal E})$ satisfying a compatibility condition. See Definition 15 in \cite{KS0}. For the existence of a perverse sheaf which is the goal of this paper, it suffices to have a square root. 
However for wall crossings in the derived category, the compatibility condition seems necessary. See \S5 of \cite{KS0} for further discussions. \end{rema} \section{Mixed Hodge modules}\label{sec9} In this section, we prove that the perverse sheaf $P^{\bullet} $ in Theorem \ref{thmInt} lifts to a mixed Hodge module (MHM for short) of Morihiko Saito (\cite{Sai90}). A mixed Hodge module $M^{\bullet} $ consists of a $\mathbb{Q}$-perverse sheaf $P^{\bullet} $, a regular holonomic $D$-module $A^{\bullet} $ with $DR(A^{\bullet} )\cong P^{\bullet} \otimes_\mathbb{Q}\mathbb{C}$, a $D$-module filtration $F$ and a weight filtration $W$, subject to polarizability and to conditions arising from an inductive construction. Like perverse sheaves, mixed Hodge modules form an abelian category $MHM(X)$. There is a forgetful functor \[ rat: MHM(X)\longrightarrow Perv(X) \] which is \emph{faithful and exact}. Moreover, if $f:V\to \mathbb{C}$ is a holomorphic function on a complex manifold $V$ of dimension $r$, there is a MHM $M^{\bullet} _f=\phi_f^m(\mathbb{Q}[r-1])$ such that \[ rat(M^{\bullet} _f)=\phi_f(\mathbb{Q}[r-1])=A^{\bullet} _f[r] \] is the perverse sheaf of vanishing cycles of $f$. Further, if $\Phi:V\to V$ is a biholomorphic map and $g=f\circ\Phi$, then $\Phi$ induces an isomorphism $\Phi_*:M^{\bullet} _f\to M^{\bullet} _g$. We also note that the category $MHM(X)$ is a sheaf, i.e. gluing works (\cite{Sai04}). The goal of this section is to prove the following. \begin{theo}\label{thmMHM} The perverse sheaf $P^{\bullet} $ in Theorem \ref{thmInt} lifts to a MHM $M^{\bullet} $. Namely, there exists a MHM $M^{\bullet} $ such that $rat(M^{\bullet} )=P^{\bullet} $. \end{theo} Like Theorem \ref{truemainth}, Theorem \ref{thmMHM} is a direct consequence of the following analogue of Proposition \ref{plopgl} together with Propositions \ref{pExOr} and \ref{prCSdata}.
\begin{prop}\label{proMHM} (1) Let $\pi:{\cal V}\to U$ be a family of CS charts on $U\subset X\subset {\cal B}_{si}$ with complexifiable local trivializations at every point $x\in U$. Then the MHMs of vanishing cycles for $$f_x:{\cal V}_x=\pi^{-1}(x)\subset {\cal B}_{si}\mapright{cs} \mathbb{C}$$ glue to a MHM $M^{\bullet} $ on $U$, i.e. $M^{\bullet} $ is isomorphic to $M_{f_x}^{\bullet} $ in a neighborhood of $x$.\\ (2) Let ${\cal V}_\alpha$ and ${\cal V}_\beta$ be two families of CS charts on $U$ with complexifiable local trivializations. Let $M^{\bullet} _\alpha$ and $M^{\bullet} _\beta$ be the induced MHMs on $U$. Let $\mathbf{V} $ be a family of CS charts on $U\times [0,1]$ with complexifiable local trivializations such that $\mathbf{V} |_{U\times\{0\}}={\cal V}_\alpha$ and $\mathbf{V} |_{U\times\{1\}}={\cal V}_\beta$. Suppose for each $x\in U$, there are an open $U_x\subset U$ and a subfamily ${\cal W}$ of both ${\cal V}_\alpha|_{U_x}$ and ${\cal V}_\beta|_{U_x}$ such that ${\cal W}\times [0,1]$ is a complexifiable subfamily of CS charts in $\mathbf{V} |_{U_x\times [0,1]}$. Then there is an isomorphism $\sigma^m_{\alpha\beta} :M^{\bullet} _\alpha\cong M^{\bullet} _\beta$ of MHMs.\\ (3) If there are three families ${\cal V}_\alpha, {\cal V}_\beta, {\cal V}_\gamma$ with homotopies among them as in (2), then the isomorphisms $\sigma^m_{\alpha\beta} , \sigma^m_{\beta\gamma}, \sigma^m_{\gamma\alpha}$ satisfy $$\sigma^m_{{\alpha\beta} \gamma}:=\sigma^m_{\gamma\alpha}\circ \sigma^m_{\beta\gamma}\circ \sigma^m_{\alpha\beta} = \pm \id.$$ \end{prop} The proof of Proposition \ref{proMHM} is a line-by-line repetition of the proof of Proposition \ref{plopgl} in \S\ref{seclopgl}, with perverse sheaves replaced by MHMs, provided that \emph{homeomorphisms} are replaced by \emph{biholomorphic maps}. Notice that the use of Propositions \ref{prop2} and \ref{prop1} is justified by the fact that $rat$ is a faithful functor.
For example, Proposition \ref{plopgl} (3) immediately implies Proposition \ref{proMHM} (3) because $rat(\sigma^m_{\alpha\beta})=\sigma_{\alpha\beta}$ and $rat(\pm \id)=\pm \id$. Now the only non-holomorphic map used in \S\ref{seclopgl} is the homeomorphism $\Phi$ in the proof of Proposition \ref{12102} which was defined by the integral flow of the vector field \eqref{eq1}. Therefore we have a proof of Theorem \ref{thmMHM} as soon as we can replace \eqref{eq1} by a holomorphic vector field $\xi_t$ which vanishes on $\mathfrak{X} $ and satisfies \begin{equation}\label{1302131} d_V\mathbf{f} _t(\xi_t)=\mathbf{f} _0-\mathbf{f} _1\end{equation} in the notation of \S\ref{seclopgl1}. To see this, we first recall that by Lemma \ref{12111}, the ideal ${\cal I}=(d_V\mathbf{f} _t)$ is independent of $t$ and defines an analytic space $Z$. \begin{lemm}\label{1302132} $\mathbf{f} _1-\mathbf{f} _0\in {\cal I}^2$. \end{lemm} \begin{proof} We let $\iota: D^\mathbb{C}_x\times {\cal V}_{D_x^\mathbb{C}}\to {\cal A}_s$ and $\iota': {\cal V}_{D_x^\mathbb{C}}\times D^\mathbb{C}_x\to{\cal A}_s$ be the compositions of the projections to ${\cal V}_{D_x^\mathbb{C}}$ with the tautological map ${\cal V}_{D_x^\mathbb{C}}\to{\cal A}_s$ constructed in \S\ref{seclopgl}. Then $\mathbf{f} _0=cs\circ\iota\circ(\id_{D_x^\mathbb{C}}\times\psi_x)$ and $\mathbf{f} _1=cs\circ\iota'\circ\Psi^\mathbb{C}\circ(\id_{D_x^\mathbb{C}}\times\psi_x)$. Since the problem is local, we can reduce the proof to the following case. Let $\xi\in D^\mathbb{C}_x\times D_x^\mathbb{C}\times {\cal V}_x$ be any point in the subspace defined by the ideal ${\cal I}$. We pick an open neighborhood $\xi\in W \subset D_x^\mathbb{C}\times D_x^\mathbb{C}\times {\cal V}_x$, so that $W$ is endowed with holomorphic coordinates $z=(z_1,\cdots,z_m)$ with $\xi=(0,\cdots,0)\in W$. Let $I= {\cal I}\otimes_{\mathcal{O}_{D^\mathbb{C}_x\times D_x^\mathbb{C}\times {\cal V}_x}}\mathcal{O}_{W}$ denote the ideal of $W\cap Z$.
Then it suffices to show that \begin{equation}\label{111} \mathbf{f} _0|_{W}-\mathbf{f} _1|_{W}\in I^2.\end{equation} We now describe the difference $\mathbf{f} _0|_{W}-\mathbf{f} _1|_{W}$. For simplicity, we abbreviate $(\id_{D_x^\mathbb{C}}\times\psi_x)|_{W}$ to $\tilde \psi_x$. By the construction of $\Psi^\mathbb{C}$, we know that there are holomorphic $g:W\to {\cal G}$ and $\epsilon: W\to \Omega^{0,1}(ad E)_s$ satisfying $\epsilon|_{W\cap Z}\equiv 0$ such that $$\iota'\circ\Psi^\mathbb{C}\circ(\id_{D_x^\mathbb{C}}\times\psi_x)|_{W}=\iota'\circ\Psi^\mathbb{C}\circ\tilde{\psi}_x=g\cdot(\iota\circ\tilde \psi_x) +\epsilon=g\cdot(\iota\circ\tilde \psi_x+\epsilon'), $$ where $g\cdot(-)$ denotes the gauge group action; $\cdot+\epsilon$ is via the affine structure ${\cal A}_s\times\Omega^{0,1}(ad E)_s\to{\cal A}_s$, and $\epsilon':W\to \Omega^{0,1}(ad E)_s$ is the holomorphic map making the third identity hold, which satisfies $\epsilon'|_{W\cap Z}\equiv 0$. Since $cs$ is invariant under gauge transformations, \eqref{111} is equivalent to \begin{equation}\label{222} cs\circ\iota\circ\tilde \psi_x-cs\circ (\iota\circ\tilde \psi_x+\epsilon')\in I^2. \end{equation} We use finite dimensional approximation to reduce this to a familiar problem in several complex variables. First, since $\epsilon'$ takes values in $C^\infty$-forms, we can lift it to $\tilde \epsilon: W\to \Omega^{0,1}(ad E)_{L^2_t}$ for a large $t$ so that $\Omega^{0,1}(ad E)_{L^2_t}\subset \Omega^{0,1}(ad E)_s$. Since $\Omega^{0,1}(ad E)_{L^2_t}$ is a separable Hilbert space, we can approximate it by an increasing sequence of finite dimensional subspaces $R_k\subset \Omega^{0,1}(ad E)_{L^2_t}$. Let $q_k: \Omega^{0,1}(ad E)_{L^2_t}\to R_k\subset \Omega^{0,1}(ad E)_s$ be the orthogonal projection.
Then we have a convergence of holomorphic functions $$\lim_{k\to\infty} cs\circ(\iota\circ\tilde \psi_x+ q_k\circ \tilde \epsilon)= cs\circ(\iota\circ\tilde \psi_x+\epsilon') $$ uniformly on every compact subset of $W$. We claim that \begin{equation}\label{333} cs\circ\iota\circ\tilde \psi_x -cs\circ(\iota\circ\tilde \psi_x+q_k\circ \tilde \epsilon)\in I^2. \end{equation} Note that the claim and the uniform convergence imply \eqref{222}. We prove \eqref{333}. For a fixed $k$, we pick a basis $e_1,\cdots, e_{n}$ of $R_k$; we introduce complex coordinates $w=(w_1,\cdots, w_n)$, and form a holomorphic function $$F_k: W\times\mathbb{C}^n\longrightarrow \mathbb{C};\quad F_k(z,w)=cs\circ(\iota\circ\tilde \psi_x+\sum_{j=1}^n w_j e_j). $$ If we write $q_k\circ\tilde \epsilon=\delta_1e_1+\cdots+\delta_ne_n: W\to R_k$, then all $\delta_j$ are holomorphic functions lying in $I$. Therefore, $$\bigl( cs\circ(\iota\circ\tilde \psi_x+q_k\circ\tilde \epsilon)\bigr)(z)=F_k(z, \delta_1(z),\cdots, \delta_n(z)). $$ Since $\delta_j\in I$, applying Taylor expansion along $(z,0)$, we conclude that $$F_k(z, \delta_1(z),\cdots, \delta_n(z))\equiv F_k(z,0)+\sum_{j=1}^n \frac{\partial F_k}{\partial w_j}(z, 0)\cdot \delta_j(z)\!\!\mod I^2. $$ Since $\frac{\partial F_k}{\partial w_j}(z, 0)$ involve the partial derivatives of $cs$ via the chain rule, we conclude that $\frac{\partial F_k}{\partial w_j}(z, 0)\in I$. This proves that $F_k(z,\delta_1(z),\cdots,\delta_n(z))-F_k(z,0)\in I^2$, which is \eqref{333}. This proves the lemma. \end{proof} Since $\mathbf{f} _0-\mathbf{f} _1\in {\cal I}^2$ and ${\cal I}=(d_V\mathbf{f} _t)$, we can always find a time dependent \emph{holomorphic} vertical vector field $\xi_t$ which satisfies \eqref{1302131} and vanishes along $\mathfrak{X} $ (cf. \cite{BBDJS}). This gives us the desired biholomorphic map $\Phi$. This completes our proof of Theorem \ref{thmMHM}. Note that we don't need Lemma \ref{lem5} for this choice of $\xi_t$.
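To indicate how such a holomorphic $\xi_t$ can be written down (a local sketch we add, assuming holomorphic fiber coordinates $z_1,\cdots,z_r$; the coefficients below are not canonical, and patching local choices into a global vertical field requires more care, cf. \cite{BBDJS}): since $\mathbf{f} _0-\mathbf{f} _1\in{\cal I}^2$ with ${\cal I}=(d_V\mathbf{f} _t)$, we may write locally

```latex
% f_0 - f_1 lies in the square of the ideal generated by the fiberwise
% partials of f_t, so for some holomorphic (t-dependent) h_{ij}:
\[
\mathbf{f}_0-\mathbf{f}_1
  =\sum_{i,j}h_{ij}\,
   \frac{\partial \mathbf{f}_t}{\partial z_i}\,
   \frac{\partial \mathbf{f}_t}{\partial z_j},
\qquad
\xi_t:=\sum_{j}\Bigl(\sum_{i}h_{ij}\,
   \frac{\partial \mathbf{f}_t}{\partial z_i}\Bigr)
   \frac{\partial}{\partial z_j}.
\]
% Then d_V f_t(\xi_t) = \sum_j \xi_t^j \,\partial f_t/\partial z_j
%                     = f_0 - f_1,
% and every coefficient of \xi_t lies in I, so \xi_t vanishes along Z.
```

By construction $d_V\mathbf{f} _t(\xi_t)=\mathbf{f} _0-\mathbf{f} _1$ as in \eqref{1302131}, and all coefficients of $\xi_t$ lie in ${\cal I}$, so $\xi_t$ vanishes along $Z$ and in particular along $\mathfrak{X} $.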
\begin{rema} \emph{(Gluing of polarization)} To use the hard Lefschetz property of \cite{Sai88}, we also need the gluing of polarization and monodromy for $M^{\bullet} $ and the graded object $\mathrm{gr}^WM^{\bullet} $. Note that our gluing isomorphisms arise from continuous families of CS charts $V$. Since a continuous variation of rational numbers is constant, the obvious polarizations for the constant sheaves $\mathbb{Q}_V$ on $V$ are constant in a continuous family of CS charts. The induced polarizations on the perverse sheaves $A_f^{\bullet} [r]=\phi_f\mathbb{Q}[r-1]$, defined in \cite[\S5.2]{Sai88}, are functorial. Therefore the polarizations on $A_f^{\bullet} [r]$ glue to give us a polarization on $P^{\bullet} $. By the same argument, we have a polarization on $\mathrm{gr}^WM^{\bullet} $ and the monodromy operators also glue. \end{rema} \medskip \def\mathfrak{S} {\mathfrak{S} } \section{Gopakumar-Vafa invariants} In this section, we provide a mathematical theory of the Gopakumar-Vafa invariant as an application of Theorem \ref{theo4.3.8}. \subsection{Intersection cohomology sheaf}\label{secGV1} As before, let $Y$ be a smooth projective Calabi-Yau 3-fold over $\mathbb{C}$. From string theory (\cite{GoVa, Katz}), it is expected that \begin{enumerate}\item there are integers $n_h(\beta)$, called the Gopakumar-Vafa invariants (GV for short), which contain all the information about the Gromov-Witten invariants $N_g(\beta)$ of $Y$ in the sense that \begin{equation}\label{GVGW}\sum_{g,\beta}N_g(\beta)q^\beta\lambda^{2g-2}=\sum_{k,h,\beta}n_h(\beta)\frac1{k} \left(2\sin (\frac{k\lambda}2)\right)^{2h-2}q^{k\beta} \end{equation} where $\beta\in H_2(Y,\mathbb{Z})$, $q^\beta=\exp(-2\pi\int_\beta c_1(\mathcal{O}_Y(1)))$; \item $n_h(\beta)$ come from an $sl_2\times sl_2$ action on some cohomology theory of the moduli space $\mathfrak{X} $ of one dimensional stable sheaves on $Y$; \item $n_0(\beta)$ should be the Donaldson-Thomas invariant of the moduli space $\mathfrak{X} $.
\end{enumerate} By using the global perverse sheaf $P^{\bullet} $ constructed above and the method of \cite{HST}, we can give a geometric theory for GV invariants. \medskip We recall the following facts from \cite{Sai88, Sai90}. \begin{theo}\label{12191} \begin{enumerate} \item \emph{(Hard Lefschetz theorem)} If $f:X\to Y$ is a projective morphism and $P^{\bullet} $ is a perverse sheaf on $X$ which underlies a pure (polarizable) MHM $M^{\bullet} $, then the cup product induces an isomorphism \[ \omega^k:\, ^p\!{\cal H}^{-k}Rf_*P^{\bullet} \longrightarrow \,^p\!{\cal H}^{k}Rf_*P^{\bullet} \] where $\omega$ is the first Chern class of a relative ample line bundle. \item \emph{(Decomposition theorem)} If $f:X\to Y$ is a proper morphism and $P^{\bullet} $ as above, then \[ Rf_*P^{\bullet} \cong \oplus_k \,^p\!{\cal H}^{k}Rf_*P^{\bullet} [-k] \] and each summand $^p{\cal H}^{k}Rf_*P^{\bullet} [-k]$ is a perverse sheaf underlying a MHM which is again polarizable semisimple and pure. \end{enumerate} \end{theo} Let $\mathfrak{X} $ be the moduli space of stable one-dimensional sheaves $E$ on $Y$ with $\chi(E)=1$ and $[E]=\beta\in H_2(Y,\mathbb{Z})$. In particular, the rank of $E$ is zero and $c_1(E)=0$. By \cite[Theorem 6.11]{Mar}, there is a universal family ${\cal E}$ on $\mathfrak{X} \times Y$. Let $X=\mathfrak{X} _{red}$ be the reduced complex analytic subspace of $\mathfrak{X} $. Let $\tilde X$ be the semi-normalization of $X$ and let $S$ be the image of the morphism $\tilde X\to Chow(Y)$ to the Chow scheme of curves in $Y$. By \cite{Koll}, the morphism $\tilde X\to X$ is one-to-one and hence a homeomorphism because $\tilde X$ is projective and $X$ is separated. The natural morphism $f:\tilde X\longrightarrow S$ is projective and the intersection cohomology sheaf $IC^{\bullet} =IC_{\tilde X}(\mathbb{C})^{\bullet} $ underlies a pure simple MHM. In \cite{HST}, S. Hosono, M.-H. Saito and A. 
Takahashi show that the hard Lefschetz theorem applied to $f$ and $c:S\to pt$ gives us an action of $sl_2\times sl_2$ on the intersection cohomology $IH^*(\tilde X)=\mathbb{H} ^*(\tilde X,IC^{\bullet} )$ as follows: The relative Lefschetz isomorphism $$\, ^p\!{\cal H}^{-k}Rf_*(IC^{\bullet} )\longrightarrow \,^p\!{\cal H}^{k}Rf_*(IC^{\bullet} )$$ for $f$ gives an action of $sl_2$, called the left action, via the isomorphisms $$\mathbb{H} ^*(\tilde X, IC^{\bullet} )\cong \mathbb{H} ^*(S,Rf_*IC^{\bullet} )\cong \oplus_k \mathbb{H} ^*(S, \,^p\!{\cal H}^{k}Rf_*(IC^{\bullet} )[-k])$$ from the decomposition theorem. On the other hand, since $\,^p\!{\cal H}^{k}Rf_*(IC^{\bullet} )[-k]$ underlies a MHM which is again semisimple and pure, $\mathbb{H} ^*(S, \,^p\!{\cal H}^{k}Rf_*(IC^{\bullet} )[-k])$ is equipped with another action of $sl_2$, called the right action, by hard Lefschetz again. Therefore we obtain an action of $sl_2\times sl_2$ on the intersection cohomology $IH^*(\tilde X)$ of $\tilde X$. If $C\in S$ is a smooth curve of genus $h$, the fiber of $f$ over $C$ is expected to be the Jacobian of line bundles on $C$ whose cohomology is an $sl_2$-representation space $$\left((\frac12)\oplus 2(0)\right)^{\otimes h}, $$ where $(\frac12)$ denotes the 2-dimensional representation of $sl_2$ while $(0)$ is the trivial 1-dimensional representation. In \cite{HST}, the authors propose a theory of the Gopakumar-Vafa invariants by using the $sl_2\times sl_2$ action on $IH^*(\tilde X,\mathbb{C})$ as follows: By the Clebsch-Gordan rule, it is easy to see that one can uniquely write the $sl_2\times sl_2$-representation space $IH^*(\tilde X,\mathbb{C})$ in the form \[ IH^*(\tilde X,\mathbb{C})=\bigoplus_h \left((\frac12)_L\oplus 2(0)_L\right)^{\otimes h}\otimes R_h, \] where $(\frac{k}2)_L$ denotes the $k+1$ dimensional irreducible representation of the left $sl_2$ action while $R_h$ is a representation space of the right $sl_2$ action. 
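As an elementary illustration of this expectation, consider $h=1$: the fiber over a smooth elliptic curve $C$ is $\mathrm{Jac}(C)$, a real 2-torus, and its cohomology, graded by the hard Lefschetz $sl_2$, is \[ H^*(\mathrm{Jac}(C),\mathbb{C}) \;=\; \underbrace{\left(H^0\oplus H^2\right)}_{(\frac12)} \;\oplus\; \underbrace{H^1}_{2(0)} \;\cong\; \left(\tfrac12\right)\oplus 2(0), \] since $H^0$ and $H^2$ are the weight $\mp1$ spaces of one irreducible doublet, while the 2-dimensional $H^1$ sits in weight $0$. Taking $h$-th tensor powers via the K\"unneth formula reproduces the representation $\left((\frac12)\oplus 2(0)\right)^{\otimes h}$ quoted above for a genus $h$ fiber. 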
Now the authors of \cite{HST} define the GV invariant as the Euler number $Tr_{R_h}(-1)^{H_R}$ of $R_h$ where $H_R$ is the diagonal matrix in $sl_2$ with entries $1,-1$. However it seems unlikely that the invariant $n_h(\beta)$ defined using the intersection cohomology as in \cite{HST} will relate to the GW invariants of $Y$ as proposed by Gopakumar-Vafa because intersection cohomology is unstable under deformation. We propose to use the perverse sheaf $P^{\bullet} $ on $X$ constructed above instead of $IC^{\bullet} $. \medskip \subsection{GV invariants from perverse sheaves} In this subsection, we assume that $\det \Ext_{\pi}^\bullet({\cal E},{\cal E})$ admits a square root so that we have a perverse sheaf $P^{\bullet} $ and a MHM $M^{\bullet} $ which are locally the perverse sheaf and MHM of vanishing cycles for a local CS functional. \begin{rema} In \cite{Hua}, it is proved that if the Calabi-Yau 3-fold $Y$ is simply connected and $H^*(Y,\mathbb{Z})$ is torsion-free, then $\det \Ext_{\pi}^\bullet({\cal E},{\cal E})$ admits a square root. For instance, when $Y$ is a quintic threefold, we have the desired perverse sheaf and MHM. \end{rema} Since the semi-normalization $\gamma:\tilde X\to X$ is bijective, the pullback $\tilde P^{\bullet} $ of $P^{\bullet} $ is a perverse sheaf and $\gamma_*\tilde P^{\bullet} \cong P^{\bullet} $. By Theorem \ref{thmMHM}, $P^{\bullet} $ lifts to a MHM $M^{\bullet} $ and its pullback $\tilde M^{\bullet} $ satisfies $rat(\tilde M^{\bullet} )=\tilde P^{\bullet} $ since $rat$ preserves Grothendieck's six functors (\cite{Sai90}). Let $\hat{M}^{\bullet} =\mathrm{gr}^W \tilde M^{\bullet} $ be the graded object of $\tilde M^{\bullet} $ with respect to the weight filtration $W$. Then $\hat{M}^{\bullet} $ is a direct sum of polarizable Hodge modules (\cite{SaiICM}). 
Let $\hat{P}^{\bullet} =rat(\hat{M}^{\bullet} )$ which is the graduation $\mathrm{gr}^W \tilde P^{\bullet} $ by the weight filtration of $\tilde P^{\bullet} $ because $rat$ is an exact functor (\cite{Sai90}). By \cite[\S5]{Sai88}, the hard Lefschetz theorem and the decomposition theorem hold for the semisimple polarizable MHM $\hat{ M}^{\bullet} $. Hence by applying the functor $rat$, we obtain the hard Lefschetz theorem and the decomposition theorem for $\hat{P}^{\bullet} $. Therefore, we can apply the argument in \S\ref{secGV1} to obtain an action of $sl_2\times sl_2$ on the hypercohomology $\mathbb{H} ^*(\tilde X,\hat{P}^{\bullet} )$ to write $$\mathbb{H} ^*(\tilde X,\hat{P}^{\bullet} ) \cong \bigoplus_h \left((\frac12)_L\oplus 2(0)_L\right)^{\otimes h}\otimes R_{h}.$$ \begin{defi}\label{12231} We define the Gopakumar-Vafa invariant as \[ n_h(\beta):= Tr_{R_{h}}(-1)^{H_R}.\] \end{defi} The GV invariant $n_h(\beta)$ is integer valued and defined by an $sl_2\times sl_2$ representation space $\mathbb{H} ^*(\tilde X, \hat{P}^{\bullet} )$ as expected from \cite{GoVa}. \begin{prop}\label{pS7.1} The number $n_0(\beta)$ is the Donaldson-Thomas invariant of $\mathfrak{X} $. \end{prop} \begin{proof} Recall that the DT invariant is the Euler number of $X$ weighted by the Behrend function $\nu_{\mathfrak{X} }$ on $X$ and that $\nu_{\mathfrak{X} }(x)$ for $x\in X$ is the Euler number of the stalk cohomology of $P^{\bullet} $ at $x$. Therefore the DT invariant of $\mathfrak{X} $ is the Euler number of $\mathbb{H} ^*(X,P^{\bullet} )$. 
Since the semi-normalization $\gamma:\tilde X\to X$ is a homeomorphism, $\gamma_*\tilde P^{\bullet} \cong P^{\bullet} $ and $\mathbb{H} ^*(X,P^{\bullet} )\cong \mathbb{H} ^*(X,\gamma_*\tilde P^{\bullet} )\cong \mathbb{H} ^*(\tilde X, \tilde P^{\bullet} )$ so that $$ DT(\mathfrak{X} )=\sum_k (-1)^k\dim\mathbb{H} ^k(X,P^{\bullet} )=\sum_k (-1)^k\dim \mathbb{H} ^k(\tilde X,\tilde P^{\bullet} ).$$ Since $\tilde P^{\bullet} $ has a filtration $W$ with $\hat{P}^{\bullet} =\mathrm{gr}^W \tilde P^{\bullet} $, we have the equality of alternating sums $$\sum_k (-1)^k\dim \mathbb{H} ^k(\tilde X,\tilde P^{\bullet} )=\sum_{k} (-1)^k\dim \mathbb{H} ^k(\tilde X,\hat{P}^{\bullet} ).$$ Since the Euler number of the torus part $\left((\frac12)_L\oplus 2(0)_L\right)^{\otimes h}$ is zero for $h\ne 0$, $$\sum_{k} (-1)^k\dim \mathbb{H} ^k(\tilde X,\hat{P}^{\bullet} )= Tr_{R_{0}}(-1)^{H_R}=n_0(\beta).$$ This proves the proposition. \end{proof} Furthermore, we propose the following conjecture. \begin{conj}\label{lc5.1} (1) The GV invariants $n_h(\beta)$ are invariant under deformation of the complex structure of $Y$.\\ (2) The GV invariants $n_h(\beta)$ depend only on $\beta$ and are independent of the constant term $\chi(E)$ of the Hilbert polynomial.\\ (3) The GV invariants $n_h(\beta)$ are independent of the choice of a polarization of $Y$.\\ (4) The identity \eqref{GVGW} holds. \end{conj} Note that for $h=0$, (1) follows from Proposition \ref{pS7.1} and \cite{Tho}. Also by \cite{JoSo} and \cite{Tho}, (3) is known for $h=0$. Of course, (1)-(3) are consequences of (4). Furthermore, establishing the identity \eqref{GVGW} will equate Definition \ref{12231} with that introduced by Pandharipande-Thomas \cite{PT} for a large class of CY 3-folds (cf. \cite{PP}). \subsection{K3-fibered CY 3-folds} In this last subsection, we show that Conjecture \ref{lc5.1} holds for a primitive fiber class of K3 fibered CY 3-folds. We first consider the local case. 
We let $\Delta\subset \mathbb{C}$ be the unit disk, $t\in\Gamma(\mathcal{O}_\Delta)$ the standard coordinate function, and let $p: Y\to\Delta$ be a smooth family of polarized K3 surfaces. We suppose the central fiber $Y_0$ contains a curve class $\beta_0\in H^{1,1}(Y_0,\mathbb{R})\cap H^2(Y_0,\mathbb{Z})$, not proportional to the polarization, such that $\beta_0$ ceases to be $(1,1)$ in the first order deformation of $Y_0$ in $Y$, which means that if we let $\tilde \beta\in \Gamma(\Delta, R^2p_* \mathbb{Z}_Y)$ be the continuous extension of $\beta_0$ and let $\tilde \omega\in\Gamma(\Delta, p_*\Omega_{Y/\Delta}^2)$ be a nowhere vanishing relative $(2,0)$-form, then $p_*(\tilde \omega\wedge\tilde \beta) \not\in t^2\mathcal{O}_\Delta$. For $c\in\Delta$, we let $\iota_c: Y_c\to Y$ be the closed embedding. We let $\beta\in H^4(Y,\mathbb{Z})$ be such that $\beta_0=\iota_{0}^!\beta$. Since $Y\to\Delta$ is a family of polarized K3 surfaces, the relative ample line bundle of the family is ample on $Y$. We form the moduli ${\cal M}_Y(-\beta,1)$ (resp. ${\cal M}_{Y_0}(\beta_0,1)$) of one dimensional stable sheaves ${\cal E}$ of $\mathcal{O}_Y$-modules (resp. $\mathcal{O}_{Y_0}$-modules) with $c_2({\cal E})=-\beta$ (resp. $c_1({\cal E})=\beta_0$) and $\chi({\cal E})=1$. Since $\beta$ is a fiber class, the moduli ${\cal M}_Y(-\beta,1)$ is well defined and is a complex scheme. Because the polarization of $Y$ restricts to the polarization of $Y_0$, we have a closed embedding \begin{equation}\label{M-inc} {\cal M}_{Y_0}(\beta_0,1)\mapright{\subset} {\cal M}_Y(-\beta,1). \end{equation} \begin{lemm} Suppose $\beta_0$ ceases to be $(1,1)$ in the first order deformation of $Y_0$ in $Y$, and there is no $c\in\Delta$ with $c\ne 0$ such that $\iota_c^{\ast} \tilde \beta\in H^{1,1}(Y_c,\mathbb{R})$. Then the embedding \eqref{M-inc} is an isomorphism of schemes. 
\end{lemm} \begin{proof} We first claim that for any sheaf $[{\cal E}]\in {\cal M}_Y(-\beta,1)$, ${\cal E}=\iota_{0\ast}{\cal E}'$ for a sheaf $[{\cal E}']\in{\cal M}_{Y_0}(\beta_0,1)$. Indeed, let $\text{spt}({\cal E})$ be the scheme-theoretic support of ${\cal E}$. Since ${\cal E}$ is stable, $\text{spt}({\cal E})$ is connected and proper, thus its underlying set is contained in a closed fiber $Y_c\subset Y$ for some closed $c\in\Delta$. Denoting by the same $t\in \Gamma(\mathcal{O}_Y)$ the pullback of $t\in \mathcal{O}_\Delta$, since ${\cal E}$ is coherent, there is a positive integer $k$ so that $\text{spt}({\cal E})\subset ((t-c)^k=0)$. In particular, ${\cal E}$ is annihilated by $(t-c)^k$. Since $t-c\in\Gamma(\mathcal{O}_Y)$, multiplying by $t-c$ defines a sheaf homomorphism $\cdot(t-c): {\cal E}\to{\cal E}$, which has non-trivial kernel since $(t-c)^k$ annihilates ${\cal E}$. Since ${\cal E}$ is stable, this is possible only if ${\cal E}$ is annihilated by $t-c$. Therefore, letting ${\cal E}'={\cal E}/(t-c)\cdot {\cal E}$, which is a sheaf of $\mathcal{O}_{Y_c}$-modules, we have ${\cal E}=\iota_{c\ast}{\cal E}'$. It remains to show that $c=0$. If not, then $c_1({\cal E}')=\iota_c^!\beta$ will be in $H^{1,1}(Y_c,\mathbb{R})$, a contradiction. This proves the claim. We now prove that \eqref{M-inc} is an isomorphism. Indeed, by the previous argument, we know that \eqref{M-inc} is a homeomorphism. To prove that it is an isomorphism, we need to show that for any local Artin ring $A$ with residue field $\mathbb{C}$ and morphism $\varphi_A: \spec A\to {\cal M}_Y(-\beta, 1)$, $\varphi_A$ factors through ${\cal M}_{Y_0}(\beta_0,1)$. By induction on the length of $A$, we only need to consider the case where there is an ideal $I\subset A$ such that $\dim_\mathbb{C} I=1$ and the restriction $\varphi_{A/I}:\spec A/I\to {\cal M}_Y(-\beta, 1)$ already factors through ${\cal M}_{Y_0}(\beta_0, 1)$. 
Let ${\cal E}$ be the sheaf of $A\otimes_{\mathbb{C}}\mathcal{O}_Y$-modules that is the pullback of the universal family of ${\cal M}_Y(-\beta, 1)$ via $\varphi_A$. As $\varphi_{A/I}$ factors through ${\cal M}_{Y_0}(\beta_0, 1)$, $t\cdot {\cal E}\subset I\cdot {\cal E}$. If $t\cdot {\cal E}=0$, then $\varphi_A$ factors, and we are done. Suppose not. Then since ${\cal E}_0={\cal E}\otimes_A\mathbb{C}$ is stable, there is a $c\in I$ so that $t\cdot {\cal E}=c\cdot {\cal E}\subset {\cal E}$. Thus $(t-c)\cdot {\cal E}=0$. We now let $\psi: \spec A\to \Delta$ be the morphism defined by $\psi^{\ast}(t)=c$, and let $Y_{A}=Y\times_{\Delta, \psi} \spec A$. Then $Y_A\to \spec A$ is a family of K3 surfaces with a tautological embedding $\iota_A: Y_A\to Y\times \spec A$. Then $(t-c)\cdot {\cal E}=0$ means that there is an $A$-flat family of sheaves ${\cal E}'$ of $\mathcal{O}_{Y_A}$-modules so that $\iota_{A\ast}{\cal E}'={\cal E}$. Let $q$ be the projection and $\iota$ be the tautological morphism fitting into the Cartesian square $$\begin{CD} Y_A @>{\iota}>> Y\\ @VV{q}V @VV{p}V\\ \spec A @>{\psi}>> \Delta \end{CD} $$ Then $c_1({\cal E}')=\iota^{\ast}\tilde \beta$. Thus for the relative $(2,0)$-form $\tilde \omega$, we have $$0=q_\ast\bigl( c_1({\cal E}')\wedge \iota^{\ast}\tilde \omega\bigr)=q_\ast\iota^{\ast}(\tilde \beta\wedge\tilde \omega) =\psi^{\ast} p_\ast(\tilde \beta\wedge\tilde \omega). $$ By the assumption that $p_\ast(\tilde \beta\wedge\tilde \omega)$ is not divisible by $t^2$, the above vanishing implies that $\psi$ factors through $0\in\Delta$. This proves that $c=0$ and $\varphi_A$ factors through ${\cal M}_{Y_0}(\beta_0,1)$. This proves the lemma. \end{proof} By the above lemma, we find that the moduli scheme $\mathfrak{X} =X={\cal M}_Y(-\beta,1)={\cal M}_{Y_0}(\beta_0,1)$ is a smooth projective variety of dimension $$\dim X=\beta_0^2+2=2k,$$ because the obstruction space $Ext^2(E,E)_0$ is trivial for any stable sheaf $E$ on a K3 surface $Y_0$. 
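The dimension can also be read off from the standard Mukai-vector formalism on a K3 surface, recalled here as a consistency check: a stable sheaf $E$ as above has Mukai vector $v(E)=(\mathrm{rk}, c_1, \chi-\mathrm{rk})=(0,\beta_0,1)$, and the moduli space is smooth of dimension \[ \dim \;=\; v^2+2 \;=\; \left(\beta_0^2 - 2\cdot 0\cdot 1\right)+2 \;=\; \beta_0^2+2, \] in agreement with the formula $\dim X=\beta_0^2+2=2k$ above. 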
Hence we can set $P^{\bullet} =\mathbb{Q}_X$ and thus $\mathbb{H} ^*(X,P^{\bullet} )=H^*(X,\mathbb{Q})$. Let $$Y_0=S\mapright{\pi} \mathbb{P}^1$$ be an elliptic K3 surface and $\beta_0=C+kF$ where $C$ is a section and $F$ is a fiber. We can calculate the GV invariants in this case by the same calculation as in \cite[Theorem 4.7]{HST}. Since the details are obvious modifications of those in \cite[\S4]{HST}, we briefly outline the calculation. Indeed by Fourier-Mukai transform, $X$ is isomorphic to the Hilbert scheme $S^{[k]}$ of points on $S$ and the Chow scheme in this case is a complete linear system $\mathbb{P}^k$. The Hilbert-Chow morphism is \[ S^{[k]}\mapright{\mathfrak{hc}} S^{(k)} \mapright{\pi} (\mathbb{P}^1)^{(k)}\cong \mathbb{P}^k.\] It is easy to see that the cohomology $H^*(S,\mathbb{Q})$ of $S$ as an $sl_2\times sl_2$ representation space by (relative) hard Lefschetz applied to $S\to \mathbb{P}^1$ is \[ (\frac12)_L\otimes (\frac12)_R + 20\cdot (0)_L\otimes (0)_R.\] If we denote by $t_L$ (resp. $t_R$) the weight of the action of the maximal torus for the left (resp. right) $sl_2$ action, we can write $H^*(S,\mathbb{Q})$ as $(t_L+t_L^{-1})(t_R+t_R^{-1})+20$. Hence $H^*(S^{(k)},\mathbb{Q})$ is the invariant part of \[ \left( (\frac12)_L\otimes (\frac12)_R + 20\cdot (0)_L\otimes (0)_R\right)^k\] by the symmetric group action. In terms of Poincar\'e series, we can write \[ \sum_k P_{t_L,t_R}(S^{(k)})q^k=\frac{1}{(1-t_Lt_Rq)(1-t_L^{-1}t_Rq)(1-t_Lt_R^{-1}q)(1-t_L^{-1}t_R^{-1}q)(1-q)^{20}}. \] Applying the decomposition theorem (\cite{BBD}) for the semismall map $S^{[k]}\to S^{(k)}$, we find that $\sum_k P_{t_L,t_R}(S^{[k]})q^k$ is $$\prod_{m\ge 1}\frac{1}{(1-t_Lt_Rq^m)(1-t_L^{-1}t_Rq^m)(1-t_Lt_R^{-1}q^m)(1-t_L^{-1}t_R^{-1}q^m)(1-q^m)^{20}}$$ which gives \begin{equation}\label{12171}\sum_k P_{t_L,t_R}(S^{[k]})|_{t_R=-1}q^k=\prod_{m\ge 1}\frac{1}{(1+t_Lq^m)^2(1+t_L^{-1}q^m)^2(1-q^m)^{20}}. 
\end{equation} By definition, the GV invariants are defined by writing \eqref{12171} as \begin{equation}\label{12172} \sum_{h,k} q^k(t_L+t_L^{-1}+2)^h\otimes R_h(S^{[k]})|_{t_R=-1} =\sum_{h,k}q^kn_h(k)(t_L+t_L^{-1}+2)^h. \end{equation} By equating \eqref{12171} and \eqref{12172} with $t_L=-y$, we obtain $$ \sum_{h,k} (-1)^hn_h(k)(y^{\frac12}-y^{-\frac12})^{2h}q^{k-1}=\frac{1}{q\prod_{m\ge 1}(1-yq^m)^2(1-y^{-1}q^m)^2(1-q^m)^{20}} $$ with $\beta_0^2=2k-2$. On the other hand, by \cite[Theorem 1]{MP}, we have $$ \sum_{h,k} (-1)^hr_h(k)(y^{\frac12}-y^{-\frac12})^{2h}q^{k-1}=\frac{1}{q\prod_{m\ge 1}(1-yq^m)^2(1-y^{-1}q^m)^2(1-q^m)^{20}} $$ where $r_h(k)$ are the BPS invariants from the Gromov-Witten theory for $Y\to \Delta$. Combining these two identities, we find that $$n_h(k)=r_h(k), $$ which verifies Conjecture \ref{lc5.1} for the local Calabi-Yau 3-fold $Y\to \Delta$. We thus obtain \begin{prop} Let $Y\to\mathbb{P}^1$ be a K3 fibered projective CY threefold and let $\iota_0: Y_0\subset Y$ be a smooth fiber. Let $\beta_0\in H_2(Y_0,\mathbb{Z})$ be a curve class so that its Poincar\'e dual $\beta_0^\vee\in H^2(Y_0,\mathbb{Z})$ ceases to be $(1,1)$ type in the first order deformation of $Y_0$ in the family $Y_c$, $c\in \mathbb{P}^1$. Then $X_0=\mathfrak{X} _0\!:= {\cal M}_{Y_0}(\beta_0^\vee,1)\subset \mathfrak{X} \!:= {\cal M}_Y(-(\iota_{0\ast}\beta_0)^\vee, 1)$ is a (smooth) open and closed complex analytic subspace, and \eqref{GVGW} holds for the GV invariants of the perverse sheaf $P^\bullet$ (of $\mathfrak{X} $) restricted to $X_0$ where $N_g(\beta_0)$ in \eqref{GVGW} are the GW invariants contributed from the connected components of stable maps to $Y$ that lie in $Y_0$. \end{prop} It will be interesting to extend the constructions in this paper to the setting of stable pairs. Then it may be possible to extend the theory of Gopakumar-Vafa invariants to the moduli scheme of stable pairs. 
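Returning to the elliptic-K3 identity $n_h(k)=r_h(k)$: the invariants for small $k$ can be extracted numerically by expanding the common right-hand side and peeling off the coefficients of $(y^{1/2}-y^{-1/2})^{2h}$; for instance one recovers $n_0(1)=24$, the Yau-Zaslow count for $\beta_0^2=0$. A short illustrative sketch in Python/sympy (not part of the paper):

```python
# Expand 1/(q * prod_{m>=1} (1-y q^m)^2 (1-q^m/y)^2 (1-q^m)^20) in q and
# read off n_h(k) from the basis u^h, u = (y^(1/2)-y^(-1/2))^2 = y - 2 + 1/y.
import sympy as sp

q, y = sp.symbols('q y')

def gv(k):
    """Return {h: n_h(k)} read off the q^(k-1) coefficient of the product."""
    # factors with m > k cannot contribute below order q^(k+1)
    prod = sp.prod([(1 - y*q**m)**2 * (1 - q**m/y)**2 * (1 - q**m)**20
                    for m in range(1, k + 1)])
    inv = sp.series(1/prod, q, 0, k + 1).removeO()
    # q^(k-1) coefficient of 1/(q*prod) = q^k coefficient of 1/prod
    c = sp.expand(inv.coeff(q, k))
    ns = {}
    u = y - 2 + 1/y
    while c != 0:
        p = sp.Poly(sp.expand(c * y**k), y)   # clear negative powers of y
        h = p.degree() - k                    # largest h still present
        a = p.coeff_monomial(y**(h + k))      # equals (-1)^h n_h(k)
        ns[h] = int((-1)**h * a)
        c = sp.expand(c - a * u**h)           # strip this representation
    return ns
```

Here `gv(1)` returns `{1: -2, 0: 24}` and `gv(2)` returns `{2: 3, 1: -54, 0: 324}`, matching the well-known K3 BPS numbers $324$ rational curves etc. in the class with $\beta_0^2=2$.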
Let $M$ be the moduli scheme of stable pairs $(F,s)$ with fixed topological type, where $F$ is a pure sheaf with one-dimensional support and $s\in H^0(F)$, such that the pair is $\alpha$-stable for some $\alpha>0$. We consider the morphism $M\to S$ to the Chow scheme possibly after taking the semi-normalization, sending $(F,s)$ to the support of $F$. The general fiber over a curve $C$ of genus $g$ is expected to be the symmetric product $C^{(d)}$ for a suitable $d$ defined by the topological type. Let $J_g=[\{(\frac12)_L+2g\,(0)_L\}^d]^{S_d}$ be the cohomology of $C^{(d)}$ as an $sl_2$ representation space. We can write the perverse hypercohomology of $M$ in the form $\bigoplus J_g\otimes R_g$ and define the GV invariant as the Euler number of $R_g$. \def\mathrm{Ext}_\pi^\bullet {\mathrm{Ext}_\pi^\bullet } \def\mathrm{Ext}_\pi^\bullet(\mathcal{E},\mathcal{E}) {\mathrm{Ext}_\pi^\bullet(\mathcal{E},\mathcal{E}) } \section{Appendix: Square root of determinant line bundle}\label{sec8} The purpose of this appendix is to give a direct proof of the following theorem of Hua \cite{Hua}, where it is stated only for sheaves. The argument presented below is a simplification of the proof in \cite{Hua}. A byproduct of this simplification is that the proof now works for any perfect complexes, not just sheaves as in \cite{Hua}. \begin{theo}\label{1301111} \cite[Theorem 3.1]{Hua} Let ${\cal E}\to X\times Y$ be a perfect complex of vector bundles; let $\pi, \rho$ be the projections from $X\times Y$ to $X,Y$ respectively; let $\mathrm{Ext}_\pi^\bullet ({\cal E},{\cal E})=R\pi_*R{\cal H} om({\cal E},{\cal E})$. Then the torsion-free part of $c_1(\mathrm{Ext}_\pi^\bullet(\mathcal{E},\mathcal{E}) )\in H^2(X,\mathbb{Z})$ is divisible by $2$. \end{theo} For a proof, we need two lemmas. \begin{lemm}\label{1301112} Theorem \ref{1301111} holds when ${\cal E}$ is a line bundle ${\cal L}$. 
\end{lemm} \begin{proof} Since the Chern character of $R{\cal H} om({\cal L},{\cal L})\cong \mathcal{O}$ is $1$, by Grothendieck-Riemann-Roch together with the Todd class $$Td_Y=1+\frac{c_2(Y)}{12},$$ we find that $c_1(\mathrm{Ext}_\pi^\bullet ({\cal L},{\cal L}))=0$. \end{proof} \begin{lemm}\label{1301113} We may assume $c_1({\cal E})=0$. \end{lemm} \begin{proof} Let ${\cal L}=(\det\,{\cal E})^{-1}$ and ${\cal F}={\cal E}\oplus {\cal L}$. Then $$c_1({\cal F})=c_1({\cal E})+c_1({\cal L})=c_1({\cal E})-c_1({\cal E})=0.$$ Moreover we have $$c_1(\mathrm{Ext}_\pi^\bullet ({\cal F},{\cal F}))=c_1(\mathrm{Ext}_\pi^\bullet(\mathcal{E},\mathcal{E}) )+c_1(\mathrm{Ext}_\pi^\bullet ({\cal L},{\cal L}))+c_1(\mathrm{Ext}_\pi^\bullet ({\cal E},{\cal L}))+c_1(\mathrm{Ext}_\pi^\bullet ({\cal L},{\cal E})). $$ The last two terms cancel by Serre duality and the second term is zero by Lemma \ref{1301112}. Hence $c_1(\mathrm{Ext}_\pi^\bullet ({\cal F},{\cal F}))$ is divisible by $2$ if and only if $c_1(\mathrm{Ext}_\pi^\bullet(\mathcal{E},\mathcal{E}) )$ is divisible by $2$. \end{proof} \begin{proof}[Proof of Theorem \ref{1301111}] Let $\alpha_i=c_i({\cal E})$. By Lemma \ref{1301113}, we may assume $\alpha_1=0$. Then we have $$ch({\cal E})=r-\alpha_2+\frac{\alpha_3}{2}+\frac{\alpha_2^2-2\alpha_4}{12}$$ where $r$ is the rank of ${\cal E}$. 
Since $R{\cal H} om({\cal E},{\cal E})={\cal E}^\vee\otimes {\cal E}$, we have $$ch(R{\cal H} om({\cal E},{\cal E}))=r^2-2r\alpha_2+\alpha_2^2+\frac{r}{6}(\alpha_2^2-2\alpha_4).$$ By the Grothendieck-Riemann-Roch formulas $ch(\pi_!{\cal E})=\int_Y ch({\cal E})\cdot Td_Y$ and $$ch(\mathrm{Ext}_\pi^\bullet(\mathcal{E},\mathcal{E}) )=ch(R\pi_*R{\cal H} om({\cal E},{\cal E}))=\int_Y ch(R{\cal H} om({\cal E},{\cal E}))\cdot Td_Y,$$ we have $$c_1(\pi_!{\cal E})=\int_Y\left(\frac{\alpha_2^2-2\alpha_4}{12} -\alpha_2\cdot \frac{c_2(Y)}{12} \right) \quad\text{ and}$$ $$c_1(\mathrm{Ext}_\pi^\bullet(\mathcal{E},\mathcal{E}) )=\int_Y\left( \alpha_2^2+\frac{r}{6}(\alpha_2^2-2\alpha_4)-2r\alpha_2\cdot \frac{c_2(Y)}{12}\right)=\int_Y\alpha_2^2+2r\,c_1(\pi_!{\cal E}).$$ So it suffices to show that $\int_Y\alpha_2^2$ is divisible by $2$. By the K\"unneth formula, we can write $$\alpha_2=\pi^*A_2+\pi^*A_1\cdot \rho^*B_1+\rho^*B_2$$ modulo torsion, where $A_i\in H^{2i}(Y,\mathbb{Z})$ and $B_i\in H^{2i}(X,\mathbb{Z})$. Then we have $$\int_Y\alpha_2^2=2 (\int_Y A_2A_1)\cdot B_1$$ which is obviously divisible by $2$. \end{proof} \bibliographystyle{amsplain}
Jessica Lydie Ngoua Nseme from Littoral was crowned Miss Cameroon 2015. Not a press favorite to win the title, Jessica beat the odds and emerged as her country's representative in Miss World 2015 on December 19 in Sanya, China. Joelle Thonet placed 2nd and Diane Biata placed 3rd.
What is the general solution of the differential equation $(y^2+1)\,dy/dx+2xy^2=2x$?

Sep 19, 2017

$\ln\left|\frac{y+1}{y-1}\right| - y = x^2 + C$

Explanation:

We have:

$(y^2+1)\frac{dy}{dx} + 2xy^2 = 2x$ ..... [A]

We can rearrange this non-linear first order differential equation [A] as follows:

$(y^2+1)\frac{dy}{dx} = 2x - 2xy^2$
$\therefore (y^2+1)\frac{dy}{dx} = -2x(y^2-1)$
$\therefore \frac{y^2+1}{y^2-1}\frac{dy}{dx} = -2x$

This is now separable, so we can "separate the variables" to get:

$\int \frac{y^2+1}{y^2-1}\,dy = \int -2x\,dx$ ..... [B]

The RHS integral is standard, and the LHS will require a little manipulation, as follows:

$\int \frac{t^2+1}{t^2-1}\,dt = \int \frac{t^2-1+2}{t^2-1}\,dt = \int 1 + \frac{2}{t^2-1}\,dt = \int 1 + \frac{2}{(t+1)(t-1)}\,dt$

We can now decompose the fractional part of the integrand into partial fractions, as follows:

$\frac{2}{(t+1)(t-1)} \equiv \frac{A}{t+1} + \frac{B}{t-1} = \frac{A(t-1)+B(t+1)}{(t+1)(t-1)}$

$2 \equiv A(t-1)+B(t+1)$

where $A, B$ are constants to be determined. We can find them by substitution (in practice we do this via the "cover-up" method):

Put $t = -1 \implies 2 = -2A \implies A = -1$
Put $t = +1 \implies 2 = +2B \implies B = +1$

So using partial fraction decomposition we have:

$\int \frac{t^2+1}{t^2-1}\,dt = \int 1 - \frac{1}{t+1} + \frac{1}{t-1}\,dt$

Using this result we can now integrate [B] as follows:

$\int \frac{y^2+1}{y^2-1}\,dy = \int -2x\,dx$

$\therefore \int -1 + \frac{1}{y+1} - \frac{1}{y-1}\,dy = \int 2x\,dx$

$\therefore -y + \ln|y+1| - \ln|y-1| = x^2 + C$

$\therefore \ln\left|\frac{y+1}{y-1}\right| - y = x^2 + C$

which is the general solution.

We are unable to find a particular solution, as requested, since no initial conditions have been provided to allow the constant $C$ to be evaluated.
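A quick symbolic check (assuming the sympy library is available) that the implicit general solution really does satisfy the original ODE:

```python
# Verify that ln((y+1)/(y-1)) - y = x^2 + C solves (y^2+1) y' + 2x y^2 = 2x.
# We differentiate the implicit relation, solve for y', and substitute.
import sympy as sp

x, C = sp.symbols('x C')
y = sp.Function('y')(x)

# Implicit solution F(x, y) = 0 (absolute values dropped for the check)
F = sp.log((y + 1)/(y - 1)) - y - x**2 - C

# Differentiate along solutions and solve the (linear) equation for y'
yp = sp.solve(sp.Eq(sp.diff(F, x), 0), sp.Derivative(y, x))[0]

# Residual of the ODE should vanish identically
residual = sp.simplify((y**2 + 1)*yp + 2*x*y**2 - 2*x)
print(residual)  # 0
```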
John Falconer (fl. 1547) was an English merchant and botanist. Biography Falconer appears to have been the first Englishman who possessed a series of dried plants, a method of study first practised by Luca Ghini of Bologna, the originator of botanical gardens. Falconer travelled, and from 1540 or 1541 lived at Ferrara, which he left in 1547. He was a fellow-pupil of William Turner, the father of English botany, at Bologna, and is mentioned in Turner's Herbal several times. "Maister Falkonner's Boke" is an early mention of a herbarium.
# A tale of two papers on gravitational waves

In this talk, a tale of two separate studies on gravitational waves (GWs) will be briefly presented.

The first story is about GWs produced by the chiral magnetic effect (CME). When a system has chiral asymmetry, the CME can produce a maximally helical magnetic field at the expense of the amount of chiral asymmetry initially available in the system. We treated the CME as a generic sourcing mechanism for primordial GWs, and identified two regimes of interest distinguished by the relative magnitude of two velocities characterising the depletion of chiral asymmetry, $v_\lambda$, and the generation of magnetic field, $v_\mu$. We performed a series of numerical simulations, and found that the overall CME-sourced GW energy scales as $\Omega_{\rm GW}^{\rm sat}\propto v_\lambda^5 v_\mu$.

The second story is about using GWs as a constraint on the graviton mass. We took a generic massive gravity theory and numerically studied its resulting spectra of GWs sourced by turbulence. Due to the nonlinear dispersion relation caused by the massive graviton, the equivalence between the spatial and temporal spectra that holds for linear dispersion is no longer valid. We find that, in the spatial domain, the modification of the GW spectra appears to be independent of the turbulent eddy size; and, in the temporal domain, there is a characteristic cutoff at the graviton mass scale that can potentially be constrained by pulsar timing arrays.

## Dates:

Tuesday, June 1, 2021 - 14:00 to 15:00

APC

## Room:

Contact roperpol@apc.in2p3.fr for Zoom meeting details

• Seminar

Yutong He

Nordita

• Theory

Sweden
require 'typhoeus'
require 'json'

module Rackspace
  module Scaling
    # Thin wrapper around the Rackspace Cloud Load Balancers API for one region.
    class LoadBalancerOperation
      def initialize(auth, region = 'DFW')
        @auth = auth
        @endpoint = @auth.endpoints['cloudLoadBalancers']['endpoints'][region]['publicURL'] + '/loadbalancers'
      end

      # List all load balancers in the region (memoized per instance).
      def list
        @list ||= begin
          resp = Typhoeus::Request.get(@endpoint,
                                       :headers => { 'X-Auth-Token' => @auth.token,
                                                     'Accept' => 'application/json' })
          JSON.parse(resp.body)['loadBalancers']
        end
      end

      # List the nodes attached to the given load balancer.
      # (Not memoized: the result depends on load_balancer_id.)
      def nodes(load_balancer_id)
        path = "#{@endpoint}/#{load_balancer_id}/nodes"
        resp = Typhoeus::Request.get(path,
                                     :headers => { 'X-Auth-Token' => @auth.token,
                                                   'Accept' => 'application/json' })
        JSON.parse(resp.body)['nodes']
      end

      # Add a node (by IP) to a load balancer; returns the created nodes.
      def add_node(options = {})
        load_balancer_id = options[:load_balancer_id]
        body = {
          :nodes => [
            {
              :address   => options[:node_ip],
              :port      => (options[:port] || 80),
              :condition => (options[:condition] || 'ENABLED'),
              :type      => (options[:type] || 'PRIMARY')
            }
          ]
        }
        path = "#{@endpoint}/#{load_balancer_id}/nodes"
        resp = Typhoeus::Request.post(path,
                                      :headers => { 'X-Auth-Token' => @auth.token,
                                                    'Accept' => 'application/json',
                                                    'Content-Type' => 'application/json' },
                                      :body => body.to_json)
        JSON.parse(resp.body)['nodes']
      end

      # Remove a node from a load balancer; returns true on success.
      def remove_node(options = {})
        load_balancer_id = options[:load_balancer_id]
        node_id = options[:node_id]
        path = "#{@endpoint}/#{load_balancer_id}/nodes/#{node_id}"
        resp = Typhoeus::Request.delete(path,
                                        :headers => { 'X-Auth-Token' => @auth.token,
                                                      'Accept' => 'application/json' })
        resp.success?
      end
    end # /LoadBalancerOperation
  end
end
\section{Introduction} \section{Dimensional Reduction} Dimensional Regularization ({\abbrev DREG}{})~\cite{'tHooft:fi} has proven extremely successful for the evaluation of higher order corrections in quantum field theory, mostly because it preserves gauge invariance and thus does not interfere with the renormalizability of the Standard Model or {\abbrev QCD}{}. Many techniques for evaluating Feynman diagrams have been developed within the framework of {\abbrev DREG}{}, so perturbation theory heavily relies upon the validity of this regularization method. Applied to {\abbrev SUSY}{} theories, however, one faces the problem of explicit {\abbrev SUSY}{} breaking by the need to assign different numbers of degrees of freedom to spin-1 and spin-1/2 fields. A manifestation of this {\abbrev SUSY}{} breaking is that {\abbrev SUSY}{} relations of couplings no longer hold at higher orders. For example, while {\abbrev SUSY}{} requires equality for the quark-quark-gluon and the squark-quark-gluino couplings $g$ and $\hat g$ at all energy scales, one finds that their renormalization constants differ. In fact, it is $Z_g = (1 + \delta_{\hat g})\, Z_{\hat g}$, and thus \begin{equation} \begin{split} \hat g = (1+\delta_{\hat g}) g\,, \end{split} \end{equation} where $\delta_{\hat g} = \alpha_s/(3\pi)$~\cite{Martin:1993yx}. In effect, the number of renormalization constants in {\abbrev SUSY}{} becomes rather large when calculations are done in {\abbrev DREG}{}. As a way out, it was suggested to use Dimensional Reduction ({\abbrev DRED}{}) as a regularization procedure for {\abbrev SUSY}{} theories~\cite{Siegel:1979wq}. Formally, this means that space-time is compactified to $D=4-2\epsilon$ dimensions ($\epsilon>0$), while the vector fields are kept four-dimensional. 
As an example, consider the electron-photon vertex, which in {\abbrev DRED}{} becomes \begin{equation} \begin{split} \bar\psi \gamma_\mu\psi A^\mu = \bar\psi \gamma_\mu\psi \hat A^\mu + \bar\psi \gamma_\mu\psi \tilde A^\mu = \bar\psi \hat\gamma_\mu\psi \hat A^\mu + \bar\psi \tilde\gamma_\mu\psi \tilde A^\mu\,, \label{eq::psipsiA} \end{split} \end{equation} where $\hat A^\mu$ and $\tilde A^\mu$ denote the $D$- and the $2\epsilon$-dimensional components of the vector field $A^\mu$. $\tilde A^\mu$ is also called the $\epsilon$-scalar{}. Traces over the $D$- and $2\epsilon$-dimensional $\gamma$-matrices can be evaluated using \begin{equation} \begin{split} \{\gamma^\mu,\gamma^\nu\} = 2g_{\mu\nu}\qquad\mbox{and}\qquad \{\hat\gamma^\mu,\tilde\gamma^\nu\} = 0\,. \end{split} \end{equation} Thus, perturbative calculations in {\abbrev DRED}{} require the introduction of additional fields ($\epsilon$-scalar{}s) and an extra set of $\gamma$-matrices. Once the algebraic part of the evaluation of a Feynman amplitude is done, the tools developed for {\abbrev DREG}{} can be applied without further modification.
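For orientation, the contractions used in such trace computations follow from the projector properties of the two metrics; the identities below are the standard {\abbrev DRED}{} conventions, assumed here for illustration rather than quoted from the text: \begin{equation} \begin{split} g_{\mu\nu} = \hat g_{\mu\nu} + \tilde g_{\mu\nu}\,,\qquad \hat g^{\mu}{}_{\mu} = D\,,\qquad \tilde g^{\mu}{}_{\mu} = 2\epsilon\,,\qquad \hat g_{\mu\rho}\,\tilde g^{\rho}{}_{\nu} = 0\,, \end{split} \end{equation} so that, e.g., $\hat\gamma^\mu\hat\gamma_\mu = D$ while $\tilde\gamma^\mu\tilde\gamma_\mu = 2\epsilon$.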
Thus, when applying {\abbrev DRED}{} to {\abbrev QCD}{}, we have to introduce two different couplings for the quark-gluon vertex, for example: \begin{equation} \begin{split} g_s A_\mu\bar\psi\gamma^\mu\psi\to \hat g_s \hat A_\mu\bar\psi\hat \gamma^\mu\psi+ \tilde g_s \tilde A_\mu\bar\psi\tilde\gamma^\mu\psi\,. \end{split} \end{equation} In order to be consistent with our journal papers~\cite{Harlander:2006rj,Harlander:2006xq,Harlander:phen}\footnote{% There is a misprint in Eq.\,(18) of Ref.\,\cite{Harlander:2006xq}: the term $-25\,n_f^2/72$ should read $-25\zeta_3/72$.} let us define \begin{equation} \begin{split} \alpha_s{} &= \frac{\hat g_s^2}{4\pi}\,,\qquad \alpha_e = \frac{\tilde g_s^2}{4\pi}\,, \end{split} \end{equation} where $\alpha_e$ is called ``evanescent coupling''. Only at tree-level can one require that $\alpha_s{}=\alpha_e$. Higher orders lead to an energy dependence of the (minimally subtracted) couplings, governed by the {\small RGE}s\footnote{In fact, there are several evanescent couplings in {\scriptsize QCD}; however, for the sake of the argument, it is sufficient to consider only $\alpha_e$ here.} \begin{equation} \begin{split} \mu^2\frac{{\rm d}}{{\rm d}\mu^2}\alpha_s{} = \beta_s^{\mbox{$\overline{\scriptstyle\mathrm{ DR}}$}}(\alpha_s{},\alpha_e)\,,\qquad\quad \mu^2\frac{{\rm d}}{{\rm d}\mu^2}\alpha_e = \beta_e(\alpha_s{},\alpha_e)\,. \label{eq::rges} \end{split} \end{equation} The $\beta$-functions have been evaluated in Ref.\,\cite{Harlander:2006rj} through three loops, and $\beta_s^{\mbox{$\overline{\scriptstyle\mathrm{ DR}}$}}$ is even known to four-loop order~\cite{Harlander:2006xq}. Indeed it turns out that $\beta_s^{\mbox{$\overline{\scriptstyle\mathrm{ DR}}$}}\neq \beta_e$ in standard {\abbrev QCD}{} already at one-loop level. The condition $\alpha_s{}=\alpha_e$ can therefore be implemented only at one particular value of $\mu^2$.
If $\overline{{\mbox{\abbrev DR}}}$ (i.e., {\abbrev DRED}{} with minimal subtraction) is to be a viable renormalization scheme, then one should be able to transform physical results from one scheme into the other by finite shifts of the renormalized parameters. This property has been confirmed several times~\cite{Jack:1994bn,Harlander:2006rj,Harlander:2006xq}. The proper conversion relation for the strong coupling between the $\overline{{\mbox{\abbrev MS}}}$ and the $\overline{{\mbox{\abbrev DR}}}$ scheme in $n_f$-flavor standard {\abbrev QCD}{} is given at two-loop level by~\cite{Harlander:2006rj} \begin{equation} \begin{split} \bar\alpha_s{} &= \alpha_s{}\left[1-\frac{\alpha_s{}}{4\pi} - \frac{5}{4}\left(\frac{\alpha_s{}}{\pi}\right)^2 + \frac{\alpha_s{}\alpha_e}{12\pi^2}\,n_f+\ldots\right]\,, \label{eq::asMS2DR_2} \end{split} \end{equation} where $\bar\alpha_s$ denotes the strong coupling in the $\overline{{\mbox{\abbrev MS}}}{}$ scheme. Three-loop corrections to this relation are known as well~\cite{Harlander:2006xq}. When evaluating physical observables in {\abbrev DREG}{}, the result depends only on $\bar\alpha_s$, while it depends on both $\alpha_s$ and $\alpha_e$ in {\abbrev DRED}{}. This ambiguity should be viewed as a freedom of the renormalization scheme: any choice of $\alpha_e$ determines the value of $\alpha_s{}$ by comparison to the experimental value of the physical observable at one particular scale $\mu_0$. At any other scale $\mu$, $\alpha_s$ and $\alpha_e$ are determined by the {\small RGE}s \eqn{eq::rges}. Also, there is a unique relation between the perturbative coefficients of the $\overline{{\mbox{\abbrev DR}}}{}$ and the $\overline{{\mbox{\abbrev MS}}}{}$ expression of the physical observable, to be called $R$ and $\bar R$ in what follows. 
For example, assume that \begin{equation} \begin{split} R(\alpha_s,\alpha_e) &= \sum_{i,j \geq 0} \left(\frac{\alpha_s}{\pi}\right)^i \left(\frac{\alpha_e}{\pi}\right)^j\,r_{ij}\,,\qquad\qquad \bar R(\bar\alpha_s) = \sum_{i\geq 0} \left(\frac{\bar\alpha_s}{\pi}\right)^i\, \bar r_i\,. \label{eq::RMS} \end{split} \end{equation} Then, inserting Eq.\,(\ref{eq::asMS2DR_2}) into Eq.\,(\ref{eq::RMS}) and requiring equality, one derives the relations \begin{equation} \begin{split} r_{00} = \bar r_0\,,\qquad r_{10} = \bar r_1\,,\qquad r_{01} = 0\,,\qquad r_{20} = \bar r_2 - \frac{\bar r_1}{4}\,,\qquad r_{02} = 0\,,\qquad r_{11} = 0\,,\\ r_{30} = \bar r_3 -\frac{\bar r_2}{2} - \frac{5}{4}\,\bar r_1\,,\qquad r_{21} = \frac{n_f}{12}\,\bar r_1\,,\qquad r_{12} = 0\,,\qquad r_{03} = 0\,,\qquad \mbox{etc.} \end{split} \end{equation} \section{Relation of \boldmath{$\alpha_s$} and \boldmath{$\alpha_e$} by Supersymmetry} Supersymmetry is a concept that provides solutions to some of the most pressing questions left open by the Standard Model. As already mentioned above, in a {\abbrev SUSY}{} theory it is required that $\alpha_s{}=\alpha_e$ at all energy scales, and thus $\beta_s=\beta_e$. We can use the {\abbrev QCD}{} results of Ref.\,\cite{Harlander:2006rj} to test the consistency of {\abbrev DRED}{} and {\abbrev SUSY}{} for a {\abbrev SUSY}{} Yang Mills theory at three-loop level, simply by choosing the color factors appropriately. Indeed, we find that $\beta_s=\beta_e$ through three loops in a {\abbrev SUSY}{} Yang Mills theory. For a check of this relation within {\abbrev SUSY}{}-{\abbrev QCD}{}, one needs to include chiral fields in the fundamental representation of the gauge group, or in other words, quarks and squarks. This is work in progress. 
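As a worked check of how these relations arise (an illustration, not taken from the references): with $a_s \equiv \alpha_s/\pi$, Eq.\,(\ref{eq::asMS2DR_2}) gives \begin{equation} \begin{split} \frac{\bar\alpha_s}{\pi} = a_s - \frac{1}{4}\,a_s^2 + \mathcal{O}(a_s^3)\,,\qquad \left(\frac{\bar\alpha_s}{\pi}\right)^2 = a_s^2 + \mathcal{O}(a_s^3)\,, \end{split} \end{equation} so that \begin{equation} \begin{split} \bar R = \bar r_0 + \bar r_1\,a_s + \left(\bar r_2 - \frac{\bar r_1}{4}\right) a_s^2 + \mathcal{O}(a_s^3)\,, \end{split} \end{equation} and comparing coefficients with $R(\alpha_s,\alpha_e)$ at zeroth order in $\alpha_e$ reproduces $r_{10} = \bar r_1$ and $r_{20} = \bar r_2 - \bar r_1/4$ from the list above.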
If indeed the {\abbrev QCD}{} that we observe is the low energy limit of a softly broken {\abbrev SUSY}{}-{\abbrev QCD}{} theory, then the freedom of choosing $\alpha_e$ is lost, because within this {\abbrev SUSY}{} theory, we require $\alpha_e^{\rm (full)}=\alpha_s^{\rm (full)}$ at all scales. The couplings in {\abbrev QCD}{} are related to those in {\abbrev SUSY}{}-{\abbrev QCD}{} by matching relations: \begin{equation} \begin{split} \alpha_s(\mu) = \zeta_s \alpha_s^{\rm (full)}(\mu)\,,\qquad \alpha_e(\mu) = \zeta_e \alpha_e^{\rm (full)}(\mu)\,, \label{eq::decoupling} \end{split} \end{equation} where $\zeta_s$ and $\zeta_e$ are functions of $\alpha_s^{\rm (full)}$, the {\abbrev SUSY}{} particle masses, and the ``matching scale'' $\mu$ (if $\alpha_s$ and $\alpha_e$ are the couplings in five-flavor {\abbrev QCD}{}, then $\zeta_s$ and $\zeta_e$ depend also on the top quark mass). Note that since the dependence of $\zeta_{s,e}$ on the matching scale $\mu$ is logarithmic, one should apply \eqn{eq::decoupling} at a scale not too much different from the {\abbrev SUSY}{} particle masses. Also, if these masses are spread over a large range, matching is better done in several steps. $\zeta_s$ and $\zeta_e$ can be evaluated perturbatively. The two-loop expression for $\zeta_s$ has been calculated in Ref.\,\cite{Harlander:2005wm}, while for $\zeta_e$ only the one-loop term is known~\cite{Harlander:phen}. Assume now for the sake of the argument that all {\abbrev SUSY}{}-{\abbrev QCD}{} particle masses are identical, say $m_{\tilde q} = m_{\tilde g} = \tilde M \sim 1$\,TeV.
If $\bar\alpha_s(M_Z)$ in {\abbrev QCD}{} is given by experiment, then the {\abbrev SUSY}{} coupling, for example at the GUT scale $\mu_{\rm GUT}$, can be determined by the following scheme: \begin{equation} \begin{split} \bar\alpha_s(M_Z) \stackrel{(i)}{\rightarrow} \bar\alpha_s(\mu_{\rm dec}) \stackrel{(iii)}{\leftarrow} \left\{ \begin{array}{c} \alpha_s(\mu_{\rm dec}) \\ \alpha_e(\mu_{\rm dec}) \end{array} \right\} \stackrel{(ii)}{\leftarrow} \alpha_s^{\rm (full)}(\mu_{\rm dec}) \stackrel{(iv)}{\rightarrow} \alpha_s^{\rm (full)}(\mu_{\rm GUT})\,. \end{split} \end{equation} If the evolution is to be consistent through $n$-loop order, then steps $(i)$ and $(iv)$ need to be done through $n$ loops, while steps $(ii)$ and $(iii)$ are only required through $(n-1)$ loops. Here, it is understood that one starts with a trial value $\alpha_0$ for $\alpha_s^{\rm (full)}(\mu_{\rm dec})$, evaluates steps $(ii)$ and $(iii)$, and compares the value for $\bar\alpha_s(\mu_{\rm dec})$ obtained in this way with the one obtained from step $(i)$. If it agrees, one performs step $(iv)$ with $\alpha_s^{\rm (full)}(\mu_{\rm dec}) = \alpha_0$, otherwise, one starts again with a different value for $\alpha_0$. \begin{wrapfigure}{r}{0.5\columnwidth} \centerline{\includegraphics[width=0.55\columnwidth]{figs/asgutlog.eps}} \caption{$\alpha_s$ at $\mu_{\rm GUT}\equiv 10^{16}$\,GeV derived from $\alpha_s(M_Z)$ in 1-, 2-, and 3-loop approximation (dotted, dashed, solid) as a function of the decoupling scale $\mu_{\rm dec}$. The dash-dotted curve is what results from the formula given in Ref.\,\cite{Aguilar-Saavedra:2005pw}. See Ref.\,\cite{Harlander:phen} for details.}\label{fig::asgut} \end{wrapfigure} An alternative way to proceed was applied in Ref.\,\cite{Harlander:phen}. 
There, the relation between $\alpha_s(\mu_{\rm dec})$ and $\alpha_e(\mu_{\rm dec})$ was perturbatively expanded such that $\alpha_s(\mu_{\rm GUT})$ could be directly evaluated from $\bar\alpha_s(M_Z)$ without the need for an iterative procedure. The difference between these two approaches is formally of higher orders in $\alpha_s$, but is expected to grow as the decoupling scale moves away from the {\abbrev SUSY}{} masses $\tilde M$. At three-loop level, the two approaches are consistent with each other within each other's uncertainty (derived from the experimental error on $\alpha_s(M_Z)$~\cite{Bethke:2006ac}) over a large range of the decoupling scale. Figure\,\ref{fig::asgut} shows the result~\cite{Harlander:phen}, demonstrating the numerical importance of the three-loop effects, in particular if decoupling is done at other scales than $\tilde M$ (quite often one finds $\mu_{\rm dec} = M_Z$, for example~\cite{Aguilar-Saavedra:2005pw}). \section{Conclusions} {\abbrev DRED}{} is currently considered the appropriate regularization method for supersymmetric theories. Applied to non-{\abbrev SUSY}{} theories, it leads to evanescent couplings, with their own evolution and decoupling relations. If parameters from the non-{\abbrev SUSY}{} theory are to be related to {\abbrev SUSY}{} parameters, the conversion relations will typically involve these evanescent couplings. Here we took these issues into account for the derivation of $\alpha_s(\mu_{\rm GUT})$ from $\alpha_s(M_Z)$ at three-loop level. \begin{footnotesize}
# How to compress game frame-by-frame animation to a minimum on disk

I am developing a game like Age of Empires (buildings on a map), and for every building I have a sprite sheet for its animation. I am using frame-by-frame animation to animate the buildings (I am not aware of any other method, anyway). I am noticing that my resources folder is growing extremely large due to the images I am putting in it.

How do games like CoC and Age of Empires keep their apk size under 50 MB? Am I missing something here when it comes to game development?

Thank you

I have a LOT of animation drawable objects. I chose this approach because it was the quickest and it was yielding the results I want. Basically every building is an AnimationDrawable whose xml file refers to 8 drawables (8 images = 8 files). All of them are PNG due to transparency. Since I have over 200 images, the apk size is 120 MB (and I am not done copying half of the files yet). That's what prompted me to ask: there has to be a way to solve it. Please assist a fellow developer.

• Compressing images or separating game content from the apk might be what you are looking for. – János Turánszki Nov 19 '15 at 16:07
• You mean expansion files? The problem is that expansion files don't have the different density drawable folders. Is this what games do? I am doing frame-by-frame animation (full building image). Or am I totally on the wrong path? – Snake Nov 19 '15 at 16:12
• Just in case, I tried to be of help and put an answer related to saving space for your images.
However, your question is confusing: it's not clear whether you really want to know techniques like "how do games like CoC and Age of Empires keep their apk size under 50 MB?" or if you just wish to check whether there are less disk-space-hungry ways of animating (as your title suggests). Please clarify, so I can delete my answer if that's the case and also so you can get more accurate answers. – MAnd Nov 19 '15 at 17:16
• @MAnd I edited my answer to reflect on the problem further – Snake Nov 19 '15 at 21:20
• For Age of Empires, it's probably a combination of low resolution (IIRC it was designed for 640x480) and indexed images (not full RGB). – user253751 Nov 20 '15 at 0:14

It's not clear from your question whether you really want to know techniques that allow games to save disk space even when they have large amounts of heavy image/resource files (that's what is in the body of the question), or whether you just want to know if there are less disk-space-hungry ways of doing animation for your buildings (that's what is implied by the title of your question).

So, I will put this answer here trying to be of help in relation to saving space when you have a lot of images. If you are only interested in knowing about animation techniques that might allow you to save disk space, please let me know and I might delete the answer.

That said, in a similar situation, two solutions I have seen (and used once) in terms of saving space on disk were:

1) to concatenate individual images within one same image. Sometimes it can be of help. So instead of saving 4 separate files for each of the following colored squares, I save all of them side-by-side within the same image (and will only split them via game code):
Of course, how much it could help depends on a case-by-case basis and you should test to see if that can be of help to you depending on your images.\n\n2) to compress the resources folder. This can often be very helpful to save disk space. Then when the game is running you either decompress or pull the desired image from the compressed file - depending on the compression format you use. See this past answer that became a wiki of this site: https:\/\/gamedev.stackexchange.com\/a\/37649\/72423\n\nBut also keep in mind that each type of compression may be more or less advantageous depending on the image file type you use: Avoid double compression of resources\n\nFor more details on compression: State of the art in image compression?\n\nLastly, you might also be interested in thinking which types of image to use for each situation: Which image format is more memory-efficient: PNG, JPEG, or GIF?\n\n\u2022 Thank you for the information. It is weird you were able to cut the file size into half by putting things in one file. I thought at the end of the day, it is the same pixel count and space used . I edited my question for better answer \u2013\u00a0Snake Nov 19 '15 at 21:22\n\u2022 @Snake So, per the edit you made, I think my answer can indeed be helpful. Concatenating images + compressing may reduce your folder size by a huge amount. Both techniques are regularly used in some types of games. About the concatenation, that's always a surprising trick. Try it yourself: put the images from a sequence of one animation of yours side-by-side and compare the final size with the sum of the individual images. Then you can see if that helps in your case. As for compression, for PNGs it tends to be less effective, but still worth the effort to cut size of your final folder \u2013\u00a0MAnd Nov 19 '15 at 21:32\n\u2022 @Snake one observation, though: concatenating sometimes does not worth or even increases the final size a tiny bit. But oftentimes it can deliver desirable reduction. 
It all depends on your images – MAnd Nov 19 '15 at 21:41
• I remember trying both techniques and the reduction was less than 10%. I definitely know compression does not matter with PNG, but I will try the concatenation. That's why I asked how other games do it, because obviously they have way more graphics than mine – Snake Nov 19 '15 at 21:45
• You shouldn't be able to compress a folder full of images. The images should already be compressed, and should not have many repeatable sequences. If they did, that extra layer of compression would be part of the image codec... – corsiKa Nov 20 '15 at 19:26

I would propose that you try TexturePacker

• you can simply drag and drop all your images and get them packed
• you can apply different compression - e.g. use indexed PNGs, which consume way less memory - up to 70% less compared to a standard PNG file
• you can create data files that contain the name + position of each of your buildings
• the free version might already suffice to create the sprite sheets for you

Do you use a game development framework like AndEngine, Cocos2d-x or LibGdx? => None

Do you need all your images loaded at the same time? It sounds like you'll run into massive RAM problems on the target devices.

Update: Snake sent me some images. As promised, I won't make them public here, so I've created some art myself to demonstrate how to reduce the memory usage.

In the original image, only one part of the image was moving. I've placed a bird on a house to demonstrate this:

Basically packing the complete animation into a sheet is a big waste of memory. You should split static and moving parts:

Static:

Anim01:

Anim02:

Keep the original position of the bird in the images. This is why there is so much empty space above.
You need this for aligning the animation.

Now drag the images onto TexturePacker and select the following parameters:

• Data format: JSON hash (or XML if you prefer that)
• TrimMode: Trim (this creates rectangles)
• Pixel format: INDEXED 8bit - to create 8-bit PNGs (about 70% less memory)
• Allow Rotation: false
• Enter a filename for the data

The result is that you now get 2 files: the sprite sheet and a JSON description file.

    "house_anim_01.png":
    {
        "frame": {"x":351,"y":246,"w":110,"h":79},
        "rotated": false,
        "trimmed": true,
        "spriteSourceSize": {"x":67,"y":8,"w":110,"h":79},
        "sourceSize": {"w":400,"h":400},
        "pivot": {"x":0.5,"y":0.5}
    },

The important parts are frame and spriteSourceSize.

The frame gives you the location of the original sprite in the sprite sheet.

spriteSourceSize gives you the offset for drawing the image - the parts of the image that are left out because of the trimming:

A simple pseudo-code drawing routine looks like this:

    drawImage(spritename, posX, posY)
    {
        data = sheetData[spritename]
        offsetX = data.spriteSourceSize.x
        offsetY = data.spriteSourceSize.y
        frameX = data.frame.x
        frameY = data.frame.y
        width = data.frame.w
        height = data.frame.h
        screen.draw(sheetImage, posX+offsetX, posY+offsetY, width, height)
    }

You might have to adjust the offset calculation depending on the pivot point / origin in your graphics system. The routine above assumes a coordinate system whose origin is top left.

Then simply draw the house in 2 passes:

    draw("background", 100, 100);
    draw("anim_01", 100, 100);

You don't have to care about the offsets - since the images are already aligned.

• +1 for the observation about RAM use, which other people are ignoring – Dan Hulme Nov 20 '15 at 12:51
• @Andreas I will check that out. Thank you. Don't laugh at me but I am not using an engine.
It is mostly done with animation and drawables and it is working like a charm. I don't load all images into memory, otherwise it would crash. It just loads the images required for display at the moment. Do you think the above suggestion would work with the standard Android SDK? – Snake Nov 20 '15 at 16:57
• Yes it will. TexturePacker has 2 modes for trimming sprites: rectangles and polygons. If you use isometric images you might get a big advantage from polygon meshes in terms of packing. The question is whether you would be able to draw textured triangles - or rectangles with a mask. If this is not the case you can still use the rectangles. You can send me some of your images if you want: use dropbox and send a link to -> support at codeandweb . com . I won't share them - just interested to see what we can do to improve the packing. – Andreas Löw Nov 20 '15 at 20:12
• @AndreasLöw I am so sorry, I didn't get a notification of your comment. I just saw it. I will very much appreciate the help. I will send you something and maybe you can shed light on what I should do. I kinda got stuck with something I have little experience in and maybe you can shed light. I will contact you soon – Snake Nov 24 '15 at 4:02

Here are a few pointers you can use:

1. Try to make sure all your background and non-transparent images are not in PNG format
2. Try to make all animations loop-able, i.e. if an animation is 1,2,3,4,5,6, try to make it like 1,2,3,4,3,2,1, where these numbers are frame numbers of the animation; it helps a lot
3. If many images are the same and only the color differs (generally UI buttons, particle animations, game coins), then try to take one white image and change its color dynamically
4. Try to build your .apk with only those images that are extremely important, and then load everything else from a server and keep it in phone memory

I hope these pointers help

P.S.
I am only giving a generalized idea as I am not aware of your exact game content; apologies if these pointers are not useful

• You can also: 1) compress all your images 2) use tiles as often as possible 3) concatenate images together into a larger image and use that as the uv map for texturing – Anthony Raimondo Nov 19 '15 at 16:36
• Those are great suggestions. Thank you. I think the most important point is 4. However, if that's the case, then how do I put them in the different drawable folders and reference them like R.drawable.image1? That's where I am confused – Snake Nov 19 '15 at 21:21

A lot of games don't keep their graphical assets in the .apk; they only include the "basics" like UI graphics and the game code, and download the rest of the assets once the game's been installed. This is especially true for games that use different graphic resources depending on the resolution of your display, to keep the .apk from having to contain both low-res and hi-res assets.

You should look at both the optimization options provided by the other answerers, as well as whether or not you will ultimately have to serve your game's graphics separately from the .apk itself.

• I have no issues getting the graphics from a server (or even expansion files), but how do I put them in the drawable folder so I can reference them by R.drawable.x? Most of my (many many) xmls are referencing them using R.drawable – Snake Nov 19 '15 at 21:26
• AFAIK, you can't. One workaround is to keep a hash table of resource filenames linked to resource IDs, and use the AssetManager to grab the files based on those IDs, but you may have to rework your .xml files if you go this route.
– Sandalfoot Nov 19 '15 at 21:31

I came to this site looking for a similar question, and there are indeed a few good resources pointed out both in the answers to your question and in other questions.

You should probably take a look at 2D animation: Animated 3D models or sprites with animation frames? for some debate on different types of animation. It helped me optimize my game. Also, you don't tell us which API you are using or which engine you use. That alone can make quite a difference: Android frame by frame PNG animation

Besides that, keep in mind that if you go the compression route, there can be important differences in how you compress. Specifically, there are quite a few differences between types of compression, and there is some specialized debate on what is the most appropriate route for mobile devices. Especially if you take into account that for mobile, RAM usage and CPU waste for loading are also paramount. See: http://www.gamasutra.com/blogs/AlexandruVoica/20130419/190833/Why_efficiency_is_key_in_texture_compression_standards_for_mobile_graphics.php?print=1

• Thank you Bennton, I am actually new at game development. I have developed lots of apps that are tools/productivity based, but none with games. So I am doing my first game and I am not using an engine. I am using just AnimationDrawables and things are working great. It is just that the size of the apk is getting out of hand – Snake Nov 24 '15 at 4:01
• Bennton, the last link suggested by you is particularly interesting. Many thanks for sharing that! Make sure you take a look at the last one from my answer, which also touches on the issue of different compression for different file types. Actually, there seem to be compression types that work better with already compressed image types like PNG, or better with uncompressed image types. Experimenting for each case is always the best way to check what works best for each situation.
– MAnd Nov 24 '15 at 23:11
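The JSON-hash format and the pseudo-code drawing routine from the TexturePacker answer above can be sketched concretely. The frame data below is the sample from that answer; instead of a real `screen.draw`, the function just returns the computed source and destination rectangles.

```ruby
require 'json'

# Sample frame data from the answer above (no real sprite sheet needed).
SHEET_DATA = JSON.parse(<<~DATA)
  {
    "house_anim_01.png": {
      "frame": {"x": 351, "y": 246, "w": 110, "h": 79},
      "rotated": false,
      "trimmed": true,
      "spriteSourceSize": {"x": 67, "y": 8, "w": 110, "h": 79},
      "sourceSize": {"w": 400, "h": 400}
    }
  }
DATA

# Follows the pseudo-code routine: "frame" locates the sprite within the
# sheet, "spriteSourceSize" restores the offset removed by trimming.
# Returns [source_rect, dest_rect], each [x, y, w, h], top-left origin.
def draw_rects(spritename, pos_x, pos_y)
  data   = SHEET_DATA[spritename]
  frame  = data['frame']
  offset = data['spriteSourceSize']
  src  = [frame['x'], frame['y'], frame['w'], frame['h']]
  dest = [pos_x + offset['x'], pos_y + offset['y'], frame['w'], frame['h']]
  [src, dest]
end

p draw_rects('house_anim_01.png', 100, 100)
# => [[351, 246, 110, 79], [167, 108, 110, 79]]
```

Drawing the static background and the trimmed animation frame at the same (100, 100) then lines up automatically, since the trim offset is re-applied per sprite.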
Monolayer: Complex hierarchies, patterns, tiling and writing to files
----------------------------------------------------------------------

Run this interactively by downloading the IPython Notebook file
:download:`tutorial_monolayer.ipynb <tutorial_monolayer.ipynb>` and
:download:`ch3.pdb <ch3.pdb>`.

.. raw:: html
   :file: tutorial_monolayer.html
Opinion: After a year of COVID, we need to support women-owned businesses now more than ever

Vaccines being administered and restrictions starting to ease are encouraging to small businesses, but we are far from out of the woods.

by Kelly Perkins
2:55 AM MDT on Mar 31, 2021

For me, Women's History Month, which ends today, is a time to reflect on the achievements that women have made throughout the year. This year, it feels like some of our business achievements have been cancelled due to COVID. A survey last month said that roughly 56% of women's businesses in Colorado believe they won't last for another one to three months. As a female small business owner who works to empower other women, this statistic came as a wake-up call to me. That's why we should make a concerted effort to support and celebrate women-owned businesses.

In 2012, I started Spinster Sisters Co. to provide clean, eco-friendly, and healthy skin care products. Today, I am proud to say we have a retail shop in Golden, and our products are in over 2,000 retailers nationwide. Our "Microsoapery" headquarters is also 100% wind and solar powered.

While we have found success, Spinster Sisters' achievements did not happen overnight. When the company was formed, we were selling primarily at craft markets along the Front Range; then in 2014 we opened our first retail shop. It wasn't until 2019 that I decided to focus on selling directly to specialty grocers and found success.

Like many business owners, when the pandemic hit and closures began, we were worried about the future of our company. It became challenging to meet with our customers face to face, which is essential to a growing business. Thankfully, we were prepared to pivot to online sales using Facebook and Instagram Shops as our primary marketing channels. This resulted in substantial growth for our e-commerce business, selling directly to our customers.
We also began producing and selling more quality soap and hand sanitizer products to customers than ever before. For much of 2020, products of this category, not to mention quality, were inaccessible, so we are proud we were able to play a small role in keeping Coloradans safe. Our partnerships with e-commerce firms and grocery stores led to our sales increasing by over 150% in 2020. Being a business that has been able to grow during the pandemic has been extremely gratifying, and it is validating to know that people trust our products.

As a small business owner, I know what it takes to run a business and have made it a priority to help others along the way. Through partnering with nonprofits and hosting events at the Microsoapery, we have been able to support other local businesses, as well as amplify our message to bolster women-owned enterprises. We also work with many women's charities and mentor young women through Girls Inc.'s business school program, which teaches young girls various aspects of running a business. I have felt the love from this community and want to be sure to pay it forward for women who want to become entrepreneurs.

As of 2020, Colorado has nearly 140,000 female business owners, accounting for almost 40% of all business owners in the state. While these stats are encouraging, the U.S. Chamber of Commerce recently said that women-owned businesses are being disproportionately affected by the COVID downturn. Let's offer succor to our sisters! As business owners we can partner with each other for promotions and support each other in our own purchases.
As consumers we can shop at women-owned businesses, write positive reviews online, and encourage each other on social media. Vaccines being administered and restrictions starting to ease are encouraging to small businesses, but we are far from out of the woods. Let's be there for each other and for our communities. As finances allow, let's be extra attentive during this time to uplift the local businesses, local restaurants, and the women-owned businesses in our community.

On a personal note, our customers and our community have given when they could not necessarily afford to give during this pandemic, and welcomed us into their homes through the use of our products, and we could not be more thankful for that. Thank you, Colorado, for holding us up during this crazy time.

Kelly Perkins is owner and founder of Golden-based Spinster Sisters Co.
# A and B are two trains moving parallel to each other. If a ball is thrown vertically up from the train...

## Question:

$A$ and $B$ are two trains moving parallel to each other. If a ball is thrown vertically up from train $A$, the path of the ball is:

a. a parabola for an observer standing on the ground,

b. a vertical straight line for an observer in $B$ when $B$ is moving with the same speed and in the same direction,

c. a parabola for an observer in $B$ when $B$ is moving with the same speed but in the opposite direction,

d. all of the above are true.

## Inertial Frame of Reference

A frame of reference associated with an object moving with constant velocity is called an inertial frame of reference. Newton's laws will give the correct dynamics in an inertial frame. Ideally, a frame of reference attached to a distant fixed star is taken as the inertial frame. If the frame is accelerated, then pseudo forces immediately come into play. If these are incorporated, then Newton's laws can again give the correct dynamics.

## Answer and Explanation:

If a particle is projected near the surface of the earth at an angle to the horizontal, it will trace out a parabola. If projected vertically, the path is a straight line. Thus a horizontal velocity component is necessary for generating a parabolic trajectory.

Here it is given that the two trains A and B are moving along parallel tracks with constant velocity, so the frames of reference are inertial. A ball is thrown vertically up in train A. Clearly, in the train's frame of reference the ball has no horizontal velocity component, so it will go straight up and then come back straight down.

For a ground-based observer, the ball has an initial horizontal velocity component equal to that of the train. So in the ground frame, the trajectory is a parabola.

If train B is moving parallel to A with the same speed and in the same direction, the ball has no horizontal velocity component in the B frame, so the path is a vertical straight line.

If B is moving in the opposite direction, then in the B frame the ball does have a horizontal velocity, so the trajectory is a parabola.

Thus all the given statements are true. The correct answer is option d).
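The frame-by-frame reasoning above can be checked with a short numerical sketch. The following minimal Python illustration samples the ball's trajectory in each frame; the train speed and launch speed are assumed values chosen only for the example:

```python
# Trajectory of a ball thrown straight up from train A, viewed from
# three frames. Speeds are illustrative assumptions, SI units.
g = 9.8          # gravitational acceleration (m/s^2)
v_train = 20.0   # speed of train A (and of the ball, horizontally)
v_up = 10.0      # vertical launch speed of the ball

t_flight = 2 * v_up / g                     # time until the ball lands
ts = [t_flight * i / 10 for i in range(11)]  # sample times

def trajectory(v_horizontal_relative):
    """(x, y) samples of the ball in a frame where the ball's
    horizontal velocity is v_horizontal_relative."""
    return [(v_horizontal_relative * t, v_up * t - 0.5 * g * t * t)
            for t in ts]

ground = trajectory(v_train)                  # x and y both vary: parabola
same_dir = trajectory(v_train - v_train)      # x stays 0: vertical line
opposite = trajectory(v_train - (-v_train))   # relative speed 2v: parabola

print(all(x == 0 for x, _ in same_dir))  # True: straight line in B's frame
```

Only the horizontal velocity of the ball *relative to the observer* changes between frames, which is exactly why options a, b and c all hold.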
Q: Vapor Querying Classes By Distance

Let's say I need to query by the distance between two locations in Vapor. Here is what I mean:

let distanceBetweenLocations = 5
let closeLocations = try Locations.query().filter("lat", .distance, .isLessThan, 4)

Something like that.

A: I am not sure if you understand that your question is logically wrong as posed. It is obvious that you will have to do some kind of calculation to actually get the distance. I suggest creating an extension with a func on your model which calculates the distance between coordinates, and then using it in a filter. Something like this perhaps (localLat and localLon are assumed to be the model's own stored coordinates, and Foundation provides the math functions):

import Foundation

extension Location {
    // Haversine formula: great-circle distance in kilometres between
    // this location (localLat, localLon) and the given coordinates.
    func distance(lat: Double, lon: Double) -> Double {
        let R = 6371.0  // mean Earth radius in km
        let dLat = (lat - localLat) * Double.pi / 180
        let dLon = (lon - localLon) * Double.pi / 180
        let latRad1 = localLat * Double.pi / 180
        let latRad2 = lat * Double.pi / 180
        let a1 = sin(dLat / 2) * sin(dLat / 2)
        let a2 = sin(dLon / 2) * sin(dLon / 2) * cos(latRad1) * cos(latRad2)
        let a = a1 + a2
        let c = 2 * atan2(sqrt(a), sqrt(1 - a))
        return R * c
    }
}

I am trying not to be rude, but I hope that you were just too lazy to ask correctly; in case you were not, see this test project, read it all and try to understand it, and when you do, you will know how to achieve what you need.
Q: How can I change a multi-select border color using jQuery validation in ASP.NET C#?

I am trying this:

if ($(this).hasClass('select2') == true && $(this).hasClass('required') == true) {
    var idName = $(this)[0].id;
    var e = document.getElementById(idName);
    var strUser = e.options[e.selectedIndex].value;
    if (strUser == 0) {
        alert("Please select a user");
        document.getElementById(idName).style.borderColor = "red";
        document.getElementById(idName).style.borderWidth = "1px";
    }
}

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<select id="ddltest1" class="form-control select2 required" multiple="multiple" tabindex="2">
    <option value="0">--select--</option>
    <option value="1">test1</option>
    <option value="2">test2</option>
    <option value="3">test3</option>
    <option value="4">test4</option>
</select>

How can I change the border colour of the select dropdown? It's not working...

A: You are mixing jQuery and plain JavaScript. Change the code as follows:

* change var idName = $(this)[0].id; to var idName = $('#ddltest1').attr('id'); — $(this) refers to the current element, but you did not show which event is being handled;
* change var strUser = e.options[e.selectedIndex].value; to $('#ddltest1').val() to get the selected value.
$(document).ready(function () {
    if ($('#ddltest1').hasClass('select2') == true && $('#ddltest1').hasClass('required') == true) {
        var idName = $('#ddltest1').attr('id');
        // For a multiple select, .val() returns an array of the selected
        // values (or null / an empty array when nothing is selected).
        var strUser = $('#ddltest1').val();
        if (!strUser || strUser.length === 0 || strUser.indexOf("0") !== -1) {
            alert("Please select a user");
            document.getElementById(idName).style.borderColor = "red";
            document.getElementById(idName).style.borderWidth = "1px";
        }
    }
});

<script src="https://cdnjs.cloudflare.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<select id="ddltest1" class="form-control select2 required" multiple="multiple" tabindex="2">
    <option value="0">--select--</option>
    <option value="1">test1</option>
    <option value="2">test2</option>
    <option value="3">test3</option>
    <option value="4">test4</option>
</select>
# How do you solve $-5.6 = \frac{4}{5} + 12.2$?

Jun 6, 2017

You can't.

#### Explanation:

There isn't anything to solve, because in solving you're finding the value(s) of an unknown variable, and there aren't any unknown variables here. It's simply an untrue statement, saying that $-5.6 = 13$, which it doesn't equal.

Jun 6, 2017

Assumption: the equation should be $-5.6 = \frac{x}{5} + 12.2$ (this is the equation the steps below actually solve).

$x = -89$

#### Explanation:

Don't like decimals, so let's get rid of them. Multiply everything by 10:

$-56 = 2x + 122$

Subtract 122 from both sides:

$-56 - 122 = 2x + 0$

$-178 = 2x$

Divide both sides by $2$:

$-89 = x$
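The answer can be verified with exact rational arithmetic. Note that multiplying $\frac{4}{5}x$ by 10 would give $8x$, not $2x$, so the working shown corresponds to the equation $-5.6 = \frac{x}{5} + 12.2$; a quick Python check of both readings:

```python
from fractions import Fraction

# Exact rationals avoid any floating-point doubt about the decimals.
x = Fraction(-89)
lhs = Fraction("-5.6")
rhs = x / 5 + Fraction("12.2")

print(lhs == rhs)  # True: x = -89 solves -5.6 = x/5 + 12.2
print(lhs == Fraction(4, 5) * x + Fraction("12.2"))  # False for (4/5)x
```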
\section{Introduction} Hempel distance is a measure of complexity originally defined for Heegaard splittings of $3$-manifolds \cite{hempel}. The definition can be extended to bridge decompositions of links and it has been successfully applied to knot theory. For example, extending Hartshorn's \cite{hartshorn} study for Heegaard splittings, Bachman-Schleimer \cite{bachman-schleimer} showed that the distance of a bridge decomposition of a knot bounds from below the genus of any essential surface in the knot exterior. Extending Scharlemann-Tomova's result \cite{scharlemann-tomova} for Heegaard splittings, Tomova \cite{tomova} showed that the distance of a bridge decomposition bounds from below the bridge number of the knot or the Heegaard genus of the knot exterior. However, it is difficult to calculate the Hempel distance of a general Heegaard splitting or bridge decomposition. While estimating it from above is a simple task in principle, it is a hard problem to estimate the distance from below. For a Heegaard splitting, Casson-Gordon \cite{casson-gordon} introduced the rectangle condition to ensure that the distance is at least two. Lee \cite{lee} gave a weak version of the rectangle condition which guarantees the distance to be at least one. Berge \cite{berge} gave a criterion for a genus two Heegaard splitting which guarantees the distance to be at least three. Lustig-Moriah \cite{lustig-moriah} also gave a criterion to estimate the distance of a Heegaard splitting from below. On the other hand, we could not find corresponding results for bridge decompositions in the literature. In this paper, we observe that a bridge decomposition of a link in $S^3$ can be described by a {\it bridge diagram}, and show that the {\it well-mixed condition} for a bridge diagram guarantees the distance to be at least two (see Section \ref{diagram} for definitions). It may be regarded as a variation of the rectangle condition for Heegaard diagrams.
\begin{Thm}\label{main} Suppose $(T_+,T_-;P)$ is an $n$-bridge decomposition of a link in $S^3$ for $n\geq 3$. If a bridge diagram of $(T_+,T_-;P)$ satisfies the well-mixed condition, the Hempel distance $d(T_+,T_-)$ is at least two. \end{Thm} Recently, Masur-Schleimer \cite{masur-schleimer} found an algorithm to calculate the Hempel distance of a Heegaard splitting with a bounded error term. The author imagines that their algorithm may also be applicable to bridge decompositions. However, the point of our result is its practicality: for any given bridge decomposition, we can easily obtain a bridge diagram and check whether it satisfies the well-mixed condition. \section{Bridge decompositions and the Hempel distance} Suppose $L$ is a link in $S^3$ and $P$ is a $2$-sphere dividing $S^3$ into two $3$-balls $B_+$ and $B_-$. Assume that $L$ intersects $P$ transversally and let $\tau _\varepsilon $ be the intersection of $L$ with $B_\varepsilon $ for each $\varepsilon =\pm $. That is to say, $(S^3,L)$ is decomposed into $T_+:=(B_+,\tau _+)$ and $T_-:=(B_-,\tau _-)$ by $P$. We call the triple $(T_+,T_-;P)$ an {\it $n$-bridge decomposition} of $L$ if each $T_\varepsilon $ is an $n$-string trivial tangle. Here, $T_\varepsilon $ is called an {\it $n$-string trivial tangle} if $\tau _\varepsilon $ consists of $n$ arcs parallel to the boundary of $B_\varepsilon $. Obviously $1$-bridge decompositions are possible only for the trivial knot, so we assume $n\geq 2$ in this paper. Consider a properly embedded disk $D$ in $B_\varepsilon $. We call $D$ an {\it essential disk} of $T_\varepsilon $ if $\partial D$ is essential in the surface $\partial B_\varepsilon \setminus \tau _\varepsilon $ and $D$ is disjoint from $\tau _\varepsilon $. Here, a simple closed curve on a surface is said to be {\it essential} if it neither bounds a disk nor is peripheral in the surface.
Note that essential disks of $T_+$ and $T_-$ are bounded by some essential simple closed curves on the $2n$-punctured sphere $P\setminus L$. The essential simple closed curves on $P\setminus L$ form a $1$-complex ${\mathcal C}(P\setminus L)$, called the {\it curve graph} of $P\setminus L$. The vertices of ${\mathcal C}(P\setminus L)$ are the isotopy classes of essential simple closed curves on $P\setminus L$ and a pair of vertices spans an edge of ${\mathcal C}(P\setminus L)$ if the corresponding isotopy classes can be realized as disjoint curves. In the case of $n=2$, this definition makes the curve graph a discrete set of points and so a slightly different definition is used. The {\it Hempel distance} (or just the {\it distance}) of $(T_+,T_-;P)$ is defined by $$d(T_+,T_-):={\rm min}\{ d([\partial D_+],[\partial D_-])\mid D_\varepsilon \text{ is an essential disk of }T_\varepsilon .\ (\varepsilon =\pm )\} $$ where $d([\partial D_+],[\partial D_-])$ is the minimal distance between $[\partial D_+]$ and $[\partial D_-]$ measured in ${\mathcal C}(P\setminus L)$ with the path metric. Because the curve graph is connected \cite{masur-minsky1}, the distance $d(T_+,T_-)$ is a finite non-negative integer. For $2$-bridge decompositions, there is a unique essential disk for each of the $2$-string trivial tangles. Moreover, the curve graph of a $4$-punctured sphere is well understood (see Sections 1.5 and 2.1 in \cite{masur-minsky2} for example) and so we can calculate the exact distance. Suppose $(T_+,T_-;P)$ is an $n$-bridge decomposition of a link $L$ for $n\geq 3$. If $d(T_+,T_-)=0$, there are essential disks $D_+,D_-$ of $T_+,T_-$, respectively, such that $[\partial D_+]=[\partial D_-]$. We can assume $\partial D_+=\partial D_-$ indeed and so $D_+\cup D_-$ is a $2$-sphere in $S^3$. Therefore, $(T_+,T_-;P)$ is separated by the sphere into an $m$-bridge decomposition and an $(n-m)$-bridge decomposition of sublinks of $L$. 
By the definition of essential disks, $m$ is more than $0$ and less than $n$. Conversely, we can conclude that the distance is at least one if $(T_+,T_-;P)$ cannot be separated in this way. \section{Bridge diagrams and the well-mixed condition}\label{diagram} Suppose $(T_+,T_-;P)$ is an $n$-bridge decomposition of a link $L$ in $S^3$ and $T_+=(B_+,\tau _+),T_-=(B_-,\tau _-)$. For each $\varepsilon =\pm $, the $n$ arcs of $\tau _\varepsilon $ can be disjointly projected into $P$. Let $p:L\rightarrow P$ be such a projection. A {\it bridge diagram} of $(T_+,T_-;P)$ is a diagram of $L$ obtained from $p(\tau _+)$ and $p(\tau _-)$. In the terminology of \cite{crowell-fox}, $\tau _+,\tau _-$ are the overpasses and the underpasses of $L$. Note that the boundary of a regular neighborhood of each arc of $p(\tau _\varepsilon )$ in $P$ bounds an essential disk of $T_\varepsilon $ separating an arc of $\tau _\varepsilon $. In this sense a bridge diagram represents a family of essential disks of $T_+,T_-$. So we can think of it as something like a Heegaard diagram for a Heegaard splitting. It is well known that a bridge decomposition can be displayed as a ``plat" as in Figure \ref{fig_braid} (see \cite{birman}). Now we describe how to convert a plat presentation to a bridge diagram. For example, consider a $3$-bridge decomposition with a plat presentation as in the left of Figure \ref{fig_example}. Here $P$ can be isotoped to any height, so start with $P$ in the position $P_s$. The top in the right of Figure \ref{fig_example} illustrates a view of a canonical projection of the arcs $t_+^1,t_+^2,t_+^3$ on $P$ from the $B_+$ side. In our pictures, $p(t_+^1),p(t_+^2),p(t_+^3)$ are represented by a solid line, a dotted line and a broken line, respectively. Shifting $P$ to the position $P_1$, the projections are as in the second picture in the right of Figure \ref{fig_example}. Shifting $P$ further to the position $P_2$, the projections are as in the third.
By continuing this process, the projections are as in Figure \ref{fig_diagram_5} when $P$ is in the position $P_g$. Then we can find a canonical projection of the arcs $t_-^1,t_-^2,t_-^3$ and obtain a bridge diagram. \begin{figure}[ht] \vspace{10pt} \includegraphics[width=120pt]{fig_braid.eps}\\ \vspace{-91pt} \hspace{-7pt}$n$\\[37pt] \hspace{45pt}$2n$-braid\hspace{35pt}$B_+$\\[1pt] \hspace{133pt}$P$\\[1pt] \hspace{120pt}$B_-$ \caption{} \label{fig_braid} \end{figure} \begin{figure}[ht] \begin{minipage}{160pt} \begin{center} \includegraphics[width=120pt]{fig_example.eps} \end{center} \vspace{-155pt} \hspace{41pt}$t_+^1$\hspace{23pt}$t_+^2$\hspace{23pt}$t_+^3$\\[12pt] \hspace*{140pt}$P_s$\\[9pt] \hspace*{140pt}$P_1$\\[4pt] \hspace*{140pt}$P_2$\\ \hspace*{140pt}$P_3$\\ \hspace*{143pt}$\vdots$\\[27pt] \hspace*{140pt}$P_g$\\[12pt] \hspace*{41pt}$t_-^1$\hspace{23pt}$t_-^2$\hspace{23pt}$t_-^3$\\ \end{minipage} \begin{picture}(20,100)(0,0) \put(-5,70){\line(4,1){30}} \put(-5,45){\line(1,0){30}} \put(-5,28){\line(2,-1){30}} \put(-5,15){\line(2,-3){30}} \end{picture} \begin{minipage}{120pt} \vspace{-38pt} \hspace{16pt}$p(t_+^1)$\hspace{10pt}$p(t_+^2)$\hspace{10pt}$p(t_+^3)$\\[-16pt] \begin{center} \includegraphics[width=100pt]{fig_diagram_1.eps}\\[13pt] \includegraphics[width=100pt]{fig_diagram_2.eps}\\[13pt] \includegraphics[width=100pt]{fig_diagram_3.eps}\\[13pt] \includegraphics[width=100pt]{fig_diagram_4.eps} \end{center} \end{minipage} \vspace{-15pt} \caption{} \label{fig_example} \end{figure} \begin{figure}[ht] \includegraphics[width=280pt]{fig_diagram_5.eps}\\ \vspace{-145pt} \hspace{114pt}$p(t_+^1)$\hspace{110pt}$p(t_+^2)$\\[34pt] $p(t_-^1)$\hspace{268pt}$p(t_-^3)$\\[41pt] \hspace{28pt}$p(t_-^2)$\\[9pt] \hspace{48pt}$p(t_+^3)$ \caption{} \label{fig_diagram_5} \end{figure} Next we study the distance of this $3$-bridge decomposition. Since the link $L$ is connected, the bridge decomposition cannot be separated into smaller ones. 
It follows that the distance is at least one. Consider the simple closed curve $c$ as in Figure \ref{fig_curve}. The curve $c$ is essential in $P\setminus L$ and disjoint from both $p(t_+^1)$ and $p(t_-^1)$. Recall that the boundary of a small neighborhood of $p(t_+^1),p(t_-^1)$ in $P$ bounds an essential disk $D_+^1$ of $T_+$ and an essential disk $D_-^1$ of $T_-$, respectively. So there is an edge between $[\partial D_+^1]$ and $[c]$ and an edge between $[c]$ and $[\partial D_-^1]$ in the curve graph ${\mathcal C}(P\setminus L)$. By definition, the distance is at most two. It is true that there is no direct edge between $[\partial D_+^1]$ and $[\partial D_-^1]$. However, this is not enough to conclude that the distance is equal to two because there are infinitely many essential disks of $T_+,T_-$ other than $D_+^1,D_-^1$. \begin{figure}[ht] \includegraphics[width=280pt]{fig_curve.eps}\\ \vspace{-147pt} \hspace{-18pt}$p(t_+^1)$\\[34pt] \hspace{-292pt}$p(t_-^1)$\\[58pt] \hspace{48pt}$c$ \caption{} \label{fig_curve} \end{figure} As shown in \cite{berge}, \cite{casson-gordon}, \cite{lee} and \cite{lustig-moriah}, a sufficiently complicated Heegaard diagram implies a large distance of the Heegaard splitting. We can expect that a sufficiently complicated bridge diagram also implies a large distance of the bridge decomposition. A bridge diagram should be pretty complicated if it satisfies the {\it well-mixed condition}, which we define in the following. Denote the arcs of each $\tau _\varepsilon $ by $t_\varepsilon ^1,t_\varepsilon ^2,\ldots ,t_\varepsilon ^n$. Let $l$ be a loop on $P$ containing $p(\tau _-)$ such that $p(t_-^1),p(t_-^2),\ldots ,p(t_-^n)$ are located in $l$ in this order. We can assume that $p(\tau _+)$ has been isotoped in $P\setminus L$ to have minimal intersection with $l$. For the bridge diagram of Figure \ref{fig_diagram_5}, it is natural to choose $l$ to be the closure in $P\cong S^2$ of the horizontal line containing $p(t_-^1)\cup p(t_-^2)\cup p(t_-^3)$.
Let $H_+,H_-\subset P$ be the hemi-spheres divided by $l$ and let $\delta _i$ ($1\leq i\leq n$) be the component of $l\setminus p(\tau _-)$ which lies between $p(t_-^i)$ and $p(t_-^{i+1})$. (Here the indices are considered modulo $n$.) Let ${\mathcal A}_{i,j,\varepsilon }$ be the set of components of $p(\tau _+)\cap H_\varepsilon $ separating $\delta _i$ from $\delta _j$ in $H_\varepsilon $ for a distinct pair $i,j\in \{ 1,2,\ldots ,n\} $ and $\varepsilon \in \{ +,-\} $. For example, Figure \ref{fig_diagram_6} displays ${\mathcal A}_{1,2,+}$ for the above bridge diagram. Note that ${\mathcal A}_{i,j,\varepsilon }$ consists of parallel arcs in $H_\varepsilon $. \begin{figure}[ht] \includegraphics[width=280pt]{fig_diagram_6.eps}\\ \vspace{-50pt} \hspace{270pt}$H_+$\\[17pt] \hspace{286pt}$l$\\[-4pt] \hspace{26pt}$p(t_-^1)$\hspace{35pt}$\delta _1$\hspace{33pt}$p(t_-^2)$\hspace{35pt}$\delta _2$\hspace{32pt}$p(t_-^3)$\hspace{17pt}$\delta _3$ \caption{} \label{fig_diagram_6} \end{figure} \begin{Def} \begin{enumerate} \item A bridge diagram satisfies the {\it $(i,j,\varepsilon )$-well-mixed condition} if in ${\mathcal A}_{i,j,\varepsilon }\subset H_\varepsilon $, a subarc of $p(t_+^r)$ is adjacent to a subarc of $p(t_+^s)$ for all distinct pair $r,s\in \{ 1,2,\ldots ,n\} $. \item A bridge diagram satisfies the {\it well-mixed condition} if it satisfies the $(i,j,\varepsilon )$-well-mixed condition for all combinations of a distinct pair $i,j,\in \{ 1,2,\ldots ,n\} $ and $\varepsilon \in \{ +,-\} $. \end{enumerate} \end{Def} As in Figure \ref{fig_diagram_6}, the bridge diagram in Figure \ref{fig_diagram_5} amply satisfies the $(1,2,+)$-well-mixed condition. One can also check the $(i,j,\varepsilon )$-well-mixed condition for all the other combinations $(i,j,\varepsilon )=(1,2,-),(2,3,+),(2,3,-),(3,1,+),(3,1,-)$. Hence the bridge diagram in Figure \ref{fig_diagram_5} satisfies the well-mixed condition. 
\section{Proof of the theorem} Firstly, consider an essential disk $D_-$ of $T_-$. Assume that $D_-$ has been isotoped so that $|\partial D_-\cap l|$ is minimal. Here, $|\cdot |$ denotes the number of connected components of a topological space. \begin{Lem} There exist a distinct pair $i,j\in \{ 1,2,\ldots ,n\} $ and $\varepsilon \in \{ +,-\} $ such that $\partial D_-$ includes a subarc connecting $\delta _i$ and $\delta _j$ in $H_\varepsilon $. \end{Lem} \begin{proof} Since the arcs of $\tau _-$ are projected to subarcs of $l$, there exists a disk $E_-$ in $B_-$ such that $\partial E_-=l$ and $\tau _-\subset E_-$. The essential disk $D_-$ must have non-empty intersection with $E_-$. The closed components of $D_-\cap E_-$ can be eliminated by an isotopy of ${\rm Int}D_-$. Then $D_-\cap E_-$ is a non-empty family of properly embedded arcs in $D_-$. Consider an outermost subdisk $D_-^0$ of $D_-$ cut off by an arc of them. For the minimality of $|\partial D_-\cap l|$, we can see that $\partial D_-^0\cap \partial D_-$ connects $\delta _i$ and $\delta _j$ in $H_\varepsilon $ for a distinct pair $i,j\in \{ 1,2,\ldots ,n\} $ and $\varepsilon \in \{ +,-\} $. \end{proof} Secondly, consider an essential disk $D_+$ of $T_+$. Assume that $D_+$ has been isotoped so that $|\partial D_+\cap p(\tau _+)|$ is minimal. \begin{Lem} Suppose $c$ is an essential simple closed curve on $P\setminus L$ disjoint from $\partial D_+$. There exist a distinct pair $r,s\in \{ 1,2,\ldots ,n\} $ such that no subarc of $c$ connects $p(t_+^r)$ and $p(t_+^s)$ directly (i.e. its interior is disjoint from $p(\tau _+)$). \end{Lem} \begin{proof} Let $E_+^i$ be a disk of parallelism between $t_+^i$ and $p(t_+^i)$ for each $i=1,2,\ldots ,n$ so that $E_+^1,E_+^2,\ldots ,E_+^n$ are pairwise disjoint. The closed components of $D_+\cap (E_+^1\cup E_+^2\cup \cdots \cup E_+^n)$ can be eliminated by an isotopy of ${\rm Int}D_+$. 
If $D_+\cap (E_+^1\cup E_+^2\cup \cdots \cup E_+^n)$ is empty, $D_+$ separates the $n$ disks $E_+^1,E_+^2,\ldots ,E_+^n$ into two classes in $B_+$. Since $D_+$ is essential, neither of these classes is empty. If $D_+\cap (E_+^1\cup E_+^2\cup \cdots \cup E_+^n)$ is not empty, it consists of properly embedded arcs in $D_+$. Consider an outermost subdisk $D_+^0$ of $D_+$ cut off by an arc of them, say, an arc of $D_+\cap E_+^k$. Then, $D_+^0\cup E_+^k$ separates the $(n-1)$ disks $E_+^1,\ldots ,E_+^{k-1},E_+^{k+1},\ldots ,E_+^n$ into two classes in $B_+$. Since $|\partial D_+\cap p(t_+^k)|$ is minimal, neither of these classes is empty. In either case, by choosing $r$ and $s$ from the indices of the disks in the two separated classes, the lemma follows. \end{proof} Assume that the distance of $(T_+,T_-;P)$ is less than two. There are disjoint essential disks $D_+,D_-$ of $T_+,T_-$, respectively. If $\partial D_-$ contains a subarc connecting $\delta _i$ and $\delta _j$ in $H_\varepsilon $, it intersects all the arcs of ${\mathcal A}_{i,j,\varepsilon }$. In particular, if two arcs of ${\mathcal A}_{i,j,\varepsilon }$ are adjacent in $H_\varepsilon $, a subarc of $\partial D_-$ connects them directly. The above observations and the well-mixed condition are almost enough to lead to a contradiction, but only the following should be checked: \begin{Lem} The disks $D_+$ and $D_-$ can be isotoped preserving the disjointness so that $|\partial D_+\cap p(\tau _+)|$ and $|\partial D_-\cap l|$ are minimal. \end{Lem} \begin{proof} Note that any isotopy of $\partial D_\varepsilon $ in $P\setminus L$ can be realized by an isotopy of $D_\varepsilon $ in $B_\varepsilon \setminus \tau _\varepsilon $ for $\varepsilon =\pm $. If $|\partial D_+\cap p(\tau _+)|$ is not minimal, there are a subarc of $\partial D_+$ and a subarc $\alpha $ of $p(\tau _+)$ cobounding a disk $\Delta _+$ in $P\setminus L$. Since $D_+,D_-$ are disjoint, $\partial D_-\cap \Delta _+$ consists of arcs parallel into $\alpha $.
Let $\Delta _+^0$ be an outermost disk of the parallelisms. By assumption, $p(\tau _+)$ has minimal intersection with $l$ and so no component of $l\cap \Delta _+^0$ has both end points on $\alpha $. By an isotopy of $\partial D_-$ across $\Delta _+^0$, we can reduce $|\partial D_-\cap \Delta _+|$ without increasing $|\partial D_-\cap l|$. After pushing out $\partial D_-$ from $\Delta _+$ in this way, we can reduce $|\partial D_+\cap p(\tau _+)|$ by an isotopy of $\partial D_+$ across $\Delta _+$. If $|\partial D_-\cap l|$ is not minimal, there are a subarc of $\partial D_-$ and a subarc $\beta $ of $l$ cobounding a disk $\Delta _-$ in $P\setminus L$. The intersection $\partial D_+\cap \Delta _-$ consists of arcs parallel into $\beta $. Let $\Delta _-^0$ be an outermost disk of the parallelisms. By the minimality of $|l\cap p(\tau _+)|$, no component of $p(\tau _+)\cap \Delta _-^0$ has both end points at $\beta $. By an isotopy of $\partial D_+$ across $\Delta _-^0$, we can reduce $|\partial D_+\cap \Delta _-|$ without increasing $|\partial D_+\cap p(\tau _+)|$. After pushing out $\partial D_+$ from $\Delta _-$ in this way, we can reduce $|\partial D_-\cap l|$ by an isotopy of $\partial D_-$ across $\Delta _-$. \end{proof} Theorem \ref{main} implies that the $3$-bridge decomposition in Figure \ref{fig_example} has distance at least two. Since we have shown that it is at most two, the distance is exactly two. We can work out in this way fairly many $n$-bridge decompositions, especially for $n=3$. \subsection*{Acknowledgement} I would like to thank Jang~Yeonhee for giving me the main question of this work and helpful conversations. I would like to thank\break Ken'ichi~Ohshika for all his help as a mentor. I would also like to thank\break Makoto~Ozawa and Makoto~Sakuma for valuable comments and suggestions.
The bowhead whale (Balaena mysticetus) is a cetacean living in the cold seas of the northern hemisphere, where it stays all year round. It is probably the longest-lived mammal, capable of reaching 200 years of age or more. Unlike most cetaceans it lacks a dorsal fin, and it may live in small groups of about 14 individuals. It was first mentioned by the Swedish naturalist Carl Linnaeus in his Systema naturae (10th edition, 1758). The International Union for Conservation of Nature (IUCN) has assessed the species as Least Concern.

Taxonomy

The whale was first mentioned by the Swedish naturalist Carl Linnaeus in the 10th edition of his Systema naturae in 1758. Because it is very similar to the other right whales living in the North Atlantic, the North Pacific and the Southern Ocean, Linnaeus considered all three forms to be one and the same species, which he placed in the genus Balaena. The bowhead whale received the binomial name Balaena mysticetus. Today the bowhead whale forms a monotypic genus separate from the so-called true right whales (Eubalaena), as proposed by the British zoologist John Edward Gray in 1821. For nearly two hundred years afterwards, however, the family Balaenidae was the subject of intense taxonomic debate. Scientists eventually concluded that bowhead whales differ from the other right whales, but there was still no consensus on whether they should share a single genus. Only studies from 2000 provided clear evidence that the three living species of right whales form a phylogenetic lineage distinct from the bowhead, and that bowhead whales and right whales are properly placed in two separate genera. It was thus confirmed that the right whales belong to the separate genus Eubalaena. The relationship within the family Balaenidae is as follows: the genus Eubalaena comprises E. glacialis (North Atlantic right whale) alongside the sister pair E. japonica (North Pacific right whale) and E. australis (southern right whale), while the genus Balaena contains only B. mysticetus (bowhead whale).

Distribution

The bowhead whale spends its whole life in Arctic and subarctic waters, the only whale to do so. It can be encountered wherever there are no continuous belts of pack ice, generally along the coasts of northern Canada and Alaska. In winter it migrates to the Bering Strait and the coast of Labrador; in spring it moves north into the Chukchi and Beaufort Seas. Its range can shift with climatic changes, that is, with the formation and melting of ice. With the decline of sea ice, bowhead whales occur more often in the open sea in summer. In the past (until roughly the 16th or 17th century, before they were hunted) the whales' range was probably wider and extended further south.

Populations by area of occurrence

The International Whaling Commission (IWC) traditionally recognized five populations of the bowhead whale, but at present, as reported for instance by the International Union for Conservation of Nature (IUCN), there are the following four:

- the subpopulation of the Bering, Chukchi and Beaufort Seas (from Chaunskaya Bay in the western Chukchi Sea across the Beaufort Sea east to Amundsen Gulf and Viscount Melville Sound),
- the subpopulation of eastern Canada and western Greenland (Hudson Bay, Foxe Basin, Hudson Strait, Davis Strait, Baffin Bay and elsewhere),
- the subpopulation of the Sea of Okhotsk (the northern and western Sea of Okhotsk east to Shelikhov Gulf, Gizhiga Bay and Penzhina Bay),
- the subpopulation of eastern Greenland, Svalbard and the Barents Sea (from the east coast of Greenland across the Greenland Sea, in the Franz Josef Land archipelago, and in the Kara Sea at least as far as Severnaya Zemlya).

Sightings have also been recorded further south, exceptionally reaching Iceland and the coast of Finnmark. Vagrants have been found as far away as the west of the British Isles and France.

Description and abilities

The bowhead whale has a robust dark body and a large head of triangular shape. The lower jaw, essentially the chin, is conspicuously white and dotted with irregular black spots. An adult reaches a length of 14–18 metres (a maximum of about 21 m) and a weight of around 75–100 tonnes, females usually being larger than males. The tongue alone can weigh up to 900 kilograms. It is thus currently one of the five largest species of cetaceans (and hence of animals) in the world. It is notable above all for its massive head, with which it can break through a layer of ice up to 60 cm thick, and for its tail fluke, up to eight metres across, which accounts for 40% of the whole body length. It has a thick layer of blubber, up to 70 cm, which protects it against low temperatures. At sea it usually moves at a slower pace: on average it swims at about 2.5 km/h, at 4–9 km/h while feeding, and it can exceed 10 km/h when necessary. It does not dive to great depths (at most about 150 metres). It can stay underwater for up to an hour, but usually submerges for less than 20 minutes. Through the two blowholes on the top of its head it can send up a blow four to six metres high.

Longevity

The bowhead whale is considered the longest-lived mammal, since it can live more than 200 years. In May 2007 a 15-metre individual was found off the Alaskan coast whose age was estimated at about 115 to 130 years, and following this measurement scientists examined a number of other whales; the age of one of them was estimated at 211 years, and the ages of other bowheads at 135 to 172 years. According to scientists from CSIRO, Australia's national science agency, genome sequencing has put the maximum possible lifespan of bowhead whales at 268 years.

A unique genome

The bowhead whale has about a thousand times more cells in its body than other mammals, and so should in general be more susceptible to age-related diseases. Yet, on the contrary, it has a much higher resistance to ageing and is also almost immune to cancer. While examining its genome, scientists found two particular alleles that could be responsible for this whale's remarkable longevity. Its unique genetic information may also reveal how its unusually slow metabolism works.

Biology and ecology

Migration

During migration, which takes place in spring and autumn, bowhead whales divide into three smaller groups of at most about 14 individuals: subadult whales, younger adult whales and older adult whales. Each of the five populations shows distinct migration patterns depending on the food supply and on the spread or retreat of the polar ice.

Communication

Bowhead whales communicate with one another through a varied mixture of powerful low-frequency sounds that do not exceed 1000 Hz. Communication is more intensive especially during migration. In the breeding season they produce more continuous songs, as a rule with longer-lasting frequencies. A study from 2010–2014, in which about 300 whales were examined, recorded some 184 different sounds. According to scientists, bowhead whales have a broad, frequently changing repertoire, and the sounds they make resemble jazz music.

[Figure: drawing of a bowhead whale from 1987]

Feeding

It feeds on plankton and especially on small crustaceans (krill), of which it can consume almost two tonnes in a single day, i.e. about 20 million of these small marine arthropods. Its hunting technique consists of swimming slowly into their swarms with its mouth wide open. Instead of teeth, the sides of its upper jaw carry approximately 350 baleen plates, which help it strain out its food. With these horny combs it holds the captured prey in its mouth and at the same time rids itself of the excess water it has taken in. The lower jaw is strongly curved at the sides, essentially imitating a giant spoon. Scientists long assumed that cetaceans lack an olfactory apparatus, but research on the bowhead whale's brain showed that it contains, among other things, an olfactory tract. Unlike most other species it also has separated nostrils, and it is thus probably able to perceive scent in the air, which may make it easier for it to find food.
Rozmnožování Tento kytovec se páří na konci zimy, tedy na jaře nebo na začátku léta. Samice je březí 13–14 měsíců a rodí tak následujícího roku během dubna až června. Mládě přijde na svět ocasem napřed, čímž se zabrání možnému utonutí při porodu. Novorozeně dosahuje délky 3,5–5,5 metru a hmotnosti přibližně jedné tuny. Ihned po narození je vybaveno silnou tukovou vrstvou a do půl hodiny dokáže samo plavat. O potomka se stará pravděpodobně jen samice. Mládě vyroste o cca 1,5 cm denně a matkou je kojeno 9–15 měsíců. Matčino mléko je velmi tučné (30–60 % tuku), tak, že má konzistenci srovnatelnou s pastou na zuby. Takováto extrémně výživná strava urychluje vývoj potomka. Ve věku 20–25 let, při velikosti asi 12–14 m, dosahuje velryba grónská pohlavní dospělosti. Může se dožít vysokého stáří, pravděpodobně až 200 a více let (podrobněji zde). Predátoři Velryba grónská díky své velikosti nemá kromě člověka mnoho přirozených nepřátel, prakticky žádné. Neexistují ani zprávy o útocích žraloků. Nicméně při studii v roce 1995 měla třetina velryb z populace v Davisově průlivu jizvy po útocích kosatek, a právě kosatka dravá je považována za jediného možného predátora. Ohrožení a populace Už v 16. nebo 17. století byla velryba grónská obětí zničujících velrybářských expedicí. Tehdy, před komerčním lovem, v mořích žilo nejméně 50 000 velryb. Za 350 let trvajících lovů jich bylo asi 70 000 zabito. Jejími komerčně velmi cennými přednostmi, kterými jsou silná vrstva tuku, rohovité kostice z keratinu a obecně jako ohromný zdroj obživy, byla dohnána téměř k záhubě. První ochranný pakt vznikl teprve ve 30. letech 20. století a údajně se jednalo o vůbec první svého druhu, který se týkal ochrany divoké fauny. Učiněno tak bylo díky tehdejší mezinárodní organizací vystupující pod názvem Společnost národů, založené po konci první světové války. 
Dnes je velryba grónská součástí takzvané Washingtonské úmluvy – CITES, uvedena v příloze I, a také je zapsána v Úmluvě o ochraně stěhovavých druhů volně žijících živočichů – CMS (angl. Convention on the Conservation of Migratory Species of Wild Animals''), rovněž v příloze I. V současnosti je komerční lov těchto velryb striktně zakázán, neboť byl hlavní příčinou úbytku populace. Výjimka je uložena pouze domorodým národům, pro které jsou tyto kytovci zdrojem živobytí. Stanovené limity střeží příslušné organizace, kterou je například Mezinárodní velrybářská komise (IWC). Dalšími agenturami, které se podílejí na ochraně tohoto druhu, jsou Aljašská eskymácká velrybářská komise (AEWC) a Národní úřad pro oceán a atmosféru (NOAA). V budoucnu může velrybu grónskou ohrozit rostoucí průmyslová aktivita (těžba ropy a zemního plynu), vyšší frekvence lodní dopravy a s tím spojený rybolov, a v neposlední řadě jejich ilegální lov porušující stanovené a velmi omezené normy. Na základě rychle rostoucí průměrné teploty v arktické oblasti došlo během tohoto století k výraznému úbytku mořského ledu a předpoklady jsou takové, že bude-li tento trend pokračovat, v letních měsících dojde k jeho úplnému zmizení. Pro velryby budou však tyto krátkodobé změny s největší pravděpodobností pozitivní, neboť by se tímto rozšířil areál možných lovišť. Celkový dopad je však neznámý či ryze spekulativní. Populace Mezinárodní svaz ochrany přírody (IUCN) považuje druh za málo dotčený. Globální (celosvětová) populace pravděpodobně roste a díky ochranným opatřením se tak pozvolna zotavuje. Na základě neúplných studií biologové hrubým odhadem usuzují, že v mořích žije více než 10 000 dospělých jedinců (cca od 10 000 do 30 000 kusů). Nicméně tři z celkem asi pěti známých populací odborníci stále považují za ohrožené, další za zranitelnou a pouze jednu za málo dotčenou. 
Podle Mezinárodního svazu ochrany přírody vypadá populace velryb, v případě čtyř subpopulací, následovně: populace v Beringově, Čukotském a Beaufortově moři je jako jediná zotavená až "nadlimitní", čítá 12 400–28 500 jedinců / původně, před intenzivním lovem, zde žilo asi 10 000–24 000 jedinců populace ve východní Kanadě a západním Grónsku se pohybuje v rozmezí 4000–11 000 jedinců / původně nejméně 25 000 velryb populace v Ochotském moři je velmi malá a patrně i geneticky a geograficky izolovaná, čítá pravděpodobně jen 200 kusů / původně 3000–20 000 jedinců populace ve východním Grónsku, Špicberkách a Barentsově moři je podobně jako v předchozím případě ohrožená, pouze okolo 100 velryb / původně 33 000–65 000 jedinců. Odkazy Reference Literatura Conroy, Erin. (2007-12-06) Netted whale hit by lance a century ago – Science – MSNBC.com. MSNBC. Retrieved on 2011-09-15. Externí odkazy 19th-century weapon found in whale » USA Today Velrybovití
Michael Henke is the name of the following people:
- Michael Henke (Politiker) (born 1943), German politician (Bündnis 90/Die Grünen)
- Michael Henke (Fußballspieler) (born 1957), German football player and coach
- Michael Henke (Logistiker) (born 1971), German logistics scientist and university professor
\section{Introduction} Scalar leptoquarks are bosonic particles beyond the Standard Model which couple to both quarks and leptons via a Yukawa-type interaction, and which were originally proposed in the context of Grand Unification. Over the recent years, the appearance of so-called flavour anomalies, namely discrepancies between theoretical expectations and experimental measurements for certain flavour observables such as the $R_{K^{(*)}}$ and $R_{D^{(*)}}$ ratios pertaining to lepton-flavour universality (see e.g.\ \cite{Lees:2012xj,Belle:2019rba,Aaij:2019wad}), has led to increased interest in leptoquark models. These are known to mitigate or even resolve the tensions. Until now, collider experiments such as the Large Hadron Collider (LHC) have not seen any signals of leptoquark production and the current exclusion limits require leptoquark masses to be larger than about 1.0--1.8 TeV, depending on the specifics of the model, see e.g.\ \cite{Aad:2020iuy,CMS:2020wzx}. \begin{figure}[t] \setlength\tabcolsep{0pt} \centering \begin{tabular}{cccc} \includegraphics[width=.25\textwidth]{figures/tree-QCD_1} & \includegraphics[width=.25\textwidth]{figures/tree-tchan_1} & \includegraphics[width=.25\textwidth]{figures/virtual-QCD_2} & \includegraphics[width=.25\textwidth]{figures/real-tchan_1}\\ (a) & (b) & (c) & (d) \end{tabular} \caption{Representative Feynman diagrams for scalar leptoquark pair production, with pure-QCD (a) and leptonic $t$-channel (b) contributions at tree level, and examples for virtual (c) and real (d) QCD corrections.} \label{th:feyndiags} \end{figure} Previous direct search studies of leptoquark pair production typically neglected contributions proportional to the leptoquark-lepton-quark Yukawa couplings in relation to the leading pure-QCD terms, cf.\ figure~\ref{th:feyndiags}~(a). Explanations to the flavour anomalies however require Yukawa couplings of $\mathcal{O}(1)$ and masses of $\mathcal{O}(\text{TeV})$. 
The inclusion of leptonic $t$-channel contributions of figure~\ref{th:feyndiags}~(b) as well as QCD corrections up to the next-to-leading order (NLO) in the strong coupling $\alpha_{\text{s}}$, cf.\ figure~\ref{th:feyndiags}~(c) and (d), and threshold resummation corrections could thus impact the predictions notably, as shown in our recent works \cite{Borschensky:2020hot,Borschensky:2021hbo}. In these proceedings, we discuss the most important results. \section{Theoretical setup} In our simplified framework, we extend the Standard Model (SM) by five species of scalar leptoquarks that couple to quarks and leptons, following standard notation~\cite{Buchmuller:1986zs}: $S_1$, $\tilde S_1$, $R_2$, $\tilde R_2$, and $S_3$. They lie in the $(\mathbf{3}, \mathbf{1})_{-1/3}$, $(\mathbf{3}, \mathbf{1})_{-4/3}$, $(\mathbf{3}, \mathbf{2})_{7/6}$, $(\mathbf{3}, \mathbf{2})_{1/6}$, and $(\mathbf{3}, \mathbf{3})_{-1/3}$ representations of the SM gauge group $SU(3)_C \times SU(2)_L \times U(1)_Y$, respectively, where the bold numbers denote the transformation properties with respect to the $SU(3)_C$ and $SU(2)_L$ gauge groups, and the subscript indicates the hypercharge. 
Then, the Lagrangian describing the leptoquark interactions is: \begin{equation} \begin{split} \mathcal{L}_{\mathrm{LQ}} = \mathcal{L}_{\mathrm{kin.}} &\ + \mathbf{y_1^{RR}} \bar u_R^c e_R S_1^\dag + \mathbf{y_1^{LL}} \left(\bar Q_L^c \cdot L_L\right) S_1^\dag + \mathbf{\tilde y_1^{RR}} \bar d_R^c e_R \tilde S_1^\dag + \mathbf{y_2^{LR}} \bar e_R Q_L R_2^\dag\\ &\ + \mathbf{y_2^{RL}} \bar u_R \big(L_L \cdot R_2\big) + \mathbf{\tilde y_2^{RL}} \bar d_R \big(L_L \cdot \tilde R_2\big) + \mathbf{y_3^{LL}} \big(\bar Q_L^c \cdot \sigma_k L_L\big) \big(S_3^k\big)^\dag + \mathrm{H.c.}, \end{split} \label{eq:lag} \end{equation} where $\mathcal{L}_{\mathrm{kin.}}$ collects all gauge-invariant kinetic and mass terms and the Yukawa couplings $\mathbf{y}/\mathbf{\tilde y}$ are $3\times 3$ matrices in flavour space, the first (second) index of any element $y_{ij}$ referring to the quark (lepton) generation. We generically denote the leptoquark mass by $m_{\text{LQ}}$. In terms of their component fields with a specific electric charge, the electroweak multiplets can be written, with the matrix representation of the triplet $S_3 = 1/\sqrt 2\,\sigma_k S_3^k$ and the Pauli matrices $\sigma_k$ for $k = 1, 2, 3$, as: \begin{equation} \setlength{\arraycolsep}{1pt} \renewcommand{\arraystretch}{1.3} \begin{split} S_1 = S_1^{(-1/3)},\ \tilde S_1 = \tilde S_1^{(-4/3)},\ R_2 = \begin{pmatrix}R_2^{(+5/3)}\\R_2^{(+2/3)}\end{pmatrix},\ \tilde R_2 = \begin{pmatrix}\tilde R_2^{(+2/3)}\\\tilde R_2^{(-1/3)}\end{pmatrix},\ S_3 = \begin{pmatrix}\frac{1}{\sqrt{2}}S_3^{(-1/3)} & S_3^{(+2/3)}\\S_3^{(-4/3)} & -\frac{1}{\sqrt{2}}S_3^{(-1/3)}\end{pmatrix}. 
\end{split} \end{equation} In our studies, we consider a simplified scenario in which the Standard Model is extended by either only the $S_1$ or the $R_2$ species, as well as three benchmark scenarios motivated by a simultaneous resolution of the $R_{K^{(*)}}$ and $R_{D^{(*)}}$ anomalies: \emph{(\textbf{a})} a solution involving only $R_2$, and two-leptoquark explanations with either \emph{(\textbf{b})} both $R_2$ and $S_3$ or \emph{(\textbf{c})} both $S_1$ and $S_3$ (see section 2.1.2 of \cite{Borschensky:2021hbo}). We calculate the fixed-order cross section including NLO-QCD corrections for the pair production of scalar leptoquarks at the LHC. Our results consistently include all contributions from figure~\ref{th:feyndiags}, i.e.\ the squares of pure-QCD and $t$-channel diagrams as well as their interference with terms of $\mathcal{O}(\alpha_{\text{s}}^2,\, y^4,\, y^2\alpha_{\text{s}})$ at Born level and $\mathcal{O}(\alpha_{\text{s}}^3,\, y^4 \alpha_{\text{s}},\, y^2 \alpha_{\text{s}}^2)$ for the QCD corrections, respectively. We consider the sum of all three classes of terms as our complete NLO-accurate prediction. The results are implemented in the {\textsc{MadGraph5\_aMC@NLO}}{}~\cite{Alwall:2014hca} and {\textsc{POWHEG-BOX}}{}~\cite{Nason:2004rx} frameworks. Moreover, we consider corrections from the emission of soft gluons in the threshold limit $\beta^2 = 1 - 4m_{\text{LQ}}^2/s \to 0$ with the partonic centre-of-mass energy $s$ by resumming logarithms $\alpha_{\text{s}}^n \ln^k \beta^2$ with $k \le 2n$ to all orders. 
We apply the Mellin-space formalism to write the resummed cross section, now depending on the Mellin-moment $N$, in the factorised form \cite{Sterman:1986aj}:\vspace{-.9mm} \begin{equation} \tilde \sigma^{\mathrm{res, NNLL}}_{ij\rightarrow {\text{LQ}\,\text{LQ}^*},I}(N) = \tilde\sigma^{(0)}_{ij\rightarrow {\text{LQ}\,\text{LQ}^*},I}(N)\,\tilde C_{ij\rightarrow {\text{LQ}\,\text{LQ}^*},I}(N)\,\Delta^{S}_I(N+1)\,\Delta_i(N+1)\,\Delta_j(N+1), \end{equation} with $I = \mathbf{1}\text{ (singlet)}, \mathbf{8}\text{ (octet)}$ indicating the colour representation of the final state. The Mellin-transformed Born cross section is $\tilde\sigma^{(0)}_{ij\rightarrow {\text{LQ}\,\text{LQ}^*},I}$, the hard-matching coefficients $\tilde C_{ij\rightarrow {\text{LQ}\,\text{LQ}^*},I}$ collect non-logarithmic higher-order terms, and the functions $\Delta^{S}_I\Delta_i\Delta_j$ contain the resummed soft-collinear logarithms. The result is then matched to the fixed-order calculation to avoid double-counting, and transformed back to physical momentum space via an inverse Mellin transform. Here, we consider threshold resummation up to next-to-next-to-leading-logarithmic (NNLL) accuracy. \section{Precision predictions} We denote our prediction including $t$-channel and resummation corrections as ``NLO w/ $t$-channel + NNLL''. The results are compared to pure-QCD predictions labeled ``NLO-QCD''. All calculations are carried out for a centre-of-mass energy of $\sqrt{S} = 13$ TeV, employing three different sets of parton distribution functions (PDFs), namely CT18~\cite{Hou:2019efy}, NNPDF3.1~\cite{NNPDF:2017mvq}, and MSHT20~\cite{Bailey:2020ooq} for the description of the proton's parton content. The central renormalisation and factorisation scales are set to $\mu_R = \mu_F = m_{\text{LQ}}$, and the scale uncertainty is evaluated through the 7-point method by varying the scales up and down by a factor of 2 relative to the central value. 
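The 7-point scale-variation prescription described above is a standard recipe and can be made concrete with a short sketch. The code below is illustrative only (it is not the paper's numerical setup): `toy_xsec` is an invented cross-section function with a logarithmic scale dependence standing in for the actual NLO result.

```python
import math

def seven_point_envelope(xsec, mu0):
    """Scale uncertainty from the 7-point method: vary (muR, muF) by
    factors of 2 around mu0, excluding the two opposite-direction
    combinations (2, 1/2) and (1/2, 2)."""
    factors = [(1, 1), (2, 2), (0.5, 0.5), (2, 1), (1, 2), (0.5, 1), (1, 0.5)]
    values = [xsec(fr * mu0, ff * mu0) for fr, ff in factors]
    central = values[0]
    return central, max(values) - central, central - min(values)

# Toy cross section in arbitrary units, with mild logarithmic dependence
# on both scales (purely illustrative, not a real leptoquark prediction).
def toy_xsec(mur, muf, m_lq=1500.0):
    return 1.0 + 0.1 * math.log(mur / m_lq) + 0.05 * math.log(muf / m_lq)

central, up, down = seven_point_envelope(toy_xsec, mu0=1500.0)
print(f"sigma = {central:.3f} +{up:.3f} -{down:.3f}")
```

With the central scale at $\mu_R = \mu_F = m_{\text{LQ}}$, the envelope here is driven by the correlated variations $(2\mu_0, 2\mu_0)$ and $(\mu_0/2, \mu_0/2)$.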
\begin{figure}[t] \centering \includegraphics[width=0.379\textwidth]{figures/LQ-wt_13TeV}\hspace{5mm}\includegraphics[width=0.379\textwidth]{figures/LQ-QCD_13TeV}\\ \includegraphics[width=0.379\textwidth]{figures/LQ-PDF_13TeV}\hspace{5mm}\includegraphics[width=0.379\textwidth]{figures/LQ-tot_13TeV} \caption{Impact of various contributions on the predictions associated with $S_1^{(-1/3)}$ and $R_2^{(+5/3)}$ pair production, shown as ratios. Top left: $t$-channel contributions. Top right: threshold resummation corrections (independent of the leptoquark model). Bottom left: choice of PDFs. Bottom right: combined effects.} \label{fig:lq:rel-importance} \end{figure} We begin with an analysis of the impact of the various contributions considered in this work. We assume only one leptoquark species to be present and discuss the pair production of the $S_1^{(-1/3)}$ and $R_2^{(+5/3)}$ eigenstates. In figure~\ref{fig:lq:rel-importance}, we present ratios to highlight the relative importance: NLO w/ $t$-channel over NLO-QCD to assess the impact of $t$-channel contributions (top left), NLO-QCD + NNLL over NLO-QCD to evaluate the size of the resummed corrections (top right), NLO w/ $t$-channel with NNLO PDFs over the same with NLO PDFs to analyse the PDF choice (bottom left), and NLO w/ $t$-channel + NNLL over NLO-QCD to show the combined effect of all contributions (bottom right). It can be seen that all pieces are of similar size, possibly increasing or reducing the predictions by a few tens of per cent. While the CT18 and MSHT20 predictions are generally similar with an often very different behaviour for NNPDF3.1 related to the treatment of the charm quark PDF, the effects depend strongly on the flavour structure of the leptoquark coupling. It is therefore important to consider the combination of all contributions as no generic behaviour arises. 
\begin{figure}[t] \centering \includegraphics[width=.333\textwidth]{figures/7_point_R2_benchmark_13TeV}\includegraphics[width=.333\textwidth]{figures/7_point_R2S3_benchmark_13TeV}\includegraphics[width=.333\textwidth]{figures/7_point_S1S3_benchmark_13TeV} \caption{Comparison of total cross section predictions at NLO-QCD (blue) and NLO w/ $t$-channel + NNLL (red), for three benchmark scenarios \emph{(\textbf{a})}, \emph{(\textbf{b})}, and \emph{(\textbf{c})} (see \cite{Borschensky:2021hbo} for further information). The dark-coloured error bars denote the scale uncertainties, and the light-coloured ones their combination with the PDF uncertainties.} \label{fig:benchmarksplots} \end{figure} Next, we discuss in figure~\ref{fig:benchmarksplots} predictions for total cross sections evaluated in the three phenomenologically motivated benchmark scenarios \emph{(\textbf{a})}, \emph{(\textbf{b})}, and \emph{(\textbf{c})}, including a full error analysis with scale and PDF uncertainties. We select two points in the allowed parameter space from each benchmark, and compare NLO-QCD with the NLO w/ $t$-channel + NNLL predictions, evaluated with NLO and NNLO PDF sets, respectively. A comparison of the dark-coloured bands between the two accuracies shows that the NNLL corrections greatly improve the scale behaviour. In contrast, with the exception of MSHT20 being the most recent of the PDF sets considered, the full uncertainties grow for NLO w/ $t$-channel + NNLL which can be attributed to the difference between NLO and NNLO PDFs. While for some points, the two accuracies agree within errors, in several cases, the new contributions lead to a notable enhancement outside of the error bands, as seen mainly for $a_1$ and $c_2$ in the leftmost and rightmost plots. Thus, NLO-QCD cannot reliably approximate the full pair production process, in particular for new generations of PDFs with smaller uncertainties. 
\section{Conclusions} We have calculated precision predictions for the pair production of scalar leptoquarks at the LHC. Included are QCD and leptonic $t$-channel contributions up to NLO-QCD and threshold resummation corrections up to NNLL accuracy. Our results constitute the most precise theoretical predictions for this class of processes to date. In light of the large Yukawa couplings and leptoquark masses required for a solution to the flavour anomalies, the corrections we have considered become particularly relevant. We have observed that all classes of contributions are equally important and can impact the predictions in often contrasting ways. The developed codes and numerical tables in the NNLL-fast format are available publicly from:\par {\centering \url{https://www.uni-muenster.de/Physik.TP/research/kulesza/leptoquarks.html} \par} \setlength{\bibsep}{0pt} \input{scalar-lqs.bbl} \end{document}
Boys transferring from Year 6 at primary school to Year 7 secondary school must do so through the London-wide co-ordinated admissions scheme. All applicants must submit the e-admissions online application to the boy's Local Authority by the 31st October 2021. In addition, applicants should complete the St Ignatius College Supplementary Information Form (SIF), which is available to download from our website. The SIF should be returned to the College by the 31st October 2021. Applications received after this date will be considered after the initial allocation process has been completed (see late applications below). If you do not complete both the e-admissions application and the SIF and return them by the closing date, the Governing Body may be unable to consider your application fully and it is very unlikely that your child will get a place at the College. Boys already in Year 7 at secondary school who wish to transfer to Year 7 at St Ignatius College do so by following the procedure set out below under in-year admissions.

IN YEAR ADMISSIONS - Years 7-11

Applications for In-Year admissions are made initially through Enfield Admissions and then directed to the College. If a place is available and there is no waiting list, the child will be admitted. If more applications are received than there are places available, then applications will be ranked by the governing body in accordance with the oversubscription criteria. The oversubscription criteria can be viewed by clicking on this link. If a place cannot be offered at this time, then you may ask us for the reasons and you will be informed of your right of appeal. You will be offered the opportunity of being placed on a waiting list. Names are removed from the list at the end of each academic year. When a place becomes available the governing body will re-rank the list and make an offer.

CHILDREN EDUCATED OUT OF CHRONOLOGICAL AGE GROUP

Application may be made for a child to be educated out of his/her age group, i.e. a 12 year old being admitted to Year 7, a 17 year old to Year 12, or any child admitted in-year to the year below their chronological age group. The applicant should write to the Chair of Governors at the time of application requesting that the child be admitted out of his/her chronological age group.

UNSUCCESSFUL APPLICATIONS

If your child is not offered a place at the College, his name will normally be placed on a waiting list for admission to the College (see 'Waiting List' below). If your child is not offered a place at the College you will be entitled to appeal to an independent panel. Details will be given in the letter of refusal. The decision of the panel is final.

WAITING LIST

St Ignatius College has a waiting list of boys who have not been offered a place but whose parents express the wish for them to take up a place should one become available. If a place does become available, all applicants are assessed in accordance with the entry criteria. Date of receipt of the application is not a factor. Boys who are on the waiting list will not be removed unless requested by their parent(s).

ADMISSION APPEALS (FOR SEPT 2022 YEAR 7 ADMISSIONS)

Information on the appeals process will be available soon.

LATE APPLICATIONS

Applications received after the closing date will be dealt with after the initial allocation process has been completed. If the College is oversubscribed it is very unlikely that late applicants will obtain a place. The College is committed to taking its fair share of children who are vulnerable and/or hard to place, as set out in the locally agreed protocols. Accordingly, outside the normal admissions round the governing body is empowered to give absolute priority to a child where admission is requested under a local protocol that has been agreed by both the Diocese and the governing body for the current school year. The governing body has this power even when admitting the child would mean exceeding the published admission number.
St Ignatius College is committed to fairness and transparency in the way it operates its admissions procedures. Parents are invited to contact the College to obtain help in applying, especially if they are disabled, have difficulties of language, or are not familiar with the admissions process.

APPLICATIONS TO SIXTH FORM

Students transferring from Year 11 do not need to re-apply, but must meet the requirements for the courses for which they have applied. Please see the Sixth Form entry requirements for the relevant year of entry. Applications from external students, including girls, are welcome and places will be offered up to maximum capacity. Applications should reach the school by the published closing date, and in the case of oversubscription the same criteria will apply as for Year 7. Further enquiries should be directed to the Head of Sixth Form. A Sixth Form Prospectus is available.
<div class="well">
  <form class="form-horizontal">
    <fieldset>
      <legend>Publish New Ad</legend>
      <div class="form-group">
        <label class="col-lg-2 control-label" for="title">Title:</label>
        <div class="col-lg-10">
          <input type="text" class="form-control" id="title" ng-model="adData.title" required/>
        </div>
      </div>
      <div class="form-group">
        <label class="col-lg-2 control-label" for="text">Text:</label>
        <div class="col-lg-10">
          <textarea class="form-control" rows="3" id="text" ng-model="adData.text" required></textarea>
        </div>
      </div>
      <div class="form-group">
        <label for="image">Image:</label>
        <span class="btn btn-block">
          <input class="form-control" type="file" id="image" onchange="angular.element(this).scope().fileSelected(this)"/>
        </span>
        <div id="adImagePrewiew" class="img-thumbnail">
          <span class="img-thumbnail">Image Preview</span>
        </div>
      </div>
      <div class="form-group">
        <label class="col-lg-2 control-label" for="category">Category:</label>
        <select class="form-control" id="category" ng-model="adData.categoryId">
          <option value="null">(None)</option>
          <option ng-repeat="c in categories" value="{{c.id}}">{{c.name}}</option>
        </select>
      </div>
      <div class="form-group">
        <label class="col-lg-2 control-label" for="town">Town:</label>
        <select class="form-control" id="town" ng-model="adData.townId">
          <option value="null">(None)</option>
          <option ng-repeat="t in towns" value="{{t.id}}">{{t.name}}</option>
        </select>
      </div>
      <div class="form-group">
        <button type="button" class="btn btn-primary" ng-click="publishAd(adData)">Publish</button>
        <button type="button" class="btn btn-primary" ng-click="redirectToUserAds()">Cancel</button>
      </div>
    </fieldset>
  </form>
</div>
Q: The sum of squares of the first $n$ natural numbers. My basic question is this: how to find the sum of squares of the first $n$ natural numbers? My thoughts have led me to an interesting theorem: Faulhaber's formula. It is known that $$1^k+2^k+\ldots+n^k=P_{k+1}(n)$$ is a polynomial in $n$ of degree $k+1$. For my problem: $$1^2+2^2+\ldots+n^2=a+bn+cn^2+dn^3.$$ Solving the resulting simple system of linear equations: $$\left\{ \begin{aligned} 0=&a\\ 1^2=&a+b\cdot1+c\cdot1^2+d\cdot1^3\\ 1^2+2^2=&a+b\cdot2+c\cdot2^2+d\cdot2^3\\ 1^2+2^2+3^2=&a+b\cdot3+c\cdot3^2+d\cdot3^3\\ \end{aligned} \right.$$ we get $a=0,\,b=\frac16,\,c=\frac12,\,d=\frac13$, i.e.$$P_3(n)=\frac{n(n+1)(2n+1)}{6}.$$ My further questions: 1) What are some ways to find the sum of the squares of the first $n$ natural numbers? 2) How to prove that the sum $1^k+2^k+\ldots+n^k$ is a polynomial in $n$ of degree $k+1$? A: An integer-valued polynomial is a polynomial with complex coefficients taking values in $\mathbb{Z}$ when all the variables take integer values. For example, $\frac{x^2+3x}{2}$ and $\frac{13x^3+5xy^2}{6}$ are integer-valued polynomials. Clearly, the set of integer-valued polynomials with variables $x_1,x_2,\ldots,x_n$ forms a subring of $\mathbb{Q}\left[x_1,x_2,\ldots,x_n\right]$. A result by Pólya states that the ring of integer-valued polynomials in one variable $x$ is a free abelian group with basis elements $\binom{x}{k}=\frac{x(x-1)(x-2)\cdots(x-k+1)}{k!}$ for $k=0,1,2,\ldots$. To answer your question, $x^k$ is an integer-valued polynomial. Therefore, $x^k=\sum_{r=0}^k \,a_r\binom{x}{r}$ for some $a_0,a_1,\ldots,a_k\in\mathbb{Z}$ (obviously, $a_k\neq 0$). Now, $\sum_{m=0}^n\,m^k=\sum_{m=0}^n\,\sum_{r=0}^k\,a_r\binom{m}{r}=\sum_{r=0}^k\,a_r\,\sum_{m=0}^n\,\binom{m}{r}$. By the Hockey-Stick Identity (see http://www.artofproblemsolving.com/wiki/index.php/Combinatorial_identity#Hockey-Stick_Identity), $\sum_{m=0}^n\,m^k=\sum_{r=0}^k\,a_r\,\binom{n+1}{r+1}$. 
Hence, $\sum_{m=0}^n\,m^k$ is a polynomial in $n$ of degree $k+1$, as the coefficient of $n^{k+1}$ is $\frac{a_k}{(k+1)!}\neq 0$. (In fact, $a_k=k!$, so we know that $\sum_{m=0}^n\,m^k=\frac{n^{k+1}}{k+1}+\mathcal{O}\left(n^k\right)$.) A: There is a simple combinatorial identity that can be helpful here: $$ \sum_{m=0}^n \binom{m+k}{k} = \binom{n+k+1}{k+1} $$ The right-hand side counts the number of subsets of $\{1,\ldots,n+k+1\}$ of size $k+1$. The left-hand side counts them by their maximal element $m+k+1$. Simple linear algebra shows that every polynomial of degree $d$ is a linear combination of $\binom{n+e}{e}$ for $e \leq d$ (in a unique way). In particular, we can represent $n^d$ in this way, and so the combinatorial identity implies that $\sum_{m=0}^n m^d$ is a polynomial of degree $d+1$.
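Neither answer requires code, but the closed form is easy to sanity-check numerically. A small JavaScript sketch (not part of the original answers) comparing the brute-force sum against $P_3(n)=\frac{n(n+1)(2n+1)}{6}$:

```javascript
// Brute-force sum of squares 1^2 + 2^2 + ... + n^2.
function sumOfSquares(n) {
  let total = 0;
  for (let m = 1; m <= n; m++) total += m * m;
  return total;
}

// Closed form n(n+1)(2n+1)/6.
function closedForm(n) {
  return (n * (n + 1) * (2 * n + 1)) / 6;
}

// Compare the two for a range of n; throw on any mismatch.
for (let n = 0; n <= 100; n++) {
  if (sumOfSquares(n) !== closedForm(n)) {
    throw new Error(`Mismatch at n = ${n}`);
  }
}
console.log("P_3(n) = n(n+1)(2n+1)/6 verified for n = 0..100");
```

All values here stay well below `Number.MAX_SAFE_INTEGER`, so the floating-point arithmetic is exact.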
The Puerto Rican parakeet (Psittacara maugei) is a species of bird (Aves) belonging to the order of parrots (Psittaciformes) and, within it, to the family Psittacidae. Taxonomy: The species was described by the French ornithologist Charles de Souancé in 1856. Some organizations place it in the genus Aratinga, under the name Aratinga chloroptera maugei. Distribution: It was native to Puerto Rico, on Mona Island. Conservation status: It is not listed on the Red List of the International Union for Conservation of Nature (IUCN). It became extinct by the mid-1800s.
\section*{Supplementary material} As discussed in the main text, the intrinsic limitation of the method is controlled by our ability to extract $\phi_g$. The procedure to estimate the expected uncertainty in that extraction is the following. \begin{enumerate} \item We fix input values of $\phi_M=\beta=0.384$, $\gamma=1.222$, and, for each decay channel $g=\rho^+_L\rho^-_L$, $\rho^0_L\rho^0_L$, $\pi^+\pi^-$, $\pi^0\pi^0$, we also fix input values of $\rho_g$ and $\epsilon_g$, which fix $\phi_g=\gamma-\epsilon_g$. We consider the three different benchmark cases in Table \ref{tab:benchmark}. \item Considering the decay channels $f=L,S$, the six coefficients $\mathcal I^{Sg}_{\rm d,m,od}$, $\mathcal I^{Lg}_{\rm d,m,od}$ are computed: they control the four time-dependent combinations $(f,g)$, $(g,f)$, for each $g$. \item For each $g$, we generate values of $t$, the events, distributed according to the four double-decay intensities. In order to incorporate the effect of experimental time resolution, each $t$ is randomly displaced following a normal distribution with zero mean and $\sigma=1$ ps. Additional experimental effects such as efficiencies are not included. Generation proceeds until a chosen number of events $N_g$ with $|t|\leq 5\,\tau_{B^0}$ has been obtained with the four $(f,g)$, $(g,f)$ combinations altogether. These $N_g$ events are binned. \item The procedure is repeated in order to obtain mean values and standard deviations in each bin: these constitute our simulated data, as illustrated in Figure \ref{fig:t_bins}, which corresponds to $g=\rho^+_L\rho^-_L$ (benchmark $B_{\rho\rho}$ in Table \ref{tab:benchmark}), $N_g=1000$ events and 20 bins in $[0;5\,\tau_{B^0}]$. The black dots with bars are the mean values and uncertainties, the red curves are the extracted double-decay intensities, and the blue curves correspond to the $\mathcal I^{fg}_{\rm d}$ term in each intensity. There are no significant differences if one considers, for example, 15 or 10 bins. 
\item From the simulated data, one can obtain $\mathcal I^{S\rho^+_L\rho^-_L}_{\rm d}=0.1170\pm 0.0138$, $\mathcal I^{S\rho^+_L\rho^-_L}_{\rm m}=0.1658\pm 0.0456$, and $\mathcal I^{S\rho^+_L\rho^-_L}_{\rm od}=0.000\pm 0.0198$, with $\mathcal I^{L\rho^+_L\rho^-_L}_{\rm d,m,od}$ given by eq.~\eqref{eq:normalizationC}, and similarly for decay channels $\rho^0_L\rho^0_L$, $\pi^+\pi^-$, $\pi^0\pi^0$ according to the different benchmarks $B_{\rho\rho}$, $B_{\pi\pi}^\pm$ in Table \ref{tab:benchmark}. \item Finally, we extract $\rho_g$, $\phi_g$, $\phi_M$, with a simple fit to the $\mathcal I^{Sg}_{\rm d,m,od}$. \end{enumerate} Concerning the number of events, with the Belle-II design luminosity \cite{Belle-II:2010dht} and the branching ratios $\text{BR}(g)$, $\text{BR}(f)$, we assume that it would be possible to collect 1000 events for $g=\rho^+_L\rho^-_L$, 200 events for $g=\pi^+\pi^-$, and 50 events for both the $g=\rho^0_L\rho^0_L$ and $g=\pi^0\pi^0$ channels. We show the results of our analyses for two scenarios. \begin{itemize} \item Scenario A assumes 1000 $\rho^+_L\rho^-_L$ events of type $B_{\rho\rho}$, 50 $\rho^0_L\rho^0_L$ events of type $B_{\rho\rho}$, 200 $\pi^+\pi^-$ events of type $B_{\pi\pi}^-$ and 50 $\pi^0\pi^0$ events of type $B_{\pi\pi}^-$. \item Scenario B assumes 500 $\rho^+_L\rho^-_L$ events of type $B_{\rho\rho}$ and 100 $\pi^+\pi^-$ events of type $B_{\pi\pi}^+$. \end{itemize} \begin{figure}[ht] \centering \includegraphics[width=0.475\columnwidth]{2D_case1_Sp_1_1_1_cropped.pdf}\ \includegraphics[width=0.475\columnwidth]{2D_case1_Sm_1_1_1_cropped.pdf}\\ \includegraphics[width=0.475\columnwidth]{2D_case1_Lp_1_1_1_cropped.pdf}\ \includegraphics[width=0.475\columnwidth]{2D_case1_Lm_1_1_1_cropped.pdf} \caption{Simulated data, 1000 events, benchmark $B_{\rho\rho}$. 
Black dots with bars indicate mean values and associated uncertainties; the red curves are the extracted double-decay intensities, while the blue curves correspond to the $\mathcal I^{fg}_{\rm d}$ term in each intensity.} \label{fig:t_bins} \end{figure} \end{document}
Q: Facebook CAPI PageView Event Missing Some Deduplication Parameters

I have two problems with Facebook CAPI. I'm using the Facebook PHP Business SDK.

1. "PageView Event Missing Some Deduplication Parameters" is shown for every event in Events Manager (PageView, ViewContent, Lead) and I don't know why. The deduplication keys are not shown for CAPI events in View Details, but they do appear in Test mode. Example:

* Event-Deduplication-View-Details
* Test-Mode

Server code:

if (!isset($_COOKIE['USERID']) || empty($_COOKIE['USERID'])) {
    $externalId = FacebookCapiSettings::getExternalId();
    setcookie('USERID', $externalId, time() + 14*24*3600);
    $_COOKIE['USERID'] = $externalId;
}

$access_token = FacebookCapiSettings::getAccessToken();
$pixel_id = FacebookCapiSettings::getPixelId();

$api = Api::init(null, null, $access_token);
$api->setLogger(new CurlLogger());

$user_data = (new UserData())
    // It is recommended to send Client IP and User Agent for Conversions API Events.
    ->setClientIpAddress($_SERVER['REMOTE_ADDR'])
    ->setClientUserAgent($_SERVER['HTTP_USER_AGENT'])
    ->setFbc($_COOKIE['_fbc'])
    ->setExternalId($_COOKIE['USERID'])
    ->setFbp($_COOKIE['_fbp']);

$eventSourceUrl = (isset($_SERVER['HTTPS']) && $_SERVER['HTTPS'] === 'on' ? "https" : "http")
    . "://$_SERVER[HTTP_HOST]$_SERVER[REQUEST_URI]";

$event = (new Event())
    ->setEventName('PageView')
    ->setEventTime(time())
    ->setEventSourceUrl($eventSourceUrl)
    ->setUserData($user_data)
    ->setEventId(FacebookCapiPageView::getEventId())
    ->setActionSource(ActionSource::WEBSITE);

$events = array();
array_push($events, $event);

$request = (new EventRequest($pixel_id))
    ->setTestEventCode(FacebookCapiSettings::getTestKey())
    ->setEvents($events);

$response = $request->execute();

The deduplication parameters are sent as:

* Event ID as setEventId(FacebookCapiPageView::getEventId())
* External ID as setExternalId($_COOKIE['USERID'])
* FBP as setFbp($_COOKIE['_fbp'])

2. The server is sending an invalid External ID parameter. This warning is shown only for PageView, under Event Matching.

Browser pixel code:

<!-- Facebook Pixel -->
<script>
!function(f,b,e,v,n,t,s)
{if(f.fbq)return;n=f.fbq=function(){n.callMethod?
n.callMethod.apply(n,arguments):n.queue.push(arguments)};
if(!f._fbq)f._fbq=n;n.push=n;n.loaded=!0;n.version='2.0';
n.queue=[];t=b.createElement(e);t.async=!0;
t.src=v;s=b.getElementsByTagName(e)[0];
s.parentNode.insertBefore(t,s)}(window, document,'script',
'https://connect.facebook.net/en_US/fbevents.js');
fbq(
  'init',
  '<?php echo FacebookCapiSettings::getPixelId(); ?>',
  {'external_id': '<?php echo $_COOKIE['USERID']; ?>'<?php echo $userDataString; ?>}
);
fbq(
  'track',
  'PageView',
  {},
  {eventID: '<?php echo FacebookCapiPageView::getEventId(); ?>'}
);
</script>
<noscript><img height="1" width="1" style="display:none" src="https://www.facebook.com/tr?id=<?php echo FacebookCapiSettings::getPixelId(); ?>&ev=PageView&noscript=1" /></noscript>
<!--/ Facebook Pixel -->

What did I do wrong, and how can I fix these errors in Facebook Events Manager?
# Tutorial 2. Linear Systems

There’s definitely, definitely, definitely no logic
To human behaviour

— “Human Behaviour” by Björk

I’m as excited as you are about making a meteor fly, so let’s take the complexity of our systems up a notch. We’ll set multiple objects in motion in this section and see what we can learn about their behavior.

# 2.1 Systems of Equations

## Testing Solutions

In the last chapter, you applied linear equations as a mathematical model for the path of a meteor. As the meteor’s $$x$$-coordinate increased, its $$y$$-coordinate increased, and we related the two using an equation like the one below.

$$y = 0.5x + 50$$

If you substituted $$10$$ for $$x$$, you could evaluate the right-hand side to find, or solve for, $$y$$.

$$y = 0.5 \cdot 10 + 50$$

$$y = 55$$

You could use the same approach to find every coordinate pair along the meteor’s path. Substituting the coordinate pair $$(10,55)$$ into the equation $$y=0.5x+50$$ makes both sides equal, so we call it a solution to the equation.

How about the coordinate pair $$(10,60)$$?

$$60=0.5 \cdot 10 + 50$$

$$60 \ne 55$$

No luck. The symbol $$\ne$$ means “does not equal.” $$(10,60)$$ is not a solution to this equation.

OK, so we can map out all the points along an object’s path. But what if there were multiple objects moving along multiple paths?

## Solving Equations

In the film Gravity, a Space Shuttle is destroyed by a barrage of space debris, forcing the survivors to find another way home to Earth before the debris completes another orbit and threatens them again. Let’s try to help out the astronauts by constructing a simulation of the Shuttle-debris system.

A Space Shuttle and debris in Earth’s orbit.

$$\mapsto$$ Outer space is an empty, infinite, two-dimensional surface. The Space Shuttle is a point moving along the straight path described by the equation $$y=0.5x+50$$. The debris is also a point moving along a straight path, this one described by the equation $$y= -0.5x+150$$.

The coordinates of both objects are ordered pairs of floating point numbers. The Space Shuttle’s initial position is $$(0,50)$$. The debris’ initial position is $$(0,150)$$.

First, paint the sky. Next, select a pencil color & weight. Then, update the position of each object. Finally, draw a point at each object’s position.

Example 2.1 A collision

At the beginning of the film, Mission Control warns the astronauts that danger is coming. In real life, the United States Space Surveillance Network tracks objects in Earth’s orbit using a variety of sensing equipment. The team records the types of objects, when they were launched, their orbits, and their sizes. NASA uses this data to construct models that predict collisions like the one in Gravity.

While we don’t have a giant network of sensors and computers, we do have all the tools we need to figure out where the paths of our simulated Shuttle and debris cross. Let’s combine the equations describing each object’s path into a system of equations.

$$y = 0.5x + 50$$

$$y = -0.5x + 150$$

A system of equations is a set of related equations. There are infinitely many possible $$(x,y)$$ coordinate pairs, but only one solves both of these equations. For example, the point $$(50,75)$$ solves the equation for the Shuttle’s path.

$$75 = 0.5 \cdot 50 + 50$$

$$75 = 25 + 50$$

$$75 = 75$$

But does it also solve the equation for the debris’ path?

$$75 = -0.5 \cdot 50 + 150$$

$$75 = -25 + 150$$

$$75 \ne 125$$

No luck. We could try guessing and checking other coordinate pairs, but we might be at it for a while. There are many good techniques for solving systems of linear equations and we need a few additional ideas to apply them. Let’s start with a simpler case.

Math that uses letters as placeholders for numbers is known as algebra. Those placeholders, or variables, can show up anywhere we establish a relationship. Take the equation below as an example.

$$2 + 3 = 5$$

What if I wrote the following equation instead?

$$x + 3 = 5$$

We’ll use this example as a starting point for finding “unknown” values like $$x$$, also called solving an equation. Given $$x+3=5$$, I want to come up with an algorithm that solves for $$x$$. Let’s simplify again.

We know the following equation relates $$2$$, $$3$$, and $$5$$.

$$2 + 3 = 5$$

Figure 2.1 Three hops to the right

You could express the same relationship another way by rearranging the equation a bit.

$$2 = 5 - 3$$

Figure 2.2 Three hops to the left

At this point you might think, “OK, the numbers $$2$$ and $$5$$ are $$3$$ units apart. But what does that have to do with finding unknowns?” Everything, as it turns out.

We can think of the original equation $$2+3=5$$ as running “forward” from the starting point $$2$$. In this view, the second equation $$2=5-3$$ runs “backward”. Running an operation backward is known as inverting the operation. And something interesting happens when we apply an operation and its inverse together.

$$2 + 3 - 3 = 2$$

Figure 2.3 Hopping back and forth

We’re right back where we started! Operations and their inverses undo each other; it’s like nothing happened at all. This fact is key to solving equations.

In math, the $$=$$ symbol is an ironclad statement of equality. The left-hand side must equal the right-hand side, now and always. If you make a change to the left, you’d better do the same to the right. Let’s take the original equation one more time and invert the $$+3$$ operation.

$$2 + 3 - 3 = 5 - 3$$

Or simplified:

$$2 = 2$$

Now, let’s apply identical reasoning to the equation with the variable $$x$$.

$$x + 3 = 5$$

$$x + 3 - 3 = 5 - 3$$

$$x = 2$$

OK, let’s see that process one more time with a different example.

$$x - 2 = 10$$

$$x - 2 + 2 = 10 + 2$$

$$x = 12$$

Exercise 2.1  Given the equation $$x-5=10$$, solve for $$x$$. Then, describe the algorithm you used to solve for $$x$$ in plain English.

Operator Precedence

Inverting addition and subtraction seems to work just fine, but what happens when you include other operations? Let’s take the path of our Shuttle.

$$y = 0.5x + 50$$

How would you determine the Shuttle’s $$x$$-coordinate when its $$y$$-coordinate is $$150$$?

$$150 = 0.5x + 50$$

It seems like we could invert this equation to solve for $$x$$, but I’m not certain of how to proceed now that multiplication is in the mix. Figuring this out requires a brief interlude to discuss the order, or precedence, of mathematical operations.

If you come across an expression like $$2 \cdot 3 + 4$$, the mathematical community has agreed that you should multiply before adding. You can compute the value produced in this example as follows.

$$2 \cdot 3 + 4$$

$$6 + 4$$

$$10$$

We’ve seen the computation run forward, so let’s go backward. The last operation applied was $$+4$$, so let’s invert that first.

$$10 = 2 \cdot 3 + 4$$

$$10 - 4 = 2 \cdot 3 + 4 - 4$$

$$6 = 2 \cdot 3$$

You can view the multiplication as $$2$$ multiplying $$3$$ or as $$3$$ multiplying $$2$$. I’ll go with the former and undo multiplication by $$2$$.

$$\frac{6}{2} = \frac{2 \cdot 3}{2}$$

$$3 = 3$$

How about we substitute one of the operands for a variable and solve for it?

$$10 = 2x + 4$$

$$10 - 4 = 2x + 4 - 4$$

$$6 = 2x$$

$$\frac{6}{2} = \frac{2x}{2}$$

$$3 = x$$

It’s been quite a journey, but I think we’re ready to plot a course for our Shuttle.

$$150 = 0.5x + 50$$

$$150 - 50 = 0.5x + 50 - 50$$

$$100 = 0.5x$$

$$\frac{100}{0.5} = \frac{0.5x}{0.5}$$

$$200 = x$$

Exercise 2.2  Find the $$x$$-coordinate of the Shuttle when its $$y$$-coordinate is $$180$$. Then, describe the algorithm you used to solve for $$x$$ in plain English.

The order of mathematical operations is bundled up nice and neat in the acronym PEMDAS.

Parentheses
Exponents
Multiplication
Division
Addition
Subtraction

Let’s focus on MDAS for now as they are key to linear relationships. Multiplication and division have the same precedence. Consider the example below.

$$3 \cdot 10 \div 5$$

You could compute $$3 \cdot 10 = 30$$, then divide $$30 \div 5$$ to produce $$6$$. Or you could start by dividing $$10 \div 5 = 2$$ before multiplying $$3 \cdot 2$$ to produce $$6$$. There are multiple pathways to the correct answer!

There is a similar story for addition and subtraction.

$$3 + 10 - 5$$

You could compute $$3 + 10 = 13$$, then subtract $$13 - 5$$ to produce $$8$$. Or you could start by subtracting $$10 - 5 = 5$$ and add $$3 + 5 = 8$$. Once again, multiple pathways!

Exercise 2.3  Evaluate the expression $$10 \cdot 5 \div 2 + 5$$.

Exercise 2.4  Evaluate the expression $$10 \div 5 - 2 \cdot 5$$.

## Solving Systems

We’re on a tight schedule to be of any help to NASA. Let’s figure out the coordinates $$(x,y)$$ where the paths of the Shuttle and debris intersect. Recall the system of equations describing the scenario.

$$y = 0.5x + 50$$

$$y = -0.5x + 150$$

There are infinitely many points along each object’s path, and the variables $$x$$ and $$y$$ are placeholders for all of them. The Shuttle and debris were moving before they collided, and they continued moving afterward. Those variables $$x$$ and $$y$$ that solved one equation at a time must now solve both equations simultaneously.

Substitution

Algebraic techniques help us solve systems of equations because we really do mean that the $$y$$ in the first equation is the very same $$y$$ in the second equation. Consider the following simplified case for a moment.

$$a = 5$$

$$b = 5$$

$$a = b$$

The same transitive property of equality applies to systems of equations.

$$y = 0.5x + 50$$

$$y = -0.5x + 150$$

$$0.5x + 50 = -0.5x + 150$$

Now we have one equation with one unknown. Let’s solve it for $$x$$!

$$0.5x + 50 = -0.5x + 150$$

$$0.5x + 0.5x + 50 = -0.5x + 0.5x + 150$$

$$x + 50 = 150$$

$$x + 50 - 50 = 150 - 50$$

$$x = 100$$

And now that we’ve found $$x$$, we can substitute the value back into one of the original equations to find $$y$$.

$$y = 0.5 \cdot 100 + 50$$

$$y = 50 + 50$$

$$y = 100$$

We also could have gone with the other equation.

$$y = -0.5 \cdot 100 + 150$$

$$y = -50 + 150$$

$$y = 100$$

Example 2.2 Predicting collision

As you may expect, there’s more than one way to solve this problem. Let’s examine a different algorithm that combines equations to find solutions.

Elimination and Back Substitution

First, I’m going to determine the value of $$y$$ by eliminating the variable $$x$$ from an equation. Unlike the substitution algorithm, which rewrote one variable in terms of another, I’ll eliminate $$x$$ by adding the top equation to the bottom equation. As usual, let’s motivate this idea by considering a simpler case. Take the equations below.

$$20 = 4 \cdot 5$$

$$2x = x + x$$

I could add the two left-hand sides to produce $$2x + 20$$. On the right-hand side, I would have $$x + x + 4 \cdot 5$$. But do these combined expressions equal each other? We’ll follow this one step-by-step.

$$2x + 20 = x + x + 4 \cdot 5$$

$$2x + 20 = x + x + 20$$

$$2x + 20 = 2x + 20$$

Those $$=$$ symbols mean that the expressions on the left- and right-hand sides are equal. When we add the same thing to each side of an equation, we maintain equality. You may hear this called the additive property of equality when you’re out at parties. Now, let’s use this property to eliminate the variable $$x$$ and solve for $$y$$.

$$y = 0.5x + 50$$

$$y = -0.5x + 150$$

$$y + y = -0.5x + 0.5x + 150 + 50$$

$$2y = 200$$

$$\frac{2y}{2} = \frac{200}{2}$$

$$y = 100$$

The $$y$$-value we just found resulted from combining information stored in separate equations. This is the $$y$$-value both equations share. But what about $$x$$? Well, we know that $$y = 100$$, so let’s substitute that part of our solution back into the system.

$$100 = 0.5x + 50$$

$$100 - 50 = 0.5x + 50 - 50$$

$$50 = 0.5x$$

$$\frac{50}{0.5} = \frac{0.5x}{0.5}$$

$$100 = x$$

You’ve now seen two of many possible algorithms for solving systems of linear equations. Practice with them a bit before we build up the logical foundations you need to explore systems of linear inequalities.

Exercise 2.5  Solve the system of equations below using Substitution. Then, solve it using Elimination and Back Substitution. Describe how each algorithm works in plain English.

$$y = 0.25x + 100$$

$$y = -0.75x + 300$$

# 2.2 Logic

In the novel 1984, the main character is brainwashed into accepting that $$2 + 2 = 5$$ is true. The scene depicts the ultimate flex of power by an oppressive government.
Let\u2019s lay some logical foundations to ensure we always maintain our grasp on the truth.\n\n## Boolean Algebra\n\nYou just tested a few different coordinate pairs $$(x,y)$$ to determine whether or not they solved an equation. The test could only have gone one of two ways: success or failure, yes or no, true or false. There is an entire branch of algebra called Boolean algebra dedicated to studying these two truth values, which we usually write as $$1$$ (true) and $$0$$ (false). There are not infinitely many truth values like there are numbers; there are only $$1$$ and $$0$$.\n\nA truth value can correspond to a situation in the real world. For example, I could claim, \u201cThe sun is up\u201d. This claim happens to be false in my neck of the woods as I write this sentence. I can express this idea using the variable $$s$$ to represent sunniness.\n\n$$s = 0$$\n\nEven though the sun has already set on this particular day, the sky above me is still momentarily deep blue. I can claim \u201cThe sky is blue\u201d and express this blueness, $$b$$, matter-of-factly.\n\n$$b = 1$$\n\nOK, we have variables with assigned values. But what can we actually do with them?\n\nLet\u2019s begin by combining $$s$$ and $$b$$ using our first logical operation: conjunction, also known as AND. The expression $$s \\land b$$ means \u201c$$s$$ is true AND $$b$$ is true\u201d. The sky above my front porch is blue, but the sun is not up, so the combined statement is false. We could write this concisely as $$s \\land b = 0$$.\n\nThere is a special set of diagrams called logic gates that depict the results of applying logical operations. Each logic gate has a distinctive shape. Below is the diagram for the AND logic gate.\n\nFigure 2.4 The AND logic gate\n\nOne of the variables $$s$$ and $$b$$ is true, and we can test for such a condition using the disjunction operation, also known as OR. A disjunction is true if at least one of its operands is true. 
I could claim \u201cThe sun is up OR the sky is blue\u201d and that would be true because $$b = 1$$. We could express idea this as $$s \\lor b = 1$$.\n\nLike AND, OR also has its own logic gate.\n\nFigure 2.5 The OR logic gate\n\nThe following truth table organizes all of the facts we\u2019ve established about the view of the sky from my front porch.\n\nTable 2.1 A truth table\u2019s view of my piece of sky\n\n$$s$$ \u00a0 \u00a0 \u00a0 \u00a0 $$b$$ \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 $$s \\land b$$ \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 $$s \\lor b$$\n$$0$$ $$1$$ $$0$$ $$1$$\n\nThe final logical operation we\u2019ll discuss is negation, also known as NOT. The NOT operation simply flips a truth value from $$1$$ to $$0$$ or from $$0$$ to $$1$$. Let\u2019s take the variable $$s$$ and negate it using the NOT operator, $$\\lnot$$.\n\n$$s = 0$$\n\n$$\\lnot s = 1$$\n\nNOT is a unary operation, meaning we only apply it to a single truth value at a time. AND and OR are both binary operations, meaning we have to provide a pair of operands.\n\nNot to be left out, NOT also has its own logic gate.\n\nFigure 2.6 The NOT logic gate\n\nAnd that\u2019s all you need to get started with logic! You can compose logical operations just like you do arithmetic operations. For example, let\u2019s figure out how to cross the street safely using logic. I\u2019ll define the variables $$l$$ and $$r$$ to represent vehicle traffic from the left and traffic from the right, respectively.\n\nIf you were trying to cross a busy street, you would want to avoid vehicles. In logical terms, you would check to see that both $$l = 0$$ and $$r = 0$$. \u201cNo vehicles on the left? No vehicles on the right? 
OK, let\u2019s go!\u201d This condition is easily expressed by combining operations $$\\lnot l \\land \\lnot r = 1$$.\n\nExercise 2.6 \u2002 Complete the following truth table for two boolean variables $$x$$ and $$y$$.\n\n$$x$$ \u00a0 \u00a0 \u00a0 \u00a0 $$y$$ \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 $$x \\land y$$ \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 $$x \\lor y$$\n$$0$$ $$0$$\n$$0$$ $$1$$\n$$1$$ $$0$$\n$$1$$ $$1$$\n\nExercise 2.7 \u2002 Compute the value of the expression $$\\lnot (1 \\land 1)$$.\n\nExercise 2.8 \u2002 Compute the value of the expression $$(1 \\land 0) \\lor (1 \\lor 0)$$.\n\nExercise 2.9 \u2002 Rewrite the following logic circuit as an equivalent logical expression.\n\nExercise 2.10 \u2002 Complete the following truth table for two boolean variables $$x$$ and $$y$$. What do you notice?\n\n$$x$$ \u00a0 \u00a0 \u00a0 \u00a0 $$y$$ \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 $$\\lnot (x \\land y)$$ \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 $$\\lnot x \\lor \\lnot y$$ \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 $$\\lnot (x \\lor y)$$ \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 \u00a0 $$\\lnot x \\land \\lnot y$$\n$$0$$ $$0$$\n$$0$$ $$1$$\n$$1$$ $$0$$\n$$1$$ $$1$$\n\n## Branching\n\nLogic is a big deal for computation, from the way the machines are physically built to the way we program them. Conditional statements let us test conditions and make decisions while a program executes. For example, let\u2019s revisit the collision scene from Gravity.\n\nExample 2.3 Collision revisited\n\nThe simulation works fine, but it doesn\u2019t really convey the full drama of the situation. Let\u2019s revise our system a bit to account for the additional debris generated upon collision.\n\nA Space Shuttle and debris in Earth\u2019s orbit.\n\n$$\\mapsto$$ \u2002 Outer space is an empty, infinite, two-dimensional surface. 
The Space Shuttle is a point moving along the straight path described by the equation $$y = 0.5x + 50$$. The debris is also a point moving along a straight path, this one described by the equation $$y = -0.5x + 150$$. After colliding, each object leaves a trail of smaller debris along its path.\n\nThe coordinates of both objects are ordered pairs of floating point numbers. The Space Shuttle\u2019s initial position is $$(0,50)$$. The debris\u2019 initial position is $$(0,150)$$.\n\nFirst, test for collision and set the alpha value for the sky. Then, paint the sky. Next, update the position of each object. After that, select a pencil color & weight. Finally, draw a point at each object\u2019s position.\n\nLucky for us, you already solved this system and know that the two objects collide at $$(100,100)$$. As the sketch continues running, the Shuttle and debris continue moving to the right. We can test for this using an if statement.\n\nif (condition) {\n\/\/ things to do if condition is true\n}\n\n\nIf statements present a logical crossroads in a program. The first line of the if statement is called a header. The header begins with if, defines the condition to test in between a pair of parentheses (), then opens a pair of {}. If the condition is true, JavaScript will execute the set of statements between the {}, also known as the if statement\u2019s body.\n\nif (condition) {\nthingOne() \/\/ this is part of the body\nthingTwo() \/\/ this too!\n}\n\nthingThree() \/\/ this is not\n\n\nIn the Gravity example, we\u2019re testing whether or not the value of the variable x is greater than 100. 
JavaScript has the following relational operators that work as you would expect for numbers.\n\nTable 2.2 JavaScript\u2019s relational operators\n\n$$Math$$ \u00a0 \u00a0 \u00a0 \u00a0 Code \u00a0 \u00a0 English\n$$=$$ === Equal to\n$$\\ne$$ !== Not equal to\n$$>$$ > Greater than\n$$<$$ < Less than\n$$\\ge$$ >= Greater than or equal to\n$$\\le$$ <= Less than or equal to\n\nApplying a relational operator produces a boolean value. For example, the expression 2 + 2 === 5 produces the boolean value false because, well, math. 2 + 2 === 4, on the other hand, produces the value true. These sorts of logical expressions are called boolean expressions.\n\nif (x > 100) {\n\/\/ things to do if x > 100\n}\n\n\nExample 2.4 A more impactful collision\n\nExercise 2.11 \u2002 Duplicate your original collision sketch and add a few effects. How should the visual appearance of the Shuttle and debris change after impact?\n\nYou can also test multiple conditions together. Thinking back to the meteor sketch, how about we make the meteor glow red as it passes over the middle half of the canvas? In other words, when $$x \\ge 50$$ AND $$x \\le 150$$.\n\nif (x >= 50 && x <= 150) {\nstroke('tomato')\n}\n\n\n&& is one of JavaScript\u2019s boolean operators along with || and !. These JavaScript operators function identically to the logical operators you just studied.\n\nRecall our earlier $2 + 2$ example from the novel 1984. Notice you can test for the same condition using different boolean operators.\n\nif (2 + 2 !== 4) {\nresist()\n}\n\n\nor\n\nif (!(2 + 2 === 4)) {\nresist()\n}\n\n\nExample 2.5 A colorful meteor\n\nReading through this code, it isn\u2019t immediately clear that ghostwhite is meant to be the default stroke color. You could make this more explicit by adding an else clause to your conditional statement. 
Here is an example of the syntax.\n\nif (condition) {\n\/\/ things to do if condition is true\n} else {\n\/\/ things to do if condition is false\n}\n\n\nThe conditional statement now has two distinct pathways, or branches, that may be followed depending on the truth value of the condition.\n\nFigure 2.7 Flow of execution with two branches\n\nYou could reorganize your sketch to reflect this structure like so.\n\nAt this point, you might say, \u201cBranches seem useful, but what if I want more than two in my program?\u201d Say no more! You can chain conditionals together using an else if statement (a combination of \u201celse\u201d and \u201cif\u201d). In a chained conditional, conditions are tested in the order they are written, and only the first branch whose condition is true will execute.\n\nif (condition1) {\nthingOne()\n} else if (condition2) {\nthingTwo()\n} else {\nthingThree()\n}\n\n\nExample 2.6 A multicolor meteor\n\nExercise 2.12 \u2002 Change the previous example so that the following conditions are tested in this order. Can you explain what happened?\n\nif (x >= 50) {\nstroke('tomato')\n} else if (x >= 150) {\nstroke('crimson')\n} else {\nstroke('ghostwhite')\n}\n\n\nExercise 2.13 \u2002 Duplicate one of your previous sketches and modify its behavior using conditional statements. Use at least one other relational and one other boolean operator.\n\n# 2.3 Iteration\n\nThe logical building blocks you assembled in the last two sections let you create branches and make decisions in your programs. In this section, we\u2019ll use many of the same building blocks to form another type of control flow: repetition.\n\n## while statements\n\nIn Exercise 1.18, you drew a constellation by calling the point() function repeatedly with different arguments. Let\u2019s revisit this exercise using the constellation Orion as a starting point. 
We'll focus on Orion's Belt, which consists of the stars Alnitak, Alnilam, and Mintaka.

Orion's Belt

$$\mapsto$$  Outer space is an empty, infinite, two-dimensional surface. Stars are points of light within this plane.

Star coordinates are ordered pairs of floating point numbers.

Table 2.3 The "Orion's Belt" data set

| Star Name | $$x$$ | $$y$$ |
| --- | --- | --- |
| Alnitak | 25 | 100 |
| Alnilam | 100 | 100 |
| Mintaka | 175 | 100 |

First, paint the night sky. Then, select a pencil color & weight. Finally, draw a point at each star's position.

Example 2.7 Orion's Belt

The stars Alnitak and Mintaka are actually star systems; each system is made up of multiple stars orbiting one another. I'll use this fact as my creative license to adjust Orion's Belt a little. For starters, how about we draw Orion's Belt with all of the major stars in each system? Let's keep things simple by assuming all of the stars are the same size and are aligned horizontally in the sky.

Table 2.4 Expanded "Orion's Belt" data set

| Star Name | $$x$$ | $$y$$ |
| --- | --- | --- |
| Alnitak Aa | 25 | 100 |
| Alnitak Ab | 50 | 100 |
| Alnitak B | 75 | 100 |
| Alnilam | 100 | 100 |
| δ Ori Aa1 | 125 | 100 |
| δ Ori Aa2 | 150 | 100 |
| δ Ori Ab | 175 | 100 |

The visual result looks good, but the code worries me a little. Notice that I wrote seven nearly identical copies of the same statement to draw the stars. This approach works fine to get started, but imagine writing a program that needs to repeat an instruction dozens of times. Or millions of times. Writing each variant by hand would be tedious and error prone.
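The repetition I'm worried about looks something like this. (A sketch only: in the real p5 sketch each line is a call to p5's point() drawing function; here a small stand-in records the calls so the pattern is easy to see.)

```javascript
// Stand-in for p5's point(): record each call instead of drawing.
const calls = []
function point(x, y) {
  calls.push([x, y])
}

// One nearly identical statement per star in Table 2.4.
point(25, 100)   // Alnitak Aa
point(50, 100)   // Alnitak Ab
point(75, 100)   // Alnitak B
point(100, 100)  // Alnilam
point(125, 100)  // δ Ori Aa1
point(150, 100)  // δ Ori Aa2
point(175, 100)  // δ Ori Ab

console.log(calls.length) // seven statements that differ only in their x-coordinate
```

Seven copies is manageable; seven hundred would not be.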
JavaScript's while statement makes repetition, or iteration, simple.

```javascript
while (condition) {
  // this is the loop body
  // statements in here repeat while condition is true
  thingOne()
  thingTwo()
}
```

while statements are structured similarly to if statements; they have a header with a condition and a body with code that might be executed. Each statement in the body executes in order, from top to bottom, repeatedly, until the condition in the header is false. Iterative control structures like this are commonly known as loops.

Figure 2.8 Flow of execution in a while loop

Let's use a while loop to simplify our sketch of Orion's Belt. Reviewing the previous example, the only difference between the stars is their $$x$$-coordinates. We know where our $$x$$-coordinates start, where they stop, and how much space is between them. This is all the information we need to simplify our work by using a loop.

Example 2.9 Orion's Belt with iteration

Not too shabby! Given an initial value for x, the while statement draws a point, then increments x by 25, and repeats this process until x is greater than 175.

We could change one line of code from the previous example to pack twice as many stars into the same region of sky.

Example 2.10 Bedazzling Orion's Belt

Exercise 2.14  Modify the sketch above to draw a row of stars in a different arrangement. What is the initial value of your loop variable x? What condition do you test to end the loop's execution? How much do you increment x by during each iteration?

You might say, "OK, but what if I turned my head a little? Could I draw the stars along a vertical line instead?" Sure! In this case, you could keep the value of x constant while varying y.

Example 2.11 Switching axes from $$x$$ to $$y$$

## Infinite Loops

while statements are powerful tools that should be used with care.
Consider the example below.

```javascript
while (true) {
  thingOne()
  thingTwo()
}
```

true is a keyword in JavaScript that corresponds to a boolean value of, you guessed it, true. If a while statement's condition is always true, then it will continue looping forever, thus creating an infinite loop. Unintended infinite loops are bad news. Let's examine a slightly modified version of the loop from the previous sketch.

```javascript
let y = 25
while (y <= 175) {
  point(100, y)
}

y += 25 // oops
```

Notice that I incremented y outside of the loop body. This means that y isn't incremented after each iteration. Instead, the value of y will always be 25, which is always less than or equal to 175, so the condition 25 <= 175 is always true. The loop never stops executing! If you ran this code, your web browser might give you a friendly notification to stop the sketch before it grinds your computer to a halt.

Exercise 2.15  The code snippet below is meant to draw a horizontal row of points across the canvas. Instead, it creates an infinite loop. Identify the error and fix it. Explain the problem and your solution in plain English.

```javascript
let x = 0
let y = 100
while (x < 200) {
  point(x, y)
  x += 10
}
```

Exercise 2.16  Create a sketch that uses a while statement to draw points along a diagonal line. Review the code snippet below as a hint.

```javascript
let x = 0
while (x < 200) {
  // >> compute y here
  point(x, y)
  x += 10
}
```

Exercise 2.17  In the last tutorial, we defined an algorithm for multiplying two integers $$a \cdot b$$ as repeated addition $$b + b + ... + b$$. Fill in the missing code below to express the same algorithm in JavaScript using a while loop.

```javascript
let a = 5
let b = 3
let product = 0
let i = 0
while (i < a) {
  // >> compute product here
  i += 1
}
```

Exercise 2.18  Review the algorithms for integer division and exponentiation, then implement them in JavaScript using a while loop.
The subtraction assignment -= and multiplication assignment *= operators might be helpful.

# 2.4 Systems of Inequalities

We began this chapter by analyzing a single linear equation in slope-intercept form: $$y = mx + b$$. You could substitute any real number for $$x$$ and compute the corresponding value of $$y$$, making the ordered pair $$(x,y)$$ a solution to the equation. Let's review a concrete example.

Given the following linear equation

$$y = 0.5x + 100$$

compute the value of $$y$$ when $$x = 50$$.

$$y = 0.5 \cdot 50 + 100$$

$$y = 25 + 100$$

$$y = 125$$

One solution to this equation is located at $$(50,125)$$. What if we tried $$(50,126)$$ instead?

$$126 = 0.5 \cdot 50 + 100$$

$$126 = 25 + 100$$

$$126 \ne 125$$

It turns out $$(50,126)$$ is not a solution to this particular equation, but there are infinitely many other solutions. For example, we could find solutions to the left and right of $$(50,125)$$ at $$x = 49$$ and $$x = 51$$.

$$y = 0.5 \cdot 49 + 100$$

$$y = 24.5 + 100$$

$$y = 124.5$$

$$y = 0.5 \cdot 51 + 100$$

$$y = 25.5 + 100$$

$$y = 125.5$$

You could use a while loop to quickly compute and visualize all of the solutions in the interval $$0 \le x \le 200$$.

Example 2.12 Visualizing solutions to a linear equation

Mathematicians often express a change in the value of a variable with the Greek letter $$\Delta$$, pronounced "delta". Using this notation, we can express a change in the variable $$x$$ as $$\Delta x$$. I declare the variable dx on line 11 and use it to increment x on line 15 as a nod to our standard mathematical notation.

## Testing Solutions

OK, the solutions to a linear equation generate a line. I wonder what shapes other linear relationships make. Let's take the previous example and swap out the $$=$$ symbol for a $$>$$.

$$y > 0.5x + 100$$

You can test solutions to this linear inequality just as you did with linear equations.
For example, let's see if the coordinate pair $$(50,125)$$ produces a truth value of $$1$$ when we substitute the values into our inequality.

$$125 > 0.5 \cdot 50 + 100$$

$$125 > 25 + 100$$

$$125 > 125$$

Uh oh. We evaluated the right-hand side of the inequality and produced a value of $$125$$. But that resulted in a false statement; $$125$$ is not greater than itself. We can conclude that $$(50,125)$$ isn't a solution to this inequality. How about we move along the $$y$$-axis a little to $$(50,126)$$?

$$126 > 0.5 \cdot 50 + 100$$

$$126 > 25 + 100$$

$$126 > 125$$

Success! Let's go a little further along the $$y$$-axis to $$(50,127)$$.

$$127 > 0.5 \cdot 50 + 100$$

$$127 > 25 + 100$$

$$127 > 125$$

Interesting. Let's try one more coordinate pair, $$(50,128)$$, to see if this pattern holds.

$$128 > 0.5 \cdot 50 + 100$$

$$128 > 25 + 100$$

$$128 > 125$$

When $$x = 50$$, we can pair it with any $$y > 125$$ to solve the inequality $$y > 0.5x + 100$$. You could automate this sort of test with a while loop.

Example 2.13 Testing solutions to a linear inequality

The sketch above fixes the value of x at 50 and tests solutions to the inequality for all y values in the interval $$0 \le y < 200$$. Solutions along this column are colored black while other points are colored ghostwhite.

This is the first example we've seen of a while loop that includes an if statement in its body. You can put (almost) whatever code you want in the body of a while loop: function calls, arithmetic operations, if statements, and even other while loops. This last option opens up many interesting possibilities.

## Nested Loops

You just tested hundreds of possible solutions to the inequality $$y > 0.5x + 100$$ when $$x$$ was fixed at $$50$$.
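That column test boils down to a loop like the one below. (A sketch of the logic only: the real sketch colors each point black or ghostwhite, while this version simply counts the solutions it finds.)

```javascript
// Fix x at 50 and test every integer y in the interval 0 <= y < 200
// against the inequality y > 0.5x + 100.
const x = 50
let y = 0
let solutions = 0
while (y < 200) {
  if (y > 0.5 * x + 100) {
    solutions += 1 // (50, y) solves the inequality
  }
  y += 1
}

console.log(solutions) // counts every y from 126 through 199
```

Every value of y above 125 passes the test, which matches the column of black points in the sketch.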
Let's fully automate the process of testing solutions by iterating over the canvas' $$x$$-axis just like we're doing with its $$y$$-axis.

A quick note about the algorithm we are about to run: it is very inefficient and would normally grind your computer to a halt. By default, p5 executes each statement you place in the body of the draw() function in order, from top to bottom, repeatedly, about $$60$$ times per second. Behind the scenes, you can imagine that the code you write in draw() is executing in the body of a while statement.

```javascript
setup() // all of your code bundled up

while (true) {
  draw() // all of your code bundled up
}
```

Testing individual solutions to a linear inequality requires computing once on each $$(x,y)$$ coordinate pair before drawing a point on the canvas; that's roughly $$200 \cdot 200 = 40000$$ operations! The algorithm is slow and produces the same results every time, so there is no need to repeat it.

In the next example and those that follow, I will call the noLoop() function once in setup() like so.

```javascript
function setup() {
  createCanvas(400, 400)
  noLoop()
}
```

By calling noLoop(), you change p5's behavior so that draw() only executes a single time. You can imagine p5 running the following code instead.

```javascript
setup() // all of your code bundled up

draw() // all of your code bundled up
```

Now, we can compute once on each $$(x,y)$$ coordinate pair, draw the corresponding point, and produce a single image. This adjustment should keep your computer happy, but it may still take a moment for the result to appear.

Example 2.14 Visualizing a linear inequality

The control structure you just created is called a nested loop. The outer loop increments the variable x while the inner loop increments the variable y and tests for solutions along each column of your canvas. And the result of all that computation?
It turns out the set of $$(x,y)$$ coordinate pairs that solve our inequality, known as the solution set, forms a triangle-shaped region with the edges of the canvas. Neat! Can we make a rectangle?

Example 2.15 The rectangle inequality

Exercise 2.19  Modify the previous example to draw a new five-sided shape with your solution set. Closed shapes made by connecting three or more straight lines are known as polygons, and a five-sided polygon is known as a pentagon.

Exercise 2.20  Ellsworth Kelly's "Austin" is a serene little chapel located on the campus of the University of Texas at Austin. Its walls feature a series of fourteen black and white marble panels that look suspiciously like linear inequalities. Use Kelly's panels as inspiration for your own series of abstract images. How many images will you include in your series? What colors will you use? What shapes will you create?

Drawing with linear inequalities opens up a dizzying number of creative possibilities. But what if you just wanted to draw a rectangle in the middle of your canvas? Meeting this challenge requires expanding our modeling toolkit yet again.

## Solving Systems

When you solved your first system of linear equations, you found the point $$(x,y)$$ where two lines intersected. In other words, you found the only ordered pair $$(x,y)$$ that solved both equations simultaneously. We'll follow a very similar train of thought to solve systems of linear inequalities.

For starters, let's consider the system $$y < x + 150$$ and $$y > 75$$. We can test possible solutions $$(x,y)$$ against both inequalities using the && operator.

Example 2.16 A system of linear inequalities

OK, but how would we draw that rectangle in the middle of the canvas? You can think of a rectangle as the set of points between a pair of $$x$$-values and a pair of $$y$$-values.
For example, we could take all of the points where $$x > 75$$ AND $$x < 125$$ AND $$y > 100$$ AND $$y < 160$$.

Example 2.17 A more constrained system

At this point you might say, "This is great! But how do I draw multiple shapes?" Simple: define multiple systems of inequalities. I'd like to frame this part of the discussion by studying the work of another abstract painter, Piet Mondrian.

Figure 2.9 "Composition II in Red, Blue, and Yellow" Courtesy Wikimedia Foundation

The painting "Composition II"

$$\mapsto$$  Each colored region is the solution set to a system of linear inequalities.

The boundaries for each system of inequalities are either vertical or horizontal lines.

Table 2.5 The "Composition II" data set

| Name | Left $$x$$ | Right $$x$$ | Bottom $$y$$ | Top $$y$$ |
| --- | --- | --- | --- | --- |
| Blue corner | 0 | 35 | 155 | 200 |
| Yellow corner | 188 | 200 | 183 | 200 |
| Red corner | 40 | 200 | 0 | 150 |
| Stripe 1 | 35 | 40 | 0 | 200 |
| Stripe 2 | 0 | 35 | 60 | 72 |
| Stripe 3 | 0 | 200 | 150 | 155 |
| Stripe 4 | 183 | 188 | 155 | 200 |
| Stripe 5 | 188 | 200 | 173 | 183 |

First, prime the canvas by painting it ghostwhite. Next, select the paint color by testing whether a point solves a system of inequalities. Finally, paint the point. Repeat for every point on the canvas.

Example 2.18 "Composition II"

Exercise 2.21  Spend a few minutes exploring WikiArt and find an abstract painting or painter who inspires you. Hilma af Klint and Mark Rothko are personal favorites of mine. Then, create a new sketch that uses systems of inequalities to draw your own abstraction. Is your sketch completely abstract or is it based on an object, place, emotion, etc.? What do you like most about your sketch? What was challenging about creating it?

Exercise 2.22  Visualize integer multiplication by building on your solution to Exercise 2.17.
Use the starter code below to snap (very tiny) blocks together.

The noStroke() function removes the edges p5 draws around text; doing so can make it easier to read some fonts. The fill() function sets the interior color of text. Note that the text() function takes three arguments: the text to be displayed, and the $$x$$- and $$y$$-coordinates where the text should appear.

The control structures you just studied make it possible to construct many useful computations. In the next chapter, we'll raise the level of abstraction by bundling computations into functions you define. We'll also have fun drawing with the many built-in shapes that p5 provides.
Gina Borellini (San Possidonio, 24 October 1919 – Modena, 2 February 2007) was an Italian partisan and politician, awarded the Gold Medal of Military Valor.

Biography

Borellini was born in San Possidonio, a town in the Italian province of Modena, to a farming family. In 1935, at the age of 16, she married Antichiano Martini, a twenty-six-year-old carpenter. The couple's first child died on 28 June 1938, barely two years old. In 1939 her second son, Euro, was born. Martini was forced to leave for the Libyan front, and during that time Gina Borellini had to support the whole family by working in the rice fields of Piedmont. In 1943 she took part in organizing a major strike of rice workers in the province of Novara. From 8 September 1943 onward she took an active part in the Italian Resistance, together with her husband, who had returned convalescent from the Libyan front, serving as a courier partisan and aiding persecuted soldiers. On 22 February 1944 she was captured along with her husband and tortured by the Fascists. Martini was shot on 19 March 1945 in the Piazza d'Armi in Modena. Gina Borellini then joined the "Rem" Brigade under the nom de guerre "Kira", organizing women's defense groups in the town of Concordia sulla Secchia, serving as an inspector with the rank of captain. On 12 April 1945, during an exchange of fire in San Possidonio with Fascist troops of the "Pappalardo" Black Brigade, she was seriously wounded, but in order not to hinder her comrades' fight she did not call for help, managing to stop the bleeding on her own and reach the hospital in Carpi, where she had to undergo the amputation of her left leg. During her hospitalization she was discovered by the Fascist police, who subjected her to exhausting interrogations.
Gina Borellini would have been shot had an insurrection not broken out in the city. On 17 March 1946 she was elected to the municipal council of Concordia sulla Secchia for the Italian Communist Party. In 1947 she was one of the 19 women awarded the Gold Medal of Military Valor for her activity during the liberation struggle. In 1948 she became the first woman from Modena elected to Parliament as a deputy, where she remained until 1963. During the first, second, and third legislatures she served on the Chamber's Defense Commission, working to improve the conditions of war veterans as well as for the emancipation of women. In 1950, after the massacre at the Fonderie Riunite in Modena, where six striking workers were killed by the state police, Borellini expressed her outrage in the Chamber of Deputies with a flagrant gesture: with great difficulty she rose from her bench and went down to the government benches, where she threw photographs of the dead workers in front of Prime Minister Alcide De Gasperi. Gina Borellini also acted as an intermediary between one of the victims' families and the couple formed by the politicians Palmiro Togliatti and Nilde Iotti, who adopted the young Marisa Malagoli. Shortly afterwards she was elected councillor of the Province of Modena (1951–1956) and of the municipality of Sassuolo (1956–1960). In 1945 she was among the founders of the Union of Italian Women (UDI), serving as its provincial president in 1953 and as a member of its national board between 1948 and 1975. She was also president of the Modena section of the National Association of War Wounded and Disabled between 1952 and 1990. In 1981 she was named Honorary President of the National Association of Italian Partisans (ANPI). On 2 June 1993 she received the title of Commander of the Order of Merit of the Italian Republic. The Gina Borellini collection is held at the Women's Documentation Center of Modena.
In April 2017 a memorial stele in her honor was unveiled in the Resistance Park in Modena.
frumafar. was set up by Mariko Ebata, a textile designer living in Japan. All designs are original and created with love for fabrics, nature, travel, and sweets. She studied textile/surface design at the Fashion Institute of Technology and worked in the home furnishings industry in NYC before moving back to Japan.
Winter in northern Europe means the arrival of long nights, crisp snows, and dazzling celebrations of the holiday season. Cruise in comfort along the Rhine from Basel to Amsterdam, disembarking along the way to explore magical Christmas markets exhibiting customs unique to each city. Wander through medieval squares filled with the sounds of traditional carolers, the scents of mulled wine and spices, and the sparkle of ornamented Christmas trees. Browse stalls displaying delicate handmade trinkets and toys, explore fairytale castles and cathedrals decked with tinsel, and bring home holiday cheer to brighten the new year. As we sail this storied waterway, enjoy the freedom and flexibility to tailor the itinerary to your interests, choosing from a variety of available excursions at each port of call—from guided tours of iconic sites to culinary experiences and active adventures. He has covered a diverse range of topics including the secrets of the longest-living centenarians in the world, a lost Da Vinci painting, and hidden mummies in Sicilian crypts. His work has appeared in numerous publications, including National Geographic magazine, Condé Nast Traveler, the New York Times, and the Washington Post. Currently based in Switzerland, he has traveled and photographed extensively around Europe. Embark our ship in Basel and settle into your spacious suite. This evening, gather for a welcome dinner and raise your glass to the beginning of a magical trip along the Rhine. This morning we venture inland to the charming spa town of Baden-Baden, famous for its Roman baths. Choose between several ways to explore the city, including a guided tour of the old town or a hike to the 12th-century Hohenbaden Castle ruins, perched high above the town and offering breathtaking views of the surrounding Black Forest. Or visit the Mercedes-Benz factory to glimpse a state-of-the-art assembly line and learn the history of this iconic brand. 
In the evening, enjoy a private concert at a baroque palace. Discover the rich history and romantic cityscape of Heidelberg. Weather permitting, visit the hilltop ruins of the Renaissance-era Heidelberg Castle and hike back to the old town. Alternatively, tour the city's picturesque baroque-style streets, or enjoy an exclusive visit to the World Heritage site of Grube Messel, a quarry renowned for its superbly preserved plant and animal fossils. One of the best preserved primate fossils in the world—nicknamed Ida—was discovered here in 1983, and then analyzed by National Geographic Emerging Explorer Jørn Hurum in 2009. This afternoon, we continue our cruise along the Rhine. Arrive in Koblenz, situated at the confluence of the Rhine and Moselle rivers. Glide past the colossal equestrian statue of Emperor William I that presides over the two waterways, and observe the clash of colors where the waters meet. Disembark for a guided tour of this historic city, and delve into a colorful Christmas wonderland. During the holiday season, six of the city's old squares are utterly transformed by sparkling lights, ornamented trees, cottage-style market stalls, and an Advent calendar fashioned out of the 24 dormers in the baroque-style town hall. Listen to traditional choirs while sampling Germany's festive "fire cup" of mulled wine, potato fritters, and the famous Lebkuchenherzen gingerbread hearts. Alternatively, explore Cochem, a medieval gem nestled in the Moselle River Valley. A guided tour culminates in a visit to the Reichsburg Castle, a fairytale cluster of conical towers perched high above the town. Go ashore in Cologne, and gaze up at the intricate twin spires of the city's iconic cathedral—one of the only structures left standing in the city following the heavy air raids of World War II. Explore Cologne's extensive cultural offerings, including dozens of acclaimed museums and galleries. 
Discover Romanesque churches and unique medieval structures, such as the stone city gates and the Town Hall. Alternatively, join an excursion to the Neanderthal Museum in Mettmann, located near the site where the bones of an Ice Age human were recovered for the first time. Or discover the distinctive red-roofed stalls of the Cologne Christmas market at your own pace, in search of hand-made gifts for friends and family. Set sail for Amsterdam this afternoon. Arrive in the Dutch capital city of Amsterdam this morning, and set out to explore one of the many Christmas markets set along its picturesque canals. Sample delicious treats such as oliebollen, Dutch doughnuts, or poffertjes, small pancakes topped with butter and powdered sugar. Choose to explore the city's medieval canals on a boat cruise, or opt for a guided tour of the Rijksmuseum, home to historical exhibitions that tell the story of the Netherlands, as well as masterpieces by famous Dutch painters from Rembrandt to Vermeer. Alternatively, head outside the city to discover the colorful fishing towns of Volendam and Edam, where vendors sell Dutch favorites such as herring and the famed Edam cheese. After breakfast onboard, transfer to the airport in Amsterdam and connect with your flight home. Experience the Christmas spirit like never before at some of Europe's oldest and grandest Christmas markets. Discover the rich history of the Rhineland as you explore magnificent cities in France, Germany, and the Netherlands alongside National Geographic experts. Capture spectacular images of castles ruins and gothic churches, using tips and tricks from our accompanying National Geographic photographer. Enjoy an exclusive behind-the-scenes visit to the Neanderthal Museum in Mettmann; and explore Grube Messel pit, the famed fossil site featured in National Geographic magazine where "Ida"—the world's most complete primate fossil—was found.
Q: CoreData return value - Swift

I can't assign a fetch result to a textField when the value is an Int; with a String it works.

```swift
var results: NSArray = try context.executeFetchRequest(request) as! [NSManagedObject]
if results.count > 0 {
    var res = results[0] as! NSManagedObject
    nomeText.text = res.valueForKey("nome") as? String
    idadeText.text = res.valueForKey("idade") as? String
    print(res.valueForKey("idade") as? String)
}
```

The print returns nil. When I cast to Int instead, it returns the correct value. How should I assign this value to the textField? Note that I don't have this problem with the "nome" field.

A: The "as?" syntax returns nil when the cast fails, and in this case the value is of type Int, so the correct cast would be:

```swift
res.valueForKey("idade") as? Int
```

But you should take advantage of the typed objects that CoreData offers you. Just open the xcdatamodel file, select your entities, and use the top menu: Editor -> Create NSManagedObject subclass.

That way, your code becomes much simpler:

```swift
var results = try context.executeFetchRequest(request) as! [Pessoa]
if results.count > 0 {
    var res = results[0]
    nomeText.text = res.nome
    idadeText.text = res.idade.description
}
```

Source: https://developer.apple.com/library/ios/recipes/xcode_help-core_data_modeling_tool/Articles/creating_mo_class.html

A: Numeric values in CoreData are mapped to NSNumber. To assign the value to the UITextField as text, you must explicitly ask for it as a string:

```swift
let idade = res.valueForKey("idade") as! NSNumber
idadeText.text = idade.stringValue
```
Q: How to fix "Error in rep(0, nobs) : invalid 'times' argument" when using the predict function for model plots

I am trying to plot an interaction between No_Squares and Sex and their effect on Active_co2:

```r
library(lme4)  # for glmer()

AMRdata <- structure(list(
  Week = c(1L, 1L, 2L, 3L, 3L, 3L, 3L, 4L, 5L, 5L, 6L, 7L, 7L, 7L,
           7L, 8L, 1L, 2L, 2L, 3L, 3L, 4L, 6L, 6L, 8L, 8L, 8L, 8L, 9L,
           9L, 9L, 10L),
  Sex = structure(c(1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L, 1L,
                    1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L, 2L,
                    2L, 2L, 2L, 2L, 2L, 2L, 2L),
                  .Label = c("F", "M"), class = "factor"),
  No_Squares = c(23L, 17L, 14L, 7L, 99L, 78L, 90L, 1L, 9L, 35L, 81L,
                 9L, 77L, 84L, 1L, 44L, 9L, 30L, 8L, 92L, 28L, 74L, 29L,
                 76L, 66L, 43L, 36L, 13L, 4L, 82L, 14L, 59L),
  Active_co2 = c(8.79514591, 16.71840387, 14.1932374, 10.90741585,
                 10.7436911, 14.97469781, 19.88267242, 12.43274774,
                 15.12038794, 10.43636012, 15.59780954, 8.776376951,
                 9.995133069, 12.38314719, 9.611533444, 9.633809968,
                 12.56430759, 10.29433452, 9.422792731, 22.5092972,
                 10.38682245, 8.248907506, 11.84916117, 11.05467852,
                 19.53495917, 12.14440531, 12.09564168, 6.78392472,
                 10.51570692, 8.527792046, 8.731880804, 10.71404367)),
  class = "data.frame", row.names = c(NA, -32L))

mod1 <- glmer(Active_co2 ~ No_Squares * Sex + (1 | Week),
              data = AMRdata, family = Gamma(link = 'log'))

plot(AMRdata$No_Squares, AMRdata$Active_co2, type = "n",
     xlab = "No_Squares", ylab = "AMR")
spp <- split(AMRdata$Active_co2, AMRdata$Sex)
bio <- split(AMRdata$No_Squares, AMRdata$Sex)
points(bio[[1]], spp[[1]], pch = 16)
points(bio[[2]], spp[[2]], pch = 17)

# make legend
legend("topright", title = "Sex", legend = c("female", "male"),
       pch = c(16, 17, 1), lty = c(1, 2, 4), bty = "n")

NEWSQUARES <- seq(1, 99, length = 100)
levels(AMRdata$Sex)
FACTORfemale <- rep("F", 100)
PREDfemaleAMR <- predict(mod1,
                         list(Sex = factor(FACTORfemale), No_Squares = NEWSQUARES),
                         type = "response", se = TRUE)
```

However at the last point I am greeted with the error message

```
Error in rep(0, nobs) : invalid 'times' argument
```

I have looked online but am unable to resolve the issue.
Any suggestions as to what is wrong with my last bit of code would be greatly appreciated!

A: You probably need a data.frame in which "Week" is also included, though I don't know exactly which values you want. Unfortunately, se=TRUE is an unused argument in this method.

```r
predict(mod1,
        data.frame(Week = 1:10,
                   Sex = factor(FACTORfemale),
                   No_Squares = NEWSQUARES),
        type = "response"
        # , se = TRUE ## unused argument
)
```

```
#        1        2        3        4        5        6        7        8        9       10       11       12
# 11.91329 11.67287 12.21171 11.29497 11.74180 11.73457 11.04548 11.78727 11.33035 11.55541 12.10291 11.85865
#       13       14       15       16       17       18       19       20       21       22       23       24
# 12.40607 11.47474 11.92869 11.92134 11.22128 11.97488 11.51069 11.73933 12.29554 12.04740 12.60353 11.65738
#       25       26       27       28       29       30       31       32       33       34       35       36
# 12.11855 12.11109 11.39988 12.16548 11.69390 11.92618 12.49124 12.23915 12.80414 11.84292 12.31143 12.30385
#       37       38       39       40       41       42       43       44       45       46       47       48
# 11.58133 12.35911 11.88002 12.11600 12.69005 12.43395 13.00793 12.03141 12.50738 12.49968 11.76566 12.55582
#       49       50       51       52       53       54       55       56       57       58       59       60
# 12.06911 12.30884 12.89203 12.63186 13.21497 12.22291 12.70645 12.69863 11.95292 12.75566 12.26120 12.50475
#       61       62       63       64       65       66       67       68       69       70       71       72
# 13.09723 12.83291 13.42530 12.41745 12.90869 12.90075 12.14317 12.95869 12.45635 12.70378 13.30569 13.03716
#       73       74       75       76       77       78       79       80       81       82       83       84
# 13.63898 12.61509 13.11415 13.10608 12.33644 13.16494 12.65461 12.90598 13.51746 13.24466 13.85607 12.81588
#       85       86       87       88       89       90       91       92       93       94       95       96
# 13.32288 13.31468 12.53280 13.37448 12.85603 13.11139 13.73261 13.45547 14.07660 13.01986 13.53493 13.52660
#       97       98       99      100
# 12.73227 13.58735 13.06065 13.32008
```
---
description: A Metadata Provider is a JavaScript function that acts as an interface for accessing metadata related to Images in Cornerstone.
---

# Metadata Providers

> A **Metadata Provider** is a JavaScript function that acts as an interface for accessing metadata related to Images in Cornerstone. Users can define their own provider functions in order to return any metadata they wish for each specific image.

Medical images typically come with lots of non-pixel-wise metadata, such as the pixel spacing of the image, the patient ID, or the scan acquisition date. With some file types (e.g. DICOM), this information is stored within the file header and can be read, parsed, and passed around your application. With others (e.g. JPEG, PNG), this information needs to be provided independently from the actual pixel data. Even for DICOM images, however, it is common for application developers to provide metadata independently from the transmission of pixel data from the server to the client, since this can considerably improve performance.

To handle these scenarios, Cornerstone provides infrastructure for the definition and usage of *Metadata Providers*. Metadata Providers are simply functions which take in an [Image Id](image-ids.md) and a specified metadata type, and return the metadata itself.
Here is a simple example of a Metadata Provider which returns an Object containing Image Plane metadata for a single specific image (Image Id: 'ct://1'):

````javascript
function metaDataProvider(type, imageId) {
  if (type === 'imagePlaneModule') {
    if (imageId === 'ct://1') {
      return {
        frameOfReferenceUID: "1.3.6.1.4.1.5962.99.1.2237260787.1662717184.1234892907507.1411.0",
        rows: 512,
        columns: 512,
        rowCosines: {
          x: 1,
          y: 0,
          z: 0
        },
        columnCosines: {
          x: 0,
          y: 1,
          z: 0
        },
        imagePositionPatient: {
          x: -250,
          y: -250,
          z: -399.100006
        },
        rowPixelSpacing: 0.976562,
        columnPixelSpacing: 0.976562
      };
    }
  }
}

// Register this provider with CornerstoneJS
cornerstone.metaData.addProvider(metaDataProvider);

// Retrieve this metaData
var imagePlaneModule = cornerstone.metaData.get('imagePlaneModule', 'ct://1');
````

## Basics

* Cornerstone allows for the registration of multiple Metadata Providers.
* Each provider can provide whichever information the developer desires.
* When a request is made for metadata for an image, Cornerstone will iterate through the known providers until it retrieves a defined set of metadata for the specified metadata type.
* Providers can be added to Cornerstone with an optional priority value in order to influence the order in which they are called.
* When DICOM images are loaded by [Cornerstone WADO Image Loader](https://github.com/cornerstonejs/cornerstoneWADOImageLoader), their metadata will be parsed and added to a metadata provider automatically.
* Within [Cornerstone Tools](https://github.com/cornerstonejs/cornerstoneTools), specific metadata types are used to provide metadata for tools.
Wish-Bone® Italian Dressing. America's #1 Italian dressing brand††. Signature blend of Italian herbs, spices, onion, red pepper and garlic. Flavor you can see! Great as a marinade. 16 fl oz (473 ml). Perfect for marinating! Wish-Bone's signature Italian dressing is a great way to marinate chicken, steak, fish or veggies. Our unique blend of oil, vinegar and vibrant herbs and spices is the perfect way to add flavor you can see and taste to any meal! Excellent source of omega-3 ALA**. No high fructose corn syrup. Gluten-free. ††Based in part on Information Resources Inc.'s total U.S. multi-outlet unit sales, 52 weeks ending September 27, 2015. **Contains 400mg ALA per serving, which is 25% of the 1.6g daily value for ALA. Questions or comments? Please call 1-800-343-9024. For delicious recipes, visit wish-bone.com.

Kraft Thousand Island Dressing. New look, same great taste! Rich & tangy. No artificial flavors. No high fructose corn syrup. No synthetic colors. 16 fl oz (1 pt) 473 ml. Visit us at: kraftfoods.com. 1-800-847-1997. Please have package available. ©Kraft Foods. Recipes at kraftdressings.com.

Kraft Dressing Lite Thousand Island. 50% less fat and 33% fewer calories than regular dressing. Half the fat. No artificial flavors. No MSG added. 16 fl oz (1 pt) 473 ml. Calories per serving: this product 50, regular dressing 120. Fat per serving: this product 1g, regular dressing 10g. Please have package available. Call: 1-800-847-1997. Visit us at: www.kraftfoods.com. Recipes at www.kraftdressings.com. ©Kraft Foods.

Great as a dip! Ready to use. Kraft® Rich & Tangy Dressing Thousand Island. New look, same great taste! No artificial flavors. No high fructose corn syrup. No synthetic colors. 8 fl oz (237 ml). Call: 1-800-847-1997. Please have package available. Visit us at: www.kraftfoods.com. Recipes at www.kraftdressings.com. ©Kraft Foods.

Marzetti® Dressing Thousand Island. Since 1896. No preservatives. 15 fl oz (443 ml). Every delicious spoonful in this generous, 15 fl oz jar lives up to a tradition that began over a century ago. Great taste and quality are what Marzetti Dressings are all about. Produced with genetic engineering. Questions or comments? Call: 1-800-999-1835. Visit: www.marzetti.com. ©T. Marzetti Company.

Marzetti® Thousand Island Dressing. Since 1896. Marzetti family recipe. 16 fl oz 473 ml. Make it Marzetti® for flavorful, savorful salads! Time-honored recipes from our kitchen to yours. Visit us at: www.marzetti.com. ©2009 T. Marzetti Company.

No high fructose corn syrup. Gluten free. Great on sandwiches! Wish-Bone's Thousand Island dressing is great for salads, dipping and even spreading on sandwiches! Use Wish-Bone Thousand Island to create the perfect classic Reuben sandwich that's packed with flavor! Wish-Bone's unique blend of zesty tomato, tangy sweet relish and spices is a delicious addition to any meal! For delicious recipes, visit wish-bone.com. Rich in omega 3 ALA (contains 720 mg ALA per serving, which is 45% of the 1.6 g daily value for ALA). Questions or comments? Please call 1-800-343-9024.

Wish-Bone® Thousand Island Dressing. The Bolder The Better™. Flavor you can see! Great on sandwiches! Wish-Bone's Thousand Island dressing is great for salads, dipping and even spreading on sandwiches! Use Wish-Bone Thousand Island to create the perfect classic Reuben sandwich that's packed with flavor! Wish-Bone's unique blend of zesty tomato, tangy sweet relish and spices is a delicious addition to any meal! For delicious recipes, visit wish-bone.com. Rich in omega 3 ALA**. No high fructose corn syrup. **Contains 720mg ALA per serving, which is 45% of the 1.6g daily value for ALA. Gluten-free. Questions or comments? Please call 1-800-343-9024.

Sugar free. Fat free. Carbohydrate free. Gluten free. Cholesterol free. Just the right touch of the world's finest aged vinegars, fresh ground herbs and spices, triple filtered water and natural flavors makes Walden Farms Thousand Island dressing delicious and perfect when trying to eat right. 100% guaranteed. More calorie free specialties: waldenfarms.com.

Kraft Dressing Thousand Island. New look, same great taste! Rich & tangy. No artificial flavors. No high fructose corn syrup. No synthetic colors. 24 fl oz (1.5 pt) 709 ml. Please have package available. 1-800-847-1997. Visit us at: kraftfoods.com. Recipes at kraftdressings.com. ©Kraft Foods.
The Tcl Dev Kit (TDK) provides essential tools for Tcl programmers, making it easy to create, build and deploy applications. Rapidly deploy Tcl applications to a broad range of platforms, as ready-to-run executables, starkits or starpacks. Simplify development with tools for finding and fixing bugs, managing complex code bases, and optimizing your programs. Take control and work the way you want with a choice of GUIs or command line interfaces for most tools.

- Deliver your Tcl programs as executables, starkits or starpacks.
- Tamper-proof your applications, and protect your code from prying eyes.
- Run and manage your Tcl code as Windows services.
- Create, manage, export and use TEApot package repositories.
- Kill bugs fast with the cross-platform graphical debugger.
- Quickly find errors before running your scripts.
- Uncover tricky problems by inspecting Tcl programs while they run.
- Improve code performance and reliability through coverage and hotspot analysis.
- Understand complex code at a glance with a visual guide to component relationships.
After guests have been picked up from their hotels in Abu Dhabi, we will travel around the city sightseeing. First, we will visit the Sheikh Zayed Grand Mosque to take some pictures. Then, we will head to the Corniche to enjoy the beach views and the Emirates Palace Hotel. After relaxing here, we will go to Yas Waterworld and Ferrari World, if time permits. Then, guests will be transferred to the airport.

This luxury hotel sits on 1.3 km of private beach and cost $3 billion to build. It contains 302 rooms and 92 suites. The hotel boasts that it is "beyond a 7 star" rating.

This water park is located on Yas Island, offshore from Abu Dhabi. Visitors to the park can enjoy 45 rides, slides, and attractions. The award-winning water park was the first in the Middle East with a green sustainability rating.

On Fridays, the Heritage Village opens after 3:30 p.m. If the tour exceeds 4 hours (10-30 minutes are counted as free), each additional hour costs $28. The Sheikh Zayed Grand Mosque is closed on Friday mornings, and reopens after 4:30 p.m. for visitors.
Next message: David Abrahams: "[boost] Re: Proposal for 'is_dereferenceable' and other template metafunctions"
Previous message: Beman Dawes: "Re: [boost] Re: filesystem exists/is_directory suggestion"
In reply to: Paul A Bristow: "RE: [boost] Re: [admin] Overlapping reviews -- should this be allowed?"
Next in thread: Dave Harris: "Re: [boost] Re: [admin] Overlapping reviews -- should this be allowed?"
Maybe reply: Dave Harris: "Re: [boost] Re: [admin] Overlapping reviews -- should this be allowed?"

> | this be allowed?
> really much too late.
> Is a two stage review/acceptance process a way to improve?
> 1 Float the idea.
> 2 Get some support.
> 3 Get some feedback.
> 5 Refine with feedback.
> 6 Formal review, and if OK then else revert to 'nearly ready'.
> 8 Add to next Boost release.

libraries that have gone through the preliminaries.
\section{Introduction} Deep Neural Networks have made significant progress on several challenging tasks in computer vision such as image classification ~\cite{imagenet_classification_first}, object detection~\cite{redmon2016you} and semantic segmentation~\cite{he2017mask, messaoud2020can}. However, these networks have been shown to possess numerous non-human biases, such as high facial recognition misclassification error rates against certain races and genders~\cite{buolamwini2018gender}, vulnerability to numerous classes of adversarial samples~\cite{szegedy2013intriguing, goodfellow2014explaining, hosseini2018semantic, hendrycks2019natural, jere2019scratch, neekhara2020adversarial}, and vulnerability to training-time backdoor attacks ~\cite{liu2017trojaning}. \begin{figure}[ht!] \centering \includegraphics[width=0.47\textwidth]{FIGURES/RIG.pdf} \caption{Left-to-right: The original image, Vanilla Gradients (as obtained by backpropagation with respect to the top label), and \textbf{R}ank \textbf{I}ntegrated \textbf{G}radients (RIG), our pixel importance method for the top label that averages saliency map information across low-rank representations of the same image. Notice that the visualizations obtained from RIG are better at identifying distinctive features of the image.} \label{fig:RIG} \end{figure} A line of recent efforts focused on explaining the generalization behavior of neural networks through adversarial robustness has shown significant promise. Such methods involve characterizing network inputs based on robust and non-robust features~\cite{ilyas2019adversarial, engstrom2019discussion, wang2020high}, understanding their effects on feature maps~\cite{xie2019feature}, interpreting their frequency components~\cite{fourier_perspective,fourier2} and interpreting their principal component properties~\cite{jere2019principal,bhagoji2017dimensionality}. 
Surprisingly, prior work has shown that neural nets often generalize to test sets based on superficial correlations in the training set~\cite{ilyas2019adversarial, wang2020high, geirhos2018imagenet, WangHLX19, Makino2020Differences}. In this work and inspired by previous works~\cite{jo2017measuring, ilyas2019adversarial, wang2020high}, we investigate the hypothesis that naturally trained CNNs leverage such superficial correlations in the dataset. However, different from prior works, we argue that these superficial correlations can be distilled from an image via low-rank image approximation, a claim that was previously refuted~\cite{wang2020high}. We further argue that naturally trained neural networks and adversarially robust neural networks exploit highly different features from the same image, and that these features can be separated by singular value decomposition (SVD). Our contributions are as follows: \begin{figure*}[ht!] \centering \includegraphics[width=0.75\textwidth]{FIGURES/rank_generator_framework.pdf} \caption{Generating a rank-$k$ image via truncated SVD. Given an $n \times n$ RGB image, we decompose the image into its individual color channels, zero out the last $(n - k)$ singular values obtained via SVD, and then reconstruct the image with its $k$ nonzero singular values. Low rank images are often more blurry than their full-rank counterparts.} \label{fig:low_rank_generation_methodology} \end{figure*} \begin{itemize} \item We identify for the first time that image rank (obtained from SVD) yields several novel insights about CNN robustness and interpretability (for example Figure~\ref{fig:analogous_fig_1}). We provide arguments in favor of using image rank as a potential human-aligned image robustness metric. \item We show empirically that naturally trained CNNs place a large importance on human-imperceptible higher-rank components, and that adversarial retraining increases reliance on human-aligned lower-ranked components. 
Furthermore, we demonstrate that neural networks trained on imperceptible, higher-rank features generalize to the test set. \item We provide experimental evidence that neural networks trained on low-rank images are more adversarially robust than their naturally trained counterparts for the same dataset, and capture the accuracy-robustness tradeoff in CNNs in this new lens. \item We propose \textbf{R}ank-\textbf{I}ntegrated \textbf{G}radients (RIG), the first rank-based feature attribution method. Saliency maps generated by RIG highlight features more in line with human vision and offer a new way to interpret the decisions of CNNs (Figure~\ref{fig:RIG}). \end{itemize} Our work provides a new methodology to capture model robustness, and allows us to distinguish between naturally trained and robust models outside of the traditional $L_{p}$-norm robustness framework. It suggests that data approximation strategies such as low-rank approximation can be leveraged to improve out-of-distribution CNN performance, such as against adversarial samples. Finally, we show that saliency maps that incorporate rank information highlight more visually meaningful features. We hope our work will encourage researchers to include image approximation techniques when studying CNN generalization. \section{Background and Related Work} ~\label{sec2} \vspace{-0.6cm} \begin{figure*}[ht!] \centering \includegraphics[width=0.75\textwidth]{FIGURES/NEURIPS_2020_MULTIPLE_RANKS.pdf} \caption{Low-rank approximations of the same image. Transitioning from low-rank approximations of images to higher-rank approximations yields better image quality.} \label{fig:low_rank_approximations} \end{figure*} \subsection{Notation} We consider a neural network $f( \cdot)$ used for classification where $f(x)_{i}$ represents the softmax probability that image $x$ corresponds to class $i$. 
Images are represented as $x \in [0,1]^{w \times h \times c}$, where $w, h, c$ are the width, height and number of channels of the image. We denote the classification of the network as $r(x) = \argmax_{i} f(x)_{i}$, with $r^{\ast}(x)$ representing the ground truth of the image. Given an image $x$ and an $L_{p}$ norm bound $\epsilon$, an adversarial sample $x' = x + \delta$ has the following properties: \vspace{-0.1cm}
\begin{itemize}
\item For a perturbation $\delta \in [0,1]^{w \times h \times c}$ added to an image $x$ such that $x' = x + \delta$, $L_{p}(\delta) = {(\sum_{i=1}^{h} \sum_{j=1}^{w} \left|\delta_{i,j}\right|^{p})}^{1/p} \leq \epsilon$ where $p \in \{1, 2, \infty\}$.\vspace{-0.1cm}
\begin{itemize}
\item $p=1$ is the Manhattan norm, defined as the sum of the absolute values of $\delta$.\vspace{-0.1cm}
\item $p=2$ is the Euclidean norm of $\delta$.\vspace{-0.1cm}
\item $p=\infty$ is the infinity norm or max-norm of $\delta$, defined as the largest absolute value in $\delta$. \vspace{-0.2cm}
\end{itemize}
\item $r(x') \neq r^{\ast}(x) = r(x)$. This means that the prediction on the adversarial sample is incorrect while the original prediction is correct.
\end{itemize}
\subsection{Adversarial Samples}
In this work we consider adversaries with white-box access to the neural network. In the white-box threat model all information about the neural network is accessible. Using this information, adversaries can compute gradients with respect to inputs by backpropagation. White-box attacks can be either targeted or untargeted. In targeted attacks, adversaries seek to generate an adversarial sample $x'$ from an image $x$ to force the neural network $f(x)$ to predict a pre-specified target $t$ that is different from the true class $r^{*}(x)$, while in untargeted attacks adversaries seek to find an adversarial sample $x'$ whose prediction is simply different from that of the true class.
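As a concrete check of the three norm definitions above, here is a minimal NumPy sketch; the perturbation values are hypothetical, chosen only for illustration, and are not taken from the paper's experiments:

```python
import numpy as np

# A hypothetical 2x2 single-channel perturbation delta (illustration only).
delta = np.array([[0.5, -0.5],
                  [0.0,  0.25]])

l1 = np.abs(delta).sum()            # Manhattan norm: sum of absolute values
l2 = np.sqrt((delta ** 2).sum())    # Euclidean norm
linf = np.abs(delta).max()          # max-norm: largest absolute entry

print(l1, l2, linf)  # 1.25 0.75 0.5
```

Note that under a typical $L_{\infty}$ bound such as $\epsilon = 4/255 \approx 0.016$, this illustrative $\delta$ (max entry $0.5$) would fall far outside the feasible perturbation set.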
Numerous methods to generate adversarial samples have been proposed~\cite{moosavi2016deepfool, goodfellow2014explaining, carlini2017adversarial, universal, madry2018towards}. In this work, we focus on the PGD attack with random starts, which has been shown to be an effective universal first-order adversary against neural networks~\cite{madry2018towards}. For a neural network $f$, PGD is an iterative adversarial attack method that seeks to generate a targeted adversarial sample $x'$ from an original image $x$ with maximum perturbation limit $\epsilon$. At each iteration, it performs a gradient descent step in the loss function w.r.t the image pixel values and the target class $t$ and projects the perturbed image onto the feasible space, which is either a maximum per-pixel perturbation of $\epsilon$ (for $L_{\infty}$ perturbations) or a maximum Euclidean perturbation distance from $x$ of $\epsilon$ (for $L_{2}$ perturbations). \noindent \textbf{Adversarial training.} Adversarial training defends against adversarial samples by training networks on adversarial perturbations that are generated on-the-fly. Adversarial training with $L_{\infty}$ PGD samples has been shown to be among the most effective methods in mitigating these attacks~\cite{xie2019feature, madry2018towards}. \subsection{Explaining Adversarial Samples} \noindent Recent work has begun to understand the origin of adversarial samples. Ilyas et al. demonstrate that models trained on adversarial samples can generalize to test sets~\cite{ilyas2019adversarial}, and posit that adversarial samples are generalizable features that neural networks learn which are invisible to humans. Yin et al. and Wang et al.~\cite{fourier_perspective, wang2020high} propose that adversarial samples for non-robust neural networks are in the high-frequency domain. 
Jere et al.~\cite{jere2019principal} hypothesize that adversarial samples require significantly more principal components of an image to reach the same prediction compared to natural images. Our work is most similar to that of Yin et al.~\cite{fourier_perspective}, in that we observe naturally trained CNNs are sensitive to higher-rank features, and that adversarial training makes them more biased to low-rank features. We explore the relationship between Fourier and low-rank features in the appendix. \begin{figure} \centering \includegraphics[width=0.45\textwidth]{FIGURES/IDX_16_stacked.pdf} \caption{Accuracy for full-rank and rank-10 images. Top left is the full-rank image, top-right is its rank-10 approximation, bottom left are top-5 predictions for a naturally trained ResNet-50 on the rank-10 image, bottom right are top-5 predictions for an $L_{\infty}=4/255$ adversarially robust ResNet-50 on the rank-10 image. Note that the adversarially robust version makes better predictions on rank-based distorted versions of the same image, even though they do not fall under an $L_{p}$-distortion framework.} \label{fig:analogous_fig_1} \vspace{-0.4cm} \end{figure} \subsection{Feature Attribution Methods} The problem of feature attribution seeks to \textit{attribute the prediction of deep neural networks to its input features}. Most methods of feature attribution involve variants of visualizing the gradients $\frac{\partial f}{ \partial x}$ of the network with respect to the top predicted class $i$~\cite{selvaraju2017grad, shrikumar2017learning, binder2016layer}. A significant challenge in designing attribution techniques is that attributions without respect to a fixed baseline are hard to evaluate, a problem which was successfully addressed by Integrated Gradients~\cite{sundararajan2017axiomatic}. 
In Integrated Gradients, the baseline for an image is established with respect to a completely black image $\Tilde{x}$, and a straight line is defined from $\Tilde{x}$ to $x$ with increasing brightness values. Gradients are weighted and computed at each of these steps to result in the final saliency map, which is mathematically equivalent to the path integral along a straight-line path from the baseline to the input. In our work, we perform a similar evaluation with our baseline set by the minimum \textit{rank} of an image, and with gradients computed along increasing image rank. Further details can be found in Section~\ref{sec:RIG}, with numerous examples of RIG saliency maps in Figures~\ref{fig:RIG},~\ref{fig:RIG2} and in the appendix.
\subsection{Low-rank approximations}
\noindent Low-rank representations of matrices (obtained via the Singular Value Decomposition) can capture a significant amount of information while simultaneously eliminating spurious correlations, and have recently been used in several Deep Learning applications, such as compression of CNN filters~\cite{denton2014exploiting} and compression of internal representations of attention matrices in transformers~\cite{choromanski2020rethinking, wang2020linformer}.
\section{Methodology}
~\label{sec3}
\vspace{-0.4cm}
In this section we first introduce the theoretical basis behind singular value decomposition, followed by the algorithm used in the rest of the paper to perform low-rank decomposition of RGB images.
\subsection{Eigendecomposition of images}
Eigendecomposition is commonly used to factor matrices into a canonical form, where the matrix is represented in terms of its eigenvalues and eigenvectors. In this work, we focus on utilizing the Singular Value Decomposition (SVD) to obtain low-rank approximations of an input to a neural network.
In particular, let an image $x \in [0,1]^{w \times h \times c}$, where $w, h, c$ are the width, height and number of channels of the image respectively, and $w \leq h$. For each channel $m \in \{1, 2, \ldots, c\}$ the singular value decomposition on the matrix $A \in [0,1]^{w \times h}$ yields: \begin{equation} A = U \Sigma V^{T} \end{equation} $U$ and $V$ are orthonormal matrices, and $\Sigma$ is a $w \times h$ ($w \leq h$) diagonal matrix with entries $(\sigma_{1}, \sigma_{2}, \ldots , \sigma_{w})$ denoting the singular values of $A$ such that $\sigma_{1} \geq \sigma_{2} \geq \cdots \geq \sigma_{w} \geq 0$. According to the Eckart-Young-Mirsky theorem, the best rank-$k$ approximation to the matrix $A$ in the spectral norm $||\cdot||_{2}$ is given by: \begin{equation} A_{k} = \sum_{j=1}^{k} \sigma_{j} u_{j} v_{j}^T \end{equation} where $u_{j}, v_{j}$ denote the $j$th column of $U$ and $V$ respectively. This process is also termed \textit{truncated SVD}. For an $n$-rank matrix $A$, its rank-$k$ approximation can also be expressed as $A_k = U \Sigma_{k} V^{T}$, where $\Sigma_{k}$ is constructed from $\Sigma$ by setting the smallest $n-k$ diagonal entries to zero.
\subsection{Algorithm}
We define the top class $r(x) = \argmax_{i} f(x)_{i}$ as the predicted class on the full-rank image $x$. For an image $x \in [0,1]^{w \times h \times c}$, we perform singular value decomposition for each color channel $c$, reconstruct a low-rank approximation using $k$ singular values, and perform inference on the rank-$k$ image. Algorithm~\ref{alg:algorithm_truncation} highlights the steps that occur, and Figure~\ref{fig:low_rank_generation_methodology} illustrates the steps involved in generating a rank-$k$ image by truncating each of the RGB channels and reconstructing the image. Experimental results regarding latency and runtime can be found in the appendix.
\begin{algorithm}[ht!]
\SetAlgoLined \KwResult{Accuracies for each rank $k=0:w$} $full\_rank\_preds \leftarrow f(x_{1:N})$ \\ $rank\_k\_acc \leftarrow zeros(w+1)$ \\ \For{$k = 0:w$}{ $rank\_k\_x = zeros\_like(x_{1:N})$ \\ \For{$i = 1:N$} { \For{$channel = 1:c$} { $u, \sigma, v = SVD(x[i][channel])$ \\ $\sigma[k:w] = 0$ \\ $rank\_k\_x[i][channel] = u \ diag (\sigma) \ v$ \\ } } $rank\_k\_acc[k] = (f(rank\_k\_x) == full\_rank\_preds)$ } \Return $rank\_k\_acc$ \caption{Finding the accuracies of a model $f(\cdot)$ for a batch of images $x_{1:N}$, where $x_i \in [0,1]^{w \times h \times c}$ as a function of the input rank.} \label{alg:algorithm_truncation} \end{algorithm} \section{Towards robustness metrics beyond $L_{p}$ distortions} ~\label{sec4} In this section we highlight several limitations of $L_{p}$ distortions, followed by experimental results for naturally trained and adversarially robust CNNs. Finally, we introduce Rank-Integrated Gradients. \subsection{Limitations of $L_{p}$ distortions} Extensive experiments have been conducted to secure neural networks against $L_{p}$-norm bounded perturbations, such as adversarial training~\cite{madry2018towards,kannan2018adversarial,xie2019feature,shafahi2019adversarial} and certified defenses against adversarial samples~\cite{raghunathan2018certified,wong2017provable,zhang2019theoretically,cohen2019certified}. Unfortunately, $L_{p}$-distortions represent a small fraction of potential image modifications. An infinite number of modified images exist that possess identical norm-bounded perturbations with respect to a base image. Furthermore, identical $L_p$ norm-bounded distortions may be extremely different perceptually, pointing to $L_{p}$-norm robustness potentially being misaligned with human perception (Figure~\ref{fig:distorted_kitty}). 
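The per-channel truncation at the heart of the algorithm above can be sketched in a few lines of NumPy. This is our own illustrative implementation (the function name, shapes, and the random stand-in "image" are ours), not the authors' released code:

```python
import numpy as np

def rank_k_approx(image, k):
    """Best rank-k approximation of each color channel via truncated SVD.

    image: float array of shape (h, w, c); returns an array of the same shape.
    """
    out = np.zeros_like(image, dtype=float)
    for ch in range(image.shape[2]):
        u, s, vt = np.linalg.svd(image[:, :, ch], full_matrices=False)
        s = s.copy()
        s[k:] = 0.0                    # zero out the smallest singular values
        out[:, :, ch] = (u * s) @ vt   # reconstruct A_k = U diag(s) V^T
    return out

rng = np.random.default_rng(0)
x = rng.random((32, 32, 3))            # a random stand-in "image"
x5 = rank_k_approx(x, 5)

# Each channel of the result has numerical rank at most k, and (by
# Eckart-Young-Mirsky) the reconstruction error shrinks as k grows.
print(np.linalg.matrix_rank(x5[:, :, 0]))
print(np.linalg.norm(x - x5) > np.linalg.norm(x - rank_k_approx(x, 20)))
```

Batching this over a dataset and comparing predictions against the full-rank predictions, as the algorithm does, then only requires wrapping the function in a loop over images and ranks.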
Based on these observations and limitations, we argue that image rank and rank-based robustness metrics might be better suited to capture image modifications than $L_{p}$-distortions for the following reasons. \begin{itemize} \item Matrix rank for an image $x \in [0,1]^{w \times h \times c}$ ($w \leq h$) is restricted to the set of integers ${1,2,...,w}$. The set of images generated by rank truncation is a bijective mapping from rank $k$ to generated images $x_{k}$. This is in contrast to images generated with $L_{p}$ distortions whose mapping contains infinite possibilities. \item Low-rank image approximation effectively captures a much larger range of image modifications (Figure~\ref{fig:low_rank_approximations}) that is more perceptually aligned with human vision that might not be captured with $L_{p}$ distortions. Transitioning from low-rank approximations of images (unrecognizable to humans) to higher-rank approximations (recognizable to both DNNs and humans) effectively allows us to better understand the gap between human and computer vision. \end{itemize} \subsection{Rank Dependence of CNNs} \begin{figure} \centering \includegraphics[width=0.4\textwidth]{FIGURES/RESNET_50_RANK_SPECTRUM.pdf} \caption{Dependence of test accuracy of naturally trained and $L_\infty=4/255$ robust ResNet-50 models on input rank of natural images (subset of ImageNet validation images). } \vspace{-0.4cm} \label{fig:imagenet_resnet_50_rank_distorted} \end{figure} We seek to understand the dependence of image rank and classifier accuracy for ResNet-50. We observe that naturally trained and robust CNNs use features that are highly different in their rank properties. Particularly, naturally trained CNNs use high-rank features that are often invisible to humans, while robust CNNs do not respond to these features but rather rely on highly visible low-rank features. 
For both models we randomly sampled $1000$ images from the ImageNet validation set and cropped and reshaped each image to have a shape of $(224 \times 224 \times 3)$. For each image, we performed low-rank approximation for every possible rank prior to inference according to Algorithm~\ref{alg:algorithm_truncation}. \vspace{-0.2cm}
\subsubsection{Behavior of Naturally Trained CNNs}
We investigate the behavior of the ImageNet-trained ResNet-50~\cite{he2016deep}, VGG-19~\cite{simonyan2014very} and DenseNet-201~\cite{huang2017densely} (Table~\ref{tab:experimental_setup}) CNN architectures trained on the ImageNet dataset~\cite{imagenet_classification_first}. Experimental results for the VGG-19 and DenseNet models can be found in the Appendix. In Figure~\ref{fig:imagenet_resnet_50_rank_distorted} we observe the top-1 accuracy for ResNet-50 (orange) on these truncated images for both naturally trained and robust models. We make the following notable observations for this as well as for Figures~\ref{fig:cifar_10_rank_spectrum} and~\ref{fig:imagenet_rank_spectrum}:
\begin{itemize}
\item Classifier accuracy sharply increases for lower-ranked images (rank-$50$ to rank-$100$) followed by saturation around rank-$100$ for ImageNet trained models. We observe similar behavior at rank-$15$ for CIFAR-10 trained models (Figure~\ref{fig:cifar_10_rank_spectrum}).
\item We observe that the features corresponding to this increase in accuracy, namely rank-$50$ to rank-$100$, contribute no meaningful semantic content to the image (Figure~\ref{fig:low_rank_approximations}), indicating that naturally trained CNNs exploit features that are often invisible to humans (Figure~\ref{fig:imagenet_rank_spectrum}).
\end{itemize}
\vspace{-0.4cm}
\begin{figure*}[ht!]
\centering \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/CIFAR_10_rank_spectrum_MULTIPLE.pdf} \caption{} \label{fig:cifar_10_rank_spectrum} \end{subfigure} \hspace{20pt} \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/IMAGENET_rank_spectrum_MULTIPLE.pdf} \caption{} \label{fig:imagenet_rank_spectrum} \end{subfigure} \hspace{20pt} \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/ACCURACY_GAP_CIFAR_10_rank_spectrum.pdf} \caption{} \label{fig:cifar_10_rank_spectrum_gap} \end{subfigure} \hspace{20pt} \begin{subfigure}[b]{0.33\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/ACCURACY_GAP_IMAGENET_rank_spectrum.pdf} \caption{} \label{fig:imagenet_rank_spectrum_gap} \end{subfigure} \caption{(a) CIFAR-10 rank spectrum for naturally trained and robust ResNet-50 models (b) ImageNet rank spectrum for naturally trained and adversarially robust ResNet-50 models. (c) Gap in test accuracy between robust and naturally trained CIFAR-10 ResNet-50 models. (d) Gap in test accuracy between robust and naturally trained ImageNet ResNet-50 models.} \end{figure*} \subsubsection{Behavior of Adversarially Robust CNNs} We investigate the behavior of pretrained adversarially robust CNNs from the robustness library~\cite{robustness} as highlighted in column 2 of Table~\ref{tab:experimental_setup}. Our results for an $L_{\infty}=4/255$ robust ResNet-50 model can be seen in Figure~\ref{fig:imagenet_resnet_50_rank_distorted} (blue). In contrast to naturally trained models we observe very different behaviors for robust models. Notably, we observe that: \vspace{-0.2cm} \begin{itemize} \item Robust CNN accuracy for full-rank images is lower than that of naturally trained CNNs for both ImageNet and CIFAR-10 trained models (which has been observed in previous work~\cite{tsipras2018robustness}). 
\item Robust CNN accuracy for lower-rank images is significantly higher than that of naturally trained CNNs, with a $>20\%$ validation-set accuracy improvement on both datasets (Figures~\ref{fig:cifar_10_rank_spectrum_gap} and~\ref{fig:imagenet_rank_spectrum_gap}). \item Robust CNN accuracy increases much more quickly than that of naturally trained CNNs for lower-rank images, and does not exhibit the same dependence on features between rank-$50$ and rank-$100$ for the ImageNet dataset. \end{itemize} To the best of our knowledge, ours is the first work to identify the rank-accuracy tradeoff and the superior performance of robust models on lower-rank images. We further observe in Figures~\ref{fig:cifar_10_rank_spectrum} and~\ref{fig:imagenet_rank_spectrum} that this rank behavior persists across CNNs trained with different $L_{\infty}$ bounds, different $L_{p}$-norm metrics and different datasets. While there exist minor differences between the rank behavior of $L_{\infty}$ and $L_{2}$ robust CNNs, their behaviors are largely distinct from those of naturally trained CNNs. \begin{table*}[ht!]
\centering \begin{tabular}{c|c|c|c|c|c|c|c} \hline & \multicolumn{3}{c|}{\textbf{CIFAR-10}} & \multicolumn{4}{c}{\textbf{ImageNet}} \\ \hline \textbf{\begin{tabular}[c]{@{}c@{}}Trained\\ on\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Attack \\ success rate\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Recovery\\ rate\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\ accuracy\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Trained\\ on\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Attack\\ success rate\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Recovery\\ rate\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Top-1\\ accuracy\end{tabular}} \\ \hline \textbf{\begin{tabular}[c]{@{}c@{}}Full rank\end{tabular}} & 99.93\% & 0.03\% & 95.21\% & \textbf{Full rank} & 95.87\% & 0.01\% & 78.35\% \\ \hline \textbf{20} & 99.70\% & 0.15\% & 95.41\% & \textbf{100} & 81.81\% & 7.07\% & 73.99\% \\ \hline \textbf{10} & 99.43\% & 0.19\% & 94.90\% & \textbf{50} & 73.99\% & 8.57\% & 70.16\% \\ \hline \textbf{5} & 99.27\% & 0.34\% & 91.54\% & \textbf{30} & 73.19\% & 5.15\% & 69.07\% \\ \hline \end{tabular} \caption{Robustness of low-rank CIFAR-10 and ImageNet-trained ResNet-50 models. Attack success rate and recovery rate measured for targeted $20$-step PGD attacks with $L_{\infty}$-bounds of $4/255$. Top-1 accuracy is measured for full-rank test sets.} \label{tab:robustness_measurement_joint} \end{table*} \subsection{Rank Integrated Gradients (RIG)} ~\label{sec:RIG} \vspace{-0.02cm} Based on these observations, we seek to visualize the rank dependency of CNNs. Generating visual explanations for CNN image classifiers typically involves computing saliency maps, such as GradCAM and guided-backprop~\cite{simonyan2013deep, selvaraju2017grad, springenberg2014striving}, that take the gradient of the output corresponding to the correct class with respect to a given input.
However, such methods often only capture local explanations for a given image, and are not robust to perturbations of the original image. Other methods involve training simpler, more interpretable surrogate models~\cite{lundberg2017unified, ribeiro2016should} to understand model predictions in a local neighborhood around a given input, but these can neither capture rank-based image modifications nor scale to models such as ResNets. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{FIGURES/RIG_2.pdf} \caption{Rank Integrated Gradient images. In the second row, vanilla gradients highlight features in the background that contribute to the top class but have no meaningful semantic content. RIG highlights these features as well as object-specific features, such as the bird's beak.} \label{fig:RIG2} \end{figure} Feature attribution methods that seek to explain network predictions while remaining invariant to input perturbations and to implementation details of the network~\cite{sundararajan2017axiomatic} have been successful in attributing predictions to image pixels. In particular, Blur Integrated Gradients (BIG)~\cite{xu2020attribution} has been effective in capturing feature attributions through Gaussian-blurred versions of the original image. Our work is most similar to BIG, but differs in calculating gradients through low-rank representations of an image rather than Gaussian-blurred ones. Intuitively, our technique weighs low-rank representations by their contributions to the gradients of an input for the top class predicted on the full-rank input. Formally, let $f$ denote an image classifier and $x \in [0,1]^{w \times h \times c}$ an input image with $w \leq h$, and let $x_{k}$ be the rank-$k$ image obtained by Algorithm~\ref{alg:algorithm_truncation}. Let $i$ be the top predicted class for the full-rank image $x_{w}$.
Let $\left(\frac{\partial f(x_k)}{\partial x_k}\right)_{i}$ denote the maximum gradient across all color channels $c$. Then, our method computes RIG as: \vspace{-0.08cm} \begin{equation} RIG(x,f,i) = \sum_{k=1}^{w} \frac{w - k}{w} \times \left(\frac{\partial f(x_k)}{\partial x_k}\right)_{i} \end{equation} \noindent RIG requires no modification to the model and is extremely easy to implement, requiring fewer than $10$ lines of PyTorch code and a few calls to the gradient operation, thereby allowing even novice practitioners to easily apply the technique. Examples can be found in Figures~\ref{fig:RIG} and~\ref{fig:RIG2}. \section{Transferability of Rank-Based Features} ~\label{sec5} \vspace{-0.3cm} Motivated by the disparities between the behavior of naturally trained and adversarially robust CNNs in Section~\ref{sec4}, we proceeded to test the following hypotheses: \begin{itemize} \item Do CNNs trained solely on high-rank representations generalize to a full-rank test set? \item Do CNNs trained solely on low-rank representations generalize to a full-rank test set? \item Do CNNs trained solely on low-rank representations improve robustness to $L_{p}$-norm bounded attacks? \end{itemize} \subsection{Training on solely high-rank representations} \label{subsection:train_high_rank} \vspace{-0.1cm} We conducted experiments to test the hypothesis: \textit{Do CNNs trained solely on high-rank representations generalize to the test set?} We modified Algorithm~\ref{alg:algorithm_truncation} to zero out the $k$ largest singular values (instead of the $k$ smallest), thereby creating images that consist solely of higher-rank features that are largely imperceptible and difficult to interpret even when visualized (Figure~\ref{fig:reverse_truncated_rank}). We trained the ResNet-50 architecture on a modified version of the CIFAR-10 dataset consisting solely of these higher-rank representations, and evaluated it on the full-rank CIFAR-10 test set.
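This modified truncation is a one-line change to the singular value masking of Algorithm~\ref{alg:algorithm_truncation}; a minimal NumPy sketch of the per-channel step (the function name is illustrative, not our exact implementation):

```python
import numpy as np

def drop_top_k_singular_values(img, k):
    """Zero out the k largest singular values of each color channel of a
    (h, w, c) image, keeping only the higher-rank residual features."""
    out = np.empty_like(img)
    for c in range(img.shape[2]):
        U, s, Vt = np.linalg.svd(img[..., c], full_matrices=False)
        s[:k] = 0.0                  # remove the k dominant components
        out[..., c] = (U * s) @ Vt   # equivalent to U @ diag(s) @ Vt
    return out
```

For small $k$ the residual retains little visually recognizable structure, consistent with Figure~\ref{fig:reverse_truncated_rank}.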
Each network was trained for $350$ epochs using SGD with a learning rate of $0.1$, momentum of $0.9$, and weight decay of $0.0005$. We decreased the learning rate by a factor of $10$ after the $150$th and $250$th epochs. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{FIGURES/cifar_10_reverse_truncated.pdf} \caption{Reverse rank-truncated CIFAR-10 image. The middle column shows images where the $3$ largest singular values were set to $0$; the right column shows the difference between the original and truncated images.} \vspace{-0.3cm} \label{fig:reverse_truncated_rank} \end{figure} We observe that for training images where the first $3$ singular values are truncated (Figure~\ref{fig:reverse_truncated_rank}), we get a full-rank test accuracy of $28.02\%$. Such above-chance accuracy shows that high-rank representations of an image contain meaningful features for generalization. However, this accuracy quickly decreases as we increase the number of truncated singular values. Further details can be found in the Appendix. \subsection{Training on low-rank representations} \label{sec:training_low_rank} We conducted experiments to address the hypothesis: \textit{Do CNNs trained solely on low-rank representations generalize to the test set?} We trained ResNet-50 models for CIFAR-10 and ImageNet solely on low-rank representations and observed their accuracy on the held-out full-rank test sets. \vspace{-0.3cm} \subsubsection{CIFAR-10} \vspace{-0.1cm} We trained ResNet-50 models on low-rank representations with the same hyperparameters as in Section~\ref{subsection:train_high_rank}. We observe that low-rank representations are sufficient to achieve test accuracy of more than $90\%$ on the full-rank CIFAR-10 test set. Surprisingly, training on rank-$5$ CIFAR-10 images yields a test accuracy of $91.54\%$, which is only $3.67\%$ lower than when trained on a full-rank dataset (Table~\ref{tab:robustness_measurement_joint}).
Increasing the rank of training-set images quickly closes this gap, and rank-$20$ images have almost identical test accuracy to full-rank images, indicating that the high-rank information in images is largely irrelevant for making predictions for CIFAR-10. This corroborates the results we obtained from Figure~\ref{fig:cifar_10_rank_spectrum}, where we observed that test accuracy does not rely on features past rank-$15$ for CIFAR-10. Further experimental details can be found in the Appendix. \vspace{-0.3cm} \subsubsection{ImageNet} \vspace{-0.1cm} We trained ResNet-50 using the same hyperparameters as described in the original ResNet-50 paper~\cite{he2016deep} on low-rank versions of the ImageNet dataset. Specifically, we trained on rank-$30$, $50$, and $100$ representations (Table~\ref{tab:robustness_measurement_joint}). Despite rank-$100$ and full-rank images being visually identical, we observe that the full-rank validation set accuracy for rank-$100$-trained ResNet-50 is $4.3\%$ lower than that of ResNet-50 trained on full-rank images. This indicates that the ImageNet data consists of a large number of imperceptible, high-rank features that do not contain semantically meaningful content but contribute to test accuracy. \subsection{Robustness as an emergent property of low-rank representations} In this section we conducted experiments to test the hypothesis: \textit{Do CNNs trained solely on low-rank representations improve adversarial robustness to $L_{p}$-norm bounded attacks?} To tackle this, we performed $20$-step, $L_{\infty} = 4/255$ PGD adversarial attacks on the low-rank CIFAR-10 and ImageNet-trained ResNet-50 models from Section~\ref{sec:training_low_rank}. Our experimental results can be found in Table~\ref{tab:robustness_measurement_joint}. Notably, we observe that adversarial robustness to $L_{\infty}$ attacks improves with training on lower-ranked image representations for ImageNet. However, this does not hold true for CIFAR-10-trained models.
Furthermore, strategies such as adversarial training~\cite{madry2017towards} or feature denoising~\cite{xie2019feature} offer superior performance to training on low-rank representations. \section{Discussion} \label{sec6} Prior work on interpreting adversarial samples~\cite{ilyas2019adversarial, Wang_2020_Frequency} hypothesized that images consist of \textit{robust} and \textit{non-robust} features, where robust features are largely visible to humans while non-robust features are not. Further work argues that robustness leads to improved feature representations~\cite{salman2020adversarially}. Our findings appear to support these claims. Specifically, we observe that due to their large contribution to image quality and to the predictive performance of robust networks, low-rank features are analogous to \textit{robust} features and can be generated through low-rank truncation. Conversely, higher-ranked features, which do not contribute to robust network predictions, are analogous to \textit{non-robust} features. Furthermore, we observe that quantifying network robustness through $L_{p}$ perturbations does little to capture the massive range of possible image modifications, and often runs into the issue of multiple perceptually different images having identical $L_{p}$ distortions. Rank-based image modifications capture a much larger range of image modifications while offering a one-to-one mapping from modification parameter to perceptual representation. With respect to feature attribution, we observe that saliency maps that leverage rank information in images are much more aligned with human vision than conventional vanilla gradients, and offer a new lens for understanding the inner workings of these image classifiers. We hypothesize that other forms of matrix decomposition may similarly yield more perceptually meaningful visualizations.
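To make the rank-based attribution concrete, the RIG computation of Section~\ref{sec:RIG} can be sketched in a few lines of PyTorch (a minimal sketch under assumed tensor shapes; the helper names are illustrative, not our exact code):

```python
import torch

def rank_k(x, k):
    # Per-channel rank-k truncation of a (c, h, w) image via SVD,
    # in the spirit of Algorithm 1.
    U, S, Vh = torch.linalg.svd(x, full_matrices=False)
    S = S.clone()
    S[:, k:] = 0.0  # keep only the k largest singular values
    return U @ torch.diag_embed(S) @ Vh

def rig(model, x, target, w):
    # Sum the target-class gradients over all rank-k versions of x,
    # weighting lower ranks more heavily by (w - k) / w.
    total = torch.zeros(x.shape[-2:])
    for k in range(1, w + 1):
        xk = rank_k(x, k).detach().requires_grad_(True)
        model(xk.unsqueeze(0))[0, target].backward()
        total += (w - k) / w * xk.grad.max(dim=0).values  # max over channels
    return total
```

The loop is the only overhead: one forward/backward pass per retained rank.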
\section{Conclusion} \label{sec7} Closing the gap between computer and human vision is a challenging, open problem. Human vision remains robust under a variety of image transformations, while neural-network-based computer vision is still fragile to small $L_{p}$-norm-bounded perturbations, which furthermore do not capture the full range of image modifications. We demonstrate the need for robustness metrics beyond these perturbations, and make several arguments in favor of using image rank (as obtained by SVD) as a potential alternative. We demonstrate several behavioral differences between naturally trained and adversarially robust CNN classifiers, in terms of their generalization, that could not be captured in an $L_{p}$-bound framework. Finally, we propose a simple rank-based feature attribution technique that produces gradient visualizations that are much more perceptually informative than conventional saliency maps. \section{Acknowledgements} ~\label{sec8} \noindent This work was supported by the Semiconductor Research Corporation (AUTO TASK 2899.001) and a Defense Advanced Research Projects Agency (DARPA) Techniques for Machine Vision Disruption (TMVD) grant. \section{Appendix} ~\label{sec9} \subsection{Relationship between Fourier Features and Rank-based features} \noindent \textbf{Statement:} We hypothesize that the rank of a matrix obtained from a $(k\times k)$ low-pass filter in the frequency domain is upper bounded by $k$. \noindent \textbf{Proof.} Let a matrix $X \in [0,1]^{w\times h} = U \Sigma V^{T}$ with rank $r(X) = \min(w,h)$ exist in the spatial domain, and let the Fourier transform operation be represented as $F(\cdot)$. Due to its linearity, the Fourier transform of $X$ can be expressed as $F(X) = W X$. Let us denote the low-pass filtering operation with a window of size $k$ as $L$. By definition, the window has a maximum rank of $k$.
Then, we can express the $k$-window low-pass filtered version of $X$ as $\tilde Y = L W X$ in the frequency domain, and its spatial domain representation as $\tilde X = W^{-1} L W X$. The rank of $\tilde X$ can be expressed as $r(\tilde X) = r(W^{-1} L W X)$. By the rank property of matrix multiplication, $r(AB) \leq \min(r(A), r(B))$. Therefore, \begin{equation}\label{eq:pareto mle2} \begin{aligned} r(W^{-1} L W X) &\leq \min\left(r(W^{-1}), r(L W X)\right) \leq r(L W X), \\ r(L W X) &\leq \min\left(r(L W), r(X)\right) \leq r(L W), \\ r(L W) &\leq \min\left(r(L), r(W)\right) = r(L) \leq k, \\ \text{so}\quad r(W^{-1} L W X) &\leq r(L W X) \leq r(L W) \leq r(L) \leq k. \end{aligned} \end{equation} Thus, $r(\tilde X) \leq k$. \subsection{Rank Integrated Gradients} We provide several more examples of RIG saliency maps for robust and non-robust ResNet-50 models here (Figures~\ref{fig:RIG_EXPLAINED},~\ref{fig:RIG_joint},~\ref{fig:rig1},~\ref{fig:rig2}). RIG highlights rank-based features that are more perceptually aligned than vanilla gradients for naturally trained as well as adversarially robust networks. \begin{figure}[ht!] \centering \includegraphics[width=0.47\textwidth]{FIGURES/RIG_EXPLAINED.pdf} \caption{RIG generation.} \label{fig:RIG_EXPLAINED} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=0.47\textwidth]{FIGURES/joint.pdf} \caption{Comparison of RIG for naturally trained and adversarially robust neural networks. Adversarially robust neural networks have image representations that are much more aligned with human perception, as previously observed in~\cite{tsipras2018robustness}.} \label{fig:RIG_joint} \end{figure} \subsection{Runtime measurements for low-rank approximation} \noindent There is minimal overhead to generating low-rank representations of images; the distribution of generation times over $10$ images across all possible ranks is shown in Figure~\ref{fig:time_taken}.
The time required to generate a rank-$k$ approximation is independent of $k$, and generating an arbitrary rank-$k$ representation of a $(224 \times 224 \times 3)$ RGB image for ImageNet inference takes less than $1$ second on an NVIDIA TITAN Xp GPU. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{FIGURES/time_taken.pdf} \caption{Time required to generate low-rank approximations of $(224 \times 224 \times 3)$ RGB images for ImageNet. Averaged across all possible ranks for $10$ images.} \label{fig:time_taken} \end{figure} \begin{table}[ht!] \centering \begin{tabular}{c|c} \hline \textbf{\begin{tabular}[c]{@{}c@{}}\# of largest\\ singular values\\ truncated\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Test \\ accuracy\end{tabular}} \\ \hline \textbf{2} & 31.71\% \\ \hline \textbf{3} & 28.02\% \\ \hline \textbf{5} & 13.54\% \\ \hline \textbf{10} & 11.19\% \\ \hline \end{tabular} \caption{Test accuracy on the full-rank CIFAR-10 test set for ResNet-50 trained on images with the largest singular values removed.} \label{tab:truncated_large_vals} \end{table} \subsection{Experimental results for VGG-19 and DenseNet} We observe that other state-of-the-art models such as DenseNet~\cite{huang2017densely} and VGG-19~\cite{simonyan2014very} have similar rank behavior to ResNet-50 models. In particular, we observe in Figure~\ref{fig:multiple_models_rank_spectrum} that VGG-19 is more biased towards higher-ranked representations, indicating a potentially larger vulnerability to adversarial examples. \begin{figure}[ht!] \centering \includegraphics[width=0.45\textwidth]{FIGURES/OTHER_MODELS_IMAGENET_rank_spectrum.pdf} \caption{Rank spectrum for naturally trained ResNet-50, DenseNet-201, VGG-19 and GoogleNet architectures compared to an $L_{\infty}=8/255$ robust ResNet-50.
} \label{fig:multiple_models_rank_spectrum} \end{figure} \subsection{Training on solely high-rank representations} Table~\ref{tab:truncated_large_vals} lists full-rank test accuracies for ResNet-50 trained on modified versions of the CIFAR-10 dataset in which the largest $k$ singular values of each image are deleted, leaving only higher-ranked features. Test accuracy as a function of training epoch can be found in Figure~\ref{fig:high_rank_training}. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{FIGURES/high_rank_cifar10_training.pdf} \caption{Full-rank test accuracy for ResNet-50 models trained on CIFAR-10 images with the largest $10$, $5$, $3$, or $2$ singular values deleted from each training image.} \label{fig:high_rank_training} \end{figure} \subsection{Training on solely low-rank representations} \subsubsection{ImageNet} Figure~\ref{fig:low_rank_training_imagenet} shows full-rank test accuracy for ResNet-50 trained on modified low-rank versions of the ImageNet dataset. As expected, model performance increases with the rank of the training images. We show this for ranks $30$, $40$, $50$, and $100$. For efficient training of ResNet-50 at the various ranks, we pre-process and store the low-rank copies of ImageNet. We trained each network for $24$ hours on $4$ NVIDIA V100 GPUs. \subsubsection{CIFAR-10} Figure~\ref{fig:low_rank_training} shows full-rank test accuracy for ResNet-50 trained on modified versions of the CIFAR-10 dataset. Models trained on rank-$10$, rank-$20$, and full-rank images have identical test accuracies, indicating that higher-ranked features contribute less to prediction for CIFAR-10 than for ImageNet. \begin{figure}[h!] \centering \includegraphics[width=0.45\textwidth]{FIGURES/ImageNet_full_rank_test_acc.png} \caption{Full-rank test accuracy for ResNet-50 models trained on rank-$30$, $40$, $50$, and $100$ ImageNet datasets.} \label{fig:low_rank_training_imagenet} \end{figure} \begin{figure}[h!]
\centering \includegraphics[width=0.45\textwidth]{FIGURES/low_rank_cifar10_training.pdf} \caption{Full-rank test accuracy for ResNet-50 models trained on rank-$3,5,10,20$ CIFAR-10 datasets.} \label{fig:low_rank_training} \end{figure} \begin{figure*}[ht!] \centering \hspace{20pt} \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/naturally_trained_rig_plots/correctly_classified_1.pdf} \caption{Original label: Bagel} \end{subfigure} \hspace{20pt} \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/naturally_trained_rig_plots/correctly_classified_6.pdf} \caption{Original label: Basset hound} \end{subfigure} \hspace{20pt} \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/naturally_trained_rig_plots/correctly_classified_5.pdf} \caption{Original label: Bulbul} \end{subfigure} \hspace{20pt} \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/naturally_trained_rig_plots/correctly_classified_3.pdf} \caption{Original label: Long-horned beetle} \end{subfigure} \caption{RIG plots for naturally trained ResNet-50 in PyTorch for images randomly chosen from the ImageNet validation set.} \label{fig:rig1} \end{figure*} \begin{figure*}[ht!] 
\centering \hspace{20pt} \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/robust_trained_rig_plots/incorrectly_classified_7.pdf} \caption{Original label: grey fox} \end{subfigure} \hspace{20pt} \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/robust_trained_rig_plots/incorrectly_classified_30.pdf} \caption{Original label: thunder snake} \end{subfigure} \hspace{20pt} \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/robust_trained_rig_plots/incorrectly_classified_38.pdf} \caption{Original label: lesser panda, red panda} \end{subfigure} \hspace{20pt} \begin{subfigure}[b]{0.7\textwidth} \centering \includegraphics[width=\textwidth]{FIGURES/robust_trained_rig_plots/incorrectly_classified_2.pdf} \caption{Original label: bagel} \end{subfigure} \caption{RIG plots for $L_{2}=3.0$ robust ResNet-50~\cite{robustness} for images randomly chosen from the ImageNet validation set.} \label{fig:rig2} \end{figure*} \section{Introduction} \documentclass[final]{cvpr} \usepackage{times} \usepackage{epsfig} \usepackage{graphicx} \usepackage{amsmath} \usepackage{amssymb} \DeclareMathOperator*{\argmax}{arg\,max} \usepackage[ruled,vlined]{algorithm2e} \usepackage{algorithmic} \usepackage{caption} \usepackage{subcaption} \captionsetup{compatibility=false} \usepackage[T1]{fontenc} \pdfoutput=1 \usepackage[pagebackref=true,breaklinks=true,colorlinks,bookmarks=false]{hyperref} \def\cvprPaperID{10725} \defCVPR 2021{CVPR 2021} \begin{document} \title{A Singular Value Perspective on Model Robustness} \author{Malhar Jere\\ UC San Diego\\ {\tt\small mjjere@ucsd.edu} \and Maghav Kumar\\ UIUC\\ {\tt\small mkumar10@illinois.edu} \and Farinaz Koushanfar\\ UC San Diego\\ {\tt\small fkoushanfar@ucsd.edu}} \maketitle \begin{abstract} Convolutional Neural Networks (CNNs) have made significant progress on several computer vision benchmarks, but are fraught with numerous non-human biases 
such as vulnerability to adversarial samples. Their lack of explainability makes identification and rectification of these biases difficult, and understanding their generalization behavior remains an open problem. In this work we explore the relationship between the generalization behavior of CNNs and the Singular Value Decomposition (SVD) of images. We show that naturally trained and adversarially robust CNNs exploit highly different features for the same dataset. We demonstrate that these features can be disentangled by SVD for ImageNet and CIFAR-10 trained networks. Finally, we propose \textbf{R}ank \textbf{I}ntegrated \textbf{G}radients \textbf{(RIG)}, the first rank-based feature attribution method to understand the dependence of CNNs on image rank. \end{abstract} \vspace{-20pt} \input{1_introduction} \input{2_background} \input{3_methodology} \input{4_rank_truncation} \input{5_transferability} \input{6_discussion} \input{7_conclusion} \input{8_acknowledgements} {\small \bibliographystyle{ieee_fullname}
{ "redpajama_set_name": "RedPajamaArXiv" }
7,226
Maloney on Consequences and Challenges of Truncated Presidential Transition By The National Herald Congresswoman Carolyn Maloney. (Photo: Courtesy of Maloney for Congress) WASHINGTON, DC – Below is Committee on Oversight and Reform Chairwoman Carolyn B. Maloney's opening statement, as prepared for delivery, for the December 10 Subcommittee hearing to examine the ongoing presidential transition, its challenges, and lessons already learned that can improve future transitions. Thank you, Chairman Connolly, for highlighting the many issues raised by an unstable transition. An outgoing President should make every effort to assist and prepare the incoming administration to take office— for the good of the country and for our national security. Instead of working to ensure the orderly transfer of power to the winner of the election, President Trump has been attacking the validity of the election and subverting the transition process. These actions are not only reprehensible and shocking, they are dangerous. But I'm sorry to say that I'm not surprised by them. Throughout his Administration, President Trump has chosen to put his personal interests before the needs of the country and has disregarded both congressional oversight and public scrutiny. According to press reports, President Trump has routinely ignored federal records laws, regularly tearing up or shredding documents that are required to be preserved. The destruction was so bad that career records officers were reportedly forced to use scotch tape to put important documents back together. Given this track record, I'm deeply concerned that President Trump and his aides may attempt to conceal or destroy important White House materials during their last remaining days. That is why I sent a letter to the White House last month demanding that the Administration comply with their responsibilities under the Presidential Records Act and the Federal Records Act. 
Eight other Committee chairs joined in that letter, and we demanded that the White House preserve all materials that are potentially responsive to the requests and subpoenas issued this Congress. These records belong to the American people— they are important for our historical record. They will also be critical to our ability to fix the damage that was done during the Trump Administration. I look forward to the testimony of our witnesses today on this and other issues that need to be addressed to ensure that the current transition goes as smoothly as possible from this point forward. We must also work to ensure that future transitions are more seamless than this one. Sen. Menendez Says Baffled Biden Wants to Sell Turkey F-16 Fighter Jets ATHENS - US Sen. Bob Menendez told a Greek TV station that he can't understand why President Joe Biden wants to sell more F-16's to Turkey and upgrade that country's Air Force, given its threats against Greece. Gianaris on the Senate Judiciary Committee Vote AHI Reiterates Opposition of Proposed F-16 Sale to Turkey; Supports F-35s to Greece
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
6,686
Papony Towing and Limo Services is a reputed company that offers roadside assistance services. We have worked extensively to make sure that all our customers get to their destination when their cars stall on the highway or in any other parts of the city. Let us proceed and actually look at some of the things that you should know about us. Our towing services are available in all parts of the country and this makes us one of the most accessible towing company in the world. We have also come up with plans that allows us to serve all our customers and this make it easy for all our customers to get information about our services as well as enjoy our services whenever they need them. The quality of modern towing trucks that we have invested in has helped us to up our game and even trigger all our competitors to come up with ways of improving their towing services too. If you are not clear on this, you can visit our facility to see the trucks and understand how they actually function. We have an excellent customer care team that works to ensure that we provide all our customers with nothing short of the best services. They will respond to all your calls on time and make sure that you understand the requirements to get the best from our towing services. Get in touch to get the most comprehensive and cheaply priced towing services.
{ "redpajama_set_name": "RedPajamaC4" }
5,215
The first work of Long Hill Chapel Missions is to complete the Great Commission. By supporting the GCF, we contribute to the work of Alliance Missions all over the globe. Together, we will see the gospel preached to all nations. DAVIS – NEW YORK CITY, NY After working several years in Senegal, Brian & Michelle Davis are now in New York City, working with Envision NYC to minister to the West African immigrant population. They have four children: Jacob, Jonathan, Karas, and Josiah. They are leveraging their experience among the Wolof people primarily using health, education, and chronological Bible story telling as avenues to demonstrate the love, truth, and power of Jesus. PHENICIE – BERLIN, GERMANY Darrell & Cheryl are long time partners with the C&MA and Long Hill Chapel. They have three grown children: Nathan, Glenn, and Rachel. LHC has journeyed with their family for over 33 years as they have worked among Arab-speaking people around the world. Now, the Phenicies are in Berlin, Germany to work with Syrian refugees. They have begun a new ministry, The Lighthouse Gathering Place. Partnering with a Syrian pastor and wife, they reach out to these displaced people, providing for their physical needs, with wifi and german classes, and for their spiritual needs. ROMANO – SANTIAGO, DOMINICAN REPUBLIC Rick & Tammie are workers with Marketplace Ministries, the entrepreneurial arm of Alliance Missions. They have four children: Jonathon, Rebekah, Abigail, and Moriah. Following 6.5 years in Ensenada, Mexico in 2012, they were called to the Dominican Republic. Their passion is to provide ways to connect local Dominican C&MA churches to their communities by providing for spiritual, social, educational and medical needs. 
Mission Twenty-Five 35 (Matthew 25:35-36) has a vision to share the Gospel and plant churches through implementing a solution that addresses the lack of clean water, sustainable food sources and the limited access to healthcare and vocational training in the poor and marginalized in the "El Cibao" region of the Dominican Republic. CLOSED ACCESS COUNTRY (CAC) WORKERS Long Hill Chapel has a passion to bring the gospel to the most unreached people groups of the world. This involves supporting workers in CAC countries. For the protection of our workers and their ministries, we have kept their identities and locations private. To learn more about the ministries of our CAC workers, please speak to a member of our LHC Mission Committee. COMPASSION & MERCY ASSOCIATES (CAMA) CAMA is the relief and development arm of Alliance Missions. It is a Christ-centered agency committed to the social demonstration and verbal proclamation of the gospel. CAMA staff members seek both physical and spiritual wholeness for the people they serve. They have translated short-term relief into long-term development projects that emphasize local ownership and sustainability, capitalizing on local strengths and resources. ADOLFSEN – INDEPENECE, BELIZE Sylvia is a worker with Global Outreach Mission in Independence, Belize and surrounding villages. Her aim is to train and raise up Christian leaders to minister in the local indigenous churches. Sylvia teaches Sunday Bible Classes to children in the surrounding Mayan/Ketchi villages. She also prays for and distributes food to the poor, sick, and elderly in the community. AGOSTINI – WILMINGTON, DELAWARE Gabe & Judy serve with Ripe for Harvest Outreach. Gabe does leadership training in Latin American and Europe. Through their leadership network, SALT (Support and Leadership Training), the Agostini family continues to develop new leaders in the church, primarily in South America, to effectively bring the gospel to their communities. 
BONDS – DALLAS, TEXAS Austin & Darcy are the directors of Metro Relief. They have four children: Elijah, Quintin, Isaiah, and Annalee. Metro Relief seeks to mobilize, empower, restore, and satisfy the needs of the poor and hopeless communities in and around Dallas-Ft. Worth, TX. The Bonds do this by offering a mobile food pantry and resource center. They take a converted bus to homeless communities where they serve food, hand out hygiene products, and offer prayer. Doing this on a consistent basis builds relationships that lead hurting people to Jesus Christ. HARPER – WINDOW ROCK, ARIZONA Chuck & Cindy are the directors of Western Indian Ministries. They have three children: James, Mike, and Ben. Their purpose is to work alongside Native American believers to know Christ and to make Christ known, seeking to cultivate a Christ-centered community and serve the Church. The Harpers train youth, disciple believers, and take the Gospel to the communities. They broadcast quality Bible teaching on three radio stations, covering the Four Corners region of the Southwestern USA and reaching the largest Native American population in the United States. SPIEKER – CARY, NORTH CAROLINA Edmund & Marli started at TransWorld Radio (TWR) Brazil, where Edmund was the first Executive Director. They have three grown children: Marcio, Simone, and Fabio. After 34 years with TWR, Edmund became International Ministry Director of Churches in Missions, serving Christian leaders, pastors, and their wives worldwide. Edmund leads Pastor Care Seminars, mainly in Brazil. He has started and directs Champions Arise, a discipleship ministry for men. Marli founded TWR-Project Hannah, a ministry to downtrodden and abused women with a prayer network in 125 countries, programs in 69 languages, and multiple mercy ministries. Marli retired in 2017. 
WORLD IMPACT – NEWARK, NEW JERSEY Through World Impact, Long Hill Chapel has supported the Newark Christian School for many years, but unfortunately it will be closing this year. NYC RELIEF – NEW YORK CITY, NEW YORK New York City Relief is boots-on-the-ground in poverty-stricken areas of New York City and New Jersey. Leading over 7,000 volunteers, they take poverty, addiction and despair head on. By going to the front lines of critical need, bringing relief and compassion to those falling through the cracks of society, they help bring life and joy to these vulnerable people. NYC Relief connects suffering people to vital resources that can turn their lives around. Shelter, medical care, addiction recovery and job training are lifelines to many who are sinking. The strategy is compassion and the mission is life transformation. MARKET STREET MISSION – MORRISTOWN, NEW JERSEY The goal of the Market Street Mission is to help men, women, and children in need through an Emergency Assistance program and a Rehabilitation and Recovery program. Market Street Mission provides meals, shelter and hope to the needy of Northern New Jersey. This hope comes by meeting physical, emotional and spiritual needs. In its rehabilitation program, families are freed and transformed by the power of Jesus. FIRST CHOICE First Choice opened in 1985 and now has five centers in New Jersey (Morristown, Montclair, Jersey City, Newark and Plainfield). Their mission is to protect the unborn by empowering women. This is accomplished through prevention, intervention and restoration. First Choice is involved in everything from abstinence presentations in public high schools, to counseling a woman who is pregnant and considering abortion, to offering a Bible study for women and men who have personally experienced the pain of abortion and need to know Jesus' healing in their lives. 
GOODWILL RESCUE MISSION – NEWARK, NEW JERSEY Goodwill Rescue Mission ministers to those suffering in the hopeless cycle of poverty, homelessness and dependencies, bringing renewal, hope, joy and victory to their lives through the transforming grace and power of Jesus Christ. The Mission enables the poor, homeless, and addicted of Newark to encounter Christ through compassionate care that meets their immediate needs, encouraging many to hope for a life of dignity made possible through the Mission's spiritually rich and vocationally focused program of sustainable, comprehensive life transformation. MASTERMEDIA – REDLANDS, CALIFORNIA MasterMedia exists because people matter, media shapes people, and people shape media. MasterMedia actively participates in the media community by building bridges of communication between media executives and evangelical leaders to help heighten trust and mutual understanding.
\section{Introduction} Turbulence is one of the most complex, yet ubiquitous, phenomena observed in Nature, and it is related to the underlying mechanisms responsible for micro-to-macro upscaling, causing wide-ranging effects on classical systems, like macroscopic friction in granular solids or the turbulent flow regime in fluids \cite{Nguyen2014,Wilcox1988,Hou2013,RadjaiRoux2002}. The presence of multiple scales in time and space poses an additional challenge to a comprehensive theoretical description, and a particular effort is made in the literature to perform experiments and simulations in order to validate the proposed theoretical descriptions, particularly Tsallis nonextensive (NE) statistical mechanics \cite{tsallis1988,tsallis2009,combe2013,richefeu2012}. A paradigmatic work relating anomalous diffusion and turbulent-like behavior in confined granular media was presented by Radjai and Roux \cite{RadjaiRoux2002}, using numerical simulations, and confirmed qualitatively by experiments by Combe and collaborators \cite{combe2013,richefeu2012}. Radjai and Roux coined a new expression, ``granulence'', to characterize the analogies between the fluctuations of particle velocities in quasistatic granular flows and the velocity fields observed in turbulent fluid flow in the high Reynolds number regime. Most of the evidence for granulence is based on simulations using the discrete element method (DEM) but, unfortunately, quantitative experimental verification has been lacking in recent years, leaving the knowledge of the micromechanics of this system to rest almost exclusively on numerical evidence. In the present work, we aim precisely to fill this gap by contributing the experimental validation of the results obtained by DEM. 
Specifically, we seek to examine the findings revealed by Radjai and Roux \cite{RadjaiRoux2002} in a detailed fashion, extending the previous works \cite{combe2013,richefeu2012} to explore quantitatively the relationship between the PDF of the velocity fluctuations and the diffusion features of the grains. We follow a detailed theoretical description for anomalous diffusion in the presence of external driving \cite{tsallisbukman1996, plastino1995}. In particular, a relation between the $q$-Gaussian value from the PDF of fluctuations and the diffusion exponent was proposed, which is validated experimentally here for the first time over a large range of the control parameter, unlike previous works where this relation was tested only for a single point \cite{upadhyayaa2001,daniels2004}. In this work, we aim to advance along the route opened by Radjai and Roux \cite{RadjaiRoux2002} with three basic goals:\\ (\textit{i}) \emph{Explore the low inertial number limit.} The inertial number $I$ \cite{RouxCombe2003} measures the ratio between inertial and confining forces, from the quasistatic regime (small values) to the dynamic regime (large ones) \cite{GdRMIDI}. We would like to check whether the granulence features are still observed in a better established quasistatic situation, \textit{i.e.} the experimental one, which involves inertial numbers around four orders of magnitude smaller than those currently reported in simulations \cite{RadjaiDubois2011}.\\ (\textit{ii}) \emph{Point out the origins of the macroscopic friction.} We take advantage of the truly quasistatic character of the experimental data to explore the origins of the underlying mechanisms of granulence. Here, unlike fluid flow, the rigid particles cannot fly freely since the motion of each particle is hampered by the presence of the other particles, and depends on the motion of its neighbors. 
This makes the straining partly controlled by geometric exclusions at the particle scale, preventing the development of a uniform straining in a sustainable way. As shown in \cite{combe2013}, in the limit of large strain-windows, it is possible to observe turbulent-like vortexes in the fluctuation field, which turn out to be associated with energy dissipation and macroscopic friction \cite{rognon2015,miller2013}.\\ (\textit{iii}) \emph{Evince the nonextensive nature of the displacement fluctuations.} In order to quantitatively analyze the data, we have used the Tsallis NE statistical mechanics approach. In this context, the PDF of displacement fluctuations is not expected to follow the normal Gaussian distribution, as in the case of the classical Maxwell-Boltzmann distribution in thermodynamics. In granular systems under loading, the force chains engaged along the entire system are clear evidence of long-range interactions \cite{majumdar2005}. These chains connect the microscopic contact forces with the global resistance to external forces, such as shear \cite{Estrada2008}. Thus, it is natural to associate the emergence of these force chains at mesoscopic scales with the departure from the classical Boltzmann-Gibbs (BG) statistics in these systems.\\ In our experiment, we have foreseen the possibility of quantifying the degree of nonextensivity using the $q$-Gaussian fit of the PDF obtained experimentally \cite{combe2013}. The striking accordance observed in the fitted curves, and the dependence of $q$ on the strain-window used to calculate the fluctuations, according to the reasoning presented here, corroborate the application of NE statistical mechanics to these systems, opening an alternative approach to treat them quantitatively. 
Besides, by measuring the diffusion of the particles along the complete shear test, at different strain-windows, we are able to associate the $q$-value measured from the fluctuation PDFs with the diffusion exponent $\alpha$. This is a particular case of the Tsallis-Bukman scaling law \cite{tsallisbukman1996}, \begin{equation} \label{eq:qalpha} \alpha = \frac{2}{3-q}\ , \end{equation} which can be obtained from the so-called \emph{porous media equation} \cite{plastino1995}, a generalization of the classical diffusion equation where the linear dependence between the variance and time is no longer observed \cite{Poschel2001granular}: \begin{equation} \label{eq:anomdiff} \frac{\partial p(x,\,t)}{\partial t} = D_q \frac{\partial^2 \left[p(x,\,t)\right]^{2-q}}{\partial x^2}\ . \end{equation} For a Dirac delta initial condition, the solution reads \begin{equation} \label{eq:qgauss} p_q(x,\,t) = \frac{1}{\sqrt{\pi A_q } } e_q^{-\frac{x^2 }{ A_q}} \equiv \frac{1}{\sqrt{\pi A_q } } {\left[1 - (1-q)\frac{x^2}{A_q} \right]^{\frac{1}{1-q}}}, \end{equation} where $e_q(x)$ is called the $q$-exponential, and $A_q$ is a constant which depends on $q$ through the Gamma function \cite{plastino1995,tsallis2009}. Equation \ref{eq:qgauss} is known as the $q$-Gaussian distribution, and was used to fit the PDF of displacement fluctuations obtained experimentally. Figure \ref{fig:PDF} shows the results for the PDF of fluctuations and the corresponding fit function at the two extremal values of $\Delta \gamma$ considered in the image analysis. In the case of anomalous diffusion, it can be shown that the variance follows a power law in time: \begin{equation} \label{eq:anomvariance} \langle x^2 \rangle \propto t^{\alpha} \quad \equiv \quad \langle x^2 \rangle \propto t^{\frac{2}{3-q}}\ , \end{equation} where $\alpha$ is the diffusion exponent, equivalently expressed as a function of $q$ by using Eq.~\ref{eq:qalpha}. 
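As a numerical illustration of Eqs.~\ref{eq:qalpha} and \ref{eq:qgauss}, the $q$-exponential, the $q$-Gaussian, and the Tsallis-Bukman exponent can be sketched in a few lines of Python. This is a hypothetical helper, not the code used for the fits in this work; for simplicity the normalization is done numerically on a grid rather than through the Gamma-function constant $A_q$:

```python
import numpy as np

def q_exponential(x, q):
    """q-exponential e_q(x) = [1 + (1-q) x]^(1/(1-q)); reduces to exp(x) as q -> 1."""
    if abs(q - 1.0) < 1e-10:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    # By convention, e_q(x) = 0 wherever the bracket becomes negative (the cutoff).
    return np.where(base > 0.0, np.abs(base) ** (1.0 / (1.0 - q)), 0.0)

def q_gaussian(x, q, beta=1.0):
    """q-Gaussian proportional to e_q(-beta x^2), normalized numerically on the grid x."""
    p = q_exponential(-beta * x ** 2, q)
    return p / np.trapz(p, x)

def diffusion_exponent(q):
    """Tsallis-Bukman scaling law: alpha = 2 / (3 - q)."""
    return 2.0 / (3.0 - q)

x = np.linspace(-10.0, 10.0, 4001)
p_gauss = q_gaussian(x, q=1.0)   # ordinary Gaussian limit (normal diffusion)
p_tail = q_gaussian(x, q=1.5)    # heavy-tailed case, alpha = 4/3
```

The two special cases discussed below follow directly: `diffusion_exponent(1.0)` gives normal diffusion ($\alpha = 1$) and `diffusion_exponent(2.0)` gives the ballistic limit ($\alpha = 2$).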
Note that, in Eqs.~\ref{eq:anomdiff} to \ref{eq:anomvariance}, $x$ stands for a fluctuation of displacement, as we will see below. It is interesting to observe two special cases: when $q=1$, the variance is proportional to time, which corresponds to normal diffusion; when $q=2$, the ballistic diffusion limit is reached. At intermediate values, we get broad distributions with marked tails. The variance, calculated as the second moment of $p_q$, diverges for $q>5/3$, and converges otherwise. Thus, if several independent convolutions are applied, $p_q$ approaches a Gaussian distribution if $q<5/3$, and it approaches a L\'evy distribution for $q>5/3$ \cite{tsallis2009}. We performed quasistatic simple shear tests with the $1\gamma2\varepsilon$ apparatus, which is fully described in \cite{Joer1992,Calvetti1997}. The granular packing is made of pilings of cylindrical rods which mimic a 2D granular material, enclosed by a rectangular frame with initial dimensions of $0.56\ \textrm{m} \times 0.47\ \textrm{m}$. The vertical sides of this frame are shortened or elongated to apply a constant normal stress in the vertical direction, $\sigma_n = 50$~kPa. These two vertical sides are tilted up to $\gamma = 15^\circ$ while the two other sides are kept horizontal with a constant length -- Fig.~\ref{fig:system}. The packing was made of $5471$ wooden rollers ($6$~cm long) with ten different diameters ranging from $3$~mm to $30$~mm, approaching a uniform distribution. To ensure a quasi-static transformation of the sample, it is sheared very slowly -- the corresponding shear rate $\dot{\gamma}$ is $4.5 \times 10^{-5}$ s$^{-1}$. This ensures a very small inertial number \cite{GdRMIDI} ($I = 10^{-9}$) when compared to what is applied in DEM simulations ($10^{-3}$ to $10^{-5}$ in the best cases) \cite{RouxCombe2010,RadjaiDubois2011}. 
During the test, the kinematics of the grains are measured by means of Digital Image Correlation (DIC) \cite{he1984two,chu1985ap} from 80~MPixel digital images of the sample, where the rollers look like disks, Fig. \ref{fig:system}. A specific \emph{DIC} computer program was developed to track the rollers, here assumed to be rigid bodies \cite{richefeu2012}, which allowed a sub-pixel kinematics measurement, tracking grains with an error of $\pm 0.05$ pixels \cite{combe2013tracker}. At the macroscopic level (sample scale), the stress-strain curve measured during the shear test exhibited hardening up to $\gamma \approx 0.06$ followed by softening until the end of the test (curve shown in \cite{richefeu2012}, as well as several other mechanical properties like peak stress ratio, macroscopic friction angle, \emph{etc}.). Pictures were shot every $\delta t = 5$~s throughout the test, corresponding to a shear strain increment $\Delta \gamma \equiv \delta t\, \dot{\gamma} \approx 2.4 \times 10^{-4}$ between shots. To assess the displacement fluctuations, we consider two displacements of each particle during a shear increment $\Delta \gamma$. The first is the actual displacement $\delta \bm{r}(\gamma,\, \Delta \gamma)$ from $\gamma$ to $\gamma + \Delta \gamma$. The second displacement, $\delta \bm{r}^\star(\gamma,\, \Delta \gamma)$, is fictitious and corresponds to an affine motion resulting from a homogeneous straining at $\gamma$ during the shear increment $\Delta \gamma$. It is assessed from the motion of the four rigid sides of the $1\gamma2\varepsilon$ apparatus. With these definitions, the fluctuating part of the displacement is the difference between the actual and affine displacements. 
Thus, the normalized displacement fluctuation $\bm{v}(\gamma,\, \Delta \gamma)$ is defined by: \begin{equation} \bm{v}(\gamma,\, \Delta \gamma) = \frac{\left [ \delta \bm{r}(\gamma,\, \Delta \gamma) - \delta\bm{r}^\star(\gamma, \, \Delta \gamma) \right ] / d}{\Delta \gamma} \; , \end{equation} where $d$ is the mean diameter of the rollers. One may notice that the normalized fluctuations can be interpreted as a local strain (grain scale -- numerator) compared to the global strain (sample scale -- denominator). \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{MCIS50.eps} \caption{The fluctuating part of the rod displacement $\bm{v}(\gamma , \Delta \gamma)$ over-plotted on the corresponding digital image, obtained from the DIC technique. $\gamma = \Delta \gamma = 0.1$. Inset: a detailed view of the speckled rods. $\gamma$ and $\sigma_n$ are the shear angle and the vertical stress imposed, respectively. The shear strength is measured all along the shear test.} \label{fig:system} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{plot-pdf.eps} \caption{Probability density functions of the horizontal components of the fluctuating displacements tracked during two different increments of shear strain ($\Delta \gamma = 7.3 \times 10^{-4}$ and $\Delta \gamma = 10^{-1}$). The scatters correspond to experimental data, and the solid lines correspond to regressions of the function $p_q$ (Eq. \ref{eq:qgauss}) that allow for the assessment of the $q$-values.} \label{fig:PDF} \end{figure} A displacement fluctuation field $\bm{v}$ is plotted in Fig.~\ref{fig:system} for a given shear increment. We notice an organization into structures like vortexes that is reminiscent of turbulence in fluids. These structures find their origin in the rearrangement mechanism of the grains, since the elements interfere with each other in their affine movement. 
This is, in other words, the deviation from the affine field due to steric exclusion, forming patterns observed with discrete element modeling \cite{Kuhn1999,RouxCombe2002,RouxCombe2003} and more rarely in experiments \cite{miller2013}. Their dynamics depend both on $\gamma$ and $\Delta \gamma$, evolving gently under shear when $\Delta \gamma$ is large ($>0.04$) and very rapidly for small values ($\Delta \gamma \simeq 2.4\cdot 10^{-4}$). The characteristic lengths depend strongly on $\Delta \gamma$, with vortexes of a few tens of grain mean diameters for large values of $\Delta \gamma$; on the contrary, for small values of $\Delta \gamma$ these structures are not well defined, and long-range correlations are observed \cite{richefeu2012}. The PDFs of the horizontal component magnitude of the normalized displacement fluctuations are shown in Fig.~\ref{fig:PDF} for two different increments of shear strain: $\Delta \gamma = 7.3 \cdot 10^{-4}$ and $\Delta \gamma = 10^{-1}$. We observe a broadening of the PDF from a nearly Gaussian distribution (for large $\Delta \gamma$) to a wider distribution (for small $\Delta \gamma$). \begin{figure}[t] \centering \includegraphics[width=\linewidth]{plot-qgauss.eps} \caption{ Evolution of the measured $q$-value as a function of the inverse of the strain increment for the experiments and simulations. The dashed line corresponds to a regression with the function $q(1/\Delta \gamma) = 1 + a\tanh(b / \Delta \gamma)$, with $a = 0.521$, $b = 0.096$. Inset: the same plot for data from a DEM simulation that highlights the limit $q = 1$ when $\Delta \gamma \rightarrow \infty$. The fitted parameters for the simulations were $a = 0.387$, $b = 0.057$.} \label{fig:qevolution} \end{figure} The dependence of the $q$-exponent on the strain-window used to calculate the fluctuations is shown in Fig.~\ref{fig:qevolution}, for experimental and simulation data. 
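The regression used for Fig.~\ref{fig:qevolution} can be reproduced schematically as follows. The data here are synthetic, sampled from the fitted curve with the quoted experimental parameters $a = 0.521$, $b = 0.096$ plus noise; this is an illustration of the fitting procedure, not the measured $q$-values:

```python
import numpy as np
from scipy.optimize import curve_fit

def q_of_inv_dgamma(inv_dgamma, a, b):
    # Empirical regression form: q(1/dgamma) = 1 + a*tanh(b/dgamma),
    # so q -> 1 in the large strain-window limit (1/dgamma -> 0).
    return 1.0 + a * np.tanh(b * inv_dgamma)

# Synthetic stand-in for the measured q-values (illustrative only).
rng = np.random.default_rng(0)
inv_dg = np.logspace(0.0, 3.2, 25)               # 1/dgamma from 1 to ~1600
q_obs = q_of_inv_dgamma(inv_dg, 0.521, 0.096)
q_obs = q_obs + rng.normal(0.0, 0.01, inv_dg.size)

# Least-squares fit recovers the two parameters of the empirical form.
(a_fit, b_fit), _cov = curve_fit(q_of_inv_dgamma, inv_dg, q_obs, p0=(0.5, 0.1))
```

The plateau value in the vanishing strain-window limit is then simply $1 + a$, and the fitted $a$, $b$ play the same role as the values quoted in the caption.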
Two remarkable features can be observed in this plot: first, in the limit of large strain-windows, when the abscissa goes to zero, $q \rightarrow 1$, indicating the limit where normal diffusion and BG statistics are satisfied. Note that it is possible to test larger values of $\Delta \gamma$ in DEM simulations, which confirm the limit $q \rightarrow 1$ (data shown in the inset of Fig. \ref{fig:qevolution}). This is exactly what we expect in this limit, since the particles typically experience several collisions and rearrangements, approaching the molecular chaos hypothesis. In the other limit, of vanishing strain-windows, the $q$-value attains a plateau, with $q \sim 3/2$. This observation can be interpreted as a sign of the long-range correlations imposed by the force chains at this short time scale. Since the value measured for $q$ in this limit is lower than $5/3$, one can expect that for large strain-windows a Gaussian distribution would be recovered, since it corresponds to successive independent convolutions of $q$-Gaussian distributions. These features were observed both in experiments and simulations, no matter the differences among the systems (periodic boundaries in the horizontal direction in simulations, different numbers of particles and inertial numbers, \emph{etc}.), proving the robustness of the result. Analyzing the results as a whole, we can sketch a phenomenological scenario to explain the observations: in the limit of large $\Delta \gamma$, we observe a tendency to agree with BG statistics, with $q \rightarrow 1$. This limit corresponds to the transition from meso- to macroscopic scales, where we observe the formation of vortexes in the spatial distribution of fluctuations, as evinced by Fig. \ref{fig:system}. These vortexes, a few grain diameters in size, interact with each other to dissipate the excess energy due to external loading, in analogy with the role of vortexes in turbulent flow \cite{RadjaiRoux2002}. 
The nature of the interactions of these structures is purely stochastic, which acts as a precursor for the macroscopic friction. The broadening of the displacement fluctuation distribution is usually attributed to the energy cascade from larger to smaller scales, that is, from large vortexes to small ones \cite{RadjaiRoux2002}. \begin{figure}[tb] \centering \includegraphics[width=\linewidth]{plot-diffu-tout.eps} \caption{ Verification of the Tsallis-Bukman scaling law for different regimes of diffusion. (Top) Evolution of the measured diffusion exponent $\alpha$ as a function of $1/ \Delta \gamma$; the dashed line is a direct application of the scaling law to the fit of the values shown in Fig. \ref{fig:qevolution}, $\alpha_P(1/ \Delta \gamma) = 2/[3 - q(1/ \Delta \gamma)]$, where $P$ stands for predicted. (Inset) A selected diffusion curve, where $x$ is the displacement fluctuation; it allows the assessment of the diffusion exponent $\alpha$. (Bottom) Measure of the deviation of the data relative to the scaling law prediction, as a function of $1/ \Delta \gamma$, showing an agreement on the order of $\pm 2\%$.} \label{fig:qalpha} \end{figure} On the other hand, we have $q \simeq 3/2$ in the vanishing strain-window limit, $\Delta \gamma \rightarrow 0$. This result indicates the presence of long-range interactions and anomalous diffusion. Considering the absence of spatial structures in the fluctuation field, it is clear that this limit is dominated by the force chain dynamics. Force chains can span the whole system, but are very fragile, implying short lifetimes. The displacements of grains belonging to a force chain are strongly correlated spatially, but this correlation is not verified on temporal scales. Thus, we can conclude that the window used to measure the PDF of particle displacement fluctuations in the system plays a crucial role in the statistics that will be obtained. 
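The assessment of the diffusion exponent from a curve like the inset of Fig.~\ref{fig:qalpha} amounts to a linear fit in log-log coordinates. A minimal sketch, with a synthetic $\langle x^2 \rangle \propto t^{4/3}$ (the $q = 3/2$ plateau) standing in for the measured data:

```python
import numpy as np

def fit_diffusion_exponent(t, msd):
    """Estimate alpha in <x^2> ~ t^alpha from a log-log linear fit."""
    slope, _intercept = np.polyfit(np.log(t), np.log(msd), 1)
    return slope

# Hypothetical check of the Tsallis-Bukman law: synthesize a mean-squared
# displacement with alpha = 4/3 and recover the exponent.
t = np.logspace(-2.0, 1.0, 50)
msd = 2.5 * t ** (4.0 / 3.0)
alpha = fit_diffusion_exponent(t, msd)
q_implied = 3.0 - 2.0 / alpha    # inverting alpha = 2 / (3 - q)
```

Applied to each strain-window, such a fit yields the $\alpha(1/\Delta\gamma)$ points that the scaling-law prediction is compared against.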
Basically, it is possible to explore the micro-macro transition in the PDF distributions, from a correlated regime dominated by the force chains to a frictional stochastic one, dominated by spatial vortex interactions. This conclusion has a striking implication for any analysis concerning the measurement of displacement fluctuations, since it unveils how the observation procedure can alter the conclusions even in a relatively simple diffusion experiment. To quantify the diffusion of the grains along the complete shear test, we computed the average displacement of each grain as a function of time (shear increment $\gamma$), but with different sampling frequencies determined by the strain-window $\Delta \gamma$. Following the reasoning presented above, and Eq.~\ref{eq:anomvariance}, we should expect two extreme regimes for the diffusion, analogous to those observed for the $q$-value: an anomalous diffusion regime with $\alpha \sim 4/3$ for vanishing strain-windows, and an asymptotic regime with $\alpha \rightarrow 1$ for large shear increments. This is indeed what we observe in Fig. \ref{fig:qalpha}, where we have verified the Tsallis-Bukman scaling law (Eq. \ref{eq:qalpha}). It is important to stress that the dashed line in Fig. \ref{fig:qalpha} is \textbf{not} a direct fit, but rather the curve obtained in Fig. \ref{fig:qevolution} using the Tsallis-Bukman scaling law. To our knowledge, this is the first time that this relation has been verified for different regimes of diffusion. This striking result reinforces the use of Tsallis NE statistical mechanics to describe strongly correlated systems, such as confined granular material under shearing. \begin{acknowledgments} We are indebted to Constantino Tsallis for the fruitful discussions, suggestions and kind reading of the manuscript. We thank Philippe Claudin for the kind reading of the manuscript and suggestions. 
We are grateful to Jean-Beno\^it Toni for his valuable work to upgrade the electronic part of the $1\gamma2\varepsilon$ apparatus. A special thanks to Fran\c{c}ois Bonnel, without whom we would not have had the chance to shoot with the Phase One IQ180 camera (80 MPixels). APFA thanks the Brazilian funding agencies FAPEMIG, CNPq and CAPES, and CEFET-MG for financial support. The Laboratoire 3SR is part of the LabEx Tec 21 (Investissements d'Avenir -- grant agreement n$^\textrm{o}$ ANR-11-LABX-0030). \end{acknowledgments}
Moon+ Reader Pro is a useful and highly popular premium app for Android. It's a book reader app with a bunch of powerful tools and features. The app supports epub, pdf, mobi, chm, cbr, cbz, umd, fb2, txt, HTML, rar, zip and OPDS formats. It also comes with an online book library where you can get lots of famous books. And here is the latest Pro (premium) edition of the Moon+ Reader app that you can download for free. The Moon+ Reader Pro edition comes with full feature access as well as an ad-free experience. So, download Moon+ Reader Pro v4.5.2 Paid Edition APK now. Moon+ Reader comes with lots of smart features, like full visual options: line space, font scale, bold, italic, shadow, alpha colors, fading edge, etc. It also offers 10+ embedded themes, including a Day & Night mode switcher, which helps you read books with a comfortable interface. The app is offered by Moon+ on the Google Play Store with a 4.6/5 average user rating and a large number of downloads. It works with most Android devices.
Lars Reck (born 16 February 1999) is a Dutch football player. He plays for EHC Hoensbroek. Club career He made his Eerste Divisie debut for MVV Maastricht on 9 November 2018 in a game against FC Volendam, as a 66th-minute substitute for Joeri Schroyen. Ahead of the 2019–20 season, Reck joined RKSV Minor. On 4 January 2020, Reck moved to Belgium and joined Second Amateur Division club Sporting Hasselt. In September 2020, he returned to the Netherlands and joined EHC Hoensbroek. References External links 1999 births Footballers from Maastricht Living people Dutch footballers Dutch expatriate footballers Association football forwards MVV Maastricht players EHC Hoensbroek players Eerste Divisie players Dutch expatriate sportspeople in Belgium Expatriate footballers in Belgium
\section{Introduction} Exoplanets do not form with the properties with which we observe them today: migration and dynamical interactions change their orbital parameters, high-energy radiation from their host stars causes atmospheric mass loss, and gaseous planets contract as they cool. The demographics of field-age (typically $>1$ Gyr) exoplanetary systems offers one way to learn about the evolutionary history of exoplanets. For example, the gap in the observed radius distribution of close-in planets (between super-Earths and mini-Neptunes) has been used as a probe of photoevaporation and to constrain typical core compositions \citep{OwenEvaporation2017, LopezBorn2017}; and \citet{Owen2018} explained the dearth of close-in giant planets as the joint result of high-eccentricity migration and photoevaporation. Observations of planets young enough to still be undergoing dynamical and atmospheric changes provide a more direct way to probe planetary evolution; and planets in young stellar associations are particularly useful because the ages of these systems are known more precisely and accurately than those of their counterparts in the galactic field. The typically close-orbiting planets discovered through transit and radial velocity surveys complement the constraints on planet formation beyond the snow line available from direct imaging \citep[e.g.][]{2014ApJ...794..159B, 2015ApJS..216....7B, 2016ApJ...819..125C, NielsenGPIES}. They are also likely to be young representatives of the field-age exoplanets on which planetary demographics studies are based. Radial velocity programs have detected Jupiter mass planets in young clusters \citep{QuinnTwo2012, QuinnHD2014}, but are hindered by the radial velocity jitter exhibited by these young, active stars \citep[e.g.][]{SaarActivityRelated1997, 2004AJ....127.3579P}. 
Thanks to its excellent photometric precision and wide-area coverage, \emph{K2}\ yielded a surge of exoplanet discoveries around young stars via the transit method. This included planets in the Hyades \citep{MannZodiacal2016a, DavidNew2016}, Upper Scorpius \citep{DavidNeptunesized2016, MannZodiacal2016}, Praesepe \citep{MannZodiacal2017, RizzutoZodiacal2018, LivingstonK22642019}, and Taurus-Auriga \citep{DavidWarm2019} associations. The {\it Transiting Exoplanet Survey Satellite} (\emph{TESS}) will survey 80\% of the sky during its prime mission, with a focus on bright stars. \emph{TESS}\ enables the transit search for young exoplanets in associations to be substantially expanded, and motivates our collaboration, the \emph{TESS}\ Hunt for Young and Maturing Exoplanets (THYME) Project. \emph{TESS}\ provides the first opportunity for extensive transit surveys of stars in young moving groups (YMGs). YMGs are dynamically unbound associations of stars that are identified based on their common motion through the galaxy. YMGs have ages $\lesssim300$ Myr, and probe a more continuous range of ages than do young stellar clusters \citep[see e.g.][]{Bell2015}. The stellar environments in YMGs also differ from those found in high-density, longer-lasting star clusters such as Praesepe or the Pleiades. These groups are less compact, and therefore stellar dynamical interactions are less frequent; as a result, they may be more characteristic of the precursors of exoplanetary systems that orbit typical field stars. Dynamical studies indicate that stellar interactions in open clusters are unlikely to disrupt planetary systems \citep[e.g.][]{2001MNRAS.322..859B, 2006ApJ...641..504A}, but milder impacts, such as changes in eccentricity, are possible \citep{2009ApJ...697..458S}. Finally, most known YMGs are substantially less distant than stellar clusters \citep[see e.g.][]{Gagne2018}. 
This provides significant advantages for detailed characterization of the planets through techniques such as transmission spectroscopy and precise radial velocity monitoring. We report the discovery (Figure \ref{fig:transit}) of a close-in, transiting planet with a radius in between those of Neptune and Saturn. The stellar host is the primary component of DS Tuc (DS~Tuc A, HD~222259A), which is a member of the Tucana--Horologium (Tuc-Hor) YMG. DS Tuc was one of the original members of the Tucana association of co-moving stars identified by \cite{ZuckermanIdentification2000}. Tucana was soon identified as being physically associated with the Horologium association of active stars \citep{TorresNew2000}, and together they formed one of the first known YMGs. DS Tuc is a visual binary \citep{TorresVisual1988}, consisting of a G6V primary and a K3V secondary \citep{TorresSearch2006} separated by $5\arcsec$. \citet{SoderblomHighResolution1998} suggested that the secondary (DS~Tuc B, HD~222259B) is itself a short period binary based on radial velocity variations, and \cite{CutispotoFastrotating2002} report spectral types for the components of K3/4V and K5V but do not provide further information. As we will discuss in Section~\ref{sec:rvs}, our radial velocity measurements demonstrate that DS~Tuc B\ is not likely to be a short-period binary. In Section \ref{sec:data} we present discovery data from \emph{TESS}\ and follow-up photometry from \emph{Spitzer}. We additionally present new high resolution spectra and long-term photometric monitoring, and discuss archival high resolution spectra. In Section \ref{Sec:measurements} we update the stellar parameters, and analyze the radial velocities and stellar rotation. In Section \ref{sec:system}, we investigate the overall DS Tuc system, including modeling of the binary star orbit and a search for additional companions in high contrast imaging and in the \emph{TESS}\ transit data. 
We present the results of our transit analysis, including identifying the stellar host as DS Tuc A and assessing false-positive scenarios, in Section \ref{sec:planet}. We discuss the overall system architecture and prospects for future follow-up in Section \ref{Sec:discussion} and briefly summarize our findings in Section \ref{Sec:summary}. \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{plot-dstuc_e0_dilution_transit.pdf}\vspace{-0.8cm} \caption{Discovery data from \emph{TESS}, after our iterative flare rejection algorithm has been applied, and follow-up data from \emph{Spitzer}. Data are shown as blue points; data for \emph{Spitzer}\ are the means of 250 equally spaced bins. The top panel shows the full \emph{TESS}\ lightcurve and the stellar variability Gaussian process (GP) model. The middle panel shows a zoom-in on the two transits observed with \emph{TESS}. The bottom panel shows the two \emph{Spitzer}\ transits at $4.5\micron$. The best-fitting model from our joint fit to these lightcurves is shown in orange; in this analysis we simultaneously model stellar variability in \emph{TESS}, using a GP, and the transit parameters. The mean of the MCMC samples is shown as the opaque orange line; the $1\sigma$ deviations are shown as the semi-transparent orange region.} \label{fig:transit} \end{figure*} \section{Observations}\label{sec:data} \subsection{Photometry} \subsubsection{\emph{TESS}} \emph{TESS}\ was launched on 2018 April 18 and commenced science operations on 2018 July 25. \emph{TESS}\ uses its four small (10 cm effective aperture) cameras to monitor 24\degree$\times$96\degree\ sectors of sky nearly continuously over $27$-day campaigns.
DS Tuc was observed in the first sector of science operations during late July and August of 2018 and was pre-selected for fast (two-minute) cadence observations because of its membership in the young Tucana--Horologium Moving Group.\footnote{The target was requested as part of our Guest Investigator program GO11175 (PI: Mann), as well as by GO11176 (PI: Czekala) and GO11250 (PI: Walter).} After the \emph{TESS}\ data were downlinked to Earth, they were processed by the Science Processing and Operations Center (SPOC) pipeline at NASA Ames \citep{Jenkins:2015, Jenkins:2016}, which calibrated the \emph{TESS}\ pixels, extracted light curves, de-blended light from nearby contaminating stars, removed common-mode systematic errors, high-pass filtered the light curve, and searched for transits. We used the Presearch Data Conditioning simple aperture photometry (PDC-SAP) light curve and systematics solution throughout this paper, masking the time $1346.5<t<1350$, except in our transit injection and recovery tests (Section~\ref{sec:notchinjrec}). This interval, where $t$ is given in \emph{TESS}\ barycentric Julian date (BJD$-2457000.0$), corresponds to a loss of fine guiding. SPOC used the Transiting Planet Search module (TPS) to search for transits in the PDC-SAP data, applying a matched filter to compensate for stellar variability. TPS identified several ``threshold crossing events'' (TCEs), or possible transiting planet signals, in the light curves of both DS Tuc A and B. Upon visual inspection of results from the initial run of TPS, our team of vetters concluded that while the periodicities detected by TPS did not correspond to transiting planets, some of the TCEs appeared transit-like. We identified two candidate transits 8.1 days apart; a third happened to fall during the three-day period when \emph{TESS}\ lost fine guiding.
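The fine-guiding mask described above amounts to a simple boolean cut on the PDC-SAP time series. A minimal sketch with synthetic arrays (the variable names are ours, not the SPOC pipeline's):

```python
import numpy as np

# Synthetic stand-in for the PDC-SAP light curve; times are in
# TESS barycentric Julian date minus 2457000.0, as in the text.
time = np.linspace(1325.0, 1353.0, 20000)
flux = np.ones_like(time)

# Remove the interval 1346.5 < t < 1350, when fine guiding was lost.
good = ~((time > 1346.5) & (time < 1350.0))
time_masked, flux_masked = time[good], flux[good]
```

In practice the same boolean mask would be applied to every per-cadence array (flux, errors, quality flags) before any variability modeling or transit search.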
We alerted the community to the detection via the MIT \emph{TESS}\ Alerts webpage\footnote{\url{https://tess.mit.edu/alerts/}} under the designation TOI-200. We note that the alert was issued in early November based on the first TPS run from late August. The second, archival TPS run from mid September, which was not included in the alert, detected a TCE that corresponds to DS Tuc Ab and that passed all diagnostic tests in the data validation report. \subsubsection{\emph{Spitzer}}\label{sec:spitzer} Based on the \emph{TESS}\ alert, we scheduled observations of two transits with the \emph{Spitzer}\ Space Telescope, which were conducted on 2019 March 01 and 2019 March 09 UTC (Program ID: 14011, PI: Newton). We observed at $4.5\micron$ (channel 2) using the Infrared Array Camera \citep[IRAC; ][]{FazioInfrared2004}. We used the 32$\times$32 pixel subarray, and due to the brightness of DS Tuc A, we used $0.4$ second frame times. We followed the suggestions of \citet{IngallsIntrapixel2012,IngallsRepeatability2016}, placing DS Tuc A in the ``sweet spot'' of the detector and using the ``peak-up'' pointing mode to keep the position of the star fixed to within a half-pixel. Each transit observation consisted of a 30 minute dither, a $7.5$ hour stare including the full transit, and a final 10 minute dither. Both DS~Tuc A\ and B are present in the \emph{Spitzer}\ images. In the post-cryogenic mission, IRAC has a pixel scale of $1.2\arcsec$/pixel and a full-width at half-maximum of $2.0\arcsec$, so the binary components are resolved but not well-separated ($4.5$ pixels). To address the potential for flux dilution, we modeled the point spread functions (PSFs) of both components. 
We generated IRAC PSFs using the {\tt prf\_realize} routine as implemented in the software package {\tt IRACSIM}\footnote{\href{https://github.com/ingalls91104/IRACSIM}{https://github.com/ingalls91104/IRACSIM}} \citep{IngallsRepeatability2016} and incorporated them into the PSF-fitting framework described by Martinez \& Kraus (submitted to AAS Journals), modified for use with subarray images. To briefly summarize, we fit a two-source PSF model in each subarray image by performing an MCMC analysis using a standard Metropolis-Hastings algorithm with Gibbs sampling. The PSF model is described by seven parameters: $x$-pixel coordinate of the primary centroid ($x$), $y$-pixel coordinate of the primary centroid ($y$), image background ($b$), primary peak pixel value ($n$), projected separation ($\rho$), position angle (PA), and contrast ($\Delta m$). We ran four MCMC chains with 140,000 steps each, discarding the first $10$\% of each chain (the ``burn-in'' phase). Using the weighted average of the median ($x$,$y$)-centroid, $\rho$, PA, and $\Delta m$ generated by our MCMC fits, we made a single PSF model template of DS~Tuc B. This method yielded an estimate for pixel-by-pixel flux contamination levels, which we used to select the best aperture. Based on this, we selected a fixed aperture of 4$\times$4 pixels, which minimized the level of contamination flux from DS~Tuc B\ (2.2\%), while capturing $>$90\% of the flux from DS~Tuc A. Due to \emph{Spitzer}'s large intra-pixel sensitivity variations and its pointing jitter, the measured flux of the target can vary with time as the location of the star shifts on the detector \citep{IngallsIntrapixel2012}.
To correct for this, we used a high-resolution pixel-sensitivity variation map \citep[PMAP,][]{IngallsIntrapixel2012}, following the recommendations from the IRAC website\footnote{\href{https://irachpp.spitzer.caltech.edu/page/contrib}{https://irachpp.spitzer.caltech.edu/page/contrib}} to calculate DS~Tuc A's centroid position and total flux in each image within the aperture given above. We then used the \texttt{iracpc\_pmap\_corr} routine to calculate corrected flux values. Further details about the photometric gain map are discussed by \citet{IngallsIntrapixel2012}. \subsubsection{WASP} DS Tuc was observed by the WASP-South station of the Wide Angle Search for Planets \citep[WASP;][]{PollaccoWASP2006} located in Sutherland, South Africa. WASP-South consists of eight cameras on an equatorial mount, each with a 2048$\times$2048 CCD. Observations in 2010 and 2011 used 200\,mm, f/1.8 lenses with a broadband filter spanning $400-700$\,nm and a plate scale of $13.7\arcsec$/pixel. Observations from 2012 to 2014 used 85\,mm, f/1.2 lenses with a Sloan r' filter and a plate scale of $32\arcsec$/pixel. Approximately $74000$ observations of the DS Tuc system were obtained over $900$ nights spanning five years. DS Tuc A and B are not resolved in the WASP data, and the precision is not sufficient to detect the transit of DS~Tuc~Ab; these data are used to investigate the stellar rotation period (Section~\ref{sec:rotation}). \subsection{Spectroscopy} \subsubsection{SOAR/Goodman}\label{sec:goodman} On 2018 December 23 we acquired moderate resolution spectra of both DS~Tuc A\ and DS~Tuc B\ using the Goodman High Throughput Spectrograph \citep{Goodman} at the 4.1 m Southern Astrophysical Research (SOAR) Telescope located at Cerro Pach\'{o}n, Chile. We observed both targets at low airmass ($\sec(z) \simeq 1.4$) with clear sky conditions using the 0.46\arcsec\ slit, 400\,lines/mm grating, and M2 setup.
This yielded moderate resolution ($R\simeq1850$) spectra spanning $5000-9000$\,\AA. After basic image reduction, including bias and dark subtraction and flat-fielding, we removed sky lines in the 2D image using the chip regions adjacent to the science spectrum in the spatial direction, and removed cosmic rays by median stacking over 5 images of each target. We then optimally extracted the spectrum \citep{Horne1986} and applied a wavelength solution derived from HgAr lamp exposures taken just before the target observations. Lastly, we flux calibrated each spectrum using spectrophotometric standards taken during the night. These data are used to determine the stellar parameters (Section~\ref{Sec:stellarparams}). \subsubsection{Archival data from HARPS, UVES, and FEROS} We gathered processed archival spectra from HARPS, UVES, and FEROS using the ESO archive. While the FEROS spectrum is labeled as DS~Tuc B\ in the ESO archive, the spectral features (in particular, the strength of H$\alpha$ and H$\beta$) clearly reveal that this spectrum belongs to DS~Tuc A. These data are used in our radial velocity analysis (Section~\ref{sec:rvs}). \subsubsection{SALT/HRS} We obtained independent spectra of DS~Tuc A\ and DS~Tuc B\ using the High Resolution Spectrograph \citep[HRS;][]{CrausePerformance2014} on the Southern African Large Telescope \citep[SALT;][]{2006SPIE.6267E..0ZB}. We obtained spectra on the nights of 2018 November 16, 18, 19, and 21. We used the high resolution mode, and spectra were reduced using the MIDAS pipeline \citep{KniazevMN482016, KniazevSALT2017}.\footnote{\href{http://www.saao.ac.za/~akniazev/pub/HRS_MIDAS/HRS_pipeline.pdf}{http://www.saao.ac.za/\texttildelow akniazev/pub/HRS\_MIDAS/HRS\_pipeline.pdf}} The pipeline performed flat fielding and wavelength calibration using ThAr and Ar lamps; we did not use the sky-subtracted or merged data.
The nominal spectral resolutions of the blue and red arms are $65000$ and $74000$, respectively; however, the resolution achieved by the MIDAS pipeline is approximately $46000$ as a result of not accounting for the tilt of the spectral lines. These data are used in our radial velocity analysis (Section~\ref{sec:rvs}). \subsubsection{NRES/LCO} We obtained one spectrum of DS Tuc A using Las Cumbres Observatory's \citep[LCO,][]{LCO2013} Network of Robotic Echelle Spectrographs \citep[NRES,][]{NRES2018} on UT 2018 December 11. Data were reduced automatically by the LCO NRES pipeline version 0.8\footnote{\href{https://github.com/LCOGT/nres-pipe}{https://github.com/LCOGT/nres-pipe}}, which included basic bias/dark corrections, optimal extraction of the one-dimensional spectrum, and wavelength calibration with ThAr lamps. The NRES pipeline also yielded a radial velocity estimate, but we used our own determination for consistency with other analyses (see Section~\ref{sec:rvs}). The final reduced spectra have a resolution of approximately $R\simeq53,000$ and cover 3800--8600\,\AA. The spectrum had SNR$>$50 per resolution element around the Mg b lines ($\simeq$5160\,\AA). These data are used in our radial velocity analysis (Section~\ref{sec:rvs}). \subsection{High contrast imaging} We performed $H$-band integral field spectroscopy of both stars using the Gemini Planet Imager \citep[GPI;][]{Macintosh2014}. As part of the GPI Exoplanet Survey (GPIES), DS~Tuc B\ was observed on 2016 November 18 (program code GS-2015B-Q-500) and DS~Tuc A\ was observed on 2016 October 22 (GS-2015B-Q-500) under poor conditions, aborted after 9 images, and then observed again under better conditions on 2016 November 18 (GS-2015B-Q-500). A high-order adaptive optics system compensated for atmospheric turbulence, and an apodized Lyot coronagraph was used to suppress starlight.
Using 59.6~s integration times, we obtained 37.78~minutes of data with 14.9$^\circ$ of parallactic angle rotation for DS~Tuc B\ and 4.97~minutes and 35.79~minutes of data with $5.0^\circ$ and $15.2^\circ$ of parallactic angle rotation for the two observations of DS~Tuc A. All three datasets were reduced using the GPIES automated data reduction pipeline \citep{Wang2018}. Briefly, the data were dark subtracted, a bad-pixel correction was applied, the microspectra positions were determined using an Argon arc lamp snapshot taken right before each sequence, 3D spectral datacubes were extracted using wavelength solutions derived from deep Argon arc lamp data, the images were distortion corrected, and fiducial diffraction spots (satellite spots) were used to locate the position of the star in each image. The stellar point spread function (PSF) was then subtracted from each image using both angular differential imaging \citep{Marois2006} and spectral differential imaging \citep{Sparks2002} to disentangle the stellar PSF from any potential companions, and principal component analysis to model the stellar PSF \citep{Soummer2012,Wang2015}. The resulting image was then used to search for point sources (Section \ref{sec:directimaging}). \subsection{Literature photometry \& astrometry} \label{sec:phot} To better characterize the properties of each component, we drew resolved photometry and astrometry for DS~Tuc A\ and DS~Tuc B\ from the literature. Specifically, we adopted optical $B_T$ and $V_T$ photometry from the Tycho-2 Survey \citep{Hog2000}, optical $G$, $BP$, and $RP$ photometry from the second {\it Gaia} data release \citep[DR2;][]{Evans2018}, near-infrared $J$, $H$, and $K_S$ photometry from the Two Micron All Sky Survey \citep[2MASS,][]{Skrutskie2006}, and mid-infrared $W1$, $W2$, $W3$, and $W4$ photometry from the {\it Wide-field Infrared Survey Explorer} \citep[WISE;][]{Wright2010}.
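The principal-component PSF modeling step above can be illustrated with a toy, one-dimensional version of the projection-and-subtraction at its core (our own construction, not the GPIES pipeline): an orthonormal speckle basis is built from a reference library via SVD, and the target's projection onto that basis is subtracted, leaving any companion signal in the residual.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy reference library: 50 flattened "images" sharing a few dominant
# speckle modes plus noise; the target contains the same modes plus a
# faint point-source signal at pixel 123.
n_ref, n_pix = 50, 400
modes = rng.normal(size=(3, n_pix))
library = rng.normal(size=(n_ref, 3)) @ modes + 0.05 * rng.normal(size=(n_ref, n_pix))

planet = np.zeros(n_pix)
planet[123] = 1.0
target = rng.normal(size=3) @ modes + 0.3 * planet

# PCA of the mean-subtracted library; keep k principal components.
mean = library.mean(axis=0)
_, _, vt = np.linalg.svd(library - mean, full_matrices=False)
k = 3
basis = vt[:k]                       # (k, n_pix) orthonormal speckle modes

# Subtract the target's projection onto the speckle basis.
centered = target - mean
model = basis.T @ (basis @ centered)
residual = centered - model          # companion survives the subtraction
```

Because the point source is nearly orthogonal to the low-rank speckle subspace, it loses little flux in the subtraction; the forward modeling cited above exists precisely to quantify the small distortion that does occur.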
We also adopted proper motions and parallaxes for each component from DR2 \citep{GaiaDr2}, and J2000 positions from Tycho-2. All photometry and astrometry from the literature used in our analysis are listed in Table~\ref{tab:sparams}. \floattable \begin{deluxetable}{l c c l} \tabletypesize{\footnotesize} \tablecaption{Parameters of DS~Tuc\ \label{tab:sparams}} \tablewidth{0pt} \tablehead{ \colhead{Parameter} & \colhead{DS Tuc~A}& \colhead{DS Tuc~B} & \colhead{Source} } \startdata \multicolumn{4}{c}{{\bf Identifiers}} \\ TOI &\multicolumn{2}{c}{200.01}& \\ Gaia DR2 &6387058411482257536 & 6387058411482257280& Gaia DR2 \\ TIC & 410214986 & 410214984 & \citet{TIC2018} \\ 2MASS & J23393949-6911448 & J23393929-6911396& 2MASS \\ HD & 222259A & 222259B & \citet{Cannon1924_draper} \\ \hline \multicolumn{4}{c}{{\bf Astrometry}} \\ $\alpha$ R.A. (hh:mm:ss J2000) & \phantom{$-$}23:39:39.49 & \phantom{$-$} 23:39:39.27 & Tycho-2\\ $\delta$ Dec. (dd:mm:ss J2000) & $-$69:11:44.88 & $-$69:11:39.51 & Tycho-2\\ $\mu_{\alpha}$ (mas~yr$^{-1}$) & 79.464$\pm$0.074 & 78.022$\pm$0.064 & Gaia DR2 \\ $\mu_{\delta}$ (mas~yr$^{-1}$) & $-$67.440$\pm$0.045 & $-$65.746$\pm$0.037 & Gaia DR2 \\ $\pi$ (mas) & 22.666 $\pm$ 0.035 & 22.650 $\pm$ 0.030& Gaia DR2\\ \hline \multicolumn{4}{c}{{\bf Photometry}} \\ $B_T$ (mag) & 9.320 $\pm$ 0.017 & 10.921 $\pm$ 0.060 &Tycho-2 \\ $V_T$ (mag) & 8.548 $\pm$ 0.012 & 9.653$\pm$ 0.030 &Tycho-2 \\ $G$ (mag) & 8.3193$\pm$0.0010 & 9.3993 $\pm$0.0014 &Gaia DR2 \\ $G_{BP}$ (mag) & 8.7044$\pm$0.0049 &9.9851$\pm$0.0059 &Gaia DR2 \\ $G_{RP}$ (mag) & 7.8137$\pm$0.0036 & 8.7082$\pm$0.0044&Gaia DR2 \\ $J$ (mag) & 7.122 $\pm$ 0.024 & 7.630 $\pm$ 0.058 &2MASS\\ $H$ (mag) & 6.759 $\pm$ 0.023 & 7.193 $\pm$0.034 &2MASS\\ $K_s$ (mag) & 6.68 $\pm$ 0.03 & 7.032 $\pm$0.063 &2MASS\\ $W1$ (mag) & 6.844 $\pm$ 0.060 &7.049 $\pm$0.081 &WISE \\ $W2$ (mag) & 6.748 $\pm$ 0.030 & 7.107$\pm$0.037&WISE\\ $W3$ (mag) & 6.777 $\pm$ 0.023 &7.056$\pm$0.029 &WISE \\ $W4$ (mag) & 6.668 $\pm$ 0.094 &
6.958$\pm$0.119 &WISE \\ \hline \multicolumn{4}{c}{{\bf Kinematics }} \\ Barycentric RV (\kms) & 8.05$\pm$0.06 & 6.41$\pm$0.06 & This paper \\ $U$ (\kms) & $ -8.71\pm0.04 $&$-9.27\pm0.04 $&This paper \\ $V$ (\kms) & $-21.50\pm 0.04$ & $-20.28\pm0.04$ &This paper \\ $W$ (\kms) & $ -1.53\pm 0.04$ & $-0.47\pm0.04$ &This paper \\ \hline \multicolumn{4}{c}{{\bf Physical Properties}} \\ Spectral type & G6V$\pm$1 & K3V$\pm$1 &\citet{TorresSearch2006} \\ Rotation period (days) & $2.85^{+0.04}_{-0.05}$ & unknown & This paper\\ \ensuremath{T_{\text{eff}}}\ (K) & 5428 $\pm$ 80 & 4700$\pm$90 &This paper\\ $F_{\mathrm{bol}}$\ ($10^{-8}$\,erg\,cm$^{-2}$\,s$^{-1}$) & 1.2026 $\pm$0.017 & 0.542 $\pm$ 0.008 & This paper \\ $M_*$ ($M_\odot$) & 1.01$\pm$0.06 & 0.84$\pm$0.06 & This paper \\ $R_*$ ($R_\odot$) & 0.964$\pm$0.029 & 0.864$\pm$0.036 &This paper \\ $L_*$ ($L_\odot$) & 0.725$\pm$0.013 & 0.327 $\pm$ 0.010 &This paper \\ Age (Myr) & 45$\pm$4 & 45$\pm$4 &\citet{Bell2015}\\ $v\sin{i_*}$\ (km~s$^{-1}$) & 17.8$\pm$0.2 & 14.4$\pm$0.3 &This paper \\ $i_*$ (deg)\tablenotemark{a} & $> 82^{\circ}$ & \nodata &This paper \\ \enddata \tablenotetext{a}{With the convention $i<90$.} \end{deluxetable} \section{Measurements}\label{Sec:measurements} \subsection{Stellar parameters}\label{Sec:stellarparams} {\it Age:} DS~Tuc\ was one of the original systems used to define the Tuc-Hor moving group \citep[then called the Tucanae association,][]{ZuckermanWebb2000}. The group has consistent age estimates based on isochronal fitting \citep[45$\pm$4\,Myr; ][]{Bell2015} and the lithium-depletion boundary \citep[40\,Myr; ][]{Kraus2014}. Here we adopt the age estimate from \citet{Bell2015}. 
{\it Luminosity, effective temperature, and radius:} We first determined the bolometric flux ($F_{\mathrm{bol}}$), \ensuremath{T_{\text{eff}}}, and angular diameter of DS~Tuc A\ and DS~Tuc B\ by fitting the resolved spectral energy distributions (SEDs) for each component with unreddened optical and near-infrared template spectra from the cool stars library \citep{Rayner2009}. The fits are shown in Figure~\ref{fig:sed}. \begin{figure*}[ht] \centering \includegraphics[width=0.49\textwidth]{HD222259_G4V.pdf} \includegraphics[width=0.49\textwidth]{HD222259B_K3V.pdf} \caption{Best-fit spectral template compared to the photometry of DS~Tuc A\ (left) and DS~Tuc B\ (right). Grey regions are BT-SETTL models, used to fill in gaps or regions of high telluric contamination. Literature photometry is shown in red, with horizontal error bars corresponding to the filter width and vertical error bars to the measurement uncertainties. Corresponding synthetic photometry is shown as green points. The bottom panel shows the residuals in terms of standard deviations from the fit.} \label{fig:sed} \end{figure*} Our SED-fitting procedure followed the technique outlined in \citet{Mann2015b}, which we briefly summarize here. Our comparison assumed zero reddening, as DS~Tuc\ lies within a region near the Sun of low interstellar extinction \citep[the Local Bubble;][]{LocalBubble}. We simultaneously compared each template spectrum to our optical spectra from SOAR/Goodman (Section~\ref{sec:goodman}) and archival photometry (Section~\ref{sec:phot} and Table~\ref{tab:sparams}) using the appropriate system zero-point and filter profile \citep{Cohen2003, Jarrett2011, Mann2015a, dr2_filter}. Gaps in each template spectrum were filled with a BT-SETTL atmospheric model \citep{Allard2012} using the model interpolation and fitting procedure described in \citet{Gaidos2014}.
This procedure simultaneously provided an estimate of \ensuremath{T_{\text{eff}}}\ based on the BT-SETTL model comparison to the observed spectrum. To compute $F_{\mathrm{bol}}$, we integrated each template/model combination over all wavelengths. We combined the derived $F_{\mathrm{bol}}$\ with the {\it Gaia} DR2 distance ($d$) to determine the total luminosity ($L_*$) for each component star. We then calculated a stellar radius ($R_*$) from $L_*$ and \ensuremath{T_{\text{eff}}}\ using the Stefan-Boltzmann relation. Errors on each parameter were assigned accounting for both the measurement uncertainties (e.g., in the photometry) and the range of possible templates (and their assigned \ensuremath{T_{\text{eff}}}\ values) that can fit the data. Final parameters and uncertainties are given in Table~\ref{tab:sparams}. As part of our above procedure, the BT-SETTL model is scaled to match the photometry and template. Assuming perfect models, this multiplicative scale factor is equal to $R_*^2/d^2$ \citep{Cushing2008}, which provided another estimate of $R_*$ given the {\it Gaia} DR2 distance. This technique is similar to the infrared-flux method \citep{Blackwell1977}. Radii derived from this scale factor are not totally independent of the above method, as they rely on the same photometry and models, but the latter technique is less sensitive to the assigned \ensuremath{T_{\text{eff}}}. The first technique (Stefan-Boltzmann) yielded a radius of 0.964$\pm$0.029$R_\odot$, and the scaling (infrared-flux method) yielded a consistent radius of 0.951$\pm$0.020$R_\odot$ for DS Tuc A. We adopt the former value for all analyses. {\it Mass:} We estimated the masses of DS~Tuc A\ and DS~Tuc B\ by interpolating our luminosity estimates onto a modified isochrone grid from the Dartmouth Stellar Evolution Program \citep[DSEP,][]{Dotter2008}.
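The luminosity and Stefan-Boltzmann steps above reduce to a short calculation. A sketch for DS~Tuc A\ using the adopted values from Table~\ref{tab:sparams} (the cgs constants and nominal solar values below are our choices; small differences from the published numbers reflect rounding):

```python
import math

# Adopted values for DS Tuc A (Table 1).
fbol = 1.2026e-8       # bolometric flux, erg cm^-2 s^-1
plx_mas = 22.666       # Gaia DR2 parallax, mas
teff = 5428.0          # effective temperature, K

# Physical constants (cgs) and nominal solar values.
sigma_sb = 5.6704e-5   # Stefan-Boltzmann constant, erg cm^-2 s^-1 K^-4
pc_cm = 3.0857e18      # one parsec in cm
l_sun = 3.828e33       # erg s^-1
r_sun = 6.957e10       # cm

# L = 4 pi d^2 F_bol, with d from the parallax.
d_cm = (1000.0 / plx_mas) * pc_cm
lum = 4.0 * math.pi * d_cm**2 * fbol

# Stefan-Boltzmann relation, L = 4 pi R^2 sigma T^4, solved for R.
radius = math.sqrt(lum / (4.0 * math.pi * sigma_sb * teff**4))

print(lum / l_sun, radius / r_sun)   # roughly 0.73 L_sun and 0.97 R_sun
```

Both outputs agree with the tabulated $L_* = 0.725\pm0.013\,L_\odot$ and $R_* = 0.964\pm0.029\,R_\odot$ to within the quoted uncertainties.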
These grids were adjusted to include the effects of magnetic fields and changes to where the boundary conditions are applied, as described in more detail in \citet{Muirhead2014}, \citet{Feiden2014a}, and \citet{Feiden2016}. We assumed solar metallicity, which is typical within a scatter of $\sim$0.1 dex for the young stellar populations in the solar neighborhood (e.g., \citealt{2014A&A...568A...2S} and references therein). We used both 40\,Myr and 50\,Myr grids, taking the spread to approximate errors introduced by the age uncertainty for the Tuc-Hor moving group. This interpolation yielded mass estimates of 1.01$\pm$0.06$M_\odot$ for DS~Tuc A\ and 0.84$\pm$0.06$M_\odot$ for DS~Tuc B. We considered these errors to be slightly underestimated, as systematic differences between model grids can exceed 10\% at this age. \vspace{1cm} \subsection{Radial velocities}\label{sec:rvs} We used high resolution data from HARPS, UVES, FEROS, SALT/HRS, and NRES/LCO to determine stellar radial velocities (RVs). We measured RVs by computing the spectral line broadening function \citep[BF;][]{Rucinski1992} between DS Tuc A or B observations and a zero-velocity template. The BF represents the function that, when convolved with the template, returns the observed spectrum, carrying information on RV shifts and line broadening. Throughout the analysis we used the HARPS G2 binary mask as our template \citep[e.g.][]{Pepe2002}. A Gaussian profile was fit to the BF to determine the stellar RV. In each case the BF is single-peaked and smooth, indicating a contribution from only one star. For each echelle order we computed a ``first pass'' BF, which was used to shift the observed spectrum near zero velocity. Orders that survived a 3$\sigma$-clipping algorithm were then stitched into three equal-length wavelength regions where the final BFs were computed. Our geocentric RV measurement and uncertainty were computed from the mean and standard deviation across these three regions.
For archival observations that are provided as a single stitched spectrum, we created 150\,\AA-wide initial ``orders''. Finally, for each epoch we computed the BF for telluric absorption features using a continuum-normalized A0 star as our template. The resulting velocity offsets were applied to our measured RVs. We measured RVs for all archival data following the above procedure. While the HARPS pipeline provides more precise RVs, we performed our own measurements to ensure the same zero-point corrections across different instruments. We found a $\sim$70 m s$^{-1}$ offset from the HARPS observations, similar to our measurement uncertainty, but recovered the same epoch-to-epoch variability. Our final RVs are corrected for barycentric motion and listed in Table \ref{tab:rvs}. As noted in the introduction, DS Tuc B was previously identified as a binary based on its RV variability and the presence of two spectral components. Our spectra are inconsistent with DS Tuc B having two near-equal spectral type components; for both stars at each epoch, there is only one peak in the BF. While the previous work did not give sufficient information to test the proposed scenario of RV variability, we also do not see evidence for RV variations in excess of reasonable jitter levels for young stars in either component. \subsection{Projected rotation velocity} We measured the projected rotational velocity ($v\sin{i_*}$) for DS Tuc A and B by fitting the BF with a rotationally broadened absorption line profile that has been convolved with the instrumental profile (Figure \ref{Fig:broad}). We did not include additional broadening components such as microturbulence, though these factors should have minimal impact given the large $v\sin{i_*}$ values. For DS Tuc A, we find $v\sin{i_*}$$=17.8\pm0.2$ km s$^{-1}$ using the HARPS spectra; the value is consistent when using SALT/HRS. From SALT/HRS observations of DS Tuc B, we measure $v\sin{i_*}$$=14.4\pm0.3$ km s$^{-1}$.
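The profile fit can be sketched with the classical analytic rotational broadening kernel under linear limb darkening. This toy fit (our own construction, omitting the instrumental convolution and turbulent broadening) recovers $v\sin{i_*}$ from a synthetic broadening function:

```python
import numpy as np
from scipy.optimize import curve_fit

def rot_profile(v, vsini, amp, eps=0.6):
    """Classical rotational broadening kernel with linear limb darkening eps.

    Normalized so the kernel integrates to `amp`; zero for |v| > vsini.
    """
    x = np.clip(1.0 - (v / vsini) ** 2, 0.0, None)
    g = 2.0 * (1.0 - eps) * np.sqrt(x) + 0.5 * np.pi * eps * x
    return amp * g / (np.pi * vsini * (1.0 - eps / 3.0))

rng = np.random.default_rng(1)
v = np.linspace(-40.0, 40.0, 400)               # velocity grid, km/s
truth = rot_profile(v, 17.8, 1.0)               # vsini as measured for DS Tuc A
bf = truth + 0.001 * rng.normal(size=v.size)    # noisy synthetic "BF"

# Least-squares fit for (vsini, amp); eps is held fixed at its default.
popt, _ = curve_fit(rot_profile, v, bf, p0=[15.0, 1.0])
vsini_fit = popt[0]
```

The extended wings noted in the figure caption are exactly what this pure-rotation kernel cannot reproduce, which is why the real BF shows excess flux beyond $\pm v\sin{i_*}$.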
\begin{figure} \centering \includegraphics[width=0.8\columnwidth]{DS_Tuc_BFvsini.pdf} \caption{DS Tuc A broadening function computed from a representative HARPS spectrum. The broadening function presented in blue is clearly single-peaked and rotationally broadened. A best-fit rotational broadening profile is overplotted in orange. Extended wings in the broadening function as compared to the rotational broadening profile arise from additional line broadening mechanisms (macro- and microturbulence) that are not included in our pure-rotation model.} \label{Fig:broad} \end{figure} \begin{deluxetable}{l r c c} \tablecaption{Radial velocity measurements of DS Tuc A and B \label{tab:rvs}} \tablewidth{0pt} \tablehead{ \colhead{Site} & \colhead{BJD} & \colhead{RV} & \colhead{$\sigma_{RV}$}\\ \colhead{} & \colhead{} & \colhead{(km s$^{-1}$)} & \colhead{(km s$^{-1}$)} } \startdata \multicolumn{4}{c}{{\bf DS Tuc A}} \\ HARPS & 2453500.876233 & 7.82 & 0.07 \\ HARPS & 2453521.828166 & 7.93 & 0.05 \\ HARPS & 2453522.888133 & 8.32 & 0.06 \\ HARPS & 2453541.927465 & 8.02 & 0.07 \\ HARPS & 2453600.704290 & 7.85 & 0.07 \\ UVES & 2454243.856154 & 8.27 & 0.10 \\ FEROS & 2455853.592265 & 7.98 & 0.24 \\ SALT & 2458439.283495 & 8.08 & 0.43 \\ SALT & 2458441.278033 & 8.29 & 0.46 \\ SALT & 2458442.295852 & 8.34 & 0.28 \\ SALT & 2458444.297823 & 7.74 & 0.31 \\ LCO & 2458463.540450 & 8.28 & 0.15 \\ \hline \multicolumn{4}{l}{Mean: 8.05 (km/s)} \\ \multicolumn{4}{l}{RMS: 0.21 (km/s)} \\ \multicolumn{4}{l}{Std Error: 0.06 (km/s)} \\ \hline \multicolumn{4}{c}{ {\bf DS Tuc B}} \\ SALT & 2458439.288665 & 6.41 & 0.31 \\ SALT & 2458441.273940 & 6.66 & 0.30 \\ SALT & 2458442.302087 & 6.42 & 0.21 \\ SALT & 2458444.302819 & 6.33 & 0.27 \\ UVES & 2454243.850252 & 6.25 & 0.11 \\ \hline \multicolumn{4}{l}{Mean: 6.41 (km/s)} \\ \multicolumn{4}{l}{RMS: 0.14 (km/s)} \\ \multicolumn{4}{l}{Std Error: 0.06 (km/s)} \\ \enddata \end{deluxetable} \vspace{1cm} \subsection{Stellar rotation}\label{sec:rotation} {\it
Rotation period:} A photometric rotation period of $2.85$ days for DS Tuc was previously reported by \citet{2012AcA....62...67K}, and is clearly visible in both the \emph{TESS}\ and WASP lightcurves. Based on ground-based monitoring with the Las Cumbres Observatory, we associate this signal with DS~Tuc A. We broke the WASP lightcurve into four $200$-day observing seasons and measured the rotation period and amplitude of variability in each season. The period is consistently $2.85$ days; the semi-amplitude varies between $2\%$ and $2.6\%$, and the phase shifts between seasons. The periodogram shows power at the period and the first harmonic, and no additional signals are seen that could be associated with DS~Tuc B. The \emph{TESS}\ lightcurve of DS Tuc shows consistent rotational modulation with a semi-amplitude of $1-2\%$. We modeled the \emph{TESS}\ lightcurve with a Gaussian process (GP) using the \texttt{celerite} package from \citet{Foreman-MackeyFast2017}. We used a kernel composed of a mixture of simple harmonic oscillators and a jitter term. Our GP model has a term to capture the periodic brightness modulation caused by spots on the stellar surface. This kernel is a mixture of two stochastically-driven, damped harmonic oscillator models and has two modes in Fourier space: one at the rotation period of the star and one at half the rotation period. We initially included an additional damped harmonic oscillator with a period of $20$ days to capture long-term trends in the lightcurve, but the fitted power of the signal indicated that it was unnecessary. We used a Lomb-Scargle periodogram to identify the candidate rotation period. We then fit the stellar rotation model using least squares, iterating 5 times and rejecting $3\sigma$ outliers each pass. This served to remove smaller flares.
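The clipping and periodogram steps can be sketched as follows (a toy reconstruction with SciPy's Lomb-Scargle routine on a synthetic spot signal, not our analysis code):

```python
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)

# Synthetic spot-modulated light curve: 2.85 d period over a 27 d sector,
# with a handful of injected positive "flare" outliers.
t = np.linspace(0.0, 27.0, 3000)
y = 0.015 * np.sin(2.0 * np.pi * t / 2.85) + 0.001 * rng.normal(size=t.size)
y[::500] += 0.1                                   # injected flares

# Iterative 3-sigma clipping to reject the flares.
mask = np.ones(t.size, dtype=bool)
for _ in range(5):
    resid = y - np.mean(y[mask])
    mask &= np.abs(resid) < 3.0 * np.std(y[mask])

# Lomb-Scargle periodogram over a grid of trial periods
# (lombscargle takes angular frequencies).
periods = np.linspace(0.5, 10.0, 4000)
power = lombscargle(t[mask], y[mask] - y[mask].mean(), 2.0 * np.pi / periods)
p_rot = periods[np.argmax(power)]
```

The peak period recovered here would then seed the MCMC fit of the full GP model, as described in the text.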
We then started an MCMC fit using the affine-invariant Markov chain Monte Carlo (MCMC) sampler implemented in the package \texttt{emcee} \citep{Foreman-MackeyEmcee2013}, beginning half the chains at the candidate rotation period identified in the periodogram, and a quarter each at half and twice the rotation period. We used 50 walkers and a burn-in of 5000 steps. We ended the run when the autocorrelation timescale $\tau$ of all chains changed by $<0.1$ and the length of the chain was $>100\tau$. We measured a rotation period of $2.85^{+0.04}_{-0.05}$ days. {\it Stellar inclination:} Following the method detailed in \citet{Morton2014b}, we combined the stellar rotation period measured from the \emph{TESS}\ lightcurve, $R_*$, and $v\sin{i_*}$\ measurements from above to estimate the stellar inclination for DS~Tuc A. Although this measurement is not very precise, this method can identify highly misaligned systems \citep[e.g.,][]{Hirano_vsini2012} or be used for statistical studies of large planet populations \citep[e.g.,][]{2017AJ....154..270W}. We determined an equatorial velocity of 17.13$\pm$0.6\,km\,s$^{-1}$, consistent with our spectroscopic measurement of $v\sin{i_*}$\ $= 17.8 \pm 0.2$\,km\,s$^{-1}$. This corresponds to a 1$\sigma$ lower limit on the inclination of $i > 82^\circ$ and a 2$\sigma$ lower limit of $i > 70^\circ$. We cannot distinguish between $i<90$\degree\ and $i>90$\degree, and so adopt the convention $i<90$\degree. \section{Constraints on the DS Tuc system architecture}\label{sec:system} \begin{figure*} \centering \includegraphics[width=0.5\textwidth]{DSTuc_pv_astr_orbits.png} \includegraphics[width=0.96\textwidth]{DSTuc_pv_astr_hists.pdf} \caption{Top: 100 randomly selected orbits from the posterior distribution of accepted orbits for the stellar binary system. DS~Tuc A\ is marked by the orange star at the origin, while the present position of DS~Tuc B\ relative to A is located where the orbit tracks converge.
Orbital phase is shown by the color bar, with an orbital phase of 0.0 corresponding to the \textit{Gaia} observation epoch 2015.5. Bottom: Posterior distributions for all orbital parameters from the fit, as well as periastron. Semi-major axis and epoch of periastron passage have been truncated for clarity. The inclination is tightly constrained to be nearly edge-on (90$^\circ$), close to the inclination of the transiting planet.} \label{fig:binaryorbits} \end{figure*} \begin{deluxetable*}{cccccc}[htb!] \tablecaption{{Stellar Binary Orbital Parameters}\label{tab:binary orbits}} \tablehead{\colhead{Element} & \colhead{Median} & \colhead{Std Dev} & \colhead{Mode} & \colhead{68.3\% Min CI} & \colhead{95.4\% Min CI} } \startdata $a$ (AU) & 176 & 29 & 160 & (157, 174) & (157, 219)\\ $P$ (yrs) & 1760 & 510 & 1500 & (1470,1730) & (1470,2440) \\ $e$ & 0.57 & 0.10 & 0.47 & (0.46, 0.60) & (0.46, 0.77)\\ $i$ (\degree) & 96.9 & 0.9 & 96.6 & (96.0, 97.8) & (95.0, 98.6) \\ $\omega$ (\degree) & 186 & 35 & 196 & (164, 233) & (122, 256)\\ $\Omega$ (\degree) & -12 & 3 & -13 & (-15, -10) & (-18, -6)\\ $T_0$ (yr) & 1250 & 480 & 1520 & (1250, 1530) & (-590, 1530)\\ Periastron (AU) & 75 & 17 & 85 & (59, 93) & (44, 105)\\ \enddata \tablecomments{We report the median, mode, standard deviation, and 68.3\% and 95.4\% minimum credible intervals, with marginal posteriors and joint distributions displayed in Figure \ref{fig:binaryorbits}} \end{deluxetable*} \subsection{Stellar binary orbit}\label{sec:orbit} We fit orbital parameters to the motion of the binary pair using a modified implementation of the Orbits for the Impatient (OFTI) rejection-sampling methodology described in \citet{Blunt2017}. This implementation is publicly available on GitHub\footnote{https://github.com/logan-pearce/LOFTI \citep{LOFTI}} and described further in \citet{Pearce2019}. 
Both objects have a well-defined \textit{Gaia} DR2 astrometric solution, so we used the positions and proper motions of DS~Tuc B\ relative to DS~Tuc A\ in the plane of the sky. We used the radial velocity measurements of Table \ref{tab:rvs} to interpolate a relative radial velocity at the \textit{Gaia} observation epoch of 2015.5. Relative separation and position angle measurements in the Washington Double Star Catalog (WDS) spanning 126 years provide additional constraints on the stellar orbital motion. We performed a modified OFTI fit constrained by these measurements. Previous implementations of OFTI have fit orbital parameters to astrometric observations spanning several epochs \citep[e.g.][]{Blunt2017, Pearce2019, Ruane2019, Cheetham2019}. In this system, the precision of the \textit{Gaia} solution for both objects allowed us to constrain five of the six position vector elements using just this single epoch, and we additionally have the astrometric measurements provided by WDS; only the line-of-sight position is not sufficiently constrained to contribute to the fit. Table \ref{tab:binary orbits} displays the orbital parameters we determined for the stellar binary orbit. Figure \ref{fig:binaryorbits} displays the orbital parameter distributions, joint credible intervals, and a selection of orbits plotted in the plane of the sky. The orbital semi-major axis is $157 < a < 174$ au, with a closest approach of $59 < r_{peri} < 93$ au (where the ranges are $1\sigma$ credible intervals). The stellar binary is constrained to be nearly edge-on ($96.0^{\circ} < i < 97.8^{\circ}$), which is likely aligned with both the transiting planet's orbit and the primary star's spin axis. \begin{figure*}[t] \centering \includegraphics[width=0.98\textwidth,trim=2cm 14cm 2cm 7.5cm]{dstuc_tongue01.pdf} \caption{Left and Center: Completeness to substellar companions from the GPIES observations of DS Tuc A and B. 
Planets and brown dwarfs more massive than $\sim$5 M$_{\rm Jup}$ are excluded at high completeness between 10--80 au. Right: Contrast curves from which these completeness maps are derived, based on two epochs of GPIES observations of DS Tuc A, and one of B. The contrast limits are slightly deeper for T-type spectra, as PSF subtraction can leverage the strong methane absorption for the coolest planets.} \label{fig:contrast} \end{figure*} \subsection{Limits on additional directly imaged companions}\label{sec:directimaging} To search for companions in high contrast imaging data from GPI, we forward modeled the PSF template of a hypothetical companion at each pixel in the image using the Forward Model Matched Filter technique \citep[FMMF;][]{Ruffio2017a}. We then ran a matched filter with the template in an attempt to maximize the signal of a planet at that location in the image. The method accounts for the distortion of the signal due to the speckle subtraction step. The detection limits are expressed in terms of the flux ratio between the point source and the star and were calibrated using simulated point source injection and recovery. The detection limits are set at six times the standard deviation of the noise in the final image, which is calculated in concentric annuli as a function of separation from the star. This detection threshold ensures a false-positive rate of less than one per 20 observing sequences. The default matched filter reduction used for GPIES assumes a featureless spectrum, corresponding to hot planets, for the estimation of the point-source brightness. However, \citet{Ruffio2017b} showed that it can be used for the detection of stars without loss of sensitivity. We did not detect any candidate companions above our detection threshold in either dataset. We determined completeness to bound substellar companions using the method described in \citet{NielsenGPIES}. 
An ensemble of simulated companions was generated with full orbital parameters at a grid of semi-major axis and planet mass. The projected separation in arcseconds was then computed for each simulated companion given the distance to the star, and the contrast was calculated using the BT-Settl models \citep{Baraffeetal2015}, the age of the star (45 Myr), and the star's $H$ magnitude. Each simulated companion was compared to the measured contrast curve, and companions lying above the curve were considered detectable. The same simulated companions were compared to multiple contrast curves, advanced forward in their orbits when observations were made at different epochs, as is the case for DS~Tuc~A. Outside a radius of $\sim$1.1\arcsec, not all position angles fall on the detector; to compensate, we reduce the completeness beyond $\sim$1.1\arcsec using the fractional coverage as a function of radius. The depth of search plots, giving completeness as a function of semi-major axis and companion mass, are given for DS Tuc A and B in Figure~\ref{fig:contrast}, along with the underlying contrast curves. There are two contrast curves at each epoch, a T-type curve assuming heavy methane absorption in the matched filter step (appropriate to companions as hot as $\sim$1100 K), and an L-type contrast curve assuming a flatter spectrum appropriate to hotter brown dwarfs and stars. Overall, planets and brown dwarfs more massive than $\sim$5 M$_{\rm Jup}$ at wide separations are ruled out at high confidence between $\sim$10--80 au around both A and B. \subsection{Limits on wide binary companions}\label{sec:gaiawidecomps} Past AO observations of the DS Tuc system have been limited to an outer working angle of $\rho \la 10$\arcsec\, \citep[e.g.][]{kasper2007}, leaving open the possibility of a hierarchical architecture with a very wide tertiary companion. 
The Gaia catalog reveals that there is one comoving, codistant candidate Tuc-Hor member within $<$1 pc of the DS Tuc system, 2MASS J23321028-6926537, which was also suggested to be a candidate low-mass (spectral type around M5) member of Tuc-Hor by \citet{2015ApJ...798...73G}. However, given the very wide separation ($\rho = 1.12 \times 10^5$ AU), this source is likely an unbound member of Tuc-Hor and not a bound companion of DS Tuc. There are no other candidate wide companions in Gaia DR2 within $\rho < 1$ pc and brighter than a limiting magnitude of $G \sim 20.5$ mag, corresponding to a mass limit of $M > 15 M_{\rm Jup}$ at $\tau = 40$ Myr \citep{Baraffeetal2015}. \subsection{Limits on additional transiting planets}\label{sec:notchinjrec} \begin{figure} \centering \includegraphics[width=\columnwidth]{DSTUC_completeness.pdf} \caption{Completeness map for additional planets in the DS~Tuc A~ system, produced from injection-recovery testing of our search pipeline \citep{RizzutoZodiacal2017}. Each point represents an injected planet signal, with blue points indicating recovery and red points indicating non-recovery. The magenta star marks the position of the detected planet DS Tuc Ab.} \label{fig:transit_comp} \end{figure} We tested the detectability of additional planets in the \emph{TESS}~sector 1 lightcurve of DS~Tuc A~using the notch-filter detrending and planet search pipeline of \citet{RizzutoZodiacal2017}. For this process, we used the SAP lightcurve, which is not corrected for systematics using the cotrending basis vector method. This choice was made based on the presence of artifacts in the PDCSAP lightcurve, likely introduced by the presence of a strong stellar rotation signal. We first applied a deblending factor based on the \emph{TESS}\ magnitudes for DS~Tuc A~and B and masked the time interval when fine-guiding was lost. 
We then injected a set of model transiting planets synthesized with the \texttt{batman} model of \citet{KreidbergBatman2015} with orbital and size parameters chosen randomly. We used orbital periods of 1--20\,days and planet radii of 1--10\,$R_\earth$, and allowed orbital phase and impact parameter to take values in the interval [0,1]. Eccentricity was fixed to zero for this process, as it does not significantly influence detectability of a transit, but would require two additional variables over which to marginalize. We injected a total of 1000 trial planets for this test. For each trial planet, we applied the notch filter detrending pipeline, and then searched for periodic signals with the BLS algorithm \citep{kovacsBLS}, retaining signals with power-spectrum peaks above 7$\sigma$. We then set tolerance windows of 1\% in both injected period and orbital phase to flag a trial planet as recovered. Figure \ref{fig:transit_comp} shows the completeness map for additional planets in the DS~Tuc A~system. Our search and the \emph{TESS}~sector 1 data for DS~Tuc A~are sensitive to $\sim$4\,$R_\earth$ planets at periods $<$10\,days, and $\sim$3\,$R_\earth$ at periods $<$6\,days. At periods longer than 10 days, the time baseline and gaps due to the masked section significantly decrease sensitivity to transiting planets. \section{Analysis of the planetary signal}\label{sec:planet} \subsection{Identification of the stellar host} The two components of DS Tuc are separated by $5\arcsec$\, and are not resolved by \emph{TESS},\footnote{The \emph{TESS}\ alert somewhat arbitrarily identifies DS Tuc A as the host because it is the brightest star in the vicinity.} which has a plate scale of 21$\arcsec$ pixel$^{-1}$ with 50\% of light concentrated within one pixel \citep{2014SPIE.9143E..20R}. 
We examined the measured centroid of the in-transit/out-of-transit difference image, which is calculated by the SPOC pipeline and included in the data validation (DV) report (from the initial TPS run) that accompanied the alert. The DV report indicated that both DS Tuc A and B are contained within the 3$\sigma$ confusion radius of the centroid (which we note is dominated by the 2.5\arcsec additional error added in quadrature to the propagated uncertainty) and the centroid analysis averages a transit signal and a spurious event. In the second TPS run, not included in the alert, the centroid offset is consistent with DS Tuc A at 2$\sigma$. We also analyzed the image centroids measured by the SPOC pipeline. The scatter in the centroid measurements is too large ($\simeq$ 1 millipixel per 4 hour bin) to detect the expected change in centroid position if the planet were to in fact orbit DS Tuc B (0.5 millipixel over a 3 hour transit). In summary, we found that the \emph{TESS}\ data alone cannot conclusively identify which star hosts the transit. Our \emph{Spitzer}\ observations definitively show that the planet orbits DS Tuc A. A 4$\times$4 pixel aperture placed on DS Tuc A revealed a transit signal consistent with that detected in the \emph{TESS}\ data. An equal-sized or smaller aperture centered on DS Tuc B yielded no detectable transit signature (Figure~\ref{fig:spitzer}). \begin{figure} \centering \includegraphics[width=\columnwidth]{Spitzer_b_0.pdf} \caption{The {\it Spitzer} light curve from 2019 March 01 for a 4$\times$4 pixel aperture centered on DS~Tuc B\ (black) compared to the {\it TESS} photometry at an aperture centered on DS~Tuc B\ (red). The \emph{TESS}\ data shown here assumes (incorrectly) that the planet orbits DS~Tuc B, and it has been corrected for contamination from DS~Tuc A. Flux measurements from {\it Spitzer} were binned with 300 measurements per bin for clarity. 
In the resolved {\it Spitzer} data, DS~Tuc B\ shows no transit signal and we thus conclude that the planet orbits DS Tuc A.} \label{fig:spitzer} \end{figure} \vspace{1cm} \subsection{Transit fitting}\label{sec:transit} We simultaneously fit the \emph{TESS}\ and \textit{Spitzer} photometry using the transit fitting code \texttt{misttborn}.\footnote{\url{https://github.com/captain-exoplanet/misttborn}} \texttt{misttborn} was first used in \citet{MannZodiacal2016a} and has been used in a number of more recent works, including \citet{JohnsonK22602018}. Briefly, we fit each system using \texttt{emcee}, and produced photometric transit models using \texttt{batman} \citep{KreidbergBatman2015}, which is based on the transit model of \citet{MandelAnalytic2002}. In the MCMC we fit for the following planetary parameters: the planet-to-star radius ratio ${R}_{P}/{R}_{\star }$ (assumed to be the same in all filters), impact parameter $b$, period $P$, and the epoch of the transit midpoint ${T}_{0}$. We fix eccentricity to zero. We also fit the following stellar parameters: linear and quadratic limb darkening parameters for each filter ($q_1,q_2$) using the triangular sampling method of \citet[][]{Kipping2013}, and the mean stellar density (${\rho }_{\star }$). We use Gaussian priors for the limb darkening parameters, using the values in \citet{ClaretAstronomy2011} and \citet{2017A&A...600A..30C}. We use uniform priors within physically-allowed boundaries for the remaining parameters (most notably, we enforced $|b| < 1+{R}_{P}/{R}_{\star }$ in order to ensure that a transit occurs while allowing grazing transits). DS Tuc is a visual binary with a separation of $\rho \sim 5 \arcsec$. The \emph{TESS}\ photometry is de-blended, but the de-blending process may introduce errors, while our \emph{Spitzer}\ aperture on DS~Tuc A\ includes a small amount of contamination from DS~Tuc B. We included as an additional MCMC parameter the contamination of the aperture by flux from other stars. 
This is implemented as a (fractional) flux added to the transit model to create a diluted model ($LC_{\rm{diluted}}$) of the form: \begin{equation}\label{eqn:dilution} LC_{\rm{diluted}} = \frac{LC_{\rm{undiluted}}+C}{1+C}, \end{equation} where $LC_{\rm{undiluted}}$ is the model light curve generated from \texttt{batman} and our GP model. This is comparable to the method used in \citet{2011ApJ...730...79J} and \citet{Gaidos2016b} to correct for flux dilution from a binary using the measured $\Delta m$ between components. The key difference is that Equation~\ref{eqn:dilution} allows for flux to be subtracted from the model ($C<0$) in the case of an over-correction. We set a Gaussian prior on $C$ of $0.00\pm0.02$ for {\it TESS} and $0.0217\pm0.0050$ for {\it Spitzer}. The width of 0.02 for {\it TESS} photometry was estimated based on uncertainties in the derived \emph{TESS}\ magnitudes from the TIC. Section~\ref{sec:spitzer} describes how $C$ for {\it Spitzer} was calculated from a model of the PSF. The target displays substantial stellar variability in the \emph{TESS}\ bandpass. In addition to the transit model described above, we utilized Gaussian process regression to account for stellar variability in the \emph{TESS}\ photometry. This enables us to model the variations in the stellar flux occurring during the transit. Our kernel is a mixture of simple harmonic oscillators, the same as described in Section~\ref{Sec:measurements}. We included the Gaussian process hyperparameters as fit parameters in our MCMC, and placed priors on those parameters based on the results of our stellar rotation modeling. 
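As a concrete illustration of Equation~\ref{eqn:dilution}, the dilution correction amounts to a one-line transformation of the model light curve (a minimal sketch, not the actual fitting code):

```python
def dilute_light_curve(lc_undiluted, c):
    """Apply LC_diluted = (LC_undiluted + C) / (1 + C).

    `c` is the fractional contaminating flux; c < 0 is allowed and
    corresponds to an over-corrected light curve.  The out-of-transit
    level (flux = 1) is left unchanged by construction.
    """
    return [(f + c) / (1.0 + c) for f in lc_undiluted]
```

A transit of depth $\delta$ observed with contamination $C$ thus appears with depth $\delta/(1+C)$: a positive $C$ makes the transit shallower, while a negative $C$ deepens it.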
The parameters are the stellar rotation period $P_*$, the amplitude $A_{\rm GP}$ of the primary signal at $P_*$, the relative strength of the secondary signal at $P_*/2$ (Mix$_{Q1,Q2}$), the decay timescales of the primary and secondary signals ($Q1_{\rm GP}$, $Q2_{\rm GP}$), and a jitter term to account for white noise ($\sigma_{\rm GP}$).\footnote{ https://celerite.readthedocs.io/en/stable/python/kernel/} We ran the MCMC chain with 100 walkers for 30,000 steps and cut off the first 5000 steps of burn-in, producing a total of 2.5$\times10^{6}$ samples from the posterior distributions of the fit parameters. The resulting fit is shown in Figure \ref{fig:transit}, and the best fitting values are listed in Table \ref{tab:params}. \begin{deluxetable}{l c} \tablecaption{Parameters of DS Tuc Ab \label{tab:params}} \tablewidth{\columnwidth} \tablehead{\colhead{Parameter} & \colhead{Value}} \startdata \multicolumn{2}{c}{{\bf Measured parameters}} \\ $T_0$ (TJD)\tablenotemark{a} & $1332.30997 \pm 0.00026$ \\ $P$ (days) & $8.138268 \pm 1.1\times10^{-5}$ \\ $R_P/R_{\star}$ & $0.05419 \pm 0.00024$ \\ $b$ & $0.18^{+0.13}_{-0.12}$ \\ $\rho_*$ ($\rho_\odot$) & $1.7^{+0.07}_{-0.17}$ \\ $q_{1,1}$ & $0.284^{+0.055}_{-0.053}$ \\ $q_{2,1}$ & $0.284 \pm 0.051$ \\ $q_{1,2}$ & $0.0266^{+0.0094}_{-0.0091}$ \\ $q_{2,2}$ & $0.054^{+0.014}_{-0.013}$ \\ $C_\mathrm{TESS}$ & $0.015^{+0.018}_{-0.017}$ \\ $C_\mathrm{Spitzer}$ & $0.0208^{+0.0049}_{-0.005}$ \\ $\ln{P_{\mathrm{*}}}$ (day) & $1.0606^{+0.0102}_{-0.0098}$ \\ $\ln{A_{\mathrm{GP}}}$ (\%$^2$) & $-10.87^{+0.11}_{-0.12}$ \\ $\ln{Q1_{\mathrm{GP}}}$ & $2.57^{+0.39}_{-0.37}$ \\ $\ln{Q2_{\mathrm{GP}}}$ & $0.052^{+0.027}_{-0.026}$ \\ Mix$_\mathrm{Q1,Q2}$ & $0.15^{+0.26}_{-0.11}$ \\ $\sigma_{\mathrm{GP}}$ & $-8.682 \pm 0.013$ \\ \hline \multicolumn{2}{c}{{\bf Derived parameters}} \\ $R_P$ ($R_\earth$) & $5.70\pm0.17$\\ $a/R_{\star}$ & $20.35^{+0.29}_{-0.69}$ \\ $i$ ($^{\circ}$) & $89.5^{+0.34}_{-0.41}$ \\ $\delta$ (\%) & $0.2936 \pm 0.0026$ \\ 
$T_{14}$ (days) & $0.13235^{+0.00049}_{-0.00039}$ \\ $T_{23}$ (days) & $0.11818^{+0.00039}_{-0.00057}$ \\ $T_{\mathrm{peri}}$ (TJD)\tablenotemark{a} & $1332.30997 \pm 0.00026$ \\ $g_{1,1}$ & $0.3^{+0.055}_{-0.054}$ \\ $g_{2,1}$ & $0.228^{+0.066}_{-0.06}$ \\ $g_{1,2}$ & $0.0172^{+0.0057}_{-0.0051}$ \\ $g_{2,2}$ & $0.145^{+0.024}_{-0.028}$ \enddata \tablecomments{We report the median and 68\% confidence interval for each parameter. Associated probability distributions for key parameters are shown in Figure \ref{fig:transit}.} \tablenotetext{a}{TJD is TESS Julian Date, which is BJD$-2457000.0$} \tablenotetext{b}{Although we allow $b$ to explore negative values, the absolute value of $b$ is listed since positive and negative values are degenerate. Similarly, we cannot distinguish between $i<90$\degree\ and $i>90$\degree\ and adopt the convention $i<90$\degree.} \end{deluxetable} \subsection{False-positive analysis} Since we do not have dynamical (radial velocity) confirmation of DS~Tuc~Ab, we use our other observations to show that the transits are caused by a real transiting planet. We consider and rule out the following false-positive scenarios: \begin{enumerate} \item \textit{The transits are caused by instrumental artifacts or residuals from stellar variability:} Though there are only two transits in the \emph{TESS}\ dataset, with depths much smaller than the amplitude of the starspot variability, we confirm the transits with \emph{Spitzer}, conclusively ruling out an instrumental origin for the signal. The \emph{Spitzer}\ detection of the transits in the near infrared, at the predicted time and with the same depth as in \emph{TESS}, rules out stellar variability as an origin: variability should be significantly weaker in the \emph{Spitzer}\ bandpass and should not produce periodic transit-like signals. \item \textit{DS Tuc A is an eclipsing binary:} Our radial velocity observations showed no variations large enough to be caused by a stellar companion. 
To test this, we generated 100,000 binaries with random (uniform) mass ratios, argument of periastron, phase, inclination, and eccentricity. The period was fixed at 8.138\,days, and inclination was restricted to ensure the companion eclipses ($\gtrsim70$\degree). We then compared each synthetic binary's predicted velocities to the observed velocities assuming an extra jitter term in the velocities of 100\,m/s (from stellar variability). All generated binaries down to 20$M_J$ in mass were rejected at $>5\sigma$, and $>99\%$ were rejected down to 5$M_J$. \item \textit{Light from a physically unassociated eclipsing binary star or transiting planet system is blended with light from DS Tuc:} Spitzer confirms that the transit signal detected towards DS Tuc A must originate from within a few arcseconds of the star. We detected no stars near DS Tuc in our GPI adaptive optics imaging, and other groups have previously detected no nearby stars in their own AO observations \citep{kasper2007, vogt2015}. Crucially, due to its proper motion, DS Tuc has moved over half an arcsecond with respect to stationary background sources between the different AO imaging epochs over the last decade, so we are able to definitively rule out background stars too close to DS Tuc A for GPI to resolve. \item \textit{Light from a physically associated eclipsing binary or planet-hosting companion is blended with light from DS Tuc A:} For this to be true, DS Tuc A must have a binary companion close enough to escape detection by GPI (inside about 8 AU) and bright enough to cause the transit signal we see. 
The magnitude difference $\Delta m$ between DS Tuc A and the faintest companion that could contribute the transit signal is given by: \begin{equation} \Delta m \lesssim 2.5 \log_{10}\left ( \frac{t_{12}^2}{t_{13}^2 \delta} \right ) \end{equation} \noindent where $t_{12}$ is the duration of transit ingress/egress, $t_{13}$ is the transit duration from first contact (beginning of ingress) to third contact (beginning of egress), and $\delta$ is the observed transit depth \citep{vanderburg2019}. Fitting the \emph{TESS}\ light curve with MCMC, but without any constraints from the stellar parameters, yields $\Delta m \lesssim 2.4$ (95\% confidence). From a 45 Myr MIST isochrone \citep{Dotter2016,Choietal2016} at solar metallicity (provided in the \emph{TESS}\ bandpass), this magnitude difference corresponds to a companion star with a mass $>$0.63 $M_\odot$. To place a dynamical upper limit on the mass of a companion, we perform a Monte Carlo simulation of companion orbits to DS Tuc A with randomly drawn isotropic inclinations, masses below 1$M_\odot$, and semi-major axes below 8 AU (holding the eccentricity to zero). For orbits that produce semi-amplitudes less than half the range of our RV observations (0.6 km s$^{-1}$), we find that we can exclude companion masses above 0.28 $M_\odot$ at 95\% confidence. The large discrepancy between these mass limits excludes this scenario at high confidence. \end{enumerate} Our observational constraints confidently rule out these false-positive scenarios, so DS~Tuc~Ab is almost certainly a genuine exoplanet. \section{Discussion}\label{Sec:discussion} \subsection{DS Tuc Ab in context} With an age of $\tau \sim 45$ Myr, DS Tuc Ab is one of the few transiting planets with ages $\tau < 100$ Myr, joining the planets K2-33b \citep{DavidNeptunesized2016, MannZodiacal2016}, V1298 Tau b \citep{DavidWarm2019} and AU Mic b (Plavchan et al.~submitted). 
At $V=8.5$, DS Tuc A is the brightest of these transiting planet host stars, closely followed by AU Mic at $V=8.6$. Using photometry from \emph{TESS}\ and \emph{Spitzer}, we determined that DS Tuc Ab has a radius of $5.70\pm0.17$ $R_\earth$, placing it in the sparsely populated realm of super-Neptunes and sub-Saturns. The planet is young enough that it is likely still contracting due to internal cooling and may also be losing mass; models from \citet{2018ApJ...868..138B} suggest that its radius will shrink by $5-10$\% over the next few hundred Myr. DS Tuc is a visual binary, and we find no evidence for additional massive companions in the system. While DS Tuc B has previously been suggested to be a spectroscopic binary, we do not see two components in the spectrum of DS Tuc B at any observed epoch, a visual companion in high contrast imaging data, or periodic radial velocity variations at the precision of our data ($200$ m\,s$^{-1}$). The detection of planetary or substellar companions orbiting DS Tuc A exterior to DS Tuc Ab could indicate that dynamical interactions played a role in the present orbit of DS Tuc Ab; however, our high contrast imaging data from GPI show no companions with masses more than about $5M_\mathrm{Jup}$ between $10$ and $80$ AU. The orbit of the stellar binary is likely to be closely but not perfectly aligned with both the orbit of the transiting planet and the spin-axis of the planet-hosting star. We found a binary orbit inclination of $96.9\pm0.9$\degree, a planetary inclination of $89.5^{+0.34}_{-0.41}$\degree, and a stellar inclination of $i > 82^{\circ}$ ($1\sigma$ limit). The latter two quantities use the convention of $i<90$; however, $i>90$ is equally likely. Although the position angles are presently unconstrained, the probability of all three having similar inclinations by chance is small, suggesting the three axes are in fact close to aligned. This is similar to the five-planet {\it Kepler}-444ABC system \citep{CampanteAncient2015a}. 
\citet{DupuyOrbital2016} found that the orbit of {\it Kepler}-444BC and the orbits of the planets around {\it Kepler}-444A have the same inclination angle, and suggested that the planets formed {\it in situ} in close orbits around {\it Kepler}-444A. The stellar density that we determine from the transit fit differs from that which we calculate from the stellar parameters by $3\sigma$. The most likely reason is either errors in the model-derived stellar mass, or a mild eccentricity ($0.05 \lesssim e \lesssim 0.1$). While our mass estimate has formal errors of $\simeq$6\%, predictions from different model grids can vary by $\simeq$10\%. Moderate eccentricities have been found for some other young planets, including two in the Hyades \citep{2014ApJ...787...27Q, Thao2019}. \subsection{Prospects for follow-up} Due to the brightness of DS Tuc A, this system offers an exciting opportunity for detailed characterization of a young planet. Measuring the planetary mass would allow one to compare the planet's density to that of older planets. A distinct possibility is that mass estimates based on field-age planets represent an overestimate for DS Tuc Ab, given that the planet could still retain heat from its formation and might undergo future radius evolution as its atmosphere is sculpted by photoevaporative ultraviolet flux. While these processes would impact the planetary radius, they are not expected to have a substantial impact on the planetary mass. The \cite{ChenPROBABILISTIC2016} mass-radius relation, which is based on field-age planetary systems, predicts a planetary mass of $28^{+35}_{-13}$ $M_{\oplus}$. The expected radial velocity (RV) semi-amplitude produced by DS Tuc Ab would then be $9^{+11}_{-4}$ m\,s$^{-1}$. As evidenced by the large error bars on the inferred planet mass, there are relatively few planets with sizes between Neptune and Saturn with measured masses, and the planetary mass--radius relation is poorly constrained for planets of this size. 
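The RV semi-amplitude quoted above follows from the standard circular-orbit relation $K = (2\pi G/P)^{1/3}\,M_p\,(M_*+M_p)^{-2/3}$ for an edge-on orbit. A minimal sketch follows; the stellar mass of $0.96\,M_\odot$ used below is an assumed value for illustration, not a figure taken from this section:

```python
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
DAY = 86400.0        # s

def rv_semi_amplitude(m_planet_mearth, period_days, m_star_msun):
    """RV semi-amplitude K (m/s) for a circular, edge-on orbit:
    K = (2*pi*G / P)**(1/3) * M_p / (M_* + M_p)**(2/3)."""
    mp = m_planet_mearth * M_EARTH
    ms = m_star_msun * M_SUN
    p = period_days * DAY
    return (2.0 * math.pi * G / p) ** (1.0 / 3.0) * mp / (ms + mp) ** (2.0 / 3.0)
```

With the central values above ($M_p = 28\,M_\oplus$, $P = 8.138$ days), this returns roughly 9 m\,s$^{-1}$, matching the central value of the quoted prediction.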
Measuring the Rossiter--McLaughlin effect would determine the sky-projected angle between the stellar rotational and planetary orbital angular momentum vectors, and test our hypothesis that the stellar spin and planetary orbital axes are aligned. We estimate the radial velocity amplitude due to the Rossiter--McLaughlin effect using the relation $\Delta RV\simeq$0.65~$v\sin{i_*}$$\left(\frac{R_P}{R_*}\right)^2\sqrt{1-b^2}$ \citep{2007ApJ...655..550G}, finding a predicted amplitude of 32\,m\,s$^{-1}$. Combining a spin-orbit misalignment measurement from Doppler Tomography \citep[e.g.,][]{Johnson2017} or the Rossiter--McLaughlin effect \citep[e.g.,][]{Narita_rossiter2010} with our measurement of $i_*$ from the rotation period and $v\sin{i_*}$, one could measure the full three-dimensional spin-orbit misalignment $\psi$. DS Tuc Ab joins a small number of planets where such measurements are possible. Measuring RV signals on the scales noted above would be well within reach of current high precision RV instruments, but stellar activity poses a major challenge \citep[e.g.][]{SaarActivityRelated1997, PaulsonSearching2004}. DS Tuc A is a very magnetically active star, with $\log{R^\prime_{HK}}=-4.09$ \citep{HenrySurvey1996}. For stars like DS Tuc A, the stellar activity signal on many-day timescales (i.e., over many stellar rotation periods) is expected to be $100-200$ m/s based on the sample of active stars monitored with Keck by \citet{HillenbrandEmpirical2015}. While a jitter of this level would seem to preclude RV measurements of the planetary signal, stellar activity signals can be mitigated by simultaneously modeling the activity and planetary signals using, e.g.\ Gaussian processes, a process which would be aided by our knowledge of the star's photometric variability \citep[e.g.][]{HaywoodPlanets2014, RajpaulGaussian2015, 2016AJ....152..204L}. 
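For reference, the Rossiter--McLaughlin amplitude relation above is straightforward to evaluate numerically (a sketch using the fit values from Table~\ref{tab:params}):

```python
import math

def rm_amplitude(vsini_ms, rp_over_rstar, b):
    """Rossiter-McLaughlin amplitude estimate (Gaudi & Winn 2007):
    dRV ~ 0.65 * v*sin(i) * (Rp/R*)**2 * sqrt(1 - b**2)."""
    return 0.65 * vsini_ms * rp_over_rstar ** 2 * math.sqrt(1.0 - b ** 2)
```

Evaluating \texttt{rm\_amplitude(17.8e3, 0.05419, 0.18)} gives $\sim$33 m\,s$^{-1}$, in line with the predicted amplitude quoted above (the small difference reflects rounding of the inputs).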
It is not clear how well the activity signal can be modelled and removed in an intensive RV campaign to measure a planet's mass or Rossiter--McLaughlin effect. We investigate prospects for atmospheric characterization with JWST by computing its transmission spectroscopy metric using Equation 1 of \cite{2018PASP..130k4401K}. We assume zero albedo and full day-night heat redistribution to estimate an equilibrium temperature for the planet of 850 K. We find a transmission spectroscopy metric of 264, which can be interpreted as the S/N with which its transmission spectrum is expected to be measured (assuming a cloud-free atmosphere) with a 10-hour observing program with the NIRISS instrument. This makes DS Tuc Ab an excellent target for observations with JWST. Finally, we note that it may be possible to detect the planetary exosphere, e.g. using He 10830\AA\ transit observations \citep{SpakeHelium2018,OklopcicNew2018}. \section{Summary}\label{Sec:summary} We report the discovery of a hot planet with a radius of $5.70\pm0.17\,R_\earth$ around the young star DS Tuc A (G6V, $V=8.5$) using data from NASA's \emph{TESS}\ mission. The host star was one of the first identified members of the $45$ Myr old Tucana--Horologium association, and has a stellar companion orbiting at $157 < a < 174$ AU ($1\sigma$ interval). The \emph{TESS}\ data alone were insufficient to validate the planet given the nearby stellar companion, so we used photometry from {\it Spitzer} to confirm that the planet orbits DS~Tuc A\ and revise the transit parameters. We find that the rotation axis of DS Tuc A, the orbital axis of the stellar binary, and the orbital axis of the planet are likely to be aligned. This $45$ Myr-old planet offers numerous opportunities for further characterization and illustrates the utility of \emph{TESS}\ in furthering the study of planetary evolution. 
\acknowledgements The authors would like to thank R.~Angus, D.~Foreman-Mackey, and B.~Sowerwine for helpful conversations regarding this manuscript. This work was supported by the \emph{TESS}\ Guest Investigator program (Grant 80NSSC19K0636, awarded to AWM). ERN acknowledges support from the National Science Foundation Astronomy \& Astrophysics Postdoctoral Fellowship Program (Award \#1602597). This work makes use of observations from the LCO network. Based on observations obtained at the Southern Astrophysical Research (SOAR) telescope, which is a joint project of the Minist\'{e}rio da Ci\^{e}ncia, Tecnologia, Inova\c{c}\~{o}es e Comunica\c{c}\~{o}es (MCTIC) do Brasil, the U.S. National Optical Astronomy Observatory (NOAO), the University of North Carolina at Chapel Hill (UNC), and Michigan State University (MSU). Some of the observations reported in this paper were obtained with the Southern African Large Telescope (SALT) through Dartmouth College. This paper includes data collected by the \emph{TESS}\ mission, which are publicly available from the Mikulski Archive for Space Telescopes (MAST). Funding for the \emph{TESS}\ mission is provided by NASA's Science Mission directorate. This research has made use of the Washington Double Star Catalog maintained at the U.S. Naval Observatory. We would like to thank the University of North Carolina at Chapel Hill and the Research Computing group for providing computational resources (the Longleaf Cluster) and support that have contributed to these research results. We acknowledge the use of public TESS Alert data from pipelines at the TESS Science Office and at the TESS Science Processing Operations Center. Resources supporting this work were provided by the NASA High-End Computing (HEC) Program through the NASA Advanced Supercomputing (NAS) Division at Ames Research Center for the production of the SPOC data products. 
\vspace{5mm} \facilities{TESS, SALT (HRS), SOAR (Goodman), WASP, Spitzer, LCO (NRES), CDS, MAST, Simbad} \software{{\tt Astropy} \citep{2013A&A...558A..33A}, {\tt emcee} \citep{Foreman-MackeyEmcee2013}, {\tt celerite} \citep{Foreman-MackeyFast2017}}
Q: How to properly install a Python package from within the code?

I need to install a package, matplotlib, from within my code. I am a beginner and have very little knowledge of Python.

    import subprocess
    import sys

    def install(matplotlib):
        subprocess.call([sys.executable, "-m", "pip", "install", matplotlib])

Does that mean that after the computer finishes the code above, I can start using matplotlib commands? It doesn't seem like it; I still get an error on:

    import matplotlib.pyplot as plt

The error: No module named 'matplotlib'

How can I fix this? I know this is a very basic problem, but I need help. Any help is appreciated!

A: You could use pip's Python module to achieve that.

    import pip

    def install(package):
        if hasattr(pip, 'main'):
            pip.main(['install', package])
        else:
            pip._internal.main(['install', package])

A: Try using the pip library:

UPDATED

    import pip

    if hasattr(pip, 'main'):
        pip.main(['install', 'matplotlib'])
    else:
        pip._internal.main(['install', 'matplotlib'])
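A note on the approaches above: pip's own documentation recommends against importing pip in a program (`pip.main` and `pip._internal` are not stable public APIs) and suggests invoking pip in a subprocess instead, which is essentially what the question's snippet attempts. A minimal sketch of that route follows; the helper name `pip_install_cmd` is ours, and `matplotlib` is just the example package from the question:

```python
import subprocess
import sys

def pip_install_cmd(package):
    # Invoke pip via the *current* interpreter so the package is
    # installed into the same environment this script runs in.
    return [sys.executable, "-m", "pip", "install", package]

def install(package):
    # check_call raises CalledProcessError if pip exits non-zero,
    # unlike subprocess.call, which silently ignores failures.
    subprocess.check_call(pip_install_cmd(package))

# install("matplotlib")            # run the install first...
# import matplotlib.pyplot as plt  # ...then import, only afterwards
```

Note also that the import must happen after `install()` has returned; a `subprocess.call` that fails (no network, wrong interpreter, permissions) returns a non-zero code that the question's code never checks, which is one likely reason the later import still failed.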
\section{Introduction} The concept of cosmic inflation has been extremely effective in explaining different properties of the early universe \cite{starobinskii1979spectrum,sato1981first,guth1981inflationary,linde1983chaotic,linde1995quantum}. Various satellite observations have consistently confirmed inflationary estimates for the early universe, and the most recent evidence from the Planck experiment continues this trend \cite{aghanim2020planck,akrami2020planck1,aghanim2018planck,akrami2018planck}. Furthermore, observational evidence supports a wide range of inflationary models, many of which are inspired by radically different contexts ranging from modified gravity theories to quantum gravitational frameworks \cite{martin2014encyclopaedia,martin2014best,wands2008multiple,berera1995warm,dvali1999brane,alexander2013chern,kanti2015gauss,oikonomou2021generalizing,Odintsov:2019clh}. However, traditional single field models (which some refer to as "supercooled" inflationary models \cite{berera1995warm}) have substantial empirical support and are still widely used in theoretical studies. In recent years, a great deal of effort has gone into developing a "Theory of Everything," and string theory is probably the best-known candidate for such a paradigm. Given the scope of string theory, it is reasonable to expect it to have far-reaching consequences for cosmology. As a result, there is a large and diverse body of literature that has looked into the cosmological consequences of string theory. One of the many cosmologically interesting aspects of the theory is the enormous number of possible vacuum states it admits, which can go as high as $\mathcal{O}(10^{500})$; all these possible vacua constitute the "Landscape" of string theory. The question of which classes of low energy EFTs are actually compatible with string theory then becomes a very interesting one.
In order to address this issue, Vafa coined the term "Swampland," which refers to the class of low-energy effective field theories that cannot have a consistent UV completion within the string theoretic paradigm. Furthermore, a number of field theoretic UV completion criteria from string theory, known as the "swampland conjectures," have been proposed in recent years to classify whether a given regime is in the swampland or not. Many people consider string theory to be a feasible quantum gravity framework, so if a low-energy EFT meets these criteria, it might also be compatible with a self-consistent quantum gravity theory. Although quite a number of swampland criteria have been proposed in recent times, like the completeness \cite{polchinski2004monopoles} and cobordism \cite{mcnamara2019cobordism} conjectures, the distance and dS conjectures quite clearly have the most striking implications for cosmology. These two conjectures can be described as follows: \\ \\ $1$: Swampland Distance Conjecture: This conjecture limits the field space of validity of any effective field theory \cite{ooguri2016non}. It sets a maximum range traversable by the scalar fields in an EFT as \begin{equation} \frac{\Delta \phi}{m_{p}} \leq d \sim \mathcal{O} (1) \end{equation} where $m_{p}$ is the reduced Planck mass, d is some constant of $\mathcal{O}(1)$, and $\phi$ is the scalar field of the EFT. \\ \\ $2$: Swampland de Sitter Conjecture: This conjecture states that it is not possible to create dS vacua in string theory \cite{obied2018sitter}. The conjecture stems from the observation that it has been very hard to generate dS vacua in string theory \cite{dasgupta2019sitter,danielsson2018if} (although it has been shown that creating dS vacua in string theory is possible in some schemes, like the KKLT construction \cite{kachru2003sitter}).
The conjecture sets a lower bound on the gradient of scalar potentials in an EFT, \begin{equation} m_{p} \frac{| V^{\prime} |}{V} \geq c \sim \mathcal{O} (1) \end{equation} where c is some constant of $\mathcal{O}(1)$ and V is the scalar field potential. A "refined" form of the swampland de Sitter conjecture places constraints on the Hessian of the scalar potential (a finding which first appeared in \cite{garg2019bounds} and later in \cite{ooguri2019distance}) and is given by \begin{equation} m_{p}^{2} \frac{ V^{\prime \prime}}{V} \leq - c^{\prime} \sim \mathcal{O} (1) \end{equation} These criteria have intriguing implications for cosmology, especially for single field inflation. Considering the data on inflation, it was shown that single field inflation in a GR based cosmology is incompatible with these conjectures \cite{kinney2019zoo}. A lot of effort has gone into resolving this dispute \cite{geng2020potential,scalisi2019swampland,ashoorioon2019rescuing}. However, it has been shown that if the background cosmology for single field inflation is not GR based, then this inflationary regime can still satisfy the swampland criteria \cite{lin2019chaotic,odintsov2020swampland,yi2019gauss,trivedi2020swampland,oikonomou2021rescaled}. Multi-field inflationary models have also been shown to be quite consistent with the conjectures even in GR based paradigms \cite{bravo2020tip}. It is also worth noting that in both GR and non-GR based cosmologies, the warm inflation paradigm has been shown to be very compatible with the swampland criteria even for single field models \cite{motaharfar2016warm,motaharfar2019warm,das2019warm,das2020runaway}. The recently proposed "Trans-Planckian Censorship Conjecture" (TCC) \cite{bedroya2019trans} is another swampland conjecture that has sparked a lot of interest in inflationary cosmology. Models with tachyonic scalar fields as the inflaton can also be consistent with the conjectures \cite{mohammadi2020warm,trivedi2021rejuvenating}.
Single field GR based inflationary models can only be compatible with the TCC if they are extremely fine tuned \cite{bedroya2020trans}, which is ironic given that inflation was invented to solve the fine tuning problem of traditional big bang cosmology. Much further work has been done to clarify the TCC's problems with single field inflation, with the overall picture appearing to be that the TCC is more comfortable with non-standard inflationary regimes than with usual single field models \cite{bernardo2020trans,mizuno2020universal,brahma2020trans,dhuria2019trans,brandenberger2020strengthening,schmitz2020trans,guleryuz2021trans}. What one can hence understand from the current literature on the swampland and inflation is that one is not well placed to expect conventional (cold) single field models to be on amicable terms with these conjectures. One might hope, however, that non-trivial modifications to cold single field models based in a GR cosmology could render these paradigms consistent with the conjectures. \\ \\ One such non-trivial modification to single field inflationary models can be found by investigating such regimes in a Lorentz violating background. Several inflationary regimes in Lorentz-violating cosmological scenarios have been studied in recent times \cite{gasperini1985inflation,lim2005can,zuntz2008constraining,armendariz2010primordial,kanno2006Lorentz,avelino2009impact,donnelly2010coupling,almeida2017cosmology}. Gasperini \cite{gasperini1985inflation} proposed that the primordial period of rapid expansion of the universe may have been achieved if gravitational interactions were characterised by a theory that is not locally Lorentz invariant at some extremely early epoch. The authors in \cite{donnelly2010coupling} examined a Lorentz-violating inflationary theory based on an Einstein-aether theory and a scalar field Lagrangian.
Such scenarios have also been used to understand dark energy, where it has been demonstrated that violating Lorentz invariance yields Lagrangians capable of driving the universe's current acceleration \cite{blas2011technically,audren2013cosmological}. The former and latter theories, as a result, deal with Lorentz symmetry violation at small and large distances, respectively \cite{jacobson2008einstein}. In \cite{almeida2017cosmology}, it was shown that a time-like Lorentz violating background can produce sufficient inflation whilst also providing an explanation for the current dark energy epoch. Another interesting point is that Lorentz violations arise in certain quantum gravity constructions \cite{Jacobson:2004qt,collins2006lorentz,Mavromatos:2007xe,li2011background}. Lorentz violation during inflation generates some non-standard changes in the usual inflationary dynamics and could hence be an interesting regime with regards to the swampland. We also want to emphasize an important point: the swampland conjectures are intended as criteria for the underlying quantum gravity as a whole, even though they are rooted in string theory. Hence we are not restricted to considering only some particular form of Lorentz violating cosmology suited to or based in a specific quantum gravity approach. The goal of this paper is to understand whether Lorentz violations during inflation (and more generally in the early universe) could be accommodated by the underlying theory of quantum gravity, and to that end we use the swampland criteria. \\ \\ In the next section, we briefly describe a particular kind of Lorentz violating cosmology, while in Section III we discuss the swampland status of (cold) single field inflation in that regime.
In Section IV, we will consider three different inflationary models and discuss which of them could be consistent with the swampland on the basis of the groundwork laid in Section III. We summarize our work in Section V with some concluding remarks about the overall picture of the early universe painted by the swampland so far. \section{Basics of the Lorentz violating cosmology} We will be working in a time-like Lorentz violating cosmological background here. A purely time-like background can be defined by a small subset of Lorentz violating operators that preserve rotational invariance, which is why one would use a time-like operator here \cite{passos2017Lorentz}. Furthermore, as Kostelecky and Mewes describe in \cite{kostelecky2009electrodynamics}, the cosmic microwave background becomes a natural choice of preferred frame in this scenario. We start off with a Lagrangian in a 3+1 dimensional background describing a scalar field (which will eventually be the inflaton) coupled to a Lorentz violating background (similar to the one in \cite{almeida2017cosmology}) \begin{equation} \mathcal{L} = \Bigg( \frac{R}{2 \zeta^{2}} - \frac{1}{2} \left( g{{_{\mu}}{_{\nu}}} + \xi_{1} k^{1}{{_{\mu}}{_{\nu}}} \right) \partial^{\mu} \phi \partial^{\nu} \phi - V(\phi) \Bigg) \sqrt{-g} \end{equation} where $ \zeta^{2} = 8 \pi G $ and $V(\phi)$ is the inflaton potential, while the $k^{i}{{_{\mu}}{_{\nu}}}$ are time-like tensors which, in a more general Lagrangian, can also couple to other fields. Their only non-zero component is $ k^{i}{{_{0}}{_{0}}} $, which means \begin{equation} k^{i}{{_{\mu}}{_{\nu}}} = \begin{pmatrix} -\beta_{i} & 0 \\ 0 & 0 \end{pmatrix} \end{equation} where the tensor couples to the scalar field via the coupling $ \xi_{1} >0 $. This kind of theory has been argued to be able to explain both the current accelerated expansion of the universe as well as the inflationary phase \cite{almeida2017cosmology}.
It can do so if $ \beta_{1}\to 0 $ at short distances, which allows the inflationary dynamics to be controlled solely by the inflaton field, while if $ \beta_{1} \to -1 $ at large distances, the Lorentz violating regime can also explain dark energy. We will, however, only discuss the inflationary phase in the analysis of the next section. For studying the cosmological properties of the Lagrangian described above, we can take the metric to be of the FLRW form \begin{equation} ds^{2} = -dt^2 + a(t)^{2} d\boldsymbol{x}^{2} = g{{_{\mu}}{_{\nu}}} dx^{\mu} dx^{\nu} \end{equation} The background fields responsible for the Lorentz violating factor change the metric as \begin{multline} d\overline{s}^{2} = \overline{g}{{_{\mu}}{_{\nu}}} dx^{\mu} dx^{\nu} = \left( g{{_{\mu}}{_{\nu}}} + \xi_{1} k^{1}{{_{\mu}}{_{\nu}}} \right) dx^{\mu} dx^{\nu} = \\ -\left(1 + \xi_{1} \beta_{1} \right) dt^{2} + a(t)^{2} d\boldsymbol{x}^{2} \end{multline} One can then write an effective velocity for the scalar field as \begin{equation} v = \sqrt{1 + \xi_{1} \beta_{1} } c \end{equation} where c is the usual speed of light. One can also define a relationship between the Lorentz violating parameter and the redshift z as \cite{almeida2017cosmology} \begin{equation} \xi_{1} \beta_{1} = - \frac{C_{1}}{z(z+2) + C_{1}} \end{equation} where $ C_{1} $ is a constant. This relation becomes very important when one wants to realize both the inflationary phase and the dark energy phase with the same inflaton field, but as we are currently interested only in the former phase, we will not explore the redshift relation further. If one considers a General Relativity based cosmological scenario, then the Friedmann equation retains its usual GR form, \begin{equation} H^{2} = \frac{\rho}{3 m_{p}^{2}} \end{equation} where we are working in $ m_{p} = \sqrt{ \frac{1}{8 \pi G} } $ units.
However, the energy and pressure densities do not retain their usual GR forms. The energy-momentum tensor can be written here as \begin{equation} T{{_{\mu}}{_{\nu}}} = \partial_{\mu} \phi \partial_{\nu} \phi + g{{_{\mu}}{_{\nu}}} \mathcal{L}_{\phi} \end{equation} where $ \mathcal{L}_{\phi} = - \frac{1}{2} \left( g{{_{\mu}}{_{\nu}}} + \xi_{1} k^{1}{{_{\mu}}{_{\nu}}} \right) \partial^{\mu} \phi \partial^{\nu} \phi - V(\phi) $. Given that our field configuration is homogeneous, so that $ \phi = \phi (t) $, the energy and pressure densities are \begin{equation} \rho_{\phi } = \frac{1}{2} (1 + \xi_{1} \beta_{1} ) \dot{\phi}^{2} + V(\phi) \end{equation} \begin{equation} p_{\phi } = \frac{1}{2} (1 + \xi_{1} \beta_{1} ) \dot{\phi}^{2} - V(\phi) \end{equation} (note that the Lorentz violating factor multiplies the kinetic term in both densities, as required for consistency between the continuity equation and the field equation below). Finally, the equation of motion for the scalar field is found to be \begin{equation} \ddot{\phi} + 3 H \dot{\phi} + \frac{V^{\prime}(\phi)}{1 + \xi_{1} \beta_{1}} = 0 \end{equation} This completes our groundwork for the formulation of inflation in this scenario. In the next section we discuss the various inflationary parameters in this scenario and consider the possibility of this regime being swampland consistent. \section{Swampland implications for Lorentz violating inflation } In the inflationary phase, the energy density $ \rho $ is dominated by the inflaton energy density $ \rho_{\phi } $, and so one can take $ \rho \sim \rho_{\phi } $ during inflation. Hence, the Friedmann equation (10) during inflation takes the form \begin{equation} H^{2} = \frac{1}{3 m_{p}^{2}} \left( \frac{1}{2} (1 + \xi_{1} \beta_{1} ) \dot{\phi}^{2} + V(\phi) \right) \end{equation} From now onwards, we define the Lorentz violating parameter to be $ \kappa = \xi_{1} \beta_{1} $. Now, during inflation $ \dot{\phi}^{2} \ll V(\phi) $, which corresponds to the slow roll condition.
Applying this condition here, we find the Friedmann equation to be \begin{equation} H^{2} = \frac{V(\phi)}{3 m_{p}^{2}} \end{equation} One can note that this is the same form that the Friedmann equation takes for inflationary models in a GR based cosmology, which are Lorentz-abiding regimes. Hence, the Friedmann equation remains unchanged even after factoring in Lorentz violations in this case. Now we turn our attention to the field equation for the inflaton (14). Another slow roll criterion during inflation is that $ \ddot{\phi} \ll 3H \dot{\phi} $. Considering this, we can write the field equation as \begin{equation} 3H \dot{\phi} + \frac{V^{\prime}(\phi)}{1 + \kappa} \approx 0 \end{equation} Here we begin to see the effects of Lorentz violation on the inflationary dynamics, as the Lorentz violating parameter enters the field equation during inflation. Let us now find expressions for an important part of the inflationary setup, the slow roll parameters. The $ \epsilon$ slow roll parameter is given by \begin{equation} \epsilon = -\frac{\dot{H}}{H^{2}} \end{equation} Using the Friedmann equation (16) and the field equation (17), we can write the $ \epsilon $ potential slow roll parameter from its usual definition as \begin{equation} \epsilon = \frac{m_{p}^{2}}{2 (1 + \kappa)} \left( \frac{V^{\prime}}{V} \right)^{2} \end{equation} Similarly, we find the $\eta$ potential slow roll parameter to be \begin{equation} \eta = \frac{m_{p}^{2}}{1 + \kappa} \frac{V^{\prime \prime}(\phi)}{V(\phi)} \end{equation} One can then find the e-fold number during inflation as \begin{equation} N = \int_{t_{i}}^{t_{f}} H dt \end{equation} This can be written in terms of $\phi$ as \begin{equation} N = \int_{\phi(t_{i})}^{\phi(t_{f})} \frac{(1 + \kappa)}{m_{p}^{2}} \frac{ V(\phi)}{V^{\prime}(\phi)} d\phi \end{equation} where $ \phi (t_{i} ) $ and $\phi (t_{f} ) $ are the values of the inflaton field at the time of horizon
crossing and at the end of inflation, respectively. At this point, we can start to discuss the implications of the swampland conjectures for this inflationary regime. A point to note for the dS conjecture is that if a low energy EFT is consistent with either the original dS conjecture (2) or its refined form (3), then it can have a consistent UV completion as far as the dS criterion is concerned. Hence one may use either of these conjectures, and we will be using the original dS conjecture (2) rather than the refined form in our analysis. The issues of the swampland conjectures with single field inflation were first discussed in \cite{kinney2019zoo}, where the background cosmology was also general relativistic. Two of the strongest disagreements between the conjectures and inflation concern the bounds on the $\epsilon$ parameter during inflation and the number of e-folds. In their work, they showed that the dS conjecture forbids the $\epsilon$ parameter from satisfying the usual bound needed for sufficient inflation, which is $ \epsilon \ll 1 $. It is easy to see how this is the case, as the $\epsilon$ parameter bound for single field inflation in a GR based cosmology is given as \begin{equation} \epsilon = \frac{m_{p}^{2}}{2} \left( \frac{V^{\prime}}{V} \right)^2 \geq \frac{c^2}{2} \end{equation} where c is an $ \mathcal{O}(1) $ parameter in the original dS conjecture (2). The above equation makes it clear that it would be problematic to achieve $ \epsilon \ll 1 $ during inflation considering $ c \sim \mathcal{O} (1) $. From this definition, they further worked out bounds on the scalar spectral index and found a disagreement between the order of the c parameter required by the data on single field inflation and that estimated from string theory itself.
Namely, the c parameter should be an $\mathcal{O}(0.1) $ term for it to be consistent with the data on inflation, instead of $\mathcal{O}(1) $ as estimated from string theory. The next serious issue concerns the e-fold number during inflation. If one considers both the distance (1) and dS (2) conjectures to hold true, then one finds that these conjectures induce a fatal bound on the number of e-folds during inflation. This can be readily shown here, as the number of e-folds for single field inflation (in a GR based cosmology) can be roughly written as \begin{equation} N \simeq \frac{\Delta \phi}{m_{p} } \frac{1}{m_{p} \frac{V^{\prime}}{V} } \end{equation} Applying both the distance conjecture ($ \frac{\Delta \phi}{m_{p}} \leq d \sim \mathcal{O} (1) $) and the dS conjecture ($ m_{p} \frac{| V^{\prime} |}{V} \geq c \sim \mathcal{O} (1) $) here, one finds that the e-fold number is constrained by these criteria to be less than unity. This is an incredibly distressing prediction, as the latest observational bounds \cite{akrami2020planck1} require at least around 60 e-folds in order for inflation to explain the problems of big bang cosmology which it was originally brought in to rectify. Hence these explorations seem to suggest that all single field inflationary models in a GR based cosmology lie in the swampland and so, possibly, would not eventually be viable with quantum gravity. \\ \\ However, we now make the case that single field inflation can be consistent with the swampland conjectures even when the background cosmology is described by GR, albeit with some Lorentz violations. We can now tackle both of the prime concerns raised in \cite{kinney2019zoo} and find bounds on the Lorentz violating parameter $ \kappa $ which would allow us to rectify these issues.
Considering the dS conjecture (2), one can write the following bound on the $ \epsilon $ parameter (19): \begin{equation*} \epsilon \geq \frac{c^{2}}{2 (1 + \kappa)} \end{equation*} One can clearly see that the dS conjecture bound on this $\epsilon$ parameter is similar to the one for the usual single field model encountered in (23), the difference being the $ ( 1 + \kappa) $ term in the denominator here. As $ c \sim \mathcal{O} (1) $, in order for $ \epsilon \ll 1 $ one requires that \begin{equation} 2 (1 + \kappa) \gg 1 \end{equation} This provides us a lower bound on the Lorentz violating parameter in order for the inflationary regime to be consistent with the swampland. Furthermore, we can roughly write the e-fold number (22) during inflation in this case as \begin{equation} N \simeq (1+ \kappa) \frac{\Delta \phi}{m_{p} } \frac{1}{m_{p} \frac{V^{\prime}}{V} } \end{equation} Again applying the dS and distance conjectures together here, we find that the term $ \frac{\Delta \phi}{m_{p} } \frac{1}{m_{p} \frac{V^{\prime}}{V} } $ has to be less than 1. Hence, in order to have sufficient inflation one requires \begin{equation} (1+ \kappa) \gg 1 \end{equation} Considering the conditions in (25) and (27), the final bound on $\kappa$ is \begin{equation} \boxed{\kappa \gg 1} \end{equation} Hence, in order for inflation to be consistent with the swampland conjectures in this regime, the minimum requirement that needs to be satisfied is $ \kappa > 1 $. This removes the issues with both the $\epsilon$ parameter and the e-fold number, and hence this lower limit on $ \kappa $ is what we need in order to have swampland consistent inflation in this regime. \\ \\ A question one might now ask is: how do we actually determine whether or not this inequality is satisfied in a particular inflationary model? The answer lies in the perturbation parameters for inflation, in particular the scalar spectral index.
The definition of the scalar spectral index remains the same here as for usual inflationary models \cite{almeida2017cosmology,baumann2009tasi} \begin{equation} n_{s} -1 = 2\eta - 6\epsilon \end{equation} One can further work out other perturbation parameters, like the tensor-to-scalar ratio and the tensor spectral index, but the scalar spectral index is all we need here to constrain $ \kappa $ using the latest observational data from the Planck experiment \cite{akrami2020planck1}. In the next section we will constrain the Lorentz violating parameter in three different inflationary models and try to ascertain whether the constraints from the swampland conjectures can be satisfied by a significant range of potentials. \section{Analysing the swampland consistency of some models in the Lorentz violating regime } In this section, we will work out the Lorentz violating parameter in the Higgs, radion gauge, and spontaneous symmetry breaking inflationary models and check whether these models are swampland consistent in a Lorentz violating regime. \subsection{Higgs Inflation } Higgs inflation is an inflationary regime of particular phenomenological interest \cite{bezrukov2008standard,bezrukov2009standard,bezrukov2009standard1,garcia2011higgs}. As the name suggests, here the well known Higgs field itself is considered to play the role of the inflaton. The inflaton potential in this case is \cite{bezrukov2008standard,bezrukov2009standard,garcia2011higgs,martin2014encyclopaedia} \begin{equation} V(\phi) = M^{4} (1- e^{-\sqrt{2/3} \phi/m_{p} })^{2} \end{equation} where M is a mass scale (this convention for denoting the mass scale will be retained for the later potentials in this work as well). One interesting thing to note here is that the Higgs inflation potential is an example of an inflationary model with no free parameters.
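As a small intermediate step (our addition, filling in the algebra), differentiating (30) gives the logarithmic gradient and curvature of the potential that enter the slow roll parameters (19) and (20):

```latex
\begin{equation*}
\frac{V^{\prime}(\phi)}{V(\phi)}
  = \frac{2\sqrt{2/3}}{m_{p}\left(e^{\sqrt{2/3}\,\phi/m_{p}}-1\right)},
\qquad
\frac{V^{\prime\prime}(\phi)}{V(\phi)}
  = \frac{4}{3 m_{p}^{2}}\,
    \frac{2-e^{\sqrt{2/3}\,\phi/m_{p}}}{\left(e^{\sqrt{2/3}\,\phi/m_{p}}-1\right)^{2}}
\end{equation*}
```

Substituting these into (19) and (20) yields the $\epsilon$ and $\eta$ quoted next.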
In order to obtain the $ \kappa $ values allowed by this model, we first need to find the slow roll parameters $ \epsilon $ and $\eta$ (as defined in (19) and (20)) for the potential (30). Although this model has been shown to be consistent with observational data, even the Higgs potential can be in tension with the swampland conjectures in the usual GR inflationary regime \cite{denef2018sitter}. Hence it would be interesting to see whether considering a Lorentz violating regime for inflation helps alleviate these issues.\\ \\ For this potential, one finds these parameters to be \begin{equation} \epsilon = \frac{4}{3 (1 + \kappa) \left(e^{\sqrt{2/3} \phi / m_{p}}-1\right)^2} \end{equation} \begin{equation} \eta = \frac{4 \left(2 - e^{\sqrt{2/3} \phi / m_{p}}\right)}{3 (1 + \kappa) \left(e^{\sqrt{2/3} \phi / m_{p}}-1\right)^2} \end{equation} With these parameters in hand, after some algebra we find the scalar spectral index $ n_{s} $ to be \begin{equation} n_{s} -1 = -\frac{4 (-1 + \coth[\phi/(\sqrt{6} m_{p})]) \coth[\phi/(\sqrt{6} m_{p})] }{ 3(1 + \kappa)} \end{equation} The latest observational constraints \cite{akrami2020planck1} on the scalar spectral index give $ n_{s} = 0.9649 \pm 0.0042$ at the time of horizon crossing; hence for our purposes here we can take $ n_{s} \simeq 0.9649 $ and be assured of considerable precision in our analysis.
Using this value of the scalar spectral index, we arrive at the following expression for the Lorentz violating parameter \begin{equation} \kappa \simeq 37.99 \left(\coth \left(\frac{\phi}{\sqrt{6} m_{p}}\right)-1\right) \coth \left(\frac{\phi}{\sqrt{6} m_{p}}\right)-1 \end{equation} We consider the $\phi$ value at the time of horizon crossing $ \phi = \phi (t_{i}) $ to be around the reduced Planck mass \begin{equation} \phi \simeq m_{p} \end{equation} (we will justify this choice at the end of this subsection). Inserting this into the above equation, we finally find the Lorentz violating parameter to be \begin{equation} \kappa \simeq 154.495 \end{equation} This is well above the minimum bound implied by the swampland conjectures on inflation in the Lorentz violating regime, which is $ \kappa > 1 $ (28). To show that taking $ \phi \simeq m_{p} $ at the time of horizon crossing is justified, we turn to the e-folding number (22), for which we first need to find $\phi$ at the end of inflation.
This can be achieved quite easily using the fact that $ \epsilon = 1 $ at the end of inflation, which from (31) can be written as \begin{equation} \frac{4}{3 (1 + \kappa) \left(e^{\sqrt{2/3} \phi(t_{f}) / m_{p}}-1\right)^2} = 1 \end{equation} This is straightforward to solve, and one gets \begin{equation} \phi(t_{f}) \simeq 0.1084 m_{p} \end{equation} Now, plugging the value of $\kappa$ (36) into this $\phi$ at the end of inflation and using our estimate $\phi(t_{i}) \simeq m_{p}$, we can compute the integral (22) for the Higgs potential (30) to get the e-folding number as \begin{equation} N = \frac{1 + \kappa}{m_{p}^{2}} \int_{\phi(t_{f})}^{\phi(t_{i})} \frac{1}{2} \sqrt{\frac{3}{2}} m_{p} \left(e^{\frac{\sqrt{\frac{2}{3}} \phi}{m_{p}}}-1\right) d\phi \simeq 52 \end{equation} This is well within the usual range of 50-70 e-folds required for inflation to solve the problems of conventional big bang cosmology and to be consistent with the latest observational data \cite{akrami2020planck1,aghanim2018planck}, and so the estimate $\phi \simeq m_{p}$ is justified and is not in any tension with the data. This analysis tells us that Higgs inflation is consistent with the swampland conjectures even in a GR based cosmology when one considers Lorentz violations in the cosmological background. It is also interesting to note that this is a zero parameter model (in the sense that the potential has no free parameters to tune), yet it is still consistent with the swampland conjectures. \subsection{Radion Gauge Inflation } Radion gauge inflation was first studied in \cite{fairbairn2003radion} and is an extension of the gauge inflation scenario, in which the radius modulus field around which the Wilson loop is wrapped assists inflation as it shrinks \cite{freese1990natural}.
The potential in this scenario can be written as \cite{martin2014encyclopaedia} \begin{equation} V(\phi) = \frac{M^4 (\phi^2/m_{p}^{2})}{\alpha + (\phi^2/m_{p}^{2})} \end{equation} where M is again a mass scale and $\alpha$ is a positive dimensionless parameter, which acts as the only free parameter of the model. This potential can also be found in S-dual superstring models \cite{de1996inflation}. Although radion gauge inflation has never been studied in the context of the swampland for GR based single field inflation, the issues pointed out for GR based inflation in \cite{kinney2019zoo} are so strong that they would generically place all GR based potentials in the swampland; hence, it would be interesting to see how this model fares against these criteria in a Lorentz violating regime. \\ \\ We proceed as we did for Higgs inflation in subsection 4.1, by first finding the slow roll parameters in this regime. After some algebra, one finds that the parameters take the forms \begin{equation} \epsilon = \frac{2 \alpha^2 m_{p}^6}{(1 + \kappa) \left(\alpha m_{p}^2 \phi+\phi^3\right)^2} \end{equation} \begin{equation} \eta = \frac{2 \alpha m_{p}^4 \left(\alpha m_{p}^2-3 \phi^2\right)}{(1 + \kappa) \left(\alpha m_{p}^2 \phi+\phi^3\right)^2} \end{equation} With the slow roll parameters found, it is straightforward to write the scalar spectral index using its definition (29), and one finds the index to be \begin{equation} n_{s} - 1 = - \frac{4 \alpha m_{p}^4 \left(2 \alpha m_{p}^2+3 \phi^2\right)}{(1 + \kappa) \left(\alpha m_{p}^2 \phi+\phi^3\right)^2} \end{equation} Again using the estimate $ n_{s} \simeq 0.9649 $ and taking the field value at horizon crossing to be $ \phi \simeq m_{p} $ (we will again justify this towards the end, as in subsection 4.1), one finds the following relation for the Lorentz violating parameter in terms of $\alpha$: \begin{equation} \kappa = \frac{\alpha (227.92 \alpha+341.88)}{(\alpha+1)^2} - 1
\end{equation} In order to be consistent with the swampland conjectures, one requires the minimum bound $ \kappa > 1 $ on the Lorentz violating parameter, and this implies \begin{equation} \kappa = \frac{\alpha (227.92 \alpha+341.88)}{(\alpha+1)^2}-1 > 1 \implies \alpha > 0.0058 \end{equation} The requirement imposed on the free parameter $\alpha$ for this model to be swampland consistent is quite minimal and fully in line with the basic assumption that $\alpha$ is a positive parameter. Before concluding, we again justify the choice $ \phi(t_{i}) \simeq m_{p} $ by computing the e-fold number. As in subsection 4.1, we first find the value of $\phi$ at the end of inflation using $\epsilon = 1 $, which gives us \begin{equation} \frac{2 \alpha^2 m_{p}^6}{(1+\kappa) \left(\alpha m_{p}^2 \phi+\phi^3\right)^2} = 1 \end{equation} Solving this analytically for $\phi$ can be quite cumbersome, but fortunately we can proceed in an easier way. Given the definition of $\kappa$ (44) in terms of $\alpha$, we can plug in particular values of $\alpha$, thereby obtaining the corresponding $\kappa$, and then use those values to obtain the value of $\phi$ at the end of inflation and compute the number of e-folds. For simplicity, let us take $\alpha = 1 $, which results in $\kappa = 141.45$.
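The threshold on $\alpha$ can be checked with a few lines of Python, assuming only the $\kappa(\alpha)$ relation quoted above; the bisection lands at $\alpha \approx 0.0059$, consistent with the quoted $0.0058$ up to the rounding of the fit coefficients.

```python
# kappa(alpha) from the n_s = 0.9649, phi = m_p relation quoted above
def kappa(alpha):
    return alpha * (227.92 * alpha + 341.88) / (alpha + 1.0) ** 2 - 1.0

# sanity check: alpha = 1 gives the kappa value used later in the text
assert abs(kappa(1.0) - 141.45) < 1e-9

# bisect kappa(alpha) = 1 to locate the swampland-consistency threshold
lo, hi = 1e-6, 1.0  # kappa < 1 at lo, kappa > 1 at hi
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if kappa(mid) < 1.0:
        lo = mid
    else:
        hi = mid
alpha_min = 0.5 * (lo + hi)
print(round(alpha_min, 4))  # ~0.0059, in line with the quoted bound
```

A closed-form solution of the quadratic $225.92\alpha^{2} + 337.88\alpha - 2 = 0$ gives the same root; bisection is used only to keep the sketch generic.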
Using these values in (46), one can find the value of $\phi$ at the end of inflation: \begin{equation} \frac{0.01404 m_{p}^6}{\phi(t_{f})^2 \left(m_{p}^2+\phi(t_{f})^2\right)^2} = 1 \implies \phi(t_{f}) \simeq 0.1143 m_{p} \end{equation} Taking $\phi(t_{f})$ to be the above value and $\phi(t_{i}) \simeq m_{p} $, one finds the e-folding number to be \begin{equation} N = \int_{\phi(t_{f})}^{\phi(t_{i})} \frac{71.225 \phi \left(m_{p}^2+\phi^2\right)}{m_{p}^4} d\phi \simeq 53 \end{equation} We see that even for an elementary value of $\alpha = 1$, one obtains the sufficient number of e-folds of inflation needed to solve the conventional problems of big bang cosmology and to be consistent with the latest observational data \footnote{ A reader might be tempted to take a more graphical approach here, plotting N against $\alpha$ and reading off the e-fold consistent values of $\alpha$, rather than plugging in values by hand. We would have preferred that route as well but, as it turns out, analytically finding N as a function of $\alpha$ for arbitrary values of the parameter is not feasible, partly because changing $\alpha$ also changes the value of $\phi(t_{f})$ significantly (which again is quite hard to solve for arbitrary values) and hence changes the integration limits for computing the e-folding number. This is why we decided to plug in $\alpha$ values ourselves, and we were still able to show that this model can nevertheless be easily consistent with the observational requirements on the e-folding number. } \cite{akrami2020planck1,aghanim2018planck}. Hence considering $\phi(t_{i}) \simeq m_{p} $ is not a wrong choice in this scenario and is completely consistent with both the swampland conjectures and the data.
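The radion e-fold count above can be reproduced numerically; a minimal sketch (Python, units $m_{p} = 1$, using the example values $\alpha = 1$, $\kappa = 141.45$ from the text) solves $\epsilon = 1$ by bisection and evaluates the e-fold integral in closed form.

```python
alpha, kappa = 1.0, 141.45  # example values chosen in the text
one_plus_kappa = 1.0 + kappa

# epsilon(phi) = 2 alpha^2 / ((1+kappa)(alpha phi + phi^3)^2), in units m_p = 1
def eps(phi):
    return 2.0 * alpha**2 / (one_plus_kappa * (alpha * phi + phi**3) ** 2)

# bisect eps(phi) = 1 for the end-of-inflation field value (eps decreases with phi)
lo, hi = 0.01, 1.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if eps(mid) > 1.0:
        lo = mid
    else:
        hi = mid
phi_f = 0.5 * (lo + hi)  # ~0.12, near the quoted phi(t_f)

# e-folds: antiderivative of 71.225 phi (1 + phi^2) is 71.225 (phi^2/2 + phi^4/4)
F = lambda p: p**2 / 2.0 + p**4 / 4.0
N = 71.225 * (F(1.0) - F(phi_f))
print(round(N))  # 53, matching the quoted e-fold number
```

The bisection root lands slightly above the quoted $\phi(t_{f}) \simeq 0.1143\,m_{p}$ (rounding of $\kappa$ is the likely cause), but the e-fold count is insensitive to this and still rounds to 53.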
This analysis finally allows us to conclude that radion gauge inflation can easily be viable with the swampland conjectures in a Lorentz violating regime, even when the background cosmology is general relativistic. \subsection{Spontaneous Symmetry Breaking Inflation } Spontaneous symmetry breaking inflation, in the sense that we discuss here, was first studied by Moss in \cite{moss1985primordial}. Moss considered the following potential in the framework of models with spontaneous symmetry breaking, where $\phi $ represents one of the components of a Higgs field. The potential in this inflationary regime takes the form \begin{equation} V(\phi) = M^4 \left(1 + \frac{\alpha \phi^2}{m_{p}^2}+\frac{\gamma \phi^4}{m_{p}^4} \right) \end{equation} where $\alpha$ and $\gamma$ are constant dimensionless parameters and $M$ is again a mass scale. This model differs from the others we have discussed so far in that it has two free parameters, $\alpha$ and $\gamma$, instead of just one (like the radion gauge model) or none (like Higgs inflation). It is interesting to mention that besides spontaneous symmetry breaking, this type of potential also appears in certain SUSY and spontaneous D-parity breaking inflationary scenarios \cite{dine1997inflaton,riotto1998inflation,gong2008inflation}.
It also becomes interesting to check the consistency of this model with the swampland, as spontaneous symmetry breaking is the way in which Lorentz violating effects are introduced in the Standard Model extension \cite{kostelecky1989spontaneous}, an EFT framework containing the Standard Model, General Relativity, and other possible operators which can break Lorentz symmetry.\\ \\ The slow roll parameters for this potential take the form \begin{equation} \epsilon = \frac{2 \left(\alpha m_{p}^3 \phi+2 \gamma m_{p} \phi^3\right)^2}{(1 + \kappa) \left(\alpha m_{p}^2 \phi^2+\gamma \phi^4+m_{p}^4\right)^2} \end{equation} \begin{equation} \eta = \frac{2 m_{p}^2 \left(\alpha m_{p}^2+6 \gamma \phi^2\right)}{(1 + \kappa) \left(\alpha m_{p}^2 \phi^2+\gamma \phi^4+m_{p}^4\right)} \end{equation} This allows us to find the scalar spectral index (29) to be \begin{equation} n_{s} - 1 = \frac{4 m_{p}^2 \left(-2 m_{p}^4 \phi^2 \left(\alpha^2-3 \gamma\right) - 5 \alpha \gamma m_{p}^2 \phi^4 + \alpha m_{p}^6 - 6 \gamma^2 \phi^6\right)}{(1 + \kappa) \left(\alpha m_{p}^2 \phi^2+\gamma \phi^4+m_{p}^4\right)^2} \end{equation} \\ We again note that the index is evaluated at horizon crossing, and consider $ \phi = m_{p} $, which for this particular potential is quite natural: spontaneous symmetry breaking inflationary models belong to the class of small field models, and the field value does not rise much above the Planck mass. We will nevertheless justify this estimate later on.
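The spectral index for this potential can be sanity-checked numerically. Assuming the standard slow-roll relation $n_{s} - 1 = 2\eta - 6\epsilon$ (the definition referenced as (29)), the sketch below (Python, units $m_{p} = 1$) confirms that the combination $2\eta - 6\epsilon$ reduces to a single rational expression with numerator $4\left(-2\phi^{2}(\alpha^{2}-3\gamma) - 5\alpha\gamma\phi^{4} + \alpha - 6\gamma^{2}\phi^{6}\right)$ over $(1+\kappa)D^{2}$, where $D = 1 + \alpha\phi^{2} + \gamma\phi^{4}$.

```python
def eps(phi, a, g, kappa):
    # slow-roll epsilon for V = M^4 (1 + a phi^2 + g phi^4), units m_p = 1
    D = 1.0 + a * phi**2 + g * phi**4
    return 2.0 * (a * phi + 2.0 * g * phi**3) ** 2 / ((1.0 + kappa) * D**2)

def eta(phi, a, g, kappa):
    # slow-roll eta for the same potential
    D = 1.0 + a * phi**2 + g * phi**4
    return 2.0 * (a + 6.0 * g * phi**2) / ((1.0 + kappa) * D)

def ns_minus_1(phi, a, g, kappa):
    # combined closed form for the spectral index shift
    D = 1.0 + a * phi**2 + g * phi**4
    num = -2.0 * phi**2 * (a**2 - 3.0 * g) - 5.0 * a * g * phi**4 + a - 6.0 * g**2 * phi**6
    return 4.0 * num / ((1.0 + kappa) * D**2)

# verify the identity n_s - 1 = 2 eta - 6 eps at a few sample points
checks = [(0.5, -10.0, -10.0, 431.47), (1.0, 2.0, 0.5, 3.0), (0.3, -1.0, -0.2, 1.5)]
ok = all(abs(ns_minus_1(p, a, g, k) - (2 * eta(p, a, g, k) - 6 * eps(p, a, g, k))) < 1e-12
         for p, a, g, k in checks)
print(ok)  # True
```

The sample points (including the $\alpha = \gamma = -10$ case used later) are arbitrary; the identity is algebraic and holds for any parameter values with $D \neq 0$.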
Using the estimate $ n_{s} \simeq 0.9649 $ again and imposing the minimum requirement for swampland consistency of Lorentz violating inflation, $\kappa > 1 $, we get the following inequality \begin{equation} \kappa = \frac{683.76 \gamma^2+\gamma (569.8 \alpha-683.76)+\alpha (227.9 \alpha-113.96)}{(\gamma+\alpha+1)^2}-1 > 1 \end{equation} One can check with the help of any mathematical computation tool that this inequality is satisfied over a wide range of $\alpha $ and $\gamma $, where both can be either positive or negative. Just as an example, we highlight one possible solution of the above inequality, which puts the following simple bounds on $\alpha$ and $\gamma$: \begin{equation} \alpha \leq -1 , \gamma < 0 \end{equation} These restrictions on $\alpha$ and $\gamma $ do not represent a great deal of fine tuning and are just one of many possible ranges of values which these parameters can attain in order to satisfy the swampland constraint for Lorentz violating inflation. In order to compute the e-folding number in this case, we can again follow the route \footnote{We again choose this method, instead of carrying out the following analysis fully analytically for arbitrary values of the free parameters, for reasons similar to those given for the e-folding analysis of radion gauge inflation in subsection 4.2. } explored in the previous subsection and start by obtaining a particular value for $\kappa$. In this case that can be done by choosing any $\alpha$ and $\gamma$ that satisfy (54); just for simplicity we choose $\alpha = \gamma = -10$. To get $\phi(t_{f})$ we again set $\epsilon = 1 $, which gives us \begin{equation} \frac{0.441 \left(m_{p}^3 \phi(t_{f})+2 m_{p} \phi(t_{f})^3\right)^2}{\left(m_{p}^4-10 m_{p}^2 \phi(t_{f})^2-10 \phi(t_{f})^4\right)^2} = 1 \end{equation} The equation above can be solved to obtain $\phi(t_{f}) \simeq 0.271041 m_{p}$.
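The end-of-inflation value can be cross-checked with a short Python sketch (units $m_{p} = 1$, using the $\kappa(\alpha,\gamma)$ relation above at $\alpha = \gamma = -10$). With these rounded coefficients the bisection reproduces $\phi(t_{f}) \approx 0.271$, and the slow-roll e-fold integral comes out near $N \approx 58$, within a couple of e-folds of the value quoted below; the small gap traces to the rounding of the fitted coefficients.

```python
a, g = -10.0, -10.0  # example parameter point chosen in the text

# kappa from the quoted relation (n_s = 0.9649, phi = m_p)
kappa = (683.76 * g**2 + g * (569.8 * a - 683.76)
         + a * (227.9 * a - 113.96)) / (g + a + 1.0) ** 2 - 1.0

def eps(phi):
    # slow-roll epsilon for V = M^4(1 + a phi^2 + g phi^4), units m_p = 1
    D = 1.0 + a * phi**2 + g * phi**4
    return 2.0 * (a * phi + 2.0 * g * phi**3) ** 2 / ((1.0 + kappa) * D**2)

# bisect eps(phi) = 1 on (0.05, 0.30), where V > 0 and eps grows monotonically
lo, hi = 0.05, 0.30
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if eps(mid) < 1.0:
        lo = mid
    else:
        hi = mid
phi_f = 0.5 * (lo + hi)  # ~0.271, matching the quoted value

# e-folds via composite Simpson on the slow-roll integrand, written so that
# N = ((1+kappa)/20) * Integral of (10 phi^2 + 10 phi^4 - 1)/(phi + 2 phi^3)
f = lambda p: (10.0 * p**2 + 10.0 * p**4 - 1.0) / (p + 2.0 * p**3)
n = 2000
h = (1.0 - phi_f) / n
s = f(phi_f) + f(1.0) + sum((4 if i % 2 else 2) * f(phi_f + i * h) for i in range(1, n))
N = (1.0 + kappa) / 20.0 * (h / 3.0) * s
print(round(phi_f, 3))  # 0.271
```

Simpson's rule is used because the integrand, though elementary, changes sign inside the range; its exact antiderivative $\tfrac{5}{2}\phi^{2} - \ln\phi + \tfrac{7}{4}\ln(1+2\phi^{2})$ gives the same result.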
Using this value and our estimate $\phi(t_{i}) \simeq m_{p}$ (alongside our chosen values for $\alpha,\gamma $ and $\kappa $), we get the number of e-folds for the potential (49) from the integral (22) as \begin{equation} N = \int_{\phi(t_{f})}^{\phi(t_{i})} \frac{216.24 m_{p}^2 \phi^2+216.24 \phi^4-21.624 m_{p}^4}{m_{p}^4 \phi+2 m_{p}^2 \phi^3} \, d\phi \simeq 60 \end{equation} The number of e-folds of inflation produced in this case is again enough to solve the conventional problems of big bang cosmology and be consistent with the latest observational data \cite{akrami2020planck1,aghanim2018planck}. Hence our horizon crossing field estimate $\phi(t_{i}) \simeq m_{p}$ produces adequate inflation for such elementary values of the parameters $\alpha$ and $ \gamma$ whilst still being consistent with the data. This analysis finally allows us to conclude that single field inflation can still be consistent with the swampland conjectures in an essentially GR based cosmology in a Lorentz violating regime. \section{Concluding remarks and discussion} To summarize, in this work we attempted to find a way in which single field inflationary models can be consistent with the swampland conjectures even when the cosmological background is essentially general relativistic. Although there has been significant work on this in recent times, the novelty of our current work lies in the fact that we achieve this goal in quite a trouble-free way by considering a time-like Lorentz violating background. We start our work by reflecting on the problems between the swampland conjectures and single field inflation, which are at particularly unavoidable loggerheads in a general relativistic cosmology. We then discuss some crucial aspects of the Lorentz violating cosmology considered in our work, showing how the inflationary dynamics is affected by the Lorentz violations.
We then show the requirement for Lorentz violating single field models to be consistent with the swampland conjectures, which eventually turns out to be a minimum bound on the value of the Lorentz violating parameter. We then briefly touch upon the fact that quadratic inflation is not consistent with the swampland bound on the Lorentz violating parameter, building on previous work on the same model in this regime. After this we consider three inflationary models of deep phenomenological interest, namely Higgs inflation, radion gauge inflation and spontaneous symmetry breaking inflation, and show that all three of these models (which would otherwise have faced the same difficulties with the swampland in a simple GR based cosmology as their compatriots) are quite easily consistent with the swampland criteria in our Lorentz violating GR based cosmology. \\ \\ The main takeaway from this work is that there can still be ways for (cold) single field inflation to be consistent with the swampland conjectures even when the background cosmology is essentially GR based. Another interesting idea that can be pondered from our work concerns the significance of Lorentz violations in the early universe. Lorentz symmetry is a cornerstone of relativity, and if essentially GR based inflationary regimes, which are otherwise in quite unavoidable tension with the swampland criteria, become rather easily consistent with these conjectures only by considering a certain form of Lorentz violation in the background cosmology, then this could have interesting implications from a quantum gravity point of view.
The premise of the swampland conjectures is that these criteria are supposedly necessary conditions for low energy EFTs to have consistent UV completions, so the fact that a Lorentz violating cosmology makes it easier for inflationary regimes to be consistent with the swampland lends credence to the notion that quantum gravity points towards Lorentz violations being more significant in the early universe than presumed until now. It would also be interesting to explore in future work the implications of a Lorentz violating scenario for warm inflation, non-GR based inflation and multi-field inflation in the context of the swampland, and any corresponding implications for late universe scenarios like dark energy, dark matter and the Hubble tension. \section{acknowledgments} The author would like to thank Prof. Ralf Lehnert and all the organizers of the $ 4^{th} $ IUCSS Summer School on Lorentz- and CPT-Violating Standard Model Extensions, which was hosted by Indiana University, Bloomington. The ideas for this work developed during the summer school and the author would like to express his deepest gratitude to the school organizers for putting together such an intellectually enriching event. \section{Data availability statement} There are no new data associated with this work. \bibliographystyle{unsrt}
Great Artist may refer to:

A Great Artist, a 2003 album by A Life Once Lost
The Great Artist, a 2020 short film by Indrani Pal-Chaudhuri
The Great Artiste, a USAAF WWII B-29 Superfortress bomber
Illustrated Biographies of the Great Artists or The Great Artists, a 19th-century book series

See also:

Blush: The Search for the Next Great Makeup Artist, a 2008 competition show on Lifetime
The Session...Recorded in London with Great Artists, a 1973 album by Jerry Lee Lewis
Work of Art: The Next Great Artist, a 2010 competition show on Bravo
Beaudricourt () is a commune in the Pas-de-Calais department in the Hauts-de-France region in northern France.

Geography
A small farming village located 18 miles (28 km) west of Arras on the D23 road.

Sights
The church, dating from the nineteenth century.

See also
Communes of the Pas-de-Calais department
Monocrystalline silicon solar cells

In order to make PV cells for monocrystalline solar panels, manufacturers use silicon as the basic ingredient. The colour of crystalline silicon solar cells is determined by the thickness of the antireflective coating, a thin layer of silicon nitride that prevents reflection of solar energy from the cells. Two typical processing steps are: p+ emitter doping and oxide growth (predeposition in a BBr3 environment followed by drive-in in an oxidizing ambient), and saw damage removal (isotropic chemical etch) and texturing (crystallographic chemical etch) followed by a chemical clean. [Figure: construction of a conventional silicon solar cell [1].] The specific properties of main and rare impurities in silicon are examined, as well as the detection methods and requirements in modern technology. Fig. 3.8 shows that as the operating temperature increases, the open-circuit voltage of the cell decreases slightly, but the short-circuit current does not change.
Production has exploded in the last few years, reaching a new record value of more than 20 GWp in 2010. A typical heterojunction crystalline silicon cell process would be as follows [35]: Starting material: usually n-type silicon pseudo-squares, 100–156 mm from edge-to-edge, 180–220 μm thick. 631-344-3957, vmf5@columbia.edu 2Center for … Although efficiency is important, the principal requirement for industry is low cost. Jozef Szlufcik, ... Roger Van Overstraeten, in McEvoy's Handbook of Photovoltaics (Third Edition), 2018. Because the cell is composed of a single crystal, the electrons that generate a flow of electricity have more room to move. Monocrystalline Silicon Solar Cells. Fire the cell to drive the silver through silicon nitride in front of the cell, and alloy aluminum on the back of the cell to form a metal/p+ contact. manufacturing of semitransparent monocrystalline solar cells. THE MONOCRYSTALLINE SOLAR PANEL REDARC Monocrystalline Solar Panels are highly effi cient with a robust design. Res. This monocrystalline solar cell is a kind of photovoltaic solar panel made from high-purity single crystal silicon rod. It has been found that for a bright sunny day the average electrical power output and efficiency of Cylindrical parabolic concentrator and Fresnel mirror concentrators are 21.3 W and 3.4%; 35.3 W and 4.5% respectively. www.sunman-energy.com. The current status in the development of industrial-type processing steps leading to an improved cell efficiency will be described in detail in the following sections. In this experiment mono-crystalline silicon solar cell has been irradiated with Co60 gamma rays, and the results presented using C-V and I-V characteristics. Here, the development of ultraflexible, lightweight, and high efficiency (19%) monocrystalline silicon solar cells with excellent reliability, mechanical resilience, and thermal performance is demonstrated by applying a corrugation method combined with laser patterning. 
0-5 W POWER TOLERANE. Monocrystalline silicon solar photovoltaic cells are the most stable from the family of photovoltaic cells made of silicon [10–15]. Save pdf (2 mb) Save to Dropbox Save to Google Drive Save to Kindle. A good agreement between theory and experiment is obtained concerned with photoconductive properties. Therefore, besides improved production technology, the efficiency of the cells and modules is the main leverage to bring down the costs even more. This has been used as an essential part of electrical items for decades. The processing of monocrystalline silicon for obtaining semiconductor devices for solar photovoltaic cells is a complex manufacturing process [16–18]. Which solar cell is better - monocrystalline or polycrystalline? Industrial solar cells are fabricated in large volumes mainly on large area (≥100 cm2) Czochralski monocrystalline or multicrystalline silicon substrates. Dye-sensitized solar cell J–V performance at different operating temperatures. Monocrystalline solar cells, also called "single crystalline" cells are easily recognizable by their coloring. The colours we normally see in solar cells (i.e., dark grey for single crystalline, dark blue for multicrystalline) are produced by the antireflective coating thickness that allows the highest efficiencies. Download PDF. In industry, economical processes on multicrystalline silicon are feasible with efficiencies above 15 %. Cells are typically 125 mm (5 inches) or 156 mm (6 inches) square, respectively. This is a list of notable photovoltaics (PV) companies.Grid-connected solar photovoltaics (PV) is the fastest growing energy technology in the world, growing from a cumulative installed capacity of 7.7 GW in 2007, to 320 GW in 2016. 
Experimental procedure The photovoltaic cells were performed from monocrystalline silicon p type boron doped in a form of wafers of 200 m thickness and the area of 25 cm2 with the crystallographic … For the first time Bangladesh Atomic Energy Commission (BAEC) has established a monocrystalline silicon solar cell fabrication laboratory as partial fulfillment of national electricity demand. This is the highest efficiency ever reported for a MCZ silicon solar cell. Pattern the rear oxide using a screen-printed etch resist. Hence, higher open-circuit voltages were observed for some PERT cells. [15] studied the effect of temperature changes in the range of 0°C–70°C on the DSSC J–V performance. The cross section for electron photoemission from the indium level, a crucial parameter for modelling indium's IPV effect, is determined. Monocrystalline silicon solar cells Monocrystalline panels get their name from the fact that the silicon wafer used to make them is cut from a single crystal or 'boule' of silicon. Lau, D. Lee, M. Soroush, Theoretical and experimental study of a dye-sensitized solar cell, Ind. Mono-Si also serves as a photovoltaic, light-absorbing material in … Modules at a cost below 1.50 ECU/Watt will be reached less than 5 years from now. For the polycrystalline silicon wafer, however, the anti‐reflection effect of the surface after chemical texturing still has a big gap with the monocrystalline silicon wafer. According to recent calculation by W. Palz and Y. Schmid, a module cost of 1.40 ECU/Watt (Palz et al.) Analysis indicates that the market price of the commercial PV modules lies in the range $3.5–4.5/Wp. Because the efficiency of the cell influences the production cost at all production stages, substantial effort is directed toward efficiency improvement. 
The wafers were Bangladesh, a tropical country, undergoes an average daily irradiation of 5 kWh/m 2 /day which indicates solar energy is strong in this region and converting this solar energy into electricity may be one of the crucial solutions to eradicate the These are made using cells sliced from a single cylindrical crystal of silicon. Each of the individual solar cells contain a silicon wafer that is made of a single crystal of silicon. Unconventional techniques to benefit from the low-cost and high-efficiency monocrystalline silicon solar cells can lead to new device capabilities and engineering prospects. Materials Science in Semiconductor Processing. You can also choose from flexible, perc monocrystalline solar cell There are 61,129 suppliers who sells monocrystalline solar cell on Alibaba.com, mainly located in Asia. To improve the photoelectric conversion efficiency of monocrystalline silicon solar cells, the influence of the pyramidal texture uniformity on the defects in the monocrystalline silicon cells was analyzed by simulation, and the uniformity of the pyramidal texture was quantitatively characterized with the uniformity coefficient. Flexible High‐Efficiency Corrugated Monocrystalline Silicon Solar Cells for Application in Small Unmanned Aerial Vehicles for Payload Transportation. By varying the thickness of the antireflective coating, we achieve new colours that add to the aesthetic possibilities of PV technology but compromise the efficiency of the cells (Figure 14). As the name implies, the entire volume of the cell is a single crystal of silicon. thin base, i.e., much smaller than the minority-carrier diffusion length; oxide and/or nitride passivation+local BSF(PERL). These types of panels are called "monocrystalline" to indicate that the silicon used is single-crystal silicon. Investigation of the IPV effect of indium in high efficiency bulk and thin film cells reveals that indium improves their infrared response. 
Inhalation hazards are controlled with properly designed ventilation systems in the process stations. In this process, Au [18-20] or Ag [21, 22] nanoparticles are Monocrystalline silicon is produced from high purity raw mate-rials (99.999%). 2(3) 96–102 (2010) Production of "Standard" Silicon PV Cells Standard cells are produced using one monocrystallineand polycrystalline boron‐doped p‐type silicon substrates. This is the level of cost to be expected 5 years from now. 295 W POWER OUTPUT RANGE. 5% were achieved for boron-doped MCz silicon and gallium-doped Cz silicon, respectively. Mono-crystalline Silicon 1. THE MONOCRYSTALLINE SOLAR PANEL REDARC Monocrystalline Solar Panels are highly effi cient with a robust design. silicon based thin-film solar cells. This paper reports the recent improvements in the energy conversion efficiencies of solar cells on magnetically-confined Czochralski grown (MCZ) and float zone (FZ) silicon substrates at the University of New South Wales. To make solar cells for monocrystalline solar panels, silicon is formed into bars and cut into wafers. This observation is qualitatively in agreement with the experimental results reported in Refs. Based on laboratory scale achievements one can consider that production type cells able to fulfill the efficiency goal should possess most of the following features (providing that they can be introduced in a cost-effective way): optimized emitter surface concentration and doping profile; deep and highly doped emitter under the contact. To build a monocrystalline or polycrystalline panel, wafers are assembled into rows and columns to form a rectangle, covered with a glass sheet, and framed together. The details of these critical thin-film deposition steps—silicon nitride ARC, TCO, and amorphous silicon deposition—are discussed in the following sections. Deposition of a TCO on both front and rear sides. And the present photoelectric conversion efficiency of it can be as much as 18.1%. 
To answer this question, you first need to understand the difference between them. The cost-reduction road map illustrated in this paper yields monocrystalline-silicon module MSPs of $0.28/W in the 2020 time frame and $0.24/W in the long term (i.e., between 2030 and 2040). The total rear boron diffusion in this PERT structure appears to improve the sinface passivation quality of MCZ and some FZ substrates. Smaller monocrystalline solar panels (5, 10, 25 W) can be used to charge laptops, digital cameras, phones etc., while larger panels (40, 80, 130 W) panels can be used to power appliances such as microwaves or fridges, gardening features or outdoor lighting systems, or integrated into a solar array to power houses located in remote areas. Solar cells characterisation For solar cells with BSF, open-circuit voltage is smaller because of the front junction shunting due to the co-firing of front and back contact at high temperatures 378 A. Kaminski et al. The ultimate efficiency limit of single-band-g ap p-n junction silicon solar cells under AM1.5G can be moved forward taking into acc ount the AMl.5G spectrum normalized to 100 mW/cm The silicon used to make mono-crystalline solar cells (also called single crystal cells) is cut from one large crystal. Fig. A PERT (passivated emitter, rear totally-diffused) cell structure has been used to reduce the cell series resistance from higher resistivity substrates. I SC = 3.0 A (good cell) and I SC = 1.6 A (bad cell), have been studied by … d Drop/Meniscus for the realization of selective plating for solar cell and seminconductor, be transferred to industry to realize a new class of innovative devices and modules which will reduce the energetic requirements of buildings, and for the diffusion of distributed micro-generation. Cite. 
Electrical and electronic equipment The term electrical and electronic equipment (EEE) is defined as equipment designed for The influences of free carrier absorption, bandgap narrowing, and the Franz-Keldysh effect on cell infrared response are considered. monocrystalline silicon PERC solar cell is shown in Figure1. The cost distribution of a crystalline silicon PV module is clearly dominated by material costs, especially by the costs of the silicon wafer. One of these cells on MCZ substrates demonstrated 24.5% energy conversion efficiency at Sandia National Laboratories under the standard global spectrum (100 mW/cm2) at 25 °C. Figure 1: Schematic drawing of a solar cell with a silicon nitride antireflection coating and a screen-printed silver front and alumi-num rear contacts. They are incredibly easy to identify because they are a dark black in colour. For monocrystalline silicon solar cell fabrication, phosphorous diffusion technique is the most widely used technique for photovoltaic industry [10]. The first and foremost advantage of Monocrystalline solar panels is their better efficiency; Use of highest-grade silicon makes them highly durable and sturdy ; Monocrystalline solar panels usually offer 15-20% efficiency rates; Monocrystalline silicon solar panels are more space-efficient; These solar panels are capable of yielding the highest … A tempered glass coating and a sturdy double channel aluminium frame ensure that our panels will withstand harsh road conditions and extreme weather conditions. Their sleek design and efficient performance are some of the main reasons for preferring this type of solar panel. 
MULTI-PURPOSE MODULE MONO-CRYSTALLINE SILICON PHOTOVOLTAICMODULE WITH 175W MAXIMUM POWER This mono-crystalline 175 watt module features 16.2% encapsulated cellefficiency and 13.45% module efficiency.Using breakthrough technology perfected by Sharps nearly 45 years of research and development,these modules use a textured cell surface to reduce reflection of … The entire front surface and the patterned areas of the rear surface are doped N+. In 2006, around 86% of all produced wafer-based silicon solar cells are still featuring screen-printed front and back contacts. Monocrystalline elements and, panels based on them have today the highest efficiency - up to 22% among the commercially available and up to 38% in the space industry. cheaper to fabricate compared to monocrystalline silicon solar panels, yet are less efficient ~12% - 14% [20]. The spherical structure of the solar cell enables the reduction of heat generation within the cell and, therefore, reduces its effect on efficiency degradation. Silicon nitride ARC is deposited on the front side. Also reported is a PERL (passivated emitter, rear locally-diffused) cell on a FZ substrate of 24.7% efficiency, which is the highest efficiency ever reported for any silicon solar cell. deep back surface diffusion under the contact; antireflection coating (ARC) optimized for encapsulation. 6" Mono-Crystalline Silicon Solar Cell (3 busbar, Alkali):V03 PHYSICAL CHARACTERISTICS Dimension 156mm x 156mm ± 0.5mm Diagonal: 200mm ± 1.0mm (round chamfers) Thickness 240μm ± 40μm Front: Silver bus bars; Dark blue/others silicon nitride antireflection coating Back: Silver/aluminum bus bars; Full-surface aluminum BSF ELECTRICAL CHARACTERISTICS Efficiency Efficiency Pmax (W) … Overall, it has been observed that Fresnel mirror concentrator is more efficient than cylindrical parabolic concentrator. sion of solar energy. In addition, solar cells based on a heterojunction between amorphous silicon and crystalline silicon are discussed. 
The manufacturing process required to produce monocrystalline silicon is complicated, resulting in slightly higher costs than other technologies. These values are compared with a more recent determination of the absorption edge based on photoluminescence measurements. The processing of monocrystalline silicon for obtaining semiconductor devices for solar photovoltaic cells is a complex manufacturing process [16–18]. Silicon is grown in a laboratory to achieve a high degree of purity and is then sliced very thinly to make wafers. Both monocrystalline and polycrystalline solar panels have cells made of silicon wafers. Chem. 2. T. Saga, NPG Asia Mater. Analysis indicates that the market price of the commercial PV modules lies in the range $3.5–4.5/Wp. Processing techniques and materials are selected for the maximal cost reduction while maintaining a relatively good efficiency. The studied cells are manufactured at ICPE-Bucharest, and they are type P+PNN+solar photovoltaic cells (see Figures 1 and 2). With an interest incentive of 6 %, the cost reduces to 0.13 ECU/kWh. Screen print silver paste on front and rear sides. Rafael Serra i Florensa, Rogelio Leal Cueva, in Practical Handbook of Photovoltaics (Second Edition), 2012. Browse Figures. By continuing you agree to the use of cookies. It is concluded that the IPV effect is useful to improve the cell efficiency. Emitter doping and diffusion: predeposition in a POCl3 environment, followed by removal of the resulting doped oxide film and a high-temperature drive-in. Ultra-light: Through replacement of the glass and optimization of the frame eArche weighs as 70% less than conventional PV panels. Indium is selected as one of proper impurities that satisfy this condition in crystalline silicon, and theoretical prediction is experimentally verified. The efficiency of monocrystalline cells ranges from 11.8% (silver) to 15.8% (dark blue, standard). 
Monocrystalline solar cells — low-light performance (values relative to 1000 W/m²):

    Illumination intensity [W/m²]   Vpm     Ipm
    1000                            1.000   1.000
    800                             0.990   0.800
    600                             0.990   0.600
    200                             0.930   0.180
    100                             0.920   0.100
    60                              0.910   0.060
    30                              0.880   0.030
    15                              0.860   0.010

Open-circuit voltage temperature coefficient: -0.36%/K. Short-circuit current temperature coefficient: +0.07%/K.

The tendency here is to develop a cheap, good-quality solar-grade polysilicon feedstock material, to increase the substrate size, to reduce the kerf loss in slicing and to decrease the thickness of the substrates below 200 μm. The best efficiency for a monocrystalline silicon solar cell is 25% [4,15], quite close to the "practical" limit of around 26% [16]. Industrial solar cells are fabricated in large volumes, mainly on large-area (≥100 cm²) Czochralski monocrystalline or multicrystalline silicon substrates. The manufacturing process flow of an industrialized monocrystalline silicon PERC solar cell is shown in Figure 1. Christopher J. Petti, ... Gopalkrishna Prabhu, in Handbook of Thin Film Deposition (Third Edition), 2012. To extend the success story of this photovoltaic working horse, it is important to further bring down the costs. Monocrystalline solar cells are cut from a single, pure crystal of silicon. The oxide in the back is etched through the holes. During the manufacturing process, Si crystals are sliced from large ingots. Absorption coefficient values as low as 10^-7 cm^-1 have been determined, revealing structure due to 3- and 4-phonon assisted absorption. Since chemical texturing of the silicon wafer surface has low cost, it has been widely applied in the production process of solar cells. Finally, impurity gettering is studied along with modern techniques to determine gettering efficiency. The cost-reduction road map illustrated in this paper yields monocrystalline-silicon module MSPs of $0.28/W in the 2020 time frame and $0.24/W in the long term (i.e., between 2030 and 2040).
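The temperature coefficients quoted above can be applied as a simple linear correction about the 25 °C reference. A minimal sketch, with hypothetical cell values of 0.60 V open-circuit and 8.0 A short-circuit (not from any datasheet):

```python
# Linear temperature correction about the 25 C reference, using the
# coefficients quoted above: -0.36 %/K for Voc, +0.07 %/K for Isc.
def at_temperature(value_25c, coeff_pct_per_k, cell_temp_c):
    """Scale a 25 C reference value by its percent-per-kelvin coefficient."""
    return value_25c * (1.0 + coeff_pct_per_k / 100.0 * (cell_temp_c - 25.0))

# Hypothetical cell: 0.60 V open-circuit, 8.0 A short-circuit, run at 60 C.
voc_hot = at_temperature(0.60, -0.36, 60.0)
isc_hot = at_temperature(8.0, 0.07, 60.0)
print(round(voc_hot, 4), round(isc_hot, 3))  # -> 0.5244 8.196
```

The point of the sketch: a 35 K rise costs about 12.6% of the voltage while adding only about 2.5% of current, which is why hot modules lose power.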
Table 1: Process sequence for screen-printed solar cells. Other occupational hazards are related to the flammability of silane (SiH4) and its byproducts used in silicon nitride deposition; these hazards are discussed in the a-Si section, as silane is a major feedstock gas in a-Si deposition. EXPERIMENTAL: boron-doped p-type (100)-oriented Si wafers of 10 mm x 10 mm were used. Silicon is nontoxic and abundantly available in the earth's crust, and silicon PV modules have shown their long-term stability over decades in practice. The term electrical and electronic equipment (EEE) … But what makes them most unique is that they are considered to be made from a very pure type of silicon. In the laboratory, efficiencies on monocrystalline silicon above 25% will be reached.
[Figure: (a) monocrystalline and (b) multicrystalline silicon solar cells manufactured by Solartec.]

Monocrystalline silicon is produced from high-purity (99.999%) raw materials. Current production wafers are typically pseudo-square, about 156.75 mm on a side and 180 μm thick, with rounded corners, and finished monocrystalline panels, often referred to as first-generation solar panels, are dark black in color. Polycrystalline silicon, by contrast, is melted and formed in a mold before being cut into wafers, while amorphous silicon is a non-crystalline form obtained from silicon vapour that is quickly cooled. Crystalline silicon cell production has exploded in the last few years, reaching a new record value of more than 20 GWp in 2010, and the cost of a crystalline silicon PV module is clearly dominated by material costs, especially the cost of the wafers. An early estimate gave a module cost of 1.40 ECU/Watt (Palz et al.) and electricity costs of 0.25 ECU/kWh in the Federal Republic of Germany.

A typical industrial process sequence comprises saw-damage removal (an isotropic etch; etching is a general term that describes removal of material), texturing of the wafer surface with an alkali-based etching solution (in the cited experiment followed by metal-assisted etching), transfer into a tubular furnace for phosphorus diffusion in a POCl3 ambient to form the n+ emitter, followed by drive-in in an oxidizing ambient, silicon nitride ARC deposition, and screen-printed front (Ag) and back (Al) metallization. Gettering of metallic impurities such as Cu, Ni and Fe has received attention because of their impact on device performance. Diffusion in a POCl3 ambient can generate toxic P2O5 and Cl2 gaseous effluents, and dopant gases such as PH3 are hazardous if inhaled; inhalation hazards are controlled with properly designed ventilation systems at the process stations. Most of these cell technologies require high-temperature diffusion steps, whereas the heterojunction cell process has no such steps and benefits from low-temperature processing; there, the TCO and metal layers of the electrode stack are deposited via physical vapor deposition (PVD).

Boron-doped Cz substrates can show unstable performance. The PERT (passivated emitter, rear totally-diffused) structure appears to improve the surface passivation quality of MCZ and some FZ substrates, and has been used to reach the highest efficiency reported for a MCZ silicon solar cell; the PERL structure instead combines oxide and/or nitride passivation with a local BSF. Solar cells generally have a temperature coefficient of about 0.5%/°C for a cell operated at 25 °C [44], and the effect of operating temperatures in the range 0-70 °C on the J-V performance of DSSCs has been studied. For the indium-related IPV effect, indium is introduced in high concentration, and the cross section for electron photoemission from the indium level, a crucial parameter for modelling indium's IPV effect, is determined; samples were irradiated with Co60 gamma rays, and good agreement between theory and experiment is obtained for the photoconductive properties. Modules are certified at the Photovoltaic and Wind Power Systems Quality Test Center at the Chinese Academy of Sciences (CAS).
Anatoly Nikolayevich Drachyov (born 20 May 1954, Moscow, USSR) is a Soviet handball player and Russian coach. Master of Sport of Russia, International Class. Honored Coach of Russia.

Career: A product of the SDYuShOR No. 1 youth sports school (Moscow), he played for CSKA, with which he became USSR champion five times. From 1995 to 2005 he coached the reserve side of Chekhovskie Medvedi, UOR-Chekhovskie Medvedi. He served as a coach of the Russian national team, in which capacity he led the team to silver medals at the World Championship (1999), silver at the European Championship (2000), gold at the Olympic Games in Sydney (2000) and bronze at the Olympic Games in Athens (2004). From 2004 to 2005 he was head coach of the Russian national team, with which he finished 8th at the 2005 World Championship. He later headed the Belarusian club BGK im. Meshkova (Meshkov Brest).

Achievements: USSR champion, 1976-1980; champion of the Spartakiad of the Peoples of the USSR (1971).
## Introduction

On March 19, 2003, President George W. Bush announced that the United States was invading Iraq. This followed months of protracted arguments to the American people and to the world that Iraq had a chemical and biological weapons program that was in violation of its 1991 cease-fire agreement with the United States. Iraq had committed several other blatant violations of the cease-fire agreement, including firing on U.S. airmen. Nonetheless, the Bush administration had set as the centerpiece of its argument for invasion the existence of a thriving chemical and biological weapons program that presented a threat to the region.

Perhaps the most memorable of these arguments was given by the U.S. Secretary of State Colin Powell to the U.N. Security Council on February 5, 2003. Here he presented satellite photos of supposed mobile chemical weapons factories and bunkers for storage of chemical weapons, and he presented other intelligence that appeared to provide solid evidence that Iraq was building the capability to threaten stability in the Middle East. At one point he played tapes of Iraqi military communications in which the order is given to "remove the expression 'nerve agent' wherever it comes up in wireless communications" just before a U.N. inspection team arrived. The U.S. House of Representatives, several of the leading nations in the world, and the United Nations eventually took the intelligence argument as sufficient to warrant military action.

When the United States invaded, however, no such weapons were ever found. Moreover, the United States found no evidence to suggest that any such weapons had ever been there. Soon after it became clear that no chemical or biological weapons would be found, hundreds of blogs and much political commentary claimed, in fact, that the intelligence before the war conclusively showed that they did not exist. But if it were really so easy to see through the case that was made at the time, how could so many have been so blind? At one point before invasion the director of the Central Intelligence Agency had called the case a "slam-dunk." How could he have been so certain if the case was as weak as many claim it was?

Our circumstances often influence our judgment. Consider the purchase of a pool table. Those without such amenities in their homes might visit friends who possess one and find great pleasure in playing a few games of nine-ball. These tables can be a large investment, with new tables often costing between $3,000 and $10,000. After visiting friends and playing pool several times, you might convince yourself that the high price tag is worth it. Yet throughout basements in America, thousands of pool tables sit dormant.

After convincing themselves to spend such an amount on a table, many play several times within the first few months and then grow bored by the game. They still continue to pull out the cue sticks when friends are visiting who may be excited to play. But for the most part, the table just takes up a large amount of space and gathers dust. It is difficult to understand why someone would expend such money for so little use. Nonetheless, it appears to be common.

We have a difficult time determining how we will feel or think in other circumstances. This can lead us to make notoriously bad decisions, albeit with conviction. We purchase items we believe we will want in the future, only to abandon the items as useless at a later date. We might also claim that we should have known better. In this chapter we consider projection bias and hindsight bias. Projection bias deals with predicting how we will feel at some future date. Hindsight bias deals with remembering the information that was available for our judgment at some previous date. In both cases we discuss the evidence for such biases and how they may be modeled.

These biases create time-inconsistent preferences. That is, what we believe we will want at some other time disagrees with what we actually want at that time. We disagree with ourselves. The evidence for such disagreements is convincing. Moreover, we tend to display such disagreements about even the most deliberated and weighty issues, including decisions to go to college, get married, or even to go to war. Within the rational decision framework pervasive in economics, it is difficult to reconcile such systematic regret. Psychologists have shed much light on these internal conflicts and how they can occur. Behavioral economic work sheds further light on the potential impacts of such behavior and potentially how to avoid such impacts.
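Projection bias is often formalized (following Loewenstein, O'Donoghue and Rabin) by assuming that predicted utility lies between true utility in the future state and utility in the current state. A minimal sketch, where the weight `alpha` and the pool-table utility numbers are purely illustrative:

```python
# Simple projection-bias model: the predicted utility of consumption in a
# future state is a convex combination of true future-state utility and
# current-state utility.
def predicted_utility(u_future, u_current, alpha):
    """alpha = 0: perfect prediction; alpha = 1: full projection of the
    current state onto the future."""
    return (1.0 - alpha) * u_future + alpha * u_current

# A pool-table buyer who enjoys the game now (utility 10) but will tire
# of it (true future utility 2) overestimates future enjoyment:
print(predicted_utility(2.0, 10.0, 0.5))  # -> 6.0
```

With any positive `alpha`, the buyer's forecast exceeds the true future utility whenever the current state is more enjoyable, which is exactly the pattern of regretted purchases described above.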
'''
Perform supervised learning using the MPA-JHU results. Uses DR8.
'''

# Setup non-interactive plotting
import matplotlib
matplotlib.use('Agg')

import numpy as np
import matplotlib.pyplot as p
from pandas import DataFrame

# Use seaborn for pretty plots
import seaborn

from sklearn.cross_validation import train_test_split, ShuffleSplit
from sklearn.grid_search import GridSearchCV
from sklearn.metrics import classification_report
from sklearn.svm import SVC, NuSVC

from plot_learning_curve import plot_learning_curve

from astropy.io import fits

save_models = True
learn = True
view = True

# Load in the classifications
extra = fits.open('galSpecExtra-dr8.fits')
info = fits.open('galSpecInfo-dr8.fits')  # has z, z_err and sn_median
line_pars = fits.open('galSpecLine-dr8.fits')

# Samples
bpt = extra[1].data["BPTCLASS"]
z = info[1].data['Z']
z_err = info[1].data['Z_ERR']
sn = info[1].data['SN_MEDIAN']

# For the line parameters, apply the same discarding process that was used
# for the DR10 spec fits.
amps = np.vstack([line_pars[1].data["H_ALPHA_FLUX"],
                  line_pars[1].data["H_BETA_FLUX"],
                  line_pars[1].data["H_GAMMA_FLUX"],
                  line_pars[1].data["H_DELTA_FLUX"],
                  line_pars[1].data["OIII_4959_FLUX"],
                  line_pars[1].data["OIII_5007_FLUX"],
                  line_pars[1].data["NII_6584_FLUX"]]).T

widths = np.vstack([line_pars[1].data["H_ALPHA_EQW"],
                    line_pars[1].data["H_BETA_EQW"],
                    line_pars[1].data["H_GAMMA_EQW"],
                    line_pars[1].data["H_DELTA_EQW"],
                    line_pars[1].data["OIII_4959_EQW"],
                    line_pars[1].data["OIII_5007_EQW"],
                    line_pars[1].data["NII_6584_EQW"]]).T

amps_err = np.vstack([line_pars[1].data["H_ALPHA_FLUX_ERR"],
                      line_pars[1].data["H_BETA_FLUX_ERR"],
                      line_pars[1].data["H_GAMMA_FLUX_ERR"],
                      line_pars[1].data["H_DELTA_FLUX_ERR"],
                      line_pars[1].data["OIII_4959_FLUX_ERR"],
                      line_pars[1].data["OIII_5007_FLUX_ERR"],
                      line_pars[1].data["NII_6584_FLUX_ERR"]]).T

widths_err = np.vstack([line_pars[1].data["H_ALPHA_EQW_ERR"],
                        line_pars[1].data["H_BETA_EQW_ERR"],
                        line_pars[1].data["H_GAMMA_EQW_ERR"],
                        line_pars[1].data["H_DELTA_EQW_ERR"],
                        line_pars[1].data["OIII_4959_EQW_ERR"],
                        line_pars[1].data["OIII_5007_EQW_ERR"],
                        line_pars[1].data["NII_6584_EQW_ERR"]]).T

# Close data files
extra.close()
info.close()
line_pars.close()

print("Loaded data. Starting restriction...")

# Apply sample restrictions
# 0.02 < z < 0.26
keep = np.logical_and(z > 0.02, z < 0.26)
# z_err < 0.05
keep = np.logical_and(keep, z_err < 0.05)
# sn > 5
keep = np.logical_and(keep, sn > 5)

amps = amps[keep]
amps_err = amps_err[keep]
widths = widths[keep]
widths_err = widths_err[keep]
bpt = bpt[keep]

# Loop through the lines, zeroing out measurements with |S/N| <= 3,
# non-finite ratios, or non-positive errors. The arrays are (N, 7),
# so each line is a column.
for i in range(7):
    with np.errstate(divide='ignore', invalid='ignore'):
        amp_ratio = amps[:, i] / amps_err[:, i]
        width_ratio = widths[:, i] / widths_err[:, i]
    bad = ~np.isfinite(amp_ratio) | (np.abs(amp_ratio) <= 3) | \
        ~np.isfinite(width_ratio) | (np.abs(width_ratio) <= 3) | \
        (amps_err[:, i] <= 0.0) | (widths_err[:, i] <= 0.0)
    amps[bad, i] = 0.0
    widths[bad, i] = 0.0

# Define the sets to be used
X = np.hstack([amps, widths])
y = bpt

# Finally, standardize the X data
X = (X - np.mean(X, axis=0)) / np.std(X, axis=0)

# Keep a copy of the entire data set
X_all = X.copy()
y_all = y.copy()

# Unfortunately the method cannot handle the size of the dataset.
# Test on a randomly selected sample using half of the data.
indices = np.arange(X.shape[0])
np.random.shuffle(indices)
X = X[indices[:len(indices) // 2]]
y = y[indices[:len(indices) // 2]]

print("Made sample set. Starting grid search.")

# Use grid search to find optimal hyperparameters
X_train, X_test, y_train, y_test = \
    train_test_split(X, y, test_size=0.5, random_state=500)

# Set the parameters by cross-validation
tuned_parameters = [{'gamma': [0.1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6],
                     'nu': [0.1, 1e-2, 1e-3, 1e-4, 1e-5, 1e-6]}]
#                    'C': [0.1, 0.5, 1, 5, 10, 50, 100, 250, 500]}]

# Estimator
estimator = NuSVC(kernel='rbf', cache_size=2000)  # , class_weight='auto')

# Add in a cross-validation method on top of the grid search
cv = ShuffleSplit(X_train.shape[0], n_iter=3, test_size=0.8,
                  random_state=500)

# Try different scoring methods
scores = ['accuracy', 'precision', 'recall']
score = scores[0]

# Do the grid search
print("# Tuning hyper-parameters for %s" % score)

clf = GridSearchCV(estimator, tuned_parameters, cv=cv, scoring=score,
                   n_jobs=4, verbose=2)
clf.fit(X_train, y_train)

print("Best parameters set found on development set:")
print(clf.best_estimator_)

print("Grid scores on development set:")
for params, mean_score, cv_scores in clf.grid_scores_:
    print("%0.3f (+/-%0.03f) for %r"
          % (mean_score, cv_scores.std() / 2, params))

grid_scores = DataFrame(clf.grid_scores_)
grid_scores.to_csv("grid_scores_nusvc.csv")

print("Detailed classification report:")
y_true, y_pred = y_test, clf.predict(X_test)
print(classification_report(y_true, y_pred))

# Make a model with the best parameters
estimator = NuSVC(kernel='rbf', gamma=clf.best_estimator_.gamma,
                  nu=clf.best_estimator_.nu)
#                 C=clf.best_estimator_.C)

# Plot the learning curve to find a good split
title = 'NuSVC'
plot_learning_curve(estimator, title, X_train, y_train, cv=cv, n_jobs=4)
p.savefig("supervised_learning_nusvc.pdf")

# Find a good number of test samples before moving on
# raw_input("Continue??")

# With a good number of test samples found, fit the model and predict the
# whole set
estimator.fit(X_train, y_train)
y_pred = estimator.predict(X_all)

DataFrame(y_pred).to_csv("supervised_prediction_labels_nusvc.csv")

print(classification_report(y_all, y_pred))
print("Best params are: " + str(clf.best_params_))

# Hold here
raw_input("Continue??")

# Now take the model found, and find the outliers
outlier_percent = 0.01  # FIGURE OUT WHAT TO DO HERE!!

# Use dim reduction to look at the space.
import triangle
from dim_red_vis import dim_red

# Use PCA to look at a projection of the set.
subspace = dim_red(X, verbose=True)

# Do it again with a higher dimension, then project that
subspace = dim_red(X, n_comp=6, verbose=False)

fig = triangle.corner(subspace, labels=['c' + str(i) for i in range(1, 7)])
p.show()
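The script's per-line quality cut zeroes out a line measurement when its flux or equivalent width is weak or unmeasured. A numpy-only toy check of that logic (the array values are made up for illustration):

```python
import numpy as np

# Toy check of the per-line quality cut: a measurement is zeroed when
# |value/error| <= 3, the ratio is non-finite, or the error is non-positive.
amps = np.array([10.0, 2.0, 5.0, 7.0])
amps_err = np.array([1.0, 1.0, 0.0, 2.0])

with np.errstate(divide='ignore', invalid='ignore'):
    ratio = amps / amps_err
bad = ~np.isfinite(ratio) | (np.abs(ratio) <= 3) | (amps_err <= 0.0)
amps[bad] = 0.0

print(amps)  # -> [10.  0.  0.  7.]
```

The second entry fails the S/N > 3 cut, the third has a zero error (infinite ratio), and only measurements that pass all three conditions survive.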
Mary successfully combats fibromyalgia and chronic pain with marijuana. After serving thousands of patients at HelloMD we have seen a pattern emerge: cannabis helps real people with real conditions. People from all walks of life are turning to cannabis in lieu of opioids, other medications or alternative therapies because they are feeling the results and relief, with little to no side effects. We thought it would be useful to talk to patients one on one about their medical conditions and ask them to tell their stories so we can share them with you. What began as a simple survey has evolved into a revolving door of patients ecstatically conveying the message that cannabis is working for them, and they want to share their success stories with others. We hope that you enjoy what will be an ongoing series of patient case studies that includes a variety of cannabis experiences. We also know there is no one-size-fits-all solution with cannabis: everyone reacts differently, and each experience and success may be unique to that person. Our case studies will always follow the same format: a history of the patient's condition, the previous treatment, the experience with cannabis, and specifically how the patient uses cannabis for their condition in order to find relief.

In 2001 I was diagnosed with fibromyalgia. Prior to that I had been very healthy. I think it was a period of extreme stress that maybe brought it on. When it started I went from being very mobile to suddenly very debilitated. It started on one side of my body and then migrated to the other side until I could barely move at all. The pain was unbearable.

Prior to trying cannabis, it took a long time for doctors to accurately diagnose me. For 8 years I was treated with pharmaceuticals, and many were to deal with the side effects of the initial drugs. At one point I counted that I had been prescribed over 43 drugs. At a certain point, I became a drug zombie. I was unable to drive my kids to school or to enjoy life.

A turning point came for me when I blacked out and woke up in the ER. After my blackout I started to think about how I might improve my well-being. First I cut out diet soda, then I started to eat organic. Then it occurred to me that cannabis might be a more natural solution and worth a try. The pharmaceuticals were not helping and in fact were making my life worse. I worked with my pain management doctor for 3.5 months to wean myself off of all prescriptions and use marijuana instead. I always say to people to work with a doctor when doing this!

After starting cannabis, I could manage my pain, I was much more mellow, and my family wanted to spend time with me. At night I smoke Blueberry Kush to help me sleep. During the day I will smoke Amnesia Mystery, which helps with movement, especially on a bad day. I use the G Vape, Snoop Dogg's vape pen with a dry chamber, and I fill it with kief as that is very cost effective. My favorite edible is Kiva's low-dose 5 mg Blueberry Terra Bites. When I am in pain I will take 10 mg every 3-4 hours. I make my own topicals with alcohol: once I infuse the alcohol with cannabis, I pour it onto wet wipes. Rubbing a wet wipe on an area with localized pain provides immediate relief for me.
1 oz First Majestic Assorted Silver Rounds

These 1 ounce silver rounds are unique in that they are sold by the same company that mined and refined the silver. All of First Majestic's silver is mined in Mexico, hence the Mayan themes on their rounds. The obverse features a unique and detailed Mayan design, along with the fineness and weight of the round. The reverse showcases the First Majestic logo encircled by "First Majestic Silver Corp."

Obverse: A Mayan design and the words "One Troy Ounce .999 Silver"
Reverse: The First Majestic logo, the year and the words "First Majestic Silver Corp."
Etica Sgr S.p.A. is an asset-management company specializing in sustainable and responsible funds, with total assets under management of approximately €7.38 billion.

History
Etica Sgr is the asset-management company of the Banca Etica Group. It was founded in 2000 on the initiative of Banca Etica, in collaboration with Banca Popolare di Milano. Banco BPM, BPER Banca, Banca Popolare di Sondrio and Cassa Centrale Banca later joined as shareholders. In Italy it is among the leading operators in sustainable asset management by assets under management.

Activities
Etica Sgr aims to represent the values of ethical finance in the financial markets and to raise public awareness of socially responsible investing and corporate social responsibility. The company attends the shareholders' meetings of companies in which it holds shares, such as Indesit or Telecom Italia, in order to discuss their employment, ethical or environmental choices. It selects its investments on the basis of roughly one hundred parameters that assess the social and environmental characteristics of the securities in which its funds invest. The company adheres to the United Nations Principles for Responsible Investment (PRI) and, since 2015, has committed to action on climate change by signing the Montréal Carbon Pledge.

Awards
Over the years the company and its products have received various awards, including:
2003: Sodalitas Social Award, from Fondazione Sodalitas;
2004: Best ethical funds of the year, from Adiconsum;
2008: Best Italian ethical funds, from the "Osservatorio Finanza Etica";
2020: International Investor Award, from International Investor Magazine;
2022: "Avant-Gardist" manager in ESG according to the Responsible Investment Brand Index (RIBI).

Shareholders
The share capital is held as follows:
Banca Etica: 51.47%
Banco BPM: 19.44%
BPER Banca: 10.00%
Banca Popolare di Sondrio: 9.87%
Cassa Centrale Banca - Credito cooperativo italiano: 9.22%
Global Solar PV Installations to Grow 20% in 2022 Even as Supply Chain Disruptions Lead to Rising Manufacturing Costs, IHS Markit Says

Annual PV installations will surpass the 200 GWdc mark in 2022 thanks to distributed generation

Global solar PV installations will grow by over 20% in 2022 and surpass the 200 GWdc barrier for the first time, at a total investment of at least $170 billion, despite continually rising production costs, according to a new report by the Clean Energy Technology service at IHS Markit. Until recently, declining PV system costs, which fell by more than 50% on global average from 2013 to 2020, have been a crucial factor in the exponential growth of the industry, with global installed capacity increasing 275% over the same period. In 2021, however, PV system costs increased by 4% year-on-year, bringing new challenges to the burgeoning market. Despite the higher-than-anticipated cost environment, installations in key markets such as China, India, the United States and Europe are driving expansion again this year, with the highest growth coming from the distributed generation segment, led by China. "The utility segment has been the most impacted in 2021, with multiple projects delayed or cancelled. By contrast, the strong growth of the distributed generation segment, i.e., the residential, commercial and industrial (C&I) sector, has been one of the success stories of solar PV in 2021, boosted by the fuel crisis and surging electricity prices, particularly in markets across Europe," said Josefin Berg, research manager, clean energy technology at IHS Markit. IHS Markit expects solar PV installations to experience double-digit growth in 2021.
Continued growth through 2022, when installations are expected to surpass the 200 GWdc barrier, would mark the second year in a row of double-digit growth in global installations in a high-price environment.

Rising costs set to continue next year until extra capacity in 2023 brings relief

The intense logistics and supply chain disruption over the past year has pushed the cost of solar PV materials to new highs. In addition, the announcement of new power restrictions in mainland China in the latter half of 2021 has severely restricted the output of manufacturers in certain provinces, impacting production of key materials such as metal silicon, polysilicon and solar glass, further increasing prices. From October 2020 to October 2021, the price of polysilicon rose by over 200%, alongside major price increases in other module materials such as solar glass and copper, forcing module manufacturers to raise their prices. IHS Markit estimates that, since August 2021, average module production costs have increased by more than 15%, and module prices are now back to 2019 levels. Other solar PV components such as inverters and trackers are also being impacted by the shortage of some materials (including semiconductor components) and the high cost of raw materials such as steel. IHS Markit expects that the current high costs of freight and subsequent shipping delays will continue well into 2022, particularly impacting the economics of international projects. "There is significant appetite across global markets to invest in and develop solar installations, but the supply chain is just not ready to meet this level of demand; it needs time to adjust. We have seen this most clearly in the polysilicon market, which will continue to be a bottleneck for solar PV growth into 2022, until planned new capacity is ramped up from 2023 onwards," said Edurne Zoco, executive director, clean energy technology at IHS Markit.
Continued supply chain tightness is expected to keep module prices high until 2023. Costs will resume their downward trend from 2023 once polysilicon capacity catches up with the other nodes of the module supply chain and power restrictions in China ease for other key module materials such as polymers and solar glass. Increased module efficiencies projected on module technology roadmaps, i.e., passivated contact cells (TOPCon) or heterojunction (HJT), will also contribute to lower production costs ($/W) from 2023 onwards.

Policy uncertainty remains a factor

The wild card for the 2022 forecast is policy uncertainty in the three major solar PV markets, China, the United States and India, which should be resolved by Q1 2022. These announcements will have major consequences for manufacturing capacity decisions and the pace of market installations. For instance, in China the length and intensity of the current power restrictions will determine solar PV utilization rates and the volume available to the domestic and international markets. In the United States, policy decisions and macroeconomic conditions could potentially undermine 20% of the utility-scale forecast next year, due to a combination of high costs, a potential extension of the ITC scheme and increasing hurdles to importing modules from international markets. "Despite the two-year impasse in solar PV cost decline, solar continues to be one of the energy technologies with the lowest capex and is the fastest energy source to install. Over 1000 GWdc of new solar installations are expected through 2025, driven by solar technology's competitiveness, versatility and installation speed, which will be instrumental in contributing to the decarbonization of the power system this decade," said Edurne Zoco, executive director, clean energy technology at IHS Markit.
The IHS Markit Global Clean Energy Technology service provides in-depth coverage of the supply chain economics and outlooks for batteries and energy storage, hydrogen and renewable gas, solar and wind. New areas of research under development include carbon capture and storage, geothermal, and heating and cooling. For more information visit https://ihsmarkit.com/products/clean-energy-technology.html