using System;
using System.Collections.Generic;
using ZyGames.Framework.Model;

namespace ZyGames.Framework.Net
{
    /// <summary>
    /// Callback used to assign a field value to an entity property.
    /// </summary>
    /// <param name="entity">The entity being populated.</param>
    /// <param name="column">The schema column being set.</param>
    /// <param name="fieldValue">The raw field value.</param>
    public delegate void EntityPropertySetFunc<T>(T entity, SchemaColumn column, object fieldValue) where T : new();

    /// <summary>
    /// Data receiving and processing interface.
    /// </summary>
    public interface IDataReceiver : IDisposable
    {
        /// <summary>
        /// Attempts to receive a list of entities.
        /// </summary>
        /// <typeparam name="T">The entity type.</typeparam>
        /// <param name="dataList">The received entities, if any.</param>
        /// <returns>True if data was received.</returns>
        bool TryReceive<T>(out List<T> dataList) where T : ISqlEntity, new();
    }
}
{ "redpajama_set_name": "RedPajamaGithub" }
Lew Alexandrowitsch Besymenski (30 December 1920, Kazan – 26 June 2007, Moscow) was a Russian author, historian, and journalist, and a professor of military history at the Moscow Academy of Military Sciences.

Life

He graduated from the faculty of philosophy of Moscow State University and served in the Second World War (cf. Great Patriotic War) as an interpreter and reconnaissance officer for Marshals Zhukov and Rokossovsky. Besides taking part in several battles, he interpreted, among other occasions, at the 1943 interrogation of the German commander-in-chief Friedrich Paulus at the end of the Battle of Stalingrad. After the war he investigated the Führerbunker as part of a secret operation of the Soviet leadership. He subsequently worked as a journalist in Bonn, among other places, and as a historian. He also became known as the author of numerous books on the Second World War and on German-Soviet relations during that period. From 1985 he was a member of the advisory board of the Center for Studies of German History in Moscow; he took up his professorship at the Moscow Academy of Military Sciences in 1999. Most recently he worked for the Moscow magazine Nowoje Wremja.

For him, the Russian resistance and war effort in the Second World War had not been in vain: "Had Operation Barbarossa succeeded, my country would no longer exist. I myself would have been killed at least three times over: as a Komsomol member, as the son of Old Bolsheviks, and finally as a Jew. The paths of German-Soviet relations are stony, tangled, treacherous. Even with dead ends."

Publications

Die letzten Notizen von Martin Bormann. Ein Dokument und sein Verfasser. Deutsche Verlags-Anstalt, Stuttgart 1974, ISBN 3-421-01660-7.
Sonderakte Barbarossa. Dokumente, Darstellung, Deutung. Deutsche Verlags-Anstalt, Stuttgart 1968. Paperback edition: Rowohlt, Reinbek 1973, ISBN 3-499-16838-3.
Die Schlacht um Moskau 1941. Pahl-Rugenstein, Köln 1981, ISBN 3-7609-0570-6.
Der Tod des Adolf Hitler. 2nd edition. Herbig, München/Berlin 1982, ISBN 3-7766-1018-2.
Ed. with Gerd R. Ueberschär: Der deutsche Angriff auf die Sowjetunion 1941. Die Kontroverse um die Präventivkriegsthese. Primus, Darmstadt 1998. New edition 2011, ISBN 978-3-89678-776-7.
Stalin und Hitler – Das Pokerspiel der Diktatoren. Aufbau Verlag, Berlin 2002, ISBN 3-351-02539-4.
With Ulrich Völklein: Die Wahrheit über Raoul Wallenberg. Steidl, Göttingen 2000, ISBN 3-88243-712-X.

External links

Russland.ru – Writer Lew Besymenski has died
{ "redpajama_set_name": "RedPajamaWikipedia" }
"Language is the central point of difference between the human species and all others. Language lies at the root of that transformation of the environment that we call civilization... Language is an instrument of collective thought... Hence, language is truly the expression of a kind of super intelligence." The foundations that make a difference, and the trust to give you the results you want. Stunning pictures. A full continuum. Each unit contains everything you need to give the presentations and to allow your children to complete their work. Tested and proven in the classroom, these materials incorporate current brain research and were prepared by a team of experienced Montessorians.
{ "redpajama_set_name": "RedPajamaC4" }
@interface PodsDummy_Pods_RhythmBox_Tests : NSObject
@end
@implementation PodsDummy_Pods_RhythmBox_Tests
@end
{ "redpajama_set_name": "RedPajamaGithub" }
\section{Introduction} Given an embedded projective variety $X\subset \mathbb{P}^n$, its $k$th Fano scheme $\mathbf{F}_k(X)$ is the fine moduli space parametrizing $k$-dimensional projective linear subspaces of $X$. Such Fano schemes have been extensively studied when $X$ is a generic (or at least smooth) hypersurface, see e.g. \cite{altman:77a,barth:81a,langer:97a,harris:98a}, or more generally when $X$ is a complete intersection \cite{debarre:98a}. Only several isolated cases of Fano schemes have been studied for varieties $X$ which are not generic (or at least smooth) complete intersections, see for example \cite{ilten:15a,larsen:14a}. In this article, we consider the Fano scheme $\mathbf{F}_k(X_\mathcal{A})$ when $X_\mathcal{A}$ is the projective toric variety associated to a finite set of lattice points $\mathcal{A}$, see \S\ref{sec:toric}. Our main result is a complete description of the irreducible components of $\mathbf{F}_k(X_\mathcal{A})$ in their reduced structure, which we now summarize. A \emph{face} of $\mathcal{A}$ is the intersection of $\mathcal{A}$ with a face of its convex hull. A \emph{Cayley structure} for such a face $\tau$ is an affine-linear surjective map $\pi$ from $\tau$ to the set $\Delta_l$ of standard basis vectors in $\mathbb{Z}^{l+1}$. In \S \ref{sec:cayley} we define a natural partial order on the set of Cayley structures for faces of $\mathcal{A}$. \begin{mainthm}[Theorem \ref{thm:main}] There is a bijection between irreducible components of $\mathbf{F}_k(X_\mathcal{A})$ and maximal Cayley structures $\pi:\tau\to\Delta_l$ where $\tau$ is a face of $\mathcal{A}$ and $l\geq k$. \end{mainthm} We give an explicit description of the component $Z_{\pi,k}$ of $\mathbf{F}_k(X_\mathcal{A})$ corresponding to a Cayley structure $\pi$ in \S\ref{sec:comps}. These components are locally toric, and if a general $k$-plane parametrized by $Z_{\pi,k}$ does not extend to a $k+1$-plane in $X_\mathcal{A}$, the component is (globally) a toric variety. 
We say that such components are \emph{components of maximal $k$-planes}; they correspond to maximal Cayley structures $\pi:\tau\to \Delta_l$ for $l=k$. In Theorem \ref{thm:polytope}, we describe the set of lattice points $\mathcal{A}_{\pi}$ such that a component $Z_{\pi,k}$ of maximal $k$-planes is given by the projective toric variety $X_{\mathcal{A}_\pi}$ with respect to the Pl\"ucker embedding of $\mathbf{F}_k(X_\mathcal{A})$. We also give a simple formula for the dimension of all components $Z_{\pi,k}$ (Proposition \ref{prop:dim}). Our explicit understanding of the irreducible components of $\mathbf{F}_k(X_\mathcal{A})$ allows us to give a combinatorial characterization of the connected components of $\mathbf{F}_k(X_\mathcal{A})$ (Theorem \ref{thm:connected}). Furthermore, we show that if $X_\mathcal{A}$ is non-singular in dimension $k$, then every component of $\mathbf{F}_k(X_\mathcal{A})$ is smooth (in its reduced structure), see Corollary \ref{cor:regular}. Finally, we are able to explicitly describe the intersection behaviour of the components (Theorem \ref{thm:intersect}). In \cite{casagrande:08a}, Casagrande and Di Rocco show that a $\mathbb{Q}$-factorial normal projectively embedded toric variety $X_\mathcal{A}$ is covered by lines if and only if there is a Cayley structure $\pi:\mathcal{A}\to \Delta_1$. Ito generalizes this in \cite{ito:15a} to show that a polarized normal toric variety $(X,\mathcal{L})$ is covered by $k$-planes if and only if the set of lattice points $\mathcal{A}$ corresponding to eigensections of $\mathcal{L}$ admits a Cayley structure $\pi:\mathcal{A}\to \Delta_k$. As a corollary of our main theorem, we recover Ito's result in the case that $\mathcal{L}$ defines an embedding (i.e. is very ample). 
While Ito uses an ingenious degeneration argument to produce a Cayley structure, our argument constructs the Cayley structure directly by considering the columns of a $(k+1)\times\#\mathcal{A}$ matrix whose rowspan is a $k$-dimensional linear space of $X_\mathcal{A}$ intersecting the torus, see Remark \ref{rem:cover}. The key step in proving our main theorem (Theorem \ref{thm:main}) is to prove that for any linear subspace $L\subset X_\mathcal{A}$, there is a corresponding Cayley structure $\pi$ such that $[L]$ is contained in $Z_{\pi,k}$. This statement has been proven concurrently by Furukawa and Ito in independent work using a different argument, see \cite[Theorem 3.2]{furukawa:16a}. They make use of this result to describe the dual defect of a toric variety. As a first step towards understanding the (potentially) non-reduced structure of $\mathbf{F}_k(X_\mathcal{A})$, we locally give a combinatorial description of the scheme structure of $\mathbf{F}_{k}(X_\mathcal{A})$ when $k=\dim X_\mathcal{A}-1$ and $X_\mathcal{A}$ is smooth in codimension one, see \S\ref{sec:mult}. In particular, we give formulas for the multiplicities of isolated hyperplanes in $X_\mathcal{A}$. This generalizes the results of the first author concerning the Fano scheme of lines on a toric surface \cite{ilten:14a}. This codimension-one case is particularly amenable to study since in this situation, each toric fixed point of $\mathbf{F}_{k}(X_\mathcal{A})$ is contained in a single irreducible component. Our study of Fano schemes is partially motivated by a number of recent applications. Kiraly and Larsen use Fano schemes to solve an identifiability problem in stationary subspace analysis \cite{larsen:14a}. 
Results on the Fano schemes for determinantal and permanental hypersurfaces have found applications in geometric complexity theory, leading to new components of the boundary of the orbit closure of the determinant \cite[\S 5]{landsberg} and to a quadratic lower bound on the determinantal complexity of the permanent \cite{landsberg:13a}. Furthermore, recent work by the first author and Teitler used results on certain Fano schemes to determine the product or Chow rank of the $3\times 3$ permanent and determinant polynomials \cite{ilten:16a}. In fact, the key calculation of \cite[Proposition 3.2]{ilten:16a} is closely related to the toric case, see Remark \ref{rem:chow}. We hope that our results here may lead to further similar applications.

\subsection*{Acknowledgements} The first author was partially supported by an NSERC discovery grant. The second author was partially supported by an SFU-VPR undergraduate student research award. We thank Dustin Cartwright, Christian Haase, and Atsushi Ito for helpful conversations.

\section{Background and Notation}\label{sec:background}
\subsection{Toric Varieties}\label{sec:toric}
For details on toric varieties, see e.g.~\cite{CLS}. Throughout this article, we work over an algebraically closed field $\mathbb{K}$. Let $M$ be a lattice, and $\mathcal{A}$ a finite set of elements of $M$. By $M_\mathcal{A}$ we denote the lattice generated by $u-v$ for $u,v\in \mathcal{A}$, and by $S_\mathcal{A}$ we denote the semigroup in $M\times\mathbb{Z}$ generated by $(u,1)$ for $u\in\mathcal{A}$. The projective toric variety associated to $\mathcal{A}$ is \[ X_\mathcal{A}=\proj \mathbb{K}[S_\mathcal{A}]. \] Here, $\mathbb{K}[S_\mathcal{A}]$ is the semigroup algebra associated to $S_\mathcal{A}$, and the $\mathbb{Z}$-grading of this algebra is given by projection onto the final $\mathbb{Z}$-factor. 
More concretely, $X_\mathcal{A}$ is the subvariety of the projective space $\mathbb{P}^{\#\mathcal{A}-1}$ with coordinates $x_u$ for $u\in \mathcal{A}$ cut out by binomial equations of the form \[ \prod x_u^{a_u}=\prod x_u^{b_u} \] for $a_u,b_u\in\mathbb{Z}_{\geq 0}$ satisfying the affine relation \[ \sum a_uu=\sum b_uu,\qquad \sum a_u=\sum b_u. \] We denote by $\chi^w$ the element of $\mathbb{K}[S_\mathcal{A}]$ corresponding to $w\in S_\mathcal{A}$. The \emph{dimension} of $\mathcal{A}$ is the dimension of the smallest affine subspace of $M\otimes \mathbb{R}$ containing it, which is equivalently the rank of $M_\mathcal{A}$ and the dimension of $X_\mathcal{A}$. The variety $X_\mathcal{A}$ comes equipped with a faithful action by the torus $T=\spec \mathbb{K}[M_\mathcal{A}]$ induced by the $M\times \mathbb{Z}$-grading of $\mathbb{K}[S_\mathcal{A}]$. A \emph{face} $\tau$ of $\mathcal{A}$ is a set of the form $\mathcal{A}\cap F$, where $F$ is a face of the convex hull of $\mathcal{A}$. If $\tau$ is a face of $\mathcal{A}$, we write $\tau\prec \mathcal{A}$. For $\tau\prec \mathcal{A}$, there is a natural closed embedding \[ X_\tau\hookrightarrow X_\mathcal{A} \] given by setting $\chi^{(u,1)}=0$ for $u\notin \tau$. This induces a bijection between faces of $\mathcal{A}$ and orbits of $X_\mathcal{A}$: the orbit corresponding to a face $\tau\prec \mathcal{A}$ is given by the orbit of a general point of $X_\tau\subset X_\mathcal{A}$ \cite[Corollary 3.A.6]{CLS}. \begin{figure} \lppic \caption{Lattice points in Example \ref{ex:surface1}\label{fig:ex}} \end{figure} \begin{ex}\label{ex:surface1} Consider the set $\mathcal{A}$ whose elements in $\mathbb{Z}^2$ are given by the columns of \[ \left(\begin{array}{c c c c} 0& 0& 1& 1\\ 0& 1& 0& 2 \end{array}\right). \] This collection of lattice points is pictured in Figure \ref{fig:ex}. The corresponding projective variety is a non-normal hypersurface cut out by the single binomial $xy^2=zw^2$. 
$\mathcal{A}$ has four $1$-dimensional faces, all of which are empty simplices. \end{ex} \begin{ex}\label{ex:birkhoff1} Let $\mathcal{A}$ be the subset of $\mathbb{Z}^{3\times 3}$ consisting of the six $3\times 3$ permutation matrices. Then $\mathcal{A}$ is $4$-dimensional, and its convex hull is known as the Birkhoff polytope $B_3$. If we label the elements of $\mathcal{A}$ by $u_0,u_1,u_2,v_0,v_1,v_2$ with $u_i$ representing an even permutation and $v_j$ representing an odd one, the projective variety $X_{\mathcal{A}}$ is cut out by the single binomial \[ x_{u_0}x_{u_1}x_{u_2}=x_{v_0}x_{v_1}x_{v_2}. \] The set $\mathcal{A}$ has exactly nine $3$-dimensional faces, given by omitting exactly one of the $u_i$ and one of the $v_j$. \end{ex}

\subsection{Fano Schemes}\label{sec:fano}
Let $X\subset \mathbb{P}^n$ be any projective $\mathbb{K}$-scheme. For any natural number $k\in \mathbb{N}$, the \emph{$k$th Fano scheme} $\mathbf{F}_k(X)$ is the fine moduli space parametrizing $k$-planes of $\mathbb{P}^n$ contained in $X$, see e.g. \cite[\S IV.3]{eisenbud:00a} for a detailed description. The scheme $\mathbf{F}_k(X)$ is a subscheme of the Grassmannian $\mathbb{G}(k,n)$ parametrizing $k$-planes of $\mathbb{P}^n$. For all of this article except for \S\ref{sec:mult}, the reader may forget the scheme structure of $\mathbf{F}_k(X)$ and simply think of it as the subvariety of $\mathbb{G}(k,n)$ whose points correspond to $k$-planes of $\mathbb{P}^n$ contained in $X$. Given a $k$-plane $L\subset X$, we denote the corresponding point of $\mathbf{F}_k(X)$ by $[L]$. The Pl\"ucker embedding of $\mathbb{G}(k,n)$ induces an embedding of $\mathbf{F}_k(X)$ in $\mathbb{P}^{\binom{n+1}{k+1}-1}$. We will call the corresponding affine charts of $\mathbf{F}_k(X)$ the \emph{Pl\"ucker charts}. Concretely, these Pl\"ucker charts may be thought of as follows. Any $k$-plane of $\mathbb{P}^n$ may be represented non-uniquely as the rowspan of a $(k+1)\times(n+1)$ matrix $P$ of full rank. 
The $\binom{n+1}{k+1}$ Pl\"ucker charts are obtained by choosing some $(k+1)\times(k+1)$ square submatrix and imposing the condition that it be invertible. On this chart, any $k$-plane may be represented uniquely as the rowspan of a $(k+1)\times(n+1)$ matrix, where we impose the condition that the chosen square submatrix is just the identity matrix. Coordinate functions on this chart are given by the remaining entries of the matrix. If $G$ is an algebraic group acting on $\mathbb{P}^n$ which fixes $X$, then the Fano schemes $\mathbf{F}_k(X)$ inherit a $G$-action. In particular, the Fano scheme $\mathbf{F}_k(X_\mathcal{A})$ has a natural torus action. The torus fixed points of the Fano schemes $\mathbf{F}_k(X_\mathcal{A})$ are easy to describe. By an \emph{empty $k$-simplex} we mean $k+1$ lattice points whose affine span is $k$-dimensional. \begin{prop}\label{prop:fixed} The torus fixed points of $\mathbf{F}_k(X_\mathcal{A})$ are in bijection with the set of faces of $\mathcal{A}$ which are empty $k$-simplices. \end{prop} \begin{proof} Any fixed point of $\mathbf{F}_k(X_\mathcal{A})$ must be the closure of a $k$-dimensional $T$-orbit in $X_\mathcal{A}$; such orbit closures are in bijection with the set of $k$-dimensional faces of $\mathcal{A}$. Given a face $\tau\subset \mathcal{A}$, the corresponding orbit closure as a subvariety of $\mathbb{P}^n$ is simply $X_\tau$ sitting inside of a linear subspace of $\mathbb{P}^n$. But $X_\tau$ is a linear space if and only if there are no affine relations among the elements of $\tau$, that is, $\tau$ is an empty $k$-simplex. \end{proof} \noindent We denote by $L_\sigma$ the $k$-plane of $X_\mathcal{A}$ corresponding to an empty $k$-simplex $\sigma$.

\section{Cayley Structures and Main Result}\label{sec:cayley}
We denote by $\Delta_l\subset \mathbb{Z}^{l+1}$ the standard basis vectors $e_0,\ldots,e_{l}$. Fix a finite set of lattice points $\mathcal{A}\subset M$. \begin{defn} Let $\tau$ be a face of $\mathcal{A}$. 
A \emph{Cayley structure} for $\tau$ is a surjective map $\pi:\tau\to \Delta_l$ which preserves affine relations. \end{defn} \begin{rem} If $\mathcal{A}$ is the set of lattice points of a lattice polytope $P$, then the existence of a Cayley structure $\pi:\mathcal{A}\to\Delta_l$ means that $P$ can be written as a Cayley polytope of length $l+1$. \end{rem} \begin{figure} \cspic \caption{A Cayley Structure in Example \ref{ex:surface2}\label{fig:ex2}} \end{figure} \begin{ex}\label{ex:surface2} Considering the set $\mathcal{A}$ from Example \ref{ex:surface1}, the map $\mathcal{A}\to \Delta_1$ sending $\{(0,0),(0,1)\}$ to $e_0$ and $\{(1,0),(1,2)\}$ to $e_1$ is a Cayley structure, see Figure \ref{fig:ex2}. On the other hand, the map $\mathcal{A}\to \Delta_1$ sending $\{(0,0),(1,0)\}$ to $e_0$ and $\{(0,1),(1,2)\}$ to $e_1$ is not a Cayley structure, since it does not preserve the affine relation \[2\cdot (0,0)+(1,2)=(1,0)+2\cdot(0,1).\] \end{ex} A Cayley structure on a face $\tau$ of $\mathcal{A}$ determines a toric subvariety $Z_\pi$ of $\mathbf{F}_l(X_\mathcal{A})$ as follows. Indeed, since $\tau$ is a face of $\mathcal{A}$, we have a closed embedding $X_\tau\hookrightarrow X_\mathcal{A}$, see \S\ref{sec:toric}. On the other hand, a Cayley structure $\pi:\tau\to \Delta_l$ induces a surjection \begin{align*} \mathbb{K}[S_\tau]&\to \mathbb{K}[y_{e_0},\ldots,y_{e_l}]\\ \chi^{(u,1)}&\mapsto y_{\pi(u)} \end{align*} leading to a closed embedding $\mathbb{P}^l\hookrightarrow X_\tau$, and thus an $l$-plane $L_\pi\subset X_\mathcal{A}$ and corresponding point $[L_{\pi}]\in \mathbf{F}_l(X_\mathcal{A})$. The variety $Z_\pi$ is defined to be the torus orbit closure of $[L_{\pi}]$ in $\mathbf{F}_l(X_\mathcal{A})$. By construction, it is a (potentially non-normal) toric variety. Given a Cayley structure $\pi:\tau\to \Delta_l$, we also construct a subvariety $Z_{\pi,k}$ of $\mathbf{F}_k(X_\mathcal{A})$ for any $k\leq l$. 
Indeed, we take $Z_{\pi,k}$ to be the subvariety of $\mathbf{F}_k(X_\mathcal{A})$ whose points correspond to those $k$-dimensional linear spaces which are contained in some $l$-dimensional linear space $L'$ parametrized by $Z_\pi$. This set $Z_{\pi,k}\subset \mathbf{F}_k(X_\mathcal{A})$ is indeed a variety: consider the pullback $\mathcal{U}$ to $Z_\pi$ of the universal bundle on $\mathbf{F}_l(X_\mathcal{A})$. Then the Grassmann bundle $\mathbb{G}(k,\mathcal{U})$ parametrizing projective $k$-planes in the fibers of $\mathcal{U}$ has a natural proper map to $\mathbf{F}_k(X_\mathcal{A})$. Its image is closed, and it is exactly $Z_{\pi,k}$ as described above. In order to state our main theorem, we define a partial order on the set of Cayley structures. We define $\pi:\tau \to \Delta_l$ to be greater than or equal to $\pi':\tau'\to \Delta_{l'}$ (written $\pi\succeq \pi'$) if \begin{enumerate} \item $\tau'$ is a face of $\tau$; and \item There is a surjection $\rho:\Delta_{l}\to\Delta_{l'}$ such that $(\rho\circ\pi)_{|\tau'}=\pi'$. \end{enumerate} In other words, $\pi\succeq\pi'$ if there exists a map $\rho$ making the diagram commute: \[ \xymatrix{ \tau' \ar[r]^{\pi'}\ar@{^{(}->}[d] &\Delta_{l'}\\ \tau \ar[r]^{\pi} & \Delta_{l} \ar@{-->}[u]^{\rho}}. \] Note that when considering Cayley structures $\pi:\tau\to\Delta_l$ and $\pi':\tau\to\Delta_l$, we do not differentiate between $\pi$ and $\pi'$ if $\pi\succeq \pi'\succeq \pi$, that is, they differ by a permutation of $\Delta_l$. We call such Cayley structures equivalent. \begin{thm}\label{thm:main} For $k\geq 1$, the irreducible components of $\mathbf{F}_k(X_\mathcal{A})$ with their reduced structure are exactly the varieties $Z_{\pi,k}$, as $\pi$ ranges over all maximal Cayley structures $\pi:\tau\to \Delta_l$ for $\tau\prec \mathcal{A}$ and $l\geq k$. \end{thm} \noindent We will prove this theorem in \S\ref{sec:proof}. First, we need to better understand the varieties $Z_{\pi,k}$, which we do in the next section. 
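To illustrate the partial order, consider the following small example of our own in the setting of Example \ref{ex:surface2}. Let $\pi:\mathcal{A}\to\Delta_1$ be the Cayley structure sending $(0,0),(0,1)$ to $e_0$ and $(1,0),(1,2)$ to $e_1$, and let $\pi':\tau'\to\Delta_1$ be the Cayley structure on the face $\tau'=\{(0,0),(1,0)\}$ sending $(0,0)$ to $e_0$ and $(1,0)$ to $e_1$. Taking $\rho$ to be the identity, the diagram
\[
\xymatrix{ \tau' \ar[r]^{\pi'}\ar@{^{(}->}[d] &\Delta_{1}\\ \mathcal{A} \ar[r]^{\pi} & \Delta_{1} \ar[u]^{\rho=\mathrm{id}}}
\]
commutes, so $\pi\succeq\pi'$; in particular, $\pi'$ is not maximal.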
\begin{ex}\label{ex:birkhoff2} Consider the set $\mathcal{A}$ from Example \ref{ex:birkhoff1} whose convex hull is the Birkhoff polytope $B_3$. There are exactly six Cayley structures (up to equivalence) of the form $\pi:\mathcal{A}\to\Delta_2$, given by projecting a permutation matrix in $\mathbb{Z}^{3\times 3}$ onto a single row or a single column. These Cayley structures are all maximal. On the other hand, the nine facets $\tau\prec \mathcal{A}$ described in Example \ref{ex:birkhoff1} are all empty $3$-simplices, so they naturally give Cayley structures $\pi:\tau\to\Delta_3$. These Cayley structures are also maximal. In fact, every maximal Cayley structure for $\mathcal{A}$ is of the above form. By Theorem \ref{thm:main}, we conclude that $\mathbf{F}_k(X_\mathcal{A})$ has $15$ irreducible components for $k=1,2$ and $\mathbf{F}_3(X_\mathcal{A})$ has $9$ irreducible components. \end{ex}

\section{Geometry of $Z_{\pi,k}$}\label{sec:comps}
Let $\pi:\tau\to \Delta_l$ be a Cayley structure. The $l$-plane $L_\pi$ arises as the rowspan of the $(l+1)\times \# \mathcal{A}$ matrix $P=(p_{iu})$ where \[ p_{iu}= \begin{cases} 1 & u\in \tau \ \textrm{and}\ \pi(u)=e_i\\ 0 & \textrm{otherwise}. \end{cases} \] Here, the rows of $P$ are indexed by $i=0,\ldots,l$ and the columns by $u\in \mathcal{A}$. Given $t\in T$, we then have that $t\cdot L_\pi$ is the rowspan of the matrix $t\cdot P$ where $T$ acts on column $u$ by multiplication with $\chi^u$, that is, \[ t\cdot P=(\chi^u(t) p_{iu}). \] We now describe the local structure of $Z_\pi$ on the Pl\"ucker charts of $\mathbf{F}_l(X_\mathcal{A})$. Choose some subset $\sigma\subset \mathcal{A}$ of size $l+1$ for which the corresponding minor of $P$ is non-vanishing. This occurs if and only if $\sigma\subset \tau$ and $\sigma$ contains exactly one element from each fiber $\pi^{-1}(e_i)$. 
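As a small worked instance (our own computation), consider the Cayley structure $\pi:\mathcal{A}\to\Delta_1$ of Example \ref{ex:surface2}. Ordering the columns as $(0,0),(0,1),(1,0),(1,2)$, we obtain
\[
P=\left(\begin{array}{cccc} 1&1&0&0\\ 0&0&1&1\end{array}\right),\qquad
t\cdot P=\left(\begin{array}{cccc} \chi^{(0,0)}(t)&\chi^{(0,1)}(t)&0&0\\ 0&0&\chi^{(1,0)}(t)&\chi^{(1,2)}(t)\end{array}\right).
\]
The minor corresponding to $\sigma=\{(0,0),(1,0)\}$ is non-vanishing, and indeed $\sigma$ contains exactly one element from each fiber of $\pi$.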
\begin{defn}Fixing $\sigma\subset\tau$ as above, for each $u\in \tau$ we define $\lambda(u)$ to be the unique $v\in\sigma$ such that $\pi(v)=\pi(u)$. \end{defn} Setting the Pl\"ucker coordinate corresponding to $\sigma$ equal to one, $t\cdot L_\pi$ is expressed uniquely as the rowspan of the matrix $(p_{vu}(t))$, with \begin{align*} p_{vu}(t)=\begin{cases} \chi^{u-\lambda(u)}(t)& u\in \tau\ \textrm{and}\ \pi(u)=\pi(v)\\ 0 & \textrm{else}. \end{cases} \end{align*} Here, the rows and columns of $(p_{vu}(t))$ are labeled respectively by elements of $\sigma$ and $\mathcal{A}$. On the Pl\"ucker chart corresponding to $\sigma$, $Z_\pi$ is thus isomorphic to the affine toric variety whose coordinate ring is \[ \mathbb{K}[S(\pi,\sigma)] \] where $S(\pi,\sigma)$ is the semigroup generated by the lattice elements $u-\lambda(u)$ for $u\in\tau$. If $\sigma$ is a face of $\mathcal{A}$, then the semigroup $S(\pi,\sigma)$ is pointed. Indeed, let $w\in M^*$ be such that $\sigma$ is the face of $\mathcal{A}$ on which $w$ is minimized. Then $w(u-\lambda(u))\geq 0$, with equality if and only if $u\in\sigma$. Hence, every non-trivial element $s$ of $S(\pi,\sigma)$ satisfies $w(s)>0$, so $S(\pi,\sigma)$ is pointed. In such situations, this chart of $Z_\pi$ has a unique torus fixed point \cite[Proposition 1.3.2]{CLS}, which is exactly the fixed point corresponding to $\sigma$ under Proposition \ref{prop:fixed}. We thus have \begin{prop} The torus fixed points of $Z_\pi$ are in bijection with empty simplicial $l$-faces of $\tau$ surjecting onto the vertices of $\Delta_l$. \end{prop} We also need to understand the local structure of $Z_{\pi,k}$ for a Cayley structure $\pi:\tau\to \Delta_l$, $l\geq k$. Similar to above, the Pl\"ucker charts containing $Z_{\pi,k}$ correspond to choosing some subset $\sigma\subset \mathcal{A}$ of size $k+1$ for which the corresponding submatrix of $P$ has full rank. This is the same as requiring that $\sigma$ surjects onto $k+1$ elements of $\Delta_l$. 
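Before describing $Z_{\pi,k}$ locally, let us illustrate the chart description of $Z_\pi$ above with a small computation of our own, again using the Cayley structure $\pi:\mathcal{A}\to\Delta_1$ of Example \ref{ex:surface2} and $\sigma=\{(0,0),(1,0)\}$. Here $\lambda((0,1))=(0,0)$ and $\lambda((1,2))=(1,0)$, so $S(\pi,\sigma)$ is generated by
\[
(0,1)-(0,0)=(0,1)\qquad\textrm{and}\qquad (1,2)-(1,0)=(0,2).
\]
Hence $S(\pi,\sigma)=\mathbb{Z}_{\geq 0}\cdot(0,1)$, and on this Pl\"ucker chart $Z_\pi$ is isomorphic to $\mathbb{A}^1$. Note that $\sigma$ is the face of $\mathcal{A}$ on which the second coordinate is minimized, so $S(\pi,\sigma)$ is pointed, as expected.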
To locally describe $Z_{\pi,k}$, we first restrict to a local chart of $Z_\pi$ corresponding to some $\widetilde\sigma\subset \tau$ as above with $\widetilde\sigma$ in bijection with the vertices of $\Delta_l$. On this chart of $Z_\pi$, the universal bundle $\mathcal{U}$ trivializes by means of the matrix $(p_{vu}(t))$ from above. The projective $k$-planes parametrized by the restriction of $\mathbb{G}(k,\mathcal{U})$ to this chart are thus the $k$-dimensional subspaces of the $l$-dimensional rowspan of $(p_{vu}(t))$ in projective space. The local Pl\"ucker charts of this relative Grassmannian correspond to choosing a $k$-face $\sigma$ of $\widetilde\sigma$. On such a chart, a parametrization of the corresponding linear spaces is given by the rowspan of the matrix $(q_{vu}(t))$ where \[ q_{vu}(t)=\begin{cases} \chi^{u-\lambda(u)}(t) & u\in \tau\ \textrm{and}\ v=\lambda(u)\in\sigma\\ \lambda_{v\lambda(u)}\chi^{u-\lambda(u)}(t) & u\in \tau\ \textrm{and}\ \lambda(u)\notin\sigma\\ 0 & \textrm{otherwise} \end{cases} \] for parameters $\lambda_{vw}$ with $v\in\sigma$, $w\in \widetilde\sigma\setminus\sigma$. This matrix looks like \begin{equation}\label{eqn:matrix} \begin{blockarray}{cccccc} & \lambda(u)=v_0 & \lambda(u)=v_1 & \lambda(u)=v_k & \lambda(u)=w\notin\sigma & u\notin \tau \\ \begin{block}{c(c|c|c|c|c)} v_0 & \chi^{u-v_0} &0 &0&\lambda_{v_0w}\chi^{u-w} & 0 \\ v_1 & 0& \chi^{u-v_{1}} &0&\lambda_{v_1w}\chi^{u-w} & 0 \\ & \vdots & &&\vdots & \vdots \\ v_k & 0&0& \chi^{u-v_{k}} &\lambda_{v_kw}\chi^{u-w} & 0 \\ \end{block} \end{blockarray} \end{equation} where $\sigma=\{v_{0},\ldots,v_{k}\}$. Note that to obtain this matrix, we have simply taken the $k+1$ rows of $(p_{vu}(t))$ corresponding to the elements of $\sigma$, and then added on generic linear combinations of the remaining rows. 
Let $S(\pi,\widetilde\sigma,\sigma)$ be the semigroup of $M\times \mathbb{Z}^{\sigma\times(\widetilde\sigma\setminus\sigma)}$ generated by the lattice elements \begin{equation}\label{eqn:gens} \begin{aligned} \gamma(u):=&u-\lambda(u) &\qquad u\in \tau\ \textrm{and}\ \lambda(u)\in\sigma\\ \gamma(v,u):=&e_{v\lambda(u)}+u-\lambda(u) &\qquad v\in\sigma,\ u\in \tau\ \textrm{and}\ \lambda(u)\notin\sigma\\ \end{aligned} \end{equation} where $e_{v\lambda(u)}$ are the standard basis vectors of $\mathbb{Z}^{\sigma\times(\widetilde\sigma\setminus\sigma)}$. Then the closure of the image of $\mathbb{G}(k,\mathcal{U})$ in the Pl\"ucker chart of $\mathbf{F}_k(X_\mathcal{A})$ corresponding to $\sigma$ is the affine toric variety whose coordinate ring is \[ \mathbb{K}[S(\pi,\widetilde\sigma,\sigma)]. \] In other words, on this Pl\"ucker chart, $Z_{\pi,k}$ is the above affine toric variety. Note that different choices of $\widetilde\sigma$ lead to isomorphic semigroups $S(\pi,\widetilde\sigma,\sigma)$. If $\sigma$ is a face of $\mathcal{A}$, this chart of $Z_{\pi,k}$ has a (unique) torus fixed point, which is exactly the fixed point corresponding to $\sigma$ under Proposition \ref{prop:fixed}. We thus have \begin{prop}\label{prop:fixed2} The torus fixed points of $Z_{\pi,k}$ are in bijection with empty simplicial $k$-faces $\sigma$ of $\tau$ mapping to $k+1$ vertices of $\Delta_l$. \end{prop} \begin{ex}\label{ex:birkhoff3} Using the above local description of $Z_{\pi,k}$, we get explicit local descriptions for the irreducible components of $\mathbf{F}_2(X_\mathcal{A})$, where $\mathcal{A}$ is as in Examples \ref{ex:birkhoff1} and \ref{ex:birkhoff2}. Indeed, for Cayley structures of the form $\pi:\tau\to\Delta_3$, $\tau$ a facet of $\mathcal{A}$, $Z_\pi$ is just the single point $[L_\pi]$, and $Z_{\pi,2}$ is thus isomorphic to the Grassmannian $G(3,4)\cong\mathbb{P}^3$. 
On the other hand, consider one of the maximal Cayley structures $\pi:\mathcal{A}\to\Delta_2$, for example, $\pi(u_i)=\pi(v_i)=e_i$. If we choose $\sigma=\{v_0,u_1,u_2\}$, the corresponding matrix $(p_{vu}(t))$ has the form \[ \begin{blockarray}{ccccccc} & v_0&u_1&u_2&u_0&v_1&v_2\\ \begin{block}{c(cccccc)} v_0 & 1 &0 &0&t_1t_2 & 0 & 0\\ u_1 & 0& 1 &0& 0& t_1& 0 \\ u_2 & 0&0& 1 &0 & 0 & t_2\\ \end{block} \end{blockarray} \] for an appropriate choice of parameters $t_1,t_2$. Hence, the component $Z_\pi$ is covered by copies of $\mathbb{A}^2$. \end{ex} \begin{rem}\label{rem:chow} A central computation of \cite{ilten:16a} involves understanding those $5$-planes of $\mathbb{P}^{11}$ which are not contained in a coordinate hyperplane, yet are contained in the hypersurface $X$ on which \[ f=x_0x_1x_2+x_3x_4x_5+x_6x_7x_8+x_9x_{10}x_{11} \] vanishes. Setting $f_1=x_0x_1x_2+x_3x_4x_5$ and $f_2=x_6x_7x_8+x_9x_{10}x_{11}$, clearly if a $5$-plane $L$ is contained in the variety $V(f_1,f_2)=V(f_1)\cap V(f_2)$ on which $f_1$ and $f_2$ vanish, it is contained in $X$. But since $V(f_i)$ is just the cone over the variety $X_\mathcal{A}$ from Example \ref{ex:birkhoff3}, it is straightforward to see that any $5$-plane $L$ contained in $V(f_1,f_2)$ but not in a coordinate hyperplane comes in a $4$-dimensional family (up to certain permutations of the variables). Somewhat surprisingly, after allowing for different choices of $f_1$ and $f_2$ by permuting the terms of $f$, all $5$-planes of $X$ not contained in a coordinate hyperplane arise in this fashion \cite[Proposition 3.2]{ilten:16a}. It would be interesting to provide a non-computational proof of this fact, and to understand more generally in what situations linear spaces of a special (yet non-toric) hypersurface arise in this toric fashion. \end{rem} \begin{prop}\label{prop:contain} Let $\pi:\tau \to \Delta_{l}$ and $\pi':\tau'\to \Delta_{l'}$ be Cayley structures for $l,l'\geq k\geq 1$. 
Then $Z_{\pi,k}$ contains $Z_{\pi',k}$ if and only if $\pi\succeq \pi'$. \end{prop} \begin{proof} It is straightforward to check from the construction of $Z_{\pi,k}$ that $\pi\succeq \pi'$ implies $Z_{\pi',k}\subset Z_{\pi,k}$. On the other hand, suppose that $Z_{\pi',k} \subset Z_{\pi,k}$. Since every $k$-plane of $L_{\pi'}$ must be contained in $X_{\tau}\subset X_\mathcal{A}$, we see that $\tau'\subset \tau$, so $\tau'$ is in fact a face of $\tau$. To construct a map $\rho:\Delta_{l}\to\Delta_{l'}$ such that $(\rho\circ\pi)_{|\tau'}=\pi'$, define $\rho(e_i)$ to be $e_0$ if $e_i\notin \pi(\tau')$. Otherwise, choose some $u\in \tau'$ such that $\pi(u)=e_i$ and set $\rho(e_i)=\pi'(u)$. We must show that for $u,u'\in \tau'$, if $\pi(u)=\pi(u')$, then $\pi'(u)=\pi'(u')$. If $\pi'(u)\neq \pi'(u')$, note that for any matrix parametrizing a sufficiently general $k$-plane of $L_{\pi'}$, the two columns corresponding to $u$ and $u'$ will be linearly independent. On the other hand, consider any empty simplicial $k$-face $\sigma$ of $\tau'$ such that $\pi(\sigma)$ contains $\pi(u)$. By inspecting the matrix $(q_{vu}(t))$ parametrizing $Z_{\pi,k}$ on the local chart corresponding to $\sigma$, we note that the two columns corresponding to $u$ and $u'$ will be linearly dependent. Hence, we must have $\pi'(u)=\pi'(u')$. \end{proof}

\section{Proof of Main Result}\label{sec:proof}
Let $L$ be a $k$-dimensional linear subspace of $X_\mathcal{A}$. We will show that $[L]$ is a point of $Z_{\pi,k}$ for some Cayley structure $\pi$. Hence, every point of $\mathbf{F}_k(X_\mathcal{A})$ is contained in some $Z_{\pi,k}$, so Theorem \ref{thm:main} will then follow from Proposition \ref{prop:contain}. We can represent $L$ as the rowspan of a full-rank $((k+1)\times \#\mathcal{A})$-matrix $(\alpha_{iu})$ with $i=0,\ldots,k$ and columns indexed by $u\in \mathcal{A}$. We let \[ y_u=\sum_{i=0}^k\alpha_{iu}y_i \] be a linear form in indeterminates $y_0,\ldots,y_k$. 
The criterion that $L$ be contained in $X_\mathcal{A}$ is exactly the condition that for any relation \[ \sum_{u\in\mathcal{A}} a_u u=\sum_{u\in\mathcal{A}} b_u u \] with $a_u,b_u\in \mathbb{Z}_{\geq 0}$ and $\sum a_u=\sum b_u$, we have \begin{equation}\label{eqn:key} \prod_{u\in\mathcal{A}} y_u^{a_u} =\prod_{u\in\mathcal{A}} y_u^{b_u}. \end{equation} Indeed, the defining equations of $X_\mathcal{A}$ are binomials corresponding to affine relations as above. Let $\tau$ consist of those $u\in\mathcal{A}$ such that $y_u$ is non-zero. \begin{lemma} The set $\tau$ is a face of $\mathcal{A}$. \end{lemma} \begin{proof} Considering a generic point $\eta$ of $L$ with coordinates $\eta_u$ for $u\in\mathcal{A}$, the set $\tau$ consists of exactly those $u$ with $\eta_u\neq 0$. On the other hand, $\eta$ is contained in a torus orbit $\mathfrak{o}$ of $X_\mathcal{A}$. Such an orbit $\mathfrak{o}$ corresponds to a face $\widetilde\sigma$ of $\mathcal{A}$, see \S\ref{sec:toric}. The points $\zeta=(\zeta_u)$ contained in the orbit $\mathfrak{o}$ are precisely those points for which $\zeta_u\neq 0$ if and only if $u\in \widetilde\sigma$. The claim now follows. \end{proof} Let $V$ be the vector space of linear forms in $\mathbb{K}[y_0,\ldots,y_k]$. Then we have a map \begin{align*} \rho:\tau&\to \mathbb{P}(V)\\ u&\mapsto \langle{y_u}\rangle \end{align*} where $\langle{y_u}\rangle$ is the line in $V$ spanned by $y_u$. Composing $\rho$ with a bijection between $\rho(\tau)$ and the elements of $\Delta_l$ for $l=\#\rho(\tau)-1$, we obtain a map $\pi:\tau\to\Delta_l$. Note that since the rank of $(\alpha_{iu})$ is $k+1$, we must have $l\geq k$. \begin{lemma}\label{lemma:Cayley} The map $\pi$ is a Cayley structure. \end{lemma} \begin{proof} Consider any affine relation \[ \sum_{u\in\tau} a_u u=\sum_{u\in\tau} b_u u \] with $a_u,b_u\in \mathbb{Z}_{\geq 0}$ and $\sum a_u=\sum b_u$.
Then by equation \eqref{eqn:key} and the fact that $\mathbb{K}[y_0,\ldots,y_k]$ is a unique factorization domain, we must have \[ \sum_{\rho(u)=v} a_u =\sum_{\rho(u)=v} b_u \] for each $v\in\rho(\tau)$. But this implies that \[ \sum_{u\in\tau} a_u \pi(u)=\sum_{u\in\tau} b_u \pi(u). \] Hence, $\pi$ is a Cayley structure. \end{proof} \begin{rem}\label{rem:cover} Lemma \ref{lemma:Cayley} provides a straightforward proof that $X_\mathcal{A}$ is covered by $k$-planes if and only if there is a Cayley structure $\pi:\mathcal{A}\to \Delta_k$. Indeed, if $X_\mathcal{A}$ is covered by $k$-planes, there is a $k$-plane $L$ containing a point in the dense torus orbit of $X_\mathcal{A}$. For this plane $L$, the set $\tau$ as above must be all of $\mathcal{A}$, and by the lemma we obtain a Cayley structure $\pi:\mathcal{A}\to\Delta_l$ for some $l\geq k$. We can compose this with any affine surjection $\Delta_l\to\Delta_k$ to get the desired Cayley structure. On the other hand, if $\pi:\mathcal{A}\to\Delta_k$ is a Cayley structure, the set of $k$-planes parametrized by $Z_{\pi}$ clearly covers $X_\mathcal{A}$. We thus recover \cite[Theorem 1.2]{ito:15a} for polarized normal toric varieties $(X,\mathcal{L})$ in the special case that $\mathcal{L}$ is ample. \end{rem} Returning to our proof of Theorem \ref{thm:main}, we claim now that the point $[L]\in\mathbf{F}_k(X_\mathcal{A})$ lies in $Z_{\pi,k}\subset\mathbf{F}_k(X_\mathcal{A})$. Choose $\sigma\subset \tau$ such that the $(k+1)\times (k+1)$ matrix $(\alpha_{iv})_{v\in \sigma}$ is invertible. We can thus represent $L$ by a matrix $(\alpha_{vu})$ with rows and columns indexed by $v\in \sigma,u\in \mathcal{A}$ such that the square submatrix $(\alpha_{vu})_{v,u\in\sigma}$ is the identity matrix. Similarly, the forms $y_v$, $v\in \sigma$ are a basis of the vector space $V$ and we have $y_u=\sum_{v\in\sigma} \alpha_{vu}y_v$. 
Consider any $\widetilde\sigma\subset \tau$ such that $\sigma\subset \widetilde\sigma$ and $\pi$ gives a bijection between $\widetilde\sigma$ and the vertices of $\Delta_l$. As we saw in \S\ref{sec:comps}, the choice of $\sigma$ gives an affine chart $U$ of $Z_{\pi,k}$. This chart $U$ is the toric variety corresponding to the semigroup $S(\pi,\widetilde\sigma,\sigma)$ generated by $\gamma(u)$ and $\gamma(v,u)$ as in Equation \eqref{eqn:gens}. Equivalently, this chart is the affine toric variety parametrized by the entries of the matrix from equation \eqref{eqn:matrix}. We will show that $[L]$ is a point in this chart. Now, all relations among these parametrizing functions are generated from those binomial equations $f=0$ corresponding to relations of the form \begin{equation}\label{eqn:general} \begin{aligned} \sum_{\lambda(u)\in\sigma} a_u \gamma(u) +\sum_{\substack{\lambda(u)\notin\sigma\\v\in\sigma}} a_u \gamma(v,u) =\sum_{\lambda(u)\in\sigma} b_u \gamma(u) +\sum_{\substack{\lambda(u)\notin\sigma\\v\in\sigma}} b_u \gamma(v,u) \end{aligned} \end{equation} in $M\times\mathbb{Z}^{\sigma\times(\widetilde\sigma\setminus\sigma)}$. Here, $a_u,b_u\in\mathbb{Z}_{\geq 0}$. We need to show that all such binomials $f$ vanish at the point $[L]=(\alpha_{vu})$, that is, that \begin{equation}\label{eqn:firstsat} \prod_{\lambda(u)\in\sigma} \alpha_{\lambda(u)u}^{a_u}\prod_{\substack{\lambda(u)\in\widetilde\sigma\setminus\sigma\\v\in\sigma}} \alpha_{vu}^{a_u}= \prod_{\lambda(u)\in\sigma} \alpha_{\lambda(u)u}^{b_u}\prod_{\substack{\lambda(u)\in\widetilde\sigma\setminus\sigma\\v\in\sigma}} \alpha_{vu}^{b_u}. \end{equation} Fix an order $<$ on the elements of $\sigma$. We define a map $\kappa:\widetilde{\sigma}\setminus\sigma\to\sigma$ as follows: for any $v\in \widetilde\sigma\setminus\sigma$, let $\kappa(v)$ be the smallest element of $\sigma$ such that $\alpha_{\kappa(v)v}\neq 0$. 
For any $u$ with $\lambda(u)=v\in\widetilde\sigma\setminus\sigma$, we set $\kappa(u)=\kappa(\lambda(u))$. Our conditions on $\kappa(u)$ imply that for any set of natural numbers $a_u$, the coefficient of \[ \prod_{\lambda(u)\in\widetilde\sigma\setminus\sigma}y_{\kappa(u)}^{a_u} \qquad \text{in}\qquad \prod_{\lambda(u)\in\widetilde\sigma\setminus\sigma}y_{u}^{a_u} \] is \begin{equation}\label{eqn:monomial} \prod_{\lambda(u)\in\widetilde\sigma\setminus\sigma}\alpha_{\kappa(u)u}^{a_u}. \end{equation} Some special affine relations are those of the form \begin{equation*} \gamma(v,u)+\gamma(\kappa(u), \lambda(u))= \gamma(\kappa(u),u)+\gamma(v,\lambda(u)) \end{equation*} for $v\in\sigma$ and $u\in\tau$ with $\lambda(u)\notin\sigma$. The corresponding binomial evaluated at $[L]$ is exactly \[ \alpha_{vu}\alpha_{\kappa(u)\lambda(u)}-\alpha_{\kappa(u) u}\alpha_{v\lambda(u)}. \] This expression is indeed zero: $\pi(u)=\pi(\lambda(u))$ implies that $y_u$ and $y_{\lambda(u)}$ are linearly dependent. Since $\alpha_{\kappa(u)\lambda(u)}\neq 0$, we can rewrite \[ \alpha_{vu}=\frac{\alpha_{\kappa(u) u}\alpha_{v\lambda(u)}}{\alpha_{\kappa(u)\lambda(u)}}. \] Thus, it suffices to consider relations \eqref{eqn:general} where the only $\gamma(v,u)$ terms appearing are those with either $v=\kappa(u)$ or $u=\lambda(u)$. Indeed, if \eqref{eqn:firstsat} is satisfied for such relations, we obtain equality for arbitrary relations by repeated substitutions using the above expressions for $\alpha_{vu}$. Furthermore, for such a relation, the only terms containing an $e_{v\lambda(u)}$ component for $v\neq \kappa(u)$ are $\gamma(v,\lambda(u))$. We can cancel these from both sides of the relation to assume that the only $\gamma(v,u)$ terms appearing are those with $v=\kappa(u)$.
We can thus reduce our general relations \eqref{eqn:general} to \begin{equation}\label{eqn:special} \begin{aligned} \sum_{\lambda(u)\in\sigma} a_u \gamma(u) +\sum_{\substack{\lambda(u)\notin\sigma}} a_u \gamma(\kappa(u),u) =\sum_{\lambda(u)\in\sigma} b_u \gamma(u) +\sum_{\substack{\lambda(u)\notin\sigma}} b_u \gamma(\kappa(u),u). \end{aligned} \end{equation} We must then show that \begin{equation}\label{eqn:want} \prod_{\lambda(u)\in\sigma} \alpha_{\lambda(u)u}^{a_u}\prod_{\lambda(u)\in\widetilde\sigma\setminus\sigma} \alpha_{\kappa(u)u}^{a_u}= \prod_{\lambda(u)\in\sigma} \alpha_{\lambda(u)u}^{b_u}\prod_{\lambda(u)\in\widetilde\sigma\setminus\sigma} \alpha_{\kappa(u)u}^{b_u}. \end{equation} Projecting \eqref{eqn:special} to $\mathbb{Z}^{\sigma\times(\widetilde\sigma\setminus\sigma)}$, we obtain that for any $v\in\widetilde\sigma\setminus\sigma$, \[ \sum_{\lambda(u)=v} a_u=\sum_{\lambda(u)=v} b_u. \] Indeed, for such $v$, the coefficient of $e_{\kappa(v)v}$ in the left hand side of the projection of \eqref{eqn:special} is $\sum_{\lambda(u)=v} a_u$, and a similar claim holds for the right hand side. Thus, \[ \prod_{\lambda(u)\notin\sigma} \alpha_{\kappa(u)\lambda(u)}^{a_u}=\prod_{\lambda(u)\notin\sigma} \alpha_{\kappa(u)\lambda(u)}^{b_u}. \] We denote this quantity by $c$; note that by our construction of $\kappa$, $c\neq 0$. On the other hand, rearranging and projecting \eqref{eqn:special} to $M$, we have the affine relation \begin{align*} \sum_{u\in\tau} a_u u+b_u \lambda(u) =\sum_{u\in\tau} b_u u+a_u\lambda(u) \end{align*} among the elements of $\tau\subset \mathcal{A}$. Passing to the induced relation on the $y_u$ from equation \eqref{eqn:key} gives \begin{equation*} \prod_{u\in\tau} y_u^{a_u}\prod_{u\in\tau} y_{\lambda(u)}^{b_u}=\prod_{u\in\tau} y_{\lambda(u)}^{a_u} \prod_{u\in\tau} y_u^{b_u}. \end{equation*} We view both sides of this equation as polynomials in the $y_v$, $v\in\sigma$. 
Consider the coefficients on both sides of this equation for the monomial \[ \prod_{v\in\sigma}y_v^{d_v}\cdot\prod_{v\in\widetilde\sigma\setminus\sigma} y_{\kappa(v)}^{d_v} \] where \begin{align*} d_v=\sum_{\pi(u)=\pi(v)} a_u+b_u. \end{align*} But by \eqref{eqn:monomial} these are just $c$ times the left and right hand sides of Equation \eqref{eqn:want}. We conclude that $[L]$ satisfies the necessary binomial equations and is thus contained in $Z_{\pi,k}$. We have now seen that $\mathbf{F}_k(X_\mathcal{A})=\bigcup Z_{\pi,k}$, where the union is taken over all maximal Cayley structures $\pi$. But then each $Z_{\pi,k}$ must be an irreducible component by Proposition \ref{prop:contain}. This completes our proof of Theorem \ref{thm:main}. \section{Dimension and Global Description} Having now understood the component structure of $\mathbf{F}_k(X_\mathcal{A})$, we turn to studying some properties of its components. We begin with a description of the dimension of the varieties $Z_{\pi,k}$. \begin{prop}\label{prop:dim} Let $\tau$ be an $m$-face of $\mathcal{A}$ and $\pi:\tau\to\Delta_l$ a Cayley structure for some $l\geq k$. Then the subvariety $Z_{\pi,k}$ of $\mathbf{F}_k(X_\mathcal{A})$ has dimension $m-l+(k+1)(l-k)$. \end{prop} \begin{proof} Let $\widetilde \sigma$ be any $l$-face of $\tau$ surjecting onto $\Delta_l$, and $\sigma$ any $k$-face of $\widetilde \sigma$. By the local description of $Z_{\pi,k}$ from \S\ref{sec:comps}, its dimension is given by the dimension of the semigroup $S(\pi,\widetilde\sigma,\sigma)$ contained in $M\times \mathbb{Z}^{\sigma\times(\widetilde\sigma\setminus\sigma)}$. The projection $\overline S$ of this semigroup to the $M$ factor is generated by $u-\lambda(u)$ for $u\in \tau$. The dimension of $\overline S$ is $m-l$, since $\pi$ is an affine linear map, which when extended to a map from the affine span of $\tau$ to $\mathbb{R}^{l+1}$ has $(m-l)$-dimensional fibers.
We thus see that the dimension of $S(\pi,\widetilde\sigma,\sigma)$ is at most \[m-l+\#(\sigma\times(\widetilde\sigma\setminus\sigma))= m-l+(k+1)(l-k). \] But this dimension is actually achieved, since in the projection $S(\pi,\widetilde\sigma,\sigma)\to\overline S$ the fiber over $0$ contains the $(k+1)(l-k)$ linearly independent $e_{vu}$ for $v\in\sigma$, $u\in\widetilde\sigma\setminus\sigma$. \end{proof} For a Cayley structure $\pi:\tau\to\Delta_l$, $Z_\pi$ is the $T$-orbit closure of the point $[L_\pi]$ in the Fano scheme $\mathbf{F}_l(X_\mathcal{A})$. As such, it is a (potentially non-normal) toric variety. Embedding $\mathbf{F}_l(X_\mathcal{A})$ in projective space by the Pl\"ucker embedding, we arrive at an equivariant embedding of $Z_\pi$. This can be described globally as follows. Let $\mathcal{A}_\pi\subset M$ be \[ \mathcal{A}_\pi=\{u_0+\ldots+u_l\in M\ |\ u_i\in\pi^{-1}(e_i)\}. \] \begin{thm}\label{thm:polytope} In its Pl\"ucker embedding, $Z_\pi$ is the toric variety $X_{\mathcal{A}_\pi}$. \end{thm} \begin{proof} We simply consider the action of $T$ on each Pl\"ucker coordinate. As noted in \S\ref{sec:comps}, the non-zero Pl\"ucker coordinates for $Z_\pi$ correspond to choosing one element from each fiber $\pi^{-1}(e_i)$, that is, exactly the tuples $(u_0,\ldots,u_l)$ appearing in the construction of $\mathcal{A}_\pi$. Taking into account the action of $T$ on $L_\pi$ as described in \S\ref{sec:comps}, we see that $T$ acts on the coordinate corresponding to the tuple $(u_0,\ldots,u_l)$ with weight $u_0+\ldots+u_l$. \end{proof} \begin{rem} Note that the action of $T$ on $Z_\pi$ is not faithful: a faithful action is given by the torus $\overline T$ whose characters are generated by the kernel of $\pi$. However, the canonical embedding of $Z_\pi$ in projective space does not possess a canonical $\overline T$-equivariant structure. \end{rem} \begin{rem} Although the components $Z_{\pi,k}$ are locally toric, they are in general not globally toric.
\end{rem} \begin{ex} Continuing Example \ref{ex:birkhoff3}, we had already seen that $\mathbf{F}_2(X_\mathcal{A})$ has $6$ $2$-dimensional components, and $9$ $3$-dimensional components, which agrees with the contents of Proposition \ref{prop:dim}. As noted before, the $3$-dimensional components are all isomorphic to $\mathbb{P}^3$. We may apply Theorem \ref{thm:polytope} to determine that for one of the $2$-dimensional components, $\mathcal{A}_\pi$ is given by \begin{align*} &u_0+u_1+u_2\qquad &u_0+u_1+v_2 \qquad &u_0+v_1+u_2\qquad &u_0+v_1+v_2&\\ &v_0+u_1+u_2\qquad &v_0+u_1+v_2 \qquad &v_0+v_1+u_2\qquad &v_0+v_1+v_2& \end{align*} up to some permutation of the $v_j$. This configuration of lattice points is isomorphic to the set of lattice points in $\mathbb{Z}^2$ given by the columns of \[ \left(\begin{array}{c c c c c c c} 0 & 0 & 0 & 1 & 1 & -1 & -1\\ 0 & 1 & -1& 0 & 1 & 0 & -1 \end{array}\right). \] The corresponding toric variety is just $\Bl_3\mathbb{P}^2$, the blowup of $\mathbb{P}^2$ in $3$ points, embedded by its anticanonical divisor. \end{ex} \section{Smoothness of $\mathbf{F}_k(X_\mathcal{A})$} \begin{defn}\label{defn:smooth} Let $\sigma$ be an empty simplicial $k$-face of $\mathcal{A}$. We say that $\mathcal{A}$ is \emph{smooth at $\sigma$} if the semigroup generated by $\mathcal{A}-\sigma$ is isomorphic to $\mathbb{Z}^{k}\times\mathbb{Z}_{\geq0}^{n-k}$, where $n$ is the dimension of $\mathcal{A}$. \end{defn} \begin{rem}\label{rem:regular} The set $\mathcal{A}$ is smooth at $\sigma$ if and only if $X_\sigma\subset X_\mathcal{A}$ is not contained in the singular locus of $X_\mathcal{A}$. Indeed, $X_\sigma$ is the orbit closure of the distinguished point $\eta$ (cf. \cite[\S3.2]{CLS}) of the affine patch $U=\spec \mathbb{K} [\mathbb{N}\cdot (\mathcal{A}-\sigma)]$, so it suffices to check if the point $\eta$ is in the singular locus of $X_\mathcal{A}$. Furthermore, $U$ is the smallest torus invariant open set containing $\eta$.
Since the singular locus of $X_\mathcal{A}$ is torus invariant, $X_\mathcal{A}$ is smooth at $\eta$ if and only if $U$ is smooth. This is equivalent to $\mathbb{N}\cdot(\mathcal{A}-\sigma)$ being isomorphic to $\mathbb{Z}^{k}\times\mathbb{Z}_{\geq0}^{n-k}$, see \cite[\S1.3]{CLS}. \end{rem} \begin{thm}\label{thm:regular} Let $\sigma$ be an empty simplicial $k$-face of $\mathcal{A}$. Suppose that $\mathcal{A}$ is smooth at $\sigma$. Then any component $Z$ of $\mathbf{F}_k(X_\mathcal{A})$ containing the point $[L_\sigma]$ corresponding to $\sigma$ is smooth at $[L_\sigma]$. \end{thm} \begin{proof} We know by Theorem \ref{thm:main} and Proposition \ref{prop:fixed2} that the component $Z$ is of the form $Z_{\pi,k}$ for some Cayley structure $\pi:\tau\to \Delta_{l}$, $l\geq k$, with $\sigma$ a face of $\tau$ mapping to $k+1$ vertices of $\Delta_l$. Let $m$ denote the dimension of $\tau$. Note that the set of all $u\in\tau$ with $\pi(u)\in\pi(\sigma)$ is an $(m+k-l)$-dimensional face $\omega$ of $\tau$. Since $\mathcal{A}$ is smooth at $\sigma$, we find $u_1,\ldots,u_{n-k}\in\mathcal{A}$ such that every $u\in\mathcal{A}$ can be written uniquely as \begin{equation}\label{eqn:u} u=\sum_i a_i u_i+\sum_{v\in\sigma} b_vv \end{equation} with $a_i\in\mathbb{Z}_{\geq0}$, $b_v\in \mathbb{Z}$, and $\sum_i a_i+\sum_v b_v=1$. After reordering the $u_i$, we can assume that exactly $u_1,\ldots,u_{m-k}$ are in $\tau$, and $u_1,\ldots,u_{m-l}$ are in $\omega$. This follows from the fact that both sets are faces of $\mathcal{A}$. Furthermore, $u$ is in $\tau$ (respectively $\omega$) if and only if the representation \eqref{eqn:u} has $a_i=0$ for $i>m-k$ (respectively $i>m-l$). To study the Pl\"ucker chart of $Z_{\pi,k}$ containing $[L_\sigma]$, we consider the set $\widetilde\sigma=\sigma\cup\{u_{m-l+1},\ldots,u_{m-k}\}$. This has exactly $l+1$ elements and surjects (via $\pi$) onto the vertices of $\Delta_l$. Indeed, consider any $u\in\tau$ with $\pi(u)=e_j\notin\pi(\sigma)$.
Then since $\pi$ is affine, using \eqref{eqn:u} we have \[ e_j=\sum_{i\leq m-k} a_i\pi(u_i)+\sum_{v\in\sigma}b_v\pi(v). \] The only way this can happen is if $\pi(u_i)=e_j$ for some $i$, since $a_i\geq 0$. Thus, as in \S\ref{sec:comps}, the Pl\"ucker chart of $Z_{\pi,k}$ containing $[L_\sigma]$ is the toric variety associated to the semigroup $S(\pi,\widetilde\sigma,\sigma)$. We will show that $S(\pi,\widetilde\sigma,\sigma)$ is generated by $\gamma(u_i)=u_i-\lambda(u_i)$ for $i\leq m-l$, and $e_{vw}$ for $w\in \widetilde\sigma\setminus\sigma$, $v\in\sigma$. If this claim is true, we are done, since these generators are linearly independent; this implies the smoothness of the chart. Since \eqref{eqn:u} is an affine relation, we have that \begin{equation*} \pi(u)=\sum_i a_i \pi(u_i)+\sum_{v\in\sigma} b_v\pi(v) \end{equation*} for any $u\in\tau$. But the relations among the points of $\widetilde\sigma$ are exactly the same as those among the points of $\Delta_l$, hence \begin{equation*} \lambda(u)=\sum_i a_i \lambda(u_i)+\sum_{v\in\sigma} b_vv \end{equation*} and we obtain \[ u-\lambda(u)=\sum_{i\leq m-l} a_i(u_i-\lambda(u_i))+\sum_{i > m-l} a_i(u_i-u_i)=\sum_{i\leq m-l} a_i(u_i-\lambda(u_i)). \] In particular, if $u\in \omega$, then $\gamma(u)$ is generated by the $\gamma(u_i)$, $i=1,\ldots,m-l$. Furthermore, if $u\in\tau\setminus\omega$, then $\gamma(v,u)$ is generated by $e_{v\lambda(u)}$ along with the $\gamma(u_i)$, $i=1,\ldots,m-l$. To complete the proof, we note that our desired generators are indeed in $S(\pi,\widetilde\sigma,\sigma)$: $\gamma(u_i)$ obviously so, and $e_{v\lambda(u)}$ since $\gamma(v,\lambda(u))=e_{v\lambda(u)}$. \end{proof} \begin{cor}\label{cor:regular} Suppose that the dimension of the singular locus of $X_\mathcal{A}$ is at most $k-1$. Then in its reduced structure, each component of $\mathbf{F}_k(X_\mathcal{A})$ is smooth. \end{cor} \begin{proof} A component $Z_{\pi,k}$ is smooth if and only if it is smooth at its toric fixed points.
The claim now follows from Theorem \ref{thm:regular} and Remark \ref{rem:regular}. \end{proof} \section{Intersections and Connectedness} We can also completely describe the intersection behaviour of the subvarieties $Z_{\pi,k}$ of $\mathbf{F}_k(X_\mathcal{A})$. \begin{thm}\label{thm:intersect} Let $\pi_1:\tau_1\to\Delta_{l_1}$, $\pi_2:\tau_2\to\Delta_{l_2}$ be Cayley structures with $l_1,l_2\geq k$. Then \[ Z_{\pi_1,k}\cap Z_{\pi_2,k}=\bigcup_{\substack{\pi\preceq \pi_i\\i=1,2}} Z_{\pi,k}. \] \end{thm} \begin{proof} By Proposition \ref{prop:contain}, the right hand side is clearly contained in the left. On the other hand, consider some linear space $L$ such that $[L]\in Z_{\pi_1,k}\cap Z_{\pi_2,k}$. We may construct a Cayley structure $\pi:\tau\to\Delta_l$ as in \S\ref{sec:proof} such that $[L]\in Z_{\pi,k}$. Using notation as in \S\ref{sec:proof}, $u\in\tau$ only if $y_u\neq 0$, but since $[L]\in Z_{\pi_1,k}\cap Z_{\pi_2,k}$ this can only happen if $\tau\subset \tau_1\cap\tau_2$. Hence, $\tau$ is a face of $\tau_1$ and $\tau_2$. Furthermore, $\pi(u)\neq \pi(v)$ if and only if $y_u$ and $y_v$ are not linearly dependent. But again since $[L]$ is contained in the left hand side above, we must also have $\pi_i(u)\neq \pi_i(v)$. Hence, we can find maps $\rho_i:\Delta_{l_i}\to\Delta_l$ such that $(\rho_i\circ\pi_i)_{|\tau}=\pi$, so $Z_{\pi,k}$ contains $[L]$ and appears on the right hand side above. \end{proof} To describe the connected components of $\mathbf{F}_k(X_\mathcal{A})$, we construct a graph $\Gamma_\mathcal{A}$ based on the combinatorics of $\mathcal{A}$, with the property that its connected components are in bijection with those of $\mathbf{F}_k(X_\mathcal{A})$. The vertex set of $\Gamma_\mathcal{A}$ consists of the set of maximal Cayley structures $\pi:\tau\to\Delta_{l}$ for $\tau\prec \mathcal{A}$ and $l\geq k$.
We connect vertices $\pi:\tau\to \Delta_l$ and $\pi':\tau'\to\Delta_{l'}$ by an edge in $\Gamma_\mathcal{A}$ if there exists an empty simplicial $k$-face $\sigma\prec\tau\cap\tau'$ such that $\pi$ and $\pi'$ are injective on $\sigma$. \begin{thm}\label{thm:connected} Irreducible components of $\mathbf{F}_k(X_\mathcal{A})$ are in bijection with vertices of $\Gamma_\mathcal{A}$. Two components intersect if and only if the corresponding vertices are connected by an edge. Two components are in the same connected component if and only if the corresponding vertices are in the same connected component of $\Gamma_\mathcal{A}$. In particular, the number of connected components of $\mathbf{F}_k(X_\mathcal{A})$ equals the number of connected components of $\Gamma_\mathcal{A}$. \end{thm} \begin{proof} The first claim is just a restatement of Theorem \ref{thm:main}. For the second claim, note that if two components intersect, the intersection is $T$-invariant and projective, and thus must contain a torus fixed point. The claim now follows from Proposition \ref{prop:fixed2}. Two components $Z,Z'$ are in the same connected component if and only if there is a sequence of components $Z=Z_0,Z_1,\ldots,Z_j=Z'$ with $Z_i$ and $Z_{i+1}$ intersecting; the remaining claims follow. \end{proof} \begin{ex} Continuing Example \ref{ex:birkhoff3}, it is easy to check via Theorem \ref{thm:connected} that $\mathbf{F}_2(X_\mathcal{A})$ is connected. Perhaps more interestingly, we may observe how the irreducible components intersect using Theorem \ref{thm:intersect}. Each $\mathbb{P}^3$ component intersects $4$ other $\mathbb{P}^3$ components in a point, missing the other $4$. Each $\mathbb{P}^3$ component intersects $4$ $\Bl_3 \mathbb{P}^2$ components in a $\mathbb{P}^1$, and misses the other $2$. The $\Bl_3 \mathbb{P}^2$ components do not pairwise intersect. 
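One can make these intersections explicit in coordinates. Writing $x_u$ for the homogeneous coordinate of $\mathbb{P}^5$ corresponding to $u\in\mathcal{A}$, the variety $X_\mathcal{A}$ is the hypersurface $x_{u_0}x_{u_1}x_{u_2}=x_{v_0}x_{v_1}x_{v_2}$ corresponding to the affine relation $u_0+u_1+u_2=v_0+v_1+v_2$, and the $\mathbb{P}^3$ component attached to a pair $(i,j)$ parametrizes the $2$-planes inside the coordinate subspace \[ V(x_{u_i},x_{v_j})\cong\mathbb{P}^3\subset X_\mathcal{A}. \] Two such subspaces contain a common $2$-plane precisely when the corresponding pairs share an index, in which case they meet in a $\mathbb{P}^2$.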
\end{ex} \section{Scheme Structure and Multiplicities}\label{sec:mult} In this section, we will completely describe the scheme structure of $\mathbf{F}_k(X_\mathcal{A})$ locally around toric fixed points in the special case that $k=\dim X_\mathcal{A}-1$ and $X_\mathcal{A}$ is smooth in codimension one. We now fix notation for the rest of this section. Assume that $k=\dim X_\mathcal{A}-1$. Recall from Proposition \ref{prop:fixed} that the fixed points of $\mathbf{F}_k(X_\mathcal{A})$ correspond to faces of $\mathcal{A}$ which are empty $k$-simplices. Let us fix such a facet $\sigma=\{v_0,\ldots,v_k\}$ of $\mathcal{A}$. Assume that $\mathcal{A}$ is smooth at $\sigma$ (cf. Definition \ref{defn:smooth}). This is certainly satisfied if $X_\mathcal{A}$ is smooth in codimension one. We will proceed to describe the coordinate ring $R_\sigma$ of the Pl\"ucker chart containing the fixed point corresponding to $\sigma$. By the smoothness assumption, there exists some $w\in\mathcal{A}$ such that every $u\in \mathcal{A}$ can be written uniquely as \begin{equation}\label{eqn:c} u=v_0+h\cdot (w-v_0)-\sum_{i=1}^k c_i (v_i-v_0) \end{equation} for $h\in\mathbb{Z}_{\geq 0}$, $c_i\in \mathbb{Z}$. We fix such an element $w$, and call $h$ the \emph{height} of $u$. We then set \[c_0=h-1-\sum_{i>0} c_i.\] To stress the dependence of $c_i$ and $h$ on $u$, we will sometimes write $c_i(u)$ and $h(u)$. Note that $c_i(\cdot)$ and $h(\cdot)$ preserve affine relations among elements of $\mathcal{A}$. Let $R=\mathbb{K}[z_0,\ldots,z_k]$. For each element $u\in \mathcal{A}\setminus (\sigma\cup\{w\})$, we will describe a set of elements $S_u$ in $\mathbb{Z}_{\geq 0}^{k+1}$ such that a basis for $R_\sigma$ is given by $z^\alpha$ for $\alpha\in \bigcap S_u$. Let $e_0,\ldots,e_k$ be the standard basis of $\mathbb{Z}_{\geq 0}^{k+1}$. For $\alpha=(\alpha_0,\ldots,\alpha_k)\in \mathbb{Z}_{\geq 0}^{k+1}$, set $|\alpha|=\sum_i \alpha_i$. Define $S_{<h}$ by $S_{<h}=\{\alpha\ ;\ |\alpha|<h\}$.
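For orientation, note that \eqref{eqn:c} gives \[ h(w)=1,\qquad c_i(w)=0\ \text{for all}\ i,\qquad\text{and}\qquad h(v_j)=0,\qquad c_i(v_j)=-\delta_{ij}, \] so in particular each element of $\sigma$ has height zero.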
\begin{figure} \casepic \caption{Cases of Definition \ref{defn:S}.} \label{fig:cases} \end{figure} \begin{defn}\label{defn:S} Fix $u\in \mathcal{A}\setminus (\sigma\cup\{w\})$ and let $h,c_i$ be as in \eqref{eqn:c}. We define $S_u$ as follows: \begin{enumerate} \item If for some $j\neq l$, $c_i=0$ for all $i\neq j,l$ and $c_j=-1$, then \[S_u=\{\alpha\ |\ \alpha_i=0\ (i\neq l)\}\cup S_{<h};\] \label{case:simplex} \item If for some $j$, $0\leq c_i<h$ for all $i\neq j$ and $c_j=-1$, then \[S_u=\{c+e_j \}\cup S_{<h};\] \item If for some $j$, $c_i=0$ for all $i\neq j$, then \[S_u=\{\alpha\ |\ \sum_{i\neq j} \alpha_i \leq 1\}\cup S_{<h};\] \item If for some $j\neq l$, $c_i=0$ for all $i\neq j,l$ and $c_j,c_l>0$, then \[S_u= \{c+e_0,\ldots,c+e_k,c+e_j+e_l\}\cup S_{<h};\]\label{case:middle} \item If $c_i\geq 0$ for all $i$, with at least three non-zero, then \[S_u= \{c+e_0,\ldots,c+e_k\}\cup S_{<h};\] \item Otherwise, $S_u=S_{<h}$.\label{case:other} \end{enumerate} See Figure \ref{fig:cases} for the hyperplane slice $h(u)=4$ illustrating in coordinates $c_0,c_1,c_2$ the possible cases when $k=2$. \end{defn} We can now completely describe the local structure of the scheme $\mathbf{F}_k(X_\mathcal{A})$ in combinatorial terms. \begin{thm}\label{thm:scheme} With notation as above, a basis for the coordinate ring $R_\sigma$ is given by $z^\alpha$ for $\alpha\in \bigcap_{u\in\mathcal{A}\setminus\sigma} S_u$. Multiplication is defined by $z^\alpha\cdot z^\beta=z^{\alpha+\beta}$, where we set $z^{\alpha+\beta}=0$ if $\alpha+\beta\notin \bigcap S_u$. \end{thm} In order to prove the theorem, we first set up some further notation. Let $y_0,\ldots,y_k$ be indeterminates, and for any $u\in\mathcal{A}\setminus\sigma$ and any $i=0,\ldots,k$, let $z_{iu}$ be an indeterminate. We set $y_u=\sum z_{iu}y_i$, where if $u=v_j\in \sigma$, $z_{iu}$ is $0$ for $i\neq j$ and $1$ for $i=j$.
Locally, the coordinate ring of $\mathbf{F}_k(X_\mathcal{A})$ may be expressed as a quotient of $\mathbb{K}[z_{iu}]$ by the ideal obtained from the conditions on the $z_{iu}$ necessary to satisfy \begin{equation}\label{eqn:rel} \prod y_u^{a_u}=\prod y_u^{b_u} \end{equation} for any affine relation $\sum a_uu=\sum b_uu$ on the elements of $\mathcal{A}$. For any $i=0,\ldots,k$, set $z_i=z_{iw}$. \begin{lemma}\label{lemma:mon} The monomials $y^{\alpha}$ appearing in \[y_u \prod_{i\geq 0}y_i^{c_i}\] (viewed as a polynomial in the $y_i$ with coefficients $z_{ju}$) are exactly those for which $\alpha\in S_u$ and $|\alpha|=h$. \end{lemma} \begin{proof} The above expression is homogeneous of degree $h$, so clearly we must have $|\alpha|=h$. The lemma may be easily verified from Definition \ref{defn:S} case by case. \end{proof} \begin{lemma}\label{lemma:span} The monomials $z^\alpha$ for $\alpha\in \bigcap_{u\in\mathcal{A}\setminus\sigma} S_u$ span the ring $R_\sigma$ as a $\mathbb{K}$-vector space. \end{lemma} \begin{proof} Considering any $u\in\mathcal{A}$, the affine relation derived from \eqref{eqn:c} imposes the condition \[ y_u=y_w^h \prod_{i\geq 0}y_i^{-c_i}. \] Hence, each $z_{iu}$ can be expressed in terms of the $z_j$. Furthermore, by Lemma \ref{lemma:mon}, this same condition imposes that $z^\alpha=0$ if $\alpha\notin S_u$. Indeed, for $|\alpha|\leq h$ this follows directly from the lemma. For $|\alpha|>h$, one may verify case by case that each $S_u$ is the set of exponent vectors for the standard monomials of a monomial ideal generated in degree $h$, and the claim follows. \end{proof} \begin{proof}[Proof of Theorem \ref{thm:scheme}] From Lemma \ref{lemma:span}, we see that $R_\sigma$ is a quotient of the $\mathbb{K}$-algebra $R_\sigma'$ with basis $z^\alpha$ for $\alpha\in \bigcap_{u\in\mathcal{A}\setminus\sigma} S_u$, with multiplication as defined in the statement of the theorem.
To show the equality of $R_\sigma$ and $R_\sigma'$, it suffices to verify the following: For any affine relation $\sum a_uu=\sum b_uu$, the equality \eqref{eqn:rel} is satisfied in $R_\sigma'$, where we have expressed each $z_{iu}$ in terms of the $z_i$ as in Lemma \ref{lemma:span}. Now, our construction of $S_u$ ensures that \[ y_u=y_w^{h(u)} \prod_{i\geq 0}y_i^{-c_i(u)} \] holds in $R_\sigma'$, see the proof of Lemma \ref{lemma:span}. But then for the above affine relation, we have \begin{align*} \prod y_u^{a_u}&=\prod_u\left( y_w^{a_uh(u)} \prod_{i\geq 0}y_i^{-a_uc_i(u)}\right)= y_w^{\sum_u a_uh(u)}\prod_{i\geq 0} y_i^{-\sum_u a_uc_i(u)}\\ &=y_w^{\sum_u b_uh(u)}\prod_{i\geq 0} y_i^{-\sum_u b_uc_i(u)} =\prod y_u^{b_u} \end{align*} in $R_\sigma'$, exactly as desired. \end{proof} \begin{cor} Assume that the fixed point corresponding to $\sigma$ is an isolated point of $\mathbf{F}_k(X_\mathcal{A})$. Then the corresponding $k$-plane $L_\sigma$ of $X_\mathcal{A}$ has multiplicity equal to \[ \#\bigcap_u S_u. \] \end{cor} \begin{proof} The multiplicity of $L_\sigma$ is just the degree of the isolated point $[L_\sigma]$ of $\mathbf{F}_k(X_\mathcal{A})$. But this equals the dimension of $R_\sigma$ as a $\mathbb{K}$-vector space, and the result follows from Theorem \ref{thm:scheme}. \end{proof} \begin{cor}\label{cor:two} Assume that the fixed point of $\mathbf{F}_k(X_\mathcal{A})$ corresponding to $\sigma$ is an isolated point, and assume that $\mathcal{A}$ contains some $w'\neq w$ in height one. Then the multiplicity of $L_\sigma$ in $X_\mathcal{A}$ is equal to the smallest natural number $m$ for which the set \[ \{ u\in\mathcal{A}\ |\ \textrm{height of}\ u\leq m\} \] is not contained in $\sigma+\mathbb{N}\cdot (w-v_i)$ for any $i$. \end{cor} \begin{proof} Let $w'\neq w$ be another element of $\mathcal{A}$ in height one.
If $w'$ is not contained in $\sigma+(w-v_i)$ for any $i$, then for $u=w'$ we are in case \ref{case:other} of Definition \ref{defn:S}, and it follows that $\#\bigcap S_u=1$, as desired. If instead $w'$ is contained in $\sigma+(w-v_i)$ for some $i$, then for $u=w'$ we are in case \ref{case:simplex}, and so $S_{w'}$ consists of those $\alpha$ of the form $\alpha=\lambda\cdot e_i$. Hence, the multiplicity of $L_\sigma$ is the smallest number $m$ for which $m e_i\notin \bigcap S_u$. But this is easily seen to be the $m$ from above. \end{proof} \begin{rem} Our Theorem \ref{thm:scheme} implies the results of \cite[Theorem 2.2]{ilten:14a}, in which the scheme structure of $\mathbf{F}_1(X_\mathcal{A})$ for projectively normal toric surfaces $X_\mathcal{A}$ is completely determined. Indeed, if there are at least two elements in height one over $\sigma$, then the argument from the proof of Corollary \ref{cor:two} applies and the result is straightforward. If there is only a single lattice point in height one and $\mathcal{A}$ is the set of lattice points of a polytope $P$, a straightforward argument from \cite{ilten:14a} shows that either $\mathcal{A}$ is contained in a prism over $\sigma$, $\mathcal{A}$ contains a point in height two, or $\mathcal{A}$ contains a point in height three satisfying case \ref{case:middle} of Definition \ref{defn:S}. An analysis in each case using Theorem \ref{thm:scheme} yields the result. \end{rem} As we will see in the following example, the Fano scheme $\mathbf{F}_k(X_\mathcal{A})$ may have embedded components. \begin{ex} Consider the set $\mathcal{A}$ from Example \ref{ex:surface1} and Figure \ref{fig:ex}. Let $\sigma=\{(0,0),(1,0)\}$. Then $\mathcal{A}$ is smooth at $\sigma$, so we may apply Theorem \ref{thm:scheme} to obtain the scheme structure of $\mathbf{F}_k(X_\mathcal{A})$ around $[L_\sigma]$. Here, $w=(0,1)$, so the only other point we have is $u=(1,2)$.
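Explicitly, ordering $\sigma$ so that $v_0=(1,0)$ and $v_1=(0,0)$, equation \eqref{eqn:c} for this point reads \[ (1,2)=v_0+2\cdot(w-v_0)-2\cdot(v_1-v_0), \] so that $h=2$, $c_1=2$, and $c_0=h-1-c_1=-1$.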
This $u$ is in case \ref{case:simplex} of Definition \ref{defn:S}, and $S_u=\{(1,0)\}\cup \{\lambda\cdot (0,1)\}_{\lambda\in\mathbb{Z}_{\geq 0}}$. Hence, $\mathbf{F}_k(X_\mathcal{A})$ is locally an affine line with a fat point at the origin. \end{ex} \begin{rem} While it seems rather challenging to give a palatable combinatorial description of the scheme structure of $\mathbf{F}_k(X_\mathcal{A})$ for $k<\dim X_\mathcal{A}-1$, a good first step would be to describe exactly what the embedded components of $\mathbf{F}_k(X_\mathcal{A})$ are. Conjecturally, these are all of the form $Z_{\pi,k}$ where $\pi$ is some non-maximal Cayley structure. \end{rem} \bibliographystyle{amsalpha} \renewcommand{\MR}[1]{\relax}
\section{Introduction} \label{Intro} The low energy dynamics of QCD in the $\delta-$regime is to lowest order in chiral perturbation theory ($\chi$PT) described by a quantum rotator for the spatially constant Goldstone modes \cite{Leutwyler:1987ak}. We recall that for a system in a periodic spatial box of sides $L$ the $\delta-$regime is where the ``time'' extent $L_t \gg L$ and $m_\pi L$ is small (i.e. small or zero quark mass) whereas $F_\pi L$ ($F_\pi$ being the pion decay constant) is large. Many other systems described by non-linear sigma models, also in $d=2,3$ dimensions, are similarly approximated by a quantum rotator to leading order in the analogous perturbative domain. Accordingly, the lowest energy momentum-zero states in a representation $r$ of the symmetry group have, to leading order in perturbation theory, energies of the form \begin{equation}\label{E_Casimir} E(r) \propto\mathcal{C}_2(r)\,, \end{equation} where $\mathcal{C}_2(r)$ is the eigenvalue of the quadratic Casimir (of the symmetry group) in the representation $r\,$. At 1-loop level it turns out that the Casimir scaling \eqref{E_Casimir} still holds, but it is of course expected that at some higher order the standard rotator spectrum will be modified. The standard rotator describes a system where the length of the total magnetization on a time-slice does not change in time. This is obviously not true in the full effective model given by $\chi$PT. In a previous paper \cite{SUNxSUN} we pointed out that by comparing the already obtained NNLO results for the isospin susceptibility from $\chi$PT at large $\ell\equiv L_t/L$ with that computed from the standard rotator, one can establish, under reasonable assumptions, that at 3 loops there is a correction to the rotator Hamiltonian proportional to the square of the Casimir operator, with a proportionality constant determined by the NNLO low energy constants (LEC's) of $\chi$PT. In ref.~\cite{chi_rot_On} we considered the QM rotator for the group O($n$).
In this paper we extend the analysis of the QM rotator to the group $\SUN\times\SUN\,$, which, for $N>2\,$, has to our knowledge not been considered frequently in the literature. This paper is organized as follows: In section 2 we recall the definition and results for the isospin susceptibility of the standard quantum rotator coupled to a chemical potential for the SU$(2)\times {\rm SU}(2)$ and the SU$(3)\times {\rm SU}(3)$ cases. Results for the general SU$(N)\times {\rm SU}(N)$ case are given in section 3. The results in this section are new; in particular, Eq. (\ref{SUNconj}) gives the eigenvalue of the quadratic Casimir invariant for a generic SU$(N)$ representation. In section 4 we discuss the corrections to the simple rotator formula calculated in chiral perturbation theory. In sect.~\ref{cased2} we consider the case of $d=2\,$. For $\SUth\times\SUth$ Kazakov and Leurent \cite{Kazakov} have computed the lowest energies of two representations using an alternative to the thermodynamic Bethe ansatz (TBA). Their NLIE (nonlinear integral equation), in contrast to the infinite-component TBA, is formulated in terms of finitely many unknown functions and allows for a much better numerical precision than the corresponding TBA calculation. Their data clearly show that Casimir scaling is valid to a very good approximation for $ML<1\,$; however, the data were not sufficiently precise to see the expected deviations. Here we present more precise numerical data allowing us to clearly see the deviation from the simple rotator spectrum. Our data are completely consistent with the results of the perturbative calculations. The details of our calculations are given in various appendices; in particular, the algorithm of Ref. \cite{Kazakov} to use the NLIE equations for the calculation of the finite-size spectrum of the model (for $N=3$) is reviewed in appendix E.
The contribution of the main author of this paper, Ferenc Niedermayer, was essential to the formulation of the bulk of this work. His untimely death on 12 August 2018 denied him the completion of the numerical calculations. We devote this paper to the memory of Ferenc. \section{The isospin susceptibility} \label{susceptibility} Here we consider the Hamiltonian of the $\SUN\times\SUN$ standard quantum rotator with a chemical potential coupled to the generators $J_{L3}\,,J_{R3}$: \begin{equation} H_0(h) = \frac{1}{\Theta}\left[J_L^2 + J_R^2 \right] +h\left[J_{L3} - J_{R3}\right]\,, \label{H0h} \end{equation} where $J_X^2$ are the quadratic Casimir operators of the left and right $\mathrm{SU}(N)$ groups: \begin{equation} J_X^2=\sum_{i=1}^{N^2-1}J_{Xi}^2\,,\,\,\,\,X=L,R\,, \end{equation} and $\Theta$ is the moment of inertia. In $d=4$ dimensions, to lowest order in $\chi$PT, one has $\Theta\simeq F^2 L^3\,$. The isospin susceptibility is defined as the second derivative of the free energy with respect to $h$: \begin{equation} \chi= \left.\frac{1}{L_t L^{d-1}} \frac{\partial^2}{\partial h^2}\ln Z(u;h)\right|_{h=0}\,, \quad\quad Z(u;h)={\rm Tr}\, \exp\{-H_0(h)L_t\}\,, \end{equation} where $u=2 L_t/\Theta\,.$ For small $h$ the partition function has the expansion \begin{equation} \label{Zuh} Z(u;h) = z_0(u) + \frac12 h^2 L_t^2 z_1(u) + \order{h^4} \,, \end{equation} with \begin{align}\label{z0x} z_0&={\rm Tr}\,\exp\{-H_0(0)L_t\}\,, \\ \label{z1x} z_1&={\rm Tr}\,\left[\left(J_{L3} - J_{R3}\right)^2\exp\{-H_0(0)L_t\}\right]\,. \end{align} The isospin susceptibility is then given by \begin{equation} \chi=\frac{L_t}{L^{d-1}}\frac{z_1(u)}{z_0(u)}\,. \end{equation} We wish to compute $\chi$ for small $u$ for general $N\,$; however, the reader may find it instructive to first consider the special cases $N=2,3$ which we treat in the following subsections.
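Although not part of the original analysis, the traces \eqref{z0x},\eqref{z1x} are easy to exercise numerically by truncating the sum over representations. The following Python sketch (our own illustration) does this for $N=2$, anticipating from the next subsection the degeneracies $(p+1)^2$ and the Casimir eigenvalues $\frac14 p(p+2)$; it reproduces the small-$u$ relation $z_1/z_0 = 1/u - 1/6$ derived below.

```python
import math

def su2_rotator_traces(u, pmax=2000):
    """Truncated traces z0, z1 for the SU(2)xSU(2) rotator at h = 0,
    using the degeneracy (p+1)^2 and the Casimir C2 = p(p+2)/4."""
    z0 = z1 = 0.0
    for p in range(pmax):
        C2 = 0.25 * p * (p + 2)
        w = math.exp(-u * C2) * (p + 1) ** 2
        z0 += w
        # Tr (J_{L3}-J_{R3})^2: the cross term vanishes, and each of
        # J_{L3}, J_{R3} contributes <J_3^2> = C2/3 per state for SU(2)
        z1 += w * 2.0 * C2 / 3.0
    return z0, z1

u = 0.05
z0, z1 = su2_rotator_traces(u)
print(z1 / z0, 1.0 / u - 1.0 / 6.0)  # agree up to exponentially small terms
```

The agreement holds to machine precision here, since for $u=0.05$ the corrections $\sim\mathrm{e}^{-4\pi^2/u}$ are far below double-precision resolution.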
\subsection{$\SUtw\times\SUtw$ case} The quantum mechanics (QM) of a symmetric rotor (rigid body) in 3 dimensions is equivalent to QM of a point particle moving in the SU(2) group manifold, which is the sphere $\mathrm{S}_3\,$. It can be considered as a special case of the O($n$) rotator (point particle moving on the sphere $\mathrm{S}_{n-1}$) for $n=4\,$. At the same time it is a special case of a particle moving on the $\mathrm{SU}(N)$ group manifold with $N=2\,$. The coordinates in the two descriptions are $U=s_0 + i s_k \sigma_k\in\mathrm{SU}(2)\,$, ($\sigma_k$ the Pauli matrices) or equivalently, in the O(4) picture, $\mathbf s = (s_0, s_1, s_2,s_3) \in \mathrm{S}_3$, ($\mathbf s^2=1$)\,. The wave functions have the form $\psi(U)$ or $\psi(\mathbf s)\,$. The symmetry group of $H_0$ for $h=0$ is $G=\SUtw\times\SUtw \simeq \mathrm{SO}(4)\,,$ and the transformation of a wave function under $g=g_L\times g_R\in G$ is $$\psi(U) \to \psi(g_L^{-1} U g_R)\,,\,\,\, {\rm or}\,\,\, \psi(\mathbf s) \to \psi(O_g^{-1}\mathbf s)\,.$$ The symmetry generators are $J_{Li}$ for $\mathrm{SU}(2)_L$ and $J_{Ri}$ for $\mathrm{SU}(2)_R$ transformations, ($i=1,2,3$), or alternatively the 6 generators of $\mathrm{SO}(4)\,$. In the $\SUtw\times\SUtw$ picture the wave functions are constructed using the $U$ variables. The set of four wave functions $\psi(U) \in \{U_{11}, U_{12}, U_{21}, U_{22}\}$ belongs to the representation with $j_L=j_R=1/2$. In general, the Hilbert space of the system splits into irreps of $\SUtw\times\SUtw$ for which $j_L=j_R=j\,$ \footnote{In the classical description a given trajectory $U(t)$ of the particle can be reached in two equivalent ways, by left rotations $U(t)=g_L(t)U_0$ or by right rotations, $U(t)=U_0 g_R^\dagger(t)$. Obviously, the energy of a given eigenstate or its multiplicity should not depend on the description chosen.}.
To label the irreps of $\mathrm{SU}(2)$ we adopt a convention which is a special case of the one to be used for $\mathrm{SU}(N)$ with $N\ge 3$ below. The representation with given $j$ is denoted by $(p)$ where $p=2j=0,1,2,\ldots$, with the corresponding dimension $p+1=2j+1\,$. Accordingly, the eigenstates of the Hamiltonian \eqref{H0h} $\vert j,m_L\rangle \times \vert j,m_R\rangle$, $-j\le m_L,m_R \le j\,$ belong to the representation $(p)\times(p)$ with multiplicity $(p+1)^2\,$ \footnote{Note that in the equivalent O(4) language the eigenstates with given $l=0,1,2,\ldots$ have multiplicity $(l+1)^2$ (cf. \cite{chi_rot_On}).}. The eigenvalue of the quadratic Casimir invariant in a representation $(p)$ is given by \begin{equation} \label{C_SU2} C_2^{(2)}((p))=\frac14 p(p+2) = \frac14 (p+1)^2 -\frac14\,, \end{equation} which differs by a factor 4 from the O(4) Casimir invariant $l(l+2)$ \footnote{Note that $\exp(i J_3 \phi)$ in $\mathrm{SU}(2)$ rotates by an angle $\phi/2$ around the 3rd axis, while for O(4) $\exp(i J_3 \phi)$ rotates by angle $\phi$.}. The kinetic energy is then given by \begin{equation} \label{E_kin} E_{\text{kin}} = \frac{2 C_2^{(2)}((p))}{\Theta} = \frac{C_{\mathrm{O}(4)}(l)}{2\Theta}\,, \end{equation} consistent with our conventions in ref.~\cite{chi_rot_On}. In the O(4) picture the isospin chemical potential is coupled to the generator of rotations in the 12-plane, $L_{12}\,$. It has eigenvalues $m=-l,\ldots,l$ for $\mathrm{SO}(4)$. The corresponding multiplicities are $g_{lm}=l-|m|+1\,$. For $l=1$ one has: $m=\pm 1$: $s_1\pm i s_2$, for $m=0$: $\{s_0, s_3 \}$. In the $\SUtw\times\SUtw$ picture in the $(1)\times(1)$ irrep $m=1$ corresponds to the wave function $U_{21}=s_1+is_2$, $m=-1$ to $U_{12}=s_1-is_2$, while $m=0$ to the wave functions $U_{11}$ and $U_{22}\,$.
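As an elementary cross-check (ours, not from the original text), one can verify that the multiplicities of the charge $m=m_L-m_R$ in the $(p)\times(p)$ irrep reproduce the O(4) degeneracies $g_{lm}=l-|m|+1$ at $l=p$:

```python
from collections import Counter

def su2xsu2_charge_multiplicities(p):
    """Multiplicities of m = m_L - m_R in the (p) x (p) irrep, with
    m_X in {-p/2, ..., p/2} (stored as 2*m_X to stay with integers)."""
    cnt = Counter()
    for tmL in range(-p, p + 1, 2):
        for tmR in range(-p, p + 1, 2):
            cnt[(tmL - tmR) // 2] += 1
    return cnt

for l in range(20):
    cnt = su2xsu2_charge_multiplicities(l)
    assert sum(cnt.values()) == (l + 1) ** 2        # total multiplicity
    for m in range(-l, l + 1):
        assert cnt[m] == l - abs(m) + 1             # O(4) degeneracy g_{lm}
print("SU(2)xSU(2) and O(4) charge degeneracies agree")
```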
The partition function with zero chemical potential is then \begin{equation} \label{z0_SU2} z_0(u) = \sum_{p=0}^\infty \mathrm{e}^{-u C_2^{(2)}((p))} (p+1)^2 = \frac12 \mathrm{e}^{u/4}\sum_{k=-\infty}^\infty \mathrm{e}^{-u k^2/4} k^2 = -\frac{1}{2\pi}\mathrm{e}^{u/4}S'\left( \frac{u}{4\pi}\right)\,, \end{equation} where $S(x)$ is the Jacobi theta-function \begin{equation} S(x)= \sum_{n=-\infty}^\infty \mathrm{e}^{-\pi x n^2}\,. \end{equation} Using $S(x)=x^{-1/2}S(1/x)$ one obtains \begin{equation} \label{z0_SU2_A} z_0(u) = \sqrt{4\pi}\mathrm{e}^{u/4}u^{-3/2}S\left(\frac{4\pi}{u} \right) +16 \pi^{3/2}\mathrm{e}^{u/4}u^{-5/2} S'\left(\frac{4\pi}{u} \right)\,. \end{equation} For small $u$ it has an expansion \begin{equation} \label{z0_SU2_C} z_0(u) = 2\sqrt{\pi}\mathrm{e}^{u/4} u^{-3/2} + \order{u^{-5/2} \mathrm{e}^{-4\pi^2/u}} \,. \end{equation} For $z_1$ we have \begin{equation} \label{z1_SU2} \begin{split} z_1(u) &= 2\sum_{p=0}^\infty \mathrm{e}^{-u C_2^{(2)}((p))}(p+1) \sum_{s=0}^p(p/2-s)^2 \\ &= \frac{1}{12}\mathrm{e}^{u/4}\sum_{k=-\infty}^\infty\mathrm{e}^{-u k^2/4}k^2(k^2-1) = -\frac23 \frac{\partial z_0(u)}{\partial u}\,. \end{split} \end{equation} The $\SUtw\times\SUtw$ rotator susceptibility is then given by \begin{equation} \label{chi_SU2_rot} L^{d-2}\chi_{\text{rot}} = \frac{\Theta}{2 L} - \frac{\ell}{6} + \ldots \end{equation} where $\ell=L_t/L$ with no power-like corrections! This is in agreement with the O($n$) rotator result (2.3) of \cite{chi_rot_On} at $n=4$ and with the $\chi$PT for $\SUN\times\SUN$ at $N=2$ \cite{SUNxSUN}. \subsection{\boldmath The $\SUth\times\SUth$ case} Next we consider the QM of a point particle moving on the group manifold of $\mathrm{SU}(3)$. For the $\mathrm{SU}(3)$ irreducible representations we shall in this subsection use the familiar notation $(p,q)$ where $p,q=0,1,2\ldots\,$ i.e. the first and second rows of the corresponding Young tableaux have $p+q$ and $q$ boxes respectively. 
The corresponding value of the quadratic Casimir invariant is \begin{equation} \label{C2_SU3} C_2^{(3)}((p,q)) = \frac13 \left( p^2+ q^2+pq+3p+3q \right)\,, \end{equation} while the dimension of the representation is given by \begin{equation} \label{dpq} d(p,q) = \frac12 (p+1)(q+1)(p+q+2)\,. \end{equation} We consider a system described by coordinates $U\in \mathrm{SU}(3)$, and wave functions $\psi(U,U^\star)$ which transform under $g=g_L\times g_R\in \SUth\times\SUth$ according to $$\psi(U,U^\star) \to \psi(g_L^{-1} U g_R\,, g_L^T U^\star g_R^\star)\,.$$ The 9 wave functions $\psi(U) = U_{ab}$, where $a,b\in\{ 1,2,3 \}$ belong to the representation $(1,0)\times(0,1)$ of $\SUth\times\SUth$. The first index, $a$ is for $\mathbf 3\equiv (1,0)$, while $b$ for $\overline{\mathbf 3} \equiv (0,1)$. At this stage we assume that the irreps appearing in the partition function sum over states are of the type $(p,q)\times (q,p)\,$; the motivation for this will be given in subsection~\ref{wavefns} \footnote{From products of $n$ matrix elements $U_{ab}$ one finds the irreps $(p,q)\times(q,p)$ with $p=n-2k$, $q=k$, for $k=0,1,2,\ldots,k_{\mathrm{max}}$, where $k_{\mathrm{max}}=n/2$ for even $n$, and $k_{\mathrm{max}}=(n-1)/2$ for odd $n$.}. The corresponding energy is given by (cf. \eqref{E_kin}) \begin{equation} \label{E_kin3} E_{\text{kin}}=\frac{2 C_2^{(3)}((p,q))}{\Theta}\,, \end{equation} with the multiplicity $d(p,q)^2$ (cf. \eqref{dpq}). We discuss the full dependence of $Z(u;h)$ on $h$ in Appendix \ref{Zuh_SU3} although this information will not be needed in this paper. 
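As a quick sanity check (ours, not from the text), the dimension and Casimir formulas above can be evaluated for the familiar SU(3) irreps: the (anti)fundamental $(1,0)$, $(0,1)$ with dimension 3 and $C_2=4/3$, the adjoint $(1,1)$ with dimension 8 and $C_2=3$, the sextet $(2,0)$ and the decuplet $(3,0)$:

```python
def dim_su3(p, q):
    # dimension d(p, q) of the SU(3) irrep (p, q)
    return (p + 1) * (q + 1) * (p + q + 2) // 2

def casimir_su3(p, q):
    # quadratic Casimir eigenvalue of the SU(3) irrep (p, q)
    return (p * p + q * q + p * q + 3 * p + 3 * q) / 3.0

# textbook values
assert dim_su3(1, 0) == dim_su3(0, 1) == 3
assert dim_su3(1, 1) == 8 and dim_su3(2, 0) == 6 and dim_su3(3, 0) == 10
assert abs(casimir_su3(1, 0) - 4.0 / 3.0) < 1e-12  # fundamental: (N^2-1)/(2N)
assert abs(casimir_su3(1, 1) - 3.0) < 1e-12        # adjoint: N
print("SU(3) dimension and Casimir formulas pass the textbook checks")
```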
From eqs.~\eqref{z0x},\eqref{z1x} we obtain \begin{equation} \begin{aligned}\label{z0_SU3} z_0(u) &= \sum_{p,q=0}^{\infty}\mathrm{e}^{-u C_2^{(3)}((p,q))} Q_0^{(3)}((p,q))^2 \,, \\ z_1(u) &= \sum_{p,q=0}^{\infty}\mathrm{e}^{-u C_2^{(3)}((p,q))} 2 Q_0^{(3)}((p,q)) Q_2^{(3)}((p,q))\,, \end{aligned} \end{equation} with \begin{equation}\label{Qk_SU3} Q_k^{(3)}((p,q))\equiv\sum_{s\in(p,q)}\lambda(s)^k\,, \end{equation} where $s$ runs over the eigenstates of $J_3$ with eigenvalues $\lambda(s)$. One has (see \eqref{C1Q2}) \begin{align} Q_0^{(3)}((p,q)) &= d(p,q) \,, \label{Q0} \\ Q_2^{(3)}((p,q)) &= \frac18 C_2^{(3)}((p,q)) d(p,q) \,. \label{Q2} \end{align} In $z_0,z_1$ we have a double sum over integers, hence analytic expressions are not so simple. However the leading terms for small $u$ can be determined analytically. After separating the constant term in \eqref{C2_SU3}, the remaining expressions are homogeneous in $\tilde{p}=p+1$ and $\tilde{q}=q+1\,$: \begin{equation} z_0(u) = \frac14\mathrm{e}^{u}\sum_{\tilde{p},\tilde{q}=1}^{\infty} \exp\left[-\frac{u}{3}\left(\tilde{p}^2+\tilde{q}^2 +\tilde{p}\tilde{q}\right)\right] \tilde{p}^2\tilde{q}^2(\tilde{p}+\tilde{q})^2\,. \end{equation} For small $u$ we can replace the sums by integrals to obtain \begin{equation}\label{z0A} z_0(u)\sim z_{0A}(u) = A_0^{(3)} \mathrm{e}^{u} u^{-4} \,, \end{equation} with \begin{equation} A_0^{(3)} = \frac14\int_0^{\infty}\mathrm{d} x\,\mathrm{d} y\, \exp\left[-\frac13\left(x^2+y^2+xy\right)\right] x^2y^2(x+y)^2=\pi \sqrt{3}\,. \end{equation} To investigate the corrections to \eqref{z0A} one can first proceed numerically, e.g.\ evaluating the difference to 500 digits at $0.1 \le u \le 1$; one has for the relative deviation $(z_0(u)-z_{0A}(u))/z_0(u)$ at $u=1.0$: $\sim 10^{-13}$, and at $u=0.1$: $\sim 10^{-164}\,$, i.e. it decreases faster than any power of $u$.
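The stated leading behavior is also easy to confirm in ordinary double-precision arithmetic; the following sketch (our own check, not part of the original 500-digit computation) evaluates the double sum directly and compares it with $z_{0A}(u)=\pi\sqrt{3}\,\mathrm{e}^u u^{-4}$:

```python
import math

def z0_su3(u, nmax=300):
    """z0(u) as the double sum over tilde-p, tilde-q = 1, 2, ...
    given in the text; nmax is a truncation ample for u >= 0.3."""
    s = 0.0
    for tp in range(1, nmax):
        for tq in range(1, nmax):
            s += (math.exp(-u * (tp * tp + tq * tq + tp * tq) / 3.0)
                  * (tp * tq * (tp + tq)) ** 2)
    return 0.25 * math.exp(u) * s

u = 0.5
leading = math.pi * math.sqrt(3.0) * math.exp(u) / u ** 4
print(z0_su3(u) / leading)  # -> 1 up to exponentially small corrections
```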
Fitting the difference one obtains the next approximation \begin{equation} z_{0}(u) = \pi\sqrt{3}\, \mathrm{e}^{u} u^{-4} - \sqrt{3}\left(\frac{2\pi}{u}\right)^7\mathrm{e}^{-4\pi^2/u} \left(1+\order{u}\right) \,. \end{equation} The correction to the leading first term is exponentially small for small $u$, and has a structure similar to \eqref{z0_SU2_C}. For $N=3$ using \eqref{z0_SU3} this gives for the susceptibility of the $\SUth\times\SUth$ rotator \begin{equation} \label{chi_SU3_rot} L^{d-2} \chi_{\text{rot}} = \frac{\Theta}{2 L} -\frac14 \ell + \ldots \,. \end{equation} We stress again that for $u\to 0$ the omitted terms decrease faster than any power of $u$. The leading term in \eqref{chi_SU2_rot}, \eqref{chi_SU3_rot} is the classical result for the high temperature expansion of the corresponding rotator (rigid body). The next one, $\propto \ell$ is the leading quantum correction, which does not depend on $\Theta$, only on the corresponding group. It is interesting to note that for $N=2,3$ the $1/(F^2 L^2)$ term (for $d=4$) is absent in the expansion, a property which we will see holds for arbitrary $N$. \section{\boldmath The isospin susceptibility for general $\SUN\times\SUN$} In this section we extend the considerations in the last two subsections for $N=2,3$ to general $N\,.$ \subsection{The quadratic Casimir invariant} As proposed by Gelfand and Tsetlin \cite{Gelfand}, an irrep of $\mathrm{SU}(N)$ can be conveniently described by a non-increasing series of $N$ integers (cf. \cite{SUN_CG} and references therein) $m_1 \ge m_2 \ge \ldots \ge m_N\,$. Two series differing in a constant, $m'_k = m_k + c\,,\,\forall k$ where $c\in \mathbb{Z}$ describe the same irrep. One can choose $m_N=0$, however, for some purposes it is convenient to use the redundant form with $N$ integers. If one sets $m_N=0$ then $m_k$ corresponds to the number of boxes in the $k$'th row of the corresponding Young tableau. 
The more conventional description of an $\mathrm{SU}(N)$ irrep, like $(p,q)$ for $\mathrm{SU}(3)\,$, is given by the differences $(p_1,p_2,\ldots,p_{N-1})$ where $p_k=m_k-m_{k+1} \ge 0\,$. Following the notation in \cite{SUN_CG}, let $J_z^{(l)}\,,\,l=1,\ldots,N-1$ be a basis of the Cartan subalgebra. Together with generators $J_\pm^{(l)}$ they generate SU(2) subalgebras for each $l$. The $J_z^{(l)}$ are normalized to have half-integer eigenvalues, and we can identify $J_3$ with one of them, say $J_z^{(1)}$. In a given representation $r$ there is a highest weight vector $\vert H\rangle$ which is annihilated by all $J_+^{(l)}$. Its eigenvalues are given by $J_z^{(l)}\vert H\rangle=\lambda_l(H)\vert H\rangle$ with $\lambda_l(H)=p_l/2\,.$ Eq.~\eqref{Qk_SU3} is generalized to \begin{equation}\label{Qk_SUN} Q_k^{(N)}(r)\equiv\sum_{s\in r}\lambda_1(s)^k\,, \end{equation} where $s$ runs over the eigenstates of $J_3$ with eigenvalues $\lambda_1(s)$. $Q_0^{(N)}(r)$ is the dimension of a given irrep $r$ and is explicitly given by \cite{SUN_CG} \begin{equation} \label{Q0_SUN} Q_0^{(N)}((m_1,\dots,m_N)) = \prod_{1\le k < k' \le N}\left( 1 + \frac{m_k-m_{k'}}{k'-k}\right)\,. \end{equation} The quadratic Casimir invariant can be calculated using the basis of the $\mathrm{su}(N)$ algebra described in \cite{SUN_CG}. Alternatively one can use recursion relations for $Q_0^{(N)}$ and $Q_2^{(N)}\,,$ discussed in appendix~\ref{app_recursion}, to obtain $C_2^{(N)}$ using \begin{equation} \label{C1Q2} \frac{Q_2^{(N)}(r)}{Q_0^{(N)}(r)} = \langle J_3^2 \rangle_r = \frac{1}{N^2-1} \langle J^2 \rangle_r = \frac{1}{N^2-1} C_2^{(N)}(r)\,, \end{equation} where $N^2-1=\text{dim}(\mathrm{SU}(N))$ is the dimension of the group. The recursion relations from $Q_s^{(N-1)}$ to $Q_s^{(N)}$, ($s=0,2$) contain $N-1$ nested summations. For not too large $N$ one can perform these analytically. We have done this for $N\le 5\,$, and obtained a very simple result, which is easy to generalize to arbitrary $N\,$.
We conjecture \begin{equation} C_2^{(N)}((m_1,\dots,m_N)) = \frac12 \sum_{k=1}^N m_k^2 - \frac{1}{2N}\left(\sum_{k=1}^N m_k\right)^2 + \sum_{k=1}^N \left( \frac{N+1}{2}-k\right) m_k \,. \label{SUNconj} \end{equation} Note that this expression is invariant under a constant shift $m_k \to m_k + c\,$ as it should. Denoting $n_k=m_k+N-k$ one has for the factor appearing in $Q_0^{(N)}$ in \eqref{Q0_SUN}, \begin{equation} 1+ \frac{m_k-m_{k'}}{k'-k} = \frac{1}{k'-k} (n_k-n_{k'})\,. \end{equation} Hence \begin{equation} Q_0^{(N)}((m_1,\ldots,m_N)) = \left. \overline{Q}_0^{(N)}(n_1,\ldots,n_N)\right|_{n_k=m_k+N-k} \end{equation} where \begin{equation}\label{QovN} \overline{Q}_0^{(N)}(n_1,\ldots,n_N) = \mathcal{B}_N \prod_{1\le k < k' \le N} (n_k-n_{k'}) \end{equation} with \begin{equation} \frac{1}{\mathcal{B}_N} = \prod_{1\le k < k' \le N} (k'-k) = 2!\,3!\ldots (N-1)! \,. \end{equation} The Casimir invariant of the representation in terms of $n$'s is \begin{equation} \label{CovN} \begin{aligned} \overline{C}_2^{(N)}(n_1,\ldots,n_N) & = \frac12 \sum_{k=1}^N n_k^2 - \frac{1}{2N}\left( \sum_{k=1}^N n_k \right)^2 -c_N \\ & = \frac{1}{2N} \sum_{1\le k < k' \le N} (n_k - n_{k'})^2 - c_N \,, \end{aligned} \end{equation} where \begin{equation} c_N \equiv \frac{N (N^2-1)}{24} \end{equation} is proportional to the curvature of the SU$(N)$ manifold. Note that apart from the constant in $\overline{C}_2^{(N)}$ both expressions, \eqref{QovN} and \eqref{CovN}, are homogeneous in the new variables. \subsection{Wave functions} \label{wavefns} First we note that for $U \in \mathrm{SU}(N)$ the complex conjugate of a matrix element equals the corresponding cofactor of the matrix, \begin{equation} U^\star_{ab} = (-1)^{a+b} \det\left( \left( U_{ij} \right)_{i\ne a\,, j\ne b} \right) \,. \end{equation} As a consequence, a function written in terms of products containing $U$'s and $U^\star$'s can be written in terms of the $U$'s alone. 
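Returning to the conjectured Casimir formula, \eqref{SUNconj} can be cross-checked numerically against the closed forms quoted earlier. The following sketch (our own verification, not part of the derivation) confirms that it reproduces \eqref{C_SU2} for $N=2$ (with $m=(p,0)$) and \eqref{C2_SU3} for $N=3$ (with $m=(p+q,q,0)$), and that it is invariant under the shift $m_k \to m_k + c$:

```python
def casimir_suN(m):
    """Conjectured SU(N) quadratic Casimir, eq. (SUNconj), for a
    weakly decreasing integer tuple m = (m_1, ..., m_N)."""
    N = len(m)
    return (0.5 * sum(x * x for x in m)
            - sum(m) ** 2 / (2.0 * N)
            + sum((0.5 * (N + 1) - k) * x for k, x in enumerate(m, start=1)))

# N = 2: the irrep (p) has m = (p, 0); expect p(p+2)/4
for p in range(12):
    assert abs(casimir_suN((p, 0)) - p * (p + 2) / 4.0) < 1e-12

# N = 3: the irrep (p, q) has m = (p + q, q, 0);
# expect (p^2 + q^2 + pq + 3p + 3q)/3
for p in range(12):
    for q in range(12):
        c = (p * p + q * q + p * q + 3 * p + 3 * q) / 3.0
        assert abs(casimir_suN((p + q, q, 0)) - c) < 1e-12

# invariance under the constant shift m_k -> m_k + c
assert abs(casimir_suN((5, 3, 2, 0)) - casimir_suN((7, 5, 4, 2))) < 1e-12
print("eq. (SUNconj) consistent with the N = 2, 3 closed forms")
```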
Under a general $\SUN\times\SUN$ transformation one has \begin{equation} U \to U' = g_L^{-1} U g_R\,. \end{equation} Under separate left/right transformations \begin{equation} (g_L^{-1}U)_{a a'} = (g_L^{-1})_{ab} U_{b a'}\,, \quad (Ug_R)_{a a'} = (g_R^{-1})^\star_{a' b'} U_{a b'}\,, \end{equation} i.e.\ $U$ belongs to the representation $(1,0,\ldots,0) \times (0,\ldots,0,1)$, according to its 1st and 2nd index, respectively. Similarly for an arbitrary representation $(r)$ \begin{equation} \begin{aligned} \left[D^{(r)}(g_L^{-1} U)\right]_{i i'} & = \left[D^{(r)}(g_L^{-1})\right]_{ij} \left[D^{(r)}(U)\right]_{j i'}\,, \\ \left[D^{(r)}(U g_R)\right]_{i i'} & = \left[D^{(r)}(g_R^{-1})\right]^\star_{i'j'}\left[D^{(r)}(U)\right]_{i j'}\,, \end{aligned} \end{equation} where $1 \le i,i',j,j'\le \dim(r)\,$. Hence the elements of the matrix $D^{(r)}(U)$ belong to a representation with complex conjugate pair $(r)\times(r^*)$, i.e.\ $(p_1,p_2,\ldots,p_{N-1})\times (p_{N-1},\ldots,p_2,p_1)$. Strictly speaking, one should also show that each such representation enters only once in the Hilbert space of the $\SUN\times\SUN$ rotator; here we accept this as a reasonable hypothesis. \subsection{The partition function and susceptibility} The partition function is given by (set $m_N=0$) \begin{equation} z_0^{(N)}(u) = \sum_{m_1=0}^\infty \, \sum_{m_2=0}^{m_1} \ldots \,\sum_{m_{N-1}=0}^{m_{N-2}} \mathrm{e}^{-u C_2^{(N)}((m_1,\ldots,m_{N-1},0))} \left[Q_0^{(N)}((m_1,\ldots,m_{N-1},0))\right]^2 \,. \end{equation} Changing to the variables $n_k=m_k+N-k$ the condition $m_k \ge m_{k+1}$ transforms into $n_k > n_{k+1}\,$. Also the irreps with $n'_k = n_k + c$ where $c\in {\mathbb{Z}}$ are equivalent and should be taken only once in the partition function.
Again a convenient choice is to set $n_N = 0$ and one has \begin{equation} \label{z0Na0} \begin{aligned} z_0^{(N)}(u)&= \mathrm{e}^{u c_N} \sum_{n_1 > n_2 > \ldots > n_N} \delta_{n_N,0} \exp\left[ -\frac{u}{2} \sum_{k=1}^{N} n_k^2 + \frac{u}{2N} \left(\sum_{k=1}^N n_k \right)^2\right] \overline{Q}_0^2(n_1,\ldots,n_N)\\ &= \frac{\mathrm{e}^{u c_N}}{N!} \sum_{\{n\}=-\infty}^{\infty} \delta_{n_N,0} \exp\left[ -\frac{u}{2} \sum_{k=1}^{N} n_k^2 + \frac{u}{2N} \left(\sum_{k=1}^{N} n_k \right)^2\right] \overline{Q}_0^2(n_1,\ldots,n_N)\,, \end{aligned} \end{equation} where the second equality follows since the summand is invariant under permutations. In the conventional ``p-notation'' the Casimir invariant and $Q_0$ for the $r_p=(p,0,0,\ldots,0)$ representation is \begin{equation} \begin{aligned} C_2^{(N)}(r_p) &= \frac{(N-1)}{2N}p(p+N) \,, \\ Q_0^{(N)}(r_p) &= \prod_{n=1}^{N-1}\left(1 + \frac{p}{n}\right) = \frac{(N+p-1)!}{p! (N-1)!} \,. \end{aligned} \end{equation} In particular for the ground state $p=0$ \begin{equation} \begin{aligned} C_2^{(N)}(r_0) &= 0 \,, \\ Q_0^{(N)}(r_0) &= 1 \,, \end{aligned} \end{equation} and for $p=1$ \begin{equation} \begin{aligned} C_2^{(N)}(r_1) &= \frac{N^2-1}{2N} \,, \\ Q_0^{(N)}(r_1) &= N \,. \end{aligned} \end{equation} For the adjoint representation\footnote{Here we assume $N\ge 3$} $r_A=(1,0,\ldots,0,1)$ one obtains \begin{equation} \begin{aligned} C_2^{(N)}(r_A) &= N \,, \\ Q_0^{(N)}(r_A) &= N^2-1 \,. \end{aligned} \end{equation} Since $C_2^{(N)}(r_1) < C_2^{(N)}(r_A)<C_2^{(N)}(r_2)$ the mass gap is given by the states in the representation $r_1\times (0,\ldots,0,1)$, and its conjugate, with a total multiplicity $2 N^2$. The formula for the mass gap is then (cf. \cite{SUNxSUN} eq.~(4.31)) \begin{equation} E_1 = \frac{N^2-1}{N \Theta}\,. 
\end{equation} The contribution of these states together with the ground state gives \begin{equation} z_0^{(N)}(u) = 1 + 2 N^2 \exp\left(-\frac{N^2-1}{2N} u\right) + (N^2-1)^2 \exp(-N u) + \ldots\,,\quad \text{for } u\gg 1 \,. \end{equation} The behavior of $z_0^{(N)}(u)$ for $u\to 0$ is derived in Appendix~\ref{app_z0_u_small} with the result \begin{equation} \label{z0NN} z_0^{(N)}(u) = A_N u^{-(N^2-1)/2} \mathrm{e}^{uc_N} \left[ 1 + \order{\mathrm{e}^{-4\pi^2/u} u^{-2N+3}}\right] \,, \end{equation} where \begin{equation} \begin{aligned} A_N &= \frac{\mathcal{B}_N^2}{(N-1)!\sqrt{2\pi N}} \int_{-\infty}^\infty \prod_{j=1}^N\left[\mathrm{d} n_j \, \exp\left(-\frac12 n_j^2 \right)\right]\, \prod_{1\le k < k' \le N} (n_k-n_{k'})^2 \\ &=(2\pi)^{(N-1)/2}\sqrt{N}\mathcal{B}_N\,. \end{aligned} \end{equation} Due to \eqref{C1Q2} we have (for general $N$) \begin{equation} \label{z1z0_SUN} z_1(u) = -\frac{2}{N^2-1} \frac{\partial z_0(u)}{\partial u}\,. \end{equation} Thus the susceptibility is obtained from $z_0(u)$ as \begin{equation} \label{chi_SUN_z0} L^{d-2} \chi_{\text{rot}} =\frac{1}{L_t L} \frac{\partial^2}{\partial h^2}\ln Z(u;h) = \ell \frac{z_1(u)}{z_0(u)} =-\frac{2}{N^2-1}\ell\frac{\partial}{\partial u}\log(z_0(u)) \,. \end{equation} The susceptibility is then given by \begin{equation} \label{chi_SUN_PT} L^{d-2} \chi_{\text{rot}} = \frac{\Theta}{2 L} -\frac{N}{12} \ell + \mathrm{O}\left(\frac{\ell^3}{F^4 L^4}\right)\,,\,\, \end{equation} which is in agreement with eq.~(4.48) of \cite{SUNxSUN} obtained by $\chi$PT to NNL order for general $N\,$. \section{\boldmath The $1/\ell$ term in $\chi$PT} As mentioned in \cite{SUNxSUN} the susceptibility calculated in $\chi$PT to NNLO for $\ell\to\infty$ approaches the result obtained in the rotator approximation. However, the approach is not exponentially fast; to this order one obtains, besides the exponentially vanishing contribution, a term $\propto 1/\ell$ (but no $\ell^{-k}$, $k\ge 2$ terms!).
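The small-$u$ limit leading to \eqref{chi_SUN_PT} can be checked directly from the representation sums. The sketch below (our own verification) builds $z_0^{(N)}(u)$ from the dimension formula \eqref{Q0_SUN} and the Casimir \eqref{SUNconj}, evaluates $-\frac{2}{N^2-1}\partial_u\log z_0$ by a central difference, and compares it with $1/u - N/12$ (i.e.\ $\Theta/(2L)-\frac{N}{12}\ell$ after multiplying by $\ell$ and using $u=2L_t/\Theta$) for $N=2,3$:

```python
import math
from itertools import product

def dim_suN(m):
    # dimension of the irrep with row lengths m = (m_1, ..., m_N)
    N = len(m)
    d = 1.0
    for k in range(N):
        for kp in range(k + 1, N):
            d *= 1.0 + (m[k] - m[kp]) / (kp - k)
    return d

def casimir_suN(m):
    # conjectured quadratic Casimir eigenvalue
    N = len(m)
    return (0.5 * sum(x * x for x in m) - sum(m) ** 2 / (2.0 * N)
            + sum((0.5 * (N + 1) - k) * x for k, x in enumerate(m, 1)))

def z0_suN(u, N, mmax=100):
    # sum over non-increasing tuples (m_1, ..., m_{N-1}, 0), m_k < mmax
    s = 0.0
    for mt in product(range(mmax), repeat=N - 1):
        if all(mt[i] >= mt[i + 1] for i in range(N - 2)):
            m = mt + (0,)
            s += math.exp(-u * casimir_suN(m)) * dim_suN(m) ** 2
    return s

chi_coeff = {}
u, du = 0.4, 1e-4
for N in (2, 3):
    f = (-2.0 / (N * N - 1)
         * (math.log(z0_suN(u + du, N)) - math.log(z0_suN(u - du, N)))
         / (2.0 * du))
    chi_coeff[N] = f
    print(N, f, 1.0 / u - N / 12.0)   # the two numbers agree
```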
More precisely, for the deviation $\delta\chi = \chi - \chi_{\text{rot}}$ in \cite{SUNxSUN} (cf.\ eq.~(4.52)) we found \begin{equation} \label{dchi_chi} \frac{\delta\chi}{\chi} \sim \frac{c}{(F L)^4} \frac{1}{\ell}\,,\quad\quad {\rm for}\,\,d=4\,. \end{equation} From here one concludes that the rotator spectrum should be distorted at some higher order in $g_0^2$ already at small energies, not only at the energies $\sim L^{-1}$ of the $\mathbf{p}\ne 0$ modes. Let us assume that the distortion of the spectrum has the form \begin{equation} \delta E(r) = \frac{\Phi(L)}{L}(C_2^{(N)}(r))^\kappa\,, \end{equation} then one obtains \begin{equation} \begin{aligned} z_1(u) & = \frac{2}{N^2-1} \left(-\frac{\partial }{\partial u}\right) z_0(u) \,, \\ \delta z_0(u) & = - \Phi(L)\ell \left(-\frac{\partial}{\partial u}\right)^\kappa z_0(u) \,, \\ \delta z_1(u) & = -\frac{2}{N^2-1} \Phi(L) \ell \left(-\frac{\partial}{\partial u}\right)^{\kappa+1} z_0(u) \,. \end{aligned} \end{equation} Taking $z_0(u)\propto u^{-a}$ with $a=(N^2-1)/2$ one gets for the leading term \begin{equation} \begin{aligned} \frac{\delta \chi_{\text{rot}}}{\chi_{\text{rot}}} &= \frac{\delta z_1(u)}{z_1(u)} - \frac{\delta z_0(u)}{z_0(u)} = -\kappa\Phi(L) \ell u^{-\kappa} (a+1)(a+2)\ldots(a+\kappa-1) + \ldots \\ &= -2^{-\kappa} \kappa\Phi(L) \left(\frac{L}{\Theta}\right)^{-\kappa} \ell^{1-\kappa}(a+1)(a+2)\ldots(a+\kappa-1) + \ldots \end{aligned} \end{equation} The observed deviation \eqref{dchi_chi} then requires $\kappa=2$, and since $\Theta\sim F^2L^3$ for $d=4$ we need $\Phi(L)\propto (FL)^{-8}$ for $d=4\,$.
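The product-formula algebra above is easy to verify for a pure power law; in the following sketch (ours, with the common prefactor $\Phi(L)\ell$ set to $1$, which cancels nothing essential) the combination $\delta z_1/z_1-\delta z_0/z_0$ is evaluated from the explicit derivatives of $z_0(u)=u^{-a}$ and compared with $-\kappa\, u^{-\kappa}(a+1)\cdots(a+\kappa-1)$:

```python
def rising(x, n):
    # x (x+1) ... (x+n-1); rising(x, 0) = 1
    r = 1.0
    for i in range(n):
        r *= x + i
    return r

def delta_chi_ratio(a, kappa, u):
    """For z0(u) = u^{-a}: returns (delta z1/z1 - delta z0/z0, prediction),
    using (-d/du)^n u^{-a} = a(a+1)...(a+n-1) u^{-a-n}; the overall
    factors 2/(N^2-1) and Phi(L)*l drop out of the ratios."""
    z0 = u ** (-a)
    z1 = a * u ** (-a - 1)                        # ~ (-d/du) z0
    dz0 = -rising(a, kappa) * u ** (-a - kappa)
    dz1 = -rising(a, kappa + 1) * u ** (-a - kappa - 1)
    lhs = dz1 / z1 - dz0 / z0
    rhs = -kappa * u ** (-kappa) * rising(a + 1, kappa - 1)
    return lhs, rhs

for a in (1.5, 4.0, 7.5):         # a = (N^2-1)/2 for N = 2, 3, 4
    for kappa in (1, 2, 3):
        lhs, rhs = delta_chi_ratio(a, kappa, 0.3)
        assert abs(lhs - rhs) < 1e-10 * abs(rhs)
print("leading-term product formula verified")
```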
\section{\boldmath Delta regime in $d=2$} \label{cased2} The susceptibility computed in $\chi$PT is for $d=2$ given by \cite{SUNxSUN} \begin{equation} \chi=\frac{1}{2\overline{g}_\mathrm{{MS\kern-0.14em}\kern0.14em}^2(1/L)} -\frac{N}{8\pi}\gamma_2^{(2)}(\ell) -\frac{N^2}{16\pi^2}r_2(\ell)\overline{g}_\mathrm{{MS\kern-0.14em}\kern0.14em}^2(1/L) +\ldots \label{chi_d2} \end{equation} where $\overline{g}_\mathrm{{MS\kern-0.14em}\kern0.14em}(q)$ is the minimal subtraction (MS) scheme running coupling at momentum scale $q\,,$ and \begin{equation} r_2(\ell)=\overline{w}(\ell)-2\kappa_{10}(\ell)-\frac12\gamma_2^{(2)}(\ell) \left(\alpha_1^{(2)}(\ell)-\frac{1}{\ell}-\frac12\gamma_2^{(2)}(\ell)\right) -\frac{1}{2\ell}\left(\gamma_3^{(2)}(\ell)+1\right)\,. \label{r2} \end{equation} The large $\ell$ behavior of the shape functions appearing in \eqref{chi_d2} and \eqref{r2} is discussed in \cite{chi_rot_On}. In particular we find for $\ell\gg1$: \begin{align} \gamma_2^{(2)}(\ell)&\simeq -Z+\frac{2\pi\ell}{3}\,, \\ r_2(\ell)&\simeq \frac34-\frac{Z}{2}-\frac{5\zeta(3)}{4\pi\ell}\,, \end{align} where \begin{equation} \label{Zdef} Z\equiv \ln(4\pi)-\gamma\,,\quad\quad(\gamma=-\Gamma'(1))\,. \end{equation} On the other hand the susceptibility computed from the simple rotator is given by \begin{equation}\label{chirot_d2} \chi_{\mathrm{rot}} = \frac{1}{2 \overline{g}_{\mathrm{FV}}^2(L) } -\frac{N}{12}\ell + \mathrm{O}(\overline{g}_{\mathrm{FV}}^4(L))\,, \quad (d=2)\,, \end{equation} where $\overline{g}_{\mathrm{FV}}$ is the LWW running coupling \cite{LWW} defined through the finite volume mass gap: \begin{equation}\label{FVcoupling} \overline{g}_{\mathrm{FV}}^2(L)\equiv \frac{N}{N^2-1}L E_1(L)\,.
\end{equation} Its expansion in terms of the running coupling in the MS scheme of dimensional regularization (DR) is given by \begin{equation}\label{gFVgMS} \overline{g}_{\mathrm{FV}}^2(L) = \overline{g}_{\mathrm{{MS\kern-0.14em}\kern0.14em}}^2(1/L) + c_1 \overline{g}_{\mathrm{{MS\kern-0.14em}\kern0.14em}}^4(1/L) + c_2 \overline{g}_{\mathrm{{MS\kern-0.14em}\kern0.14em}}^6(1/L) +\cdots \end{equation} The first two coefficients are obtained using the methods of ref.~\cite{LWW}: \begin{align} c_1 &= -\frac{N}{4\pi}Z\,, \\ c_2 &= \frac{N^2}{16\pi^2}\left(Z^2-Z+\frac32\right)\,. \end{align} Combining the results we arrive at \begin{equation}\label{chidiffPT_d2} \frac{\chi-\chi_{\mathrm{rot}}}{\chi} =\frac{5N^2}{32\pi^3\ell}\zeta(3)\overline{g}_\mathrm{{MS\kern-0.14em}\kern0.14em}^4(1/L)+\dots \end{equation} On the other hand, from our considerations of the modified rotator in the previous section we would expect \begin{equation}\label{chidiffrot_d2} \frac{\chi-\chi_{\mathrm{rot}}}{\chi} = -\frac{1}{4\ell}\Phi_3(N^2+1)\overline{g}_{\mathrm{{MS\kern-0.14em}\kern0.14em}}^4(1/L)+\dots \end{equation} where $\Phi_3$ is the leading coefficient in the perturbative expansion of $\Phi(L)$, assuming the expansion starts at order $\overline{g}_{\mathrm{{MS\kern-0.14em}\kern0.14em}}^8$: \begin{equation} \Phi(L)=\sum_{r=3}^{\infty}\Phi_r\overline{g}_{\mathrm{{MS\kern-0.14em}\kern0.14em}}^{2r+2}(1/L)\,. \end{equation} Comparing \eqref{chidiffPT_d2} with \eqref{chidiffrot_d2} determines \begin{equation} \Phi_3=-\frac{5N^2}{(N^2+1)\pi^3}f_3\,, \end{equation} where \begin{equation} f_3=\frac18 \zeta(3)=0.15025711290\,.
\label{f3ex} \end{equation} The low-lying spectrum to order $\overline{g}_\mathrm{{MS\kern-0.14em}\kern0.14em}^8$ is given by \begin{equation} \begin{aligned} LE(r)&=2C_2^{(N)}(r)\overline{g}_\mathrm{{MS\kern-0.14em}\kern0.14em}^2(1/L) \Big\{ 1+c_1\overline{g}_\mathrm{{MS\kern-0.14em}\kern0.14em}^2(1/L)+c_2\overline{g}_\mathrm{{MS\kern-0.14em}\kern0.14em}^4(1/L) \\ & +\overline{c}_3 \overline{g}_\mathrm{{MS\kern-0.14em}\kern0.14em}^6(1/L) + \ldots \Big\} + C_2^{(N)}(r)^2\Phi_3\overline{g}_\mathrm{{MS\kern-0.14em}\kern0.14em}^8(1/L) +\dots \end{aligned} \end{equation} where \begin{equation} \overline{c}_3 = c_3 - \frac{(N^2-1)}{4N}\Phi_3\,. \end{equation} Hence we conclude, for example, \begin{equation} LE(r_1)-\frac{(N+1)}{2(N+2)}LE(r_2) =-\frac{(N-1)^2(N+1)(N+3)}{4N^2}\Phi_3\,\overline{g}_\mathrm{{MS\kern-0.14em}\kern0.14em}^8(1/L)+\dots\,. \end{equation} In subsection \ref{sectN3} we test this prediction for $N=3\,$. \subsection{Running coupling functions} First, following Balog and Hegedus \cite{Balog:2003yr} we introduce a function $\overline{g}_\mathrm{J}^2(L)$ of the box size $L$ through \begin{equation}\label{gJcoupling} \frac{1}{\overline{g}_\mathrm{J}^2(L)}+\frac{b_1}{b_0}\ln(b_0\overline{g}_\mathrm{J}^2(L)) =-b_0\ln(\Lambda_{\mathrm{FV}}L)\,, \end{equation} where $b_0\,,b_1$ are the universal first perturbative coefficients of the $\beta-$function \footnote{The 3-loop coefficient in the MSDR scheme is $b_{2\mathrm{{MS\kern-0.14em}\kern0.14em}}=3N^3/(64\pi^3)\,.$} \footnote{Note $b_0,b_1$ in \eqref{b0b1} are factors of $4$ and $16$, respectively, larger than the coefficients $\beta_0,\beta_1$ given in eq.(20) of ref.~\cite{Balog:1992cm}.
The reason for this is that the definition of the square of the coupling in this
paper is a factor of 4 smaller than that in \cite{Balog:1992cm}.}:
\begin{equation}\label{b0b1}
b_0=\frac{N}{2\pi}\,, \quad\quad b_1=\frac{N^2}{8\pi^2}\,,
\end{equation}
and $\Lambda_\mathrm{FV}$ is the $\Lambda$-parameter of the LWW finite volume
coupling in \eqref{FVcoupling}. We choose the solution that is small for small
$\Lambda_{\mathrm{FV}}L$; it has the property
\footnote{For further remarks concerning this coupling see \cite{chi_rot_On}.}
\begin{equation}
\overline{g}_\mathrm{J}^2(L)=\overline{g}_{\mathrm{FV}}^2(L)+\mathrm{O}(\overline{g}_{\mathrm{FV}}^6(L))\,,
\,\,\,\,\,\Lambda_{\mathrm{FV}}L\ll1\,.
\end{equation}
We consider $\overline{g}_\mathrm{J}^2$ as a function of $z=ML$ where $M$ is the
infinite volume mass gap:
\begin{equation}
\frac{1}{\overline{g}_\mathrm{J}^2(z)}+\frac{b_1}{b_0}\ln(b_0\overline{g}_\mathrm{J}^2(z))
=-b_0\ln(z)+b_0\ln(M/\Lambda_{\mathrm{FV}})\,.
\end{equation}
The ratio $M/\Lambda_{\mathrm{FV}}$ is known from the result in ref.~\cite{Balog:1992cm}
\footnote{
$M/\Lambda_\mathrm{\overline{MS\kern-0.14em}\kern0.14em}=\sqrt{8\pi/\mathrm{e}}\sin(\pi/N)/(\pi/N)$
and
$\Lambda_\mathrm{FV}/\Lambda_\mathrm{{MS\kern-0.14em}\kern0.14em}=\exp\left\{-Z/2\right\}=
\Lambda_\mathrm{{MS\kern-0.14em}\kern0.14em}/\Lambda_\mathrm{\overline{MS\kern-0.14em}\kern0.14em}\,.$}
\begin{equation}
\frac{M}{\Lambda_\mathrm{FV}}=
\sqrt{\frac{8}{\pi\mathrm{e}}}N\mathrm{e}^Z\sin\left(\frac{\pi}{N}\right)\,.
\end{equation}
Defining $\alpha_\mathrm{J}=\overline{g}_\mathrm{J}^2/(2\pi)$ the equation becomes
\begin{equation}
\frac{1}{\alpha_\mathrm{J}(z)}+\frac{N}{2}\ln(\alpha_\mathrm{J}(z))=-N\ln(z)+J(N)\,,
\end{equation}
with
\begin{equation}
J(N)=\frac{N}{2}
\left[2Z+\ln\left\{\frac{N}{\pi}\sin^2\left(\frac{\pi}{N}\right)\right\}
-1+\ln(8)\right]\,.
\end{equation} The LWW coupling has the following expansion in terms of $\overline{g}^2_\mathrm{J}$: \begin{equation} \overline{g}_{\mathrm{FV}}^2=\overline{g}^2_\mathrm{J}\left\{1+\frac{N^2}{2}\alpha_\mathrm{J}^2+\dots\right\}\,. \end{equation} \subsection{\boldmath Results for the $r=r_p=(p,0,0)\,,\,\,p=1,2$ energies for SU(3)} \label{sectN3} In Table~\ref{tab:table1} we reproduce the data for the energy gaps $E(r_p)$ calculated from the numerical results given in Table \ref{tab:table2} in appendix E. $f_{3,\mathrm{est}}$ appearing in the last column is defined in \eqref{f3est}. \begin{table}[h] \centering \caption{$\SUth\times\SUth$ energies for representations $r_p=(p,0,0)\,,\,\,p=1,2$ } \label{tab:table1} \vspace{0.5cm} \begin{tabular}{|l|l|l|l|l|l|} \hline \quad$z$ & $\quad\alpha_\mathrm{J}(z)$ &$\quad LE(r_1)$ & $\quad LE(r_2)$ & $E(r_2)/E(r_1)$ & $f_{3,\mathrm{est}}$ \\ \hline $0.01$&$0.03896665$&$0.6576493028$&$1.643478(3) $&$2.499019(4) $&$0.1856(9)$ \\ $0.02$&$0.04264730$&$0.7208499394$&$1.8011779(6)$&$2.4986863(8)$&$0.1898(1)$ \\ $0.03$&$0.04515478$&$0.7640825060$&$1.908986(4)$ &$2.498403(4) $&$0.1947(6)$ \\ $0.04$&$0.04712791$&$0.7982109882$&$1.994061(4)$ &$2.498163(5) $&$0.1971(6)$ \\ $0.05$&$0.04878634$&$0.8269748893$&$2.06573(1)$ &$2.49794(1) $&$0.19936(15)$ \\ $0.06$&$0.05023433$&$0.8521505407$&$2.1284405(6)$&$2.4977283(7)$&$0.20159(6) $ \\ $0.07$&$0.05153033$&$0.8747342979$&$2.1846712(7)$&$2.4975255(8)$&$0.20357(7) $ \\ $0.08$&$0.05271070$&$0.8953461736$&$2.2359752(2)$&$2.4973304(2)$&$0.20533(2) $ \\ $0.09$&$0.05379972$&$0.9144006339$&$2.283386(3) $&$2.497140(3) $&$0.2070(2) $ \\ $0.1$ &$0.05481451$&$0.9321896996$&$2.327633(1) $&$2.496952(1) $&$0.20871(7) $ \\ $0.2$ &$0.06264221$&$1.0705947394$&$2.671321(2) $&$2.4951754(2)$&$0.22245(2) $ \\ $0.5$ &$0.07755223$&$1.3414119783$&$3.339568(1) $&$2.489591(1) $&$0.25595(2) $ \\ $1.0$ &$0.09516460$&$1.6789366076$&$4.158326(2) $&$2.476762(2) $&$0.31546(2) $ \\ \hline \end{tabular} \end{table} From the 
fifth column of Table~\ref{tab:table1}, we see that the ratio $E(r_2)/E(r_1)$
is close to the ratio 10/4 of the Casimir eigenvalues. However, our numerical
precision is sufficient to establish that the simple effective rotator model
requires corrections.
\begin{figure}[htb]
\centering
\includegraphics[width=0.9\textwidth]{sh11E.pdf}
\caption{Plot of the estimate for $f_3$ given in \eqref{f3est} for SU(3);
circles with error bars are data from Table~\ref{tab:table1}. The blue line is
a quadratic fit to the first 11 data points. The green line is a constrained
quadratic fit where \eqref{f3ex} at $\alpha_{\rm J}=0$ is kept fixed. }
\label{fig_f3est_n3}
\end{figure}
To exhibit even more clearly the agreement of our analysis with the data in
Table~\ref{tab:table1}, in Fig.~\ref{fig_f3est_n3} we plot estimates for $f_3$
given by
\begin{equation}\label{f3est}
f_{3,\mathrm{est}}=\frac{(N^2+1)}{10(N-1)^2(N+1)(N+3)2\pi\alpha_\mathrm{J}^4}
\left[LE(r_1)-\frac{(N+1)}{2(N+2)}LE(r_2)\right]\,,
\end{equation}
for the case $N=3$. In the figure the extrapolation to zero volume is also
shown. We do not have measured values close enough to $\alpha_{\rm J}=0$ to
make linear fits. Our extrapolation is based on a quadratic least squares fit
(weighted by the error bars given in the last column of Table~\ref{tab:table1})
to the first 11 data points in the range $\alpha_\mathrm{J} < 0.063$, giving
$f_3({\rm quadratic},\, 11)=0.147$. If we use the first 10 data points only, we
get $f_3({\rm quadratic},\, 10)=0.127$. The green line is a constrained fit
where the zero volume limit \eqref{f3ex} is kept fixed at $\alpha_{\rm J}=0$.
This shows that our measurements are completely consistent with our prediction
in \eqref{f3ex}.
\vspace{1.0cm}
{\bf \ \ Acknowledgments}

\noindent
We thank \'Arp\'ad Heged\H us for providing us with his unpublished notes. We
also thank Sebastien Leurent for correspondence about problems related to their
NLIE equations.
This work was partially supported by the Hungarian National Science Fund OTKA
(under K116505).
\vspace{1.0cm}
\begin{appendix}
\section{\boldmath Recursion relations for $\overline{Q}_0^{(N)}$ and $\overline{Q}_2^{(N)}$}
\label{app_recursion}
In the next subsection we show that, for the irrep $r$ specified by
$(m_1,\ldots,m_N)$ with $n_k\equiv m_k+N-k$, the polynomials
$\overline{Q}_0^{(N)}(n_1,\ldots,n_N) = \mathrm{dim}(r)$ and
$\overline{Q}_2^{(N)}(n_1,\ldots,n_N)$ satisfy the recursion relations
\begin{equation}
\label{Qk_rec}
\overline{Q}_k^{(N)}(n_1,\ldots,n_N) = \sum_{l_1=n_2+1}^{n_1} \,
\sum_{l_2=n_3+1}^{n_2} \ldots \sum_{l_{N-1}=n_N+1}^{n_{N-1}}
\overline{Q}_k^{(N-1)}(l_1,\ldots,l_{N-1})\,,\,\,\,\,k=0,2\,.
\end{equation}
The summation goes over $l_1,\ldots,l_{N-1}$ which satisfy the condition
$n_1 \ge l_1 > n_2 \ge l_2 > \ldots \ge l_{N-2} > n_{N-1} \ge l_{N-1} > n_N$.
To prove the formula for $\overline{C}_2^{(N)}(n_1,\ldots,n_N)$ given by
\eqref{CovN} one can then use \eqref{C1Q2}.
\subsection{Proof of the recursion relations}
\label{recursion_proof}
To prove the recursion relations \eqref{Qk_rec} we generalize the notion of
summation to the case of symbolic limits for polynomial summands. Using the
Pochhammer polynomials $(x)_0=1$, $(x)_m=x(x-1)\ldots(x-m+1)$, for
$m=1,2,\ldots$, one has for the finite difference operator
$\Delta f(x)\equiv f(x+1)-f(x)$
\begin{equation}
\label{dif_Pm}
\Delta\, (x)_m \equiv (x+1)_m - (x)_m = m\, (x)_{m-1} \,.
\end{equation}
From this one gets
\begin{equation}
\label{sum_Pm}
\sum_{k=0}^{n-1} (k)_m = \frac{1}{m+1} (n)_{m+1} \,.
\end{equation}
These are analogous to $\mathrm{d} x^m/\mathrm{d} x = m x^{m-1}$ and
$\int_0^x t^m \mathrm{d} t = x^{m+1}/(m+1)$. Decomposing a polynomial $P(x)$ by
\begin{equation}
P(x)=\sum_m c_m\, (x)_m\,, \quad c_m = \frac{1}{m!}\Delta^m P(0)
\end{equation}
one obtains
\begin{equation}
\sum_{k=0}^{n-1} P(k) = \sum_m \frac{c_m}{(m+1)} (n)_{m+1} \,.
\end{equation}
Define for arbitrary real (or complex) $a$ and $b$
\begin{equation}
\label{Sab_def}
\mathcal{S}(P(x),x,[a,b]) = \sum_m \frac{c_m}{(m+1)}
\left[ (b)_{m+1} - (a)_{m+1} \right] \,.
\end{equation}
For $a,b\in{\mathbb{Z}}\,,a<b$ one has
\begin{equation}
\mathcal{S}(P(x),x,[a,b]) = \sum_{k=a}^{b-1} P(k)\,.
\end{equation}
The operation \eqref{Sab_def} can be viewed as an extension of the summation
for polynomials
\footnote{This is the rule used by symbolic programs, e.g. Maple or
Mathematica to evaluate the sum for general limits.}.
With the definition \eqref{Sab_def} one has
\begin{equation}
\label{Sab_prop}
\begin{aligned}
& \mathcal{S}(P(x),x,[a,a]) = 0 \,, \\
& \mathcal{S}(P(x),x,[b,a]) = -\mathcal{S}(P(x),x,[a,b]) \,, \\
& \mathcal{S}(P(x),x,[a,b]) + \mathcal{S}(P(x),x,[b,c]) =
\mathcal{S}(P(x),x,[a,c]) \,, \\
\end{aligned}
\end{equation}
which are similar to the properties of $\int_a^b P(x)\mathrm{d} x$. Similarly,
we introduce a generalization of multiple sums
\begin{equation}
\begin{aligned}
& \mathcal{S}(P(x_1,x_2,\ldots),[x_1,x_2,\ldots] ,
[[a_1,b_1],[a_2,b_2],\ldots]) =
\sum_{k_1=a_1}^{b_1-1} \sum_{k_2=a_2}^{b_2-1}\ldots P(k_1,k_2,\ldots) \\
&\quad = \mathcal{S}
\left(\mathcal{S}(P(x_1,x_2,\ldots),[x_2,\ldots] ,
[[a_2,b_2],\ldots]), x_1, [a_1,b_1]\right) \,.
\end{aligned}
\end{equation}
The rhs is calculated for integer values of the limits with $a_l < b_l$, and
the resulting expression is then used as the definition for symbolic limits
$a_l$, $b_l$. Using the invariance under $n_k\to n_k-1$ the recursion relation
\eqref{Qk_rec} can for $k=0$ be rewritten as
\begin{equation}
\begin{aligned}
& \overline{Q}_0^{(N)}(x_1,x_2,x_3,\ldots,x_N) \\
& \qquad = \mathcal{S}\left(
\overline{Q}_0^{(N-1)}(t_1,t_2,\ldots,t_{N-1}), [t_1,t_2,\ldots,t_{N-1}],
\right. \\
&\qquad \left. \phantom{\overline{Q}_0^{(N-1)}}
[[x_1,x_2],[x_2,x_3],[x_3,x_4]\ldots,[x_{N-1},x_{N}]] \right)\,.
\end{aligned}
\end{equation}
Due to \eqref{Sab_prop} this vanishes for $x_1=x_2$ or $x_2=x_3$, etc. Next,
inserting $x_3\to x_1$
\begin{equation}
\begin{aligned}
& \overline{Q}_0^{(N)}(x_1,x_2,x_1,\ldots,x_N) \\
& \qquad = -\mathcal{S}\left(\overline{Q}_0^{(N-1)}(t_1,t_2,\ldots,t_{N-1}),
[t_1,t_2,\ldots,t_{N-1}],\right. \\
&\qquad \left. \phantom{\overline{Q}_0^{(N-1)}}
[[x_1,x_2],[x_1,x_2],\ldots,[x_{N-1},x_{N}]] \right)
\end{aligned}
\end{equation}
Since the limits for $t_1$ and $t_2$ coincide, and the summand
$\overline{Q}_0^{(N-1)}(t_1,t_2,\ldots,t_{N-1})$ changes sign for
$t_1 \leftrightarrow t_2$, the rhs vanishes for $x_3=x_1$, and therefore it
contains a factor $(x_1-x_3)$. Obviously, it also contains all factors of type
$(x_k-x_{k+2})$. Consider next the case when $x_4=x_1$. On the rhs we get the
limits $[[x_1,x_2],$ $[x_2,x_3]$, $[x_1,x_3],\ldots]$. The range $[x_1,x_3]$
appearing here is the union of $[x_1,x_2]$ and $[x_2,x_3]$, hence the rhs can
be written as a sum of two expressions with
$[[x_1,x_2], [x_2,x_3], [x_1,x_2],\ldots]$ and
$[[x_1,x_2], [x_2,x_3], [x_2,x_3],\ldots]$. Here the ranges $[x_1,x_2]$ (and
$[x_2,x_3]$ respectively) appear twice, and from the antisymmetry with respect
to $t_1\leftrightarrow t_2$ (respectively $t_2\leftrightarrow t_3$) one
concludes that the rhs vanishes also for $x_4=x_1$. In this way one can show
that the rhs contains all factors $(x_k-x_{k'})$, $1\le k < k' \le N$
appearing in $\overline{Q}_0^{(N)}(x_1,x_2,x_3,\ldots,x_N)$. Since the orders
of the polynomials on the two sides also coincide, they must be equal, apart
from a possible constant factor, which can be shown to be 1.
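The $N=3$ instance of the recursion \eqref{Qk_rec} for $k=0$ is easy to check
numerically. The following short Python sketch (ours, purely illustrative, not
part of the original derivation) compares the double sum over
$\overline{Q}_0^{(2)}(l_1,l_2)=l_1-l_2$ with the closed form
$\overline{Q}_0^{(3)}(n_1,n_2,n_3)=\frac12(n_1-n_2)(n_1-n_3)(n_2-n_3)$:

```python
from itertools import product

def q0_su2(l1, l2):
    # dimension of the SU(2) irrep in the n_k variables: dim = l1 - l2
    return l1 - l2

def q0_su3(n1, n2, n3):
    # Weyl dimension formula for SU(3) in the n_k variables
    return (n1 - n2) * (n1 - n3) * (n2 - n3) // 2

def q0_su3_recursed(n1, n2, n3):
    # rhs of the recursion: sum over n2 < l1 <= n1 and n3 < l2 <= n2
    return sum(q0_su2(l1, l2)
               for l1 in range(n2 + 1, n1 + 1)
               for l2 in range(n3 + 1, n2 + 1))

# check on all strictly decreasing triples n1 > n2 > n3 in a small range
for n1, n2, n3 in product(range(12), repeat=3):
    if n1 > n2 > n3:
        assert q0_su3(n1, n2, n3) == q0_su3_recursed(n1, n2, n3)
```

The same kind of brute-force comparison works for $\overline{Q}_2$, at the cost
of supplying its explicit polynomial form.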
\section{The partition function $z_0^{(N)}$ for small $u\,$.} \label{app_z0_u_small} Noting \begin{equation} \exp\left[ \frac{u}{2N} \left(\sum_k n_k\right)^2 \right] = \sqrt{\frac{N}{2\pi u}}\int_{-\infty}^{\infty} \mathrm{d}\alpha \exp\left( - \frac{N}{2u}\alpha^2 -\alpha \sum_k n_k \right)\,, \end{equation} we can rewrite \eqref{z0Na0} as \begin{equation} \label{z0Na2} \begin{aligned} z_0^{(N)}& = \frac{\mathrm{e}^{u c_N}}{N!} \sqrt{\frac{N}{2\pi u}} \int_{-\infty}^{+\infty} \mathrm{d}\alpha \exp\left(-\frac{N}{2u}\alpha^2\right) \\ & \qquad \times \left[ \overline{Q}_0^2\left(\frac{\partial}{\partial\alpha_1}, \ldots,\frac{\partial}{\partial\alpha_{N-1}},0\right) \prod_{k=1}^{N-1} \phi(u,\alpha_k) \right]_{\alpha_k=\alpha} \end{aligned} \end{equation} where \begin{equation} \label{phiua} \phi(u,\alpha) = \sum_{n=-\infty}^\infty \exp\left( -\frac{u}{2} n^2 -\alpha n\right)\,. \end{equation} The function $\phi(u,\alpha)$ can be expressed through the Jacobi theta-function \begin{equation} \label{Svz} \begin{aligned} S(v,z) & = \sum_{n=-\infty}^\infty \mathrm{e}^{-\pi v (n+z)^2} = v^{-1/2} \sum_{n=-\infty}^\infty \mathrm{e}^{-\pi n^2/v} \cos(2\pi n z) \\ & = \mathrm{e}^{-\pi v z^2} v^{-1/2} S\left(v^{-1}, i v z \right) \,. \end{aligned} \end{equation} The relation is given by \begin{equation} \label{phiua1} \phi(u,\alpha) = \sqrt{\frac{2\pi}{u}} S\left(\frac{2\pi}{u}, \frac{i\alpha}{2\pi}\right) = \exp\left(\frac{\alpha^2}{2u}\right) S\left( \frac{u}{2\pi}, \frac{\alpha}{u} \right) \,. \end{equation} The function $\phi(u,\alpha)$ satisfies the duality relation \begin{equation} \label{phi_phi} \phi(u,\alpha) = \sqrt{\frac{2\pi}{u}} \exp\left(\frac{\alpha^2}{2u}\right) \phi\left( \frac{4\pi^2}{u}, i\frac{2\pi\alpha}{u}\right) \,. 
\end{equation}
For $u < 1$ it is convenient to use the fast converging expression
\begin{equation}
\label{phiua2}
\begin{aligned}
\phi(u,\alpha) &= \sqrt{\frac{2\pi}{u}} \exp\left(\frac{\alpha^2}{2u}\right)
\sum_{n=-\infty}^\infty \exp\left( -\frac{2\pi^2}{u}n^2\right)
\cos\left(\frac{2\pi\alpha}{u} n\right) \\
&= \sqrt{\frac{2\pi}{u}} \exp\left(\frac{\alpha^2}{2u}\right)
\left[ 1 + 2 \exp\left( -\frac{2\pi^2}{u}\right)
\cos\left(\frac{2\pi\alpha}{u} \right) + \cdots \right]\,.
\end{aligned}
\end{equation}
Here the $n\ne 0$ terms in \eqref{phiua2} are suppressed exponentially for
$u\to 0$. For $u\ll1$, defining $w=4\pi^2/u$, one obtains for $N=2,3,4$ the
expansions
\begin{equation}
\label{z0N}
\begin{aligned}
z_0^{(2)}(u) & = 2\sqrt{\pi} \mathrm{e}^{u/4} u^{-3/2}
\left[ 1 - 2\,\mathrm{e}^{-w} (2w - 1) + \ldots \right] \,, \\
z_0^{(3)}(u) & = \sqrt{3} \pi \mathrm{e}^{u} u^{-4}
\left[1 - \mathrm{e}^{-w} (2 w^3-9 w^2 +18 w -6) + \ldots \right] \,, \\
z_0^{(4)}(u) & = \frac{\sqrt{2}}{3} \pi^{3/2} \mathrm{e}^{5u/2} u^{-15/2}
\left[1 - \frac16 \,\mathrm{e}^{-w} \left( 2 w^5 -25 w^{4} \right. \right. \\
& \quad \left. \phantom{\frac16} \left.
+128 w^{3} -276 w^{2}+288 w -72\right) + \ldots \right] \,.
\end{aligned}
\end{equation}
\section{The partition function $Z(u;h)$}
\label{app_Zuh}
In this appendix we consider the full dependence of the partition function on
the chemical potential $h$. Below we use the short-hand notation
$\hat{h}=h L_t\,$.
\subsection{U(1) case}
The irreps of the U(1) group are labeled by $m\in \mathbb{Z}$, and all have
dimension 1, while the Casimir invariant is $C=m^2\,$. The partition function
is
\begin{equation}
Z(u;h)= \sum_{m=-\infty}^\infty \mathrm{e}^{-u m^2 -m \hat{h}}
= \exp\left(\frac{\hat{h}^2}{4u}\right)
S\left( \frac{u}{\pi}, \frac{\hat{h}}{2u} \right)\,.
\end{equation}
\subsection{SU(2) case}
For SU(2) the partition function is given by
\begin{equation}
\begin{aligned}
Z(u;h) & = \sum_{p=0}^{\infty}\,\, \exp\left(-\frac14 p(p+2) u\right)
\left[\sum_{m=-p/2}^{p/2} \mathrm{e}^{-m \hat{h} } \right]^2 \\
& = \mathrm{e}^{u/4}\sum_{n=1}^{\infty} \exp\left(-\frac14 n^2 u\right)
\left(\frac{\sinh(\hat{h} n/2)}{\sinh(\hat{h} /2)}\right)^2\,.
\end{aligned}
\end{equation}
Simplifying one gets
\begin{equation}
\label{z0uh_N2}
Z(u;h) = \frac{\mathrm{e}^{u/4}}{4 \sinh^2(\hat{h} /2)}
\left[ \mathrm{e}^{\hat{h}^2/u} S\left(\frac{u}{4\pi},\frac{2\hat{h} }{u}\right)
- S\left(\frac{u}{4\pi},0\right) \right]\,.
\end{equation}
In the limit $h\to 0$ one has $z_0^{(N)}(u) = \lim_{h\to 0} Z(u;h)$, thereby
recovering \eqref{z0_SU2}.
\subsection{$\mathrm{SU}(N)$ case}
For general $N$ we have
\begin{equation}
Z(u;h) = \sum_{r}\,\, \exp\left(-u C_2^{(N)}(r)\right)
\left[R(r;\hat{h}) \right]^2\,,
\end{equation}
where the sum goes over irreps $r$, and
\begin{equation}
R(r;\hat{h}) = \mathrm{Tr}_r \exp\left(-\hat{h} J_3\right)
=\sum_{s\in r}\,\, \exp\left(-\lambda_1(s) \hat{h} \right)
\end{equation}
where $s$ runs over an appropriate basis of the representation $r$ of
$\mathrm{SU}(N)$, and $\lambda_1(s)$ is the corresponding eigenvalue of $J_3$.
Using the convention $n_k = m_k + N-k$ one can write $R(r;\hat{h})$ in the form
\begin{equation}
\label{Rbar}
\overline{R}(n_1,\ldots,n_N) = \rho(\hat{h})
\sum_l b^{(l)} \sinh\left( \hat{h} \sum_{k=1}^N a_k^{(l)} n_k \right)\,.
\end{equation}
The coefficients $a_1^{(l)},\ldots,a_N^{(l)}$, $b^{(l)}$ and $\rho(\hat{h})$
will be calculated below explicitly for $\mathrm{SU}(3)$. From this one obtains
\begin{equation}
\label{Rbarsq}
\begin{aligned}
\left[\overline{R}(n_1,\ldots,n_N)\right]^2 &= \frac12 \rho^2(\hat{h})
\sum_{l l'} b^{(l)}b^{(l')} \\
& \left[ \cosh\left( \sum_{k=1}^N (a_k^{(l)}+a_k^{(l')})\hat{h} n_k \right)
-\cosh\left( \sum_{k=1}^N (a_k^{(l)}-a_k^{(l')})\hat{h} n_k \right)
\right]\,.
\end{aligned} \end{equation} Similarly to the steps in \eqref{z0Na2} one obtains \begin{equation} \label{z0uhN} Z(u;h) = \frac{\mathrm{e}^{u c_N}}{N!}\sqrt{\frac{N}{8\pi u}}\, \rho^2(\hat{h}) \int_{-\infty}^{+\infty} \mathrm{d}\alpha \exp\left(-\frac{N}{2u}\alpha^2\right) \Psi(u,\alpha,\hat{h}) \end{equation} where \begin{multline} \label{Psiuah} \Psi(u,\alpha,\hat{h}) = \sum_{l l'} b^{(l)}b^{(l')} \left\{ \prod_{k=1}^{N-1} \phi\left(u,\alpha+\hat{h} (a_k^{(l)}+a_k^{(l')})\right) \right. \\ \left. -\prod_{k=1}^{N-1} \phi\left(u,\alpha+\hat{h} (a_k^{(l)}-a_k^{(l')})\right) \right\} \end{multline} where $\phi(u,\alpha)$ is as in \eqref{phiua1}\,. For small $u$ one can use the expansion \eqref{phiua2}. \subsection{SU(2) case, again} Using $m_1\equiv m_{12}$, $m_2\equiv m_{22}$ \begin{equation} R(m_1,m_2;\hat{h}) = \sum_{m_{11}=m_2}^{m_1} \exp\left\{-\hat{h} \left( m_{11}-\frac12 (m_{1}+m_{2})\right)\right\}\,. \end{equation} With $n_1=m_1+1$, $n_2=m_2$ and $n_{11}=m_{11}$ this gives \begin{equation} \overline{R}(n_1,n_2;\hat{h}) = \sum_{n_{11}=n_2}^{n_1-1} \exp\left\{-\hat{h} \left( n_{11}-\frac12 (n_1+n_2-1)\right)\right\} =\frac{\sinh((n_1-n_2)\hat{h} /2)}{\sinh(\hat{h} /2)}\,. \end{equation} Hence $\rho(\hat{h})=1/\sinh(\hat{h} /2)$ and we have only one term, $l=1$ with $b^{(1)}=1$ and $a^{(1)} = \left[\frac12, -\frac12\right]\,.$ From \eqref{z0uhN} we get \begin{equation} Z(u;h) = \frac{\mathrm{e}^{u/4}}{4 \sinh^2\frac{\hat{h} }{2}}\sqrt{\frac{1}{\pi u}} \int_{-\infty}^{\infty}\mathrm{d}\alpha \mathrm{e}^{-\alpha^2/u} \left[ \phi(u,\alpha+\hat{h} )- \phi(u,\alpha)\right]\,. 
\end{equation} Using for $\phi(u,\alpha)$ the expansion \eqref{phiua} one gets \begin{equation} \begin{aligned} Z(u,h) &= \frac{\mathrm{e}^{u/4}}{4\sinh^2\frac{\hat{h} }{2}} \sqrt{\frac{1}{\pi u}} \int_{-\infty}^{\infty}\mathrm{d}\alpha\mathrm{e}^{-\alpha^2/u} \sum_{n=-\infty}^\infty \mathrm{e}^{-u n^2/2-\alpha n} \left(\mathrm{e}^{-\hat{h} n}-1\right) \\ &= \frac{\mathrm{e}^{u/4}}{4\sinh^2\frac{\hat{h} }{2}} \sum_{n=-\infty}^\infty \mathrm{e}^{-u n^2/4} \left(\mathrm{e}^{-\hat{h} n}-1\right)\,. \end{aligned} \end{equation} Further \begin{equation} \sum_n \mathrm{e}^{-u n^2/4} \mathrm{e}^{-\hat{h} n} = \mathrm{e}^{\hat{h}^2/u} \sum_n \mathrm{e}^{-u (n+2\hat{h} /u)^2/4} = \mathrm{e}^{\hat{h}^2/u} S\left(\frac{u}{4\pi},\frac{2\hat{h} }{u}\right)\,, \end{equation} and inserting this one recovers \eqref{z0uh_N2}. \subsection{SU(3) case} \label{Zuh_SU3} With the notation $m_k\equiv m_{k3}$ one has \begin{equation} R(m_1,m_2,m_3;\hat{h}) = \sum_{m_{12}=m_2}^{m_1} \sum_{m_{22}=m_3}^{m_2} \sum_{m_{11}=m_{22}}^{m_{12}} \exp\left\{-\hat{h} \left( m_{11}-\frac12 (m_{12}+m_{22})\right)\right\}\,. \end{equation} With $n_{kM}=m_{kM}+M-k$ and $n_{kN}\equiv n_k$ this gives \begin{equation} \begin{aligned} & \overline{R}(n_{1},n_{2},n_{3};\hat{h}) = \sum_{n_{12}=n_{2}}^{n_{1}-1}\,\, \sum_{n_{22}=n_{3}}^{n_{2}-1}\,\,\sum_{n_{11}=n_{22}}^{n_{12}-1}\,\, \exp\left\{-\hat{h} \left( n_{11}-\frac12 (n_{12}+n_{22}-1)\right) \right\} \\ & \quad = \frac{\sinh\left((n_{2}-n_{1})\hat{h} /2\right) +\sinh\left((n_{3}-n_{2})\hat{h} /2\right) +\sinh\left((n_{1}-n_{3})\hat{h} /2\right)}% {2\sinh\frac{\hat{h} }{2} \left(\cosh\frac{\hat{h} }{2}-1\right)} \end{aligned} \end{equation} As a check one has \begin{equation} \lim_{\hat{h}\to 0}\overline{R}(n_{1},n_{2},n_{3};\hat{h}) = \frac12 (n_{1}-n_{2}) (n_{1}-n_{3})(n_{2}-n_{3}) = \overline{Q}_0(n_{1},n_{2},n_{3})\,. 
\end{equation}
One gets
\begin{equation}
\rho(\hat{h}) = \frac{1}{4\sinh\frac{\hat{h} }{2} \sinh^2\frac{\hat{h} }{4}}
= 8 \hat{h}^{-3} + \order{\hat{h}^{-1}}\,.
\end{equation}
We have 3 terms, $l=1,2,3$, with $b^{(l)}=1$, and the coefficients
$[a^{(l)}_1,a^{(l)}_2,a^{(l)}_3]$ are
\begin{equation}
\label{aaa}
a^{(1)} = \left[-\frac12, \frac12,0\right] \,, \quad
a^{(2)} = \left[0, -\frac12, \frac12 \right] \,, \quad
a^{(3)} = \left[\frac12, 0, -\frac12\right] \,.
\end{equation}
One has (using the symmetry $h\to -h$)
\begin{equation}
\begin{aligned}
\Psi(u,\alpha,\hat{h}) &=
\phi(u,\alpha-\hat{h} )\phi(u,\alpha+\hat{h} )
+2\phi(u,\alpha)\phi(u,\alpha+\hat{h} ) \\
& \quad
+4\phi(u,\alpha)\phi(u,\alpha+\hat{h} /2)
+\phi(u,\alpha+\hat{h} /2)\phi(u,\alpha-\hat{h} /2) \\
& \quad
-4\phi(u,\alpha+\hat{h} )\phi(u,\alpha-\hat{h} /2)
-\phi(u,\alpha+\hat{h} /2)^2
-3 \phi(u,\alpha)^2\,.
\end{aligned}
\end{equation}
Neglecting exponentially small terms $\order{\exp(-2\pi^2/u)}$ in
\eqref{phiua2} one obtains
\begin{equation}
\begin{aligned}
Z(u;h) & \simeq \sqrt{3} \pi \mathrm{e}^{u}
\frac{\left(\exp(\frac{\hat{h}^2}{2 u})-1\right)
\left(\exp(\frac{\hat{h}^2}{4 u})-1\right)^2}{%
32 u \sinh^2\frac{\hat{h} }{2}\sinh^4\frac{\hat{h} }{4}} \\
& = \sqrt{3} \pi \mathrm{e}^{u} u^{-4} \left[ 1
+ \frac12 \hat{h}^2 \left(\frac{1}{u}-\frac14 \right)
+ \order{\hat{h}^4} \right]\,.
\end{aligned}
\end{equation}
This is in agreement with \eqref{Zuh}, \eqref{z0N}, \eqref{z1z0_SUN}.
\section{Relation of $z_0(u)$ to the heat kernel $K(U,t)$}
The heat kernel $K(U,t)$ on the group manifold $\mathrm{SU}(N)$ in the $U\to I$
limit (where $I$ is the identity) is related to our partition function $z_0(u)$
at $u=t$.
It is given by (see eqs.~(3), (6) from \cite{Menotti:1981ry})
\begin{equation}
\label{KUt}
K(U,t) = \langle I | \mathrm{e}^{-t \hat{J}^2 } | U \rangle =
\sum_r d^{(r)} \chi^{(r)}(U) \exp\left(-t \mathcal{C}_2^{(r)}\right)\,,
\end{equation}
where $r$ runs over all irreducible unitary representations, $\chi^{(r)}(U)$
is the character of the representation, $d^{(r)}= \chi^{(r)}(I)$ its dimension
and $\mathcal{C}_2^{(r)}$ its quadratic Casimir invariant. Formally one has
\begin{equation}
K(I,t) = \langle I | \exp(-t \hat{J}^2) | I \rangle
= \int_U \langle U | \exp(-t \hat{J}^2) | U \rangle
= \mathrm{Tr}( \exp(-t\hat{J}^2 ) ) = z_0(t)\,.
\end{equation}
From \eqref{KUt} one also has
\begin{equation}
K(I,t) = \sum_r [d^{(r)}]^2\exp\left(-t \mathcal{C}_2^{(r)}\right) = z_0(t)\,.
\end{equation}
For SU(2), eq.~(9) of \cite{Menotti:1981ry} gives
\begin{equation}
K(\phi,t) = \mathcal{N}_2 \sum_{n=-\infty}^\infty
\frac{\phi + 2\pi n}{\sin{\phi}}
\exp\left(-\frac{(\phi + 2\pi n)^2}{t}\right)
\end{equation}
where $U=S \, \mathrm{diag}[\phi,-\phi]\, S^\dagger$, and the prefactor
$\mathcal{N}_2$ does not depend on $\phi$. Note, however, that it depends on
the parameter $t$, a fact which was irrelevant for the discussion of
\cite{Menotti:1981ry}. In the limit $U\to I$ this gives
\begin{equation}
\lim_{\phi\to 0} K(\phi,t) = \mathcal{N}_2(t)
\left[ S\left(\frac{4\pi}{t}\right) +
\frac{8\pi}{t}S'\left(\frac{4\pi}{t} \right) \right]\,.
\end{equation}
From our result in eq.~\eqref{z0_SU2_A} we deduce
\begin{equation}
\mathcal{N}_2(t) = \sqrt{4\pi} \mathrm{e}^{t/4} t^{-3/2} \,.
\end{equation}
Note that for $t\to 0$ the square bracket goes to $1$ exponentially fast and
the important information for the isospin susceptibility in this limit is
hidden entirely in the undetermined $t$-dependence of $\mathcal{N}_2(t)$.
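The identification $z_0(t)=K(I,t)$, together with the $N=2$ expansion in
\eqref{z0N}, can be checked directly against the character sum
$z_0^{(2)}(u)=\mathrm{e}^{u/4}\sum_{n\ge1}n^2\,\mathrm{e}^{-un^2/4}$. A minimal
numerical sketch (ours, standard-library Python only; the truncation orders are
chosen for illustration):

```python
import math

def z0_su2_direct(u, nmax=200):
    # character sum: dim = n, Casimir = (n^2 - 1)/4 for the spin-(n-1)/2 irrep
    return math.exp(u / 4) * sum(n * n * math.exp(-u * n * n / 4)
                                 for n in range(1, nmax + 1))

def z0_su2_dual(u):
    # dual (small-u) expansion: 2 sqrt(pi) e^{u/4} u^{-3/2} [1 - 2 e^{-w}(2w-1) + ...]
    w = 4 * math.pi ** 2 / u
    return (2 * math.sqrt(math.pi) * math.exp(u / 4) * u ** -1.5
            * (1 - 2 * math.exp(-w) * (2 * w - 1)))

# the neglected terms are O(exp(-4 w)), far below double precision here
for u in (0.5, 1.0, 2.0, 4.0):
    assert math.isclose(z0_su2_direct(u), z0_su2_dual(u), rel_tol=1e-10)
```

The same comparison can be run for $N=3,4$ by summing $[\dim(r)]^2
\exp(-u\,\mathcal{C}_2^{(r)})$ over the corresponding irreps.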
For $\mathrm{SU}(N)$ one has from eq.~(7),\cite{Menotti:1981ry} \begin{equation} z_0(t)=\lim_{\phi\to 0} K(\phi,t) = \mathcal{N}_N(t) \sum_{\{l\}=-\infty}^{\infty} \prod_{i<j}\left[ 1 - \frac{4\pi^2}{t N}(l_i-l_j)^2\right] \exp\left( - \frac{2\pi^2}{t N} (l_i-l_j)^2\right)\,. \end{equation} Comparing this with \eqref{z0NN} one gets \begin{equation} \mathcal{N}_N(t) = \frac{(2\pi)^{(N-1)/2}\sqrt{N}}{2!\,3!\ldots (N-1)!} t^{-(N^2-1)/2} \exp\left( c_N t\right)\,. \end{equation} \newsavebox{\SSa} \sbox{\SSa}{ \setlength{\unitlength}{1.2mm} \begin{picture}(140,25) (-30,-12.5) \put(0,0){\circle*{3}} \put(10,0){\circle{3}} \put(20,0){\circle{3}} \put(30,0){\circle{3}} \put(40,0){\circle{3}} \put(-10,0){\circle{3}} \put(-20,0){\circle{3}} \put(-30,0){\circle{3}} \put(-40,0){\circle{3}} \put(0,7){\circle*{3}} \put(10,7){\circle{3}} \put(20,7){\circle{3}} \put(30,7){\circle{3}} \put(40,7){\circle{3}} \put(-10,7){\circle{3}} \put(-20,7){\circle{3}} \put(-30,7){\circle{3}} \put(-40,7){\circle{3}} \put(1.5,0){\line(1,0){7}} \put(11.5,0){\line(1,0){7}} \put(21.5,0){\line(1,0){7}} \put(31.5,0){\line(1,0){7}} \put(41.5,0){\line(1,0){4}} \put(-8.5,0){\line(1,0){7}} \put(-18.5,0){\line(1,0){7}} \put(-28.5,0){\line(1,0){7}} \put(-38.5,0){\line(1,0){7}} \put(-45.5,0){\line(1,0){4}} \put(1.5,7){\line(1,0){7}} \put(11.5,7){\line(1,0){7}} \put(21.5,7){\line(1,0){7}} \put(31.5,7){\line(1,0){7}} \put(41.5,7){\line(1,0){4}} \put(-8.5,7){\line(1,0){7}} \put(-18.5,7){\line(1,0){7}} \put(-28.5,7){\line(1,0){7}} \put(-38.5,7){\line(1,0){7}} \put(-45.5,7){\line(1,0){4}} \put(0,1.5){\line(0,1){4}} \put(10,1.5){\line(0,1){4}} \put(20,1.5){\line(0,1){4}} \put(30,1.5){\line(0,1){4}} \put(40,1.5){\line(0,1){4}} \put(-10,1.5){\line(0,1){4}} \put(-20,1.5){\line(0,1){4}} \put(-30,1.5){\line(0,1){4}} \put(-40,1.5){\line(0,1){4}} \multiput(44.1,3.5) (1,0) {4} {\circle*{0.2}} \multiput(-48.5,3.5) (1,0) {4} {\circle*{0.2}} \put(0,-3){\makebox(0,0)[t]{{\protect\scriptsize 0}}} 
\put(10,-3){\makebox(0,0)[t]{{\protect\scriptsize 1}}}
\put(20,-3){\makebox(0,0)[t]{{\protect\scriptsize 2}}}
\put(30,-3){\makebox(0,0)[t]{{\protect\scriptsize 3}}}
\put(-10,-3){\makebox(0,0)[t]{{\protect\scriptsize --1}}}
\put(-20,-3){\makebox(0,0)[t]{{\protect\scriptsize --2}}}
\put(-30,-3){\makebox(0,0)[t]{{\protect\scriptsize --3}}}
\end{picture}}
\section{Hirota dynamics and NLIE for the SU$(3)$ principal model}
In this appendix we give the formulas necessary to numerically compute the
finite volume energy spectrum of the SU$(N)$ principal chiral model for $N=3$
using the NLIE equations constructed in Ref.~\cite{Kazakov}. That reference
discusses the case of general $N\geq3$, but here for simplicity we restrict
our attention to $N=3$ only. (The $N=2$ case was discussed before in
\cite{KL0}.) We give all formulas necessary to perform the numerical
computation in a \lq\lq cookbook style'' and refer to the original paper
\cite{Kazakov} for the derivation of the equations and further details.
\subsection{SU$(3)$ T-system and Y-system}
Based on previous experience with integrable models, where similar equations
were constructed by starting from integrable lattice regularizations and/or by
bootstrap methods, the following doubly infinite T-system is proposed as the
basis for the description of the finite volume spectrum of the SU$(3)$
principal model:
\begin{equation}
T^+_{a,s}(\theta)\,T^-_{a,s}(\theta)=
T_{a,s+1}(\theta)\,T_{a,s-1}(\theta)+
T_{a+1,s}(\theta)\,T_{a-1,s}(\theta),
\label{Tsys}
\end{equation}
where the T-functions $T_{a,s}(\theta)$ are indexed by $a=0,1,2,3$ and
$s=0,\pm1,\pm2,\dots$ and by definition
\begin{equation}
T_{-1,s}(\theta)=T_{4,s}(\theta)\equiv0\,.
\end{equation}
Here for any function $f(\theta)$ the notation $f^\pm(\theta)$ stands for
\begin{equation}
f^\pm(\theta)=f\left(\theta\pm\frac{i}{2}\right)\,.
\end{equation}
For the description of a particular state in the spectrum of the model we have
to specify the corresponding solution of the T-system (\ref{Tsys}). Starting
from the T-system (Hirota equations) one can go to the corresponding doubly
infinite Y-system, which is used to construct the TBA integral equations, or,
alternatively, to the finite Q-system, which is used in the NLIE approach.
The SU$(3)$ Y-system for the Y-functions $Y_{a,s}(\theta)\,$, $a=1,2$,
$s=0,\pm1,\pm2,\dots$ is
\begin{equation}
Y^+_{a,s}(\theta)\,Y^-_{a,s}(\theta)=
[1+Y_{a,s+1}(\theta)][1+Y_{a,s-1}(\theta)]\,
\frac{Y_{a+1,s}(\theta)}{1+Y_{a+1,s}(\theta)}\,
\frac{Y_{a-1,s}(\theta)}{1+Y_{a-1,s}(\theta)},
\label{Ysys}
\end{equation}
with the convention
\begin{equation}
Y_{0,s}(\theta)=Y_{3,s}(\theta)\equiv\infty\,.
\end{equation}
\begin{figure}[tbp]
\begin{center}
\begin{picture}(140,30)(0,-15)
\put(-35,-50) {\usebox{\SSa}}
\end{picture}
\end{center}
\caption{\footnotesize TBA-diagram associated with the SU(3) model Y-system.}
\label{SU3TBA}
\end{figure}
This Y-system is illustrated by Fig.~\ref{SU3TBA}, where the $s=0$ nodes are
black, indicating that they are the massive nodes with asymptotic behavior
\begin{equation}
Y_{a,0}(\theta)\sim{\rm e}^{-ML\cosh(v\theta)}\cdot{\rm const.}
\qquad\quad
\vert\theta\vert\longrightarrow\infty,
\end{equation}
where
\begin{equation}
v=\frac{2\pi}{3}\,.
\end{equation}
Here $M$ is the mass of the particles in infinite volume and $L$ is the size
of the system. All other (magnonic) nodes behave as
\begin{equation}
Y_{a,s}(\theta)\sim{\rm const.}\qquad\quad
\vert\theta\vert\longrightarrow\infty,\qquad s\not=0\,.
\end{equation}
The relation between the T-functions and Y-functions is
\begin{equation}
Y_{a,s}(\theta)=\frac{T_{a,s+1}(\theta)\,T_{a,s-1}(\theta)}
{T_{a+1,s}(\theta)\,T_{a-1,s}(\theta)}\,,\qquad\qquad
1+Y_{a,s}(\theta)=\frac{ T^+_{a,s}(\theta)\,T^-_{a,s}(\theta)}
{T_{a+1,s}(\theta)\,T_{a-1,s}(\theta)}\,.
\label{TY}
\end{equation}
We will consider the U$(1)$ sector only, where all particles are highest
weight states in the defining representation of SU$(3)$. For these states one
can establish, using the T-Y relations (\ref{TY}), that
\begin{equation}
\begin{split}
1+Y_{1,0}(\theta) \ {\rm has\ zeroes\ at:}\ \theta&=\theta_{1,j}+\frac{3i}{4};
\quad \theta=\theta_{1,j}-\frac{i}{4},\\
1+Y_{1,0}(\theta) \ {\rm has\ poles\ at:}\ \theta&=\theta_{2,j}-\frac{i}{4}\\
\end{split}
\label{Y10poles}
\end{equation}
and
\begin{equation}
\begin{split}
1+Y_{2,0}(\theta) \ {\rm has\ zeroes\ at:}\ \theta&=\theta_{2,j}-\frac{3i}{4};
\quad \theta=\theta_{2,j}+\frac{i}{4},\\
1+Y_{2,0}(\theta) \ {\rm has\ poles\ at:}\ \theta&=\theta_{1,j}+\frac{i}{4}.\\
\end{split}
\label{Y20poles}
\end{equation}
The positions of the singularities are not independent; they are related by
the T-Y relations. They are parameterized in terms of two complex quantities
$\theta_{1,j}$ and $\theta_{2,j}$, which are deformations of (and for large
$L$ are exponentially close to) the real asymptotic rapidities (Bethe roots)
$\theta_j$. The index $j$ ($j=1,\dots,{\cal N}$) labels the particles. The two
Y-functions are conjugates of each other:
\begin{equation}
\left[Y_{1,s}(\theta)\right]^*=Y_{2,s}(\theta^*)
\label{conj12}
\end{equation}
and consequently
\begin{equation}
\theta_{1,j}^*=\theta_{2,j}.
\end{equation}
Whether one uses the infinite set of TBA equations, which can be derived from
the Y-system (\ref{Ysys}), or the finite set of NLIE equations of Ref.
\cite{Kazakov}, it is always necessary to construct at least the $s=0$
Y-functions corresponding to the massive nodes, since these enter the energy
formula\footnote{Note that in this appendix the symbol $E$ generically denotes
the energy of the given state and not, as in the main text, the energy gap
between this energy level and the ground state energy.}
\begin{equation}
\begin{split}
E=m&\sum_{j=1}^{\cal N}\left\{
\cosh\left[v\left(\theta_{1,j}+\frac{i}{2}\right)\right]+
\cosh\left[v\left(\theta_{2,j}-\frac{i}{2}\right)\right]\right\}\\
-\frac{m}{3}&\int_{-\infty}^\infty{\rm d}\theta\cosh(v\theta)\,\ln\left(
[1+Y_{1,0}(\theta)][1+Y_{2,0}(\theta)]\right)\,.
\label{ener}
\end{split}
\end{equation}
This energy formula was conjectured in Ref.~\cite{Kazakov}, but it can also be
systematically derived~\cite{Heg} from an integrable lattice regularization of
the model.
\subsection{Asymptotic solutions}
We do not use the infinite set of TBA equations in our numerical calculations,
but we have used them to derive some large volume asymptotic formulas to
determine the exponentially small shifts of the parameters $\theta_{a,j}$ and
to calculate the first exponentially small corrections to the energy
(\ref{ener}). In this subsection we will use the \lq\lq natural''
normalization of the rapidity parameters (to avoid confusion we denote them by
$T$ instead of $\theta$). The relation between the two normalizations is
\begin{equation}
T_{a,j}=v\theta_{a,j},\qquad\quad T_j=v\theta_j.
\end{equation}
To leading order the energy is just the sum of the individual free particle
energies given by
\begin{equation}
E^{(0)}=\sum_{j=1}^{\cal N}M\cosh T_j\,,
\end{equation}
where the asymptotic rapidities satisfy the Bethe quantization conditions
\begin{equation}
2\pi n_j=ML\sinh T_j+\sum_{i\not=j}\delta(T_j-T_i)\,,
\end{equation}
where $n_j$ are integer quantum numbers and $\delta(\theta)$ is the phase
shift for the scattering of highest weight triplet particles.
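The quantization conditions above are easily solved numerically by fixed-point
iteration of $T_j \mapsto \operatorname{arcsinh}\bigl[(2\pi n_j-\sum_{i\ne
j}\delta(T_j-T_i))/(ML)\bigr]$. The Python sketch below is ours and purely
illustrative; the phase shift is passed in as a callable, and the trivial
$\delta\equiv0$ used in the check reduces the system to the free case:

```python
import math

def bethe_roots(n, z, delta=lambda t: 0.0, iters=200):
    """Solve 2*pi*n_j = z*sinh(T_j) + sum_{i != j} delta(T_j - T_i) for the
    rapidities T_j by fixed-point iteration.  n: list of integer quantum
    numbers, z = M*L, delta: phase-shift function (placeholder delta = 0)."""
    # free-particle starting point: sinh T_j = 2*pi*n_j / z
    T = [math.asinh(2 * math.pi * nj / z) for nj in n]
    for _ in range(iters):
        T = [math.asinh((2 * math.pi * nj
                         - sum(delta(T[j] - T[i])
                               for i in range(len(n)) if i != j)) / z)
             for j, nj in enumerate(n)]
    return T

# free check: with delta = 0 the roots satisfy sinh T_j = 2*pi*n_j / z exactly
z = 5.0
T = bethe_roots([1, -1], z)
assert all(math.isclose(math.sinh(Tj), 2 * math.pi * nj / z)
           for Tj, nj in zip(T, [1, -1]))
```

With a realistic $\delta$ the iteration still converges for moderate particle
numbers, since the correction term is suppressed by $1/z$.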
In the simplest 1-particle case the quantization condition reduces to \begin{equation} \sinh T_1=\frac{2\pi n_1}{ML}\,. \end{equation} The next (leading exponential, also called L\"uscher) corrections consist of two parts: \begin{equation} E^{({\rm L})}=E^{(\mu)}+E^{(F)}\,, \end{equation} where the mu-term $E^{(\mu)}$ comes from the deformation of the rapidity parameters in the first line of (\ref{ener}) and the F-term $E^{(F)}$ is the leading exponential approximation of the second line (the integral term). We can write the change of the rapidities as \begin{equation} T_{1,j}=T_j+x_j-iy_j,\qquad\quad T_{2,j}=T_j+x_j+iy_j\,, \end{equation} where both $x_j$ and $y_j$ are exponentially small. The sign of $y_j$ determines whether, as one moves away from the $L=\infty$ limit, the exact rapidities move upwards or downwards in the complex plane. We now write down the L\"uscher order asymptotic formulas for ${\cal N}=1$ and for the lowest energy zero total momentum ${\cal N}=2$ state. First we introduce some notations and definitions. \begin{equation} \lambda(\theta)={\rm e}^{2ib(\theta)},\qquad\quad b(\theta)=\frac{\pi}{2}- \arctan\left(\frac{1}{\sqrt{3}}\tanh\frac{\theta}{2}\right), \end{equation} \begin{equation} a(\theta)=\sqrt{\frac{3}{4}+\sinh^2\left(\frac{\theta}{2}\right)},\qquad \Gamma\left(1+\frac{i\theta}{2\pi}\right)\, \Gamma\left(\frac{1}{3}-\frac{i\theta}{2\pi}\right)= A(\theta){\rm e}^{iB(\theta)}\,, \label{AB} \end{equation} \begin{equation} {\cal L}(\theta)=\frac{{\rm e}^{2iB(\theta)}}{2+\frac{3i\theta}{\pi}},\qquad D(\theta)=\left\vert\Gamma\left(\frac{2}{3}+\frac{i\theta}{2\pi}\right) \right\vert^2\,, \end{equation} \begin{equation} {\cal K}(\theta)=\frac{a(\theta)\,D^2(\theta)} {\sinh\frac{\theta}{2}\,A^2(\theta)},\qquad {\cal A}=\frac{64\pi^4}{9\Gamma^6(1/3)}\,. \end{equation} We note that in (\ref{AB}) $A(\theta)$ and $B(\theta)$ are real for $\theta$ real.
Further \begin{equation} \mu(\theta)=\lambda(\theta)\left[{\cal L}(\theta) \left(2+\frac{9i\theta}{\pi}\right)\right]^2\,, \end{equation} \begin{equation} g(\alpha,\beta)=\lambda(\alpha)\lambda(\beta)\left( {\cal L}(\alpha){\cal L}(\beta)\left[ 4+\frac{6i}{\pi}(\alpha+\beta)-\frac{27}{\pi^2}\alpha\beta\right]\right)^2\,, \end{equation} \begin{equation} f(\alpha,\beta)=g\left(\alpha+\frac{i\pi}{2},\beta+\frac{i\pi}{2}\right), \qquad\quad f_1(\alpha,\beta)=\frac{\partial}{\partial\alpha} f(\alpha,\beta)\,. \end{equation} For ${\cal N}=1$ we find \begin{equation} y_1=-\sigma{\cal A}(-1)^{n_1}\,{\rm e}^{-\sigma z\cosh T_1}\,, \label{N1y1} \end{equation} \begin{equation} E^{(\mu)}=-\frac{32M\pi^4(-1)^{n_1}}{3\Gamma^6(1/3)\cosh T_1}\, {\rm e}^{-\sigma z\cosh T_1}\,, \label{N1mu} \end{equation} \begin{equation} E^{(F)}=-\frac{M}{2\pi\cosh T_1}\int_{-\infty}^\infty{\rm d}\theta\, \cosh\theta \,{\rm e}^{-z\cosh(\theta+T_1)}\left[\mu\left(\theta+\frac{i\pi}{2}\right)+ \mu\left(\frac{i\pi}{2}-\theta\right)\right]\,. \end{equation} Here \begin{equation} \sigma=\frac{\sqrt{3}}{2},\qquad\quad z=ML\,. \end{equation} The coefficient of the exponential in the mu-term (\ref{N1mu}) (for the standing particle $T_1=0$) is different from that given by Eq.~(95) of Ref.~\cite{Kazakov}. Although our coefficient is larger by a factor $\pi/3\approx 1.05$ only, we have demonstrated numerically that the difference between the two formulas is clearly visible and that it is indeed (\ref{N1mu}) that agrees asymptotically with the exact result. 
For the lowest energy parity symmetric ${\cal N}=2$ state with \begin{equation} T_1=\bar T=-T_2>0 \end{equation} we find \begin{equation} y_1=y_2=-\sigma {\cal A}\,{\rm e}^{-\sigma z\cosh \bar T}\,{\cal K}(2\bar T)\,, \label{N2y12} \end{equation} \begin{equation} E^{(\mu)}=-\frac{M{\cal A}}{\cosh\bar T}\,{\rm e}^{-\sigma z\cosh\bar T} \left\{3{\cal K}(2\bar T)+\frac{8\sigma}{z}\sinh\bar T\,{\cal K}^\prime(2\bar T) \right\}\,, \end{equation} \begin{equation} \begin{split} E^{(F)}=-\frac{M}{\pi}\int_{-\infty}^\infty{\rm d}\theta\,{\rm e}^{-z\cosh\theta} &\Big\{\cosh\theta f(\theta-\bar T,\theta+\bar T)\\ +\frac{\tanh \bar T}{z}&\left[f_1(\theta+\bar T,\theta-\bar T) -f_1(\theta-\bar T,\theta+\bar T)\right]\Big\}\,. \end{split} \end{equation} \subsection{Alternative energy formula} In Ref. \cite{Kazakov} an alternative energy formula is used: \begin{equation} \begin{split} E_{\rm KL}=-\frac{M}{3}\int_{-\infty}^\infty{\rm d}\theta\Big\{ &\cosh v\left(\theta-\frac{i}{4}\right)\ln\left[1+Y_{1,0} \left(\theta-\frac{i}{4}\right)\right]\\ +&\cosh v\left(\theta+\frac{i}{4}\right)\ln\left[1+Y_{2,0} \left(\theta+\frac{i}{4}\right)\right]\Big\}\,. \label{51} \end{split} \end{equation} The reason for suggesting this alternative formula is, as we will see later, that it is more suitable for the NLIE approach. On the other hand, there is a problem with (\ref{51}) since as can be seen from (\ref{Y10poles}) and (\ref{Y20poles}) there are zeroes/poles dangerously close to the integration contours. These singularities tend to the integration contours in the $L\to\infty$ limit and asymptotically coincide. For this reason the energy formula (\ref{51}) should be applied with care. First of all the equivalence of it with the established formula (\ref{ener}) should be proved. In Ref. 
\cite{Kazakov} the starting point for the proof of this equivalence is the energy formula \begin{equation} \begin{split} E^\prime_{\rm KL}=\frac{M}{2\pi}\int_{-\infty}^\infty{\rm d}\theta\Big\{ &\sinh v\left(\theta-\frac{i}{4}\right)\frac{{\rm d}}{{\rm d}\theta} \ln\left[1+Y_{1,0}\left(\theta-\frac{i}{4}\right)\right]\\ +&\sinh v\left(\theta+\frac{i}{4}\right)\frac{{\rm d}}{{\rm d}\theta} \ln\left[1+Y_{2,0}\left(\theta+\frac{i}{4}\right)\right]\Big\}\,, \label{51p} \end{split} \end{equation} which is what one obtains from (\ref{51}) by formal partial integration. The strategy of the proof is to shift the integration contour to the real line in order to match the integral with the integral part of (\ref{ener}). During this deformation of the contours we encounter singularities at the poles/zeroes given by (\ref{Y10poles}) and (\ref{Y20poles}), provided they lie inside the strip bordered by the contours. There are two cases. If \begin{equation} {\rm Im}\,\theta_{1,j}>0\qquad{\rm (case\ I)}\,, \end{equation} then the corresponding zeroes are inside the strip (poles are outside). If \begin{equation} {\rm Im}\,\theta_{1,j}<0\qquad{\rm (case\ II)}\,, \end{equation} then the corresponding zeroes are outside the strip (poles are inside). One can show that during the contour deformation using Cauchy's theorem we pick up residue contributions that make the formulas (\ref{51p}) and (\ref{ener}) exactly coincide in case I only. In case II (\ref{51p}) and (\ref{ener}) are definitely different. Using the asymptotic result (\ref{N1y1}) we can see that the 1-particle states belong to case I only for even quantum numbers $n_1$. (\ref{N2y12}) shows that the lowest energy ${\cal N}=2$ symmetric state also belongs to case I. Thus luckily the states we are interested in (lowest energy 1 and 2-particle states) are all case I states. Another problem is that (except for the ground state) (\ref{51}) is not equal to its formally partially integrated version (\ref{51p}). 
During partial integration the boundary terms at infinity of course vanish since $Y_{a,0}(\theta)$ are exponentially small there. However, there are boundary terms coming from certain points on the contour. As we have seen there are (at least for large $L$) poles and zeroes very close to each other and to the integration contour. This implies that there must be some real $\hat\theta$ such that $1+Y_{1,0}(\hat\theta-i/4)$ is real and negative. Using the standard definition of the log function, there are extra boundary contributions coming from the fact that $\ln[1+Y_{1,0}(\theta-i/4)]$ jumps at $\theta=\hat\theta$ by $\pm2i\pi$. Simultaneously $\ln[1+Y_{2,0}(\theta+i/4)]$ jumps at the same point by $\mp2i\pi$ since it is the complex conjugate. For example for the standing 1-particle state we find that as $\theta$ goes along the integration contour from $-\infty$ to $\infty$ the value of $1+Y_{1,0}(\theta-i/4)$, starting from $+1$, crosses the negative real axis from above at $\theta=\hat\theta$ and then goes back to $+1$ in the lower half plane. The corresponding extra term is \begin{equation} -iM\sinh\left(v\left[\hat\theta-\frac{i}{4}\right]\right)\,. \end{equation} $\hat\theta$ is defined by \begin{equation} {\rm Im}\,Y_{1,0}\left(\hat\theta-\frac{i}{4}\right)=0,\qquad\quad {\rm Re}\,Y_{1,0}\left(\hat\theta-\frac{i}{4}\right)<-1\,. \label{hat1} \end{equation} Then also \begin{equation} {\rm Im}\,Y_{2,0}\left(\hat\theta+\frac{i}{4}\right)=0,\qquad\quad {\rm Re}\,Y_{2,0}\left(\hat\theta+\frac{i}{4}\right)<-1 \label{hat2} \end{equation} giving the contribution \begin{equation} iM\sinh\left(v\left[\hat\theta+\frac{i}{4}\right]\right)\,. \end{equation} We find that by parity symmetry $\hat\theta=0$ and so \begin{equation} E_{{\rm KL}}=E^\prime_{\rm KL}-2M\sin\frac{\pi}{6}=E^\prime_{\rm KL}-M=E-M\,.
\end{equation} Similarly for our ${\cal N}=2$ state because of parity symmetry (\ref{hat1}) and (\ref{hat2}) are satisfied at \begin{equation} \hat\theta=\pm B_2 \end{equation} and we have \begin{equation} E=E^\prime_{\rm KL}=E_{\rm KL}+2M\cosh(vB_2)\,. \end{equation} To summarize, we can use the energy formula (\ref{51}) for the zero momentum 0,1, and 2-particle states with the standard definition of the log function but the correct energy is given by \begin{equation} E=E_{\rm KL}+{\cal N}M\cosh(vB_{\cal N}) \end{equation} with $B_1=0$ and $B_2$ determined from the requirements \begin{equation} {\rm Im}\,Y_{1,0}\left(B_2-\frac{i}{4}\right)=0,\qquad\quad {\rm Re}\,Y_{1,0}\left(B_2-\frac{i}{4}\right)<-1\,. \label{B2} \end{equation} We emphasize that for states belonging to case II (for example moving 1-particle states with odd momentum quantum numbers) further modifications are necessary. \subsection{NLIE integral equations} The unknowns to be determined are two imaginary functions along the real axis, $f_2(\eta)$ and $f_3(\eta)$, and $\cal N$ real\footnote{The reality conditions on $f_{2,3}$ and $\beta_\alpha$ are sufficient to ensure the conjugacy properties (\ref{conj12}) of the Y-functions built out of them.} parameters $\beta_\alpha$, $\alpha=1,\dots,{\cal N}$. 
First we build the 4 Q-functions defined by \begin{equation} \begin{split} q_2(\theta)&=\theta+F_2(\theta),\qquad {\rm Im}\,\theta<0,\\ \bar q_2(\theta)&=\theta+\bar F_2(\theta),\qquad {\rm Im}\,\theta>0,\\ q_3(\theta)&=P(\theta)+F_3(\theta),\qquad {\rm Im}\,\theta<0,\\ \bar q_3(\theta)&=P(\theta)+\bar F_3(\theta),\qquad {\rm Im}\,\theta>0\,, \end{split} \end{equation} where \begin{equation} F_j(\theta)=\frac{1}{2\pi i}\int_{-\infty}^\infty\frac{f_j(\eta)}{\theta-\eta} {\rm d}\eta,\qquad{\rm Im}\,\theta<0\,, \end{equation} \begin{equation} \bar F_j(\theta)=\frac{1}{2\pi i}\int_{-\infty}^\infty\frac{f_j(\eta)}{\theta-\eta} {\rm d}\eta,\qquad{\rm Im}\,\theta>0\,, \end{equation} and $P(\theta)$ is a polynomial of degree ${\cal N}+2$ \begin{equation} P(\theta)=\sum_{j=0}^{{\cal N}+2}p_j\theta^j\,, \end{equation} satisfying \begin{equation} 2P(\theta)-P(\theta-i)-P(\theta+i)=\delta_{{\cal N},0}+ (1-\delta_{{\cal N},0})\prod_{\alpha=1}^{\cal N}(\theta-\beta_\alpha)\,. \end{equation} We further restrict $P(\theta)$ by requiring $p_1=p_0=0$. The absence of linear and constant terms is a kind of gauge choice. In particular, for ${\cal N}=0$: \begin{equation} p_2=\frac{1}{2}. \end{equation} For ${\cal N}=1$: \begin{equation} p_3=\frac{1}{6},\qquad p_2=-\frac{1}{2}\beta_1\,. \end{equation} For ${\cal N}=2$: \begin{equation} p_4=\frac{1}{12},\qquad p_3=-\frac{1}{6}(\beta_1+\beta_2),\qquad p_2=\frac{1}{12}+\frac{1}{2}\beta_1\beta_2\,. \end{equation} The functions $q_j$, $\bar q_j$ are only needed in their respective domains of definition, except for real $\theta$ where we can use the ${\cal P}+i\pi\delta$ prescription. 
In particular for $\theta$ real \begin{equation} \frac{1}{2}[q_2(\theta)+\bar q_2(\theta)]=\theta+\frac{1}{2i}H(f_2)(\theta)\,, \end{equation} \begin{equation} \frac{1}{2}[q_3(\theta)+\bar q_3(\theta)]=P(\theta) +\frac{1}{2i}H(f_3)(\theta)\,, \end{equation} \begin{equation} q_j(\theta)-\bar q_j(\theta)=f_j(\theta)\,, \end{equation} where $H(f)$ denotes the Hilbert transform of $f$, \begin{equation} H(f)(x)=\frac{1}{\pi}{\cal P} \int_{-\infty}^\infty\frac{f(y)}{x-y} {\rm d}y\,. \end{equation} \subsubsection{The integral equations} We introduce the notation \begin{equation} f^{[\pm k]}(\theta)=f\left(\theta\pm\frac{i}{2}k\right),\qquad\quad f^\pm=f^{[\pm1]}\,. \end{equation} The two NLIE equations are given by Eq.~(82) of Ref.~\cite{Kazakov} with $Z=1$. They can be written in the form \begin{equation} \begin{split} Af_2+Bf_3&=D_1,\\ -\bar Af_2-\bar Bf_3&=D_2, \end{split} \end{equation} where \begin{equation} \begin{split} A&=-iq_3^{[-2]}+\frac{i}{2}(q_3+\bar q_3),\\ B&=\phantom{-}iq_2^{[-2]}-\frac{i}{2}(q_2+\bar q_2),\\ \bar A&=\phantom{-}i\bar q_3^{[2]}-\frac{i}{2}(q_3+\bar q_3),\\ \bar B&=-i\bar q_2^{[2]}+\frac{i}{2}(q_2+\bar q_2)\,. \end{split} \end{equation} The above formulas are obtained from Eqs.~ (75), (49) of Ref.~\cite{Kazakov}. $D_1$ and $D_2$ are shorthand for the complicated expressions of the right hand side of Eq.~(82) of Ref.~\cite{Kazakov} with $Z=1$ and they will be given explicitly below. We can express $f_2$ and $f_3$ from the NLIE equations. Defining \begin{equation} D_3= A\bar B-\bar AB \end{equation} we have \begin{equation} \begin{split} f_2&=\frac{BD_2+\bar BD_1}{D_3},\\ f_3&=-\frac{AD_2+\bar AD_1}{D_3}\,. \end{split} \label{NLIE} \end{equation} \subsubsection{Exact Bethe equations} $f_2$ and $f_3$ must be smooth functions. For ${\cal N}>0$ define the exact Bethe roots $\hat\theta_\alpha$ by \begin{equation} D_3(\hat\theta_\alpha)=0\,. 
\end{equation} Smoothness requires \begin{equation} \left(BD_2+\bar BD_1\right)\Big\vert_{\theta=\hat\theta_\alpha}=0\,. \label{EBE} \end{equation} (Then the other numerator also vanishes.) \subsubsection{Explicit expressions for $D_1$ and $D_2$} \begin{equation} D_1=\exp\left[-z\cosh\left\{v\left(\theta-\frac{i}{4}\right)\right\}\right] T_{1,1}^{[-1/2]}{\cal T}D_{1a}D_{1b}\,, \end{equation} where \begin{equation} {\cal T}=\frac{T_{0,0}^{[-1/2]}\,T_{3,0}^{[1/2]}}{T_{0,0}^{[-5/2]}\, T_{3,0}^{[5/2]}}\,, \end{equation} and \begin{equation} D_{1a}=\left(\frac{T_{0,0}^{[-9/2]}}{T_{0,0}^{[-5/2]}}\right)^{*K_3}, \qquad\quad D_{1b}=\left(\frac{T_{3,0}^{[11/2]}}{T_{3,0}^{[3/2]}}\right)^{*K^{[-1]}_3}. \end{equation} Here the notation is \begin{equation} f^{*K}=\exp[\ln(f)*K], \end{equation} where $*$ denotes convolution and the kernel $K_3$ is given by \begin{equation} K_3(\theta)=\frac{1}{\sqrt{3}[2\cosh(v\theta)+1]}\,. \end{equation} \begin{equation} D_2=\exp\left[-z\cosh\left\{v\left(\theta+\frac{i}{4}\right)\right\}\right] T_{2,1}^{[1/2]}{\cal T}D_{2a}D_{2b}\,, \end{equation} where \begin{equation} D_{2a}=\left(\frac{T_{3,0}^{[9/2]}}{T_{3,0}^{[5/2]}}\right)^{*K_3}, \qquad\quad D_{2b}=\left(\frac{T_{0,0}^{[-11/2]}}{T_{0,0}^{[-3/2]}}\right)^{*K^{[1]}_3}. \end{equation} \subsubsection{T-functions} The building blocks for $D_1$, $D_2$ are the T-functions given by Eq.~(49) of Ref.~\cite{Kazakov}: \begin{equation} T_{a,s}=-i{\rm Det}{\cal M}_{a,s}\,. \end{equation} (The $T_{a,s}$ expressions below are the $T^{(R)}_{a,s}$ of the R-gauge \cite{Kazakov}.) We only need a few special cases. \begin{equation} {\cal M}_{1,1}=\begin{bmatrix} 1&1&1\\ \bar q_2^{[5/2]}& q_2^{[-3/2]}&q_2^{[-7/2]}\\ \bar q_3^{[5/2]}& q_3^{[-3/2]}&q_3^{[-7/2]} \end{bmatrix}. \end{equation} From this we can see that $T_{1,1}^{[-1/2]}$ is built from $\bar q_j^{[2]}$, $q_j^{[-2]}$ and $q_j^{[-4]}$, all of them in their respective domains of definition. 
Similarly \begin{equation} {\cal M}_{2,1}=\begin{bmatrix} 1&1&1\\ \bar q_2^{[7/2]}& \bar q_2^{[3/2]}&q_2^{[-5/2]}\\ \bar q_3^{[7/2]}& \bar q_3^{[3/2]}&q_3^{[-5/2]} \end{bmatrix}. \end{equation} From this we see that $T_{2,1}^{[1/2]}$ is built from $\bar q_j^{[4]}$, $\bar q_j^{[2]}$ and $q_j^{[-2]}$, again all of them in their respective domains of definition. The other two matrices we need to construct $D_1$, $D_2$ are \begin{equation} {\cal M}_{0,0}=\begin{bmatrix} 1&1&1\\ q_2^{[1/2]}& q_2^{[-3/2]}&q_2^{[-7/2]}\\ q_3^{[1/2]}& q_3^{[-3/2]}&q_3^{[-7/2]} \end{bmatrix}, \end{equation} \begin{equation} {\cal M}_{3,0}=\begin{bmatrix} 1&1&1\\ \bar q_2^{[7/2]}& \bar q_2^{[3/2]}&\bar q_2^{[-1/2]}\\ \bar q_3^{[7/2]}& \bar q_3^{[3/2]}&\bar q_3^{[-1/2]} \end{bmatrix}. \end{equation} $T_{0,0}^{[\sigma]}$ is needed for $\sigma=-1/2,-3/2,-5/2,-9/2,-11/2$. We see all arguments of $q_j(\theta)$ are in the lower half-plane or on the real axis. Similarly we only need $T_{3,0}^{[\sigma]}$ for $\sigma=1/2,3/2,5/2,9/2,11/2$. All arguments of $\bar q_j(\theta)$ are in the upper half-plane or on the real axis. To construct $1+Y_{1,0}^{[-1/2]}$ and $1+Y_{2,0}^{[1/2]}$ used in the energy formula (\ref{51}) we also need $T_{1,0}$ and $T_{2,0}$, i.e. \begin{equation} {\cal M}_{1,0}=\begin{bmatrix} 1&1&1\\ \bar q_2^{[3/2]}& q_2^{[-1/2]}&q_2^{[-5/2]}\\ \bar q_3^{[3/2]}& q_3^{[-1/2]}&q_3^{[-5/2]} \end{bmatrix}, \end{equation} \begin{equation} {\cal M}_{2,0}=\begin{bmatrix} 1&1&1\\ \bar q_2^{[5/2]}& \bar q_2^{[1/2]}&q_2^{[-3/2]}\\ \bar q_3^{[5/2]}& \bar q_3^{[1/2]}&q_3^{[-3/2]} \end{bmatrix}. \end{equation} Again, we only need $q_j^{[\sigma]}$ for $\sigma=0,-2,-4$ and $\bar q_j^{[\sigma]}$ for $\sigma=0,2,4$, all functions in their respective domains of definition or on the real axis. This is why we insist on using the energy formula (\ref{51}). The energy formula (\ref{ener}) is numerically more stable, but requires analytical continuation of the Q-functions. 
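The Hilbert transform $H(f)$ entering the real-axis relations for the Q-functions is a principal-value integral. Numerically, the principal value can be realized by sampling the integrand on a grid placed symmetrically about the evaluation point, so the pole at $y=x$ is never hit and its odd part cancels. The sketch below only illustrates this standard trick (the function name and step sizes are ours, not the discretization actually used in the paper):

```python
import math

def hilbert_pv(f, x, h=0.01, half_range=50.0):
    """Approximate H(f)(x) = (1/pi) PV int f(y)/(x-y) dy.

    The grid y = x + h*(j + 1/2) is symmetric about x, so the simple pole
    at y = x is skipped and the principal value emerges automatically;
    the tails beyond |y - x| > half_range are truncated."""
    n = int(half_range / h)
    total = 0.0
    for j in range(-n, n):
        y = x + h * (j + 0.5)
        total += f(y) / (x - y)
    return total * h / math.pi
```

As a check, for $f(y)=1/(1+y^2)$ the exact transform is $H(f)(x)=x/(1+x^2)$, which the routine reproduces to a few digits with the default grid.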
\subsubsection{Iteration} The NLIE equations are usually solved iteratively. Start from some approximation \begin{equation} \{f^{(\nu)}_j(\eta)\},\ (j=2,3);\qquad \{\beta^{(\nu-1)}_\alpha\},\ \alpha=1,\dots,{\cal N}. \end{equation} To prepare the iteration we compute the approximations \begin{equation} D^{(\nu)}_j, \ (j=1,2,3), \qquad A^{(\nu)}, B^{(\nu)}, \bar A^{(\nu)}, \bar B^{(\nu)}. \label{inter} \end{equation} In these computations we use \begin{equation} \{f^{(\nu)}_j(\eta)\},\ (j=2,3); \qquad \{\beta^{(\nu)}_\alpha\},\ \alpha=1,\dots,{\cal N}. \end{equation} Note that we do not yet know the $\beta^{(\nu)}_\alpha$, so these are just intermediate parameters in the computation of (\ref{inter}). Also compute the approximate Bethe roots $\hat\theta^{(\nu)}_\alpha$ by solving \begin{equation} D_3^{(\nu)}(\hat\theta_\alpha^{(\nu)})=0. \end{equation} They depend on the yet unknown parameters $\beta^{(\nu)}_\alpha$. These parameters are now determined by the approximation to the exact Bethe equations \begin{equation} \left(B^{(\nu)}D^{(\nu)}_2+\bar B^{(\nu)}D^{(\nu)}_1\right)\Big\vert_{\theta=\hat\theta^{(\nu)}_\alpha}=0. \end{equation} Now that we know the $\beta^{(\nu)}_\alpha$, we can compute the next approximation to the spectral densities \begin{equation} \begin{split} f^{(\nu+1)}_2&=\frac{B^{(\nu)}D^{(\nu)}_2+\bar B^{(\nu)}D^{(\nu)}_1}{D_3^{(\nu)}},\\ f^{(\nu+1)}_3&=-\frac{A^{(\nu)}D^{(\nu)}_2+\bar A^{(\nu)}D^{(\nu)}_1}{D_3^{(\nu)}}. \end{split} \end{equation} The zeroth approximation is \begin{equation} f_2^{(0)}=f_3^{(0)}=0 \end{equation} and in this case \begin{equation} \hat\theta^{(0)}_\alpha=\beta^{(0)}_\alpha,\quad \alpha=1,\dots,{\cal N}. \end{equation} As shown in subsection 7.4 of Ref.~\cite{Kazakov} the zeroth approximation to the exact Bethe equations reduces to the asymptotic Bethe equations \begin{equation} {\rm e}^{iz\sinh\left(v\hat\theta^{(0)}_\alpha\right)}\, \prod_{\beta=1}^{\cal N} S\left(v(\hat\theta^{(0)}_\alpha-\hat\theta_\beta^{(0)})\right)=-1.
\end{equation} \subsection{Numerical results} We have calculated the energies for the SU$(3)$ states $r_p=(p,0,0)$ for $p=0,1,2$ (vacuum, standing 1-particle and parity symmetric 2-particle states) to 12 digits numerically for small volumes. These energies are denoted by ${\cal E}_p$ in Table \ref{tab:table2}. \begin{table}[ht] \centering \caption{SU(3) energies for $p=0,1,2$. The last column presents values of a 2-particle NLIE parameter $\beta_1$} \label{tab:table2} \vspace{0.5cm} \begin{tabular}{|l|l|l|l|l|} \hline \quad$z$ & $\quad L\mathcal{E}_0$ & $\quad L\mathcal{E}_1$ & $\quad L\mathcal{E}_2$ & $\quad \beta_1$ \\ \hline 0.01& $-3.420577098958$&$-2.76292779616$&$-1.777099(3) $&$1.4048256851$ \\ 0.02& $-3.342476728778$&$-2.62162678936$&$-1.5412988(6)$&$1.2641055097$ \\ 0.03& $-3.288368668518$&$-2.52428616251$&$-1.379383(3) $&$1.1817120504$ \\ 0.04& $-3.245217335083$&$-2.44700634691$&$-1.251156(4) $&$1.12322793788$\\ 0.05& $-3.208524192169$&$-2.38154930287$&$-1.14279(1) $&$1.07785551177$\\ 0.06& $-3.176149111708$&$-2.32399857099$&$-1.0477086(6)$&$1.04078170872$\\ 0.07& $-3.146890539710$&$-2.27215624177$&$-0.9622193(7)$&$1.00943765385$\\ 0.08& $-3.120000348735$&$-2.22465417518$&$-0.8840251(2)$&$0.98228925202$\\ 0.09& $-3.094978291235$&$-2.18057765731$&$-0.811592(3) $&$0.95834659282$\\ 0.1& $-3.071471895079$&$-2.13928219543$&$-0.743839(1) $&$0.93693354156$\\ 0.2& $-2.883458484829$&$-1.81286374544$&$-0.2121368(2)$&$0.79626871125$\\ 0.5& $-2.489784492441$&$-1.14837251407$&$\phantom{-}0.849784(1)$&$0.611803404$\\ 1.0& $-1.97830660727$ &$-0.29936999966$&$\phantom{-}2.180019(2)$&$0.4754496334$\\ \hline \end{tabular} \end{table} The results agree with those of Ref. \cite{Kazakov} up to the numerical precision given there. 
Studying the volume dependence of the 2-particle $\beta_1$ parameter for small volumes we conjecture that it is given by a perturbative expansion in the running coupling $\alpha_{\rm J}$ (see subsection 5.1) with coefficients $a_1,a_2,a_3,\dots$ \begin{equation} \beta_1=\frac{a_1}{\alpha_{\rm J}}+a_2+a_3\alpha_{\rm J} +{\rm O}(\alpha_{\rm J}^2)\,, \end{equation} where \begin{equation} a_1\simeq\frac{\pi}{48}\,. \end{equation} \end{appendix}
College and University Search - USA
Study in the USA - Maryland

- All State Career-Baltimore (Baltimore, Maryland): Students: 476; Faculty: 60; Student/Faculty Ratio: 26:1
- Allegany College of Maryland (Cumberland, Maryland): Type: Public; Setting: Rural: Fringe; Est. Tuition: $8,620; Students: 673; Faculty: 509; Student/Faculty Ratio: 13:1
- Anne Arundel Community College (Arnold, Maryland): Est. Tuition: $12,566; Students: 2,398; Faculty: 1,857; Student/Faculty Ratio: 17:1
- Award Beauty School (Hagerstown, Maryland): Est. Tuition: $123,345; Students: 67; Faculty: 123; Student/Faculty Ratio: 123:1
- Baltimore City Community College (Baltimore, Maryland): Setting: City: Large
- Baltimore Hebrew University Inc (Baltimore, Maryland): Type: Private not-for-profit
- Bowie State University (Bowie, Maryland): Masters / Graduate; Setting: Suburb: Large; Students: 1,072; Faculty: 581; Student/Faculty Ratio: 16:1
- Capitol Technology University (Laurel, Maryland)
- Carroll Community College (Westminster, Maryland): Setting: Suburb: Small
- Cecil College (North East, Maryland)
- Chesapeake College (Wye Mills, Maryland)
- College of Southern Maryland (La Plata, Maryland)
- Coppin State University (Baltimore, Maryland)
- DeVry University's Keller Graduate School of Management-Maryland (Bethesda, Maryland)
- DeVry University-Maryland (Bethesda, Maryland)
- Fortis College-Landover (Landover, Maryland)
- Fortis Institute-Towson (Towson, Maryland)
- Frederick Community College (Frederick, Maryland)
- Frostburg State University (Frostburg, Maryland): Setting: Town: Remote
- Garrett College (McHenry, Maryland): Setting: Rural: Distant
- Goucher College (Baltimore, Maryland): Setting: City: Small
- Hagerstown Community College (Hagerstown, Maryland): Setting: Suburb: Midsize
- Harford Community College (Bel Air, Maryland)
- Hood College (Frederick, Maryland)

Note: Some of the schools in the InternationalStudent.com School Search database have not reported all of the data we make available. When this is the case, it is indicated by 'N/A' (not applicable).
Alexander Roda Roda (Drnovice, 13 April 1872 – New York, 20 August 1945) was an Austrian writer.

Life
His original name was Šandor Friedrich Rosenfeld; his sister, Gisela Januszewska, was a physician. In his childhood the family moved to Slavonia. He Germanized his first name from Šandor to Alexander and changed his surname from Rosenfeld to Roda Roda ('roda' means 'stork' in Croatian), because storks nested on the chimney of the family's house in Osijek. He lived and worked in Osijek for more than thirty years. In 1894 he converted to Catholicism. In 1902 he left the army and became a journalist (he served as a war correspondent in the First World War). He contributed to the German satirical magazine Simplicissimus. In 1938 he emigrated to the United States. He wrote numerous comedies (Der König von Crucina, 1892; Bubi, 1912, written jointly with Gustav Meyrink), tales and novels (Soldatengeschichten, two volumes, 1904; Der Ehegarten, 1913; Der Schnaps, der Rauchtabak und die verfluchte Liebe, 1918; Die Panduren, 1935), as well as autobiographical works (Irrfahrten eines Humoristen 1914-1919, 1920; Roda Rodas Roman, 1925). He regarded Osijek as his home town until his death. In 1911 he published a series of articles in the Austrian newspaper Neue Freie Presse, and between 1914 and 1917, as a war correspondent, he published nearly 700 articles (among others in the Pester Lloyd, a German-language newspaper published in Budapest). His success as a writer peaked in the 1920s. His ashes were interred at the Feuerhalle Simmering in Vienna. He was a passionate chess player and often played at the Café Stephanie in Munich. In 1952 a street in Vienna's Floridsdorf district was named after him. In Osijek his statue stands in front of the library on Europska avenija.

Bibliography
1892 – Der Gutsherr von Ljublin
1906 – Eines Esels Kinnbacken
1908 – Von Bienen, Drohnen und Baronen
1909 – Bummler, Schummler und Rossetummler
1913 – 500 Schwänke
1925 – Roda Rodas Roman, Roda Roda erzählt
1927 – Donner und Doria

In Hungarian
A szimuláns és egyéb elbeszélések; translated by Marcell Benedek; Lampel, Budapest, c. 1915 (Magyar könyvtár)
Emmy néni plüssfüggönye; selected and edited by Mária Borbás, translated by Imre Barna; Európa, Budapest, 2000

Filmography
Der Feldherrnhügel, directed by Hans Otto and Erich Schönfelder (1926, based on Roda Roda's 1910 play)
Grandstand for General Staff, directed by Eugen Thiele (1932, based on the play Der Feldherrnhügel)
Grandstand for General Staff, directed by Ernst Marischka (1953)

Screenplays
K. und K. Feldmarschall, directed by Karel Lamač, 1930
Er und seine Schwester, directed by Karel Lamač, 1931
Liebeskommando, directed by Géza Bolváry, 1931

Sources
List of his works
List of his printed works and manuscripts
Biography in German
\section{Introduction} The topological entropy of an action $G \curvearrowright X$ of an amenable group $G$ on a compact metric space $X$ by homeomorphisms is a non-negative number which counts the asymptotic exponential growth rate of the number of distinguishable orbits of the system. Initially introduced by Adler, Konheim and McAndrew~\cite{AdlerKonheimMcAndrew1965} for $\ZZ$-actions, it is an important conjugacy invariant which has been studied broadly. A particularly interesting case is when $G\curvearrowright X$ is a subshift of finite type ($G$-SFT). Up to dynamical conjugacy, there are countably many distinct subshifts of finite type, and therefore at most countably many real numbers can be attained as the entropy of a subshift of finite type. A classical result by Lind~\cite{Lind1984} classifies the topological entropies attainable by $\ZZ$-SFTs as non-negative rational multiples of logarithms of Perron numbers. This characterization relies on a full description of the configurations of $\ZZ$-SFTs as bi-infinite paths on a finite graph and a study of the eigenvalues of their adjacency matrices. A more recent result by Hochman and Meyerovitch~\cite{HochmanMeyerovitch2010} completely classifies the entropies of $\ZZ^d$-SFTs. Interestingly, they show that for $d \geq 2$ the characterization is of an algorithmic nature. More precisely, the numbers attained as entropies of $\ZZ^d$-SFTs coincide with the set of non-negative upper semi-computable real numbers. Their classification relies on a construction which embeds arbitrarily large computation diagrams of an arbitrary Turing machine into a $\ZZ^d$-SFT. The purpose of this study is to explore what entropies can be achieved by subshifts of finite type defined on an arbitrary amenable group $G$. In particular, we shall present a way to transfer entropies attainable by SFTs on a group $H$ to $G$ whenever $H$ can be ``geometrically embedded into $G$''.
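To fix ideas, the simplest nontrivial instance of Lind's characterization is the golden mean shift, the $\ZZ$-SFT over the alphabet $\{0,1\}$ in which the word $11$ is forbidden. Its words of length $n$ are counted by the Fibonacci numbers, so its topological entropy equals \[ \log\left(\frac{1+\sqrt{5}}{2}\right), \] the logarithm of the golden ratio, which is indeed a Perron number.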
A simple observation is that whenever $H$ is a subgroup of an amenable group $G$, then any number obtained as the topological entropy of an $H$-SFT $X$ can also be obtained as the topological entropy of a $G$-SFT $Y$. Indeed, this is achieved by letting $Y$ be the set of all configurations such that every $H$-coset contains a configuration of $X$ and there are no restrictions between each individual $H$-coset. In this article we generalize the above construction by introducing the notion of group charts. A group chart $(X,\gamma)$ is a dynamical structure consisting of a dynamical system $G \curvearrowright X$ and a continuous cocycle $\gamma\colon H \times X \to G$ that associates configurations in $X$ to partitions of its underlying group $G$ into quotients of $H$. Whenever $X$ is a $G$-subshift, we can use the partitions induced by the chart $(X,\gamma)$ to embed any $H$-subshift $Y$ into a $G$-subshift $Y_{\gamma}[X]$ which stores the information of $Y$ in a natural way. We shall show (\Cref{theorem_addition_formula}) that for any such embedding in which the cocycle induces free actions, the topological entropy satisfies the following addition formula,\[h_{\text{top}}(G \curvearrowright Y_{\gamma}[X]) = h_{\text{top}}(G \curvearrowright X)+h_{\text{top}}(H \curvearrowright Y). \] Furthermore, if both $X$ and $Y$ are SFTs, we have that $Y_{\gamma}[X]$ is an SFT. Therefore this formula can be used to embed the entropies of $H$-SFTs into the set of entropies of $G$-SFTs up to a fixed additive constant. We shall introduce the notion of group charts and give a proof of the addition formula in~\Cref{section_charts}. In~\Cref{section_reduce_ent_charts} we shall show that whenever a group chart is given by a $G$-SFT $X$, then we can choose it in such a way that its entropy is arbitrarily small~(\Cref{corollary_reducing_chart_entropy}).
This will follow from a theorem that gives a canonical way of reducing the entropy of subshifts of finite type defined on arbitrary countable amenable groups~(\Cref{theorem_tilings_forthewin}). We shall prove this theorem using the theory of quasitilings introduced by Ornstein and Weiss~\cite{OrnWei1987} and a recent result of Downarowicz, Huczek and Zhang~\cite{DownarowiczHuczekZhang2019}. In~\Cref{section_conditions_charts} we will characterize the existence of free charts, that is, charts for which every element $x \in X$ codes a true partition of $G$ into copies of $H$, through the notion of translation-like actions introduced by Whyte~\cite{Whyte1999}. Furthermore, following the ideas of Jeandel~\cite{Jeandel2015}, we shall show that whenever $H$ is finitely presented and there exists a non-empty $H$-SFT on which $H$ acts freely, then one can always find a free chart $(X,\gamma)$ for which $X$ is a $G$-SFT. Putting all of the previous results together, we shall show the following result. { \renewcommand{\thetheorem}{\ref{theorem_HG}} \begin{theorem} Let $G,H$ be finitely generated amenable groups and let $\mathcal{E}_{\text{SFT}}(H)$ and $\mathcal{E}_{\text{SFT}}(G)$ respectively denote the set of real numbers attainable as topological entropies of an SFT in each group. Suppose that \begin{enumerate} \item $H$ admits a translation-like action on $G$. \item $H$ is finitely presented. \item There exists a non-empty $H$-SFT for which the $H$-action is free. \end{enumerate} Then, for every $\varepsilon >0$ there exists a $G$-SFT $X$ such that $h_{top}(G\curvearrowright X) < \varepsilon$ and \[h_{top}(G\curvearrowright X)+ \mathcal{E}_{\text{SFT}}(H) \subset \mathcal{E}_{\text{SFT}}(G).\] \end{theorem} \addtocounter{theorem}{-1} } In~\Cref{section_characterization_Z2} we shall apply the above theorem to study the groups on which $\ZZ^2$ acts translation-like.
It shall follow that modulo a computability obstruction, any finitely generated amenable group on which $\ZZ^2$ acts translation-like admits the same characterization of the set of numbers that can be attained as topological entropies of subshifts of finite type as $\ZZ^2$. Namely, { \renewcommand{\thetheorem}{\ref{theorem_caract_entropies_G_z2_translation_like}} \begin{theorem} Let $G$ be a finitely generated amenable group with decidable word problem which admits a translation-like action by $\ZZ^2$. The set of entropies attainable by $G$-subshifts of finite type is the set of non-negative upper semi-computable numbers. \end{theorem} \addtocounter{theorem}{-1} } Finally, in~\Cref{section_consequences} we shall use~\Cref{theorem_caract_entropies_G_z2_translation_like} to give a characterization of the numbers attainable as topological entropies of subshifts of finite type in several classes of groups. More precisely, we shall give a complete classification for polycyclic-by-finite groups~(\Cref{theorem_polycyclic}), products of two infinite and finitely generated amenable groups with decidable word problem~(\Cref{corollary_entropy_ofproducts}), countable amenable groups which admit a presentation with decidable word problem and a finitely generated subgroup on which $\ZZ^2$ acts translation-like~(\Cref{corollary_caract_entropies_full}) and infinite and finitely generated amenable branch groups with decidable word problem~(\Cref{theorem_branch_groups}). \section{Preliminaries and notation} In this note we shall consider left actions $G \curvearrowright X$ of countable amenable groups $G$ over compact metric spaces $X$ by homeomorphisms. Let us denote by $F\Subset G$ a finite subset of $G$ and by $1_G$ the identity of $G$. For $K \Subset G$ and $\varepsilon>0$ we say that $F \Subset G$ is left $(K,\varepsilon)$-invariant if $|KF \triangle F| \leq \varepsilon|F|$. 
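The following elementary computation illustrates the definition. \begin{example} Let $G = \ZZ$, $K = \{-1,0,1\}$ and $F = \{0,1,\dots,n-1\}$. Then $KF = \{-1,0,\dots,n\}$, hence $KF \triangle F = \{-1,n\}$ and $F$ is left $(K,\varepsilon)$-invariant for every $\varepsilon \geq \frac{2}{n}$.\hfill\ensuremath{\Diamond}\par \end{example}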
From this point forward we shall omit the word left and plainly speak about $(K,\varepsilon)$-invariant sets. A sequence $\{F_n\}_{n \in \NN}$ of finite subsets of $G$ is called a F\o lner sequence if for every $K \Subset G$ and $\varepsilon>0$ the sequence is eventually $(K,\varepsilon)$-invariant. \subsection{Shift spaces} Let $\Sigma$ be a finite set and $G$ be a group. The set $\Sigma^G = \{ x\colon G \to \Sigma\}$ equipped with the left group action $G \curvearrowright \Sigma^G$ given by $gx(h) \isdef x(hg)$ is the \define{full $G$-shift}. The elements $a \in \Sigma$ and $x \in \Sigma^G$ are called \define{symbols} and \define{configurations} respectively. We endow $\Sigma^G$ with the product topology generated by the clopen subbase given by the \define{cylinders} $[a]_g \isdef \{x \in \Sigma^G~|~x(g) = a\}$. A \define{support} is a finite subset $F \Subset G$. Given a support $F$, a \define{pattern} with support $F$ is an element $p \in \Sigma^F$ and we write $\operatorname{supp}(p) = F$. We denote the cylinder generated by $p$ by $[p] = \bigcap_{h \in F}[p(h)]_{h}$. A subset $X \subset \Sigma^G$ is a \define{$G$-subshift} if and only if it is $G$-invariant and closed in the product topology. Equivalently, $X$ is a $G$-subshift if and only if there exists a set of forbidden patterns $\mathcal{F}$ such that\[X=X_\mathcal{F} \isdef {\Sigma^G \setminus \bigcup_{p \in \mathcal{F}, g \in G} g[p]}.\] Given a subshift $X \subset \Sigma^G$ and a support $F \Subset G$, the \define{language with support $F$} is the set $L_{F}(X) = \{ p \in \Sigma^F \mid [p] \cap X \neq \varnothing \}$ of all patterns which appear in some configuration $x \in X$. The \define{language} of $X$ is the set $L(X) = \bigcup_{F \Subset G}L_{F}(X)$. \begin{remark} It is also possible to define the left $G$-action by $gx(h) \isdef x(g^{-1}h)$ instead of $x(hg)$.
In this article we chose the latter in order to minimize the number of superscripts $^{-1}$ and to make the notation compatible with the setting of~\cite{DownarowiczHuczekZhang2019}, whose results we shall use to prove~\Cref{theorem_tilings_forthewin}. \end{remark} \begin{definition} We say that a subshift $X$ is of \define{finite type (SFT)} if there exists a finite set $\mathcal{F}$ of forbidden patterns such that $X = X_{\mathcal{F}}$. \end{definition} \subsection{Topological entropy} Let $G \curvearrowright X$ be the action of a group over a compact metrizable space by homeomorphisms. Given two open covers $\mathcal{U},\mathcal{V}$ of $X$ we define their \define{join} by $\mathcal{U} \vee \mathcal{V} = \{U \cap V \mid U \in \mathcal{U}, V\in \mathcal{V} \}$. For $g \in G$ let $g\mathcal{U} = \{gU \mid U \in \mathcal{U}\}$ and denote by $N(\mathcal{U})$ the smallest cardinality of a subcover of $\mathcal{U}$. If $F$ is a finite subset of $G$, denote by $\mathcal{U}^F$ the join $$\mathcal{U}^F = \bigvee_{g \in F}g^{-1}\mathcal{U}.$$ \begin{definition} Let $G \curvearrowright X$ be the action of a countable amenable group, $\mathcal{U}$ an open cover and $\{F_n\}_{n \in \NN}$ a F\o lner sequence for $G$. We define the \define{topological entropy of $G \curvearrowright X$ with respect to $\mathcal{U}$} as% \[ h_{\text{top}}(G \curvearrowright X,\mathcal{U})=\lim_{n\rightarrow\infty}\frac{1}{\left\vert F_{n}\right\vert }\log N(\mathcal{U}^{F_{n}}). \] \end{definition} The function $F \mapsto \log N(\mathcal{U}^{F})$ is subadditive and thus the limit does not depend on the choice of F\o lner sequence, see for instance~\cite{OrnWei1987,Krieger2007_ornsteinweiss}. The \define{topological entropy} of $G \curvearrowright X$ is defined as \[ h_{\text{top}}(G \curvearrowright X)=\sup_{\mathcal{U}}h_{\text{top}}(G \curvearrowright X,\mathcal{U}).
\] In the case where $G \curvearrowright X$ is expansive, any open cover $\mathcal{U}$ whose elements have diameter less than the expansivity constant achieves the supremum. Particularly, in the case of a subshift $X \subset \Sigma^G$ we may consider the partition $\xi = \{ [a]_{1_G} \mid a \in \Sigma \}$. For a finite $F \subset G$ we obtain that $\xi^F = \{ [p] \mid p \in L_F(X) \}$. Hence, whenever $X$ is a subshift its topological entropy can be computed by \[ h_{\text{top}}(G \curvearrowright X)=\lim_{n\rightarrow\infty}\frac{1}{\left\vert F_{n}\right\vert }\log(|L_{F_n}(X)|). \] A more intuitive way to understand this limit is that the function $F \mapsto \frac{1}{\left\vert F\right\vert }\log(|L_{F}(X)|)$ converges as $F$ becomes more and more invariant, that is, for every $\varepsilon>0$ there exists $K \Subset G$ and $\delta>0$ such that for any $(K,\delta)$-invariant set $F$ we have $|h_{\text{top}}(G \curvearrowright X) - \frac{1}{\left\vert F\right\vert }\log(|L_{F}(X)|) | \leq \varepsilon$. For a self-contained proof and relevant background see~\cite[Theorem 4.38]{KerrLiBook2016}. In the case when the open cover $\mathcal{U}$ consists of pairwise disjoint open sets, it can be shown that the function $F \mapsto \log N(\mathcal{U}^{F})$ is not only subadditive, but satisfies Shearer's inequality (see~\cite[Corollary 6.2]{DownFrejRomag2015}). This in turn implies that in the case of a subshift we may write: \begin{align}\label{eq_entropyforidiots} h_{\text{top}}(G \curvearrowright X)=\inf_{F \in \mathcal{F}(G)}\frac{1}{\left\vert F\right\vert }\log(|L_{F}(X)|), \end{align} where $\mathcal{F}(G)$ denotes the set of all finite subsets of $G$, see~\cite[Corollary 6.3]{DownFrejRomag2015}. \begin{remark} In fact the result that topological entropy can be computed as an infimum over all finite subsets holds for any $G\curvearrowright X$, although it may not hold individually for every partition $\mathcal{U}$.
This was proven in~\cite{DownFrejRomag2015} using the variational principle. A good way to think about it is that in the context of amenable groups, the topological entropy coincides with the naive entropy of Burton~\cite{burton2017naive}. \end{remark} Let us introduce the following notation which will be useful in the remainder of the article. For a group $G$, we denote the set of real numbers attained as topological entropies of $G$-SFTs by $\mathcal{E}_{\text{SFT}}(G)$. \[ \mathcal{E}_{\text{SFT}}(G) = \{r \in \RR \mid \textrm{there exists a } G\textrm{-SFT } X \textrm{ such that } h_{\text{top}}(G \curvearrowright X) = r \} \] Let us state two classical theorems from the literature which will be used further on. Recall that a \define{Perron number} is a real algebraic integer greater than $1$ and greater than the modulus of its algebraic conjugates. \begin{theorem}[Lind~\cite{Lind1984}] $\mathcal{E}_{\text{SFT}}(\ZZ)$ is the set of non-negative rational multiples of logarithms of Perron numbers. \end{theorem} In order to state the second result, we need to introduce the notion of upper semi-computable numbers; they are also sometimes called ``right-recursively enumerable numbers''. \begin{definition} A real number $r$ is \define{upper semi-computable} if there exists a Turing machine $T$ which on input $n \in \NN$ halts with the coding of a rational number $q_n \geq r$ on its tape such that $\lim_{n \to \infty} q_n = r$. \end{definition} \begin{theorem}[Hochman and Meyerovitch~\cite{HochmanMeyerovitch2010}]\label{theorem_HochmanTom} For $d \geq 2$, $\mathcal{E}_{\text{SFT}}(\ZZ^d)$ is the set of non-negative upper semi-computable numbers. \end{theorem} \section{Realization of entropies of subshifts of finite type} \subsection{Group charts and the addition formula}\label{section_charts} \begin{definition} Let $G,H$ be two topological groups and let $X$ be a compact topological space on which $G$ acts on the left by homeomorphisms.
A continuous map $\gamma \colon H \times X \to G$ is called an \define{$H$-cocycle} if it satisfies the equation \[ \gamma(h_1h_2,x) = \gamma(h_1,\gamma(h_2,x)x)\cdot \gamma(h_2,x) \mbox{ for every $h_1,h_2$ in $H$}.\] \end{definition} The cocycle equation can be represented by the diagram shown in~\Cref{fig:diagram_cocycle}. Let us clarify how this equation fits within the classical setting of cocycles. A continuous map $\gamma$ as above induces an action $H \curvearrowright X$ by setting $h \cdot x = \gamma(h,x)x$, where the product on the right is the one associated to the action $G \curvearrowright X$. With this action $H \curvearrowright X$ in mind, the equation simplifies to the better known equation for cocycles \[ \gamma(h_1h_2,x) = \gamma(h_1,h_2 \cdot x)\cdot \gamma(h_2,x) \mbox{ for every $h_1,h_2$ in $H$}. \] Any $H$-cocycle $\gamma$ induces a family $\{H \overset{x}{\curvearrowright} G\}_{x \in X}$ of left $H$-actions on $G$. Indeed, if for fixed $x \in X$ we define, for $h \in H$ and $g \in G$, the action given by $h \cdot_x g \isdef \gamma(h,gx)g$, then for all $h_1,h_2 \in H$ we have \begin{align*} (h_1h_2) \cdot_x g & = \gamma(h_1h_2,gx)g\\ & = (\gamma(h_1,\gamma(h_2,gx)gx)\cdot \gamma(h_2,gx) )g \\ & =\gamma(h_1,(\gamma(h_2,gx)g)x)\cdot (\gamma(h_2,gx)g) \\ & = h_1 \cdot_x (\gamma(h_2,gx)g) \\ & = h_1 \cdot_x (h_2 \cdot_x g). \end{align*} \begin{figure}[h!]
\centering \begin{tikzpicture} \node[circle, draw] (A) at (240: 2cm) {$x$}; \node[circle, draw] (B) at (0:2cm) {$y$}; \node[circle, draw] (C) at (120:2cm) {$z$}; \draw [->, thick, shorten >=0.3cm,shorten <=0.3cm] (A) arc (240:360:2cm) node[midway,fill=white] {$\gamma(h_2,x)$}; \draw [->, thick, shorten >=0.3cm,shorten <=0.3cm] (A) arc (240:120:2cm) node[midway,fill=white] {$\gamma(h_1h_2,x)$}; \draw [->, thick, shorten >=0.3cm,shorten <=0.3cm] (B) arc (0:120:2cm) node[midway, fill=white] {$\gamma(h_1,y)$}; \node at (5,1) {$y = \gamma(h_2,x)x$}; \node at (5,-1) {$z = \gamma(h_1,y)y = \gamma(h_1h_2,x)x$}; \end{tikzpicture} \caption{The circles $x,y,z$ represent points in the space $X$ while the arrows represent left multiplication by group elements. The cocycle equation states that the arrows commute: $\gamma(h_1h_2,x) = \gamma(h_1,y)\gamma(h_2,x)$.} \label{fig:diagram_cocycle} \end{figure} \begin{remark} If $H$ is a finitely generated group and $S$ a finite generating set for $H$, then the values of any $H$-cocycle $\gamma$ restricted to $S \times X$ define $\gamma$ completely. Furthermore, whenever $G$ is countable, by continuity of $\gamma$ and compactness of $S \times X$ we have that $\gamma$ must be uniformly bounded on $S \times X$ and thus $\gamma(S \times X) \Subset G$. Hence if $X$ is a $G$-subshift, there exists a finite set $F\Subset G$ such that $\gamma$ restricted to $S \times X$ is completely defined by a finite map $\tilde{\gamma} \colon S \times L_{F}(X) \to G$. \end{remark} The following notion is strongly motivated by the work of Jeandel~\cite{Jeandel2015}. \begin{definition} Let $G,H$ be two countable groups. Given a left action $G\curvearrowright X$ and an $H$-cocycle $\gamma \colon H \times X \to G$ we say the pair $(X,\gamma)$ is a \define{$G$-chart} of $H$. Furthermore, if for each $x \in X$ the action $H \overset{x}{\curvearrowright} G$ is free, we say that $(X,\gamma)$ is a \define{free $G$-chart} of $H$.
\end{definition} \begin{example}\label{example_obviouschart} The trivial system $G \curvearrowright \{0\}$ consisting of a single point and the cocycle $\gamma\colon H \times \{0\} \to G$ which sends $(h,0) \mapsto h$ is a free $G$-chart of $H$ for any subgroup $H \leq G$.\hfill\ensuremath{\Diamond}\par \end{example} \begin{example}\label{example_snake} Let $G = \ZZ^2$ and let $\Sigma_{\texttt{snake}}$ be the set of vector pairs given by \[\Sigma_{\texttt{snake}} = \{ (\ell,r) \in \{(1,0),(-1,0),(0,1),(0,-1)\}^2 \mid \ell \neq r \} \] Visually, we may represent $\Sigma_{\texttt{snake}}$ by the set of square unit tiles shown on~\Cref{fig:tiles_Z}. The first vector is represented by the tail of the arrow and the second vector by the outgoing arrow. \begin{figure}[h!] \centering \include{good_tiles} \caption{The alphabet $\Sigma_{\texttt{snake}}$.} \label{fig:tiles_Z} \end{figure} For $a=(\ell,r) \in \Sigma_{\texttt{snake}}$ let $L(a)=\ell$ and $R(a)=r$. We define the \define{snake shift} as the $\ZZ^2$-SFT $X_{\texttt{snake}} \subset (\Sigma_{\texttt{snake}})^{\ZZ^2}$ of all configurations $x$ such that for every position $v \in \ZZ^2$, we have $R(x(v)) = L(x(v + R(x(v))))$ and $L(x(v)) = R(x(v + L(x(v))))$. Visually, these are the configurations such that every outgoing arrow matches with an incoming arrow. Let $\gamma_{\texttt{snake}}\colon \ZZ \times X \to \ZZ^2$ be the $\ZZ$-cocycle defined by $\gamma_{\texttt{snake}}(1,x) = R(x((0,0)))$ and $\gamma_{\texttt{snake}}(-1,x) = L(x((0,0)))$. It can be verified that $(X_{\texttt{snake}},\gamma_{\texttt{snake}})$ is a $\ZZ^2$-chart of $\ZZ$. The $\ZZ^2$-chart $(X_{\texttt{snake}},\gamma_{\texttt{snake}})$ of $\ZZ$ is not free. Indeed, every configuration $x$ in which a cycle appears induces an action $\ZZ \overset{x}{\curvearrowright} \ZZ^2$ which is not free. 
Let $X^{\texttt{free}}_{\texttt{snake}} \subset X_{\texttt{snake}}$ be the \define{free snake} subshift consisting of all configurations $x \in X_{\texttt{snake}}$ such that no cycles appear; it can be verified that $(X^{\texttt{free}}_{\texttt{snake}}, \gamma_{\texttt{snake}}|_{\ZZ \times X^{\texttt{free}}_{\texttt{snake}}})$ is a free $\ZZ^2$-chart of $\ZZ$. See~\Cref{fig:tiles_Z_example}.\hfill\ensuremath{\Diamond}\par\end{example} \begin{figure}[h!] \centering \include{tiles_Z_example} \caption{On the left we see a local patch of $X_{\texttt{snake}}$. The value of the $\ZZ$-cocycle $\gamma_{\texttt{snake}}(n,x)$ corresponds to the vector of $\ZZ^2$ obtained by following the arrow at the origin $n$ times. On the right we see a local patch of a configuration of $X^{\texttt{free}}_{\texttt{snake}}$. As cycles are forbidden, the cocycle induces a free action.} \label{fig:tiles_Z_example} \end{figure} Let $\Sigma$ be a set. The notion of $G$-chart gives a canonical way to recover a configuration in $\Sigma^H$ from a configuration $y \in \Sigma^G$ and basepoints $x \in X$ and $g \in G$. Indeed, if $(X,\gamma)$ is a $G$-chart of $H$ we can associate to every $y \in \Sigma^G$ a configuration $\pi_{x,g}(y) \in \Sigma^H$ by setting \[ \pi_{x,g}(y)(h) \isdef y(h \cdot_x g) = y(\gamma(h,gx)g) \mbox{ for every $h$ in $H$.}\] Moreover, this configuration satisfies that for every $h_1,h_2 \in H$: \begin{align*} (h_2\pi_{x,g}(y))(h_1) & = \pi_{x,g}(y)(h_1h_2) \\ & = y((h_1h_2) \cdot_x g)\\ & = y(h_1\cdot_x (h_2 \cdot_x g))\\ & = (\pi_{x,h_2 \cdot_x g}(y))(h_1) \end{align*} In other words, the left shift action of $h_2$ on $\pi_{x,g}(y)$ is the same as $\pi_{x,h_2 \cdot_x g}(y)$, that is, the configuration obtained by replacing the basepoint $g$ by $h_2 \cdot_x g$. From now on, we shall only consider $G$-charts $(X,\gamma)$ where $X$ is a $G$-subshift. \begin{definition} Let $(X,\gamma)$ be a $G$-chart of $H$ and $Y\subset \Sigma^H$ be an $H$-subshift.
The \define{$(X,\gamma)$-embedding of $Y$} is the $G$-subshift $Y_{\gamma}[X] \subset \Sigma^G \times X$ which has the property that $(y,x) \in Y_{\gamma}[X]$ if and only if for every $g \in G$ the configuration $\pi_{x,g}(y)$ is in $Y$. \end{definition} In simpler words, $Y_{\gamma}[X]$ is the subshift of all pairs $(y,x)$ where $x \in X$ and every copy of $H$ induced by the action $H \overset{x}{\curvearrowright} G$ is decorated independently with a configuration from $Y$. \newcommand{\alfTriang}{\begin{tikzpicture}[scale = 0.5] \draw[fill = black!5] (0.2,0.2) -- (0.8,0.2) -- (0.5,0.8) -- cycle; \end{tikzpicture}} \newcommand{\alfCircle}{\begin{tikzpicture}[scale = 0.5] \draw[fill = black!5] (0.5,0.5) circle (0.3); \end{tikzpicture}} \newcommand{\alfSquare}{\begin{tikzpicture}[scale = 0.5] \draw[fill = black!5] (0.2,0.2) rectangle (0.8,0.8); \end{tikzpicture}} \begin{example} Consider the free $\ZZ^2$-chart $(X,\gamma)$ of $\ZZ$ from~\Cref{example_snake}, that is, $X = X^{\texttt{free}}_{\texttt{snake}}$ and $\gamma = \gamma_{\texttt{snake}}|_{\ZZ \times X^{\texttt{free}}_{\texttt{snake}}}$. Consider the $\ZZ$-subshift $Y$ consisting of the orbit of the sequence $x$ over the alphabet $\Sigma = \{\alfSquare, \alfTriang, \alfCircle \}$ given by \[ x(n) = \begin{cases} \alfSquare & \mbox{ if } n =0 \bmod{3}\\ \alfTriang & \mbox{ if } n =1 \bmod{3}\\ \alfCircle & \mbox{ if } n =2 \bmod{3}. \end{cases} \] \begin{figure}[h!] \centering \include{tiles_Z_examplecolor} \caption{The subshift $Y_{\gamma}[X]$ is obtained by ``overlaying'' the copies of $H$ induced by $\gamma$ on $X$ with configurations of $Y$.} \label{fig:tiles_Z_examplecolor} \end{figure} The subshift $Y_{\gamma}[X]$ is the set of all configurations $(y,x) \in \{\alfSquare, \alfTriang, \alfCircle \}^{\ZZ^2} \times X$ such that every path in $x$ induced by $\gamma$ is decorated independently with a configuration from $Y$, see~\Cref{fig:tiles_Z_examplecolor}.
\hfill\ensuremath{\Diamond}\par \end{example} \begin{remark}\label{remark_SFTchart_SFT_is_SFT} Let $(X,\gamma)$ be a $G$-chart of $H$. If $X$ is a $G$-SFT and $Y$ is an $H$-SFT then $Y_{\gamma}[X]$ is also a $G$-SFT. \end{remark} \begin{remark}\label{remark_chart_having_one_free_action_is_good} If $(X,\gamma)$ is a $G$-chart of $H$ and there is $x \in X$ such that $H \overset{x}{\curvearrowright} G$ is free, then the map $\pi\colon Y_{\gamma}[X] \to Y$ given by $\pi(y,x) = \pi_{x,1_G}(y)$ is surjective. In particular, $Y_{\gamma}[X]$ is non-empty if and only if $Y$ is non-empty. \end{remark} The following result is the main tool that will allow us to take a subshift of finite type with fixed topological entropy defined on a group $H$, and realize it, modulo a fixed constant, as the topological entropy of a subshift of finite type defined on any group where $H$ can be freely charted. It shows that the entropy of any subshift which is embedded in a free chart can be expressed through an addition formula. \begin{theorem}[addition formula]\label{theorem_addition_formula} Let $G,H$ be countable amenable groups. For any free $G$-chart $(X,\gamma)$ of $H$ and for any $H$-subshift $Y$ we have \begin{equation} h_{\text{top}}(G \curvearrowright Y_{\gamma}[X]) = h_{\text{top}}(H \curvearrowright Y)+ h_{\text{top}}(G \curvearrowright X).\end{equation} \end{theorem} \begin{proof} Denote by $\Sigma_X$ and $\Sigma_Y$ the alphabets of $X$ and $Y$ respectively. Let $\varepsilon >0$. There exists $S \Subset H$ and $\delta>0$ such that every non-empty $(S,\delta)$-invariant set $F \Subset H$ satisfies \begin{equation*} \ee^{h_{\text{top}}(H \curvearrowright Y)|F|} \leq |L_F(Y)| \leq \ee^{(h_{\text{top}}(H \curvearrowright Y)+\varepsilon)|F|}. \end{equation*} Let $\gamma\colon H \times X \to G$ be the $H$-cocycle associated to $X$. As $G$ is countable and $S$ is finite, the restriction of $\gamma$ to $S \times X$ is bounded.
Let $W_1 \Subset G$ be a set such that $\gamma(S \times X) \subset W_1$. By continuity of $\gamma$ there exists $W_2 \Subset G$ such that for every $s \in S$ we have $\gamma(s,x) = \gamma(s,y)$ whenever $x|_{W_2} = y|_{W_2}$. For every $\varepsilon'>0$ there exists a finite set $S' \supset W_1 \cup W_2$ and $\delta'>0$ such that every non-empty $(S',\delta')$-invariant $F' \Subset G$ satisfies \begin{equation*} \ee^{h_{\text{top}}(G \curvearrowright X)|F'|} \leq |L_{F'}(X)| \leq \ee^{(h_{\text{top}}(G \curvearrowright X)+\varepsilon')|F'|}. \end{equation*} Let $F' \Subset G$ be an $(S',\delta')$-invariant set and consider a pattern $p \in L_{S'F'}(X)$. As $W_2 \subset S'$, for each $f' \in F'$ and $s \in S$ the map $\gamma_{p}(s,f') \isdef \gamma(s,f'x)$, where $x\in X$ is any configuration such that $x|_{S'F'} = p$, is well defined. Let us define the relation $R \subset F' \times F'$ as the smallest equivalence relation such that whenever $f_1',f_2' \in F'$ satisfy that for some $s_1,s_2 \in S$ we have $\gamma_{p}(s_1,f_1')f_1'=\gamma_{p}(s_2,f_2')f_2'$, then $(f_1',f_2')\in R$. The equivalence relation $R$ induces a partition $F' = F^{p}_1 \uplus F^{p}_2 \uplus \dots \uplus F^{p}_{k(p)}$. Let us denote by $\partial_S F^{p}_i$ the set of all $g' \in S'F' \setminus F'$ for which there is $f' \in F^{p}_i$ and $s \in S$ such that $\gamma_{p}(s,f')f' = g'$. By definition of $R$, note that the sets $\partial_S F^{p}_i$ are pairwise disjoint and $\partial_S F^{p}_i \subset S'F' \setminus F'$. We obtain that $\sum_{i = 1}^{k(p)} |\partial_S F^{p}_i| \leq |S'F' \setminus F'|\leq \delta' |F'|$. Dividing both sides by $|F'|$ and multiplying each left term by $\frac{|F^{p}_i|}{|F^{p}_i|}$ we obtain: \begin{equation*} \sum_{i = 1}^{k(p)} \frac{|\partial_S F^{p}_i|}{|F^{p}_i|}\frac{|F^{p}_i|}{|F'|} \leq \delta'. \end{equation*} Denote by $\mu_i$ the ratio $\frac{|F^{p}_i|}{|F'|}$ and by $\delta_i$ the ratio $\frac{|\partial_S F^{p}_i|}{|F^{p}_i|}$.
Note that $\mu_i \in [0,1]$, $\sum_{i = 1}^{k(p)}\mu_i = 1$ and $\delta_i \in [0,|S|]$. Let $I(p)$ be the set of indices such that $\delta_i \leq \delta$. We have that $\sum_{i \in I(p)}\delta_i \mu_i + \sum_{j \notin I(p)}\delta_j \mu_j \leq \delta'$. A simple manipulation of this expression yields \begin{equation}\label{equation_good_proportion} \sum_{i \in I(p)}\mu_i \geq 1-\frac{\delta'}{\delta}. \end{equation} The intuitive meaning of~\Cref{equation_good_proportion} is that the proportion of sites in the $(S',\delta')$-invariant set $F'$ which lie in an induced subset of $H$ which is $(S,\delta)$-invariant can be made arbitrarily close to $1$ by tweaking the ratio $\frac{\delta'}{\delta}$. As the $G$-chart $(X,\gamma)$ of $H$ is free, we can identify each set $F_i^{p}$ with a subset $H^p_i \Subset H$ and $\partial_S F^{p}_i$ with $SH^p_i \setminus H^p_i$. Furthermore, we have $|H^p_i| = |F_i^{p}|$ and $|SH^p_i \setminus H^p_i| = |\partial_S F^p_i|$. In other words, whenever $|\partial_S F^p_i| \leq \delta |F^p_i|$, the set $H^p_i$ is $(S,\delta)$-invariant. Now we use this computation to estimate the size of $|L_{F'}(Y_{\gamma}[X])|$. Clearly $|L_{S'F'}(X)| \geq |L_{F'}(X)|$. We thus obtain \begin{align*} |L_{F'}(Y_{\gamma}[X])| & \leq \sum_{p\in L_{S'F'}(X)}\prod_{i=1 }^{k(p)}|L_{H^p_i}(Y)| \\ & \leq \sum_{p\in L_{S'F'}(X)}\prod_{i\in I(p)}|L_{H^p_i}(Y)|\prod_{j\notin I(p)}|L_{H^p_j}(Y)|\\ & \leq \sum_{p\in L_{S'F'}(X)}\prod_{i\in I(p)}|L_{H^p_i}(Y)|\prod_{j\notin I(p)}|\Sigma_Y|^{|H^p_j|}\\ & \leq |\Sigma_Y|^{\frac{\delta'|F'|}{\delta}} \sum_{p\in L_{S'F'}(X)}\prod_{i\in I(p)}|L_{H^p_i}(Y)|.
\end{align*} As each $H^p_i$ for $i \in I(p)$ is $(S,\delta)$-invariant, we get $|L_{H^p_i}(Y)| \leq \ee^{(h_{\text{top}}(H \curvearrowright Y)+\varepsilon)|H^p_i|}$ and thus, as $\sum_{i \in I(p)}|H^p_i| \leq |F'|$, \begin{align*} |L_{F'}(Y_{\gamma}[X])| & \leq |\Sigma_Y|^{\frac{\delta'|F'|}{\delta}} \sum_{p\in L_{S'F'}(X)}\prod_{i\in I(p)}\ee^{(h_{\text{top}}(H \curvearrowright Y)+\varepsilon)|H^p_i|}\\ & \leq |\Sigma_Y|^{\frac{\delta'|F'|}{\delta}} \sum_{p\in L_{S'F'}(X)}\ee^{(h_{\text{top}}(H \curvearrowright Y)+\varepsilon)|F'|}\\ & \leq |\Sigma_Y|^{\frac{\delta'|F'|}{\delta}} \ee^{(h_{\text{top}}(H \curvearrowright Y)+\varepsilon)|F'|} |L_{S'F'}(X)|. \end{align*} Therefore we obtain that, \begin{align*} \frac{1}{|F'|}\log(|L_{F'}(Y_{\gamma}[X])|) & \leq \frac{\delta'}{\delta}\log(|\Sigma_Y|) + (h_{\text{top}}(H \curvearrowright Y)+\varepsilon)+ \frac{1}{|F'|}\log(|L_{S'F'}(X)|)\\ & \leq \frac{\delta'}{\delta}\log(|\Sigma_Y|) + h_{\text{top}}(H \curvearrowright Y)+\varepsilon+ \frac{1}{|F'|} \left(\log(|L_{F'}(X)|)+ \log(|L_{S'F'\setminus F'}(X)|) \right) \\ & \leq \frac{\delta'}{\delta}\log(|\Sigma_Y|) + h_{\text{top}}(H \curvearrowright Y)+\varepsilon+ \frac{1}{|F'|}\log(|L_{F'}(X)|) + \frac{|S'F'\setminus F'|}{|F'|}\log(|\Sigma_X|). \end{align*} As $F'$ is an $(S',\delta')$-invariant set, we get that $|S'F'\setminus F'| \leq \delta'|F'|$. Furthermore, by definition this also implies that $\log(|L_{F'}(X)|) \leq (h_{\text{top}}(G \curvearrowright X) + \varepsilon')|F'|$, therefore for every $(S',\delta')$-invariant set $F'$ we have, \begin{align*} \frac{1}{|F'|}\log(|L_{F'}(Y_{\gamma}[X])|) \leq h_{\text{top}}(H \curvearrowright Y) + h_{\text{top}}(G \curvearrowright X) +\varepsilon+\varepsilon'+ \frac{\delta'}{\delta}\log(|\Sigma_Y|) + \delta'\log(|\Sigma_X|). \end{align*} By the infimum formula for the entropy, the previous expression is an upper bound for the entropy $h_{\text{top}}(G \curvearrowright Y_{\gamma}[X])$.
Now choose $\varepsilon = \varepsilon' = \frac{1}{n}$; this determines corresponding values of $\delta$ and $\delta'$, and we may moreover choose $\delta' \leq \frac{\delta}{n}$. Letting $n$ go to infinity we obtain, \begin{equation} h_{\text{top}}(G \curvearrowright Y_{\gamma}[X]) \leq h_{\text{top}}(H \curvearrowright Y)+ h_{\text{top}}(G \curvearrowright X).\end{equation} For the lower bound,~\Cref{eq_entropyforidiots} shows that a lower bound for $|L_{H^p_i}(Y)|$ is given by $\ee^{h_{\text{top}}(H \curvearrowright Y)|H^p_i|}$. It is then not hard to see that for every $F' \Subset G$ we have,\begin{align} |L_{F'}(Y_{\gamma}[X])| & \geq |L_{F'}(X)|\ee^{h_{\text{top}}(H \curvearrowright Y)|F'|}. \end{align} From this we obtain the other inequality. \end{proof} Recall that by~\Cref{remark_SFTchart_SFT_is_SFT}, if both the subshift $X$ in a chart $(X,\gamma)$ and the embedded subshift $Y$ are SFTs, then $Y_{\gamma}[X]$ is also an SFT. This gives us a way of producing a new $G$-SFT whose topological entropy is the sum of the entropies of $X$ and $Y$. \begin{corollary}\label{corollary_realize_entropy} If $(X,\gamma)$ is a free $G$-chart of $H$ and $X$ is a $G$-SFT, then for every $H$-SFT $Y$ there exists a $G$-SFT $Z$ which has entropy $h_{\text{top}}(G \curvearrowright X)+h_{\text{top}}(H\curvearrowright Y)$. In other words, \[ h_{\text{top}}(G \curvearrowright X) + \mathcal{E}_{\text{SFT}}(H) \subset \mathcal{E}_{\text{SFT}}(G).\] \end{corollary} In what follows we shall show that if there is at least one free $G$-chart $(X,\gamma)$ of $H$ where $X$ is a $G$-SFT, then it is always possible to find another such chart where $X$ has arbitrarily low entropy. \subsection{Reducing the entropy of a chart}\label{section_reduce_ent_charts} The goal of this section is to develop a method for reducing the entropy of a subshift of finite type in such a way that the new subshift of finite type preserves any cocycle defined on the original one.
To do this we will use the machinery of quasitilings developed by Ornstein and Weiss in~\cite{OrnWei1987}. In order to minimize the complexity of the proof, we shall in fact use a recent result by Downarowicz, Huczek and Zhang~\cite{DownarowiczHuczekZhang2019} which shows, for any countable amenable group, the existence of zero-entropy exact tilings where each tile can be made arbitrarily invariant. The ideas presented in this section have been strongly influenced by the work of Frisch and Tamuz~\cite{FrischTamuz2015}, who use similar methods to study generic properties of the set of all subshifts. \begin{definition} Let $G$ be a group. A \define{tile set} is a finite collection $\mathcal{T} = \{T_1,\dots,T_n\}$ of finite subsets of $G$ which contain the identity. A \define{tiling} of $G$ by $\mathcal{T}$ is a function $\tau \colon G \to \mathcal{T} \cup \{\varnothing\}$ such that: \begin{enumerate} \item ($\tau$ is pairwise-disjoint) For every $g,h \in G$, if $g \neq h$ then $\tau(g)g \cap \tau (h)h =\varnothing$. \item ($\tau$ covers $G$) For every $g \in G$ there exists $h \in G$ such that $g \in \tau(h)h$. \end{enumerate} \end{definition} \begin{lemma}\label{lemma_tilings_are_SFTS} Let $\mathcal{T}$ be a tile set. The collection of all tilings of $G$ by $\mathcal{T}$ is a $G$-SFT. \end{lemma} \begin{proof} Let $X_{\mathcal{T}} \subset (\mathcal{T}\cup \{\varnothing\})^G$ be the set of all configurations $\tau \colon G \to \mathcal{T} \cup \{\varnothing\}$ which avoid the set of forbidden patterns $\mathcal{D} \cup \mathcal{C}$ where \begin{enumerate} \item $\mathcal{D}$ is the set of all patterns $p$ with support $\{1_G,g\}$ where $g = t_2^{-1}t_1 \neq 1_G$ for some $t_1, t_2 \in \bigcup_{i \leq n}T_i$ and which satisfy that $p(1_G) \cap p(g)g \neq \varnothing$. \item $\mathcal{C}$ consists of all patterns $q$ with support $\bigcup_{i \leq n}T_i^{-1}$ such that $1_G \notin q(g)g$ for every $g \in \supp(q)$.
\end{enumerate} Both $\mathcal{D}$ and $\mathcal{C}$ are finite and thus $X_{\mathcal{T}}$ is a $G$-SFT. We claim that $\tau \in X_{\mathcal{T}}$ if and only if $\tau$ is a tiling of $G$ by $\mathcal{T}$. We shall show this in two parts. Let $\tau \in (\mathcal{T}\cup \{\varnothing\})^G$. \begin{enumerate} \item $\tau$ is pairwise disjoint if and only if no pattern from $\mathcal{D}$ appears in $\tau$. Indeed, if $\tau$ is not pairwise disjoint there are $h_1 \neq h_2$ such that $\tau(h_1)h_1 \cap \tau(h_2)h_2 \neq \varnothing$. Letting $\tau' = h_1\tau$ we have $\tau'(1_G) = \tau(h_1)$ and $\tau'(h_2h_1^{-1}) = \tau(h_2)$, therefore $\tau(h_1)h_1 \cap \tau(h_2)h_2 \neq \varnothing$ if and only if $\tau'(1_G) \cap (\tau'(h_2h_1^{-1}))h_2h_1^{-1} \neq \varnothing$. This means that there exist $t_1 \in \tau'(1_G)$ and $t_2 \in \tau'(h_2h_1^{-1})$ such that $t_1 = t_2h_2h_1^{-1}$, equivalently such that $h_2h_1^{-1} = t_2^{-1}t_1$. Let $g = t_2^{-1}t_1$. We get that $\tau'(1_G) \cap (\tau'(g))g \neq \varnothing$ if and only if $\tau'|_{\{1_G,g\}} \in \mathcal{D}$ and thus $\tau'|_{\{1_G,g\}}$ appears in $\tau$. \item $\tau$ covers $G$ if and only if no pattern from $\mathcal{C}$ appears in $\tau$. Indeed, suppose $\tau$ does not cover $G$, then there is $g \in G$ such that for every $h \in G$, $g \notin \tau(h)h$. Letting $\tau' = g\tau$ we obtain that $\tau(h) = \tau'(hg^{-1})$ and hence $g \notin \tau(h)h$ for every $h \in G$ if and only if $1_G \notin \tau'(hg^{-1})hg^{-1}$ for every $h \in G$ which is the same as saying that $1_G \notin \tau'(s)s$ for every $s \in G$. This is equivalent to $\tau'|_{\bigcup_{i \leq n}T_i^{-1}} \in \mathcal{C}$. Therefore $\tau$ does not cover $G$ if and only if there is $g \in G$ such that $(g\tau)|_{\bigcup_{i \leq n}T_i^{-1}} \in \mathcal{C}$, which is the same as saying that a pattern from $\mathcal{C}$ appears in $\tau$. 
\end{enumerate} Therefore $\tau \in X_{\mathcal{T}}$ if and only if $\tau$ is a tiling of $G$ by $\mathcal{T}$.\end{proof} \begin{remark} The orbit closure of any tiling $\tau \colon G \to \mathcal{T} \cup \{\varnothing\}$ forms a $G$-subshift which is not necessarily of finite type. We shall denote by $h_{\text{top}}(\tau)$ the topological entropy of said subshift. \end{remark} \begin{theorem}[Downarowicz, Huczek and Zhang~\cite{DownarowiczHuczekZhang2019}]\label{teorema_tiling_exacto} Let $G$ be a countable amenable group. For any $F\Subset G$ and $\delta >0$ there exists a tile set $\mathcal{T}$ such that every $T \in \mathcal{T}$ is $(F,\delta)$-invariant and there exists a tiling $\tau$ by $\mathcal{T}$ such that $h_{\text{top}}(\tau )=0$. \end{theorem} \begin{lemma}\label{lemma_SFT_small} Let $G$ be a countable amenable group and $X \subset \Sigma^G$ be a $G$-SFT. If $Y \subset X$ is a subshift, then for every $\varepsilon>0$ there exists a $G$-SFT $Z\subset X$ so that \[h_{\text{top}}(G \curvearrowright Y) \leq h_{\text{top}}(G \curvearrowright Z) \leq h_{\text{top}}(G \curvearrowright Y)+\varepsilon.\] \end{lemma} \begin{proof} Fix $\varepsilon >0$. By~\Cref{eq_entropyforidiots} there exists $D \Subset G$ so that $\log(|L_{D}(Y)|) \leq |D|(h_{\text{top}}(G \curvearrowright Y)+\varepsilon)$. Let $\mathcal{F}_1$ be a set of forbidden patterns which defines $X$ and let $\mathcal{F} = \mathcal{F}_1 \cup (\Sigma^D \setminus L_{D}(Y))$. Letting $Z$ be the $G$-SFT defined by the set of forbidden patterns $\mathcal{F}$, we have $Z \subset X$ and $Y \subset Z$, from which it follows that $h_{\text{top}}(G \curvearrowright Y) \leq h_{\text{top}}(G \curvearrowright Z)$.
Furthermore, by construction we get $L_D(Z) = L_D(Y)$ and thus we have \[ h_{\text{top}}(G \curvearrowright Z) = \inf_{F \in \mathcal{F}(G)} \frac{1}{|F|}\log(|L_F(Z)|) \leq \frac{1}{|D|}\log(|L_D(Z)|) \leq h_{\text{top}}(G \curvearrowright Y)+\varepsilon.\] Thus $Z$ satisfies the required properties.\end{proof} Let $T,K$ be finite subsets of $G$. The $K$-core of $T$ is the set $\textrm{Core}_K(T) = \{t\in T \mid Kt \subset T \}$. It is an easy exercise to show that if $T$ is a $(K,\frac{\delta}{|K|})$-invariant set, then $|T \setminus \textrm{Core}_K(T)|< \delta |T|$; for a proof, see~\cite[Lemma 2.6]{DownarowiczHuczekZhang2019}. Now we are ready to state the main theorem of this section, which shows that every SFT admits subsystems which are SFTs with arbitrarily low topological entropy. \begin{theorem}\label{theorem_tilings_forthewin} Let $G$ be a countable amenable group and $X\subset \Sigma^G$ be a $G$-SFT. For every $\varepsilon > 0$ there exists a $G$-SFT $Z \subset X$ such that $h_{\text{top}}(G \curvearrowright Z) \leq \varepsilon$. \end{theorem} \begin{proof} We claim that it suffices to show that for every $\varepsilon >0$ there exists a $G$-SFT $Y$ (on a different alphabet) such that $h_{\text{top}}(G \curvearrowright Y) \leq \varepsilon$ and a continuous $G$-equivariant map $\phi\colon Y \to X$. Indeed, if this is the case, using the above result with $\frac{\varepsilon}{2}$ and the property that (for amenable group actions) topological entropy does not increase under topological factor maps, we obtain that $\phi(Y)$ is a subshift of $X$ with entropy $h_{\text{top}}(G \curvearrowright \phi(Y)) \leq \frac{\varepsilon}{2}$. Using~\Cref{lemma_SFT_small} with $\frac{\varepsilon}{2}$ we obtain an SFT $Z \subset X$ whose entropy is bounded by $h_{\text{top}}(G \curvearrowright \phi(Y)) + \frac{\varepsilon}{2} \leq \varepsilon$ as required. Let us show the above claim.
Let $\mathcal{F}$ be a finite set of forbidden patterns which defines $X$, let $F = \bigcup_{p \in \mathcal{F}}\supp(p)$ be the union of their supports and $K = FF^{-1}$. By~\Cref{teorema_tiling_exacto} there exists a tileset $\mathcal{T}= \{T_1,\dots,T_n\}$ such that every tile in $\mathcal{T}$ is $(K,\frac{\varepsilon}{4|K|\log(|\Sigma|)})$-invariant and which admits a tiling $\tau^*$ by $\mathcal{T}$ with zero entropy. In particular, the $K$-core of each tile $T \in \mathcal{T}$ satisfies $|T \setminus \textrm{Core}_K(T)|< \frac{\varepsilon}{4\log(|\Sigma|)} |T|$ and we can find a finite set $D \Subset G$ such that $\log(|L_{D}(\overline{ \{g \tau^*\}_{g \in G}} )|) \leq \frac{\varepsilon}{4} |D|$. By~\Cref{lemma_tilings_are_SFTS} the set $X_{\mathcal{T}}$ of all tilings of $G$ by $\mathcal{T}$ is a $G$-SFT. Consider the subshift of finite type $X^{\mathcal{L}}_{\mathcal{T}} \subset X_{\mathcal{T}}$ where we additionally forbid the finite set of patterns $\mathcal{L}$: \[\mathcal{L} = (\mathcal{T}\cup \{\varnothing\})^D \setminus L_{D}(\overline{ \{g \tau^*\}_{g \in G}} ). \] Clearly $\tau^* \in X^{\mathcal{L}}_{\mathcal{T}}$, hence $X^{\mathcal{L}}_{\mathcal{T}}$ is a non-empty $G$-SFT. Furthermore we have \[ h_{\text{top}}(G \curvearrowright X^{\mathcal{L}}_{\mathcal{T}}) = \inf_{F \Subset G} \frac{1}{|F|}\log(|L_{F}(X^{\mathcal{L}}_{\mathcal{T}})|) \leq \frac{1}{|D|}\log(|L_{D}(X^{\mathcal{L}}_{\mathcal{T}})|) \leq \frac{\varepsilon}{4}. \] Consider the set $U \isdef \bigcup_{i \leq n}T_i$. We define $X^{\star}$ as the set of all configurations in $(\Sigma \cup U)^G$ for which no forbidden patterns from $\mathcal{F}$ appear. Finally, we define $Y \subset X^{\mathcal{L}}_{\mathcal{T}} \times X^{\star}$ as the set of all pairs of configurations $(\tau,x)$ such that for every $g \in G$ if we let $(\tau',x') = (g\tau,gx)$ then we have: \begin{enumerate} \item If $h \in \textrm{Core}_K(\tau'(1_G))$ then $x'(h) = h$. 
\item If $h \in \tau'(1_G) \setminus \textrm{Core}_K(\tau'(1_G))$ we have $x'(h)\in \Sigma$. \item $x'|_{\tau'(1_G)\setminus \textrm{Core}_K(\tau'(1_G))} \in L_{\tau'(1_G)\setminus \textrm{Core}_K(\tau'(1_G))}(X)$. \end{enumerate} In other words, $Y$ is the $G$-subshift which consists of all configurations obtained by overlaying some $x \in X$ with a tiling $\tau \in X^{\mathcal{L}}_{\mathcal{T}}$ and replacing every symbol in the $K$-core of a tile by an address pointing to the center of the tile. We claim $Y$ is a $G$-SFT. Indeed, it can be obtained from the $G$-SFT $X^{\mathcal{L}}_{\mathcal{T}} \times X^{\star}$ by forbidding the finite collection of all patterns $p$ with support $U$ for which the first coordinate of $p(1_G)$ is some $T \in \mathcal{T}$ and either there is $g \in \textrm{Core}_K(T)$ for which the second coordinate of $p(g)$ is not $g$ or the pattern obtained by restricting the second coordinate of $p$ to ${T \setminus \textrm{Core}_K(T)}$ is not in $L_{T\setminus \textrm{Core}_K(T)}(X)$. We leave it as an exercise to the reader to verify that $(\tau,x) \in Y$ if and only if no patterns as above appear. Let us first construct the $G$-equivariant map $\phi \colon Y \to X$. Informally, $\phi$ is the map that erases the tiling $\tau$ and replaces the addresses (which appear in the $K$-core of some $Tg$ for $T \in \mathcal{T}$) by the symbols of some fixed pattern which depends only on the values of $x$ on $Tg \setminus \textrm{Core}_K(T)g$. Formally, associate to every $T \in \mathcal{T}$ and pattern $p \in L_{T\setminus \textrm{Core}_K(T)}(X)$ a pattern $\eta(T,p) \in L_{T}(X)$ such that $\eta(T,p)|_{T\setminus \textrm{Core}_K(T)} = p$. Let $\Phi\colon Y \to \Sigma$ be defined by \[\Phi(\tau,x) \isdef \begin{cases} x(1_G) & \mbox{ if }x(1_G) \in \Sigma \\ \eta(\tau(h^{-1}),(h^{-1}x)|_{\tau(h^{-1}) \setminus \textrm{Core}_K(\tau(h^{-1}))})(h) & \mbox{ if }x(1_G) = h \in U. \end{cases} \] As $U$ is finite this map is local. 
As a consequence, $\phi \colon Y \to \Sigma^G$ given by $\phi(\tau,x)(g) = \Phi(g\tau,gx)$ is a continuous $G$-equivariant map. Let us show that $\phi(\tau,x) \in X$. If it is not the case, then there exists $p \in \mathcal{F}$ and $g \in G$ such that $\phi(g\tau,gx)|_{\supp(p)} = p$. For simplicity, let us rename $(\tau',x') = (g\tau,gx)$. If for every $s \in \supp(p)$ we have $x'(s) \in \Sigma$ then $\phi(\tau',x')|_{\supp(p)} = x'|_{\supp(p)}$ which cannot be $p$ by definition of $X^{\star}$. Otherwise we have $\bar{s} \in \supp(p)$ such that $x'(\bar{s}) = h \in U$, which in turn means that $\tau'(h^{-1}\bar{s}) \in \mathcal{T}$. In other words, for $f = h^{-1}\bar{s}$ we have $\bar{s} \in \textrm{Core}_K(\tau'(f))f$. By definition of $K$-core, we have that $K\bar{s} \subset \tau'(f)f$. As $\supp(p) \subset F$ and $K = FF^{-1}$ we obtain that $\supp(p)\subset \tau'(f)f$. By definition of $\phi$ and $\eta$ we have that $\phi(f\tau',fx')|_{\tau'(f)} = \eta(\tau'(f), fx'|_{\tau'(f)\setminus \textrm{Core}_K(\tau'(f))}) \in L_{\tau'(f)}(X)$. In particular, $\phi(\tau',x')|_{\tau'(f)f} \in L_{\tau'(f)f}(X)$. As $\supp(p)\subset \tau'(f)f$ this shows that $\phi(\tau',x')|_{\supp(p)} \neq p$, yielding a contradiction. Lastly, let us verify that $h_{\text{top}}(G\curvearrowright Y) \leq \varepsilon$. As $h_{\text{top}}(G \curvearrowright X^{\mathcal{L}}_{\mathcal{T}}) \leq \frac{\varepsilon}{4}$, we can find $W_1 \Subset G$ and $\delta_1 >0$ such that any $(W_1,\delta_1)$-invariant set $R$ satisfies $\log(|L_{R}(X^{\mathcal{L}}_{\mathcal{T}})|) \leq |R|\frac{\varepsilon}{2}$. Pick $W \isdef W_1 \cup U$ and $\delta < \delta_1$ sufficiently small (for instance $\delta < \min(\delta_1,\frac{\varepsilon}{4|U|\log(|\Sigma|)})$) such that any $(W,\delta)$-invariant set $R$ satisfies that $|R \setminus \textrm{Core}_U(R)|< \frac{\varepsilon}{4\log(|\Sigma|)}|R|$.
Fix $\tau \in X^{\mathcal{L}}_{\mathcal{T}}$ and let us denote by $L_{R}(Y,\tau)$ the set of $p \in L_R(Y)$ for which the first coordinate is $\tau|_{R}$. Let us write $R$ as the disjoint union $R_1 \uplus R_2$, where $R_1$ is the set of all $g \in R$ for which there is $h \in G$ such that $g \in \tau(h)h$ and $\tau(h)h \subset R$. By definition, as $\tau(h) \subset U$, we have that $R_2 \subset R \setminus \textrm{Core}_U(R)$ and hence $|R_2|< \frac{\varepsilon}{4\log(|\Sigma|)}|R|$. On the other hand, the symbols in every position in $\textrm{Core}_K(\tau(h))h$ are fixed. As the sets $\tau(h)h$ cover $R_1$ and $|\tau(h)h \setminus \textrm{Core}_K(\tau(h))h|< \frac{\varepsilon}{4\log(|\Sigma|)} |\tau(h)|$, at most $\frac{\varepsilon}{4\log(|\Sigma|)}|R_1| \leq \frac{\varepsilon}{4\log(|\Sigma|)}|R|$ positions in $R_1$ are potentially free. Therefore we obtain the bound \[ |L_{R}(Y,\tau)| \leq |\Sigma|^{|R_2|} |\Sigma|^{\frac{\varepsilon}{4\log(|\Sigma|)}|R_1|} \leq |\Sigma|^{\frac{\varepsilon}{2\log(|\Sigma|)}|R|}. \] Note that this does not depend upon the choice of $\tau$. We can thus obtain \[ |L_{R}(Y)| \leq |L_{R}(X^{\mathcal{L}}_{\mathcal{T}})||\Sigma|^{\frac{\varepsilon}{2\log(|\Sigma|)}|R|} \leq \exp(|R|\frac{\varepsilon}{2})|\Sigma|^{\frac{\varepsilon}{2\log(|\Sigma|)}|R|}. \] Therefore \[ h_{\text{top}}(G \curvearrowright Y) \leq \frac{1}{|R|}\log(|L_{R}(Y)|) \leq \frac{1}{|R|} \left( |R|\frac{\varepsilon}{2} + |R|\frac{\varepsilon \log(|\Sigma|)}{2 \log(|\Sigma|)} \right) \leq \varepsilon. \] This completes the proof. \end{proof} Before applying~\Cref{theorem_tilings_forthewin} to reduce the entropy of a chart, let us mention a nice application which shows that for any countable amenable group, every subshift of finite type must necessarily contain a subsystem with zero topological entropy.
This extends the result of Quas and Trow~\cite[Corollary 2.3]{QuasTrow2000}, which shows that minimal $\ZZ^d$-SFTs have zero topological entropy and whose argument works for amenable orderable groups. Let us also remark that the work of Frisch and Tamuz~\cite{FrischTamuz2015} also gives a way to obtain Quas and Trow's result for arbitrary countable amenable groups, and that the author is aware of an unpublished direct proof by Ville Salo which works for any amenable and finitely generated group and relies on a combinatorial argument. \begin{corollary} Let $G$ be a countably infinite amenable group. Any $G$-SFT $X$ contains a $G$-invariant closed subset with zero topological entropy. In particular, every minimal $G$-SFT has zero topological entropy. \end{corollary} \begin{proof} Let $\varepsilon_n = \frac{1}{n}$ and let $Y_0 = X$. By~\Cref{theorem_tilings_forthewin} there exists a $G$-SFT $Y_1$ such that $h_{\text{top}}(G \curvearrowright Y_1) \leq \varepsilon_1$ and $Y_1 \subset Y_0$. Iterating this procedure we can obtain for every $n \in \NN$ a $G$-SFT $Y_n$ such that $h_{\text{top}}(G \curvearrowright Y_n) \leq \varepsilon_n = \frac{1}{n}$ and $Y_n \subset Y_{n-1}$. As each $Y_n$ is closed and the sequence is nested, by compactness the intersection $Z = \bigcap_{n \geq 0 }Y_n$ is non-empty. Clearly $Z$ is $G$-invariant as each $Y_n$ is $G$-invariant. Furthermore, $h_{\text{top}}(G\curvearrowright Z)\leq h_{\text{top}}(G \curvearrowright Y_n)$ for every $n \in \NN$, therefore $h_{\text{top}}(G\curvearrowright Z) =0$. \end{proof} Let us also remark that this result is in direct contrast with the existence of minimal Toeplitz subshifts of arbitrary positive topological entropy on residually finite groups, see~\cite{Cortez2008, Krieger2007_toeptliz, Marthaentropytoeplitz2016}. To the knowledge of the author, the following question is open even in $\ZZ^2$. \begin{question} Does there exist an amenable group $G$ and a $G$-SFT which does not contain a zero-entropy $G$-SFT?
\end{question} Let us go back to reducing the entropy of a chart. \begin{corollary}\label{corollary_reducing_chart_entropy} Suppose there exists a free $G$-chart $(X,\gamma)$ of $H$ such that $X$ is a $G$-SFT. Then for every $\varepsilon >0$ there exists a free $G$-chart $(Y,\gamma')$ of $H$ such that $Y$ is a $G$-SFT and $h_{\text{top}}(G \curvearrowright Y) \leq \varepsilon$. \end{corollary} \begin{proof} Apply~\Cref{theorem_tilings_forthewin} to $X$ and $\varepsilon>0$ to obtain a $G$-SFT $Y$ such that $h_{\text{top}}(G \curvearrowright Y) \leq \varepsilon$ and $Y \subset X$. Let $\gamma' \colon H \times Y \to G$ be the restriction of $\gamma$ to $Y$. Clearly $\gamma'$ is continuous and an $H$-cocycle. \end{proof} \subsection{Conditions for the existence of free charts}\label{section_conditions_charts} In this section we shall present conditions under which there exist free charts and conditions under which they can be realized with a subshift of finite type. An obvious condition which implies the existence of a free $G$-chart of $H$ is that $H$ embeds into $G$ as a subgroup, see~\Cref{example_obviouschart}. Note that in that case the chart automatically has entropy zero and we obtain the rather obvious corollary that $\mathcal{E}_{\text{SFT}}(H)\subset \mathcal{E}_{\text{SFT}}(G)$. The notion that $H$ embeds into $G$ can be relaxed using the notion of translation-like action introduced by Whyte~\cite{Whyte1999}. We shall see that whenever the groups are finitely generated, this notion is closely related to the existence of free charts. \begin{definition} Let $(X,d)$ be a metric space and $H$ a group. We say that $H \curvearrowright X$ is a \define{translation-like} action if \begin{itemize} \item $H \curvearrowright X$ is free, that is, for every $x \in X$, $hx = x$ implies that $h = 1_H$. \item $H \curvearrowright X$ is bounded, that is, for every $h \in H$, $\sup_{x \in X} d(x,hx) < \infty$.
\end{itemize} \end{definition} Any finitely generated group $G$ can be seen as a metric space by endowing it with a metric induced by a finite set of generators. In that case, the second condition can be replaced by the condition that for every fixed $h \in H$ the set of all $(h \cdot g)g^{-1}$ with $g \in G$ is finite. \begin{proposition} Let $H,G$ be finitely generated groups. $H$ acts translation-like on $G$ if and only if there exists a free $G$-chart $(X,\gamma)$ of $H$. \end{proposition} \begin{proof} Fix a finite set $S$ of generators of $H$. Suppose there exists a translation-like action $H \curvearrowright G$. As the action is bounded and $S$ is finite, the set $F =\{ f \in G \mid (s\cdot g) = fg \mbox{ for some } s\in S,\, g \in G \}$ is finite. Consider the alphabet $\Sigma = F^S$ and the configuration $x \colon G \to \Sigma$ such that $(x(g))(s) = f \in F$ if and only if $s\cdot g = fg$. Let $X = \overline{\bigcup_{g \in G}\{ gx \}}$ be the orbit closure of $x$. By definition $X$ is a $G$-subshift. For $y \in X$, let $\gamma(s,y) = (y(1_G))(s)$ and extend $\gamma$ to $H \times X$ through the cocycle equation. It is clear that $\gamma$ is continuous. By definition, we have that $s \cdot_x g = \gamma(s,gx)g = (x(g))(s)g = (s \cdot g) g^{-1}g = s \cdot g$. In other words, the action $H \overset{x}{\curvearrowright} G$ coincides with $H \curvearrowright G$ and hence it is free. It follows from compactness that the same holds for any $y \in X$ and thus $(X,\gamma)$ is a free $G$-chart of $H$. Conversely, suppose there exists a free $G$-chart $(X,\gamma)$ of $H$ and let $x \in X$. By definition, the action $H \overset{x}{\curvearrowright} G$ is free. Let $h \in H$. The restriction of $\gamma$ to $\{h\} \times X$ takes finitely many values and depends only on finitely many coordinates of $x \in X$. It follows that $(h \cdot_x g)g^{-1} = \gamma(h,gx)gg^{-1} = \gamma(h,gx)$ takes only finitely many values and hence $H \overset{x}{\curvearrowright} G$ is bounded.
\end{proof} In other words, the least we can require if we want a free $G$-chart of $H$ is the existence of a translation-like action of $H$ on $G$. In what follows we shall give further conditions under which one can always find a free $G$-chart of $H$ given by a $G$-SFT. The following proof is essentially contained in the work of Jeandel~\cite[Section 2]{Jeandel2015}. \begin{proposition}\label{proposition_all_we_need_for_charts} Let $H,G$ be finitely generated groups such that: \begin{enumerate} \item $H$ admits a translation-like action on $G$. \item $H$ is finitely presented. \item There exists a non-empty $H$-SFT for which the $H$-action is free. \end{enumerate} Then there exists a free $G$-chart $(X,\gamma)$ of $H$ such that $X$ is a non-empty $G$-SFT. \end{proposition} \begin{proof} The first part of the proof is the same as in the previous proposition. Let $H\curvearrowright G$ be the translation-like action and suppose $\langle S \mid R \subset S^*\rangle$ is a finite presentation of $H$ where $S = S^{-1}$. By definition, the set $F =\{ f \in G \mid (s\cdot g) = fg \mbox{ for some } s\in S,\, g \in G \}$ is finite. Consider the alphabet $\Sigma = F^S$ of all functions from $S$ to $F$ and let $\gamma\colon S^* \times \Sigma^G \to G$ be the map given by $\gamma(s,x)= (x(1_G))(s)$ for $s \in S$ and extended to the free monoid $S^*$ by the condition\[ \gamma(s_1s_2,x) = \gamma(s_1,\gamma(s_2,x)x)\cdot \gamma(s_2,x) \mbox{ for every $s_1,s_2$ in $S^*$}.\] Let us first consider the subshift $Y \subset \Sigma^G$ of all configurations $y$ such that for every $s \in S$ and $g \in G$, if $(y(g))(s) = f$ then $(y(fg))(s^{-1})= f^{-1}$. This is clearly a subshift of finite type. Let us note that for $y \in Y$, $g \in G$ and $s \in S$ we have,\begin{align*}\gamma(s^{-1}s,gy) & = \gamma(s^{-1},\gamma(s,gy)gy)\cdot \gamma(s,gy)\\ & = \gamma(s^{-1},[(y(g))(s)]gy) \cdot (y(g))(s)\\ & = \big(y\big([(y(g))(s)]g\big)\big)(s^{-1}) \cdot (y(g))(s) = 1_G. \end{align*} The same holds for $\gamma(ss^{-1},gy)$.
By a similar argument, it can be shown that if $w \in S^*$ is a word that can be freely reduced to the identity, then $\gamma(w,gy)=1_G$ for every $g \in G$. In other words, $(Y,\gamma)$ codes the free group on the generating set $S$. Let us define $X \subset Y$ as the set of all configurations $x \in Y$ such that whenever $s_1s_2\dots s_{n-1}s_n \in R$, then for every $g \in G$, if we define $f_1 = (x(g))(s_n)$, $f_2 = (x(f_1g))(s_{n-1})$ and for every $k \leq n$, \[f_k = (x(f_{k-1}\dots f_1g))(s_{n+1-k}),\] then we have $f_nf_{n-1}\dots f_1 = 1_G$. As $R$ is finite, these conditions can be imposed by forbidding patterns with support bounded by $F^{n}$. In other words, $X$ is also a $G$-subshift of finite type. Again, by the previous calculation, we obtain that for every $w \in R$ and $g \in G$ we have $\gamma(w,gx) = 1_G$. Moreover, as every word in $S^*$ which represents $1_H$ in $H$ can be obtained by freely conjugating and concatenating words in $R$, we have that any word $w \in S^*$ which represents the identity satisfies $\gamma(w,gx) = 1_G$. In other words, $(X,\gamma)$ codes a $G$-chart of $H$. It is not true that $(X,\gamma)$ is free. In fact, the configuration $x$ such that $(x(g))(s) = 1_G$ for every $g \in G$ and $s \in S$ belongs to $X$. However, the configuration $\bar{x}\in X$ defined using the free action $H \curvearrowright G$ by $(\bar{x}(g))(s) = (s \cdot g)g^{-1}$ satisfies that $H \overset{\bar{x}}{\curvearrowright} G = H \curvearrowright G$. By hypothesis, there exists a non-empty $H$-subshift $Z$ on which $H$ acts freely. Let us consider $Z_{\gamma}[X]$. By~\Cref{remark_chart_having_one_free_action_is_good} we have that $Z_{\gamma}[X]$ is non-empty. Let $\widehat{\gamma} \colon H \times Z_{\gamma}[X] \to G$ be the map defined by $\widehat{\gamma}(h,(z,x)) = \gamma(h,x)$. We claim that the $G$-chart $(Z_{\gamma}[X], \widehat{\gamma})$ of $H$ is free. Indeed, if it is not free, there are $(z,x) \in Z_{\gamma}[X]$, $g \in G$ and $h\neq 1_H$ such that $h \cdot_{(z,x)} g = g$.
Equivalently, $\gamma(h,gx) = 1_G$, that is, $h \cdot_x g = g$. Hence, we would have that \[h\pi_{x,g}(z) = \pi_{x, h \cdot_{x} g}(z) = \pi_{x,g}(z).\] As $\pi_{x,g}(z) \in Z$, this gives a configuration on which $H$ does not act freely, which contradicts the assumption on $Z$.\end{proof} Let us gather all our results in a single theorem for further reference. \begin{theorem}\label{theorem_HG} Let $G,H$ be finitely generated amenable groups. Suppose that \begin{enumerate} \item $H$ admits a translation-like action on $G$. \item $H$ is finitely presented. \item There exists a non-empty $H$-SFT for which the $H$-action is free. \end{enumerate} Then, for every $\varepsilon >0$ there exists a $G$-SFT $X$ such that $h_{\text{top}}(G\curvearrowright X) < \varepsilon$ and \[h_{\text{top}}(G\curvearrowright X)+ \mathcal{E}_{\text{SFT}}(H) \subset \mathcal{E}_{\text{SFT}}(G).\] \end{theorem} \begin{proof} By~\Cref{proposition_all_we_need_for_charts} there exists a free $G$-chart $(X,\gamma)$ of $H$ such that $X$ is a $G$-SFT. Furthermore, by~\Cref{corollary_reducing_chart_entropy} we can choose it so that $h_{\text{top}}(G\curvearrowright X) < \varepsilon$. Finally, we conclude by applying~\Cref{corollary_realize_entropy}. \end{proof} \section{Characterization of entropies: the case $H = \ZZ^2$}\label{section_characterization_Z2} The goal of this section is to exploit~\Cref{theorem_HG} in the case $H = \ZZ^2$. The interest in this particular case comes from the fact that we already have a full characterization of the entropies of $\ZZ^2$-SFTs by~\Cref{theorem_HochmanTom}. Furthermore, $\ZZ^2 \cong \langle a,b \mid aba^{-1}b^{-1} \rangle$ is finitely presented, and there exist non-empty $\ZZ^2$-SFTs for which the $\ZZ^2$-action is free, for instance the Robinson tiling~\cite{Robinson1971}.
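To fix ideas, let us spell out the shape of the $G$-SFT constructed in the proof of~\Cref{proposition_all_we_need_for_charts} when instantiated at this presentation.
\begin{example}
Take $S = \{a,a^{-1},b,b^{-1}\}$ and $R = \{aba^{-1}b^{-1}\}$, and let $\ZZ^2 \curvearrowright G$ be a translation-like action with associated finite set of jumps $F$. The subshift $X \subset (F^S)^G$ consists of the configurations $x$ which, besides the conditions coding inverses, satisfy the relator condition: for every $g \in G$, letting $f_1 = (x(g))(b^{-1})$, $f_2 = (x(f_1g))(a^{-1})$, $f_3 = (x(f_2f_1g))(b)$ and $f_4 = (x(f_3f_2f_1g))(a)$, we have $f_4f_3f_2f_1 = 1_G$. In other words, following the four jumps read along the relator always returns to the starting point. \hfill\ensuremath{\Diamond}\par
\end{example}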
There is a single obstacle that stops us from getting a characterization for all groups on which $\ZZ^2$ acts translation-like: even if we can choose the entropy of the chart to be arbitrarily low, there is no guarantee that said entropy will be an upper semi-computable number. In what follows we shall show that this is indeed the case if $G$ is a finitely generated group with decidable word problem. Given a set $S \subset G$, denote by $S^*$ the set of all finite words $s_1s_2\dots s_n$ with letters in $S$. Also, for any such word in $S^*$ denote by $\underline{s_1s_2\dots s_n}$ the unique element of $G$ represented by it. \begin{definition} Let $G$ be a finitely generated group and $S$ a finite set of generators. The \define{word problem} of $G$ is the set of all words over the alphabet $S$ which represent the identity of $G$. \[\texttt{WP}_S(G) =\{ w \in S^* \mid \underline{w} = 1_G \}. \] \end{definition} We say that $G$ has \define{decidable word problem} if the language $\texttt{WP}_S(G)$ is decidable for some finite set of generators $S$. It can be shown that this notion is independent of the chosen set of generators and thus, modulo many-one equivalence, one can speak about the \define{word problem} $\texttt{WP}(G)$ of $G$ without making reference to a specific set of generators. We shall also need to introduce the set of locally admissible patterns. \begin{definition} Let $\Sigma$ be a finite alphabet and $\mathcal{F}$ be a list of forbidden patterns which defines a subshift $X_{\mathcal{F}}$. For $F\Subset G$ we say that $q \in \Sigma^F$ is in the set of \define{locally admissible patterns} $L_{F}^{\texttt{loc}}(X_{\mathcal{F}})$ if no pattern from $\mathcal{F}$ appears in $q$, namely, $[q] \not\subset g[p]$ for every $g \in G$ and $p \in \mathcal{F}$. \end{definition} \begin{lemma}\label{lemma_aproximalito} Let $G$ be a countable group and $X_{\mathcal{F}} \subset \Sigma^G$ be a subshift defined by a set of forbidden patterns $\mathcal{F}$.
For any $F\Subset G$ there exists $K \Subset G$ such that $K \supset F$ and $p \in L_{F}(X)$ if and only if there exists $q \in L^{\texttt{loc}}_{K}(X)$ such that $q|_F = p$. \end{lemma} \begin{proof} If $G$ is finite the result is obvious. Otherwise we may fix an enumeration $\{g_n\}_{n \in \NN}$ of $G$, let $F^n = F \cup \bigcup_{k \leq n}\{g_k\}$ and consider $p \in \Sigma^F \setminus L_{F}(X)$. We claim there must exist an integer $n(p)$ such that $q|_F \neq p$ for every $q \in L^{\texttt{loc}}_{F^{n(p)}}(X)$. If this were not the case, we could choose for every $n$ a pattern $q^n \in L^{\texttt{loc}}_{F^{n}}(X)$ such that $q^n|_F = p$ and, after passing to a subsequence (using that $\Sigma^{F^m}$ is finite for each $m$), assume that the cylinders $[q^n] \subset [p]$ are nested. As these cylinders are closed, non-empty and nested, by compactness the intersection $Y = \bigcap_{n \in \NN} [q^n]$ is non-empty and $Y \subset [p]$, and any configuration $y \in Y$ satisfies that no forbidden patterns appear, hence $y \in X \cap [p]$ and thus $p \in L_F(X)$. As $\Sigma^F$ is finite we may define $N \isdef \max_{p \in \Sigma^F \setminus L_{F}(X)}n(p)$ and $K \isdef F^{N}$. By definition of $N$, we have that if $p \in \Sigma^F \setminus L_{F}(X)$ then $q|_F \neq p$ for every $q \in L^{\texttt{loc}}_{K}(X)$. Conversely, if $p \in L_F(X)$ there exists $x \in X$ such that $x|_F = p$. Defining $q \isdef x|_{K}$ we have $q|_{F}=p$ and $q \in L_{K}(X)\subset L^{\texttt{loc}}_{K}(X)$.\end{proof} In what follows we shall need to briefly introduce the notion of pattern codings and effectively closed subshifts in finitely generated groups. An introduction to this topic can be found in~\cite{ABS2017}. \begin{definition} Let $G$ be a finitely generated group, $S$ a finite set of generators and $\Sigma$ an alphabet. A function $c\colon W \to \Sigma$ from a finite subset $W$ of $S^*$ is called a \define{pattern coding}. The cylinder defined by a pattern coding $c$ is given by \[ [c] = \bigcap_{w \in W} \underline{w}[c(w)]. \] \end{definition} In other words, a pattern coding is a coloring of a finite subset of the free monoid $S^*$.
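As a toy illustration (our own, taking $G = \ZZ^2$ as an assumed example group), a pattern coding is just a finite dictionary from words to symbols, and its cylinder in the full shift is non-empty exactly when any two words representing the same group element receive the same symbol:

```python
# Pattern codings over Z^2 with generators a=(1,0), b=(0,1) and the
# formal inverses written as capital letters A, B (an illustrative
# convention, not notation from the paper).

STEP = {"a": (1, 0), "A": (-1, 0), "b": (0, 1), "B": (0, -1)}

def evaluate(word):
    """Element of Z^2 represented by a word over {a, A, b, B}."""
    x = y = 0
    for s in word:
        dx, dy = STEP[s]
        x, y = x + dx, y + dy
    return (x, y)

def cylinder_nonempty(coding):
    """A coding c: W -> Sigma defines a non-empty cylinder in the full
    shift iff it never assigns two different symbols to words that
    represent the same group element."""
    seen = {}
    for word, symbol in coding.items():
        g = evaluate(word)
        if seen.setdefault(g, symbol) != symbol:
            return False
    return True
```

For instance, a coding assigning the same symbol to `"ab"` and `"ba"` is consistent, while one distinguishing them defines the empty cylinder, since both words represent $(1,1)$.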
A set $\mathcal{C}$ of pattern codings defines a $G$-subshift $X_{\mathcal{C}}$ by setting \[X_{\mathcal{C}} = \Sigma^G \setminus \bigcup_{g \in G, c \in \mathcal{C}} g[c]. \] We say that a $G$-subshift $X$ is \define{effectively closed} if there exists a recursively enumerable set of pattern codings $\mathcal{C}$ such that $X = X_{\mathcal{C}}$. Obviously, every $G$-SFT is effectively closed. We shall need the following result. \begin{lemma}[Lemma 2.3 of~\cite{ABS2017}]\label{lemma_ABS} Let $G$ be a finitely generated and recursively presented group. For every effectively closed subshift $X \subset \Sigma^G$ the maximal (for inclusion) set of forbidden pattern codings that defines $X$ is recursively enumerable. \end{lemma} \begin{proposition}\label{proposition_ECSubshift_has_USC_entropy} Let $G$ be a finitely generated amenable group with decidable word problem. For every effectively closed subshift $X \subset \Sigma^G$ the topological entropy $h_{\text{top}}(G \curvearrowright X)$ is upper semi-computable. \end{proposition} \begin{proof} Let us fix a symmetric set $S$ of generators for $G$. We shall first define three algorithms $T_{\texttt{WP}},T_{\texttt{pat}},T_{\texttt{color}}$ which will be used in the proof. First, as $G$ has decidable word problem there is an algorithm $T_{\texttt{WP}}$ which on input $w \in S^*$ halts and accepts if and only if $\underline{w}=1_G$. Second, as $X$ is effectively closed, by~\Cref{lemma_ABS} there exists a maximal recursively enumerable set of pattern codings $\mathcal{C}^*$ such that $X = X_{\mathcal{C}^*}$. We define $T_{\texttt{pat}}$ as the algorithm which on input $n \in \NN$ yields the list of the first $n$ pattern codings $[c_1,c_2,\dots,c_n]$ of $\mathcal{C}^*$. Finally, let us denote by $\equiv_{n}$ the equivalence relation on $\bigcup_{k \leq n}S^k$ defined by $u \equiv_{n} v$ if and only if $T_{\texttt{WP}}$ accepts $uv^{-1}$. Let $B_n \isdef \bigcup_{k \leq n}S^k / \equiv_{n}$.
We define $T_{\texttt{color}}$ as the algorithm which on input $n \in \NN$ computes the set of all functions $x\colon B_n \to \Sigma$ such that for every pattern coding $c_i \colon W_i \to \Sigma$ listed by $T_{\texttt{pat}}$ on input $n$ we have that either $W_i \setminus B_n \neq \emptyset$ or $x(w) \neq c_i(w)$ for at least one $w \in W_i$ (where $x$ is evaluated on the class of $w$ in $B_n$). In simpler words, $T_{\texttt{color}}$ enumerates all patterns over a representation of the ball of size $n$ of the Cayley graph of $G$ where the first $n$ forbidden pattern codings do not appear at the identity. Now we construct an algorithm $T_{\texttt{ent}}$ which on input $n$ outputs a rational number $h_n$ as follows. First, apply the algorithm $T_{\texttt{color}}$ on input $n$ to produce a set $\{x_1,\dots, x_{M(n)}\}$ of colorings as above. For each $A\subset B_n$ we define $L^A_n$ as the set of restrictions $\{x_1|_A,\dots, x_{M(n)}|_A\}$ to $A$. Let us define $h^A_n$ as the smallest rational number of the form $\frac{k}{2^n}$ such that \[\frac{1}{|A|}\log(|L^A_n|) < \frac{k}{2^n}.\] Finally, let us define $h_n \isdef \min_{A \subset B_n}\{h^A_n\}$. From the above definitions, it is clear that each $h_n$ can be computed in a finite number of steps with $T_{\texttt{ent}}$. We claim that the sequence $\{h_n\}_{n \in \NN}$ is non-increasing and that $\inf_{n \in \NN}h_n = h_{\text{top}}(G \curvearrowright X)$. Indeed, let $m > n$. Clearly for $A \subset B_n$ we have $L^A_m \subset L^A_n$, hence $|L^A_m| \leq |L^A_n|$ and thus we obtain \[h_{m} = \min_{A \subset B_{m}}\{h^A_{m}\} \leq \min_{A \subset B_n}\{h^A_{m}\} \leq \min_{A \subset B_n}\{h^A_{n}\} = h_n. \] Hence the sequence $\{h_n\}_{n \in \NN}$ is non-increasing.
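To build some intuition for $T_{\texttt{ent}}$, the following sketch (our own illustration, not part of the proof) runs the analogous upper-approximation scheme in the simplest case $G = \ZZ$ for the golden mean shift: counting locally admissible words of length $n$ yields computable upper bounds that decrease to the entropy $\log\frac{1+\sqrt{5}}{2} \approx 0.4812$.

```python
from math import log

# Upper approximation of the entropy of the golden mean shift (binary
# sequences with no factor "11"), a Z-SFT. Counting locally admissible
# patterns on larger and larger balls gives a non-increasing sequence
# of computable upper bounds converging to the entropy from above.

def locally_admissible(n):
    """Count words of length n over {0,1} containing no factor '11'."""
    count = 0
    for w in range(2 ** n):
        if "11" not in format(w, f"0{n}b"):
            count += 1
    return count

# The analogue of the sequence h_n: (1/n) log |L_n^loc|.
bounds = [log(locally_admissible(n)) / n for n in range(1, 16)]
```

Here the counts are Fibonacci numbers, and the bounds decrease towards $\log$ of the golden ratio; the paper's algorithm plays the same game on balls of the Cayley graph of $G$, using $T_{\texttt{WP}}$ to identify words representing the same group element.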
It is clear from the definition that for every $n \in \NN$ such that $B_n \supset A$ we have $L^A_n \supset L_A(X)$, hence $h^A_{n} > \frac{1}{|A|}\log(|L^A_n|) \geq \frac{1}{|A|}\log(|L_A(X)|)$ and thus by~\Cref{eq_entropyforidiots}, \[h_n > \inf_{A \subset B_n}\frac{1}{|A|}\log(|L_A(X)|) \geq h_{\text{top}}(G \curvearrowright X).\] Similarly, by~\Cref{eq_entropyforidiots} for every $\varepsilon >0$ there exists a fixed finite $F \subset G$ such that $\frac{1}{|F|}\log(|L_F(X)|)-h_{\text{top}}(G \curvearrowright X) \leq \varepsilon$. By~\Cref{lemma_aproximalito} there exists $K$ such that $p \in L_{F}(X)$ if and only if there exists $q \in L^{\texttt{loc}}_{K}(X)$ such that $q|_F = p$. Choose $N_1$ such that $B_{N_1} \supset K$ and $N_2$ so that all pattern codings of $\mathcal{C}^*$ whose support is contained in $K$ appear in the list given by $T_{\texttt{pat}}$ on input $N_2$. Let $N \geq \max(N_1,N_2)$. By definition we have that $L^{K}_{N}= L^{\texttt{loc}}_{K}(X)$ and thus $L^{F}_{N} = L_F(X)$, hence we have that \[h_N \leq h^F_N \leq \frac{1}{|F|}\log(|L_F(X)|)+\frac{1}{2^N} \leq h_{\text{top}}(G \curvearrowright X) + \varepsilon+\frac{1}{2^N}. \] The last inequality shows that $\{h_n\}_{n \in \NN}$ converges to $h_{\text{top}}(G \curvearrowright X)$.\end{proof} From this, we can obtain the following characterization. \begin{theorem}\label{theorem_caract_entropies_G_z2_translation_like} Let $G$ be a finitely generated amenable group with decidable word problem which admits a translation-like action by $\ZZ^2$. The set of entropies attainable by $G$-subshifts of finite type is the set of non-negative upper semi-computable numbers. \end{theorem} \begin{proof} By hypothesis there exists a translation-like action of $\ZZ^2$ on $G$.
Therefore $\ZZ^2,G$ satisfy the hypotheses of~\Cref{theorem_HG}, which means that for every $\varepsilon>0$ there exists a $G$-SFT $X$ such that $h_{\text{top}}(G\curvearrowright X) < \varepsilon$ and \[h_{\text{top}}(G\curvearrowright X)+ \mathcal{E}_{\text{SFT}}(\ZZ^2) \subset \mathcal{E}_{\text{SFT}}(G).\] Recall that by~\Cref{theorem_HochmanTom} $\mathcal{E}_{\text{SFT}}(\ZZ^2)$ is precisely the set of non-negative upper semi-computable real numbers. As $G$ has decidable word problem,~\Cref{proposition_ECSubshift_has_USC_entropy} implies that $\mathcal{E}_{\text{SFT}}(G) \subset \mathcal{E}_{\text{SFT}}(\ZZ^2)$. Noting that $0 \in \mathcal{E}_{\text{SFT}}(G)$ and that the set of upper semi-computable numbers is stable under addition, if we let $\varepsilon$ go to zero we obtain \[\mathcal{E}_{\text{SFT}}(G) = \mathcal{E}_{\text{SFT}}(\ZZ^2),\] which is what we wanted to show.\end{proof} \section{Consequences}\label{section_consequences} In the remainder of this section we shall make use of the following simple construction. \begin{definition} Let $H \leq G$ be a subgroup, let $\{0\}$ be the trivial $G$-subshift with one point and let the $H$-cocycle $\gamma\colon H \times \{0\} \to G$ defined by $\gamma(h,0) = h$ give the canonical free $G$-chart of $H$. For an $H$-subshift $X$ denote by $X^{\uparrow G}$ the \define{free $G$-extension of $X$} defined by $X_{\gamma}[\{0\}]$. \end{definition} \begin{proposition}\label{proposition_same_entropy_free_subshift} Let $G$ be a countable amenable group, $H \leq G$ and $X$ be an $H$-subshift.
Then \[h_{\text{top}}(G \curvearrowright X^{\uparrow G}) = h_{\text{top}}(H \curvearrowright X).\] \end{proposition} \begin{proof} By~\Cref{theorem_addition_formula} we have \[h_{\text{top}}(G \curvearrowright X^{\uparrow G}) = h_{\text{top}}(H \curvearrowright X) + h_{\text{top}}(G \curvearrowright \{0\}) = h_{\text{top}}(H \curvearrowright X),\] which is what we wanted to show.\end{proof} We shall also need the following result which relates the entropies of subshifts of finite type in a group to those of a finite index subgroup. \begin{lemma}\label{proposition_virt_Z_has_perronentropies} Let $G$ be a countable amenable group and let $H \leq G$ be a finite index subgroup. Assume that $\mathcal{E}_{\text{SFT}}(H)$ is closed under division by positive integers. Then $\mathcal{E}_{\text{SFT}}(G) = \mathcal{E}_{\text{SFT}}(H)$. \end{lemma} \begin{proof} For any $H$-SFT $X$ we can consider the $G$-SFT $X^{\uparrow G}$. By~\Cref{proposition_same_entropy_free_subshift} we get $\mathcal{E}_{\text{SFT}}(H)\subset \mathcal{E}_{\text{SFT}}(G)$. For the converse, let $Y \subset \Sigma^G$ be a $G$-SFT and consider $H \curvearrowright Y$ the restriction of the $G$-action on $Y$ to $H$. It is a well-known property of topological entropy that $\frac{1}{[G:H]}h_{\text{top}}(H \curvearrowright Y) = h_{\text{top}}(G \curvearrowright Y)$. It suffices to show that $H \curvearrowright Y$ is conjugate to an $H$-SFT. Indeed, as $\mathcal{E}_{\text{SFT}}(H)$ is closed under division by positive integers the above formula yields the result. Choose a set $R$ of representatives of the left cosets $G/H$ and define the $R$-higher power shift $X^{[R]}$ by \[X^{[R]} = \{ x \in (\Sigma^R)^H \mid \exists y \in Y, \mbox{ for every } r \in R, h \in H, (x(h))(r) = y(rh) \}. \] As $R$ is finite, it is clear that $X^{[R]}$ is closed and $H$-invariant and hence that it is an $H$-subshift. The function $\phi\colon X^{[R]} \to Y$ defined by $\phi(x)(rh) = (x(h))(r)$ is clearly a continuous bijection.
It is also $H$-equivariant: \[ (h'\phi(x))(rh) = \phi(x)(rhh') = (x(hh'))(r) = ((h'x)(h))(r) = \phi(h'x)(rh). \] Therefore $H \curvearrowright X^{[R]}$ is conjugate to $H \curvearrowright Y$. The construction of the forbidden patterns that show that $X^{[R]}$ is an $H$-SFT whenever $Y$ is a $G$-SFT is a simple exercise. The reader may find it in either~\cite[Definition 3.1]{CarrollPenland} or in~\cite[Proposition 9.3.33]{AubBarJea2018}. \end{proof} \begin{question} Is there any infinite and finitely generated amenable group $G$ for which $\mathcal{E}_{\text{SFT}}(G)$ is not closed under division by positive integers? \end{question} \subsection{Polycyclic-by-finite groups} The goal of this section is to give a full characterization of the set of real numbers attainable as entropies of subshifts of finite type on a polycyclic-by-finite group. In what follows we shall introduce polycyclic groups and state a few of their properties. Good references are~\cite{Seg2005} and~\cite{DruKap2018book}. A group $G$ is called \define{polycyclic} if there exists a finite sequence of subgroups \[G = N_1 \triangleright N_2 \triangleright \dots \triangleright N_{n} \triangleright N_{n+1} = \{1_G\}\] such that every quotient $N_{i}/N_{i+1}$ is cyclic. The number of $i$ such that $N_{i}/N_{i+1}$ is infinite does not depend on the choice of sequence and is thus a group invariant, called the \define{Hirsch index} of $G$ and denoted by $h(G)$. If we replace the condition that each $N_{i}/N_{i+1}$ is cyclic by the condition that each $N_{i}/N_{i+1}$ is the infinite cyclic group, we obtain the class of \define{poly-$C_{\infty}$} groups. There are polycyclic groups which are not poly-$C_{\infty}$, for instance any finite cyclic group. However, they are very close in the following sense; a proof can be found in either of the two references mentioned above. \begin{proposition} The following are equivalent: \begin{enumerate} \item $G$ is virtually polycyclic. \item $G$ is polycyclic-by-finite.
\item $G$ is poly-$C_{\infty}$-by-finite. \end{enumerate} \end{proposition} In particular, as every short exact sequence $1 \to N \to G \to \ZZ \to 1$ splits, the last proposition means that any virtually polycyclic group can be written as a series $G = N_0 \triangleright N_1 \triangleright \dots \triangleright N_{n} \triangleright N_{n+1} = \{1_G\}$ such that for $i \geq 1$ we have $N_{i} = N_{i+1} \rtimes \ZZ$ and $G$ is virtually $N_1$. Moreover, if this is the case then $h(G)=n$. \begin{theorem}\label{theorem_polycyclic} Let $G$ be a virtually polycyclic group. Then \begin{enumerate} \item If $h(G)=0$ then $\mathcal{E}_{\text{SFT}}(G) = \{ \frac{1}{|G|}\log(n) \mid n \in \ZZ_{+}\}$. \item If $h(G)=1$ then $\mathcal{E}_{\text{SFT}}(G) = \mathcal{E}_{\text{SFT}}(\ZZ)$, the set of non-negative rational multiples of logarithms of Perron numbers. \item If $h(G)\geq 2$ then $\mathcal{E}_{\text{SFT}}(G) = \mathcal{E}_{\text{SFT}}(\ZZ^2)$, the set of non-negative upper semi-computable numbers. \end{enumerate} \end{theorem} \begin{proof} As $G$ is poly-$C_{\infty}$-by-finite, we have that $G = N_0 \triangleright N_1 \triangleright \dots \triangleright N_{h(G)} \triangleright N_{h(G)+1} = \{1_G\}$ where every quotient except the first one is an infinite cyclic group. If $h(G) = 0$, then $G = N_0 \triangleright N_1 = \{1_G\}$ is necessarily a finite group $F$. As every F\o lner sequence in a finite group is eventually the whole group, we have that for any subshift $X \subset \Sigma^F$, \[h_{\text{top}}(F \curvearrowright X) = \frac{1}{|F|}\log(|L_F(X)|).\] In particular, the entropy of every subshift is of the claimed form. To show that every such number occurs, consider the SFT $X^n_{\textrm{unif}} \subset \{1,2,\dots,n\}^{F}$ consisting of the uniform configurations $x_i$ such that $x_i(f) = i$ for every $f \in F$. Clearly $h_{\text{top}}(F \curvearrowright X^n_{\textrm{unif}}) = \frac{1}{|F|}\log(n)$. This proves the first claim.
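The finite case lends itself to a direct sanity check (our own illustration, not from the paper): for a finite group $F$ one can enumerate all colorings, filter out the forbidden ones, and recover $h_{\text{top}}(F \curvearrowright X) = \frac{1}{|F|}\log(|L_F(X)|)$, which is indeed $\frac{1}{|F|}$ times the logarithm of a positive integer.

```python
from itertools import product
from math import log, isclose

# Brute-force entropy over the finite group F = Z/nZ with alphabet
# {0,1}, forbidding two adjacent 1s (cells f and f+1 mod n). The
# forbidden-pattern choice is purely illustrative.

def entropy_finite(F_size):
    admissible = [x for x in product((0, 1), repeat=F_size)
                  if all(not (x[i] and x[(i + 1) % F_size])
                         for i in range(F_size))]
    return len(admissible), log(len(admissible)) / F_size
```

For $F = \ZZ/3\ZZ$ the admissible colorings are exactly $000, 100, 010, 001$, so the entropy is $\frac{1}{3}\log 4$.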
If $h(G)=1$ then $G = N_0 \triangleright N_1 \triangleright N_{2} = \{1_G\}$. As $N_1 \cong \{1_G\}\rtimes \ZZ$, then $N_1 \cong \ZZ$. This means that $G$ is virtually $\ZZ$. By~\Cref{proposition_virt_Z_has_perronentropies} the claim holds for this case as well. Let $h(G)\geq 2$. We will show that $\ZZ^2$ embeds into $G$. Indeed, we have that $N_{h(G)}\cong \ZZ$ and that $N_{h(G)-1} \cong N_{h(G)} \rtimes \ZZ$ is a subgroup of $G$. Hence, we have that $N_{h(G)-1} \cong \ZZ \rtimes_{\varphi} \ZZ$ for some homomorphism $\varphi\colon \ZZ \to \textrm{Aut}(\ZZ)$. There are two cases: either $\varphi(1) = \textrm{id}$ or $\varphi(1)$ is multiplication by $-1$. The first case yields $N_{h(G)-1} \cong \ZZ^2$ and hence $\ZZ^2$ embeds into $G$. In the second case note that $\varphi(2)=\textrm{id}$ and thus $\ZZ \rtimes_{\varphi} 2\ZZ$ is isomorphic to $\ZZ^2$. Hence $N_{h(G)-1}$ contains a finite index copy of $\ZZ^2$ and thus, as $N_{h(G)-1}$ embeds into $G$, we obtain that $\ZZ^2$ embeds into $G$ as well. Therefore, whenever $h(G)\geq 2$ we have that $\ZZ^2$ embeds into $G$. In particular $\ZZ^2$ acts translation-like on $G$. As every polycyclic-by-finite group is finitely generated and has decidable word problem, we can apply~\Cref{theorem_caract_entropies_G_z2_translation_like} to obtain the desired conclusion. \end{proof} \begin{remark} In the previous proof we did not use the full power of~\Cref{theorem_caract_entropies_G_z2_translation_like}. We only applied it to the case where $\ZZ^2$ actually embeds into $G$. The next application will rely strongly on translation-like actions. \end{remark} \subsection{Products of infinite finitely generated groups} In this section we shall make use of the following theorem by Seward~\cite{Seward2014}. \begin{theorem}[Theorem 1.4 of~\cite{Seward2014}]\label{theorem_ofSeward} Every infinite and finitely generated group admits a translation-like action of $\ZZ$.
\end{theorem} \begin{corollary}\label{corollary_ofsewards} Let $G_1,G_2$ be infinite and finitely generated groups. Then $G_1 \times G_2$ admits a translation-like action of $\ZZ^2$. \end{corollary} \begin{proof} By~\Cref{theorem_ofSeward}, there exist translation-like actions $\ZZ \overset{\alpha_1}{\curvearrowright} G_1$ and $\ZZ \overset{\alpha_2}{\curvearrowright} G_2$. The $\ZZ^2$-action given by $(n_1,n_2) \cdot (g_1,g_2) \isdef (n_1\cdot_{\alpha_1}g_1,n_2\cdot_{\alpha_2}g_2)$ satisfies the requirements. \end{proof} \begin{corollary}\label{corollary_entropy_ofproducts} Let $G_1,G_2$ be two infinite, amenable and finitely generated groups with decidable word problem. The set of topological entropies of non-empty $G_1 \times G_2$-SFTs is exactly the set of non-negative upper semi-computable numbers. \end{corollary} \begin{proof} Clearly $G_1 \times G_2$ has decidable word problem. By the previous corollary it admits a translation-like action of $\ZZ^2$. The result follows from~\Cref{theorem_caract_entropies_G_z2_translation_like}. \end{proof} \subsection{Countably infinite amenable groups} Let us now consider the case of countably infinite amenable groups which are not necessarily finitely generated. In the remainder of this section we will need to speak about the word problem for arbitrary countable groups. We shall say that a group presentation $\langle \NN \mid R \subset \NN^* \rangle$ has decidable word problem if there exists an algorithm which on input $w \in \NN^*$ decides whether $\underline{w} = 1$ in the group defined by that presentation. We shall say that a countable group $G$ has \define{decidable word problem} if it admits a presentation with decidable word problem. Note that if $G$ has decidable word problem, then every finitely generated subgroup of $G$ also does, but the converse may not hold, see for instance~\cite[Example 5.4]{Barbieri2017Tesis}.
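For a concrete instance of decidability (ours, not from the paper), the word problem of $\ZZ^2$ over the symmetric generating set written as letters $a, A = a^{-1}, b, B = b^{-1}$ reduces to checking exponent sums:

```python
# Decide the word problem of Z^2 = <a, b | aba^{-1}b^{-1}>: since the
# group is abelian and free of rank 2, a word represents the identity
# iff the exponent sums of both generators vanish.

def wp_Z2(word):
    """True iff the word over {a, A, b, B} represents 1 in Z^2."""
    return (word.count("a") == word.count("A")
            and word.count("b") == word.count("B"))
```

For example, the relator $aba^{-1}b^{-1}$, written `"abAB"`, is accepted, while `"aab"` is rejected.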
\begin{proposition}\label{proposition_countablegrouphasUSCentropy} Let $G$ be a countably infinite amenable group with decidable word problem and let $X \subset \Sigma^G$ be a $G$-subshift of finite type. Then $h_{\text{top}}(G \curvearrowright X)$ is upper semi-computable. \end{proposition} \begin{proof} If $X$ is a $G$-subshift of finite type, there is a finite set of patterns $\mathcal{F}$ which defines it. Let $S = \bigcup_{p \in \mathcal{F}}\supp(p)$ be the union of the supports of patterns in $\mathcal{F}$ and let $H = \langle S \rangle \leq G$ be the finitely generated subgroup of $G$ generated by $S$. As $G$ is amenable and has decidable word problem, $H$ is also amenable and has decidable word problem. Let $Y$ be the $H$-subshift defined by $\mathcal{F}$. We clearly have that $X = Y^{\uparrow G}$ where $Y^{\uparrow G}$ is the free $G$-extension of $Y$. By~\Cref{proposition_ECSubshift_has_USC_entropy}, $h_{\text{top}}(H \curvearrowright Y)$ is upper semi-computable. Therefore by~\Cref{proposition_same_entropy_free_subshift} we have that $h_{\text{top}}(H \curvearrowright Y) = h_{\text{top}}(G \curvearrowright Y^{\uparrow G}) = h_{\text{top}}(G \curvearrowright X)$ and hence $h_{\text{top}}(G \curvearrowright X)$ is also upper semi-computable. \end{proof} \begin{corollary}~\label{corollary_caract_entropies_full} Let $G$ be an amenable countably infinite group with decidable word problem which admits a finitely generated subgroup on which $\ZZ^2$ acts translation-like. Then \[ \mathcal{E}_{\text{SFT}}(G) = \mathcal{E}_{\text{SFT}}(\ZZ^2). \] \end{corollary} \begin{proof} By~\Cref{proposition_countablegrouphasUSCentropy} we get $\mathcal{E}_{\text{SFT}}(G) \subset \mathcal{E}_{\text{SFT}}(\ZZ^2)$. Let $H$ be a finitely generated subgroup on which $\ZZ^2$ acts translation-like. As $H$ has decidable word problem and is amenable, by~\Cref{theorem_caract_entropies_G_z2_translation_like} we have $\mathcal{E}_{\text{SFT}}(H)= \mathcal{E}_{\text{SFT}}(\ZZ^2)$.
For any $r \in \mathcal{E}_{\text{SFT}}(H)$, there is an $H$-SFT $X$ such that $h_{\text{top}}(H \curvearrowright X)= r$. By~\Cref{proposition_same_entropy_free_subshift} we have $h_{\text{top}}(G \curvearrowright X^{\uparrow G}) = r$ and hence $\mathcal{E}_{\text{SFT}}(H) \subset \mathcal{E}_{\text{SFT}}(G)$. This gives $\mathcal{E}_{\text{SFT}}(G) = \mathcal{E}_{\text{SFT}}(\ZZ^2)$. \end{proof} \begin{corollary}~\label{corollary_caract_entropies_full2} Let $G_1,G_2$ be amenable, countably infinite and non-locally finite groups with decidable word problem. Then \[ \mathcal{E}_{\text{SFT}}(G_1 \times G_2) = \mathcal{E}_{\text{SFT}}(\ZZ^2). \] \end{corollary} \begin{proof} $G_1\times G_2$ is amenable, countably infinite and has decidable word problem. Furthermore, as neither group is locally finite, there are infinite and finitely generated subgroups $H_1 \leq G_1$ and $H_2 \leq G_2$. By~\Cref{corollary_ofsewards}, $H_1 \times H_2$ admits a translation-like action of $\ZZ^2$. The result follows from~\Cref{corollary_caract_entropies_full}. \end{proof} \begin{remark} The non-locally finite condition in~\Cref{corollary_caract_entropies_full2} is necessary. Indeed, if $G$ is a locally finite group and $X \subset \Sigma^G$ is a subshift of finite type, we can use the same technique as in~\Cref{proposition_countablegrouphasUSCentropy} to reduce its entropy to the entropy of a subshift on the finite subgroup generated by the supports of its forbidden patterns. But the entropy of any subshift in a finite group is necessarily a rational multiple of the logarithm of a positive integer. \end{remark} \subsection{Branch groups} Suppose that $G$ is a countable amenable group with decidable word problem which contains the product of two non-locally finite and countably infinite subgroups $G_1 \times G_2$ as a subgroup. Then~\Cref{corollary_caract_entropies_full2} and~\Cref{corollary_caract_entropies_full} imply that $\mathcal{E}_{\text{SFT}}(G) = \mathcal{E}_{\text{SFT}}(\ZZ^2)$.
There are many examples satisfying the previous hypothesis within the class of branch groups~\cite{BartholdiGrigorchuk2003Branchgroups}. There is more than one definition of branch group; we shall work with the following one: \begin{definition} A group $G$ is called a \define{branch group} if there exist two sequences of groups $(L_i)_{i \in \NN}$ and $(H_i)_{i \in \NN}$ and a sequence of positive integers $(k_i)_{i \in \NN}$ such that $k_0 = 1$, $G = L_0 = H_0$ and: \begin{enumerate} \item $\bigcap_{i \in \NN}{H_i} = \{1_G\}$. \item $H_i$ is normal in $G$ and has finite index. \item There are subgroups $L_i^{(1)},\dots, L_i^{(k_i)}$ of $G$ such that $H_i = L_i^{(1)}\times\dots\times L_i^{(k_i)}$ and each of the $L_i^{(j)}$ is isomorphic to $L_i$. \item Conjugation by elements of $G$ transitively permutes the factors in the above product decomposition. \item $k_{i}$ properly divides $k_{i+1}$ and each of the factors $L_i^{(j)}$ contains $k_{i+1}/k_i$ factors $L_{i+1}^{(j')}$. \end{enumerate} \end{definition} This allows us to state the following result. \begin{theorem}\label{theorem_branch_groups} Let $G$ be an infinite, finitely generated, amenable branch group with decidable word problem. Then $\mathcal{E}_{\text{SFT}}(G) = \mathcal{E}_{\text{SFT}}(\ZZ^2)$. \end{theorem} \begin{proof} By the fifth property above, $k_1 > 1$. Furthermore, as each $H_i$ has finite index in $G$, it is also infinite and finitely generated. As $k_1$ is finite, each $L_1^{(j)}$ is also infinite and finitely generated. Thus $H_1 = L_1^{(1)} \times \dots \times L_1^{(k_1)}$ is a subgroup of $G$ on which $\ZZ^2$ acts translation-like. The result follows from~\Cref{corollary_caract_entropies_full}. \end{proof} A canonical example which satisfies all of the above properties is the following. \begin{example} The set of topological entropies of non-empty SFTs in the Grigorchuk group~\cite{Grigorchukgrouporiginal1984} is exactly the set of non-negative upper semi-computable numbers.
\end{example} \section{Final remarks} The techniques presented in this work give tools to embed the entropies of SFTs defined on a group $G$ into groups in which $G$ embeds geometrically. As the only known non-trivial base cases are $\ZZ$ and $\ZZ^2$, we can only obtain characterizations which coincide either with $\mathcal{E}_{\text{SFT}}(\ZZ)$ or $\mathcal{E}_{\text{SFT}}(\ZZ^2)$. This raises the following question. \begin{question} Is there any infinite and finitely generated amenable group $G$ with decidable word problem for which $\mathcal{E}_{\text{SFT}}(G)$ is neither $\mathcal{E}_{\text{SFT}}(\ZZ)$ nor $\mathcal{E}_{\text{SFT}}(\ZZ^2)$? \end{question} Furthermore,~\Cref{theorem_caract_entropies_G_z2_translation_like} provides a full characterization of the entropies attainable by SFTs defined on polycyclic-by-finite groups, but it cannot be applied to every solvable group with decidable word problem. Two notable examples where it does not apply (at least not directly) are the Baumslag-Solitar groups $\texttt{BS}(1,n) = \langle a,b \mid bab^{-1} = a^n\rangle$ for $n \geq 2$, and the lamplighter group $\ZZ/2\ZZ \wr \ZZ$. \begin{question} For $n \geq 2$, does it hold that $\mathcal{E}_{\text{SFT}}(\texttt{BS}(1,n)) = \mathcal{E}_{\text{SFT}}(\ZZ^2)$? \end{question} \begin{question} Characterize $\mathcal{E}_{\text{SFT}}(\ZZ/2\ZZ \wr \ZZ)$. Does it coincide with either $\mathcal{E}_{\text{SFT}}(\ZZ)$ or $\mathcal{E}_{\text{SFT}}(\ZZ^2)$? \end{question} \begin{acknowledgements*} The author wishes to thank Tom Meyerovitch, Mathieu Sablik and Ville Salo for many fruitful discussions. The author is also grateful to an anonymous referee for their helpful remarks. This research was done while the author was a postdoctoral fellow at the University of British Columbia. It was partially supported by the ANR project CoCoGro (ANR-16-CE40-0005) and the ANR project CODYS (ANR-18-CE40-0007). \end{acknowledgements*} \printbibliography \end{document}
Q: Printing an additional line when using System.in.read()

public class DWDemo {
    public static void main(String args[]) throws java.io.IOException {
        char ch;
        do {
            System.out.print("Press a key followed by ENTER: ");
            ch = (char) System.in.read();
        } while (ch != 'S');
    }
}

I am trying to learn Java. It's a simple program, however the result I get is:

Press a key followed by ENTER: D
Press a key followed by ENTER: Press a key followed by ENTER: G
Press a key followed by ENTER: Press a key followed by ENTER: E
Press a key followed by ENTER: Press a key followed by ENTER: F
Press a key followed by ENTER: Press a key followed by ENTER: S

The program prints "Press a key followed by ENTER: " twice in IntelliJ, and in Eclipse it prints it three times. Please help!

A: Since you are reading characters from the console, every ENTER (the newline character '\n') is also taken as a character. So your (char) System.in.read() receives both the key you typed and the line terminator, and the loop runs once for every character it reads, including the ENTER.
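To see this concretely, you can replay the bytes a console would deliver and count how many times the loop body would run. The sketch below is a hypothetical demonstration (not from the original post); the likely reason for the IntelliJ/Eclipse difference is the line terminator: '\n' alone gives two reads per keypress, while '\r' followed by '\n' (as a Windows console sends) gives three.

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStream;

// Replay the bytes a console would send for one keypress followed by
// ENTER, and count how many characters a read() loop would consume.
public class ReadDemo {
    public static int countReads(String consoleInput) {
        InputStream in = new ByteArrayInputStream(consoleInput.getBytes());
        int n = 0;
        try {
            while (in.read() != -1) {
                n++; // the do-while in the question runs once per byte
            }
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
        return n;
    }
}
```

A common fix is to wrap System.in in a BufferedReader and call readLine(), which consumes the line terminator for you.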
sap.ui.define('sap/ui/test/TestUtils', ['jquery.sap.global', 'sap/ui/core/Core'],
    function(jQuery/*, Core*/) {
    "use strict";
    /*global QUnit, sinon */

    // Note: The dependency to Sinon.js has been omitted deliberately. Most test files load it via
    // <script> anyway and declaring the dependency would cause it to be loaded twice.

    var rBatch = /\/\$batch($|\?)/,
        mMessageForPath = {}, // a cache for files, see useFakeServer
        sRealOData = jQuery.sap.getUriParameters().get("realOData"),
        rRequestLine = /^GET (\S+) HTTP\/1\.1$/,
        bProxy = sRealOData === "true" || sRealOData === "proxy",
        bRealOData = bProxy || sRealOData === "direct",
        TestUtils;

    /**
     * Checks that the actual value deeply contains the expected value, ignoring additional
     * properties.
     *
     * @param {object} oActual
     *   the actual value to be tested
     * @param {object} oExpected
     *   the expected value which needs to be contained structurally (as a subset) within the
     *   actual value
     * @param {string} sPath
     *   path to the values under investigation
     * @throws {Error}
     *   in case the actual value does not deeply contain the expected value; the error message
     *   provides a proof of this
     */
    function deeplyContains(oActual, oExpected, sPath) {
        var sActualType = QUnit.objectType(oActual),
            sExpectedType = QUnit.objectType(oExpected),
            sName;

        if (sActualType !== sExpectedType) {
            throw new Error(sPath + ": actual type " + sActualType
                + " does not match expected type " + sExpectedType);
        }
        if (sActualType === "array") {
            if (oActual.length < oExpected.length) {
                throw new Error(sPath + ": array length: " + oActual.length + " < "
                    + oExpected.length);
            }
        }
        if (sActualType === "array" || sActualType === "object") {
            for (sName in oExpected) {
                deeplyContains(oActual[sName], oExpected[sName],
                    sPath === "/" ? sPath + sName : sPath + "/" + sName);
            }
        } else if (oActual !== oExpected) {
            throw new Error(sPath + ": actual value " + oActual
                + " does not match expected value " + oExpected);
        }
    }

    /**
     * Pushes a QUnit test which succeeds if and only if a call to {@link deeplyContains} succeeds
     * as indicated via <code>bExpectSuccess</code>.
     *
     * @param {object} oActual
     *   the actual value to be tested
     * @param {object} oExpected
     *   the expected value which needs to be contained structurally (as a subset) within the
     *   actual value
     * @param {string} sMessage
     *   message text
     * @param {boolean} bExpectSuccess
     *   whether {@link deeplyContains} is expected to succeed
     */
    function pushDeeplyContains(oActual, oExpected, sMessage, bExpectSuccess) {
        try {
            deeplyContains(oActual, oExpected, "/");
            QUnit.push(bExpectSuccess, oActual, oExpected, sMessage);
        } catch (ex) {
            QUnit.push(!bExpectSuccess, oActual, oExpected,
                (sMessage || "") + " failed because of " + ex.message);
        }
    }

    /**
     * @classdesc
     * A collection of functions that support QUnit testing.
     *
     * @namespace sap.ui.test.TestUtils
     * @since 1.27.1
     */
    TestUtils = /** @lends sap.ui.test.TestUtils */ {
        /**
         * Companion to <code>QUnit.deepEqual</code> which only tests for the existence of expected
         * properties, not the absence of others.
         *
         * <b>BEWARE:</b> We assume both values to be JS object literals, basically!
         *
         * @param {object} oActual
         *   the actual value to be tested
         * @param {object} oExpected
         *   the expected value which needs to be contained structurally (as a subset) within the
         *   actual value
         * @param {string} [sMessage]
         *   message text
         */
        deepContains : function (oActual, oExpected, sMessage) {
            pushDeeplyContains(oActual, oExpected, sMessage, true);
        },

        /**
         * Companion to <code>QUnit.notDeepEqual</code> and {@link #deepContains}.
         *
         * @param {object} oActual
         *   the actual value to be tested
         * @param {object} oExpected
         *   the expected value which needs to be NOT contained structurally (as a subset) within
         *   the actual value
         * @param {string} [sMessage]
         *   message text
         */
        notDeepContains : function (oActual, oExpected, sMessage) {
            pushDeeplyContains(oActual, oExpected, sMessage, false);
        },

        /**
         * Activates a sinon fake server in the given sandbox. The fake server responds only to
         * those GET requests given in the fixture. It is automatically restored when the sandbox
         * is restored.
         *
         * The function uses <a href="http://sinonjs.org/docs/">Sinon.js</a> and expects that it
         * has been loaded.
         *
         * @param {object} oSandbox
         *   a Sinon sandbox as created using <code>sinon.sandbox.create()</code>
         * @param {string} sBase
         *   The base path for <code>source</code> values in the fixture. The path must be relative
         *   to the <code>test</code> folder of the <code>sap.ui.core</code> project, typically it
         *   should start with "sap". It must not end with '/'.
         *   Example: <code>"sap/ui/core/qunit/model"</code>
         * @param {map} mFixture
         *   The fixture. Each key represents a URL to respond to. The value is an object that may
         *   have the following properties:
         *   <ul>
         *   <li>{number} <code>code</code>: The response code (<code>200</code> if not given)
         *   <li>{map} <code>headers</code>: A list of headers to set in the response
         *   <li>{string} <code>message</code>: The response message
         *   <li>{string} <code>source</code>: The path of a file relative to <code>sBase</code> to
         *     be used for the response message. It will be read synchronously in advance. In this
         *     case the header <code>Content-Type</code> is determined from the source name's
         *     extension.
         *   </ul>
         *   Requests ending on "/$batch" are handled differently. They are expected to be
         *   multipart mime requests where each part is a GET request. The fixture value is an
         *   object in which the key is a request URL and the value is an object as described
         *   above.
         *
         *   Each multipart request in the batch is responded separately. If the URL is not found
         *   in the fixture, it is responded with a 404. The batch itself is always responded with
         *   a 200.
         */
        useFakeServer : function (oSandbox, sBase, mFixture) {

            function batch(mUrls, oRequest) {
                var sBody = oRequest.requestBody,
                    sBoundary,
                    aRequestParts,
                    aResponseParts = [""];

                sBoundary = firstLine(sBody);
                aRequestParts = sBody.split(sBoundary).slice(1, -1);
                aRequestParts.forEach(function (sRequestPart) {
                    var aMatches,
                        sRequestLine,
                        sResponse;

                    sRequestPart = sRequestPart.slice(sRequestPart.indexOf("\r\n\r\n") + 4);
                    sRequestLine = firstLine(sRequestPart);
                    aMatches = rRequestLine.exec(sRequestLine);
                    sResponse = aMatches && mUrls[aMatches[1]];
                    if (sResponse) {
                        aResponseParts.push("\r\n" + sResponse);
                        jQuery.sap.log.info(sRequestLine, null, "sap.ui.test.TestUtils");
                    } else {
                        aResponseParts.push("\r\nContent-Type: application/http\r\n"
                            + "content-transfer-encoding: binary\r\n\r\n"
                            + "HTTP/1.1 404 Not Found\r\n"
                            + "Content-Type: text/plain\r\n\r\nNo mock data found\r\n");
                        jQuery.sap.log.error(sRequestLine, "No mock data found",
                            "sap.ui.test.TestUtils");
                    }
                });
                aResponseParts.push("--\r\n");
                oRequest.respond.apply(oRequest, [200, {
                    "Content-Type" : "multipart/mixed; boundary=" + sBoundary.slice(2)
                }, aResponseParts.join(sBoundary)]);
            }

            function buildResponses(mFixture, bIsBatch) {
                var oHeaders,
                    sMessage,
                    oResponse,
                    sUrl,
                    mUrls = {};

                for (sUrl in mFixture) {
                    oResponse = mFixture[sUrl];
                    oHeaders = oResponse.headers || {};
                    if (!bIsBatch && rBatch.test(sUrl)) {
                        mUrls[sUrl] = batch.bind(null, buildResponses(oResponse, true));
                    } else {
                        if (oResponse.source) {
                            sMessage = readMessage(sBase + oResponse.source);
                            if (bIsBatch) {
                                // In Git no files may contain CRLF, but multipart responses
                                // require it. So we simply add the CR again.
                                sMessage = sMessage.replace(/\n/g, "\r\n");
                            } else {
                                oHeaders["Content-Type"] = oHeaders["Content-Type"]
                                    || contentType(oResponse.source);
                            }
                        } else {
                            sMessage = oResponse.message || "";
                        }
                        mUrls[sUrl] = bIsBatch ? sMessage
                            : [oResponse.code || 200, oHeaders, sMessage];
                    }
                }
                return mUrls;
            }

            function contentType(sName) {
                if (/\.xml$/.test(sName)) {
                    return "application/xml";
                }
                if (/\.json$/.test(sName)) {
                    return "application/json";
                }
                return "application/x-octet-stream";
            }

            function firstLine(sText) {
                return sText.slice(0, sText.indexOf("\r\n"));
            }

            function readMessage(sPath) {
                var sMessage = mMessageForPath[sPath],
                    oResult;

                if (!sMessage) {
                    oResult = jQuery.sap.sjax({
                        url: sPath,
                        dataType: "text"
                    });
                    if (!oResult.success) {
                        throw new Error(sPath + ": resource not found");
                    }
                    mMessageForPath[sPath] = sMessage = oResult.data;
                }
                return sMessage;
            }

            function setupServer() {
                var fnRestore,
                    oServer,
                    mUrls = buildResponses(mFixture, false),
                    sUrl;

                // set up the fake server
                oServer = oSandbox.useFakeServer();
                oServer.autoRespond = true;
                for (sUrl in mUrls) {
                    oServer.respondWith(sUrl, mUrls[sUrl]);
                }

                // wrap oServer.restore to also clear the filter
                fnRestore = oServer.restore;
                oServer.restore = function () {
                    sinon.FakeXMLHttpRequest.filters = []; // no API to clear the filter
                    fnRestore.apply(this, arguments); // call the original restore
                };
            }

            //TODO remove this workaround in IE9 for
            // https://github.com/cjohansen/Sinon.JS/commit/e8de34b5ec92b622ef76267a6dce12674fee6a73
            sinon.xhr.supportsCORS = true;

            sBase = "/" + window.location.pathname.split("/")[1] + "/test-resources/" + sBase
                + "/";
            setupServer();

            // set up a filter so that other requests (e.g. from jQuery.sap.require) go through
            sinon.FakeXMLHttpRequest.useFilters = true;
            sinon.FakeXMLHttpRequest.addFilter(function (sMethod, sUrl, bAsync) {
                return !(sUrl in mFixture); // do not fake if URL is unknown
            });
        },

        /**
         * If a test is wrapped by this function, you can test that locale-dependent texts are
         * created as expected, but avoid checking against the real message text. The function
         * ensures that every message retrieved using
         * <code>sap.ui.getCore().getLibraryResourceBundle().getText()</code> consists of the key
         * followed by all parameters referenced in the bundle's text in order of their numbers.
         *
         * The function uses <a href="http://sinonjs.org/docs/">Sinon.js</a> and expects that it
         * has been loaded. It creates a <a href="http://sinonjs.org/docs/#sandbox">Sinon
         * sandbox</a> which is available as <code>this</code> in the code under test.
         *
         * <b>Example</b>:
         *
         * In the message bundle a message looks like this:
         * <pre>
         * EnterNumber=Enter a number with scale {1} and precision {0}.
         * </pre>
         * This leads to the following results:
         * <table>
         * <tr><th>Call</th><th>Result</th></tr>
         * <tr><td><code>getText("EnterNumber", [10])</code></td>
         *   <td>EnterNumber 10 {1}</td></tr>
         * <tr><td><code>getText("EnterNumber", [10, 3])</code></td>
         *   <td>EnterNumber 10 3</td></tr>
         * <tr><td><code>getText("EnterNumber", [10, 3, "foo"])</code></td>
         *   <td>EnterNumber 10 3</td></tr>
         * </table>
         *
         * <b>Usage</b>:
         * <pre>
         * test("parse error", function () {
         *     sap.ui.test.TestUtils.withNormalizedMessages(function () {
         *         var oType = new sap.ui.model.odata.type.Decimal({},
         *             {constraints: {precision: 10, scale: 3}});
         *
         *         throws(function () {
         *             oType.parseValue("-123.4567", "string");
         *         }, /EnterNumber 10 3/);
         *     });
         * });
         * </pre>
         *
         * @param {function} fnCodeUnderTest
         *   the code under test
         * @since 1.27.1
         */
        withNormalizedMessages: function (fnCodeUnderTest) {
            sinon.test(function () {
                var oCore = sap.ui.getCore(),
                    fnGetBundle = oCore.getLibraryResourceBundle;

                this.stub(oCore, "getLibraryResourceBundle").returns({
                    getText: function (sKey, aArgs) {
                        var sResult = sKey,
                            sText = fnGetBundle.call(oCore).getText(sKey),
                            i;

                        for (i = 0; i < 10; i += 1) {
                            if (sText.indexOf("{" + i + "}") >= 0) {
                                sResult += " " + (i >= aArgs.length ? "{" + i + "}" : aArgs[i]);
                            }
                        }
                        return sResult;
                    }
                });

                fnCodeUnderTest.apply(this);
            }).apply({}); // give Sinon a "this" to enrich
        },

        /**
         * @returns {boolean}
         *   <code>true</code> if the real OData service is used.
         */
        isRealOData : function () {
            return bRealOData;
        },

        /**
         * Adjusts the given absolute path so that (in case of "?realOData=proxy") the request is
         * passed through the SimpleProxyServlet.
         *
         * @param {string} sAbsolutePath
         *   some absolute path
         * @returns {string}
         *   the absolute path transformed in a way that invokes a proxy
         */
        proxy : function (sAbsolutePath) {
            return bProxy
                ? "/" + window.location.pathname.split("/")[1] + "/proxy" + sAbsolutePath
                : sAbsolutePath;
        },

        /**
         * Sets up the fake server for OData V4 responses unless real OData responses are
         * requested.
         *
         * The behavior is controlled by the request property "realOData". There are two options:
         * <ul>
         * <li>"realOData=proxy" (or "realOData=true"): The test must be part of the UI5 Java
         *   Servlet. Set the system property "com.sap.ui5.proxy.REMOTE_LOCATION" to a server
         *   containing the Gateway test service.
         * <li>"realOData=direct": The test and the Gateway service must be reachable via the same
         *   host. This can be reached either by deploying the test code to the Gateway host or by
         *   using a reverse proxy like the SAP Web Dispatcher.
         * </ul>
         *
         * @param {object} oSandbox
         *   a Sinon sandbox as created using <code>sinon.sandbox.create()</code>
         * @param {map} mFixture
         *   the fixture for {@link sap.ui.test.TestUtils#.useFakeServer}. If the value for a URL
         *   contains <code>always:true</code>, this URL is faked even with <code>realOData</code>.
         * @param {string} [sSourceBase="sap/ui/core/qunit/odata/v4/data"]
         *   The base path for <code>source</code> values in the fixture. The path must be relative
         *   to the <code>test</code> folder of the <code>sap.ui.core</code> project, typically it
         *   should start with "sap". It must not end with '/'.
         * @param {string} [sFilterBase=""]
         *   A base path for the filter URLs. It is prepended to all keys in <code>mFixture</code>.
         */
        setupODataV4Server : function (oSandbox, mFixture, sSourceBase, sFilterBase) {
            var mResultingFixture = {},
                bStart = false;

            sFilterBase = sFilterBase || "";
            Object.keys(mFixture).forEach(function (sUrl) {
                if (!bRealOData || mFixture[sUrl].always) {
                    mResultingFixture[sFilterBase + sUrl] = mFixture[sUrl];
                    bStart = true;
                }
            });
            if (bStart) {
                TestUtils.useFakeServer(oSandbox,
                    sSourceBase || "sap/ui/core/qunit/odata/v4/data", mResultingFixture);
            }
        }
    };

    return TestUtils;
}, /* bExport= */ true);
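The structural-containment rule implemented by `deeplyContains` above can be exercised outside the UI5/QUnit environment. The following is a minimal standalone sketch in plain JavaScript: `objectType` is a hypothetical, simplified stand-in for `QUnit.objectType`, and `contains` replaces the QUnit reporting of `pushDeeplyContains` with a plain boolean, so none of this is part of the module itself.

```javascript
// Simplified stand-in for QUnit.objectType (assumption: distinguishing
// "null", "array", "object" and primitive typeof results is enough here).
function objectType(v) {
  if (v === null) { return "null"; }
  if (Array.isArray(v)) { return "array"; }
  if (typeof v === "object") { return "object"; }
  return typeof v;
}

// Same recursion as TestUtils' deeplyContains: every property present in
// `expected` must match structurally in `actual`; extra properties of
// `actual` are ignored. Throws with a path-based message on mismatch.
function deeplyContains(actual, expected, path) {
  var actualType = objectType(actual);
  var expectedType = objectType(expected);
  if (actualType !== expectedType) {
    throw new Error(path + ": actual type " + actualType
      + " does not match expected type " + expectedType);
  }
  if (actualType === "array" && actual.length < expected.length) {
    throw new Error(path + ": array length: " + actual.length
      + " < " + expected.length);
  }
  if (actualType === "array" || actualType === "object") {
    for (var name in expected) {
      deeplyContains(actual[name], expected[name],
        path === "/" ? path + name : path + "/" + name);
    }
  } else if (actual !== expected) {
    throw new Error(path + ": actual value " + actual
      + " does not match expected value " + expected);
  }
}

// Boolean wrapper, analogous to how deepContains/notDeepContains report
// the outcome instead of letting the error escape.
function contains(actual, expected) {
  try {
    deeplyContains(actual, expected, "/");
    return true;
  } catch (e) {
    return false;
  }
}
```

For example, `contains({a: 1, b: {c: 2, d: 3}}, {b: {c: 2}})` is true while `contains({a: 1}, {a: 2})` is false, mirroring the doc comment's "existence of expected properties, not the absence of others".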
Very Veggie Partners: Incorporating Recipes From the East, West, North, & South of China With Bodhi Kosher Vegetarian Restaurant Bodhi Kosher Vegetarian Restaurant, located in Chinatown, New York. Photo courtesy of Bodhi Kosher Vegetarian Restaurant. Story shared by Bodhi Kosher Vegetarian Restaurant, a partner of the Very Veggie Movement: "If you think about Chinese food, the southern cuisine is sweet, the northern ones are salty, the east is spicy, and the west is sour. Are these many flavors present in vegetarian dishes?" Twenty-six years ago, Kent Zhang was moved to take on a vegetarian diet due to his religious beliefs — and because he's cultivated a deep love of delicious food after many years in the catering industry as well. At Bodhi Kosher Vegetarian Restaurant, located in New York's Chinatown, Kent has merged his ambition and his love of food to develop enticing new vegetarian interpretations of traditional Chinese recipes. And they've certainly won over many of his guests, too. Although the restaurant has struggled during the pandemic like many others, Kent is grateful for all of the kindness he's been shown from customers, and when Tzu Chi volunteers contacted Kent about partnering with the Very Veggie Movement, Kent joined with enthusiasm. Kent's vegetarian journey began in the Western Hills of Beijing, China. He'd been studying English at Beijing Foreign Affairs University, and his class organized a spring outing. Kent, who ran a restaurant and loves cooking, was entrusted with the important task of preparing meals for everyone. "I set up a grill, grilled fish for everyone, and prepared a Buddha's delight (a braised vegetable dish)," he said. "I'd thought, 'if I smell grilled fish and want to eat, then just eat it. But if I don't feel like eating fish while smelling it, then I am ready to be a vegetarian.'" After testing himself that day, Kent began his vegetarian lifestyle and never looked back. 
Where There Is a Will, Action Must Follow In 1999, Kent and his friends opened a vegetarian restaurant in Flushing, New York. It was indeed a moment of great happiness for Kent. "We are vegetarian. On one hand, there are few restaurants we can gather and eat at, and it is not convenient to go to ordinary restaurants. On the other hand, I was also catering before; when the boss asked me to get fish and crabs for dishes, I had to do it. But I was very, very sad. Now, I no longer need to hurt animals." Kent has been a vegetarian since 1994. When he first became a vegetarian, there were few options to choose from and the flavors were relatively monotonous. However, Kent was committed to being a vegetarian. "Because I believe in karma's causal circle, I am very firm in spirit, and I slowly rejected the taste of meat psychologically. There is no regret." Whether being a vegetarian or running a vegetarian restaurant, Kent works hard to give his commitments 100% of his efforts. Over time, Kent saw the fruits of his efforts grow and flourish. The restaurant moved to Chinatown in Manhattan, New York. And with age, the benefits of being a vegetarian gradually became more apparent: "I go with friends of my age to annual medical checks; my indicators are much better than others. Whether it is cholesterol or blood pressure, they are all normal." Kent said that relieving himself of the psychological burden of his old diet had boosted his spirits considerably, and he now feels more relaxed in general. Kent's Bodhi Kosher Vegetarian Restaurant is clean and cozy. Photo courtesy of Bodhi Kosher Vegetarian Restaurant. North, South, East, and West Before becoming a vegetarian, Kent worked in the catering industry at Bei Hde Guesthouse in the Houhai area of Beijing. He was familiar with everything from purchasing to operation, and cooking to management. Kent said that he has always enjoyed cooking, and he likes delving into the culture of Chinese cuisine. 
After becoming a vegetarian, he began to pay more attention to the special presentation of vegetarian foods in Chinese cuisine. "I remember eating an impressive vegetable dish in Beijing at that time. It was called "pine stone and crispy eel." It was thinly cut with shiitake mushrooms. It was half black and half white, like eel, wrapped in flour and fried. It's crispy and delicious, and the presentation is very artistic, stacked like a rockery, topped with a cherry, which is so beautiful." Kent remembered this dish and endeavored to make it himself when he returned home. Kent loves to try new dishes and learn from his experiences. Photo courtesy of Bodhi Kosher Vegetarian Restaurant. Kent tasted all kinds of vegetarian foods from the east, west, north, and south of China. During his gradual accumulation of new flavors and culinary techniques, he collected more than two hundred gourmet recipes for his restaurant. "There are many people who misunderstand that vegetarian dishes are indifferent and boring. If you think about Chinese food, the southern cuisine is sweet, the northern ones are salty, the east is spicy, and the west is sour. Are these many flavors present in vegetarian dishes?" Kent proudly told us that although his restaurant chiefly serves Cantonese dim sum, it also includes many other tasty meals from all over China. When speaking of the sweeter southern dishes, Kent mentioned a famous dish from Guangdong known as sweet and sour pork. At Bodhi Kosher Vegetarian Restaurant, Kent carefully adapted it to utilize rice cakes as a means of imitating the fatty meat, and the lean meat is instead substituted with vegetarian ham. Traditional hawthorn, pineapple, and various fruits are used to cook the sweet and sour sauce. Together, the color is vibrant and the taste is rich. A famous dish in Guangdong, "sweet and sour pork" has been deliciously adapted into a vegetarian version at Bodhi Kosher Vegan Restaurant. 
Photo courtesy of Bodhi Kosher Vegetarian Restaurant. One must-try savory northern recipe is a famous dish from Beijing: "Shredded Pork with Beijing Sauce." Traditionally, pork loin and Beijing onions are stir-fried with a sweet noodle sauce. Kent has also reinvigorated these ingredients using bean curds to make vegetarian shredded pork, coating it with cornstarch, and frying it until fragrant and tender. Then, Kent uses a sweet sauce for stir-frying, and uses shredded cucumber as the side dish instead of Beijing onion, which not only retains the flavor of northern Shandong cuisine but is also healthier. The continuously heated hot pot contains an assortment of delicious veggies. Photo courtesy of Bodhi Kosher Vegetarian Restaurant. Among the eight major Chinese cuisines, many have spicy dishes, but the "Mixed Hot Pot" from the Northeast is particularly mouth-watering. The pot is placed on a fire, and the spicy soup base can be paired with a variety of vegetables, tofu, and beyond. When the conversation arrived at the more sour tangs associated with western dishes, the "sour soup noodles" from Shanxi were called to attention. Kent said that the original noodles made in the shop sold very well, and then they bought a machine to help press the noodles together. Many noodles contain eggs to improve flavor, but these fresh, handmade noodles boiled and combined with Shanxi vinegar and stewed vegan meat certainly do not want for anything. The different flavors of Chinese cuisine, from east, west, north, and south, are a treasure trove. If you dig into it, you can get creative vegetarian recipes. Kent Zhang, Owner of Bodhi Kosher Vegetarian Restaurant Uniting to Advocate for Vegetarianism At Bodhi Kosher Vegetarian Restaurant, the meals are clean, healthy, and flavorful, attracting many non-vegetarians as well as long-time vegetarians. "I remember once a table of South American customers ordered a lot of dishes and ate very happily. 
At the checkout, I stepped forward to greet them and asked them which one is vegetarian. They looked at each other and wondered why I asked. I said, 'this is a vegetarian restaurant.' They exclaimed that what they just ate was vegetarian!" During his time operating the restaurant, Kent has encountered many more interesting misunderstandings just like this one. Bodhi Kosher Vegetarian's dishes have not only won over a large number of vegetarian customers, but also opened the eyes of non-vegetarians.

Vegetarian sweet and sour fish is a very popular menu item for vegetarian customers as well as non-vegetarians. Photo courtesy of Bodhi Kosher Vegetarian Restaurant.

Vegetarian BBQ pepper pork is another popular dish at Bodhi Kosher Vegetarian Restaurant. Photo courtesy of Bodhi Kosher Vegetarian Restaurant.
Wherever there are difficulties, Tzu Chi will always stand out, and here with me too, I have suffering and difficulties, and Tzu Chi appeared. Kent's relationship with Tzu Chi began early; before opening the restaurant, Kent participated in Tzu Chi's choir. Later, as he started to manage the restaurant and time had to be handled carefully, those at Tzu Chi always remained in his heart. "Understanding that my business is recovering slowly, several senior brothers and sisters from Tzu Chi have come to recommend the Very Veggie Movement to me. I think this concept is very good, so I participated and offered a 50% discount on different dishes every week. I hope to attract more people to try vegetarian food." Kent has a great relationship with Tzu Chi, and took a photo with his old friend, who is a Tzu Chi volunteer. Photo courtesy of Bodhi Kosher Vegetarian Restaurant. Bodhi Kosher Vegetarian Restaurant 77 Mulberry St, New York, NY 10013 buddhavegetarian.com Join the Very Veggie Movement today by signing up, and you'll begin receiving recipes, meal ideas, tips, articles, and special deals from our Very Veggie Partners! Becoming a partner of the Very Veggie Movement is easy — discover how you can join us, too!
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
4,794
Q: How to restrict access to API to the portal run on browser through Nginx Scenario I have a system with an api server and a front end (static website run on browsers) and they are publicly available under 2 domain names a.example.com and a-api.example.com Question How do I restrict access to my api at a-api.example.com to my front end only (e.g. no one can arbitrarily curl to it and be able to access)? Or is it possible at all? If you can add a sample nginx block that'd be awesome. A: You cannot block curl calls completely. However, you can make them more difficult by requiring that HTTP referrer header is set to the api. You can use nginx HTTP referer module for this. An example configuration: server { valid_referers a-api.example.com; if ($invalid_referer) { return 403; } } This is not adding any security to your website. It is trivial for bad actor to add the required HTTP header when making requests to a-api.example.com. Therefore it is important that best security prcatices are used in your API implementation.
{ "redpajama_set_name": "RedPajamaStackExchange" }
4,294
Monday evening, 25 April, after three postponements, the European Space Agency ESA finally launched the Soyuz rocket from French Guiana that carried Aalborg University's newest satellite in space. Early Tuesday morning the engineering students were in two-way contact with AAUSAT4. Its beacons were also received by radio amateurs in the United States, Austria and Denmark. "It's a huge relief after we had a problem communicating with the predecessor AAUSAT5", says engineering student Mikael Juhl Kristensen from the AAUSAT4 team. A long and exciting night in the university's satellite laboratory was followed by a busy morning, where the team was ready to receive signals from AAUSAT4 when it passed close enough over Aalborg to allow radio contact. The direct communication was supported by positive observations from ground stations and radio amateurs around the world. The first data came from the earth station in Thule (Greenland) around 04:15 in the morning. Later the students had full two-way contact with the satellite as it flew over Aalborg shortly after six o'clock. The pass was used to collect a lot of useful statistics on AAUSAT4's condition. "The station in Thule has not been in use for one and a half years, so we were not sure how well it would perform. But it gave us the first definite indication that everything is alright. The main points being that there is power and that the satellite is charging. As the mood in the room indicates we're now very relaxed and happy", says Associate Professor Jesper Abildgaard Larsen, who is a veteran from the former AAUSAT missions. "Both the battery voltage of 8 volts and the temperature of 20 degrees testify that the satellite is in excellent shape. Now we need to optimize the reception of AIS data transmitted by ships in the waters around Greenland, which is AAUSAT4's practical mission". This will be achieved by calibrating the satellite and adjusting various parameters of its software with commands from the ground. 
The main star in the successful rocket launch from Kourou in South America was ESA's environmental monitoring satellite Sentinel-1B. ESA Education seized the opportunity to let engineering students from universities in Aalborg (Denmark), Liege (Belgium) and Turin (Italy) send their small CubeSats into space as part of ESA's' !Fly Your Satellite!" program. The payload of AAUSAT4 is to improve the monitoring of sea traffic around Greenland, which began with its predecessor AAUSAT3 and was followed up with AAUSAT5 in collaboration with Danish astronaut Andreas Mogensen. The Belgian CubeSat Oufti-1 will test digital radio communications to space, and Italian e-st@ar-II experiments with a control system that utilizes the earth's magnetic field. The AAUSAT4 website at www.space.aau.dk/aausat4 is constantly updated with information about data collected at the various passes. ESA news update: Student satellites fly freely on their orbit in space. Fly Your Satellite! is an educational programme by the European Space Agency (ESA) run in close collaboration with European universities and aimed at complementing academic education. It is providing university students across Europe with the unique opportunity to gain practical experience in key phases of a challenging, real satellite project – a CubeSat - from integration, test and verification, launch and operations. Through Fly Your Satellite! and other educational projects, ESA acts to inspire, engage and better prepare students to undertake scientific and technological careers, in the space sector in particular. Fly Your Satellite! is part of the newly established ESA Academy programme. Learn more. AAUSAT4 is a CubeSat of 10x10x11 cm. It can receive AIS identification signals from ships in those areas it flies over, and send the information to control stations on the ground. Monitoring is relevant, for example, when investigating pollution from ships. 
Jens Dalsgaard Nielsen, Associate Professor, AAU Student Space, Mobile +45 2872 8753. Jesper Abildgaard Larsen, Associate Professor, AAU Student Space, Mobile +45 5170 0417. Engineering students: Anders Kalør, Mobile +45 6169 9310, and Mikael Juhl Kristensen, Mobile +45 2364 1901.
{ "redpajama_set_name": "RedPajamaC4" }
6,094
A journal by artist and designer John Coulthart. Lyrical Substance Deliberated Lucy In The Sky With Diamonds from Yellow Submarine (1968). The advent of spring invariably gets me listening to favourite psychedelic songs, and this year has been no exception. Earlier this week I was idly wondering how many songs there are that follow the Beatles' lead in telegraphing their drug metaphors by using the initials L-S-D in their titles. Wikipedia's page for Lucy In The Sky With Diamonds (1967) relates John Lennon's oft-repeated claim that the initialism in the title was a coincidence, and the song itself is really a bit of Lewis Carroll-like whimsy. This might be credible if works of art only ever carried one meaning but they don't, of course, and the song is both a piece of Lewis Carroll-like whimsy as well as being a pretty obvious paean to the drug experience: "Climb in the back with your head in the clouds / And you're gone". Jefferson Airplane's White Rabbit (1967) was similarly ambivalent with mushrooms/pills replacing acid. Among the many things birthed by the enormous success of the Sgt Pepper album, a small flurry of songs or instrumentals have imitated Lennon's initialism for their titles. The ones that came immediately to mind are detailed below, and they make a curious group. If anyone knows of any others—there must be others…—then please leave a comment. Burning Of The Midnight Lamp/The Stars That Play With Laughing Sam's Dice (Aug, 1967). The Jimi Hendrix Experience's B-side not only alludes to LSD but also to STP. The song itself doesn't go very far before collapsing into freakout mode. The Trip (1967). Not a song but included here for that "Lovely Sort of Death" tag. Written by Jack Nicholson! With Dennis Hopper as the acid dealer! See the trailer here, then watch the whole film here. Lost Soul In Disillusion (November, 1967). Hard to imagine anyone in London would have heard this in 1967. 
The Power of Beckett were a Montreal garage group who only released two singles. Lost Soul In Disillusion turned up years later on compilation albums. Would You Believe (1968) by Billy Nicholls. Billy Nicholls' debut album begins its second side with London Social Degree, a song in which Billy advises a female friend to open her mind by getting hip to the "degree" in question. A pretty good number which also turns up on compilations. Foolish Seasons (1968) by Dana Gillespie. This was a surprise. Dana Gillespie is a British actress with a Raquel Welch-style reputation for her prodigious chest measurements. A year after performing in Hammer's bizarre The Lost Continent she recorded her debut album which includes among its cover songs her version of London Social Degree. Church of Hawkwind (1982) by Hawkwind. This Hawkwind album (actually more of a Dave Brock solo album) includes an instrumental track entitled Light Specific Data. It's also the first Hawkwind album that featured any artwork of mine with some illustrations in the lyric booklet. Love's Secret Domain (1991) by Coil. Coil's first psychotropic album, several tracks of which allude to drugs. The song Love's Secret Domain also manages to allude to William Blake, Arthur Machen and Roy Orbison, among other things. Stolen and Contaminated Songs (1992) by Coil. The companion release to Love's Secret Domain was a very strong collection of alternate versions, and unreleased tracks from sessions for the earlier album. The final track is Light Shining Darkly. Light Sensitive Data/UFO (1995) by Dimension 5. Eight-and-a-half minutes of Goa trance. 
Previously on { feuilleton } • The Art of Tripping, a documentary by Storm Thorgerson • Enter the Void • In the Land of Retinal Delights • The art of LSD • Hep cats Author JohnPosted on April 4, 2014 April 6, 2014 Categories {drugs}, {film}, {music}, {psychedelia}, {work}Tags Arthur Machen, Billy Nicholls, cats, Coil (group), Dana Gillespie, Dave Brock, Dennis Hopper, Dimension 5, Hawkwind, Jack Nicholson, Jefferson Airplane, Jimi Hendrix, John Lennon, Lewis Carroll, LSD, Roy Orbison, The Beatles, The Jimi Hendrix Experience, The Power of Beckett, William Blake, Yellow Submarine 6 thoughts on "Lyrical Substance Deliberated" MÁRCIO SALERNO says: Well, Lewis Carroll, according to legend, was also a big fan of such mushrooms… Also from Coil – and quite a listen – the piece "Light Shining Darkly". There's a nice flow to the letters on the Dana Gillespie album. Edward: Thanks, I've amended the post. No excuse for my missing that when I've had the album since 1992. Tel: Indeed. And they also resisted the obvious move of having a close-up of Dana. When you're ready to move on we could look at 'grass'?! Grass? Where to begin? And end? 
Too many choices… Previous Previous post: Lovecraft's Monsters unleashed Next Next post: Acid covers Giger's first alien: Swissmade: 2069 Ghost Box and The Infinity Box Weekend links 551 David Britton, 1945–2020 Sredni Vashtar, 1981 Curious Relations Jean Giraud record covers Seasonal spectres Arzak Rhapsody About { feuilleton } Coulthart books Themed archive pages The album covers archive The book covers archive The etching and engraving archive The fantastic art archive The gay artists archive The illustrators archive The men with swords archive The Oscar Wilde archive The panoramas archive The recurrent pose archive The Salomé archive Archives Select Month January 2021 December 2020 November 2020 October 2020 September 2020 August 2020 July 2020 June 2020 May 2020 April 2020 March 2020 February 2020 January 2020 December 2019 November 2019 October 2019 September 2019 August 2019 July 2019 June 2019 May 2019 April 2019 March 2019 February 2019 January 2019 December 2018 November 2018 October 2018 September 2018 August 2018 July 2018 June 2018 May 2018 April 2018 March 2018 February 2018 January 2018 December 2017 November 2017 October 2017 September 2017 August 2017 July 2017 June 2017 May 2017 April 2017 March 2017 February 2017 January 2017 December 2016 November 2016 October 2016 September 2016 August 2016 July 2016 June 2016 May 2016 April 2016 March 2016 February 2016 January 2016 December 2015 November 2015 October 2015 September 2015 August 2015 July 2015 June 2015 May 2015 April 2015 March 2015 February 2015 January 2015 December 2014 November 2014 October 2014 September 2014 August 2014 July 2014 June 2014 May 2014 April 2014 March 2014 February 2014 January 2014 December 2013 November 2013 October 2013 September 2013 August 2013 July 2013 June 2013 May 2013 April 2013 March 2013 February 2013 January 2013 December 2012 November 2012 October 2012 September 2012 August 2012 July 2012 June 2012 May 2012 April 2012 March 2012 February 2012 January 2012 
December 2011 November 2011 October 2011 September 2011 August 2011 July 2011 June 2011 May 2011 April 2011 March 2011 February 2011 January 2011 December 2010 November 2010 October 2010 September 2010 August 2010 July 2010 June 2010 May 2010 April 2010 March 2010 February 2010 January 2010 December 2009 November 2009 October 2009 September 2009 August 2009 July 2009 June 2009 May 2009 April 2009 March 2009 February 2009 January 2009 December 2008 November 2008 October 2008 September 2008 August 2008 July 2008 June 2008 May 2008 April 2008 March 2008 February 2008 January 2008 December 2007 November 2007 October 2007 September 2007 August 2007 July 2007 June 2007 May 2007 April 2007 March 2007 February 2007 January 2007 December 2006 November 2006 October 2006 September 2006 August 2006 July 2006 June 2006 May 2006 April 2006 March 2006 February 2006 Categories Select Category {architecture} {art} {beardsley} {black and white} {collage} {illustrators} {painting} {sculpture} {books} {borges} {burroughs} {cormac} {lovecraft} {cities} {comics} {dance} {design} {art nouveau} {drugs} {events} {fantasy} {fashion} {film} {abstract cinema} {animation} {kubrick} {games} {gay} {eye candy} {horror} {magazines} {miscellaneous} {music} {electronica} {noted} {occult} {photography} {politics} {psychedelia} {pulp} {religion} {science fiction} {science} {surrealism} {symbolists} {technology} {television} {theatre} {typography} {uncategorized} {wordpress} {work} "Feed your head." { feuilleton } Proudly powered by WordPress
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
50
Desert Sands Trailer Park, California

The Pacific Crest Trail - section 3 nearby has an elevation difference of 3,921 feet and offers 56.8 miles of backcountry trekking. It is a demanding route, so plan on about seven days to complete it. In good weather you can also hike the Pacific Crest Trail - section 2 or the Cactus Spring Trail. Anza-Borrego Desert State Park and Ocotillo Wells State Park are both worthwhile stops in the area, as are Borrego Palm Canyon and Panorama Outlook. For golf, Rams Hill Country Club and Roadrunner Golf & Country Club are close by. During the summer months at Desert Sands Trailer Park, high temperatures commonly reach the 90s while overnight lows are generally in the 50s; in winter, highs are commonly in the 50s with overnight lows in the 20s. Precipitation here is sparse: February is typically the wettest month and June the driest.
RV hookups available.

Lodging nearby:
Borrego Springs Resort, Borrego Springs
Borrego Springs Resort & Golf Club, BW Premier Collection, Borrego Springs
La Casa Del Zorro-Worldhotel, Borrego Springs
San Vicente Golf Resort, Ramona

Other campgrounds near Desert Sands Trailer Park, California:
Palm Canyon Resort, Borrego Springs, 0 miles away
Borrego Palm Canyon Campground, San Diego County, 2 miles away
The Springs at Borrego, Borrego Springs, 2 miles away
Anza-Borrego Desert State Park - Culp Valley, Ranchita, 5 miles away
Paroli Homesite Campground, San Diego County, 5 miles away
Anza-Borrego Desert State Park - Vern Whittaker Horse Camp, Borrego Springs, 5 miles away
Desert Gardens Campground, San Diego County, 6 miles away
Holiday Home, Borrego Springs, 7 miles away
Anza-Borrego Desert State Park - Yaqui Well, Borrego Springs, 7 miles away
Anza-Borrego Desert State Park - Yaqui Pass, Borrego Springs, 7 miles away

Public Lands near Desert Sands Trailer Park, California:
Joshua Tree National Park, Twentynine Palms - CA, 57 miles away
Cabrillo National Monument, San Diego - CA, 66 miles away
Anza-Borrego Desert State Park, California, 8 miles away
Ocotillo Wells State Park, California, 17 miles away
Cuyamaca Rancho State Park, California, 20 miles away
Cleveland National Forest, California, 30 miles away
Palomar Mountain State Park, California, 36 miles away

Trails near Desert Sands Trailer Park, California:
Pacific Crest Trail - section 3, overlapping County, 8 miles away (56.8 hiking miles, 3,921 feet elevation difference)
Pacific Crest Trail - section 2, overlapping County, 16 miles away
Cactus Spring Trail, Riverside County, 16 miles away

Golf Courses near Desert Sands Trailer Park, California:
Roadrunner Golf & Country Club, Borrego Springs, 3 miles away
Rams Hill Country Club, Borrego Springs, 6 miles away

Current weather conditions at Desert Sands Trailer Park, California
Local climate location: HENSHAW DAM, 8 miles away

Outdoors Recreation Near San Diego-Carlsbad-San Marcos, California
Outdoors recreation in the vicinity of San Diego-Carlsbad-San Marcos, California, the metro area nearest to Desert Sands Trailer Park. Find info on campgrounds, marinas, hiking trails, ski resorts, lakes, beaches, parks, whitewater, golf courses and more.

California Campgrounds: complete list of all campgrounds in California
San Diego County Campgrounds: complete list of all campgrounds in San Diego County
Books about campgrounds in California: list of books available on Amazon.com about campgrounds in California.
Q: how to use the new image size

I am new to WordPress and I want to use a size of 300 x 283. I tried the_post_thumbnail(array('300,283')) but it didn't work, then I read about add_image_size, which I added like so: add_image_size( 'homepage-thumb', 220, 180, true );, gave my post thumbnail the same dimensions, but nothing happened. Am I missing something? How do I use my newly added add_image_size in the_post_thumbnail?

A: By default WordPress has four image sizes: full, large, medium and thumbnail (full is the original upload; the other three are configurable). If you don't need more than these, you can set the width and height of the default image sizes. For example:

    add_action( 'after_setup_theme', 'theme_setup' );
    function theme_setup() {
        // Be sure your theme supports post-thumbnails
        add_theme_support( 'post-thumbnails' );

        // Set thumbnail size to 150 x 150 pixels
        set_post_thumbnail_size( 150, 150 );

        // For the other image sizes the update_option() function must be used.
        // For example, set width to 300 px and height to 200 px for the medium
        // size (a native image size in WordPress).
        if ( get_option( 'medium_size_w' ) != 300 ) {
            update_option( 'medium_size_w', 300 );
            update_option( 'medium_size_h', 200 );
        }
    }

Note: the dimensions for each default image size can be configured in the WordPress admin area. The above code will override this configuration.

If you need more image sizes, want your very own image sizes, or don't want to alter the default image sizes, you can add new image sizes using the add_image_size() function:

    add_action( 'after_setup_theme', 'theme_setup' );
    function theme_setup() {
        // Be sure your theme supports post-thumbnails
        add_theme_support( 'post-thumbnails' );

        // The params are 'name of the custom size', width (px), height (px) and crop (false/true).
        add_image_size( 'my-image-size', 300, 110, true );
    }

Once the new image size has been added, or the width/height of default sizes have been changed, the upcoming images will have a version with the new image size. The old images must be rebuilt. To rebuild the old images you can upload them again or use a plugin, for example AJAX Thumbnail Rebuild. After that, you are ready to use the custom image size anywhere using, for example:

    the_post_thumbnail('my-image-size');

A: Check out the Codex a little deeper. The answer is there!

    the_post_thumbnail( 'your-custom-size' );
Meet your new go-to summer vegetarian recipe, pasta stuffed peppers! These poblano peppers are stuffed with orzo pasta to keep you satisfied. Perfect for a lunch idea or a flavorful dinner, share this tasty recipe with anyone who enjoys a little spice in their life.

1. Grill corn on hot grill until charred.
2. Remove corn from cob and place in bowl.
3. Cook pasta according to package directions; drain and mix with corn.
4. Add black beans to pasta and corn.
5. Heat olive oil in large sauté pan over high heat and add garlic and onions, sauté for 30 seconds.
6. Add zucchini, sauté for one minute and add mixture to bowl.
7. Add cilantro and jack cheese to bowl and mix all ingredients until well combined.
8. Slice each poblano from stem to tip.
9. Fill with half of black bean orzo mixture.
10. Place the other half of mixture in bottom of microwave-proof dish, top with stuffed poblano.
11. Microwave for 3 minutes.
12. Serve with low-fat sour cream and chopped cilantro.

*Note: To roast poblano peppers, place on tray and broil until inside flesh becomes very soft and skin has large char marks (approximately 6-10 minutes). Turn peppers over every few minutes to ensure even cooking.

Food Stylist: Diane Vezza http://www.dianevezza.com/
\section{Introduction} \IEEEPARstart{I}{ndex} modulation (IM) is emerging as a new waveform format due to the need for increasing spectral efficiency and energy efficiency \cite{IM_nextGeneration,IMfor5G}. By activating a subset of communication resource entities during each signalling period, implicit information is embedded in activation patterns and transmitted without energy consumption. Meanwhile, activated entities operate with conventional digital modulation schemes and carry explicit information. According to selected resource entities, IM is classified into three main domains and termed spatial-domain IM (SD-IM), time-domain IM (TD-IM), and frequency-domain IM (FD-IM). Spatial modulation (SM), proposed in \cite{spatial_modulation_2008}, was an early implementation of SD-IM, where only a single transmit (TX) antenna in a multiple-input multiple-output (MIMO) system is activated in each symbol period. Inter-channel interference is therefore removed because of one active TX antenna, which leads to reduced detection complexity but at the price of throughput loss. Subsequent research activated multiple TX antennas to compensate for the throughput loss such as generalized-SM (GSM) \cite{GSM} and multiple active-SM (MA-SM) \cite{MA_SM}, in which GSM transmits the same data symbols on all activated antennas to avoid inter-channel interference and inter-symbol interference (ISI) while MA-SM transmits different symbols on different antennas. TX antenna grouping was introduced to GSM to harvest transmission diversity \cite{SM_TX_grouping}. Likewise, SD-IM can also be performed at receive (RX) antennas. Pre-processing aided-SM (PSM) deploys pre-coding to ensure that only the target RX antenna receives signals while the others only receive noise. In addition, the index of the activated RX antenna conveys information \cite{PSM}. Later, the SD-IM scheme was generalized in \cite{GPSM} to allow multiple activated RX antennas. 
In addition to the aforementioned SM schemes that utilize the indices of antennas to transmit information, MIMO with Antenna Number Modulation (MIMO-ANM) was developed in \cite{MIMO-ANM} which utilizes the number of antennas instead of their indices. By enabling a variable number of activated TX antennas, MIMO-ANM exhibits performance advantage over SM. TD-IM is performed in time domain where only a subset of time slots is activated for data transmission. Alternatively, empty time slots can be utilized by using a different modulation scheme that is distinguishable from the one modulated at activated time slots. When such TD-IM techniques are applied on single-carrier systems, they are termed either single-carrier IM (SC-IM) \cite{Single_carrier_TD_IM} or dual-mode SC-IM (DM-SCIM) \cite{dual_mode_SC_TD_IM}. To further enhance achievable spectral efficiency, single-mode and dual-mode TD-IM are combined with Faster-than-Nyquist (FTN), which enhances spectral efficiency by violating the time-orthogonal Nyquist-criterion \cite{dual_mode_SC_TD_IM, single_mode_FTN_TD_IM}. Particularly, FTN-IM in \cite{single_mode_FTN_TD_IM} can efficiently mitigate ISI owing to the unused time slots. In addition to the aforementioned TD-IM schemes that purely work in time domain, TD-IM is also incorporated with SD-IM leading to a family of multidimensional IM schemes, where index information is transmitted through the indices of activated time slots and TX antennas. There has been substantial research in this area since the early implementation of space-time IM (ST-IM) \cite{space_time_IM}. In the recent work \cite{FTN_multimode_ST_IM}, FTN in multi-mode TD-IM is combined with SM, which yields better BER performance than traditional MIMO. FD-IM refers to IM performed on subcarriers' state in multi-carrier systems. 
Orthogonal frequency division multiplexing with index modulation (OFDM-IM) proposed in \cite{OFDM_IM_Basar} paves the way for subsequent research by introducing subcarrier grouping. In this case, a fixed number of subcarriers are activated in each subcarrier group (i.e., subblock) to simplify index selection and detection. As index information is conveyed implicitly without energy consumption, signal power originally allocated to unused subcarriers can be suppressed, which yields improved energy efficiency. Advantageously, the saved signal power can be allocated to activated subcarriers, leading to improved error performance over OFDM. In addition, peak-to-average power ratio (PAPR) of index-modulated multi-carrier signals is reduced, given that the number of activated subcarriers is reduced \cite{OFDMIM_PAPR}. Hence, the energy efficiency of FD-IM systems is improved. To obtain higher achievable spectral efficiency, enhanced OFDM-IM schemes have been put forward by increasing the flexibility of activation patterns. In \cite{Generalized_OFDM_IM}, OFDM with generalized index modulation 1 (OFDM-GIM1) with a variable number of activated subcarriers per subblock and OFDM-GIM2 with indexing on the in-phase and quadrature components of subcarriers were proposed. On the other hand, OFDM with subcarrier number modulation (OFDM-SNM) \cite{OFDM_Subcarrier_Number_Modulation,enahnced_OFDM_Subcarrier_Number_Modulation} was developed to perform IM on the number of activated subcarriers instead of their indices, which obtains additional coding gain by adjusting activated subcarriers' locations based on channel conditions. However, the spectral efficiency of OFDM-SNM is variable since it is dependent on the number of activated subcarriers, which may lead to error propagation.
To solve this problem, joint-mapping OFDM-SNM (JM-OFDM-SNM) was put forward which ensures a fixed length of information bits for each transmission by jointly considering subcarrier activation patterns and modulation schemes, and performance improvement was reported \cite{joint-mapping-OFDM-SNM}. Considering the throughput loss brought by the unused subcarriers, a group of OFDM-IM schemes that have all subcarriers activated were proposed. In dual-mode OFDM (DM-OFDM) \cite{dual_mode_OFDM_IM} and generalized DM-OFDM (GDM-OFDM) \cite{Generalized_Dual_Mode_OFDM_IM}, subcarriers are modulated with two distinguishable modulation schemes, and their indices transmit index information. Moreover, multiple-mode OFDM-IM (MM-OFDM-IM) \cite{MM-OFDM-IM} and its generalized version (GMM-OFDM-IM) \cite{generalized-MM-OFDM-IM} were proposed, where the permutations of multiple distinguishable modulation schemes serve IM purpose. These schemes fully utilize frequency resources by trading off energy efficiency and detection complexity. OFDM is a key constituent of many modern wireless systems in 4G \cite{LTE_standard}, 5G \cite{5gtutorial} and wireless local area network (WLAN) \cite{WLAN_standard}. Separated by a certain spacing, subcarriers overlap with each other without experiencing inter-carrier interference (ICI), which enables low-complexity receiver designs. However, OFDM is constrained by such spacing to maintain the orthogonality between subcarriers. Facing the ever-increasing bandwidth demand, non-orthogonal multi-carrier systems are gaining research interests. Spectrally efficient frequency division multiplexing (SEFDM) was developed enabling flexible subcarrier spacing \cite{SEFDM2003}. Despite enhanced spectral efficiency, SEFDM suffers from self-introduced ICI between subcarriers, which degrades the performance of linear detection techniques such as zero forcing (ZF) and minimum mean square error (MMSE). 
Hence, complicated receiver designs are required for signal detection such as maximum likelihood (ML) detection \cite{SEFDM_MMSE_ML}, turbo equalizers \cite{softdetector, TongyangTVT2017} and deep learning algorithms \cite{deeplearning}, which limit the practical implementation of SEFDM. To further improve spectral efficiency, IM was introduced to SEFDM \cite{SEFDMIM_CHINA, SEFDMIM_JAPAN,SEFDM-IM_jointChannel}, which effectively reduces the ICI effect by switching off some subcarriers. Nevertheless, residual ICI still leads to the requirement of complicated detection techniques. As always, ML detection yields optimal performance, but it is only computationally practical in small-size systems with a small number of subcarriers and low modulation cardinality. In \cite{SEFDMIM_CHINA}, IM was applied in SEFDM by activating only a subset of SEFDM subcarriers in each subblock, and subblock-based ML detection was implemented. To reduce the computational complexity from ML, log-likelihood ratio (LLR) detection has been investigated. A successive MMSE-LLR receiver was developed in \cite{SEFDMIM_JAPAN}, where an MMSE equalizer was deployed for ICI mitigation, followed by an LLR detector to estimate subcarriers' state. Channel coding was reported for several traditional SEFDM systems demonstrating advantages of low-density parity-check (LDPC) coding in simulation research \cite{Xinyue_6G_Coexistence} and in a practical experiment \cite{Hedaia_TMTT_2019}. For SEFDM-IM, LDPC was initially used in \cite{SEFDM-IM_jointChannel}, where improved bit error rate (BER) was achieved with the aid of turbo equalization after sufficient iterations. In addition, channel estimation methods were put forward in \cite{SEFDM-IM_jointChannel} for SEFDM-IM systems. However, we find that all previous research on SEFDM-IM deployed the same IM design as the classical OFDM-IM in \cite{OFDM_IM_Basar}, where all subcarriers are eligible for activation.
In this case, the influence of subcarrier location on the ICI level was ignored. Moreover, only BPSK and QPSK modulation were investigated. Against this background, this work aims to propose a novel SEFDM-IM system design with enhanced ICI mitigation from an IM perspective. First, novel index modulation activation patterns for SEFDM-IM are designed to deal with inter-subblock ICI. In this case, independent signal detection per subblock will be more robust even with higher levels of subcarrier spacing compression. Moreover, different modulation schemes are jointly investigated to optimize achievable spectral efficiency. IM-based systems are normally used for low spectral efficiency applications. The proposed SEFDM-IM systems improve further spectral efficiency, which would be obviously beneficial to Internet of things (IoT) applications such as narrowband IoT (NB-IoT) and enhanced NB-IoT (eNB-IoT) \cite{Tongyang_NB_IoT_2018}. Furthermore, the enhanced index would improve the performance for visible light communications (VLC) \cite{VLC_SEFDM2016}, integrated sensing and communications (ISAC) \cite{Tongyang_ISAC_2022} and waveform-defined security (WDS) \cite{Tongyang_JIOT_WDS_2021}. For details, the main contributions of this paper are summarised as the following. \begin{itemize} \item{ We propose a novel pattern design principle for SEFDM-IM, which limits the locations of activated subcarriers and reduces the ICI between neighbouring subblocks. The unavailability of certain subcarrier location is compensated by the design feature of allowing systems with a variable number of activated subcarriers per subblock. Following the proposed principle, we develop three novel subcarrier activation schemes denoted as SEFDM-IM-1, SEFDM-IM-2 and SEFDM-IM-3. } \item{ We consider jointly waveform formats, modulation schemes, levels of bandwidth compression, and the number of activated subcarriers for four achievable spectral efficiency scenarios. 
To figure out an optimal index-modulated system design in each case, the system performance of three proposed schemes in coded scenarios is compared with that of the traditional SEFDM-IM and classical OFDM-IM, in terms of BER and PAPR performance, computational complexity and achievable spectral efficiency. } \item{ This work verifies that our proposed SEFDM-IM systems can achieve better BER, lower PAPR and lower computational complexity than other available index-modulated systems when considering the same spectral efficiency. In detail, simulation results reveal that at low spectral efficiency of 0.75 bit/s/Hz, traditional SEFDM-IM outperforms OFDM-IM and our proposed SEFDM-IM systems in terms of BER and PAPR performance. When spectral efficiency is increased to 1, 1.1 and 1.25 bit/s/Hz, our proposed SEFDM-IM systems outperform OFDM-IM and traditional SEFDM-IM in both BER and PAPR performance. Results also reveal that our proposed SEFDM-IM systems achieve lower computational complexity over that of OFDM-IM and traditional SEFDM-IM for spectral efficiency of 1 bit/s/Hz and above. } \item{ This work potentially provides a design principle for optimal SEFDM-IM. Previous work has only considered modulation order up to QPSK. We extend the modulation order up to 16QAM and observe that SEFDM-IM systems with a smaller number of activated subcarriers achieve both improved BER and PAPR performance as well as reduced computational complexity. } \end{itemize} The rest of this paper is organized as follows. The system model of the coded SEFDM-IM is presented in Section \ref{section1}. In Section \ref{section2}, the proposed pattern design principle and three proposed SEFDM-IM schemes are detailed. We then provide possible system configurations for four selected values of spectral efficiency in Section \ref{section3}, and the simulation results are shown in Section \ref{section4}. The computational complexity analysis is given in Section \ref{section5}. 
Finally, Section \ref{section6} concludes this work. \section{System Model}\label{section1} \begin{figure*}[ht] \centering \includegraphics[width=0.9\textwidth]{01.eps} \caption{System diagram of the coded SEFDM-IM system.} \label{Fig:coded_system_disgram} \end{figure*} In this section, we provide an LDPC-coded SEFDM-IM system model operating in an additive white Gaussian noise (AWGN) channel. \subsection{Transmitter Design} The system block diagram is shown in Fig. \ref{Fig:coded_system_disgram}. A \(B\)-length information bit sequence denoted as \({\mathcal{B}}\) is input for the transmission of one SEFDM-IM symbol, which is split into an index bit sequence \({{\mathcal{B}}_1}\) and a data bit sequence \({{\mathcal{B}}_2}\). These sequences are encoded by LDPC encoder-1 and encoder-2 of the coding rate \(\mathcal{R}\) separately, yielding two corresponding coded bit sequences \({\mathcal{C}_1}\) and \({\mathcal{C}_2}\). The coded index bit sequence \({\mathcal{C}_1}\) is equally divided into \(G\) groups, and each group of \(L_1\) bits determines the activation pattern for an SEFDM-IM subblock of length \(K\), where \(K\) is the number of subcarriers in one subblock. The activation pattern for the \(g\)-th subblock is given by \begin{eqnarray}\label{eq:activation_pattern} I^{g}=\left \{ i_{1}^{g},i_{2}^{g},\cdot \cdot \cdot ,i_{K}^{g} \right \}, \end{eqnarray} where \(1\leq g\leq G\), and \(i_{\rho}^{g}\in\left \{ 0,1 \right \}\) for \(\rho =1, 2,..., K\). \(i_{\rho}^{g}=1\) denotes an activated subcarrier, and \(i_{\rho}^{g}=0\) denotes an unused subcarrier. In non-index-modulated systems, all \(K\) subcarriers are activated. However, in our proposed SEFDM-IM systems, only \(K_A\) out of \(K\) subcarriers are activated, and the throughput loss due to unused subcarriers is compensated by the index bits transmitted by activation patterns.
The maximum number of index bits transmitted by one subblock is \(L_1=\left \lfloor \log _2\binom{K}{K_{A}} \right \rfloor\), where \(\left \lfloor \cdot \right \rfloor\) denotes the floor function. Since \(\log _2\binom{K}{K_{A}} \) may not be an integer, we use only \(U=2^{L_{1}}\) valid activation patterns, the set of which is denoted as \(\mathcal{I}\). The coded data bit sequence \({\mathcal{C}_2}\) is equally partitioned into \(G\) groups, and each group of \(L_2=K_A\log _2M\) bits is mapped into \(K_A\) data symbols with conventional \(M\)-ary amplitude and/or phase modulation schemes. Therefore, the data symbol vector for the \(g\)-th subblock is given by \begin{eqnarray}\label{eq:data_symbol} {{P}^{g}=\left [ \zeta_{1}^{g}, \zeta_{2}^{g},\cdot \cdot \cdot, \zeta_{K_{A}}^{g}\right ]^{T},} \end{eqnarray} where \([\cdot]^{T}\) denotes transposition, \(\zeta_{\varrho}^{g}\in\Upsilon\) for \(\varrho =1, 2,..., K_A\), \({P}^{g}\in\Upsilon^{K_A}\), and \(\Upsilon\) denotes the \(M\)-ary symbol alphabet. To maintain the average transmission power per SEFDM-IM symbol at unity, \(\Upsilon\) is scaled by \(\sqrt{K/K_A}\). After this point, the \(g\)-th subblock at the output of the symbol mapper is constructed from \begin{eqnarray}\label{eq:transmitted_message} {{S}^{g}=\overline{\boldsymbol{\mathbf{I}}}_{K\times K_{A}}^{g}\cdot{P}^{g}, } \end{eqnarray} where \(\overline{\mathbf{I}}_{K\times K_{A}}^{g}\) is a \(K\times K_{A}\) activation matrix whose columns are drawn from a \(K\times K\) identity matrix for those indices corresponding to activated subcarriers. For example, the first subblock (i.e., \(g=1\)) with an activation pattern of \(I^{1}=\left \{ 1,0,1,0 \right\}\) is given by \begin{equation}\label{eq:tx_message_example} \begin{aligned} S^{1}&=\begin{bmatrix} 1&0 \\ 0&0 \\ 0&1 \\ 0&0 \end{bmatrix}\cdot \begin{bmatrix} \zeta_{1}^{1}\\ \zeta_{2}^{1} \end{bmatrix}\\ &=\left [ \zeta_{1}^{1}, 0,\zeta_{2}^{1},0 \right ]^{T}.
\end{aligned} \end{equation} Consequently, a total of \(L=L_1+L_2\) bits are conveyed by one subblock. An SEFDM-IM block is formed by \(G\) concatenated subblocks, expressed as \begin{equation}\label{eq:concatenation} \begin{aligned} {S}&={\left [ {S}^{1},{S}^{2},\cdot \cdot \cdot ,{S}^{G} \right ]}^{T}\\ &={\left [ s_{1}, s_{2},\cdot \cdot \cdot , s_{N} \right ]}^{T}, \end{aligned} \end{equation} where \(s_{k}\in\left \{ 0,\Upsilon \right \}\) for \(k=1,2,...,N\), and \(N=KG\) is the total number of subcarriers. A discrete $N$-subcarrier modulated SEFDM-IM signal is expressed as \begin{eqnarray}\label{eq:SEFDM_modulation} {x_{n}=\frac{1}{\sqrt{N}}\sum_{k=1}^{N}s_{k}e^{j2\pi\alpha\frac{kn}{N} },} \end{eqnarray} for \(n=1,2,...,N\), where \(1/\sqrt{N}\) is the power normalization factor. \(\alpha =\Delta fT\) is the bandwidth compression factor, where \(\Delta f\) is the subcarrier spacing and \(T\) is the SEFDM-IM symbol duration. \(\alpha<1\) indicates an SEFDM signal, which is converted to a traditional OFDM signal when \(\alpha =1\). SEFDM modulation can be performed by a bank of modulators operating on the non-orthogonal subcarrier frequencies, given by \begin{eqnarray}\label{eq:SEFDM_modulation_matrix} {{X}=\boldsymbol{\Phi}\cdot {S},} \end{eqnarray} where \({X}={\left [ x_{1},x_{2},\cdot \cdot \cdot ,x_{N} \right ]}^{T}\), and \(\boldsymbol{\Phi}\) is an \(N\times N\) carrier matrix whose elements are given by \(\boldsymbol{\Phi}_{k,n}=(1/{\sqrt{N}})e^{j2\pi\alpha\frac{kn}{N} }\). In practice, channel coding is considered leading to an achievable spectral efficiency of an SEFDM-IM system as \begin{eqnarray}\label{eq:coded_SE} {\textnormal{SE}=\frac{1}{\alpha}\frac{\mathcal{R}L}{K}.} \end{eqnarray} It is clear that the spectral efficiency is increased due to the bandwidth compression factor $\alpha$. 
The PAPR of the SEFDM-IM signal is calculated by \begin{eqnarray}\label{eq:papr} {\textnormal{PAPR}=\frac{\max \left \{ \left | x_{n} \right |^{2} \right \}}{E\left \{ \left | x_{n} \right |^{2} \right \}},} \end{eqnarray} for \(n=1,2,...,N\), where \(E\left \{ \cdot \right \}\) denotes the expectation operator. The numerator and the denominator in \eqref{eq:papr} yield the peak and average power of the signal, respectively, and both of them are dependant on the input symbol vector \(S\). Considering its random nature, PAPR is described statistically by the complementary cumulative distribution function (CCDF), given by \begin{eqnarray}\label{eq:ccdf} {\textnormal{CCDF}_{\Gamma}\left ( \gamma \right )=\textnormal{Pr}\left ( \Gamma> \gamma \right ),} \end{eqnarray} which calculates the probability of the PAPR of a transmitted symbol exceeding a given threshold \(\gamma\), and \(\Pr\left (\cdot\right )\) denotes the probability of an event. \subsection{Receiver Design} The signal is transmitted over an AWGN channel, and the received signal is given by \begin{eqnarray}\label{eq:received_signal} {{Y}={X}+{W},} \end{eqnarray} where the noise component \({W}={\left [ w_{1},w_{2},\cdot \cdot \cdot ,w_{N} \right ]}^{T}\) comprises \(N\) noise samples drawn from a complex Gaussian distribution \(\mathcal{CN}\left ( 0,N_{0} \right )\) and \(N_{0}\) is the noise variance. The received signal is projected onto the conjugate of non-orthogonal subcarriers, and the demodulated signal is \begin{equation}\label{eq:matched_filtering} \begin{aligned} {R}&=\boldsymbol{\Phi}^{H}\cdot {Y}\\ &=\boldsymbol{C}\cdot S+{W}_{\boldsymbol{\Phi}^{H}}, \end{aligned} \end{equation} where \([\cdot]^{H}\) denotes Hermitian transposition, \({W}_{\boldsymbol{\Phi}^{H}}\) corresponds to demodulated noise samples, and \(\boldsymbol{C}\) is the correlation matrix given by \(\boldsymbol{C}=\boldsymbol{\Phi}^{H}\boldsymbol{\Phi}\). 
The non-zero off-diagonal elements in the correlation matrix \(\boldsymbol{C}\) characterise the ICI caused by non-orthogonal subcarriers, which are given by \begin{eqnarray}\label{eq:correlation_matrix} {\boldsymbol{C}_{k,n}=\begin{cases} 1& ,k=n \\ \frac{1}{N}\frac{1-e^{j2\pi\alpha(k-n)}}{1-e^{\frac{j2\pi\alpha(k-n)}{N}}}& ,k\neq n \end{cases}.} \end{eqnarray} After de-modulation, soft information of coded bits is required for LDPC decoding. Due to ICI, the optimal detection needs to consider \(N\) subcarriers, resulting in high computational complexity for large \(N\). The ICI in SEFDM-IM can be divided into two parts: the intra-subblock ICI caused by subcarriers within the same subblock, and the inter-subblock ICI caused by subcarriers in different subblocks. This work proposes efficient solutions to mitigate inter-subblock ICI and enables subblock-based detection. Hence, the received SEFDM-IM symbol is split into \(G\) subblocks in the IM block splitter, and we assume that each demodulated subblock is independent. Next, the LLR calculator gives the natural logarithm of the ratio of probabilities of a 0 being transmitted versus a 1 being transmitted based on the received signal. We define \(\mathcal{I}_{l,0}\) and \(\mathcal{I}_{l,1}\) as the subsets of \(\mathcal{I}\) that transmit a 0 and a 1 as the \(l\)-th index bit for \(l=1,2,...,L_1\), respectively. Likewise, \(\Upsilon_{v,0}^{K_A}\) and \(\Upsilon_{v,1}^{K_A}\) denote the subsets of \(\Upsilon^{K_A}\) that transmit a 0 and a 1 as the \(v\)-th data bit for \(v=1,2,...,L_2\), respectively. 
The \(l\)-th coded index bit in the \(g\)-th subblock is denoted as \({\mathcal{C}_1}^{g}\left ( l \right )\), and its LLR value is given by \begin{eqnarray}\label{eq:LLR_index_bits} {\lambda \left ( {\mathcal{C}_1}^{g}\left ( l \right )|{R}^{g} \right )=\ln\frac{\Pr\left ( {\mathcal{C}_1}^{g}\left ( l \right )=0|{R}^{g} \right )}{\Pr\left ( {\mathcal{C}_1}^{g}\left ( l \right )=1|{R}^{g} \right )},} \end{eqnarray} where \({R}^{g}\) is the \(g\)-th \(K\times1\) vector of the demodulated signal. Assuming the same a priori probabilities for all valid activation patterns and data symbols, \eqref{eq:LLR_index_bits} can be expressed as \begin{eqnarray}\label{eq:LLR_index_2} {\lambda \left ( {\mathcal{C}_1}^{g}\left ( l \right )|{R}^{g} \right )=\ln\frac{\underset{{{I}}^{g}\in \mathcal{I}_{l,0}}\Sigma\;\underset{{{P}}^{g}\in \Upsilon^{K_A}}\Sigma \Pr\left ( {R}^{g}|{S}^{g} \right )}{\underset{{{I}}^{g}\in \mathcal{I}_{l,1}}\Sigma\;\underset{{{P}}^{g}\in \Upsilon^{K_A}}\Sigma \Pr\left ( {R}^{g}|{S}^{g} \right )},} \end{eqnarray} where the transmitted subblock \({S}^{g}\) is constructed from \({I}^{g}\) and \({P}^{g}\) via \eqref{eq:transmitted_message}. The likelihood function for the \(g\)-th subblock is formulated as \begin{equation}\label{eq:likelihood} \begin{aligned} \Pr\left ( {R}^{g}|{S}^{g} \right )&=\frac{e^{-\frac{1}{N_{0}}\left ( {R}^{g}-\boldsymbol{C}^{g}{S}^{g} \right )^{H}\left ( {R}^{g}-\boldsymbol{C}^{g}{S}^{g} \right )}}{\pi N_{0}}\\ &=\frac{e^{-\Psi\left ( {I}^{g},{P}^{g} \right )}}{\pi N_{0}}, \end{aligned} \end{equation} where \(\boldsymbol{C}^{g}\) is the \(g\)-th \(K\times K\) sub-matrix of the correlation matrix.
For brevity of presentation, \(\Psi\left ( {I}^{g},{P}^{g} \right )\) is defined as \begin{eqnarray}\label{eq:Psi_white} {\Psi\left ( {I}^{g},{P}^{g} \right )=\frac{1}{N_0}\left ( {R}^{g}-\boldsymbol{C}^{g}{S}^{g} \right )^{H}\left ( {R}^{g}-\boldsymbol{C}^{g}{S}^{g} \right ).} \end{eqnarray} Considering \eqref{eq:likelihood} and \eqref{eq:Psi_white}, the expression in \eqref{eq:LLR_index_2} is simplified to \begin{eqnarray}\label{eq:LLR_index_3} {\lambda \left ({\mathcal{C}_1}^{g}\left ( l \right )|{R}^{g} \right )=\ln\frac{\underset{{{I}}^{g}\in \mathcal{I}_{l,0}}\Sigma\;\underset{{{P}}^{g}\in \Upsilon^{K_A}}\Sigma e^{ -{\Psi}\left ( {I}^{g},{P}^{g} \right ) }}{\underset{{{I}}^{g}\in \mathcal{I}_{l,1}}\Sigma\;\underset{{{P}}^{g}\in \Upsilon^{K_A}}\Sigma e^{ -{\Psi}\left ( {I}^{g},{P}^{g} \right ) }}. } \end{eqnarray} Since the indices of activated subcarriers are not known at the receiver, all valid activation patterns need to be considered when calculating soft information for coded data bits. 
The \(v\)-th coded data bit in the \(g\)-th subblock is denoted as \({\mathcal{C}_2}^{g}\left ( v \right )\), and its LLR value is given by \begin{eqnarray}\label{eq:LLR_data_1} {\lambda \left ({\mathcal{C}_2}^{g}\left ( v \right )|{R}^{g} \right )=\ln\frac{\Pr\left ({\mathcal{C}_2}^{g}\left ( v \right )=0|{R}^{g} \right )}{\Pr\left ({\mathcal{C}_2}^{g}\left ( v \right )=1|{R}^{g} \right )}.} \end{eqnarray} Then, similar to the LLR calculations for coded index bits, \eqref{eq:LLR_data_1} simplifies to \begin{equation}\label{eq:LLR_data_2} \begin{aligned} \lambda \left ( {\mathcal{C}_2}^{g}\left ( v \right )|{R}^{g} \right )&=\ln\frac{\underset{{{I}}^{g}\in \mathcal{I}}\Sigma\;\underset{{{P}}^{g}\in \Upsilon_{v,0}^{K_A}}\Sigma \Pr\left ( {R}^{g}|{S}^{g} \right )}{\underset{{{I}}^{g}\in \mathcal{I}}\Sigma\;\underset{{{P}}^{g}\in \Upsilon_{v,1}^{K_A}}\Sigma \Pr\left ( {R}^{g}|{S}^{g} \right )}\\ &=\ln\frac{\underset{{{I}}^{g}\in \mathcal{I}}\Sigma\;\underset{{{P}}^{g}\in \Upsilon_{v,0}^{K_A}}\Sigma e^{ -{\Psi}\left ( {I}^{g},{P}^{g} \right ) }}{\underset{{{I}}^{g}\in \mathcal{I}}\Sigma\;\underset{{{P}}^{g}\in \Upsilon_{v,1}^{K_A}}\Sigma e^{ -{\Psi}\left ( {I}^{g},{P}^{g} \right ) }}. \end{aligned} \end{equation} A low-complexity LLR calculation method could alternatively be used \cite{low_complexity_llr_calculation}. However, this work mainly considers uplink NB-IoT applications; optimal LLR calculation is therefore acceptable, as most of the complicated signal processing takes place at the central point, which is not sensitive to signal processing complexity. After collecting the LLR values for the coded bits in each subblock, two LDPC decoders decode the index bit sequence and the data bit sequence separately. Then the two sequences of decoded bits \(\widehat{{{\mathcal{B}}_1}}\) and \(\widehat{{{\mathcal{B}}_2}}\) are combined into one output bit sequence \(\widehat{{{\mathcal{B}}}}\) in the bit combiner.
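To make the subblock-wise LLR computation concrete, the following Python sketch evaluates \eqref{eq:LLR_index_3} for the \(l\)-th index bit by exhaustive search over activation patterns and data symbols. It assumes the traditional \([K,K_A]=[4,1]\) lookup table with QPSK and, purely for brevity, an identity sub-correlation matrix \(\boldsymbol{C}^{g}\); all numerical parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

K, N0 = 4, 0.5
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
# Activation patterns of the traditional [K, K_A] = [4, 1] lookup table,
# listed in index-bit order 00, 01, 10, 11.
patterns = [(1, 0, 0, 0), (0, 0, 0, 1), (0, 1, 0, 0), (0, 0, 1, 0)]
index_bits = [(0, 0), (0, 1), (1, 0), (1, 1)]

def llr_index_bit(l, Rg, Cg):
    """LLR of the l-th index bit via exhaustive search over patterns and symbols."""
    num, den = [], []
    for bits, pat in zip(index_bits, patterns):
        for s in qpsk:                         # K_A = 1: one data symbol per subblock
            Sg = s * np.array(pat, dtype=complex)
            d = Rg - Cg @ Sg
            psi = np.real(d.conj() @ d) / N0   # Psi(I^g, P^g)
            (num if bits[l] == 0 else den).append(-psi)
    # log-sum-exp over both hypotheses, as in the ratio of summed likelihoods
    return np.logaddexp.reduce(np.array(num)) - np.logaddexp.reduce(np.array(den))

# toy received subblock: pattern-1 carrying a known symbol plus AWGN
Cg = np.eye(K, dtype=complex)
Sg_true = qpsk[0] * np.array(patterns[0], dtype=complex)
Rg = Sg_true + np.sqrt(N0 / 2) * (rng.standard_normal(K) + 1j * rng.standard_normal(K))
llr0 = llr_index_bit(0, Rg, Cg)
```

In a full receiver the same loop runs with the actual \(K\times K\) sub-matrix \(\boldsymbol{C}^{g}\), and \eqref{eq:LLR_data_2} is obtained analogously by partitioning over data-bit subsets instead of index-bit subsets.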
\section{Proposed Subcarrier Pattern Designs}\label{section2} We define subcarrier patterns as the combination of activation patterns and modulation schemes. In this section, we present both traditional and proposed subcarrier pattern designs for SEFDM-IM and demonstrate the superiority of our proposals. \subsection{Challenges for Existing IM Systems} The traditional pattern design principle deployed in existing work \cite{SEFDMIM_CHINA,SEFDMIM_JAPAN,SEFDM-IM_jointChannel} has three characteristics: first, the number of activated subcarriers per subblock is fixed; second, every subcarrier in a subblock is a candidate for activation; and lastly, all activated subcarriers are modulated with the same modulation scheme. In other words, both \(K_A\) and \(M\) have a fixed value. An example lookup table for traditional subcarrier patterns in SEFDM-IM with \([K,K_A]=[4,1]\) is given in Table \ref{tab:41_traditional_SEFDMIM}, where the values of \(K\) and \(K_A\) are specified in brackets for notational convenience. In each of the four valid patterns, one out of four subcarriers is activated and modulated with a data symbol \(\mathcal{S}_A^{\left ( 1 \right )}\), where the subscript \(A\) denotes the modulation cardinality \(M_A\) (e.g., \(M_A=4\) for QPSK) and the superscript in brackets corresponds to the index of the symbol in the data symbol vector \({{P}}^{g}\). In this case, \({{P}}^{g}\) has only one symbol since \(K_A=1\). According to \eqref{eq:correlation_matrix}, the ICI level increases as the frequency spacing between two activated subcarriers decreases. Severe inter-subblock ICI is introduced when the last subcarrier in one subblock and the first subcarrier in the following subblock are both activated. In this case, subblock-based detection that ignores inter-subblock ICI for computational efficiency suffers from performance degradation.
For SEFDM-IM systems with more activated subcarriers, i.e., lower values of \(K/K_A\), the probability of having adjacently-located activated subcarriers in neighbouring subblocks is increased, leading to a higher level of inter-subblock ICI impairment. \subsection{Proposed Designs in this Work} \begin{table}[] \centering \caption{\\Subcarrier pattern lookup table for traditional SEFDM-IM with \([K,K_A ]= [4,1 ]\).} \begin{tabular}{|c|c|c|c|} \hline&&&\\[-0.65em] \textit{\textbf{Pattern}} & \textit{\textbf{Index bits}} & \textit{\textbf{Activation patterns}} & \textit{\textbf{Subcarrier patterns}} \\ [0.5ex] \hline\hline &&&\\[-0.65em] 1 & {[}0,\;0{]} & \{1,\;0,\;0,\;0\} & \(\left [ \mathcal{S}_A^{\left ( 1 \right )},\;0,\;0,\;0 \right ]^{T}\) \\ [1.1ex] \hline&&&\\[-0.65em] 2 & {[}0,\;1{]} & \{0,\;0,\;0,\;1\} & \(\left [ 0,\;0,\;0,\;\mathcal{S}_A^{\left ( 1 \right )} \right ]^{T}\) \\ [1.1ex] \hline&&&\\[-0.65em] 3 & {[}1,\;0{]} & \{0,\;1,\;0,\;0\} & \(\left [ 0,\;\mathcal{S}_A^{\left ( 1 \right )},\;0,\;0 \right ]^{T}\) \\ [1.1ex] \hline&&&\\[-0.65em] 4 & {[}1,\;1{]} & \{0,\;0,\;1,\;0\} & \(\left [ 0,\;0,\;\mathcal{S}_A^{\left ( 1 \right )},\;0 \right ]^{T}\) \\ [1.1ex] \hline \end{tabular} \label{tab:41_traditional_SEFDMIM} \end{table} Against the above background, we propose a novel pattern design principle for SEFDM-IM, where the last subcarrier in each subblock is always left unused. As a result, activated subcarriers in neighbouring subblocks are separated by at least one unused subcarrier, leading to reduced inter-subblock ICI. A similar method, termed multiband SEFDM, was shown to be efficacious in improving and simplifying SEFDM detection in \cite{TongyangCSNDSPBSEFDM}. An alternative approach to mitigating inter-subblock interference is to shape the spectrum by suppressing out-of-band power emissions, which was explored in \cite{psd}.
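The benefit of reserving the last subcarrier can be quantified with a short enumeration. Assuming equiprobable activation patterns, the Python sketch below computes the probability that the last subcarrier of one subblock and the first subcarrier of the next are simultaneously active, for the traditional \([4,1]\) table and for the proposed \([4,(1,2)]\) design of Table \ref{tab:412_SEFDMIM}.

```python
from itertools import product

# Activation patterns (1 = active) taken from the two lookup tables:
# traditional [4,1] and the proposed [4,(1,2)] design.
traditional = [(1, 0, 0, 0), (0, 0, 0, 1), (0, 1, 0, 0), (0, 0, 1, 0)]
proposed    = [(1, 0, 0, 0), (1, 0, 1, 0), (0, 1, 0, 0), (0, 0, 1, 0)]

def p_boundary_clash(patterns):
    """Probability that the last subcarrier of subblock g and the first of
    subblock g+1 are both active, assuming equiprobable patterns."""
    pairs = list(product(patterns, repeat=2))
    clashes = sum(a[-1] == 1 and b[0] == 1 for a, b in pairs)
    return clashes / len(pairs)
```

For the traditional table the clash probability is \(1/16\) per subblock boundary, whereas the proposed design drives it to zero because the last subcarrier is never activated.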
The method proposed in this work is more flexible since the number of activated subcarriers varies across patterns, leading to the unique activation patterns in Table \ref{tab:412_SEFDMIM}, where pattern-1, pattern-3 and pattern-4 have the same subcarrier patterns as those of the traditional SEFDM-IM with \([K,K_A]=[4,1]\). Advantageously, pattern-2 simultaneously activates two subcarriers, the first and the third, whilst keeping the last subcarrier space void of energy, i.e., \(K_A=2\). Consequently, index bits are conveyed by both the number of activated subcarriers and their locations. To avoid error propagation due to incorrect detection of activation patterns, \(L\) is fixed regardless of the \(K_A\) value. More specifically, the number of data bits transmitted in pattern-2 should be the same as that of the other patterns. The extra activated subcarrier provides a new degree of freedom to alter the subcarrier pattern in pattern-2. For research completeness and convincing comparisons, three possible realizations of subcarrier pattern designs are presented as follows. It should be noted that \(M_A\) is the modulation cardinality used in pattern-1, pattern-3 and pattern-4.
\begin{itemize} \item {\emph{Proposed SEFDM-IM-1}: In pattern-2, the first subcarrier is modulated with a pre-defined signaling symbol \( \mathcal{S}_A^{\left ( * \right )}\) known at the receiver, and the third subcarrier is modulated with a data symbol of the same modulation cardinality \(M_A\) mapped from data bits.} \item{\emph{Proposed SEFDM-IM-2}: Motivated by repetition coding, in pattern-2, the first and the third subcarriers are modulated with the same data symbol mapped from data bits.} \item{\emph{Proposed SEFDM-IM-3}: In pattern-2, the first and the third subcarriers are modulated with two data symbols \( \mathcal{S}_B^{\left ( 1 \right )}\) and \( \mathcal{S}_C^{\left ( 1 \right )}\) of the modulation cardinalities \(M_B\) and \(M_C\), which satisfy the condition \begin{eqnarray}\label{eq:412_condition1} {M_{B}M_{C}=M_{A}}, \end{eqnarray} ensuring that \(L_2\) data bits are conveyed by pattern-2. In addition, to help the receiver distinguish between pattern-2 and the other patterns, \(M_A\), \(M_B\) and \(M_C\) satisfy \begin{eqnarray}\label{eq:412_condition2} {\left ( M_{B}\neq M_{A} \right )\lor\left ( M_{C}\neq M_{A} \right )= 1}, \end{eqnarray} where \(\lor\) stands for the logical OR operator. \eqref{eq:412_condition2} indicates that at least one modulation cardinality used in pattern-2 is different from \(M_A\).} \end{itemize} When the traditional subcarrier pattern design is deployed, \(M\) and \(K_A\) have a fixed value regardless of the selected activation pattern, and hence the data symbol vector \({{P}}^{g}\) is independent of the activation pattern. By contrast, for the proposed subcarrier pattern design, index bits determine the activation pattern for each subblock, which in turn determines the values of \(M\) and \(K_A\) for constructing \({{P}}^{g}\). \begin{table}[] \centering \caption{\\Subcarrier pattern lookup table for proposed SEFDM-IM with \([K,K_A]=[4,(1,2)]\).
} \begin{tabular*}{\columnwidth}{ @{\extracolsep{\fill}} |c|c|c|c|} \hline&&&\\[-0.65em] \textit{\textbf{Pattern}} & \textit{\textbf{\begin{tabular}[c]{@{}c@{}}Index\\ bits\end{tabular}}} & \textit{\textbf{\begin{tabular}[c]{@{}c@{}}Activation\\ patterns\end{tabular}}} & \textit{\textbf{Subcarrier patterns}} \\ [0.5ex] \hline\hline &&&\\[-0.65em] 1 & {[}0,\;0{]} & \{1,\;0,\;0,\;0\} & \(\left [ \mathcal{S}_A^{\left ( 1 \right )},\;0,\;0,\;0 \right ]^{T}\) \\ [1.1ex] \hline&&&\\[-0.65em] 2 & {[}0,\;1{]} & \{1,\;0,\;1,\;0\} & \begin{tabular}[c]{@{}c@{}} SEFDM-IM-1:\(\left [ \mathcal{S}_A^{\left ( * \right )},\;0,\;\mathcal{S}_A^{\left ( 1 \right )},\;0 \right ]^{T}\)\\ [0.8ex] SEFDM-IM-2:\(\left [ \mathcal{S}_A^{\left ( 1 \right )},\;0,\;\mathcal{S}_A^{\left ( 1 \right )},\;0 \right ]^{T}\)\\ [0.8ex] SEFDM-IM-3:\(\left [ \mathcal{S}_B^{\left ( 1 \right )},\;0,\;\mathcal{S}_C^{\left ( 1 \right )},\;0 \right ]^{T}\)\\ [1ex] \end{tabular} \\ [1.1ex] \hline&&&\\[-0.65em] 3 & {[}1,\;0{]} & \{0,\;1,\;0,\;0\} & \(\left [ 0,\;\mathcal{S}_A^{\left ( 1 \right )},\;0,\;0 \right ]^{T}\) \\ [1.1ex] \hline&&&\\[-0.65em] 4 & {[}1,\;1{]} & \{0,\;0,\;1,\;0\} & \(\left [ 0,\;0,\;\mathcal{S}_A^{\left ( 1 \right )},\;0 \right ]^{T}\) \\ [1.1ex] \hline \end{tabular*} \label{tab:412_SEFDMIM} \end{table} We also propose subcarrier patterns for a larger number of activated subcarriers, as shown in Table \ref{tab:423_SEFDMIM}. Explicitly, two out of four subcarriers are activated and modulated with two \(M_A\)-ary data symbols for pattern-1, pattern-3 and pattern-4. Similarly, we propose three subcarrier patterns for pattern-2, where three subcarriers are activated simultaneously. The extra activated subcarrier is either modulated with a pre-defined signaling symbol \(\mathcal{S}_A^{\left ( * \right )}\) for SEFDM-IM-1 or the repetition of the first data symbol \(\mathcal{S}_A^{\left ( 1 \right )}\) for SEFDM-IM-2.
For SEFDM-IM-3, three modulation cardinalities \(M_{B}\), \(M_{C}\) and \(M_{D}\) are deployed, and they satisfy the following two conditions \begin{eqnarray}\label{eq:condition1} {M_{B}M_{C}M_{D}=\left ( M_{A}\right )^{2}}, \end{eqnarray} and \begin{eqnarray}\label{eq:condition2} {\left ( M_{B}\neq M_{A} \right )\lor\left ( M_{C}\neq M_{A} \right )\lor\left ( M_{D}\neq M_{A} \right )= 1}. \end{eqnarray} These conditions ensure a constant number of bits transmitted in pattern-2 and at least one distinct modulation cardinality used in pattern-2. The novel subcarrier pattern designs above are proposed from the perspective of activation patterns and could also be applied to other non-orthogonal systems such as multi-carrier faster-than-Nyquist (MFTN) \cite{MFTN}. Given that modulation schemes have an impact on index-modulated system performance, we also consider new pattern designs from the perspective of modulation schemes. While existing work only considers BPSK and QPSK modulation, we explore higher modulation cardinalities up to 16QAM, which can be combined with both traditional and proposed pattern design principles.
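Conditions \eqref{eq:condition1}--\eqref{eq:condition2}, and their \(K_A=2\) counterparts \eqref{eq:412_condition1}--\eqref{eq:412_condition2}, can be enumerated programmatically. The Python sketch below lists the admissible cardinality tuples for \(M_A=4\); the candidate cardinality set \(\{2,4,8,16\}\) (BPSK up to 16QAM) is our assumption.

```python
from itertools import product

M_A = 4                                   # cardinality used in patterns 1, 3 and 4 (QPSK)
cards = [2, 4, 8, 16]                     # assumed candidate cardinalities

# K_A = 2 case: M_B * M_C = M_A, with at least one cardinality != M_A
valid2 = [(b, c) for b, c in product(cards, repeat=2)
          if b * c == M_A and (b != M_A or c != M_A)]

# K_A = 3 case: M_B * M_C * M_D = M_A**2, with at least one cardinality != M_A
valid3 = [(b, c, d) for b, c, d in product(cards, repeat=3)
          if b * c * d == M_A ** 2 and (b != M_A or c != M_A or d != M_A)]
```

For \(M_A=4\) the only \(K_A=2\) option is \((M_B,M_C)=(2,2)\), i.e., two BPSK symbols, and every \(K_A=3\) option is a permutation of \((2,2,4)\), matching the SEFDM-IM-3 configurations used in the next section.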
\begin{table}[] \centering \caption{\\Subcarrier pattern lookup table for proposed SEFDM-IM with \([K,K_A]=[4,(2,3)]\).} \begin{tabular}{|c|c|c|c|} \hline&&&\\[-0.65em] \textit{\textbf{Pattern}} & \textit{\textbf{\begin{tabular}[c]{@{}c@{}}Index\\ bits\end{tabular}}} & \textit{\textbf{\begin{tabular}[c]{@{}c@{}}Activation\\ patterns\end{tabular}}} & \textit{\textbf{Subcarrier patterns}} \\ [0.5ex] \hline\hline &&&\\[-0.65em] 1 & {[}0,\;0{]} & \{0,\;1,\;1,\;0\} & \(\left [0,\; \mathcal{S}_A^{\left ( 1 \right )},\;\mathcal{S}_A^{\left ( 2 \right )},\;0 \right ]^{T}\) \\ [1.1ex] \hline&&&\\[-0.65em] 2 & {[}0,\;1{]} & \{1,\;1,\;1,\;0\} & \begin{tabular}[c]{@{}c@{}} SEFDM-IM-1:\(\left [ \mathcal{S}_A^{\left ( * \right )},\mathcal{S}_A^{\left ( 1 \right )},\mathcal{S}_A^{\left ( 2 \right )},0 \right ]^{T}\)\\ [0.8ex] SEFDM-IM-2:\(\left [ \mathcal{S}_A^{\left ( 1 \right )},\mathcal{S}_A^{\left ( 1 \right )},\mathcal{S}_A^{\left ( 2 \right )},0 \right ]^{T}\)\\ [0.8ex] SEFDM-IM-3:\(\left [ \mathcal{S}_B^{\left ( 1 \right )},\mathcal{S}_C^{\left ( 1 \right )},\mathcal{S}_D^{\left ( 1 \right )},0 \right ]^{T}\)\\ [1ex] \end{tabular} \\ [0.5ex] \hline&&&\\[-0.65em] 3 & {[}1,\;0{]} & \{1,\;0,\;1,\;0\} & \(\left [ \mathcal{S}_A^{\left ( 1 \right )},\;0,\;\mathcal{S}_A^{\left ( 2 \right )},\;0 \right ]^{T}\) \\ [1.1ex] \hline&&&\\[-0.65em] 4 & {[}1,\;1{]} & \{1,\;1,\;0,\;0\} & \(\left [ \mathcal{S}_A^{\left ( 1 \right )},\;\mathcal{S}_A^{\left ( 2 \right )},\;0,\;0 \right ]^{T}\) \\ [1.1ex] \hline \end{tabular} \label{tab:423_SEFDMIM} \end{table} \section{Pattern Designs for Selected Spectral Efficiency}\label{section3} In this section, we propose the subcarrier pattern designs for four selected candidates of spectral efficiency, namely, \(\textnormal{SE}=1.5,\;2,\;2.2,\;2.5\) bit/s/Hz. Note that these spectral efficiency values do not account for the coding rate, i.e., \(\mathcal{R}=1\).
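The spectral-efficiency values in this section are consistent with counting \(L_1+L_2\) bits per subblock over a bandwidth of \(K\alpha\) normalised subcarrier slots, i.e., \(\textnormal{SE}=(L_1+L_2)/(K\alpha)\); this reading is our inference from the listed configurations rather than an explicit formula of the system model. A Python sketch:

```python
import math

def se_bits_per_s_hz(K, K_A, M_A, alpha, n_patterns=4):
    """Uncoded spectral efficiency of an SEFDM-IM configuration, assuming each
    subblock of K subcarriers occupies K * alpha normalised bandwidth units."""
    L1 = int(math.log2(n_patterns))    # index bits per subblock
    L2 = K_A * int(math.log2(M_A))     # data bits per subblock (fixed across patterns)
    return (L1 + L2) / (K * alpha)

# examples matching the configurations listed in this section:
# [4,1], 8QAM, alpha = 0.625 -> 2.0 bit/s/Hz
# [4,3], QPSK, alpha = 0.8   -> 2.5 bit/s/Hz
```

The same function reproduces the remaining candidates, e.g., \([4,1]\) with QPSK and \(\alpha=0.67\) gives approximately 1.5 bit/s/Hz.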
The spectral efficiency is related to the number of activated subcarriers and the order of modulation schemes. Since these two factors vary in discrete steps, the spectral efficiency values we investigate are discrete rather than continuous. In each case, different activation patterns and modulation schemes are jointly considered. System configurations are specified by the activation parameters \([K,K_A]\), modulation schemes and the values of \(\alpha\), followed by a detailed description of each subcarrier pattern. For brevity, traditional SEFDM-IM schemes that have been deployed in previous research are denoted as "SEFDM-IM-Tra", where "Tra" is the abbreviation for "Traditional". In this paper, we only consider \(K=4\), which yields \(U=4\) and \(L_1=2\). The proposed subcarrier pattern designs can be extended to higher values of \(K\) as long as the last subcarrier in each subblock is unused. The maximum modulation format we investigate is 16QAM. For IM systems with even-higher-order modulation schemes, pursuing index-domain gains by intentionally deactivating subcarriers causes significant spectral efficiency loss. Moreover, this work targets NB-IoT, where the current 3GPP standard defines the maximum modulation format as QPSK. In future 3GPP standards, the modulation format could be extended to 16QAM. \subsection{Spectral efficiency of 1.5 bit/s/Hz} \begin{itemize} \item{ SEFDM-IM-Tra, \([K,K_A]=[4,1]\), QPSK and \(\alpha=0.67\): One out of four subcarriers is activated and modulated with a QPSK data symbol, i.e., \(M_A=4\).} \item{ SEFDM-IM-1, \([K,K_A]=[4,(1,2)]\), QPSK and \(\alpha=0.67\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with a QPSK data symbol.
In pattern-2, one subcarrier is modulated with a QPSK data symbol and the other is modulated with a pre-defined QPSK signalling symbol known at the receiver.} \item{ SEFDM-IM-2, \([K,K_A]=[4,(1,2)]\), QPSK and \(\alpha=0.67\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with a QPSK data symbol. In pattern-2, two subcarriers are modulated with the same QPSK data symbol.} \item{ SEFDM-IM-3, \([K,K_A]=[4,(1,2)]\), QPSK and \(\alpha=0.67\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with a QPSK data symbol. In pattern-2, two subcarriers are modulated with two BPSK data symbols, i.e., \(M_B=M_C=2\).} \end{itemize} \subsection{Spectral efficiency of 2 bit/s/Hz} \begin{itemize} \item{ SEFDM-IM-M1, \([K,K_A]=[4,1]\), 8QAM and \(\alpha=0.625\): This is the scheme proposed from a modulation perspective. One out of four subcarriers is activated and modulated with an 8QAM data symbol, i.e., \(M_A=8\).} \item{ SEFDM-IM-1, \([K,K_A]=[4,(1,2)]\), 8QAM and \(\alpha=0.625\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with an 8QAM data symbol. In pattern-2, one subcarrier is modulated with an 8QAM data symbol and the other is modulated with a pre-defined 8QAM signalling symbol known at the receiver.} \item{ SEFDM-IM-2, \([K,K_A]=[4,(1,2)]\), 8QAM and \(\alpha=0.625\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with an 8QAM data symbol. In pattern-2, two subcarriers are modulated with the same 8QAM data symbol.} \item{ SEFDM-IM-3, \([K,K_A]=[4,(1,2)]\), 8QAM and \(\alpha=0.625\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with an 8QAM data symbol.
In pattern-2, two subcarriers are modulated with a QPSK data symbol and a BPSK data symbol, i.e., \(M_B=4\) and \(M_C=2\).} \item{ SEFDM-IM-Tra, \([K,K_A]=[4,2]\), QPSK and \(\alpha=0.75\): Two out of four subcarriers are activated and modulated with two QPSK data symbols, i.e., \(M_A=4\).} \item{ SEFDM-IM-1, \([K,K_A]=[4,(2,3)]\), QPSK and \(\alpha=0.75\): In pattern-1, pattern-3 and pattern-4, two activated subcarriers are modulated with two QPSK data symbols. In pattern-2, the second and the third subcarriers are modulated with QPSK data symbols, while the first subcarrier is modulated with a pre-defined QPSK signalling symbol.} \item{ SEFDM-IM-2, \([K,K_A]=[4,(2,3)]\), QPSK and \(\alpha=0.75\): In pattern-1, pattern-3 and pattern-4, two activated subcarriers are modulated with two QPSK data symbols. In pattern-2, the second and the third subcarriers are modulated with QPSK data symbols, while the first subcarrier is modulated with the same data symbol as the second subcarrier.} \item{ SEFDM-IM-3, \([K,K_A]=[4,(2,3)]\), QPSK and \(\alpha=0.75\): In pattern-1, pattern-3 and pattern-4, two activated subcarriers are modulated with two QPSK data symbols.
In pattern-2, the first and the third subcarriers are modulated with BPSK data symbols, i.e., \(M_B=M_D=2\), while the second subcarrier is modulated with a QPSK data symbol, i.e., \(M_C=4\).} \end{itemize} \begin{table}[] \centering \caption{\\Subcarrier pattern lookup table for traditional SEFDM-IM with \([K,K_A]=[4,3]\).} \begin{tabular}{|c|c|c|c|} \hline&&&\\[-0.65em] \textit{\textbf{Pattern}} & \textit{\textbf{\begin{tabular}[c]{@{}c@{}}Index\\ bits\end{tabular}}} & \textit{\textbf{Activation patterns}} & \textit{\textbf{Subcarrier patterns}} \\ [0.5ex] \hline\hline &&&\\[-0.65em] 1 & {[}0,\;0{]} & \{0,\;1,\;1,\;1\} & \(\left [ 0,\;\mathcal{S}_A^{\left ( 1 \right )},\;\mathcal{S}_A^{\left ( 2 \right )},\;\mathcal{S}_A^{\left ( 3 \right )} \right ]^{T}\) \\ [1.1ex] \hline&&&\\[-0.65em] 2 & {[}0,\;1{]} & \{1,\;1,\;1,\;0\} & \(\left [ \mathcal{S}_A^{\left ( 1 \right )},\;\mathcal{S}_A^{\left ( 2 \right )},\;\mathcal{S}_A^{\left ( 3 \right )},\;0 \right ]^{T}\) \\ [1.1ex] \hline&&&\\[-0.65em] 3 & {[}1,\;0{]} & \{1,\;0,\;1,\;1\} &\(\left [ \mathcal{S}_A^{\left ( 1 \right )},\;0,\;\mathcal{S}_A^{\left ( 2 \right )},\;\mathcal{S}_A^{\left ( 3 \right )} \right ]^{T}\) \\ [1.1ex] \hline&&&\\[-0.65em] 4 & {[}1,\;1{]} & \{1,\;1,\;0,\;1\} & \(\left [ \mathcal{S}_A^{\left ( 1 \right )},\;\mathcal{S}_A^{\left ( 2 \right )},\;0,\;\mathcal{S}_A^{\left ( 3 \right )} \right ]^{T}\) \\ [1.1ex] \hline \end{tabular} \label{tab:43_SEFDMIM} \end{table} \subsection{Spectral efficiency of 2.2 bit/s/Hz} \begin{itemize} \item{ SEFDM-IM-Tra, \([K,K_A]=[4,3]\), QPSK and \(\alpha=0.9\): Subcarrier patterns are given in Table \ref{tab:43_SEFDMIM}.
Three out of four subcarriers are activated and modulated with three QPSK data symbols, i.e., \(M_A=4\).} \item{ SEFDM-IM-M2, \([K,K_A]=[4,1]\), 16QAM and \(\alpha=0.675\): One out of four subcarriers is activated and modulated with a 16QAM data symbol, i.e., \(M_A=16\).} \item{ SEFDM-IM-1, \([K,K_A]=[4,(1,2)]\), 16QAM and \(\alpha=0.675\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with a 16QAM data symbol. In pattern-2, one subcarrier is modulated with a 16QAM data symbol and the other is modulated with a pre-defined 16QAM signalling symbol known at the receiver.} \item{ SEFDM-IM-2, \([K,K_A]=[4,(1,2)]\), 16QAM and \(\alpha=0.675\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with a 16QAM data symbol. In pattern-2, two subcarriers are modulated with the same 16QAM data symbol.} \item{ SEFDM-IM-3, \([K,K_A]=[4,(1,2)]\), 16QAM and \(\alpha=0.675\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with a 16QAM data symbol. In pattern-2, two subcarriers are modulated with two QPSK data symbols, i.e., \(M_B=M_C=4\).} \end{itemize} \subsection{Spectral efficiency of 2.5 bit/s/Hz} \begin{itemize} \item{ SEFDM-IM-Tra, \([K,K_A]=[4,3]\), QPSK and \(\alpha=0.8\): Three out of four subcarriers are activated and modulated with three QPSK data symbols, i.e., \(M_A=4\).} \item{ SEFDM-IM-M2, \([K,K_A]=[4,1]\), 16QAM and \(\alpha=0.6\): One out of four subcarriers is activated and modulated with a 16QAM data symbol, i.e., \(M_A=16\).} \item{ SEFDM-IM-1, \([K,K_A]=[4,(1,2)]\), 16QAM and \(\alpha=0.6\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with a 16QAM data symbol. 
In pattern-2, one subcarrier is modulated with a 16QAM data symbol and the other is modulated with a pre-defined 16QAM signalling symbol known at the receiver.} \item{ SEFDM-IM-2, \([K,K_A]=[4,(1,2)]\), 16QAM and \(\alpha=0.6\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with a 16QAM data symbol. In pattern-2, two subcarriers are modulated with the same 16QAM data symbol.} \item{ SEFDM-IM-3, \([K,K_A]=[4,(1,2)]\), 16QAM and \(\alpha=0.6\): In pattern-1, pattern-3 and pattern-4, the single activated subcarrier is modulated with a 16QAM data symbol. In pattern-2, two subcarriers are modulated with two QPSK data symbols, i.e., \(M_B=M_C=4\).} \end{itemize} \section{Simulation Results and Discussions}\label{section4} In this section, we comprehensively investigate BER and PAPR performance for the proposed SEFDM-IM systems. All investigated SEFDM-IM systems are described in detail in Section \ref{section3}. The results of classical OFDM-IM and traditional SEFDM-IM-Tra systems are provided as benchmarks. The performance comparisons of classical OFDM-IM and traditional SEFDM-IM-Tra with basic OFDM have been comprehensively investigated in existing research such as \cite{OFDM_IM_Basar} and \cite{SEFDMIM_JAPAN}. This work aims to improve pattern design in the IM domain; therefore, we focus our performance comparisons on IM-based schemes. For convenience, the figure legend is specified by \([K,K_A,\textnormal{modulation scheme}, \alpha]\), where the modulation scheme refers to that used in pattern-1, pattern-3 and pattern-4, i.e., \(M_A\). The modulation schemes used in pattern-2 are not specified in the legend; they can be found in Section \ref{section3}. Since we deploy two independent LDPC decoders, we can calculate BER from index bits and data bits separately.
By counting the number of differences between the input index bits \({{\mathcal{B}}_1}\) and the output index bits \(\widehat{{{\mathcal{B}}_1}}\), the index BER is obtained. Similarly, the data BER is obtained by comparing \({{\mathcal{B}}_2}\) and \(\widehat{{{\mathcal{B}}_2}}\). The average BER is obtained by comparing \({{\mathcal{B}}}\) and \(\widehat{{{\mathcal{B}}}}\), and it is referred to as BER unless otherwise specified. In all simulations, a coding rate of \(\mathcal{R}=1/2\) is used for LDPC encoding, and both LDPC decoders deploy the belief propagation algorithm with 50 decoding iterations. In addition, we assume an AWGN channel and the following system parameters: \(N=12\) and \(K=4\). \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{02.eps} \end{center} \caption{Error performance of coded SEFDM-IM and OFDM-IM systems with the spectral efficiency of 0.75 bit/s/Hz.} \label{Fig:coded_SE15} \end{figure} \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{03.eps} \end{center} \caption{Error performance of coded SEFDM-IM systems with the spectral efficiency of 0.75 bit/s/Hz in terms of index BER, data BER and average BER.} \label{Fig:sep_ber_coded_SE15_all_SEFDMIM} \end{figure} \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{04.eps} \end{center} \caption{Error performance of coded SEFDM-IM and OFDM-IM systems with the spectral efficiency of 0.75 bit/s/Hz in terms of index BER, data BER and average BER.} \label{Fig:sep_ber_coded_SE15} \end{figure} \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{05.eps} \end{center} \caption{CCDF of the PAPR of SEFDM-IM and OFDM-IM systems with the spectral efficiency of 0.75 bit/s/Hz.} \label{Fig:papr_SE15} \end{figure} We first consider SEFDM-IM systems with the spectral efficiency of 1.5 bit/s/Hz, which turns into 0.75 bit/s/Hz after LDPC coding is applied. 
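The three BER figures are obtained by straightforward bit comparisons; a minimal Python sketch with toy bit vectors (not simulation output) illustrates the bookkeeping:

```python
import numpy as np

def ber(tx_bits, rx_bits):
    """Bit error rate: fraction of positions where decoded bits differ."""
    tx, rx = np.asarray(tx_bits), np.asarray(rx_bits)
    return float(np.mean(tx != rx))

# index, data and average BER over one (toy) frame
b1_tx, b1_rx = np.array([0, 1, 1, 0]), np.array([0, 1, 0, 0])              # index bits
b2_tx, b2_rx = np.array([1, 1, 0, 0, 1, 0]), np.array([1, 1, 0, 0, 1, 0])  # data bits
index_ber = ber(b1_tx, b1_rx)                              # 1 error in 4 bits
data_ber  = ber(b2_tx, b2_rx)                              # no errors
avg_ber   = ber(np.r_[b1_tx, b2_tx], np.r_[b1_rx, b2_rx])  # 1 error in 10 bits
```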
For fair comparisons, a QPSK modulated OFDM-IM system with \([K,K_A]=[4,2]\) is considered. In Fig. \ref{Fig:coded_SE15}, the BER performance of investigated systems is plotted as a function of \(E_{b}/N_{0}\), and the traditional SEFDM-IM-Tra system with \([K,K_A]=[4,1]\) achieves the best BER performance among all coded systems. As shown in Fig. \ref{Fig:sep_ber_coded_SE15_all_SEFDMIM}, the data BER curves of the four SEFDM-IM systems exhibit a cliff in the low \(E_{b}/N_{0}\) regime, which results in the sudden drop in average BER. We find that SEFDM-IM-2 has the lowest data BER because of its inherent repetition coding in pattern-2. In addition, it is observed that the average error performance of the SEFDM-IM systems with QPSK modulation is dominated by the decision errors in index bits, where the error cliff appears. Since the transmission power of activated subcarriers is proportional to the value of \(K/K_A\) \cite{SIM_will_it_work}, the traditional SEFDM-IM-Tra system with a higher \(K/K_A\) value leads to an increased minimum Euclidean distance between different subcarrier patterns. Moreover, the single activated subcarrier in each activation pattern does not overlap for SEFDM-IM-Tra with \(K_A=1\), while in the proposed designs activation pattern-1, pattern-2 and pattern-4 activate subcarriers on repeated locations, i.e., the first and the third subcarrier locations as shown in Table \ref{tab:412_SEFDMIM}. Therefore, the bit errors resulting from erroneous detection of the subcarrier activation states are less likely to occur in SEFDM-IM-Tra, which leads to the best BER performance. In Fig. \ref{Fig:sep_ber_coded_SE15}, we further compare the error performance of SEFDM-IM-2 and SEFDM-IM-Tra with classical OFDM-IM systems. Benefiting from orthogonality between subcarriers, the OFDM-IM system has superior performance in recovering QPSK data symbols, which leads to the lowest data BER.
However, the OFDM-IM system suffers from higher index BER compared to that of the SEFDM-IM-Tra since OFDM-IM has a lower \(K/K_A\) according to the conclusion from \cite{SIM_will_it_work}. The PAPR comparisons of systems at 0.75 bit/s/Hz are provided in Fig. \ref{Fig:papr_SE15}. The traditional SEFDM-IM-Tra system has the lowest PAPR, and it achieves 1.75 dB performance gain at the CCDF of \(10^{-2}\) compared to the OFDM-IM system. Meanwhile, the three proposed SEFDM-IM-1\&2\&3 systems exhibit close PAPR performance and achieve 0.6 dB gain over the OFDM-IM counterpart. Therefore, Fig. \ref{Fig:papr_SE15} reveals that PAPR performance improves as the value of \(K/K_A\) increases. \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{06.eps} \end{center} \caption{Error performance of the coded SEFDM-IM and OFDM-IM systems with the spectral efficiency of 1 bit/s/Hz.} \label{Fig:coded_SE2} \end{figure} \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{07.eps} \end{center} \caption{CCDF of the PAPR of SEFDM-IM and OFDM-IM systems with the spectral efficiency of 1 bit/s/Hz.} \label{Fig:papr_SE2} \end{figure} We then consider SEFDM-IM systems with the spectral efficiency of 2 bit/s/Hz, which becomes 1 bit/s/Hz after coding. In general, we have two ways to increase spectral efficiency of index-modulated systems: first, increase the modulation cardinality, and second, increase the number of activated subcarriers. For SEFDM-IM systems, another adjustable parameter is the level of bandwidth compression specified by \(\alpha\). Therefore, SEFDM-IM systems are more flexible in terms of achieving any desired spectral efficiency, compared to OFDM-IM systems. In our case, the increase in spectral efficiency from 0.75 to 1 bit/s/Hz is achieved by either increasing \(M_A\) to 8 or increasing \(K_A\) to 2, and \(\alpha\) is adjusted accordingly. In Fig.
\ref{Fig:coded_SE2}, we observe that the SEFDM-IM systems with the increased modulation cardinality perform better than their counterparts with a higher number of activated subcarriers. This is because the dominant errors result from the decision errors in index bits, and systems with a higher value of \(K/K_A\) perform better at detecting activation patterns. A coded OFDM-IM system with \( [K,K_A]=[4,3]\) and QPSK modulation at 1 bit/s/Hz is considered. Its BER performance is close to that of the SEFDM-IM-M1 system with \([K,K_A]=[4,1]\) and 8QAM modulation, while it suffers a 2.5 dB loss in terms of the PAPR at a CCDF of \(10^{-2}\), as seen in Fig. \ref{Fig:papr_SE2}. \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{08.eps} \end{center} \caption{Error performance of the coded SEFDM-IM systems with the spectral efficiency of 1.1 and 1.25 bit/s/Hz.} \label{Fig:coded_SE22_25} \end{figure} \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{09.eps} \end{center} \caption{Error performance of coded SEFDM-IM systems with the spectral efficiency of 1.1 bit/s/Hz in terms of index BER, data BER and average BER.} \label{Fig:sep_ber_coded_SE22} \end{figure} It is observed that SEFDM-IM systems with a high \(K/K_A\) perform better in terms of both BER and PAPR, and hence we only consider systems with \([K,K_A]=[4,1]\) and \([K,K_A]=[4,(1,2)]\) for higher spectral efficiencies. In Fig. \ref{Fig:coded_SE22_25}, we compare the BER performance of coded SEFDM-IM systems at 1.1 and 1.25 bit/s/Hz. For the spectral efficiency of 1.1 bit/s/Hz, the traditional SEFDM-IM-Tra system with \([K,K_A]=[4,3]\) and QPSK modulation operates with \(\alpha=0.9\), while the SEFDM-IM-M2 and SEFDM-IM-1\&2\&3 systems require an increased level of bandwidth compression, i.e., \(\alpha\) is reduced to 0.675. 
Compared with the traditional system, the proposed SEFDM-IM-2, SEFDM-IM-M2 and SEFDM-IM-3 systems obtain 1.3, 0.7 and 0.3 dB better BER performance, respectively. Furthermore, the best-performing SEFDM-IM-2 system achieves both a 0.4 dB power gain and a 10\% bandwidth saving compared to the classical OFDM-IM system at 1 bit/s/Hz. In Fig. \ref{Fig:sep_ber_coded_SE22}, the BERs corresponding to index bits and data bits are presented to elaborate why the SEFDM-IM-2 system exhibits a BER performance advantage over the remaining SEFDM-IM systems at high spectral efficiency. This is because when a high modulation cardinality is deployed, i.e., 16QAM in this paper, the average BER is dominated by the data BER, and SEFDM-IM-2 with its inherent repetition coding has the best data BER performance among the SEFDM-IM systems. By contrast, the traditional SEFDM-IM-Tra system with \([K,K_A]=[4,3]\) suffers from a high index BER, resulting in the worst average BER. Similar results are observed for SEFDM-IM systems at 1.25 bit/s/Hz, where this spectral efficiency is achieved by further reducing the value of \(\alpha\). In this case, the BER performance of all SEFDM-IM systems is degraded due to the increased level of ICI. The performance gap between the SEFDM-IM-2 system and the traditional SEFDM-IM-Tra system is enlarged to 1.7 dB. In addition, the SEFDM-IM-2 system exhibits BER performance close to that of the classical OFDM-IM system while obtaining 25\% higher spectral efficiency. 
\begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{10.eps} \end{center} \caption{Frequency response for the static frequency selective channel model, where a deep frequency notch and two shallow frequency notches are intentionally designed.} \label{Fig:frequency_response_channel} \end{figure} \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{11.eps} \end{center} \caption{Error performance of the coded SEFDM-IM-2 system at 1.1 bit/s/Hz and the OFDM-IM system at 1 bit/s/Hz with and without the frequency selective channel.} \label{Fig:fading_channel_ber} \end{figure} It should be noted that the systems in this paper are considered to follow NB-IoT configurations, where \(N=12\) subcarriers are used. In this application area, signals normally experience flat fading given the narrow bandwidth of 180 kHz, which is the main reason why an AWGN channel is normally assumed. To further evaluate the robustness of our proposals against frequency selectivity, we define a three-path static frequency selective channel given by \(h(t)=0.9137\delta(t)+0.3179\delta(t-2T_s )-0.2532e^{\frac{j\pi}{2}} \delta(t-3T_s )\), where $T_s$ denotes the time duration of one sample. The delay spread is therefore 16.67 \(\mu\)s for the 180 kHz NB-IoT signal configuration. The channel frequency response is illustrated in Fig. \ref{Fig:frequency_response_channel}, in which a deep frequency notch and two shallow frequency notches are intentionally designed to test the robustness of our proposals. For simplicity, we only show the BER performance of the best-performing SEFDM-IM-2 system discussed in Fig. \ref{Fig:coded_SE22_25} and compare it with OFDM-IM. In Fig. \ref{Fig:fading_channel_ber}, we observe that SEFDM-IM-2 and OFDM-IM experience a 1 dB and a 0.5 dB loss in average BER, respectively, when the frequency selective channel is applied. 
More specifically, the power gain of the index part of SEFDM-IM-2 over that of OFDM-IM decreases from 0.64 dB to 0.19 dB. Meanwhile, the power penalty of the data part of SEFDM-IM-2 relative to that of OFDM-IM increases by 0.3 dB. In general, both the index bits and the data bits in SEFDM-IM-2 are less robust to frequency selectivity than those in OFDM-IM, although SEFDM-IM-2 maintains its advantage in index BER. This may be explained by the fact that OFDM-IM with \([K,K_A]=[4,3]\) has more subcarriers activated and is therefore more robust to frequency selectivity than SEFDM-IM-2 with \([K,K_A]=[4,(1,2)]\). \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{12.eps} \end{center} \caption{Error performance of OFDM-IM systems with the different number of activated subcarriers in one subblock.} \label{Fig:influence_of_no_of_activated_subcarriers} \end{figure} \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{13.eps} \end{center} \caption{Error performance of SEFDM-IM and OFDM-IM systems with different levels of bandwidth compression.} \label{Fig:influence_of_BW_compression} \end{figure} \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{14.eps} \end{center} \caption{Error performance of SEFDM-IM systems with different modulation schemes.} \label{Fig:influence_of_modulation_schemes} \end{figure} The average BER performance is dependent on the relative performance of the index BER and the data BER, which is subject to the values of \(K/K_A\), \(\alpha\) and \(M_A\). In other words, the optimal SEFDM-IM design depends on the dominant BER factor, i.e., the index BER or the data BER, and hence on the target spectral efficiency. In order to show the impact of each factor on the error performance, six systems that have been discussed above are further compared. In Fig. 
\ref{Fig:influence_of_no_of_activated_subcarriers}, we observe that as \(K_A\) of OFDM-IM increases from 2 to 3, both the index BER and the data BER increase, which leads to an increase in the average BER. This is because the transmission power of the activated subcarriers decreases as the value of \(K/K_A\) decreases. In Fig. \ref{Fig:influence_of_BW_compression}, the impact of the bandwidth compression level is presented, where a 1 dB performance loss is observed in the index, data and average BER when a 10\% bandwidth compression is performed. In terms of the impact of \(M_A\), the data BER is degraded by 1.4 dB when QPSK modulation is replaced with 16QAM, as shown in Fig. \ref{Fig:influence_of_modulation_schemes}. Consequently, the error performance of SEFDM-IM-Tra is dominated by the index part, while that of SEFDM-IM-M2 is dominated by the data part. In general, the index BER and data BER are jointly affected by the values of \(K/K_A\), \(M_A\) and \(\alpha\). It is inferred that both the index BER and the data BER increase with decreasing \(K/K_A\) or \(\alpha\), and the data BER increases with increasing \(M_A\). \begin{figure}[] \begin{center} \includegraphics[width=\columnwidth]{15.eps} \end{center} \caption{CCDF of the PAPR of SEFDM-IM systems with the spectral efficiency of 1.1 and 1.25 bit/s/Hz.} \label{Fig:papr_SE22_25} \end{figure} The PAPR performance of the systems at 1.1 and 1.25 bit/s/Hz is shown in Fig. \ref{Fig:papr_SE22_25}. As expected, systems with \([K,K_A]=[4,3]\) exhibit a higher PAPR than those with \([K,K_A]=[4,(1,2)]\), and the SEFDM-IM-M2 systems with \([K,K_A]=[4,1]\) achieve the lowest PAPR. For systems with the same configuration except for the value of \(\alpha\), the PAPR slightly decreases when \(\alpha\) decreases from 0.9 to 0.8 and from 0.675 to 0.6. This demonstrates the advantage of SEFDM-IM systems in terms of PAPR reduction. 
\section{Computational Complexity Analysis}\label{section5} In this section, we compare the computational complexity of the investigated systems. Since all of them need to perform standard LDPC decoding, we need only consider the computational complexity of the LLR calculations. According to \eqref{eq:LLR_index_3} for index bits and \eqref{eq:LLR_data_2} for data bits, the computational complexity of the LLR calculations is dominated by the \(\Psi\left ( {I}^{g},{P}^{g} \right )\) calculations, and therefore we use the number of \(\Psi\left ( {I}^{g},{P}^{g} \right )\) calculations required per coded bit as the metric for complexity comparisons. The calculations of \(\Psi\left ( {I}^{g},{P}^{g} \right )\) need to be performed for all possible combinations of valid activation patterns and data symbols, given by \begin{eqnarray}\label{eq:computational_complexity_calculations} {\forall{{I}}^{g}\in \mathcal{I},\forall{{{P}}^{g}\in \Upsilon^{K_A}}:{\Psi}\left ( {I}^{g},{P}^{g} \right ) ,} \end{eqnarray} where \(g=1,2,...,G\), and \(\forall\) denotes the operation that loops through all elements in a given set. Since the calculation results remain the same for all coded bits in one subblock, \eqref{eq:computational_complexity_calculations} only needs to be performed once per subblock. In any index-modulated system with the traditional subcarrier pattern design, both \(M\) and \(K_A\) take a single value. As a result, the computational complexity per coded bit, in terms of the number of \(\Psi\left ( {I}^{g},{P}^{g} \right )\) calculations, is given by \begin{eqnarray}\label{eq:computational_complexity_traditional} {{\Theta}_{tra}=\frac{1}{L}U\left (M\right )^{{K_A}},} \end{eqnarray} where the subscript \(tra\) stands for the traditional subcarrier pattern design. By contrast, SEFDM-IM-1\&2\&3 with the proposed subcarrier pattern design has two values of \(K_A\) and multiple modulation cardinalities because of pattern-2. 
Since we deliberately set a constant number of data bits transmitted per subblock regardless of the activation pattern, the computational complexity of pattern-2 is the same as that of the other patterns. Therefore, the total number of \(\Psi\left ( {I}^{g},{P}^{g} \right )\) calculations required by SEFDM-IM-1\&2\&3 is computed in the same way as for the traditional counterpart. Hence, the computational complexity of obtaining the LLR value for a coded bit transmitted in SEFDM-IM-1\&2\&3 is given by \begin{eqnarray}\label{eq:computational_complexity} {{\Theta}_{pro}=\frac{1}{L}U\left (M' \right )^{{K_A}'},} \end{eqnarray} where the subscript \(pro\) represents the proposed subcarrier pattern design, and \({M}'\) and \({K_A}'\) are the modulation cardinality and the number of activated subcarriers per subblock used in pattern-1, respectively. Taking the configuration of \([K,K_A]=[4,(1,2)]\) in Table \ref{tab:412_SEFDMIM} as an example, ${M}'$ is replaced by $M_A$, and ${K_A}'$ is assigned a value of 1 regardless of ${K_A}=(1,2)$. The computational complexity of OFDM is not considered here since OFDM detectors are based on simple subcarrier-based detection, as opposed to the subblock-based detection required in the IM-based cases. 
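As a sanity check, the complexity expressions \eqref{eq:computational_complexity_traditional} and \eqref{eq:computational_complexity} can be evaluated numerically. The short Python sketch below is our illustrative addition (not part of the paper); it assumes \(U=4\) valid activation patterns in each configuration, takes \(L\) as the number of index plus data bits per subblock, and rounds up to an integer, which reproduces the entries of Table \ref{tab:computational_complexity}:

```python
from math import ceil, log2

def llr_complexity(U, M, K_A):
    """Number of Psi(I, P) evaluations per coded bit: ceil(U * M**K_A / L),
    with L = log2(U) index bits + K_A * log2(M) data bits per subblock."""
    L = log2(U) + K_A * log2(M)
    return ceil(U * M**K_A / L)

# [K, K_A] = [4, 1] (U = 4 valid patterns) with QPSK, 8QAM, 16QAM
print(llr_complexity(4, 4, 1))   # 4
print(llr_complexity(4, 8, 1))   # 7
print(llr_complexity(4, 16, 1))  # 11
# [K, K_A] = [4, 2] (4 of the C(4,2) = 6 patterns are used) with QPSK
print(llr_complexity(4, 4, 2))   # 11
# [K, K_A] = [4, 3] (U = 4) with QPSK
print(llr_complexity(4, 4, 3))   # 32
```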
\begin{table}[t] \caption{\\Computational complexity of index-modulated systems.} \centering \begin{tabular}{|c|c|c|} \hline&&\\[-0.65em] \textit{\textbf{Scheme}} &\textit{\textbf{\begin{tabular}[c]{@{}c@{}}Modulation scheme\\in pattern-1\&3\&4 \end{tabular}}} & \textit{\textbf{\begin{tabular}[c]{@{}c@{}}Computational\\ complexity\end{tabular}}}\\ \hline\hline &&\\[-0.65em] \multirow{3}{*}{\begin{tabular}[c]{@{}c@{}}SEFDM-IM with \\ \([K,K_A]=[4,1]\),\\[0.5ex] \([K,K_A]=[4,(1,2)]\)\end{tabular}} & QPSK & 4 \\[0.5ex] \cline{2-3}\cline{2-3} &&\\[-0.65em] & 8QAM & 7 \\ [0.5ex] \cline{2-3}\cline{2-3} &&\\[-0.65em] & 16QAM & 11 \\[0.5ex] \hline &&\\[-0.65em] \begin{tabular}[c]{@{}c@{}}SEFDM-IM with\\ \([K,K_A]=[4,2]\),\\[0.5ex] \([K,K_A]=[4,(2,3)]\);\\[0.8ex] OFDM-IM with\\ \([K,K_A]=[4,2]\)\end{tabular} & QPSK & 11 \\ [1.1ex] \hline &&\\[-0.65em] \begin{tabular}[c]{@{}c@{}}SEFDM-IM with\\ \([K,K_A]=[4,3]\);\\ [0.8ex] OFDM-IM with\\ \([K,K_A]=[4,3]\)\end{tabular} & QPSK & 32 \\ \hline \end{tabular} \label{tab:computational_complexity} \end{table} The computational complexity of all investigated systems with the LLR calculator is provided in Table \ref{tab:computational_complexity}\footnote{Modulation schemes for pattern-1\&3\&4 are fixed. Pattern-2 could employ mixed modulation schemes but has the same computational complexity as pattern-1\&3\&4.}. It is observed that the computational complexity increases exponentially with the \(K_A\) value, such that a system with a low \(K_A\) value achieves reasonable computational complexity despite a high modulation cardinality. Therefore, the results in Table \ref{tab:computational_complexity} reveal that the proposed SEFDM-IM systems with \([K,K_A]=[4,(1,2)]\) achieve reduced computational complexity compared to traditional SEFDM-IM-Tra with \([K,K_A]=[4,2]\) and \([K,K_A]=[4,3]\) at equivalent spectral efficiency. 
Furthermore, we find that classical OFDM-IM systems typically suffer from high computational complexity. This is because they are required to transmit more bits to achieve the target spectral efficiency compared to their SEFDM-IM counterparts, due to the constraint on subcarrier spacing. \section{Conclusion}\label{section6} In this paper, we have proposed a novel index modulation pattern design principle for SEFDM-IM systems based on keeping the last subcarrier in each subblock unused, thereby improving signal quality, since ICI levels are dependent on the locations of the activated subcarriers. The deactivation of the last subcarrier is compensated for by varying the number of activated subcarriers per subblock. Following this design principle, we have developed three SEFDM-IM activation schemes, termed SEFDM-IM-1\&2\&3, and compared their system performance with that of classical OFDM-IM and traditional SEFDM-IM. In addition to the BPSK and QPSK modulations investigated in previous work, we have explored the higher-order modulation schemes 8QAM and 16QAM with both traditional and proposed activation patterns. Results have shown that the newly proposed schemes outperform OFDM-IM and other SEFDM-IM designs in terms of BER, PAPR and computational complexity when the spectral efficiency is 1 bit/s/Hz or higher. Furthermore, we have found that SEFDM-IM systems with a higher modulation cardinality outperform those with a higher number of activated subcarriers at equivalent spectral efficiency. Therefore, this work provides a useful design principle for coded SEFDM-IM based on tuning the subcarrier activation patterns, the number of activated subcarriers, the bandwidth compression factor and the modulation scheme. \bibliographystyle{IEEEtran}
Q: Django App Implementing Auth0 won't render on iOS devices after logging in I have a nice Django app that implements Auth0. It works in all browsers on PCs and in browsers on Android. When testing on iOS devices, however, after the user logs in through Auth0, the device asks to download a file, downloads it, and then does nothing. If I try to redirect to my English page, it downloads a file called "en"; if I try to redirect to the French version of the page, it downloads a file called "fr". Not sure why - it is the last part of the URL, myurl.something.org/myForm/en for English, for example. At first I thought the issue had to do with Apple not allowing SameSite cookies, so I added the CSRF_COOKIE_SAMESITE = None setting. But I see now that after logging in, the address bar shows the URL that I want the user to be redirected to. When I tried using the Web Inspector for Safari on iPhone, I saw that there are no SameSite cookies, so it seems that this is not the problem. I see the document "en" in the list of resources in the Web Inspector when on the login page. 
It is type "document" and shows that inside of it is the following: <!DOCTYPE html> <html> <head> <meta charset="utf-8"> <meta http-equiv="X-UA-Compatible" content="IE=edge,chrome=1"> <title>Sign In with Auth0</title> <meta name="viewport" content="width=device-width, initial-scale=1.0" /> <style> .bgimg { background-image: url('pic.jpg'); } #myVideo { position: fixed; right: 0; bottom: 0; min-width: 100%; min-height: 100%; z-index: 1; } .auth0-lock.auth0-lock.auth0-lock-opened .auth0-lock-overlay { opacity: 0.0 !important; -webkit-transition: opacity 0.3s ease-in 0s; transition: opacity 0.3s ease-in 0s; } .video{ position:absolute; z-index:-5 !important; } .overlay{ background:#333; color: white; position:fixed; top: 50px; left: 0; width: 400px; height: 100px; z-index:1000; visibility:hidden; /* * if you want to make it none clickable but make the * clicks go to the video */ pointer-events: none; } </style> </head> <body background="https://www.nbn.org.il/wp-content/uploads/2019/04/auth0_bkg.jpg"> <div class="login-container"></div> <!-- <div class ="video"> <video autoplay muted loop id="myVideo"> <source src="https://www.nbn.org.il/source_files/login/nbnommunitywaiting.mp4" type="video/mp4"> Your browser does not support HTML5 video. 
</video> </div> --> <!--[if IE 8]> <script src="//cdnjs.cloudflare.com/ajax/libs/ie8/0.2.5/ie8.js"></script> <![endif]--> <!--[if lte IE 9]> <script src="https://cdn.auth0.com/js/base64.js"></script> <script src="https://cdn.auth0.com/js/es5-shim.min.js"></script> <![endif]--> <script src="https://cdn.auth0.com/js/lock/11.3/lock.min.js"></script> <script> // Decode utf8 characters properly var config = JSON.parse(decodeURIComponent(escape(window.atob('long_token')))); config.extraParams = config.extraParams || {}; var connection = config.connection; var prompt = config.prompt; var languageDictionary; var language; if (config.dict && config.dict.signin && config.dict.signin.title) { languageDictionary = { title: config.dict.signin.title }; } else if (typeof config.dict === 'string') { language = config.dict; } var loginHint = config.extraParams.login_hint; var lock = new Auth0LockPasswordless(config.clientID, config.auth0Domain, { auth: { redirectUrl: config.callbackURL, responseType: (config.internalOptions || {}).response_type || (config.callbackOnLocationHash ? 'token' : 'code'), params: config.internalOptions }, /* additional config needed to use custom domains configurationBaseUrl: config.clientConfigurationBaseUrl, overrides: { __tenant: config.auth0Tenant, __token_issuer: config.auth0Domain }, */ assetsUrl: config.assetsUrl, allowedConnections: connection ? [connection] : null, rememberLastLogin: !prompt, language: language, languageDictionary: languageDictionary, prompt: 'consent', theme: { logo:'pic.png', primaryColor: "#fbaa40", }, closable: false, // uncomment if you want small buttons for social providers // socialButtonStyle: 'small' }); lock.show(); </script> <div class="overlay">i'm a cool overlayed html block</div> </body> </html> and then when I log in and go to the next page, under the Elements, section I get just a blank html body called about:blank. 
Here is another image from the Web Inspector of the login page before getting to the first page of my app which may or may not be helpful: Any insight into what might be wrong would be much appreciated! A: My original thoughts about the issue posted above were completely not the problem. At some point I realized this: If I required login through auth0 and passed a context in the view the application downloaded html instead of rendering on iOS devices. If I didn't require login, passing a context was no problem. If I didn't pass in a context, requiring login through auth0 was no problem. Even passing an empty context caused a problem. Eventually I figured out that the problem was this: In the Auth0 Django SDK that I was following, it shows that you should pass in what I thought was an extra dictionary parameter for auth0: {'auth0User': auth0user,'userdata': json.dumps(userdata, indent=4) } Since I didn't consider the other parameter to be a context, I also added my own context in the return statement of the view like this: return render(request, 'mypage.html', context, { 'auth0User': auth0user, 'userdata': json.dumps(userdata, indent=4)} ) So for some reason this worked on pcs and androids to pass in two contexts, but iOS didn't like it. Once I combined the two contexts return render(request, 'mypage.html', {'contextvar1': 'data', 'contextvar2': 'moredata', 'auth0User': auth0user, 'userdata': json.dumps(userdata, indent=4)} ) everything works. If anyone has understanding as to why this worked on other operating systems besides Apple and why this caused iOS devices to download the html, I would love to understand.
{ "redpajama_set_name": "RedPajamaStackExchange" }
3,455
\section{Introduction}\label{sec:intro} The theory for describing systems at equilibrium, and especially what drives transitions between different phases, is well developed and the picture rather complete, whether it is for finite temperature (thermal phase transition (PT))~\cite{goldenfeld1992lectures}, or at zero temperature (quantum phase transition (QPT))~\cite{sachdev2007quantum}. A great leap in our understanding was taken with Landau's mean-field theory of phase transition, which in particular tells that a continuous (i.e. second order) PT can be ascribed a spontaneous breaking of some symmetry~\cite{landau1937theory}. The type of symmetry being broken also predicts what kind of excitations one can predict in the symmetry broken phase, so called Higgs or Goldstone modes representing respectively massive or massless excitations~\cite{altland2010condensed}. The non-analytic behavior at a quantum critical point implies that the spectrum is necessarily gapless at this point, and more importantly the physics in the vicinity of the critical point, for example how the gap closes, is universal. This means that microscopic details become irrelevant and the physics can be described by a set of critical exponents that only depend on global symmetries and dimensions~\cite{goldenfeld1992lectures}. The fact that the gap closes means that the typical time-scale (inverse of the gap) diverges, what is called critical slowing down. An interesting and important result of critical slowing down is that if we drive the system through a critical point, by externally varying some system parameter, no matter how slow this quench is the system will always get excited due to non-adiabatic contributions. Kibble and independently Zurek suggested that the density of these generated defects also behaves universal -- it can be estimated from some critical exponents~\cite{kibble1976topology,zurek1985cosmological}. 
While originally for thermal PT's, the idea of Kibble and Zurek can be applied to QPT's as well~\cite{zurek2005dynamics}. This idea is simple, the evolution of the quench can be divided into being either adiabatic (typically when the gap is large) or diabatic (in the vicinity of the critical point where the gap closes). This has been termed the Kibble-Zurek mechanism (KZM), and its applicability has been demonstrated in numerous experiments by now~\cite{bowick1994cosmological,ruutu1996vortex,ducci1999order,monaco2002zurek,maniv2003observation,weiler2008spontaneous,bakr2010probing,chen2011quantum,braun2015emergence,ulm2013observation,pyka2013topological,mielenz2013trapping}. Lately a new type of critical behavior in non-equilibrium systems has gained much attention thanks to great experimental advances especially in the AMO (atomic, molecular and optical) community. These are driven-dissipative systems, in which, in principle, both the drive and the sort of dissipation can be tailored~\cite{diehl2008quantum,verstraete2009quantum,eisert2010noise,hoening2012critical}. The system relaxes to some steady state that will not be an equilibrium one, i.e. a non-equilibrium steady state or short NESS. Criticality in these models are correspondingly defined via non-analyticities in the system's steady state $\hat\rho_\mathrm{ss}$ upon varying some model parameter, similar to how the ground state $|\psi_0\rangle$ shows non-analytic behavior at the critical point for a QPT. 
For AMO experiments, the evolution of the system in contact with an environment can often be accurately described by a Markovian Lindblad master equation~\cite{gardiner2015quantum,breuer2002theory} \begin{equation}\label{lindblad} \begin{array}{lll} \partial_t\hat\rho(t) & = & \displaystyle{\hat{\mathcal L}(\hat\rho(t))\equiv i\left[\hat\rho(t),\hat H\right]}\\ \\ & & \displaystyle{+\sum_i\kappa_i\left(2\hat L_i\hat\rho(t)\hat L_i^\dagger-\hat L_i^\dagger\hat L_i\hat\rho(t)-\hat\rho(t)\hat L_i^\dagger\hat L_i\right).} \end{array} \end{equation} The influence of the environment is included in the last term containing a sum over possible Lindblad jump operators $\hat L_i$, and where the $\kappa_i$'s are the corresponding decay rates. When tailoring the system-environment couplings these are constructed with the purpose of reaching a desired steady state $\hat\rho_\mathrm{ss}$ of Eq.~(\ref{lindblad}). Typically the jump operators are local but in principle need not be so~\cite{schneider2002entanglement,hannukainen2017dissipation}. For $\hat\rho_\mathrm{ss}$ to be non-analytic in the thermodynamic limit, the gap in the spectrum, now of the Liouvillian $\hat{\mathcal L}$, must close at the critical point~\cite{kessler2012dissipative}. Since the time-evolution governed by $\hat{\mathcal L}$ is in general not unitary, the spectrum is normally complex with the real parts representing relaxation to the NESS's~\cite{albert2014symmetries,albert2016geometry}. It is clearly so that our understanding of NESS critical behavior is far less developed than what it is for thermal or quantum PT's. It has been shown, for example, that new universality classes are possible in these systems~\cite{sieberer2013dynamical,marino2016driven,zamora2017tuning}, as is the presence of continuous PT's lacking any spontaneous symmetry breaking~\cite{hannukainen2017dissipation}. Hence, qualitative difference between equilibrium and non-equilibrium QPT's may indeed exist. 
A very famous example from classical physics is the `flocking transition', which describes the build-up of long-range order in, for example, flocks of birds~\cite{vicsek1995novel}. At equilibrium, such long-range order is prohibited by the Mermin-Wagner theorem~\cite{auerbach2012interacting}, but the theorem does not apply in non-equilibrium situations, which opens the possibility for the birds to order. In the present paper we analyze how NESS critical systems, described by Lindblad master equations, respond to slow quenches across a critical point. As pointed out above, for equilibrium systems we know that under rather general circumstances the KZM accurately explains such scenarios. Thus, the corresponding question is whether some KZM can also be applied to these models? We will see that there is indeed a generalization of the traditional KZM to open systems, but much care needs to be taken to settle this correspondence. For example, how the concept of adiabaticity translates to open quantum systems, and more importantly how to quantify the amount of non-adiabatic excitations generated during the quench. It is clear that since the criticality manifests in the system's steady state, this state should be the reference when determining the amount of excitations. We will argue that the natural measure for the amount of excitations is the trace distance for density operators. It is in general harder to solve dynamical than static or equilibrium problems, and in particular it is more difficult to simulate the evolution governed by a Lindblad master equation than that deriving from a Hamiltonian. This is easily understood from the simple observation that one needs many more parameters to describe a general state $\hat\rho(t)$ than a pure state $|\psi(t)\rangle$. Indeed, for a Hilbert space dimension $D$, the number of eigenstates of $\hat{\mathcal L}$ is $D^2$, to be compared to the $D$ eigenstates of $\hat H$. 
A zero-eigenvalue eigenstate $\hat\rho_\mathrm{ss}$ is obviously a steady state of the Liouvillian. Adiabatic evolution should imply that a system initialized in some steady state $\hat\rho_\mathrm{ss}(t_i)$ at time $t_i$ remains in the instantaneous steady state $\hat\rho_\mathrm{ss}(t)$ throughout the duration of the quench. In the past, several works have studied how an environment affects KZ-type quenches through a quantum critical point~\cite{fubini2007robustness,cincio2009dynamics,dutta2016anti,dziarmaga2006dynamics}. This is a conceptually different question from the one we address: In our setting we cannot in general characterize the system by some underlying Hamiltonian -- it is truly the system plus environment that determines the properties of the state $\hat\rho(t)$. In Refs.~\cite{fubini2007robustness,cincio2009dynamics,dutta2016anti,dziarmaga2006dynamics} the environment serves mainly as an additional source of fluctuations, and the general finding is that the amount of excitations created when quenching through the (Hamiltonian) critical point is increased. It is, in particular, assumed that the presence of an environment will not qualitatively alter the critical behavior of the Hamiltonian. Moreover, excitations are measured with respect to the Hamiltonian ground state, and not with respect to the steady state as in the present work. Another crucial observation is that when the ground state is the reference state, as in the mentioned references, the amount of excitations will, strictly speaking, not only depend on the quench rate and critical exponents but also on the initial and final times, $t_i$ and $t_f$. This is irrespective of whether the quench ends close to or far from a critical point, since even far from the critical point, where the evolution is presumably adiabatic, there is an environment-induced relaxation of the system towards some steady state. 
In our analysis of quenches through NESS critical points, the results do not depend on $t_i$ and $t_f$, and thereby rely only on system parameters and properties. In this respect, the approach we take is more appropriate when the goal is to explore universal properties of the critical behavior. The paper is organized as follows. In the following section we summarize some earlier results such that they can be put in context with ours, and we also classify different scenarios that can emerge in NESS critical systems. More precisely: in Sec.~\ref{ssec:KZM} we reproduce the arguments (the so-called `adiabatic-impulse approximation') for the KZM when applied to quantum critical points of closed systems, then in Sec.~\ref{ssec2B} we continue with defining NESS criticality and introduce the different classes arising from the competing terms in the Lindblad master equation, and in Sec.~\ref{ssec2C} we review earlier works related to our results. Section~\ref{sec3} presents the general results. We start in Sec.~\ref{ssec3A} by introducing the Bloch equations of Lindblad master equations and discuss their structure in rather general terms. This allows us to explain what is meant by adiabaticity for open quantum systems in Sec.~\ref{adsubsec}. The use of the trace distance as a measure of excitations is argued for in Sec.~\ref{adsubsec} by demonstrating that it correctly predicts the amount of excitations in the limiting situation of a closed quantum system. In Sec.~\ref{ssec3d} we generalize the adiabatic-impulse approximation to our Bloch equations and thereby derive the KZM for open quantum systems. The subsequent Sec.~\ref{sec4} is devoted to two examples, the open Landau-Zener model in Sec.~\ref{ssec4A} and the dephasing transverse Ising model in Sec.~\ref{ssec4B}. The two examples verify the applicability of our general results from Sec.~\ref{sec3}. We conclude in Sec.~\ref{sec5} with a summary and possible future directions. 
\section{Preliminaries and earlier results} \subsection{Dynamics of quantum phase transitions}\label{ssec:KZM} In this Subsection we reproduce the idea of the adiabatic-impulse (AI) approximation, and how the KZM is applied to QPT's~\cite{zurek2005dynamics}. A QPT occurs at zero temperature and results from quantum fluctuations~\cite{sachdev2007quantum}, contrary to classical PT's which are driven by thermal fluctuations. Typically for a system showing critical behavior, its Hamiltonian can be decomposed as \begin{equation} \hat H=\hat H_0+\lambda\hat H_1, \end{equation} with $\left[\hat H_0,\hat H_1\right]\neq0$, and $\lambda$ a coupling parameter determining the relative strength between the two terms. For $\lambda=0$, the ground state is governed by $\hat H_0$, while in the opposite limit, $\lambda\rightarrow\infty$, it is dictated by $\hat H_1$. The characteristics of the ground state $|\psi_0(\lambda)\rangle$ may be very distinct in the two limits, and in particular in the thermodynamic limit it may become non-analytic for some finite critical coupling $\lambda_c$. When the transition is continuous this marks the critical point. The non-analytic behavior necessarily implies that the spectrum of $\hat H$ is gapless at $\lambda_c$, i.e. the energy gap from the ground state $\Delta_H\rightarrow0$. In particular, for $\lambda<\lambda_c$ the ground state is non-degenerate and the spectrum gapped (normal phase), while the ground state is degenerate for $\lambda>\lambda_c$ (symmetry broken phase) and the spectrum may either be gapped or not. At the critical point the ground state cannot be said to be ruled by either $\hat H_0$ or $\hat H_1$, and furthermore we cannot assign any finite characteristic length scale to the state.
In the vicinity of the critical point, the length scale diverges algebraically~\cite{goldenfeld1992lectures} \begin{equation} \xi\propto|\lambda-\lambda_c|^{-\nu}, \end{equation} for some universal critical exponent $\nu$ that does not depend on microscopic details but only on global symmetries and dimensions. The inverse of the gap, $\Delta_H^{-1}$, sets an intrinsic time-scale for how fast the system responds to small parameter changes. Since the behavior of the gap closing $\Delta_H\rightarrow0$ is universal, this time-scale must also diverge, with a dynamical exponent $z$ defined through \begin{equation}\label{dyncrit} \tau_H\propto|\lambda-\lambda_c|^{-\nu z}. \end{equation} \begin{figure} \includegraphics[width=8cm]{AI.jpg} \caption{The idea behind the AI-approximation which forms the basis for the KZM applied to QPT's. Far from the critical point, the energy gap is large such that the reaction time $\tau_H$ (marked by solid black lines) is small, i.e. the system responds fast to any external changes. In this adiabatic regime the inverse transition rate $\varepsilon/\dot\varepsilon$ (dashed red lines), giving the time-scale for the external driving, is much larger than the reaction time. As the critical point is approached, the reaction time becomes longer and at the freeze-out time $-\hat t$ the system cannot stay adiabatic and it enters the impulse regime. Here the evolution is assumed frozen, i.e. diabatic. After traversing the critical point, the system can reenter into an adiabatic regime at the second freeze-out time $\hat t$. } \label{fig1} \end{figure} We assume now that the system is quenched through the critical point such that~\cite{zurek2005dynamics} \begin{equation}\label{quench} \varepsilon(t)\equiv\lambda(t)-\lambda_c=-\frac{t}{\tau_Q}, \end{equation} where $\tau_Q$ is the `quench rate', i.e. it sets the rate of change of the Hamiltonian (a large $\tau_Q$ corresponds to a slow quench).
At the time $t=0$ we are exactly at the critical point, and we assume that for negative times the instantaneous ground state is the gapped normal phase. Even though the energy gap closes at the critical point, we may assume that sufficiently far from the critical point the evolution is adiabatic. As we move closer to the critical point, non-adiabatic excitations cannot be overlooked. While the Hamiltonian changes smoothly, in the AI-approximation one assumes that there is a certain time $-\hat t$ at which the system switches from following the quench adiabatically to evolving diabatically. Thus, after this {\it freeze-out time} the population transfer is hindered -- the evolution is frozen. The construction is thereby such that the quench can be split into an adiabatic regime followed by an {\it impulse} regime -- the AI-approximation. Within the AI-approximation, the problem becomes essentially one of determining $\hat t$. The breakdown of adiabaticity should occur in the vicinity of the point where the inverse transition rate $|\varepsilon(t)/\dot\varepsilon(t)|$ equals the response time $\tau_H=\Delta_H^{-1}$. Using (\ref{dyncrit}) and the explicit form of the quench (\ref{quench}), one finds \begin{equation} \hat t\sim\tau_Q^{\frac{\nu z}{1+\nu z}}. \end{equation} The validity of the AI-approximation hinges on the window where the evolution is neither adiabatic nor diabatic being narrow. Typically, the quench should not be too slow for the KZM to give good predictions, since for very slow quenches this window is no longer narrow. Up to the instant $-\hat t$ we have assumed adiabatic following, and thereafter frozen evolution. The quench thereby imprints a characteristic length scale \begin{equation}\label{length} \xi(\hat t)\sim\varepsilon(\hat t)^{-\nu}\sim\tau_Q^{\frac{\nu}{1+\nu z}} \end{equation} on the system state.
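The freeze-out scaling can also be illustrated numerically. The following Python sketch (our own illustration, not part of the original argument; the microscopic prefactor in $\tau_H$ is set to unity, and the helper \texttt{freeze\_out\_time} is a hypothetical name) locates the crossing between the inverse transition rate $|\varepsilon/\dot\varepsilon|=t$ and the response time $\tau_H=|\varepsilon|^{-\nu z}$ by bisection, and extracts the log-log slope of $\hat t$ versus $\tau_Q$:

```python
import math

def freeze_out_time(tau_q, nu_z, tau0=1.0):
    """Solve t = tau0 * (t/tau_q)**(-nu_z) for t > 0 by bisection,
    i.e. the crossing of the inverse transition rate and tau_H."""
    f = lambda t: t - tau0 * (t / tau_q) ** (-nu_z)
    lo, hi = 1e-12, 1e12
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # bisect in log-space over the huge range
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

nu_z = 1.0  # e.g. the transverse-field Ising chain, nu = z = 1
taus = [10.0 ** k for k in range(1, 6)]
ts = [freeze_out_time(tq, nu_z) for tq in taus]
# log-log slopes between successive quench rates
slopes = [math.log(ts[i + 1] / ts[i]) / math.log(taus[i + 1] / taus[i])
          for i in range(len(ts) - 1)]
```

For $\nu z=1$ the extracted slope is $1/2$, in agreement with $\hat t\sim\tau_Q^{\nu z/(1+\nu z)}$.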
At $t=-\hat t$ the system is still approximately in its instantaneous ground state, but this is not the ground state at later times, meaning that the system gets excited as time progresses. These are the non-adiabatic excitations. Depending on the dimensionality and the symmetries, the excitations can have different character, such as domain walls/kinks or vortices, i.e. local topological defects in the otherwise uniform ground state. The length scale (\ref{length}) determines the density of such defects, i.e. \begin{equation}\label{defdens} n_\mathrm{D}\sim\xi^{-d}\sim\tau_Q^{-\frac{d\nu}{1+\nu z}}, \end{equation} where $d$ is the dimension. The KZM thus predicts universal behavior away from equilibrium: the scaling of the defect density is determined by the critical exponents $\nu$ and $z$~\cite{del2014universality}. For a continuous PT that breaks a discrete symmetry, in the symmetry broken phase the ground state is degenerate but there is a gap to higher excited states (Higgs mode)~\cite{altland2010condensed}. The adiabatic-impulse scenario is then that the system crosses from adiabatic to diabatic evolution at $-\hat t$, and back to adiabatic at $\hat t$, see Fig.~\ref{fig1}. The imprinted excitations are still the same in this scheme. We may note that nothing restricts us to linear quenches~(\ref{quench}), nor to the case where the two freeze-out times are symmetric, i.e. $-\hat t$ and $+\hat t$; asymmetric freeze-out times could for example occur when the critical exponents are different above and below the critical point~\cite{liu2009large,mumford2015dicke}. Furthermore, the transition does not have to be a proper continuous PT, but can be a crossover that appears in finite systems~\cite{zurek2005dynamics,dziarmaga2005dynamics,cincio2007entropy,de2010quench}. Indeed, the KZM can successfully be applied also to avoided crossing models~\cite{damski2005simplest,damski2006adiabatic}, which can be seen as an approximation of a QPT in a finite system~\cite{zurek2005dynamics}.
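The simplest such avoided-crossing model, the Landau-Zener problem discussed next, is also a convenient numerical benchmark. As a sketch (our own aside; the integration window $[-T,T]$ and step size are somewhat arbitrary choices, and only the standard library is used), the exact transition probability $P_\mathrm{LZ}=e^{-\pi g^2/v}$ can be reproduced by brute-force integration of the Schr\"odinger equation:

```python
import math

def lz_excitation_probability(v, g, T=60.0, dt=0.002):
    """Propagate i d/dt psi = H(t) psi, with H(t) = [[v t, g], [g, -v t]],
    from t = -T to t = +T using fourth-order Runge-Kutta, starting in the
    instantaneous ground state.  Returns the final population of the
    upper adiabatic (excited) state."""
    def deriv(t, psi):
        a, b = psi
        return (-1j * (v * t * a + g * b), -1j * (g * a - v * t * b))

    def eigvec(t, upper):
        # instantaneous eigenvector (g, e - v t) for eigenvalue e
        e = math.hypot(v * t, g) * (1 if upper else -1)
        n = math.hypot(g, e - v * t)
        return (g / n, (e - v * t) / n)

    psi = eigvec(-T, upper=False)          # start in the ground state
    t, steps = -T, int(round(2 * T / dt))
    for _ in range(steps):
        k1 = deriv(t, psi)
        k2 = deriv(t + dt / 2, (psi[0] + dt / 2 * k1[0], psi[1] + dt / 2 * k1[1]))
        k3 = deriv(t + dt / 2, (psi[0] + dt / 2 * k2[0], psi[1] + dt / 2 * k2[1]))
        k4 = deriv(t + dt, (psi[0] + dt * k3[0], psi[1] + dt * k3[1]))
        psi = (psi[0] + dt / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]),
               psi[1] + dt / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]))
        t += dt
    ex = eigvec(T, upper=True)             # project onto the excited state
    amp = ex[0] * psi[0] + ex[1] * psi[1]  # eigvec components are real
    return abs(amp) ** 2
```

For $g=0.5$ and $v=1$ this agrees with $e^{-\pi g^2/v}\approx0.456$ up to small finite-window corrections, and increasing $g$ rapidly suppresses the excitation probability, as expected.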
\begin{figure} \includegraphics[width=8cm]{LZspec.jpg} \caption{Adiabatic $\epsilon_\pm^{(\mathrm{ad})}(vt)=\pm\sqrt{(vt)^2+g^2}$ (solid black lines) and diabatic $\epsilon_\pm^{(\mathrm{d})}(vt)=\pm vt$ (dashed red lines) energies of the LZ model~(\ref{lzham}) with $g=2$. The slope of the curves away from the crossing is determined by the velocity $v$, while the gap at $t=0$ is $\Delta_H(t=0)=2g$. A large coupling $g$ thus implies a large gap which favors adiabaticity, and likewise a small $v$ means a gradual change and more adiabatic evolution. } \label{fig2} \end{figure} In Refs.~\cite{damski2005simplest,damski2006adiabatic} Damski implemented the KZM on the solvable LZ problem~\cite{zener1932non,landau1932theorie} that describes an avoided level crossing. The LZ Hamiltonian is given by ($\hbar=1$) \begin{equation}\label{lzham} \hat H_\mathrm{LZ}(t)=\left[ \begin{array}{cc} vt & g\\ g & -vt \end{array}\right], \end{equation} such that the quench rate $\tau_Q=v^{-1}$, and $g$ is the coupling between the two diabatic states $|0\rangle=[1\,\, 0]^T$ and $|1\rangle=[0\,\, 1]^T$. If we assume that the system starts in its ground state $|0\rangle$ for $t=-\infty$, then the probability that it ends up in the excited state at $t=+\infty$ is given by the LZ formula~\cite{zener1932non,landau1932theorie} \begin{equation}\label{lzformula} P_\mathrm{LZ}=e^{-\pi\frac{g^2}{v}}. \end{equation} The adiabaticity parameter $\Lambda\equiv\pi g^2/v$ determines how adiabatic the process is; the larger it is, the more adiabatic the quench. The instantaneous eigenvalues $\epsilon_\pm^{(\mathrm{ad})}(t)=\pm\sqrt{(vt)^2+g^2}$ of $\hat H_\mathrm{LZ}(t)$ (adiabatic energies), as well as the `bare energies' $\epsilon_\pm^{(\mathrm{d})}(t)=\pm vt$ (diabatic energies) are displayed in Fig.~\ref{fig2}. Damski found that the KZM works very well in reproducing the correct analytical result~(\ref{lzformula}) for the excitations. The agreement was particularly good for fast quenches, i.e.
in the regime that is not strictly adiabatic~\cite{hwang2015quantum}. Naturally, there is no length scale in the LZ problem, and instead the amount of excitations $P_\mathrm{LZ}$ replaces the density of defects~(\ref{defdens}). \subsection{NESS criticality and the Lindblad master equation}\label{ssec2B} The conventional wisdom is that dissipation and/or decoherence degrade quantum properties~\cite{schlosshauer2007decoherence}. This is, for example, the main hindrance when it comes to quantum information processing. One possibility to circumvent this problem for quantum computing is to employ {\it decoherence-free subspaces}~\cite{lidar1998decoherence}, which use states that are insensitive to the decoherence. Relatedly, one could imagine tailoring the dissipation/decoherence channels such that the system approaches a steady state manifold where the computations take place~\cite{kraus2008preparation,diehl2008quantum,verstraete2008quantum}. Typically, these are controlled driven-dissipative systems with desirable non-equilibrium steady states $\hat\rho_\mathrm{ss}$. Naturally, the NESS $\hat\rho_\mathrm{ss}$ may also show non-analytic behavior in some proper thermodynamic limit, just like equilibrium states can become critical~\cite{diehl2010dynamical,eisert2010noise,tomadin2010signatures,hoening2012critical,sieberer2013dynamical}. Being manifestly out of equilibrium, these NESS phase transitions need not obey laws applicable to equilibrium PT's, such as the Mermin-Wagner theorem~\cite{auerbach2012interacting}. There are three possible scenarios we can imagine: \begin{enumerate} \item The model is critical at the closed level, i.e. the Hamiltonian supports different phases. We exclude here the possibility of new phases appearing due to the openness, see below. The questions are: how does the inclusion of dissipation/decoherence (noise) affect the types of transitions and the properties of the phases?
It has been demonstrated that the presence of environmental noise can indeed change the critical exponents of the PT~\cite{nagy2011critical,baumann2011exploring}, or even lead to new universality classes~\cite{sieberer2013dynamical,marino2016driven,kardar1986dynamic,zamora2017tuning}, as well as alter the properties of the phases~\cite{joshi2013quantum,ludwig2013quantum,altman2015two}. In certain cases, the transition becomes classical and the fluctuations induced by the environment can be described as an effective temperature~\cite{mitra2006nonequilibrium,diehl2008quantum,dalla2010quantum,sieberer2013dynamical,kessler2012dissipative}, which may prohibit build-up of long-range order in lower dimensions according to the Mermin-Wagner theorem. In realistic experimental settings with finite systems, the correlation length may, however, be larger than the system size, and the characteristics of the PT could in such cases still be accessible~\cite{altman2015two,dagvadorj2015nonequilibrium}. \item Much less studied is the situation where the criticality emerges as an interplay between different dissipative channels, and not due to the Hamiltonian. Then, depending on which channel dominates, the state approaches different steady states. Verstraete {\it et al.} showed that it is possible to perform quantum computing tasks using solely dissipative evolution~\cite{verstraete2008quantum}. The target state is then a steady state and the dissipation is tailored in such a way that the desired target state is reached. \item The last scenario is when the criticality is only possible in the presence of both a Hamiltonian and dissipative channels~\cite{diehl2008quantum,diehl2010dynamical,eisert2010noise,hoening2012critical,hannukainen2017dissipation,carmichael2015breakdown}. Here, the emerging phases typically result from either the dissipation or the unitary Hamiltonian evolution dominating. The separate parts alone, Hamiltonian or dissipator, may well be trivial.
It was also recently shown that the criticality in these systems can be conceptually different from equilibrium situations where a continuous PT is generally accompanied by a spontaneous symmetry breaking~\cite{hannukainen2017dissipation}. \end{enumerate} AMO systems have the advantage that they can be fairly versatile and offer high controllability~\cite{bloch2008many,cirac2012goals} in comparison to other material systems. The atoms or ions are manipulated with optical light, and controlled dissipation is accomplished by a combination of excitations and spontaneous emission. In these optical settings, a Born-Markovian master equation is typically applicable~\cite{gardiner2015quantum}. Under such circumstances and invoking the secular (also called rotating wave) approximation, the system is described by the Lindblad master equation of Eq.~(\ref{lindblad})~\cite{breuer2002theory}. The first commutator term on the right hand side of Eq.~(\ref{lindblad}) comprises the Hamiltonian evolution, with the remark that $\hat H$ in principle also includes Lamb shifts stemming from the coupling to the environment. The second term accounts for the coupling to the environment, where $\kappa_i$ are decay rates or coupling strengths, and $\hat L_i$ are the Lindblad jump operators. We have also defined the Liouvillian $\hat{\mathcal L}$, which is the total generator of the time-evolution. In what follows we will always assume that the system evolution is generated by a master equation of the form (\ref{lindblad}). For our purpose when studying NESS criticality, the steady state \begin{equation} \hat{\mathcal L}(\hat\rho_\mathrm{ss})=0 \end{equation} is the main object of interest, i.e. the kernel of the Liouvillian or an eigenstate with zero eigenvalue. It serves the same role as the ground state does in equilibrium QPT's. The manifold $\mathcal{M}=\left\{\hat\rho\, |\,\partial_t\hat\rho=\hat{\mathcal{L}}(\hat\rho)=0\right\}$ of steady states forms a connected convex set.
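As an elementary illustration of the kernel condition $\hat{\mathcal L}(\hat\rho_\mathrm{ss})=0$, consider a decaying qubit with $\hat H=(\omega/2)\hat\sigma^z$ and a single jump operator $\hat L=\hat\sigma^-$. The following minimal Python sketch (our own helpers, with the basis ordered as $\{|e\rangle,|g\rangle\}$ and arbitrary parameter values) verifies that $|g\rangle\langle g|$ annihilates the right-hand side of Eq.~(\ref{lindblad}) while the maximally mixed state does not:

```python
def mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def dag(A):
    # hermitian conjugate
    return [[A[j][i].conjugate() for j in range(2)] for i in range(2)]

def lin(a, A, b, B):
    # linear combination a*A + b*B
    return [[a * A[i][j] + b * B[i][j] for j in range(2)] for i in range(2)]

def lindblad_rhs(rho, H, L, kappa):
    """Right-hand side -i[H, rho] + kappa(L rho L^dag - {L^dag L, rho}/2)."""
    comm = lin(-1j, mul(H, rho), 1j, mul(rho, H))
    LdL = mul(dag(L), L)
    diss = lin(1.0, mul(mul(L, rho), dag(L)),
               -0.5, lin(1.0, mul(LdL, rho), 1.0, mul(rho, LdL)))
    return lin(1.0, comm, kappa, diss)

omega, kappa = 1.0, 0.4
H = [[omega / 2, 0.0], [0.0, -omega / 2]]   # (omega/2) sigma_z
L = [[0.0, 0.0], [1.0, 0.0]]                # sigma^- = |g><e|
rho_ss = [[0.0, 0.0], [0.0, 1.0]]           # |g><g|, the expected steady state
rho_mix = [[0.5, 0.0], [0.0, 0.5]]          # maximally mixed state

flat = lambda M: [abs(M[i][j]) for i in range(2) for j in range(2)]
```

Here the ground state is the unique dark steady state, whereas the maximally mixed state still relaxes.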
If the Liouvillian is time-independent, any initial state approaches a steady state in the infinite time limit~\cite{alicki286quantum,rivas2012open}. Thus, if the steady state is unique it is attractive in the sense that any initial state will eventually find its way to it. Attractiveness implies robustness -- if for some reason the steady state is perturbed, the dynamics will bring it back. For systems with more than one steady state, attractiveness is lost, since the system may instead be brought to another steady state. As will become clear, this observation is very important for our work. The uniqueness of steady states has been explored in the past~\cite{spohn1977algebraic,schirmer2010stabilizing,frigerio1978stationary,fagnola2001existence,fagnola2002subharmonic,baumgartner2008analysis2}. Apart from special cases, it is not trivial to say something general about whether the steady state is unique or not. For example, if the only operator commuting with all jump operators $\hat L_i$ is the identity then the steady state is unique~\cite{spohn1977algebraic,schirmer2010stabilizing}. It can be proven that there must exist at least one steady state~\cite{rivas2012open}. Conversely, given a pure state it is always possible to construct a Liouvillian with at least one jump operator that has this pure state as unique steady state. If the steady state is unique the evolution is often called relaxing. A general Liouvillian eigenstate has a complex eigenvalue $\mu_i$, i.e. $\hat{\mathcal L}(\hat\rho_i)=\mu_i\hat\rho_i$, such that time-evolution results in $\hat\rho_i(t)=\exp(\mu_it)\hat\rho_i$, or, equivalently, that we can expand a general state as \begin{equation}\label{expand} \hat\rho(t)=\sum_ic_ie^{\mu_it}\hat\rho_i. \end{equation} The eigenstates may not be physical states, i.e. positive semi-definite, and the set $\{\hat\rho_i\}$ is over-complete.
Of course, Liouvillian evolution preserves positivity and norm of the state, but nevertheless the sum (\ref{expand}) may well contain unphysical states even though $\hat\rho(t)$ is physical. In fact, the eigenvalues/eigenstates typically come in complex/hermitian conjugated pairs, which assures that the expansion (\ref{expand}) always exists for any physical state. It follows that the eigenvalues must obey $\mathrm{Re}(\mu_i)\leq0$~\cite{kessler2012dissipative}. The quantity \begin{equation}\label{lgap} \Delta_M=\underset{i}{\mathrm{min}}\,\mathrm{Re}(-\mu_i), \end{equation} with the minimum taken over all eigenvalues with non-zero real parts, defines the Liouvillian gap that sets the inverse time-scale for reaching the steady state~\cite{albert2014symmetries,albert2016geometry}. The subscript is introduced to distinguish this gap from the Hamiltonian energy gap $\Delta_H$. In order for $\hat\rho_\mathrm{ss}$ to become non-analytic in the thermodynamic limit, and thereby allow for critical behavior, one must have that $\Delta_M\rightarrow0$~\cite{kessler2012dissipative}, analogously to the gap closing for equilibrium QPT's. A general scaling theory (i.e. universality) for $\Delta_M$ is not known, and, in general, it seems that the mechanisms behind NESS QPT's can be qualitatively different from those of equilibrium QPT's~\cite{hannukainen2017dissipation}. In order to discuss various scenarios for different Liouvillians, and thereby try to classify them, let us assume that the sum in (\ref{lindblad}) is restricted to a single term, i.e. we have just a single jump operator $\hat L$. This is, of course, a special case of the general situation but, as we will see, it can be a relevant case for engineered driven-dissipative systems. Even when we confine the discussion to this special case, it will provide insight also into the general case.
We note also that for several jump operators $\hat L_i$, the decomposition of the Liouvillian is known not to be unique~\cite{baumgartner2008analysis}, which makes a general classification much more complicated. For single jump operators we can construct the following four classes: \begin{description} \item[$\bullet$ Class I] {\it Energy dephasing}. The jump operator is hermitian, $\hat L=\hat L^\dagger$, and commutes with the Hamiltonian, $\left[\hat L,\hat H\right]=0$. Any energy eigenstate $\hat\rho_n=|E_n\rangle\langle E_n|$ will also be a steady state, and so will any mixed state $\hat\rho_{p_n}=\sum_np_n\hat\rho_n$, i.e. $\hat{\mathcal L}(\hat\rho_{p_n})=0$. Thus, the steady states are diagonal in the energy eigenbasis $\{|E_n\rangle\}$. For any initial state, evolution will cause $\hat\rho$ to become diagonal in the energy eigenbasis without dissipation of energy, but with an increase of entropy~\cite{nielsen2010quantum}. The probability distribution $p_n$ determines the steady state, and, in particular, this exemplifies how the manifold of steady states is convex and simply connected. \item[$\bullet$ Class II] {\it General dephasing}. The jump operator is hermitian, $\hat L=\hat L^\dagger$, and does not commute with the Hamiltonian, $\left[\hat L,\hat H\right]\neq0$. The maximally mixed state $\hat\rho_\mathrm{ss}=\mathbb{I}/D$, with $D$ the Hilbert space dimension, is a trivial steady state in this class. In many physically relevant situations, the maximally mixed state is also the unique steady state. With $|\varphi_n\rangle$ the eigenstates of $\hat L$, any diagonal state $\hat\rho=\sum_np_n|\varphi_n\rangle\langle\varphi_n|$ is transparent to the environment, but the Hamiltonian will drive the system out of this manifold, which implies relaxation to some true steady state. \item[$\bullet$ Class III] {\it Dissipation I}. The jump operator is non-hermitian, $\hat L\neq\hat L^\dagger$, but commutes with the Hamiltonian, $\left[\hat L,\hat H\right]=0$.
This class is probably the least physically relevant; we know of no non-trivial physical scenarios where it occurs. A trivial situation is that the jump operator acts in a space with only degenerate energy eigenstates, e.g. $\hat H\propto\mathbb{I}$. \item[$\bullet$ Class IV] {\it Dissipation II}. The jump operator is non-hermitian, $\hat L\neq\hat L^\dagger$, and does not commute with the Hamiltonian, $\left[\hat L,\hat H\right]\neq0$. Non-hermitian operators typically appear in cases of spontaneous decay (e.g. $\hat L=\hat a$ for decay of photons/phonons and $\hat L=\hat\sigma^-$ for spontaneous decay of the upper level of a two-level system) or for incoherent pumping (e.g. $\hat L=\hat a^\dagger$ for the situation of a single boson mode). A dark state is a pure state $\hat\rho_{D}=|D\rangle\langle D|$ that is transparent to the dissipation/decoherence, i.e. $\hat L|D\rangle=0$~\cite{arimondo1996v}. It is clear that if, in addition, $|D\rangle$ is an eigenstate of the Hamiltonian, then $\hat\rho_{D}$ is also a steady state. The most common example is a system coupled to a zero-temperature bath, which cools the system down to its ground state. More generally, if the thermal bath is at some non-zero temperature, detailed balance between incoherent loss and gain of particles leads to a thermal steady state provided the system is not driven. \end{description} As a final remark on properties of steady states, it is rather straightforward to show~\cite{dietz2003memory} that one steady state solution falling into classes I and III (and possibly also in the other classes in certain cases) is \begin{equation} \hat\rho_{\hat L}=\hat R/\mathrm{Tr}[\hat R], \end{equation} where $\hat R=\left(\hat L^\dagger\hat L\right)^{-1}$.
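Class I can be made concrete with a single qubit, $\hat H=(\Delta/2)\hat\sigma^z$ and $\hat L=\hat\sigma^z$, for which the Bloch components obey $\dot r_x=-\Delta r_y-2\kappa r_x$, $\dot r_y=\Delta r_x-2\kappa r_y$, $\dot r_z=0$ (a standard textbook result; the concrete parameter values in the Python sketch below are our own arbitrary choices). The coherences die out, while $r_z$, and thereby the particular steady state selected from the convex manifold, is fixed by the initial condition:

```python
import math

def evolve_dephasing(r0, delta, kappa, t_final, dt=1e-3):
    """RK4 integration of the Bloch equations for energy dephasing:
    r_z is conserved and the coherences decay at rate 2*kappa."""
    def f(r):
        rx, ry, rz = r
        return (-delta * ry - 2 * kappa * rx, delta * rx - 2 * kappa * ry, 0.0)
    r = tuple(r0)
    for _ in range(int(round(t_final / dt))):
        k1 = f(r)
        k2 = f(tuple(r[i] + dt / 2 * k1[i] for i in range(3)))
        k3 = f(tuple(r[i] + dt / 2 * k2[i] for i in range(3)))
        k4 = f(tuple(r[i] + dt * k3[i] for i in range(3)))
        r = tuple(r[i] + dt / 6 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
                  for i in range(3))
    return r

r0 = (0.8, 0.0, 0.6)
rx, ry, rz = evolve_dephasing(r0, delta=2.0, kappa=1.0, t_final=5.0)
```

The final coherence magnitude matches $|r_{xy}(0)|e^{-2\kappa t}$ to machine precision, while $r_z$ is untouched, illustrating why the steady state manifold is a whole convex set rather than a single attractive point.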
\subsection{Earlier studies}\label{ssec2C} \subsubsection{Dynamics across equilibrium quantum critical points} For thermal PT's, the KZM has been well explored both numerically~\cite{laguna1997density,yates1998vortex,hindmarsh2000defect} and experimentally~\cite{bowick1994cosmological,ruutu1996vortex,ducci1999order,monaco2002zurek,maniv2003observation,weiler2008spontaneous}. It is only recently, however, that the dynamics emerging from quenches through quantum critical points has been investigated~\cite{zurek2005dynamics,dziarmaga2005dynamics,damski2005simplest,polkovnikov2005universal,schutzhold2006sweeping,cherng2006entropy,cincio2007entropy,cucchietti2007dynamics}. Laser cooling and manipulation of atoms or ions have opened up a new avenue for studying quantum many-body systems out of equilibrium~\cite{bloch2008many}. Especially the high control of system parameters makes these systems good candidates for exploring critical dynamics like the KZM and its predicted scaling~\cite{dziarmaga2010dynamics,del2014universality}. The KZM was theoretically studied for the superfluid-to-Mott insulator transition of an ultracold atomic gas in an optical lattice~\cite{schutzhold2006sweeping,cucchietti2007dynamics}, and a KZ scaling was suggested. Quenching across this transition has also been studied in several experiments~\cite{bakr2010probing,chen2011quantum,braun2015emergence}. In the first experiment the KZ scaling was however not analyzed. In Ref.~\cite{chen2011quantum} the scaling after quenching from the Mott to the superfluid phase was mapped out, but deviations from simple theory were found, which were explained as a result of the inhomogeneity of the atomic cloud~\cite{dziarmaga2010dynamics,bernier2011slow,dziarmaga2014quench}. It is also known that the KZM can fail as the quench becomes too slow, where instead different scaling exponents can be predicted from adiabatic perturbation theory~\cite{hwang2015quantum}.
The behavior after quenching across a PT in a spinor BEC has also been experimentally explored~\cite{sadler2006spontaneous,nicklas2015observation,anquez2016quantum}, where indeed scaling in agreement with theory was found~\cite{anquez2016quantum}. For trapped ions, it was suggested that the KZM could be ideally studied in the so called `zig-zag' transition in which a linear chain of trapped ions reorganizes to form a zig-zag structure as the confining trap is weakened~\cite{del2010structural,de2010spontaneous}. Experiments following the theory proposals confirmed the KZM scaling with very good accuracy~\cite{ulm2013observation,pyka2013topological,mielenz2013trapping}. \subsubsection{Influence of dissipation/decoherence} For open quantum systems the focus has been on how decoherence/dissipation affects the creation of defects during the quench. The rule of thumb is that fluctuations stemming from any environment will increase the amount of defects~\cite{fubini2007robustness,cincio2009dynamics,dutta2016anti}. For the random Ising model, the characteristics may also be qualitatively different from those predicted by the KZM for closed systems~\cite{dziarmaga2006dynamics}. In particular, even at infinitely slow quenches the density of defects is non-vanishing also for finite systems where the gap is always non-zero. In Refs.~\cite{fubini2007robustness,dutta2016anti}, which study a system belonging to Class II in the classification above, it was found that the number of excitations increased as the quench became slower, a behavior that was termed the `anti-Kibble-Zurek mechanism'. Such a behavior is explained by the fact that a slower process implies a longer duration for which the system interacts with its environment, such that the environment induced excitations become essential. The same phenomenon is known both in `quantum coherent control'~\cite{ivanov2004effect,mathisen2016view} and adiabatic quantum computing~\cite{sarandy2005adiabatic}, i.e.
there is an optimal process time for minimizing the amount of excitations. \subsubsection{Preparation of NESS's } Using environments and dissipation as resources has been especially discussed in the realm of quantum information processing. The Liouvillian gap (\ref{lgap}) plays the same role for open systems as the energy gap (above the ground state) does for Hamiltonians. For a large gap, the steady state or ground state is robust to external fluctuations. However, steady states are not only protected against external imperfections by a large gap; the gap also implies that if the system is taken out of the steady state manifold it will relax back to it. If the desired steady state is unique, the system is guaranteed to return to it provided the disturbances are temporary. For quantum information processing the natural candidates for steady states are non-classical entangled states. Several proposals have been put forward for how dissipation can be harnessed for the preparation of entangled states, e.g. Bell states~\cite{clark2003unconditional,kraus2004discrete,paternostro2004complete,cho2011optical,kastoryano2011dissipative}. This has also been experimentally demonstrated for the creation of Bell states of trapped ions, either via discrete gates~\cite{barreiro2011open} or continuous relaxation~\cite{lin2013dissipative}, and for entanglement between macroscopic atomic clouds~\cite{krauter2011entanglement}. Extensions of these schemes to the generation of multi-qubit entangled states have also been suggested~\cite{kraus2008preparation} and experimentally verified~\cite{barreiro2010experimental,schindler2012quantum}, as have schemes targeting other exotic many-body quantum states~\cite{diehl2008quantum,witthaut2008dissipation,diehl2010dynamical,diehl2011topology,muller2012engineered,bardyn2013topology}.
As discussed already in the previous Subsection, in driven systems dissipation may cause critical behavior of the steady states~\cite{diehl2008quantum,diehl2010dynamical,eisert2010noise,hoening2012critical,sieberer2013dynamical,eisert2014quantum,hannukainen2017dissipation}. Criticality in driven-dissipative systems has recently been experimentally explored in quantum optical systems like the Dicke `normal-superradiance' PT~\cite{baumann2009dicke,baumann2011exploring,klinder2015dynamical}. Other examples of steady state criticality are `optical bistability'~\cite{bonifacio1978optical,drummond1980quantum,fink2017observation} and the onset of lasing in the laser~\cite{degiorgio1970analogy,rice1994photon,haken2012laser}. \section{General description of the Kibble-Zurek mechanism for NESS phase transitions}\label{sec3} \subsection{The Bloch representation for Lindblad master equations}\label{ssec3A} The KZM relies on the AI-approximation, and while it is clear what adiabatic vs. diabatic (impulse) evolutions mean for closed quantum systems, it is not completely evident what they imply for open quantum systems. We must therefore define what we mean by adiabaticity for open quantum systems. As we will see there are various approaches one may take, but before that we need to say something about the Liouvillian of Eq.~(\ref{lindblad}). The Lindblad master equation~(\ref{lindblad}) is linear, i.e. $\hat{\mathcal{L}}(c_1\hat\rho_1+c_2\hat\rho_2)=c_1\hat{\mathcal{L}}(\hat\rho_1)+c_2\hat{\mathcal{L}}(\hat\rho_2)$. If we parametrize the density operator $\hat\rho$ and represent it as some vector, the linearity implies that the Lindblad equation can be cast in a simple matrix form. There are different choices for how to parametrize $\hat\rho$, but here we employ one related to the Bloch vector representation.
Given that the Hilbert space dimension is $D$, any density operator obeying positivity and normalization can be expressed as~\cite{kimura2003bloch,bertlmann2008bloch} \begin{equation}\label{dens} \hat\rho=\frac{1}{D}\left(\mathbb{I}+\sqrt{\frac{D(D-1)}{2}}\mathbf{R}\cdot\lambda\right). \end{equation} Here, $\mathbf{R}=(r_1,\,r_2,\,\dots,\,r_{D^2-1})$ is the generalized (real) Bloch vector and the vector $\lambda=(\hat\lambda_1,\,\hat\lambda_2,\dots,\,\hat\lambda_{D^2-1})$ is composed of the generalized Gell-Mann matrices $\hat\lambda_i$~\cite{hioe1981n}. The $\hat\lambda_i$ matrices are generators of the Lie algebra corresponding to the group $SU(D)$, and in particular they are traceless, hermitian, and mutually orthogonal, such that given any density operator its Bloch vector is obtained from $r_i=\mathrm{Tr}[\hat\lambda_i\hat\rho]$. For $D=2$ (qubit) and $D=3$ (qutrit) the matrices are the standard Pauli and Gell-Mann matrices respectively. The Bloch vector has length $|\mathbf{R}|\leq1$, such that the state space can be represented by a `Bloch hyper-sphere'. However, whenever $D>2$ not all points in the Bloch sphere represent a physical state~\cite{kimura2003bloch,goyal2016geometry}. The larger the dimension, the sparser the Bloch sphere is in terms of physical states. The non-physical states are not proper density matrices as they are not positive semi-definite. In the Bloch representation we have parametrized the density operator in terms of $\mathbf{R}$, and the Lindblad master equation is given by~\cite{schirmer2010stabilizing} \begin{equation}\label{meq} \partial_t\mathbf{R}=\mathbf{MR}+\mathbf{b}. \end{equation} The Liouvillian matrix $\mathbf{M}$ is of dimension $(D^2-1)\times(D^2-1)$ and in general not hermitian. In fact, for a closed system $\mathbf{M}$ is skew-symmetric. For now, let us assume that both $\mathbf {M}$ and $\mathbf{b}$ are time-independent.
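For the qubit case $D=2$, where Eq.~(\ref{dens}) reduces to the familiar $\hat\rho=\tfrac{1}{2}(\mathbb{I}+\mathbf{R}\cdot\sigma)$, the parametrization is easily checked in code. The following Python sketch uses an arbitrary valid density matrix of our own choosing:

```python
# Pauli matrices as 2x2 nested lists
sx = [[0, 1], [1, 0]]
sy = [[0, -1j], [1j, 0]]
sz = [[1, 0], [0, -1]]

def trace_prod(A, B):
    # Tr[A B] for 2x2 matrices
    return sum(A[i][k] * B[k][i] for i in range(2) for k in range(2))

def bloch_vector(rho):
    # r_i = Tr[sigma_i rho]; real for any hermitian rho
    return [trace_prod(s, rho).real for s in (sx, sy, sz)]

def from_bloch(r):
    # rho = (I + r . sigma) / 2
    rx, ry, rz = r
    return [[(1 + rz) / 2, (rx - 1j * ry) / 2],
            [(rx + 1j * ry) / 2, (1 - rz) / 2]]

rho = [[0.75, 0.25 - 0.1j], [0.25 + 0.1j, 0.25]]
r = bloch_vector(rho)
rho_back = from_bloch(r)
```

The map is a bijection between hermitian unit-trace matrices and real three-vectors, and for a physical state the vector satisfies $|\mathbf{R}|\leq1$.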
The term $\mathbf{b}$ is a column vector of $D^2-1$ elements and represents some sort of pumping that prevents the state $\mathbf{R}={\bf 0}$ from being a trivial steady state. The general steady state is given by $\mathbf{MR}_\mathrm{ss}+\mathbf{b}=0$, or if $\mathbf{M}$ is invertible $\mathbf{R}_\mathrm{ss}=-\mathbf{M}^{-1}\mathbf{b}$. If $\mathbf{M}$ is not invertible the system of equations is under-determined and the steady state need not be unique, which gives more interesting situations when discussing the KZM for NESS critical models. When $\mathbf b=0$ and $\mathbf M$ is not invertible we note that the steady state condition $\mathbf M\mathbf R_\mathrm{ss}=0$ defines a connected manifold of steady states due to the ambiguity of the norm of $\mathbf R_\mathrm{ss}$. The continuity and linearity of (\ref{meq}) warrant that there must exist at least one steady state (or fixed point)~\cite{schirmer2010stabilizing}. In the general case of a non-vanishing pump term $\mathbf{b}$, by solving the homogeneous equation $\partial_t\mathbf{Q}=\mathbf{MQ}$ the solution of the inhomogeneous problem is $\mathbf{R}(t)=\mathbf{Q}(t)\left[\mathbf{R}(0)+\int_0^td\tau\,\mathbf{Q}^{-1}(\tau)\mathbf{b}(\tau)\right]$. Or if we introduce the matrices $\mathbf{V}$ and $\mathbf{U}$ that diagonalize the Liouvillian matrix, i.e. $\mathbf {D}=\mathbf{V}^t\mathbf{MU}$ with $\mathbf{D}$ diagonal, then the right eigenvectors of $\mathbf{M}$ evolve as $\mathbf{Q}_i(t)=\mathbf{Q}_i(0)e^{\mu_it}+(e^{\mu_it}-1)\mathbf{V}^t\mathbf{b}/\mu_i$ where $\mu_i$ is the corresponding eigenvalue of $\mathbf{M}$. Thus, the eigenvalues of the Liouvillian matrix determine the characteristic time-scales, and note that we must have $\mathrm{Re}(\mu_i)\leq0$ in order to preserve normalization. The Liouvillian gap is defined as before in Eq.~(\ref{lgap}).
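As a sanity check of these relations, one can pick a small made-up Liouvillian matrix $\mathbf M$ and pump vector $\mathbf b$ (the numbers below are ours, chosen only for illustration) and verify that $\mathbf R_\mathrm{ss}=-\mathbf{M}^{-1}\mathbf{b}$ is indeed the attractor of Eq.~(\ref{meq}):

```python
import numpy as np

# A made-up Liouvillian matrix M (all eigenvalues with Re <= 0) and pump
# vector b, chosen purely to illustrate the steady-state relations.
M = np.array([[-1.0,  2.0,  0.0],
              [-2.0, -1.0,  0.0],
              [ 0.0,  0.0, -0.5]])
b = np.array([0.0, 0.0, 0.25])

assert np.all(np.linalg.eigvals(M).real <= 0)   # normalization preserved

# M invertible -> unique steady state R_ss = -M^{-1} b
R_ss = -np.linalg.solve(M, b)

# forward-Euler propagation of dR/dt = M R + b relaxes onto R_ss
R, dt = np.zeros(3), 1e-3
for _ in range(40000):
    R = R + dt * (M @ R + b)
assert np.allclose(R, R_ss, atol=1e-6)
```

For a non-invertible $\mathbf M$ the same propagation still converges, but the reached fixed point depends on the initial Bloch vector, which is precisely the multi-steady-state situation exploited below.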
Provided that the Liouvillian matrix is diagonalizable (for example if it is normal, $[\mathbf{M},\mathbf{M}^\dagger]=0$, as will be relevant for us in the following Section), the eigenvectors $\mathbf{Q}_i$ (or $\mathbf{R}_i$ if $\mathbf{b}={\bf 0}$) of $\mathbf{M}$ form a basis, though in general not an orthogonal one. Furthermore, given one eigenvector, its corresponding density operator $\hat\rho_i$ need not be physical. Nevertheless, any initial physical $\mathbf{R}(0)$ will evolve into a new physical Bloch vector $\mathbf{R}(t)$. The fact that $\mathbf{M}$ is not hermitian implies that it might not be diagonalizable. There exists, however, a similarity matrix $\mathbf{S}$ that puts $\mathbf{M}$ on a Jordan block form, i.e. \begin{equation} \mathbf{D}=\mathbf{SMS}^{-1}, \end{equation} where the Jordan blocks of $\mathbf{D}$ have identical values on the diagonal and ones on the superdiagonal, and the remaining values are all zero. Thus, if $\mathbf{M}$ is diagonalizable all Jordan blocks have dimension one. If $\mathbf{M}$ is not diagonalizable, the spectrum shows an exceptional point~\cite{heiss2004exceptional,heiss2012physics}, where the real parts of at least two eigenvalues of $\mathbf{M}$ coalesce. Furthermore, at the exceptional point the corresponding eigenvectors are identical. The presence of exceptional points will, however, not be relevant for us in the examples discussed in the following section. \subsection{Adiabaticity for open quantum systems}\label{adsubsec} Having introduced the Bloch representation, it is rather straightforward to generalize the ideas of adiabaticity from quantum mechanics~\cite{ballentine2014quantum} to open quantum systems~\cite{sarandy2005adiabatic2,sarandy2005adiabatic}. In order to make the analysis transparent we here consider the cases with vanishing pump terms (the generalization to $\mathbf{b}\neq0$ is direct), and we denote the instantaneous right eigenvectors of $\mathbf {M}(t)$ by $\mathbf {R}_i(t)$.
Adiabaticity then implies that the dynamics does not generate any transfer of population between the instantaneous eigenvectors. Thus, if we initialize the system in one eigenvector $\mathbf{R}_i(0)$, and the system evolves adiabatically, the state at a later time is $\mathbf{R}(t)=\mathbf{R}_i(t)\exp\left(\int_0^t\mu_i(\tau)\,d\tau\right)$. The condition warranting adiabatic evolution for an open quantum system described by a Liouvillian matrix $\mathbf{M}(t)$ takes a very similar form to that for Hamiltonian systems~\cite{sarandy2005adiabatic2}. That is, if $\omega_{ij}(t)=|\mu_i(t)-\mu_j(t)|$ represents the ``gap'' and $\mathbf{L}_j(t)$ a left eigenvector of $\mathbf{M}(t)$, then the `rate of change' $|\mathbf{L}_j(t)\dot{\mathbf{M}}(t)\mathbf{R}_i(t)|$, with the dot representing the time derivative, should be small in comparison to the squared ``gap'' $\omega_{ij}^2(t)$ for all times and all $j$. The similarities between adiabatic evolution generated by a hermitian matrix $\mathbf{H}(t)$ and a non-hermitian matrix $\mathbf{M}(t)$ are many, but there are still important differences that should be appreciated; ($i$) we already mentioned that the set of eigenvectors $\mathbf{R}_i$ need not form an orthogonal basis, ($ii$) most of the eigenvectors do not represent physical states $\hat\rho_i$, ($iii$) contrary to state vectors, the norms of the Bloch vectors are arbitrary between 0 and 1, and ($iv$) since $\mathrm{Re}(\mu_i)\leq0$ even under adiabatic evolution the norm of the adiabatic Bloch vector typically decreases. If the instantaneous eigenstates $\mathbf{R}_i(t)$ define the adiabatic states $\hat\rho_i^{\mathrm{(ad)}}(t)$, the ambiguity of the Bloch vector norm means that for every $i$ there is a set comprised of infinitely many adiabatic states. The last point in the list above then implies that even under adiabatic evolution an adiabatic state evolves within the $i$'th set of adiabatic states.
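The adiabatic criterion is straightforward to evaluate numerically for any concrete $\mathbf M(t)$. The sketch below (our illustration; the matrix is made up and not a physical Liouvillian) computes the ratios $|\mathbf L_j\dot{\mathbf M}\mathbf R_i|/\omega_{ij}^2$ for a matrix with an avoided crossing at $t=0$, confirming that the ratios peak where the gap is smallest:

```python
import numpy as np
from scipy.linalg import eig

def M(t):
    """A made-up real matrix with an avoided crossing (gap minimum) at t = 0."""
    return np.array([[-2.0 - t, 1.0],
                     [1.0, -2.0 + t]])

def adiabatic_ratios(t, dt=1e-6):
    """All ratios |L_j . Mdot . R_i| / omega_ij^2 for i != j.
    Adiabatic evolution requires these to be small at all times."""
    mu, L, R = eig(M(t), left=True, right=True)
    Mdot = (M(t + dt) - M(t - dt)) / (2 * dt)   # finite-difference dM/dt
    ratios = []
    for i in range(len(mu)):
        for j in range(len(mu)):
            if i != j:
                num = abs(L[:, j].conj() @ Mdot @ R[:, i])
                ratios.append(num / abs(mu[i] - mu[j]) ** 2)
    return ratios

# the ratios are largest at the avoided crossing and drop off away from it
assert max(adiabatic_ratios(3.0)) < max(adiabatic_ratios(0.0))
```

Here the left and right eigenvectors are normalized to unit Euclidean norm rather than bi-orthonormalized, which only affects the overall scale of the ratios, not their relative size.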
The generalization of the adiabatic theorem, going from a Hamiltonian $\hat H(t)$ to a non-hermitian matrix $\mathbf{M}(t)$ as sketched above, is natural from a mathematical perspective. It provides, however, a less clear physical picture, especially since most eigenvectors $\mathbf{R}_i$ are non-physical. Naturally, we are more interested in how physical states evolve, and what is meant by adiabaticity for those. The physical state of special interest is the instantaneous steady state \begin{equation} \hat{\mathcal L}_t(\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t))=0, \end{equation} where the subscript $t$ marks that the Liouvillian is explicitly time-dependent and the superscript (ad) denotes an {\it adiabatic} steady state. Thus, $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)$ is a special example of the adiabatic states $\hat\rho_i^{\mathrm{(ad)}}(t)$ introduced in the previous paragraph. Note that if we imagine a fixed time $t$, $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)$ is a corresponding steady state. If the time-dependence of the Liouvillian is weak we may expect that the relaxation time of the system is short on the overall time-scale, such that throughout the driving the system stays close to a steady state. This then resembles adiabatic evolution. Analyzing adiabaticity in terms of deviations from the instantaneous steady state has been the subject of several papers~\cite{davies1978open,joye2007general,avron2012adiabatic,venuti2016adiabaticity}. In particular, if the quench can be assigned a time $T$ the deviations from $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)$ deep in the adiabatic regime scale as $T^{-\eta}$, with $\eta=1$ for a gapped system and $\eta=\frac{1}{1+\beta}$ for a system in which the gap closes, where the exponent $\beta$ determines how fast the Liouvillian gap (\ref{lgap}) closes~\cite{venuti2016adiabaticity}. Since $\beta\geq0$, any gap closing worsens the adiabaticity.
$\beta$ should be compared to $\nu z$, and $T$ to $\tau_Q$ for the closed system discussed in Sec.~\ref{ssec:KZM}. \begin{figure} \includegraphics[width=8cm]{Fig3a.jpg} \includegraphics[width=8cm]{Fig3b.jpg} \includegraphics[width=8cm]{Fig3c.jpg} \caption{Schematic picture of the time-evolution for open quantum systems. The yellow area represents the space of all physical states $\hat\rho(t)$, and the light blue the manifold of instantaneous steady states $\hat\rho_\mathrm{ss}(t)$. Since the model is explicitly time-dependent, the steady state manifold may change in time. In (a) the evolution is adiabatic and an initial steady state $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t_i)$ is adiabatically propagated by the Liouvillian $\hat{\mathcal L}_t$ to a final steady state $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t_f)$. The steady state manifold is continuously deformed as time progresses and the propagated state $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)$ remains in the manifold throughout. There is a one-to-one mapping between the steady states in the different manifolds at different times. For a non-adiabatic evolution, the initial steady state is driven away from its instantaneous steady state into some state $\hat\rho(t)$, typically lying outside the steady state manifold, as depicted in (b). When the driving stops at time $t_f$, the state $\hat\rho(t)$ relaxes down to a steady state $\hat\rho_\mathrm{ss}(t_f)$ in the steady state manifold (c). The distance $D$ is then a measure of how non-adiabatic the whole process was. } \label{fig3} \end{figure} \subsection{Measure of non-adiabatic excitations}\label{ssec3C} The KZM for quantum systems estimates the density of defects in terms of the quench rate $\tau_Q$ and the critical exponents according to Eq.~(\ref{defdens}). These defects are topological by nature and describe local excitations. Thus, the sum of them gives the amount of excitations above the ground state energy.
The more excited the system gets, while quenched through the critical point, the higher the number of defects. When we deal with an open system there is nothing like the ground state energy. We can of course consider the instantaneous energy $E_H(t)=\mathrm{Tr}[\hat\rho(t)\hat H(t)]$ where $\hat\rho(t)$ is the state at time $t$. This energy could be compared to the `adiabatic' energy $E_H^\mathrm{(ad)}(t)=\mathrm{Tr}[\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)\hat H(t)]$ to give some sort of `excitation measure'. However, such a comparison only makes sense if the influence of the environment is modest and the full system is mainly prescribed by the Hamiltonian. In fact, previous works have had this idea in mind: how is the quench affected by a weak coupling to an environment~\cite{dziarmaga2006dynamics,fubini2007robustness,cincio2009dynamics,nalbach2015quantum,dutta2016anti}? The general answer to this question is that the environment causes additional fluctuations that increase the amount of excitations. When the system plus environment cannot be thought of as separate subsystems, for example when the criticality crucially depends on both parts, $E_H(t)-E_H^\mathrm{(ad)}(t)$ has little to do with any excitations. Learning from the discussion above about adiabaticity in open quantum systems, the proper measure for adiabaticity should be the distance from the instantaneous steady state $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)$. Indeed, $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)$ is the state of relevance and any non-adiabatic excitations should be measured relative to this state.
There are many possible choices for such a distance, but as we will see the natural one here is the trace distance~\cite{nielsen2010quantum}, which for two density matrices is defined as \begin{equation}\label{tr1} D(\hat\rho_1,\hat\rho_2)\equiv\frac{1}{2}\mathrm{Tr}\left[\sqrt{(\hat\rho_1-\hat\rho_2)^2}\right]=\frac{1}{2}\sum_i|\lambda_i|, \end{equation} where $\lambda_i$ is the $i$'th eigenvalue of $\hat\rho_1-\hat\rho_2$. Thus, the quantity of interest for us is $D\left(\hat\rho(t),\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)\right)$. The idea behind the trace distance as the appropriate measure is pictured in Fig.~\ref{fig3}, and we will visualize it further in the following section when we discuss particular examples. Since any steady state is an eigenstate of the Liouvillian $\hat{\mathcal L}_t$ with eigenvalue zero, under adiabatic evolution the Liouvillian does not generate any evolution of $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)$. That is, given that we started in the instantaneous steady state $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(0)$, then $D\left(\hat\rho(t),\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)\right)=0$ for all times. In particular, the length of the corresponding Bloch vector is constant. Any non-zero trace distance can only be the result of non-adiabatic transitions occurring during the evolution. Typically, transitions out from $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)$ will put the system in a non-steady state. The real parts of the Liouvillian eigenvalues will push the state back towards the steady state manifold, while the driving may cause further non-adiabatic excitations. As the driving stops, according to the discussion above regarding fixed points of (\ref{meq}), the relaxation persists and eventually the system is found in a steady state.
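Numerically, the trace distance~(\ref{tr1}) follows directly from the eigenvalues of $\hat\rho_1-\hat\rho_2$. The short sketch below (ours, with made-up populations) also checks the special case of states diagonal in a common basis, for which the distance against a pure projector reduces to one minus the retained population:

```python
import numpy as np

def trace_distance(rho1, rho2):
    """Eq. (tr1): D = (1/2) * sum_i |lambda_i|, with lambda_i the
    eigenvalues of the hermitian difference rho1 - rho2."""
    lam = np.linalg.eigvalsh(rho1 - rho2)
    return 0.5 * np.sum(np.abs(lam))

# orthogonal pure states are perfectly distinguishable: D = 1
up, down = np.diag([1.0, 0.0]), np.diag([0.0, 1.0])
assert np.isclose(trace_distance(up, down), 1.0)

# for states diagonal in a common basis D = (1/2) sum_n |p_n - q_n|;
# against a pure projector on level 0 this reduces to 1 - p_0
p = np.array([0.7, 0.2, 0.1])
assert np.isclose(trace_distance(np.diag(p), np.diag([1.0, 0.0, 0.0])),
                  1 - p[0])
```

The last check is exactly the structure encountered below for Class I models, where both the relaxed and the adiabatic steady states are diagonal in the energy eigenbasis.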
The distance of this steady state to the adiabatic steady state, \begin{equation}\label{trdist} D\left(\hat\rho_\mathrm{ss},\hat\rho_\mathrm{ss}^\mathrm{(ad)}\right), \end{equation} is a direct result of non-adiabatic transitions developed during the quench and hence serves as our measure of excitations. This is one of the key results of the present work. We typically consider a situation where the final times are far from the impulse regime. For the quench in the closed system this implies that the evolution is adiabatic and no further excitations develop. For the open case, during the later stages far from the critical point the system relaxes to its instantaneous steady state -- the relaxation time-scale is typically the shortest one. Thus, we normally have that $\hat\rho(t_f)$ is to a good approximation an instantaneous steady state as long as $t_f$ is large in comparison to any freeze-out time. The relaxation depicted in Fig.~\ref{fig3} (c) therefore occurs already before we reach the final time $t_f$. With this in mind it should be clear that the trace distance is not only the suitable measure in terms of quantifying the amount of non-adiabatic excitations, but it also has the advantage that it does not depend on $t_f$ as long as it is much larger than the freeze-out time $\hat t$. This observation is important since our aim is to describe universal features of the quench for open quantum critical systems. As a universal quantity we do not want it to depend on $t_i$ or $t_f$. This is different from earlier works exploring quenches through critical points in open quantum systems~\cite{dziarmaga2006dynamics,fubini2007robustness,cincio2009dynamics,nalbach2015quantum,dutta2016anti}. In the absence of an environment we wish the trace distance (\ref{trdist}) to be, in some sense, directly related to the amount of non-adiabatic excitations out of the ground state.
As we already pointed out, in certain situations one cannot even talk about energy excitations since there is no energy spectrum. However, in the cases when one can it is of course desirable that the trace distance measures such energy excitations. Class I of the above classification is such a scenario. Here criticality cannot arise from coupling to the environment since the jump operator commutes with the Hamiltonian -- the environment induces a dephasing in the energy basis. In particular, in this Class all steady states are diagonal in the energy eigenbasis. The instantaneous ground state of $\hat H(t)$ is thereby also an instantaneous steady state. With $|E_n(t)\rangle$ the instantaneous (adiabatic) eigenstates we envision the situation where the system is initialized in the Hamiltonian ground state, and hence for adiabatic evolution $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)=|E_0(t)\rangle\langle E_0(t)|$. The actual time-evolved state, after relaxation to its final steady state, is of the form $\hat\rho_\mathrm{ss}(t)=\sum_{n=0}^\infty p_n|E_n(t)\rangle\langle E_n(t)|$. The trace distance at the final time $t_f$ becomes \begin{equation}\label{trdist2} D\left(\hat\rho_\mathrm{ss}(t_f),\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t_f)\right)=\sum_{n=1}^\infty p_n=1-p_0. \end{equation} Thus, the distance gives the probability to be excited. Furthermore, if we shift the instantaneous ground state energy such that $E_0(t_f)=0$, the amount of excitations in terms of energy is simply $\delta E=\mathrm{Tr}\left[\hat\rho_\mathrm{ss}(t_f)\hat H(t_f)\right]$. \subsection{The KZM for open quantum systems}\label{ssec3d} As we discussed in Sec.~\ref{adsubsec}, the adiabatic concepts for time-dependent hermitian matrices (i.e. Hamiltonians) can, in principle, be generalized to any square matrix, such as the Liouvillian matrix $\mathbf M(t)$.
Regardless of the properties of the matrix, the idea is that the rate-of-change should be small in comparison to the squared gap in order to warrant adiabatic evolution. Since the eigenvalues $\mu_i(t)$ of $\mathbf M(t)$ are in general complex, the absolute value of the gap should be considered, i.e. $\omega_{ij}(t)=|\mu_i(t)-\mu_j(t)|$. Remember that even if the pump term $\mathbf b(t)$ is non-zero, a transformation can cast the Bloch equation into a homogeneous one such that we can limit the discussion to homogeneous equations. We already pointed out that there is, however, one crucial aspect of adiabaticity in open quantum systems differing from the standard setting of closed quantum systems. As long as the adiabatic state $\mathbf R_i(t)$ is not a steady state, even under adiabatic evolution the norm of $\mathbf R_i(t)$ will decrease (keep in mind that we assume $\mathbf b(t)=0$, and if this is not the case it is the norm of $\mathbf Q_i(t)$ that is shrinking). Thus, on the Bloch sphere any symmetry axis defines a class of connected adiabatic states. As for QPT's, criticality of open quantum systems is also accompanied by a gap closing in the spectrum of $\mathbf M$~\cite{kessler2012dissipative}. Thus, at the critical point the Liouvillian gap of Eq.~(\ref{lgap}) closes, $\Delta_M\rightarrow0$. This is the equivalent of critical slowing down for open quantum PT's. Far from the critical point we expect instead that $\omega_{ij}(t)$, for any $i$ and $j$, is large compared to the inverse rate-of-change, such that the dynamics is adiabatic. In this respect we are led to introduce the AI-approximation. The picture is then very similar to that of Fig.~\ref{fig1}, where the evolution is divided into an adiabatic, an impulse, and an adiabatic regime. Recall that the freeze-out time $-\hat t$ when the system goes from adiabatic to diabatic is determined from equalizing the relaxation time $\tau_H=\Delta_H^{-1}$ with the inverse transition rate.
For a linear quench we thereby have $\tau_H(\hat t)=\alpha\hat t$, where $\alpha$ is some parameter that could depend on the system parameters~\cite{damski2006adiabatic}. When considering a quench through a critical point of an open quantum system we should replace the Hamiltonian reaction time $\tau_H$ with the Liouvillian reaction time $\tau_M=\omega_{ij}^{-1}$ (note that the Liouvillian gap $\Delta_M$ sets the relaxation rate to the steady state, while $\omega_{ij}$ determines the total response), and furthermore the parameter $\alpha$ may well be altered by the environment and is especially expected to depend on the loss rates $\kappa_i$. In fact, it is not {\it a priori} clear that the characteristic time for the rate-of-change will be linear even though the quench is linear. One could, for example, imagine Lindblad jump operators that are explicitly time-dependent. Another possibility is that the impulse regime need not be symmetric around the critical point such that there is a left $\hat t_L$ and right $\hat t_R$ freeze-out time with $\hat t_L\neq-\hat t_R$. Nevertheless, with the above argument we hope that it is clear that the general idea of the AI-approximation can be applied to critical open quantum systems. In the following section we will verify this by analyzing two examples. \section{Examples}\label{sec4} For a non-trivial situation we need a model that supports a manifold of many steady states. For a single unique steady state it is clear that by taking $t_f$ large enough the system always ends up in this state and we cannot conclude how adiabatic the quench was. We know that Class I of our classification supports a connected manifold of steady states. This Class is also physically relevant as it describes energy dephasing. In fact, small fluctuations in experimental parameters should at first cause a dephasing in the energy basis. We thereby look for Lindblad jump operators that commute with the Hamiltonian $\hat H(t)$ for all times.
One choice is to take the instantaneous projectors onto the adiabatic states; $\hat L_i(t)=|\phi_i^{\mathrm{(ad)}}(t)\rangle\langle \phi_i^\mathrm{(ad)}(t)|$. Alternatively one can take $\hat L(t)=\hat H(t)$. In both examples we have that the jump operators are explicitly time-dependent and in a strict sense the resulting master equation is not of Lindblad form. Nevertheless, this is not so important for us since it still generates a `completely positive trace preserving map', which guarantees that the density matrix stays physical under time-evolution. In both examples discussed in this section we consider the case when the jump operators equal the system Hamiltonian. Using the adiabatic state projectors instead does not change the results qualitatively in any way. \subsection{Dephasing LZ model}\label{ssec4A} As Damski pointed out, the simplest model supporting the KZM is the LZ problem~\cite{damski2005simplest}. It is not a model describing true criticality, but rather a smooth transition between two orthogonal states, see Fig.~\ref{fig2}. For the dephasing LZ problem the Lindblad equation takes the form \begin{equation}\label{lzlind} \begin{array}{l} \partial_t\hat\rho(t)=i\left[\hat\rho(t),\hat H_\mathrm{LZ}(t)\right]\\ \\ \displaystyle{+\kappa\left(2\hat H_\mathrm{LZ}(t)\hat\rho(t)\hat H_\mathrm{LZ}(t)-\hat H_\mathrm{LZ}^2(t)\hat\rho(t)-\hat\rho(t)\hat H_\mathrm{LZ}^2(t)\right),} \end{array} \end{equation} with $\hat H_\mathrm{LZ}(t)$ the LZ Hamiltonian~(\ref{lzham}). The manifold of steady states comprises those along a symmetry axis in the Bloch sphere between the adiabatic states, $|\phi_1^\mathrm{(ad)}(t)\rangle$ and $|\phi_2^\mathrm{(ad)}(t)\rangle$. For large negative or positive times these states approximately coincide with the diabatic states $|0\rangle=[1\,\,0]^T$ and $|1\rangle=[0\,\, 1]^T$, i.e. the north and the south pole on the sphere.
Starting in, say, the south pole, regardless of whether $\kappa$ vanishes or not, adiabatic evolution implies that the state stays pure and traverses the Bloch sphere and ends up on the north pole. In the adiabatic basis, the state is frozen under adiabatic evolution. Non-adiabatic excitations take the state away from the symmetry axis. Simultaneously, the dephasing shrinks the length of the Bloch vector and pushes the state back towards the symmetry axis (steady state manifold). This relaxation is clearly absent when $\kappa=0$. The result of a numerical simulation demonstrating this behavior is presented in Fig.~\ref{fig4}. \begin{figure} \includegraphics[width=8cm]{BlochFig4.jpg} \caption{(Color online) Comparison between the Bloch vector evolution for the closed, $\kappa=0$, and open, $\kappa\neq0$, LZ model. The thick blue arrow represents the manifold of steady states (symmetry axis); the arrow head is the adiabatic state $|\phi_1^\mathrm{(ad)}(t)\rangle$ and the other end of the blue arrow is the orthogonal adiabatic state $|\phi_2^\mathrm{(ad)}(t)\rangle$. The thick black arrow shows the final state at $t_f$ (taken large enough that the relaxation is complete). Adiabatic evolution would mean that the black and blue arrows completely overlap. The turquoise thin lines give snapshots of the Bloch vector $\mathbf R(t_i)$ at different times $t_i$. The decreased Bloch vector for $\kappa\neq0$ is evident, as is the relaxation down to the steady state manifold. The amount of excitations is determined from $D$. For this numerical example, $v=0.4$, $g=0.5$, and the initial and final times $t_i=-10$ and $t_f=10$ are taken to warrant convergence of the population transfer.} \label{fig4} \end{figure} In Appendix~\ref{appA} we give the general expression for the Bloch equation for two-level systems, from which we can extract the corresponding equations for (\ref{lzlind}).
The Liouvillian matrix takes the form \begin{equation}\label{mmatrix} \mathbf{M}(t)=2\left[ \begin{array}{ccc} -\kappa (vt)^2 & -vt & \kappa gvt\\ vt & -\kappa\left[(vt)^2+g^2\right] & -g\\ \kappa gvt & g & -\kappa g^2 \end{array}\right] \end{equation} and the pump term $\mathbf b=0$, as follows from the jump operator being hermitian. The Liouvillian matrix is normal, $\left[\mathbf M(t),\mathbf M^\dagger(t)\right]=0$, which implies that it is unitarily diagonalizable~\cite{arfken2005mathematical}. The instantaneous eigenvalues are \begin{equation}\label{meig} \begin{array}{l} \mu_1(t)=0,\\ \\ \mu_2(t)=-2\left(\kappa\varepsilon^2(t)-i\varepsilon(t)\right),\\ \\ \mu_3(t)=\mu_2^*(t)=-2\left(\kappa\varepsilon^2(t)+i\varepsilon(t)\right), \end{array} \end{equation} where, as before, $\varepsilon(t)=\sqrt{(vt)^2+g^2}$. The corresponding, orthonormal, instantaneous eigenvectors are \begin{equation} \begin{array}{c} \begin{array}{ll} \mathbf R_1(t)=\frac{1}{\varepsilon(t)}\left[ \begin{array}{c} g\\ 0\\ vt\end{array}\right], & \mathbf R_2(t)=\frac{1}{\sqrt{2}\varepsilon(t)}\left[ \begin{array}{c} vt\\ -i\varepsilon(t) \\ g\end{array}\right], \end{array}\\ \\ \begin{array}{c} \mathbf R_3(t)=\frac{1}{\sqrt{2}\varepsilon(t)}\left[ \begin{array}{c} vt\\ i\varepsilon(t) \\ g\end{array}\right]. \end{array} \end{array} \end{equation} The first Bloch vector $\mathbf R_1(t)$ is the steady state with the accompanying zero eigenvalue. The remaining two Bloch vectors are both complex, and hence cannot represent physical states. The spectral gap is $\omega(t)=|\mu_1(t)-\mu_2(t)|=|\mu_1(t)-\mu_3(t)|=2\sqrt{\kappa^2\varepsilon^4(t)+\varepsilon^2(t)}$. For $\kappa=0$ we regain the LZ gap $2\varepsilon(t)$. The absolute values of the eigenvalues are shown in Fig.~\ref{fig5}, from where it is also evident that the gap grows with $\kappa$.
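These statements are easy to verify numerically. The following sketch (ours, using the same $v$ and $g$ values as in Fig.~\ref{fig4} and an arbitrary time) builds $\mathbf M(t)$ of Eq.~(\ref{mmatrix}) and checks its normality, the spectrum~(\ref{meig}), and the zero mode $\mathbf R_1(t)$:

```python
import numpy as np

def M_LZ(t, v, g, kappa):
    """The Liouvillian matrix of Eq. (mmatrix) for the dephasing LZ model."""
    a = v * t
    return 2 * np.array([
        [-kappa * a**2,  -a,                       kappa * g * a],
        [ a,             -kappa * (a**2 + g**2),  -g            ],
        [ kappa * g * a,  g,                      -kappa * g**2 ]])

t, v, g, kappa = 1.3, 0.4, 0.5, 0.1      # same v, g as in Fig. 4
M = M_LZ(t, v, g, kappa)
eps = np.hypot(v * t, g)                 # epsilon(t) = sqrt((vt)^2 + g^2)

# normality [M, M^T] = 0 (M is real), hence unitarily diagonalizable
assert np.allclose(M @ M.T, M.T @ M)

# spectrum of Eq. (meig), matched by imaginary part
mu = sorted(np.linalg.eigvals(M), key=lambda z: z.imag)
expected = sorted([0, -2 * (kappa * eps**2 - 1j * eps),
                   -2 * (kappa * eps**2 + 1j * eps)], key=lambda z: z.imag)
assert np.allclose(mu, expected)

# the steady-state Bloch vector R_1 = (g, 0, vt)/eps is annihilated by M
R1 = np.array([g, 0.0, v * t]) / eps
assert np.allclose(M @ R1, 0)
```

Propagating $\partial_t\mathbf R=\mathbf M(t)\mathbf R$ with this matrix reproduces the trajectories shown in Fig.~\ref{fig4}.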
\begin{figure} \includegraphics[width=8cm]{LZspec2.jpg} \caption{(Color online) Absolute values of the instantaneous eigenvalues (\ref{meig}) of the Liouvillian matrix (\ref{mmatrix}). There is a time-independent zero eigenvalue corresponding to the steady states. The difference between the zero eigenvalue and the remaining two reflects the spectral gap function $\omega(t)$ which sets the inverse response time. In the figure $g=2$ and we show two examples; $\kappa=0.1$ for the solid black line and $\kappa=0$ for the dashed red line. Note, in particular, that the gap $\omega(t)$ increases with $\kappa$, implying that the response time gets shorter the larger $\kappa$ is.} \label{fig5} \end{figure} Turning to the KZM for the open LZ problem, we note that the response time $\tau_M=\omega^{-1}(t)$ is a decreasing function of $\kappa$. This suggests that the non-vanishing loss rate makes the quench more adiabatic. In return, this should result in a shorter impulse region, i.e. the value of the freeze-out time $|\hat t|$ decreases. But this is in contrast to the general knowledge that the coupling to an environment causes further excitations stemming from the additional fluctuations induced by the environment~\cite{fubini2007robustness,cincio2009dynamics,dutta2016anti}. Indeed, one finds that the amount of excitations does increase for a non-zero $\kappa$ (see below). Does this mean that the idea of the KZM, as a result of an AI approximation, breaks down for open systems? As pointed out in Sec.~\ref{ssec3d}, the answer to this question lies in the fact that the rate-of-change also depends on $\kappa$, i.e. the slope of the red dashed line in Fig.~\ref{fig1} is $\kappa$-dependent. Thus, even if the solid black lines of Fig.~\ref{fig1} move downward (suggesting a shorter impulse region), the change in the slope of the red dashed line can compensate this to keep the impulse region large.
If the amount of excitations increases we even expect the impulse region to grow with $\kappa$. For the closed LZ problem it was argued that the slope of the red dashed line in Fig.~\ref{fig1} is given by $\alpha=\pi$~\cite{damski2006adiabatic}. In particular, in the regime of relatively fast quenches where we expect the AI-approximation to be applicable, an expansion argument, and comparing to the known analytical result~(\ref{lzformula}), give this slope value. To analyze the open LZ problem we rewrite the Liouvillian matrix in powers of $vt$; \begin{widetext} \begin{equation}\label{mk} \mathbf{M}(t)=(vt)^0\left[ \begin{array}{ccc} 0 & 0 & 0\\ 0 & -2\kappa g^2 & -2g\\ 0 & 2g & -2\kappa g^2 \end{array}\right]+(vt)^1\left[ \begin{array}{ccc} 0 & -2 & 2\kappa g\\ 2 & 0 & 0\\ 2\kappa g & 0 & 0 \end{array}\right]+(vt)^2\left[ \begin{array}{ccc} -2\kappa & 0 & 0\\ 0 & -2\kappa & 0\\ 0 & 0 & 0 \end{array}\right]. \end{equation} \end{widetext} We note: ($i$) the last term, quadratic in $vt$, generates relaxation of $R_x$ and $R_y$ but does not directly affect $R_z$, ($ii$) for $g=0$ any state on the symmetry axis between the north/south poles is a steady state, ($iii$) for $\kappa=0$ the time-dependence is linear and the second term contains the elements $\pm2vt$ which result in identifying the slope $\alpha=\pi$~\cite{damski2006adiabatic}, and ($iv$) the $\kappa$-dependent elements of the second term clearly describe non-unitary evolution as they appear with the same sign. The fact that $\mathbf M(t)$ contains both a linear and a quadratic time-dependence could hint that the rate-of-change should not be taken to be linear as in the closed case, or in the visual example of Fig.~\ref{fig1}. However, the quadratic time-dependence only enters on the diagonal and does not generate direct couplings between the Bloch vector components. For the linear term we expect that $\alpha=\alpha(\kappa g)$, and furthermore that $\lim_{\kappa\rightarrow0}\alpha(\kappa g)=\pi$.
We also know that we must have $\alpha(\kappa g)<\pi$ for non-zero $\kappa$. Since the open LZ problem studied here is not analytically solvable (to the best of our knowledge), we cannot use the same argument as in Ref.~\cite{damski2006adiabatic} to determine $\alpha(\kappa g)$. Assuming a linear $\kappa g$ dependence, a least-squares fit gives \begin{equation}\label{alphaeq} \alpha=\pi\left(1-\frac{\kappa g}{2}\right). \end{equation} Here it is understood that this formula is valid for $\kappa g\ll1$. The freeze-out time is obtained from \begin{equation} \pi\left(1-\frac{\kappa g}{2}\right)\hat t=\frac{1}{2\sqrt{\kappa^2\varepsilon^4(\hat t)+\varepsilon^2(\hat t)}}. \end{equation} The analytic expression for $\hat t$ is long and not very informative. We note, however, that $\hat t$ is an increasing function of $\kappa$, which explains the larger amount of excitations generated through the quench in the open compared to the closed LZ problem. The applicability of the AI-approximation, using the parameter~(\ref{alphaeq}), is demonstrated in Fig.~\ref{fig6}. The regime of study is that of relatively fast quenches where the KZM is believed to reproduce quantitatively correct predictions. For these parameters, the agreement is very convincing, even for loss rates as large as $\kappa=0.4$. Indeed, for the parameters of the figure the agreement improves with larger $\kappa$. For even larger $\kappa$, beyond $\sim3$, the behavior is qualitatively different as the evolution is then described by a Zeno effect, see the discussion in the concluding remarks in Sec.~\ref{sec5}. \begin{figure} \includegraphics[width=8cm]{ExcitationsFig6.jpg} \caption{(Color online) The amount of excitations, measured by the trace distance~(\ref{trdist2}), as a function of $g^2/v$ for three different loss rates $\kappa$.
Solid lines give the results according to the AI-approximation with the rate-of-change~(\ref{alphaeq}), while the dashed ones are the corresponding numerical results obtained from direct numerical integration of~(\ref{lzlind}). Note that the agreement gets better the larger $\kappa$ is. The closed case represents the situation studied in Refs.~\cite{damski2005simplest,damski2006adiabatic}. The interval of integration $[t_i,t_f]$ is the same as in Fig.~\ref{fig4}.}
\label{fig6}
\end{figure}

\subsection{Dephasing transverse Ising model}\label{ssec4B}
The open LZ model of the previous subsection is a most simple example demonstrating the ideas of the KZM for open quantum systems, but it is not, in a true sense, describing a proper quantum phase transition. In this subsection we turn to a model, the transverse Ising model~\cite{sachdev2007quantum,suzuki2012quantum}, that indeed hosts a quantum phase transition, both in the open and closed version of the model. Thanks to a set of clever transformations proposed by Dziarmaga in Ref.~\cite{dziarmaga2005dynamics}, the transverse Ising model can be mapped to a set of decoupled LZ problems. Each LZ system describes the dynamics of a single momentum mode, and in the limit of long wavelengths ($k\rightarrow0$) the effective LZ velocity $v$ diverges, marking the presence of a critical point and the unavoidable breakdown of adiabaticity.
\begin{figure}
\includegraphics[width=8cm]{QuenchFig7.jpg}
\caption{Schematic picture showing the quench order of the transverse Ising model. At $t=-\infty$ we have $g=\infty$ and the system is deep in one of the paramagnetic phases. The ground state is unique and the spectrum gapped. For $t=-\tau_Q$ the system passes the first critical point and enters the symmetry broken ferromagnetic phase. In this phase the $\mathbb{Z}_2$ parity symmetry is broken and the ground state is doubly degenerate. Local excitations consist of domain walls between the two ferromagnetic ground states.
At $t=+\tau_Q$ the system goes through the second critical point and ends up in the other paramagnetic phase. } \label{fig7} \end{figure} The transverse Ising model in one dimension for $N$ sites is \begin{equation}\label{ising} \hat H_\mathrm{I}=-J\sum_{i=1}^N\left(\hat\sigma_i^z\hat\sigma_{i+1}^z+g\hat\sigma_i^x\right), \end{equation} where $J$ is the typical energy scale, $g$ the transverse field strength, and $\hat\sigma_i^\alpha$ ($\alpha=x,\,y,\,z$) the Pauli matrices at site $i$. Throughout we use periodic boundary conditions, $\hat\sigma_1^\alpha=\hat\sigma_{N+1}^\alpha$. For $g=0$ the Hamiltonian is diagonal in the computational basis with the doubly degenerate ferromagnetic ground states $|\uparrow\uparrow\dots\uparrow\rangle$ and $|\downarrow\downarrow\dots\downarrow\rangle$. In the opposite limit $g\rightarrow\pm\infty$, the ground state is paramagnetic (or polarized) $|\!\!\leftarrow\leftarrow\dots\leftarrow\rangle$ or $|\!\!\rightarrow\rightarrow\dots\rightarrow\rangle$. Here $|\!\uparrow\rangle$ and $|\!\downarrow\rangle$ are the eigenstates of $\hat\sigma_z$ with eigenvalues $\pm1$, while $|\!\!\leftarrow\rangle$ and $|\!\!\rightarrow\rangle$ are the eigenstates of $\hat\sigma_x$. At $g_c=\pm1$ there is a continuous QPT between the para- and ferromagnetic phases. Thus, a quench dictated by the time-dependent field strength \begin{equation}\label{qising} g(t)=-\frac{t}{\tau_Q} \end{equation} drives the system from a paramagnetic phase through a critical point at $g_c=+1$ (i.e. $t=-\tau_Q$) into the symmetry broken ferromagnetic phase and through a second critical point at $g_c=-1$ (i.e. $t=\tau_Q$) back into a symmetric paramagnetic phase. The quench scheme is depicted in Fig.~\ref{fig7}. \subsubsection{Mapping to a LZ problem} Before entering the problem of how dephasing affects the evolution we first discuss the closed system. In Appendix~\ref{appB} we present the details of the mapping of the quenched Ising model to a LZ problem. 
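The phase structure just described is easy to confirm by exact diagonalization of a small chain. The sketch below (numpy; $N=6$ and $J=1$ are arbitrary choices) builds Eq.~(\ref{ising}) with periodic boundaries and checks the doubly degenerate ferromagnetic ground state at $g=0$, whose first excitation is a pair of domain walls of energy $4J$, as well as the unique gapped ground state deep in the paramagnetic phase.

```python
import numpy as np
from functools import reduce

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def site_op(op, i, n):
    """Embed a single-site operator at site i of an n-site chain."""
    return reduce(np.kron, [op if j == i else np.eye(2) for j in range(n)])

def ising_h(n, g, j_coupling=1.0):
    """Transverse Ising Hamiltonian of Eq. (ising), periodic boundaries."""
    h = np.zeros((2**n, 2**n))
    for i in range(n):
        h -= j_coupling * site_op(sz, i, n) @ site_op(sz, (i + 1) % n, n)
        h -= j_coupling * g * site_op(sx, i, n)
    return h

spec_ferro = np.linalg.eigvalsh(ising_h(6, 0.0))  # ferromagnetic point g = 0
spec_para = np.linalg.eigvalsh(ising_h(6, 3.0))   # deep paramagnetic point
```

At $g=0$ the spectrum is classical, $E=-NJ+2J\times(\text{number of domain walls})$, with an even number of walls on a ring, so the first excited level above the two degenerate ground states sits at $-NJ+4J$.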
Here we only record the main results of the mapping. The simplest approach to solving the transverse Ising model is by employing the Jordan-Wigner transformation that casts the Hamiltonian into a problem of free fermions~\cite{sachdev2007quantum}. In the momentum representation the Hamiltonian reads
\begin{equation}
\hat H_\mathrm{I}=2\sum_{k>0}\left[\alpha_k\left(\hat c_k^\dagger\hat c_k-\hat c_{-k}\hat c_{-k}^\dagger\right)+\beta_k\left(\hat c_{-k}^\dagger\hat c_k^\dagger+\hat c_{-k}\hat c_k\right)\right],
\end{equation}
where $\alpha_k=g(t)-\cos(k)$, $\beta_k=\sin(k)$, $\hat c_k^\dagger$ and $\hat c_k$ are the creation and annihilation operators of a fermion with momentum $k$, and the sum runs over only positive momentum modes. As a quadratic Hamiltonian it can be diagonalized by a Bogoliubov transformation which introduces the new fermion operators according to
\begin{equation}\label{bt}
\hat c_k=u_k(t)\hat\gamma_k+v_{-k}^*(t)\hat\gamma_{-k}^\dagger
\end{equation}
and equivalently for $\hat c_k^\dagger$; to warrant the correct fermionic anticommutation relations, $|u_k(t)|^2+|v_k(t)|^2=1$. The Heisenberg equations give two coupled equations for the Bogoliubov amplitudes as
\begin{equation}\label{BdG}
i\frac{d}{dt}\!\left[\!\begin{array}{c} u_k(t)\\ v_k(t)\end{array}\!\right]=2\!\left[\begin{array}{cc} g(t)-\cos(k) & \sin(k)\\ \sin(k) & -g(t)+\cos(k)\end{array}\!\right]\!\!\left[\!\begin{array}{c} u_k(t)\\ v_k(t)\end{array}\!\right]\!,
\end{equation}
which defines a $2\times2$ $k$- and time-dependent Hamiltonian $\hat h_\mathrm{LZ}^{(k)}(t)$.
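Equation~(\ref{BdG}) is also easy to integrate numerically for a single mode, which gives an independent check of the Landau-Zener physics extracted below: starting from the instantaneous ground state deep in the paramagnetic phase and quenching $g(t)=-t/\tau_Q$ down to $g=0$, the final occupation of the upper band lies close to the LZ value $e^{-2\pi\tau_Q k^2}$ derived below for small $k$. The sketch is a plain fixed-step RK4 integrator (numpy; the starting field $g=4$ and the step size are arbitrary choices).

```python
import numpy as np

def h_bdg(t, k, tau_q):
    """2x2 Hamiltonian of Eq. (BdG) with the quench g(t) = -t/tau_q."""
    a = -t / tau_q - np.cos(k)
    return 2.0 * np.array([[a, np.sin(k)], [np.sin(k), -a]])

def excitation_probability(k, tau_q, g_start=4.0, dt=0.005):
    """RK4-integrate i d(psi)/dt = h(t) psi from g = g_start down to g = 0,
    starting in the instantaneous ground state; return the final occupation
    of the upper band.  (Quench window and step size are arbitrary choices.)"""
    t = -g_start * tau_q
    psi = np.linalg.eigh(h_bdg(t, k, tau_q))[1][:, 0].astype(complex)
    rhs = lambda time, y: -1j * (h_bdg(time, k, tau_q) @ y)
    for _ in range(int(round(-t / dt))):
        s1 = rhs(t, psi)
        s2 = rhs(t + dt / 2, psi + dt / 2 * s1)
        s3 = rhs(t + dt / 2, psi + dt / 2 * s2)
        s4 = rhs(t + dt, psi + dt * s3)
        psi = psi + dt / 6 * (s1 + 2 * s2 + 2 * s3 + s4)
        t += dt
    upper = np.linalg.eigh(h_bdg(0.0, k, tau_q))[1][:, 1]
    return abs(np.vdot(upper, psi)) ** 2
```

The agreement with the LZ formula improves with smaller $k$ and longer quenches, as the residual finite-time oscillations around the asymptotic value die out.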
With the substitution~\cite{dziarmaga2005dynamics}
\begin{equation}
\tau=\frac{2\tau_Q\sin(k)}{g_0}\left[\frac{t}{\tau_Q}+\cos(k)\right],\hspace{0.5cm}v=\frac{g_0^2}{2\tau_Q\sin^2(k)}
\end{equation}
the equations of motion for the amplitudes become
\begin{equation}
i\frac{d}{d\tau}\!\left[\!\begin{array}{c} u_k(\tau)\\ v_k(\tau)\end{array}\!\right]=\left[\begin{array}{cc} -v\tau & g_0\\ g_0 & v\tau\end{array}\!\right]\!\left[\!\begin{array}{c} u_k(\tau)\\ v_k(\tau)\end{array}\!\right]\!,
\end{equation}
i.e. they attain the familiar form of a LZ problem with a corresponding Hamiltonian $\hat H_\mathrm{LZ}^{(k)}(\tau)$. Thus, we have reduced the analysis of the full time-dependent transverse Ising model to a problem of solving a set of LZ models, one for each momentum mode $k$.

\subsubsection{Universal scaling of excitations}
For the regular LZ sweep from $\tau=-\infty$ to $\tau=+\infty$, as already mentioned, the system crosses two critical points. We therefore consider the time interval $t\in(-\infty,0]$ to start with, and briefly discuss the second scenario later. The final time $t_f=0$ is assumed to be within the adiabatic regime such that the LZ formula~(\ref{lzformula}) should be applicable. One point to notice is that the LZ crossing instant $\tau=0$ occurs at $t=-\tau_Q\cos(k)$, and hence does not exactly coincide with the crossing of the critical point at $t=-\tau_Q$. Furthermore, note that for the low energy modes, corresponding to small $k$, the rate $v$ diverges, marking the breakdown of adiabaticity in the vicinity of the critical points. It is mainly these long wave-length modes that are excited during the quench, and we may expand $\sin^2(k)\approx k^2$. By identifying $g_0^2/v=2\tau_Qk^2$ we estimate the excitation from the LZ formula~(\ref{lzformula}),
\begin{equation}
P_\mathrm{LZ}^{(k)}=e^{-2\pi\tau_Qk^2}.
\end{equation}
The density of excitations is then evaluated as
\begin{equation}\label{isingdef}
n_D=\frac{1}{N}\sum_{k}P_\mathrm{LZ}^{(k)}\approx\frac{1}{2\pi}\int dk\,e^{-2\pi\tau_Qk^2}=\frac{1}{2\pi}\frac{1}{\sqrt{2\tau_Q}},
\end{equation}
where the sum runs over the full Brillouin zone. Going back to the KZ prediction~(\ref{defdens}), and using the transverse Ising exponents $z=\nu=1$, we see that it exactly reproduces the above result~(\ref{isingdef})~\cite{dziarmaga2005dynamics}.

With the energy dephasing jump operator $\hat L=\hat h_\mathrm{LZ}(t)$, the transformation to fermions in the momentum representation is still straightforward. The total state $\hat\rho(t)=\bigotimes_k\hat\rho^{(k)}(t)$, where, for each $k$,
\begin{widetext}
\begin{equation}\label{lzlindk}
\partial_t\hat\rho^{(k)}(t)=i\left[\hat\rho^{(k)}(t),\hat h_\mathrm{LZ}^{(k)}(t)\right] \displaystyle{+\kappa\left(2\hat h_\mathrm{LZ}^{(k)}(t)\hat\rho^{(k)}(t)\hat h_\mathrm{LZ}^{(k)}(t)-\hat h_\mathrm{LZ}^{(k)2}(t)\hat\rho^{(k)}(t)-\hat\rho^{(k)}(t)\hat h_\mathrm{LZ}^{(k)2}(t)\right).}
\end{equation}
\end{widetext}
Thus, for each momentum mode the amount of excitations is derived as in the previous subsection, and averaging over the modes gives the defect density $n_D$. The results of a numerical calculation are presented in Fig.~\ref{fig8}, showing $n_D$ for different $\kappa$'s as a function of $\tau_Q$. The amount of excitations increases with $\kappa$ as anticipated. Universality implies a power-law scaling, which is also found, as is evident from the log-log plot, Fig.~\ref{fig8}~(b). The interesting result is that the exponent is altered by the dephasing. For mean-field models~\cite{nagy2011critical,baumann2011exploring} and quantum critical models~\cite{patane2008adiabatic,patane2009adiabatic} it has been demonstrated that critical exponents may change due to coupling to an environment. The numerically extracted exponents, $n_D\propto\tau_Q^\mu$, for the examples of Fig.~\ref{fig8} are listed in Tab.~\ref{tab1}.
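The Gaussian mode sum behind Eq.~(\ref{isingdef}) can be checked in a few lines: averaging $P_\mathrm{LZ}^{(k)}$ over equally spaced momenta in the Brillouin zone reproduces $1/(2\pi\sqrt{2\tau_Q})$, and with it the $\tau_Q^{-1/2}$ KZ scaling. A minimal sketch (the number of modes is an arbitrary choice):

```python
import numpy as np

def defect_density(tau_q, n_modes=4096):
    """Average the LZ excitation probability exp(-2*pi*tau_q*k^2) over
    equally spaced modes in the Brillouin zone (-pi, pi)."""
    k = 2 * np.pi * (np.arange(n_modes) + 0.5) / n_modes - np.pi
    return np.mean(np.exp(-2 * np.pi * tau_q * k**2))
```

Increasing $\tau_Q$ by a factor of four halves the defect density, as expected from $n_D\propto\tau_Q^{-1/2}$.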
\begin{figure}
\includegraphics[width=8cm]{DefectsFig8.jpg}
\caption{(Color online) The density of defects $n_D$ (a) accumulated through the quench from $t=-\infty$ to $t=0$, and for different decay rates $\kappa$ (see inset). Clearly, the larger $\kappa$ the more excitations. The right panel (b) displays the same but as a log-log plot. The defect density scales with an exponent $\mu$ in all cases (see Tab.~\ref{tab1}), which interestingly is modified by the dephasing. For the numerics, the initial time is $t_i=4\tau_Q$, which is deep in the adiabatic regime, and the number of sites is $N=512$.}
\label{fig8}
\end{figure}
\begin{table}
\begin{center}
\begin{tabular}{cc}
\Xhline{3\arrayrulewidth}
\hspace{1cm}$\kappa$\hspace{1cm} & \hspace{1cm}$\mu$\hspace{1cm} \\
\hline
0 & -0.5 \\
0.05 & -0.504 \\
0.1 & -0.523 \\
0.4 & -0.541 \\
0.8 & -0.544 \\
\Xhline{3\arrayrulewidth}
\end{tabular}
\caption{Numerically estimated KZ exponents $\mu$, $n_D\propto\tau_Q^\mu$ for different $\kappa$. The $\mu$'s have been calculated from a least-squares fit from the data of Fig.~\ref{fig8} (b), and the numerically obtained error is around 1$\%$ for all examples. Convergence of the results has been checked, both with respect to the integration interval and the system size. }\label{tab1}
\end{center}
\end{table}

\subsubsection{Correlation function}
For a quench ending at $g(t_f)=0$ in the ferromagnetic phase we expect excitations in terms of domain walls (also called kinks in one dimension). According to the result above, the KZM predicts a correlation length $\xi\propto\sqrt{\tau_Q}$ for the closed Ising model. For non-zero $\kappa$ we still expect domain wall excitations as the Lindblad jump operator commutes with the Hamiltonian for every time instant. Nevertheless, we saw that the exponents are modified by the dephasing, which should accordingly affect the correlation length.
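The exponents in Tab.~\ref{tab1} are obtained from a least-squares fit of $\log n_D$ versus $\log\tau_Q$. The procedure is short enough to sketch on synthetic data (the prefactor $0.11$ and exponent $-0.523$ below are made-up test values, not the data of Fig.~\ref{fig8}):

```python
import numpy as np

def fit_exponent(tau_q, n_d):
    """Least-squares power-law fit n_D ~ tau_Q^mu in log-log space."""
    mu, _ = np.polyfit(np.log(tau_q), np.log(n_d), 1)
    return mu

tau_q = np.logspace(0.5, 2.5, 20)
n_d = 0.11 * tau_q ** -0.523   # synthetic power-law data with mu = -0.523
```

On exact power-law data the fitted slope recovers the exponent to machine precision; on real data the scatter of the residuals gives the quoted error estimate.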
The characteristic length $\xi$ between the domain wall excitations is extracted from the correlator
\begin{equation}\label{corrf0}
C_R^{z}\equiv\langle\hat\sigma_i^z\hat\sigma_{i+R}^z\rangle-\langle\hat\sigma_i^z\rangle\langle\hat\sigma_{i+R}^z\rangle=\langle\hat\sigma_i^z\hat\sigma_{i+R}^z\rangle,
\end{equation}
where we have used that $\langle\hat\sigma_i^z\rangle=0$ from symmetry. The derivation of the correlation function is reproduced in Appendix~\ref{appC} following Ref.~\cite{dziarmaga2010dynamics2}. The crucial observation is that the correlation function can be written as a determinant of a matrix of pair correlators;
\begin{equation}\label{corrf}
|C_R^z|=\sqrt{|\mathrm{det}(Q_R)|},
\end{equation}
where
\begin{equation}
Q_R=\left[\begin{array}{cccc} G_{11} & G_{12} & \hdots & G_{1R}\\ G_{21} & G_{22} & \hdots & G_{2R}\\ \vdots & \vdots & \ddots & \vdots\\ G_{R1} & G_{R2} & \hdots & G_{RR} \end{array}\right]
\end{equation}
and
\begin{equation}\label{gmatrix}
G_{ij}=\left[\begin{array}{cc} \langle\hat A_{i+1}\hat A_{j+1}\rangle & \langle\hat B_{i}\hat A_{j+1}\rangle\\ \langle\hat A_{i+1}\hat B_{j}\rangle & \langle\hat B_{i}\hat B_{j}\rangle \end{array}\right],
\end{equation}
with $\hat A_i=\left(\hat c_i^\dagger+\hat c_i\right)$ and $\hat B_i=\left(\hat c_i^\dagger-\hat c_i\right)$. The correlators of Eq.~(\ref{gmatrix}) can be expressed in terms of the Bloch vector components (see Appendix~\ref{appC}), for example
\begin{equation}\label{corr2}
\langle\hat B_i\hat A_j\rangle=-\frac{1}{N}\sum_k\left[R_k^z\cos(k(i-j))+R_k^x\sin(k(i-j))\right].
\end{equation}
As time progresses, the different Bloch vectors will depart from one another, causing an intrinsic dephasing and a decay of the correlations. This dephasing is manifested over a length scale $L=\sqrt{\tau_Q}\log\tau_Q$~\cite{dziarmaga2010dynamics2}.
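Of the entries in Eq.~(\ref{gmatrix}), the correlator~(\ref{corr2}) is representative of how the Bloch components enter; the remaining blocks (Appendix~\ref{appC}) are assembled in the same way. The sketch below evaluates it and checks the fully polarized limit $R_k^z=-1$, $R_k^x=0$, in which the mode sum collapses to a Kronecker delta, $\langle\hat B_i\hat A_j\rangle=\delta_{ij}$.

```python
import numpy as np

def ba_correlator(r_z, r_x, ks, n_sites, i, j):
    """Pair correlator <B_i A_j> of Eq. (corr2), built from the Bloch
    components R_k^z, R_k^x of each momentum mode."""
    d = i - j
    return -np.sum(r_z * np.cos(ks * d) + r_x * np.sin(ks * d)) / n_sites
```

In practice one fills an $R\times R$ array of such entries for $Q_R$ and evaluates $\sqrt{|\det Q_R|}$ with a standard determinant routine.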
However, since the dephasing builds up only after some characteristic time $T_2$, it is not seen immediately; for example, it is not yet established at the critical point, while it is at $g=0$.
\begin{figure}
\includegraphics[width=8cm]{CorrFunFig9.jpg}
\caption{(Color online) The absolute value of the $z$-correlation function (a) and its logarithm (b) as a function of the distance $R$. The curves give the results for different $\kappa$ values, and the final time is such that $g(t_f)=0$. The quench time $\tau_Q=20$, and $t_i=4\tau_Q$. The distances between local minima, especially visible in (b), give the typical length $\xi$ that for $\kappa=0$ should scale as $\sqrt{\tau_Q}$ according to the KZM. This has also been confirmed here but is not explicitly shown as we only consider one time $\tau_Q$. The effect of the dephasing, i.e. non-zero $\kappa$, is not very pronounced; a slightly shorter correlation length $\xi$ is found.}
\label{fig9}
\end{figure}
The numerically extracted correlation function for different rates $\kappa$ is shown in Fig.~\ref{fig9}. In the ferromagnetic phase, in the presence of a domain wall $C_R^z$ should flip sign, but since the analytical expression~(\ref{corrf}) only gives the absolute value we cannot see such a sign change. However, it is clear that $|C_R^z|$ becomes very small at periodically spaced values of $R$. The length between the minima gives $\xi$. The scaling $\xi\propto\sqrt{\tau_Q}$ for $\kappa=0$ has been confirmed numerically by calculating the correlation function for different $\tau_Q$'s. According to Tab.~\ref{tab1}, the environmental dephasing slightly changes the KZ exponent $\mu$, which is also seen for the correlation function, particularly evident in the log plot (b). We have also analyzed the quench when extended to $t=+\infty$ such that the system is driven through two critical points, one at $g=-1$ as above and one at $g=+1$.
Naturally, at both critical points the spectrum is gapless, and we may therefore envision the scheme as a sort of interferometer; in the vicinity of $g=-1$ different energy eigenstates get populated, and later at $g=+1$, when the gap closes again, the different states mix and the interference loop(s) is closed. In the setting of two LZ crossings, such an interferometer is called a Landau-Zener-St\"uckelberg interferometer~\cite{shevchenko2010landau}. The external dephasing appearing whenever $\kappa\neq0$ could well affect the interference and thereby the resulting amount of excitations. This dephasing, stemming from incoherent time evolution, is of course different from the coherent intrinsic one deriving from the different momentum modes $k$ as discussed above. Nevertheless, for the correlation function~(\ref{corrf}), constructed from terms like~(\ref{corr2}), it is found that the external dephasing does not qualitatively alter the behavior of the correlation function, see Appendix~\ref{appC}. In short, the sum in~(\ref{corr2}) can be small either due to canceling terms or if the different momentum $k$ Bloch vectors shrink.

\section{Summary and future directions}\label{sec5}
In their seminal work~\cite{zurek2005dynamics}, Zurek {\it et al.} suggested how the dynamics that results when systems are driven across quantum critical points can be understood in terms of the AI-approximation. Since then, this KZM has been verified both numerically and experimentally in a variety of systems, but its limitations have also been explored, see for example~\cite{su2013kibble}. In this work we have generalized the KZM as applied to closed quantum systems to open quantum systems, i.e. systems showing driven-dissipative critical behavior. This new type of non-equilibrium quantum PT's has gained much attention lately due to recent experimental progress in the AMO and ultracold-physics communities.
As the field of NESS criticality is still very young, not much is known, even though the field is developing rapidly. Our results add to the understanding of these systems. Many of the previous results point in the direction that the physics behind NESS PT's bears much in common with equilibrium QPT's, as for example universality, but as we pointed out there also exist differences. In this respect, it is not {\it a priori} clear that the KZM presented in Ref.~\cite{zurek2005dynamics} can be generalized to NESS PT's.

The KZM, hinging on the AI-approximation, relies on some fundamental concepts, namely adiabaticity and universal scaling. Thus, for any generalization of the KZM to NESS PT's one must first explore these concepts in terms of open quantum systems. Adiabaticity for open quantum systems, as discussed in Sec.~\ref{adsubsec}, is by now rather well understood, partly thanks to works in adiabatic quantum computing~\cite{sarandy2005adiabatic2,sarandy2005adiabatic,joye2007general,avron2012adiabatic,venuti2016adiabaticity}. Since the time-evolution is in general non-unitary (manifested, for example, in complex Liouvillian eigenvalues), even under extremely slow parameter changes an adiabatic following may imply some change in the system's state. In this work we are interested in the instantaneous steady states, which by definition have a zero eigenvalue, so that adiabatic evolution implies no such relaxation. Even if adiabaticity for open quantum systems is understood to a great extent, the same cannot be said about critical scaling, or in particular the scaling of the eigenvalues in the vicinity of a critical point. The question seems to go back to~\cite{kessler2012dissipative}, where it is pointed out that the Liouvillian gap~(\ref{lgap}) must close at the critical point.
Numerics is substantially more difficult when diagonalizing Lindblad equations than simple Hamiltonians, which makes finite-size scaling explorations of the Liouvillian spectrum harder~\cite{vicentini2017critical}. Nevertheless, there is evidence supporting critical slowing down for open quantum systems, which motivates a KZM approach in order to explore quenches across NESS critical points. The KZM for open quantum systems was discussed in Sec.~\ref{ssec3d}, and we especially pointed out that the relevant time-scales, the reaction time and the inverse transition rate (see Fig.~\ref{fig1}), may well depend on the decay rate $\kappa$, which in turn implies that the extent of the impulse regime will also depend on $\kappa$.

Related to the question about adiabaticity for open quantum systems is that of characterizing the amount of non-adiabatic excitations, which was the topic of Sec.~\ref{ssec3C}. As we have emphasized, for driven-dissipative critical quantum systems the steady state $\hat\rho_\mathrm{ss}$ replaces the role played by the ground state in critical closed quantum systems. The KZ scaling in closed systems gives a prediction of the amount of excitations relative to the ground state. Since the steady state may be very distinct from the ground state of the Hamiltonian of Eq.~(\ref{lindblad}), we should not define non-adiabatic excitations with respect to that ground state, but instead in relation to the adiabatic instantaneous steady state $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)$. For an adiabatic evolution, according to its definition, the time-evolved state will coincide with $\hat\rho_\mathrm{ss}^\mathrm{(ad)}(t)$ at every instant, and any deviations from it are ascribed to non-adiabatic excitations. As argued in Sec.~\ref{ssec3C}, the natural measure is then the trace distance of density operators, Eqs.~(\ref{tr1}) and~(\ref{trdist}).
In particular, we showed that for energy dephasing (where the energy eigenbasis can be seen as the most relevant one) the trace distance gives the amount of energetic excitations above the ground state. A new ingredient in the dynamics of open QPT's is the fact that the state typically becomes mixed as time progresses, see Fig.~\ref{fig4}. This typically increases the amount of excitations, which is in agreement with what has been found in earlier studies~\cite{fubini2007robustness,cincio2009dynamics,dutta2016anti}. Thus, this observation provides additional insight into those processes. We did not analyze the regime of very large $\kappa$ in this work. We have, however, explored this limit numerically, and we found that the evolution can enter a `quantum Zeno' regime which is conceptually different from the results presented in this work. In the field of quantum coherent control, exploiting the Zeno effect has been analyzed in the past~\cite{maniscalco2008protecting,scala2010stimulated,mathisen2016view}. A more thorough study of this phenomenon by extending it to NESS critical models is left for the future.

With the open LZ problem, analyzed in Sec.~\ref{ssec4A}, we verified the applicability of the KZM to such an open quantum system. When taking into account that the rate-of-change parameter $\alpha$ must be dressed with a $\kappa$-dependence, we found very good agreement between the results predicted from the KZM and those obtained from direct numerical integration, see Fig.~\ref{fig6}. The explicit $\kappa$-dependence was not rigorously proven, but motivated both numerically and from arguments based on the form of the Liouvillian matrix $\mathbf{M}(t)$ of Eq.~(\ref{mk}). The KZ scaling for a model supporting a true critical point was demonstrated in Sec.~\ref{ssec4B} for the transverse Ising model exposed to energy dephasing.
Critical exponents were extracted numerically, and we indeed found a slight $\kappa$-dependence, which implies that the openness of the problem may alter the Ising universality class. The shift in the exponents resulted in a slight change also in the correlation length, as exhibited in Fig.~\ref{fig9}.

There are several open questions to be addressed, and we see many possible future directions. On a more general level, establishing the scaling of the Liouvillian spectrum in the vicinity of the critical point is certainly an important issue. For example, it was recently shown that a continuous PT is possible for an open quantum system where the steady state is unique, i.e. there is no symmetry breaking accompanying the PT~\cite{hannukainen2017dissipation}. A related question, relevant also for the present work, concerns the types of excitations in NESS critical systems. For closed systems we know that the symmetries and dimensions determine the character of excitations, e.g. vortices, domain walls, and waves. For Lindblad master equations, on the other hand, symmetries and conserved quantities are not necessarily linked~\cite{albert2014symmetries,albert2016geometry}. In the present work the two studied examples belong to Class I, i.e. representing energy dephasing, where the type of excitations is not expected to change in comparison to the closed case. It would therefore definitely be interesting to analyze the other classes as well, especially since then there need not be any clear link between the steady state and the Hamiltonian eigenstates. A difficulty here is that it seems that models in these classes become substantially more complicated, unless they are in some sense trivial. For our example of the Ising model we found that the excitations were domain walls (kinks), as for the closed system. We also saw that the correlation function behaved much the same as for the closed Ising model.
This suggests that it would be interesting to analyze other types of correlators, connected to quantum entanglement rather than the classical correlations studied here. In such situations we may expect more distinct differences between the open and closed cases. As a final remark, the classification scheme in Sec.~\ref{ssec2B} is limited to single loss channels, and it is unclear whether it makes sense to classify the general case or if there are simply too many qualitatively different scenarios or classes.

\begin{acknowledgements}
We thank Irina Dumitru and Thomas Kvorning for helpful discussions. The Knut and Alice Wallenberg foundation (KAW) and the Swedish research council (VR) are acknowledged for financial support.
\end{acknowledgements}
Marmoutier () is a commune in northeastern France, in the Grand Est region (formerly Alsace-Champagne-Ardenne-Lorraine), department of Bas-Rhin, arrondissement of Saverne, canton of Saverne. Until March 2015 the commune was the administrative center of the eponymous, now abolished, canton (arrondissement of Saverne). The area of the commune is 14.07 km², and its population is 2,657 (2006), with an upward trend: 2,702 (2013); the population density is 192.0 inhabitants/km².

Population

The population of the commune was 2,779 in 2011, 2,737 in 2012, and 2,702 in 2013.

Population dynamics:

Economy

In 2010, among the 1,853 people of working age (15 to 64 years), 1,358 were economically active and 495 were inactive (activity rate 73.3%, compared with 71.8% in 1999). Of the 1,358 active working-age residents, 1,231 were employed (632 men and 599 women) and 127 were registered as unemployed (60 men and 67 women). Among the 495 inactive people of working age, 150 were pupils or students, 170 were retirees, and another 175 were inactive for other reasons.

Sights (photo gallery)

Notes

External links
\section{Algorithms}
\label{sec:algorithms}
In this section, we develop algorithms for accelerated minimization of strongly quasar-convex functions and quasar-convex functions, respectively, and analyze their running times in terms of the number of function and gradient evaluations required.

\subsection{Strongly Quasar-Convex Minimization}
\label{sec:strongly-quasar-convex}
First, we provide and analyze our algorithm for $(\gamma, \mu)$-strongly quasar-convex function minimization, where $\mu > 0$. The algorithm (Algorithm~\ref{alg:strongly_agd}) is a carefully constructed instance of the general AGD framework (Algorithm~\ref{alg:agd}). As in the general AGD framework, the algorithm maintains two current points denoted $x^{(k)}$ and $v^{(k)}$ and at each step appropriately selects $y^{(k)} = \alpha^{(k)} x^{(k)} + (1 - \alpha^{(k)}) v^{(k)}$ as a convex combination of these two points.

Intuitively, the algorithm iteratively seeks to decrease quadratic upper and lower bounds on the function value. $L$-smoothness of $f$ implies for all $x,y\in \reals^n$ that $f(x) \leq U_y(x) \triangleq f(y) + {\nabla f(y)^\top (x-y)} + \frac{L}{2} \norm{x - y}^2$; we set $x^{(k+1)}$ to be the minimizer $y^{(k)} - \ff{1}{L} \nabla f(y^{(k)})$ of the upper bound $U_{y^{(k)}}$. Similarly, by $(\gamma, \mu)$ quasar-convexity, $f(x) \ge f(x^*) \ge \min_z L_y(z)$ for all $x,y \in \reals^n$, where $L_y(x) \triangleq f(y) + \frac{1}{\gamma} \nabla f(y)^\top (x - y) + \frac{\mu}{2} \norm{x-y}^2$. The minimizer of the lower bound $L_{y^{(k)}}$ is $y^{(k)} - \frac{1}{\gamma \mu} \nabla f(y^{(k)})$; we set $v^{(k+1)}$ to be a convex combination of $v^{(k)}$ and the minimizer of $L_{y^{(k)}}$.
\begin{algorithm}[H] \caption{Accelerated Strongly Quasar-Convex Function Minimization} \label{alg:strongly_agd} \SetAlgoLined \SetKwInOut{Input}{input} \Input{ $L$-smooth $f : \reals^n \rightarrow \reals$ that is $(\gamma,\mu)$-strongly quasar-convex, with $\mu > 0$} \Input{ Initial point $x^{(0)} \in \reals^n$, number of iterations $K$, solution tolerance $\epsilon > 0$} Return output of \Cref{alg:agd} on $f$ with initial point $x^{(0)}$ and parameter $\beta = 1 - \gamma{\sqrt{\ff{\mu}{L}}}$, \vskip 0ex where for all $k$, $\eta^{(k)} = \eta \triangleq \frac{1}{\sqrt{\mu L}}$, \vskip 0ex and $\alpha^{(k)} = \texttt{BinaryLineSearch}(f, x^{(k)}, v^{(k)}, L, b = \ff{1-\beta}{2\eta}, c = \ff{L\eta-\gamma}{\beta}, \tilde{\epsilon} = 0)$ \textbf{if} $\beta > 0$ \textbf{else} 1. \end{algorithm} We leverage the analysis from \Cref{sec:acceleration_framework} to analyze \Cref{alg:strongly_agd}. First, in \Cref{lem:strong_converge} we show that the algorithm converges at the desired rate, by building off of \Cref{lem:agd_one_step} and using the specific parameter choices in \Cref{alg:strongly_agd}. \begin{restatable}[Strongly Quasar-Convex Convergence]{lem}{strongconverge} \label{lem:strong_converge} If $f$ is $L$-smooth and $(\gamma,\mu)$-strongly quasar-convex with minimizer $x^*$, $\gamma \in (0,1]$, and $\mu > 0$, then in each iteration $k \ge 0$ of Algorithm~\ref{alg:strongly_agd}, \begin{equation} \label{eq:strong_agd_one_step} \epsilon^{(k+1)} + \frac{\mu}{2} r^{(k+1)} \le \left(1 - \frac{1}{\gamma \sqrt{\kappa}}\right) \left[\epsilon^{(k)} + \frac{\mu}{2} r^{(k)} \right]~, \end{equation} where $\epsilon^{(k)} \triangleq f( x^{(k)} ) - f(x^*), r^{(k)} \triangleq \| v^{(k)} - x^* \|^2$, and $\k \triangleq \ff{L}{\mu}$. 
Therefore, if the number of iterations $K \ge \ceil{\frac{\sqrt{\kappa}}{\gamma} \log^{+}\left( \frac{3\epsilon^{(0)}}{\gamma\epsilon} \right)}$, then the output $x^{(K)}$ of Algorithm~\ref{alg:strongly_agd} satisfies $f( x^{(K)} ) \le f(x^*) + \epsilon$.
\end{restatable}
\newcommand{\strongconvergeProof}[1]{
\begin{proof}
For all $k$, $\eta^{(k)} = \eta = \ff{1}{\sqrt{\mu L}} \ge \sqrt{\ff{\gamma}{(2-\gamma) L^2}} \ge \ff{\gamma}{L}$ as required by \Cref{alg:agd}, since $\ff{(2-\gamma)L}{\gamma} \ge \mu > 0$ by \Cref{obs:l_vs_mu} and $\ff{x}{2-x} \ge x^2$ for all $x \in [0,1]$. Similarly, since $0 < \ff{\mu}{L} \le \ff{2-\gamma}{\gamma}$ and $\gamma \in (0, 1]$, we have $0 < \ff{\gamma}{\sqrt{\k}} = \gamma \sqrt{\ff{\mu}{L}} \le \sqrt{\gamma(2-\gamma)} \le 1$, meaning that $\beta \in [0, 1)$. Additionally, by construction, either $\beta = 0$ and $\alpha^{(k)} = 1$, or $\beta > 0$, $\alpha^{(k)} \in [0,1]$, and $(\alpha,x,y_{\alpha},v) = (\alpha^{(k)},x\ind{k},y\ind{k},v\ind{k})$ satisfies \eqref{eq:ak_existence_2} with $b = \ff{1-\beta}{2\eta^{(k)}}$, $c = \ff{L\eta^{(k)}-\gamma}{\beta}$, $\tilde{\epsilon} = 0$. Consequently, by combining Lemmas \ref{lem:agd_one_step} and \ref{lem:agd_linesearch}, for each iteration $k \ge 0$ of \Cref{alg:strongly_agd} we have
\[
2 \eta ^2 L \epsilon^{(k+1)} + r^{(k+1)} \leq \beta r^{(k)} + \left[(1 - \beta) - \gamma \mu \eta \right] r^{(k)}_y + 2 \eta \left[ L \eta - \gamma \right] \epsilon^{(k)} + 2\beta\eta\tilde{\epsilon}
\]
Substituting in $\eta = \frac{1}{\sqrt{\mu L}} = \frac{1 - \beta}{\gamma \mu}$ and $\tilde{\epsilon} = 0$, this implies that
\[
\frac{2}{\mu} \epsilon^{(k+1)} + r^{(k+1)} \leq \beta r^{(k)} + \frac{2}{\sqrt{\mu L}} \left[ \sqrt{\frac{L}{\mu}}- \gamma \right] \epsilon^{(k)} = \beta \left[r^{(k)} + \frac{2}{\mu} \epsilon^{(k)} \right] ~.
\]
Multiplying by $\mu / 2$ and using the definition of $\beta$ as $1-\ff{\gamma}{\sqrt{\k}}$ yields \eqref{eq:strong_agd_one_step}.
Now, by \eqref{eq:strong_agd_one_step} and induction, \[ \epsilon^{(k)} + \frac{\mu}{2} r^{(k)} \le \left(1 - \frac{\gamma}{\sqrt{\kappa}} \right)^k \left[\epsilon^{(0)} + \frac{\mu}{2} r^{(0)}\right] \le \exp \left( - \frac{k \gamma}{\sqrt{\kappa}} \right) \left[\epsilon^{(0)} + \frac{\mu}{2} r^{(0)}\right] ~. \] Therefore, whenever $k \ge \frac{\sqrt{\kappa}}{\gamma} \log^{+}\left( \frac{\epsilon^{(0)} + \frac{\mu}{2} r^{(0)} }{\epsilon} \right)$ we have $\epsilon^{(k)} = f( x^{(k)}) - f(x^*) \le \epsilon$, as $r^{(k)} \ge 0$ always. By \Cref{rem:distbound}, $\ff{2\epsilon^{(0)}}{\gamma} \ge \frac{\mu}{2} r^{(0)}$, so it suffices to run $k \ge \ceil{\frac{\sqrt{\k}}{\gamma} \log^{+} \left( \ff{3\epsilon^{(0)}}{\gamma\epsilon} \right)}$ iterations. \end{proof} } \strongconvergeProof Note that when $f$ is $(1, \mu)$-strongly quasar-convex with $\mu > 0$, \Cref{lem:strong_converge} implies that the number of \textit{iterations} \Cref{alg:strongly_agd} needs to find an $\epsilon$-approximate minimizer of $f$ is of the same order as the number of iterations required by standard AGD to find an $\epsilon$-approximate minimizer of a $\mu$-strongly convex function \citen{Nesterov04}. In each iteration of \Cref{alg:strongly_agd}, we compute $\alpha^{(k)}$ and then simply perform $O(1)$ vector operations to compute $y^{(k)}$, $x^{(k+1)}$, and $v^{(k+1)}$. Consequently, to obtain a complete bound on the overall complexity of \Cref{alg:strongly_agd}, all that remains is to bound the cost of computing $\alpha^{(k)}$, which we do using \Cref{lem:linesearch}. This leads to \Cref{thm:strong_runtime}. 
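A compact numerical sketch of the iteration may help fix ideas. The snippet below implements the updates described above with the parameter choices of Algorithm~\ref{alg:strongly_agd} ($\eta = 1/\sqrt{\mu L}$, $\beta = 1-\gamma\sqrt{\mu/L}$), but with \texttt{BinaryLineSearch} replaced by a crude fixed weight $\alpha = 1/2$. This replacement is an illustrative shortcut, not the algorithm analyzed above, and it is exercised here only on a strongly convex quadratic (which is $(1,\mu)$-strongly quasar-convex).

```python
import numpy as np

def agd_strongly_quasar(grad, x0, L, mu, gamma=1.0, iters=200, alpha=0.5):
    """Sketch of the accelerated iteration: the x-step minimizes the
    smoothness upper bound U_y, and the v-step mixes v with the minimizer
    of the quasar-convex lower bound L_y.  alpha is a fixed weight here,
    standing in for the BinaryLineSearch of Algorithm 2."""
    eta = 1.0 / np.sqrt(mu * L)
    beta = 1.0 - gamma * np.sqrt(mu / L)
    x = np.array(x0, dtype=float)
    v = x.copy()
    for _ in range(iters):
        y = alpha * x + (1.0 - alpha) * v
        g = grad(y)
        x = y - g / L                                         # argmin of U_y
        v = beta * v + (1.0 - beta) * (y - g / (gamma * mu))  # mix toward argmin of L_y
    return x

# strongly convex quadratic test problem f(x) = 0.5 x^T A x
A = np.diag([1.0, 10.0])
x_final = agd_strongly_quasar(lambda z: A @ z, [1.0, 1.0], L=10.0, mu=1.0)
```

With $\gamma = 1$ this reduces to a standard momentum scheme for $\mu$-strongly convex functions; for $\gamma < 1$ the lower-bound minimizer takes the longer step $\nabla f(y)/(\gamma\mu)$, consistent with $(1-\beta)/(\gamma\mu) = \eta$.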
\begin{restatable}{thm}{strongruntime} \label{thm:strong_runtime} If $f$ is $L$-smooth and $(\gamma,\mu)$-strongly quasar-convex with $\gamma \in (0,1]$ and $\mu > 0$, then \Cref{alg:strongly_agd} produces an $\epsilon$-optimal point after $O\left( \gamma^{-1}\k^{1/2} \log \left( \gamma^{-1}\k \right) \log^{+} \left( \ff{f( x^{(0)} ) - f(x^*)}{\gamma\epsilon} \right) \right)$ function and gradient evaluations. \end{restatable} \newcommand{\strongruntimeProof}[1]{ \begin{proof} \Cref{lem:strong_converge} implies that $O\left( \ff{\sqrt{\k}}{\gamma} \log^+ \left( \ff{\epsilon^{(0)}}{\gamma\epsilon} \right) \right)$ iterations are needed to get an $\epsilon$-optimal point. Lemma \ref{lem:linesearch} implies that each iteration uses $O \left( \log^+ \left( (1+c) \min \left\{\ff{L\norm{x-v}^2}{\tilde{\epsilon}}, \ff{L^3}{b^3} \right\} \right) \right)$ function and gradient evaluations. In this case, $b = \ff{1-\beta}{2\eta} = \ff{\gamma \sqrt{\mu / L}}{2 / \sqrt{\mu L}} = \ff{\gamma \mu}{2}$, $c = \ff{L\eta - \gamma}{\beta} = \ff{\sqrt{L / \mu} - \gamma}{1 - \gamma\sqrt{\mu / L}} = \sqrt{\k} \ge \sqrt{\ff{\gamma}{2}}$, and $\tilde{\epsilon} = 0$. Thus, this reduces to $O(\log^+ ( \sqrt{\k} \ff{L^3}{\gamma^3 \mu^3} )) = O(\log(\ff{\k}{\gamma}))$. So, the total number of required function and gradient evaluations is $O\left( \ff{\sqrt{\k}}{\gamma} \log \left( \ff{\k}{\gamma} \right) \log^+ \left( \ff{\epsilon^{(0)}}{\gamma\epsilon} \right) \right)$ as claimed. Note that \Cref{lem:strong_converge} shows that $x\ind{k}$ will be $\epsilon$-optimal if $k = \ceil{\frac{\sqrt{\kappa}}{\gamma} \log^{+}\left( \frac{3\epsilon^{(0)}}{\gamma\epsilon} \right)}$, while the above argument shows that $O\left( \ff{\sqrt{\k}}{\gamma} \log \left( \ff{\k}{\gamma} \right) \log^+ \left( \ff{\epsilon^{(0)}}{\gamma\epsilon} \right) \right)$ function and gradient evaluations are required to compute such an $x\ind{k}$. 
Thus, \Cref{alg:strongly_agd} produces \textit{an} $\epsilon$-optimal point using at most this many evaluations; however, the algorithm does not terminate early, and will continue to run if the specified number of iterations $K$ is larger. (Future iterates will also be $\epsilon$-optimal.)
\end{proof}
}
\strongruntimeProof
Standard AGD on $L$-smooth $\mu$-strongly-convex functions requires $O\left( \k^{1/2} \log^{+} \left( \ff{f( x^{(0)} ) - f(x^*)}{\epsilon} \right) \right)$ function and gradient evaluations to find an $\epsilon$-optimal point \citen{Nesterov04}. Thus, as the class of $L$-smooth $(1,\mu)$-strongly quasar-convex functions contains the class of $L$-smooth $\mu$-strongly convex functions, our algorithm requires only an $O(\log(\k))$ factor more function and gradient evaluations in the smooth strongly convex case, while also being able to efficiently minimize a much broader class of functions than standard AGD.
\subsection{Non-Strongly Quasar-Convex Minimization}
\label{sec:quasar-convex}
Now, we provide and analyze our algorithm (\Cref{alg:nonstrong_agd}) for \textit{non-strongly} quasar-convex function minimization, i.e. when $\mu = 0$. Once again, this algorithm is an instance of \Cref{alg:agd}, the general AGD framework, with a different choice of parameters. We assume $L > 0$, since otherwise quasar-convexity implies the function is constant and thus trivial to minimize.
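The step sizes in \Cref{alg:nonstrong_agd} below are driven by a scalar sequence $\omega^{(k)}$. The following throwaway script (not part of the paper) numerically checks the sandwich bounds $\ff{1}{k+2} \le \omega^{(k)} \le \ff{4}{k+6}$ and the monotonicity established in \Cref{sec:nonstrong-analysis}:

```python
import math

def omega_sequence(K):
    # omega^(-1) = 1; omega^(k) = (omega^(k-1)/2) * (sqrt((omega^(k-1))^2 + 4) - omega^(k-1)).
    w, out = 1.0, []
    for _ in range(K):
        w = 0.5 * w * (math.sqrt(w * w + 4.0) - w)
        out.append(w)
    return out

ws = omega_sequence(1000)
# Sandwich bounds from the sublemmas: 1/(k+2) <= omega^(k) <= 4/(k+6).
assert all(1.0 / (k + 2) <= w <= 4.0 / (k + 6) for k, w in enumerate(ws))
# The sequence lies in (0, 1) and is strictly decreasing.
assert all(b < a for a, b in zip(ws, ws[1:])) and 0.0 < ws[-1] < ws[0] < 1.0
```

The same recurrence appears verbatim in the algorithm's step-size rule $\eta^{(k)} = \ff{\gamma}{L\omega^{(k)}}$.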
\begin{algorithm}[H] \caption{Accelerated Non-Strongly Quasar-Convex Function Minimization} \label{alg:nonstrong_agd} \SetAlgoLined \SetKwInOut{Input}{input} \Input{ $L$-smooth $f : \reals^n \rightarrow \reals$ that is $\gamma$-quasar-convex} \Input{ Initial point $x^{(0)} \in \reals^n$, number of iterations $K$, solution tolerance $\epsilon > 0$} Define $\omega^{(-1)} = 1$, $\omega^{(k)} = \ff{1}{2} \left( \omega^{(k-1)}\left(\sqrt{(\omega^{(k-1)})^2 + 4}-\omega^{(k-1)}\right)\rt$ for $k \ge 0$ \; Return output of \Cref{alg:agd} on $f$ with initial point $x^{(0)}$ and parameter $\beta = 1$, \vskip 0ex where for all $k$, $\eta^{(k)} = \frac{\gamma}{L \omega^{(k)}}$, \vskip 0ex and $\alpha^{(k)} = \texttt{BinaryLineSearch}(f, x^{(k)}, v^{(k)}, L, b = 0, c = [L\eta^{(k)}-\gamma], \tilde{\epsilon} = \ff{\gamma\epsilon}{2})$. \end{algorithm} \begin{restatable}[Non-Strongly Quasar-Convex AGD Convergence]{lem}{nonstrong} \label{lem:nonstrong_converge} If $f$ is $L$-smooth and $\gamma$-quasar-convex with respect to a minimizer $x^*$, with $\gamma \in (0,1]$, then in each iteration $k \ge 0$ of Algorithm~\ref{alg:nonstrong_agd}, \begin{equation} \label{eq:nonstrong_agd_one_step} \epsilon^{(k)} \leq \f{8}{(k+2)^2} \left[\epsilon^{(0)} + \frac{L}{2\gamma^2} r^{(0)} \right] + \f{\epsilon}{2}~, \end{equation} where $\epsilon^{(k)} \triangleq f( x^{(k)} ) - f(x^*)$ and $r^{(k)} \triangleq \norm{v^{(k)} - x^*}^2$. Therefore, if $R \ge \norm{x^{(0)} - x^*}$ and the number of iterations $K \ge \floor{4\gamma^{-1}L^{1/2}R\epsilon^{-1/2}}$, then the output $x^{(K)}$ of Algorithm~\ref{alg:nonstrong_agd} satisfies ${f(x^{(K)}) \le f(x^*) + \epsilon}$. 
\end{restatable} Combining the bound on the number of iterations from \Cref{lem:nonstrong_converge}, and the bound from \Cref{lem:linesearch} on the number of function and gradient evaluations during the line search, leads to the bound in \Cref{thm:nonstrong_runtime} on the total number of function and gradient evaluations required to find an $\epsilon$-optimal point. The proofs of \Cref{lem:nonstrong_converge} and \Cref{thm:nonstrong_runtime} are given in \Cref{sec:nonstrong-analysis}. \begin{restatable}{thm}{nonstrongruntime} \label{thm:nonstrong_runtime} If $f$ is $L$-smooth and $\gamma$-quasar-convex with respect to a minimizer $x^*$, with $\gamma \in (0,1]$ and $\norm{x^{(0)} - x^*} \le R$, then \Cref{alg:nonstrong_agd} produces an $\epsilon$-optimal point after \newline $O\left( \gamma^{-1} L^{1/2}R\epsilon^{-1/2} \log^{+}\left( \gamma^{-1} L^{1/2}R \epsilon^{-1/2}\right) \right)$ function and gradient evaluations. \end{restatable} Note that standard AGD on the class of $L$-smooth \textit{convex} functions requires $O\left( L^{1/2}R\epsilon^{-1/2} \right)$ function and gradient evaluations to find an $\epsilon$-optimal point; so, again, our algorithm requires only a logarithmic factor more evaluations than does standard AGD. \section{Algorithm analysis} \label{sec:analysis-lemmas} Here, we provide missing proofs for Sections \ref{sec:acceleration_framework}-\ref{sec:algorithms}. \subsection{Line search analysis} \input{linesearch-proofs.tex} \subsection{Quasar-convex algorithm analysis} \label{sec:nonstrong-analysis} \begin{lem} Suppose $\omega^{(-1)} = 1$ and $\omega^{(k)} = \ff{1}{2} \left( \omega^{(k-1)} \left( \sqrt{\left( \omega^{(k-1)}\right)^2+4}-\omega^{(k-1)}\right) \right)$ for $k \ge 0$. In the following sub-lemmas, we prove various simple properties of this sequence: \begin{sublemma} \label{lem:sk} $\omega^{(k)} \le \f{4}{k+6}$ for all $k \ge 0$. 
\end{sublemma}
\end{lem}
\begin{proof}
The case $k = 0$ is clearly true as $\omega^{(0)} = \ff{\sqrt{5}-1}{2} < \ff{2}{3}$. Suppose that $\omega^{(i-1)} \le \f{4}{i+5}$ for some $i \ge 1$. By definition, $\omega^{(i)} = \f{\omega^{(i-1)}}{2} \left( \sqrt{\left(\omega^{(i-1)}\right)^2 + 4} - \omega^{(i-1)} \right)$. Using the fact that $\sqrt{x^2 + 1} \le 1+\ff{x^2}{2}$ for all $x$ and the fact that $\omega^{(i-1)} \in (0,1)$, \[\omega^{(i)} \le \f{\omega^{(i-1)}}{2} \left( 2 - \omega^{(i-1)} + \f{\left(\omega^{(i-1)}\right)^2}{2} \right) \le \omega^{(i-1)}\left( 1- \f{\omega^{(i-1)}}{4}\right).\] If $y > 0$, then $x(1-\ff{x}{4}) < \ff{4}{y+1}$ for all $0 \le x \le \ff{4}{y}$. Thus, setting $y = i+5$ yields that $\omega^{(i)} \le \ff{4}{i+6}$ by the inductive hypothesis.
\end{proof}
\begin{sublemma}
\label{lem:sk2}
$\omega^{(k)} \ge \f{1}{k+2}$ for all $k \ge 0$.
\end{sublemma}
\begin{proof}
The case $k = 0$ is clearly true as $\omega^{(0)} = \ff{\sqrt{5}-1}{2} > \ff{1}{2}$. Suppose that $\omega^{(i-1)} \ge \f{1}{i+1}$ for some $i \ge 1$. Observe that the function $h(x) = \ff{1}{2}(x(\sqrt{x^2+4}-x))$ is increasing for all $x$. Therefore, $\omega^{(i)} = h(\omega^{(i-1)}) \ge h(\ff{1}{i+1}) = \ff{1}{2(i+1)} \left( \sqrt{\ff{1}{(i+1)^2} + 4} - \ff{1}{i+1}\right) = \ff{1}{2(i+1)^2}\left( \sqrt{4(i+1)^2+1}-1\right)$. Now, it just remains to show that $\sqrt{4x^2+1} \ge \f{2x^2}{x+1}+1$ for all $x \ge 0$. To prove this, note that $4x^2(x+1)^2 = 4x^4 + 8x^3 + 4x^2$, so \[4x^2 + 1 = \f{4x^4 + 8x^3 + 4x^2}{(x+1)^2} + 1 \ge \f{4x^4 + 4x^3 + 4x^2}{(x+1)^2} + 1 = \left( \f{2x^2}{x+1} + 1\right)^2~.\] Thus, \[\omega^{(i)} \ge \f{1}{2(i+1)^2}\left( \sqrt{4(i+1)^2+1}-1\right) \ge \f{1}{2(i+1)^2} \cdot \f{2(i+1)^2}{(i+2)} = \f{1}{i+2}~.\]
\end{proof}
\begin{sublemma}
\label{lem:wint}
$\omega^{(k)} \in (0,1)$ for all $k \ge 0$. Additionally, $\omega^{(k)} < \omega^{(k-1)}$ for all $k \ge 0$.
\end{sublemma}
\begin{proof}
The fact that $\omega^{(k)} > 0$ follows from \Cref{lem:sk2}.
To show the rest, we simply observe that $\ff{1}{2}(\sqrt{x^2+4}-x) < \ff{2}{2} = 1$ for all $x > 0$; as $\omega^{(-1)} = 1$ and $\omega^{(k)} = \ff{1}{2}(\sqrt{(\omega\ind{k-1})^2+4}-\omega\ind{k-1}) \cdot \omega\ind{k-1}$ for all $k \ge 0$, the result follows.
\end{proof}
\begin{sublemma}
Define $s^{(k)} = 1+\su{i=0}{k-1} \f{1}{\omega^{(i)}}$. Then, $\left( s^{(k)}\right)^{-1} \le \f{8}{(k+2)^2}$ for all $k \ge 0$.
\end{sublemma}
\begin{proof}
Applying \Cref{lem:sk}, $s^{(k)} \ge 1 + \su{i=0}{k-1} \left( \f{i+6}{4} \right) = \f{k(k+11)+8}{8} \ge \f{k(k+4)+4}{8} = \ff{1}{8}(k+2)^2$, and so $\left( s^{(k)} \right)^{-1} \le \f{8}{(k+2)^2}$.
\end{proof}
{\renewcommand\footnote[1]{}\nonstrong*}
\begin{proof}
\,
In the non-strongly quasar-convex case, $\mu = 0$ and $\beta = 1$. For all $k$, $\eta^{(k)} = \ff{\gamma}{L\omega^{(k)}} \ge \ff{\gamma}{L}$ since $\omega^{(k)} \in (0,1)$ by \Cref{lem:wint}. Additionally, $\alpha^{(k)}$ is in $[0,1]$ and $(\alpha,x,y_{\alpha},v) = (\alpha^{(k)},x\ind{k},y\ind{k},v\ind{k})$ satisfies \eqref{eq:ak_existence_2} with $b = \ff{1-\beta}{2\eta^{(k)}} = 0$, $c = \ff{L\eta^{(k)}-\gamma}{\beta} = L\eta^{(k)}-\gamma$ by construction. Lemmas \ref{lem:agd_one_step} and \ref{lem:agd_linesearch} thus imply that for all $k \ge 0$,
\begin{equation}
\label{eq:nonstrong_onestep}
2 ( \eta^{(k)})^2 L \epsilon^{(k+1)} + r^{(k+1)} \le r^{(k)} + 2\eta^{(k)} \left( L\eta^{(k)} - \gamma \right) \epsilon^{(k)} + 2\eta^{(k)} \tilde{\epsilon}~.
\end{equation}
Define $A^{(k)} \triangleq 2\left( \eta^{(k)}\right)^2 L - 2\eta^{(k)} \gamma$. So, $(A^{(k)} + 2 \eta^{(k)} \gamma)\epsilon^{(k+1)} + r^{(k+1)} \le A^{(k)} \epsilon^{(k)} + r^{(k)} + 2\eta^{(k)}\tilde{\epsilon}$. Notice that $(\omega^{(k+1)})^2 = (1-\omega^{(k+1)})(\omega^{(k)})^2$, since $\omega^{(k+1)}$ is by construction the positive root of the quadratic $x^2 + (\omega^{(k)})^2 x - (\omega^{(k)})^2 = 0$; also, $\omega^{(k)} \in (0,1)$ for all $k \ge 0$.
So,
\begin{align*}
A^{(k+1)} - (A^{(k)} + 2\eta^{(k)}\gamma) &= 2( \eta^{(k+1)})^2 L - 2\eta^{(k+1)} \gamma - 2( \eta^{(k)})^2 L \\
&= 2\left( \f{\gamma^2 L}{L^2 ( \omega^{(k+1)})^2} - \f{\gamma^2}{L \omega^{(k+1)}} - \f{\gamma^2 L}{L^2 ( \omega^{(k)})^2} \right) = \f{2\gamma^2}{L} \left( \f{1-\omega^{(k+1)}}{( \omega^{(k+1)})^2} - \f{1}{( \omega^{(k)})^2} \right) = 0~.
\end{align*}
So, $A^{(k+1)} = A^{(k)} + 2 \eta^{(k)} \gamma = 2(\eta^{(k)})^2L$. \newline Also, $A^{(0)} = 2( \eta^{(0)})^2 L - 2 \eta^{(0)} \gamma = 2\f{\gamma^2}{L ( \omega^{(0)})^2} - 2\f{\gamma^2}{L \omega^{(0)}} = \f{2\gamma^2}{L}$ as $\omega^{(0)} = \f{\sqrt{5}-1}{2}$. Thus, by induction on $k$, $A^{(k)} = \f{2\gamma^2}{L} + 2\gamma \su{i=0}{k-1} \eta^{(i)} = \f{2\gamma^2}{L} s^{(k)}$, where $s^{(k)} \triangleq \left( 1 + \su{i=0}{k-1} \f{1}{\omega^{(i)}} \right)$. From \eqref{eq:nonstrong_onestep} and the fact that $A^{(k+1)} = 2(\eta^{(k)})^2L$, we have
\begin{equation}
\label{eq:nonstrong_induct}
A^{(k)} \epsilon^{(k)} + r^{(k)} \le A^{(k-1)} \epsilon^{(k-1)} + r^{(k-1)} + 2\eta^{(k-1)} \tilde{\epsilon} \le \dots \le A^{(0)} \epsilon^{(0)} + r^{(0)} + 2\tilde{\epsilon}\su{i=0}{k-1}\eta^{(i)}.
\end{equation}
So, as $r^{(k)} \ge 0$,
\begin{align*}
\epsilon^{(k)} &\le (A^{(k)})^{-1} \left( A^{(0)} \epsilon^{(0)} + r^{(0)} \right) + 2(A^{(k)})^{-1} \tilde{\epsilon}\su{i=0}{k-1}\eta^{(i)} \\
&= \f{L}{2\gamma^2} (s^{(k)})^{-1} \left( \f{2\gamma^2}{L} \epsilon^{(0)} + r^{(0)} \right) + \left( 2\gamma \left( \f{\gamma}{L} + \su{i=0}{k-1} \eta^{(i)}\right)\rt^{-1} \left( 2\tilde{\epsilon} \su{i=0}{k-1} \eta^{(i)}\right) \\
&\le (s^{(k)})^{-1} \left( \epsilon^{(0)} + \f{L}{2\gamma^2}r^{(0)}\right) + \gamma^{-1} \tilde{\epsilon}~.
\end{align*}
Now, $\tilde{\epsilon} = \f{\gamma \epsilon}{2}$ by definition and $\left( s^{(k)}\right)^{-1} \le \f{8}{(k+2)^2}$ by the final sublemma above, which proves the bound on $\epsilon^{(k)}$.
For the iteration bound, we simply require $K$ large enough such that $\ff{8}{(K+2)^2}\left( \epsilon^{(0)} + \ff{L}{2\gamma^2}r^{(0)}\right) \le \ff{\epsilon}{2}$. Observe that as $f(x^{(0)}) \le f(x^*) + \ff{L}{2}\norm{x^{(0)}-x^*}^2$ by \Cref{fact:smooth_decr}, $\epsilon^{(0)} \le \ff{L}{2} r^{(0)} \le \ff{L}{2\gamma^2} r^{(0)}$. So, it suffices to have $\ff{8}{(K+2)^2} \left( \ff{L}{\gamma^2}r^{(0)}\right) \le \ff{\epsilon}{2}$. Rearranging, it suffices to have $K+2 \ge 4\gamma^{-1}L^{1/2}R\epsilon^{-1/2}$, as $r^{(0)} \le R^2$. As $K$ must be a nonnegative integer, it suffices to have $K \ge \floor{4\gamma^{-1}L^{1/2}R\epsilon^{-1/2}}$.
\end{proof}
\nonstrongruntime*
\begin{proof}
\Cref{lem:nonstrong_converge} implies $O(\gamma^{-1}L^{1/2}R\epsilon^{-1/2})$ iterations are needed to get an $\epsilon$-optimal point. Lemma \ref{lem:linesearch} implies that each line search uses $O \left( \log^+ \left( (1+c) \min \left\{ \ff{L\norm{x\ind{k}-v\ind{k}}^2}{\tilde{\epsilon}}, \ff{L^3}{b^3} \right\} \right) \right)$ function and gradient evaluations. In this case, $b = 0$, $c = L\eta\ind{k} - \gamma = \gamma \left( \ff{1}{\omega^{(k)}} - 1 \right)$, and $\tilde{\epsilon} = \ff{\gamma \epsilon}{2}$. By Lemmas \ref{lem:sk2} and \ref{lem:wint}, $1 < \ff{1}{\omega^{(k)}} \le k+2$ for all $k \ge 0$. Thus, the number of function and gradient evaluations required for the line search at iteration $k$ of \Cref{alg:nonstrong_agd} is $O\left( \log^+ \left( (\gamma k+1) \ff{L\norm{x^{(k)}-v^{(k)}}^2}{\gamma \epsilon} \right)\rt$. Now, we bound $\norm{x\ind{k} - v\ind{k}}^2$. To do so, we first bound $\norm{v\ind{k} - x^*}^2 = r\ind{k}$. Recall that equation \eqref{eq:nonstrong_induct} in the proof of \Cref{lem:nonstrong_converge} says that $A\ind{k}\epsilon\ind{k} + r\ind{k} \le A^{(0)} \epsilon^{(0)} + r^{(0)} + 2\tilde{\epsilon}\sum\limits_{i=0}^{k-1}\eta^{(i)}$, where $A\ind{j} \triangleq \ff{2\gamma^2}{L} \left( 1 + \sum\limits_{i=0}^{j-1} \ff{1}{\omega^{(i)}}\right)$.
As $A\ind{k}, \epsilon\ind{k} \ge 0$, this means that
\begin{align*} r\ind{k} \le A\ind{0}\epsilon\ind{0} + r\ind{0} + 2\tilde{\ep}\su{i=0}{k-1} \eta\ind{i} = \f{2\gamma^2}{L} \epsilon\ind{0} + r\ind{0} + \f{\gamma^2\epsilon}{L}\su{i=0}{k-1}\f{1}{\omega\ind{i}}~,\end{align*}
using that $\eta\ind{i} = \ff{\gamma}{L\omega\ind{i}}$, $\tilde{\ep} = \ff{\gamma\epsilon}{2}$, and $A\ind{0} = \ff{2\gamma^2}{L}$ (as previously shown in the proof of \Cref{lem:nonstrong_converge}). Now, by \Cref{lem:sk2} we have that $\sum\limits_{i=0}^{k-1}\ff{1}{\omega\ind{i}} \le \sum\limits_{i=0}^{k-1} (i+2) = \ff{k(k+3)}{2}$, and by $L$-smoothness of $f$ and \Cref{fact:smooth_decr} we have that $\epsilon\ind{0} \le \ff{L}{2}r\ind{0} \le \ff{L}{2\gamma^2}r\ind{0}$. Thus, for all $k \ge 1$, we have
\begin{align*} r\ind{k} \le 2r\ind{0} + \ff{\gamma^2\epsilon k(k+3)}{2L} \le 2 (R^2 + \ff{\gamma^2 \epsilon k^2}{L})~, \end{align*}
as $r\ind{0} \le R^2$ and $k+3 \le 4k$ for all $k \ge 1$. In fact, the above holds for $k = 0$ as well, because $r\ind{k}$ is simply $r\ind{0}$ in this case. By the triangle inequality, $\norm{v\ind{k} - v\ind{k-1}} \le \norm{v\ind{k}-x^*} + \norm{v\ind{k-1}-x^*} \le 2\sqrt{2(R^2 + \ff{\gamma^2\epsilon k^2}{L})}$. Since $\beta = 1$, we have that $v\ind{k} = v\ind{k-1} - \eta\ind{k-1} \nabla f(y\ind{k-1})$ and so $\norm{v\ind{k} - v\ind{k-1}} = \eta\ind{k-1} \norm{\nabla f(y\ind{k-1})}$. Thus,
\begin{equation} \label{eq:gradbound} \norm{\nabla f(y\ind{k-1})} \le (\eta\ind{k-1})^{-1} \cdot 2\sqrt{2(R^2 + \ff{\gamma^2\epsilon k^2}{L})} = L\omega\ind{k-1}\gamma^{-1}\sqrt{8(R^2 + \ff{\gamma^2\epsilon k^2}{L})}~.
\end{equation} Now, by definition of $x\ind{k}$, $v\ind{k}$, and $y\ind{k-1}$, \begin{align*} x\ind{k}-v\ind{k} &= y\ind{k-1} - \ff{1}{L} \nabla f(y\ind{k-1}) - v\ind{k} \\ &= \alpha\ind{k-1} x\ind{k-1} + (1-\alpha\ind{k-1})v\ind{k-1} - \ff{1}{L} \nabla f(y\ind{k-1}) - v\ind{k} \\ &= \alpha\ind{k-1} x\ind{k-1} + (1-\alpha\ind{k-1})v\ind{k-1} - \ff{1}{L} \nabla f(y\ind{k-1}) - \left( v\ind{k-1} - \eta\ind{k-1}\nabla f(y\ind{k-1})\right)\\ &= \alpha\ind{k-1} (x\ind{k-1}-v\ind{k-1}) + (\eta\ind{k-1}-\ff{1}{L})\nabla f(y\ind{k-1})~. \end{align*} Therefore, \begin{align*} \norm{x\ind{k}-v\ind{k}} &\le \alpha\ind{k-1} \norm{x\ind{k-1}-v\ind{k-1}} + \left|\eta\ind{k-1}-\ff{1}{L}\right| \cdot \norm{\nabla f(y\ind{k-1})} \\&\le \norm{x\ind{k-1}-v\ind{k-1}} + \left(\eta\ind{k-1}+\ff{1}{L}\right) \cdot \norm{\nabla f(y\ind{k-1})} \\&\le \norm{x\ind{k-1}-v\ind{k-1}} + \ff{2}{L\omega\ind{k-1}} \cdot \norm{\nabla f(y\ind{k-1})} \\&\le \norm{x\ind{k-1}-v\ind{k-1}} + \gamma^{-1}\sqrt{32(R^2 + \ff{\gamma^2\epsilon k^2}{L})} \\&\le \norm{x\ind{k-1}-v\ind{k-1}} + \sqrt{32}\gamma^{-1}\left( R + \gamma k\sqrt{\ff{\epsilon}{L}}\right)~, \end{align*} where the first inequality is the triangle inequality, the third inequality uses that $\eta\ind{k-1} = \ff{\gamma}{L\omega\ind{k-1}}$ and that $\gamma, \omega\ind{k-1} \in (0,1]$, the fourth inequality uses \eqref{eq:gradbound}, and the final inequality uses that $\sqrt{a+b} \le \sqrt{a}+\sqrt{b}$ for any $a,b \ge 0$. As this holds for all $k \ge 1$, we have by induction that for all $k \ge 0$, \begin{align*} \norm{x\ind{k}-v\ind{k}} \le \norm{x\ind{0}-v\ind{0}} + \su{j=1}{k} \sqrt{32}\gamma^{-1}\left( R + \gamma j\sqrt{\ff{\epsilon}{L}}\right) = \sqrt{32}\gamma^{-1}\su{j=1}{k} \left( R + \gamma j\sqrt{\ff{\epsilon}{L}}\right)~, \end{align*} since $x\ind{0}=v\ind{0}$. Simplification yields $\norm{x\ind{k}-v\ind{k}} \le \sqrt{32}k\gamma^{-1}R + \sqrt{8}k(k+1)\sqrt{\ff{\epsilon}{L}}$. 
For all $k \ge 1$, it is the case that $k+1 \le 2k$, so $\norm{x\ind{k}-v\ind{k}} \le \sqrt{32}\left( k\gamma^{-1}R + k^2\sqrt{\ff{\epsilon}{L}}\right)$; this inequality holds for $k = 0$ as well, as $\norm{x\ind{0}-v\ind{0}} = 0$ in this case. Suppose $k \le \floor{4\gamma^{-1}L^{1/2}R\epsilon^{-1/2}}$. Then
\begin{align*} \norm{x\ind{k}-v\ind{k}} &\le \sqrt{32}\left( 4\gamma^{-1}L^{1/2}R\epsilon^{-1/2} \cdot \gamma^{-1}R + 16\gamma^{-2}LR^2\epsilon^{-1}\cdot \sqrt{\ff{\epsilon}{L}}\right) \\ &= 80\sqrt{2} \cdot \gamma^{-2}L^{1/2}R^2\epsilon^{-1/2}~. \end{align*}
Recall that the line search at iteration $k$ requires $O\left( \log^+ \left( (\gamma k+1) \ff{L\norm{x^{(k)}-v^{(k)}}^2}{\gamma \epsilon} \right)\rt$ function and gradient evaluations. We thus have $(\gamma k+1) \ff{L\norm{x^{(k)}-v^{(k)}}^2}{\gamma \epsilon} \le (4L^{1/2}R\epsilon^{-1/2} + 1) \cdot 12800 (\gamma^{-5}L^{2}R^4\epsilon^{-2})$. Therefore, each line search indeed requires $O\left( \log^+ \left( \gamma^{-1} L^{1/2} R \epsilon^{-1/2} \right)\rt$ function and gradient evaluations. As the number of iterations is $O(\gamma^{-1}L^{1/2}R\epsilon^{-1/2})$, the total number of function and gradient evaluations required is thus $O \left( \gamma^{-1} L^{1/2}R \epsilon^{-1/2} \log^+ \left( \gamma^{-1} L^{1/2} R \epsilon^{-1/2} \right) \right)$, as claimed.
As in the strongly convex case, the algorithm may continue to run if the specified number of iterations $K$ is larger; however, this theorem combined with \Cref{lem:nonstrong_converge} shows that $x\ind{k}$ will be $\epsilon$-optimal if ${k = \floor{4\gamma^{-1}L^{1/2}R\epsilon^{-1/2}}}$, and this $x\ind{k}$ will be produced using $O \left( \gamma^{-1} L^{1/2}R \epsilon^{-1/2} \log^+ \left( \gamma^{-1} L^{1/2} R \epsilon^{-1/2} \right) \right)$ function and gradient evaluations. (Iterates $x\ind{k'}$ with $k' > \floor{4\gamma^{-1}L^{1/2}R\epsilon^{-1/2}}$ will also be $\epsilon$-optimal.)
\end{proof} \begin{rem} If $f$ is $L$-smooth and $\gamma$-quasar-convex with $\gamma \in (0,1]$ and $\norm{x^{(0)}-x^*} \le R$, then gradient descent with step size $\ff{1}{L}$ returns a point $x$ with $f(x) \le f(x^*) + \epsilon$ after $O\left( \gamma^{-1}LR^2 \epsilon^{-1} \right)$ function and gradient evaluations. \end{rem} \begin{proof} \,See Theorem 1 in \citen{guminov2017accelerated}. \end{proof} \section{Conclusion} In this work, we introduce a generalization of star-convexity called quasar-convexity and provide insight into the structure of quasar-convex functions. We show how to obtain a near-optimal accelerated rate for the minimization of any smooth function in this broad class, using a simple but novel binary search technique. In addition, we provide nearly matching theoretical lower bounds for the performance of any first-order method on this function class. Interesting topics for future research are to further understand the prevalence of quasar-convexity in problems of practical interest, and to develop new accelerated methods for other structured classes of nonconvex problems. \section{Quasar-Convex Minimization Framework} \label{sec:acceleration_framework} In this section, we provide and analyze a general algorithmic template for accelerated minimization of smooth quasar-convex functions. In \Cref{sec:strongly-quasar-convex} we show how to leverage this framework to achieve accelerated rates for minimizing \textit{strongly} quasar-convex functions, and in \Cref{sec:quasar-convex} we show how to achieve accelerated rates for minimizing \textit{non-strongly} quasar-convex functions (i.e. when $\mu = 0$). For simplicity, we assume the domain is $\reals^n$. Our algorithm (Algorithm~\ref{alg:agd}) is a simple generalization of accelerated gradient descent. 
Given a differentiable function $f : \reals^n \rightarrow \reals$ with smoothness parameter $L > 0$ and initial point $x^{(0)} = v^{(0)} \in \reals^n$, the algorithm iteratively computes points $x^{(k)}, v^{(k)} \in \reals^n$ of improving ``quality.'' However, it is challenging to argue that \Cref{alg:agd} actually performs optimally \textit{without the assumption of convexity}. The crux of circumventing convexity is to show that there exists a way to efficiently compute the momentum parameter $\alpha^{(k)}$ to yield convergence at the desired rate. In this section, we provide general tools for analyzing this algorithm; in \Cref{sec:algorithms}, we leverage this analysis with specific choices of the parameters $\alpha^{(k)}, \beta$, and $\eta^{(k)}$ to derive our fully-specified accelerated schemes for both quasar-convex and strongly quasar-convex functions.
\begin{algorithm}[H]
\caption{General AGD Framework}
\label{alg:agd}
\SetAlgoLined
\SetKwInOut{Input}{input}
\Input{ $L$-smooth $f: \reals^n \rightarrow \reals$, initial point $x^{(0)} \in \reals^n$, number of iterations $K$}
\BlankLine
Parameter $\beta \in [0,1]$ and sequences $\{\alpha^{(k)}\}_{k=0}^{K-1}$, $\{\eta^{(k)}\}_{k=0}^{K-1}$ are computed as defined by the
\vskip 0ex
particular algorithm instance, where $\alpha^{(k)} \in [0,1], \, \eta^{(k)} \ge \ff{\gamma}{L}$~.
\BlankLine
Set $v^{(0)} = x^{(0)}$ \;
\For{$k = 0,1,2,\dots,K-1$}
{
Set $y^{(k)} = \alpha^{(k)} x^{(k)} +(1-\alpha^{(k)})v^{(k)}$ \;
Set $x^{(k+1)} = y^{(k)} - \frac{1}{L} \nabla f( y^{(k)} )$ \;
Set $v^{(k+1)} = \beta v^{(k)} + (1-\beta) y^{(k)} - \eta^{(k)} \nabla f ( y^{(k)} )$ \;
}
\Return{ $x^{(K)}$}
\BlankLine
\end{algorithm}
We first define notation that will be used throughout Sections~\ref{sec:acceleration_framework} and \ref{sec:algorithms}:
\begin{definition}
Let $\epsilon^{(k)} \triangleq f( x^{(k)} ) - f(x^*), \epsilon^{(k)}_y \triangleq f( y^{(k)} ) - f(x^*), r^{(k)} \triangleq \norm{v^{(k)} - x^*}^2, \newline r^{(k)}_y \triangleq \norm{y^{(k)} - x^*}^2, Q^{(k)} \triangleq \beta \left( 2\eta^{(k)}\alpha^{(k)} \nabla f( y^{(k)} )^\top (x^{(k)} - v^{(k)}) - ( \alpha^{(k)})^2 (1-\beta) \norm{x^{(k)}-v^{(k)}}^2 \right)$.
\end{definition}
In the remainder of this section, we analyze Algorithm~\ref{alg:agd}. We assume that $f$ is $L$-smooth and $(\gamma, \mu)$-strongly quasar-convex (possibly with $\mu = 0$) with respect to a minimizer $x^*$. First, we use Lemma~\ref{lem:agd_one_step} to bound how much the function error of $x^{(k)}$ and the distance from $v^{(k)}$ to $x^*$ decrease at each iteration. To prove this lemma we use the following elementary fact (see \citen{Nesterov04} for proof).
\newcommand{\agdonestepFacts}{
\begin{fact}
\label{fact:smooth_decr}
If $f$ is $L$-smooth and $x = y - \frac{1}{L} \nabla f( y )$, then $f(x) \leq f( y ) - \frac{1}{2L} \norm{\nabla f( y )}^2$ for all $y$. Additionally, if $x^*$ is a minimizer of $f$, then $f(y) \le f(x^*) + \ff{L}{2}\norm{y-x^*}^2$ for all $y$.
\end{fact}
}
\agdonestepFacts
\begin{restatable}[One Step Framework Analysis]{lem}{agdonestep}
\label{lem:agd_one_step}
Suppose $f$ is $L$-smooth and $(\gamma,\mu)$-quasar-convex with respect to a minimizer $x^*$.
Then, in each iteration $k \ge 0$ of Algorithm~\ref{alg:agd} applied to $f$, it is the case that \[ 2 ( \eta^{(k)})^2 L \epsilon^{(k+1)} + r^{(k+1)} \leq \beta r^{(k)} + \left[(1 - \beta) - \gamma \mu \eta^{(k)}\right] r^{(k)}_y + 2 \eta^{(k)} \left[ L \eta^{(k)} - \gamma \right] \epsilon^{(k)}_y + Q^{(k)}. \] \end{restatable} \newcommand{\agdonestepProof}{ \begin{proof} Let $z^{(k)} \triangleq \beta v^{(k)} + {(1- \beta)}y^{(k)}$. Since $v^{(k+1)} = z^{(k)} - \eta^{(k)} \nabla f( y^{(k)} )$, direct algebraic manipulation yields that \begin{align} r^{(k+1)} &= \norm{v^{(k+1)}-x^*}^2 = \norm{z^{(k)} - x^* - \eta^{(k)} \nabla f( y^{(k)} )}^2 \nonumber \\ &= \norm{z^{(k)}-x^*}^2 + 2 \eta^{(k)} \nabla f( y^{(k)} )^\top (x^*-z^{(k)}) + ( \eta^{(k)})^2 \norm{\nabla f( y^{(k)} )}^2 ~. \label{eq:agd_framework_1} \end{align} Using the definitions of $z^{(k)}$ and $y^{(k)}$, we have \begin{align} \norm{z^{(k)}-x^*}^2 &= \beta \norm{ v^{(k)}-x^*}^2 + (1-\beta)\norm{y^{(k)}-x^*}^2 - \beta(1-\beta)\norm{v^{(k)}-y^{(k)}}^2 \nonumber \\ &= \beta r^{(k)} + (1-\beta) r^{(k)}_y - \beta(1-\beta)(\alpha^{(k)})^2\norm{v^{(k)}-x^{(k)}}^2~. \label{eq:agd_framework_2} \end{align} Further, since $v^{(k)} = y^{(k)} + \alpha^{(k)} (v^{(k)} - x^{(k)})$ and $z^{(k)} = \beta v^{(k)} + (1 - \beta)y^{(k)} = y^{(k)} + \alpha^{(k)} \beta (v^{(k)} - x^{(k)})$, it follows that \begin{equation} \label{eq:agd_framework_3} \nabla f( y^{(k)} )^\top (x^* - z^{(k)}) = \nabla f( y^{(k)} )^\top (x^* - y^{(k)}) + \alpha^{(k)} \beta \nabla f( y^{(k)} )^\top (x^{(k)} - v^{(k)}) ~. 
\end{equation}
Since $(\gamma, \mu)$-strong quasar-convexity of $f$ implies $-\epsilon^{(k)}_y \geq \frac{1}{\gamma} \nabla f( y^{(k)} )^\top (x^* - y^{(k)}) + \frac{\mu}{2} r^{(k)}_y$ and the definition of $x^{(k+1)}$ implies $\norm{\nabla f( y^{(k)} )}^2 \leq 2L [\epsilon^{(k)}_y - \epsilon^{(k+1)}]$ by \Cref{fact:smooth_decr}, combining with \eqref{eq:agd_framework_1}, \eqref{eq:agd_framework_2}, and \eqref{eq:agd_framework_3} yields the result.
\end{proof}
}
\agdonestepProof
\Cref{lem:agd_one_step} provides our main bound on how the error $\epsilon^{(k)}$ changes between successive iterations of Algorithm~\ref{alg:agd}. The key step necessary to apply this lemma is to relate $f(y^{(k)})$ and $\nabla f( y^{(k)} )^\top (x^{(k)} - v^{(k)})$ to $f(x^{(k)})$, in order to bound $Q\ind{k}$. In the standard analysis of accelerated gradient descent, convexity is used to obtain such a connection. In our algorithms, we instead perform binary search to compute the momentum parameter $\alpha^{(k)}$ for which the necessary relationship holds without assuming convexity. The following lemma shows that there always exists a setting of $\alpha^{(k)}$ that satisfies the necessary relationship.
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{images/lemma2.png}
\caption{Illustration of \Cref{lem:ak_existence}. $g(\alpha)$ is defined as in the proof of the lemma; here, we depict the case where $g(0) > g(1)$ and $g'(1) > 0$. The points highlighted in green satisfy inequality \eqref{eq:ak_existence}; the circled point has $g'(\alpha) = 0$ and $g(\alpha) \le g(1)$. Here $c = 10$.}
\label{fig:lem2}
\end{figure}
\begin{restatable}[Existence of ``Good'' $\alpha$]{lem}{akexistence}
\label{lem:ak_existence}
Let $f : \reals^n \rightarrow \reals$ be differentiable and let $x, v \in \reals^n$. For $\alpha \in \reals$ define $y_\alpha \triangleq \alpha x + (1 - \alpha) v$.
For any $c \ge 0$ there exists $\alpha \in [0, 1]$ such that \begin{equation} \label{eq:ak_existence} \alpha \nabla f(y_\alpha)^\top (x - v) \leq c \left[f(x) - f(y_\alpha) \right] ~. \end{equation} \end{restatable} \newcommand{\akexistproof}{ \begin{proof} Define $g(\alpha) \triangleq f(y_\alpha)$. Then for all $\alpha \in \reals$ we have $g'(\alpha) = \nabla f(y_\alpha)^\top (x - v)$. Consequently, \eqref{eq:ak_existence} is equivalent to the condition $\alpha g'(\alpha) \leq c [g(1) - g(\alpha)]$. If $g'(1) \leq 0$, inequality \eqref{eq:ak_existence} trivially holds at $\alpha = 1$; if $f(v) = g(0) \le g(1) = f(x)$, the inequality trivially holds at $\alpha = 0$. If neither of these conditions hold, $g'(1) > 0$ and $g(0) > g(1)$, so \Cref{fact:minimizer} from \Cref{sec:linesearch-proofs} implies that there is a value of $\alpha \in (0,1)$ such that $g'(\alpha) = 0$ and $g(\alpha) \le g(1)$, and therefore this value of $\alpha$ satisfies \eqref{eq:ak_existence}. Figure~\ref{fig:lem2} illustrates this third case graphically. \end{proof} } \akexistproof In our algorithms we will not seek $\alpha \in [0,1]$ satisfying (\ref{eq:ak_existence}) exactly, but instead $\alpha \in [0, 1]$ such that \begin{equation} \label{eq:ak_existence_2} \alpha \nabla f(y_\alpha)^\top (x - v) - \alpha^2 b\norm{x-v}^2 \leq c \left[f(x) - f(y_\alpha) \right] + \tilde{\epsilon}~, \end{equation} for some $b,c,\tilde{\epsilon} \ge 0$. As \eqref{eq:ak_existence_2} is a weaker statement than \eqref{eq:ak_existence}, the existence of $\alpha$ satisfying \eqref{eq:ak_existence_2} directly follows from \Cref{lem:ak_existence}. Moreover, we will show how to lower bound the size of the set of points satisfying \eqref{eq:ak_existence_2}, which we use to bound the time required to compute such a point. 
We can thus bound the quantity $Q^{(k)}$ from \Cref{lem:agd_one_step} by selecting $\alpha^{(k)}$ to satisfy \eqref{eq:ak_existence_2} with appropriate settings of $b,c,\tilde{\epsilon}$, which we do in Lemma \ref{lem:agd_linesearch}. \begin{restatable}{lem}{agdlinesearch} \label{lem:agd_linesearch} If $\beta > 0$ and $\alpha^{(k)} \in [0,1]$ satisfies \eqref{eq:ak_existence_2} with $x = x^{(k)}, v = v^{(k)}, \, b = \ff{1-\beta}{2\eta^{(k)}}, c = \ff{L\eta^{(k)}-\gamma}{\beta}$, or if $\beta = 0$ and $\alpha^{(k)} = 1$, then \begin{equation} \label{eq:onestep_ineq} Q\ind{k} \le 2\eta\ind{k}\left[(L\eta\ind{k} - \gamma ) \cdot (\epsilon\ind{k} - \epsilon\ind{k}_y) + \beta \tilde{\epsilon} \right]. \end{equation} \end{restatable} \newcommand{\agdLinesearchProof}{ \begin{proof} First suppose $\beta > 0$. As by definition $y^{(k)} = \alpha^{(k)} x^{(k)} + (1-\alpha^{(k)})v^{(k)}$ and $L\eta^{(k)} \ge \gamma$, applying \eqref{eq:ak_existence_2} yields \begin{align*} Q^{(k)} &= 2\beta \eta^{(k)} \left( \alpha^{(k)} \nabla f( y^{(k)} )^\top (x^{(k)} - v^{(k)}) - \left( \alpha^{(k)}\right)^2 \f{(1-\beta) \norm{x^{(k)}-v^{(k)}}^2}{2\eta^{(k)}}\right) \\ &\le 2\beta \eta^{(k)} \left( \f{L \eta^{(k)} - \gamma}{\beta} [ f(x^{(k)}) - f(y^{(k)})] + \tilde{\epsilon} \right) = 2 \eta^{(k)} \left( [L \eta^{(k)} - \gamma] \cdot [ \epsilon^{(k)} - \epsilon^{(k)}_y] + \beta \tilde{\epsilon} \right)~. \end{align*} Alternatively, suppose $\beta = 0$. Then $Q^{(k)} = 0$ as well; if we select $\alpha^{(k)} = 1$, then $y^{(k)} = x^{(k)}$ and \eqref{eq:onestep_ineq} trivially holds for any $\tilde{\epsilon}$, as $\epsilon^{(k)}_y = \epsilon^{(k)}$. \end{proof} } \agdLinesearchProof Now, in Algorithm~\ref{alg:linesearch} we show how to efficiently compute an $\alpha$ satisfying inequality \eqref{eq:ak_existence_2}. 
\begin{algorithm}[H] \label{alg:linesearch} \SetAlgoLined \caption{\texttt{BinaryLineSearch}($f, x, v, L, b, c, \tilde{\epsilon}$)} \SetKwInOut{Input}{input} \textit{Assumptions}: $f$ is $L$-smooth;\, $x,v \in \reals^n$;\, $b,c,\tilde{\epsilon} \ge 0$. \vskip 0ex Define $g(\alpha) \triangleq f(\alpha x + (1-\alpha) v)$ and $p \triangleq b\norm{x-v}^2$ \; \lIf{$g'(1) \le \tilde{\epsilon}+p$}{\Return 1} \lElseIf{$c = 0\,\,\mathrm{ or }\,\,g(0) \le g(1) + \tilde{\epsilon} / c$\,}{\Return 0} $\tau \leftarrow 1 - \ff{\tilde{\epsilon}+p}{L\norm{x-v}^2}$ \; ${\normalfont \textbf{lo}} \leftarrow 0, {\normalfont \textbf{hi}} \leftarrow \tau, \alpha \leftarrow \tau$ \; \While{$c g(\alpha) + \alpha (g'(\alpha)- \alpha p) > c g(1) + \tilde{\epsilon}$}{ $\alpha \leftarrow ({\normalfont \textbf{lo}}+{\normalfont \textbf{hi}})/2$ \; \lIf{$g(\alpha) \le g(\tau)$}{${\normalfont \textbf{hi}} \leftarrow \alpha$} \lElse{${\normalfont \textbf{lo}} \leftarrow \alpha$} } \Return{$\alpha$} \BlankLine \end{algorithm} The basic idea behind Algorithm~\ref{alg:linesearch} is as follows: as in the proof of \Cref{lem:ak_existence}, let $g(\alpha) \triangleq f(\alpha x + (1-\alpha)v)$ be the restriction of the function $f$ to the line from $v$ to $x$. If either $g(0) \le g(1)$, or $g$ is decreasing at $\alpha = 1$, then \eqref{eq:ak_existence} is immediately satisfied. If this does not happen, then $g(0)$ is greater than $g(1)$ but $g'(1) > 0$, which means that at some $\alpha \in (0,1)$ with $g(\alpha) < g(1)$, the function $g$ must switch from decreasing to increasing, and so $g'(\alpha) = 0$. Such a value of $\alpha$ also satisfies \eqref{eq:ak_existence}.
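For concreteness, the pseudocode admits a direct Python transcription (an illustrative sketch of ours, not the paper's implementation; the callables \texttt{f} and \texttt{grad\_f} and the list-based vectors are an assumed interface):

```python
def binary_line_search(f, grad_f, x, v, L, b, c, eps_t):
    """Sketch of BinaryLineSearch(f, x, v, L, b, c, eps_t).

    Returns alpha in [0, 1] with
        alpha*g'(alpha) - alpha^2*p <= c*(g(1) - g(alpha)) + eps_t,
    where g(a) = f(a*x + (1-a)*v) and p = b*||x - v||^2.
    """
    d = [xi - vi for xi, vi in zip(x, v)]                # x - v
    nrm2 = sum(di * di for di in d)                      # ||x - v||^2
    y = lambda a: [a * xi + (1 - a) * vi for xi, vi in zip(x, v)]
    g = lambda a: f(y(a))                                # restriction of f to the segment
    gp = lambda a: sum(gi * di for gi, di in zip(grad_f(y(a)), d))  # g'(a)
    p = b * nrm2
    if gp(1.0) <= eps_t + p:                             # g (nearly) decreasing at alpha = 1
        return 1.0
    if c == 0 or g(0.0) <= g(1.0) + eps_t / c:           # alpha = 0 already works
        return 0.0
    tau = 1.0 - (eps_t + p) / (L * nrm2)
    lo, hi, alpha = 0.0, tau, tau
    while c * g(alpha) + alpha * (gp(alpha) - alpha * p) > c * g(1.0) + eps_t:
        alpha = (lo + hi) / 2
        if g(alpha) <= g(tau):
            hi = alpha
        else:
            lo = alpha
    return alpha

# Demo on f(z) = z^2 with x = 1, v = -3 (so g(a) = (4a - 3)^2; the
# nontrivial case g(0) > g(1) and g'(1) > 0):
a = binary_line_search(lambda z: z[0] ** 2, lambda z: [2 * z[0]],
                       [1.0], [-3.0], L=2.0, b=0.0, c=1.0, eps_t=0.1)
assert 0.0 <= a <= 1.0
assert a * 8 * (4 * a - 3) <= (1.0 - (4 * a - 3) ** 2) + 0.1  # the relaxed condition
```

The exit test of the \texttt{while} loop is exactly the relaxed condition \eqref{eq:ak_existence_2} applied to the restriction $g$.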
Algorithm~\ref{alg:linesearch} uses binary search to exploit this type of relationship and thereby efficiently compute a value of $\alpha$ \textit{approximately} satisfying \eqref{eq:ak_existence} (i.e., satisfying \eqref{eq:ak_existence_2}). In \Cref{lem:linesearch}, we bound the maximum number of iterations that this algorithm can take until \eqref{eq:ak_existence_2} holds and it thereby terminates. The proof of \Cref{lem:linesearch} is given in \Cref{sec:linesearch-proofs}. \begin{restatable}[Line Search Runtime]{lem}{linesearch} \label{lem:linesearch} For $L$-smooth $f : \reals^n \rightarrow \reals$, points $x, v \in \reals^n$ and scalars $b,c,\tilde{\epsilon} \ge 0$, Algorithm~\ref{alg:linesearch} computes $\alpha \in [0,1]$ satisfying \eqref{eq:ak_existence_2} with at most \[ 5+2\ceil{\log_2^+ \left( (1+c/2) \min\left\{ \ff{L^3}{b^3}, \ff{L\norm{x-v}^2}{\tilde{\epsilon}} \right\} \right)} \] function and gradient evaluations. \end{restatable} In summary, we achieve our accelerated quasar-convex minimization procedures by setting $\eta^{(k)}, \beta$, and $\epsilon$ appropriately and computing an $\alpha^{(k)}$ satisfying \eqref{eq:ak_existence_2} via binary search (Algorithm~\ref{alg:linesearch}). By carefully lower bounding the length of the interval of values of $\alpha^{(k)}$ satisfying \eqref{eq:ak_existence_2}, we ultimately show that this binary search only costs a logarithmic factor in the algorithm's overall runtime. \section{Introduction} Acceleration \citen{nemirovski1982, nesterov1983method} is one of the most powerful tools for improving the performance of first-order optimization methods. Nesterov's accelerated gradient descent method obtains asymptotically optimal runtimes for minimizing smooth convex functions \citen{nesterov1983method}. 
Furthermore, acceleration is prevalent in stochastic optimization \citen{allen2017katyusha,ghadimi2016accelerated,JohnsonZh13,woodworth2016tight,xu2018pca}, is useful in coordinate descent methods \citen{fercoq2015accelerated,hanzely2018acd,nesterov2012efficiency,shalev2014accelerated}, can improve proximal methods \citen{frostig2015regularizing,li2015prox,lin2015universal}, and yields tight rates for higher-order optimization \citen{bubeck2018near, gasnikov18arxiv, jiang2018arxiv}. In addition, there has been extensive work giving alternative interpretations of acceleration \citen{allen2014linear,bubeck2015geometric,su2014differential}, and acceleration has been shown to be successful in a variety of practical applications, such as image deblurring \citen{beck2009fast} and neural network training \citen{sutskever2013importance}. More recently, acceleration techniques have been applied to compute $\epsilon$-stationary points (i.e., points where the gradient has norm at most $\epsilon$) of nonconvex functions with smooth derivatives \citen{agarwal2017finding,carmon2017convex,carmon2018accelerated}. In particular, using a first-order method (i.e. using only function and gradient queries), one can find an $\epsilon$-stationary point in $O(\epsilon^{-5/3}\log(\epsilon^{-1}))$ iterations \citen{carmon2017convex}, which improves on gradient descent's iteration bound of $O(\epsilon^{-2})$. Furthermore, \citet*{Carmon:2017aa} show that under the same assumptions, any dimension-free deterministic first-order method requires at least $\Omega(\epsilon^{-8/5})$ iterations to compute an $\epsilon$-stationary point in the worst case. These bounds are significantly worse than the corresponding $O(\epsilon^{-1/2})$ bound that accelerated gradient descent (AGD) achieves for smooth convex functions \citen{nesterov1983method}. 
Still, in practice it is often possible to find approximate stationary points, and even approximate global minimizers, of nonconvex functions faster than these lower bounds suggest. This performance gap stems from the fairly weak assumptions underpinning these generic bounds. For example, \citet{Carmon:2017aa,carmon2017lower} only assume Lipschitz continuity of the gradient and some higher-order derivatives. However, functions minimized in practice often admit significantly more structure, even if they are not convex. For example, under suitable assumptions on their inputs, several popular nonconvex optimization problems, including matrix completion, deep learning, and phase retrieval, display ``convexity-like'' properties, e.g. that all local minimizers are global \citen{bartlett2019gradient,ge2016matrix}. Much more research is needed to characterize structured sets of functions for which minimizers can be efficiently found; our work is a step in this direction. The ``structured'' class of nonconvex functions that we focus on in this paper is the class of functions we term \textit{quasar-convex}. Informally, quasar-convex functions are unimodal on all lines that pass through a global minimizer. This function class is parameterized by a constant $\gamma \in (0,1]$, where $\gamma = 1$ implies the function is star-convex \citen{nesterov2006cubic} (itself a generalization of convexity), and smaller values of $\gamma$ indicate the function can be even ``more nonconvex.'' We produce an algorithm that, given any smooth $\gamma$-quasar-convex function, uses $O(\gamma^{-1}\epsilon^{-1/2}\log(\gamma^{-1}\epsilon^{-1}))$ function and gradient evaluations to find an $\epsilon$-optimal point. Additionally, we provide nearly matching query complexity lower bounds of $\Omega(\gamma^{-1}\epsilon^{-1/2})$ for \textit{any} deterministic first-order method applied to this function class. 
Minimization on this function class has been studied previously \citen{guminov2017accelerated,nesterov2018primal}; our bounds more precisely characterize its complexity. \paragraph{Basic notation} Throughout this paper, we use $\norm{\cdot}$ to denote the Euclidean norm (i.e. $\norm{\cdot}_2$). We say that a function $f : \reals^n \rightarrow \reals$ is $L$-smooth, or $L$-Lipschitz differentiable, if $\norm{\nabla f(x)-\nabla f(y)} \le L\norm{x-y}$ for all $x,y \in \reals^n$. (We say a function is \emph{smooth} if it is $L$-smooth for some $L \in [0, \infty)$.) We denote a minimizer of $f$ by $x^*$, and we say that a point $x$ is ``$\epsilon$-optimal'' or an ``$\epsilon$-approximate minimizer'' if $f(x) \le f(x^*) + \epsilon$. We use $\log$ to denote the natural logarithm and $\log^+(\cdot)$ to denote $\max\{\log(\cdot), 1\}$. \subsection{Quasar-convexity: definition, motivation and prior work} \label{sec:motivate-and-define-quasar} In this paper, we improve upon the state-of-the-art complexity of first-order methods for minimizing smooth \emph{quasar-convex} functions,\footnote{The concept of quasar-convexity was first introduced by \citet{weakquasiconvexity}. They describe it using the term `weak quasi-convexity'. We decided to introduce the term quasar-convexity because we believe it is linguistically clearer. In particular, `weak quasi-convexity' is a misnomer because it does not subsume quasi-convexity. Moreover, using this terminology, strong quasar-convexity would be termed ``strong weak quasi-convexity'' which is difficult to understand.} defined as follows. \begin{definition} Let $\gamma \in (0,1]$ and let $x^{*}$ be a minimizer of the differentiable function $f : \reals^{n} \rightarrow \reals$. The function $f$ is \emph{$\gamma$-quasar-convex} with respect to $x^*$ if for all $x \in \reals^n$, \begin{equation} \label{eq:qc} f(x^{*}) \ge f(x) + \frac{1}{\gamma} \nabla f(x)^\top (x^{*}-x). 
\end{equation} Further, for $\mu \ge 0$, the function $f$ is \emph{$(\gamma,\mu)$-strongly quasar-convex}\footnote{By \Cref{obs:unique}, $x^*$ is unique if $\mu > 0$.} (or \emph{$(\gamma,\mu)$-quasar-convex} for short) if for all $x \in \reals^n$, \begin{equation} \label{eq:sqc} f(x^{*}) \ge f(x) + \frac{1}{\gamma} \nabla f(x)^\top (x^{*}-x) + \frac{\mu}{2} \| x^{*} -x \|^2. \end{equation} \label{defn:defns} \end{definition} We simply say that $f$ is quasar-convex if \eqref{eq:qc} holds for some minimizer $x^*$ of $f$ and some constant $\gamma \in (0, 1]$, and strongly quasar-convex if \eqref{eq:sqc} holds with some constants $\gamma \in (0,1], \mu > 0$. We refer to $x^*$ as the ``quasar-convex point'' of $f$. Assuming differentiability, in the case $\gamma = 1$, condition \eqref{eq:qc} is equivalent to what is known as star-convexity \citen{nesterov2006cubic};\footnote{When $\gamma = 1$, condition \eqref{eq:sqc} is variously known as \textit{quasi-strong convexity} \citen{necoara} or \textit{weak strong convexity} \citen{karimi2016linear}.} if in addition the conditions \eqref{eq:qc} or \eqref{eq:sqc} hold for all $y \in \reals^n$ instead of just for $x^*$, they become the standard definitions of convexity or $\mu$-strong convexity, respectively \citen{BoydVa04}. We also note that Definition~\ref{defn:defns} can be straightforwardly generalized to the case where the domain of $f$ is a convex subset of $\reals^n$ (see Definition~\ref{defn:gen-defns} in \Cref{sec:quasar-structure}). Thus, our definition of quasar-convexity strictly generalizes the standard notions of convexity and star-convexity in the differentiable case. Lemma \ref{lem:star_char} in Appendix~\ref{sec:equivs} shows that quasar-convexity is equivalent to a certain ``convexity-like'' condition on line segments to $x^{*}$. 
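As an illustrative example (ours, not taken from the paper), the nonconvex but unimodal function $f(\theta) = \theta^2/(1+\theta^2)$ restricted to $[-2,2]$ is quasar-convex with respect to $\theta^* = 0$: rearranging \eqref{eq:qc} for $\theta \ne 0$ gives $\gamma \le \theta f'(\theta)/f(\theta) = 2/(1+\theta^2)$, so the best parameter on this interval is $\gamma = 2/5$. A quick numerical check:

```python
# Estimate the largest gamma for which f(t) = t^2 / (1 + t^2) is
# gamma-quasar-convex w.r.t. t* = 0 on [-2, 2] (illustrative example).
# For t != 0, the quasar-convexity inequality rearranges to
#   gamma <= t * f'(t) / (f(t) - f(0)) = 2 / (1 + t^2).
def f(t):
    return t * t / (1 + t * t)

def fp(t):
    return 2 * t / (1 + t * t) ** 2

ts = [-2 + k / 1000 for k in range(4001) if k != 2000]  # grid on [-2, 2], skip t = 0
gamma = min(t * fp(t) / f(t) for t in ts)
assert abs(gamma - 0.4) < 1e-9   # the bound is tightest at the endpoints |t| = 2
```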
\begin{figure}[t] \center \includegraphics[scale=0.36]{images/unimodal.png} \includegraphics[scale=0.44]{images/starconvex-2.png} \includegraphics[scale=0.44]{images/quasarconvex-cropped.png} \caption{Examples of quasar-convex functions.} \end{figure} We say that a one-dimensional function is \textit{unimodal} if it monotonically decreases to its minimizer and then monotonically increases thereafter; a function is \textit{strictly unimodal} if the same holds with monotonicity replaced by strict monotonicity. As Observation~\ref{obs:unimodalImpliesQuasar} shows, quasar-convexity is closely related to unimodality. Therefore, like the well-known quasiconvexity \citen{quasi} and pseudoconvexity \citen{pseudo}, quasar-convexity can be viewed as an approximate generalization of unimodality to higher dimensions. We remark that beyond one dimension, neither quasiconvexity nor pseudoconvexity subsumes or is subsumed by quasar-convexity. The proof of Observation~\ref{obs:unimodalImpliesQuasar} appears in Appendix~\ref{sec:unimodal-quasar}, and follows fairly directly from the definitions. \begin{restatable}{observation}{unimodalImpliesQuasar}\label{obs:unimodalImpliesQuasar} Let $a < b$ and let $f : [a,b] \rightarrow \reals$ be continuously differentiable. The function $f$ is $\gamma$-quasar-convex for some $\gamma \in (0, 1]$ if and only if $f$ is unimodal with $f'(c) \ne 0$ for all $c \in [a,b]$ such that $c \not \in \mathop{\rm argmin}_{x \in [a,b]} f(x)$. Additionally, if $h : \mathbb{R}^{n} \rightarrow \mathbb{R}$ is $\gamma$-quasar-convex with respect to a minimizer $x^*$, then for any $d \in \reals^n$ with $\| d \| = 1$, the one-dimensional function $f(\theta) \triangleq h(x^{*} + \theta d)$ is $\gamma$-quasar-convex. \end{restatable} There are several other `convexity-like' conditions in the literature related to quasar-convexity. For example, star-convexity is a condition that relaxes convexity, and is a strict subset of quasar-convexity in the differentiable case. 
\citet{nesterov2006cubic} introduce this condition when analyzing cubic regularization. \citet*{lee2016optimizing} further investigate star-convexity, developing a cutting plane method to minimize general star-convex functions. Star-convexity is an interesting property because there is some evidence to suggest the loss function of neural networks might conform to this structure in large neighborhoods of the minimizers \citen{kleinberg2018alternative,zhou2019sgd}. Therefore, understanding acceleration for quasar-convex functions is pertinent to understanding acceleration for neural network training. Furthermore, \citet*{weakquasiconvexity} show that, under mild assumptions, the objective for learning linear dynamical systems is quasar-convex; the problem of learning dynamical systems is closely related to the training of recurrent neural networks. Another relevant class of functions is those for which a small gradient implies approximate optimality. This is known as the Polyak-\L{}ojasiewicz (PL) condition \citen{polyak} and is weaker than strong quasar-convexity \citen{guminov2017accelerated}. For linear residual networks, the PL condition holds in large regions of parameter space \citen{hardt2016identity}. In addition to pseudoconvexity, quasiconvexity, star-convexity, and the PL condition, other relaxations of convexity or strong convexity include invexity \citen{craven1985invex}, semiconvexity \citen{ngai2007}, quasi-strong convexity \citen{necoara}, restricted strong convexity \citen{zhang2013gradient}, one-point convexity \citen{li2017relu}, variational coherence \citen{zhou2017vc}, the quadratic growth condition \citen{anitescu}, and the error bound property \citen{fabian2010}. We are not the first to study acceleration on quasar-convex functions. Recent work by \citet{guminov2017accelerated} and \citet{nesterov2018primal} shows how to achieve accelerated rates for minimizing quasar-convex functions. 
For a function that is $L$-smooth and $\gamma$-quasar-convex with respect to a minimizer $x^*$, with initial distance to $x^*$ bounded by $R$, the algorithm of \citet{guminov2017accelerated} yields an $\epsilon$-optimal point in $O(\gamma^{-1} L^{1/2} R \epsilon^{-1/2})$ iterations, while the algorithm of \citet{nesterov2018primal} does so in $O(\gamma^{-3/2} L^{1/2} R\epsilon^{-1/2})$ iterations. For convex functions (which have $\gamma = 1$), these bounds match the \textit{iteration} bounds achieved by AGD \citen{nesterov1983method}, but use a different oracle model. In particular, to achieve these iteration bounds, \citet{guminov2017accelerated} rely on a low-dimensional subspace optimization method within each iteration, while \citet{nesterov2018primal} use a one-dimensional line search over the function value in each iteration. However, quasar-convex functions are not necessarily unimodal along the arbitrary low-dimensional regions or line segments being searched over. Therefore, even finding an approximate global minimizer within these subregions may be computationally expensive, and thus the total number of \textit{function and gradient evaluations} required by these methods may be large. In addition, neither paper provides lower bounds nor studies the ``strongly quasar-convex'' regime. 
Independently, recent work by \citet{sra2019} uses a differential equation discretization to approach the accelerated $O(\k^{1/2}\log(\epsilon^{-1}))$ rate for minimization of smooth strongly quasar-convex functions in a neighborhood of the optimum, in the special case $\gamma = 1$.\footnote{$\k = L/\mu$ denotes the \textit{condition number} of an $L$-smooth $(\gamma,\mu)$-strongly quasar-convex function.} Similarly, in the $\gamma = 1$ case, geometric descent \citen{bubeck2015geometric} achieves $O(\k^{1/2}\log(\epsilon^{-1}))$ running times in terms of the number of calls to a one-dimensional line search oracle (although, as previously noted, the number of function and gradient evaluations required may still be large).\footnote{Although this result is not explicitly stated in the literature, upon careful inspection of the analysis in \citen{bubeck2015geometric} it can be observed that the $\mu$-strong convexity requirement in \citen{bubeck2015geometric} may be relaxed to the requirement of $(1,\mu)$-strong quasar-convexity, with no changes to the algorithm necessary.} \subsection{Our results} For functions that are $L$-smooth and $\gamma$-quasar-convex, we provide an algorithm that finds an $\epsilon$-optimal solution in $O(\gamma^{-1} L^{1/2} R \epsilon^{-1/2})$ iterations (where, as before, $R$ is an upper bound on the initial distance to the quasar-convex point $x^*$). Our iteration bound is the same as that of \citet{guminov2017accelerated}, and a factor of $\gamma^{1/2}$ better than the $O(\gamma^{-3/2} L^{1/2} R \epsilon^{-1/2})$ bound of \citet{nesterov2018primal}. Additionally, we are the first to provide bounds on the total number of function and gradient evaluations required; our algorithm uses $O(\gamma^{-1} L^{1/2} R \epsilon^{-1/2}\log(\gamma^{-1} \epsilon^{-1}))$ function and gradient evaluations to find an $\epsilon$-optimal solution.
We also provide an algorithm for $L$-smooth, $(\gamma,\mu)$-strongly quasar-convex functions; our algorithm uses $O(\gamma^{-1} \kappa^{1/2} \log(\gamma^{-1}\epsilon^{-1}) )$ iterations and $O(\gamma^{-1} \kappa^{1/2} \log(\gamma^{-1} \kappa) \log(\gamma^{-1} \epsilon^{-1}) )$ total function and gradient evaluations to find an $\epsilon$-optimal point, where $\kappa \triangleq L / \mu$. For constant $\gamma$, this matches accelerated gradient descent's bound for smooth strongly convex functions up to a logarithmic factor. The key idea behind our algorithm is to take a close look at which essential invariants need to hold during the momentum step of AGD, and use this insight to carefully redesign the algorithm to accelerate on general smooth quasar-convex functions. By observing how the function behaves along the line segment between current iterates, we show that for any smooth quasar-convex function, there always exists a point along this segment with the properties needed for acceleration. Furthermore, we show that an efficient binary search can be used to find such a point, even without the assumption of convexity along this line segment. To complement our upper bounds, we provide lower bounds of $\Omega(\gamma^{-1} L^{1/2}R \epsilon^{-1/2})$ for the number of gradient evaluations that \textit{any} deterministic first-order method requires to find an $\epsilon$-approximate minimizer of a quasar-convex function. This shows that up to logarithmic factors, our lower and upper bounds are tight. Our lower bounds extend the techniques of \citet*{Carmon:2017aa} to the class of smooth quasar-convex functions, remarkably allowing an almost exact characterization of the complexity of minimizing these functions. \paragraph{Paper outline} In Section~\ref{sec:acceleration_framework}, we provide a general framework for accelerating the minimization of smooth quasar-convex functions. 
In Section~\ref{sec:algorithms}, we apply our framework to develop specific algorithms tailored to both quasar-convex and strongly quasar-convex functions. In Section~\ref{sec:lb}, we provide lower bounds to show that the upper bounds for quasar-convex minimization of Section~\ref{sec:algorithms} are tight up to logarithmic factors. \section{Lower bound proofs} \label{sec:lb-app} In this section, we use $\mathbf{0}$ to denote a vector with all entries equal to 0 and $\mathbf{1}$ to denote a vector with all entries equal to 1. \subsection{Proof of Lemma~\ref{lem:main-lb-quasar}} \label{sec:lem-lb-proof} Before we prove Lemma~\ref{lem:main-lb-quasar}, we prove two useful results related to the properties of $q$ and $\Upsilon$. For convenience, these functions are restated below: \funcdefs \begin{observation}\label{obs:props-q} $q$ is convex and $\ff{1}{2}$-smooth with minimizer $x^* = \mathbf{1}$. Also, for any $1 \le j_1 < j_2 \le T$, $$ q(x) = \frac{1}{2} \nabla q(x)^\top (x - x^{*}) \ge \max\left\{ \frac{1}{4} (x_1-1)^2, \frac{(x_{j_1} - x_{j_2})^2}{4 (j_2 - j_1)} \right\}. $$ \end{observation} \begin{proof} Convexity and $\ff{1}{2}$-smoothness of $q$ follow from definitions. It is easy to see that $q$ is always nonnegative and $q(\mathbf{1}) = 0$, so $\mathbf{1}$ minimizes $q$. In fact $\mathbf{1}$ is the unique minimizer, since $q$ is strictly positive for all nonconstant vectors and all vectors with $x_1 \ne 1$. Notice that as $q$ is a convex quadratic, $q(x) = \frac{1}{2} (x-x^{*})^\top \nabla^2 q(x) (x-x^{*})$ where $\nabla^2 q(x)$ is a constant matrix. Therefore $\nabla q(x) = \nabla^2 q(x) (x - x^{*})$. It follows that $q(x) = \frac{1}{2} \nabla q(x)^\top (x - x^{*})$. By definition $q(x) \ge \frac{1}{4} (x_1-1)^2$. 
Furthermore, $\frac{1}{j_2 - j_1} \sum_{i=j_1}^{j_2-1} (x_i - x_{i+1})^2 \ge \left( \frac{1}{j_2 - j_1} \sum_{i=j_1}^{j_2-1} (x_i - x_{i+1}) \right)^2 = \frac{(x_{j_1} - x_{j_2})^2}{(j_2 - j_1)^2}$ where the inequality uses that the expectation of the square of a random variable is at least the square of its expectation. The result follows. \end{proof} Properties of $\Upsilon$ that we will use are listed below. \begin{lem}\label{lem:props-ups} The function $\Upsilon$ satisfies the following. \begin{enumerate} \item \label{item:ups-grad-zero} $\Upsilon'(0) = \Upsilon'(1) = 0$. \item \label{item:ups-quasi-convex} For all $\theta \le 1$, $\Upsilon'(\theta) \le 0$, and for all $\theta \ge 1$, $\Upsilon'(\theta) \ge 0$. \item \label{item:ups-min-one} For all $\theta \in \reals$ we have $\Upsilon(\theta) \ge \Upsilon(1) = 0$, and $\Upsilon(0) \le 10$. \item \label{item:ups-large-grad} $\Upsilon'(\theta) < -1$ for all $\theta \in (-\infty,-0.1] \cup [0.1,0.9]$. \item \label{item:ups-smooth} $\Upsilon$ is $180$-smooth. \item \label{item:ups-theta} For all $\theta \in \reals$ we have $\Upsilon(\theta) \le \min\{30 \theta^4 - 40 \theta^3 + 10, \,\, 60 (\theta-1)^2\}$, and $\Upsilon(0) \ge 5$. \item \label{item:ups-quasar} For all $\theta \not\in (-0.1,0.1)$ we have $40 (\theta - 1) \Upsilon'(\theta) \ge \Upsilon(\theta)$. \end{enumerate} \end{lem} \begin{proof} Properties \ref{item:ups-grad-zero}-\ref{item:ups-large-grad} were proved in \cite[Lemma~2]{Carmon:2017aa}. \emph{Property \ref{item:ups-smooth}.} $|\Upsilon''(\theta)| = 120 \left|\frac{\theta (\theta^3 + 3 \theta - 2)}{(1 + \theta^2)^2}\right| \le 120 \cdot \ff{3}{2} = 180$ for all $\theta \in \reals$. Thus, for any $\theta_1,\theta_2 \in \reals$, $|\Upsilon'(\theta_1) - \Upsilon'(\theta_2)| \le \max\limits_{\theta \in [\theta_1,\theta_2]} |\Upsilon''(\theta)| \cdot |\theta_1 - \theta_2| \le 180 |\theta_1-\theta_2|$.
\emph{Property \ref{item:ups-theta}.} We have $\Upsilon(0) = 120 \int_{0}^{1} \frac{t^2 (1-t)}{1 + t^2} \,dt \ge 120 \int_{0}^{1} \ff{t^2 (1-t)}{2} \,dt = \frac{120}{2 \cdot 12} = 5$. For all $\theta \in \reals$ we have $\Upsilon(\theta) = 120 \int_{1}^{\theta} \frac{t^2 (t-1) }{1 + t^2} \,dt \le 120 \int_{1}^{\theta} t^2 (t-1) \,dt = 120 ( (\theta^4/4 - \theta^3/3) - (1/4 - 1/3) ) = 30 \theta^4 - 40 \theta^3 + 10$. In addition, since $\ff{t^2}{1+t^2} \le 1$ for all $t$, we have for all $\theta \in \reals$ that $\Upsilon(\theta) \le 120 \int^{\theta}_1 (t-1) \,dt = 120 (\theta-1)^2/2$. \emph{Property \ref{item:ups-quasar}.} If $\theta \in (-\infty, -1.0] \cup [1.0, \infty)$ then $\ff{\theta^2}{1+\theta^2} \ge \ff{1}{2}$, so by property \ref{item:ups-theta} we have \begin{flalign*} \Upsilon(\theta) + 40 (1 - \theta) \Upsilon'(\theta) &\le 60 (\theta-1)^2 - 40 \cdot 120 \frac{\theta^2 (\theta-1)^2}{1 + \theta^2} \\ &\le 60 (\theta - 1)^2 - 40 \cdot 60 (\theta-1)^2 \\ &= -60 \cdot 39 (\theta-1)^2 \\ &\le 0. \end{flalign*} Alternatively, if $\theta \in [-1.0,-0.1] \cup [0.1, 1.0]$ then $\ff{1}{1+\theta^2} \ge \ff{1}{2}$, so by property \ref{item:ups-theta} we have \begin{flalign*} \Upsilon(\theta) + 40 (1 - \theta) \Upsilon'(\theta) &\le 10 + 30 \theta^4 - 40 \theta^3 - 40 \cdot 120 \frac{\theta^2 (\theta-1)^2 }{1 + \theta^2} \\ &\le 10 \left( 1 + \theta^2 \left( 3 \theta^2 - 4 \theta - 240 (\theta-1)^2 \right) \right) \\ &= 10 \left( 1 - 237 \theta^4 + 476 \theta^3 - 240 \theta^2 \right) \\ &= 10P(\theta)~, \end{flalign*} where we define $P(\theta) \triangleq 1 - 237 \theta^4 + 476 \theta^3 - 240 \theta^2$. Observe that $P'(\theta) = {-12 \theta (40 - 119 \theta + 79 \theta^2)}$ has exactly three roots: at $\theta = 0, \theta = 1$ and $\theta = 40/79$. Furthermore, at $\theta = 1$, $\theta = 40/79$ and $\theta = 0.1$ we have $P(\theta) \le 0$, which implies $P(\theta) \le 0$ for $\theta \in [0.1,1]$.
We conclude that $\Upsilon(\theta) + 40 (1 - \theta) \Upsilon'(\theta) \le 0$ for $\theta \in [0.1,1]$. In addition, $P(\theta)$ is negative while $P'(\theta)$ is positive for $\theta = -0.1$, which means that $P(\theta)$ and thus $\Upsilon(\theta) + 40 (1 - \theta) \Upsilon'(\theta)$ are also negative on $[-1.0, -0.1]$. \end{proof} \lemUnscaledLB* \begin{proof} Since $\sigma^{1/2} \in (0, 10^{-3}]$, $\Upsilon$ is $180$-smooth, and $q$ is $1/2$-smooth, we deduce $\bar{f}_{T,\sigma}$ is $1$-smooth. By \Cref{obs:props-q} and \Cref{lem:props-ups}.\ref{item:ups-min-one} we deduce $\bar{f}_{T,\sigma}(\mathbf{1}) = 0 < \bar{f}_{T,\sigma}(x)$ for all $x \ne \mathbf{1}$. Therefore, $x^{*} = \mathbf{1}$ is the unique minimizer of $\bar{f}_{T,\sigma}$. Now, we will show $\bar{f}_{T,\sigma}$ is $\frac{1}{100 T \sqrt{\sigma}}$-quasar-convex, i.e. that $\nabla \bar{f}_{T,\sigma}(x)^\top(x-\mathbf{1}) \ge \ff{\bar{f}_{T,\sigma}(x)-\bar{f}_{T,\sigma}(\mathbf{1})}{100 T \sqrt{\sigma}}$ for all $x \in \reals^T$. Define \begin{flalign*} \mathcal{A} &\triangleq \{ i : x_i \in (-\infty, -0.1] \cup (0.9,\infty) \} \\ \mathcal{B} &\triangleq \{ i : x_i \in (-0.1,0.1) \} \\ \mathcal{C} &\triangleq \{ i : x_i \in [0.1,0.9] \}. \end{flalign*} First, we derive two useful inequalities. By Observation~\ref{obs:props-q} and the fact that $\Upsilon'(x_i) \le 0$ for $i \in \mathcal{B}$, \begin{flalign} \nabla \bar{f}_{T,\sigma}(x)^\top (x - \mathbf{1}) &= \nabla q(x)^\top(x - \mathbf{1}) + \sigma \su{i \in \mathcal{A} \cup \mathcal{B} \cup \mathcal{C}}{} (x_i-1)\Upsilon'(x_i) \nonumber \\ &\ge 2 q(x) + \sigma \sum_{i \in \mathcal{A} \cup \mathcal{C} } (x_i - 1) \Upsilon'(x_i)~. 
\label{ineq-grad-ups} \end{flalign} By Lemma~\ref{lem:props-ups}.\ref{item:ups-quasi-convex} and \ref{lem:props-ups}.\ref{item:ups-theta} we deduce $\sum_{i \in \mathcal{B} \cup \mathcal{C}}\Upsilon(x_i) \le | \mathcal{B} \cup \mathcal{C} | \Upsilon(-0.1) \le 11 T$, so it follows that $\bar{f}_{T,\sigma}(x) \le q(x) + 11T\sigma + \sigma\sum_{i \in \mathcal{A}} \Upsilon(x_i)$, and therefore using $T \ge \sigma^{-1/2}$ and nonnegativity of $\Upsilon$ and $q$, we have \begin{flalign} \frac{\bar{f}_{T,\sigma}(x) - \bar{f}_{T,\sigma}(\mathbf{1})}{100 T \sqrt{\sigma}} &= \frac{\bar{f}_{T,\sigma}(x)}{100 T \sqrt{\sigma}} \nonumber \\ &\le \frac{11T\sigma}{100T\sqrt{\sigma}} + \frac{\sigma}{100T\sqrt{\sigma}} \sum_{i \in \mathcal{A}} \Upsilon(x_i) + \frac{1}{100T\sqrt{\sigma}} q(x) \nonumber \\ &\le \frac{11}{100} \sigma^{1/2} + \frac{\sigma}{100} \sum_{i \in \mathcal{A}} \Upsilon(x_i) + \frac{1}{100} q(x) \nonumber \\ &\le \frac{11}{100} \sigma^{1/2} + \frac{\sigma}{40} \sum_{i \in \mathcal{A}} \Upsilon(x_i) + q(x) \label{ineq-func-ups} \end{flalign} We now consider three possible cases for the values of $x$. \begin{enumerate} \item Consider the case that $x_1 \not\in [0.9,1.1]$. 
We have \begin{flalign*} \nabla \bar{f}_{T,\sigma}(x)^\top (x - \mathbf{1}) &\ge 2 q(x) + \frac{\sigma}{40} \sum_{i \in \mathcal{A} \cup \mathcal{C} } \Upsilon(x_i) \\ & \ge \frac{0.1^2}{4} + q(x) + \frac{\sigma}{40} \sum_{i \in \mathcal{A} \cup \mathcal{C} } \Upsilon(x_i) \\ & = \frac{1}{\sqrt{10^4\sigma}} \cdot \frac{ \sqrt{\sigma}}{4} + \frac{\sigma}{40} \sum_{i \in \mathcal{A} \cup \mathcal{C} } \Upsilon(x_i) + q(x) \\ & \ge \frac{ \sqrt{\sigma}}{4} + \frac{\sigma}{40} \sum_{i \in \mathcal{A} \cup \mathcal{C} } \Upsilon(x_i) + q(x) \\ &\ge \frac{\bar{f}_{T,\sigma}(x) - \bar{f}_{T,\sigma}(\mathbf{1}) }{100 T \sqrt{\sigma} } \end{flalign*} where the first inequality uses \eqref{ineq-grad-ups} and Lemma~\ref{lem:props-ups}.\ref{item:ups-quasar}, the second inequality uses Observation~\ref{obs:props-q} and $x_1 \not\in [0.9,1.1]$, the penultimate inequality uses $\sigma \in (0, 10^{-6}] \subset (0,10^{-4}]$, and the final inequality uses \eqref{ineq-func-ups} and nonnegativity of $\Upsilon$. \item Consider the case that $\mathcal{B} = \emptyset$. By Lemma~\ref{lem:props-ups}.\ref{item:ups-quasar} and convexity of $q(x)$, \begin{flalign*} \nabla \bar{f}_{T,\sigma}(x)^\top(x - \mathbf{1}) &= \nabla q(x)^\top(x-\mathbf{1}) + \sigma \su{i \in \mathcal{A} \cup \mathcal{C}}{} (x_i-1)\Upsilon'(x_i) \\ &\ge q(x) - q(\mathbf{1}) + \f{\sigma}{40} \su{i \in \mathcal{A} \cup \mathcal{C}}{} \Upsilon(x_i) \\ &= \f{1}{40} \left( q(x) + \sigma \su{i=1}{T} \Upsilon(x_i)\right) - \bar{f}_{T,\sigma}(\mathbf{1}) + \f{39}{40} q(x) \\ &\ge \frac{\bar{f}_{T,\sigma}(x) - \bar{f}_{T,\sigma}(\mathbf{1})}{40} \\ &\ge \frac{\bar{f}_{T,\sigma}(x) - \bar{f}_{T,\sigma}(\mathbf{1})}{100T\sqrt{\sigma}}. \end{flalign*} \item Suppose cases 1-2 do not hold, i.e., $x_1 \in [0.9,1.1]$ and $\mathcal{B} \neq \emptyset$. Then there exists some $m \ge 1$ and $j \in \{1,\dots,T-m\}$ such that $x_{j} \ge 0.9$, $x_{j + m} \le 0.1$, and $x_i \in \mathcal{C}$ for all $i \in \{j+1, \dots, j+m-1\}$. 
Then, \begin{flalign*} \nabla \bar{f}_{T,\sigma}(x)^\top (x - \mathbf{1} ) &\ge q(x) + \sigma \sum_{i \in \mathcal{A} \cup \mathcal{C}} (x_i - 1) \Upsilon'(x_i) + q(x) \\ &\ge \frac{0.8^2}{4 m} + \sigma \sum_{i \in \mathcal{C}} (x_i - 1) \Upsilon'(x_i) + \sigma \sum_{i \in \mathcal{A}} (x_i - 1) \Upsilon'(x_i) + q(x) \\ &\ge \frac{0.8^2}{4 m} + 0.1 \sigma (m - 2) + \frac{\sigma}{40} \sum_{i \in \mathcal{A}} \Upsilon(x_i) + q(x) \\ &\ge \f{0.16}{\sqrt{1.6}} \sigma^{1/2} + \frac{\sigma }{40} \sum_{i \in \mathcal{A}} \Upsilon(x_i) + q(x) \\ &\ge \frac{\bar{f}_{T,\sigma}(x) - \bar{f}_{T,\sigma}(\mathbf{1})}{100T \sqrt{\sigma}} \end{flalign*} where the first inequality holds by \eqref{ineq-grad-ups}, the second inequality uses Observation~\ref{obs:props-q}, the third inequality uses Lemma~\ref{lem:props-ups}.\ref{item:ups-large-grad} and \ref{lem:props-ups}.\ref{item:ups-quasar}, the fourth inequality uses that $m = \sqrt{1.6} \sigma^{-0.5} \ge 2$ minimizes the previous expression, and the final inequality uses \eqref{ineq-func-ups} [and the fact that $0.16 / \sqrt{1.6} > 0.11$]. \end{enumerate} Finally, suppose $x_t = 0$ for all $t = \ceil{T/2}, \dots, T$. Then we have $\bar{f}_{T,\sigma}(x) - \bar{f}_{T,\sigma}(\mathbf{1}) = \bar{f}_{T,\sigma}(x) \ge \sigma \ceil{T / 2} \Upsilon(0) \ge 2 T \sigma$, where the first inequality uses that $\Upsilon \ge 0$ and $q \ge 0$, and the last inequality uses that $T \ge 1$ and $\Upsilon(0) \ge 5$. \end{proof} With Lemma~\ref{lem:lb-unscaled} in hand, we are able to establish Lemma~\ref{lem:main-lb-quasar} which is a scaled version of Lemma~\ref{lem:lb-unscaled}. \lemMainLBquasar* \begin{proof} We have $\sigma^{-1/2} = 10^{2} T \gamma \le T$ and $\sigma = \frac{1}{10^4 T^2 \gamma^2} \le \frac{1}{(L^{1/2} R \epsilon^{-1/2})^2} \le 10^{-6}$, so $\bar{f}_{T,\sigma}$ satisfies the conditions of Lemma~\ref{lem:lb-unscaled}. Let us verify the properties of $\hat{f}$.
The optimal solution to $\bar{f}_{T,\sigma}$ is $\mathbf{1}$, but after this rescaling it becomes $x^{*} = \frac{R}{\sqrt{T}} \mathbf{1}$, for which $\norm{x^{*}} = R$. For all $x,y \in \reals^T$, by $1$-smoothness of $\bar{f}_{T,\sigma}$ we have \begin{flalign*} \norm{ \nabla \hat{f}(x) - \nabla \hat{f}(y) } &= (L R^2 T^{-1}) \cdot (T^{1/2} R^{-1}) \norm{ \nabla \bar{f}_{T,\sigma}(x T^{1/2} R^{-1}) - \nabla \bar{f}_{T,\sigma}(y T^{1/2} R^{-1}) } \\ &\le (L R^2 T^{-1}) \cdot (T^{1/2} R^{-1})^2 \norm{ x-y } \\ &= L\norm{ x-y }~. \end{flalign*} Therefore $\hat{f}$ is $L$-smooth. By the definition of $\sigma$ we have $\frac{1}{100T\sqrt{\sigma}} = \gamma$, so $\bar{f}_{T,\sigma}$ is $\gamma$-quasar-convex. As quasar-convexity is invariant to scaling (\Cref{obs:scaling}), we deduce that $\hat{f}$ is $\gamma$-quasar-convex as well. Finally, given $x^{(k)}_t = 0$ for $t = \ceil{T/2}, \dots, T$, we have $$ \hat{f}(x^{(k)}) - \inf_z \hat{f}(z) \ge 2 T \sigma \cdot \frac{L R^2}{T} = 2 L R^2 \sigma = 2 (10^{-2} \gamma^{-1} L^{1/2} R T^{-1})^2 \ge \epsilon, $$ where the first transition uses Lemma~\ref{lem:lb-unscaled}, the third transition uses that $\sigma = \frac{1}{10^4 T^2 \gamma^2}$, and the last transition uses that $T = \ceil{10^{-2} \gamma^{-1} L^{1/2} R \epsilon^{-1/2}} \le \sqrt{2} \cdot 10^{-2} \gamma^{-1} L^{1/2} R\epsilon^{-1/2}$ since $\gamma^{-1} (L^{1/2} R \epsilon^{-1/2}) \ge 10^2 \cdot 10^3 = 10^5$. \end{proof} \subsection{Proof of \Cref{thm:main-lb-quasar}} \label{sec:coro-lb-proof} Before proving \Cref{thm:main-lb-quasar} we recap definitions that were originally provided in \citet*{carmon2017lower}. \begin{definition} A function $f$ is a first-order zero-chain if for every $x \in \reals^n$, $$ x_i = 0 \quad \forall i \ge t \quad \Rightarrow \quad \nabla_i f(x) = 0 \quad \forall i > t. $$ \end{definition} \begin{definition} An algorithm is a first-order zero-respecting algorithm if its iterates $x\ind{0}, x\ind{1}, ... 
\in \reals^{n}$ satisfy $$ \nabla_i f(x\ind{k}) = 0 \quad \forall k \le t \quad \Rightarrow \quad x\ind{t+1}_i = 0 $$ for all $i \in \{1, \dots, n \}$. \end{definition} \begin{definition} An algorithm $\mathcal{A}$ is a first-order deterministic method if there exists a sequence of functions $\mathcal{A}_k$ such that the algorithm's iterates satisfy $$ x\ind{k+1} = \mathcal{A}_k(x\ind{0}, \dots, x\ind{k}, \nabla f(x\ind{0}),\dots, \nabla f(x\ind{k})) $$ for all $k \in \mathbb{N}$, input functions $f$, and starting points $x\ind{0}$. \end{definition} \begin{observation}\label{obs-zero-respecting} Consider $\epsilon > 0$, a function class $\mathcal{F}$, and $K \in \mathbb{N}$. If $f : \reals^{n} \rightarrow \reals$ satisfies \begin{enumerate} \item $f$ is a first-order zero-chain, \item $f$ belongs to the function class $\mathcal{F}$, i.e. $f \in \mathcal{F}$, and \item $f(x) - \inf_{z} f(z) \ge \epsilon$ for every $x$ such that $x_t = 0$ for all $t \in \{K, K+1, \dots, n\}$; \end{enumerate} then it takes at least $K$ iterations for a first-order zero-respecting algorithm to find an $\epsilon$-optimal solution of $f$. \end{observation} \begin{proof} Cosmetic modification of the proof of Observation~2 of \citet*{carmon2017lower}. \end{proof} \coroMainLBquasar* \begin{proof} Applying Lemma~\ref{lem:main-lb-quasar} and Observation~\ref{obs-zero-respecting} implies this result for any zero-respecting first-order method. Applying Proposition~1 of \citet*{carmon2017lower}, which states that lower bounds for zero-respecting first-order methods also apply to deterministic first-order methods, gives the result. \end{proof} \begin{rem}\label{rem:quasar-approx} If we have an algorithm that can approximately minimize a strongly quasar-convex function, we can use it to approximately minimize a quasar-convex function.
\end{rem} \begin{proof} This follows from the fact that if $f$ is $\gamma$-quasar-convex with respect to $x^{*}$, a minimizer of $f$, then if $\norm{x\ind{0} - x^{*}} \le R$, the function $g(x) = f(x) + \ff{\epsilon}{2 R^2} \norm{x - x\ind{0}}^2$ is $(\gamma,\epsilon/R^2)$-strongly quasar-convex with respect to $x^{*}$ (recall this terminology from Remark~\ref{rem:star_char_gen}). Note that $x^{*}$ is not necessarily a minimizer of $g$, but $g(x^{*}) \le f(x^*) + \epsilon/2$. Therefore, if we obtain a point $\tilde{x}$ with $g(\tilde{x}) \le \inf_x g(x) + \epsilon/2$, then $f(\tilde{x}) \le g(\tilde{x}) \le g(x^*) + \epsilon / 2 \le f(x^*) + \epsilon$. \end{proof} Note that if $f$ is $L$-smooth, then $g$ is $(L+\ff{\epsilon}{R^2})$-smooth, so the condition number of $g$ is $\k = 1 + \ff{LR^2}{\epsilon}$. Thus, \Cref{rem:quasar-approx} combined with \Cref{thm:main-lb-quasar} shows that, given any deterministic first-order method, there exists an $L$-smooth $(\gamma,\mu)$-strongly quasar-convex function such that the method requires at least $\Omega(\gamma^{-1}\k^{1/2})$ gradient evaluations to find an $\epsilon$-optimal solution, where $\k = \ff{L}{\mu}$. \section{Lower bounds}\label{sec:lb} In this section, we construct lower bounds which demonstrate that the algorithms we presented in Section~\ref{sec:algorithms} obtain, up to logarithmic factors, the best possible worst-case iteration bounds for deterministic first-order methods. We use the ideas of \citet{carmon2017lower}, who mechanized the process of constructing such lower bounds. Their idea is to construct a \emph{zero-chain}, which is defined as a function $f$ for which if $x_j = 0, \forall j \ge t$ then $\frac{\partial f(x)}{\partial x_{t+1}} = 0$. On these zero-chains, one can provide lower bounds for a particular class of methods known as \emph{first-order zero-respecting algorithms}.
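To make the zero-chain mechanism concrete, consider the simple quadratic chain $h(x) = \frac{1}{2}\sum_{i=1}^{T-1}(x_{i+1}-x_i)^2$ (an illustrative example of ours, simpler than the construction used below). For $1 < i < T$ its partial derivatives are
\begin{equation*}
\nabla_i h(x) = 2x_i - x_{i-1} - x_{i+1},
\end{equation*}
with the out-of-range terms omitted at $i \in \{1, T\}$. Hence, if $x_j = 0$ for all $j \ge t$, then every $\nabla_i h(x)$ with $i > t$ involves only coordinates with indices at least $t$, all of which are zero, and therefore vanishes. The gradient can be nonzero only in the first $t$ coordinates, so a zero-respecting method can extend the support of its iterates by at most one coordinate per gradient evaluation.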
First-order zero-respecting algorithms \citen{carmon2017lower} are algorithms that only query the gradient at points $x\ind{t}$ with $x\ind{t}_i \neq 0$ if there exists some $j < t$ with $\nabla_i f(x\ind{j}) \neq 0$. Examples of zero-respecting first-order methods include gradient descent, accelerated gradient descent, and nonlinear conjugate gradient \citen{cg}. It is relatively easy to form lower bounds for zero-respecting algorithms applied to zero-chains, because one can prove that if the initial point is $x\ind{0} = \mathbf{0}$, then $x\ind{T}$ has at most $T$ nonzeros \cite[Observation~1]{carmon2017lower}. The particular first-order zero-chain we use to derive our lower bounds is $$ \bar{f}_{T,\sigma}(x) \triangleq q(x) + \sigma \sum_{i=1}^{T} \Upsilon (x_i) $$ where \newcommand{\funcdefs}{ \begin{flalign*} \Upsilon (\theta) &\triangleq 120 \int_{1}^{\theta} \frac{t^2 (t-1) }{1 + t^2} \,dt \\ q(x) &\triangleq \frac{1}{4} (x_1-1)^2 + \frac{1}{4} \sum_{i=1}^{T-1} (x_i - x_{i+1})^2. \end{flalign*} } \funcdefs This function $\bar{f}_{T,\sigma}$ is very similar to the function $\bar{f}_{T,\mu,r}$ of \citet*{Carmon:2017aa}. However, the lower bound proof is different because the primary challenge is to show $\bar{f}_{T,\sigma}$ is quasar-convex, rather than showing that $\| \nabla \bar{f}_{T,\sigma}(x) \| \ge \epsilon$ for all $x$ with $x_{T} \neq 0$. Our main lemma shows that this function is in fact $\frac{1}{100 T \sqrt{\sigma}}$-quasar-convex. \newcommand{\lemUnscaledLBstatement}{ \begin{restatable}{lem}{lemUnscaledLB} \label{lem:lb-unscaled} Let $\sigma \in (0, 10^{-6}], T \in \left[\sigma^{-1/2}, \infty \right) \cap \mathbb{Z}$. The function $\bar{f}_{T,\sigma}$ is $\frac{1}{100 T \sqrt{\sigma}}$-quasar-convex and $1$-smooth, with unique minimizer $x^{*} = \mathbf{1}$. Furthermore, if $x_{t} = 0$ for all $t = \ceil{T/2}, \dots, T$, then $\bar{f}_{T,\sigma}(x) - \bar{f}_{T,\sigma}(\mathbf{1}) \ge 2 T \sigma$. 
\end{restatable} } \lemUnscaledLBstatement The proof of Lemma~\ref{lem:lb-unscaled} appears in Appendix~\ref{sec:lem-lb-proof}. The argument rests on showing that the quasar-convexity inequality $\frac{1}{100 T \sqrt{\sigma}}(\bar{f}_{T,\sigma}(x) - \bar{f}_{T,\sigma}(\mathbf{1})) \le \nabla \bar{f}_{T,\sigma}(x)^\top (x - \mathbf{1}) $ holds for all $x \in \reals^T$. The nontrivial situation is when there exists some $j_1 < j_2$ such that $x_{j_1} \ge 0.9$, $x_{j_2} \le 0.1$, and $0.1 \le x_i \le 0.9$ for $i \in \{j_1 + 1, \dots, j_2 - 1\}$. In this situation, we use ideas closely related to the transition region arguments made in Lemma~3 of \citet*{Carmon:2017aa}. The intuition is as follows. If the gaps $x_{i+1} - x_{i}$ are large, then the convex function $q(x)$ dominates the function value and gradient of $\bar{f}_{T,\sigma}(x)$, allowing us to establish quasar-convexity. Conversely, if the $x_{i+1} - x_{i}$'s are small, then a large portion of the $x_i$'s must lie in the quasar-convex region of $\Upsilon$, and the corresponding $\Upsilon'(x_i) (x_i - 1)$ terms make $\nabla \bar{f}_{T,\sigma}(x)^\top (x - \mathbf{1})$ sufficiently positive. \begin{restatable}{lem}{lemMainLBquasar}\label{lem:main-lb-quasar} Let $\epsilon \in (0, \infty)$, $\gamma \in (0, 10^{-2}]$, $T = \ceil{10^{-2} \gamma^{-1} L^{1/2} R \epsilon^{-1/2}}$, and $\sigma = \frac{1}{10^4 T^2 \gamma^2}$, and assume $L^{1/2}R\epsilon^{-1/2} \ge 10^{3}$. Consider the function \begin{flalign}\label{eq:f-hat} \hat{f}(x) \triangleq L R^2 T^{-1} \cdot \bar{f}_{T,\sigma} ( x T^{1/2} R^{-1} ).
\end{flalign} This function is $L$-smooth and $\gamma$-quasar-convex, and its minimizer $x^*$ is unique and has $\norm{x^*} = R$. Furthermore, if $x_{t} = 0\,\, \forall t \in \mathbb{Z} \cap [T/2,T]$, then $\hat{f}(x) - \inf_z \hat{f}(z) > \epsilon$. \end{restatable} The proof of Lemma~\ref{lem:main-lb-quasar} appears in Appendix~\ref{sec:lem-lb-proof}. Combining Lemma~\ref{lem:main-lb-quasar} with Observation~1 from \citet{carmon2017lower} yields a lower bound for first-order zero-respecting algorithms. Furthermore, we can use the argument from \citen{carmon2017lower} to extend our lower bounds for first-order zero-respecting methods to the class of all deterministic first-order methods. This leads to \Cref{thm:main-lb-quasar}, whose proof appears in Appendix~\ref{sec:coro-lb-proof}. \begin{restatable}{thm}{coroMainLBquasar}\label{thm:main-lb-quasar} Let $\epsilon, R, L \in (0,\infty)$, $\gamma \in (0, 1]$, and assume $L^{1/2}R\epsilon^{-1/2} \ge 1$. Let $\mathcal{F}$ denote the set of $L$-smooth functions that are $\gamma$-quasar-convex with respect to some point with Euclidean norm less than or equal to $R$. Then, given any deterministic first-order method, there exists a function $f \in \mathcal{F}$ such that the method requires at least $\Omega(\gamma^{-1} L^{1/2} R \epsilon^{-1/2} )$ gradient evaluations to find an $\epsilon$-optimal point of $f$. \end{restatable} \Cref{thm:main-lb-quasar} demonstrates that the worst-case bound for our algorithm for quasar-convex minimization is tight within logarithmic factors. We note that by reduction (see Remark~\ref{rem:quasar-approx}), one can prove a lower bound of $\Omega(\gamma^{-1} \k^{1/2} )$ for strongly quasar-convex functions, demonstrating that our algorithm for strongly quasar-convex minimization is also optimal within logarithmic factors. 
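The claims of Lemma~\ref{lem:lb-unscaled} are easy to sanity-check numerically. The sketch below is an informal check of ours, not part of the formal development; the closed form for $\Upsilon$ is obtained by integrating $120\,t^2(t-1)/(1+t^2)$ directly. It evaluates $\bar{f}_{T,\sigma}$ and its gradient, and tests the quasar-convexity inequality at random points:

```python
import numpy as np

# Antiderivative of 120 t^2 (t-1) / (1 + t^2), so that
# Upsilon(x) = F(x) - F(1) matches the integral definition of Upsilon.
def F(t):
    return 120 * (t**2 / 2 - np.log1p(t**2) / 2 - t + np.arctan(t))

def upsilon(x):
    return F(x) - F(1.0)

def upsilon_prime(x):
    return 120 * x**2 * (x - 1) / (1 + x**2)

def f_bar(x, sigma):
    q = 0.25 * (x[0] - 1) ** 2 + 0.25 * np.sum((x[:-1] - x[1:]) ** 2)
    return q + sigma * np.sum(upsilon(x))

def grad_f_bar(x, sigma):
    g = np.zeros_like(x)
    g[0] = 0.5 * (x[0] - 1)            # from (x_1 - 1)^2 / 4
    d = x[:-1] - x[1:]
    g[:-1] += 0.5 * d                   # from (x_i - x_{i+1})^2 / 4
    g[1:] -= 0.5 * d
    return g + sigma * upsilon_prime(x)

T, sigma = 2000, 1e-6                   # satisfies T >= sigma^{-1/2}, sigma <= 1e-6
gamma = 1 / (100 * T * np.sqrt(sigma))
ones = np.ones(T)
assert upsilon(1.0) == 0.0 and upsilon(0.0) > 5
assert np.allclose(grad_f_bar(ones, sigma), 0)   # x* = 1 is stationary
rng = np.random.default_rng(0)
for _ in range(100):
    x = rng.uniform(-1.5, 2.5, size=T)
    lhs = grad_f_bar(x, sigma) @ (x - ones)
    rhs = gamma * (f_bar(x, sigma) - f_bar(ones, sigma))
    assert lhs >= rhs - 1e-9            # quasar-convexity inequality
```

Passing such spot checks of course proves nothing; the point is only that the stated constants are mutually consistent.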
Although the construction of the lower bounds in \citen{Carmon:2017aa} is quite similar to our construction, there are some important differences between our lower bounds and those in \citen{Carmon:2017aa}. First, the assumptions differ significantly; we assume quasar-convexity and Lipschitz continuity of the first derivative, while \citet{Carmon:2017aa} assume Lipschitz continuity of the first \emph{three} derivatives. Next, we have only logarithmic gaps between our lower and upper bounds, whereas there is a gap of $\tilde{O}(\epsilon^{-1/15})$ between the lower bound of $\Omega(\epsilon^{-8/5})$ given by \citen{Carmon:2017aa} and the best known upper bound of $O(\epsilon^{-5/3}\log(\epsilon^{-1}))$ given by \citen{carmon2017convex} for the minimization of functions satisfying the assumptions in \citen{Carmon:2017aa}. Another key difference is that the bounds in \citen{Carmon:2017aa} and \citen{carmon2017convex} apply to finding $\epsilon$-stationary points, rather than $\epsilon$-optimal points. Finally, we require $x_{t} = 0$ for all $t > T/2$ to guarantee $\hat{f}(x) - \inf_z \hat{f}(z) > \epsilon$, whereas \citet{Carmon:2017aa,carmon2017lower} only need $x_{T} = 0$ to guarantee $\| \nabla \hat{f}(x) \| > \epsilon$.
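To illustrate the zero-respecting mechanism underlying these bounds, the following sketch (ours, purely illustrative) runs gradient descent on $\bar{f}_{T,\sigma}$ from the origin and verifies that each iteration activates at most one new coordinate, in line with Observation~1 of \citet{carmon2017lower}:

```python
import numpy as np

def grad_f_bar(x, sigma):
    # Gradient of q(x) = (x_1 - 1)^2/4 + sum_i (x_i - x_{i+1})^2/4 ...
    g = np.zeros_like(x)
    g[0] = 0.5 * (x[0] - 1)
    d = x[:-1] - x[1:]
    g[:-1] += 0.5 * d
    g[1:] -= 0.5 * d
    # ... plus sigma * Upsilon'(x_i); Upsilon'(0) = 0, so zero coordinates
    # contribute nothing and the support can only grow through the chain q.
    return g + sigma * 120 * x**2 * (x - 1) / (1 + x**2)

T, sigma, eta = 50, 1e-6, 1.0           # step size 1/L with L = 1
x = np.zeros(T)                          # zero-respecting start x^(0) = 0
for k in range(1, 20):
    x = x - eta * grad_f_bar(x, sigma)
    assert np.all(x[k:] == 0)            # after k steps, at most k nonzeros
```

Since gradient descent is zero-respecting, the front of nonzero coordinates advances by at most one per iteration, which is exactly why at least $\Omega(T)$ iterations are needed before the last coordinates can move.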
\newcommand{\I}[2]{\mathlarger{\int}\limits_{#1}^{#2}} \newcommand{\floor}[1]{\left\lfloor#1\right\rfloor} \newcommand{\ceil}[1]{\left\lceil#1\right\rceil} \providecommand{\norm}[1]{\left\lVert#1\right\rVert} \providecommand{\tnorm}[1]{\lVert#1\rVert} \newcommand{\citen}[1]{[\citenum{#1}]} \section{The structure of quasar-convex functions} \label{sec:quasar-structure} In this section, we prove various properties of quasar-convex functions. First, we state a slightly more general definition of quasar-convexity on a convex domain. \begin{definition} Let $\mathcal{X} \subseteq \reals^n$ be convex. Furthermore, suppose that either $\mathcal{X}$ is open or $n = 1$. Let $\gamma \in (0,1]$ and let $x^{*} \in \mathcal{X}$ be a minimizer of the differentiable function $f : \mathcal{X} \rightarrow \reals$. The function $f$ is \emph{$\gamma$-quasar-convex} on $\mathcal{X}$ with respect to $x^*$ if for all $x \in \mathcal{X}$, \begin{equation*} f(x^{*}) \ge f(x) + \frac{1}{\gamma} \nabla f(x)^\top (x^{*}-x). \end{equation*} Suppose also $\mu \ge 0$. The function $f$ is \emph{$(\gamma,\mu)$-strongly quasar-convex} on $\mathcal{X}$ if for all $x \in \mathcal{X}$, \begin{equation*} f(x^{*}) \ge f(x) + \frac{1}{\gamma} \nabla f(x)^\top (x^{*}-x) + \frac{\mu}{2} \| x^{*} -x \|^2.
\end{equation*} If $\mathcal{X}$ is of the form $[a,b] \subseteq \reals$, then $\nabla f(a)$ and $\nabla f(b)$ here denote $\lim\limits_{h \rightarrow 0^+} \ff{f(a+h)-f(a)}{h}$ and $\lim\limits_{h \rightarrow 0^-} \ff{f(b+h)-f(b)}{h}$, respectively. Differentiability simply means that $\nabla f(x)$ exists for all $x \in \mathcal{X}$. \label{defn:gen-defns} \end{definition} Definition~\ref{defn:gen-defns} is exactly the same as Definition~\ref{defn:defns} if the domain $\mathcal{X} = \reals^n$. We remark that it is possible to generalize Definition~\ref{defn:gen-defns} even further to the case where $\mathcal{X}$ is a \textit{star-convex} set with star center $x^*$. \subsection{Proof of Observation~\ref{obs:unimodalImpliesQuasar}}\label{sec:unimodal-quasar} \unimodalImpliesQuasar* \begin{proof} First, we prove that if $f$ is continuously differentiable and unimodal with nonzero derivative except at minimizers, then $f$ is $\gamma$-quasar-convex for some $\gamma > 0$. Let $x^*$ be a minimizer of $f$ on $[a,b]$, and let $x \in [a,b]$ be arbitrary. Define $g_x(t) = f((1-t)x^*+tx)$. $g_x$ is differentiable and nondecreasing on $[0,1]$, so $g_x'(t) \ge 0$ for $t \in [0,1]$, and \[ f(x) - f(x^*) = g_x(1) - g_x(0) = \I{0}{1} g_x'(t) \,dt ~. \] Also, $g_x'(1) = f'(x)(x-x^*) \ne 0$ by assumption for all $x$ with $f(x) > f(x^*)$. Note that if $f(x) = f(x^*)$, then $g_x(t)$ is constant on $[0, 1]$ by unimodality and so $g_x'(t) = 0$ for all $t \in [0,1]$. Define $C_{x^*} = \sup\limits_{x \in [a,b]}\sup\limits_{t \in [0,1]} \f{g_x'(t)}{g_x'(1)}$, where we define the inner supremum to be 1 if $f(x) = f(x^*)$. By continuity of each $g_x'$ over $[0,1]$ and the fact that $g_x'(1) > 0$ for all $x \in [a,b]$ with $f(x) > f(x^*)$, $\sup_{t \in [0,1]} \ff{g_x'(t)}{g_x'(1)}$ is a continuous function of $x$. Thus as the outer supremum is over the compact interval $[a,b]$, $C_{x^*}$ indeed exists; note that $C_{x^*} \in [1,\infty)$.
For any $x \in [a,b]$ with $f(x) > f(x^*)$, we thus have $\f{f(x)-f(x^*)}{f'(x)(x-x^*)} = \f{\int_{0}^{1}g_x'(t) \, dt}{g_x'(1)} \le C_{x^*}$, meaning $f(x^*) \ge f(x) + C_{x^*} (f'(x)(x^*-x))$. This also holds for all $x$ such that $f(x) = f(x^*)$, as either $x = x^*$ or $f'(x) = 0$ in these cases. Thus, $f$ is $\ff{1}{C_{x^*}}$ quasar-convex on $[a,b]$ with respect to $x^*$. Finally, if we define $C_{\max} = \max\limits_{x^* \in \mathop{\rm argmin}_{x \in [a,b]} f(x)} C_{x^*}$, we have that $f$ is $\ff{1}{C_{\max}}$ quasar-convex on $[a,b]$ where $\ff{1}{C_{\max}} \in (0,1]$ is a constant depending only on $f$, $a$, and $b$. This completes the proof. Now, we prove the other direction (which is much simpler). Suppose that $f : [a,b] \rightarrow \reals$ is differentiable and quasar-convex for some $\gamma \in (0, 1]$. Then $\ff{1}{\gamma}f'(x)(x-x^*) \ge f(x) - f(x^*) \ge 0$. If $x$ is not a minimizer of $f$, then the last inequality is strict; otherwise, either $x \in \{a,b\}$ or $f'(x) = 0$. In other words, assuming $x$ is not a minimizer, when $x < x^*$ [i.e. to the left of $x^*$], $f' < 0$ and so $f$ is strictly decreasing, while when $x > x^*$ [i.e. to the right of $x^*$], $f' > 0$ and so $f$ is strictly increasing. This implies that $f$ is unimodal. Finally, suppose $h : \reals^n \rightarrow \reals$ is $\gamma$-quasar-convex with respect to a minimizer $x^*$, suppose $d \in \reals^n$ has $\norm{d} = 1$, and define $f(\theta) \triangleq h(x^* + \theta d)$. Note that $f'(\theta) = d^\top \nabla h(x^* + \theta d)$ and that $\theta = 0$ minimizes $f$. By $\gamma$-quasar-convexity of $h$ with respect to $x^*$, we have for all $\theta \in \reals$ that \begin{align*} f(0) = h(x^*) \ge h(x^* + \theta d) + \ff{1}{\gamma}\nabla h(x^* + \theta d)^\top (x^* - (x^* + \theta d)) = f(\theta) + \ff{1}{\gamma} f'(\theta)(0-\theta)~, \end{align*} meaning that $f$ is $\gamma$-quasar-convex. 
\end{proof} \subsection{Characterizations of quasar-convexity} \label{sec:equivs} \begin{lem} \label{lem:star_char} Let $f : \mathcal{X} \rightarrow \reals$ be differentiable with a minimizer $x^* \in \mathcal{X}$, where the domain $\mathcal{X} \subseteq \reals^n$ is open and convex.\footnote{We remark that this lemma still holds if ${\mathcal{X}\text{ is open and star-convex with star center }x^*\text{, or if }\mathcal{X}\text{ is any subinterval of }\reals.}$} Then, the following two statements: \begin{equation} \label{eq:star_def_undiff} f(tx^*+(1-t)x) + t\left(1-\f{t}{2-\gamma}\right)\f{\gamma\mu}{2}\norm{x^*-x}^2 \le \gamma t f(x^*) + (1-\gamma t)f(x)\,\, \forall x \in \mathcal{X}, \,t \in [0,1] \end{equation} \begin{equation} \label{eq:star_diff} f(x^*) \ge f(x) + \f{1}{\gamma}\nabla f(x)^\top(x^*-x) + \f{\mu}{2} \norm{x^*-x}^2 \,\,\forall x \in \mathcal{X} \end{equation} are equivalent for all $\mu \ge 0$, $\gamma \in (0,1]$. \end{lem} \begin{proof} First, we prove that \eqref{eq:star_diff} implies \eqref{eq:star_def_undiff}. Suppose \eqref{eq:star_diff} holds and $\mu = 0$. Let $x \in \mathcal{X}$ be arbitrary and for all $t\in [0,1]$ let $x_t \triangleq (1-t)x^* + t x$ and let $g(t) \triangleq f(x_t) - f(x^*)$. Since $g'(t) = \nabla f(x_t)^\top (x - x^*)$ and $x^* - x_t = - t (x^* - x)$, substituting these equalities into $\eqref{eq:star_diff}$ yields that $g(t) \le \frac{t}{\gamma} g'(t)$ for all $t \in [0,1]$. Rearranging, we see that the inequality in \eqref{eq:star_def_undiff} [for fixed $x$] is equivalent to the condition that $g(t) \le \ell(t)$ for all $t \in [0,1]$, where ${\ell(t) \triangleq (1 - \gamma (1 - t)) g(1)}$. We proceed by contradiction: suppose that for some $\alpha \in [0, 1]$ it is the case that $g(\alpha) > \ell(\alpha)$. Note that $\alpha > 0$ necessarily. Let $\beta$ be the minimum element of the set $\{ t \in [\alpha, 1] : g(t) = \ell(t) \}$. Since $g(1) = \ell(1)$, such a $\beta$ exists with $\alpha < \beta$. 
Consequently, for all $t \in (\alpha, \beta)$ we have $g(t) \geq \ell(t)$ and so \begin{equation} \label{eq:g1} \int_{\alpha}^{\beta} g'(t) \,dt = g(\beta) - g(\alpha) < \ell(\beta) - \ell(\alpha) = \gamma (\beta - \alpha) g(1) \end{equation} and \begin{equation} \label{eq:g2} (\beta - \alpha) g(1) = \int_{\alpha}^{\beta} \frac{\ell(t)}{1 - \gamma (1 - t)} \,dt \leq \int_{\alpha}^{\beta} \frac{g(t)}{1 - \gamma (1 - t)} \,dt ~. \end{equation} Combining \eqref{eq:g1} and \eqref{eq:g2} and using that $g(t) \le \ff{t}{\gamma}g'(t)$, we have \[ \int_{\alpha}^{\beta} \left[ \frac{1}{t} - \frac{1}{1 - \gamma (1 - t)} \right] g(t) \,dt \le \int_{\alpha}^{\beta} \f{g'(t)}{\gamma} \,dt - \int_{\alpha}^{\beta} \frac{g(t)}{1 - \gamma (1 - t)} \,dt < 0 \] As $g(t) = f(x_t)-f(x^*) \ge 0$ and $1/t \ge 1/(1-\gamma (1 - t))$ for all $t \in [\alpha,\beta] \subset (0,1]$, we have a contradiction. Now, suppose $\mu > 0$. Define $h(x) \triangleq f(x) - \f{\gamma\mu}{2(2-\gamma)} \norm{x^*-x}^2$. Observe that $h(x^*) = f(x^*)$, $\nabla h(x) = \nabla f(x) - \f{\gamma\mu}{2-\gamma} (x-x^*)$, and $\nabla h(x)^\top(x^*-x) = \nabla f(x)^\top(x^*-x) + \f{\gamma\mu}{2-\gamma} \norm{x^*-x}^2$. Thus, by algebraic simplification and then application of \eqref{eq:star_diff} by assumption, \begin{flalign*} h(x) + \f{1}{\gamma} \nabla h(x)^\top(x^*-x) &= f(x) - \f{\gamma\mu}{2(2-\gamma)} \norm{x^*-x}^2 + \f{1}{\gamma}\nabla f(x)^\top(x^*-x) + \f{\mu}{2-\gamma} \norm{x^*-x}^2 && \\ &= f(x) + \f{1}{\gamma}\nabla f(x)^\top(x^*-x) + \f{\mu}{2}\norm{x^*-x}^2 \left(- \f{\gamma}{2-\gamma} + \f{2}{2-\gamma} \right)&& \\ &= f(x) + \f{1}{\gamma}\nabla f(x)^\top(x^*-x) + \f{\mu}{2}\norm{x^*-x}^2 &&\\ &\le f(x^*) = h(x^*)~. 
\end{flalign*} As we earlier showed that \eqref{eq:star_diff} implies \eqref{eq:star_def_undiff} in the $\mu = 0$ case, we have that \[h(tx^* + (1-t)x) \le \gamma t h(x^*) + (1-\gamma t)h(x)~.\] Substituting in the definition of $h$: \begin{flalign*} &f(tx^* + (1-t)x) - \f{\gamma\mu}{2(2-\gamma)} \norm{x^*-tx^*-(1-t)x}^2 \\ \le\,\,& \gamma t f(x^*) + (1-\gamma t)f(x) - (1-\gamma t)\f{\gamma\mu}{2(2-\gamma)}\norm{x^*-x}^2~. \end{flalign*} Rearranging terms and simplifying yields \begin{flalign*} &f(tx^* + (1-t)x) + \f{\gamma\mu}{2(2-\gamma)} \left( (1-\gamma t) \norm{x^*-x}^2 - (1-t)^2\norm{x^*-x}^2\right) \\ \le\,\,& \gamma t f(x^*) + (1-\gamma t)f(x)~. \end{flalign*} Finally, $(1-\gamma t) - (1-t)^2 = t((2-\gamma)-t)$, which gives the desired result. Now, we prove that \eqref{eq:star_def_undiff} implies \eqref{eq:star_diff}. This time, define $g(t) \triangleq f(tx^* + (1-t)x)$. For $t \in [0,1)$, $g'(t) = \nabla f(tx^* + (1-t)x)^\top(x^*-x)$. By assumption, $g(t) + t\left(1-\f{t}{2-\gamma}\right)\f{\gamma\mu}{2}\norm{x^*-x}^2 \le \gamma tg(1) + (1-\gamma t)g(0)$ for all $t \in [0,1]$, so $g(1) \ge g(0) + \f{g(t) - g(0)}{\gamma t} + \left(1-\f{t}{2-\gamma}\right)\f{\mu}{2}\norm{x^*-x}^2$ for all $t \in (0,1]$. Taking the limit as $t \downarrow 0$ yields $f(x^*) = g(1) \ge g(0) + \f{1}{\gamma}g'(0) + \f{\mu}{2}\norm{x^*-x}^2 = f(x) + \f{1}{\gamma}\nabla f(x)^\top(x^*-x) + \f{\mu}{2}\norm{x^*-x}^2$. \end{proof} \begin{rem} \label{rem:star_char_gen} A modified version of \Cref{lem:star_char} holds if $x^*$ is replaced with any point $\hat{x} \in \mathcal{X}$, where either $\gamma = 1$ or \eqref{eq:star_def_undiff} and \eqref{eq:star_diff} hold for all $x \in \mathcal{X}$ with $f(x) \ge f(\hat{x})$.
If $f$ satisfies either of these equivalent properties, we then say that $f$ is ``$(\gamma,\mu)$-strongly quasar-convex with respect to $\hat{x}$.'' \end{rem} \begin{rem}\label{eq:quasar-convex-more-general} Using \Cref{rem:star_char_gen}, we can show that even if $\hat{x}$ is not a minimizer of the function $f$, Algorithms~\ref{alg:strongly_agd} and \ref{alg:nonstrong_agd} can still be applied to efficiently find a point that has an objective value of at most $f(\hat{x}) + \epsilon$; the respective runtime bounds are the same, and the proofs remain essentially unchanged. \end{rem} Note that when $\gamma = 1, \mu = 0$, and \eqref{eq:star_def_undiff} is required to hold for \textit{all} minimizers of $f$, it becomes the standard definition of star-convexity \citen{nesterov2006cubic}. \begin{coro} \label{rem:distbound} If $f$ is $(\gamma,\mu)$-strongly quasar-convex with minimizer $x^*$, then $$f(x) \ge f(x^*) + \f{\gamma\mu}{2(2-\gamma)}\norm{x^*-x}^2,~ \forall x$$ \end{coro} \begin{proof2} Plug in $t = 1$ to \eqref{eq:star_def_undiff} to get \[f(x^*) + \left(1-\f{1}{2-\gamma}\right)\f{\gamma\mu}{2}\norm{x^*-x}^2 \le \gamma f(x^*) + (1-\gamma)f(x)~.\] Simplifying yields \[f(x) \ge f(x^*) + \left(1-\f{1}{2-\gamma}\right)\f{\gamma\mu}{2(1-\gamma)}\norm{x^*-x}^2 = f(x^*) + \f{\gamma\mu}{2(2-\gamma)}\norm{x^*-x}^2~. \tag*{\qedhere}\] \end{proof2} \begin{observation} \label{obs:l_vs_mu} If $f$ is $(\gamma,\mu)$-strongly quasar-convex, then $f$ is not $L$-smooth for any $L < \ff{\gamma\mu}{2-\gamma}$. \end{observation} \begin{proof} If $f$ is $(\gamma,\mu)$-strongly quasar-convex, \Cref{rem:distbound} says that $f(x) \ge f(x^*) + \ff{\gamma\mu}{2(2-\gamma)}\norm{x^*-x}^2$ for all $x$. If $f$ is $L$-smooth, \Cref{fact:smooth_decr} says that $f(x) \le f(x^*) + \ff{L}{2}\norm{x^*-x}^2$ for all $x$.
Thus, if $f$ is $(\gamma,\mu)$-strongly quasar-convex and $L$-smooth, we have $\ff{\gamma\mu}{2(2-\gamma)}\norm{x^*-x}^2 \le \ff{L}{2}\norm{x^*-x}^2$ for all $x$, which means that we must have $L \ge \ff{\gamma\mu}{2-\gamma}$. \end{proof} \begin{observation} \label{obs:minimizers} If $f$ is $\gamma$-quasar-convex, the set of its minimizers is star-convex. \end{observation} \begin{proof} Recall that a set $S$ is termed \textit{star-convex} (with star center $x_0$) if there exists an $x_0 \in S$ such that for all $x \in S$ and $t \in [0,1]$, it is the case that $tx_0 + (1-t)x \in S$ \citen{munkres}. Suppose $f : \mathcal{X} \rightarrow \reals$ is $\gamma$-quasar-convex with respect to a minimizer $x^* \in \mathcal{X}$, where $\mathcal{X}$ is convex. Suppose $y \in \mathcal{X}$ also minimizes $f$. Then for any $t \in [0,1]$, equation \eqref{eq:star_def_undiff} implies that $f(tx^* + (1-t)y) \le \gamma t f(x^*) + (1-\gamma t)f(y) = \gamma t f(x^*) + (1-\gamma t)f(x^*) = f(x^*)$. So, $tx^* + (1-t)y$ is in $\mathcal{X}$ and also minimizes $f$. Thus, the set of minimizers of $f$ is star-convex, with star center $x^*$. \end{proof} \begin{observation} \label{obs:unique} If $f$ is $(\gamma,\mu)$-strongly quasar-convex with $\mu > 0$, $f$ has a unique minimizer. \end{observation} \begin{proof} By \Cref{rem:distbound}, $f(x) > f(x^*)$ if $\mu > 0$ and $x \ne x^*$, implying that $x$ minimizes $f$ iff $x = x^*$. \end{proof} \begin{observation} \label{obs:tradeoff} Suppose $f$ is differentiable and $(\gamma,\mu)$-strongly quasar-convex. Then $f$ is also $(\theta \gamma,\mu/\theta)$-strongly quasar-convex for any $\theta \in (0,1]$. \end{observation} \begin{proof} $(\gamma,\mu)$-strong quasar-convexity states that $0 \ge f(x^{*}) - f(x) \ge \frac{1}{\gamma} \nabla f(x)^\top (x^* - x) + {\frac{\mu}{2} \norm{x^* - x}^2}$ for some $x^*$ and all $x$ in the domain of $f$.
Multiplying by $\ff{1}{\theta}-1 \ge 0$, it follows that \newline $f(x^*) \ge f(x) + \frac{1}{\gamma} \nabla f(x)^\top (x^{*} - x) + \frac{\mu}{2} \norm{ x - x^{*} }^2 \ge f(x) + \frac{1}{\gamma \theta} \nabla f(x)^\top (x^{*} - x) + \frac{\mu}{2\theta} \norm{x^* - x}^2$. Note that any $(\gamma,\mu)$-strongly quasar-convex function is also $(\gamma,\tilde{\mu})$-strongly quasar-convex for any $\tilde{\mu} \in [0,\mu]$. Thus, the restriction $\gamma \in (0,1]$ in the definition of quasar-convexity may be made without any loss of generality compared to the restriction $\gamma > 0$. \end{proof} \begin{observation} \label{obs:scaling} The parameter $\gamma$ is a dimensionless quantity, in the sense that if $f$ is $\gamma$-quasar-convex on $\reals^n$, the function $g(x) \triangleq a \cdot f(b x)$ is also $\gamma$-quasar-convex on $\reals^n$, for any $a \ge 0, b \in \reals$. \end{observation} \begin{proof} If $a$ or $b$ is 0, then $g$ is constant so the claim is trivial. Now suppose $a,b \ne 0$. Let $x^*$ denote the quasar-convex point of $f$. Observe that as $x^*$ minimizes $f$, $x^* / b$ minimizes $g$. By \eqref{eq:star_def_undiff}, for all $x \in \reals^n$ we have \begin{flalign*} \ff{1}{a}g((tx^*+(1-t)x)/b) &= f(tx^*+(1-t)x) \\ &\le \gamma t f(x^*) + (1-\gamma t)f(x) \\ &= \gamma t \cdot \ff{1}{a} g(x^*/ b) + (1-\gamma t) \cdot \ff{1}{a} g(x / b)~. \end{flalign*} Multiplying by $a$, we have $g(t(x^* / b) + (1-t)(x/b)) \le \gamma t g(x^* / b) + (1-\gamma t)g(x / b)$ for all $x \in \reals^n$. Since $x/b$ can take on any value in $\reals^n$, this means that $g$ is $\gamma$-quasar-convex with respect to $x^* / b$. \end{proof}
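As a closing illustration (ours, informal), Observation~\ref{obs:scaling} can be probed numerically by estimating the best quasar-convexity constant of a function on a grid and comparing it with that of a rescaled copy. The test function $x^2 + 3\sin^2 x$ and the grid are arbitrary choices of ours, and quasar-convexity of this function on the sampled range is checked only empirically here:

```python
import numpy as np

def quasar_constant(f, fprime, xstar, pts):
    # Largest gamma with f(x*) >= f(x) + (1/gamma) f'(x)(x* - x) on the sample:
    # gamma = min over x of f'(x)(x - x*) / (f(x) - f(x*)).
    num = fprime(pts) * (pts - xstar)
    den = f(pts) - f(xstar)
    mask = den > 1e-12            # skip points numerically at the minimum value
    return np.min(num[mask] / den[mask])

f = lambda x: x**2 + 3 * np.sin(x) ** 2    # nonconvex; unimodal with minimum at 0
fp = lambda x: 2 * x + 3 * np.sin(2 * x)
a, b = 5.0, 3.0
g = lambda x: a * f(b * x)                 # rescaling as in the observation
gp = lambda x: a * b * fp(b * x)

pts = np.linspace(-10, 10, 4001)
gamma_f = quasar_constant(f, fp, 0.0, pts)
gamma_g = quasar_constant(g, gp, 0.0, pts / b)   # same sample points for g
assert abs(gamma_f - gamma_g) < 1e-9             # gamma is scale-invariant
assert 0 < gamma_f < 1                           # quasar- but not star-convex here
```

The two estimates coincide because the ratio defining $\gamma$ is unchanged under $x \mapsto a f(bx)$, exactly as the proof above shows.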
<?php
/**
 * An extension to Zend FlashMessenger to allow for status updates in a flash message.
 * Each message will be shown as a separate message. You can group messages in one
 * status by passing them as an array.
 *
 * @package    MUtil
 * @subpackage Controller
 * @copyright  Copyright (c) 2014 Erasmus MC
 * @license    New BSD License
 */
class Gems_Controller_Action_Helper_FlashMessenger extends \MUtil_Controller_Action_Helper_FlashMessenger
{
}
Q: NEAT function taking forever to execute

I'm trying to use the NEAT library, but unfortunately one of the lines is taking forever to execute (I waited 2 hrs and it had not completed). Here is the code:

config = neat.config.Config(neat.DefaultGenome, neat.DefaultReproduction, neat.DefaultSpeciesSet, neat.DefaultStagnation, config_path)
p = neat.Population(config)

p = neat.Population(config) is the line taking forever to execute. The file in config_path is this one: file

Can anyone help?
\section*{Introduction} As the existence of the massive vector bosons has clearly shown, the Proca vector field is a very important field in physics. Its importance resides in the fact that, being a vector field whose dynamical term is given by the exterior derivative and having mass, it allows for the description of all vector fields that are gauge fields before eventually acquiring their masses through a mass generation mechanism; another fundamental reason is that, since the dynamical term is the divergence of the curl of the vector field and a mass term is present, this particular structure of the field equations automatically provides the subsidiary condition that reduces the number of degrees of freedom to those needed to define massive vector fields. However, the fact that this field has a dynamical term written in terms of the antisymmetric part of the derivative, which can be defined without torsion, does not prevent us from trying to generalize it to the exterior derivative calculated with respect to the most general connection, in which torsion is present in a natural way; this most general connection is commonly not used because, as is well known, it would spoil gauge invariance, but there is no gauge invariance to preserve in this case, for the field is massive: therefore such a general connection can be employed. In this paper we will consider such a theory, deriving its consequences and discussing its implications regarding the Proca field, its propagation and its most important geometrical properties.
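To fix the notation for what follows, we recall the standard flat-spacetime Proca field equations (a well-known computation, included here for the reader's convenience): for a vector field $V^{\nu}$ of mass $M$,
\begin{eqnarray}
\partial_{\mu}\left(\partial^{\mu}V^{\nu}-\partial^{\nu}V^{\mu}\right)+M^{2}V^{\nu}=0
\end{eqnarray}
and taking the divergence $\partial_{\nu}$ of these equations makes the first term vanish identically by antisymmetry, leaving $M^{2}\partial_{\nu}V^{\nu}=0$: whenever $M\neq0$, the subsidiary condition $\partial_{\nu}V^{\nu}=0$ follows from the field equations themselves, reducing the four components of $V^{\nu}$ to the three degrees of freedom of a massive spin-$1$ field.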
\section{Fundamental Definitions} In a given geometry, the metric structure is given in terms of two symmetric metric tensors $g_{\alpha\beta}$ and $g^{\alpha\beta}$ that are each the inverse of the other, and differential operations $D_{\mu}$ are defined through the connections $\Gamma^{\rho}_{\alpha\beta}$; the metric tensors are to be such that they can be locally reduced to the Minkowskian form of signature $(1,-1,-1,-1)$, and the covariant derivatives applied upon the metric tensors are required to vanish according to what is called the metricity condition $D_{\mu}g=0$, as discussed in \cite{h-h-k-n}. Furthermore, requiring this condition of metricity for any connection leads to the complete antisymmetry of Cartan torsion tensor $Q_{\alpha\mu\rho}$, as explained in \cite{f/1}. In this background, we will define Riemann curvature tensor $G_{\alpha\beta\mu\nu}$ as \begin{eqnarray} G^{\alpha}_{\lambda\mu\nu}= \partial_{\mu}\Gamma^{\alpha}_{\lambda\nu}-\partial_{\nu}\Gamma^{\alpha}_{\lambda\mu} +\Gamma^{\alpha}_{\rho\mu}\Gamma^{\rho}_{\lambda\nu} -\Gamma^{\alpha}_{\rho\nu}\Gamma^{\rho}_{\lambda\mu} \label{Riemann} \end{eqnarray} antisymmetric in both the first and the second couple of indices, allowing only one independent contraction, Ricci curvature tensor $G^{\lambda}_{\alpha\lambda\beta}=G_{\alpha\beta}$, whose contraction is Ricci curvature scalar $G_{\alpha\beta}g^{\alpha\beta}=G$, and this will set our convention.
Riemann curvature tensor, Ricci curvature tensor and scalar, together with Cartan torsion tensor, verify \begin{eqnarray} D_{\rho}Q^{\rho\mu \nu} +\left(G^{\nu\mu}-\frac{1}{2}g^{\nu\mu}G\right) -\left(G^{\mu\nu}-\frac{1}{2}g^{\mu\nu}G\right)\equiv0 \label{torsiondiv} \end{eqnarray} and \begin{eqnarray} D_{\mu}\left(G^{\mu\rho}-\frac{1}{2}g^{\mu\rho}G\right) -\left(G_{\mu\beta}-\frac{1}{2}g_{\mu\beta}G\right)Q^{\beta\mu\rho} +\frac{1}{2}G^{\mu\kappa\beta\rho}Q_{\beta\mu\kappa}\equiv0 \label{curvaturediv} \end{eqnarray} which are geometric identities in the form of conservation laws, called Jacobi-Bianchi identities. We remark that from the metric tensor it is possible to define the Levi-Civita tensor $\varepsilon$, for which $D_{\mu}\varepsilon=0$ precisely because of the complete antisymmetry of torsion. In turn, since torsion is completely antisymmetric, we can write \begin{eqnarray} Q^{\beta\mu\rho}=\varepsilon^{\beta\mu\rho\sigma}W_{\sigma} \label{axialvector} \end{eqnarray} in terms of what is called the axial torsion vector. Within this background, to define matter fields that can be classified according to the value of their spin, we have to consider that a given matter field of spin $s$ possesses $2s+1$ degrees of freedom, which have to correspond to the $2s+1$ independent solutions of a system of equations that specify the highest-order time derivative of all components of the field, called the system of matter field equations. However, since it may happen that the field equations alone are not enough to single out the correct number of independent solutions, restrictions need to be imposed in terms of equations in which the highest-order time derivatives of the field components never occur, called constraints; these constraints can be imposed in two ways, either being implied by the field equations, or being assigned as subsidiary conditions that come along with the field equations themselves.
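As a simple example of the former case, consider for a moment the torsionless Proca field equations $\partial_{\mu}Z^{\mu\alpha}+m^{2}V^{\alpha}=0$ with $Z_{\mu\nu}=\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu}$: taking their divergence, the antisymmetry of $Z^{\mu\alpha}$ annihilates the dynamical term and leaves \begin{eqnarray} m^{2}\partial_{\alpha}V^{\alpha}=0 \end{eqnarray} in which no second-order time derivative occurs, so that the constraint reducing the $4$ components of the vector to the $2s+1=3$ degrees of freedom of a massive spin-$1$ field is implied by the field equations themselves, with no need of any subsidiary condition.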
Although the former procedure seems more elegant, whenever interactions are present it can give rise to two types of problems. The first concerns the fact that the presence of the interacting fields could raise the order of the derivatives in the constraining equation up to the order of the field equations themselves, creating the possibility that highest-order time derivatives of some component occur, converting the constraint into a field equation and thus spoiling the counting of degrees of freedom. Before proceeding we have to remind the reader that, to check causal propagation, the general method is to take the field equations, possibly modified by the constraints, and consider the terms with the highest-order derivatives of the field, formally replacing the derivatives with the vector $n$ in order to obtain the propagator; one then computes its determinant and sets it to zero, obtaining an equation in $n$ called the characteristic equation, whose solutions are the normals to the characteristic surfaces, representing the propagation of the wave fronts: if there is no time-like normal among the solutions, then there is no space-like characteristic surface, and therefore there is no acausal propagation of the wave fronts. If in the constraining equation the highest-order time derivatives never appear, or if they do appear but can be removed by means of the field equations, then the constraint is a constraint indeed; but in this case a second type of problem can arise, namely that the interacting fields could introduce terms with highest-order derivatives into the propagator, allowing these terms to influence the propagation of the wave fronts themselves, as explained in \cite{v-z}.
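For the torsionless Proca field this method is straightforward: once the constraint $\partial_{\alpha}V^{\alpha}=0$ is plugged back, the field equations reduce to $\partial^{2}V^{\alpha}+m^{2}V^{\alpha}=0$, and the formal replacement $\partial_{\mu}\rightarrow n_{\mu}$ in the highest-order terms gives the propagator $n^{2}\delta^{\alpha}_{\beta}$, with characteristic equation \begin{eqnarray} \det\left(n^{2}\delta^{\alpha}_{\beta}\right)=\left(n^{2}\right)^{4}=0 \end{eqnarray} whose solutions are all light-like normals, so that in the absence of torsion the Proca field propagates causally; it is precisely this structure that the most general connection will modify.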
Once this analysis is performed, causal propagation of the wave fronts is checked, and the exact number of degrees of freedom of the matter field solution is established, the last requirement for this system of matter field equations is that it has to ensure the complete antisymmetry of the spin, so that, taking the spin $S^{\nu\sigma\rho}$ together with the energy $T^{\sigma\rho}$, the relationships \begin{eqnarray} D_{\rho}S^{\rho\mu\nu}+\frac{1}{2}\left(T^{\mu\nu}-T^{\nu\mu}\right)=0 \label{conservationspin} \end{eqnarray} and \begin{eqnarray} D_{\mu}T^{\mu\rho}-T_{\mu\beta}Q^{\beta\mu\rho}-S_{\beta\mu\kappa}G^{\mu\kappa \beta \rho}=0 \label{conservationenergy} \end{eqnarray} have to be verified, implying the whole set of field equations \begin{eqnarray} \left(G^{\sigma\rho}-\frac{1}{2}g^{\sigma\rho}G\right)=-\frac{1}{2}T^{\sigma\rho} \label{einstein} \end{eqnarray} and \begin{eqnarray} Q^{\nu\sigma\rho}=S^{\nu\sigma\rho} \label{sciama-kibble} \end{eqnarray} to be such that the conservation laws (\ref{torsiondiv}) and (\ref{curvaturediv}) are satisfied automatically. This determines the set-up of the fundamental field equations in minimal coupling, that is, taking the least-order derivatives possible on both sides of the field equations. \section{Propagation and Geometrical Properties} Having settled the background in this way, with the background characterized by these restrictions, matter fields will behave in a correspondingly restricted way, as also explained in \cite{f/2}. We now begin to consider the issue of which matter vector fields could possibly be defined within this background.
In the case of a vector $V_{\mu}$, it is possible to define, beside the standard covariant derivative given in terms of the connection, another special differential operation given by $Z_{\mu\nu}=\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu}$, which involves no additional field and is called the curl or exterior derivative; this can be generalized to the differential operator given by $Z_{\rho\mu}=D_{\rho}V_{\mu}-D_{\mu}V_{\rho}$, which is formally the exterior derivative but now taken with respect to the most general connection. So, given the vector field $V_{\mu}$, we postulate the most general Proca matter field equations as \begin{eqnarray} D_{\mu}Z^{\mu\alpha} +\frac{\lambda}{2} D_{\mu}Z_{\eta\rho}\varepsilon^{\mu\eta\rho\alpha}+m^{2}V^{\alpha}=0 \label{fieldequations} \end{eqnarray} which specify the second-order time derivative of the spatial components only, but which also develop the constraint \begin{eqnarray} &m^{2}D_{\mu}V^{\mu} -\frac{\lambda}{4}Q_{\rho\mu\nu}D^{\rho}Z_{\alpha\beta}\varepsilon^{\alpha\beta\mu\nu} -\frac{1}{2}Q^{\rho\alpha\beta}D_{\rho}Z_{\alpha\beta}-\\ \nonumber &-\frac{\lambda}{2}D_{\mu}Q^{\rho}_{\phantom{\rho}\beta\nu}Z_{\rho\alpha}\varepsilon^{\alpha\beta\mu\nu} -\frac{1}{2}D_{\rho}Q^{\rho\alpha\beta}Z_{\alpha\beta}=0 \label{constraint} \end{eqnarray} and where the conserved quantities are given by the energy \begin{eqnarray} \nonumber &T^{\alpha\mu}= -\frac{1}{2}g^{\alpha\mu}m^{2}V^{2} +\left(\frac{1}{4}g^{\alpha\mu}Z_{\rho\eta}Z^{\rho\eta} -Z^{\mu\theta}Z^{\alpha}_{\phantom{\alpha}\theta}\right)+\\ &+D_{\rho}V^{\mu}\left(Z^{\rho\alpha}+\frac{\lambda}{2}Z_{\sigma\theta} \varepsilon^{\sigma\theta\rho\alpha}\right) \label{energy} \end{eqnarray} and the spin \begin{eqnarray} S^{\rho\alpha\beta}=\frac{1}{2} \left[V^{\alpha}\left(Z^{\rho\beta}+\frac{\lambda}{2}Z_{\sigma\theta} \varepsilon^{\sigma\theta\rho\beta}\right) -V^{\beta}\left(Z^{\rho\alpha}+\frac{\lambda}{2}Z_{\sigma\theta} \varepsilon^{\sigma\theta\rho\alpha}\right)\right] \label{spin}
\end{eqnarray} so that, whereas the condition \begin{eqnarray} V^{\alpha}\left(Z^{\rho\beta} +\frac{\lambda}{2}Z_{\sigma\theta}\varepsilon^{\sigma\theta\rho\beta}\right) +V^{\rho}\left(Z^{\alpha\beta} +\frac{\lambda}{2}Z_{\sigma\theta}\varepsilon^{\sigma\theta\alpha\beta}\right)=0 \label{condition} \end{eqnarray} ensures the complete antisymmetry of the spin, this form of the spin with the energy is such that the conservation laws (\ref{conservationspin}) and (\ref{conservationenergy}) are verified. We notice a couple of facts about the constraint (\ref{constraint}) and the condition of complete antisymmetry of the spin (\ref{condition}): first, due to the presence of torsion the constraint contains terms with second-order time derivatives of the spatial components, which can anyway be removed by means of the field equations, so that it is a genuine constraint that can then be plugged back into the field equations, allowing them to specify the second-order time derivative of all components; second, the condition of complete antisymmetry of the spin admits only one independent contraction, which eventually yields $V_{\rho}Q^{\rho\alpha\beta}=0$ and $W^{\nu}V^{\rho}=W^{\rho}V^{\nu}$, thereby allowing us to write $Z_{\mu\nu}=\partial_{\mu}V_{\nu}-\partial_{\nu}V_{\mu}$, i.e.
although we originally began with the differential operator given by the formal exterior derivative with respect to the most general connection, we finally get the exterior derivative without additional fields: the consequence of this fact is that in this way the expression of the spin tensor can be inverted, letting us write the torsion tensor as \begin{eqnarray} Q^{\rho\alpha\beta}=\frac{1}{2} \left[V^{\alpha}\left(Z^{\rho\beta}+\frac{\lambda}{2}Z_{\sigma\theta} \varepsilon^{\sigma\theta\rho\beta}\right) -V^{\beta}\left(Z^{\rho\alpha}+\frac{\lambda}{2}Z_{\sigma\theta} \varepsilon^{\sigma\theta\rho\alpha}\right)\right] \end{eqnarray} and equivalently the axial torsion vector as \begin{eqnarray} W_{\nu}=\frac{1}{6}\left(\lambda^{2}-1\right)V^{\alpha}Z^{\rho\beta} \varepsilon_{\alpha\rho\beta\nu} \end{eqnarray} in terms of the vector field, so that they can be substituted back into the field equations, allowing them to account for the back-reaction effects. Finally, $3\lambda W^{\nu}=(\lambda^{2}-1)Z^{\nu\rho}V_{\rho}$ is an important relationship between the axial torsion vector and the vector field. First of all, we easily see that in the general case in which the parameter $\lambda$ is different from zero, we can substitute torsion, through the torsion vector, in terms of the vector field in the field equations, getting third-order derivatives within the field equations themselves; this problem can be solved by decomposing the vector field as $V_{\mu}=U_{\mu}+D_{\mu}B$ with $D_{\mu}U^{\mu}=0$ in terms of its transversal and longitudinal parts: the characteristic equation is given by \begin{eqnarray} n^{2}\left(m^{2}+(\lambda^{2}-1)W^{2}\right)=(2\lambda^{2}-1)\left(n\cdot W\right)^{2} \label{equation} \end{eqnarray} in terms of the torsion vector itself.
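To see explicitly how time-like solutions can be excluded, suppose that a time-like normal $n$ satisfying the characteristic equation existed: in the frame in which $n=(n^{0},\vec{0})$, after dividing by $n^{2}=(n^{0})^{2}$, equation (\ref{equation}) would reduce to \begin{eqnarray} m^{2}+(\lambda^{2}-1)W^{2}=(2\lambda^{2}-1)(W^{0})^{2} \end{eqnarray} whose right-hand side is never positive when $2\lambda^{2}<1$, so that no such solution can exist as long as the left-hand side remains positive, that is as long as $(1-\lambda^{2})W^{2}<m^{2}$; as $\lambda^{2}$ approaches the limiting value $\frac{1}{2}$ this is precisely the condition $W^{2}<2m^{2}$.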
Clearly this characteristic equation shows that, to avoid the occurrence of time-like solutions in the circumstance of weak torsion, we have to require $2\lambda^{2}<1$, which expresses the fine-tuning of the parameter of the model; in this case the condition of causality becomes $W^{2}<2m^{2}$, a condition expressing that torsion has a limit controlled from above by the mass of the vector field. However, even restricting the discussion to the case in which the propagation is acceptable, we see that the condition of complete antisymmetry of the spin constitutes a problem for the counting of degrees of freedom; indeed this condition accounts for at least one additional constraint, which reduces the number of degrees of freedom to $2$ at most: this is not the right number of degrees of freedom possessed by the massive vector field. We notice that in the particular case given by $\lambda^{2}\equiv0$ we do not get the characteristic equation (\ref{equation}), because in this case we have $Z^{\nu\rho}V_{\rho}\equiv0$, which gives $Z_{\mu\nu}Q^{\rho\mu\nu}=0$, and therefore the field equations reduce to those we would have had in the absence of torsion; however, although torsion is not coupled to the vector field, it is nonetheless present, with the condition of complete antisymmetry accounting for the additional constraint that reduces to $2$ the maximum number of degrees of freedom: even in this case the right amount of degrees of freedom is not achieved. Finally, we consider the special case $\lambda^{2}\equiv1$, for which we have causality due to the vanishing of torsion; however in this case too, although torsion would be zero and hence already completely antisymmetric, the very vanishing of torsion is itself a condition that accounts for additional constraints reducing to $2$ the maximum number of degrees of freedom: therefore in this case the right balance of degrees of freedom is not accomplished either.
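The counting of degrees of freedom can be summarized as follows: the vector field has $4$ components, the constraint (\ref{constraint}) removes $1$ of them, leaving the $2s+1=3$ degrees of freedom proper of a massive spin-$1$ field, and in each of the cases above the condition of complete antisymmetry of the spin removes at least $1$ more, leaving \begin{eqnarray} 4-1-1=2<3 \end{eqnarray} degrees of freedom at most, which is why none of these models can describe the massive vector field.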
So, the most general Proca field can be fine-tuned to give a causal model, but all these causal models are overdetermined, and thus inconsistent. \section*{Conclusion} In this paper, we have considered the Proca vector field in which the dynamical term is written in terms of the curl, or equivalently the exterior derivative, calculated with respect to the most general metric connection with completely antisymmetric Cartan torsion tensor; the most general system of field equations has been given, and discussed from the point of view of causal propagation and of the geometrical properties constraining the degrees of freedom of the field. It has been shown that the parameter $\lambda$ determines the features of the model: regarding the propagation, we proved that causality is ensured by the fine-tuning of the parameter given by $\lambda^{2}<\frac{1}{2}$, or else by $\lambda^{2}\equiv1$; regarding the geometrical properties, we have seen that when $\lambda^{2}<\frac{1}{2}$ torsion undergoes the limitation given by $W^{2}<2m^{2}$, while in the special case $\lambda^{2}\equiv1$ the dynamical term is self-dual and torsion vanishes identically, so that in any case the maximum value of torsion can never exceed a value given in terms of the mass of the field; finally, we have seen that this massive vector field is always overrestricted by the constraints arising within the model.
One point that should be stressed is that the special case $\lambda^{2}\equiv1$ is of fundamental interest, because this special instance of a self-dual dynamical term gives rise to a vanishing spin tensor, although a non-trivial spin tensor should in general be present for vector fields; it is also intriguing that the condition $W^{2}<2m^{2}$ gives torsion an upper bound in terms of the mass of the field, a fact that admits no clear interpretation; on the other hand, the fact that this field never possesses the $3$ degrees of freedom that define massive vector fields constitutes an insurmountable barrier. As this discussion has extensively underlined, any attempt to add the completely antisymmetric torsion to the metric connection in the exterior derivatives of the dynamical term for the Proca field implies inconsistencies. So, although the inclusion of torsion could be a possible generalization of the Proca field, no such generalization actually leads to a consistent set of Proca field equations; therefore no such generalization gives rise to any consistent Proca field theory. This shows that it is already in its most general instance that the standard Proca theory is defined.
{"url":"https:\/\/ohwr.org\/project\/white-rabbit\/blame\/master\/documents\/calibration\/appendix.tex","text":"appendix.tex 18.8 KB\n Grzegorz Daniluk committed Sep 10, 2014 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 \\section{Mathematical proofs} \\subsection{Reference fiber latency} \\label{subsec:app_filat} The same pair of WR Devices is used for all three connections in this procedure. That is why, when considering round-trip delay after subtracting the bitslide values, the transmission and reception delays of both devices are summed together and remain constant for fiber $f_1$, $f_2$, $f_1 + f_2$: \\Delta = \\Delta_{TXM} + \\Delta_{RXM} + \\Delta_{TXS} + \\Delta_{RXS} When two fibers $f_1$, $f_2$ are joined together the fiber latency for this connection will be the sum of $\\delta_1$ and $\\delta_2$. 
After eliminating the bitslide value, the remaining part of a round trip delay consists of the following elements: delay_{MM1}' = \\Delta + \\delta_1\\\\ delay_{MM2}' = \\Delta + \\delta_2\\\\ delay_{MM3}' = \\Delta + \\delta_1 + \\delta_2 This equation system has three unknowns and after solving gives the formulas for the round-trip fiber latencies: \\begin{align} \\delta_1 = delay_{MM3}' - delay_{MM2}'\\\\ \\delta_2 = delay_{MM3}' - delay_{MM1}' \\end{align} \\subsection{Fiber asymmetry} \\label{subsec:app_fiasym} In this step of the calibration procedure two WR connections with the same pair of devices are established. For each of them the offset between the WR Slave and WR Master is calculated by the WR PTP software as: offset_{MS} = t_1 - t_2 + delay_{MS} \\noindent where $delay_{MS}$ is an estimated one-way link delay. When $\\alpha$ is initially equal to 0, $delay_{MS}$ is estimated as half of the round-trip delay, which results in a distorted offset between the two devices: offset_{MS}' = t_1 - t_2 + \\frac{1}{2} delay_{MM} \\noindent Then $skew_{PPS}$ measured with an oscilloscope is equal to the uncompensated link asymmetry (the sum of the fiber asymmetry and hardware asymmetry): \\begin{align} \\label{equ:app_fiasym:skew1} skew_{PPS1} &= offset_{MS1} - offset'_{MS1} = delay_{MS1} - \\frac{1}{2} delay_{MM1}\\\\ \\label{equ:app_fiasym:skew2} skew_{PPS2} &= offset_{MS2} - offset'_{MS2} = delay_{MS2} - \\frac{1}{2} delay_{MM2} \\end{align} \\noindent We also know what factors build up the round trip delays and one-way delays for both connections. 
Please notice since the same pair of the devices is used in both cases, fixed hardware delays stay the same: \\begin{align} delay_{MM1} &= \\Delta + \\delta_1\\\\ delay_{MM2} &= \\Delta + \\delta_2\\\\ delay_{MS1} &= \\Delta_{TXM} + \\Delta_{RXS} + \\delta_{MS1}\\\\ delay_{MS2} &= \\Delta_{TXM} + \\Delta_{RXS} + \\delta_{MS2}\\\\ \\delta_1 &= \\delta_{MS1} + \\delta_{SM1}\\\\ \\delta_2 &= \\delta_{MS2} + \\delta_{SM2} \\end{align} \\noindent Using the formulas above, equations \\ref{equ:app_fiasym:skew1} and \\ref{equ:app_fiasym:skew2} can be expanded: \\begin{align} skew_{PPS1} &= \\Delta_{TXM} + \\Delta_{RXS} + \\delta_{MS1} - \\frac{1}{2}\\Delta - \\frac{1}{2} \\delta_1\\\\ skew_{PPS2} &= \\Delta_{TXM} + \\Delta_{RXS} + \\delta_{MS2} - \\frac{1}{2}\\Delta - \\frac{1}{2} \\delta_2 \\end{align} \\noindent Subtracting the two skew measurements eliminates any asymmetry due to fixed hardware delays: \\begin{align} \\label{equ:app_fiasym:skew_pps} skew_{PPS} &= skew_{PPS2} - skew_{PPS1}\\\\ &= \\Delta_{TXM} + \\Delta_{RXS} - \\Delta_{TXM} - \\Delta_{RXS} + \\delta_{MS2} - \\delta_{MS1} - \\frac{1}{2}\\Delta + \\frac{1}{2}\\Delta - \\frac{1}{2} \\delta_2 + \\frac{1}{2} \\delta_1 \\nonumber\\\\ &= \\delta_{MS2} - \\delta_{MS1} - \\frac{1}{2}\\delta_2 + \\frac{1}{2}\\delta_1 \\end{align} \\noindent However, if fiber $f_1$ is just a few meters long, then its asymmetry is negligible. 
That means its one-way Master-to-Slave latency equals half of the total fiber latency: \\delta_{MS1} = \\frac{1}{2} \\delta_1 \\noindent This results in a simplified formula describing $skew_{PPS}$: \\begin{align} skew_{PPS} &= \\delta_{MS2} - \\frac{1}{2}\\delta_2 = \\delta_{MS2} - \\frac{1}{2}\\delta_{MS2} - \\frac{1}{2}\\delta_{SM2} \\nonumber\\\\ \\label{equ:app_fiasym:final_skew} &= \\frac{1}{2}(\\delta_{MS2} - \\delta_{SM2}) \\end{align} \\noindent Having in mind that $\\alpha = \\frac{\\delta_{MS} - \\delta_{SM}}{\\delta_{SM}}$, using the already known value of the $f_2$ round-trip latency $\\delta_2$ and equations \\ref{equ:app_fiasym:skew_pps}, \\ref{equ:app_fiasym:final_skew} we get the expression for $\\alpha$ used in the calibration procedure: \\alpha = \\frac{2(skew_{PPS2} - skew_{PPS1})}{\\frac{1}{2}\\delta_2 - (skew_{PPS2} - skew_{PPS1})} \\subsection{WR Device calibration} \\label{subsec:apx:devices} After the WR PTP daemon on a Slave device is synchronized to Master, the $skew_{PPS}$ observed on an oscilloscope can be treated as an error of a clock correction on the Slave side: \\label{equ:devices:corrs} corr = corr_{ideal} - skew_{PPS} The correction value that should be applied to the Slave clock by the daemon ($corr_{ideal}$) is calculated based on timestamps and a $delay_{MS}$ estimation: corr_{ideal} = t_1 - t_2 + delay_{MS_{ideal}} The one-way delay is the sum of the fiber latency, Master transmission delay and Slave reception delay: \\label{equ:devices:ideal_delay} delay_{MS_{ideal}} = \\frac{1+\\alpha}{2+\\alpha}(delay_{MM} - \\Delta) + \\Delta_{TXM} + \\Delta_{RXS} However, the Slave reception delay used by the daemon is the result of the first 4 steps of the procedure in \\ref{subsec:devices} ($\\frac{1}{2}\\Delta_S$). 
That means, it has to be corrected by an asymmetry coefficient $\\beta$ to get the right value that produces $corr_{ideal}$ above: \\label{equ:devices:delta_rxs} \\Delta_{RXS} = \\frac{1}{2}\\Delta_S + \\beta The round-trip delay value and the sum of hardware delays are fixed, which means the same asymmetry factor has to be subtracted from the Slave transmission delay to preserve those sums: \\label{equ:devices:delta_txs} \\Delta_{TXS} = \\frac{1}{2}\\Delta_S - \\beta \\noindent Taking it back to equation \\ref{equ:devices:ideal_delay} we get: delay_{MS_{ideal}} = \\frac{1+\\alpha}{2+\\alpha}(delay_{MM} - \\Delta) + \\Delta_{TXM} + \\frac{1}{2}\\Delta_S + \\beta However, the Master to Slave delay calculated by the daemon using the values without the asymmetry taken into account is: delay_{MS} = \\frac{1+\\alpha}{2+\\alpha}(delay_{MM} - \\Delta) + \\Delta_{TXM} + \\frac{1}{2}\\Delta_S So the correction value for the reception asymmetry is also the difference between the $delay_{MS}$ estimations: delay_{MS_{ideal}} = delay_{MS} + \\beta \\noindent Putting this back into the equation for $corr_{ideal}$: corr_{ideal} = t_1 - t_2 + delay_{MS} + \\beta \\noindent Please remember though, $t_1 - t_2 + delay_{MS}$ is in fact the correction value ($corr$) derived from the coarse (without asymmetry) Slave delays: corr_{ideal} = corr + \\beta Comparing the equation above with \\ref{equ:devices:corrs}: \\beta = skew_{PPS} That means, the difference between 1-PPS signals observed on the oscilloscope has to be used as the correction factor for the coarse delays of the Slave device.\\\\ The asymmetry of each calibrated Tx\/Rx delay is set to compensate also the asymmetry of the WR Calibrator. 
Equations \\ref{equ:devices:delta_rxs} and \\ref{equ:devices:delta_txs} can be expanded to show the components of asymmetry $\\beta$ of two WR Devices calibrated to the same WR Calibrator (where $\\beta_C$ is the calibrator asymmetry and $\\beta_1$, $\\beta_2$ are the internal asymmetries of each device): \\begin{align} \\Delta_{TX1} = \\frac{1}{2}\\Delta_1 - \\beta_{C1} = \\frac{1}{2}\\Delta_1 - \\beta_1 + \\beta_C \\\\ \\Delta_{RX1} = \\frac{1}{2}\\Delta_1 + \\beta_{C1} = \\frac{1}{2}\\Delta_1 + \\beta_1 - \\beta_C \\\\ \\Delta_{TX2} = \\frac{1}{2}\\Delta_2 - \\beta_{C2} = \\frac{1}{2}\\Delta_2 - \\beta_2 + \\beta_C \\\\ \\Delta_{RX2} = \\frac{1}{2}\\Delta_2 + \\beta_{C2} = \\frac{1}{2}\\Delta_2 + \\beta_2 - \\beta_C \\end{align} After connecting those two WR Devices together, the transmission circuits of each one communicate with the reception circuits of the other, resulting in a one-way link delay (without fiber propagation latency): \\begin{align} \\Delta_{1-2} = \\Delta_{TX1} + \\Delta_{RX2} = \\frac{1}{2}\\Delta_1 - \\beta_1 + \\beta_C + \\frac{1}{2} \\Delta_2 + \\beta_2 - \\beta_C = (\\frac{1}{2}\\Delta_1 - \\beta_1) + (\\frac{1}{2}\\Delta_2 + \\beta_2) \\\\ \\Delta_{2-1} = \\Delta_{TX2} + \\Delta_{RX1} = \\frac{1}{2}\\Delta_2 - \\beta_2 + \\beta_C + \\frac{1}{2} \\Delta_1 + \\beta_1 - \\beta_C = (\\frac{1}{2}\\Delta_2 - \\beta_2) + (\\frac{1}{2}\\Delta_1 + \\beta_1) \\end{align} This proves that devices which have been calibrated using the same WR Calibrator can use the asymmetries found during the calibration process to synchronize one another. \\subsection{Measurement with a loop-back fiber} For both measurements the same loop-back fiber, optical transmitter and optical receiver are used. There is also a requirement in the measurement procedure (section \\ref{subsec:loopback}) saying that both transmitter and receiver should have a constant delay that doesn't vary for each connection. 
That means, for both steps, the loop-back link has some unknown latency $\\delta_{L}$.\\\\ In the first case, the 1-PPS skew measured on the WR Master side can be represented with the formula: \\label{equ:loopback:skew1} skew_{PPS1} = t_{PPSM1} - (t_{PPSS1} + \\delta_{L}) where $t_{PPSM1}$ is a WR Master absolute time of 1-PPS generation, $t_{PPSS1}$ is a WR Slave absolute time of 1-PPS generation. The latency of the loop-back fiber $\\delta_{L}$ is added to $t_{PPSS1}$, because in the first step the Slave 1-PPS signal observed on the WR Master side is delayed by $\\delta_{L}$ picoseconds.\\\\ In the second step, the situation is reversed. The measurement is made on the WR Slave side, which means the 1-PPS generated from the WR Master is observed $\\delta_{L}$ picoseconds later: \\label{equ:loopback:skew2} skew_{PPS2} = (t_{PPSM2} + \\delta_{L}) - t_{PPSS2} The actual $skew_{PPS}$ that we want to measure within this procedure is the difference between the absolute time of the 1-PPS generation on Master and Slave: \\label{equ:lookback:offset} skew_{PPS} = t_{PPSM1} - t_{PPSS1} = t_{PPSM2} - t_{PPSS2} Of course we can make those subtractions equal only because the measurement in both cases is done when WR Master and WR Slave are synchronized. Now, putting together equations \\ref{equ:loopback:skew1}, \\ref{equ:loopback:skew2} and \\ref{equ:lookback:offset} the following system of equations with two unknowns is produced: \\begin{align} skew_{PPS1} = skew_{PPS} - \\delta_{L}\\\\ skew_{PPS2} = skew_{PPS} + \\delta_{L} \\end{align} Solving it creates the final formula to calculate the 1-PPS skew between the WR Master and the WR Slave: skew_{PPS} = \\frac{1}{2} (skew_{PPS1} + skew_{PPS2}) \\subsection{Recovering the calibrator} The new WR Calibrator has unknown transmission and reception delays as any other, uncalibrated WR Device. 
We represent them using the mean (coarse) delay ($\\Delta_{C2}$) and the asymmetry factor ($\\beta_{C2}$): \\begin{align} \\Delta_{TXC2} = \\frac{1}{2}\\Delta_{C2} - \\beta_{C2}\\\\ \\Delta_{RXC2} = \\frac{1}{2}\\Delta_{C2} + \\beta_{C2} \\end{align} We already know from the previous sections that a WR Device (D1) calibrated to the primary calibrator (C1) compensates its own asymmetry but also the asymmetry of the WR Calibrator: \\begin{align} \\Delta_{TXD1} = \\frac{1}{2}\\Delta_{D1} - \\beta_{D1} + \\beta_{C1} \\\\ \\Delta_{RXD1} = \\frac{1}{2}\\Delta_{D1} + \\beta_{D1} - \\beta_{C1} \\end{align} In an ideal case, when each WR Device knows its delays, the Master-to-Slave (one-way) delay without the fiber propagation latency would be: \\label{equ:recc:delaymsideal} \\Delta_{D1-C2_{ideal}} = \\Delta_{TXD1_{ideal}} + \\Delta_{RXC2_{ideal}} = \\frac{1}{2}\\Delta_{D1} - \\beta_{D1} + \\frac{1}{2}\\Delta_{C2} + \\beta_{C2} On the other hand, since the WR Device \\emph{D1} compensates also the asymmetry of the primary calibrator \\emph{C1} and initially $\\beta_{C2}$ is unknown (set to 0), the actual fixed delay for \\emph{D1}-\\emph{C2} connection is: \\label{equ:recc:delayms} \\Delta_{D1-C2} = \\frac{1}{2}\\Delta_{D1} - \\beta_{D1} + \\beta_{C1} + \\frac{1}{2}\\Delta_{C2} Comparing equations \\ref{equ:recc:delayms} and \\ref{equ:recc:delaymsideal} it can be noticed that the factor $\\beta_{C1}$ partially compensates the asymmetry of the new calibrator \\emph{C2}. The uncompensated part: \\beta'_{C2} = \\beta_{C2} - \\beta_{C1} produces an additional skew of the 1-PPS signals in the same way as the uncompensated asymmetry of the WR Device in section \\ref{subsec:apx:devices}: skew_{PPS} = \\beta_{C2} - \\beta_{C1} This remaining asymmetry of the \\emph{D1}-\\emph{C2} connection is compensated in the calibration procedure by using the 1-PPS skew as the correction factor. 
Then, the transmission and reception delays of the new calibrator \\emph{C2} are presented in the equations: \\begin{align} \\Delta_{TXC2} = \\frac{1}{2}\\Delta_{C2} - skew_{PPS} = \\frac{1}{2}\\Delta_{C2} - \\beta_{C2} + \\beta_{C1}\\\\ \\Delta_{RXC2} = \\frac{1}{2}\\Delta_{C2} + skew_{PPS} = \\frac{1}{2}\\Delta_{C2} + \\beta_{C2} - \\beta_{C1} \\end{align} Each of them has the asymmetry factor $\\beta_{C2}$ reduced by $\\beta_{C1}$ so that the actual hardware asymmetry is reduced only partially. The remaining, uncompensated part equals the asymmetry of the primary calibrator \\emph{C1}, so that the new calibrator \\emph{C2} behaves for all practical purposes as the old calibrator \\emph{C1}. Grzegorz Daniluk committed May 23, 2015 333 334 335 336 337 338 339 340 341 342 343 344 345 346 347 348 349 350 351 352 353 354 355 356 357 358 359 360 361 362 363 364 365 366 367 368 369 370 371 372 373 374 375 376 377 378 379 380 381 382 383 384 385 386 387 388 389 390 391 392 393 394 395 396 397 398 399 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417 418 419 420 421 422 423 424 425 426 427 428 429 430 431 432 433 434 435 436 437 \\subsection{1-PPS skew measurement} Reading proposed calibration procedure one can start wondering what is the influence of 1-PPS propagation time - from the inside of FPGA to the physical connector - on the 1-PPS skew measurement. Let's consider one more time two WR Devices (\\emph{D1}, \\emph{D2}) being calibrated to the calibrator \\emph{C}. This time we take into account the 1-PPS propagation delay from the inside of FPGA to the physical connector where we take it for skew measurement (figure \\ref{fig:ppsdel:calibration}). Those delays are marked $\\tau_C$, $\\tau_1$, $\\tau_2$ for the calibrator, device under calibration 1 and device under calibration 2. 
\begin{figure}[ht]
\begin{center}
\includegraphics[width=\textwidth]{calibration/calibration_pps_delay.pdf}
\caption{Calibration with 1-PPS delays taken into account}
\label{fig:ppsdel:calibration}
\end{center}
\end{figure}

Therefore e.g. the 1-PPS signal generated inside the FPGA of the WR Calibrator at time $t_{CPPS}$ is observed on the oscilloscope at time $t_{CPPS} + \tau_C$. Taking this into account, the skew measured in section \ref{subsec:devices} can be expanded as:
\begin{align}
skew'_{1C} &= (t_{1PPS} + \tau_1) - (t_{CPPS} + \tau_C) \nonumber\\
&= (t_{1PPS} - t_{CPPS}) + (\tau_1 - \tau_C) \nonumber\\
&= skew_{1C} + (\tau_1 - \tau_C)
\end{align}
According to the calibration procedure we apply the measured skew (in our case $skew'_{1C}$) as an asymmetry factor ($\beta_{C1}$ in section \ref{subsec:apx:devices}) to calculate the fixed hardware delays $\Delta_{TX}$, $\Delta_{RX}$ for the device under calibration. Thus, our asymmetry factor also contains the difference in 1-PPS propagation times:
\begin{equation}
\beta'_{C1} = \beta_{C1} + (\tau_1 - \tau_C)
\end{equation}
As a consequence, the fixed transmission and reception delays for device $D_1$ calculated from the coarse delay and the asymmetry factor $\beta'_{C1}$ will also contain 1-PPS propagation times:
\begin{align}
\Delta'_{TX1} &= \frac{1}{2}\Delta_1 - \beta'_{C1} = \frac{1}{2}\Delta_1 - \beta_{C1} - (\tau_1 - \tau_C) = \Delta_{TX1} - (\tau_1 - \tau_C)\\
\Delta'_{RX1} &= \frac{1}{2}\Delta_1 + \beta'_{C1} = \frac{1}{2}\Delta_1 + \beta_{C1} + (\tau_1 - \tau_C) = \Delta_{RX1} + (\tau_1 - \tau_C)
\end{align}
By analogy we get the same result for the calibration of device $D_2$:
\begin{align}
\Delta'_{TX2} &= \frac{1}{2}\Delta_2 - \beta'_{C2} = \frac{1}{2}\Delta_2 - \beta_{C2} - (\tau_2 - \tau_C) = \Delta_{TX2} - (\tau_2 - \tau_C)\\
\Delta'_{RX2} &= \frac{1}{2}\Delta_2 + \beta'_{C2} = \frac{1}{2}\Delta_2 + \beta_{C2} + (\tau_2 - \tau_C) = \Delta_{RX2} + (\tau_2 - \tau_C)
\end{align}
We can see that after performing the calibration procedure, both devices have their fixed hardware delays distorted by the difference between the 1-PPS propagation delay of the device and that of the calibrator.

\begin{figure}[ht]
\begin{center}
\includegraphics[width=.5\textwidth]{calibration/wr_devices_pps_delay.pdf}
\caption{Synchronization of WR Devices calibrated to the same calibrator}
\label{fig:ppsdel:sync}
\end{center}
\end{figure}

When we connect two devices together and let them synchronize (figure \ref{fig:ppsdel:sync}), the propagation delay of the calibrator's PPS signal cancels in the one-way delay calculation:
\begin{align}
delay'_{MS} &= \delta_{MS} + \Delta'_{TX1} + \Delta'_{RX2} = \delta_{MS} + (\Delta_{TX1} - \tau_1 + \tau_C) + (\Delta_{RX2} + \tau_2 - \tau_C)\\
delay'_{MS} &= delay_{MS} + (\tau_2 - \tau_1)
\end{align}
Now the one-way delay is distorted only by the difference between the $D_2$ and $D_1$ 1-PPS propagation delays. Keeping in mind the formula for the correction factor applied on the Slave side ($corr_{ideal}$ in section \ref{subsec:apx:devices}), we can see that in our case the distortion of $delay_{MS}$ directly affects $corr_{ideal}$ as well:
\begin{equation}
corr = corr_{ideal} + (\tau_2 - \tau_1)
\end{equation}
The change in the $corr$ value shifts the timescale of the slave device ahead by $(\tau_2 - \tau_1)$, so for this connection every 1-PPS pulse from $D_2$ is generated earlier than it would be in the ideal case:
\begin{equation}
t'_{2PPS} = t_{2PPS} - (\tau_2 - \tau_1)
\end{equation}
The skew $skew'_{21}$ between $D_1$ and $D_2$ measured with the oscilloscope is then:
\begin{align}
skew'_{21} &= (t'_{2PPS} + \tau_2) - (t_{1PPS} + \tau_1) = t_{2PPS} - \tau_2 + \tau_1 + \tau_2 - t_{1PPS} - \tau_1\\
skew'_{21} &= t_{2PPS} - t_{1PPS}
\end{align}
This shows that the difference between the 1-PPS propagation delays causes the Slave device to shift its internal time to compensate for this difference when two devices calibrated earlier to the same calibrator are connected together. Therefore, when these devices are synchronized, their 1-PPS signals will be aligned and the difference in their propagation times is properly compensated.

The 1-PPS socket is our calibration reference plane. One can move the reference plane to any other point, including inside the FPGA. However, this requires knowing the precise 1-PPS delay value so that it can be taken into account in the PPS skew measurements.
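The cancellation in this derivation can be checked numerically with a short sketch (Python; all delay values below are arbitrary illustrative numbers, not measurements of real WR hardware):

```python
# Numeric check of the 1-PPS delay cancellation derived above.
# All delay values are arbitrary illustrative figures (in picoseconds).

tau_C, tau_1, tau_2 = 130.0, 210.0, 95.0   # 1-PPS propagation delays: calibrator, D1, D2
delta_TX1, delta_RX2 = 400.0, 550.0        # "true" fixed hardware delays (illustrative)
delta_MS = 5000.0                          # master-to-slave link delay (illustrative)

# Calibration against the same calibrator distorts the fixed delays:
dTX1 = delta_TX1 - (tau_1 - tau_C)         # Delta'_TX1
dRX2 = delta_RX2 + (tau_2 - tau_C)         # Delta'_RX2

# One-way delay computed by the slave: tau_C cancels, only (tau_2 - tau_1) remains.
delay_true = delta_MS + delta_TX1 + delta_RX2
delay_meas = delta_MS + dTX1 + dRX2
assert delay_meas == delay_true + (tau_2 - tau_1)

# The slave's corr shifts its PPS by that residual, so the oscilloscope
# sees the two PPS edges aligned despite different 1-PPS cable delays.
t1_PPS = 0.0
t2_PPS = t1_PPS                            # ideal internal PPS instants coincide
t2_PPS_shifted = t2_PPS - (tau_2 - tau_1)  # t'_2PPS
skew_scope = (t2_PPS_shifted + tau_2) - (t1_PPS + tau_1)
assert skew_scope == t2_PPS - t1_PPS == 0.0
```

The assertions mirror the two key equations: $delay'_{MS} = delay_{MS} + (\tau_2 - \tau_1)$ and $skew'_{21} = t_{2PPS} - t_{1PPS}$.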
SYNONYM

#### According to
The Catalogue of Life, 3rd January 2011

#### Original name
Mycosphaerella suttoniae Crous & M.J. Wingf., 1997
# How to find XOR of all the elements in given range?

#1

Given two integers A and B, A <= B, find the XOR of all the elements between them.

Expected complexity: O(log N)

#2

@drcoderji

No, it is not always true. If A=2, B=3, then your answer will be 5. But 2^3 = 1.

#3

Let us denote f(n) = 1 \oplus 2 \oplus 3 \oplus \dots \oplus n, where \oplus denotes the XOR operation. Then the XOR of all numbers between A and B can be represented by f(B) \oplus f(A-1), because x \oplus x = 0.

Now we can easily find that:

f(n) = n, if n mod 4 = 0
f(n) = 1, if n mod 4 = 1
f(n) = n+1, if n mod 4 = 2
f(n) = 0, if n mod 4 = 3

Time complexity: O(1)

#4

Similar question on Stack Overflow. Hope it helps. Happy coding.

#5

@jaydeep97 - it's right. The answer will be f(3)^f(1) = 0^1 = 1.

#6

Can anybody help me if the array is given and the range is also given?
If the array is 1 2 7 4 5, a=3, b=4, the answer should be 3 but the above solution gives 7.

#7

@dark_stranger

In this question "range" means all the numbers in that range. Suppose a=3, b=7; then we have to find 3^4^5^6^7.
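The O(1) formula from reply #3 is easy to sketch and brute-force check; Python is used here since the thread contains no code:

```python
from functools import reduce

def prefix_xor(n):
    # f(n) = 1 ^ 2 ^ ... ^ n in O(1), using the period-4 pattern above
    return [n, 1, n + 1, 0][n % 4]

def range_xor(a, b):
    # XOR of every integer in [a, b]: f(b) ^ f(a-1), since x ^ x = 0
    return prefix_xor(b) ^ prefix_xor(a - 1)

# Examples from the thread:
print(range_xor(2, 3))   # -> 1  (2 ^ 3)
print(range_xor(3, 7))   # -> 3  (3 ^ 4 ^ 5 ^ 6 ^ 7)
```

Note that, as reply #7 points out, this works on the range of integers itself; XOR over an arbitrary array slice would instead need a prefix-XOR array over the array's elements.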
ENDFORCE Enterprise 2.2, from ENDFORCE, is built on a standards-based framework that enables enterprises to centrally manage security policy enforcement. It uses Context Aware Authorization technology, which combines assessment and reporting tools with enforcement capabilities. New features in this release include support for DHCP Quarantine settings and a DHCP Enforcement Action Report, and users can now integrate LDAP directories into the system. The software is compatible with over 400 security applications and network equipment vendors. For more information about this product, call ENDFORCE at 866-777-2537 or visit the company's web site at www.endforce.com.
How to buy industrial parts in India

India is a fast-growing industrial nation, but the country's largest companies are struggling to attract investment. A growing number of companies are in the process of converting their manufacturing operations into low-cost operations, which means that some of the country's most important industrial sectors are now unable to compete with foreign rivals. According to a report released by McKinsey & Co, India's industrial segment has grown to more than 20,000 enterprises with an annual turnover of $1.8 trillion and an investment rate of 20%. This number ballooned to more than $5 trillion in 2017. With such a fast pace of growth, the country has become one of the most competitive places in the world for manufacturing. The McKinsey report said that India has a total manufacturing capacity of about 7.5 million square feet of floor space. The largest companies in this segment are Bharti Enterprises Ltd (bcom) and Hindustan Aeronautics Limited (hala), with a combined annual turnover in excess of $600 billion. The report added that Indian companies have the potential to create nearly $1 trillion in new employment over the next two decades. However, the report pointed out that in the past few years the growth of the sector has slowed down and there has been a significant drop in the number of manufacturing companies. According to the report, in 2017 the top ten companies were Bhartis, Hindustans, Hindu and Hindostan respectively. The next five companies are Hindustani and Bharat Chemicals, a company owned by India's largest conglomerate Hindusthan Zee Group. In 2016 the sector witnessed a major slowdown in India, as the country recorded just 3.6 million manufacturing jobs, down from a peak of 6.4 million jobs in 2011. In 2017, however, there was a significant upturn in the numbers of manufacturing jobs.
In terms of the size of the Indian manufacturing sector, the McKinsey data shows that in 2017 there were 4.9 million manufacturing companies, a rise of 4.2 percent. The Indian manufacturing market is one of Asia's most important sectors, accounting for about two-thirds of the global manufacturing market. While it accounts for about one-third of global imports of manufactured goods, it has a significant share of the market for its domestic production. The McKinsey survey of India's top 100 manufacturing companies found that Indian manufacturers are more efficient than their Chinese counterparts, as they can produce products at lower cost, according to the report. The Indian companies that have recently invested in manufacturing have done so for a variety of reasons. For instance, they have sought to diversify their product lines, improve their supply chains and attract more foreign investment, the data showed. The most common reasons for investments in manufacturing in India include the following: 1) the cost of doing business, which is the cost to produce the products and to sell them; 2) the competitive environment in the country, which enables manufacturers to focus on their strengths and differentiate themselves from their competitors; 3) the availability of skilled workers; and 4) the need to expand capacity, according to McKinsey. The study also noted that the rapid growth of industrial manufacturing in recent years has led to a shift in the way manufacturing is conducted in the country. The new trend of manufacturing as a whole has been a significant contributor to the country becoming a manufacturing powerhouse, and has been an important catalyst for the country increasing its trade with China. According to the McKinsey study, India has one of the most productive manufacturing industries in the developed world.
However, this report highlighted that the country faces many challenges in maintaining its high productivity, with some regions experiencing a drop in output. This has led some sectors to consider restructuring their business models, according to the McKinsey report.
Qualcomm Introduces Next-Generation GPU Architecture and Image Signal Processor for the Ultimate Graphics and Mobile Camera Experience –New Custom Adreno 5xx GPU and Qualcomm Spectra ISP Bring the Most Advanced Visual Processing Technology to Snapdragon Processors– Aug 12, 2015LOS ANGELES Qualcomm Incorporated (NASDAQ: QCOM) today announced that its subsidiary, Qualcomm Technologies, Inc. (QTI), has introduced its next-generation visual processing technology with new versions of its graphics processing unit (GPU) and image signal processing (ISP) unit, to deliver significant advancements in performance, power efficiency and user experience to Qualcomm® Snapdragon™ processors. The new Qualcomm® Adreno™ 5xx GPU architecture delivers increased speed and efficiency over the previous generation and supports stunning high-definition mobile graphics while introducing general-purpose compute co-processing for exceptionally low power consumption. The first two GPUs available on the new architecture, the Adreno 530 and Adreno 510, will be available integrated within the forthcoming Snapdragon 820 and Snapdragon 620/618 processors. In addition, Snapdragon 820 will also debut the new 14-bit Qualcomm Spectra™ image signal processing (ISP) unit, designed to support superior DSLR-quality photography and enhanced computer vision. Devices based on Snapdragon 820 are expected to be available in 1H 2016. Qualcomm Snapdragon, Qualcomm Adreno and Qualcomm Spectra are products of QTI. "We're significantly enhancing the visual processing capabilities of Snapdragon to support next-generation user experiences related to computational photography, computer vision, virtual reality and photo-realistic graphics on mobile devices, all while maximizing battery life," said Tim Leland, vice president, product management, Qualcomm Technologies, Inc. 
"Qualcomm Spectra ISP, together with our Adreno 5xx-class GPU, brings an entirely new level of imaging to smartphones, and is designed to allow Snapdragon-powered devices to capture ultra-clear, vivid photos and videos regardless of motion and lighting conditions and display them with the color accuracy that nature intended. In addition, as emerging growth segments such as automotive demand more immersive visual experiences, Snapdragon 820 will enable the next generation of infotainment, computer vision and advanced processing for instrument clusters." Designed for scalability, the Adreno 5xx architecture is the foundation of Qualcomm Technologies' next-generation custom GPUs and is the successor to the Adreno 4xx family. Newly integrated within the Snapdragon 820, the Adreno 530 is the highest-performance GPU ever designed by Qualcomm Technologies, providing superior experiences with: Up to 40 percent lower power consumption and 40 percent faster performance for both graphics and GPGPU compute when compared to the Adreno 430; Leading-edge capabilities in graphics and compute APIs including OpenGL ES 3.1+AEP (Android Extension Pack), Renderscript, as well as the new OpenCL 2.0 and Vulkan standards. Vulkan minimizes driver overhead and enables multi-threaded performance on mobile and embedded platforms; Support for 64-bit virtual addressing, allowing for shared virtual memory (SVM) and efficient co-processing with 64 bit CPUs; Improved fine-grain power management, new rendering, compositing and compression techniques to enable higher performance at lower power consumption and reduced DRAM bandwidth; Up to 4K HEVC video support at 60fps over HDMI 2.0 to Rec. 2020 ultra-high definition (UHD) displays and TVs; Improved Qualcomm® EcoPix™ and Qualcomm® TruPalette™ support for longer battery life and superior pixel quality; and Software compatibility between Adreno 530 and Adreno 510. 
Introduced with the Snapdragon 820, Qualcomm Spectra ISP is Qualcomm Technologies' most advanced dual-imaging signal processing unit to-date, integrated and designed to provide best-in-class camera image quality and end-user benefits, including: Superior image quality, with more natural skin tones via advanced, 14-bit dual ISPs supporting up to 3 simultaneous cameras (e.g. one facing the user, and two rear facing), and up to 25 megapixels at 30 frames per-second with zero shutter lag; Improved photos with Qualcomm Spectra ISP's flexible hybrid autofocus framework and multi-sensor fusion algorithms supporting next generation computational photography; Improved power efficiency when compared to previous generations, better noise immunity and higher throughput via advanced compression techniques and use of the latest MIPI serial C-PHY interface; and Next generation Computer Vision and other use cases via direct-to-DSP raw bayer data streaming and pre-processing capabilities. About Qualcomm Incorporated Qualcomm Incorporated (NASDAQ: QCOM) is a world leader in 3G, 4G and next-generation wireless technologies. Qualcomm Incorporated includes Qualcomm's licensing business, QTL, and the vast majority of its patent portfolio. Qualcomm Technologies, Inc., a wholly-owned subsidiary of Qualcomm Incorporated, operates, along with its subsidiaries, substantially all of Qualcomm's engineering, research and development functions, and substantially all of its products and services businesses, including its semiconductor business, QCT. For more than 25 years, Qualcomm ideas and inventions have driven the evolution of digital communications, linking people everywhere more closely to information, entertainment and each other. For more information, visit Qualcomm's website, OnQ blog, Twitter and Facebook pages. Qualcomm, Snapdragon and Adreno are trademarks of Qualcomm Incorporated, registered in the United States and other countries. 
Qualcomm Spectra, EcoPix, and TruPalette are trademarks of Qualcomm Incorporated. OpenCL™ 2.0 and OpenGL® ES 3.1 are based on published Khronos Specifications. Qualcomm Snapdragon 820 is based on published Khronos Specifications and is expected to pass the Khronos Conformance Process. Vulkan™ is based on internal draft Khronos Specifications, which may change before final release. Conformance criteria for Vulkan Specifications have not yet been established. Qualcomm Snapdragon 820 is intended to support these standards. Current conformance status can be found at www.khronos.org/conformance.
# esg2

R package for risk neutral economic scenarios generation with a G2++ model

## Getting started

### Prerequisites

esg2 only relies on R default libraries. The package devtools is required in order to install it directly from its GitHub repository.

```r
install.packages("devtools")
```

### Installation

The package is available from its GitHub repository.

```r
devtools::install_github("arnaudbu/esg2")
```

## Model

The G2++ model implemented in this package is described by the following equation (shown as an image in the repository).

The Black & Scholes model implemented for related assets follows the classical model, with a time-dependent interest rate.

## Documentation

The script file test_esg2.R contains all the useful commands in order to run an end to end example with this package.

All the functions of the package are documented through R's help() function.

### Zero Coupon Curve

The zero coupon curve is the basic brick for rate projection.

#### Import the curve

A zero coupon rate curve is imported through the function curvezc, which creates a Zero Coupon Curve object from the rates given in standard unit.

Three discounting methods are available:

* continuous;
* actuarial;
* libor.

Only rates are given in the function. Maturities are assumed to go continuously from 1 to the length of the curve.

For example, importing a 10 periods curve is done with the command:

```r
?curvezc

rates <- c(-0.00316, -0.00269, -0.00203, -0.00122, -0.00022,
           0.00092, 0.00215, 0.00342, 0.00465, 0.00581)

curve <- curvezc(method = "continuous",
                 rates = rates)
```

#### Operations

It is possible to display, print or plot a curve object.

```r
curve
print(curve)
plot(curve)
```

The plot gives the zero coupon rate for each maturity.

### Swaptions

The rate model uses swaptions for its calibration. Those instruments are stored as a table in an object of class Swaptions.

#### Import

The swaptions initiator needs a curve object for the pricing of the swaptions, along with a list of maturities, tenors, and volatilities.

Two pricing methods are available:

* normal: Bachelier pricing;
* lognormal: Black & Scholes pricing.

The pricing model also uses the frequency as the number of payments for each period.

```r
?swaptions

maturities = c(2, 3, 4, 5)
tenors = c(1, 1, 1, 1)
vols = c(0.016735, 0.009841, 0.007156, 0.005425)

swaptions <- swaptions("normal", curve, maturities, tenors, vols, 2)
```

#### Operations

It is possible to display, print or plot the volatility surface of a swaptions object. The print function also displays the computed prices.

```r
swaptions
print(swaptions)
plot(swaptions)
```

### Handling correlations

Correlations need to be handled between at least the two components of the rate model, and for associated assets using the projected rates in a Black & Scholes context.

We thus need to generate W_1 and W_2 processes with a rho correlation, and take into account the correlations between the other Wiener processes.

A function, genW, is implemented in order to generate as many correlated processes as needed. It takes as arguments the correlation matrix between the different processes, the horizon for projection and the number of desired simulations.

In order to get the calibrated parameters of the rate model, this function should be run after the calibration process.

```r
?genW

correl <- cbind(c(1, g2model@rho, 0.25), c(g2model@rho, 1, 0), c(0.25, 0, 1))
W <- genW(correl, g2model@nsimul, g2model@horizon)
```

### Rate Model

#### Initialization

The rate model is initialized via a zero coupon curve, a projection horizon and a number of simulations.

It is possible to directly pass the parameters of the model as arguments in order to skip the calibration step.

```r
?g2

g2model <- g2(curve, horizon = 50, nsimul = 1000)
```

#### Calibration

Calibration is performed on the model over a panel of swaptions defined in a Swaptions object.

The optimized function for the process is a trade-off between the differences between theoretical and observed prices and the probability of getting negative rates. The optimal parameters are found with a Nelder-Mead algorithm followed by a Newton-Raphson method.

```r
?calibrate

g2model <- calibrate(g2model, swaptions, maxIter = 100)
```

#### Projection and Visualization

Once calibrated, the model is projected via the function project.

It is possible to pass Wx and Wy as optional arguments, which correspond to W_1 and W_2 in the model equation and are generated to be correlated between themselves and with other assets. If not given, those two processes are computed with the rho parameter of the model, without taking into account any external process.

```r
?project

g2model <- project(g2model, Wx = W[,,1], Wy = W[,,2])
```

It is possible to display, print, and plot the model. In the latter case, 100 scenarios are displayed, along with the mean trajectory and the standard deviation around the mean.

```r
g2model
print(g2model)
plot(g2model)
```

Once the projections are realized, it is possible to get the deflator or the projected zero coupon prices table at a time t.

```r
# Get deflator table
?deflator
def <- deflator(g2model)

# Get zero coupon table at time 10
?zctable
zc10 <- zctable(g2model, 10)
```

#### Validation

The deflator test is also implemented in the package. This test aims to verify that the means of the deflator values match the zero coupon curve for every maturity.

The function test_deflator plots the zero coupon curve against the mean of the deflator for each maturity, along with a table displaying the absolute differences in points.

```r
?test_deflator
test_deflator(g2model)
```

### Black & Scholes Model

The library allows one to use a Black & Scholes model for asset projection with the projected rate model.

#### Initialization

Initializing a Black & Scholes modelized asset requires:

* a calibrated and projected rate model;
* an initial value for the projection;
* a volatility for the asset;
* a dividend rate, as the proportion of the value of the asset that is paid each year as dividends or rent;
* the correlation with the rate model;
* the generated distribution for W_s.

The last value is optional. If not provided, a distribution is computed with the given correlation.

```r
?bs

action <- bs(g2model,
             s0 = 100,
             vol = 0.2,
             div = 0.02,
             rho = -0.5,
             W = W[,,3])
```

#### Projection and Visualization

All the traditional display functions are available, and the trajectories can be extracted with the function traj.

```r
?traj
trajAction <- traj(action)

action
print(action)
plot(action)
```

#### Validation

The martingality test is implemented. The mean of the discounted value of the asset is plotted with the 95% confidence interval of its values.

```r
?test_martingal
test_martingal(action)
```
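For reference, the Model section above shows the G2++ dynamics only as an image. The standard two-factor Gaussian short-rate form (Brigo & Mercurio's formulation) is sketched below; mapping these symbols onto the package's internal parameter names is an assumption, apart from the correlation, which the README exposes as `g2model@rho`:

```latex
\begin{aligned}
r(t)  &= x(t) + y(t) + \varphi(t) \\
dx(t) &= -a\,x(t)\,dt + \sigma\,dW_1(t), \qquad x(0) = 0 \\
dy(t) &= -b\,y(t)\,dt + \eta\,dW_2(t),  \qquad y(0) = 0 \\
dW_1(t)\,dW_2(t) &= \rho\,dt
\end{aligned}
```

Here $\varphi(t)$ is the deterministic shift fitted to the initial zero coupon curve, which is why the curvezc object is required to initialize the model.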
You are here: Parliament home page > Parliamentary business > Publications and Records > Hansard > Commons Debates > Daily Hansard - Debate 24 Feb 2009 : Column 139 The Secretary of State for Foreign and Commonwealth Affairs (David Miliband): I launched the UK-China framework last month because the Government believe that positive engagement with China is essential to achieving our wider international objectives and to addressing the major global challenges, including the current economic crisis. We welcome the positive response from the Chinese Government to this strategy, we will monitor progress against its detailed objectives, and we will welcome the views of Members and others. Mr. Murphy: Given my right hon. Friend's rather special relationship with the US Secretary of State, Hillary Clinton, can he comment on the US's new approach to China and inform the House whether that new approach will impact in any way on the policies of the United Kingdom? David Miliband: My hon. Friend will be pleased to hear that I spoke to my new friend in advance of her trip to China. I think that the messages she gave to the Chinese about the determination of the whole of the American Government to engage with China in a new way is wholly welcome. There was, I think, in Secretary Clinton's remarks in Beijing an important recognition of the changed balance of power in the world and of China's centrality to addressing many of the big global problems we face—not least economic and environmental problems and nuclear proliferation. Willie Rennie (Dunfermline and West Fife) (LD): If we are ever to secure peace in Afghanistan, we are going to have to engage all the countries in the wider region, including China, especially if we are going to seek a final solution in that area. What discussions have the Government had with China about engaging with Iran to provide that solution? David Miliband: I agree with the hon. Gentleman. 
We obviously talked about Afghanistan during the visit of Premier Wen and Foreign Minister Yang at the beginning of this month. I was in Afghanistan last week. I believe that the important regional approach taken by the new envoy, Ambassador Holbrooke, is wholly welcome. Mr. Denis MacShane (Rotherham) (Lab): As literally millions and millions of Chinese people lose their jobs with the Chinese economy going into even freer fall than the European and American economies, there are political consequences. In my right hon. Friend's talks with the Chinese, will he gently suggest that the next economic paradigm has to be based on workers being able to earn enough to buy what they produce and to have social and other networks of support? Will he further bring into play the International Labour Organisation to urge the Chinese to develop a much fairer social and wage system in their country? David Miliband: I think that my right hon. Friend will recognise the irony of China riding to the rescue of international capitalism at this time, but his points about the balance of economic and social stability in China are very well made. Our human rights dialogue certainly provides one opportunity to raise a whole range of social issues with the Chinese Government. Mr. David Heathcoat-Amory (Wells) (Con): Is the Secretary of State aware that when North Koreans try to leave that dictatorship, they often cross into China, where they are rounded up and sent back to North Korea in defiance of all China's obligations as a signatory to the UN refugee convention? The fate of these returnees to North Korea is extremely gruesome, so will the Secretary of State ensure that his new love-in with China—whether via Mrs. Clinton or anyone else—does not prevent him and the Government from raising this issue with the Chinese Government as a matter of urgency, or does he think that China is too important and large to merit such criticism? David Miliband: The right hon. 
Gentleman raises an important point, which is one that we have raised with the Chinese. I think I should write to him with a report on how those discussions have gone and what the latest stage is. The importance of our engagement with China is precisely that, because we engage with the Chinese, we are able to raise all issues, including human rights issues, openly and frankly. That spirit of candour has been developed over the past few years in our relationship with China. Respect for China does not mean the relegation of our concerns to a subsidiary role. In fact, I would argue that the respect that is afforded to China is the basis for proper engagement on issues that concern us. 5. Joan Ryan (Enfield, North) (Lab): What recent assessment he has made of the political situation in Sri Lanka; and if he will make a statement. [257917] The Secretary of State for Foreign and Commonwealth Affairs (David Miliband): The recent military advances by the Sri Lankan Government and the subsequent humanitarian crisis are of continuing serious concern. We have repeatedly called for an immediate humanitarian ceasefire. We have made it clear to the Government of Sri Lanka that a political solution that addresses the legitimate concerns of all communities in Sri Lanka is the only way to bring a sustainable end to the conflict. Our commitment to that goal and our desire to work with the Sri Lankan Government are clear in the appointment of an experienced former Secretary of State, my right hon. Friend the Member for Kilmarnock and Loudoun (Des Browne), as the Prime Minister's special envoy. I remain in active discussion with the Sri Lankan Government to encourage them to work with him. Joan Ryan: I thank my right hon. Friend for that answer and also welcome the appointment of my right hon. 
Friend the Member for Kilmarnock and Loudoun as special envoy to Sri Lanka, as well as the statements made in the House by members of the Government regarding ceasefire, but warm words and good intentions will not protect the civilians of the Vanni. My right hon. Friend the Foreign Secretary will be aware that, in the last 24 hours, a ceasefire offer has been made but was rejected out of hand by the Government of Sri Lanka. The situation is grave, with 2,000 civilian deaths since January. Is not now the time for the Government to take the issue up at the highest levels—namely, at a session of the United Nations Human Rights Council or in the Security Council itself, or by seeking the suspension from the Commonwealth of the Government of Sri Lanka? David Miliband: The situation is indeed extremely serious. For some time, the Sri Lankan authorities were offering a ceasefire and it was rejected by the Liberation Tigers of Tamil Eelam. Now there is news of an LTTE offer, which has been rejected by the Government. My right hon. Friend will have seen the strong conclusions reached by the European General Affairs and External Relations Council yesterday on the Sri Lankan issue, which are wholly appropriate and welcome, and she can be assured that we continue to press at the highest levels for humanitarian assistance and for a ceasefire. Mr. Edward Davey (Kingston and Surbiton) (LD): Further to the point made by the right hon. Member for Enfield, North (Joan Ryan), will the Foreign Secretary explain to the House why the Government have not sought a resolution of the UN Security Council for a ceasefire in Sri Lanka? Indeed, why, when Mexico recently asked for the council to be briefed on Sri Lanka, did the British representative to the UN fail to support that call? Does the Foreign Secretary realise that people get pretty angry when UK Ministers here in London talk about and call for a ceasefire, but British officials in New York do not follow through? 
David Miliband: I am sorry to hear the hon. Gentleman talk in that way, because he knows that a failed resolution—one that faces a veto—is worse than no resolution at all, and it would strengthen precisely the forces that he and I oppose. I can assure him that our diplomats, whether in New York or in the region, are all working off the same script, which is one that has been set by the Prime Minister and me. Mike Gapes (Ilford, South) (Lab/Co-op): Can the Foreign Secretary confirm that the problem in the Security Council is not the UK Government, but the Russian Government, who refuse to support the Security Council resolution? Therefore, unlike in Gaza, we are unable to get the Security Council resolution that is so needed. David Miliband: There certainly is a blockage at the UN. That is why the UN has not been able to opine on this issue. Andrew Stunell (Hazel Grove) (LD): The Secretary of State will know that there are credible reports of atrocities on both sides. Will he assure the House that the Government will channel their energies into getting this ceasefire before more and more civilians are killed and brought into the conflict? David Miliband: Yes. The tragedy in Sri Lanka has claimed 70,000 lives in the course of the conflict. That conflict is against the interests of all Sri Lanka's communities, which could find a way to live together if they had representation that was able to eschew violence and look for a political solution. I assure the hon. Gentleman that we are using all our best efforts to achieve that. It is deeply to be regretted that the appointment of an envoy has not yet been met with a welcome in Colombo, but that is what we are working for. Fiona Mactaggart (Slough) (Lab): But will that envoy be able to help us ensure that Ban Ki-Moon's commitment to supporting a ceasefire that enables civilians to leave the hot areas in Sri Lanka can be realised? Families in Britain are anxious about relatives of whom they have heard nothing for months. 
We need to help them, and their relatives, to be safe. David Miliband: My hon. Friend speaks about this issue with knowledge and passion. She is absolutely right about the need for us to do all that we can to protect those civilians, including working with the United Nations. There are very distressing reports of both sides interfering with civilians' ability to find safety. It is at the heart of our concerns not just to try to provide money, but to try to provide space to which civilians can escape and in which they can be given proper safety. The situation is deeply distressing, not just to people in the region but to many, many people in the United Kingdom. Mr. Elfyn Llwyd (Meirionnydd Nant Conwy) (PC): Some of the signals coming from the Sri Lankan Government imply that they are quite prepared to go ahead with acts of genocide. Time is of the essence. I understand that the right hon. Gentleman is doing what he can, but many of us are deeply worried about what is going on in Sri Lanka and, as time goes by, it is getting worse. The next fortnight may be crucial. May I urge the right hon. Gentleman to think again about every possible avenue that might enable a horrible humanitarian catastrophe to be averted? David Miliband: The hon. Gentleman has raised an important point. Sri Lanka has a democratic Government, and—as I have said in another context—high standards are rightly expected of democratic Governments, and should be adhered to by every single Government. What the hon. Gentleman said about the Sri Lankan Government was absolutely right. No one denies that there is a terrorist problem in Sri Lanka. That terrorist problem poses a mortal threat to Sri Lankans in all communities, but the resolution of that terrorist problem cannot be achieved at the expense of the rights of minority communities in Sri Lanka, and that is what we are trying to work on. Mr. 
Andrew Love (Edmonton) (Lab/Co-op): As chairman of the all-party parliamentary group on Sri Lanka, I welcome the appointment of my right hon. Friend the Member for Kilmarnock and Loudoun (Des Browne), and wish him well in his discussions with the Government there. Human Rights Watch reported recently that 2,000 people had died and 5,000 had been injured—innocent civilians caught in the conflict. There are now reports that the so-called safe areas are no longer safe because conflict is proceeding there. I have noted the comments of my right hon. Friend the Foreign Secretary. Will he redouble his efforts to secure a humanitarian corridor that will allow innocent civilians to escape entirely from the area of conflict in the Vanni? David Miliband: I recognise the work that my hon. Friend has done as chairman of the all-party group. We will certainly explore all options for the provision of civilian safety, including a ceasefire, a humanitarian corridor and humanitarian safe zones. The situation does indeed get worse day by day. The stories that emerge are of extreme cruelty—cruelty, I have to say, on both sides—and it is very important for the international community to work on the issue. The unanimity of the European Union's response yesterday is an important indication that the issue is rightly becoming higher on the international agenda. Mr. Keith Simpson (Mid-Norfolk) (Con): Obviously we all wish the right hon. Member for Kilmarnock and Loudoun (Des Browne) great success. However, is it not the case that after the Prime Minister had announced the right hon. Gentleman's appointment, the Sri Lankan Government made it clear that they had not been consulted and that they found the whole thing extremely objectionable, and is it not the case that, on Wednesday 18 February, the Sri Lankan Cabinet met and refused to withdraw its opposition to the right hon. Gentleman's appointment? If that is so, it must mean either that the right hon. 
Gentleman personally is unacceptable—which I would find strange—or that a special envoy from the United Kingdom is unacceptable and will therefore be in permanent limbo. David Miliband: I am sorry that the hon. Gentleman has taken the position that he has, because following a letter from our Prime Minister to the President of Sri Lanka, I spoke to the President of Sri Lanka on 30 January—a long time before the date the hon. Gentleman mentioned—and President Rajapakse said he would engage with a UK envoy. Two meetings between our high commissioner and the President confirmed that position, so it is important that we do not leave on the record the suggestion that there was not consultation. There was, indeed, consultation on this issue, and that is why we are working hard to explain to the Sri Lankan Government not only the virtues of my right hon. Friend the Member for Kilmarnock and Loudoun, but the potential benefit of a UK envoy, joining envoys from Japan, Norway and other countries, playing a positive role in the conflict. Dr. Phyllis Starkey (Milton Keynes, South-West) (Lab): Among the civilian deaths in the north of Sri Lanka as a result of the Sri Lankan Government's military action are 11 relatives of a member of the Milton Keynes Tamil Forum. What she wants to know is what justice there will be for her relatives killed in that action. Can the Foreign Secretary offer any hope of justice? David Miliband: The constituent my hon. Friend mentions has lost 11 relatives, and it is impossible from this Dispatch Box to say anything that will give someone in such a situation, at a time of such huge distress, any sense of real comfort. She is among a large number of people in this country who have lost large numbers of relatives in this terrible conflict. I can assure her and every person who has Sri Lankan heritage or relatives in Sri Lanka that their Government in the UK are working very hard, internationally and bilaterally, on the issue. 
There are responsibilities on the LTTE, but there are also responsibilities on the Sri Lankan Government, and both need to fulfil them. 6. James Duddridge (Rochford and Southend, East) (Con): What recent reports he has received on the political situation in the Democratic Republic of Congo. [257919] The Parliamentary Under-Secretary of State for Foreign and Commonwealth Affairs (Gillian Merron): The DRC and its neighbours are co-operating constructively on regional security. The Government have begun work on areas such as security sector reform and development, and the national Parliament is increasingly effective in holding the Government to account. However, much work remains to be done to achieve the lasting progress that we all want to see. James Duddridge: In addition to looking at increasing UN troop numbers, which the Minister mentioned earlier, will she also look at the effectiveness of those troops, particularly given UN commander Bipin Rawat's comments that he can only get munitions delivered 9 to 5, Monday to Friday, not at the weekends, and that there is no capacity whatever for night flights? Gillian Merron: Of course, this is a matter for the UN, and we will discuss it there. The MONUC team is available to the DRC and Rwandan armies to help them with their military planning, and I would encourage them to make full use of that, because what we want to see is the MONUC troops carrying out the highest priority, which is civilian protection. Mary Creagh (Wakefield) (Lab): Developing the justice sector is key to creating political stability in the DRC. We were all delighted to see the arrest of Laurent Nkunda, the warlord who ran CNDP criminals in north Kivu, over Christmas, but what conversations has my hon. Friend had with the Governments of Rwanda and the DRC to ensure that Laurent Nkunda returns to the DRC to face justice for the unspeakable acts committed by him and his troops? Gillian Merron: My hon. 
Friend is right that justice not only has to be done, but has to be seen to be done, and matters such as those are raised regularly both directly with the Governments and through the UN and EU. Mr. Philip Hollobone (Kettering) (Con): Is it the view of Her Majesty's Government that 3,000 extra troops will be enough? Gillian Merron: That is the estimate that has been made, and, indeed, the UK has supported the United Nations security resolution that brought about that extra reinforcement. What matters is that those reinforcements arrive as soon as possible, that they get on with the job that they are there to do, and that they assist the Rwandan and DRC Governments to protect civilians and to bring about a lasting peace. However, as I said earlier, that cannot be done only by military means. It has to be done through a political process. There has been progress, and we will continue to support that. 7. Mrs. Linda Riordan (Halifax) (Lab/Co-op): What recent discussions he has had with his international counterparts on peacekeeping initiatives for the Gaza strip. [257920] The Minister of State, Foreign and Commonwealth Office (Bill Rammell): The Foreign Secretary and I this morning met special envoy Mitchell to discuss Gaza and the middle east. We reiterated the UK's determination to support the ceasefire, both by helping to stop arms smuggling into Gaza and by pressing the Israeli Government to open the crossings. The Foreign Secretary will be leading the UK delegation to the Gaza reconstruction conference in Egypt on Monday.
\section*{Abstract} Cytotoxic T lymphocytes (T cells) and natural killer cells form a tight contact, the immunological synapse (IS), with target cells, where they release their lytic granules containing perforin/granzyme and cytokine-containing vesicles. During this process the cell repolarizes and moves the microtubule organizing center (MTOC) towards the IS. In the first part of our work we developed a computational model for the molecular-motor-driven motion of the MT cytoskeleton confined between plasma membrane and nucleus during T cell polarization and analyzed different mechanisms (cortical sliding and capture-shrinkage) that have been proposed on the basis of recent experiments. Here we use this model to analyze the dynamics of the MTOC during the repositioning process in situations in which a) the IS is in an arbitrary position with respect to the initial position of the MTOC and b) the T cell has two IS at two arbitrary positions. We observe several scenarios that have also been reported experimentally: the MTOC alternates stochastically (but with a well-defined average transition time) between the two IS; it wiggles in between the two IS without transiting to one of the two; or it is at some point pulled to one of the two IS and stays there. Our model allows us to predict which scenario emerges depending on the mechanisms in action and the number of dyneins present. \section*{Introduction} T cells play a key role in the adaptive branch of our immune system by finding and destroying virus-infected cells, tumor cells, parasites, and foreign invaders. Cytotoxic killing of a target cell is achieved in three successive steps. First, the T cell binds to the surface of the target cell and creates a tight contact zone called the immunological synapse (IS) \cite{rudolph_how_2006,garcia_reconciling_2012, zinkernagel_restriction_1974,attaf_t_2015,wucherpfennig_t_2004, babbitt_binding_1985,monks_three-dimensional_1998,dustin_novel_1998,dustin_understanding_2010}.
Second, the T cell relocates the microtubule organizing center (MTOC) towards the IS by a massive movement of the entire MT cytoskeleton due to forces acting on MTs \cite{geiger_spatial_1982,kupfer_polarization_1982,yi_centrosome_2013, stinchcombe_centrosome_2006,maccari_cytoskeleton_2016,kuhn_dynamic_2002, hui_dynamic_2017}. This process involves the repositioning of mitochondria, the Golgi apparatus, and the endoplasmic reticulum, since these organelles are bound to the cytoskeleton and relocate with it \cite{maccari_cytoskeleton_2016,kupfer_reorientation_1984,kupfer_specific_1986,gurel_connecting_2014,lee_dynamic_1988,waterman-storer_endoplasmic_1998,palmer_role_2005}. In the third step, the T cell releases the cytotoxic material from the lytic granules towards the target cell, leading to its death by necrosis or apoptosis \cite{mullbacher_granzymes_1999,lowin_perforin_1995,voskoboinik_perforin-mediated_2006,grossman_orphan_2003,krzewski_human_2012, groscurth_killing_1998}. The secretion of lytic granules can take place without the MTOC repolarization \cite{golstein_early_2018}, or before it \cite{bertrand_initial_2013}. However, this does not make the repositioning redundant, since MTOC-accompanied granule secretion may be crucial for the killing of resistant cells. \newline The IS is divided into several supramolecular activation clusters (SMACs), including the ring-shaped peripheral SMAC (pSMAC) \cite{monks_three-dimensional_1998,dustin_understanding_2010,andre_use_1990,lin_c-smac_2005,choudhuri_signaling_2010}. Dynein, a minus-end-directed (toward the MTOC) molecular motor protein, is indispensable for the repositioning, as was shown by knockout experiments \cite{martin-cofreces_mtoc_2008,nguyen-ngoc_coupling_2007,saito_mcp5_2006,yamashita_fission_2006, ananthanarayanan_dynein_2013}.
The dynein colocalizes with the adaptor protein ADAP that forms a ring at the IS periphery after the activation of the T cell \cite{combs_recruitment_2006,hashimoto-tane_dynein-driven_2011}. Dynein plays a key role in the two mechanisms proposed to drive the repositioning: cortical sliding and capture-shrinkage. In the cortical sliding mechanism the dyneins step to the minus-end of MTs (towards the MTOC) while being anchored on the cell membrane and therefore pull the MTOC towards the IS \cite{combs_recruitment_2006,stinchcombe_communication_2014,kuhn_dynamic_2002}. It was indicated that the ring-shaped pSMAC is the place where attached dyneins are anchored \cite{combs_recruitment_2006,kuhn_dynamic_2002}. \newline A detailed analysis of the process was performed by Yi et al. \cite{yi_centrosome_2013}. They used an optical trap to place the target cell so that the IS (contact zone) is initially diametrically opposed to the MTOC. This well-defined initial configuration allowed quantitative dynamic imaging, including the observation of the MT cytoskeleton morphology. They provided strong evidence that the repositioning is driven by a capture-shrinkage mechanism \cite{laan_cortical_2012} involving the depolymerization of the caught MT in a confined area in the center of the IS. It was shown \cite{yi_centrosome_2013} that MTs bend along the cell membrane to reach the IS. Consequently, the MTs caught by their plus-end in the center of the IS straighten, form a narrow stalk and depolymerize at the capture-point. The MTOC is pulled to the center of the IS, which invaginates the cell, indicating the location of the main pulling force. The capture-shrinkage mechanism was identified as the main driving force of the repositioning, since inhibiting the MT depolymerization substantially slowed down the repositioning. Yi et al. \cite{yi_centrosome_2013} reported that the repositioning can be divided into two phases that differ in the MTOC speed and the direction of its motion.
In the first, so-called polarization phase, the MTOC travels quickly in a circular motion around the nucleus. In the second, docking phase, the MTOC moves slowly and directly towards the IS. \newline Although the MTOC repositioning has been thoroughly documented, the details of the force generation and cytoskeleton dynamics remain elusive. Is the transition between polarization and docking caused by the emergence of a resistive force, as proposed by \cite{yi_centrosome_2013}? Is the observed dominance of the capture-shrinkage mechanism persistent in other naturally occurring situations? What, then, is the role of cortical sliding? Why are the MTs attached to the cortical sliding dyneins just on the periphery of the IS? Finally, how does the presence of a second IS influence the MT cytoskeleton dynamics? \newline The T cell has the ability to attack two target cells at once. In this case, two immunological synapses are established \cite{kuhn_dynamic_2002}. Subsequently, one observes repeated transitions in which the MTOC goes to one IS, stays there for some time and subsequently travels to the other IS. It appears plausible that the transitions are induced by forces generated by both mechanisms (cortical sliding and capture-shrinkage) and that dyneins from both IS are in a tug-of-war. The interplay between dyneins and filaments is influenced by dynamic MTs, which constantly grow and shrink: periods of growth alternate with periods of rapid depolymerization in a process called dynamic instability \citep{ walker_dynamic_1988,mitchison_dynamic_1984,vorobev_dynamics_2003, horio_role_2014,brouhard_dynamic_2015, mandelkow_microtubule_1991,bieling_reconstitution_2007, desai_microtubule_1997,kerssemakers_assembly_2006,schek_microtubule_2007,gardner_microtubule_2008}.
This process allows the cytoskeleton to adapt itself to the needs and the functions of the cell and to perform substantial shape changes through the cell cycle \cite{horio_role_2014,myers_distinct_2011,lacroix_microtubule_2018,fuesler_dynamic_2012}. Transitions between two IS would not be possible without the dynamic instability (DI) of MTs: at the end of the repositioning process towards one IS the MT cytoskeleton is deformed and capture-shrinkage MTs are depolymerized \cite{yi_centrosome_2013,hornak_stochastic_2020}. Due to DI, the depolymerized MTs regrow and the deformed cytoskeleton can restructure. \newline We divided our analysis of the MTOC repositioning into two parts. In the first \cite{hornak_stochastic_2020}, we analyzed the repositioning in the cell where the MTOC and the IS are initially diametrically opposed. We found that the capture-shrinkage mechanism is more efficient, since it results in a faster repositioning even when employing just a fraction of dyneins compared to cortical sliding. The two mechanisms act in an unexpected synergy: cortical sliding passes the MTs to the more efficient capture-shrinkage mechanism, which in turn provides a firm anchor point to cortical sliding MTs. The synergy saves the cell resources, since the combination of the two mechanisms with relatively low dynein area densities can be faster than the sole action of the capture-shrinkage mechanism with a much higher density. In the real cell, where the MTOC and the MT cytoskeleton are dragged through an inhomogeneous environment of cytoplasm, filaments and organelles, this synergy could make the difference between finished and incomplete repositioning. It was hypothesized \cite{kuhn_dynamic_2002} that the dynein colocalizes with the ADAP ring at the periphery of the IS to facilitate the interaction with MTs. In our model the cortical sliding dynein was distributed equally over the entire IS.
However, we observed that the large majority of attached cortical sliding dyneins are located at the periphery of the IS. In the case of combined mechanisms, the attached cortical sliding dyneins are completely absent in the center of the IS. This finding supports the aforementioned hypothesis. \newline In this paper we present the second part of our MTOC repositioning analysis and examine the effects of varying the initial configuration of the MTOC and the IS. Our main focus in this paper will be the analysis of the MTOC dynamics in the presence of two IS. In particular we will address the question under which conditions the MTOC alternates between the two IS and what the typical dwelling and transition times are. \section*{Computational model} \label{computational_model} {\small We use the computational model introduced in \cite{hornak_stochastic_2020}. The cell membrane and the nucleus are represented by two spheres with radii 5$\mu$m and 3.8$\mu$m, respectively. MTs sprout from the MTOC to the cell periphery, as sketched in Figs. \ref{fig:variable_Beta_basic}a and b. They are modeled by a bead-rod model with constrained Langevin dynamics. The MTs move under the influence of several forces: bending, drag, molecular motors, noise and repulsive forces keeping them between the nucleus and the cell membrane. The MTOC moves to the IS due to the pulling force of dyneins acting via two mechanisms: cortical sliding, during which the plus-end of the MT remains free and the filament slides tangentially along the plasma membrane, and capture-shrinkage, by which dyneins capture the tip of the MT and depolymerize it by pulling it against the membrane, as sketched in Fig. \ref{fig:variable_Beta_basic}c. Dyneins acting via cortical sliding and capture-shrinkage are located in the complete IS and the narrow center, respectively.
The two regions are represented by intersections of the cell sphere with cylinders with radius $R_{\rm IS}=2\mu\textrm{m}$ for the complete IS and $R_{\rm CIS}=0.4\mu\textrm{m}$ for the center, as sketched in Fig. \ref{fig:variable_Beta_basic}. In \cite{hornak_stochastic_2020} we focused on the analysis of the MTOC repositioning process in the experimental setup used in \cite{yi_centrosome_2013}, in which the MTOC and the IS are initially diametrically opposed. Here we consider naturally occurring situations, in which the angle $\beta$ between the MTOC and the IS (see Fig. \ref{fig:variable_Beta_basic}a) is arbitrary, and situations in which the T cell attaches simultaneously to two target cells and thus forms two IS \cite{kuhn_dynamic_2002}. \newline To analyze the situation with two IS we augmented our model presented in \cite{hornak_stochastic_2020} in several ways. The configuration of a cell with two IS is defined by the angle $\gamma$ between the lines connecting the centers of the IS with the center of the cell, sketched in Fig. \ref{fig:two_IS_sketch}a. Both IS and the center of the cell are located on the $xz$ plane of the coordinate system, sketched in Fig. \ref{fig:two_IS_sketch}b and visually demonstrated in Fig. \ref{fig:two_IS_sketch}c. The dyneins from both IS are in a tug-of-war, leading to an increase of the detachment rate \cite{hornak_stochastic_2020}. When all capture-shrinkage dyneins detach from the MT, the plus-end is no longer fixed on the cell membrane. Most importantly, we included the dynamic instability of MTs \citep{mitchison_dynamic_1984,desai_microtubule_1997,brouhard_dynamic_2015, horio_role_2014,zhang_mechanistic_2015}, since we hypothesized that transitions between two IS rely on DI.
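As an illustration of this geometry, the following sketch classifies a point on the cell sphere relative to one IS, using the radii stated above. The symmetric placement of the two IS axes about the $z$ axis is our assumption for concreteness; the model only fixes the $xz$ plane and the mutual angle $\gamma$.

```python
import math

R_CELL = 5.0   # cell radius (micrometers)
R_IS = 2.0     # cylinder radius defining the complete IS (micrometers)
R_CIS = 0.4    # cylinder radius defining the IS center (micrometers)

def is_axes(gamma):
    """Unit vectors to the centers of the two IS.

    Both IS lie in the xz plane with mutual angle gamma; placing them
    symmetrically about the z axis is an assumption of this sketch."""
    a1 = (math.sin(gamma / 2), 0.0, math.cos(gamma / 2))
    a2 = (-math.sin(gamma / 2), 0.0, math.cos(gamma / 2))
    return a1, a2

def region(p, axis):
    """Classify a point p on the cell sphere relative to one IS.

    The IS (its center) is the intersection of the sphere with a
    cylinder of radius R_IS (R_CIS) around `axis`. Returns 'center',
    'IS' or 'outside'."""
    dot = sum(pc * ac for pc, ac in zip(p, axis))
    if dot <= 0.0:                      # opposite hemisphere
        return 'outside'
    # distance of p from the cylinder axis through the origin
    d = math.sqrt(sum((pc - dot * ac) ** 2 for pc, ac in zip(p, axis)))
    if d <= R_CIS:
        return 'center'
    if d <= R_IS:
        return 'IS'
    return 'outside'
```

For example, for $\gamma = \pi/2$ the point of the sphere on the first IS axis lies in the IS center, while the center of the second IS lies outside the first IS.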
The measured values of the dynamic instability parameters differ \cite{cassimeris_real-time_1988,sammak_direct_1988,belmont_real-time_1990,mitchison_dynamic_1984,zwetsloot_measuring_2018, steinberg_microtubules_2001,yvon_non-centrosomal_1997,carminati_microtubules_1997,adames_microtubule_2000, drummond_dynamics_2000,van_damme_vivo_2004}; they depend on the cell phase \cite{yamashita_three-dimensional_2015,tirnauer_yeast_1999}, and on the distance from the cell membrane \cite{komarova_life_2002,alieva_microtubules_2010,brunner_clip170-like_2000,rusan_cell_2001}. We take the following estimates from the literature: growth velocity $v_g=0.1 \mu\textrm{ms}^{-1}$ \citep{brouhard_dynamic_2015,zwetsloot_measuring_2018,trushko_growth_2013, van_damme_vivo_2004} - although it might depend mildly on load and MT plus-end location \cite{alieva_microtubules_2010,schek_microtubule_2007}; shrinking velocity $v_s=0.2\mu\textrm{ms}^{-1}$; rescue rate (the transition rate from shrinkage to growth) $r_r=0.044/\textrm{s}$ \citep{cassimeris_real-time_1988, shelden_observation_1993, belmont_real-time_1990, fees_unified_2019}; and a length-dependent catastrophe rate (transition rate from growth to shrinkage) $c_{r}(L) = \textrm{exp}(( L - L_{c} ) / b_{c})$ where $L_{c} = \pi R_{\textrm{Cell}} + \frac{ R_{\textrm{Cell}} }{2}$, $b_{c} = ( L_{0} - L_{c} ) / \textrm{ln}( r_{c} )$, $L_{0} = \pi R_{\textrm{Cell}}$ and $r_{c} = 0.022\,\textrm{s}^{-1}$, reflecting a lower catastrophe rate close to the MTOC and a higher one at the cell periphery \cite{tischer_force-_2009,komarova_life_2002,myers_distinct_2011}. The MT length distribution resulting from the dynamic instability with the aforementioned parameters is shown in Fig. \ref{fig:two_IS_sketch}f. Due to the dynamic instability, growing and shrinking MTs coexist in a dynamically changing cytoskeleton affected by the two mechanisms in both IS, as visualized in Fig. \ref{fig:two_IS_sketch}c.
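The stationary length statistics produced by these parameters can be illustrated with a simple fixed-time-step Monte Carlo sketch of a single MT. This is not the constrained Langevin integrator of the full model, and the rule that a fully depolymerized MT regrows from zero length at the MTOC is a simplifying assumption of the sketch.

```python
import math
import random

# dynamic instability parameters from the text
V_G, V_S = 0.1, 0.2              # growth / shrinkage velocity (micrometers/s)
R_RESCUE = 0.044                 # rescue rate (1/s)
R_CELL = 5.0                     # cell radius (micrometers)
L0 = math.pi * R_CELL
LC = math.pi * R_CELL + R_CELL / 2
BC = (L0 - LC) / math.log(0.022)  # chosen so that c_r(L0) = 0.022 1/s

def catastrophe_rate(L):
    """Length-dependent catastrophe rate c_r(L) = exp((L - LC) / BC)."""
    return math.exp((L - LC) / BC)

def simulate(T=5000.0, dt=0.01, burn_in=500.0, seed=1):
    """Fixed-time-step Monte Carlo of a single MT length; returns samples
    collected after the burn-in transient."""
    rng = random.Random(seed)
    L, growing, samples = 5.0, True, []
    for i in range(int(T / dt)):
        if growing:
            L += V_G * dt
            if rng.random() < catastrophe_rate(L) * dt:
                growing = False
        else:
            L -= V_S * dt
            if L <= 0.0:               # full depolymerization: regrow
                L, growing = 0.0, True
            elif rng.random() < R_RESCUE * dt:
                growing = True
        if i * dt >= burn_in:
            samples.append(L)
    return samples
```

Because the catastrophe rate rises steeply beyond $L_c$, the sampled lengths concentrate below roughly the half cell circumference, qualitatively matching the length distribution in Fig. 2f.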
The dynamic instability adds another force acting on MTs, since the growing tips of filaments are pushed against the cell membrane, as sketched in Fig. \ref{fig:two_IS_sketch}e. In contrast to the dynein forces, growing tips can push the MTOC away from both IS and from the $xz$ plane. Since the plus-ends of the MTs remain free during the cortical sliding mechanism, the MTs can grow or shrink even when attached. The MT length influences the contact between MTs and the dyneins on the cell membrane. The MTOC is modeled as a planar structure \cite{hornak_stochastic_2020} and the MTs sprout from the MTOC radially. The short MTs have to bend to stay in contact with the dyneins on the membrane, as sketched in Fig. \ref{fig:two_IS_sketch}d. Once the dyneins detach, the tip recedes from the membrane, making reattachment unlikely. On the other hand, bending forces press the tip of a long MT against the cell membrane, where it can attach to dyneins. } \begin{figure}[hbt!] \centering \includegraphics[trim=0 490 15 10,clip,width=0.8\textwidth]{Figure_1.png} \caption{ {\small (a–d) Sketch of the model of the cell with one IS. (a) A two-dimensional cross-section of the model is shown. The movement of the MTs sprouting from the MTOC is confined between the nucleus and the cell membrane. MTs are pulled by the capture-shrinkage and cortical sliding mechanisms employing dynein motors in the IS. The configuration of the cell is determined by the angle $\beta$ between the IS and the initial position of the MTOC. (b) A three-dimensional sketch of the cell model is given. The plasma membrane and the nucleus are represented by the transparent outer and inner spheres, respectively. Small green spheres represent unattached dyneins in the IS and its center, depicted by the cyan and small brown disks, respectively. Cortical sliding dyneins are anchored in the IS and the capture-shrinkage dyneins in the center. (c) A sketch of the cortical sliding mechanism and the capture-shrinkage mechanism is shown.
Small black dots on the membrane and on the MTs represent dynein's anchor and attachment points, respectively. When the MT is pulled by the capture-shrinkage mechanism, the end of the MT is anchored in the center of the IS and depolymerizes. Cortical sliding MTs slide on the surface and the plus-end remains free. (d) Sketch of MTs intersecting the IS and its center in cells with different angles $\beta$. The percentage of MTs intersecting the IS is given by the ratio of the diameter of the IS to the cell circumference corresponding to the angle $\beta$, depicted by dashed circles. The percentage decreases to a minimum at $\beta=0.5\pi$ and then it increases again. \label{fig:variable_Beta_basic} } } \end{figure} \begin{figure}[hbt!] \centering \includegraphics[trim=0 410 0 0,clip,width=\textwidth]{Figure_2.png} \caption{ {\small Sketch of the model of the cell with two IS. (a) The sketch of the angle $\gamma$ between the two IS. Black dotted lines depict the axes of both IS with the mutual angle $\gamma$. Small black dots on the membrane and MTs represent dynein anchor and attachment points, respectively. MTs can attach to capture-shrinkage or cortical sliding dyneins in both IS. (b) The sketch of the azimuthal and polar angles is given. The positions of the centers of both IS and the cell are located on the $xz$ plane, depicted in gray. When $\gamma<\pi$, both IS are located in the upper hemisphere ($z>0$), denoted by the magenta arrow. The polar angle $\varphi$ describes the cone with the vertex located in the center of the cell. The azimuthal angle $\theta$ gives the angle between the projection of the MTOC position on the $xy$ plane, depicted by the small black circle, and the $x$-axis. (c) A three-dimensional sketch of the cell model is given, $\gamma = \frac{3}{4} \pi$. Growing and shrinking MTs sprout from the MTOC to the cell periphery and can be pulled by the two mechanisms in both IS. Dyneins in one IS cooperate and dyneins from different IS are in a tug-of-war.
(d) The sketch of bending forces acting on MTs attached to capture-shrinkage dyneins is given. The wide black line and brown rectangles represent the plane of the MTOC and the centers of the IS, respectively. Attached MTs sprout from the MTOC tangentially. Bending forces push the long MT against the cell membrane and pull the short MT from it. (e) A two-dimensional sketch of the forces acting on the cytoskeleton. The gray line represents the $xz$ plane on which the centers of the cell and both IS are located. The red line stands for the MT attached in the IS. Dynein forces acting on the red MT pull the MTOC to the $xz$ plane where the IS is located. The growing olive and violet plus-ends of MTs push the MTOC from the $xz$ plane and towards it, respectively. (f) The probability density of the MT length. \label{fig:two_IS_sketch} } } \end{figure} \section*{Results} \subsection*{Repositioning time scales} Before we present the results of computer simulations of the model defined in the previous section (\ref{computational_model}), we give an estimate for the time scale of the MTOC repositioning process based on the antagonistic interplay of friction and pulling forces acting on the MT cytoskeleton. \begin{figure}[hbt!] \centering \includegraphics[trim=0 650 15 0,clip,width=0.999\textwidth]{Figure_3.png} \caption{ Estimations of dynein forces in the IS. (a) Dependence of the dynein force $F_{d}$ on the length of the dynein stalk $l_{d}$. The red points represent multiples of the dynein step $d_{\textrm{step}} = 8\textrm{nm}$. Dynein makes the first two steps quickly due to the zero load; the stepping slows down as the force increases and stops at the stall force $F_{S}=4\textrm{pN}$. The dashed black lines represent the second step and the length corresponding to the stall force, and delimit the probable length of the dynein stalk. (b) The dependence of the dynein attachment rate $p_{\textrm{a}}$ on the distance $d_{\textrm{md}}$ between the dynein and the MT.
The dependence of the dynein detachment rate $p_{\textrm{det}}$ on the dynein force $F_{d}$ is shown in the inset. The black dashed curve in the inset represents the detachment rate $p_{\textrm{det}}(\overline{F_{\textrm{d}}})$ corresponding to the average dynein force. The black dashed line in the main picture represents the distance $d_{\textrm{md}}$ at which $p_{\textrm{a}} = p_{\textrm{det}}(\overline{F_{\textrm{d}}})$. (c) Fraction of all MTs intersecting the IS, $q_{\textrm{IS}}$ (inset: central area of the IS, $q_{\textrm{CIS}}$), as a function of the angle $\beta$. (d) The dependence of the estimated repositioning time $T_{\textrm{est}}$ on the angle $\beta$. Comparison of panel (d) with Fig. \ref{fig:variable_Beta}b shows that the estimated repositioning times are comparable to the measured ones and that the dependencies on the angle $\beta$ follow the same trend, see Fig. S2l. The estimated times are shorter when $\rho_{\textrm{IS}}=200, 500\mu\textrm{m}^{-2}$, since our estimates do not consider the repulsive force of the nucleus and the decrease of the number of dyneins at the end of the repositioning \cite{hornak_stochastic_2020}, see Figs. S4d-f. The MTOC is less pressed against the nucleus when $\rho_{\textrm{IS}}=80\mu\textrm{m}^{-2}$, see Figs. S4d and e. \label{fig:estimations} } \end{figure} The drag force acting on a MT moving with velocity $v$ is $F_{\rm drag}=\gamma_{\rm MT}\cdot v $, where $\gamma_{\rm MT}$ is the drag coefficient. For a cylindrical object of length $L$ and diameter $d$ it is given by \cite{howard_mechanics_2001} \begin{equation} \label{darg_coefficient} \gamma_{\rm MT}=\frac{4\pi\mu L}{\ln(L/d)+0.84}, \end{equation} where $\mu$ is the viscosity of the surrounding liquid, the cytoplasm, which is $e$ times the viscosity of water $\mu_{w}\approx 10^{-3}{\rm N\,sec/m^2}$, i.e. $\mu=e\cdot\mu_{w}=e\cdot 10^{-3}{\rm N\,sec/m^2}$, and we estimate $e\approx 30$ \cite{hornak_stochastic_2020}.
Note that for simplicity we do not discriminate between movement of the cylindrical object in the longitudinal or in the transverse direction. Taking the average length of a MT to be $L=10\mu m$ and its diameter to be $d=25\textrm{nm}$ we have $\gamma_{\rm MT}\approx\mu\cdot 18.4\mu m$. The drag coefficient of the whole cytoskeleton with $N_{\rm MT}$ MTs is $\gamma_{\rm cyto}=N_{\rm MT}\cdot\gamma_{\rm MT}$. Mitochondria, the Golgi apparatus \cite{xu_asymmetrical_2013,ladinsky_golgi_1999,day_three-stage_2013,huang_golgi_2017} and the endoplasmic reticulum \cite{westrate_form_2015,english_endoplasmic_2013,english_peripheral_2009,shibata_rough_2006, hu_weaving_2011} are massive organelles entangled with the cytoskeleton \cite{gurel_connecting_2014,maccari_cytoskeleton_2016} and dragged with it, thereby increasing the drag coefficient by a factor $g$, i.e. $\gamma_{\rm eff}=g\cdot\gamma_{\rm cyto}$, which was estimated to be $g\approx 3$ \cite{hornak_stochastic_2020}. \newline The force pulling on the cytoskeleton is given by the number of dyneins attached to MTs times the average force exerted by a dynein motor: $F=N_{\rm dyn}\cdot F_{\rm dyn}$; the latter is in the pico-Newton range, $F_{\rm dyn}=f\cdot 10^{-12}N$, with $f\approx 1$. Consequently, the velocity of the whole cytoskeleton when $N_{\rm dyn}$ dyneins are pulling is \begin{equation} \label{speed_estimation} v=\frac{N_{\rm dyn}F_{\rm dyn}}{\gamma_{\rm eff}} \approx 54\cdot\frac{N_{\rm dyn}}{N_{\rm MT}}\cdot\frac{f}{e\,g}\; \frac{\mu{\rm m}}{\rm sec}. \end{equation} Inserting the estimates $f=1$, $e=30$, $g=3$, and evaluating the r.h.s. for $N_{\rm MT}=100$ MTs and $N_{\rm dyn}=10-50$ attached dyneins one obtains a velocity $v= 3.6-18{\mu\rm m}/{\rm min}$, a range that agrees well with the experimentally determined MTOC velocities \cite{yi_centrosome_2013}.
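As a cross-check of Eq. (\ref{speed_estimation}), the estimate can be evaluated numerically. The following Python sketch is not part of the simulation code; it only re-evaluates the drag/force balance with the parameter values quoted in the text, including the travel time over the distance $D=\pi R_{\textrm{Cell}}$ with $R_{\textrm{Cell}}=5\mu$m:

```python
import math

def mtoc_velocity_um_per_min(N_dyn, N_MT=100, f=1.0, e=30.0, g=3.0,
                             L=10e-6, d=25e-9, mu_w=1e-3):
    """MTOC velocity estimate from the drag/force balance in the text."""
    # drag coefficient of a single MT, Eq. (darg_coefficient)
    gamma_MT = 4 * math.pi * (e * mu_w) * L / (math.log(L / d) + 0.84)
    # organelles entangled with the cytoskeleton increase the drag by factor g
    gamma_eff = g * N_MT * gamma_MT
    F = N_dyn * f * 1e-12                 # total dynein pulling force [N]
    return F / gamma_eff * 1e6 * 60       # velocity in [um/min]

for N_dyn in (10, 50):
    v = mtoc_velocity_um_per_min(N_dyn)
    T = math.pi * 5.0 / v                 # time to travel D = pi * R_Cell [min]
    print(f"N_dyn={N_dyn:2d}: v = {v:4.1f} um/min, T = {T:3.1f} min")
```

Running the sketch reproduces the velocity range of $3.6$--$18\mu$m/min and relocation times of roughly one to four minutes quoted above.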
For an initial MTOC position diametrically opposed to the IS the MTOC would have to travel a distance $D=\pi R_{\textrm{Cell}}$, where $R_{\textrm{Cell}}$ is the radius of the cell; with $R_{\textrm{Cell}}\approx 5\mu{\rm m}$ and the above velocity estimate, the whole relocation process would need 1-4 minutes, which also agrees with the experimentally reported relocation times \cite{yi_centrosome_2013}. \newline Since the number of attached dyneins is the central quantity determining the speed of the relocation process, let us relate it to the dynein density and the attachment rates that we use in our model. For the capture-shrinkage mechanism we assume dynein to be concentrated in a central region of the IS with radius $R_{\rm CIS}=0.4\mu{\rm m}$ (i.e. an area of $0.5\mu{\rm m}^2$) with a dynein density $\rho_{\rm IS}$. At a medium density of $\rho_{\rm IS}=100\mu{\rm m}^{-2}$ we have $50$ dyneins located in this area, and since most MTs in our model reach this area, they could in principle all be attached: the average distance between dyneins is $D_{\rm d2}=\rho_{\rm IS}^{-1/2}=100\textrm{nm}$ for the assumed dynein density, and with the attachment rate $p_a=5\cdot\exp(-(d_{\textrm{md}}/p_{d}))$, where $d_{\textrm{md}}$ is the distance between a MT and a dynein and $p_{d}=100\textrm{nm}$, one has $p_a\approx 2{\rm s}^{-1}$, implying that attachment is fast in comparison to the duration of the relocation process. Actually, in our simulations we observe that initially ca. one quarter of all MTs get attached to dynein, some of them even attached simultaneously to two dyneins. Consequently, for $\rho_{\rm IS}=100\mu{\rm m}^{-2}$ we have indeed initially 25-50 dyneins attached to MTs, resulting in an initial MTOC velocity of $v_{\rm MTOC}=9-18{\mu\rm m}/{\rm min}$. In the later stage of the relocation process competing forces will slow down the MTOC, as will be revealed by the actual simulations reported below.
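The numbers in the preceding paragraph can be verified in a few lines of Python; this is a standalone check using only the stated parameters, not the simulation code:

```python
import math

rho_IS = 100.0                      # dynein density [um^-2]
R_CIS = 0.4                         # radius of the central region of the IS [um]

area = math.pi * R_CIS**2           # area of the central region [um^2]
N_available = rho_IS * area         # dyneins located in this area
D_d2 = 1e3 / math.sqrt(rho_IS)      # average dynein spacing rho_IS^(-1/2) [nm]

p_d = 100.0                         # decay length of the attachment rate [nm]
p_a = 5.0 * math.exp(-D_d2 / p_d)   # attachment rate at d_md = D_d2 [1/s]

print(f"area = {area:.2f} um^2, ~{N_available:.0f} dyneins, "
      f"spacing = {D_d2:.0f} nm, p_a = {p_a:.1f} 1/s")
```

The script returns an area of about $0.5\mu\textrm{m}^2$ containing about 50 dyneins, a spacing of 100 nm, and an attachment rate of about $2\,\textrm{s}^{-1}$, as stated in the text.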
\newline These rough estimates hold for our model framework as well and can be elaborated further on the basis of more detailed model assumptions. First, the force exerted by an attached dynein is assumed to depend on the length of the stalk between the attachment and the anchor point, $l_{\textrm{d}}$, and is expressed as $F_{\textrm{d}} = 0$ if $l_{\textrm{d}} < L_{0}$ and $ F_{\textrm{d}} = k_{\textrm{d}}( l_{\textrm{d}} - L_{0} ) $ otherwise, where $L_{0} = 18\textrm{nm}$ is the length of the relaxed stalk and $k_{\textrm{d}} = 400\textrm{pN} \mu \textrm{m}^{-1}$ is the elastic modulus of the stalk, see Fig. \ref{fig:estimations}a. In our model, the dynein makes steps of length $d_{\textrm{step}}=8\textrm{nm}$ towards the minus-end of the MT. The stepping is very fast at zero load (the first two steps), slows down as the force increases \cite{hornak_stochastic_2020}, and the movement stops at the stall force $F_{S}=4$pN. Since the MT depolymerizes and moves, the distance between the attachment and the anchor point can differ from multiples of the dynein step. Consequently, the length of the stalk is $l_{1}<l_{\textrm{d}}<l_{2}$, where $l_{1}$ and $l_{2}$ are the lengths corresponding to the second step and to the stall force, respectively, see Fig. \ref{fig:estimations}a. The average dynein force $\overline{F}_{\textrm{d}} = 1.66$pN is calculated as the integral of the force between $l_{1}$ and $l_{2}$ divided by their distance. \newline At the beginning of the repositioning, the number of attached dyneins increases quickly, see Fig. S2. The detachment rate of the dynein is $ p_{\textrm{det}} = \textrm{exp}(\frac{F_{d}}{F_{D}})$, where $F_{D} =2$pN. The detachment rate corresponding to the average dynein force is $p_{\textrm{det}}(\overline{F}_{\textrm{d}}) = 2.29\textrm{s}^{-1}$, see Fig. \ref{fig:estimations}b. Consequently, dyneins are expected to detach in less than half a second.
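The average dynein force $\overline{F}_{\textrm{d}}$ and the resulting detachment rate can be reproduced directly from the force-extension relation above. The Python sketch below uses only the parameter values from the text; its results agree with the quoted $1.66$pN and $2.29\textrm{s}^{-1}$ up to rounding:

```python
import math

L0 = 18.0          # relaxed stalk length [nm]
k_d = 0.4          # elastic modulus, 400 pN/um = 0.4 pN/nm
F_S = 4.0          # stall force [pN]
d_step = 8.0       # dynein step length [nm]
F_D = 2.0          # detachment force scale [pN]

def F_dynein(l):
    """Force-extension relation of the dynein stalk."""
    return 0.0 if l < L0 else k_d * (l - L0)

l1 = 2 * d_step              # stalk length after the two fast initial steps
l2 = L0 + F_S / k_d          # stalk length at which the stall force is reached

# average force: integral of F_dynein over [l1, l2] divided by (l2 - l1),
# evaluated here by a midpoint sum
n = 100000
F_avg = sum(F_dynein(l1 + (i + 0.5) * (l2 - l1) / n) for i in range(n)) / n

p_det = math.exp(F_avg / F_D)    # detachment rate at the average force [1/s]
print(f"l1 = {l1:.0f} nm, l2 = {l2:.0f} nm, "
      f"F_avg = {F_avg:.2f} pN, p_det = {p_det:.2f} 1/s")
```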
The attachment rate of the dynein decreases exponentially with the distance from the filament \cite{hornak_stochastic_2020}. When the distance between the dynein and the filament is $d_{\textrm{md}} = 95$nm, the attachment rate equals the detachment rate at the average dynein force, $p_{a}(d_{\textrm{md}} ) = p_{\textrm{det}}(\overline{F}_{\textrm{d}})$, see Fig. \ref{fig:estimations}b. Consequently, dyneins located closer to the filament are expected to attach faster than dyneins detach on average. The fraction of MTs intersecting the IS, $q_{\textrm{IS}}$ (or the central region of the IS, $q_{\textrm{CIS}}$), shown in Fig. \ref{fig:estimations}c, is given by the ratio of the diameter of the IS (or the diameter of the center of the IS) and the circumference $c(\beta)$ of the circle of latitude at angle $\beta$ (see Fig. \ref{fig:variable_Beta_basic}d): $q_{\textrm{IS}} = \textrm{min}( 1, 2 R_{\textrm{IS}} / c(\beta))$, with $c(\beta)= 2 \pi r(\beta)$, where $r(\beta) =R_{\textrm{Cell}}\sin(\beta)$ is the radius of the circle. The number of attached dyneins can then be estimated by the number of dyneins that are closer than $d_{\textrm{md}} = 95\textrm{nm}$ to a MT: \begin{equation} \label{number_of_dynein} \overline{N}_{dm} = N_{\textrm{MT}} \cdot q_{\textrm{CIS}} \cdot n_{dm}, \end{equation} where $N_{\textrm{MT}} = 100$ is the number of MTs and $n_{dm} = \pi \cdot d_{\textrm{md}}^{2} \cdot \rho_{\textrm{IS}}$ is the number of dyneins in the proximity of the filament. It can be seen in Fig. S2 that the number of attached dyneins approaches the estimated number regardless of the angle and the dynein density. With this $\beta$-dependent estimate of the number of attached dyneins we can repeat the calculation of the estimated MTOC velocity and the relocation time as above. \newline Fig. \ref{fig:estimations}d shows the dependence of the estimated repositioning time $T_{\textrm{est}}$ on the angle $\beta$.
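Eq. (\ref{number_of_dynein}) can be evaluated for a few angles. The short Python sketch below (geometry and parameters as in the text, $\rho_{\textrm{IS}}=100\mu\textrm{m}^{-2}$) illustrates how $q_{\textrm{CIS}}$ and the estimated number of attached dyneins vary with $\beta$:

```python
import math

R_cell = 5.0       # cell radius [um]
R_CIS = 0.4        # radius of the center of the IS [um]
N_MT = 100         # number of MTs
d_md = 0.095       # capture distance of a dynein [um]
rho_IS = 100.0     # dynein density [um^-2]

def q_CIS(beta):
    """Fraction of MTs intersecting the central region of the IS."""
    c = 2.0 * math.pi * R_cell * math.sin(beta)   # circle of latitude at beta
    return min(1.0, 2.0 * R_CIS / c)

# dyneins within capture distance of one MT
n_dm = math.pi * d_md**2 * rho_IS

for frac in (0.25, 0.5, 0.75, 0.9):
    beta = frac * math.pi
    N_dm = N_MT * q_CIS(beta) * n_dm              # Eq. (number_of_dynein)
    print(f"beta = {frac:.2f} pi: q_CIS = {q_CIS(beta):.3f}, N_dm ~ {N_dm:.1f}")
```

The output shows the minimum of $q_{\textrm{CIS}}$ at $\beta=0.5\pi$ and its sharp rise towards $\beta=\pi$, consistent with Fig. \ref{fig:estimations}c.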
The repositioning time increases with the angle until it reaches a maximum at $\beta\sim0.6\pi$ and then decreases, see Fig. \ref{fig:estimations}d. The increase for $\beta\leq0.5\pi$ can be explained by the fact that the distance increases and $q_{\textrm{CIS}}$ decreases with $\beta$, see Fig. \ref{fig:estimations}c. The ratio $q_{\textrm{CIS}}$ increases sharply when $\beta>0.65\pi$ and the increased pulling force results in faster repositioning. \subsection*{Repositioning with one IS} \begin{figure}[hbt!] \centering \includegraphics[trim=0 650 0 0,clip,width=0.99\textwidth]{Figure_4.png} \caption{Repositioning under the influence of the capture-shrinkage and cortical sliding mechanisms for different angles $\beta$ between the IS and the initial position of the MTOC. (a) Dependence of the average MTOC-IS distance $\bar{d}_{\textrm{MIS}}$ on time, $\beta = 0.5\pi$. (b) Dependence of the averaged times $\overline{\textrm{T}}_{\textrm{MTOC}}$ (MTOC-IS distance $d_{\textrm{MIS}}<1.5\mu\textrm{m}$) on the angle $\beta$. Capture-shrinkage and cortical sliding mechanisms are represented by solid and dashed lines, respectively. (c-d) Dependencies of the mean numbers of attached dyneins $\overline{N}$, averaged over simulation runs, on the angle $\beta$. (c) Capture-shrinkage mechanism. (d) Cortical sliding mechanism. \label{fig:variable_Beta} } \end{figure} In this section we present the results of the computer simulations of the model with one IS, located at an angle $\beta$ with respect to the initial position of the MTOC, see Fig. \ref{fig:variable_Beta_basic}. Fig. \ref{fig:variable_Beta}a shows that, expectedly, the repositioning becomes faster with increasing dynein density for both mechanisms. Moreover, the MTOC dynamics has the same characteristics as in the case of $\beta = \pi$, which was analyzed in detail in \cite{hornak_stochastic_2020}. The MTOC travels to the IS and its speed decreases with the MTOC-IS distance.
Additional analysis of the repositioning for the cases $\beta = 0.75,0.5,0.25\pi$ can be found in Supporting Materials and Methods, Sections 2.1 and 2.2. Here we focus on the average repositioning time $\overline{T}_{\textrm{MTOC}}$ and its dependence on the angle $\beta$: Fig. \ref{fig:variable_Beta}b shows that $\overline{T}_{\textrm{MTOC}}$ increases with the angle $\beta$ to a maximum at $\beta \sim 0.75\pi$ and then decreases. $\overline{T}_{\textrm{MTOC}}$ depends on the initial MTOC-IS distance, the opposing forces and the pulling force of the dynein motors. The opposing forces increase with the angle $\beta$ since the nucleus increasingly presents an obstacle on the path of the MTOC. For $\beta = 0.25\pi$ the nucleus does not intersect the line between the initial positions of the MTOC and the IS, as demonstrated visually in Supporting Materials and Methods, Figs. S1b-k. In contrast, the MTOC has to navigate around the entire nucleus when $\beta=\pi$. \newline Figs. \ref{fig:variable_Beta}c and d show that the number of attached dyneins decreases with $\beta$ to a minimum at approximately $\beta = 0.6\pi$ and then increases sharply. This can easily be explained by the number of MTs intersecting the IS, given by the ratio of the diameter of the IS (or its center) and the circumference of the circle of latitude at angle $\beta$ (see Figs. \ref{fig:variable_Beta_basic}d and \ref{fig:estimations}c): $q_{\textrm{IS}} = \textrm{min}( 1, 2 R_{\textrm{IS}} / c(\beta))$, with $c(\beta)= 2 \pi r(\beta)$, where $r(\beta) =R_{\textrm{Cell}}\sin(\beta)$ is the radius of the circle. In the special case of $\beta=\pi$ all sufficiently long MTs intersect the IS, as visualized in Figs. S1c and h. However, when $\beta=0.5\pi$ the IS is intersected only by MTs sprouting from the MTOC towards it, as visualized in Figs. \ref{fig:variable_Beta_basic}b and d and Figs. S1a and e.
The ratio decreases with the angle $\beta$ until it reaches the minimum at $\beta=0.5\pi$ and then it increases sharply, see Fig. \ref{fig:estimations}c, as demonstrated visually in Fig. \ref{fig:variable_Beta_basic}d. In the simulations the minimum is slightly shifted from $\beta=\pi/2$ to $0.6\pi$, as visible in Figs. \ref{fig:variable_Beta}c and d, because dyneins detach due to an increasing opposing force of the nucleus. Subsequently, the number of dyneins increases because an increasing percentage of MTs intersects the IS, compare Figs. \ref{fig:variable_Beta}c and d with Fig.\ref{fig:estimations}c. By comparing Fig. \ref{fig:variable_Beta}c and \ref{fig:variable_Beta}d one observes that the number of attached cortical sliding dyneins increases more sharply with increasing $\beta$, due to the fact that for $\beta>0.9\pi$ a part of the relatively large IS is located diametrically opposite the initial MTOC position and MTs sprouting from the MTOC in all directions can attach to dynein. The number of attached capture-shrinkage dyneins at $\beta=0.15\pi$ is smaller than for $\beta=0.2\pi$ because, due to the short MTOC-IS distance, the MTOC is quickly dragged towards the IS and the number of dyneins quickly decreases, see Fig. S4. \newline The repositioning time $\overline{T}_{\textrm{MTOC}}$ increases with $\beta$ for $\beta<0.7\pi$, since the distance and the opposing force increase and the number of attached dyneins decreases, see Fig. \ref{fig:variable_Beta}b-d. It can be seen in Fig. \ref{fig:variable_Beta}b that the increase of $\overline{T}_{\textrm{MTOC}}$ is sharper for cortical sliding when $\beta>0.5\pi$ due to the different behavior of the number of attached dyneins, cf. Figs. S4 and S6. When $\beta>0.8\pi$, $\overline{T}_{\textrm{MTOC}}$ decreases rapidly due to the sharp increase of the pulling force. \newline The repositioning time offers a way to compare the performance of the two mechanisms for different configurations of the cell.
It can be seen that the cortical sliding mechanism outperforms the capture-shrinkage mechanism when $\beta < 0.5\pi$ and is substantially slower otherwise. The only exception is the case of the cortical sliding mechanism with density $\tilde{\rho}_{\textrm{IS}} =200\mu\textrm{m}^{-2}$, since it results in the fastest repositioning when $\beta\geq 0.85\pi$. The speed of the process can be explained by the three regimes of cortical sliding repositioning analyzed in \cite{hornak_stochastic_2020}. The difference between the repositioning times of the two mechanisms decreases as the dynein density increases, see Fig. \ref{fig:variable_Beta}b. \subsection*{Repositioning in the T Cell with two IS } In this section we present the results of the computer simulations of the model with two IS, as sketched in Fig. \ref{fig:two_IS_sketch}. The configuration of the cell is defined by the angle $\gamma$ between the two IS, sketched in Fig. \ref{fig:two_IS_sketch}a. The densities of dyneins anchored at both IS, $\tilde{\rho}^{1}_{\textrm{IS}}$ and $\tilde{\rho}^{2}_{\textrm{IS}}$, and at the central regions of the IS, $\rho^{1}_{\textrm{IS}}$ and $\rho^{2}_{\textrm{IS}}$, are unknown model parameters, which we therefore vary over a broad range between 0 (no anchored dynein) and $1000\mu\textrm{m}^{-2}$. We calculate and analyze the following quantities: the transition frequency between the two IS, $N_{\textrm{tr}}$ (in $\textrm{min}^{-1}$); the dwell times at one IS, $T_{\textrm{d}}$, defined as the time intervals during which the MTOC-IS distance is smaller than $3\mu\textrm{m}$; and the longitudinal and transverse fluctuations of the MTOC, characterized by the time-averaged probability distributions of the polar and azimuthal angles, $\varphi$ and $\theta$, respectively, which are defined as sketched in Fig. \ref{fig:two_IS_sketch}b. For each point in the parameter space, these quantities were averaged over 500 simulation runs.
Each simulation run is initialized with all dyneins detached. Results are shown with the standard deviation as error bars. \subsubsection*{Capture-shrinkage mechanism } \label{capture_shrinkage_2_IS} \begin{figure}[hbt!] \centering \includegraphics[trim=0 300 0 0,clip,width=0.666\textwidth]{Figure_5.png} \caption{Repositioning with two IS: Snapshots from the time evolution of the MT cytoskeleton configuration with the capture-shrinkage mechanism, $\rho_{IS}^{1} = \rho_{IS}^{2} = 400\mu m^{-2}$. The MTOC is indicated by the large black sphere. Brown cylinders indicate the centers of both IS where capture-shrinkage dyneins are located. Black and red lines represent MTs attached to capture-shrinkage dyneins, and blue and green lines indicate growing and shrinking unattached MTs, respectively. Small black spheres in both IS represent attached dyneins. (a) MTs attach to dyneins in the left IS, form a stalk, and the MTOC moves towards the left IS. (b) The MTOC approaches the IS and MTs depolymerize. (c) Short MTs detach from the left IS. Simultaneously, the plus ends of MTs intersect the center of the distant IS and are captured by dyneins. (d) Only one MT remains attached to the dynein in the left IS and additional MTs attach in the right IS. (e) All MTs are detached from the left IS and multiple MTs are attached in the distant IS. (f) A MT stalk is formed and the MTOC moves towards the right IS. \label{fig:two_IS_sketch_capt_capt}} \end{figure} Video S1 of the Supporting Materials and Methods shows the repositioning with two IS with the same density of capture-shrinkage dyneins, $\rho_{IS}^{1} = \rho_{IS}^{2} = 400\mu m^{-2}$. In the first seconds of the simulation, MTs attach to dyneins at the left IS, as visualized in Fig. \ref{fig:two_IS_sketch_capt_capt}a, and the MTOC is dragged towards it. Captured MTs shorten and depolymerize, see Fig. \ref{fig:two_IS_sketch_capt_capt}b.
As the MTOC approaches the left IS, we observe that the number of attached MTs decreases as MTs detach and reattach in the center of the IS, as demonstrated visually in Figs. \ref{fig:two_IS_sketch_capt_capt}b-d. Simultaneously, the plus ends of MTs intersect the distant IS and are captured by dyneins. Finally, all MTs are detached from the left IS at the end of the transition, and the MTOC moves to the right IS, as demonstrated visually in Figs. \ref{fig:two_IS_sketch_capt_capt}e and f. Due to the dynamic instability, MTs grow (blue lines) and shrink (green lines). The MT cytoskeleton is not damaged permanently by the capture-shrinkage mechanism since short filaments regrow due to polymerization. Therefore, the MTOC relocates back and forth between the two IS until one IS is removed. In Figs. \ref{fig:two_IS_capture}a-c we see that the time evolution of the MTOC position follows a recurring pattern already seen in the video: the MTOC travels to one IS, remains in its close proximity for a time and then repositions to the second IS. This pattern is caused by two effects: the MTs lose contact with the close IS (towards which the MTOC moves) and establish contact with the distant IS. A similar process for the case $\gamma=\frac{3\pi}{4}$ is shown in Video S2 of the Supporting Materials and Methods. \newline The physical mechanism underlying the MTOC transition from one IS to the other and back is the decrease of the dynein attachment probability with decreasing MTOC-IS distance, due to strong bending of attached filaments at short distances, as sketched in Fig. \ref{fig:two_IS_sketch}d: The MTOC is a planar structure and MTs sprout from the MTOC tangentially. At large MTOC-IS distances the MT bends around the nucleus and bending forces press the plus-end against the cell membrane, where it can be captured by dyneins. At small MTOC-IS distances an attached MT has to bend to stay in contact with the IS.
When a short MT detaches from dyneins, the plus end recedes from the IS, making reattachment unlikely. The attachment probability of a MT in the IS depends on the circumferential MTOC-IS distance, since only MTs with a length roughly corresponding to it can attach in the IS. Fig. \ref{fig:two_IS_sketch}f shows that the probability density of the MT length steadily increases before reaching a peak at $L_{\textrm{MT}} \sim 15.8\mu\textrm{m}$, corresponding to the circumferential distance between the two IS when $\gamma = \pi$. Consequently, the probability of MT attachment in the distant IS increases as the MTOC recedes, since an increasing number of MTs has a length corresponding to the circumferential MTOC-IS distance. \begin{figure}[hbt!] \centering \includegraphics[trim=35 410 30 30,clip,width=0.66\textwidth]{Figure_6.png} \caption{Capture-shrinkage mechanism with two IS with the same dynein density $\rho_{IS}^{1} = \rho_{IS}^{2} = \rho_{IS}$. (a-c) Examples of the time evolution of the MTOC position in 600s of the simulation. The time evolutions of the $x$ coordinate of the MTOC are shown. Both IS are located in the $xz$ plane and the MTOC is originally located at the same distance from both IS, $x = 0$. (a and b) $\gamma = \pi$. (a) $\rho_{IS} = 200\mu\textrm{m}^{-2}$. (b) $\rho_{IS} = 800\mu\textrm{m}^{-2}$. (c) $\gamma = \frac{1}{2}\pi$, $\rho_{IS} = 800\mu\textrm{m}^{-2}$. (d and e) Dependencies of the average transition frequency between the two IS per minute (d) and the average dwell time that the MTOC spends next to an IS (e) on the dynein density $\rho_{IS}$ are shown. (f) Probability densities of dwell times for the angle between the axes of the two IS $\gamma = \pi$, $\rho_{\textrm{IS}} = 200\mu\textrm{m}^{-2}$. Inset: Dwell time distribution in a log-lin plot demonstrating the exponential tail. \label{fig:two_IS_capture}} \end{figure} By comparing Figs.
\ref{fig:two_IS_capture}a and \ref{fig:two_IS_capture}b one realizes that when the density increases, transitions are faster and the MTOC remains close to the IS for a shorter time. Fig. \ref{fig:two_IS_capture}d shows that the transition frequency increases with the dynein density. Dwell times decrease with increasing dynein density and increasing angle $\gamma$, cf. Fig. \ref{fig:two_IS_capture}e. One would expect that the transition frequency decreases with the rising distance between the two IS (which increases with $\gamma$). Surprisingly, it decreases with $\gamma$ only when $\gamma\leq\frac{3\pi}{4}$ and is maximal when $\gamma=\pi$. \newline The dynein detachment probability is force-dependent, and the dynein pulling force is constantly opposed by friction forces and forces from the nucleus. As the density increases, more dyneins share the load from the opposing forces and the detachment probability decreases, leading to shorter dwell times, cf. Fig. \ref{fig:two_IS_capture}e. Fig. \ref{fig:two_IS_capture}f shows the probability distribution of dwell times, and from the log-lin scale of the same plot in the inset one concludes that the dwell time distribution has an exponential tail. An increased dynein number leads to faster MTOC movement and shorter dwell times, which again results in an increased transition frequency. \newline The transition frequency does not decrease monotonically with increasing angle $\gamma$, since the probability of dynein attachment increases with the circumferential distance between the two IS. At the end of the MTOC transition, only MTs with a length roughly corresponding to the circumferential distance between the two IS can attach at the distant IS, as visualized in Fig. \ref{fig:two_IS_sketch_capt_capt}. The MT length distribution increases until it reaches a maximum corresponding approximately to half of the cell circumference, cf. Fig. \ref{fig:two_IS_sketch}f.
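The circumferential distance between the two IS is simply the arc length $s(\gamma)=\gamma R_{\textrm{Cell}}$. A quick numerical check (with $R_{\textrm{Cell}}=5\mu$m as in the text) confirms that for $\gamma=\pi$ it matches half of the cell circumference and the peak of the MT length distribution at $\sim 15.8\mu$m:

```python
import math

R_cell = 5.0    # cell radius [um]
for frac in (0.5, 0.75, 1.0):
    s = frac * math.pi * R_cell    # arc length between the two IS [um]
    print(f"gamma = {frac:.2f} pi: circumferential distance = {s:.1f} um")
```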
Consequently, the probability that a plus end intersects the center of the IS at the end of MTOC transitions increases with the angle $\gamma$. The transition frequency decreases with the angle when $\gamma\leq\frac{3\pi}{4}$ because the MTOC travels longer distances and the increase of the probability density is not significant. When $\gamma>\frac{3\pi}{4}$ the increasing distance is compensated by a higher number of MTs intersecting the center of the distant IS, leading to shorter dwell times and faster MTOC movement, see Figs. \ref{fig:two_IS_capture}b-e. The case of $\gamma = \pi$ has the additional geometrical advantage that all MTs of sufficient length intersect the distant IS at the end of transitions, as demonstrated visually in Fig. \ref{fig:two_IS_sketch_capt_capt}b. \newline The increasing number of attached MTs influences the continuity of the MTOC transitions. When $\gamma = \pi$, the movement of the MTOC is regular and uninterrupted, see Fig. \ref{fig:two_IS_capture}b. On the other hand, for the smallest value of $\gamma$, i.e. the shortest distance between the two IS, the movement of the MTOC is highly irregular, see Fig. \ref{fig:two_IS_capture}c: the MTOC stops and stalls before resuming the movement to the IS (blue, green). In some cases the MTOC does not finish the journey and returns to the original location (black). When $\gamma = \pi$ a relatively high number of MTs intersects the center of the distant IS with their plus ends, see Fig. \ref{fig:two_IS_sketch}f. Since the MTOC is pulled by dyneins acting on multiple MTs, transitions are smooth and uninterrupted. When $\gamma = \pi/2$ only a limited number of MTs is pulled, resulting in easily interrupted transitions. \newline The longitudinal and transverse fluctuations of the MTOC along its path from one IS to the other can be described by the distributions of the polar and azimuthal angles, $\varphi$ and $\theta$, sketched in Fig. \ref{fig:two_IS_sketch}b.
The standard deviation of the azimuthal angle decreases with increasing dynein density when $\gamma \leq \frac{3 \pi}{4}$ and increases when $\gamma = \pi$, see Fig. \ref{fig:two_IS_capture_2_angle}a. Two forces act on the cytoskeleton: dynein forces pulling the MTOC towards the IS, and forces of the tips of growing MTs on the cell membrane pushing the MTOC in all directions, sketched in Fig. \ref{fig:two_IS_sketch}e. When $\gamma < \frac{3 \pi}{4}$, only a small fraction of the MTs sprouting from the MTOC intersects the distant IS at the end of the transition, see Fig. \ref{fig:two_IS_sketch}f. Since the stalk pulls the MTOC either within the $xz$ plane or towards it, as sketched in Fig. \ref{fig:two_IS_sketch}e, the azimuthal angle can only decrease during transitions. At the end of the transition, the dyneins detach and forces from growing MT tips can push the MTOC from the $xz$ plane, sketched in Fig. \ref{fig:two_IS_sketch}e, increasing the azimuthal angle. Consequently, the standard deviation of the azimuthal angle decreases with dwell times and therefore decreases with the dynein density, see Figs. \ref{fig:two_IS_capture}e and \ref{fig:two_IS_capture_2_angle}a. Fig. \ref{fig:two_IS_capture_2_angle}b shows that when $\gamma = \frac{\pi}{2}$ the peak of the probability distribution of the azimuthal angle is located at $\theta = 0$ and narrows for higher dynein densities, resulting in a reduced standard deviation. When $\gamma \geq \frac{7}{8}\pi$ the transitions can increase the azimuthal angle of the MTOC, since MTs sprouting in multiple directions can attach to the IS, as visualized in Fig. \ref{fig:two_IS_sketch_capt_capt}. In contrast to the case $\gamma < \frac{7}{8}\pi$, the azimuthal angle increases as the dwell time decreases when $\gamma = \pi$, since the azimuthal angles are low when the MTOC is in the proximity of the IS and the transitions pull it from the plane, increasing the azimuthal angle of the MTOC, see Figs. \ref{fig:two_IS_capture_2_angle}a and c.
\newline When $\gamma < \pi$ the standard deviation of the polar angle slightly decreases with the dynein density when $\rho_{\textrm{IS}}<100\mu\textrm{m}^{-2}$ and then it increases, see Fig. \ref{fig:two_IS_capture_2_angle}d. The standard deviation of the polar angle depends on its range. When $\rho_{\textrm{IS}}\geq 100\mu\textrm{m}^{-2}$ the MTOC transitions between the two IS, see Figs. \ref{fig:two_IS_capture}d and e, and the rising dynein force pulls the MTOC closer to the IS, see Fig. \ref{fig:two_IS_capture_2_angle}e, increasing the range of the polar angle. The density $\rho_{\textrm{IS}}= 50\mu\textrm{m}^{-2}$ is an exceptional case: the MTOC does not transition, see Fig. \ref{fig:two_IS_capture}d, because the dynein density is too small to establish the MT stalk. Consequently, forces from the growing MTs can push the MTOC from both IS, increasing the polar angle, see the inset of Fig. \ref{fig:two_IS_capture_2_angle}e. Obviously, the standard deviation of the polar angle increases with $\gamma$, see Fig. \ref{fig:two_IS_capture_2_angle}d. When $\gamma=\pi$, the standard deviation is the largest and increases monotonically with the dynein density. When $\gamma<\pi$ dyneins always pull the MTOC to the IS located in the upper hemisphere, sketched in Fig. \ref{fig:two_IS_sketch}b. When $\gamma=\pi$ the MTOC can travel through the lower hemisphere, thus increasing the range of the polar angle. Consequently, the standard deviation of the polar angle increases with the transition frequency, compare Figs. \ref{fig:two_IS_capture_2_angle}d and \ref{fig:two_IS_capture}d. Similarly to the case $\gamma<\pi$, the MTOC is pulled closer to the IS as the dynein density increases, see Fig. \ref{fig:two_IS_capture_2_angle}f. \newline We further analyzed the capture-shrinkage scenario with different dynein densities at the two IS. As in the case of equal densities, dyneins detach when the MTOC approaches the IS and MTs attach at the second IS, as visualized in Fig.
\ref{fig:two_IS_sketch_capt_capt}b. We fixed the density at $\textrm{IS}_{1}$, $\rho_{\textrm{IS}}^{1}=600\mu\textrm{m}^{-2}$, and varied the density at $\textrm{IS}_{2}$ in the range $50\mu\textrm{m}^{-2}\leq\rho_{\textrm{IS}}^{2}\leq 1000\mu\textrm{m}^{-2}$. From Fig. \ref{fig:two_IS_capture_2_unequal} one can see that the MTOC transitions between the two IS even when the dynein densities are different. The MTOC is predominantly located closer to the IS with the higher dynein density. The average MTOC-$\textrm{IS}_{1}$ angle $\overline{\Omega}$ steadily increases with $\rho_{\textrm{IS}}^{2}$ and $\overline{\Omega}=\frac{\gamma}{2}$ when $\rho_{\textrm{IS}}^{2} = \rho_{\textrm{IS}}^{1}$, see Fig. \ref{fig:two_IS_capture_2_unequal}b. Moreover, the dwell times are substantially larger for the IS with the higher density, see Fig. \ref{fig:two_IS_capture_2_unequal}c. \newline \begin{figure}[hbt!] \centering \includegraphics[trim=35 410 30 30,clip,width=0.66\textwidth]{Figure_7.png} \caption{Capture-shrinkage mechanism in the cell with two IS with the same dynein density $\rho_{IS}^{1} = \rho_{IS}^{2} = \rho_{IS}$. (a) The dependence of the standard deviation of the azimuthal angle $\theta$ on the dynein density $\rho_{IS}$. (b and c) Probability densities of the azimuthal angle $\theta$. (b) $\gamma = \frac{1}{2}\pi$. (c) $\gamma = \pi$. (d) The dependence of the standard deviation of the polar angle $\varphi$ on the dynein density $\rho_{IS}$. (e and f) Probability densities of the MTOC-IS angle $\Omega$. (e) $\gamma = \frac{1}{2}\pi$. The probability density of the polar angle $\varphi$ is shown in the inset. (f) $\gamma = \pi$. \label{fig:two_IS_capture_2_angle}} \end{figure} \begin{figure}[hbt!] \centering \includegraphics[trim=35 600 30 30,clip,width=0.666\textwidth]{Figure_8.png} \caption{Capture-shrinkage mechanism in the cell with two IS with different dynein densities.
The dynein density in one IS is fixed, $\rho_{IS}^{1} = 600\mu\textrm{m}^{-2}$, and the density in the second IS ranges between $50\mu\textrm{m}^{-2}\leq\rho_{IS}^{2}\leq 1000\mu\textrm{m}^{-2}$. (a-c) Dependencies of the average transition frequency per minute $\overline{N}_{\textrm{tr}}$ (a), the average MTOC-IS angle $\overline{\Omega}$ (b), and the average dwell times close to the first IS, $\overline{T}_{d}^{1}$, and the second IS, $\overline{T}_{d}^{2}$ (c), on the dynein density in the second IS $\rho_{IS}^{2}$ are shown. \label{fig:two_IS_capture_2_unequal}} \end{figure} \subsubsection*{Cortical sliding mechanism} \begin{figure}[hbt!] \centering \includegraphics[trim=0 540 0 0,clip,width=0.66\textwidth]{Figure_9.png} \caption{(a-b) Snapshots from the time evolution of the MT cytoskeleton configuration with the cortical-sliding mechanism acting at both IS. MTs are connected to the MTOC indicated by the large black sphere. Yellow lines represent MTs attached to cortical-sliding dyneins and blue and green lines indicate growing and shrinking unattached MTs, respectively. Attached cortical-sliding dyneins, indicated by small black spheres, are located in both IS represented by cyan cylinders. (a) $\gamma = 0.5\pi$, $\tilde{\rho}_{IS}^{1} = \tilde{\rho}_{IS}^{2}=\tilde{\rho}_{IS}=200\mu m^{-2}$. MTs attach in both IS and dyneins remain in a tug-of-war for the rest of the simulation. (b) $\gamma = \pi$, $\tilde{\rho}_{IS}^{1} = \tilde{\rho}_{IS}^{2}=\tilde{\rho}_{IS}=1000\mu m^{-2}$. The MTOC is located close to the center of the IS. Almost all MTs are attached to dyneins and they sprout from the MTOC in all directions. (c) The sketch of dynein forces when $\gamma = \pi$ and the MTOC is in the close proximity of the IS. The cell membrane and the nucleus are represented by the gray and blue circle, respectively. The black line and the cyan rectangles denote the MTOC and the two IS, respectively.
Black dots and arrows denote dynein motors and the directions of pulling forces, respectively. The green MT is attached to dyneins in both IS and the red MT is attached only to dyneins in the distant IS. Dyneins in the distant IS pull MTs in different directions. The MTOC stays in the close proximity of the IS since it is pulled there by the combined forces of both IS. \label{fig:two_IS_sketch_cort_cort}} \end{figure} In contrast to the capture-shrinkage mechanism, cortical sliding dyneins are distributed in a relatively large IS and can attach at any position on a MT. Since multiple filaments intersect with the IS at every instant, MTs are always simultaneously attached at both IS, as visualized in Figs. \ref{fig:two_IS_sketch_cort_cort}a and b. \newline By comparing Figs. \ref{fig:two_IS_cortical}d-l one realizes that as the angle $\gamma$ increases, the MTOC transitions become more continuous and less frequent. When $\gamma<\frac{3\pi}{4}$, the transition frequency increases before reaching a peak at $\tilde{\rho}_{\textrm{IS}}=200\mu\textrm{m}^{-2}$ and then declines, see Fig. \ref{fig:two_IS_cortical}a. It steadily decreases with rising dynein density when $\gamma>\frac{3\pi}{4}$. The case $\gamma=\pi$ is unique since the transition frequency decreases to zero. Moreover, it is the only case in which the standard deviation of the polar angle decreases with rising dynein density, see Fig. \ref{fig:two_IS_cortical}b. The standard deviation of the azimuthal angle steadily decreases with the dynein density, see Fig. \ref{fig:two_IS_cortical}c. \newline When $\gamma <\frac{3\pi}{4}$ the relatively large IS are located close to each other, as visualized in Fig. \ref{fig:two_IS_sketch_cort_cort}a. The dyneins detach when the MTOC approaches the IS, see Section 2.2 of the Supporting Materials and Methods, and the pulling force decreases. At the same time, MTs are pulled by dyneins in the second IS.
These two effects result in minimal MTOC fluctuations around the central position, see Fig. \ref{fig:two_IS_cortical}d, and in a relatively high transition frequency between the two hemispheres, see Fig. \ref{fig:two_IS_cortical}a. By comparing the azimuthal angles in Figs. \ref{fig:two_IS_cortical}f and i one realizes that the MTOC fluctuations have a strong lateral component when $\gamma \leq\frac{3\pi}{4}$, which is stronger than the parallel one when $\tilde{\rho} = 50\mu\textrm{m}^{-2}$. This is due to the fact that dyneins located at the peripheries of both IS can cooperate while pulling the MTOC away from the $xz$ plane but are always in competition when pulling the MTOC parallel to the plane. The MTOC movement gets more aligned with the $xz$ plane, see Fig. \ref{fig:two_IS_cortical}f, as the dynein density increases, leading to a slight increase in the transition frequency, see Fig. \ref{fig:two_IS_cortical}a. As the density increases further, $\tilde{\rho} > 200\mu\textrm{m}^{-2}$, the MTOC is increasingly pulled from the central position towards the IS, see Fig. \ref{fig:two_IS_cortical}e and f. The number of transitions decreases since the MTOC travels a longer distance. Moreover, as the MTOC approaches one IS, the forces of the nucleus oppose the movement to the distant IS, giving the advantage to the dyneins at the closer IS in the constant tug-of-war. Since the nucleus increasingly presents an obstacle between the two IS as $\gamma$ increases, the transition frequency decreases more significantly with densities $\tilde{\rho} > 200\mu\textrm{m}^{-2}$ when $\gamma=\frac{5\pi}{6}$, see Fig. \ref{fig:two_IS_cortical}a. \newline When $ \frac{3\pi}{4}\leq \gamma < \pi$ and dynein densities are low, the constant competition between dyneins from both IS leads to short, interrupted transitions between the two IS, see Fig. \ref{fig:two_IS_cortical}g.
The MTOC moves around the central position (green); transitions between the two IS are very slow (blue), interrupted (red), or the MTOC dwells in one hemisphere for a long time (black). As in the previous case, the MTOC is increasingly pulled from the central position towards the IS with rising density, see Fig. \ref{fig:two_IS_cortical}h and i. Transitions to the distant IS become more unlikely due to the fact that the dyneins from the distant IS are opposed by the forces of dyneins from the closer IS and from the nucleus. When $\tilde{\rho} = 1000\mu\textrm{m}^{-2}$ the MTOC dwells in one hemisphere and rarely transitions, see Fig. \ref{fig:two_IS_cortical}h and i. Since the MTOC stays longer in the proximity of the IS located in the $xz$ plane as the density increases, the peak of the azimuthal angle probability distribution narrows, see Fig. \ref{fig:two_IS_cortical}i. The Video S3 of the Supporting Materials and Methods shows the process for the case $\gamma=\frac{3}{4}\pi$ and $\tilde{\rho} = 600\mu\textrm{m}^{-2}$. \newline In Figs. \ref{fig:two_IS_cortical}j and k it can be seen that the MTOC trajectories are fundamentally different for lower and higher densities when $\gamma = \pi$. Moreover, the transition frequency is higher than in the case $\gamma = \frac{7\pi}{8}$, since a higher number of MTs intersects the distant IS when the IS are diametrically opposed, as visualized in Fig. \ref{fig:two_IS_sketch_cort_cort}. When the density is low, the MTOC transitions between the two IS without ever reaching their centers, see Fig. \ref{fig:two_IS_cortical}j and l. As the density increases, the MTOC approaches the IS more closely, see Figs. \ref{fig:two_IS_cortical}k and l. When $\tilde{\rho}_{\textrm{IS}}>600\mu\textrm{m}^{-2}$, dynein forces are strong enough to pull the MTOC to the center of the IS, where it remains for the rest of the simulation, see Fig. \ref{fig:two_IS_cortical}k and l.
In such a case the majority of MTs are attached at the distant IS, as visualized in Fig. \ref{fig:two_IS_sketch_cort_cort}b. Since the MTs attached at the distant IS sprout from the MTOC in every direction, the dyneins act in competition, sketched in Fig. \ref{fig:two_IS_sketch_cort_cort}c. If the MTOC recedes from the center of the IS, the dyneins at the closer IS pull the MTOC back, together with a part of the dyneins in the distant IS. In contrast to the cases with $\gamma<\pi$, the MTOC can travel to the distant IS in all directions, resulting in substantial deviations from the $xz$ plane, see the inset of Fig. \ref{fig:two_IS_cortical}l. The peak of the probability density of the angle $\theta$ gets narrower with rising density since the MTOC is increasingly located closer to the IS, see Fig. \ref{fig:two_IS_cortical}l. The Video S4 of the Supporting Materials and Methods shows the process for the case $\gamma=\pi$ and $\tilde{\rho} = 1000\mu\textrm{m}^{-2}$. \newline In general, the MTOC is located closer to the $xz$ plane as the density increases, see Figs. \ref{fig:two_IS_cortical}f, i and l. Consequently, the standard deviation of the azimuthal angle decreases with the dynein density, see Fig. \ref{fig:two_IS_cortical}c. At the same time the standard deviation of the polar angle increases, see Fig. \ref{fig:two_IS_cortical}b, due to its increased range, see Figs. \ref{fig:two_IS_cortical}f and i. The only exception is the case $\gamma=\pi$. The small and still decreasing transition frequency reduces the range of the MTOC-$\textrm{IS}_{1}$ angle, see Fig. \ref{fig:two_IS_cortical}a and l, leading to a decreased standard deviation of the polar angle, see Fig. \ref{fig:two_IS_cortical}b. \begin{figure}[hbt!]
\centering \includegraphics[trim=35 100 30 30,clip,width=0.66\textwidth]{Figure_10.png} \caption{Cortical-sliding mechanism with two IS with the same dynein density $\tilde{\rho}_{\textrm{IS}}^{1} = \tilde{\rho}_{\textrm{IS}}^{2}=\tilde{\rho}_{\textrm{IS}}$. (a-c) Dependencies of the average transition frequency $\overline{N}_{\textrm{tr}}$ (a), the standard deviation of the polar angle $\varphi$ (b), and the standard deviation of the azimuthal angle $\theta$ (c) on the dynein density $\tilde{\rho}_{IS}$ are shown. (d,e) Examples of the time evolution of the MTOC position for $\gamma=\pi/2$. The time evolutions of the x coordinate of the MTOC are shown, (d) for $\tilde{\rho}_{\textrm{IS}}=50\mu \textrm{m}^{-2}$, (e) for $\tilde{\rho}_{\textrm{IS}}=1000\mu \textrm{m}^{-2}$. (f) Probability distribution of the polar angle $\varphi$ (main plot) and the azimuthal angle $\theta$ (inset). (g-i) The same as (d-f) for $\gamma=\frac{2 \pi}{3}$. (j,k) Examples of the time evolution of the MTOC position for $\gamma=\pi$, (j) for $\tilde{\rho}_{\textrm{IS}}=50\mu \textrm{m}^{-2}$, (k) for $\tilde{\rho}_{\textrm{IS}}=1000\mu \textrm{m}^{-2}$ (main plot) and $\tilde{\rho}_{\textrm{IS}}=200\mu \textrm{m}^{-2}$ (inset). (l) Probability distribution of the MTOC-IS angle $\Omega$ and the azimuthal angle (inset). \label{fig:two_IS_cortical}} \end{figure} \subsubsection*{Capture-shrinkage and cortical sliding mechanisms in different IS} \begin{figure}[hbt!] \centering \includegraphics[trim=0 300 0 0,clip,width=0.66\textwidth]{Figure_11.png} \caption{Snapshots from the time evolution of the MT cytoskeleton configuration under the effects of both capture-shrinkage and cortical sliding mechanisms in different IS with the same dynein densities, $\gamma = \frac{3\pi}{4}$, $\rho_{\textrm{IS}}^{1} = \tilde{\rho}_{\textrm{IS}}^{2} = \rho = 400 \mu\textrm{m}^{-2}$, $\rho_{\textrm{IS}}^{2} = \tilde{\rho}_{\textrm{IS}}^{1} = 0 \mu\textrm{m}^{-2}$.
The brown cylinder indicates the center of the IS where capture-shrinkage dyneins are located and the cyan cylinder indicates the whole IS containing cortical sliding dyneins. Attached dyneins are represented by small black spheres. Red and yellow lines represent MTs attached to capture-shrinkage and cortical sliding dyneins, respectively. Blue and green lines depict growing and shrinking unattached MTs, respectively, connected to the MTOC represented by the large black sphere. (a) Initially, MTs attach only to the cortical sliding dyneins at the left IS since no MT plus ends intersect the center of the right IS. (b) Cortical sliding dyneins detach as the MTOC approaches the left IS. (c) The plus end of a MT intersects the center of the right IS and is captured by dynein. (d) Several MTs are still attached to cortical sliding dyneins at the left IS and multiple MTs form a stalk connecting the center of the right IS with the MTOC. The pulling force of the capture-shrinkage dyneins overpowers the cortical sliding dyneins and the MTOC moves to the right IS. (e) As the MTOC approaches the right IS, capture-shrinkage MTs detach from the dyneins. Simultaneously, MTs attach to cortical sliding dyneins at the left IS, resulting in the transition to the left IS. (f) Snapshot from the time evolution of the MT cytoskeleton, $\rho = 1000\mu\textrm{m}^{-2}$, $\gamma = \pi$. The MTOC is located close to the center of the right IS. Almost all MTs are attached to cortical sliding dyneins at the left IS and they sprout from the MTOC in all directions. The MTOC dwells close to the right IS since the forces from cortical sliding dyneins pull the MTOC in different directions and the MTs attached to capture-shrinkage dyneins pull the MTOC towards the right IS. \label{fig:sketch_two_IS_combined_one_each_side}} \end{figure} \begin{figure}[hbt!]
\centering \includegraphics[trim=0 250 10 0,clip,width=0.66\textwidth]{Figure_12.png} \caption{Capture-shrinkage and cortical sliding mechanisms in different IS with the same dynein densities $\rho_{\textrm{IS}}^{1} = \tilde{\rho}_{\textrm{IS}}^{2} = \rho$, $\rho_{\textrm{IS}}^{2} = \tilde{\rho}_{\textrm{IS}}^{1} = 0 \mu\textrm{m}^{-2}$. The cortical sliding $\textrm{IS}_{2}$ is located in the hemisphere $x<0$. (a-c) Dependencies of the average transition frequency $\overline{N}_{\textrm{tr}}$ between the two IS (a), the average angle $\overline{\Omega}$ between the MTOC and the capture-shrinkage $\textrm{IS}_{1}$ (b), and the average dwell times $\overline{T}^{1}_{\textrm{d}}$ and $\overline{T}^{2}_{\textrm{d}}$ that the MTOC spends next to the capture-shrinkage $\textrm{IS}_{1}$ and the cortical sliding $\textrm{IS}_{2}$, respectively (c), on the dynein density $\rho$ are shown. The dwell times $\overline{T}^{1}_{\textrm{d}}$ are given for $\rho \geq 200 \mu\textrm{m}^{-2}$ since the MTOC does not reach the $\textrm{IS}_{1}$ when $\rho < 200 \mu\textrm{m}^{-2}$. (d-e) Examples of the time evolution of the MTOC position in 600s of simulation. The time evolutions of the x coordinate of the MTOC are shown, $\gamma = \frac{3 \pi}{4}$. (d) $\rho = 200\mu\textrm{m}^{-2}$. (e) $\rho = 1000\mu\textrm{m}^{-2}$. (f) Probability distribution of the angle $\Omega$ between the MTOC and the capture-shrinkage $\textrm{IS}_{1}$, $\gamma = \frac{3\pi}{4}$. (g-h) Examples of the time evolution of the MTOC position in 600s of simulation are shown, $\gamma = \pi$. (g) $\rho = 200\mu\textrm{m}^{-2}$. (h) $\rho = 1000\mu\textrm{m}^{-2}$. (i) Probability distribution of the angle $\Omega$ between the MTOC and the capture-shrinkage $\textrm{IS}_{1}$, $\gamma = \pi$. \label{fig:two_IS_combined_one_each_side}} \end{figure} In this section, we analyze the scenario in which the two IS employ different mechanisms. The cortical sliding mechanism has multiple advantages over the capture-shrinkage mechanism.
Given the radii of the whole IS and of its center, $R_{\textrm{IS}}=2\mu\textrm{m}$ and $R_{\textrm{CIS}}=0.4\mu\textrm{m}$, respectively, the surface of the whole IS is $25\times$ larger than the surface of the IS center. Moreover, cortical sliding dyneins attach along the whole MT, capture-shrinkage dyneins just at its tip. Consequently, multiple filaments are attached to cortical-sliding dyneins during the entire simulation. Capture-shrinkage dyneins attach only when the tip of a MT intersects the narrow center of the IS, making the attachment of capture-shrinkage dyneins far less frequent. All capture-shrinkage dyneins can remain unattached for a long time. On the other hand, the capture-shrinkage mechanism has the advantage that the attached MTs form a narrow stalk assuring the alignment of dynein forces, as visualized in Fig. \ref{fig:sketch_two_IS_combined_one_each_side}d. \newline The resulting repositioning process is shown in Video S5 of the Supporting Materials and Methods, $\rho_{\textrm{IS}}^{1} = \tilde{\rho}_{\textrm{IS}}^{2} = 400 \mu\textrm{m}^{-2}$. The capture-shrinkage dyneins are located in the right $\textrm{IS}_{1}$. The MTOC moves to the left IS, since MTs attach immediately to cortical sliding dyneins while the center of the right IS is not intersected by any MT plus ends, as visualized in Fig. \ref{fig:sketch_two_IS_combined_one_each_side}a. When the MTOC approaches the left IS, the cortical-sliding dyneins detach and, simultaneously, the tips of MTs passing through the center of the right IS attach to capture-shrinkage dyneins, as visualized in Figs. \ref{fig:sketch_two_IS_combined_one_each_side}b and c. Since the capture-shrinkage mechanism is opposed by cortical sliding, MTs can detach from the capture-shrinkage dyneins. Several MTs have to attach in the center of the IS at the same time to compete with the force of the cortical sliding dyneins.
As the force of the capture-shrinkage dyneins outweighs the force of the cortical sliding dyneins, the MTOC moves towards the center of the right IS in the direction given by the MT stalk, as visualized in Fig. \ref{fig:sketch_two_IS_combined_one_each_side}d. The capture-shrinkage dyneins detach when the MTOC approaches the right IS. Simultaneously, cortical sliding dyneins attach at the left IS, as visualized in Fig. \ref{fig:sketch_two_IS_combined_one_each_side}e, and the MTOC moves again to the left IS. \newline Fig. \ref{fig:two_IS_combined_one_each_side}a shows that when $\gamma < \pi$ the transition frequency steadily increases with the dynein density. Fig. \ref{fig:two_IS_combined_one_each_side}b shows that when the densities are low, the average angle between the MTOC and the capture-shrinkage $\textrm{IS}_{1}$ satisfies $\overline{\Omega}\gg\frac{\gamma}{2}$, indicating that the MTOC is predominantly located closer to the cortical sliding $\textrm{IS}_{2}$. Moreover, the angle decreases with increasing dynein density. The average dwell times $\overline{T}_{d}^{2}$ close to the cortical sliding $\textrm{IS}_{2}$ and $\overline{T}_{d}^{1}$ close to the capture-shrinkage $\textrm{IS}_{1}$ decrease and increase with increasing density, respectively, see Fig. \ref{fig:two_IS_combined_one_each_side}c. \newline It can be seen in Figs. \ref{fig:two_IS_combined_one_each_side}d, e, g and h that initially the MTOC travels to the cortical sliding $\textrm{IS}_{2}$ in all cases except one. The MTOC travels to the capture-shrinkage $\textrm{IS}_{1}$ only in the highly improbable scenario that the plus ends of multiple MTs intersect the narrow IS center. When $\gamma = \frac{3\pi}{4}$ and $\rho = 200\mu\textrm{m}^{-2}$ the MTOC dwells in the proximity of the cortical sliding $\textrm{IS}_{2}$, see Figs. \ref{fig:two_IS_combined_one_each_side}d and f. The transitions to the capture-shrinkage $\textrm{IS}_{1}$ are interrupted and the MTOC travels back to the cortical sliding $\textrm{IS}_{2}$ (black).
When the MTOC finishes the transition to the $\textrm{IS}_{1}$, it dwells in its proximity for a short time and then returns to the $\textrm{IS}_{2}$ (blue, red). Multiple transitions to $\textrm{IS}_{1}$ occur only rarely (green). Interrupted transitions to $\textrm{IS}_{1}$ can be explained by the constantly attached cortical sliding dyneins overpowering the force of the capture-shrinkage mechanism. If the MTOC finishes the transition to the $\textrm{IS}_{1}$, the capture-shrinkage dyneins detach and cortical sliding pulls the MTOC back to the $\textrm{IS}_{2}$. To conclude, cortical sliding dominates over the capture-shrinkage mechanism when $\rho_{\textrm{IS}}<600\mu\textrm{m}^{-2}$, since the MTOC is located predominantly closer to the $\textrm{IS}_{2}$, see Fig. \ref{fig:two_IS_combined_one_each_side}b-d and f. \newline Fig. \ref{fig:two_IS_combined_one_each_side}e shows that when $\rho = 1000\mu\textrm{m}^{-2}$, the transitions towards the capture-shrinkage $\textrm{IS}_{1}$ are mostly uninterrupted, indicating that the capture-shrinkage mechanism can compete with the cortical sliding dyneins by capturing several MTs and forming a MT stalk, as visualized in Fig. \ref{fig:sketch_two_IS_combined_one_each_side}d. Moreover, the MTOC dwells longer close to the capture-shrinkage $\textrm{IS}_{1}$, see Fig. \ref{fig:two_IS_combined_one_each_side}c, e and f, resulting in a decrease of the average MTOC-$\textrm{IS}_{1}$ angle, see Fig. \ref{fig:two_IS_combined_one_each_side}b. Therefore, the capture-shrinkage mechanism gains dominance over the cortical sliding mechanism as the dynein density increases. \newline When $\gamma < \pi$, the transition frequency increases with the dynein density, see Fig. \ref{fig:two_IS_combined_one_each_side}a, and therefore it increases as the capture-shrinkage mechanism becomes dominant.
The increasing density of capture-shrinkage dyneins increases the probability of dynein attachment and of the formation of a MT stalk that can overcome the cortical sliding mechanism. The formation of the MT stalk results in complete transitions towards the capture-shrinkage $\textrm{IS}_{1}$ and in a steep decrease of the cortical sliding dwell times, see Figs. \ref{fig:two_IS_combined_one_each_side}c and e. However, the capture-shrinkage dwell times increase only slightly with increasing density, see Fig. \ref{fig:two_IS_combined_one_each_side}c. Regardless of the dynein density, motors detach at the end of the transition and depolymerized MTs are unlikely to reattach, as visualized in Fig. \ref{fig:two_IS_sketch}d. Consequently, as the dynein density increases, the capture-shrinkage mechanism becomes increasingly able to pull the MTOC but remains unable to hold it, leading to an increased transition frequency. \newline The case $\gamma = \pi$ is unique since the transition frequency increases with the dynein density before reaching a peak at $\rho = 400\mu\textrm{m}^{-2}$ and then slowly decreases, see Fig. \ref{fig:two_IS_combined_one_each_side}a. The MTOC trajectories differ between dynein densities. When $\rho = 200\mu\textrm{m}^{-2}$, the MTOC moves similarly to the case $\gamma<\pi$: it transitions to one IS, dwells there and then moves to the second IS, see Fig. \ref{fig:two_IS_combined_one_each_side}g. Fig. \ref{fig:two_IS_combined_one_each_side}i shows that the MTOC is predominantly located closer to the cortical sliding IS when the dynein density is low. When $\rho = 1000\mu\textrm{m}^{-2}$, the MTOC dwells in the proximity of the capture-shrinkage $\textrm{IS}_{1}$, see Fig. \ref{fig:two_IS_combined_one_each_side}i, and the transitions to the cortical-sliding $\textrm{IS}_{2}$ are infrequent and unfinished, see Fig. \ref{fig:two_IS_combined_one_each_side}h.
When $\rho \geq 600\mu\textrm{m}^{-2}$, the dynein force is strong enough to pull the MTOC to the close proximity of the center of the capture-shrinkage $\textrm{IS}_{1}$, see Fig. \ref{fig:two_IS_combined_one_each_side}i. In such a case almost all MTs are attached to the cortical sliding dyneins at the distant $\textrm{IS}_{2}$, as visualized in Fig. \ref{fig:sketch_two_IS_combined_one_each_side}f. The MTOC stays in the proximity of the capture-shrinkage $\textrm{IS}_{1}$, see Fig. \ref{fig:two_IS_combined_one_each_side}i, since the cortical-sliding dyneins pull the MTOC in different directions and oppose each other. Moreover, the MTOC is pulled back to the close IS by MTs occasionally attached to capture-shrinkage dyneins in the center of the $\textrm{IS}_{1}$, depicted by the short red MT in Fig. \ref{fig:sketch_two_IS_combined_one_each_side}f. The Video S6 of the Supporting Materials and Methods shows the process for $\gamma=\pi$ and $\rho_{\textrm{IS}}^1 = \tilde{\rho}^2_{\textrm{IS}}=1000\mu\textrm{m}^{-2}$. \newline The transition frequency decreases with the distance between the two IS when $\gamma\leq\frac{2\pi}{3}$, see Fig. \ref{fig:two_IS_combined_one_each_side}a, since the MTOC has to travel a longer distance. When $\gamma>\frac{2\pi}{3}$ the distance is compensated by the increased attachment probability in the center of the IS, caused by the increased probability density of MT lengths corresponding to the circumferential distance between the two IS, see Fig. \ref{fig:two_IS_sketch}f. The increased number of attached capture-shrinkage MTs leads to decreasing cortical sliding dwell times as $\gamma$ increases, see Fig. \ref{fig:two_IS_combined_one_each_side}. The capture-shrinkage dwell times increase with $\gamma$, since a higher number of MTs pulls the MTOC closer to the IS, see Figs. \ref{fig:two_IS_combined_one_each_side}f and i. \subsubsection*{Combined mechanisms in both IS} \begin{figure}[hbt!]
\centering \includegraphics[trim=0 320 0 0,clip,width=0.66\textwidth]{Figure_13.png} \caption{Snapshots from the time evolution of the MT cytoskeleton configuration under the effects of both mechanisms with the same density of capture-shrinkage and cortical sliding dyneins in both IS, $\tilde{\rho}_{\textrm{IS}}^{1}=\tilde{\rho}_{\textrm{IS}}^{2}=\rho_{\textrm{IS}}^{1}=\rho_{\textrm{IS}}^{2} = 400 \mu\textrm{m}^{-2}$, $\gamma = \frac{3\pi}{4}$. Brown and cyan cylinders indicate the centers of both IS where capture-shrinkage dyneins are located and the whole areas of both IS containing cortical sliding dyneins, respectively. Black and red lines represent MTs attached to capture-shrinkage dyneins in the centers of both IS, yellow lines depict MTs attached to cortical sliding dyneins, and blue and green lines indicate growing and shrinking unattached MTs, respectively. Small black spheres in both IS represent attached dyneins. (a) The MTOC is closer to the right IS. MTs intersecting the center of the IS attach to capture-shrinkage dyneins. Cortical sliding dyneins attach to MTs at the periphery of the IS. The MTOC is pulled to the left IS by both mechanisms. (b-c) MTs captured in the left IS depolymerize as the MTOC approaches the IS. Cortical sliding dyneins pull MTs at the periphery of the left IS. MTs are not pulled by the capture-shrinkage dyneins from the left IS since no MT intersects its center. Cortical sliding dyneins at the right IS attach randomly on MTs, but they are overpowered by the combined force of both mechanisms from the left IS and detach. Consequently, substantially more attached cortical sliding dyneins pull at the left IS. (d) As the MTOC approaches, MTs detach from the capture-shrinkage dyneins in the left IS. Simultaneously, MTs intersect the center of the right IS and are captured by dyneins. Cortical sliding dyneins attach at the right IS and detach at the left. (e) Both mechanisms pull the MTOC to the right IS.
(f) MTs attach to capture-shrinkage dyneins in the center of the IS and are pulled by cortical sliding dyneins at the periphery. The dyneins pull MTs in alignment and share the load from opposing forces. \label{fig:sketch_two_IS_combined_both_equal}} \end{figure} The time evolution of the cytoskeleton under the effect of both mechanisms with equal densities in both IS, $\tilde{\rho}_{\textrm{IS}}^{1}=\tilde{\rho}_{\textrm{IS}}^{2}=\rho_{\textrm{IS}}^{1}=\rho_{\textrm{IS}}^{2} = 400 \mu\textrm{m}^{-2}$, $\gamma = \frac{3\pi}{4}$, is shown in the Video S7 of the Supporting Materials and Methods. During the simulation, the MTOC repeatedly transitions between the two IS. Snapshots of one transition can be seen in Fig. \ref{fig:sketch_two_IS_combined_both_equal}. At the end of the transition, the MTs intersecting the center of the distant IS are captured by dyneins, as visualized in Fig. \ref{fig:sketch_two_IS_combined_both_equal}a. The cortical sliding dyneins in the right IS have to compete with both mechanisms from the left IS and detach. Consequently, the MTOC is pulled to the left IS by both mechanisms and the movement is not opposed by forces from the right IS, as visualized in Figs. \ref{fig:sketch_two_IS_combined_both_equal}b and c. As the MTOC approaches the left IS, capture-shrinkage MTs detach, as visualized in Fig. \ref{fig:sketch_two_IS_combined_both_equal}d. Simultaneously, MTs are captured on the other side of the cell. Consequently, the stalk connecting the MTOC and the IS is formed and both mechanisms pull the MTOC to the right IS, as visualized in Fig. \ref{fig:sketch_two_IS_combined_both_equal}e. \newline Fig. \ref{fig:two_IS_combined_both_equal}a shows that the transition frequency increases with the dynein density. Moreover, the transition frequency decreases with increasing angle $\gamma$ only when $\gamma\leq\frac{3\pi}{4}$ and reaches its maximum when $\gamma=\pi$.
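Observables such as the transition count and the dwell times can be extracted directly from trajectories of the $x$ coordinate of the MTOC like those in Fig. \ref{fig:two_IS_combined_both_equal}d and e. The following Python sketch is illustrative only; the threshold-based definitions of a dwell ($|x|$ above a cutoff) and of a transition (two successive dwells at different IS) are assumptions for this sketch, not the paper's exact criteria:

```python
import numpy as np

def transitions_and_dwells(t, x, thresh):
    """Count IS-to-IS transitions and collect dwell times from the x
    coordinate of an MTOC trajectory.

    Assumed definitions (chosen for illustration): the MTOC 'dwells' at
    an IS while |x| > thresh, with x > 0 near IS1 and x < 0 near IS2;
    a transition is counted whenever two successive dwells occur at
    different IS, so interrupted excursions do not count.
    """
    labels = np.where(x > thresh, 1, np.where(x < -thresh, -1, 0))
    n_tr, dwells, prev_side = 0, {1: [], -1: []}, 0
    i, n = 0, len(labels)
    while i < n:
        if labels[i] == 0:
            i += 1
            continue
        side, j = labels[i], i
        while j < n and labels[j] == side:
            j += 1                      # extend the current dwell
        dwells[side].append(t[j - 1] - t[i])
        if prev_side != 0 and side != prev_side:
            n_tr += 1                   # completed IS-to-IS transition
        prev_side, i = side, j
    return n_tr, dwells

# Synthetic trajectory: dwell at IS1, transition to IS2, return to IS1
t = np.arange(10.0)
x = np.array([2, 2, 0, -2, -2, 0, 2, 2, 2, 0], dtype=float)
n_tr, dwells = transitions_and_dwells(t, x, thresh=1.0)
```

Averaging `n_tr` over runs and normalizing by the simulated time gives a quantity analogous to $\overline{N}_{\textrm{tr}}$, and the means of the collected dwell intervals correspond to the dwell times $\overline{T}_{\textrm{d}}$.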
Surprisingly, the dwell times in the proximity of the IS do not steadily decrease with the dynein density despite the continuous decrease of the time that the MTOC spends in one hemisphere, see Fig. \ref{fig:two_IS_combined_both_equal}b. The dwell times decrease with the dynein density until they reach a minimum at $\rho \sim 400 \mu\textrm{m}^{-2}$ and then slightly increase. The standard deviation of the polar angle slightly increases when $\gamma<\pi$ and decreases when $\gamma=\pi$. \newline By comparing Figs. \ref{fig:two_IS_capture}a-c and \ref{fig:two_IS_combined_both_equal}d and e one realizes that the MTOC trajectories under the effects of both mechanisms follow the same pattern as in the case of the sole capture-shrinkage mechanism: the MTOC travels to one IS, dwells in its close proximity and then transitions to the second IS. Moreover, the transitions between the two IS are regular and continuous when $\gamma = \pi$ and incomplete and irregular when $\gamma = \frac{\pi}{2}$. As in the case of the capture-shrinkage mechanism, the increasing circumferential distance between the two IS increases the probability that the plus end of a MT is captured in the center of the distant IS due to the increasing probability density of MT lengths, see Fig. \ref{fig:two_IS_sketch}f. Consequently, the dyneins act on an increased number of filaments as $\gamma$ increases, assuring continuous transitions. \newline The combination of both mechanisms leads to the highest transition frequencies and shortest dwell times, compare Figs. \ref{fig:two_IS_capture}, \ref{fig:two_IS_combined_one_each_side} and \ref{fig:two_IS_combined_both_equal}. The reason is that the capture-shrinkage mechanism supports the cortical sliding mechanism at the distant IS and hinders it at the close IS. At the end of the transitions, capture-shrinkage MTs are depolymerized and the cortical sliding dyneins can attach to a lower number of MTs.
Contrarily, MTs attach to the capture-shrinkage dyneins in the distant center and the two mechanisms pull in alignment sharing the load from opposing forces, as visualized in Fig. \ref{fig:sketch_two_IS_combined_both_equal}f. Consequently, the MTOC is pulled to the distant IS by two mechanisms and to the close IS just by the cortical sliding acting on a reduced number of MTs. \newline The dwell times decrease with the rising dynein density, see Fig. \ref{fig:two_IS_combined_both_equal}b, due to the higher pulling force. The slight increase of dwell times when $\rho>400\mu\textrm{m}^{-2}$ is caused by the fact that the MTOC travels closer to the IS, see Figs. \ref{fig:two_IS_combined_both_equal}h and i, and spends more time in the proximity of the IS. The monotonically decreasing times that the MTOC spends in one hemisphere indicate that the process gets faster with the dynein density despite the slightly increased dwell times, see Fig. \ref{fig:two_IS_combined_both_equal}b. \newline When $\gamma < \pi$, the standard deviation of the polar angle increases with the dynein density, see Fig. \ref{fig:two_IS_combined_both_equal}c, because the MTOC is pulled closer to the IS and the angle has a wider range, see Fig. \ref{fig:two_IS_combined_both_equal}h. The standard deviation of the polar angle is the largest when $\gamma = \pi$, since the MTOC can transition between IS through the lower hemisphere, sketched in Fig. \ref{fig:two_IS_sketch}b. The standard deviation slightly decreases with the density. The reason lies in the increasing speed of the MTOC transitions from one IS to the other. Fig. \ref{fig:two_IS_combined_both_equal}i shows that the MTOC is increasingly located closer to the IS when $\gamma=\pi$. As in the case of the capture-shrinkage mechanism, the deviations from the $xz$ plane decrease with the rising density, see Figs. \ref{fig:two_IS_combined_both_equal}f and g. 
The probability density does not have a peak at $\theta = 0$ when $\gamma = \pi$ at low densities, see Fig. \ref{fig:two_IS_combined_both_equal}g, since the transitions pull the MTOC from the $xz$ plane and the force is often insufficient to finish the transition in the close proximity of the IS center. \newline \begin{figure}[hbt!] \centering \includegraphics[trim=35 280 30 30,clip,width=0.66\textwidth]{Figure_14.png} \caption{Combination of capture-shrinkage and cortical sliding mechanisms with the same dynein density in both IS, $\tilde{\rho}_{\textrm{IS}}^{1}=\tilde{\rho}_{\textrm{IS}}^{2}=\rho_{\textrm{IS}}^{1}=\rho_{\textrm{IS}}^{2} = \rho$. (a-c) The dependencies of the average transition frequency $\overline{N}_{\textrm{tr}}$ (a), the time that the MTOC spends in one hemisphere $\overline{T}_{s}$ and the dwell time in the proximity of the IS $\overline{T}_{\textrm{d}}$ (b) and the standard deviation of the polar angle $\varphi$ (c) on the dynein density $\rho$ are shown. (d and e) Examples of the time evolution of the MTOC position in 600 s of simulation are shown. The time evolution of the x coordinate of the MTOC is shown. $\rho = 600\mu\textrm{m}^{-2}$. (d) $\gamma = \frac{\pi}{2}$. (e) $\gamma = \pi$. (f-g) The probability distributions of the azimuthal angle $\theta$. (f) $\gamma = \frac{\pi}{2}$. (g) $\gamma = \pi$. (h-i) Probability distributions of the angle $\Omega$ between the MTOC and the $\textrm{IS}$ and the polar angle $\varphi$. (h) $\gamma = \frac{\pi}{2}$. (i) $\gamma = \pi$. \label{fig:two_IS_combined_both_equal}} \end{figure} \section*{Discussion} We have analyzed the dynamics of the MTOC during the repositioning process in situations a) in which the T cell has one IS in an arbitrary position with respect to the initial position of the MTOC (quantified by the angle $\beta$ sketched in Fig. 
\ref{fig:variable_Beta_basic}a ), and b) in which the T cell has two IS at two arbitrary positions determined by the angle $\gamma$ between them, sketched in Fig. \ref{fig:two_IS_sketch}a. In \cite{hornak_stochastic_2020} we studied the repositioning in the cell where the MTOC and the IS are initially diametrically opposed, which was the configuration previously analyzed experimentally by Yi et al. \cite{yi_centrosome_2013}. Here we showed that the predictions for this special situation are robust when more general, naturally occurring situations are considered. Most notably, we found that the timescale for the completion of the relocation process agrees for a wide range of dynein densities. Moreover, we predicted and provided explanations for the changes in the MT structure, the "opening" of the MT cytoskeleton resulting from the formation of the MT stalk and friction forces acting on unattached MTs. We further reported that capture-shrinkage is the dominant mechanism in the cell when the MTOC and the IS are initially diametrically opposed. We also discovered that the two mechanisms act in a fascinating synergy that reduces the resources the cell needs for efficient repositioning, since the combination of two mechanisms with a relatively low area density can be faster than the dominant mechanism with high density. \newline One of the differences occurring with smaller initial angles $\beta$ is that only a small fraction of MTs intersect the IS when $\beta<\pi$, as demonstrated visually in Fig. \ref{fig:variable_Beta_basic}d and Fig. S1, leading to a smaller number of attached dyneins and slower repositioning, see Fig. \ref{fig:variable_Beta}b. A lower number of capture-shrinkage MTs and lower friction forces lead to milder changes in the cytoskeleton structure, compare Fig. 3f in \cite{hornak_stochastic_2020} with Fig. S3. 
Nevertheless, the opening is still evident and can be used in experiments to prove the action of the capture-shrinkage mechanism regardless of the cell's initial configuration. \newline It was shown in \cite{combs_recruitment_2006} that the dyneins colocalize with the ADAP ring in the pSMAC. Furthermore, it was hypothesized in \cite{kuhn_dynamic_2002} that one of the reasons that the pSMAC takes the form of a ring is to facilitate the interactions with MTs. Our work strongly supports this hypothesis, since dyneins attach to MTs predominantly at the IS periphery in all configurations \citep{hornak_stochastic_2020}, see Fig. S7 in Supporting Materials and Methods. In our model, the cortical sliding dyneins were distributed homogeneously in the IS. However, attached dyneins are always located predominantly at the periphery of the IS. Moreover, attached dyneins move slightly to the periphery as the dynein area density increases, see Fig. S7a. When the two mechanisms are acting together, the attached cortical sliding dyneins almost completely evacuate the center of the IS, see Fig. S9c in Supporting Materials and Methods. \newline The detailed study of cortical sliding repositioning revealed three different characteristics of the MTOC movement depending on three regimes for the dynein density. This behavior resulted from the competing forces of attached MTs sprouting from the MTOC in different directions. Such behavior was not observed when $\beta<\pi$, see Fig. \ref{fig:variable_Beta} and Fig. S6, since attached MTs are aligned right from the beginning, as visualized in Fig. S5. \newline The comparison of the two mechanisms in various configurations demonstrated that the capture-shrinkage mechanism is dominant when $\beta>\pi/2$, since the times necessary for the MTOC repositioning are shorter, see Fig. \ref{fig:variable_Beta}b. The cortical sliding mechanism is clearly dominant when $\beta<\pi/2$, due to the absence of the resisting force of the nucleus, see Fig. 
\ref{fig:variable_Beta}b. The case of $\beta=\pi/2$ appears to be a borderline case. We can conclude that the two mechanisms have different roles in the cell. When the initial positions of the MTOC and the IS are close to each other, the cortical sliding mechanism can assure effective repositioning. The capture-shrinkage mechanism plays a key role when cortical sliding reaches its limits, so that a fast repositioning is assured in every configuration. \newline Most importantly, it was shown that the mechanisms act in synergy regardless of the initial configuration of the cell, see Figs. S9a and b, since the dominant mechanism is always supported by the secondary one. The cortical sliding mechanism supports the capture-shrinkage mechanism by passing MTs to it, see Figs. S9f and i. The capture-shrinkage mechanism supports cortical sliding by providing a firm anchor point and pulling the MTOC from the nucleus, see Figs. S9g and h. When the MTOC recedes from the nucleus, MTs follow the cell membrane more closely and the attachment to cortical sliding is more likely, see Fig. S9e. The dyneins of the two mechanisms pull in alignment, sharing the load from opposing forces, which decreases the detachment probability. The combination of two mechanisms with low area densities can be faster than the dominant mechanism with high densities \cite{hornak_stochastic_2020}, Figs. S9a and b. \newline To conclude, the cell can polarize with stunning efficiency by employing two mechanisms that perform differently in various cell configurations. In the computational model the synergy of the two mechanisms is displayed in terms of speed. In the real cell, where the cytoskeleton is dragged through the inhomogeneous environment of organelles and filaments, the synergy can make a difference between complete and unfinished repositioning. 
Thus it appears that the location of dyneins on the IS periphery and the combination of two synergistically acting mechanisms together form a complex, efficient machinery ensuring that the crucial immune response of the human body is carried out efficiently while saving resources. \newline In situations in which the T cell has two IS (with relative positions defined by the angle $\gamma$, sketched in Fig. \ref{fig:two_IS_sketch}a) several scenarios have been observed experimentally \cite{kuhn_dynamic_2002} and are also predicted by our model: the MTOC alternates stochastically (but with a well-defined average transition time) between the two IS; it wiggles in between the two IS without transiting to one of the two; or it is at some point pulled to one of the two IS and stays there. We have analyzed with the help of our model which scenario emerges depending on the mechanisms in action and the number of dyneins present. When only the capture-shrinkage mechanism is acting, the transition frequency increases and dwell times decrease with increasing dynein density, see Figs. \ref{fig:two_IS_capture}d and e. Moreover, as the density increases, the lateral fluctuations of the MTOC (perpendicular to the $xz$ plane spanned by the centers of the two IS and the cell center) decrease for $\gamma\leq\frac{2\pi}{3}$, see Figs. \ref{fig:two_IS_capture_2_angle}a and b. The increase of the angle $\gamma$ between the two IS changes the MTOC trajectories: they are interrupted and incomplete when $\gamma=\frac{\pi}{2}$ and continuous when $\gamma=\pi$. One would expect that the transition frequency decreases with the distance between the two IS. Surprisingly, the transition frequency slightly decreases with increasing angle $\gamma$ only when $\gamma\leq\frac{2\pi}{3}$ and increases otherwise, see Fig. \ref{fig:two_IS_capture}. 
The change of the MTOC trajectories and the increase of transition frequency with increasing $\gamma$ can be explained by the shape of the MT length distribution, see Fig. \ref{fig:two_IS_sketch}f. As $\gamma$ increases, increasing numbers of MTs have a length corresponding to the circumferential distance between two IS. Increasing numbers of attached MTs result in a stronger pulling force and a higher transition frequency. Therefore, the presence of the capture-shrinkage mechanism supports the transitions between two IS even when the densities are unequal, see Figs. \ref{fig:two_IS_capture} and \ref{fig:two_IS_capture_2_unequal}. \newline When only the cortical sliding mechanism is present, the dyneins from both IS are in a constant tug-of-war. When the dynein densities are small, the MTOC wiggles around the central position, see Figs. \ref{fig:two_IS_cortical}d,g and j. As the dynein density increases, one IS gains the upper hand and pulls the MTOC. The higher the density, the farther the MTOC travels from the central position, see Figs. \ref{fig:two_IS_cortical}f, i and l. A subsequent transition to the distant IS is unlikely, since the dyneins from the distant IS have to overcome the forces from the close IS and the forces from the nucleus. The effect of the cortical sliding mechanism differs substantially from the effect of the capture-shrinkage mechanism, since the transition frequency decreases with increasing angle $\gamma$ and with increasing dynein density when $\tilde{\rho}_{\textrm{IS}}>200\mu\textrm{m}^{-2}$, see Fig. \ref{fig:two_IS_cortical}a. When $\gamma \geq \frac{3\pi}{4}$, the transition frequency is very small compared with the effect of the capture-shrinkage mechanism, compare Figs. \ref{fig:two_IS_cortical}a and \ref{fig:two_IS_capture}c. In the special case of $\gamma=\pi$, the MTOC does not transition when $\tilde{\rho}_{\textrm{IS}}\geq 600\mu\textrm{m}^{-2}$ due to the competing forces from the distant IS, sketched in Fig. 
\ref{fig:two_IS_sketch_cort_cort}c. \newline The mechanisms can be compared by locating them in different IS. One observes that for $\rho_{\textrm{IS}}< 600\mu\textrm{m}^{-2}$, the average angle between the MTOC and the capture-shrinkage $\textrm{IS}_{\textrm{1}}$ $\overline{\Omega}>\frac{\gamma}{2}$, indicating that the MTOC is located closer to the cortical sliding $\textrm{IS}_{\textrm{2}}$, see Fig. \ref{fig:two_IS_combined_one_each_side}b. As the density increases, the capture-shrinkage mechanism gains the upper hand and the MTOC is located closer to the capture-shrinkage $\textrm{IS}_{\textrm{1}}$, see Figs. \ref{fig:two_IS_combined_one_each_side}b, f and i. One can therefore conclude that the cortical sliding mechanism is stronger only when $\rho_{\textrm{IS}}< 600\mu\textrm{m}^{-2}$ and weaker otherwise. The transition frequency increases with the dynein density when $\gamma<\pi$, see Fig. \ref{fig:two_IS_combined_one_each_side}a. This is due to the fact that the capture-shrinkage dyneins pull the MTOC towards the capture-shrinkage $\textrm{IS}_{\textrm{1}}$, the dyneins detach and the cortical sliding dyneins pull the MTOC to the distant IS. When $\gamma=\pi$, the transition frequency decreases with increasing dynein density when $\rho_{\textrm{IS}}> 400\mu\textrm{m}^{-2}$. In such a case, the MTOC moves closer to the IS as the dynein density increases, see Fig. \ref{fig:sketch_two_IS_combined_one_each_side}i. Subsequently, the MTOC dwells close to the capture-shrinkage IS because cortical sliding dyneins act against each other, as visualized in Fig. \ref{fig:sketch_two_IS_combined_one_each_side}f. \newline When the two mechanisms act together in both IS, the transition frequency increases with the dynein density, see Fig. \ref{fig:two_IS_combined_both_equal}a, and the dwell times are the lowest, see Fig. \ref{fig:two_IS_combined_both_equal}b. 
The high transition frequency is due to the fact that as the MTOC is located in the proximity of one IS, the two mechanisms work together at the distant IS and oppose each other at the closer IS. As the MTOC approaches the IS, the captured MTs depolymerize and eventually detach from dyneins. Consequently, the capture-shrinkage mechanism cannot keep the MTOC at the close IS and the cortical sliding mechanism acts on a reduced number of MTs. On the other hand, the dyneins from both mechanisms cooperate at the distant IS and share the load from opposing forces, reducing their detachment rate. \newline In conclusion, we have provided here a rather complete picture of the MTOC repositioning with one or two IS, under the model assumption of a fixed (spherical) cell shape. It would certainly be rewarding to include a deformable, semiflexible (due to the actin cortex) cell boundary interacting mechanically with the forces exerted by the semiflexible MTs. Another open question concerns the way in which dyneins are spatially organized in the membrane: do they self-organize \cite{hooikaas_kinesin-4_2020} or are they more or less firmly anchored in the actin cortex as we assumed in our model? Probably more experimental insight is necessary to decide this question. \section*{AUTHOR CONTRIBUTIONS} I.H. and H.R. designed the research. I.H. performed calculations, prepared figures, and analyzed the data. I.H. and H.R. wrote the manuscript. \section*{ACKNOWLEDGMENTS} This work was financially supported by the German Research Foundation (DFG) within the Collaborative Research Center SFB 1027.
Moses Ros: Ros Report 2021 Wishing you an extraordinarily healthy, happy, and safe 2022 as we continue to face the challenge of a worldwide pandemic. 2021 was the year of resilience. I have endured and have been able to be productive in my artistic career. I greatly appreciate all the support I received from friends, family, and colleagues. The year was filled with rewarding artistic efforts and accomplishments. "The Quell", 2020 Acrylic on scrim Before entering 2021, I finished the previous year with "The Quell", 7'x9'. This artwork was part of the Appropriated Intentions artist actions, For the Love of Art, curated by Alexis Mendoza at Fort George Hill in New York City. This artwork, standing against racism, was influenced by Keith Haring, with whom I had worked in the Eighties, Philip Guston, and Jean Michel Basquiat. "Fruits of the Spirit", 2021 Banners and Poles "Fruits of the Spirit", at the Queens Botanical Garden, is a 2021 installation consisting of three art banners along the garden's main lawn. Inspired by the words Love, Peace, and Joy, in English, Spanish, and Chinese, they create a graceful and festive atmosphere. These artworks are meant to lift the spirit in our present difficult times. I was a selected artist for the AnkhLave Garden Project Fellowship, where the objective is to consider art presentations beyond the traditional, white-walled spaces. "American Congo", 2016, Acrylic, wood, screws, nails, and rope "American Congo" is a relief sculpture made of acrylic, wood, screws, nails, and rope. This artwork was selected by the curator Haifa Bint-Kadi for the exhibition Ubuntu: I Am Because You Are. Riverfront Art Gallery, Yonkers, New York. This sculpture is based on the Power Figures of the Congo in Africa. These figures served as doctor, judge, and priest. They are carved to capture the power of spirits that were important for healing and for adjudicating disputes. 
The figure was filled with powerful magical substances by priests and tended to in a shrine, where its spirit powers were made available to individuals. Nails were driven into the figure upon a mutual agreement to seal an oath. In this way, the figure's supernatural powers could be called upon to punish those who broke their oaths. In "American Congo", I call on our society to take an oath of freedom in a deeply divided world in the hope of finding healing and resolution. "Honoring My Grandparents", 2016 Mixed Media on wood "Honoring My Grandparents" was selected for the Viva La Memento Mori exhibition at the AHA Fine Art Gallery in Brooklyn, New York. This artwork refers to the hardships my grandparents and family faced during a tyrannical era in the Dominican Republic. "Dominicanidad", 2021, Painting on paper "Dominicanidad", an acrylic painting on a New York City Zoning Map, was included in the There & Here exhibition as part of the New York Dominican Book Fair. This artwork refers to the state and quality of being Dominican outside the island nation. Our cultural heritage and advancement require nurturing to develop to their fullest potential. I hope to inspire a vision of infinite possibilities and achievement. The sky's the limit. "Rebirth of Our Nation", 2020, Acrylic on wood, 8'x4' "Rebirth of Our Nation" is now on display at the Sugar Hill Building windows on St. Nicholas Ave and 155th Street. This painting addresses Black Lives Matter and was in the exhibition Black Is Beautiful: From Carlos Cooks to COVID-19 at the Sugar Hill Museum in Harlem, New York City. This artwork was commissioned by the Rio Galleries of Broadway Housing Communities under the curatorship of Ana-Ofelia Rodriguez and the directorship of Ellen Baxter, with the support of Manon Slone and Eve Moros Ortega of the Plywood Project. 
The exhibition was reviewed by Architectural Digest: https://www.architecturaldigest.com/story/plywood-project-art "Rebirth of Our Nation", 2020, Relief Print, 14"x11" The Clemente Center in New York City included the print "Rebirth of Our Nation" in the exhibition Social Reckoning - eMeLe-K, curated by Alexis Mendoza. The exhibition honored Martin Luther King, Jr. and what he stood for. "Rebirth of Our Nation 2", 2021, Silk Screen Print, 30"x22" "Rebirth of Our Nation 2" is a new print based on the black-and-white print and color painting of the same name. It is an edition of 26, available to the public. It was published by Coronado Studios of Austin, Texas. Nation of Graffiti Artists, NOGA, Book, 2021 I am prominently featured in the new book Nation of Graffiti Artists (NOGA), written by Chris Pape, photographs by Michael Lawrence, and published by Beyond the Streets, 2021. Some of my photographs are in there as well. I started out as a graffiti artist, writing as SAL 161. I eventually ventured into NOGA, the Nation of Graffiti Artists, an artist's workshop located on the Upper West Side of Manhattan. It was the vision of Jack Pelsinger for an art studio where kids could develop their interests in the arts. I created some of my first artworks on canvas there. The book reviewed by Brooklyn Street Art. https://www.brooklynstreetart.com/2021/12/11/nation-of-graffiti-artists-opens-another-chapter-of-nyc-writer-history/ The book is available at https://beyondthestreets.com/ "Mentalmente" "Enraizamiente" "Avioneta" In 2021, The ArteLatAm artist collective, of which I am a member, and which is composed of artists of Latin American and Caribbean descent (Argentina, Cuba, Dominican Republic, Ecuador, Mexico, and Venezuela), added three prints by one of the members to the Deriva linoleum print portfolio. I organized the portfolio, which contains 18 original prints (three prints by each of the six artists) in an edition of 25. 
The portfolios are available at: https://artelatam.org/deriva/ The mission of the ArteLatAm artist collective is to contribute to a better and deeper understanding of contemporary Latin American art in the United States, an American art. "El Reggaetón del Bachatero," 2010 Etching aquatint print with Chine Colle In 2021, I was interviewed by Lauren Chalk of Griffith University for her Ph.D. research project in relation to the print "El Reggaetón del Bachatero," which is now part of the exhibition ¡Printing the Revolution!, which is traveling from the Smithsonian American Art Museum, Washington, D.C. Here is the link to that interview. https://www.representingreggaeton.com/representingreggaetonproject https://www.representingreggaeton.com/museums Group Exhibitions coming in 2022 The In-Between Spaces January 6 – March 24 Riverfront Art Gallery, Yonkers, New York For The Public January 16 – February 19 Local Project, Long Island City, New York ¡Printing the Revolution! February 20 – May 8 Amon Carter Museum of American Art, Fort Worth, Texas (Touring from the Smithsonian American Art Museum, Washington, DC) ¡Presente! A Latino History of the United States April 23, 2022 – December 1, 2024 National Museum of American History, Washington, DC Your interest in my work is greatly appreciated, and, as always, I send you my best wishes for the year moving forward. Please share The Ros Report with those who may enjoy my artwork. May peace and unity reign! moses_ros@yahoo.com mosesros.com bx200.com/portfolio/moses-ros-suarez/ Instagram: moses_ros The Ros Report 2020
from app.utils.environment import load_env

auth0 = load_env('auth0.env')
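The imported `load_env` helper lives in the project-local module `app.utils.environment`, whose source is not shown here. Below is a minimal sketch of what such a loader typically does, assuming it parses KEY=VALUE pairs from a `.env`-style file into a dict; the behavior and return type are guesses, not the project's actual code:

```python
def load_env(path):
    """Parse simple KEY=VALUE lines from a .env-style file into a dict,
    skipping blank lines and '#' comments (an assumed behavior, since
    the real app.utils.environment module is not shown here)."""
    values = {}
    with open(path) as fh:
        for raw in fh:
            line = raw.strip()
            if not line or line.startswith("#"):
                continue
            key, _, value = line.partition("=")
            # Drop surrounding single or double quotes around the value
            values[key.strip()] = value.strip().strip('"').strip("'")
    return values
```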
Football World Cup qualifying looks Trinidad and Tobago straight in the face Trinidad and Tobago's Under-20 football team last night bowed out of contention for a place in the finals of their age group FIFA World Cup when sliding to their third loss in as many matches at the CONCACAF Championship taking place in Jamaica. Needing to win last evening's match against the United States, Trinidad and Tobago came unstuck, losing on a solitary goal scored in the 78th minute of the match in Montego Bay. Trinidad and Tobago's Under-20 football team Even after losses to Guatemala (0-2) and the unbeaten Panama (1-0), Trinidad and Tobago still had a chance to advance to the championship's World Cup playoff round by beating the Americans to finish third of the six teams in their Group. But it was the USA who gained that berth behind Panama and Guatemala and now it's the second-placed Guatemala and third-placed USA who will engage the two runners-up from the other Group, the winners joining Panama and the other Group winner as World Cup finalists. Trinidad and Tobago had started their campaign with a 2-all draw with hosts Jamaica before whipping Aruba 5-1. But they were derailed by the triple-match beating and now their twin-island state must throw their support behind the Under-17 Boys team who have their CONCACAF Championship, a World Cup qualifying tournament as well, coming up next month in Honduras. In December the Trinidad and Tobago Women's team also failed to qualify for the women's World Cup finals which are set for Canada in June. The TT men's team are to begin CONCACAF qualifying for the 2018 World Cup in November.
The round barn at the Fulton County Historical Society grounds was built in 1924 and moved to the open-air museum in 1989 after it was struck by a tornado. The frugal farmers of Fulton County cut corners building their barns a century ago — literally. The barns they built were round. In doing so, the northcentral Indiana county became known as the "Round Barn Capital of the World." The county's appreciation and preservation of its special marks in history make the county special, too. Even though the landlubber county is mostly farmland, it's named for Robert Fulton — the American inventor who first gave us working steamboats, submarines and torpedoes. But in a nod to Fulton's inventiveness, a group of Fulton County farmers embraced round barns, touted at the time for their modern engineering and scientific application that improved farm efficiency. Round barns put more area under roof while using fewer building materials than traditional rectangular barns. Fulton County also holds an unheralded but likely lofty position in the state when it comes to engineering and manipulating natural resources. Lake Manitou, which today covers 775 acres southeast of Rochester, is one of the oldest manmade lakes in Indiana, if not the oldest. The lake was created in 1827 by the federal government to power a mill for the Potawatomi Indian village there as part of a treaty. The removal of the Potawatomi to Kansas, a 660-mile march known as "The Trail of Death," started just north of Fulton County and passed through the county. It's a dark chapter of history but is also one county residents have been in the forefront of researching and commemorating. A Fulton County Eagle Scout placed the first historical marker along the trail in the county in 1976. Now, more than 80 markers, placed mostly by scouts and civic groups, trace its route across four states. One last one is being placed this spring in western Fulton County.
The annual Trail of Courage Living History Festival each September at the county historical grounds also commemorates the event. U.S. 31 and Fulton County Road 375 N. Celebrate the first meetings of French fur traders with Native Americans in what became Fulton County along the Tippecanoe River as the redbuds bloom. The festival features foods cooked over wood fires, programs, blanket traders and merchants, traditional craft demonstrations, pre-1836 reenactors and a Civil War encampment.
layout: ipsumpage
title: Cantadas Nerds
key: cantadasnerd
description: "Quem disse que nerd não dá cantada?"
author: sena
titleColor: "#ff0000"
descColor: "#ff0000"
genBtnBgColor: "#ff0000"
genBtnTextColor: "#ffffff"
labelTextColor: "#ff0000"
labelBgColor: "#ffffff"
labelBorderColor: "red"
paragraphText: "Pretendentes"
genBtnText: "Lançar xaveco."
language: Português
text:
- Gata me chama de Jedi que te ensino a mexer no meu sabre de luz
- Gata não tenho 7 esferas do dragão mas com 2 satisfaço seu desejo
- Gata vc é uma Dragonite em meio a tantos Rattatas.
- gata eu não sou cachoeira mas tenho uma queda por você
- Gata você não é o baidu mas é impossível remover você da minha memória.
- Gata eu não sou Jedi mas posso te fazer sentir a força
- Gata não precisa ser da área de exatas pra saber que o X que falta na sua equação sou eu!
- gata, na tabela do meu banco de dados você é sempre chave primária: única e sem você eu não funciono.
- gata se você fosse um pretérito da língua portuguesa você seria o pretérito mais que perfeito
- Gata, por você desinstalaria o lol do meu PC
- gata eu não sou charizard mas deixaria seu rabo pegando fogo!!!
- Gata eu não sou o Link mas voltaria no tempo pra viver tudo denovo
- Eu não sou o pikachu, mas prometo que quando eu te beijar vai ser chocante.
- Gata, eu não posso te ver, porque a lente do meu óculos é divergente, não de ver anjo.
- Gata, nao sou princesa zelda mas to doido pra voce toca minha ocarina
- Gata, não sou o ekko, mas voltaria no tempo pra viver tudo novamente.
- Gata, você é filha de Apolo?-não, por que?-Porque você me deixa quente
- Gata, te quero tanto que uma rapidinha com você duraria os mesmos 5 minutos que namekusei demorou para explodir. Sua linda!
- Gata, não sou cavaleiro de Athenas, mas faço elevar o cosmo do seu coração.
- Gato, não sou Gollum, mas farei de você meu precioso!!
- Nossa gata se eu fosse o Yoshi eu adoraria pular no seu buraco.
- Moça, Você não é uma horcrux, mas uma parte da minha alma está com você.
- Gato, seu nome é Finn? Porque eu tô louca pra pegar na sua espada e ter uma Hora de Aventura com você, seu lindo!
- Gata, você não é extreme potion, mas me deixou totalmente revigorado.
- Gata, você é não é globulos vermelhos mas percorre meu corpo até o coração sua linda.
- No Universo, uma porção de qualquer matérial atrai outra porção de matéria: o Sol atrai a Terra; a Terra nos atrai e também atrai a Lua, mantendo-a em sua órbita; eu atraiu você e vice-versa.
- Gata, Não sou o Alladin, mas se você esfregar a Lâmpada, faço sair o gênio.
- Gata, eu não sou a força mas eu quero estar sempre com você, sua linda.
- Gata você é tão Linda que até o demolidor enxergou sua beleza.
- Garota, mal te conheço, mas já sei só você é digna de levantar o meu martelo.
- posso ser nub no LOL , mas por você encaro a mid sem pensar duas vezes.
- Gata, você é o Papa Léguas? Porque é difícil de pegar, heih.
- Queria ser gás nobre, mas gás nobre não posso ser,Porque gás nobre se estabiliza com 8 elétrons, e eu me estabilizo com você.
- Gata, você não é balão, mas me leva as alturas.
- Gata, não sou pokémon de água, mas posso te deixar toda molhadinha :3.
- Gata, me chama de rom e me emula no seu coração.
- Gata nosso amor vai durar mais que One piece.
- Gato, por você eu iria de Markarth até Riften, sem fast travel e com peso limitado, seu lieendo!
- Gata, vc não é Activision e eu não sou a Bungie, mas o Destiny nos uniu.
- Gata, Loki pode ter um exército, os Vingadores podem ter o Hulk mas para mim o que importa é ter você.
- Você não é compilador mas por você eu Debugava.
- Mina,não sou Stephen Hawking,mas tenho uma teoria sobre seu buraco negro.
- Minha vida é uma máquina de estados, onde você é a transição que me levará ao estado de felicidade.
- Minha vida é uma máquina de estados, onde você é a transação que me leva ao estado da felicidade.
- Gata você não é esfera do dragão, mas eu me mataria pra te pegar.
- Gata, não sou o Scorpion mas meu Get Over Here vai acertar seu coração.
- Gata, me chama de iphone 6 que eu te mostro o plus, sua linda.
- Gata, me chama de dropbox que eu te levo até as nuvens. Sua linda!
- Gata, você é uma ótima treinadora Pokémon!Capturou meu coração sem ter que machuca-lo!Sua Linda!
- Gata, me chama de crowley e faz um pacto comigo!
- Gato sou que nem videogame 8bits, simples, mas perigosamente viciante.
- Se eu declarasse uma variável amor e chamasse ela depois, retornaria uma string com seu nome.
- Eu, posso não ser forte como o Goku, posso não ser engraçado como o Homem-Aranha, posso até não ser bonito como o Dante, mas por você, eu desligaria o meu vídeo-game a qualquer instante.
- Gato, me chama de noob e vem me ensinar a upar.
- Gata me chama de tio Zangado que eu te digo se você vale apena.
- Cara donzela, eu não sou a mística do X-men, mas eu posso me transformar em quem você quiser!
- Gata, eu não sou vídeo game, mas quando estiver triste, talvez eu te CONSOLE.
- Gato, você não é o slender, mas bem que podíamos estar sozinhos numa floresta
- Me chama de Brok, que te mostro meu Onix.
- Eu não sou o JigglyPuff mas se eu cantar você dorme comigo?
---
sudarshankumar
Topic Author
Posts: 8
Joined: November 15th, 2013, 5:46 am

### Kalman filter stability problem

While implementing a Kalman filter, the covariance matrix sometimes becomes singular. After searching the internet I learned that the Joseph form of the Kalman filter can mitigate that. I just wanted to know how to implement it. Will replacing the old equation

InitVarY = (I - KalmanGain*H)*VarY;

with the new one

InitVarY = (I - KalmanGain*H)*VarY*(I - KalmanGain*H)' + KalmanGain*R*KalmanGain';

be sufficient? Or do I need to edit some other equations as well?

OOglesby
Posts: 42
Joined: August 26th, 2011, 5:34 am

### Kalman filter stability problem

There can be several places and several reasons why a matrix can become singular. Unless the model used is truly linear, your process noise matrix, Q, should not be all zeros. For nonlinear models, having even small non-zero values on the diagonal of Q can help prevent singular matrices from forming, and it helps ensure that the filter does not ignore the measurements. Another place to be careful is the calculation of the Kalman gain: $KalmanGain = VarY \, H^T (H \, VarY \, H^T + R)^{-1}$. Do not explicitly compute the inverse in this equation. Instead, solve a system of equations ($Ax=b$) where

$A = (H \, VarY \, H^T + R)^T$
$x = KalmanGain^T$
$b = (VarY \, H^T)^T$

If your solver can handle solving $xA=b$, then the outermost transposes can be removed. If your development language has symmetric matrices, then use them to represent InitVarY, VarY, R, and Q.
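Putting the two pieces of advice together, here is a minimal NumPy sketch of one measurement update that uses the Joseph-form covariance equation and obtains the gain through a linear solve rather than an explicit inverse. The function name and the example dimensions are illustrative assumptions, not code from the thread:

```python
import numpy as np

def kalman_update_joseph(x, P, z, H, R):
    """One Kalman measurement update using the Joseph-form covariance equation.

    x : state estimate, shape (n,)
    P : state covariance, shape (n, n)  (the thread's VarY)
    z : measurement, shape (m,)
    H : measurement matrix, shape (m, n)
    R : measurement noise covariance, shape (m, m)
    """
    S = H @ P @ H.T + R                 # innovation covariance
    # Gain K = P H^T S^{-1}, computed by solving S K^T = H P
    # (S is symmetric) instead of forming S^{-1} explicitly.
    K = np.linalg.solve(S, H @ P).T
    x_new = x + K @ (z - H @ x)
    A = np.eye(P.shape[0]) - K @ H
    # Joseph form: symmetric and positive semi-definite by construction,
    # unlike the plain (I - K H) P update.
    P_new = A @ P @ A.T + K @ R @ K.T
    return x_new, 0.5 * (P_new + P_new.T)  # symmetrize against round-off
```

For the exact optimal gain the Joseph form reduces algebraically to `(I - K H) P`, but it stays valid (and keeps the covariance positive semi-definite) even when the gain is perturbed by round-off, which is why it is the usual remedy for covariance matrices drifting toward singularity.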
\section{Introduction} \label{sec1} One of the principal questions of the Compressed Baryon Matter (CBM) experiment at GSI FAIR is the equation of state of hot and dense matter produced in heavy-ion collisions at energies of about 20--40 AGeV \cite{fair}. Because perturbative quantum chromodynamics (pQCD) is not applicable to soft processes with small momentum transfer, one has to rely on microscopic models that correctly describe many features of the collisions at various energies. Two such models, ultra-relativistic quantum molecular dynamics (UrQMD) \cite{urqmd} and the quark-gluon string model (QGSM) \cite{qgsm}, are used to extract the effective EOS of the excited matter in heavy-ion collisions at bombarding energies ranging from AGS to SPS. The UrQMD and, to a lesser extent, the QGSM were already employed for studying the equilibration processes, see \cite{urqmd_equil,qgsm_equil}. Recently we modified the analysis by extending it to a non-fixed cell, which should follow the expanding area of uniformly distributed energy density \cite{prc_08}. By using both the UrQMD and the QGSM for studies of the relaxation process in a broad energy range one can expect that the model-dependent effects, caused by the application of a particular event generator, will be significantly reduced. The models use different mechanisms of string excitation and fragmentation. UrQMD relies on the longitudinal excitation, whereas the color exchange scheme is employed in QGSM. The fragmentation functions that determine the energy, momentum, and the type of the hadrons produced during the string decay are also different. Finally, the two models do not use the same tables of hadrons, chosen as discrete degrees of freedom. Whereas the UrQMD contains 55 baryon and 32 meson states together with their antistates, the QGSM takes into account octet and decuplet baryons, and nonets of vector and pseudoscalar mesons, as well as their antiparticles.
Heavy resonances are not included in the current version of the QGSM, and this circumstance can be used to elaborate on their role in the EOS. Central gold-gold collisions with zero impact parameter $b = 0$\,fm were simulated at bombarding energies $E_{\rm lab} = 11.6, 20, 30, 40, 80$ and 160\,AGeV. The total energy, the net baryon charge and the net strangeness extracted for a certain volume of the reaction were inserted into a system of nonlinear equations \cite{urqmd_equil} to obtain the temperature $T$, the baryon chemical potential $\mu_{\rm B}$ and the strangeness chemical potential $\mu_{\rm S}$ of an ideal hadron gas in equilibrium. If the yields and transverse momentum spectra of particles obtained in a snapshot of the microscopic simulations at time $t$ are close to the results of the statistical model (SM), the matter in the cell is considered to be in the vicinity of equilibrium. Then its equation of state can be derived and studied. Because the cell is an open system with instantly changing energy and particle density, the verification of the equilibrium conditions is repeated after a time-step of $\Delta t = 1$\,fm/$c$. \section{Relaxation to equilibrium and EOS in the cell.} \label{sec2} \begin{figure}[htb] \begin{minipage}[t]{70mm} \epsfig{file=eos_fig1.eps,width=70mm} \caption{ Hadron yields in the central $V = 125$\,fm$^3$ cell of central Au+Au collisions at 40\,AGeV in microscopic models (histograms) and statistical model (symbols). } \label{fig1} \end{minipage} \hspace{\fill} \begin{minipage}[t]{65mm} \epsfig{file=eos_fig2.eps,width=70mm} \caption{ Energy spectra of hadrons in the central $V = 125$\,fm$^3$ cell. } \label{fig2} \end{minipage} \end{figure} In the standard approach the test volume was a fixed central cubic cell of $V = 125$\,fm$^3$. The yields of some hadron species are displayed in Fig.~\ref{fig1} for central gold-gold collisions at $E_{\rm lab} = 40$\,AGeV.
The agreement between the results of the microscopic and statistical model calculations is good after $t \geq 9$\,fm/$c$. Here the standard criterion $[yield(mic)-yield(SM)]/error(SM) \leq 1$ is applied. According to the model analysis, after $t \approx 10$\,fm/$c$ almost all many-body processes going via the formation of strings or many-particle decaying resonances have ceased, and one deals mainly with elastic and quasi-elastic reactions. The energy spectra $dN / (4 \pi p E \, dE)$ calculated microscopically are shown in Fig.~\ref{fig2}. The Boltzmann fit to the particle distributions is presented in Fig.~\ref{fig2} as well. Both in UrQMD and in QGSM the energy spectra agree well with the exponential form of the Boltzmann distributions. Because the hadronic matter in the central cell nearly reaches the state of thermal and chemical equilibrium, the macroscopic thermodynamic parameters of the system, such as temperature and chemical potentials, become meaningful. Isentropic expansion of a relativistic fluid is one of the main postulates of the Landau hydrodynamic theory \cite{La53} of multiparticle production. As can be seen in Fig.~\ref{fig3}, the entropy per baryon ratio is nearly conserved in the equilibrium phase of the expansion within the 5\% accuracy limit. The entropy densities $s$ obtained for the cell in both models are very close to each other, but, because of the difference in the net-baryon sector, the ratio $s/\rho_{\rm B}$ in UrQMD is about 15--20\% larger than that in QGSM. \begin{figure}[htb] \begin{minipage}[t]{70mm} \epsfig{file=eos_fig3.eps,width=75mm} \caption{ Entropy per baryon in the central cell as a function of time $t$. } \label{fig3} \end{minipage} \hspace{\fill} \begin{minipage}[t]{70mm} \epsfig{file=eos_fig4.eps,width=75mm} \caption{ Equation of state: microscopic pressure $P$ vs. the energy density $\varepsilon$.
} \label{fig4} \end{minipage} \end{figure} Any hydrodynamic model relies on the equation of state, which links the pressure of the system to its energy density. Otherwise, the system of hydrodynamic equations is incomplete. The corresponding plot with microscopic pressures $P_{\rm mic}(\varepsilon)$ is presented in Fig.~\ref{fig4}. For both models the shapes of the distributions are very close to linear for all energies in question. Thus the EOS has a rather simple form \begin{equation} \displaystyle P(\varepsilon) = c_s^2 \varepsilon\ , \label{eq1} \end{equation} where the sonic velocity in the medium $c_s = (dP/d\varepsilon)^{1/2}$ is fully determined by the slopes of the distributions $P(\varepsilon)$. To account for possible deviations from a straight-line behavior the slopes of the functions $P$ versus $\varepsilon$ were averaged over the whole period of the equilibrated phase. For the UrQMD calculations the velocity of sound increases from 0.13 at $E_{\rm lab} = 11.6$\,AGeV to 0.146 at $E_{\rm lab} = 158$\,AGeV. It saturates at $c_s^2 \approx 0.15$ for RHIC energies \cite{urqmd_equil}. That corresponds to a change of the nuclear compressibility from 140\,MeV\,(AGS) to 200\,MeV\,(SPS and RHIC). In QGSM calculations the averaged sound velocity is about 0.015 units smaller. Note that due to the averaging over time and, correspondingly, energy density, these values are lower than the maximal values for $c_s^2$ that are reached in the corresponding reactions. Both models indicate that at the energy around $E_{\rm lab} = 40$\,AGeV the slope of the $c_s^2 (\sqrt{s})$ distribution is changing, and the velocity of sound becomes less sensitive to the rising bombarding energy. \begin{figure}[htb] \begin{minipage}[t]{70mm} \epsfig{file=eos_fig5.eps,width=75mm} \caption{ The sound velocity $c_s^2$ in the central cell of volume $V=125$\,fm$^3$ as a function of baryon chemical potential $\mu_{\rm B}$.
} \label{fig5} \end{minipage} \hspace{\fill} \begin{minipage}[t]{70mm} \epsfig{file=eos_fig6.eps,width=75mm} \caption{ Temperature dependence of the sound velocity. Dashed line corresponds to calculations within Hagedorn model of ideal hadron gas. } \label{fig6} \end{minipage} \end{figure} Figure \ref{fig5} shows the dependence of $c_s^2$ on the baryon chemical potential $\mu_{\rm B}$. For three bombarding energies, $E_{\rm lab} = 20$\,AGeV, 30\,AGeV, and 40\,AGeV, the functions $c_s^2(\mu_{\rm B})$ are close to each other. In QGSM calculations $c_s^2$ depends linearly on $\mu_{\rm B}$ and the slope $c_s^2 / \mu_{\rm B}$ is unique for all reactions. In UrQMD the picture is more complex. For the late stages of the system evolution the slopes of all distributions are also similar, but for energies of $E_{\rm lab} \geq 40$\,AGeV one sees a rise of the sound velocity at the beginning of the equilibration, a plateau, and a falloff. This can be taken as an indication of the role of heavy resonances, because their fraction is present in the particle spectrum at the early period and disappears completely at the end. These resonances are rare at $E_{\rm lab} \leq 20$\,AGeV, and the distributions $c_s^2(\mu_{\rm B})$ obtained in both models are quite similar. The obtained EOS is soft, because for an ultrarelativistic gas of light particles the sonic speed is $c_s = 1/\sqrt{3}$. But the presence of resonances in the particle spectrum leads to a decrease \cite{Shur72} of $c_s$. Employing the empirical dependence $ \displaystyle \rho (m) \propto m^{\alpha^\prime} \ ,\ 2 \leq \alpha^\prime \leq 3$ \cite{Hag65}, where $\rho(m)\, dm$ denotes the number of resonances with masses from $m$ to $m + dm$, one arrives at the equation of state in the form \cite{Shur72} \begin{equation} \displaystyle \varepsilon = (\alpha^\prime + 4)\, P \ , \label{eq2} \end{equation} i.e., $\frac{1}{7} \leq c_s^2 \leq \frac{1}{6}$. This result is reproduced in microscopic models.
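The sound velocity quoted throughout is simply the fitted slope of $P$ versus $\varepsilon$ in the cell. A minimal sketch of that extraction (with synthetic, illustrative numbers, not the UrQMD/QGSM cell data):

```python
import numpy as np

# Synthetic (energy density, pressure) pairs mimicking a linear hadronic
# EOS P = c_s^2 * eps; the slope 0.15 is illustrative, not a model result.
eps = np.linspace(0.1, 1.5, 20)   # energy density, GeV/fm^3
P = 0.15 * eps                    # pressure, GeV/fm^3

# Least-squares slope of P(eps) gives the squared speed of sound;
# a nonzero intercept would signal a deviation from Eq. (1).
cs2, intercept = np.polyfit(eps, P, 1)
```

Averaging such slopes over the time steps of the equilibrated phase yields the $c_s^2(\sqrt{s})$ values discussed in the text.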
Note that the PHENIX collaboration reported the value $c_s \approx 0.35 \pm 0.05$ \cite{PHENIX_Cs}, i.e., $c_s^2 \approx 0.12 \pm 0.03$, for Au+Au collisions at the top RHIC energy $\sqrt{s} = 200$\,AGeV. This value is close to our results and also implies a rather soft effective EOS. The temperature dependence of the sonic speed $c_s^2(T)$ is depicted in Fig.~\ref{fig6} together with the EOS calculated in \cite{CFC05} within the Hagedorn model with $\mu = 0$. For $E_{\rm lab} = 80$\,AGeV and 160\,AGeV the UrQMD data exhibit a falloff in $c_s^2(T)$ at $T \geq 120$\,MeV in accord with the Hagedorn model. This decrease is assigned to heavy resonances, because neither the UrQMD calculations at lower energies nor the QGSM calculations without the heavy resonances reveal the negative slope in the equation of state $c_s^2(T)$. Below $T = 100$\,MeV both microscopic models indicate a rapid drop of the sound velocity that occurs much earlier compared to that of the Hagedorn model. In the modified analysis the central cell was further subdivided into smaller ones embedded one into another. If the $\varepsilon$ of the inner cell is not the same (within the 5\% limit of accuracy) as the energy density of the outer one, the SM analysis of the thermodynamic conditions is performed for the inner cell, otherwise the outer cell becomes a new test volume. This permits one to follow the expansion of the area with uniformly distributed energy. The EOS in the $T-\mu_{\rm B}$ plane is shown in Fig.~\ref{fig7}. Symbols and dashed lines show the evolution of these quantities in a cell of instantly increasing volume ($V_{\rm init} = 0.125$\,fm$^3$), whereas dotted (upper plot) and full (both plots) lines are related to calculations with the fixed volume $V = 125$\,fm$^3$. The transition to equilibrium proceeds quite smoothly if the analysis is performed for the fixed cell.
In contrast, in the area with uniformly distributed energy the transition is characterized by a kink distinctly seen in each of the phase diagrams in both microscopic models. The effect, which takes place along lines of constant entropy per baryon, is caused by the significant reduction of the number of processes going via the formation and fragmentation of strings, i.e., chemical freeze-out. The observed phenomenon can easily mimic the signature of the QCD phase transition in the $T$-$\mu_{\rm B}$ plane. The evolution of the strangeness chemical potential $\mu _{\rm S}$ with $T$ in the fixed and non-fixed volume is displayed in Fig.~\ref{fig8}. As in Fig.~\ref{fig7}, all systems develop kinks in the $T(\mu_{\rm S})$ distributions precisely at the moment of transition from the nonequilibrium to the equilibrium phase. Both the baryon density and the strangeness density are decreasing in the test volume; however, the baryon chemical potential increases with time, whereas the strangeness one drops. The evolution of the $\mu _{\rm S}$ and $\mu _{\rm B}$ with $T$ proceeds quasilinearly, thus reducing the deviations, caused by nonzero chemical potentials, of the functions $\varepsilon(T)$ and $s(T)$ from the ideal gas behavior at $\mu = 0$. \begin{figure}[htb] \begin{minipage}[t]{70mm} \epsfig{file=eos_fig7.eps,width=75mm} \caption{ Temperature $T$ vs. baryon chemical potential $\mu_{\rm B}$. } \label{fig7} \end{minipage} \hspace{\fill} \begin{minipage}[t]{70mm} \epsfig{file=eos_fig8.eps,width=75mm} \caption{ Temperature $T$ vs. strangeness chemical potential $\mu_{\rm S}$. } \label{fig8} \end{minipage} \end{figure} \section{Conclusions} \label{sec3} In summary, both microscopic models favor the formation of equilibrated matter for a period of about 10\,fm/$c$ for all reactions in question. During this period the matter in the central cell expands with constant entropy per baryon.
The equation of state can be approximated by a simple linear dependence $P = a(\sqrt{s}) \varepsilon$, where the square of the speed of sound $c_s^2 = a(\sqrt{s})$ varies from 0.13 (AGS) to 0.15 (SPS) in the UrQMD calculations and from 0.11 (AGS) to 0.14 (SPS) in the QGSM ones. Heavy resonances are responsible for the negative slope in $c_s^2 (T)$ at $T \geq 100$\,MeV in accord with the predictions of the Hagedorn model of a hadron resonance gas. At lower temperatures both microscopic models indicate a rapid drop of the sonic speed in stark contrast with the Hagedorn model calculations with zero chemical potential. Study of the expanding area of uniformly distributed energy reveals that the relaxation to equilibrium in this dynamic region proceeds at the same rate as in the case of the fixed-size cell. However, here both microscopic models unambiguously show the presence of a kink in the $T$-$\mu_{\rm B}$ phase diagrams. The higher the collision energy, the earlier the kink formation. Its origin is linked to the freeze-out of inelastic reactions in the considered area. {\it Acknowledgments\/.} This work was supported by the Norwegian Research Council (NFR) under contract no. 185664/V30, by the DFG and the BMBF. \section*{References}
Anthony Albanese MP
Leader of the Australian Labor Party
Federal Member for Grayndler

Transcript of Radio Interview – Gold Central – Tuesday, 24 September 2019

SUBJECTS: Today's visit to Bendigo; regional cities' positive impact on the economy; housing issues across Australia; role of the Leader of the Opposition; Scott Morrison's trip to the United States.

HOST: Very pleased to have as our guest on the show, the Leader of the Federal Opposition, Anthony Albanese. I was just thinking before the interview that Mr Albanese sort of sounds like I might be talking to a headmaster or someone and I don't know that I've ever heard anyone call him Anthony. So Mr Albanese, would it be okay if we dispensed with formality and I went with Albo?

ANTHONY ALBANESE, LEADER OF THE AUSTRALIAN LABOR PARTY: That would be fine Robbo.

HOST: Terrific. Listen, welcome to the show. Thanks very much for your time this morning.

ALBANESE: Thanks for having me on.

HOST: What brings you to our beautiful part of the world today?

ALBANESE: Well I've been invited by Lisa Chesters in Bendigo where we're going to the Keech 3D Advanced Manufacturing Centre there that employs over 100 people in Bendigo. It's an example of a real success story in a regional city employing Australians and making a positive impact on the economy. Later on, we've been invited to have a look at the Haven Home Safe which is a social housing residential complex there and we'll be having morning tea with the residents.
And we'll be having a look at other issues around Bendigo including the potential advance of projects that we've promoted during the election campaign that we think should proceed. Projects like the Rail Trail, projects like the upgrade of Bendigo airport, and arguing essentially that the economy needs a bit of a push at the moment and these projects are ready to go. They create jobs now and help to boost the economy here in Bendigo.

HOST: I guess not just central Victoria but a lot of regional Australia, particularly in relation to Keech 3D and companies like that, regional areas with the costs so high now in major metropolitan areas, regional manufacturing and business is something that obviously has to be on everybody's agenda.

ALBANESE: Absolutely, and it's actually a huge comparative advantage that regional cities have, like Bendigo. And Bendigo is doing it really well in terms of Advanced Manufacturing and Defence, in newer technology such as 3D printing and other companies as well. Part of the reason for my visit here is to showcase Central Victoria and to encourage that business investment here in the region where overheads are less than they would be if it were located in the centre of Melbourne. And it also has some advantages in terms of lifestyle. Bendigo and Central Victoria is a fantastic place to live for workers and commute times are less, urban congestion is less. And it's a great lifestyle. Lisa Chesters is very proud to represent this area and I can understand why.

HOST: I guess then with the other part of your visit you mentioned the Sidney Myer Haven and part of that also demonstrates that we are not immune, as is anywhere in Australia, of the homeless situation and housing issues and dare I say, a crisis at times as well.

ALBANESE: Absolutely. They've got a mobile van that they used to service some of the outreach places in the area and throughout the region.
We'll be assisting there to serve morning tea and to talk to residents and to talk to the operators of the service there. And I'm looking forward to that this morning.

HOST: Just on a more general issue, the Leader of the Opposition's role, has it been more challenging than you expected or is it really what you thought it would be?

ALBANESE: Look, I always knew that it would be a difficult job and it is. It's challenging. You don't have the resources of government. But I love getting around and meeting people, talking to people, listening to what the issues are out there and putting forward proposals which hold the Government to account. But also, it's an opportunity over the next two and a half years to develop our plan for the next election so that we can be a very strong alternative government. I think the lesson of governments in recent times, the current Government I think suffered from just saying no to everything when they were the Opposition, and Tony Abbott was an effective Opposition Leader. But, I think they weren't really prepared for government and they've suffered from that in terms of, even now, not having a clear agenda going forward on a whole range of issues like energy policy, and climate change, and dealing with the slowdown that's happening in the economy. And that's why it's important to get those policy proposals right, to engage with the community and to really be ready in 2022 or late 2021. And that's what I'm doing, getting out and meeting with people right around the country and working hard along with the team. Lisa is a really strong representative for Bendigo. And making sure that we want to represent Australians wherever they live, whether in the cities or in the region.

HOST: Just before I let you go, I know you're tight for time. Your thoughts on the Prime Minister's trip to the States, the 'Scomance' as I'm calling it, even Aunty Mary's poetry getting involved there.
I mean my first thoughts were, 'lads come on, get a room', but what are your thoughts?

ALBANESE: Look, it's a good thing that the Prime Minister visits the United States. They're important allies. I do think that he needs to concentrate on what Australia's national interest is rather than being a partner in what would appear to be some of Donald Trump's re-election campaign there in Ohio. And I think that our relationship with the US is a very important one. It is a good thing that he is there but he needs to make sure that he's raising issues like ending the trade conflict that is there between the US and China and play a constructive role by urging action on climate change and really looking after Australia's national interests. That's going to have to be his main focus. But if he wants to have dinner and some social occasions, that's understandable as well. And that's a good thing, but he needs to keep his eye on the main game because there are real issues here in Australia that we need to deal with. It's all right to find $150 million for the program to send people to Mars, but we need to have a clear idea of what that's going to be used for and it is a bit disappointing that announcement seemed to take some of his own ministers by surprise apparently.

HOST: Mr Anthony Albanese, Leader of the Federal Opposition, thank you so much for joining us on the Wake Up Call this morning and enjoy your visit to our beautiful part of the world.

ALBANESE: Thanks very much Robbo, it is always great to be in Bendigo.

Authorised by Anthony Albanese. 334a Marrickville Rd, Marrickville NSW 2204.
\section*{Additional analysis of results} We detail in this section additional results and analyses to supplement the results in the main paper. \subsection*{Heatmaps-only model} We evaluate the relative benefit of including images and heatmaps as inputs to our model by also considering a model variant where the model only takes as input the patch classification heatmaps, without being shown the original mammogram. The results are shown in \autoref{tab:cancer_pred_input_variant}. We see that the heatmaps-only model performs comparably with the image-only model on malignant/not malignant classification, while significantly underperforming on benign/not benign classification. We speculate that this discrepancy arises from the higher prevalence of mammographically occult benign findings. The patch classification models are trained on classifying patches based on pixel-level segmentations, which contain a higher density of label information compared to breast-level labels and thus provide a stronger learning signal, leading to better performance on malignant/not malignant prediction. On the other hand, because benign findings are more likely to be mammographically occult (see Table 2 in \cite{NYU_dataset}), these cases cannot be segmented and hence are not present in the patch dataset--the patch classification model is thus less well-conditioned to those benign findings. Conversely, the image-only model is still shown the benign labels derived from biopsies, and may thus pick up on visual clues suggesting a benign finding despite the cases being considered mammographically occult by radiologists. The superior performance of the heatmaps-only model over the image-only model on malignant/not-malignant classification also suggests that the increased depth of the model and the more strongly supervised nature of the patch classification task outweigh the benefits of training a deep model end-to-end.
In addition, we observe a much smaller benefit to ensembling the heatmaps-only models compared to the image-only and image-and-heatmaps models. The intuition behind this observation is that the heatmaps have likely already distilled most of the pertinent information for cancer classification. We speculate that because the heatmaps-only model learn a simpler transformation to target cancer classification, there is lower model diversity and thus a smaller benefit from ensembling. Above all, the image-and-heatmaps model still remains the strongest overall model, demonstrating that effectively utilizing both local and global visual information leads to superior performance on the cancer classification problem. \begin{table}[ht] \centering \caption{ AUCs of model input variants on screening and biopsied populations. } \begin{tabular}{| l | c | c | c | c |} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{single} & \multicolumn{2}{c|}{5x ensemble} \\ \cline{2-5} \multicolumn{1}{c|}{} & malignant & benign & malignant & benign \\ \cline{2-5} \hline \multicolumn{5}{|c|}{\cellcolor{gray!20} {\textbf{screening population}} } \\ \hline image-only & 0.827$\pm$0.008 & 0.731$\pm$0.004 & 0.840 & 0.743 \\ \hline heatmaps-only & 0.837$\pm$0.010 & 0.674$\pm$0.007 & 0.835 & 0.691 \\ \hline image-and-heatmaps & \textbf{0.886}$\pm$\textbf{0.003} & \textbf{0.747}$\pm$\textbf{0.002} & \textbf{0.895} & \textbf{0.756} \\ \hline \multicolumn{5}{|c|}{\cellcolor{gray!20} {\textbf{biopsied population}} } \\ \hline image-only & 0.781$\pm$0.006 & 0.673$\pm$0.003 & 0.791 & 0.682 \\ \hline heatmaps-only & 0.805$\pm$0.007 & 0.621$\pm$0.008 & 0.803 & 0.633 \\ \hline image-and-heatmaps & \textbf{0.843}$\pm$\textbf{0.004} & \textbf{0.690}$\pm$\textbf{0.002} & \textbf{0.850} & \textbf{0.696} \\ \hline \end{tabular} \label{tab:cancer_pred_input_variant} \end{table} \subsection*{Correlation between model predictions} We visualize in \autoref{fig:pred_correls} the correlations between model 
predictions for the four different labels (left-benign, left-malignant, right-benign, right-malignant) for a given exam. In both image-only and image-and-heatmaps model ensembles, we observe high correlations between benign and malignant predictions for the same breast, and low correlations for predictions between breasts. Notably, we observe a lower correlation between benign and malignant predictions for the same breast in the image-and-heatmaps model ensemble compared to the image-only model ensemble. This is consistent with other results showing that the image-and-heatmaps models are better able to distinguish between benign and malignant cases, likely due to the additional information from the class-specific heatmaps. \begin{figure*}[htb!] \centering \begin{tabular}{c c} \includegraphics[height=0.3\linewidth]{figures/corr_image_heat.pdf} & \includegraphics[height=0.3\linewidth]{figures/corr_image.pdf} \\ \footnotesize{(a) image-and-heatmaps} & \footnotesize{(b) image-only} \end{tabular} \vspace{-2mm} \caption{ Correlations of model ensemble predictions across labels. } \label{fig:pred_correls} \end{figure*} \subsection*{Comparison of CC and MLO model branches} \begin{table}[ht] \centering \caption{ AUCs of CC and MLO model branches on screening and biopsied populations. 
} \begin{tabular}{| l | c | c | c | c |} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{single} & \multicolumn{2}{c|}{5x ensemble} \\ \cline{2-5} \multicolumn{1}{c|}{} & malignant & benign & malignant & benign \\ \cline{2-5} \hline \multicolumn{5}{|c|}{\cellcolor{gray!20} {\textbf{screening population}} } \\ \hline image-only & 0.827$\pm$0.008 & 0.731$\pm$0.004 & 0.840 & 0.743 \\ \hline image-only (CC) & 0.813$\pm$0.009 & 0.726$\pm$0.004 & 0.830 & 0.739 \\ \hline image-only (MLO) & 0.766$\pm$0.012 & 0.691$\pm$0.006 & 0.776 & 0.705 \\ \hline image-and-heatmaps & \textbf{0.886}$\pm$\textbf{0.003} & \textbf{0.747}$\pm$\textbf{0.002} & \textbf{0.895} & \textbf{0.756} \\ \hline image-and-heatmaps (CC) & 0.873$\pm$0.006 & 0.740$\pm$0.005 & 0.891 & 0.752 \\ \hline image-and-heatmaps (MLO) & 0.834$\pm$0.002 & 0.703$\pm$0.002 & 0.847 & 0.712 \\ \hline \multicolumn{5}{|c|}{\cellcolor{gray!20} {\textbf{biopsied population}} } \\ \hline image-only & 0.781$\pm$0.006 & 0.673$\pm$0.003 & 0.791 & 0.682 \\ \hline image-only (CC) & 0.774$\pm$0.006 & 0.677$\pm$0.003 & 0.786 & 0.687 \\ \hline image-only (MLO) & 0.732$\pm$0.012 & 0.638$\pm$0.002 & 0.743 & 0.649 \\ \hline image-and-heatmaps & \textbf{0.843}$\pm$\textbf{0.004} & \textbf{0.690}$\pm$\textbf{0.002} & \textbf{0.850} & 0.696 \\ \hline image-and-heatmaps (CC) & 0.833$\pm$0.004 & \textbf{0.690}$\pm$\textbf{0.004} & 0.847 & \textbf{0.699} \\ \hline image-and-heatmaps (MLO) & 0.802$\pm$0.003 & 0.650$\pm$0.002 & 0.813 & 0.656 \\ \hline \end{tabular} \label{tab:cancer_pred_views} \end{table} The architecture of our model can be decomposed into two separate but symmetric deep neural network models, operating on CC view and MLO view images respectively, which we refer to as the CC and MLO branches of the model. Each branch individually computes predictions for all four labels, and the full model's final prediction is the average of the predictions of both branches. 
We show in \autoref{tab:cancer_pred_views} the breakdown of the performance of the CC and MLO branches of our model. We observe a fairly consistent trend of the CC model branch outperforming the MLO model branch, across multiple contexts (malignant/not malignant classification, benign/not benign classification, with or without the heatmaps). The superior performance of the CC model branch is consistent with the view of radiologists that findings may be more conspicuous and better visualized in the CC view compared to the MLO view. The predictions of the full model generally outperform those of either branch individually, except in the case of benign prediction for the biopsied population, where the CC model branch slightly outperforms the averaged prediction. \subsection*{Classifying malignant/benign vs. normal} In this section and the next, we further analyze the behavior of our model by decomposing the task of breast cancer classification into two sub-tasks: (i) determining if a breast has any findings, benign or malignant, and (ii) conditional on the presence of a finding, determining whether it is malignant or benign. First, we evaluated our models on the task of only predicting whether a single breast has either a malignant or a benign finding, or neither (negative for both malignant and benign findings). This is equivalent to predicting whether, for a given screening mammogram, a biopsy was subsequently performed. This evaluation is performed over the screening population. Without retraining the model, we took the maximum of the malignant and benign predictions as the prediction of a biopsy. We obtained an AUC of 0.767 using the image-and-heatmaps model ensemble, with more results shown in \autoref{tab:cancer_pred_no_malbenvs}. The relatively small margin in performance between the image-only and image-and-heatmaps models indicates that the heatmaps are of limited use for the task of determining the presence of any finding at all.
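The max-combination described above can be sketched as follows; the predictions and labels are hypothetical toy data, and the AUC is computed directly as the fraction of correctly ranked (positive, negative) pairs, ignoring ties:

```python
import numpy as np

# Hypothetical per-breast predictions from the cancer classification model.
p_malignant = np.array([0.90, 0.20, 0.70, 0.10, 0.05])
p_benign    = np.array([0.30, 0.60, 0.40, 0.20, 0.10])

# Without retraining, the biopsy prediction is the maximum of the
# malignant and benign predictions for each breast.
p_biopsy = np.maximum(p_malignant, p_benign)

# 1 = breast was biopsied (any finding), 0 = no biopsy.
biopsied = np.array([1, 1, 1, 0, 0])

# AUC as the fraction of (positive, negative) pairs ranked correctly.
pos, neg = p_biopsy[biopsied == 1], p_biopsy[biopsied == 0]
auc = np.mean(pos[:, None] > neg[None, :])
print(auc)  # -> 1.0 on this toy example
```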
\begin{table}[ht] \centering \caption{ AUCs of our models on screening and biopsied populations, on the task of classifying malignant/benign vs normal. } \begin{tabular}{| l | c | c |} \cline{2-3} \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{single} & \multicolumn{1}{c|}{5x ensemble} \\ \cline{2-3} \hline image-only & 0.740$\pm$0.003 & 0.752 \\ \hline image-and-heatmaps & \textbf{0.759}$\pm$\textbf{0.002} & \textbf{0.767} \\ \hline \end{tabular} \label{tab:cancer_pred_no_malbenvs} \end{table} \subsection*{Classifying malignant vs. benign} Next, we investigated the ability of our models to distinguish between malignant and benign findings on exams where we know there is a mammographically visible finding--this isolates our model's ability to discriminate between types of findings. We constructed the \textit{one-class biopsied subpopulation}, a subset of 384 breasts from our test set comprising only breasts labeled either only malignant or only benign. We evaluate our models on the ability to predict whether a finding in a given breast was malignant or benign. To adapt the predictions of our model to this binary classification task, we normalized the prediction probabilities for the two classes to sum to one and calculated AUCs based on these normalized predictions. The image-only ensemble attained an AUC of 0.738 while the image-and-heatmaps ensemble attained an AUC of 0.803. This, along with the results above, provides evidence that the heatmaps help primarily in distinguishing between benign and malignant findings. \subsection*{Additional reader study analysis} We supplement the results of our reader study shown in the main paper with additional analysis in this section. \subsubsection*{Reader ensemble and hybrid ensemble} We evaluate a \textit{reader ensemble} by averaging the predictions across our 14 readers.
We also evaluate a \textit{hybrid ensemble} by averaging the predictions of the ensemble of readers with our image-and-heatmaps model ensemble, where we equally weight both sets of predictions. \autoref{fig:reader_ensemble} shows the ROC curves and precision-recall curves of these two ensembles compared to our model ensemble alone. We observe that the hybrid ensemble outperforms the reader ensemble based on AUC, but underperforms based on PRAUC. This suggests that although the combination of our model and a single radiologist tends to lead to improved accuracy, the benefit that our model could provide to a group of radiologists is far more limited. \begin{figure}[h!] \centering \begin{tabular}{c c} \includegraphics[width=0.4\linewidth]{figures/reader_study/auc_4.pdf} & \includegraphics[width=0.4\linewidth]{figures/reader_study/prauc_4.pdf} \end{tabular} \caption{ROC curves and precision-recall curves of reader ensemble, hybrid ensemble and our image-and-heatmaps model ensemble.} \label{fig:reader_ensemble} \end{figure} \subsubsection*{Representation learned by image-only model} We visualize the hidden representations learned by the best image-only model in addition to the best image-and-heatmaps model, computed on the same reader study subpopulation (cf. \autoref{fig:tsne_breast}). Compared to the distribution of representations learned by the image-and-heatmaps model, exams classified by readers as more likely to be malignant were spread more diffusely over the space, though they still grouped together in several clusters. This pattern is apparent in both sets of activations, and suggests that the representational space into which the mammograms are projected is better conditioned for malignancy classification in the image-and-heatmaps model than in the image-only model.
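A small sketch with made-up predictions (three readers instead of 14) illustrates the two ensembles:

```python
import numpy as np

# Hypothetical malignancy predictions for four exams.
reader_preds = np.array([
    [0.8, 0.1, 0.6, 0.2],  # reader 1
    [0.9, 0.3, 0.5, 0.1],  # reader 2
    [0.7, 0.2, 0.7, 0.3],  # reader 3
])
model_preds = np.array([0.95, 0.05, 0.40, 0.20])  # model ensemble

# Reader ensemble: mean prediction across readers.
reader_ensemble = reader_preds.mean(axis=0)

# Hybrid ensemble: equally weighted average of the reader ensemble
# and the model ensemble predictions.
hybrid = 0.5 * reader_ensemble + 0.5 * model_preds
print(hybrid)  # approximately [0.875, 0.125, 0.5, 0.2]
```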
\begin{figure}[!htb] \centering \begin{tabular}{c c c c} \phantom{abcdefghihihi}& \hspace{-2mm}\includegraphics[height=0.32\linewidth]{figures/reader_study/tsne_reader_study_imageonly_h0.pdf}&\hspace{-4mm} \includegraphics[height=0.32\linewidth]{figures/reader_study/tsne_reader_study_imageonly_h1.pdf} & \raisebox{.16\height}{\includegraphics[height=0.22\linewidth, trim=3mm 0mm 10mm 0mm]{figures/tsne_reader_study_viewsplit_colorbar.pdf}} \end{tabular} \vspace{-2mm} \caption{Exams in the reader study set represented using the concatenated activations from the four image-specific columns (left) and the concatenated activations from the first fully connected layer in both CC and MLO model branches (right). The above activations are learned by the best image-only model. } \label{fig:tsne_breast} \end{figure} \subsubsection*{Error analysis} To further understand the performance of our image-and-heatmaps model ensemble (referred to as \textit{the model} for the remainder of this section), especially its medical relevance, we conducted the following detailed analysis of nine breasts that were classified incorrectly but confidently by the model. These include false positive cases for malignancy (cf. \autoref{fig:error_analysis_case_1}, \autoref{fig:error_analysis_case_2}, \autoref{fig:error_analysis_case_3}) and false negative cases for malignancy (cf. \autoref{fig:error_analysis_case_4}, \autoref{fig:error_analysis_case_5}, \autoref{fig:error_analysis_case_6}), as well as examples where the model strongly disagreed with the average predictions of the 14 readers (cf. \autoref{fig:error_analysis_case_7}, \autoref{fig:error_analysis_case_8}, \autoref{fig:error_analysis_case_9}). Examples are shown with annotated lesions, both heatmaps, and a brief summary of the model's and readers' predictions. The case in \autoref{fig:error_analysis_case_1} succinctly illustrates the ambiguity in medical imaging.
Both the model and readers were highly confident in predicting cancer, but the result of the biopsy was a high-risk benign finding. Further evidence for ambiguity is found in \autoref{fig:error_analysis_case_5}, where both the model and readers predicted that the calcifications were benign. Although the findings appear relatively benign on screening mammography, radiologists often recommend a biopsy of low-suspicion calcifications (>2-10\% chance of malignancy) due to the known wide variation in the appearance of malignant calcifications and the opportunity to identify an early, more treatable cancer. Overall, some of the model's false negative and false positive cases can be explained as evidence for the inherent ambiguity in imaging, which highlights that screening mammography may not be sufficient to determine the correct diagnosis for certain findings. In \autoref{fig:error_analysis_case_9}, the readers' scores for malignancy are consistently low while the score given by our model is 0.590. In fact, the small mass marked in green on the image turned out to be benign, while on the diagnostic mammogram, the area marked in red looked more suspicious and turned out to be a cancer. This case illustrates the strength of our model--when multiple suspicious findings are present and some are more obvious and easier to assess, human readers may be fatigued from reading a series of mammograms and could be more prone to error by not fully considering each suspicious finding. \autoref{fig:error_analysis_case_3} is another good example of this kind, in which a finding presents differently on the two views. Upon a retrospective review by a radiologist, the smaller faint calcifications on the MLO view indeed appear suspicious, whereas readers may have focused during the reader study on the CC view, which looks benign. Our model still lacks the ability to summarize information across multiple images and views.
Some cases with incorrect predictions can be explained from this perspective, such as \autoref{fig:error_analysis_case_2} and \autoref{fig:error_analysis_case_6}. In \autoref{fig:error_analysis_case_2}, the radiologists confidently judged the case to be benign: although the distribution of the calcifications appeared particularly suspicious on one image, this did not hold up on the additional images. In \autoref{fig:error_analysis_case_6}, the model may have missed the malignant finding because it only appeared highly suspicious on the MLO view, but experienced doctors still caught it with high confidence. However, in \autoref{fig:error_analysis_case_7}, our model shows its potential in utilizing both global and local information. According to radiologists, the case looks very suspicious, especially on the MLO view, where there is a white mass with irregular margins (termed architectural distortion). However, the pathology was benign. The model indeed provided a low score for a malignant finding and a high score for a benign finding. While the `malignant' heatmap appeared more correlated with the area under the yellow mask for both views, the `benign' heatmap was widely distributed with high magnitude. \autoref{fig:error_analysis_case_4} is an example of a false negative for the model. Although we observe an overlap between the region highlighted by the `malignant' heatmap and the lesion, the model's prediction for malignant findings is low while its prediction for benign findings is higher than 0.5. However, there is an asymmetry with architectural distortion on both views--an imaging feature that has a high probability of malignancy. Hence, radiologists assigned a high malignancy score. Another case where readers were more confident than the model is \autoref{fig:error_analysis_case_8}. The mass in the right breast appears suspicious because it has an architectural distortion.
In addition, its location at the bottom from the MLO view (inferior breast), and its medial location from the CC view, make it highly suspicious. In this scenario, radiologists incorporated information from the global view of the mammogram in making their assessment about the likelihood of malignancy for this case. \begin{figure}[!htb] \centering \begin{minipage}[t]{.31\textwidth} \centering \begin{tabular}{c c c} \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_1/r_cc_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_1/r_cc_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_1/r_cc_m.png} \\ \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_1/r_mlo_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_1/r_mlo_b.png} & \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_1/r_mlo_m.png} \end{tabular} \vspace{-4mm} \caption{A biopsy-proven high-risk benign finding marked in yellow on both the CC (top row) and MLO (bottom row) views in the patient's right breast. The heatmaps overlaid on the images (green for benign and red for malignant) are shown after the images with segmentations. The malignant score for this breast given by the model is 0.997 while the benign score is 0.909. The `malignant' heatmap highlighted the marked area but the `benign' heatmap did not, for both CC and MLO views. The mean malignant score given by the 14 readers is 0.699, with 12 readers giving scores over 0.6.
} \label{fig:error_analysis_case_1} \end{minipage} \hspace{3mm} \begin{minipage}[t]{.31\textwidth} \centering \begin{tabular}{c c c} \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_2/l_cc_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_2/l_cc_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_2/l_cc_m.png} \\ \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_2/l_mlo_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_2/l_mlo_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_2/l_mlo_m.png} \end{tabular} \vspace{-4mm} \caption{A region with biopsy-proven high-risk benign findings marked in yellow on both the CC view and the MLO view in the patient's left breast. Images and heatmaps are shown with the same layout as in \autoref{fig:error_analysis_case_1}. Compared with the case in \autoref{fig:error_analysis_case_1}, the gap between the malignant and benign scores given by the model for this breast is larger--the malignant score is 0.709 and the benign score is 0.433. The highest malignant score given by the 14 readers is 0.25 and the mean is only 0.03.
} \label{fig:error_analysis_case_2} \end{minipage} \hspace{3mm} \begin{minipage}[t]{.31\textwidth} \centering \begin{tabular}{c c c} \hspace{-5.5mm} \includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_3/l_cc_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_3/l_cc_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_3/l_cc_m.png} \\ \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_3/l_mlo_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_3/l_mlo_b.png} & \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_3/l_mlo_m.png} \end{tabular} \vspace{-4mm} \caption{A biopsy-proven benign finding marked in green on both the CC view and the MLO view in the patient's left breast. Images and heatmaps are shown with the same layout as in \autoref{fig:error_analysis_case_1}. The malignant score for this breast given by the model is 0.735 while the benign score is 0.549. Readers were highly confident that this case was benign and their mean malignant score is 0.05, with the highest score being only 0.2. 
} \label{fig:error_analysis_case_3} \end{minipage} \end{figure} \begin{figure}[!htb] \centering \begin{minipage}[t]{.31\textwidth} \centering \begin{tabular}{c c c} \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_4/r_cc_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_4/r_cc_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_4/r_cc_m.png} \\ \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_4/r_mlo_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_4/r_mlo_b.png} & \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_4/r_mlo_m.png} \end{tabular} \vspace{-4mm} \caption{A biopsy-proven malignant finding marked in red on both the CC view and the MLO view in the patient's right breast. Images and heatmaps are shown with the same layout as in \autoref{fig:error_analysis_case_1}. The malignant score for this breast given by the model is 0.210 while the benign score is 0.621. Doctors' mean malignant score is 0.459 and eight readers among the 14 provided a score higher than 0.5. 
} \label{fig:error_analysis_case_4} \end{minipage} \hspace{3mm} \begin{minipage}[t]{.31\textwidth} \centering \begin{tabular}{c c c} \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_5/l_cc_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_5/l_cc_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_5/l_cc_m.png} \\ \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_5/l_mlo_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_5/l_mlo_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_5/l_mlo_m.png} \end{tabular} \vspace{-4mm} \caption{A biopsy-proven malignant finding marked in red on both the CC view and the MLO view in the patient's left breast. The breast was also labeled as benign according to the associated pathology reports. Images and heatmaps are shown with the same layout as in \autoref{fig:error_analysis_case_1}. The malignant score given by the model is 0.068 and the benign score is 0.433. The mean malignant score given by the 14 readers is 0.176, with the highest being only 0.30.
} \label{fig:error_analysis_case_5} \end{minipage} \hspace{3mm} \begin{minipage}[t]{.31\textwidth} \centering \begin{tabular}{c c c} \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_6/r_cc_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_6/r_cc_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_6/r_cc_m.png} \\ \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_6/r_mlo_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_6/r_mlo_b.png} & \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_6/r_mlo_m.png} \end{tabular} \vspace{-4mm} \caption{A biopsy-proven high-risk benign finding marked in yellow and a malignant finding marked in red on both the CC view and the MLO view in the patient's right breast. Images and heatmaps are shown with the same layout as in \autoref{fig:error_analysis_case_1}. The malignant score for this breast given by the model is only 0.044 while the benign score is 0.162. Doctors were confident that it was highly suspicious--their mean malignant score is 0.698 and 10 of them provided a probability estimate over 0.5. 
} \label{fig:error_analysis_case_6} \end{minipage} \end{figure} \begin{figure}[!htb] \centering \begin{minipage}[t]{.31\textwidth} \centering \begin{tabular}{c c c} \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_7/l_cc_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_7/l_mlo_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_7/l_mlo_m.png} \\ \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_7/l_mlo_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_7/l_cc_b.png} & \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_7/l_cc_m.png} \end{tabular} \vspace{-4mm} \caption{A biopsy-proven high-risk benign finding marked in yellow on both the CC view and the MLO view in the patient's left breast. Images and heatmaps are shown with the same layout as in \autoref{fig:error_analysis_case_1}. The malignant score for this breast given by the model is 0.124 and the benign score is 0.530. Readers' mean malignant score is 0.763. 
} \label{fig:error_analysis_case_7} \end{minipage} \hspace{3mm} \begin{minipage}[t]{.31\textwidth} \centering \begin{tabular}{c c c} \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_8/r_cc_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_8/r_cc_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_8/r_cc_m.png} \\ \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_8/r_mlo_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_8/r_mlo_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_8/r_mlo_m.png} \end{tabular} \vspace{-4mm} \caption{A biopsy-proven malignant finding marked in red and a benign finding marked in green on both the CC view and the MLO view in the patient's right breast. Images and heatmaps are shown with the same layout as in \autoref{fig:error_analysis_case_1}. The malignant score given by the model is 0.702 and the benign score is 0.682. Mean malignant score given by readers is 0.978 and ten of them gave a score over 0.9. 
} \label{fig:error_analysis_case_8} \end{minipage} \hspace{3mm} \begin{minipage}[t]{.31\textwidth} \centering \begin{tabular}{c c c} \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_9/r_cc_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_9/r_cc_b.png}& \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_9/r_cc_m.png} \\ \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_9/r_mlo_seg.png} & \hspace{-6.0mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_9/r_mlo_b.png} & \hspace{-5.5mm}\includegraphics[width=0.37\linewidth, height=0.51\linewidth]{figures/error_analysis/case_9/r_mlo_m.png} \end{tabular} \vspace{-4mm} \caption{A biopsy-proven benign finding marked in green on both the CC view (top row) and the MLO view (bottom row) and a malignant finding marked in red only on the CC view, in the patient's right breast. Images and heatmaps are shown with the same layout as in \autoref{fig:error_analysis_case_1}. The malignant score for this breast given by the model is 0.590 while the benign score is 0.557. The readers' mean malignant score is 0.071, with the highest score being only 0.3. } \label{fig:error_analysis_case_9} \end{minipage} \end{figure} \section*{Network architecture and training} We detail in this section the model architecture, training procedure and hyperparameters associated with training our deep convolutional neural network for cancer classification. \subsection*{Breast-level cancer classification model} We use a single classification model to generate predictions for each of the four labels of an exam, corresponding to the presence of findings in either breast (left-benign, left-malignant, right-benign, right-malignant).
The model takes as input a set of four single-channel images, corresponding to the four standard mammographic views (R-CC, L-CC, R-MLO, L-MLO). We use an input resolution of $2677\times1942$ pixels for CC views, and $2974\times1748$ pixels for MLO views, based on the optimal window procedure described in \cite{NYU_dataset}. When additionally using the heatmaps produced by the auxiliary network learning from patch-level labels, we concatenate them as extra input channels to the corresponding views, resulting in three channels in total: the image, the `benign' patch classification heatmap, and the `malignant' patch classification heatmap. The model is composed of four view-specific columns, each based on the ResNet architecture \cite{resnet} that computes a fixed-dimension hidden representation for each view. Weights are shared between the L-CC and R-CC columns, and L-MLO and R-MLO columns regardless of model variant. The output of the model is four separate binary probability estimates--one for each of the four labels. We initialized the weights of the view-specific columns by pretraining with BI-RADS labels (see section below), and randomly initialized the rest. We trained the whole model using stochastic gradient descent with the Adam optimization algorithm \cite{adam}, using a learning rate of $10^{-5}$ and a minibatch of size $4$. Our loss function was cross-entropy averaged across all four labels. We applied L2-regularization to our model weights with a coefficient of $10^{-4.5}$. As only a small fraction of the exams in our training set contained images of biopsied breasts, learning with all data in the training set would be extremely slow as the model would only be shown a relatively small number of positive examples per epoch. To alleviate this issue, we adopted the following two strategies. 
First, while we trained the cancer classification model on data from all screening exams, within each training epoch, the model was shown all exams with biopsies in the training set (4,844 exams) but only a random subset of an equal number of exams without biopsies (also 4,844 exams) \cite{imbalanced_survey}. Second, as mentioned above, we initialized the ResNet weights of the cancer classification model from a model trained on BI-RADS classification, a task for which we have labels for all exams. We stopped training early when the average of the AUCs over the four labels, computed on the validation set, did not improve for 20 epochs. We then selected the version of the model with the best validation AUC as our final model candidate. We show the training and validation curves for one image-only model and one image-and-heatmaps model in \autoref{fig:training_curves}. For the training curve, we computed the AUC of each prediction and corresponding label (e.g. left breast/CC/benign) and averaged across the breast sides and CC/MLO branches. The AUC is computed on a training data subsample that has an equal number of biopsied and non-biopsied examples. We do the same for the validation curve, except we compute the AUC on the full validation data set. Because of the difference in distributions, the training and validation AUC curves are not directly comparable--we refer the reader to the discussion in the main paper on how differences in the proportion of biopsied examples can significantly influence AUC calculations. We observe that the image-and-heatmaps model attains higher training and validation AUC for malignancy prediction compared to the image-only model, whereas the AUCs for benign prediction are not significantly different between the image-and-heatmaps and image-only models. The full image-only model has 6,132,592 trainable parameters, while the image-and-heatmaps model has 6,135,728 trainable parameters.
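The first strategy--an epoch containing all biopsied exams plus an equally sized random subset of non-biopsied exams--can be sketched as follows; the function and index values are our own illustration, not the actual training code:

```python
import random

def epoch_indices(biopsied, non_biopsied, rng=random):
    """Build one epoch: all biopsied exams plus an equally sized random
    subset of non-biopsied exams, shuffled together."""
    subset = rng.sample(non_biopsied, k=len(biopsied))
    epoch = list(biopsied) + subset
    rng.shuffle(epoch)
    return epoch

# Toy example: 3 biopsied exams and 10 non-biopsied exams;
# in the paper, each epoch used 4,844 of each.
epoch = epoch_indices([0, 1, 2], list(range(3, 13)))
print(len(epoch))  # -> 6
```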
The only difference between the two architectures is the size of the kernel in the first convolutional layer to accommodate the difference in the number of input channels. On an Nvidia V100 GPU, an image-only model takes about 12 hours to train to the best validation performance, while an image-and-heatmaps model takes about 24 hours. A significant amount of training overhead is associated with the time to load and augment the high-resolution mammography images. \begin{figure}[h] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=1.0\linewidth]{figures/training_curves.pdf} \caption{Training curves for image-only and image-and-heatmaps models. AUCs are averaged across prediction heads and target labels. Training AUCs are computed on subsampled data with an equal number of biopsied and randomly subsampled non-biopsied examples, while validation AUCs are computed on the full validation set.} \label{fig:training_curves} \end{minipage} \end{figure} \section*{Data augmentation for model training} Data augmentation is often applied in training deep neural networks to increase the diversity of the training samples and improve the robustness of the trained model. We apply size augmentation (slightly modifying the crop window size, and resizing using bicubic interpolation to fit the desired size for the model) and location augmentation (adding noise around the chosen optimal center of the window). Examples can be found in \autoref{fig:augmentation_cropping_noise}. We limited the maximum value for both size and location augmentation to 100 pixels in any direction. If an image is too small to apply augmentation, we additionally pad it to allow enough room. At test time, we similarly apply data augmentation, and average predictions over 10 random augmentations to compute the prediction for a given sample. No data augmentation is used during validation.
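A minimal sketch of the window augmentation (names and the example window are hypothetical; the actual pipeline also resizes the crop with bicubic interpolation and pads images that are too small):

```python
import random

MAX_NOISE = 100  # maximum size/location augmentation, in pixels

def augment_window(center, size, rng=random):
    """Apply location augmentation (jitter the window center) and size
    augmentation (jitter the window height/width) by up to MAX_NOISE pixels."""
    cy, cx = center
    h, w = size
    cy += rng.randint(-MAX_NOISE, MAX_NOISE)
    cx += rng.randint(-MAX_NOISE, MAX_NOISE)
    h += rng.randint(-MAX_NOISE, MAX_NOISE)
    w += rng.randint(-MAX_NOISE, MAX_NOISE)
    return (cy, cx), (h, w)

# At test time, predictions are averaged over 10 random augmentations.
windows = [augment_window((1400, 900), (2677, 1942)) for _ in range(10)]
print(len(windows))  # -> 10
```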
\begin{figure}[h] \centering \begin{tabular}{c c c c} \includegraphics[height=0.3\linewidth]{figures/augmentation_line_lcc_1.png} & \includegraphics[height=0.3\linewidth]{figures/augmentation_line_rcc_1.png} & \includegraphics[height=0.3\linewidth]{figures/augmentation_line_lmlo_1.png} & \includegraphics[height=0.3\linewidth]{figures/augmentation_line_rmlo_1.png} \end{tabular} \caption{ Example of drawing 10 augmentation windows with random noise in the location and size of the windows. } \label{fig:augmentation_cropping_noise} \end{figure} \subsection*{Model variants} \begin{figure*}[h!] \centering \begin{tabular}{c c c c} \includegraphics[width=0.23\linewidth,trim={2.3cm 2.4cm 2.7cm 3.8cm}, clip]{figures/model_2.pdf}& \includegraphics[width=0.23\linewidth,trim={2.3cm 2.4cm 2.7cm 3.8cm}, clip]{figures/model_3.pdf}& \includegraphics[width=0.23\linewidth,trim={2.3cm 2.4cm 2.7cm 3.8cm}, clip]{figures/model_4.pdf}& \includegraphics[width=0.23\linewidth,trim={2.3cm 2.4cm 2.7cm 3.8cm}, clip]{figures/model_1.pdf} \\ \footnotesize{(a) view-wise} & \footnotesize{(b) image-wise} & \footnotesize{(c) breast-wise} & \footnotesize{(d) joint} \end{tabular} \vspace{-2mm} \caption{ Four model variants for incorporating information across the four screening mammography views in an exam. All variants are constrained to have a total of 1,024 hidden activations between fully connected layers. The `view-wise' model, which is the primary model used in our experiments, contains separate model branches for CC and MLO views--we average the predictions across both branches. The `image-wise' model has a model branch for each image, and we similarly average the predictions. The `breast-wise' model has separate branches per breast (left and right). The `joint' model only has a single branch, operating on the concatenated representations of all four images. 
} \label{fig:architectures} \end{figure*} \begin{table}[ht] \centering \caption{ AUC of model variants on screening and biopsied populations. } \begin{tabular}{| l | c | c | c | c |} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{single} & \multicolumn{2}{c|}{5x ensemble} \\ \cline{2-5} \multicolumn{1}{c|}{} & malignant & benign & malignant & benign \\ \cline{2-5} \hline \multicolumn{5}{|c|}{\cellcolor{gray!20} {\textbf{screening population}} } \\ \hline image-only (view-wise) & 0.827$\pm$0.008 & 0.731$\pm$0.004 & 0.840 & 0.743 \\ \hline image-only (image-wise) & 0.830$\pm$0.006 & 0.759$\pm$0.002 & 0.841 & 0.766 \\ \hline image-only (breast-wise) & 0.821$\pm$0.012 & 0.757$\pm$0.002 & 0.836 & 0.768 \\ \hline image-only (joint) & 0.822$\pm$0.008 & 0.737$\pm$0.004 & 0.831 & 0.746 \\ \hline image-and-heatmaps (view-wise) & \textbf{0.886}$\pm$\textbf{0.003} & 0.747$\pm$0.002 & \textbf{0.895} & 0.756 \\ \hline image-and-heatmaps (image-wise) & 0.875$\pm$0.001 & \textbf{0.765}$\pm$\textbf{0.003} & 0.885 & 0.774 \\ \hline image-and-heatmaps (breast-wise) & 0.876$\pm$0.004 & 0.764$\pm$0.004 & 0.889 & \textbf{0.779} \\ \hline image-and-heatmaps (joint) & 0.860$\pm$0.008 & 0.745$\pm$0.002 & 0.876 & 0.763 \\ \hline \multicolumn{5}{|c|}{\cellcolor{gray!20} {\textbf{biopsied population}} } \\ \hline image-only (view-wise) & 0.781$\pm$0.006 & 0.673$\pm$0.003 & 0.791 & 0.682 \\ \hline image-only (image-wise) & 0.740$\pm$0.007 & 0.638$\pm$0.001 & 0.749 & 0.642 \\ \hline image-only (breast-wise) & 0.726$\pm$0.009 & 0.639$\pm$0.002 & 0.738 & 0.645 \\ \hline image-only (joint) & 0.780$\pm$0.006 & 0.682$\pm$0.001 & 0.787 & 0.688 \\ \hline image-and-heatmaps (view-wise) & \textbf{0.843}$\pm$\textbf{0.004} & 0.690$\pm$0.002 & \textbf{0.850} & 0.696 \\ \hline image-and-heatmaps (image-wise) & 0.812$\pm$0.001 & 0.653$\pm$0.003 & 0.821 & 0.658 \\ \hline image-and-heatmaps (breast-wise) & 0.805$\pm$0.004 & 0.652$\pm$0.004 & 0.818 & 0.661 \\ \hline image-and-heatmaps (joint) & 
0.817$\pm$0.008 & \textbf{0.696}$\pm$\textbf{0.005} & 0.830 & \textbf{0.709} \\ \hline \end{tabular} \label{tab:cancer_pred_variant} \end{table} Based on the four view-specific hidden representations, we considered four model variants for incorporating the information from all four views in producing our output predictions. The full architectures of the four variants are shown in \autoref{fig:architectures}. The `view-wise' model concatenates L-CC and R-CC representations, and L-MLO and R-MLO representations, and uses separate CC and MLO prediction heads to generate predictions for all four labels. This is the model used in the main paper, chosen based on validation performance on the screening population. The `image-wise' model has separate prediction heads for each of the four views, predicting only the malignant and benign labels for the breast in the corresponding view. The `breast-wise' model concatenates L-CC and L-MLO representations, and R-CC and R-MLO representations, and has separate prediction heads for each breast. Lastly, the `joint' model concatenates the representations of all four views and jointly predicts malignant and benign findings for both breasts. Regardless of the variant, each prediction head consists of two fully connected layers, and each model produces four probability estimates--one for each of the four labels. We show results across the different model variants in \autoref{tab:cancer_pred_variant}, evaluated on both the screening and biopsied populations. Overall, all four model variants achieve high and relatively similar AUCs. The `view-wise' image-and-heatmaps ensemble, which is also architecturally most similar to the BI-RADS model used in the pretraining stage, performs the best in predicting malignant/not malignant, attaining an AUC of 0.895 on the screening population and 0.850 on the biopsied population. However, some of the other model variants do outperform the `view-wise' ensemble for benign/not benign prediction.
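The fusion strategies differ only in which 256-dimensional view representations are concatenated before the prediction heads. A minimal sketch of the `view-wise' variant (random vectors and random weights stand in for the learned representations and fully connected layers; the 512 hidden units per head match the 1,024-activation constraint across the two heads):

```python
import numpy as np

rng = np.random.default_rng(0)
# 256-dimensional representation per view (placeholders for ResNet outputs).
reps = {v: rng.standard_normal(256) for v in ("L-CC", "R-CC", "L-MLO", "R-MLO")}

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def head(x, hidden=512, n_out=4):
    # A prediction head: two fully connected layers producing probability
    # estimates for the four labels. Weights here are random placeholders.
    w1 = 0.01 * rng.standard_normal((hidden, x.size))
    w2 = 0.01 * rng.standard_normal((n_out, hidden))
    return sigmoid(w2 @ np.maximum(w1 @ x, 0.0))

# 'view-wise': a CC head on [L-CC; R-CC] and an MLO head on [L-MLO; R-MLO];
# the final prediction averages the two heads.
cc_pred = head(np.concatenate([reps["L-CC"], reps["R-CC"]]))
mlo_pred = head(np.concatenate([reps["L-MLO"], reps["R-MLO"]]))
view_wise = (cc_pred + mlo_pred) / 2
```

The other variants follow the same pattern with different groupings: one head per image (`image-wise'), one per breast (`breast-wise'), or a single head on all four concatenated representations (`joint').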
Among the image-only models, the four model variants perform roughly comparably, though still consistently underperforming the image-and-heatmaps models. We emphasize that the `view-wise' model was chosen as the model shown in the main paper based on the average of malignant/not malignant and benign/not benign AUCs on the validation set, and not based on test set results. Constructing an ensemble of the four model variants for the image-and-heatmaps model, with five randomly initialized models per variant,\footnote{Only the weights in the fully connected layers are randomly initialized--we use the same set of pretrained BI-RADS weights to initialize ResNet columns in all experiments, excluding the experiments with models without BI-RADS pretraining.} results in an AUC of 0.778 on benign/not benign prediction, and 0.899 on malignant/not malignant prediction on the screening population. Although this performance is superior to any individual model variant, running such a large ensemble of 20 separate models would be prohibitively expensive in practice. \subsection*{Single-view ResNet} The overall model consists of four separate ResNet \cite{resnet} models, one for each of the four views. In this section, we describe the structure of these ResNets. The full architecture of each ResNet is shown in \autoref{fig:single_view_resnet}. We tied the weights for the L-CC and R-CC ResNets, as well as the L-MLO and R-MLO ResNets. Additionally, we flipped the L-CC and L-MLO images before feeding them to the model, so that all breast images are rightward-oriented, allowing the shared ResNet weights to operate on similarly oriented images. \begin{figure}[h] \centering \begin{minipage}{\textwidth} \centering \includegraphics[width=0.8\linewidth]{figures/single_view_resnet_figure.pdf} \caption{Architecture of the single-view ResNet. The numbers in square brackets indicate the number of output channels, unless otherwise specified.
Where no downsampling factor is specified for a ResNet block, the downsampling layer reduces to a 1x1 convolution. \textbf{Left}: Overview of the single-view ResNet, which consists of a set of ResNet layers. \textbf{Center}: ResNet layers consist of a sequence of ResNet blocks with different downsampling and output channels. \textbf{Right}: ResNet blocks consist of two 3x3 convolutional layers, with interleaving downsampling and batch normalization operations, and a residual connection between input and output.} \label{fig:single_view_resnet} \end{minipage} \end{figure} The output of each ResNet is an $H\times W \times 256$-dimensional tensor, where $H$ and $W$ are downsampled from the original input size, with $H$=42 and $W$=31 for the CC view, and $H$=47 and $W$=28 for the MLO view. We average-pool across the spatial dimensions to obtain a 256-dimensional hidden representation vector for each view. For reference, we show the dimensions of the hidden activations after each major layer of the ResNet in \autoref{tab:resnet_dimensions}. \begin{table}[ht] \centering \caption{ Dimensions of the hidden activations after each layer of the ResNet, shown as $H \times W \times D$.
} \begin{tabular}{| c | c | c| } \cline{2-3} \multicolumn{1}{c|}{} & \textbf{CC view} & \textbf{MLO view} \\ \hline Conv7x7 & 1339$\times$971$\times$16 & 1487$\times$874$\times$16 \\ \hline ResBlock 0 & 670$\times$486$\times$16 & 744$\times$437$\times$16 \\ \hline ResBlock 1 & 335$\times$243$\times$32 & 372$\times$219$\times$32 \\ \hline ResBlock 2 & 168$\times$122$\times$64 & 186$\times$110$\times$64 \\ \hline ResBlock 3 & 84$\times$61$\times$128 & 93$\times$55$\times$128 \\ \hline ResBlock 4 & 42$\times$31$\times$256 & 47$\times$28$\times$256 \\ \hline \end{tabular} \label{tab:resnet_dimensions} \end{table} \subsection*{Pretraining on BI-RADS classification} Because of the small number of labeled biopsied examples available, we apply transfer learning to improve the robustness and performance of our models. Transfer learning involves reusing parts of a model pretrained on another task as a starting point for training the target model, taking advantage of the learned representations from the pretraining task. For our model, we apply transfer learning from a network pretrained on a BI-RADS classification task, as in \cite{high_resolution}, which corresponds to predicting a radiologist's assessment of a patient's risk of developing breast cancer based on screening mammography. The three BI-RADS classes we consider are: BI-RADS Category 0 (``incomplete''), BI-RADS Category 1 (``normal'') and BI-RADS Category 2 (``benign''). The algorithm used to extract these labels is explained in \cite{NYU_dataset}. Although these labels are potentially much noisier than biopsy outcomes (being assessments of clinicians based on screening mammograms and not informed by a biopsy), compared to the 4,844 exams with biopsy-proven cancer labels in the training set, we have 99,528 training examples with BI-RADS 0 and BI-RADS 2 labels.
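When the pretrained weights are transferred to the cancer classification model, the only shape mismatch arises in the first convolutional layer once the two heatmaps are added as input channels; as noted later in this section, we duplicate the weights of the bottommost convolutional kernel across the input channels. A minimal sketch of that duplication (the 16-filter 7x7 kernel shape follows \autoref{tab:resnet_dimensions}; plain duplication without rescaling is an assumption):

```python
import numpy as np

# Pretrained first-layer kernel for single-channel (grayscale) input:
# (out_channels, in_channels, kernel_h, kernel_w) = (16, 1, 7, 7).
w_gray = np.random.randn(16, 1, 7, 7)

# Duplicate the kernel across three input channels
# (image + malignant heatmap + benign heatmap); all other layers
# of the pretrained network are reused unchanged.
w_heatmaps = np.repeat(w_gray, 3, axis=1)
```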
As shown in \cite{6909618}, a few thousand training exams may be insufficient to learn the millions of parameters in CNN architectures--instead, convolutional layers can be pretrained as a ``generic extractor of mid-level image representation'' and thereafter reused. On the other hand, although the BI-RADS labels are noisy, neural networks can reach reasonable levels of performance even when trained with noisy labels, as shown in \cite{DBLP:journals/corr/KrauseSHZTDPL15} and \cite{DBLP:journals/corr/SunSSG17}, and the information learned can then be transferred to the cancer classification model. In fact, our experiments show that pretraining on BI-RADS classification contributes significantly to the performance of our model. \begin{figure}[h] \centering \begin{tabular}{c} \includegraphics[width=0.4\linewidth,trim={2.35cm 2.5cm 2.7cm 3.1cm}, clip]{figures/BI-RADS_model.pdf} \end{tabular} \caption{BI-RADS classification model architecture. The architecture is largely similar to that of the `view-wise' cancer classification model variant, except that the output is a set of probability estimates over the three output classes. The model consists of four ResNet columns, with weights shared within the CC and MLO branches of the model.} \label{fig:birads_model_figure} \end{figure} The model we use for BI-RADS classification is shown in \autoref{fig:birads_model_figure}. It is largely similar to the `view-wise' model architecture for cancer classification described in the \textit{Model variants} section above, except that the output layer produces probability estimates over three classes for a single label. Although the BI-RADS classification task is a three-class classification task, we measured the performance of the model by averaging the AUCs of 0-vs-other, 1-vs-other and 2-vs-other predictions on the validation set. The rest of the training details (e.g.
ResNet architecture, optimizer hyperparameters) are identical to those of the cancer classification model, except that the model was trained with a minibatch size of 24 instead of 4. We early-stopped training based on validation AUCs after no improvement for 20 epochs, and initialized the ResNet weights for the cancer classification model using the learned weights in the BI-RADS model. Where we used heatmaps as additional input channels, we duplicated the weights of the bottommost convolutional kernel such that the model can operate on inputs with three channels--the rest of the model is left unchanged. In our experimental results, we used a BI-RADS model trained for 111 epochs (326 hours, or approximately 14 days, on four Nvidia V100 GPUs), which obtained an averaged validation AUC of 0.748. We emphasize that we used the same train-validation-test splits for pretraining our BI-RADS classification model as in training our cancer classification model, so no data leakage across splits was possible. \subsubsection*{Cancer classification model without BI-RADS pretraining} In this section, we evaluate the benefit of BI-RADS pretraining by comparing the performance of our models to cancer classification models trained without using weights from a pretrained BI-RADS model. Specifically, we train a set of cancer classification models starting from entirely randomly initialized model weights. The results are shown in \autoref{tab:cancer_pred_no_pretraining}. In every case, we see an improvement in performance from using weights of a model pretrained on BI-RADS classification, compared to randomly initializing the model weights and training from scratch. The improvement in performance from using pretrained weights tends to be larger for the image-only models than for the image-and-heatmaps models.
We hypothesize that this is because the heatmaps already contain significant information pertaining to cancer classification, and hence the model can more quickly learn to make use of the heatmaps for cancer classification. In contrast, the image-only models rely entirely on the ResNets to effectively encode visual information for cancer classification, and therefore using the weights of a model pretrained for BI-RADS classification contributes significantly to the model performance. \begin{table}[ht] \centering \caption{ AUCs of our models on screening and biopsied populations, with and without BI-RADS pretraining. } \begin{tabular}{| l | c | c | c | c |} \cline{2-5} \multicolumn{1}{c|}{} & \multicolumn{2}{c|}{single} & \multicolumn{2}{c|}{5x ensemble} \\ \cline{2-5} \multicolumn{1}{c|}{} & malignant & benign & malignant & benign \\ \cline{2-5} \hline \multicolumn{5}{|c|}{\cellcolor{gray!20} {\textbf{screening population}} } \\ \hline image-only (pretrained) & 0.827$\pm$0.008 & 0.731$\pm$0.004 & 0.840 & 0.743 \\ \hline image-only (random) & 0.687$\pm$0.009 & 0.657$\pm$0.006 & 0.703 & 0.669 \\ \hline image-and-heatmaps (pretrained) & \textbf{0.886}$\pm$\textbf{0.003} & \textbf{0.747}$\pm$\textbf{0.002} & \textbf{0.895} & \textbf{0.756} \\ \hline image-and-heatmaps (random) & 0.856$\pm$0.007 & 0.701$\pm$0.004 & 0.868 & 0.708 \\ \hline \multicolumn{5}{|c|}{\cellcolor{gray!20} {\textbf{biopsied population}} } \\ \hline image-only (pretrained) & 0.781$\pm$0.006 & 0.673$\pm$0.003 & 0.791 & 0.682 \\ \hline image-only (random) & 0.693$\pm$0.006 & 0.564$\pm$0.006 & 0.709 & 0.571 \\ \hline image-and-heatmaps (pretrained) & \textbf{0.843}$\pm$\textbf{0.004} & \textbf{0.690}$\pm$\textbf{0.002} & \textbf{0.850} & \textbf{0.696} \\ \hline image-and-heatmaps (random) & 0.828$\pm$0.008 & 0.633$\pm$0.006 & 0.841 & 0.640 \\ \hline \end{tabular} \label{tab:cancer_pred_no_pretraining} \end{table} \section*{Details of the auxiliary patch-level classifier} \begin{figure}[h]
\resizebox{.98\textwidth}{!}{ \begin{tabular}{C{0.5\textwidth} C{0.5\textwidth}} \centering \begin{tabular}{c c} \includegraphics[width=0.22\textwidth]{figures/patch_malignant_1.png} & \includegraphics[width=0.22\textwidth]{figures/patch_malignant_2.png} \\ \includegraphics[width=0.22\textwidth]{figures/patch_malignant_3.png} & \includegraphics[width=0.22\textwidth]{figures/patch_malignant_5.png} \end{tabular} & \begin{tabular}{c c} \includegraphics[width=0.22\textwidth]{figures/patch_benign_1.png} & \includegraphics[width=0.22\textwidth]{figures/patch_benign_2.png} \\ \includegraphics[width=0.22\textwidth]{figures/patch_benign_3.png} & \includegraphics[width=0.22\textwidth]{figures/patch_benign_4.png} \end{tabular}\\ (a) Malignant. Examples of patches overlapping only with biopsied malignant findings (marked with red). & (b) Benign. Examples of patches overlapping only with biopsied benign findings (marked with yellow or green).\\ \begin{tabular}{c c} \includegraphics[width=0.22\textwidth]{figures/patch_outside_1.png} & \includegraphics[width=0.22\textwidth]{figures/patch_outside_2.png} \\ \includegraphics[width=0.22\textwidth]{figures/patch_outside_3.png} & \includegraphics[width=0.22\textwidth]{figures/patch_outside_4.png} \end{tabular} & \begin{tabular}{c c} \includegraphics[width=0.22\textwidth]{figures/patch_negative_1.png} & \includegraphics[width=0.22\textwidth]{figures/patch_negative_2.png} \\ \includegraphics[width=0.22\textwidth]{figures/patch_negative_3.png} & \includegraphics[width=0.22\textwidth]{figures/patch_negative_4.png} \end{tabular}\\ (c) Outside. Examples of patches from images with biopsied findings but without an overlap with any biopsied findings. & (d) Negative. Examples of patches from images without any biopsied findings. \end{tabular} } \caption{Examples of patches sampled according to the procedure described in the \textit{Sampling the patches} section. 
From (a) to (d), four patches are shown with the images of their origin, for the four classes: malignant, benign, outside and negative. Patches are shown on the left, while the images of origin (with biopsied findings indicated, if any were present) are on the right. The blue squares indicate the locations of the patches in the original images. The meaning of the colored regions on the images is described in greater detail in \cite{NYU_dataset}. } \label{fig:example_patches} \end{figure} We used a dataset of 5,000,000 patches to train the auxiliary patch-level classification network to classify patches into one of four classes: (i) patches overlapping only with areas segmented by annotations in red, indicating malignant findings (malignant); (ii) patches overlapping only with areas segmented by annotations in green or yellow, indicating benign findings (benign); (iii) patches from segmented images but not overlapping with any marked area (outside); (iv) patches from images in exams labeled as negative for both benign and malignant (negative). As described in \cite{NYU_dataset}, the findings were manually indicated on the images by radiologists at a pixel level, based on results from pathology. Images that were mammographically occult, i.e., where the biopsied lesions were not visible on the image, were not taken into consideration while generating this training set. \subsubsection*{Sampling the patches} Patches in the dataset were generated from all available mammography exams in the training set--the same exams used to train the breast-level model. Before extracting the patches, images were cropped according to the algorithms described in \cite{NYU_dataset}. As was the case for the breast-level model, we flipped the L-CC and L-MLO images so that all breast images were rightward-oriented. Each patch was cropped as a square from a full-size image.
To sample a patch, we first sampled a location for the center of the patch, then sampled its size from a uniform distribution between 128 pixels and 384 pixels, and finally sampled an angle by which the crop window was rotated, also from a uniform distribution, from -30 to 30 degrees. A sample was rejected if it contained any pixels outside of the full-size image or only contained zero-valued pixels (i.e., containing only background and no breast tissue). Once extracted, the patches were resized to 256$\times$256 pixels. Examples are shown in \autoref{fig:example_patches}. \subsubsection*{Training and architecture} We used a DenseNet-121 architecture \cite{densenet} for our patch-level auxiliary classifier, with four dense blocks containing 6, 12, 24 and 16 dense layers, respectively. The entire network has approximately seven million parameters. We initialized the weights of the model with the weights of a DenseNet-121 trained on ImageNet. The number of images with visible biopsied findings is small (0.85\%) in comparison to the total number of images. Furthermore, the fraction of the total image area associated with visible biopsied findings is also small (0.87\%, averaging over images with segmentation). To accommodate this, in each training epoch, we randomly sampled 10,000 patches: 20 from the malignant class, 35 from the benign class, 5,000 from the outside class and 4,945 from the negative class. This ratio of malignant, benign, outside and negative patches was chosen to reflect the ratio of $\mathbf{area_{m}}$, $\mathbf{area_{b}}$, $\mathbf{area_{o}}$ and $\mathbf{area_{n}}$, which are the sums of the respective fractions of total areas over our segmented training data set. $\mathbf{area_{m}}$ and $\mathbf{area_{b}}$ denote the total sums of the areas under biopsied malignant and benign findings, respectively, over the entire set of $6,758$ images with segmentation in the training set. Accordingly, the sum of the remaining area is denoted by $\mathbf{area_{o}}$.
7,000 images without any segmentation were randomly sampled and $\mathbf{area_{n}}$ denotes the sum of the sizes of all those images. In order to address the extreme class imbalance, we used weighted cross-entropy as the training loss, wherein the class weights were set as the inverse of the above patch ratio, so that losses on incorrect predictions of malignant and benign patches were appropriately upweighted. The weighted cross-entropy loss has the following form: \begin{equation*} \mathcal{L}(\mathbf{x}, c)= -\mathbf{w}_{c} \log \hat{p}_{c}(\mathbf{x}), \label{eq:auxiliary_objective} \end{equation*} where $\mathbf{x}$ is the input image, $c$ is its class label, assumed to be in $\{\text{malignant (m)}, \text{benign (b)}, \text{outside (o)}, \text{negative (n)}\}$, and $\hat{p}_{c}(\mathbf{x})$ is the probability of class $c$ predicted by the network. The coefficient $\mathbf{w}_c$ is computed as $$\mathbf{w}_c = \frac{\Pi_{k \ne c}N_k }{\sum_{j \in \{m, b, o, n\}} \Pi_{k\ne j}N_k }, $$ where $N_k$ is the number of patches for class $k$. We trained the network using the Adam optimization algorithm \cite{adam}, with a batch size of 100 and a learning rate of $10^{-5}$. \subsubsection*{Patch classification heatmap generation} The patch-level auxiliary classifier is applied to the full-resolution images in a sliding window fashion to create two class-specific heatmaps, corresponding to the malignant and benign predictions of the patch-level classifier. Since mammography images vary in size (before cropping is applied to use them as an input for the model), to slide the classifier over each image we used Algorithm~\ref{alg:stride_setting} to compute the appropriate values of the strides for the vertical and horizontal dimensions. We applied strides of approximately 70 pixels across both dimensions. We used non-augmented $256\times 256$ patches as inputs to the patch classifier.
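The stride computation in Algorithm~\ref{alg:stride_setting} translates directly into runnable code; by construction, the strides sum exactly to the image size minus the patch size, so the windows tile each dimension:

```python
import random

def strides_setting(image_size, prefixed_stride=70, patch_size=256):
    # Number of full-stride steps needed to slide the patch across the image.
    sliding_steps = (image_size - patch_size) // prefixed_stride
    pixel_remaining = (image_size - patch_size) % prefixed_stride
    if pixel_remaining == 0:
        return [prefixed_stride] * sliding_steps
    # Otherwise, add one more step and shrink the strides so that the
    # windows tile the dimension exactly.
    sliding_steps += 1
    pixel_overlap = prefixed_stride - pixel_remaining
    stride_avg = prefixed_stride - pixel_overlap // sliding_steps
    stride_list = [stride_avg] * sliding_steps
    # Decrement a random subset of strides by 1 to absorb the leftover
    # overlap; the strides then sum exactly to image_size - patch_size.
    for i in random.sample(range(sliding_steps), pixel_overlap % sliding_steps):
        stride_list[i] -= 1
    return stride_list
```

For example, for the 2,677-pixel CC height, this yields 35 strides of 69 or 70 pixels, i.e. 36 window positions.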
For each patch, we projected the respective predicted class probabilities onto the original $256\times 256$ input area of the patch, and we averaged the predicted probabilities for pixels in overlapping patches. Ultimately, we generate two heatmaps for each image--one for the prediction of malignancy and one for the prediction of benign findings. Both are passed as additional input channels to the breast-level model. The predicted probabilities for the outside and negative patch classes are not used here. \begin{algorithm*}[ht] \caption{Strides setting} \begin{algorithmic}[1] \Function{strides\_setting}{\texttt{image\_size}} \State \texttt{prefixed\_stride} = 70 \State \texttt{patch\_size} = 256 \State \texttt{sliding\_steps} = (\texttt{image\_size} - \texttt{patch\_size}) // \texttt{prefixed\_stride} \State \texttt{pixel\_remaining} = (\texttt{image\_size} - \texttt{patch\_size}) \% \texttt{prefixed\_stride} \If{\texttt{pixel\_remaining} == 0} \State \texttt{stride\_list} = [\texttt{prefixed\_stride}] * \texttt{sliding\_steps} \Else \State \texttt{sliding\_steps} += 1 \State \texttt{pixel\_overlap} = \texttt{prefixed\_stride} - \texttt{pixel\_remaining} \State \texttt{stride\_avg} = \texttt{prefixed\_stride} - \texttt{pixel\_overlap} // \texttt{sliding\_steps} \State \texttt{stride\_list} = [\texttt{stride\_avg}] * \texttt{sliding\_steps} \State randomly choose \texttt{pixel\_overlap} \% \texttt{sliding\_steps} items from \texttt{stride\_list} and decrement each by 1. \EndIf \Return \texttt{stride\_list} \EndFunction \end{algorithmic} \label{alg:stride_setting} \end{algorithm*} \begin{figure}[htb!]
\centering \begin{tabular}{C{0.15cm} c c c} & \begin{tabular}{c c c} \hspace{-5mm}image & \hspace{4mm}malignant & \hspace{3.5mm}benign \end{tabular}&\begin{tabular}{c c c} \hspace{-3mm}image & \hspace{3.5mm}malignant & \hspace{4mm}benign \end{tabular} &\begin{tabular}{c c c} \hspace{-3mm}image & \hspace{3.5mm}malignant & \hspace{4mm}benign \end{tabular} \\ \hspace{-6mm}\begin{tabular}{c} \rotatebox[origin=c]{270}{R-CC} \end{tabular}& \begin{tabular}{c c c} \hspace{-8mm} \includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_R-CC_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_R-CC_m.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_R-CC_b.png} \end{tabular} & \begin{tabular}{c c c} \hspace{-5mm} \includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_R-CC_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_R-CC_b.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_R-CC_m.png} \end{tabular}&\begin{tabular}{c c c} \hspace{-5mm} \includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_R-CC_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_R-CC_m.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_R-CC_b.png} \end{tabular} \\ \hspace{-6mm}\begin{tabular}{c} \rotatebox[origin=c]{270}{L-CC} \end{tabular}& \begin{tabular}{c c c} \hspace{-8mm} 
\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_L-CC_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_L-CC_m.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_L-CC_b.png} \end{tabular} & \begin{tabular}{c c c} \hspace{-5mm} \includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_L-CC_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_L-CC_m.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_L-CC_b.png} \end{tabular}& \begin{tabular}{c c c} \hspace{-5mm} \includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_L-CC_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_L-CC_m.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_L-CC_b.png} \end{tabular} \\ \hspace{-6mm}\begin{tabular}{c} \rotatebox[origin=c]{270}{R-MLO} \end{tabular}& \begin{tabular}{c c c} \hspace{-8mm} \includegraphics[height = 0.15\textwidth, width = 0.10\textwidth,trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_R-MLO_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_R-MLO_m.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_R-MLO_b.png} \end{tabular} & \begin{tabular}{c c c} \hspace{-5mm} 
\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth,trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_R-MLO_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth,trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_R-MLO_m.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_R-MLO_b.png} \end{tabular}& \begin{tabular}{c c c} \hspace{-5mm} \includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_R-MLO_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_R-MLO_m.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_R-MLO_b.png} \end{tabular}\\ \hspace{-6mm}\begin{tabular}{c} \rotatebox[origin=c]{270}{L-MLO} \end{tabular} & \begin{tabular}{c c c} \hspace{-8mm} \includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_L-MLO_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_L-MLO_m.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/121_L-MLO_b.png} \end{tabular} & \begin{tabular}{c c c} \hspace{-4mm} \includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_L-MLO_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_L-MLO_m.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/2769_L-MLO_b.png} \end{tabular}& \begin{tabular}{c c c} 
\hspace{-4mm} \includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_L-MLO_i.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_L-MLO_m.png}\hspace{-4mm}& \hspace{-5mm}\includegraphics[height = 0.15\textwidth, width = 0.10\textwidth, trim={0mm 0mm 0mm 0mm}]{figures/heatmaps/4696_L-MLO_b.png} \end{tabular} \\ & (a) & (b) & (c) \end{tabular} \vspace{-2mm} \caption{ We select three exams from the test set and visualize the four standard views from each along with two heatmaps overlaid on the images. For each view, from left to right, we show: the original image, the image overlaid with a heatmap of the pixel-level prediction for malignancy, the image overlaid with a heatmap of the pixel-level prediction for benign findings. (a) An exam where the left breast was labeled as malignant as well as benign. (b) An exam in which there is a benign finding in the left breast. (c) An exam with benign findings in the right breast.} \label{fig:heatmaps_more} \end{figure} \subsubsection*{Model evaluation and selection} The main purpose of the patch-level classifier is to generate heatmaps which can be used as extra channels for the breast-level classifier. Unfortunately, it is hard to evaluate the patch-level classifier with respect to how it improves the breast-level model at each epoch. We trained the patch-level network for 2,000 epochs, saving its parameters every 200 epochs. The 10 models saved were used to generate malignant and benign heatmaps for all images in the validation set. To form a breast-level prediction for the malignant/not malignant task from the heatmaps, we took the maximum value across the malignant heatmaps for each breast. The breast-level predictions for the benign/not benign task were computed analogously. 
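This reduction from heatmaps to breast-level predictions is a pixel-wise maximum over both views of a breast; a minimal sketch (random arrays stand in for the patch classifier's sliding-window output):

```python
import numpy as np

rng = np.random.default_rng(0)
# Per-pixel malignancy probabilities for the two views of one breast.
cc_heatmap = rng.random((2677, 1942))
mlo_heatmap = rng.random((2974, 1748))

# Breast-level malignancy score: the maximum over all pixels of both views.
# The benign score is computed analogously from the benign heatmaps.
malignant_score = max(cc_heatmap.max(), mlo_heatmap.max())
```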
The model we used for generating heatmaps for the entire data set was selected based on the average of the AUCs (between the two tasks) we obtained using these predictions. The process of generating the heatmaps for the entire dataset took approximately 1,000 hours using an Nvidia V100 GPU (2.12 seconds per image). Examples of the images with two corresponding heatmaps are shown in \autoref{fig:heatmaps_more}. \newpage \section*{Data} Our retrospective study was approved by our institutional review board and was compliant with the Health Insurance Portability and Accountability Act. Informed consent was waived. This dataset\footnote{Details of its statistics and how it was extracted can be found in a separate technical report \cite{NYU_dataset}.} is a larger and more carefully curated version of a dataset used in our earlier work \cite{high_resolution, breast_density}. The dataset includes 229,426 digital screening mammography exams (1,001,093 images) from 141,473 patients. Each exam contains at least four images,\footnote{Some exams contain more than one image per view as technologists may need to repeat an image or provide a supplemental view to completely image the breast in a screening examination.} corresponding to the four standard views used in screening mammography: R-CC (right craniocaudal), L-CC (left craniocaudal), R-MLO (right mediolateral oblique) and L-MLO (left mediolateral oblique). A few examples of exams are shown in Figure~\ref{fig:example_exams}. To extract labels indicating whether each breast of the patient was found to have malignant or benign findings at the end of the diagnostic pipeline, we relied on pathology reports from biopsies. We have 5,832 exams with at least one biopsy performed within 120 days of the screening mammogram. Among these, biopsies confirmed malignant findings for 985 (8.4\%) breasts and benign findings for 5,556 (47.6\%) breasts. 234 (2.0\%) breasts had both malignant and benign findings. 
For the remaining screening exams that were not matched with a biopsy, we assigned labels corresponding to the absence of malignant and benign findings in both breasts. For all exams matched with biopsies, we asked a group of radiologists (provided with the corresponding pathology reports) to retrospectively indicate the location of the biopsied lesions at a pixel level. An example of such a segmentation is shown in Figure~\ref{fig:example_segmentation}. We found that, according to the radiologists, approximately 32.8\% of exams were mammographically occult, i.e., the lesions that were biopsied were not visible on mammography, even retrospectively, and were identified using other imaging modalities: ultrasound or MRI. \begin{figure}[ht] \centering \begin{tabular}{c c c c } \hspace{-2mm}R-CC & \hspace{-4.5mm}L-CC & \hspace{-4.5mm}R-MLO & \hspace{-4.5mm}L-MLO \\ \hspace{-2mm}\includegraphics[width=0.245\linewidth]{figures/example_normal_rcc_2.png} & \hspace{-4.5mm}\includegraphics[width=0.245\linewidth]{figures/example_normal_lcc_2.png} & \hspace{-4.5mm}\includegraphics[width=0.245\linewidth]{figures/example_normal_rmlo_2.png} & \hspace{-4.5mm}\includegraphics[width=0.245\linewidth]{figures/example_normal_lmlo_2.png} \\ \vspace{-3mm} \\ \hspace{-2mm}\includegraphics[width=0.245\linewidth]{figures/example_malignant_rcc.png} & \hspace{-4.5mm}\includegraphics[width=0.245\linewidth]{figures/example_malignant_lcc.png} & \hspace{-4.5mm}\includegraphics[width=0.245\linewidth]{figures/example_malignant_rmlo.png} & \hspace{-4.5mm}\includegraphics[width=0.245\linewidth]{figures/example_malignant_lmlo.png} \\ \vspace{-3mm} \\ \hspace{-2mm}\includegraphics[width=0.245\linewidth]{figures/example_benign_rcc.png} & \hspace{-4.5mm}\includegraphics[width=0.245\linewidth]{figures/example_benign_lcc.png} & \hspace{-4.5mm}\includegraphics[width=0.245\linewidth]{figures/example_benign_rmlo.png} & \hspace{-4.5mm}\includegraphics[width=0.245\linewidth]{figures/example_benign_lmlo.png} 
\end{tabular} \vspace{-4mm} \caption{Examples of breast cancer screening exams. First row: both breasts without any findings; second row: left breast with no findings and right breast with a malignant finding; third row: left breast with a benign finding and right breast with no findings. } \label{fig:example_exams} \end{figure} \begin{figure}[ht] \begin{minipage}{.22\textwidth} \centering \begin{tabular}{c c} \hspace{-2mm}\includegraphics[width=0.48\linewidth]{figures/image_original.png} & \hspace{-4mm} \includegraphics[width=0.48\linewidth]{figures/image_biopsy.png} \end{tabular} \vspace{-3mm} \caption{An example of a segmentation performed by a radiologist. Left: the original image. Right: the image with lesions requiring a biopsy highlighted. The malignant finding is highlighted with red and benign finding with green.} \label{fig:example_segmentation} \end{minipage} \hfill \begin{minipage}{.25\textwidth} \centering \vspace{-1.5mm} \includegraphics[width=1\linewidth,trim={2cm 2.2cm 2.15cm 3.5cm},clip]{figures/schematics.pdf} \vspace{-7mm} \caption{A schematic representation of how we formulated breast cancer exam classification as a learning task.} \label{fig:schematic} \end{minipage} \vspace{-3mm} \end{figure} \section*{Deep CNNs for cancer classification} \subsection*{Problem definition} For each breast, we assign two binary labels: the absence/presence of malignant findings in a breast, and the absence/presence of benign findings in a breast. With left and right breasts, each exam has a total of four binary labels. Our goal is to produce four predictions corresponding to the four labels for each exam. As input, we take four high-resolution images corresponding to the four standard screening mammography views. We crop each image to a fixed size of $2677\times1942$ pixels for CC views and $2974\times1748$ pixels for MLO views. See \autoref{fig:schematic} for a schematic representation. 
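A minimal sketch of this formulation as data structures, using the fixed crop sizes stated above (the view names and the helper function are illustrative only):

```python
import numpy as np

# Fixed input sizes used in the paper (height, width), one channel per view.
VIEW_SHAPES = {
    "L-CC": (2677, 1942), "R-CC": (2677, 1942),
    "L-MLO": (2974, 1748), "R-MLO": (2974, 1748),
}

# The four binary targets predicted for every exam.
TARGETS = ("left_malignant", "left_benign", "right_malignant", "right_benign")

def make_dummy_exam():
    """One screening exam: a dict mapping view name to a single-channel image."""
    return {view: np.zeros(shape, dtype=np.float32)
            for view, shape in VIEW_SHAPES.items()}
```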
\subsection*{Model architecture} We trained a deep multi-view CNN of architecture shown in \autoref{fig:architectures}, inspired by \cite{high_resolution}. The overall network consists of two core modules: (i) four view-specific columns, each based on the ResNet architecture \cite{resnet} that outputs a fixed-dimension hidden representation for each mammography view, and (ii) two fully connected layers to map from the computed hidden representations to the output predictions. We used four ResNet-22\footnote{\textit{ResNet-22} refers to our version of a 22-layer ResNet, with additional modifications such as a larger kernel in the first convolutional layer. Details can be found in the SI.} columns to compute a 256-dimension hidden representation vector of each view. The columns applied to L-CC/R-CC views share their weights. The columns applied to L-MLO/R-MLO views share their weights too. We concatenate the L-CC and R-CC representations into a 512-dimension vector, and apply two fully connected layers to generate predictions for the four outputs. We do the same for the L-MLO and R-MLO views. We average the probabilities predicted by the CC and MLO branches of the model to obtain our final predictions. \begin{figure}[ht] \begin{minipage}{0.72\linewidth} \centering \includegraphics[width=0.88\textwidth,trim={2.3cm 2.4cm 2.7cm 3.8cm}, clip]{figures/model_2.pdf} \vspace{-2.5mm} \caption{ Architecture of our model. Four ResNet-22 columns take the four views as input. The architecture is divided into CC and MLO branches. In each branch, the corresponding left and right representations from the ResNets are individually average-pooled spatially and concatenated, and two fully connected layers are applied to compute the predictions for the four outputs. The predictions are averaged between the CC and MLO branches. Weights are shared between L-CC/R-CC columns and L-MLO/R-MLO columns. 
When heatmaps are added as additional channels to the corresponding inputs, the first layers of the columns are modified accordingly.}
\label{fig:architectures}
\end{minipage}\hfill
\begin{minipage}{0.24\linewidth}
\centering
\begin{tabular}{c}
\includegraphics[width = 0.9\textwidth,trim={23mm 12mm 0mm 0mm}]{figures/heatmaps/2769_L-MLO_i.png}\\
\includegraphics[width = 0.9\textwidth,trim={23mm 12mm 0mm 0mm}]{figures/heatmaps/2769_L-MLO_m.png}\\
\includegraphics[width = 0.9\textwidth,trim={23mm 12mm 0mm 0mm}]{figures/heatmaps/2769_L-MLO_b.png}\\
\end{tabular}
\vspace{-2mm}
\caption{The original image, the `malignant' heatmap over the image and the `benign' heatmap over the image.}
\label{fig:heatmaps}
\end{minipage}
\vspace{-5mm}
\end{figure}

\subsubsection*{Auxiliary patch-level classification model and heatmaps}
The high resolution of the images and the limited memory of modern GPUs constrain us to use relatively shallow ResNets within our model when using full-resolution images as inputs. To further take advantage of the fine-grained detail in mammograms, we trained an auxiliary model to classify $256 \times 256$-pixel patches of mammograms, predicting two labels: the presence or absence of malignant and benign findings in a given patch. The labels for these patches are produced based on the pixel-level segmentations of the corresponding mammograms produced by clinicians. We refer to this model as a \textit{patch-level} model, in contrast to the \textit{breast-level} model described in the section above, which operates on images of the whole breast. Subsequently, we apply this auxiliary network to the full-resolution mammograms in a sliding window fashion to create two \textit{heatmaps} for each image (an example is shown in \autoref{fig:heatmaps}), one containing an estimated probability of a malignant finding for each pixel, and the other containing an estimated probability of a benign finding.
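The sliding-window heatmap generation can be sketched as follows; the stride and the averaging of overlapping patch predictions are assumptions made for illustration (the paper applies the patch-level classifier densely):

```python
import numpy as np

def make_heatmap(image, patch_classifier, patch_size=256, stride=128):
    """Apply a patch-level classifier across `image` in a sliding window
    and return a heatmap of per-pixel scores.  Overlapping patch
    predictions are averaged; `patch_classifier(patch)` is assumed to
    return a probability in [0, 1]."""
    h, w = image.shape
    scores = np.zeros((h, w), dtype=np.float64)
    counts = np.zeros((h, w), dtype=np.float64)
    for top in range(0, h - patch_size + 1, stride):
        for left in range(0, w - patch_size + 1, stride):
            p = patch_classifier(image[top:top + patch_size,
                                       left:left + patch_size])
            scores[top:top + patch_size, left:left + patch_size] += p
            counts[top:top + patch_size, left:left + patch_size] += 1
    return scores / np.maximum(counts, 1)
```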
These patch classification heatmaps can be used as additional input channels to the breast-level model to provide supplementary fine-grained information. Using separate breast- and pixel-level models as described above differentiates our work from approaches that utilize pixel-level labels in a single differentiable network \cite{multi_scale} or from models based on variations of R-CNN \cite{breast_cancer_rcnn}. Our approach allows us to use a very deep auxiliary network at the patch level, as this network does not have to process the entire high-resolution image at once. Adding the heatmaps produced by the patch-level classifier as additional input channels allows the main classifier to benefit from pixel-level labels, while the heavy computation necessary to produce the pixel-level predictions does not need to be repeated each time an example is used for learning. We can also initialize the weights of the patch-level classifier with those of networks pretrained on large off-domain data sets such as ImageNet \cite{imagenet}.\footnote{To finetune a network pretrained on RGB images with grayscale images, we duplicate the grayscale images across the RGB channels.} Hereafter, we refer to the model using only breast-level labels as the \textit{image-only} model, and the model using breast-level labels and the heatmaps as the \textit{image-and-heatmaps} model.

\section*{Experiments}
In all experiments, we used the training set for optimizing the parameters of our model and the validation set for tuning hyperparameters of the model and the training procedure. Unless otherwise specified, results were computed across the screening population. To obtain predictions for each test example, we apply random transformations to the input 10 times, apply the model to each of the 10 samples separately and then average the 10 predictions (details in the SI).
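The test-time augmentation scheme above (averaging predictions over 10 randomly transformed copies of the input) can be sketched as follows; the specific transformation shown is a stand-in, as the actual augmentations are described in the SI:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_transform(image, max_shift=10):
    """A stand-in for the paper's random input transformations: here just
    a random crop offset (the real augmentations are detailed in the SI)."""
    dy, dx = rng.integers(0, max_shift, size=2)
    return image[dy:, dx:]

def predict_with_tta(model, image, n_samples=10):
    """Average the model's predictions over `n_samples` randomly
    transformed copies of the input."""
    preds = [model(random_transform(image)) for _ in range(n_samples)]
    return float(np.mean(preds))
```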
To further improve our results, we employed the technique of model ensembling \cite{ensemble}, wherein the predictions of several different models are averaged to produce the overall prediction of the ensemble. In our case, we trained five copies of each model with different random initializations of the weights in the fully connected layers. The remaining weights are initialized with the weights of the model pretrained on BI-RADS classification, giving our model a significant boost in performance (details in the SI). For each model, we report the results from a single network (mean and standard deviation across five random initializations) and from an ensemble.

\subsection*{Test populations}
In the experiments below, we evaluate our model on several populations to test different hypotheses: (i) the \textit{screening population}, including all exams from the test set without subsampling; (ii) the \textit{biopsied subpopulation}, a subset of the screening population including only exams containing breasts that underwent a biopsy; (iii) the \textit{reader study subpopulation}, which consists of the biopsied subpopulation and a subset of randomly sampled exams from the screening population without any findings.

\subsection*{Evaluation metrics}
We evaluated our models primarily in terms of AUC (area under the ROC curve) for malignant/not malignant and benign/not benign classification tasks on the breast level. The model's and readers' responses on the reader study subset are evaluated in terms of AUC as well as precision-recall AUC (PRAUC), which are commonly used metrics in the evaluation of radiologists' performance. ROC and PRAUC capture different aspects of a predictive model's performance. The ROC curve summarizes the trade-off between the true positive rate and the false positive rate for a model using different probability thresholds.
The precision-recall curve summarizes the trade-off between the true positive rate (recall) and the positive predictive value (precision) for a model using different probability thresholds. \subsection*{Screening population} In this section we present the results on the screening population, which approximates the distribution of patients who undergo routine screening. Results are shown in the first two rows of \autoref{tab:cancer_pred}. The model ensemble using only mammogram images achieved an AUC of 0.840 for malignant/not malignant classification and an AUC of 0.743 for benign/not benign classification. The image-and-heatmaps model ensemble using both the images and the heatmaps achieved an AUC of 0.895 for malignant/not malignant and 0.756 for benign/not benign classification, outperforming the image-only model on both tasks. The discrepancy in performance of our models between these two tasks can be largely explained by the fact that a larger fraction of benign findings than malignant findings are mammographically-occult (Table 2 in \cite{NYU_dataset}). Additionally, there can be noise in the benign/not benign labels associated with radiologists' confidence in their diagnoses. For the same exam, one radiologist might discard a finding as obviously not malignant without requesting a biopsy, while another radiologist might ask for a biopsy. We find that the image-and-heatmaps model performs better than the image-only model on both tasks. Moreover, the image-and-heatmaps model improves more strongly in malignant/not malignant classification than benign/not benign classification. We also find that ensembling is beneficial across all models, leading to a small but consistent increase in AUC. \begin{table}[t] \centering \caption{ AUCs of our models on screening and biopsied populations. 
}
\vspace{-2mm}
\resizebox{.485\textwidth}{!}{
\begin{tabular}{| l | c | c | c | c |}
\cline{2-5}
\multicolumn{1}{c|}{} & \multicolumn{2}{c|}{single} & \multicolumn{2}{c|}{5x ensemble} \\
\cline{2-5}
\multicolumn{1}{c|}{} & malignant & benign & malignant & benign \\
\cline{2-5}
\hline
\multicolumn{5}{|c|}{\cellcolor{gray!20} {\textbf{screening population}} } \\
\hline
image-only & 0.827$\pm$0.008 & 0.731$\pm$0.004 & 0.840 & 0.743 \\
\hline
image-and-heatmaps & \textbf{0.886}$\pm$\textbf{0.003} & \textbf{0.747}$\pm$\textbf{0.002} & \textbf{0.895} & \textbf{0.756} \\
\hline
\multicolumn{5}{|c|}{\cellcolor{gray!20} {\textbf{biopsied subpopulation}} } \\
\hline
image-only & 0.781$\pm$0.006 & 0.673$\pm$0.003 & 0.791 & 0.682 \\
\hline
image-and-heatmaps & \textbf{0.843}$\pm$\textbf{0.004} & \textbf{0.690}$\pm$\textbf{0.002} & \textbf{0.850} & \textbf{0.696} \\
\hline
\end{tabular}
}
\label{tab:cancer_pred}
\vspace{-2mm}
\end{table}

\subsection*{Biopsied subpopulation}
We show the results of our models evaluated only on the biopsied subpopulation in the last two rows of \autoref{tab:cancer_pred}. Within our test set, this corresponds to 401 breasts: 339 with benign findings, 45 with malignant findings, and 17 with both. This subpopulation, which underwent biopsy for at least one imaging finding, differs markedly from the overall screening population, which consists of largely healthy individuals undergoing routine annual screening without recall for additional imaging or biopsy. Compared to the results on the screening population, AUCs on the biopsied population are markedly lower across all the model variants. On the biopsied subpopulation, we observed a consistent difference between the performance of the image-only and image-and-heatmaps models. The ensemble of image-and-heatmaps models performs best on both malignant/not malignant classification, attaining an AUC of 0.850, and benign/not benign classification, attaining an AUC of 0.696.
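As a reference for the two metrics reported throughout, a minimal evaluation helper (using scikit-learn, with average precision as an estimate of the PRAUC):

```python
from sklearn.metrics import roc_auc_score, average_precision_score

def evaluate(labels, scores):
    """Breast-level evaluation: area under the ROC curve, plus average
    precision as an estimate of the precision-recall AUC (PRAUC)."""
    return {
        "auc": roc_auc_score(labels, scores),
        "prauc": average_precision_score(labels, scores),
    }
```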
The markedly lower AUCs attained for the biopsied subpopulation, in comparison to the screening population, can be explained by the fact that exams that require a recall for diagnostic imaging and that subsequently need a biopsy are more challenging for both radiologists and our model.\footnote{More precisely, this difference in AUC can be explained by the fact that while adding negative examples to, or removing them from, the test population does not change the true positive rate, it alters the false positive rate. The false positive rate is computed as the ratio of the number of false positives to the total number of negatives. Therefore, when easy negative examples are added to the test set, the number of false positives grows more slowly than the total number of negatives, so the false positive rate at every threshold decreases, which leads to an increase in AUC. Removing easy negative examples has the reverse effect, and the AUC is lower.}

\subsection*{Results across ages and breast densities}
We divide the test set by patient age and breast density and evaluate our model on each subpopulation, as shown in \autoref{fig:age_den}. We observe that the performance of both the image-only and the image-and-heatmaps models varies across age groups. We also find that both models perform worse on dense breasts (``heterogeneously dense'' and ``extremely dense'') than on fattier ones (``almost entirely fatty'' and ``scattered areas of fibroglandular density''), which is consistent with the decreased sensitivity of radiologists for patients with denser breasts. The difference in the model's performance is larger for benign/not benign classification than for malignant/not malignant classification. We hypothesize that this is due to age and breast density influencing the level of noise in the benign/not benign labels, associated with radiologists' confidence in their diagnoses.
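The effect described in the footnote above, that adding easy negatives raises AUC while removing them lowers it, can be verified numerically on toy scores:

```python
from sklearn.metrics import roc_auc_score

# A hypothetical "hard" population: positive and negative scores overlap.
labels = [1, 1, 0, 0]
scores = [0.9, 0.4, 0.6, 0.2]
base_auc = roc_auc_score(labels, scores)  # 3 of 4 positive-negative pairs ranked correctly: 0.75

# Add easy negatives (scored confidently low): the true positive rate is
# unchanged, but the false positive rate at every threshold drops, so AUC rises.
easy_labels = labels + [0] * 4
easy_scores = scores + [0.05] * 4
easy_auc = roc_auc_score(easy_labels, easy_scores)
assert easy_auc > base_auc
```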
\begin{figure}[ht]
\vspace{-3mm}
\begin{tabular}{c c}
\includegraphics[width = 0.22\textwidth]{figures/age_den_1.pdf} &
\hspace{-5mm} \includegraphics[width = 0.255\textwidth]{figures/age_den_2.pdf}
\end{tabular}
\vspace{-3mm}
\caption{AUCs for patients grouped by age and by breast density.}
\vspace{-4mm}
\label{fig:age_den}
\end{figure}

\begin{figure*}[ht]
\begin{tabular}{c c c c c c}
\hspace{-3.5mm}\includegraphics[width = 0.165\textwidth]{figures/reader_study/auc_1.pdf}&
\hspace{-3.5mm}\includegraphics[width = 0.165\textwidth]{figures/reader_study/auc_2.pdf} &
\hspace{-3.5mm}\includegraphics[width = 0.165\textwidth]{figures/reader_study/auc_3.pdf} &
\hspace{-3.5mm}\includegraphics[width = 0.165\textwidth]{figures/reader_study/prauc_1.pdf}&
\hspace{-3.5mm}\includegraphics[width = 0.165\textwidth]{figures/reader_study/prauc_2.pdf} &
\hspace{-3.5mm}\includegraphics[width = 0.165\textwidth]{figures/reader_study/prauc_3.pdf}\\
\vspace{-5mm}\\
\footnotesize{(a)} & \footnotesize{(b)} & \footnotesize{(c)} & \footnotesize{(a*)} & \footnotesize{(b*)} & \footnotesize{(c*)} \\
\end{tabular}
\vspace{-2mm}
\caption{ROC curves ((a), (b), (c)) and precision-recall curves ((a*), (b*), (c*)) on the subset of the test set used for the reader study. (a) \& (a*): curves for all 14 readers. Their average performance is highlighted in blue. (b) \& (b*): curves for hybrids of the image-and-heatmaps ensemble with each single reader. The curve highlighted in blue indicates the average performance of all hybrids.
(c) \& (c*): comparison among the image-and-heatmaps ensemble, the average reader and the average hybrid.}
\label{fig:human_ai_comparison}
\vspace{-3mm}
\end{figure*}

\section*{Reader study}
To compare the performance of our image-and-heatmaps ensemble (hereafter referred to as \textit{the model}) to human radiologists, we performed a reader study with 14 readers---12 attending radiologists at various levels of experience (between 2 and 25 years), a resident and a medical student---each reading 740 exams from the test set (1,480 breasts): 368 exams randomly selected from the biopsied subpopulation and 372 exams randomly selected from exams not matched with any biopsy. Exams were shuffled before being given to the readers. Readers were asked to provide a probability estimate of malignancy on a 0\%--100\% scale for each breast in an exam. As some breasts contain multiple suspicious findings, readers were asked to give their assessment of the most suspicious finding. We used the first 20 exams as a practice set to familiarize readers with the format of the reader study---these were excluded from the analysis.\footnote{The readers were shown the images and asked to give their assessment. We confirmed the correctness of the format in which they returned their answers, but we did not provide them with feedback on the accuracy of their predictions.} On the remaining 720 exams, we evaluated the model's and readers' performance on malignancy classification. Among the 1,440 breasts, there are 62 breasts labeled as malignant and 356 breasts labeled as benign. In the breasts labeled as malignant, there are 21 masses, 26 calcifications, 12 asymmetries and 4 architectural distortions.\footnote{Masses are defined as 3-dimensional space-occupying lesions with completely or partially convex-outward borders. Calcifications are tiny specks of calcific deposits.
An asymmetry is defined as a unilateral deposit of fibroglandular tissue that does not meet the definition of a mass, i.e., an area of fibroglandular tissue that is not seen in the other breast. Architectural distortion refers to a disruption of the normal random pattern of fibroglandular tissue with no definite mass visible.}\footnote{As one breast had two types of findings, the numbers add up to 39, not 38.} In the breasts labeled as benign, the corresponding numbers of imaging findings are: 87, 102, 36 and 6. Our model achieved an AUC of 0.876 and a PRAUC of 0.318. AUCs achieved by individual readers varied from 0.705 to 0.860 (mean: 0.778, std: 0.0435). PRAUCs for readers varied from 0.244 to 0.453 (mean: 0.364, std: 0.0496). Individual ROC and precision-recall curves, along with their averages, are shown in \autoref{fig:human_ai_comparison}(a) and \autoref{fig:human_ai_comparison}(a*).

\begin{figure*}[ht]
\centering
\begin{minipage}[t]{.48\textwidth}
\centering
\begin{tabular}{c c}
\hspace{-1mm}\includegraphics[height=0.416\linewidth]{figures/reader_study/weight_auc.pdf} &
\hspace{-4mm}\includegraphics[height=0.416\linewidth]{figures/reader_study/weight_prauc.pdf}
\end{tabular}
\vspace{-4mm}
\caption{AUC (left) and PRAUC (right) as a function of $\lambda \in [0, 1)$ for hybrids between each reader and our image-and-heatmaps ensemble. Each hybrid achieves the highest AUC/PRAUC for a different $\lambda$ (marked with $\diamondsuit$).
}
\label{fig:weighting}
\end{minipage}
\hspace{3mm}
\begin{minipage}[t]{.48\textwidth}
\centering
\begin{tabular}{c c c c}
\phantom{ab}&\vspace{0.5mm}\hspace{-1mm}\includegraphics[height=0.38\linewidth]{figures/tsne_reader_study_viewsplit_h0.pdf} &
\vspace{0.5mm}\hspace{-3mm} \includegraphics[height=0.38\linewidth]{figures/tsne_reader_study_viewsplit_h1.pdf} &\hspace{-2mm}
\raisebox{.11\height}{\includegraphics[height=0.3\linewidth, trim=3mm 0mm 10mm 0mm]{figures/tsne_reader_study_viewsplit_colorbar.pdf}}
\end{tabular}
\vspace{-3mm}
\caption{Exams in the reader study set represented using the concatenated activations from the four image-specific columns (left) and the concatenated activations from the first fully connected layer in both CC and MLO model branches (right).}
\label{fig:tsne_breast}
\end{minipage}
\end{figure*}

We also evaluated the accuracy of a human-machine hybrid, whose predictions are a linear combination of the predictions of a radiologist and of the model---that is, $\mathbf{y}_\mathrm{hybrid} = \lambda\mathbf{y}_\mathrm{radiologist} + (1 - \lambda)\mathbf{y}_\mathrm{model}$. For $\lambda = 0.5$\footnote{We do not have a way to tune $\lambda$ to individual readers, hence we chose $\lambda = 0.5$ as the most natural way of aggregating two sets of predictions in the absence of prior knowledge of their quality. As \autoref{fig:weighting} shows, the optimal $\lambda$ varies considerably between readers. The stronger the reader's performance, the smaller the optimal weight on the model. Notably, though, all readers' results improve, in both AUC and PRAUC, when their predictions are averaged with the model's.} (see \autoref{fig:weighting} for the results for $\lambda \in [0, 1)$), hybrids between each reader and the model achieved an average AUC of 0.891 (std: 0.0109) and an average PRAUC of 0.431 (std: 0.0332) (cf. \autoref{fig:human_ai_comparison}(b), \autoref{fig:human_ai_comparison}(b*)).
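The hybrid combination $\mathbf{y}_\mathrm{hybrid} = \lambda\mathbf{y}_\mathrm{radiologist} + (1-\lambda)\mathbf{y}_\mathrm{model}$ and the per-reader $\lambda$ sweep of \autoref{fig:weighting} can be sketched as follows (function names are illustrative):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def hybrid(reader_scores, model_scores, lam=0.5):
    """Linear human-machine combination:
    y = lam * reader + (1 - lam) * model."""
    return lam * np.asarray(reader_scores) + (1 - lam) * np.asarray(model_scores)

def best_lambda(labels, reader_scores, model_scores,
                grid=np.linspace(0, 0.99, 100)):
    """Sweep lam over [0, 1) and return the value maximizing hybrid AUC."""
    aucs = [roc_auc_score(labels, hybrid(reader_scores, model_scores, lam))
            for lam in grid]
    return float(grid[int(np.argmax(aucs))])
```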
These results suggest that our model can be used as a tool to assist radiologists in reading breast cancer screening exams and that it captures aspects of the task different from those captured by experienced breast radiologists.

\subsection*{Visualization of the representation learned by the classifier}
Additionally, we examined how the network represents the exams internally by visualizing the hidden representations learned by the best single image-and-heatmaps model for exams in the reader study subpopulation. We visualize two sets of activations: the concatenated activations from the last layer of each of the four image-specific columns, and the concatenated activations from the first fully connected layer in both the CC and MLO model branches. Both sets of activations have 1,024 dimensions in total. We embed them into a two-dimensional space using UMAP~\cite{umap} with the Euclidean distance. \autoref{fig:tsne_breast} shows the embedded points. The color and size of each point reflect the same information: the warmer and larger the point, the higher the readers' mean prediction of malignancy. A score for each exam is computed as an average over the predictions for the two breasts. We observe that exams classified as more likely to be malignant according to the readers are close to each other for both sets of activations. The fact that previously unseen exams with malignancies were found by the network to be similar further corroborates that our model exhibits strong generalization capabilities.

\section*{Related work}
Prior works approach the task of breast cancer screening exam classification in two paradigms. In one paradigm, only exam-level, breast-level or image-level labels are available. A CNN is first applied to each of the four standard views and the resulting feature vectors are combined to produce a final prediction \cite{high_resolution}.
This workflow can be further integrated with multi-task learning, where radiological assessments, such as breast density, can be incorporated to model the confidence of the classification \cite{mammo}. Other works formulate the breast cancer exam classification task as weakly supervised localization and produce a class activation map that highlights the locations of suspicious lesions \cite{data_driven}. Such formulations can be paired with multiple-instance learning, where each spatial location is treated as a single instance and associated with a score that is correlated with the existence of a malignant finding \cite{multi_instance}. In the second paradigm, pixel-level labels that indicate the location of benign or malignant findings are also provided to the classifier during training. The pixel-level labels enable training models derived from the R-CNN architecture \cite{breast_cancer_rcnn} or models that divide the mammograms into smaller patches and train patch-level classifiers using the location of malignant findings \cite{kooi2017classifying, shen2017end, teare2017malignancy, multi_scale}. Some of these works directly aggregate outputs from the patch-level classifier to form an image-level prediction. A major limitation of such architectures is that information outside the annotated regions of interest is neglected. Other works apply the patch-level classifier as a first level of feature extraction, on top of which more layers are stacked, and the entire model is then optimized jointly. A downside of this kind of architecture is the requirement for the whole model to fit in GPU memory for training, which limits the minibatch size (usually to one), the depth of the patch-level model, and how densely the patch-level model is applied. Our work is most similar to the latter type of models utilizing pixel-level labels---however, our strategy uses a patch-level classifier to produce heatmaps as additional input channels to the breast-level classifier.
While we forgo the ability to train the whole model end-to-end, the patch-level classifier can be significantly more powerful and can be densely applied across the original image. As a result, our model can learn both local features across the entire image and macroscopic features such as symmetry between the breasts. For a more comprehensive review of prior work, refer to one of the recent reviews \cite{AJR_review, harvey_review}. A variety of results in terms of AUC for the prediction of malignancy have been reported in the literature. The most comparable to our work are: \cite{multi_instance} (0.86), \cite{breast_cancer_rcnn} (0.95), \cite{becker2017deep} (0.81), \cite{data_driven} (0.91), \cite{standalone} (0.84) and \cite{ritse} (0.89). Unfortunately, although these results can serve as a rough estimate of model quality, comparing different methods based on these numbers would be misleading. Some authors do not discuss the design of their models \cite{becker2017deep, ritse, standalone}, some evaluate their models on very small public datasets \cite{inbreast, DDSM}, insufficient for a meaningful evaluation, while others used private datasets with populations of different distributions (on a spectrum between the screening population and the biopsied subpopulation), different quality of imaging equipment and even differently defined labels. By making the code and the weights of our model public, we seek to enable more direct comparisons to our work.

\section*{Discussion}
By leveraging a large training set with breast-level and pixel-level labels, we built a neural network that can accurately classify breast cancer screening exams. We attribute this success in large part to the significant amount of computation encapsulated in the patch-level model, which was densely applied to the input images to form heatmaps as additional input channels to the breast-level model.
It would be impossible to train this model in a completely end-to-end fashion with currently available hardware. Although our results are promising, we acknowledge that the test set used in our experiments is relatively small and our results require further clinical validation. We also acknowledge that although our network's performance is stronger than that of the radiologists' on the specific task in our reader study, this is not exactly the task that radiologists perform. Typically, screening mammography is only the first step in a diagnostic pipeline, with the radiologist making a final determination and decision to biopsy only after recall for additional diagnostic mammogram images and possible ultrasound. However, in our study a hybrid model including both a neural network and expert radiologists outperformed either individually, suggesting the use of such a model could improve radiologist sensitivity for breast cancer detection. On the other hand, the design of our model is relatively simple. More sophisticated and accurate models are possible. Furthermore, the task we considered in this work, predicting whether the patient had a visible cancer at the time of the screening mammography exam, is the simplest possible among many tasks of interest. In addition to testing the utility of this model in real-time reading of screening mammograms, a clear next step would be predicting the development of breast cancer in the future--before it is even visible to a trained human eye. \acknow{The authors would like to thank Catriona C. Geras for correcting earlier versions of this manuscript, Michael Cantor for providing us pathology reports, Marc Parente and Eli Bogom-Shanon for help with importing the image data and Mario Videna for supporting our computing environment. We also gratefully acknowledge the support of Nvidia Corporation with the donation of some of the GPUs used in this research. 
This work was supported in part by grants from the National Institutes of Health (R21CA225175 and P41EB017183).} \showacknow{}
<?php

namespace Bemoove\AppBundle\Entity;

use ApiPlatform\Core\Annotation\ApiResource;
use Doctrine\ORM\Mapping as ORM;
use Symfony\Component\Serializer\Annotation\Groups;

/**
 * Reservation
 *
 * @ApiResource(attributes={
 *     "filters"={"reservation.futureworkoutinstance", "reservation.state_filter", "reservation.person_filter", "reservation.workoutinstance"},
 *     "normalization_context"={"groups"={"reservation","person","workout","full_workoutinstance"}},
 *     "denormalization_context"={"groups"={"post_person"}},})
 * @ORM\Table(name="reservation")
 * @ORM\Entity(repositoryClass="Bemoove\AppBundle\Repository\ReservationRepository")
 */
class Reservation
{
    /**
     * @var int
     *
     * @Groups({"reservation"})
     * @ORM\Column(name="id", type="integer")
     * @ORM\Id
     * @ORM\GeneratedValue(strategy="AUTO")
     */
    private $id;

    /**
     * @Groups({"reservation"})
     * @ORM\ManyToOne(targetEntity="OrderBundle\Entity\Order", inversedBy="reservations")
     * @ORM\JoinColumn(nullable=true)
     */
    private $order;

    /**
     * @Groups({"reservation"})
     * @ORM\ManyToOne(targetEntity="Bemoove\AppBundle\Entity\Person")
     * @ORM\JoinColumn(nullable=false)
     */
    private $person;

    /**
     * @var \DateTime
     *
     * @Groups({"reservation"})
     * @ORM\Column(name="date_add", type="datetimetz")
     */
    private $dateAdd;

    /**
     * @Groups({"reservation"})
     * @ORM\ManyToOne(targetEntity="Bemoove\AppBundle\Entity\WorkoutInstance")
     * @ORM\JoinColumn(nullable=false)
     */
    private $workoutInstance;

    /**
     * @var int
     *
     * @Groups({"reservation"})
     * @ORM\Column(name="nb_booking", type="smallint")
     */
    private $nbBooking;

    /**
     * @var float
     *
     * @Groups({"reservation"})
     * @ORM\Column(name="unit_price_tax_incl", type="float")
     */
    private $unitPriceTaxIncl;

    /**
     * @Groups({"reservation"})
     * @ORM\ManyToOne(targetEntity="Bemoove\AppBundle\Entity\ReservationState")
     * @ORM\JoinColumn(nullable=false)
     */
    private $state;

    /**
     * Constructor
     */
    public function __construct()
    {
        $this->setDateAdd(new \DateTime());
        // $this->person = new \Doctrine\Common\Collections\ArrayCollection();
        // $this->workoutInstance = new \Doctrine\Common\Collections\ArrayCollection();
    }

    /**
     * Get id
     *
     * @return integer
     */
    public function getId()
    {
        return $this->id;
    }

    /**
     * Set dateAdd
     *
     * @param \DateTime $dateAdd
     *
     * @return Reservation
     */
    public function setDateAdd($dateAdd)
    {
        $this->dateAdd = $dateAdd;

        return $this;
    }

    /**
     * Get dateAdd
     *
     * @return \DateTime
     */
    public function getDateAdd()
    {
        return $this->dateAdd;
    }

    /**
     * Set nbBooking
     *
     * @param integer $nbBooking
     *
     * @return Reservation
     */
    public function setNbBooking($nbBooking)
    {
        $this->nbBooking = $nbBooking;

        return $this;
    }

    /**
     * Get nbBooking
     *
     * @return integer
     */
    public function getNbBooking()
    {
        return $this->nbBooking;
    }

    /**
     * Set workoutInstance
     *
     * @param \Bemoove\AppBundle\Entity\WorkoutInstance $workoutInstance
     *
     * @return Reservation
     */
    public function setWorkoutInstance(\Bemoove\AppBundle\Entity\WorkoutInstance $workoutInstance)
    {
        $this->workoutInstance = $workoutInstance;

        return $this;
    }

    /**
     * Get workoutInstance
     *
     * @return \Bemoove\AppBundle\Entity\WorkoutInstance
     */
    public function getWorkoutInstance()
    {
        return $this->workoutInstance;
    }

    /**
     * Set person
     *
     * @param \Bemoove\AppBundle\Entity\Person $person
     *
     * @return Reservation
     */
    public function setPerson(\Bemoove\AppBundle\Entity\Person $person)
    {
        $this->person = $person;

        return $this;
    }

    /**
     * Get person
     *
     * @return \Bemoove\AppBundle\Entity\Person
     */
    public function getPerson()
    {
        return $this->person;
    }

    /**
     * Set order
     *
     * @param \OrderBundle\Entity\Order $order
     *
     * @return Reservation
     */
    public function setOrder(\OrderBundle\Entity\Order $order = null)
    {
        $this->order = $order;

        return $this;
    }

    /**
     * Get order
     *
     * @return \OrderBundle\Entity\Order
     */
    public function getOrder()
    {
        return $this->order;
    }

    /**
     * Set unitPriceTaxIncl
     *
     * @param float $unitPriceTaxIncl
     *
     * @return Reservation
     */
    public function setUnitPriceTaxIncl($unitPriceTaxIncl)
    {
        $this->unitPriceTaxIncl = $unitPriceTaxIncl;

        return $this;
    }

    /**
     * Get unitPriceTaxIncl
     *
     * @return float
     */
    public function getUnitPriceTaxIncl()
    {
        return $this->unitPriceTaxIncl;
    }

    /**
     * Set state
     *
     * @param \Bemoove\AppBundle\Entity\ReservationState $state
     *
     * @return Reservation
     */
    public function setState(\Bemoove\AppBundle\Entity\ReservationState $state)
    {
        $this->state = $state;

        return $this;
    }

    /**
     * Get state
     *
     * @return \Bemoove\AppBundle\Entity\ReservationState
     */
    public function getState()
    {
        return $this->state;
    }
}
# What is the inverse function of f(x) = x²?

What is the inverse function of $f(x) = x^{2}$?

**Answer (einfachmoipf):**

$f(x) = x^{2} \Rightarrow y = x^{2} \Rightarrow \sqrt{y} = \sqrt{x^{2}} \Rightarrow \sqrt{y} = |x| \Rightarrow x = \pm\sqrt{y}$

**Answer (Wendy Boykin):**

Explanation: If we try to solve $y = x^{2}$ for $x$, we do not get a single value, which means we do not get a function: we get $x = \pm\sqrt{y}$. In order to be invertible, a function must be one-to-one; that is, for every $x_{1} \neq x_{2}$ we must have $f(x_{1}) \neq f(x_{2})$. But $f(-1) = f(1)$ (for example), so there is no inverse function.
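The one-to-one argument in the answers above can be checked numerically. A minimal Python sketch (the names `f` and `f_inv` are mine, for illustration):

```python
import math

def f(x):
    return x ** 2

# f is not one-to-one on the whole real line: two distinct inputs share
# an output, so no inverse function exists on all of R.
assert f(-1) == f(1) == 1

# Restricting the domain to x >= 0 makes f one-to-one; the inverse on
# that restricted domain is the non-negative square root.
def f_inv(y):
    return math.sqrt(y)

for x in [0.0, 0.5, 2.0, 9.0]:
    assert math.isclose(f_inv(f(x)), x)
```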
Anthomyia quinquemaculata is a species of fly in the family Anthomyiidae. The scientific name of the species was first validly published in 1839 by Macquart.
set -o nounset -o errexit -o pipefail

# Prevent commands misbehaving due to locale differences.
export LC_ALL=C LANG=C

# Collect the names of files that contain "Copyright © {{.year}} {{.author}}".
# Note: with LANG=C, "©" has to be matched as arbitrary bytes, hence the
# dots in the regex below. The LICENSE file itself is excluded.
sources=$(egrep --binary-files=without-match --recursive --files-with-match \
    --exclude-dir=vendor 'Copyright....[0-9]{4} Yuta MASANO' \
    | grep --invert-match 'LICENSE' || :)

[ -z "$sources" ] && exit

# A copyright sentence is not allowed in any file other than LICENSE.
echo 'NG: the following sources still have a copyright sentence' >&2
for src in $sources; do
    echo "**** $src ****"
    head --lines 3 "$src"
    echo
done
exit 1
# Variational Inference

Posted by Gwan Siu on March 29, 2018

## 1. What is variational inference?

Representation, learning, and inference are three core problems of machine learning. Statistical inference involves finding an appropriate model and parameters to represent the distribution of the observed variables. In other words, given complete data $x$ and $y$ and an unknown parameter $\theta$, this is the classical parameter estimation problem in the ML area, and we usually adopt maximum likelihood estimation (MLE):

$\hat{\theta}_{\text{ML}} = \arg\max_{\theta} \log p(x, y; \theta)$

In many real situations we are given incomplete data, i.e., only the data $x$. In this case, latent variables $z$ are introduced; for example, in a Gaussian mixture model we introduce $z_{i}$ to indicate the underlying Gaussian component. The overall formulation then changes to

$\log p(x; \theta) = \log \int p(x, z; \theta) \, dz$

In fact, this is usually a high-dimensional integration, which is intractable, so exact inference is impossible in this case. Therefore, we need approximate inference techniques. Sampling-based algorithms and variation-based algorithms are the two main kinds of approximate inference algorithms in modern Bayesian statistics.

In this article, we mainly focus on variational inference (VI). The core idea of VI is to posit a family of distributions and then to find the member of that family which is closest to the target, where closeness is measured using the Kullback-Leibler divergence.

As in my previous article on EM, the data likelihood can be decomposed into an evidence lower bound and a KL divergence:

$\log p(x; \theta) = \mathcal{F}(q, \theta) + \text{KL}(q(z) \,\Vert\, p(z \vert x; \theta))$

where $\mathcal{F}(q,\theta)$ is an evidence lower bound for the marginal likelihood, because $\text{KL}(q(z) \,\Vert\, p(z \vert x;\theta))$ is non-negative.

Instead of maximizing the marginal likelihood directly, the EM algorithm and variational inference maximize the lower bound, which can be written as

$\mathcal{F}(q, \theta) = \mathbb{E}_{q(z)}[\log p(x \vert z; \theta)] - \text{KL}(q(z) \,\Vert\, p(z))$

1. The first term is the expectation of the data likelihood, and thus $\mathcal{F}(q,\theta)$ encourages distributions that put their mass on configurations of the latent variables that explain the observed data.
2. The second term is the negative KL divergence between the variational distribution and the prior, so $\mathcal{F}(q,\theta)$ forces $q(z)$ to be close to the prior $p(z)$.

Hence, maximizing $\mathcal{F}(q,\theta)$ **means balancing the likelihood and the prior.**

## 2. Expectation-Maximization

In the EM framework, we assume $q(z) = p(z \vert x; \theta^{old})$. The ELBO becomes:

$\mathcal{F}(q, \theta) = \mathbb{E}_{p(z \vert x; \theta^{old})}[\log p(x, z; \theta)] + H(q) = Q(\theta, \theta^{old}) + H(q)$

where $H(q)$ is the entropy of $z$ given $x$. It is constant w.r.t. $\theta$, and thus we do not take it into account when maximizing the ELBO; it is sufficient for the EM algorithm to maximize $Q(\theta, \theta^{old})$.

E-step: maximize $\mathcal{F}(q,\theta)$ w.r.t. the distribution over hidden variables given the parameters:

$q^{new}(z) = p(z \vert x; \theta^{old})$

M-step: maximize $\mathcal{F}(q,\theta)$ w.r.t. the parameters given the hidden distribution:

$\theta^{new} = \arg\max_{\theta} Q(\theta, \theta^{old})$

## 3. Mean Field Theory

In the EM framework, $q(z) = p(z \vert x; \theta^{old})$ is computed iteratively. This requires an analytical form of $p(z \vert x; \theta^{old})$, which is possible for simple models but cannot be generalized to complex models. Instead, we approximate the posterior distribution by a family of simple distributions:

$q(z) = \prod_{i} q(z_{i})$

That is, we assume the latent variables are mutually independent, each governed by a distinct factor in the variational distribution, i.e. $z_{i} \perp z_{j}$ for $i \neq j$. This is called mean-field theory.

## 4. Coordinate Ascent Variational Inference (CAVI)

In this part, combining with mean-field theory, I will talk about how the ELBO is maximized. The posterior of one latent variable, $q(z_{j})$, is updated given the rest of the latent variables $i \neq j$. Let $q(z) = \prod_{i} q(z_{i})$; the ELBO can then be rewritten in terms of $q(z_{j})$ and a KL term. Since KL divergence is non-negative, the ELBO is maximized when $\text{KL}(q(z_{j}) \,\Vert\, \hat{p}(z_{i \neq j})) = 0$, i.e. when $q(z_{j})$ equals $\hat{p}(z_{i \neq j})$.

Similarly, in variational EM:

E-step: $q^{\ast} = \frac{1}{Z}\text{exp}(\mathbb{E}[\ln p(x,z;\theta)]_{i \neq j})$

M-step: maximize $\mathcal{F}(q,\theta)$.

The figure below shows the process of the CAVI algorithm.

## 5. Variational inference and GMM

In this section, the CAVI algorithm is applied to the mixture of Gaussians model (GMM). It is helpful for understanding how CAVI works.

### 5.1 Joint distribution computation

We are given observed data $X = (x_{1}, \dots, x_{n})$ from $K$ independent Gaussian distributions with means $\mu_{k}$. A one-hot vector $c_{i} \in \mathbb{R}^{K}$ indicates the distribution to which each data point belongs. The hyperparameter $\sigma^{2}$ is fixed, and the latent variables are $\mu$ and $c$, with priors $\mu_{k} \sim \mathcal{N}(0, \sigma^{2})$ and $c_{i} \sim \text{Categorical}(1/K, \dots, 1/K)$.

By Bayes' theorem, we can compute the joint distribution:

$p(\mu, c, x) = p(\mu) \prod_{i=1}^{n} p(c_{i}) \, p(x_{i} \vert c_{i}, \mu)$

Once we have the joint distribution, we can compute the marginal distribution. However, this formulation has no analytical solution, and its computational complexity is $\mathcal{O}(K^{n})$.

### 5.2 GMM and CAVI

Now we specify the variational distribution $q$, where $m = (m_{1}, \dots, m_{K})$, $s^{2} = (s_{1}^{2}, \dots, s_{K}^{2})$, and $\phi = (\phi_{1}, \dots, \phi_{n})$ are variational parameters:

$q(\mu, c) = \prod_{k=1}^{K} q(\mu_{k}; m_{k}, s_{k}^{2}) \prod_{i=1}^{n} q(c_{i}; \phi_{i})$

1. We can obtain the formulation of the ELBO, which is a function of $m$, $s^{2}$, and $\phi$.
2. From section 4 we know how CAVI updates the latent variables. We now apply it to the GMM to compute and update the cluster indicator $c$, with $\mu$ fixed.

In the update for $c_{i}$, the second term, $\log p(c_{i})$, is the log prior and is a constant, so we pay attention to the first term, the Gaussian likelihood of $x_{i}$. In detail, we can simplify it because $c_{i} = (c_{i1}, \dots, c_{iK})$ is a one-hot vector, and $\mathbb{E}[\mu_{k}]$ and $\mathbb{E}[\mu_{k}^{2}]$ can be computed. For each data point $i$, the parameter $\phi_{ik}$ corresponds to the $k$th component of the latent variable $c$, and the update is:

$\phi_{ik} \propto \exp\left( \mathbb{E}[\mu_{k}]\, x_{i} - \frac{\mathbb{E}[\mu_{k}^{2}]}{2} \right)$

Similarly, we can compute the latent variable $\mu$ of the GMM. First we calculate the optimal variational distribution $q(\mu_{k})$, and then update its parameters $m_{k}$ and $s_{k}^{2}$:

$m_{k} = \frac{\sum_{i} \phi_{ik}\, x_{i}}{1/\sigma^{2} + \sum_{i} \phi_{ik}}, \qquad s_{k}^{2} = \frac{1}{1/\sigma^{2} + \sum_{i} \phi_{ik}}$

The algorithm for the GMM with CAVI is shown below.

## 6. Comparison of MCMC and VI

| MCMC | VI |
| --- | --- |
| More computationally intensive | Less intensive |
| Guarantees producing asymptotically exact samples from the target distribution | No such guarantees |
| Slower | Faster, especially for large data sets and complex distributions |
| Best for precise inference | Useful to explore many scenarios quickly or on large data sets |
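The decomposition of the marginal likelihood into an ELBO plus a KL term, which the post builds on, can be verified exactly on a tiny discrete example. A minimal Python sketch (the toy probabilities are invented for illustration):

```python
import math

# Toy joint distribution p(x, z) over a binary latent z, for one fixed
# observation x. The numbers are made up for illustration.
p_joint = {0: 0.3, 1: 0.1}                        # p(x, z)
p_x = sum(p_joint.values())                       # evidence p(x)
p_post = {z: p_joint[z] / p_x for z in p_joint}   # posterior p(z | x)

# An arbitrary variational distribution q(z).
q = {0: 0.6, 1: 0.4}

# ELBO = E_q[log p(x, z)] - E_q[log q(z)]
elbo = sum(q[z] * (math.log(p_joint[z]) - math.log(q[z])) for z in q)

# KL(q || p(z | x))
kl = sum(q[z] * (math.log(q[z]) - math.log(p_post[z])) for z in q)

# The decomposition log p(x) = ELBO + KL(q || p(z|x)) holds exactly.
assert math.isclose(math.log(p_x), elbo + kl)
```

Because the KL term is strictly positive whenever q differs from the posterior, the ELBO here is strictly below log p(x), which is exactly why maximizing the ELBO over q tightens the bound.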
Q: Convert native SQL to JPQL with inner join

I have this native SQL request:

    SELECT LMC_PARAMETRE.*
    FROM LMC_PARAMETRE
    INNER JOIN INTERVENTION ON INTERVENTION.ID_INTV = LMC_PARAMETRE.ID_INTV
    INNER JOIN REF_SITE ON INTERVENTION.DID_SITE = REF_SITE.ID_SITE
    WHERE INTERVENTION.DCD_STATUT_INTV = '0'
      AND REF_SITE.ID_CENT = '097';

I would like to convert it to JPQL, but the "ON" keyword is not recognized:

    JpaQueryBuilder builder = new JpaQueryBuilder();
    builder.append("SELECT lmp FROM " + LmcParametre.class.getName() + " AS lmp ");
    builder.append("INNER JOIN " + Intervention.class.getName() + " AS intv ON ");
    builder.append("intv.idIntv = lmp.intervention.idIntv ");
    builder.append("INNER JOIN " + Site.class.getName() + " AS site ON ");
    builder.append("intv.didSite = site.idSite ");
    builder.append("WHERE ");
    builder.append(lt("intervention.statutIntv", String.valueOf(constanteInferieurePretACharger)));
    builder.append("site.centre.idCent = " + idCentre);

According to the HQL documentation: "Joins, in HQL, are done using associations between entities." However, I don't see what this means. Thanks

A: Assuming that you have modelled these entities properly using JPA (including their relations), then all your JPQL would be is:

    select lp from LmcParametre lp
    inner join lp.intervention i
    inner join i.site s
    where s.idSite = :idSite and i.statutIntv = :statut

Assuming you have this JPQL query registered as a named query on your LmcParametre class, using the annotation

    @NamedQueries({
        @NamedQuery(
            name = "myQuery",
            query = "select lp from LmcParametre lp inner join lp.intervention i "
                  + "inner join i.site s where s.idSite = :idSite and i.statutIntv = :statut"
        )
    })

then you can create a typed query like this:

    TypedQuery<LmcParametre> query =
        entityManager.createNamedQuery("myQuery", LmcParametre.class);
    query.setParameter("idSite", myIdSite);
    query.setParameter("statut", myStatut);
    List<LmcParametre> results = query.getResultList();

A: You cannot use the ON clause in JPA joins. To do a JOIN in JPA, you need the mapping between the entities and must use these associations in the JPQL query. So, instead of:

    SELECT lmp FROM LmcParametre AS lmp
    INNER JOIN Intervention AS intv ON intv.idIntv = lmp.intervention.idIntv
    INNER JOIN Site AS site ON intv.didSite = site.idSite

you need to use:

    SELECT lmp FROM LmcParametre lmp
    JOIN lmp.interventions intv -- you need "interventions" mapped (@OneToMany?) in LmcParametre
    JOIN intv.site              -- you need "site" mapped in Intervention
/*
 * Licensed under the Apache License, Version 2.0 (the "License"); you may
 * not use this file except in compliance with the License. You may obtain
 * a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
 * WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
 * License for the specific language governing permissions and limitations
 * under the License.
 */

#ifndef BST_H
#define BST_H

#include "Edge.h"
#include <vector>
#include "ValHeap.h"
#include "CntxArray.h"
#include "ECString.h"
#include <iostream>

#define NORMALVAL 0
#define TERMINALVAL 1
#define EXTRAVAL 2

class Bst;
typedef list<Bst*> Bsts;
typedef vector<short> shorts;
typedef shorts::iterator shortIter;

class Val
{
public:
  static Val* newIth(int ith, Val* oval, bool& stop);
  Val()
    : status(NORMALVAL), len_(1), prob_(0), edge_(NULL), trm_(-1), wrd_(-1)
    { vec_.push_back(0); }
  Val(Edge* e, double prb);
  ~Val();
  Edge* edge() const { return edge_; }
  Bsts& bsts() { return bsts_; }
  short len() const { return len_; }
  short& len() { return len_; }
  short trm() const;
  short& trm1() { return trm_; }
  int wrd() const;
  int& wrd1() { return wrd_; }
  vector<short>& vec() { return vec_; }
  vector<short> vec() const { return vec_; }
  short& vec(int i) { return vec_[i]; }
  double prob() const { return prob_; }
  double& prob() { return prob_; }
  double fom() { return prob_; }
  void extendTrees(Bst& bst2, int pos);
  friend ostream& operator<<(ostream& os, const Val& v);
  friend bool operator==(Val& v1, Val& v2);
  bool check();
  short status;
private:
  Val(Val* oval);
  short len_;
  double prob_;
  Edge* edge_;
  short trm_;
  int wrd_;
  Bsts bsts_;
  vector<short> vec_;
};

class Bst
{
public:
  Bst() : explored_(false), done_(false), num_(0), sum_(0) {}
  ~Bst();
  Val* next(int n);
  bool explored() const { return explored_; }
  bool& explored() { return explored_; }
  Val* nth(int i) { return nbest[i]; }
  int num() const { return num_; }
  int& num() { return num_; }
  bool empty() { return num_ == 0; }
  double prob() { return num_ == 0 ? 0 : nbest[0]->prob(); }
  double sum() const { return sum_; }
  double& sum() { return sum_; }
  ValHeap heap;
  void push(Val* val) { heap.push(val); }
  Val* pop() { return heap.pop(); }
  void addnth(Val* val) { num_++; nbest.push_back(val); }
  static void tester(Val* val);
  bool ptst(Val* val);
private:
  bool explored_;
  bool done_;
  int num_;
  double sum_;
  vector<Val*> nbest;
};

typedef map<CntxArray, Bst, less<CntxArray> > BstMap;

Bst& bstFind(CntxArray& hi, BstMap& bm);
Bst& ithBst(int i, Bsts& bsts);

#endif /* ! BST_H */
{"url":"https:\/\/jimgrange.wordpress.com\/blog\/","text":"# Bayesian Estimation of Partial\u00a0Correlations\n\nCorrelations are a popular analysis tool in psychology to examine the extent to which two variables are related. For example, one might be interested in whether there is a relationship between shoe size and height. But what if you want to explore the relationship between two variables whilst controlling for the effects of a third variable? For example, you might be interested in the relationship between shoe size and height whilst controlling for age. In order to do this, one needs to use partial correlations.\n\nWhilst there are relatively straightforward methods to calculate partial correlations using frequentist statistics, I was interested recently whether there is a Bayesian parameter estimation version of this analysis. A cursory glance of the internet didn\u2019t bring up much1, so I set about developing my own.\n\nBelow, I briefly cover the frequentist implementation before providing an overview of the Bayesian parameter estimation version.\n\n## Frequentist Implementation\n\nAssume we have three variables, X, Y, and Z. We are interested in the relationship between X and Y, but need to account for the effect of Z. I simulated 75 data points (these could be participants in a study), and calculation of a standard Pearson\u2019s correlation between X and Y provides rx,y(74) = 0.48, p<0.05. A \u201csignificant\u201d relationship! 
However, this doesn\u2019t take into account the effect of Z.\n\nExamination of all possible correlations between X, Y, and Z reveals a strong relationship between Z and the other two variables:\n\n\u2022 rx,z(74) = 0.58, p<0.05.\n\u2022 ry,z(74) = 0.62, p<0.05.\n\nExamining the relationship between X and Y whilst controlling for Z (denoted here as *rx,y|z*) is given by the following formula:\n\n$r_{x,y|z} = \\frac{r_{x,y} - (r_{x,z} \\times r_{y,z})}{\\sqrt{1 - r_{x,z}^{2}} \\times \\sqrt{1 - r_{y,z}^{2}}}$\n\nWhen calculating this partial correlation between X and Y, controlling for Z, we get rx,y|z(74) = 0.19, p = .10. Note now that the relationship between X and Y is no longer \u201csignificant\u201d. The Figure below shows the relationship with Regular (i.e., Pearson\u2019s) and Partial correlation.\n\n## Bayesian Parameter Estimation\n\nThis is all well and good, but what if you want a Bayesian estimate of the partial correlation parameter? I had a quick check of the internet, and couldn\u2019t see anything. (I should say that I didn\u2019t spend long looking; I like to do some things from scratch just so I can learn something, and this was one of those times.) Inspired by the SUPERB book by Lee & Wagenmakers on Bayesian modelling, I devised a Bayesian graphical model of partial correlation:\n\nThis model extended the one presented in the Lee & Wagenmakers book which was used to calculate the correlation coefficient between just two variables. There are two extensions in the current model over that of Lee & Wagenmakers:\n\n\u2022 The model is extended to accommodate three variables of interest instead of two. 
As such, three correlation parameters are estimated from the data: rx,y, rx,z, and ry,z.2\n\u2022 The model has a new parameter, $\\theta$, which denotes the partial correlation of interest in the current example, that is rx,y|z\n\nThe model assumes\u2014as does the one by Lee & Wagenmakers\u2014that the data are modelled as draws from a multivariate normal distribution. The parameters of this distribution are the means of each of the three variables (denoted $\\mu_{1}, \\mu_{2},\\mu_{3},$) and their standard deviations ($\\sigma_{1}, \\sigma_{2},\\sigma_{3},$), as well as the correlation coefficients that link them ($r_{1,2}, r_{1,3}, r_{2,3},$). I use broad, rather uninformative priors (which of course can be tweaked later if one wishes).\n\nThe parameter of interest, $\\theta$, inherits the distributions from the model parameters pertaining to the correlation coefficients ($r_{1,2}, r_{1,3}, r_{2,3},$), and from them generates a new distribution of the partial correlation parameter (according to the first equation in this post). The distribution of interest for inference is now this new distribution pertaining to the partial correlation parameter.\n\n### Results\n\nRecall that the frequentist estimate of the partial correlation coefficient was rx,y|z(74) = 0.19. Below is a density plot of the posterior distribution of the $\\theta$ parameter from the Bayesian graphical model above3.\n\nThe mode of this posterior distribution was 0.205, with a 95% Highest-Density Interval spanning from -0.03 to 0.39. Note that whilst the modal estimate of the partial correlation parameter was close to that of the frequentist analysis, the Bayesian parameter estimation provides much more information, in particular regarding the uncertainty of this estimate.\n\n## Conclusion\n\nI am not sure if this implementation is correct, so I advise using it with caution. 
But, it proved an interesting exercise, and I am not aware of a current implementation of this.\n\n## Code\n\n#------------------------------------------------------------------------------\n### initial set up\nrm(list = ls())\n\n# set working directory\n\nlibrary(ppcor)\nlibrary(Hmisc)\nlibrary(R2jags)\n\n# set seed for reproducible code\nset.seed(42)\n#------------------------------------------------------------------------------\n\n#------------------------------------------------------------------------------\n### define functions\n\n# function to simulate partially correlated data\nget_data <- function(n){\nx <- rnorm(n, 0, 1)\ny <- .5 * x + rnorm(n, 0, 1)\nz <- .3 * x + .6 * y + rnorm(n, 0, 1)\n\ndata <- data.frame(x = x, y = y, z = z)\n\nreturn(data)\n}\n#------------------------------------------------------------------------------\n\n#------------------------------------------------------------------------------\n### get data & conduct frequentist analysis\n\n# generate the data\nn <- 75\nx <- get_data(n)\n\n## plot the data with linear model fits\nop = par(cex.main = 1.5, mar = c(5,6,4,5) + 0.1, mgp = c(3.5, 1,0),\ncex.lab = 1.5, font.lab = 2, cex.axis = 1.3, bty = \"n\", las = 1)\n\n# do pairs plot\npdf(\"pairs.pdf\", width = 6, height = 6)\npairs(x, upper.panel = NULL, pch = 19)\ndev.off()\n\n# do correlation plot\npdf(\"correlation.pdf\", width = 6, height = 6)\nplot(x$y, x$x, pch = 17, ylab = \"\", xlab = \"\")\n\n# model not controlling for z\nmod_1 <- lm(x$x ~ x$y)\nabline(a = mod_1$coefficients[1], b = mod_1$coefficients[2], lwd = 3, lty = 1,\ncol = \"red\")\n\n# model controlling for z\nmod_2 <- lm(x$x ~ x$y + x$z) abline(a = mod_2$coefficients[1], b = mod_2$coefficients[2], lwd = 3, lty = 1, col = \"blue\") legend(\"bottomright\", c(\"Regular\", \"Partial\"), lty = c(1, 1), bty = \"n\", lwd = 3, col = c(\"red\",\"blue\"), cex = 1.5) dev.off() # get the frequentist estimate of correlation & partial correlation # note that the correlation between x 
and y is no longer significant when # controlling for z freq_r <- rcorr(as.matrix(x)) freq_partial_r <- pcor(x) #------------------------------------------------------------------------------ #------------------------------------------------------------------------------ ### Conduct Bayesian parameter estimation # declare the JAGS model code model_code <- \" model { # data for(i in 1:n){ x[i, 1:3] ~ dmnorm.vcov(mu[], TI[,]) } # priors mu[1] ~ dnorm(0, .001) mu[2] ~ dnorm(0, .001) mu[3] ~ dnorm(0, .001) lambda[1] ~ dgamma(.001, .001) lambda[2] ~ dgamma(.001, .001) lambda[3] ~ dgamma(.001, .001) r_xy ~ dunif(-1, 1) r_xz ~ dunif(-1, 1) r_yz ~ dunif(-1, 1) # reparameterisation sigma[1] <- 1\/sqrt(lambda[1]) sigma[2] <- 1\/sqrt(lambda[2]) sigma[3] <- 1\/sqrt(lambda[3]) T[1, 1] <- 1 \/ lambda[1] T[1, 2] <- r_xy * sigma[1] * sigma[2] T[1, 3] <- r_xz * sigma[1] * sigma[3] T[2, 1] <- r_xy * sigma[1] * sigma[2] T[2, 2] <- 1 \/ lambda[2] T[2, 3] <- r_yz * sigma[2] * sigma[3] T[3, 1] <- r_xz * sigma[1] * sigma[3] T[3, 2] <- r_yz * sigma[2] * sigma[3] T[3, 3] <- 1 \/ lambda[3] TI[1:3, 1:3] <- T[1:3, 1:3] # partial correlation calculation num <- r_xy - (r_xz * r_yz) denom <- sqrt(1 - pow(r_xz, 2)) * sqrt(1 - pow(r_yz, 2)) partial <- num\/denom } \" # model details jags_info <- list(\"x\", \"n\") parameters <- c(\"r_xy\", \"r_xz\", \"r_yz\", \"mu\", \"sigma\", \"partial\") # fit the model sample <- jags(jags_info, inits = NULL, parameters, model.file = textConnection(model_code), n.chains = 1, n.iter = 10000, n.burnin = 500, n.thin = 5, DIC = F) # look at the overview of the parameter estimates sample # extract the posterior samples of the partial correlation (4th column) & # calculate the 95% HDI posterior <- sample$BUGSoutput$sims.matrix[, 4] sample_mcmc <- as.mcmc(posterior) hdi <- HPDinterval(sample_mcmc) ### plot # do some preparation for plotting by finding the mode of the posterior dens <- density(posterior) posterior_mode <- dens$x[which.max(dens$y)] # do the plot 
pdf(\"bayesian_estimate.pdf\", width = 6, height = 6) plot(dens, xlim = c(-1, 1), ylim = c(0, max(dens$y) + 0.55),\nmain = \"\", xlab = \"Partial Correlation Estimate\", lwd = 2)\n\n# add the mode of the sample & the HDI etc.\nlines(x=c(posterior_mode, posterior_mode), y=c(2.5, 3.8), lty = 2, lwd = 2,\ncol = \"red\")\ntext(x= posterior_mode, y = max(dens$y) + 0.5, paste(\"Posterior mode =\", round(posterior_mode, 3), sep = \" \"), cex = 1.2, col = \"red\") lines(x = c(hdi[1],hdi[1]), y = c(0,0.2), lwd = 2, col = \"red\") lines(x = c(hdi[2],hdi[2]), y = c(0,0.2), lwd = 2, col = \"red\") lines(x = c(hdi[1],hdi[2]), y = c(0.1,0.1), lwd = 2, col = \"red\") text(x = (hdi[1] + hdi[2]) \/ 2, y = 0.325, \"95% HDI\", cex = 1.2, col = \"red\") dev.off() #------------------------------------------------------------------------------ 1. Note that there is a paper from Wetzel & Wangenmakers (2012) which demonstrates the calculation of Bayes factors for correlation and partial correlation using summary statistics (i.e., Pearson\u2019s r and the degrees of freedom). Note also (as I say in the main post) that I didn\u2019t search that hard for a solution to this problem as I was keen to make my own method. So, there is probably a better way of doing this that is already available. 2. Note that in the model of Lee & Wagenmakers with just two variables, one must take the inverse of the variance\u2013covariance matrix when passing it to the dmnorm function in JAGS. With three or more variables, there are issues when taking the inverse of the matrix because it is then not positive definite. (See this link for the help I received from the JAGS owner on this issue) The solution involves using JAGS versions newer than 4.3.0, and using the dmnorm.cov function instead. 3. As this is just a toy example, I kept the model-fit time as quick as possible. I generated 10,000 samples from the posterior distributions, treating the first 500 as burn-in samples. 
The thinning rate was set to 5, and just one chain was used. Note that this is not optimal, but the fit time was quite slow for the model. Advertisements # Reproducibility Article in \u201cThe Conversation\u201d I was asked to write 200-300 words on my views on whether there is a reproducibility crisis in the sciences for an article that was appearing in The Conversation. I was so passionate about what I was writing that I ended up writing over 1,200 words. The final article was, of course, edited down by their team to meet the 300 word guide. Below I have posted my full piece. That there is a reproducibility crisis in psychological science\u2014and arguably across all sciences\u2014is, to me, beyond doubt. Murmurings of low reproducibility began in 2011\u2014 the so-called \u201cyear of horrors\u201d for psychological science (Wagenmakers, 2012), with the infamous fraud case of Diedrik Stapel being its low-light. But murmurings now have empirical evidence. In 2015, the Open Science Collaboration published the findings of our large-scale effort to closely-replicate 100 studies in psychology (Open Science Collaboration, 2015). And the news was not good: Only 36% of studies were replicated. Whilst low reproducibility is not unique to psychological science\u2014indeed, cancer biology is currently reviewing its own reproducibility rate, and things are not looking great (see Baker & Dolgin, 2017)\u2014psychology is leading the way in getting its house in order. Several pioneering initiatives have been introduced which, if embraced by the community, will leave psychological science in a strong position moving forward. Here I focus on three I believe are the most important. Study Pre-Registration & Registered Reports In a delightfully concerning study, Simonsohn et al. 
(2013) demonstrated that, in the absence of any true effect, researchers can find statistically significant effects in their studies by engaging in questionable research practices (QRPs), such as selectively reporting outcome measures that produced significant effects and dropping experimental conditions that produced no effect. Another QRP could include analysing your data in a variety of ways (for example, maybe a couple of participants didn\u2019t show the effect you were looking for, so why not remove them from the analysis and see whether that \u201cclears things up\u201d?). What was concerning about this study is that many of these QRPs were not really considered \u201cquestionable\u201d at the time. Indeed, many researchers have admitted to engaging in such QRPs (John et al., 2013). As such, I do not believe that the presence of QRPs reflect explicit attempts at fraud. Rather, they likely stem from a blurred distinction between exploratory and confirmatory research. In exploratory research, many measures might be taken, many experimental conditions administered, and the data scrutinised using a variety of approaches looking for interesting patterns. Confirmatory research tests explicit hypotheses using pre-planned methods and analytical strategies. Both approaches are valid\u2014exploratory research can generate interesting questions, and confirmatory research can address these questions\u2014but what is not valid is to report an exploratory study as though it were confirmatory (Wagenmakers et al., 2012); that is, to find an effect in exploratory research and to publish the finding together with a narrative that this effect was expected all along. Many researchers have started to pre-register their studies detailing their predictions, experimental protocols, and planned analytical strategy before data collection begins. 
When the study is submitted for publication, researchers can demonstrate that no QRPs have occurred because they can point to a time-stamped document verifying their plans before data collection commenced, leading to an increase in confidence in the claims reported. This is confirmatory research at its finest.

Some journals have taken this one stage further by introducing Registered Reports, where papers containing details of a study's rationale and detailed proposed methods are reviewed and accepted (or rejected!) for publication before the experiment has been conducted. The neuroscience journal Cortex—with their Registered Reports Editor Professor Chris Chambers of Cardiff University—has led the way with this format. Many other journals have now started to offer such reports.

This is an important contribution to the academic publishing structure because it incentivises best research practice. Here research is judged on the soundness of the methods and the importance of the question being addressed, not the particular results of the study. Current incentive structures in our universities—together with general pressure for increased publications (the so-called “publish or perish” attitude)—lead researchers to prioritise “getting it published” over “getting it right” (Nosek et al., 2012), potentially leading to implicit or explicit use of QRPs to ensure a publishable finding. With the advent of Registered Reports, researchers can finally do both: prioritise “getting it right” by submitting a strong and well-evidenced research proposal, and it will be published regardless of what the data say.

## Open Data, Open Materials

Science works by independent verification, not by appeal to authority.
As noted by Wicherts and colleagues (2011), independent verification of data analysis is important because “…analyses of research data are quite error prone, accounts of statistical results may be inaccurate, and decisions that researchers make during the analytical phase of a study may lean towards the goal of achieving a preferred (significant) result” (p. 1). Given this importance, most journal policies ask researchers to make their data available. Yet Wicherts and colleagues (2006) found that 73% of researchers failed to provide their data when asked. Some researchers have begun to refuse to review journal article submissions unless the authors provide their data (or provide a convincing reason for why this is not possible) as part of the Peer Reviewers' Openness Initiative (see Morey et al., 2016); after all, if a reviewer cannot access the data a paper is based upon, how can a full review be completed?

The flagship psychology journal Psychological Science has, since 2014, incentivised researchers to share their experimental materials and data by awarding badges to studies that comply with open practices by publishing data and materials together with their papers in the journal. (The journal offers a third badge if the study is pre-registered.) This intervention has been remarkably effective: Kidwell et al. (2016) reported that 23% of studies in Psychological Science provided open data, a rise from lower than 3% before the badges were in use. More journals are now encouraging authors to make their data open as a consequence.

## Registered Replication Reports

I tell my students all the time that “replication is the most important statistic” (source of quote unknown). To me, an empirical finding in isolation doesn't mean all that much until it has been replicated. In my own lab, I make an effort to replicate an effect before trying to publish it.
As my scientific hero Richard Feynman is famous for saying, “Science is a way of trying not to fool yourself… and you are the easiest person to fool”. As scientists, we have a professional responsibility to ensure the findings we are reporting are robust and reproducible. But we must also not allow others' findings to fool us. That is why replication of other people's findings should become a core component of any working lab (a course of action we have facilitated by publishing a “Replication Recipe”: a guide to performing convincing replications; Brandt et al., 2014).

You'd be forgiven for thinking that reports of replications must be commonplace in the academic literature. This is not the case. Many journals seek novel theories and/or findings, and view replications as treading over old ground. As such, there is little incentive for career-minded academics to conduct replications. However, if the results of the Open Science Collaboration (2015) tell us nothing else, it is that old ground needs to be re-trodden.

The Registered Replication Report format in the high-impact journal Perspectives on Psychological Science seeks to change this. In this format, many teams of researchers each independently perform a close replication of an important finding in the literature, all following an identical and shared protocol of study procedures. The final report—a single paper with all contributing researchers gaining authorship—collates the findings across all teams in a meta-analysis to firmly establish the size and reproducibility of an effect. Such large-scale replication attempts in a high-profile journal such as Perspectives can only help to encourage psychological scientists to view replication as a valid area of their research programme.

## Conclusion

2011 was described as a year of horrors for psychological science.
Whilst improvements can certainly still be made, our discipline has made impressive strides to improve our science. In just 6 years psychological science has moved from a discipline in crisis to a discipline leading the way in how to conduct strong, rigorous, reproducible research.

## References

Baker, M., & Dolgin, E. (2017). Cancer reproducibility project releases first results. Nature, 541(7637), 269.

Brandt, M. J., IJzerman, H., Dijksterhuis, A., Farach, F., Geller, J., Giner-Sorolla, R., Grange, J. A., Perugini, M., Spies, J., & van 't Veer, A. (2014). The replication recipe: What makes for a convincing replication? Journal of Experimental Social Psychology, 50, 214-224.

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science, 23, 524-532.

Kidwell, M. C., Lazarević, L. B., Baranski, E., Hardwicke, T. E., Piechowski, S., Falkenberg, L.-S., et al. (2016). Badges to acknowledge open practices: A simple, low-cost, effective method for increasing transparency. PLoS Biology, 14(5), e1002456.

Morey, R. D., Chambers, C. D., Etchells, P. J., Harris, C. R., Hoekstra, R., Lakens, D., … Zwaan, R. A. (2016). The peer reviewers' openness initiative: Incentivizing open research practices through peer review. Royal Society Open Science, 3(1), 150547.

Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615-631.

Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349, 943.

Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359-1366.

Simons, D. J., Holcombe, A. O., & Spellman, B. A. (2014). An introduction to Registered Replication Reports at Perspectives on Psychological Science. Perspectives on Psychological Science, 9, 552-555.

Wagenmakers, E.-J. (2012). A year of horrors. De Psychonoom, 27, 12-13.
Wagenmakers, E.-J., Wetzels, R., Borsboom, D., van der Maas, H. L. J., & Kievit, R. (2012). An agenda for purely confirmatory research. Perspectives on Psychological Science, 7, 632-638.

Wicherts, J. M., Bakker, M., & Molenaar, D. (2011). Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PLoS ONE, 6(11), e26828.

Wicherts, J. M., Borsboom, D., Kats, J., & Molenaar, D. (2006). The poor availability of psychological research data for reanalysis. American Psychologist, 61, 726-728.

# Low Power & Effect Sizes

Yesterday I posted the following tweet, which has since turned out to be my most popular tweet EVER, with hundreds of retweets and “likes” in 24 hours:

My motivation for the tweet was quite straightforward. I have recently been emailing academics in my department every week with different topics in an attempt to raise awareness of topics associated with increasing the information value of the research we are conducting as a department. This week's topic was “Power”. In my email—in which I included a copy of Button et al.'s (2013) excellent paper on low power in the neurosciences—I mentioned in passing that power is not just an issue for statistical significance. I have heard from people before that low power is only an issue when interpreting null results, and that if a study produces a significant outcome, then power is not an issue. To pre-empt this response to my encouragement to increase the power of our studies, I said in my email:

“Studies with low power and significant effects have been shown to over-estimate effect sizes, meaning your low-powered study—although significant—is not giving you precision.”

As soon as I sent the email, I realised that I couldn't recall ever reading a study that had demonstrated this.
Now, I knew that such a study (or studies) would have been conducted, but I realised that I had never actually read one myself. It turns out that such studies have indeed been conducted before, as people helpfully pointed out to me on Twitter in response to my tweet.

As I was unaware of these studies—plus it was a Sunday, and I was a little bored—I thought that instead of doing a literature search I would code a simulation demonstrating the inflation of effect sizes in low-powered, significant studies, the results of which I emailed to my department to demonstrate that what I had said was indeed the case. Then I thought, “Well, I haven't tweeted much this year, so why not put it on Twitter, too.” The incredible engagement I have had with this tweet—I propose—is due to this being a rather under-appreciated fact. Indeed, I “knew” that low-powered studies over-estimate effect sizes, but I didn't KNOW it in the sense that I had seen hard evidence for it.

## Details of Simulation

Because my tweet was made in passing, I didn't explain the simulation implementation in much detail. I discuss it here in case others want to extend the simulation in some way. The effect size of interest is a measure of correlation between two variables. I arbitrarily chose IQ (mean = 100, SD = 20) and response time (mean = 600ms, SD = 80ms). I fixed the “true” effect size to be r = 0.3. It turns out that obtaining 80% power for r = 0.3 requires 85 subjects. In my simulation, I wanted to explore a wide range of sample sizes, so I chose the set 10, 20, 30, 50, 85, 170, 500, and 1000. For each sample size—N—I simulated 1,000 “studies”.
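The original code wasn't posted alongside the tweet, so what follows is my own hedged reconstruction of the kind of simulation described, assuming `MASS::mvrnorm()` for the correlated sampling and a two-tailed α of .05, and including a check of the 85-subject figure via the `pwr` package:

```r
library(pwr)   # for the power calculation
library(MASS)  # for mvrnorm(), to sample correlated data

# sample size needed for 80% power to detect r = 0.3 at alpha = .05
pwr.r.test(r = 0.3, sig.level = 0.05, power = 0.80)  # n is roughly 84, so 85 subjects

set.seed(42)

# population parameters: IQ (mean 100, SD 20), RT (mean 600, SD 80), r = .3
mu    <- c(iq = 100, rt = 600)
r     <- 0.3
sigma <- matrix(c(20^2,        r * 20 * 80,
                  r * 20 * 80, 80^2), nrow = 2)

sample_sizes <- c(10, 20, 30, 50, 85, 170, 500, 1000)
n_studies    <- 1000
results      <- data.frame()

for (n in sample_sizes) {
  for (study in seq_len(n_studies)) {
    d    <- mvrnorm(n, mu = mu, Sigma = sigma)
    test <- cor.test(d[, 1], d[, 2])

    # only keep the observed effect size if it reached significance
    if (test$p.value < .05) {
      results <- rbind(results, data.frame(n = n, r_obs = test$estimate))
    }
  }
}

# significant low-N studies systematically overestimate the true r of 0.3
boxplot(r_obs ~ n, data = results, xlab = "Sample size",
        ylab = "Observed r (significant studies only)")
abline(h = 0.3, lty = "dashed")
```

This is a sketch, not the author's code; the seed, loop structure, and plotting choices are my own assumptions.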
For each simulated study, the following procedure occurred:

• Sample N draws from a multivariate normal distribution with the means and SDs for IQ and RT as above and a population correlation coefficient of 0.3
• Conduct a Pearson's correlation between the two samples
• If the correlation was significant, store the observed correlation coefficient in a new data frame
• If the correlation was not significant, move on without storing anything
• After 1,000 studies are completed, plot a boxplot of the observed effect sizes for N

The result was the image in the tweet.

## Limitations

Many limitations exist to this simulation, and I point interested readers to the material cited above in others' tweets for a more formal treatment. I didn't intend for this to be a rigorous test, so it shouldn't be taken too seriously; it was more for my own curiosity, and also to provide a graphical image I could send to my colleagues at Keele to show the imprecision of effect sizes with low power. The particular outcomes are likely sensitive to my choice of means, SDs, r, etc. So don't generalise the specifics of this simulation, but maybe code your own tailored to your study of interest. For me this was a bit of fun. Ten minutes of coding was time well spent on Sunday!

# The Polls Weren't Wrong

TL;DR: Trump had a 28% chance to win. We shouldn't be surprised he won.

I'm not going to comment on the political outcome of last week's US Presidential election; enough ink—both pen-ink and eye-ink—has been spilled about that. What I am going to comment on, though, is the growing feeling that the polls were wrong, and why an understanding of probability and statistical evidence might lead us to a more positive conclusion (or at least, a less negative one). After last week's “shock” result—read on for why shock is in scare-quotes—many news articles began to ask “Why were the polls wrong?”
(For a list, see the Google search here.) This question is largely driven by the fact that influential pollsters heavily favoured a Clinton victory. For example, FiveThirtyEight's polls predicted a 71.4% chance of a Clinton victory. The New York Times predicted an 85% chance of a Clinton victory. Pretty convincing, huh? Something must have gone wrong in these polls, right?

All polls are known to be sub-optimal, but even if we found a way to conduct a perfect poll, and this perfect poll predicted a 71.4% chance of a Clinton victory, could we then state, after observing a Trump victory, that the polls were wrong? No, and the reason most of us find this difficult to grasp is that most of us don't truly appreciate probability.

No poll that I am aware of predicted a 100% chance of a Clinton victory. All polls that I saw gave a non-zero chance of a Trump victory. So, even if with our “perfect” poll we see that Trump had a 28.6% chance of winning the election, we should not be surprised by a Trump victory. You can be disgusted, saddened, and/or scared, but you should not be surprised. After all, something with a 28.6% chance of occurring has—you guessed it!—a 28.6% chance of occurring.

28.6% translates to a 1 in 3.5 chance. If you think of a 6-sided die, each number has a 1 in 6 chance of being rolled on a single roll (~16.67% chance). Before you roll the die, you expect to see any number other than a 6. Are you surprised if, when you roll the die, you observe a 6? Probably not. It's not that remarkable. Yet it is expected less than Trump's 28.6%. Likewise, if the weather-person on TV tells you there is a 28.6% chance of rain today, are you surprised if you get caught in a shower on your lunch break? Again, probably not.

So, the polls weren't wrong at all. All predicted a non-zero chance of a Trump victory. What was wrong was the conclusion drawn from the polls.
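As a quick sanity check (my own sketch, not from the original post), simulating many elections in which one candidate has a 28.6% chance of winning shows just how ordinary such an "upset" is:

```r
# Simulate 100,000 hypothetical elections in which the underdog has a
# 28.6% win probability; the underdog wins in roughly 28.6% of them.
set.seed(2016)
wins <- rbinom(100000, size = 1, prob = 0.286)
mean(wins)  # close to 0.286: more than 1 in 4 such elections are "upsets"
```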
## Richard Royall & “Statistical Evidence”

The above raced through my mind without a second thought when I read numerous articles claiming the polls were wrong, but it was brought into sharper focus today when I was reading Richard Royall's (excellent) chapter “The Likelihood Paradigm for Statistical Evidence”. In this chapter, he poses the following problem.

A patient is given a non-perfect diagnostic test for a disease; this test has a 0.94 probability of detecting the disease if it is present in the patient (and therefore a 0.06 probability of missing the disease when it is present). However, it also has a non-zero probability of 0.02 of producing a “positive” detection even though the disease is not present (i.e., a false positive). The table below outlines these probabilities of the test result for a patient who does have the disease (X = 1) and a patient who does not have the disease (X = 0).

| Test result | X = 1 (disease present) | X = 0 (disease absent) |
|-------------|-------------------------|------------------------|
| Positive    | 0.94                    | 0.02                   |
| Negative    | 0.06                    | 0.98                   |

Now a patient comes to the clinic and the test is administered. The doctor observes a positive result. What is the correct conclusion the doctor can make based on this positive test result?

1. The person probably has the disease.
2. The person should be treated for the disease.
3. This test result is evidence that this person has the disease.

### The person probably has the disease

Intuitively, I think most people would answer that this is correct. After all, the test has a 0.94 probability of detecting the disease if present, and we have a positive test result. It's unlikely that this is a false positive, because false positives occur with a probability of only 0.02. However, this does not take into account the prior probability of the disease being present. (Yes, I have just gone Bayesian on you.) If the disease is incredibly rare, then it turns out that there is a very small probability that the patient has the disease even after observing a positive test outcome.
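To make this concrete, here is a worked example of my own (the 1-in-1,000 base rate is a hypothetical assumption, not a figure from Royall's chapter), applying Bayes' rule with the test properties above:

```r
# P(disease | positive test) via Bayes' rule, using the test's
# sensitivity and false-positive rate from the table above, and an
# assumed (hypothetical) disease prevalence of 1 in 1,000
p_pos_given_disease    <- 0.94   # sensitivity
p_pos_given_no_disease <- 0.02   # false-positive rate
prior                  <- 0.001  # assumed base rate

posterior <- (p_pos_given_disease * prior) /
  (p_pos_given_disease * prior + p_pos_given_no_disease * (1 - prior))

posterior  # about 0.045: even after a positive test, disease is unlikely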
For a nice example of how the prior probability of the disease influences the outcome, see here.

### The person should be treated for the disease

It should be clear from the above that this conclusion also depends on the prior probability of the disease. If the disease is incredibly rare, the patient doesn't likely have it (even after a positive test result), so don't waste resources (and risk potential harm to the patient). Again, the evidence doesn't allow us to draw this conclusion.

### This test result is evidence that this person has the disease

Royall argues that this is the only conclusion one can draw from the evidence. It is subtly different from Conclusion 1, but follows naturally from the “Law of Likelihood”:

> If hypothesis A implies that the probability that a random variable X takes the value x is pA(x), while hypothesis B implies that the probability is pB(x), then the observation X = x is evidence supporting A over B if and only if pA(x) is greater than pB(x)…

In our “disease” example, the observation of a positive result is evidence that this person has the disease, because this outcome (a positive result) is better predicted under the hypothesis of “disease present” than under the hypothesis of “disease absent”. But it doesn't mean that the person probably has the disease, or that we should do anything about it.

## Back to Trump

Observing a Trump victory after a predicted win probability of 28.6% isn't that surprising. The polls weren't wrong. 28.6% is a non-zero chance. We should interpret this evidence in a similar way to the disease example: these poll results are evidence that Clinton will win. It is a mistake to interpret them as “Clinton will probably win”.

# Replication Crisis: What Changes Have your Department Made?
It's been a year since the Open Science Collaboration's paper “Estimating the Reproducibility of Psychological Science” was published in Science. It has been cited 515 times since publication, and has been met with much discussion on social networks. I am interested in what changes your psychology department has made since. Are staff actively encouraged to pre-register all studies? Do you ask faculty members for open data? Are faculty members asked to provide open materials? Do ethics panels check the power of planned studies? Have you embedded open science practices into your research methods teaching?

I am preparing a report for my department on how we can address the issues surrounding the replication crisis, and I would be very interested to hear what other departments have done to address these important issues. Please comment on this post with what your department has done!

# Do Olympic Hosts Have a “Home-Field” Advantage?

My wife questioned in passing yesterday whether summer Olympic hosts have a home-field advantage; that is, do hosts generally win more medals in their hosting year than in their non-hosting years? That a home-field advantage exists in many team sports is generally not disputed—see, for example, this excellent blog post by the Freakonomics team. But is this true for (generally) individual sports like the Olympics? Most of us Brits recall our amazing—and quite unusual—3rd place finish when we hosted the event in 2012, so anecdotally I can understand why suspicion of a home-field advantage exists. But is it real?

I am quite sure there is an answer to this question on the web somewhere, but I wanted to take this opportunity to try to find an answer myself. Basically, I saw this as an excuse to learn some web-scraping techniques using R.

## The Data

Wikipedia holds unique pages for each Summer Olympic games.
On these pages are medal tables tallying the number of Gold, Silver, and Bronze medals each competing nation won that year, as well as the total medals. So, I wrote some code in R that visits each of these pages in turn, finds the relevant HTML table containing the medal counts, and extracts it into my workspace. I only looked at post-Second-World-War games.

My idea was to plot all the medals won by each host nation across all the years they have appeared at the games. I was interested in whether the total number of medals that the host won in their host year was more than their average (mean) across all the games the host had appeared in. If there is some sort of home-field advantage, generally we would expect their host year to be one of their better years, certainly above their average Olympic performance.

## The Results

Below is a plot of the results. The header of each plot shows who the host was that year, and the data in each plot show the total number of medals won by the host in all of the games they have appeared in. To help interpretation, for each plot the vertical blue line shows the year that nation hosted the games, and the horizontal red line shows that nation's mean performance across all their games.

## Conclusion

I would take this data as providing some evidence that nations generally perform better when they are hosting the games. 11 out of 16 nations had their best year the year they hosted the games. All nations performed above average the year they hosted the games (although maybe Canada, 1976, just missed out).

## The Real Conclusion (And the Code)

Coding in R is fun, and I look for any excuse to work on new projects. This was my first attempt at web scraping, and it wasn't as painful as I thought it would be. Below is the code, relying a lot on the rvest R package, which I highly recommend; check out this nice introduction to using it.
It's certainly not optimal, and likely full of errors, but I hope someone finds it of use. Although I tried to automate every aspect of the analysis, some aspects had to be manually altered (for example, to match "Soviet Union" data with "Russia" data).

```r
#------------------------------------------------------------------------------
# clear workspace
rm(list = ls())

# set working directory
setwd("D:/Work/Blog_YouTube code/Blog/Olympic Medals")

# load relevant packages
library(rvest)
library(stringr)
library(dplyr)
library(ggplot2)

# suppress warnings
options(warn = -1)
#------------------------------------------------------------------------------

#------------------------------------------------------------------------------
### get a list of all of the host nations

# set the url and extract html elements
# (read_html() is the current name for rvest's old html() function)
host_url <- "http://www.topendsports.com/events/summer/hosts/list.htm"
temp <- host_url %>%
  read_html %>%
  html_nodes("table")

# extract the relevant table
hosts <- data.frame(html_table(temp[1]))

# remove the years that the Olympics were not held
hosts <- hosts[!grepl("not held", hosts$Host.City..Country), ]

# remove the cities from the host column
countries <- hosts$Host.City..Country
countries <- gsub(".*,", "", countries)
hosts$Host.City..Country <- countries

# remove the Olympics that are ongoing (or are yet to occur) and generally
# tidy the table up. Also, only select post-1948 games.
hosts <- hosts %>%
  select(year = Year, host = Host.City..Country) %>%
  filter(year < 2016 & year > 1948)

# remove white space from the names
hosts$host <- gsub(" ", "", hosts$host, fixed = TRUE)

# change host England to Great Britain, SouthKorea to South Korea,
# and USSR to Russia
hosts$host <- gsub("England", "Great Britain", hosts$host, fixed = TRUE)
hosts$host <- gsub("SouthKorea", "South Korea", hosts$host, fixed = TRUE)
hosts$host <- gsub("USSR", "Russia", hosts$host, fixed = TRUE)
#------------------------------------------------------------------------------

#------------------------------------------------------------------------------
### get the medal tables for each year and store them in one list

# get a vector of all years
years <- hosts$year

# create a list to store the medal tables
medal_tables <- list()

# loop over each year and retrieve the data from Wikipedia
for(i in 1:length(years)){

  # what is the current year?
  curr_year <- years[i]

  # construct the relevant URL to the Wikipedia page
  url <- paste("https://en.wikipedia.org/wiki/", curr_year,
               "_Summer_Olympics_medal_table", sep = "")

  # retrieve the data from this page
  temp <- url %>%
    read_html %>%
    html_nodes("table")

  # find the html table's position. The medal table is in a "sortable" Wiki
  # table, so we search for this term and return its position in the list
  position <- grep("sortable", temp)

  # get the medal table. Add a new column storing the year
  medals <- data.frame(html_table(temp[position], fill = TRUE))
  medals <- medals %>%
    mutate(Year = curr_year)

  # change the name of the "Nation" column, as this is not consistent between
  # games tables
  colnames(medals)[2] <- "Nation"

  # remove the weird symbols from the html file (Â)
  nations <- medals$Nation
  nations <- gsub("[^\\x{00}-\\x{7f}]", "", nations, perl = TRUE)

  # we need to change "Soviet Union" to Russia for consistency
  nations <- gsub("Soviet Union(URS)", "Russia(RUS)", nations, fixed = TRUE)

  # also change West & East Germany to "Germany"
  nations <- gsub("East Germany(GDR)", "Germany(GER)", nations, fixed = TRUE)
  nations <- gsub("West Germany(FRG)", "Germany(GER)", nations, fixed = TRUE)
  medals$Nation <- nations

  # save the medal table and move to the next games
  medal_tables[[i]] <- medals
}
#------------------------------------------------------------------------------

#------------------------------------------------------------------------------
### loop over each host, then find how many medals they won in each games and
### store it in a data frame

# initialise the data frame
final_data <- data.frame(hosts)
final_data[, as.character(years)] <- 0

for(i in 1:length(hosts$host)){

  # get the current host
  curr_host <- hosts$host[i]

  # loop over all years, find the number of medals won by the current host,
  # and store it in the final_data frame
  for(j in 1:length(years)){

    # what is the current year?
    curr_year <- years[j]

    # get the medal table for the current year
    curr_medals <- medal_tables[[j]]

    # get the row for the current host if it is present
    curr_medals <- curr_medals %>%
      filter(str_detect(Nation, curr_host))

    # collate the number of medals won if there is data
    if(nrow(curr_medals) > 0){
      final_data[i, j + 2] <- sum(curr_medals$Total)
    } else {
      final_data[i, j + 2] <- 0
    }

  } # end of each year loop

} # end of each host loop
#------------------------------------------------------------------------------

#------------------------------------------------------------------------------
### now do some plotting
pdf("medals.pdf", width = 12, height = 12)

# change the layout of the plotting window
par(mfrow = c(4, 4))

# loop over each hosting nation
for(i in 1:nrow(final_data)){

  # get the current host's data for all years
  host_data <- as.numeric(final_data[i, 3:ncol(final_data)])

  # what is their mean number of medals won?
  host_mean <- mean(host_data)

  # plot the data!
  plot(years, host_data, xlab = "Year", ylab = "Number of Medals", pch = 19,
       type = "b", lwd = 2,
       main = paste(hosts$host[i], "–", years[i], sep = ""))
  abline(v = final_data$year[i], lty = "dashed", col = "blue", lwd = 1.5)
  abline(h = host_mean, lty = "dashed", col = "red", lwd = 1.5)
}

# close the pdf device
dev.off()
#------------------------------------------------------------------------------
```

# Solution to #BarBarPlots in R

I came across an interesting project the other day which is calling for a reconsideration of the use of bar plots (#barbarplots), with the lovely tag-line "Friends don't let friends make bar plots!".
The project elegantly outlines convincing reasons why bar plots can be misleading, and has successfully funded a campaign to "…increase awareness of the limitations that bar plots have and the need for clear and complete data visualization".

In this post, I want to show the limitations of bar plots that these scientists have highlighted. Then, I provide a solution to these limitations for researchers who want to continue using bar plots, which can easily be cobbled together using R (with the ggplot2 package).

## The Data

Say you are a researcher who collects some data (it doesn't matter on what) from two independent groups and you are interested in whether there is a difference between them. Most researchers would maybe calculate the mean and standard error of each group to describe the data. Then the researcher might plot the data using a bar plot, together with error bars representing the standard error. To provide an inferential test of whether a difference exists, the researcher would usually conduct an independent-samples t-test.

Let's provide some example data for two conditions:

• condition A (n = 100): mean of 200.17, a median of 196.43, and a standard error of 6.12
• condition B (n = 100): mean of 200.11, a median of 197.87, and a standard error of 7.19

Here is the bar plot:

Pretty similar, right? The researcher sees that there is little evidence for a difference; to test this inferentially they conduct an independent-samples t-test, with the outcome t(198) = 0.007, p = .995, Cohen's d < 0.001. The researcher concludes there is no difference between the two groups.

## The Problem

The problem raised by the #barbarplot campaign is that bar plots are a poor summary of the distribution of data. The bar plot above suggests there is no difference between the two groups, but the two groups are different! How do I know they are different? I simulated the data.
What the bar plot hides is the shape of the underlying distribution of each data set. Below I present a density plot (basically a smoothed histogram) of the same data as above:

Now we can see that the two groups are clearly different! Condition A is a normal distribution, but condition B is bi-modal. The bar plot doesn't capture this difference.

## The Solution

Density plots are a nice solution for presenting the distribution of data, but they can get really messy when there are multiple conditions (imagine the above density plot but with 4 or more overlapping conditions). Plus, researchers are used to looking at bar plots, so there is something to be said for continuing their use (especially for factorial designs). But how do we get around the problem highlighted by the #barbarplot campaign?

One solution is to plot the bar plots as usual, but to overlay the bar plot with individual data points. Doing this allows the reader to see the estimates of central tendency (i.e., to interpret the bar plot as usual), whilst at the same time allowing the reader to see the spread of data in each condition. This sounds tricky to do (and it probably is if you are still using Excel; yes, I'm talking to you!), but it's simple if you're using R.

Below is the above data plotted as a combined bar and point plot. As you can see, the difference in distribution is now immediately apparent, whilst retaining the advantages of a familiar bar plot. Everyone wins!

## R Code

Below is the R code for the combined plot.
This includes some code that generates the artificial data used in this example.

#------------------------------------------------------------------------------
library(ggplot2)
library(dplyr)

#--- Generate artificial data

# set random seed so the example is reproducible
set.seed(100)

# generate condition A
condition <- rep("condition_A", 100)
dv_A <- rnorm(100, 200, 60)
condition_A <- data.frame(condition, dv = dv_A)

# generate condition B (bimodal: a mixture of two normal distributions)
condition <- rep("condition_B", 100)
dv_B <- c(rnorm(50, 130, 10), rnorm(50, 270, 10))
condition_B <- data.frame(condition, dv = dv_B)

# put all in one data frame
raw_data <- rbind(condition_A, condition_B)

# calculate summary statistics
data_summary <- raw_data %>%
  group_by(condition) %>%
  summarise(mean = mean(dv),
            median = median(dv),
            se = sd(dv) / sqrt(length(dv)))
#------------------------------------------------------------------------------

#------------------------------------------------------------------------------
#--- Do the "combined" bar plot
p2 <- ggplot()

# first draw the bar plot
p2 <- p2 + geom_bar(data = data_summary,
                    aes(y = mean, x = condition),
                    fill = "darkgrey",
                    stat = "identity", width = 0.4)

# draw the error bars on the plot
p2 <- p2 + geom_errorbar(data = data_summary,
                         aes(y = mean, x = condition,
                             ymin = mean - se,
                             ymax = mean + se),
                         stat = "identity", width = 0.1, size = 1)

# now draw the individual points, jittered so they don't overlap
p2 <- p2 + geom_point(data = raw_data, aes(y = dv, x = condition),
                      size = 3, alpha = 0.3,
                      position = position_jitter(width = 0.3, height = 0.1))

# scale and rename the axes, and make the font size a bit bigger
p2 <- p2 + coord_cartesian(ylim = c(50, 400))
p2 <- p2 + scale_x_discrete(name = "Condition") +
  scale_y_continuous(name = "DV")

p2 <- p2 + theme(axis.text = element_text(size = 12),
                 axis.title = element_text(size = 14, face = "bold"))

# view the plot
p2
#------------------------------------------------------------------------------


# 10 Recommendations from the Reproducibility Crisis in Psychological Science

This week I gave an internal seminar at my institution (Keele University, UK) entitled "Ten Recommendations from the Reproducibility Crisis in Psychological Science". The audience was faculty members and psychology graduate students. My aim was to collate some of the "best practices" that have emerged over the past few years and provide direct advice on how researchers and institutions can adapt their research practice. It was hard to come up with just 10 recommendations, but I finally decided on the following:

1. Replicate, replicate, replicate
2. Statistics (i): Beware p-hacking
3. Statistics (ii): Know your p-values
4. Statistics (iii): Boost your power
5. Open data, open materials, open analysis
6. Conduct pre-registered confirmatory studies
7. Incorporate open science practices in teaching
8. Insist on open science practices as reviewers
9. Reward open science practices (Institutions)
10. Incorporate open science into hiring decisions (Institutions)

The link to the slides is below. I might expand upon this in a fuller blog post in time, if there is interest.

# "Bayesian in 8 Easy Steps" Journal Club

I've been trying to organise an online journal club to discuss the papers suggested in Alexander Etz and colleagues' paper "How to become a Bayesian in 8 easy steps". Several people have filled out the Doodle poll expressing an interest, but unfortunately not everyone can make the same time. As such, I am afraid I will have to go with the time which the majority of people can make. I am sorry that this will leave some people out.

The most popular day & time was Thursdays at 1pm UTC. Therefore, I propose the first meeting be on Thursday 10th March at 1pm.
It will be on Google Hangouts, but I need to spend some time working out how to use this before I pass on details of the meet.

http://doodle.com/poll/7im5vnk9cddc3vyb

See you there!

# (Pesky?) Priors

When I tell people I am learning Bayesian statistics, I tend to get one of two responses: either people look at me blankly ("What's Bayesian statistics?"), or I get scorned for using such "loose" methods ("Bayesian analysis is too subjective!"). This latter "concern" arises due to (what I believe to be a misunderstanding of) the prior: Bayesian analysis requires one to state one's prior belief about a certain effect, and then combine this with the data observed (i.e., the likelihood) to update one's belief (the posterior).

On the face of it, it might seem odd for a scientific method to include "subjectivity" in its analysis. I certainly had this doubt when I first started learning it. (And, in order to be honest with myself, I still struggle with it sometimes.) But the more I read, the more I think this concern is not warranted, as the prior is not really "subjectivity" in the strictest sense of the word at all: it is based on our current understanding of the effect we are interested in, which in turn is (often) based on data we have seen before. Yes, sometimes the prior can be a guess if we have no other information to go on, but we would express the uncertainty of such a belief in the prior itself.

The more I understand Bayesian statistics, the more I appreciate that the prior is essential. One under-stated side effect of having priors is that they can protect you from dubious findings. For example, I have a very strong prior against UFO claims; therefore, you are going to have to present me with a lot more evidence than some shaky video footage to convince me otherwise.
You would not have to provide me with much evidence, however, if you claimed to have had roast beef last night. Extraordinary claims require extraordinary evidence.

But, during my more sceptical hours, I often succumbed to the the-prior-is-nothing-but-subjectivity-poisoning-your-analysis story. However, I now believe that even if one is sceptical of the use of a prior, there are a few things to note:

• If you are concerned your prior is wrong and is influencing your inferences, just collect more data: a poorly specified prior will be washed away with sufficient data.

• The prior isn't (really) subjective because it would have to be justified to a sceptical audience. This requires (I suggest) plotting what the prior looks like so readers can familiarise themselves with it. Is it really subjective if I show you what my prior looks like and I can justify it?

• Related to the above, the effect of the prior can be investigated using robustness checks, where one plots the posterior distribution based on a range of (plausible) prior values. If your conclusions don't depend upon the exact prior used, what's the problem?

• Priors are not fixed. Once you have collected some data and have a posterior belief, if you wish to examine the effect further you can (and should) use the posterior from the previous study as your prior for the next study.

These are the points I mention to anti-Bayesians I encounter. In this blog post I just want to skim over some of these with examples. This is selfish; it's not really for your education (there really are better educators out there: my recommendation is Alex Etz's excellent "Understanding Bayes" series, from which this blog post takes much inspiration!). I just want somewhere with all of this written down so next time someone criticises my interest in Bayesian analysis I can just reply: "Read my blog!".
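The bullet points above lean on one small piece of machinery that the coin examples below also rely on: with a beta prior over the coin's bias theta and binomial flip data, the posterior is again a beta distribution. Here is a sketch of my own in Python (the post itself shows plots, not code, and the exact beta parameters behind its figures aren't stated, so Beta(2, 2), Beta(50, 50) and Beta(5, 5) are stand-in values):

```python
# Conjugate updating for a coin's bias theta:
# Beta(a, b) prior + (heads, tails) data -> Beta(a + heads, b + tails) posterior.

def update_beta(a, b, heads, flips):
    """Posterior beta parameters after observing `heads` in `flips` tosses."""
    return a + heads, b + flips - heads

def beta_mean(a, b):
    """Mean of a Beta(a, b) distribution: a point summary of the belief."""
    return a / (a + b)

# A weak fair-coin prior barely resists 9/10 heads...
weak = update_beta(2, 2, 9, 10)
# ...while a strong fair-coin prior is hardly moved by the same data.
strong = update_beta(50, 50, 9, 10)
print(beta_mean(*weak))    # ~0.79: the posterior mostly follows the data
print(beta_mean(*strong))  # ~0.54: the posterior stays close to "fair"

# "Use the posterior from the previous study as your prior for the next study":
# chaining two 18/20-heads studies gives exactly the same posterior
# as a single pooled 36/40 study.
study1 = update_beta(5, 5, 18, 20)
chained = update_beta(*study1, 18, 20)
pooled = update_beta(5, 5, 36, 40)
print(chained == pooled)   # True: the order of updating doesn't matter
```

The last comparison is the "today's posterior is tomorrow's prior" point made exact: sequential updating and pooling the data are equivalent.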
(Please do inform me of any errors/misconceptions by leaving a comment!)

As some readers might not be massively familiar with these issues, I try to highlight some of the characteristics of the prior below. In all of these examples, I will use the standard Bayesian "introductory tool" of assessing the degree of bias in a coin by observing a series of flips.

### A Fair Coin

If a coin is unbiased, it should produce roughly equal numbers of heads and tails. However, often we don't know whether a coin is biased or not. We wish to estimate the bias in the coin (denoted theta) by collecting some data (i.e., by flipping the coin); a fair coin has theta = 0.5. Based on this data, we can calculate the likelihood of various theta values. Below is the likelihood function for a fair coin.

In this example, we flipped the coin 100 times and observed 50 heads and 50 tails. Note how the peak of the likelihood is centered on theta = 0.5. A biased coin would have a true theta not equal to 0.5; a theta closer to zero would reflect a bias towards tails, and a theta closer to 1 would reflect a bias towards heads. The animation below demonstrates how the likelihood changes as the number of observed heads (out of 100 flips) increases:

So, the likelihood contains the information provided by our sample about the true value of theta.

### The Prior

Before collecting data, Bayesian analysts would specify their prior belief about theta. Below I present various priors a Bayesian may hold, using the beta distribution (which has two parameters: a and b):

The upper-left plot reflects a prior belief that the coin is fair (i.e., the peak of the distribution is centered over theta = 0.5); however, there is some uncertainty in this prior as the distribution has some spread. The upper-right plot reflects total uncertainty in a prior belief: that is, the prior holds that any value of theta is equally likely.
The lower two plots reflect prior beliefs that the coin is biased. Maybe the researcher had obtained the coin from a known con artist. The lower-left plot reflects a prior for a biased coin, but uncertainty about which side the coin is biased towards (that is, it could be biased towards heads or tails); the lower-right plot reflects a prior that the coin is biased towards heads.

## The effect of the prior

I stated above that one of the benefits of the prior is that it offers some protection from spurious findings. If I have a really strong prior belief that the coin is fair, 9/10 heads isn't going to be all that convincing evidence that it is not fair. However, if I have a weak prior that the coin is fair, then I will be quite convinced by the data.

This is illustrated below. Both priors below reflect the belief that the coin is fair; what differs between the two is the strength of this belief. The prior on the left is quite a weak belief, as the distribution (although peaked at 0.5) is quite spread out. The prior on the right is a stronger belief that the coin is fair.

In both cases, the likelihood is the result of observing 9/10 heads.

You can see that when the prior is a weak belief, the posterior is very similar to the likelihood; that is, the posterior belief is almost entirely dictated by the data. However, when we have a strong prior belief, our beliefs are not altered much by observing just 9/10 heads.

Now, I imagine that this is the anti-Bayesian's point: "Even with clear data you haven't changed your mind." True. Is this a negative? Well, imagine instead this study was assessing the existence of UFOs rather than simple coin flips. If I showed you 9 YouTube videos of UFO "evidence", and 1 video showing little (if any) evidence, would you be convinced of UFOs? I doubt it. You were the right-hand plot in this case.
(I know, I know, the theta distribution doesn't make sense in this case, but ignore that!)

## What if the prior is wrong?

Worried that your prior is wrong, or that you cannot justify it completely? Throw more data at it. (When is this ever a bad idea?) Below are the same priors, but now we flip the coin 1,000 times and observe 900 heads. (Note that the proportion of heads is the same as in the previous example.) Now, even our strong prior belief has to be updated considerably based on this data. With more data, even mis-specified priors do not affect inference.

To get an idea of how sample size influences the effect of the prior on the posterior, I created the gif animation below. In it, we have a relatively strong (although not insanely so) prior belief that the coin is biased towards heads. Then we start flipping the coin, and update the posterior after each flip. In fact, this coin is fair, so our prior is not in accord with (unobservable) "reality". As the number of flips increases, though, our posterior starts to match the likelihood in the data. So, "wrong" priors aren't really a problem. Just throw more data at it.

## "Today's posterior is tomorrow's prior" (Lindley, 1970)

After collecting some data and updating your prior, you now have a posterior belief of something. If you wish to collect more data, you do not use your original prior (because it no longer reflects your belief); instead, you use the posterior from your previous study as the prior for your current one. Then you collect some data, update your priors into your posteriors… and so on.

In this sense, Bayesian analysis is ultimately "self-correcting": as you collect more and more data, even horrendously specified priors won't matter.

In the example below, we have a fairly loose idea that the coin is fair (i.e., theta = 0.5). We flip a coin 20 times and observe 18 heads.
Then we update to our posterior, which suggests the true value for theta is about 0.7 or so. But then we wish to run a second "study"; we use the posterior from study 1 as our prior for study 2. We again observe 18 heads out of 20 flips, and update accordingly.

### Conclusion

One of the nicest things about Bayesian analysis is that the way our beliefs should be updated in the face of incoming data is clearly (and logically) specified. Many people's concerns surround the prior. I hope I have shed some light on why I do not consider this to be a problem. Even if the prior isn't something that should be "overcome" with lots of data, it is reassuring for the anti-Bayesian to know that with sufficient data, it doesn't really matter much.

So, stop whining about Bayesian analysis, and go collect more data. Always, more data.
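The wash-out claim above ("with sufficient data, it doesn't really matter much") is easy to check numerically. A standalone Python sketch, using the standard beta-binomial update (a Beta(a, b) prior plus k heads in n flips yields a Beta(a + k, b + n - k) posterior); the prior strength here is my own stand-in value, not one from the post's figures:

```python
# How the influence of a strong (and wrong) "biased towards heads" prior
# fades as a fair coin is flipped more and more times.

def posterior_mean(a, b, heads, flips):
    """Posterior mean of theta for a Beta(a, b) prior and binomial data."""
    return (a + heads) / (a + b + flips)

prior_a, prior_b = 30, 10  # strong prior belief that theta is around 0.75
for flips in (10, 100, 1000, 10000):
    heads = flips // 2     # idealised fair-coin data: half heads
    print(flips, round(posterior_mean(prior_a, prior_b, heads, flips), 3))
```

The printed posterior means march from the prior's 0.75 down towards the true 0.5 as the data accumulate, which is exactly the "self-correcting" behaviour described above.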
\"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2019-04\/segments\/1547584445118.99\/warc\/CC-MAIN-20190124014810-20190124040810-00016.warc.gz\"}"}
Kobe Bryant is a complete legend in major competitive sports: the guy retired as one of the best to ever play his sport, after all the awards and titles he tallied through his magical career. Nonetheless, there are people who had the guts to play 1-on-1 with Kobe and actually did great. It is quite shocking when you see someone like Beyoncé's dad, Matthew Knowles, bragging about how he bested Kobe in a pickup game some years ago. Knowles posted a video on Twitter yesterday in which he appears playing ball with the Lakers legend during the shooting of Destiny's Child's video for the song "Bug A Boo". In the clip, you see Kobe missing some shots, while Knowles puts some buckets on Kobe's head. TMZ Sports had the chance to talk with Knowles on the matter, and he didn't hesitate to show off about his basketball skills, going further and saying that if he were given another shot at KB24, he'd beat him, no question. Now we have to wait and see whether Kobe will take those words seriously or let them slip. Many people will surely want to see if Knowles still has it in front of one of the greatest in basketball history. Sometimes you need to teach people some manners after they disrespect you the way Knowles has done with Kobe. Check out Kobe's verse on the "Say My Name" remix with Destiny's Child.
{ "redpajama_set_name": "RedPajamaC4" }
6,425
package jdk.internal.reflect; import java.lang.reflect.InvocationTargetException; /** This interface provides the declaration for java.lang.reflect.Constructor.invoke(). Each Constructor object is configured with a (possibly dynamically-generated) class which implements this interface. */ public interface ConstructorAccessor { /** Matches specification in {@link java.lang.reflect.Constructor} */ public Object newInstance(Object[] args) throws InstantiationException, IllegalArgumentException, InvocationTargetException; }
{ "redpajama_set_name": "RedPajamaGithub" }
1,111
Renfrewshire is set to become one of Scotland's digital trailblazers after the council approved a £1m investment in public wifi in its three largest towns. The innovative move was given the seal of approval as part of the council budget on Thursday 3 March 2016, and will deliver a two-pronged drive to achieve major economic and social benefits. Plans to roll wifi out to Paisley, Johnstone, and Renfrew town centres are currently being developed in partnership with University of the West of Scotland, West College Scotland, Paisley First and the Chamber of Commerce. Work is expected to start by the end of summer. Councillor Mark Macmillan, Leader of Renfrewshire Council, said: "This bold and transformational step will offer residents, visitors, businesses, and students free, unlimited, wireless internet access in Paisley, Johnstone and Renfrew town centres. We have a vision for a digital Renfrewshire and our investment will return major social and economic benefits. "Renfrewshire deserves to be one of Scotland's most connected areas and this will help to revitalise our three largest town centres and encourage shoppers and visitors to spend more time there, giving businesses a much-needed boost. "Paisley is already a well-established student town with thriving university and college campuses, and this investment will let us build on that by making sure students here can stay connected to online learning across the town centre. "As Paisley's bid for UK City of Culture 2021 gathers pace, we will be looking to attract new trade and visitors to the town, and it's important we can offer them 21st century facilities when they are here. "For Johnstone and Renfrew this news follows the major investment in town halls in recent years and will boost the continued regeneration of both towns.
"Internet access is also widely regarded as the 'fourth utility' - a basic right and absolute necessity to allow people to fully take part in everyday life - and we know the most vulnerable are often the most digitally excluded, so this move will let us take our fight against poverty into the digital era." This move will build on the already established free public wifi in Renfrewshire's 12 local libraries. The council has been nominated for two Digital Leaders 100 for 2016 awards in the categories: 'Digital Leader of the Year' and 'Digital Inclusion and Skills Initiative of the Year' for its work to drive forward digital participation in Renfrewshire.
{ "redpajama_set_name": "RedPajamaC4" }
673
{"url":"http:\/\/www.ck12.org\/statistics\/Regression-and-Correlation\/lesson\/Linear-Correlation-PCALC\/","text":"<img src=\"https:\/\/d5nxst8fruw4z.cloudfront.net\/atrk.gif?account=iA1Pi1a8Dy00ym\" style=\"display:none\" height=\"1\" width=\"1\" alt=\"\" \/>\n\n# Regression and Correlation\n\n## Scatterplots, relationship between data, correlation coefficients, and regression.\n\nEstimated12 minsto complete\n%\nProgress\nPractice Regression and Correlation\n\nMEMORY METER\nThis indicates how strong in your memory this concept is\nProgress\nEstimated12 minsto complete\n%\nLinear Correlation\n\nStatistics is largely concerned with how a change in one variable relates to changes in a second variable. Bivariate data is two lists of data that are paired up. Is there any relationship between the following data? If there is, does it mean that doctors cause cancer?\n\n Number of Doctors 27 30 36 60 81 90 156 221 347 Cancer Rate 0.02 0.07 0.16 0.2 0.43 0.87 1.21 2.8 3.91\n\n### Correlation\n\nA scatterplot creates an \\begin{align*}(x, y)\\end{align*} point from each data pair. When making a scatterplot, you can try to assign the independent variable to\u00a0\\begin{align*}x\\end{align*} and the dependent variable to \\begin{align*}y\\end{align*}; however, it will often not be obvious which variable is the dependent variable, so you will just have to pick one.\n\nOnce you plot the data and zoom appropriately you will see the points scattered about. Sometimes there will be a clear linear relationship and sometimes it will appear random. The correlation coefficient, \\begin{align*}r\\end{align*}, is a number that quantifies two aspects of the relationship between the data:\n\n\u2022 The correlation coefficient is either negative, zero or positive. 
This tells you whether the data is negatively correlated, uncorrelated or positively correlated.
• The correlation coefficient is a number $-1 \le r \le 1$ indicating the strength of correlation. If $r = 1$ or $r = -1$ then the data is perfectly linear. Note that a perfectly linear relationship includes lines with slopes other than 1.

Consider the examples below to see what different correlation coefficients look like in data:

In PreCalculus you will not learn how to calculate the correlation coefficient (you will if you take future statistics courses!). For now, the calculator will calculate it for you and your job will be to interpret the result.

If the data is sufficiently linear, then your calculator can perform a regression to produce the equation of a line that attempts to model the trend of the data. The regression line may actually pass through all, some or none of the data points. This regression line is represented in statistics by:

$$\hat{y} = a + bx$$

The symbol $\hat{y}$ is pronounced "$y$-hat" and is the predicted $y$ value based on a given $x$ value. Occasionally, you may also calculate the predicted $x$ value given a $y$ value; however, this is less mathematically sound.
Also notice that the linear regression model is simply a rearrangement of the standard equation of a line, $y = mx + b$.

### Examples

#### Example 1

Earlier, you were asked about the relationship between the two sets of data:

Number of Doctors: 27, 30, 36, 60, 81, 90, 156, 221, 347
Cancer Rate: 0.02, 0.07, 0.16, 0.2, 0.43, 0.87, 1.21, 2.8, 3.91

Enter the data onto lists in your calculator:

Turn on the [STAT PLOT] that compares the two lists of data:

You should note that the data is extremely linear with a positive correlation coefficient:

A naïve conclusion would be to say that doctors cause cancer. One of the most misunderstood concepts in statistics is that correlation does not imply causation. Just because there is a correlation between the number of doctors and the cancer rate doesn't mean that the number of doctors causes the cancer. There are dozens of reasons why more doctors might correlate with higher cancer rates. In general, remember that correlation is not the same as causation. Be careful before making any conclusions about change in one variable causing change in another variable.

#### Example 2

Estimate the correlation coefficient for the following scatterplots.

1. $r \approx 0$. Because the height $(y)$ does not seem to be dependent on $x$, the data is uncorrelated. Another way to see this is that the slope appears to be undefined.
2. $r \approx -0.7$. If the solo point in the bottom left is an outlier, you could choose not to include it in the data. Then, the $r$ value would be closer to -1.
3. $r \approx +0.8$. The clump of data seems to be slightly positively correlated, and the single point in the upper left has a strong effect indicating positive slope.
4. $r \approx -0.8$.
The data seems to be fairly strongly negatively correlated.
5. $r \approx 1$. The data seems to be perfectly linearly correlated.

#### Example 3

Estimate the regression line through the following scatterplots.

Visualize and sketch the "line of best fit" for each set of points.

Note that in part a, the regression line does not touch any point. Instead, it captures the general trend of the data. In part c, the correlation is not high enough in any direction to produce a regression line. The calculator may give a regression line for scatterplots that look like part c, but you need to be very skeptical that there is actually a relationship between the two variables.

#### Example 4

Use your calculator to perform a linear regression on the following data. Then, predict the height of someone who has shoe size 9.

Shoe Size: 11, 8.5, 10, 8, 7
Height (in): 70, 70, 72, 65, 64

First enter the data.

Next perform the regression. Notice that the calculator can perform linear regression in two ways that are essentially the same. To keep consistent with $\hat{y} = a + bx$, use linear regression. This is option 8 in the [STATS], [CALC] menu.

Now you need to tell the calculator to perform the regression on the two lists you want and where to copy the equation. The syntax is:

• LinReg(a+bx) L1, L2, Y1

Note: to find Y1, go to [VARS], [Y-VARS], [FUNCTION], [Y1].

Notice that the $r$ value is about 0.8. This indicates that there is a fairly strong positive correlation between shoe size and height. If your calculator does not display the $r$ and $r^2$ lines then you need to go into the catalog and run the program "DiagnosticOn".
This will enable the display of the correlation coefficient.

You can then graph the scatterplot and the regression line:

The regression equation is:

$$\hat{y} = 52.4069 + 1.7745x$$

where $x$ represents shoe size and $\hat{y}$ represents predicted height. The predicted height for someone with a size 9 shoe is 68.3774:

$$\hat{y} = 52.4069 + 1.7745 \cdot 9 = 68.3774$$

An easy way to use the power of the calculator is to use function notation from the home screen:

#### Example 5

Shaquille O'Neal has size 23 shoes. What, if anything, can you infer about his vocabulary? Does a larger shoe size cause a larger vocabulary?

Shaquille's shoe size is significantly beyond the scope of the data that the model is based on. The data relates to elementary school students, and a size 23 shoe is beyond the relevant domain. This means it wouldn't make sense to use such a model to make a prediction for Shaquille. Shoe size does not cause vocabulary, but the two variables are strongly correlated because over time both tend to grow.

### Review

For each correlation coefficient, describe what it means for data to have that correlation coefficient and sketch a scatterplot with that correlation coefficient.

1. $r = 1$

2. $r = -0.5$

3. $r = -1$

4. $r = 0$

5. $r = 0.8$

The data below shows the SAT math score and GPA for 7 different students.

SAT math score: 595, 520, 715, 405, 680, 490, 565
GPA: 3.4, 3.2, 3.9, 2.3, 3.9, 2.5, 3.5

6. Use your calculator to perform a linear regression that models the data. What is the regression equation? What is the correlation coefficient?

7. Use the equation from #6 to predict the GPA for a student with an SAT score of 500. Does this prediction seem reasonable given the data?
Why or why not?

8. What is the relevant domain of this data?

9. Does a high SAT math score cause a high GPA?

The data below shows scores from two different quizzes for 10 different students.

Quiz 1 Score: 15, 12, 10, 14, 10, 8, 6, 15, 16, 13
Quiz 2 Score: 20, 15, 12, 18, 10, 13, 12, 10, 18, 15

10. Use your calculator to perform a linear regression that models the data. What is the regression equation? What is the correlation coefficient?

11. Use the equation from #10 to predict the Quiz 2 score for a student with a Quiz 1 score of 19. Does this prediction seem reasonable given the data? Why or why not?

13. Explain in your own words the difference between causation and correlation.

14. Explain in your own words what the correlation coefficient measures.

15. Explain why a larger sample size will cause a more accurate correlation coefficient.

To see the Review answers, open this PDF file and look for section 15.7.

### Vocabulary

bivariate data: Bivariate data consists of two paired sets of data.
correlation coefficient: The correlation coefficient is a standard quantitative measure of best fit of a line.
It has the symbol $r$ and has values from -1 to +1.
deterministic: A deterministic relationship indicates that the value of one variable can be reliably and accurately determined by the manipulation of the other variable.
explanatory variables: Explanatory variables are another name for independent variables.
linear correlation: Linear correlation is a measure of the strength of the linear relationship between two random variables.
linear correlation coefficient: A linear correlation coefficient, or $r$-value, of a relationship between two variables describes the strength of the linear relationship.
response variables: Response variables are another name for dependent variables.
scatter plot: A scatter plot is a plot of the dependent variable versus the independent variable and is used to investigate whether or not there is a relationship or connection between two sets of data.
Scatterplot: A scatterplot is a type of visual display that shows pairs of data for two different variables.
Slope: Slope is a measure of the steepness of a line. A line can have positive, negative, zero (horizontal), or undefined (vertical) slope.
The slope of a line can be found by calculating "rise over run", or "the change in $y$ over the change in $x$". The symbol for slope is $m$.
Slope-Intercept Form: The slope-intercept form of a line is $y = mx + b$, where $m$ is the slope and $b$ is the $y$-intercept.
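The calculator output in Example 4 can be verified by hand. A short Python sketch of ordinary least squares (my own check; the lesson itself uses a TI calculator) reproduces the regression equation, the correlation, and the size-9 prediction (the lesson's 68.3774 reflects rounding of the coefficients):

```python
# Least-squares regression for Example 4 (shoe size vs. height).
shoe = [11, 8.5, 10, 8, 7]
height = [70, 70, 72, 65, 64]

n = len(shoe)
mx = sum(shoe) / n
my = sum(height) / n

# sums of squares and cross-products about the means
sxy = sum((x - mx) * (y - my) for x, y in zip(shoe, height))
sxx = sum((x - mx) ** 2 for x in shoe)
syy = sum((y - my) ** 2 for y in height)

b = sxy / sxx                  # slope: ~1.7745
a = my - b * mx                # intercept: ~52.4069
r = sxy / (sxx * syy) ** 0.5   # correlation coefficient: ~0.81

print(round(a, 4), round(b, 4), round(r, 2))
print(round(a + b * 9, 4))     # predicted height for shoe size 9: ~68.3775
```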
Wolf attacks on humans are injuries to people or their property caused by wolves. The frequency varies with geographic location and historical period. Attacks by grey wolves are rare, since wolves are often killed, or even exterminated, by humans as a consequence of wolf attacks. As a result, wolves today tend to live mostly far from people, or have developed an ability to avoid them. Fatal wolf attacks are unusual but still occur, above all in India, where wolves live in environments with little access to natural prey and children work as livestock herders. According to Linnell & Bjerke (2003), there were 1,548 well-documented deaths in Western Europe during the period 1700–2000, while 4 people are reported to have been killed by healthy wolves in Europe during the last fifty years. According to experts, the risk of attacks on humans in Finland, Norway or Sweden is very low today, since game is plentiful in the forests and wolves that begin to move close to settlements are shot relatively quickly. Whether some wolves are allowed to move close to settlements long enough to increase the risks is, however, an open question; the practice of protective hunting varies considerably between areas.

Fennoscandia

An article on documented cases of wolves attacking humans in Fennoscandia concludes that wolf attacks appear to have been regularly recurring but fairly rare phenomena. The attacks took place in poor areas with limited access to game, and were carried out by single wolves or wolf packs that attacked children; only in isolated cases is an adult reported to have been killed. In total, 94 people are said to have been killed by wolves in Scandinavia during the last 300 years. All the cases date from before 1882. These wolves were not infected with rabies, but the behaviour seems to have been limited to individual animals or packs. In some of the cases the wolves are believed to have previously lived in captivity and thereby become accustomed to humans.
Adults were avoided in the attacks; only in isolated cases were adults, or children in the company of adults, attacked. Fatal wolf attacks on humans in Scandinavia have been assumed to be unlikely as long as game is plentiful in the forests and wolves that begin to move close to settlements are shot relatively quickly. All the attacks before 2012 occurred at times when people were poor and game was scarce in the forests. In at least some of the cases the wolves involved are suspected to have previously lived in captivity. Attacks on humans under present conditions have been considered very unlikely by authorities and leading researchers. The only death in Sweden in modern times occurred on 17 June 2012, when a zookeeper was killed by partially socialised captive wolves at Kolmården Zoo.

Finland

From January 1831 until the summer of 1832, eight children and one adult woman are reported to have been killed by wolves in Kaukola, and it is assumed that a single wolf killed them all. In 1836, three children were killed by wolves in Kimito municipality. From 1839 to 1850, twenty children were killed by wolves in Kivinebb, and it has been assumed that all were killed by the same wolf, or by the same wolf pack. In 1877, ten children were attacked in the Tampere region and nine of them died of the injuries they received. From 1879 to 1882, a wolf pair killed a large number of children in eleven parishes in Finland Proper. According to older accounts, twenty-two children were killed. More recent research suggests, however, that thirty-five children may have been killed. As the number of dead children grew, ever greater efforts were made to kill the wolves; hunters from Russia and Lithuania were engaged, as was the Finnish army. A female wolf was finally shot in January 1882, and twelve days later a male wolf was poisoned. The wolf attacks ceased after this. In addition, there are newspaper reports that several more children were killed.
A 12-year-old girl was killed in Euraåminne in 1859, an 8-year-old boy was killed in Nykyrka in 1882, and in the same year a boy was injured in Sordavala. There is no evidence that these wolves were tame, although this cannot be ruled out. Besides the attacks that Linell and others have judged to be very well documented, there are also reports of a large number of other attacks. Antti Lappalainen has compiled records from Finnish church registers for the years 1710–1881, and considers that he has found 175 registered fatal wolf attacks in Finland. He gives the age distribution of the victims as follows: infants 0–3 years 11%, children 4–9 years 38%, youths 10–17 years 8%, adults 18–40 years 27%, adults 41–60 years 11%, and elderly 61 and over 5%. In 1932, several Finnish newspapers wrote that a wolf had attacked a girl in the village of Kuikkala in Puumala. The claim turned out to be entirely unfounded.

Norway

According to Linnell et al. 2003 (who examined data from the last 300 years and excluded uncertain cases), a girl aged 6–8 was killed by a wolf in Sørum municipality in Akershus county on 28 December 1800. Furseth 2005 holds that 11 people have been killed by wolves in Norway during the last 400 years.

Sweden

A few historical cases of fatal wolf attacks on humans have been documented in Sweden. The attacks took place in a limited part of the country, namely the border region between Dalsland and Värmland, and during a relatively short period. Oluf Swensson, a 12-year-old boy, was, according to the death register, attacked and fatally injured by a wolf on 9 February 1727 in Håbol parish, Vedbo hundred, Dalsland. He died of his injuries on 12 February. Jon Svensson, a boy of 4½, was killed and largely eaten by a wolf in Boda parish, Jösse hundred, Värmland (now part of Kil municipality in Värmland county) on 17 December 1727.
A short time later, on 6 January 1728, another boy, Jon Ersson, aged 9, was eaten by a wolf, also in Boda parish, Jösse hundred, Värmland, now part of Kil municipality in Värmland county. On 3 August 1731, a 12-year-old girl, Borta Johansdotter, was killed by a wolf in Steneby parish, Vedbo hundred, Dalsland, now part of Bengtsfors municipality in Västra Götaland county. Nils Nilsson, an 8-year-old boy, was bitten to death by a wolf in January 1763 in Hova parish, Vadsbo hundred, Västergötland, now part of Gullspång municipality in the part of Västra Götaland county that formerly made up Skaraborg county. In a series of attacks between 30 December 1820 and 27 March 1821, 31 people were attacked by a wolf; 9 of them were killed and 15 were injured. All except one 19-year-old woman were children, aged from 3½ to 15 when they were attacked. All these attacks took place within a limited area of Dalarna and Gästrikland. When the so-called Gysinge wolf was shot, the attacks ceased. The series of attacks began on 30 December 1820, when a three-year-old named Eric was mauled to death by the wolf. The attacks then continued until 27 March 1821, when a six-year-old boy named Anders was attacked. The boy's father managed to pull the child loose from the wolf, but the boy died shortly afterwards of his injuries. During the three-month period, 31 people were attacked, leaving 9 dead and 15 injured. The injured were mostly children, with the exception of one 18-year-old youth. Most of those who died were children between 3½ and 15 years old, such as the boy Pehr, who was six when he was bitten to death on 28 February 1821. One of the older children was Anna Jansdotter, who was 12 when she was killed on 10 February 1821. Jan Erik Sundstedt was 15 when he was mauled to death on 10 February 1821. An 18-year-old woman, Anna, was killed and to a considerable extent devoured by the wolf.
On several occasions the victims were devoured after being killed. This was the case with Jan Carlsson, six years old, who was killed on 12 January 1821, which is why the remains of his body were laid at the funeral in the same coffin as his maternal grandfather, who had died shortly before. The same fate befell an eight-year-old girl named Carin, who was killed on 4 February 1821. In some cases fathers tried to save their children, but the children's lives could not be saved even so. Olof Ersson tried to rescue his son Anders, as did Anders Carlsson, who at risk to his own life managed to pull loose his daughter Stina, 11 years old, who had been sent on an errand to the manor of the ironworks; her life could not be saved, however, and she thus became yet another victim of the Gysinge wolf. The Gysinge wolf was shot on 27 April 1821 in Årsunda. In the light of historical records, it has been established that these attacks were carried out by a wolf that had been raised in captivity at Gysinge manor. It and two littermates had been brought to the ironworks by a gamekeeper, who gave them to the ironmaster's young daughters. Two of the wolves were put down when they grew too big to be playmates, while the third remained for a longer time and was then released into the wild. The only reliably documented fatal wolf attack in Sweden since the Gysinge wolf in the early 19th century took place on 17 June 2012 at Kolmården Zoo, when a 30-year-old woman was killed by the wolves she cared for. The zoo and its zoological director were convicted, the latter of a gross work-environment offence. Other fatal wolf attacks in Sweden are said to have occurred in 1830 on Frösön (when a soldier was killed), in 1846 outside Laholm (when an adult woman was killed), and in 1854 in Ny south of Malung (when a boy was killed).

The Russian Empire/Soviet Union excluding the Baltic states

In the Russian Empire, according to various sources, between 3,000 and 20,000 people were killed by wolves during the 19th century.
The Baltic states

A compilation by Linnell & Bjerke (2003) supports 240 cases in which people were killed by wolves in Estonia, Latvia and Lithuania in modern times (John D. C. Linell & Tore Bjerke, Rädslan för vargen. En tvärvetenskaplig utredning, Viltskadecenter & NINA-NIKU, 2003, p. 57; archived at https://web.archive.org/web/20111202165239/http://www.lcie.org/docs/damage%20prevention/linnell%20nina%20vsc%20fear%20of%20wolves%20swe.pdf). The Baltic states negotiated exemptions from the EU rules on wolves when they became members of the union.

Estonia

According to Linnell & Bjerke (2003), 21 people were killed by healthy wolves during the 18th century. During the 19th century, 84 people were killed by rabid wolves, while 111 people were killed by healthy wolves in the same period. Furthermore, one person is reported to have been killed by a rabid wolf during the latter part of the 20th century. Thus, according to Linnell & Bjerke (2003), 217 people were killed by wolves in Estonia during this period. Rootsi states that 203 people were killed by wolves in Estonia, and he has paid particular attention to the influence of rabies.

Latvia

Linnell & Bjerke (2003) state that 10 people were killed by rabid wolves during the 19th century, and 3 people were killed by rabid wolves during the latter part of the 20th century. In addition to these 3 deaths, another 9 people were attacked by rabid wolves during the second half of the 20th century. Furthermore, 3 people were attacked by healthy wolves during the latter part of the 20th century.

Lithuania

According to Linnell & Bjerke (2003), 11 people were killed by healthy wolves during the first half of the 20th century.
In addition, another 5 people survived attacks by healthy wolves during this time. During the first half of the 20th century, 19 people were attacked by rabid wolves, but the number of deaths was considered uncertain. During the second half of the 20th century, 22 people were attacked by rabid wolves, though none died.

Europe excluding the Nordic countries and the former Soviet states

According to Linnell & Bjerke (2003), 1,214 people were killed by wolves in attacks documented reliably enough for the events to be considered confirmed.

France

France is the country with the most extensive data on wolf attacks on humans: nearly 7,600 fatal attacks were documented from the year 1200 to 1920. It is possible, however, that many of these attacks were counted several times, and it is not certain that all were carried out by wolves. Linnell & Bjerke (2003) report 693 attacks by rabid wolves on humans in France during the 18th century, of which 308 were fatal. During the same period, 711 people were attacked by healthy wolves, and 577 were killed. During the 19th century, 345 people were attacked by rabid wolves, costing 118 lives, while 365 people were attacked by healthy wolves, with 104 deaths. During the first half of the 20th century, 6 people were attacked by healthy wolves and 2 of them died. In total, then, 1,109 people were killed by wolves in France during this period according to Linnell & Bjerke (2003).

Italy

During the 19th century, according to Linnell & Bjerke (2003), 5 people were killed by rabid wolves and 72 people by healthy wolves. Thus a total of 77 people were killed by wolves in Italy during the 19th century according to Linnell & Bjerke (2003).

Spain

According to Linnell & Bjerke (2003), 40 people were attacked by rabid wolves during the 18th century.
The number who may then have been killed they judge to be uncertain. During the 19th century, 14 people were killed by rabid wolves. During the first part of the 20th century, 29 people were attacked by rabid wolves, and more than 10 of them died. During the latter part of the 20th century, 9 people were attacked by healthy wolves, and 4 of them were killed.

Greece

The body of a British woman was found in 2017 in the mountains of northern Greece. After an autopsy, a veterinarian concluded that she had probably been attacked and killed by wolves.

India

In the years 1980–1995, at least 233 children were killed by wolves in India, most of them under 6 years old and in most cases within the village grounds when briefly left alone. In 1878, 624 people are reported to have been killed by wolves in India.

North America

At least a few cases in North America of people apparently killed by healthy wolves have been documented in recent times. The authorities in the United States and Canada urge people moving through wolf country not to dismiss the risk, although they consider that such people need not be unduly afraid, since wolves are held to pose a very small risk compared with bears, cold and accidents. A 32-year-old woman, Candice Berner, was killed by wolves in Alaska during a jog in 2010. Wolf tracks were found at the site, and bears hibernate in winter. Several wolves were later killed, and at least one of them, in good condition, could be linked to the woman by DNA testing. It has not been established whether the wolves attacked for food, or how they or the woman reacted at the encounter, which may have surprised both parties, but the wolves had killed the woman and eaten from the body. A 22-year-old student, Kenton Joel Carnegie, is believed to have been killed by wolves or a bear during a walk in northern Saskatchewan, Canada, in 2005. Wolves scavenging at the local rubbish dump appeared to have lost their fear of humans.
The man was found dead, and the authorities who visited the site interpreted the tracks in the snow as showing that he had probably been attacked by wolves. The investigation began late, however; tracks of black bear, among other animals, were also found at the site, and the cause of death was questioned, but it was nevertheless officially determined that he had been killed by wolves. A six-year-old boy was attacked by wolves in 2000 near Ice Bay south of Anchorage. The wolf was reported to have been fed several times before the attack. Several other attacks in Canada have been linked to wolves having become habituated to humans. The wildlife biologist Patricia Wyman, who worked on wolf issues at the Haliburton Forest & Wildlife Preserve nature reserve, was killed on 18 April 1996 by free-ranging wolves. The investigation into her death revealed that the staff at the reserve had believed wolves posed no risk to humans. On 24 September 1963, the five-year-old Canadian Marc Leblond was killed by wolves in Baie-Comeau in the Côte-Nord region of the province of Quebec, Canada, according to the Winnipeg Free Press: "Frank Auger, Quebec-Hydro police chief at Bale-Comeau, says Marc and his three-year old brother had been outside playing for a few minutes Sept. 24 when their parents heard a commotion. The younger boy rushed screaming into the house. The parents unable to find Marc, thought he had drowned and called police. A search of the lake revealed nothing, but two policemen and foreman Leon Verrault of Quebec-Hydro found the torn body in the forest after a brief search. Tracks Near Body They also saw a wolf lurking 50 yards off. Unarmed, they were unable to shoot it but Verrault, an experienced hunter, described it as 'a grey timber wolf weighing about 80 pounds.' Tracks of two wolves surrounded the body. Later that day an armed group scoured the area, shot at a wolf but missed. Examination by Dr. Jacques Beaumont, the district coroner, convinced Auger the boy was killed by a wolf." (Gerald McNebl, Authorities Convinced Five-year-old Killed by Wolves, Winnipeg Free Press, 18 November 1963, p. 12.) There have also been a small number of attacks by coyotes on humans, only a few of which may have resulted in serious injury or death. An attack on the folk singer Taylor Mitchell led to her death on 28 October 2009. In that case the animals may have been hybrids between wolf and coyote.

Human behaviour that can reduce the risks when close to wolves

While bears often make mock attacks when they feel threatened, so that one should avoid appearing dangerous to them, one should behave aggressively towards wolves that come close. In neither case should one turn one's back or try to run away. Wolves are assumed to become more dangerous once their fear of humans has diminished. One should therefore avoid feeding wolves or enticing them to approach. Anyone who is attacked by wolves should try to fight them off.

See also
Wolf
Wolves in Scandinavia
Wolves in Finland
The Gysinge wolf
Coyote
The Beast of Gévaudan

Literature
Astor Furseth, Drept av bjørn og ulv: en historisk oversikt over mennesker drept og skadet av rovdyr i Norge de siste 400 år, Landbruksforlaget, 2005.
Hans Kruuk, Hunter and Hunted, 2002.
Antti Lappalainen, Suden jäljet, Metsäkustannus, 2005.
John D. C. Linell & Tore Bjerke, Rädslan för vargen. En tvärvetenskaplig utredning (PDF), Viltskadecenter & NINA-NIKU, stiftelsen for naturforskning og kulturminne, 2003.
John D. C. Linnell, Erling J. Solberg, Scott Brainerd, Olof Liberg, Håkan Sand, Petter Wabakken, Ilpo Kojola, Is the fear of wolves justified? A Fennoscandian perspective (PDF), Acta Zoologica Lituanica, 2003, Volume 13, Number 1, ISSN 1392-1657.
Max MacCormick, The Mammoth Book of Man-Eaters, Constable & Robinson Ltd, 2003.
Jean-Marc Moriceau, Histoire du méchant loup: 3000 attaques sur l'homme en France, Paris, Fayard, 2007.
Ilmar Rootsi, Rabid wolves and the man in Estonia of the 18th–19th centuries, Acta Zoologica Lituanica, 2003, Volume 13, Number 1, ISSN 1648-6919.
Ajay Singh Yadav, The Man-Eating Wolves of Ashta, Srishti Publishers & Distributors, 2000.
Dylan E. Brown & Michael R. Conover, How people should respond when encountering a large carnivore: opinions of wildlife professionals, Human–Wildlife Conflicts 2(2):194–199, Fall 2008.

External links
Alaska Department of Fish and Game, Living With Wolves: advice for people travelling in wolf country.
Magnus Hagelstam, letter sent to the European Commission's Directorate-General for the Environment on 8 February 2006.
Kenton Joel Carnegie Memorial
T. R. Mader, Wolf Attacks on Humans, Abundant Wildlife Society of North America.
Karl-Hans Taake, Carnivore Attacks on Humans in Historic France and Germany: To Which Species Did the Attackers Belong?, February 2020.
package no.priv.garshol.duke.cleaners;

import org.junit.Before;
import org.junit.Test;

import static junit.framework.Assert.assertEquals;

public class NorwegianAddressCleanerTest {
  private NorwegianAddressCleaner cleaner;

  @Before
  public void setup() {
    cleaner = new NorwegianAddressCleaner();
  }

  @Test public void testEmpty() { test("", ""); }
  @Test public void testVeienVeien() { test("grefsenveien 132", "grefsenveien 132"); }
  @Test public void testVeienVnDot() { test("grefsenvn. 132", "grefsenveien 132"); }
  @Test public void testVnVnDot() { test("grefsenvn. 132", "grefsenvn 132"); }
  @Test public void testVeienVn() { test("grefsenvn. 132", "grefsenveien 132"); }
  @Test public void testGtGata() { test("kanalgt 3", "kanalgata 3"); }
  @Test public void testGtDotGata() { test("kanalgt. 3", "kanalgata 3"); }
  @Test public void testGataGata() { test("kanalgata 3", "kanalgata 3"); }
  @Test public void testGtDotGaten() { test("enerhauggt. 5", "enerhauggaten 5"); }
  @Test public void testSpaceGtGate() { test("christian kroghs gt 42", "christian kroghs gate 42"); }
  @Test public void testSpaceGtDotGate() { test("christian kroghs gt. 42", "christian kroghs gate 42"); }
  @Test public void testBoksPostboks() { test("boks 367", "postboks 367"); }
  @Test public void testBoksPostboksComma() { test("boks 1353 vika", "postboks 1353,vika"); }
  @Test public void testPostboksPb() { test("postboks 2623 møhlenpris", "pb 2623 møhlenpris"); }
  @Test public void testPostboksPbDot() { test("postboks 6258 etterstad", "pb. 6258 etterstad"); }
  @Test public void testBoksCommaBoks() { test("boks 24, grefsen", "boks 24 grefsen"); }
  @Test public void testPostboksDashPostboks() { test("postboks 124 - bryn", "postboks 124 bryn"); }
  @Test public void testVeienVndotnospace() { test("Økernveien 38 b", "Økernvn.38 b"); }
  @Test public void testNumberletterNumberspaceletter() { test("ammerudveien 31d", "ammerudveien 31 d"); }
  @Test public void testSpaceVSpaceDigit() { test("cecilie thoresens v 5", "cecilie thoresens vei 5"); }
  @Test public void testSpaceVDotSpaceDigit() { test("cecilie thoresens v. 5", "cecilie thoresens vei 5"); }
  @Test public void testVDotSpaceDigit() { test("varnav. 32a", "varnaveien 32a"); }
  @Test public void testSpaceGDotSpaceDigit() { test("eilert sundts g. 37", "eilert sundts gate 37"); }
  @Test public void testSpaceGSpaceDigit() { test("eilert sundts g 37", "eilert sundts gate 37"); }
  @Test public void testCareOfBackslash() { test("c/o advokat s. niik", "c\\o advokat s. niik"); }
  @Test public void testCareOfEndslash() { test("c/o advokat s. niik", "co/ advokat s. niik"); }

  // verifies that both spellings clean to the same canonical form
  private void test(String s1, String s2) {
    assertEquals(cleaner.clean(s1), cleaner.clean(s2));
  }
}
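The test cases above exercise a family of normalisation rules: street-name abbreviations ("vn." expands to "veien"), postbox forms ("boks" expands to "postboks"), and punctuation/whitespace differences that should not matter. As an illustration only, here is a minimal Python sketch of two of those rules. The function `clean_address` is a hypothetical helper, not the actual Duke `NorwegianAddressCleaner`, and it covers only a small subset of the behaviour the tests imply:

```python
import re

def clean_address(address):
    """Illustrative subset of Norwegian address normalisation.

    Expands 'vn'/'vn.' to 'veien' and a leading 'boks' to 'postboks',
    then collapses commas and repeated whitespace. (Hypothetical sketch;
    not the Duke implementation.)"""
    a = address.lower().strip()
    # "grefsenvn. 132" / "grefsenvn 132" -> "grefsenveien 132"
    a = re.sub(r"vn\.?(?=\s|\d|$)", "veien", a)
    # "boks 367" -> "postboks 367" (only when not already prefixed)
    a = re.sub(r"^boks\b", "postboks", a)
    # treat "boks 24, grefsen" and "boks 24 grefsen" as equivalent
    a = re.sub(r"[\s,]+", " ", a)
    return a
```

The point of the design, as in the tests, is that two spellings of the same address map to one canonical string, so equality on the cleaned form stands in for "same address".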
{"url":"https:\/\/chat.stackexchange.com\/transcript\/41?m=22117078","text":"1:14 AM\n@1010011010 People don't usually typeset with it?\n\n1:27 AM\n@1010011010 Yes. I'm not David but I've played with this stuff more recently ;). Also, you can construct glyphs in virtual fonts. For example, you need an ff ligature for small caps? Create one by putting a small cap F then the kerning between small cap F and itself then another small cap F and then set the kerning for the ligature equal on both sides to the kerning for the small cap F. Etc. Also, there are cases in which it's convenient to create other fake ligatures although I forget my reasons.\n@1010011010 Take a look at the .mtx files under tex\/fontinst\/. I know you are not using this but take a look anyway. Electrum uses an .mtx file to build a tt.sc glyph. This is because the non-sc has a special tt glyph and I'm mirroring that for consistency. You can see the effects if you convert the relevant .vf files back to .vpl. Or compile the source which is more compiling but will create a more readable result. (Comments get stripped when you convert .vpl to .vf.\n\n4 hours later\u2026\n5:42 AM\n@baxx There are several things you can do. TikZ allows you to use relative coordinates, so you can say \\draw (10,10) rectangle +(1,1);. With that syntax you only have to change one coordinate.\nFurther, if you have several paths starting\/depending on the same coordinate, you can say \\coordinate (a) at (10,10);, and then use \\draw (a) rectangle +(1,1);. Another way, if you have multiple paths that should all be moved by the same amount, you can add a scope environment with a shift parameter around them, i.e. \\begin{scope}[shift={(-9cm,-9cm)}]\\draw ...; \\draw ...'; \\end{scope}.\n\n1 hour later\u2026\n7:12 AM\n@cfr Thanks for the suggestion. My issue is the exact opposite: I have wayyyyyyyyy too many ligatures. 
Even redundant ones like \"a_t\" and \"u_m\" and \"p_o\"\n\n7:26 AM\n@egreg On the backslash business: there are of course lots of variants here :-)\n@egreg See tug.org\/pipermail\/luatex\/2015-May\/005199.html for the thread about \\Ucharcat\n@egreg You can't add \\Ucharcat to luatex.fmt for the same reason you can't add \\pdfstrcmp or another other emulated primitive: number of expansion steps. The best that can be done is what @heiko does in pdftexcmds: wrapping up the primitive if available so a defined interface always requires two expansion steps.\n\n7:48 AM\n1\n\nOnly a question in the sense of feedback requested. Move if necessary... For some time, I have been working on a better alternative to packages such as pdfx and xmpincl. I think I've got it! Results have been checked against professional software, and it seems to work just fine. No known bugs (h...\n\nOff-topic => Not a question?\n\n8:07 AM\n@egreg In this case I don't really have an operator font. Since I only need () I just took them from the non-italic font ;-) And I realised I had to use \\mathpunct for . and ,, learn something new everyday.\n\n@egreg we like to give them a head start\n\n@JosephWright At least in luatex.ini or whatever is used. It's a pity one cannot really implement a new primitive-like function.\n\n@egreg I thought about that but as I say the issue is such an 'emulation' is not right. We'd need Hans to add a \\defprimitive or similar primitive for these cases.\n\n@DavidCarlisle I understand, but now the nieces of the 1974 players are in the team. 
:P\n\n@egreg I think suggesting that is my next mail to the luatex list, it could easily have a command that allowed you to define a new primitive in lua rather than having to \\def a macro that takes one step to expand to a call to \\directlua.\n\n8:17 AM\n@DavidCarlisle Can't see it on the list: I know you mentioned it to me but that might have been directly\n\n@JosephWright no I mean next one to ask (in the future:-)\n\n@DavidCarlisle Ah\n\n@JosephWright although I suspect they won't be keen, the extra expansion step mainly affects cross-engine compatibility and I suspect that is not an issue in context\/luatex core development.\n\n@DavidCarlisle Yes\n@DavidCarlisle Even outside of ConTeXt, it's clear Hans takes the line 'use only LuaTeX, don't code for multiple engines'\n\n8:33 AM\n@Johannes_B thanks\n\n9:10 AM\n@JosephWright I don\u2019t think that\u2019s fair. Hans definitely understands that people need cross-engine compatibility, he just doesn\u2019t do it himself. And he\u2019s maintained ConTeXt for both pdfTeX and luaTeX for years, so he knows what it\u2019s about (although he didn\u2019t aim at compatibility between Mark II and Mark IV).\n@JosephWright @DavidCarlisle Besides, he\u2019s not developing LuaTeX himself; Luigi is, so he\u2019s the one you need to convince.\n\n@ArthurReutenauer But they've already altered stuff such that you can't use LuaTeX as a drop-in replacement for pdfTeX, so that line seems clear\n@ArthurReutenauer ConTeXt for pdfTeX and LuaTeX is strictly separate (you can't use LuaTeX for MkII or pdfTeX for MkIV), so that's very different\n@ArthurReutenauer My impression was Hans sets the overall policy on these things\n\n@moose that's definitely a different (more or less unrelated) question as standard tex fonts don't have the reversed questionmark so it is just a \"how do i typeset ... 
random weird character..\" question it's not related to the inverted questionmark other than some visual similarity.\n\n@ArthurReutenauer Oddness with the DVD order for UK-TUG: I've asked twice for 100 DVDs but DANTE seem not to have our order!\n\n@JosephWright Is it too late to amend the order?\n\n@ArthurReutenauer Well they've not said yet that they've got it at all, so we can presumably ask for something different\n\n9:23 AM\n@JosephWright Re ConTeXt and LuaTeX, I think there are misunderstandings about how things actually work, but it\u2019s probably better to discuss it face-to-face.\n\n@ArthurReutenauer Quite possibly\n\n@JosephWright Do you want me to send an email to Martin & al.?\n\n@ArthurReutenauer Might be worth a try, in case my one gets blocked or lost or whatever has happened to the last two!\n\n@JosephWright I\u2019m doing that now.\n\n@ArthurReutenauer Ah, Martin has now got our order!\n\n9:29 AM\n@JosephWright Ah good :-)\n\n@JosephWright do we know if people actually use the DVDs? 
Every year I get two; one from tug and one from uktug but it's years since I've taken one out of its sleeve.\n\n@DavidCarlisle I use them :-) Very handy to give out to people setting up for the first time\n@DavidCarlisle I try to avoid sending dupes to people (joint UK-TUG\/TUG members get only one, from us).\n\n@DavidCarlisle Obviously some day we\u2019ll want to give people USB keys :-)\n\n@ArthurReutenauer Unlikely: DVDs are read-only\n\n@JosephWright maybe I only get one then, same issue though:-)\n\n9:31 AM\n@DavidCarlisle Quite a lot of members really do want the DVD\n\n@JosephWright I can imagine, but I got the impression that DVD were inconvenient for other people.\n\n@DavidCarlisle Possibly you will get two as you are not down as a joint member this year\n\n@JosephWright they have probably been using tex since the beginning and never noticed the internet.\n\n@ArthurReutenauer I've never had any actual complaints about the ones I send\n\n@JosephWright I only mentioned USB keys because J\u00e9r\u00e9my Just of GUTenberg looked into it a few years ago, but it was still considerably more expensive.\n\n9:33 AM\n@DavidCarlisle There's the whole business of having a once-a-year cycle of known versions\n\n@JosephWright am I not? (didn't we sign up all of us for joint membership?)\n\n@ArthurReutenauer Unless DANTE change things I won't\n@DavidCarlisle Ah, perhaps you are right: I'm thinking of the people who can get a student rate (Bruno, perhaps Will)\n\n@JosephWright Oh sure, I didn\u2019t say that we should do that by ourselves (although J\u00e9r\u00e9my was considering it back then). But I can see things evolving in that direction.\n\n@ArthurReutenauer DVDs are <\u00a31 each, so I suspect that will still be true\n@DavidCarlisle Not everyone is on a fast(-ish) connection\n\n@JosephWright yes but unless you are paying by the second, you can start a download and it'll be there in the morning.. 
I think I installed tl2015 from home where the connection is less than 2Mbps and it didn't take that long.\n\n9:40 AM\n@DavidCarlisle Do you think the question \"How do I typeset reversed question marks?\" is too specialized to be asked at tex.SE?\n\n@moose probably also the answer is too vague, some fonts will have it (any unicode font for example) or for the rest you can use \\reflectbox? unless you want cut and paste from the pdf to show a reversed questionmark in which case you need a font.\n\n@moose I agree with @DavidCarlisle: unless you give more context about your environment and your requirements, the most comprehensive answer one can give is the one he just made.\n\nOk. Thank you!\n\n2 hours later\u2026\n11:27 AM\n@egreg: just an FYI: I'm not Bob Tennent ;)\n\n11:40 AM\nhow do i find out what the font code is for something like tug.dk\/FontCatalogue\/berasans to use in a {\\fontfamily{tgchorus}\\selectfont section? So I can set the font just for that paragraph or whatnot\n\n@baxx \\show\\font\n\n@DavidCarlisle so i can put \\show\\tgchorus ?\nto get the font code that is\nI'm getting an error for \\show\\font and \\show\\<font-name>\n\n12:13 PM\n@baxx you don't get an error from \\show it stops as if for an error and shows the meaning of the token. It's a TeX primitive command\n@baxx but actually you want \\showthe\\font not \\show\n@baxx then you get a message like\n> \\OT1\/cmr\/m\/n\/10.95 .\n<recently read> \\font\n\nok cool - I'm getting :\n! 
Undefined control sequence.
l.8 \showthe\tgchorus

which means that without using a package you can use \fontencoding{OT1}\fontfamily{cmr}\fontseries{m}\fontshape{n}\fontsize{10.95}{13}\selectfont to get that font setup
@baxx yes where did you expect \tgchorus to be defined?

i just thought it would return me the font code for tgchorus

@baxx but if it is not defined, there is no code for it???

@baxx As @DavidCarlisle's example shows, font loading in LaTeX uses a family name as a string, not a command name (more like plain TeX syntax)

12:18 PM
right, I included \usepackage{tgchorus} in the preamble, i thought that might be enough

@baxx yes but the package doesn't define a command \tgchorus

right, so I need to write a macro for it to use to return the font name?

@baxx sorry I can not guess what you mean. what I meant was that if you have loaded a font package then at a point where it is using some font you can put \showthe\font and tex will show you the font being used at that point.
@baxx like this:
\documentclass{article}

\usepackage{tgchorus}

\begin{document}

\showthe\font

\end{document}

@DavidCarlisle no worries. I was just trying to use a different font, so the font being used probably isn't whats needed.
I was trying to use {\fontfamily{tgchorus}\selectfont <stuff> } but I don't know what the font code thing is for tgchorus font family
computer moderns looking pretty nice at the moment

@baxx which shows \OT1/qzc/m/it/10 so the family name is qzc not tgchorus in that case.

12:23 PM
@DavidCarlisle ok cool :) I can use that little mini doc to get the names :)
though I'd have thought that they'd just put the names online here tug.dk/FontCatalogue/pvscript and whatnot, hey ho

@baxx Karl Berry used to maintain a list of all the compressed names like cmr for computer modern and ptm for times (it was his scheme originally) but I think there are just too many fonts these days to maintain a central list, and the original reason for the compressed names no longer exists (8 letter filenames for msdos and similar OS)

@DavidCarlisle oh right cool - so is this method the done thing now? Just grab the output from a minidoc or whatnot?
8 letter file names... so there was nothing like classMoverMainImageFunctionPathFinder.java i guess

@baxx well some packages documentation will spell it out, or you could do the above, or you could look in the log file for the name of the .fd file that is loaded or you could read the source of the package or ....

@DavidCarlisle
main font here

{\fontfamily{pzc}\selectfont

different font just for this bit

}

main font continues
the headings in 'main font continues' section are being altered by the 'pzc' section, is this typical?
yeah sorry - thought i could press return and carry on typing here like Gitter chat

@baxx yes that's why when you use \usepackage{times} a one line package changes the text and headings from cm to times without having to explicitly redefine everything.
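The workflow discussed above — discover the family name with \showthe\font, then switch to it inside a group — can be condensed into a minimal sketch. This is an illustrative example, not from the chat itself; it assumes the TeX Gyre fonts are installed (standard in TeX Live), and uses the family name qzc reported by \showthe\font earlier:

```latex
\documentclass{article}
% qzc is the NFSS family name for TeX Gyre Chorus, as reported
% by \showthe\font in the chat above; NFSS loads the .fd file
% on demand, so no package is needed just to select the family.
\begin{document}
Main font here.

{\fontfamily{qzc}\selectfont
Different font just for this bit; the braces form a group,
so the font change stays local.}

Main font continues.
\end{document}
```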
Of course a class could define headings to use a specific fixed font, but most don't.

12:41 PM
\begingroup and \endgroup seemed to close it off from affecting other headings, thanks david !

@baxx yes or {} or any other kind of group, yes

1:30 PM
1

No. :)
_ _ _ _ _
>(')____, >(')____, >(')____, >(')____, >(') ___,
( =~~/ ( =~~/ ( =~~/ ( =~~/ ( =~~/
~^~^---'~^~^~^---'~^~^~^---'~^~^~^---'~^~^~^---'~^~^~

Anonymous
2:12 PM
Hi there. Might I ask someone how you could force a specific width (i.e. not unlimited, which is how my .tex file is currently being rendered) for a \newtcbox{...} please?

@VincentVerheyen use a fixed width content? eg \parbox{5cm}{...} ? unless tcbox has its own width setting (I didn't check)

Anonymous
I would like to put a multi-line passage (actually to quote a source text from another book) inside a \newtcbox{...}.

Anonymous
@DavidCarlisle Thanks, I'll take a look at that and try.

Anonymous
@DavidCarlisle Works great! Thanks a lot, the "tcbox" inside the \parbox didn't work, but putting the \parbox inside the "tcbox" charmed it all.

@VincentVerheyen ah yes that's what I meant, sorry I should have been clearer :-)

Anonymous
2:18 PM
@DavidCarlisle You didn't make any mistake. It was not a giant leap for mankind, nor for man to figure it out; no worries hé. :)

3:45 PM
once again I drag an MWE out of a reluctant user, only for @egreg to nip in and steal the tick :-)
7

@DavidCarlisle oldlfont !!!
@DavidCarlisle Of course we can blame Frank.

@egreg I blame you

@DavidCarlisle I know you need rep.

4:17 PM
@ChristianHupfer You like counters, right? Maybe you understand the question and can answer it. latex-community.org/forum/…

5:02 PM
I hate bugs. -.-

5:23 PM
@JosephWright I don't think we should accept this migration:
1

I am using .. [2] / [2]_ style footnotes in restructuredText.
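The \parbox-inside-tcbox fix that worked above can be sketched as follows. The box name \mybox and the colour options are illustrative, not taken from the chat:

```latex
\documentclass{article}
\usepackage{tcolorbox}
% A box made with \newtcbox is normally as wide as its (single-line)
% content; putting a fixed-width \parbox inside forces a multi-line
% quotation to wrap at that width.
\newtcbox{\mybox}{colback=white,colframe=black,on line}
\begin{document}
\mybox{\parbox{5cm}{A multi-line passage quoted from another book,
wrapped to a five-centimetre measure inside the box.}}
\end{document}
```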
I would like them to be displayed on the end of the page where they occur rather than at the end of the document. How can I go about this? I need a solution that works with rst2latex.

@egreg Vote 'off topic' then

@JosephWright Done

6:03 PM
@ChristianFeuersänger Hey I saw you have answered this question. But this is still happening although not always. Sometimes it goes away. Any idea on what I can do? I need Xelatex and I need the shading for a certain document. I have asked a question here about the same problem.

6:48 PM
@Johannes_B Hmm? You need a Canadian?
Ah, I see. Geeze, why is everyone using mhchem? Not a single chemacros person around?
Also, that is nuclear physics, not chemistry

7:07 PM
@Canageek I use chemmacros :-)
@Canageek The question was in regard of that 1000\perthousand, or people multiplying something with 100\percent.
@Canageek As you are a chemmacros user, i want you to see something: mychemistry.eu/2015/06/modular-chemmacros (also starred on the right side)

Besides updmap and texhash, what other options do I have to fix an issue with otftotfm? I've already completely reinstalled TeX Live 2014 and I've cleared the registry

7:29 PM
Hello to everybody. Just a word to @egreg, regarding your answer to tex.stackexchange.com/questions/249633/… : after all, I thought that direct manipulation of \@listctr is not inferior to copying values between counters back and forth.

7:43 PM
@ArthurReutenauer interval boheme:)

@GustavoMezzetti Indeed! I upvoted it!
@DavidCarlisle Is Mimi still alive?

@Johannes_B I don't understand the question in the sense what he really wants to there but I realized, that he might use my cntperchap package to know the number of chapters in advance (well, after the first run)
Only two days left until TeXLive 2015 ... I am so excited ;-)
3

7:59 PM
@GustavoMezzetti Well, I can't upvote your answers before you write them.
;-)
@GustavoMezzetti But surely I can write answers before you can. :)

@GustavoMezzetti -- i was thinking of something even sneakier, no enum packages, but cloning some of the basic latex.ltx code. but i'll forgo that, since i haven't got time today to work it out. but i'm intrigued by this one, since i've never seen it asked before.

@barbarabeeton: You are right, I'm going to upvote the question as well.

@barbarabeeton I believe it's for a list of immediate consequences of the theorem; not the best idea, probably.
That's why I provided also a version with different formatting.

@egreg @egreg: No, I meant that I did not upvote you in exchange (Italian mafia, you know ;-).

@egreg -- and all your versions look plausible. but, i think you might admit, that having everything within a chapter sequentially numbered has a certain logic to it.
@GustavoMezzetti -- admittedly, i don't know you, but you can generally assume that the chatterboxes here will always assume the worst.

8:12 PM
@barbarabeeton I can reveal that Gustavo and I have known each other for several years, though it's a long time we haven't met.

@egreg -- i suspected that from an earlier comment of yours. (see -- i am an attentive lurker.)

@egreg I think this was clear from the general tone of our conversation.
… as @barbarabeeton has just remarked.
OK, ladies and gentlemen, it's beginning to be a bit late for me, so good night to everybody (or goodnight, which is exactly the same, as it is in Italian ;-), and thanks again for the enjoyable time.

@GustavoMezzetti night night. I'll go too

8:40 PM
@egreg good job im fluent in italian (she has a cough) sure it will be ok

@Johannes_B Responded. I don't see the advantage.

@Canageek There isn't any effect multiplying by one, something strange is going on there.

@GustavoMezzetti Orthography has never been your strong side.
:P Buon riposo!

9:27 PM
@barbarabeeton I saw that LinkedIn discussion in my inbox just now and arrived at the comments section only to find that you'd beaten me to the punch line. :-)

@Johannes_B ?
@Johannes_B Hmm? Is this a new question? The one I found was a basic fraction thing.

@Canageek I should have taken a look at the message you were referring to. I was thinking of something completely different.
@Canageek I'll take a look at it as soon as it is out in the wild ;-)

@Johannes_B Link me to the message you are talking about?
Also: I recently subscribed to this podcast. Very disappointed that it is all music. latexradio.com

@Johannes_B Which question is that talking about?

9:33 PM
@Canageek The nuclear physics one, where everybody uses mhchem.
@Canageek Does this one have anything to do with our kind of LaTeX?

@Johannes_B I don't think so! That is the weird thing.

@Canageek RandOm UppErcaSing LeaDs to cOnfusioN in diFfeRent cOmmUnitieS ArOund the wOrld.

Any mathematicians in here?

@1010011010 At least one crazy penguin listening to Scooter. YEAH

@Johannes_B Are you drunk? :-)

9:46 PM
@1010011010 No, sometimes i just feel like listening to Scooter and other crap ;-)

9:59 PM
@1010011010 I may give some help.

@egreg Good. It's actually really trivial. What's your opinion on the spacing between letters in math? I'm fine tuning everything and I don't really know where to start :-) I'm not very savvy when it comes to "profesionally typeset math", but I can imagine that you are
Interested?

@1010011010 Italic letters should have wider sidebearings than normal italics. In Computer Modern Math Italic they are also slightly different (wider, generally) from text italic. How wider? Well, that depends on several factors.

@1010011010 This is too wide.
@1010011010 The hook of the a shouldn't be taken into much consideration for the side bearing: you can see that the exponent is too far.
Conversely, the k has too small sidebearings. Also uppercase is too tight.

Yes, uppercase hasn't been touched yet (notice there is no difference between Default math, and "Standard").

10:07 PM
@1010011010 Too wide is for some letters, actually: the "c", for instance. I find the spacing inconsistent.
@1010011010 Prepare the same with Computer Modern and compare. However, I'd never use such a font for math. The "E" is "funny" (to use understatement).

@egreg I understand where you're coming from, though that E will never be used in the final product. It's actually made for a book in physics. Mathematics is generally typeset prettier (in my opinion), that's why I was looking for people like you.

@egreg seemed my optimism was misplaced

@egreg CMR spacing seems weird too

10:38 PM
@1010011010 Compare also with fourier and kpfonts

Thanks, tomorrow I'll redo the virtual font with those three fonts. Night
First Team News | The One And Only, Issue 11

Preston North End No.1 Declan Rudd is the main interview in this weekend's edition of The One And Only.

The goalkeeper is part of the PNE furniture now, having signed permanently in 2017 following two loan spells earlier in his career under then boss Simon Grayson. He has been part of a couple of near misses as North End have challenged for the Play-Offs over the last couple of years and, despite being among the early challengers this season, he is hoping few are taking note of Alex Neil's men.

"I think people may be taking more notice of us but I hope not. We always seem to have been, since I have been here, a team which has slipped under the radar. No one talks about us.

"Even when we were in and around it last year, no one was speaking about us. I still don't think anyone is taking us seriously now.

"Hopefully we can keep it that way, we want to slip under the radar and go about our business without too much pressure being put on us.

"It goes to show, it's not always about how big the name of a club is, or how fancy it is. It's about the players on the pitch and if you go out there and give it your all, which we have here as everyone goes out and gives it everything they have got, that goes a long way and then we also have the quality in the team to win us games."

While no one is getting too carried away in the Deepdale camp, Declan is one of a few PNE players who have experienced life at the top level, having played in the Premier League for Norwich.

"I had a taste of it, I've been there before, and it's something we all aspire to get back to. It's not an easy task but it's a big carrot at the end of the season to work towards and thrive towards, and we have given ourselves a good start.

"We are level-headed, we know there is still a lot of football to be played so it's a bit early to be looking too far ahead.

"It's hard not to get too excited when you play in the Premier League. We had one of Norwich's best results, they probably topped it this year by beating Man City, but we beat Manchester United at Old Trafford which was the first time a Norwich side had done that in 25 years.

"I played in that game, we won 2-1 and it was a great feeling. Unfortunately, at the end we didn't achieve our goal, we didn't stay up, so that result means pretty much nothing but, when you look back personally, I played in some really big games and learnt so much from that time in the Premier League.

"I played two games in it when I was 20 and that was a rabbit-in-headlights scenario. I got back into it around four years ago, 2015/16, I had a longer run in the team and it was something I always want to try and get back to doing."

Declan says life at PNE has changed since he signed permanently. He put pen to paper under Grayson just before the manager switched to Sunderland that summer, and then former Norwich boss Neil took over in July 2017. He has made the No.1 shirt his own over the years and he feels 'the gaffer's' patient strategy at Deepdale is now paying off.

"The gaffer has changed it, his playing philosophy is completely different to what Simon's was, but he didn't change it completely straight away because of the type of players he had to bring in and change around. It's quite a big jump to go from playing the way we did to the way we play now.

"There's nothing to say either way is better than the other as both managers have their own styles, but what the gaffer has done is taken his time in getting his full philosophy across to us.

"This year is probably the first year where he has set out exactly what he wants, how he wants us to play, and he has had a number of transfer windows where he has been able to bring people in and change a few people around, and that's helped in setting up how he wants to go.

"We have a big squad as is well-documented, but the squad mentality is good; even the lads who aren't playing are happy to be here and are still enjoying it.

"At other clubs, lads who aren't playing can be poisonous to the team who are actually playing, but there's not one of them here, and that helps the lads who are out there on a Saturday or the lads who come in if called upon or if there are injuries.

"It's a really good feeling in the dressing room, in the team and in the training ground. It's an enjoyable place to be.

"Last year we didn't start well, we were bottom of the table at one point, but we knew we could get ourselves out of the situation.

"However, with having a positive start this season, it's a different mindset; the dressing room is full of confidence.

"You only have to look at the likes of Tom Barkhuizen, he is full of confidence again. You can see it in training, he is facing players up and taking them on, he has got the belief back in his ability.

"We all knew he had it, he just needed to believe in it again and he does, and he is scoring brilliant goals and vital goals in tough games for us."

There's plenty more from Dec in Saturday's programme, plus interviews and features with Jayden Stockley, Ben Pearson, Calum Woods, Chris Lucketti and Neal Trotman.

This is another packed 84-page edition, on sale from Friday afternoon from the Sir Tom Finney Stand ticket office, or from inside and outside the stadium on matchday, still for just £3 per copy.

Preston North End vs Huddersfield Town, 9 November 2019
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
9,528
Manchester City Boss Declares Interest In Signing Liverpool Captain Steven Gerrard

City could move for unsettled Reds midfielder.

Manchester City could be interested in signing Liverpool midfielder Steven Gerrard if he decides to leave Anfield next summer, according to the Daily Telegraph.

The 34-year-old Reds captain is thought to be considering his future at the club he has been with for his entire career, with his contract set to expire at the end of this season and no new offer currently on the table.

Manuel Pellegrini admits he is keeping an eye on the situation of the former England international, who could follow in the footsteps of former team-mate Frank Lampard in making the shock switch to the Etihad Stadium next season.

Lampard joined City on loan from New York City FC, whom he signed for after leaving Chelsea at the end of his contract last season, and Gerrard's move to his team's Premier League rivals would be similarly surprising.

Gerrard has made nearly 700 appearances for Liverpool in a great career with his boyhood club, but it may be that his age will see him fall out of favour with Brendan Rodgers by next season. Pellegrini, however, still feels the player can perform at the top level and could be a good addition to his squad.

"I don't ever rate players on their age," the Chilean said. "It depends on the money they cost and the number of years they can play for.

"Like Frank Lampard, Steven Gerrard is a top player and he can continue at a high level for a couple more years."
Last but not least, look closely at what goes into the premium when you collect minimum auto insurance quotes. By now you should know what you are entitled to as the policyholder, and whether the amount of coverage shown on your screen is fair. Maintaining a clean driving record matters: a record sullied with DUIs or other violations makes finding the cheapest car insurance in NE much harder, while a safe driver will pay noticeably less. Comparing several A-rated companies, and weighing each company's reliability as well as its price, can be worth roughly 10-20% in savings. Many insurers will also adjust your rate for low mileage, so ask about it. Request quotes from at least three companies before you decide, and remember that your previous policy and your vehicle's value both affect the price. Shopping online makes this easy, since quotes can be gathered in minutes rather than days.

In order to save hundreds of dollars, drivers should answer insurers' questions accurately and read each policy before signing: it is often difficult to get the full benefits of a policy otherwise. Check what the liability coverage actually pays for damages, and do not sign a release after an accident until you understand it. If the officer who arrives at the scene issues a ticket, expect to be treated as a higher risk at renewal. Finally, ask whether paying the premium in whole-year or half-year amounts is cheaper than monthly installments, and whether an additional personal liability policy is worth adding.
{ "redpajama_set_name": "RedPajamaC4" }
3,038
Q: Webview scroll and transparent layer

I am developing a web application on iOS using UIWebView. See this image. I have a UIWebView and make its content scrollable using iScroll. The problem is that the menu bar is a rectangle with some transparent area (upper left side in this image). I expected that in the transparent area I would see the underlying content, but I cannot see the scrollable content there: the area just shows the webview's background image (set from Objective-C). Is it impossible to display a fixed, curved-shape menu bar on top of scrollable content? Do I have to build it in Objective-C code rather than with web languages?

A: I found a solution myself. First make the scrollable view with iScroll. Then create a 0x0-pixel div with position: relative, and place the menu bar inside it with position: absolute. It should work well. ^^
{ "redpajama_set_name": "RedPajamaStackExchange" }
7,108
Een koor is een groep zangers en/of zangeressen, doorgaans bestaande uit meerdere zangstemmen, die onder leiding van een koordirigent staat. De term vocaal ensemble wordt wel gebruikt voor kleine koren. Het woord koor is waarschijnlijk via het Oudfranse cuer (dat zowel een groep zangers, de altaarruimte in een kerk, als koorzang betekent) ontleend aan het middeleeuws Latijnse chorus (met dezelfde betekenis), dat op zijn beurt voorkomt uit het Oudgriekse khorós (een groep zangers of dansers die tijdens religieuze feesten en toneelstukken optrad; tevens de aanduiding van de plek waar dat gebeurde). Nederland telde in 2020 zo'n 1 miljoen koorzangers. Europa telde in 2021 zo'n 37 miljoen koorzangers, verdeeld over bijna een miljoen koren in 40 landen. Geschiedenis Koorzang heeft zijn oorsprong in religieuze bijeenkomsten. In Europa treffen we koorzang aan in de Vroegchristelijke (4e eeuw) liturgie. Een schola cantorum was een koor in de allervroegste vorm toen er nog geen meerstemmige muziek bestond. Dit koor zong eenstemmige kerkmuziek (Gregoriaans) en bestond voornamelijk uit mannelijke geestelijken. Tijdens de mis bevonden zij zich in het koorgedeelte van de kerk. Omdat het vrouwen in principe niet was toegestaan een actieve rol te vervullen in de eredienst zongen zij ook niet in het koor. Vrouwelijke kloostergemeenschappen (die in de late middeleeuwen ontstonden) hadden wel een eigen koor met nonnen. Het oudste, nog bestaande jongenskoor ter wereld is de Regensburger Domspatzen. Het koor stamt uit 975 toen bisschop Wolfgang van Regensburg een kathedraalschool stichtte die onder meer jongens leerde zingen in de liturgie. Uit eenstemmige muziek ontwikkelde zich in de negende eeuw tweestemmigheid, eerst in de vorm van het organum. Rond 1200 ontstaat driestemmige koormuziek. Bij deze stukken is de belangrijkste stem de tenor, omdat deze de melodie zingt (het Latijnse tenere betekent 'vasthouden', dus de stem die de melodie 'vasthoudt'). 
Hieronder klinkt de bas (bassus betekent diep, laag) die de baslijn zingt. De hoogste stem is de contratenor (contra betekent 'tegen', dus de tegenstem tegen de tenor), ook wel altus genaamd (van altus cantus, dat 'hoge zang' betekent). Het kwam regelmatig voor dat de mannenpartijen aangevuld werden met blaasinstrumenten, of dat een partij voor zangstem door een instrument werd uitgevoerd. Ook ontstond de (wonderlijke) praktijk dat er aan een melodie met religieuze Latijnse tekst, tegenmelodieën met wereldlijke teksten in de landstaal toegevoegd werden. Doordat de muziek bestond uit lange melismen en de tekst daardoor haast niet te volgen was, kon dit waarschijnlijk redelijk ongestraft in de kerk uitgevoerd worden. Het was gebruikelijk met elkaar rond een muziekstandaard te staan met daarop 1 groot koorboek, waarin de verschillende stemmen apart genoteerd stonden (dus niet boven elkaar zoals in een hedendaagse partituur). Dit beperkte ook de grootte van het koor door het aantal zangers dat rond één boek kon staan. Na de uitvinding van de boekdrukkunst in de 15e eeuw werden muziekstukken ook uitgegeven in losse stemboekjes, waarmee elke zanger zijn eigen koorpartij in handen kreeg, en het koor dus groter kon zijn. Rond 1450 wordt een hoge vierde stem aan de driestemmigheid toegevoegd, namelijk de superius (Latijn, 'de hoogste'), tegenwoordig sopraan genoemd (van het Italiaanse sopra, 'boven'). Vervolgens verschuift de melodie van de tenor naar de sopraan, zodat deze beter hoorbaar wordt. De (complexe) Renaissancistische madrigalen uit de 14e eeuw worden beschouwd als de eerste belangrijke vormen van wereldlijke meerstemmige zang, die echter niet bedoeld waren voor uitvoering door een koor maar door enkele solozangers. 
Er was geen sprake van openbare concerten met dergelijke werken (concertzalen ontstonden pas in de 18e eeuw), maar deze muziek werd aan de hoven (zoals het Bourgondische Hof) en in stadspaleizen door mannelijke zangers uitgevoerd om de edelen en de aristocratie te vermaken. De Italiaanse frottola (15e eeuwse volksliedje) is een andere vroege vorm van wereldlijke meerstemmige zang. Meerstemmig studenten-, liefdes- en drinkliederen werden in kroegen en in kleine kring door mannen als gezamenlijk tijdverdrijf gezongen. In de 16e en 17e eeuw worden madrigalen ook in de huiselijke kring gezongen (waar ook vrouwen aan deelnamen) zoals de Franstalige psalmen en Latijnse Cantiones Sacrae van Jan Pieterszoon Sweelinck. In de 18e eeuw zijn voor het eerst vrouwelijke solisten te horen tijdens openbare muziekuitvoeringen, en ook tijdens de uitvoering van missen in kerken. Pas in de 19e eeuw komt koorzang als burgerlijk tijdverdrijf op (zie bijvoorbeeld de oprichting van het Toonkunstkoor Amsterdam in 1829). Omdat openbare koorzang niet langer alleen gerelateerd is aan kerkmuziek en kerkzang kunnen ook vrouwen nu in koren zingen. Bezetting Men spreekt in de regel van een koor bij een bezetting van minimaal twaalf zangers die minstens in twee stemmen zingen. De 'stemmen' zijn de verschillende zangstemmen in de groep, van hoog tot laag; meestal zingen koren vierstemmig: sopraan, alt, tenor en bas (SATB). Als deze partijen solistisch bezet worden spreekt men van een kwartet, bij een dubbele bezetting per stemsoort van een dubbelkwartet. Hele kleine ensembles zingen ook wel zonder leiding van een dirigent, maar worden dan geleid door een van de zangers die de inzetten aangeeft. De sopraan- en altpartijen worden in de regel door vrouwen uitgevoerd. De tenor- en baspartijen worden vrijwel altijd door mannen gezongen. In sommige koren (met een tekort aan mannen) worden lage vrouwenstemmen ingezet om de tenorpartij te versterken. 
De klankkleur van een lage vrouwenstem is echter heel anders dan de typisch heldere kleur van de hoge mannenstem. Er is een wereldwijde tendens dat er steeds minder mannen met echte tenorstemmen zijn. Dit heeft deels een fysieke reden (onder bevolkingen met lange mensen zijn er minder tenoren), deels een culturele reden: hoog zingen wordt in sommige culturen als 'niet-mannelijk' beschouwd. De verdeling over de verschillende stemmen bij een gemengd koor hoeft in aantal niet gelijkwaardig te zijn. Wel is het van belang dat er balans in geluidssterkte is. Omdat lage stemmen in de regel sterker klinken dan hoge, en mannenstemmen sterker dan vrouwenstemmen, is bij een kamerkoor een bezetting van 6S - 5A - 5T - 4B niet vreemd. Er zijn ook aparte vrouwen- en mannenkoren, waarbij men de partijen in de regel verdeeld over 2 tot 4 stemmen: Sopraan - mezzosopraan - alt 1 - alt 2, en bij een mannenkoor: Tenor 1 - tenor 2 - bariton - bas. Bij een kinderkoor dat meerstemmig kan zingen is meestal sprake van een verdeling in sopraan 1 - sopraan 2 - alt. Bij veel Engelse koren, met name de universiteits- en kathedraalkoren met een eeuwenoude traditie zoals het King's College Choir in Cambridge, wordt de sopraanpartij (treble) door jongens en de altpartij door countertenoren uitgevoerd. De kinderen in deze koren zingen enkel de sopraanpartij, en er zingen geen vrouwen mee. Goed getrainde koren, zowel kamerkoren, oratoriumkoren als opera- en operette-koren kunnen wel 6-tot 8-stemmige werken zingen. De stemverdeling is dan als volgt: Sopraan 1 - sopraan 2/mezzosopraan - alt 1 - alt 2 - tenor 1 - tenor 2/bariton - bas 1 - bas 2. Een 8-stemmig koorwerk kan echter ook verdeeld zijn over twee SATB-koren. Men spreekt dan van dubbelkorigheid. Dit is bijvoorbeeld het geval bij de koorpartijen van Bachs Matthäus-Passion. 
There is Renaissance church music that runs to as many as 40 parts (the motet Spem in Alium, for eight 5-part choirs, by Thomas Tallis), and the Stabat Mater of Krzysztof Penderecki is written for three choirs of 16 parts each, 48 separate vocal parts in all. The size of a choir is tied to its repertoire: a chamber choir performing chamber music will on average have 3 to 6 singers per part, so 12 to 24 singers in total. An oratorio society or opera/operetta chorus is as a rule much larger, with 10 to 20 singers per part, so 40 to 80 singers, sometimes up to 120. Such numbers are usual for larger choral works such as Beethoven's Ninth Symphony (Ode an die Freude), Ein deutsches Requiem by Johannes Brahms, Carmina Burana by Carl Orff and Bach's Matthäus-Passion. In the (professional) historically informed performance practice of Baroque music, however, smaller choirs are used. Choral practice Choral training The essence of choral sound is a homogeneous blend: rather than individual voices being audible, the voices should merge with one another as much as possible. For a singer in the choir this means that the goal is reached when he no longer hears himself while singing. It is also important that singers learn to colour their voices, that is, to sing with a bright or a covered tone, for example, to achieve a particular effect. A choral singer can take voice-training lessons to learn to use the voice to best effect, that is, to sing as much in tune as possible and with good breath support, well articulated, and in every register without forcing the voice. It is also important that choral singers can read music and have a command of solfège, that is, can sing a part straight from the sheet music without the help of an instrument to pitch the notes correctly. Lessons in choral training allow these skills to be practised in a group.
There are also choirs where the conductor gives breathing exercises and vocal-technique pointers during the warm-up at the start of the rehearsal. A choir in principle sings unamplified. Only choirs that sing modern styles, such as gospel and pop choirs, use amplification. Skills Not every choir makes high technical demands of its singers: there are also choirs where the social side comes first and everyone is welcome, whatever their level of singing. Even so, there are minimal skills needed to function well in a choir, among them the ability to sing back a melody without mistakes, to correct one's own mistakes after directions from the conductor, and to learn a choral part largely by heart so that one can watch the conductor for cues while singing. Children can usually join a children's choir from the age of 6 or 7. There are often different age groups, depending on whether the child can already read. The child must also be able to muster the concentration to keep paying attention throughout a rehearsal. Rehearsals of children's choirs for young children usually last 45 minutes at most. Rehearsals and performances Most choirs rehearse weekly all year round (school holidays aside), usually for 2 hours with a break. A pianist is often present to play through the choral parts and, once the choir knows the piece somewhat better, to play the orchestral part on the piano. Depending on the repertoire (difficulty, length) there are one or more performances a year at which new repertoire is presented in a public concert. There are also choirs that work only by project, rehearsing for example 6 times and then giving a performance. These are often experienced singers who can learn their part at home, so that less rehearsal time is needed before a concert.
There are also choirs in which convivial singing together is the main point and which never perform, or at most for a small audience of family and friends. Depending on its level and ambition, a choir can also enter competitions such as the Internationale Koorwedstrijd Vlaanderen, the Nederlands Koorfestival or the international choir festival CantaRode in Kerkrade. Types of choirs General A choir can be named after its make-up, such as mixed choirs (men and women), women's choirs, men's choirs, boys' choirs (usually ages 7 to 14, until the voice breaks), girls' choirs (usually ages 13 to 20), children's choirs, youth choirs, student choirs and senior choirs. It can also be named after its repertoire, such as a chamber choir, oratorio choir, opera/operetta chorus, pop choir, world-music choir, shanty choir, barbershop choir, smartlappen (torch-song) choir, gospel choir, or a church choir, cantorij or schola cantorum (Gregorian chant). Roman Catholic churches often have a wedding-and-funeral choir that sings only at marriages and funerals. Besides church-affiliated choirs, there are also ideologically oriented choirs, as was the case with the Amsterdam choir De Stem des Volks, which became known for its socialist struggle songs. A comparable choir of the same name in Hilversum also no longer exists, in contrast to two other choirs with the same name and principles in Utrecht and Maastricht. There are further choirs founded by, for example, the employees of a particular company (such as the Philips Philharmonisch Koor or the Ritmeester Veenzangerskoor), and choirs aimed at a particular group such as the Nederlands Studenten Kamerkoor, the Rotterdams Expat Popkoor, the Vocaal Theologen Ensemble or the Zingen voor je Leven choirs for people living with cancer.
English tradition In the Netherlands there are also choirs that devote themselves to the English Anglican choral tradition, such as the Leidse Cantorij, attached to the Hooglandse Kerk in Leiden. In this genre there are also regular project choirs that travel abroad to sing the daily choral evensong in an English or American cathedral while the resident choir is on holiday, for example. Interest in this form of choral singing is growing in the Netherlands, as performed by English-style boys' choirs such as the Kampen Boys Choir and the Roder Jongenskoor. The boys perform in typically English choir dress. Choir schools For their musical training, the children of such a choir attend a choir school several times a week. In the Netherlands, the music institute of the cathedral of Sint Bavo in Haarlem runs the Koorschool Haarlem on this English model. In Utrecht a primary school is affiliated with the Kathedrale Koor (Kathedrale Koorschool Utrecht), where alongside primary education the children receive daily music tuition and sing in the choir of the Catharijnekerk, Utrecht's cathedral. In the north of the Netherlands there is the Stichting Koorschool Noord Nederland. The Netherlands has both children's choirs and separate boys' and girls' choirs (such as the Roden Girl Choristers) that work on this model. The reason for separate choirs is a matter of sound: a choir consisting purely of boys has a different sound from a choir of girls or a mixed children's choir. Children must audition for admission to a choir school. Membership of such a choir demands a great deal of discipline and often serves as a stepping stone to a lifelong musical career (not necessarily as a singer).
At the national level, the Stichting Vocaal Talent Nederland organises choral training programmes in various places and for various groups (children's choir, boys' choir, mixed youth choir, female youth choir and a soloists' class). These programmes are not aimed at church music practice. Kinderen voor Kinderen A well-known Dutch children's choir is the choir of Kinderen voor Kinderen of the broadcaster BNNVARA, which since 1980 has released an annual album of new songs in a popular style, coupled with vlogs, a TV series of video clips in which they sing together with Dutch celebrities, and a TV show with a live performance. Each year a new choir is assembled through multi-round auditions, for which between 1,000 and 2,000 children apply. Child labour In the Netherlands children under 13 may not work, but exemptions can be granted for (cultural) performances. Children's choirs must therefore meet strict requirements regarding the number of hours of rehearsals and performances, and the duration, timing and travel time to and from the activities. In England it is customary to treat the boys of the great cathedral choirs professionally, which means that, like the adults, they receive payment for singing in the services. Professional choirs A professional choir is a mixed choir in which only singers who have completed a conservatoire degree in singing can take part, sometimes supplemented by very good amateur singers. Admission is by audition. The singers are paid for performances and the associated rehearsals. Concerts and rehearsals are organised on a project basis and often include a tour. A professional choir usually has a principal conductor who leads the rehearsals, conducts several projects a year and is responsible for shaping the choral sound. In addition, well-known guest conductors are frequently engaged.
Such a choir can also be hired by professional orchestras when the orchestra performs a work that includes choral movements, such as the choral section Aufersteh'n in the Second Symphony of Gustav Mahler. If the choir's primary function is to take part in orchestral works or operas (and it therefore gives no concerts of its own devoted purely to choral music), there is often an assistant conductor or répétiteur who assembles the choir, teaches it the music and prepares it for the first rehearsals with the orchestra, which are led by the orchestral conductor who directs the performance. Belgian professional choirs Well-known Belgian professional choirs are Collegium Vocale Gent, the Huelgas Ensemble, Vox Luminis, the Vlaams Radiokoor, the chorus of De Munt and the chorus of Opera Ballet Vlaanderen. Dutch professional choirs Well-known Dutch professional choirs are the Nederlands Kamerkoor, the Groot Omroepkoor, the chorus of the Nationale Opera, The Amsterdam Baroque Choir, the choir of the Nederlandse Bachvereniging, the Bachkoor Holland, the choir of The Bach Choir and Orchestra of the Netherlands, the Amsterdam chamber choir Cappella Amsterdam, the Rotterdam chamber choir Laurens Collegium, the east-Netherlands chamber choir Consensus Vocalis and the south-Netherlands chamber choir Studium Chorale. Repertoire When choirs sing without instrumental accompaniment this is called a cappella. All monophonic and virtually all polyphonic choral music up to the Baroque is a cappella. From the Baroque onwards, choral music is accompanied by instruments, including in any case the organ (the basso continuo practice), although in church practice the a cappella motet also remains in use. In performances of present-day church music, organ or piano accompaniment is usually employed. In the performance of an oratorio or passion, the choir is joined by vocal soloists. Such a work is always accompanied by an orchestra engaged by the choir.
The soloists portray particular characters from the story; the choir often takes the role of the crowd. A comparable structure is found in the cantata, which is, however, much smaller in scale, both in duration and in the required forces of instrumentalists and soloists. Because Western choral music originally began in the church (and is still performed there on a large scale), the greater part of the choral repertoire is religious in orientation. The rise of middle-class choirs in the 19th century stimulated the composition of secular choral works, often based on a poem or folk song rather than a religious text. In the 19th century the part-song emerged, a polyphonic counterpart of the art song (such as the part-songs of Felix Mendelssohn Bartholdy), both a cappella and with piano accompaniment. In the 20th century choral works were also composed that have a religious text but are not intended for performance in a church service, such as the Symphony of Psalms by Igor Stravinsky and the Chichester Psalms by Leonard Bernstein. A well-known large secular work for choir, soloists and orchestra is the Carmina Burana of Carl Orff. Choirs with a popular repertoire (such as shanty, torch-song, gospel and pop) mostly sing monophonic melodies arranged for several parts. Gospel and pop choirs are often accompanied by a band, shanty and torch-song choirs by an accordion.
External links Choir organisations Flanders Koor & Stem VZW Netherlands Bond voor Amateurmuziek en Lichte Koormuziek Koninklijke Bond van Zang- en Oratoriumverenigingen in Nederland Koninklijke Christelijke Zangersbond Koninklijk Nederlands Zangersverbond Koornetwerk Nederland Landelijke Organisatie van Ouderenkoren Nederlandse Sint-Gregoriusvereniging Shanty Nederland Vereniging Toonkunst Nederland Europe Europa Cantat Other external links Choir festivals CantaRode - International choir festival Nederlands Koorfestival Koorbiënnale Internationale korenwedstrijd Vlaanderen Children's choirs Kinderkoren van De Munt Stichting Vocaal Talent - Nationale koren Kinderen voor Kinderen Kampen Boys Choir Roder Jongenskoor Roder girl choristers Professional choirs Collegium Vocale Gent Huelgas Ensemble Vox Luminis Vlaams Radiokoor The chorus of De Munt The chorus of Opera Ballet Vlaanderen Nederlands Kamerkoor Groot Omroepkoor The chorus of De Nationale Opera The Amsterdam Baroque Choir The choir of the Nederlandse Bachvereniging Bachkoor Holland The choir of The Bach Choir and Orchestra of the Netherlands Cappella Amsterdam Laurens Collegium Consensus Vocalis Studium Chorale Other Zingen voor je leven See also Nederlands Koorfestival Koorbiënnale Kamerkoor Koordirigent Stemvorming
{ "redpajama_set_name": "RedPajamaWikipedia" }
Q: How do I set up the regular expression in $config['permitted_uri_chars']? Good afternoon! I have a question about $config['permitted_uri_chars'] in CodeIgniter. By default the value is $config['permitted_uri_chars'] = 'a-z 0-9~%.:_\-'; How do I set up the regular expression here so that any character is allowed in the URL? A: Any characters: $config['permitted_uri_chars'] = '.'; If by "any characters" you mean Russian letters: see this link. Answers to other questions. A: Solving the problem with Russian characters in URLs - an article on Habr.
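To see the difference between the default whitelist and a match-anything pattern, here is a small Python sketch (CodeIgniter itself is PHP; this is only an illustration, and the assumption that the configured string ends up inside a regex character class is mine):

```python
import re

# The default CodeIgniter whitelist from the question, expressed as a
# character class (hypothetical reconstruction for illustration only).
default_allowed = re.compile(r'^[a-z 0-9~%.:_\-]+$', re.IGNORECASE)

# A match-anything check: '.' matches any character except a newline.
match_any = re.compile(r'^.+$')

print(bool(default_allowed.match('products_1-2.html~')))  # True
print(bool(default_allowed.match('страница')))            # False
print(bool(match_any.match('страница?!/')))               # True
```

The whitelist rejects Cyrillic (and slashes, question marks, and so on), while the permissive pattern accepts everything; whether that is safe for your routing is a separate question.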
{ "redpajama_set_name": "RedPajamaStackExchange" }
Q: CKEditor inline rearrange auto-generated toolbar I have an inline content editor by using the contenteditable attribute. What I want is to rearrange the default auto-generated toolbar. The usual method is to create something like: config.toolbar = [ { name: 'document', groups: [ 'mode', 'document', 'doctools' ], items: [ 'Source', '-', 'Templates' ] }, { name: 'clipboard', groups: [ 'clipboard', 'undo' ], items: [ 'Cut', 'Copy', 'Paste', 'PasteText', 'PasteFromWord', '-', 'Undo', 'Redo' ] }, { name: 'editing', groups: [ 'find', 'selection', 'spellchecker' ], items: [ 'Find', 'Replace', '-', 'SelectAll', '-', 'Scayt' ] }, { name: 'basicstyles', groups: [ 'basicstyles', 'cleanup' ], items: [ 'Bold', 'Italic', 'Underline', 'Strike', 'Subscript', 'Superscript', '-', 'RemoveFormat' ] }, '/', { name: 'paragraph', groups: [ 'list', 'indent', 'blocks', 'align', 'bidi' ], items: [ 'NumberedList', 'BulletedList', '-', 'Outdent', 'Indent', '-', 'Blockquote', 'CreateDiv', '-', 'JustifyLeft', 'JustifyCenter', 'JustifyRight', 'JustifyBlock', '-', 'BidiLtr', 'BidiRtl' ] }, { name: 'links', items: [ 'Link', 'Unlink', 'Anchor' ] }, { name: 'insert', items: [ 'Image', 'Flash', 'Table', 'HorizontalRule', 'Smiley', 'SpecialChar', 'PageBreak', 'Iframe' ] }, '/', { name: 'styles', items: [ 'Styles', 'Format', 'Font', 'FontSize' ] }, { name: 'colors', items: [ 'TextColor', 'BGColor' ] }, { name: 'tools', items: [ 'Maximize', 'ShowBlocks' ] }, { name: 'others', items: [ '-' ] }, ]; in the config.js. The problem is that I don't know where to find the already auto-generated toolbar so as to change it the way I want. So I don't know what are the names used in the toolbar and therefore I can't make it how I want. (the code used above is not the one I want obviously..) Thanks in advance! A: Have you seen the Setting Configuration guide? 
You can either set toolbar in the config.js file which will be loaded while initializing editor or directly in CKEDITOR.inline, but to use this method you need to disable automatic editors creation: // We need to turn off the automatic editor creation first. CKEDITOR.disableAutoInline = true; var editor = CKEDITOR.inline( 'editable', { toolbar: [ ... ] } ); If you don't know button names, then check out this question: What toolbar buttons are available in CKEditor 4? Note: instead of rearranging entire toolbar, you can just rearrange groups of buttons – read more in the Toolbar Customization guide.
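As a concrete illustration, a reduced toolbar for an inline editor might look like the following (the group and button names are taken from the default CKEditor 4 set quoted in the question; which ones you keep is up to you):

```javascript
// Minimal rearranged toolbar: three groups instead of the full default set.
var inlineToolbar = [
  { name: 'basicstyles', items: [ 'Bold', 'Italic', 'Underline' ] },
  { name: 'paragraph',   items: [ 'NumberedList', 'BulletedList' ] },
  { name: 'links',       items: [ 'Link', 'Unlink' ] }
];

// In the browser (illustrative; CKEditor must be loaded on the page):
// CKEDITOR.disableAutoInline = true;
// CKEDITOR.inline( 'editable', { toolbar: inlineToolbar } );
```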
{ "redpajama_set_name": "RedPajamaStackExchange" }
"Tomsky Podshipnik" ("Tomsk Bearing", formerly GPZ-5) is a Soviet and Russian machine-building enterprise in Tomsk, one of the leading Soviet manufacturers of bearings. Founded in 1941, it ceased operations in 2010. History After the start of the Great Patriotic War, part of the equipment of the Moscow plant GPZ-1 was evacuated to Tomsk (16 shops and about 1,000 skilled workers and engineers). Production was headed by the former chief engineer of GPZ-1, M. I. Edelstein, and from 10 February by S. V. Pinegin. The equipment was installed in former warehouses, barracks and stables of the Northern military compound. To deliver it, a railway branch was laid from the Tomsk-II station. Production of bearings for military equipment was up and running by 21 November 1941. By the end of the year about 40,000 units had been produced; in 1942, almost 2,000,000. In total, about 9.5 million bearings were produced during the war. In 1942-1943 new buildings were constructed for the forging, roller, ball and transport shops, and the workforce was augmented by vocational-school students evacuated from Odessa, Vinnytsia and other cities. In 1945 it was decided not to return the equipment to Moscow but to turn the operation into an independent enterprise, named State Bearing Plant No. 5 (GPZ-5). In 1953 a new building of 10,000 square metres was constructed. In 1958 it was decided to build a branch of the plant on Ivanovskogo Street, and construction began in 1959. On 18 January 1964 the GPZ-5 branch (later also known as GPZ-29 and the Instrument Bearings Plant) delivered its first output. The workforce in Soviet times was about 3,000 people, growing to 5,500 by the late 1980s as the plant became a major machine-building enterprise; in 1990 it produced 58,000,000 units.
The plant produced rolling bearings (more than 300 standard types), machine tools and consumer goods. From 1985 production was headed by Yury Galvas. In the 1980s the plant and its branch began converting to large bearings. In the Russian Federation From 7 July 1992, after privatisation, the enterprise was called OAO Roltom, and in 2005 it received the name Tomsky Podshipnik. In the 1990s the enterprise went through a crisis connected with the collapse of the Soviet economy. The director, however, took the risk of diversifying the product range and opened a New Technology Shop, where young design bureaus developed and put into mass production the goods and bearings that were in demand on the Russian market. In 2008 the employees were sent on leave, with the announcement that this was temporary and that the plant would soon resume work. In 2010 the enterprise shut down for good. Since 2010 the enterprise has ceased to exist both formally and in fact: its grounds have been handed over for redevelopment. Interesting facts The smallest bearing made at the plant had a diameter of 6 millimetres and weighed 3 grams; the largest, 40 centimetres and 40 kilograms respectively. Literature Notes Links ZAO "Tomsky Podshipnik" History of the enterprise Bearing manufacturers Machine-tool and instrument manufacturers Defunct companies of Russia
{ "redpajama_set_name": "RedPajamaWikipedia" }
The Kuwait bombings were a terrorist attack carried out in Kuwait on 12 December 1983. The attackers struck six foreign and Kuwaiti targets, among them two embassies, the country's main airport and a petrochemical plant. The attack lasted 90 minutes. Far less damage was done than intended. Six people were killed and 86 injured. The organisers remain unknown to this day. Suspicion falls on government agents from Iran, who may have been taking "revenge" on Kuwait for its help to Iraq in the Iran-Iraq War. Course of events On 12 December 1983, a truck fully loaded with 45 large gas cylinders connected to plastic explosive broke through the gates of the American embassy in Kuwait City and rammed the building, causing it to collapse. The explosion blew out windows not only in the embassy but also in other buildings and shops. Only five people died (two Palestinians, two Kuwaiti citizens and one Syrian), because the truck struck a part of the building where few people were present and only a quarter of the cylinders went off. Within the next hour five more explosions followed. An hour later, a car parked near the French embassy exploded, leaving a crater nine metres deep in the embassy's protective wall. No one was killed; five people were injured. The target of what was meant to be the most powerful blast was Kuwait's main petrochemical plant, which also contained a desalination station: the Shuaiba petrochemical plant. A truck loaded with 200 gas cylinders drove onto the site. Of these, 150 went off. The truck exploded 150 metres from the second refinery and only a few metres from a highly flammable pile of sulphur-based chemicals. Had the blast been more powerful, the attack would have severely damaged the country's oil industry and forced the water supply to be shut off in almost all cities.
Other explosions occurred at Kuwait International Airport, at the electricity control centre, and in a residential compound for American workers of the Raytheon company, which had installed missile systems in Kuwait. Two bombs went off in that compound: the first was meant to draw people out of the buildings, the second to kill them. The Raytheon employees did not react, however, and so survived. The explosion at the electricity control centre killed an Egyptian electrician. Responsibility At first, investigators blamed Islamic Jihad and the Iraqi Dawa party. After the bombings, members of Islamic Jihad called on the Kuwaiti government to admit it and accept responsibility. Notes Links Terrorist attacks of 1983 1980s in Kuwait Hezbollah Terrorist attacks committed in Kuwait Kuwait City December 1983 Attacks on diplomatic missions of the United States
{ "redpajama_set_name": "RedPajamaWikipedia" }
{"url":"https:\/\/socratic.org\/questions\/561d8dd011ef6b60b734444b","text":"Question #4444b\n\nI found 224Joules\n\nExplanation:\n\nKinetic energy is equal to:\n\n$K = \\left(m \\cdot {v}^{2}\\right) \\cdot \\frac{7}{10}$\n\nAssuming there are no losses, the speed is constant = 8m\/s\n\nSo, $K = \\left(7 \\cdot {8}^{2}\\right) \\cdot \\frac{7}{10}$ = 313,6 Joules\n\nOct 14, 2015\n\n$313 , 6 J$\n\nExplanation:\n\nIf we also take into account the rotational kinetic energy due to the rolling motion, then we get :\n\n${\\left({E}_{K}\\right)}_{T o t a l} = {\\left({E}_{k}\\right)}_{T r a n s l a t i o n} + {\\left({E}_{k}\\right)}_{R o t a t i o n}$\n\n$= \\frac{1}{2} m {v}^{2} + \\frac{1}{2} I {\\omega}^{2}$, where I is the moment of inertia of the sphere, and omega is the angular velocity at which the sphere is rotating.\n\n$= \\left(\\frac{1}{2} \\times 7 \\times {8}^{2}\\right) + \\left(\\frac{1}{2} \\times \\frac{2}{5} \\times 7 \\times {R}^{2} \\times {\\omega}^{2}\\right)$\n\n$= \\left(224 + \\frac{7}{5} {R}^{2} {\\omega}^{2}\\right)$ Joules\n\nwhere R is the radius of the solid sphere.\n\nBut since the linear and angular velocities may be related by the equation $v = r \\omega$, we may substitute this into the expression to yield :\n\n$= \\left(224 + \\frac{7}{5} {R}^{2} {\\left(\\frac{v}{R}\\right)}^{2}\\right) = 224 + \\frac{7 \\times {8}^{2}}{5} = 313 , 6 J$\n\nOct 17, 2015\n\nKE =$\\frac{1}{2} m {v}^{2}$\n\nExplanation:\n\nThe question phrasing implies frictionless motion, and NO dimensions of the mass are given, so shape is irrelevant (and also rotational motion).\n\nSome equations posted omitted the \"1\/2\" and are incorrect.","date":"2020-06-05 07:00:00","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 10, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 0, \"img_math\": 0, 
\"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.9221938848495483, \"perplexity\": 740.6185304074703}, \"config\": {\"markdown_headings\": false, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-24\/segments\/1590348493151.92\/warc\/CC-MAIN-20200605045722-20200605075722-00082.warc.gz\"}"}
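The arithmetic in the record above can be checked directly: for a solid sphere rolling without slipping, the moment of inertia is I = (2/5)mR² and ω = v/R, so the radius cancels and the total kinetic energy is (7/10)mv². A quick Python check with the mass and speed used in the thread:

```python
# Kinetic energy of a solid sphere (m = 7 kg) rolling at v = 8 m/s.
m, v = 7.0, 8.0

translational = 0.5 * m * v**2               # (1/2) m v^2 = 224 J
# I = (2/5) m R^2 and w = v / R, so the R's cancel:
rotational = 0.5 * (2.0 / 5.0) * m * v**2    # (1/5) m v^2 = 89.6 J
total = translational + rotational           # (7/10) m v^2 = 313.6 J

print(translational, rotational, total)
```

This reproduces both numbers in the thread: 224 J if rotation is ignored, 313.6 J if the rolling motion is included.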
{"url":"https:\/\/blender.stackexchange.com\/questions\/41611\/how-to-get-the-controllpoints-of-the-hull-of-a-bezier-curve","text":"# How to get the Controllpoints of the *hull* of a Bezier Curve?\n\nI am currently trying to write a python script for getting the controllpoints of a Bezier curve. I want to use this points in a game, so I can reconstruct the curve there.\n\nGetting points with a python script is easy, but the only thing I can get are two (or more) points on the curve itself - this is not what I want. A Bezier-curve is defined by controlpoints where only start- and endpoint lie on a curve, and at least one point should be outside of it. Those points also make up the convex hull of the curve.\n\nSo my main question is: How can I get the \"real\" controlpoints of the curve? (Not only the ones with the handles, which lie on a curve).\n\nAdditional question: I can get those points from a NURBS-Curve, but can I make a NURBS curve behave like a bezier curve? (When I click the \"Bezier\"-Checkbox in the \"Active-Spline\" panel, it doesn't look to me like a proper Bezier curve, as the endpoints don't lie on the curve.\n\n### Edit\n\nFor clarification, what I would prefer to use is a Bezier curve. What I want for that curve, is 4 control points like the NURBS curve has (= the 4 points of the orange bounding polygon as seen in the NURBS-picture). (How) can I get those points for a Bezier-curve in blender?\n\n\u2022 You will have to give a picture with marked points that you want. Your terminology is quite different of standard and it's not clear much what you want. Bezier is made up of cubic spline segments each having 4 control points. What do you mean by \"real\" controlpoints? And yes you can make a BSpline (nurbs) curve behave like a Bezier. \u2013\u00a0Jaroslav Jerryno Novotny Nov 15 '15 at 16:41\n\u2022 Hello, i edited my post and added two pictures for clarification. 
So, basically, I want the control points of a polygon (like the orange one for the Nurbs), but for a Bezier curve. (A bezier curve also has those control points by mathematical definition. But in Blender, I seem to get only two points like in the 2nd picture.) \u2013\u00a0Luigi_Papardelle Nov 16 '15 at 13:06\n\nOh I see the confusion now. The handles of Bezier curve are the \"real\" control points. These points form the 4 control points for the cubic segment:\n\nAnd that's also how you convert Bezier to BSplines - per segment. You can make a BSpline segment in Blender with Path curve with 4 points.\n\nYou get the described points with text like this:\n\nbpy.data.curves['BezierCurve'].splines[0].bezier_points[1].handle_left\nbpy.data.curves['BezierCurve'].splines[0].bezier_points[1].co\nbpy.data.curves['BezierCurve'].splines[0].bezier_points[1].handle_right\n\n\nLike this you get all the blue points.\n\n\u2022 Thanks a lot. this was absolutely what i was looking for :) \u2013\u00a0Luigi_Papardelle Nov 19 '15 at 11:14","date":"2020-01-19 16:24:45","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 0, \"mathjax_display_tex\": 0, \"mathjax_asciimath\": 1, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.6827687621116638, \"perplexity\": 483.89249563093773}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, 
\"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2020-05\/segments\/1579250594662.6\/warc\/CC-MAIN-20200119151736-20200119175736-00441.warc.gz\"}"}
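Once the four control points of a segment are exported (per the answer in the record above: the first point's `co`, its `handle_right`, the next point's `handle_left`, and that point's `co`), the curve can be reconstructed outside Blender with the cubic Bernstein formula. A pure-Python sketch; the sample points below are made up for illustration:

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier segment at parameter t in [0, 1]."""
    s = 1.0 - t
    return tuple(
        s**3 * a + 3 * s**2 * t * b + 3 * s * t**2 * c + t**3 * d
        for a, b, c, d in zip(p0, p1, p2, p3)
    )

# Made-up control points: two endpoints on the curve, two handles off it.
p0, p1, p2, p3 = (0, 0), (1, 2), (3, 2), (4, 0)
samples = [cubic_bezier(p0, p1, p2, p3, i / 16) for i in range(17)]
print(samples[0], samples[8], samples[-1])
```

The curve passes through p0 at t = 0 and p3 at t = 1, while p1 and p2 only pull it, which is exactly the behaviour the question describes.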
Trusting the Good News in the Age of Fake News, by Jonathan Landry Cruse: Odds are, you're not willing to perform a trust fall into the arms of the American media. In fact, there aren't a whole lot of trust falls happening anymore in our society, as the General Social Survey reports that only 30% of Americans agreed with the statement that "most people can be trusted."...

On Platt and Priorities, by Lisa Robinson Spencer: It's been an amazing past few days watching the fallout from David Platt's prayer over President Trump. When I first heard about the situation and read the transcript of the prayer, my initial reaction was quite positive. This was further confirmed for me when I saw the video. From what I know of...

Edmund Grindal and His Letter to the Queen, by Simonetta Carr: In 1576, Archbishop Edmund Grindal joined the company of Puritans who offended Queen Elizabeth I. His most provocative statement was a reminder of her mortality. He was suspended from his duties for the rest of his life. The unwelcome reminder came at the...

Truth and Politics: I've been listening to a fascinating audio book on the nature of warfare in World War II. Giles Milton's book, Churchill's Ministry of Ungentlemanly Warfare, details the unconventional and sometimes brutal methods employed to defeat the Nazis. Churchill's belief was that the Nazis were inflicting...

Eight Influential Ideas about Work, by Dan Doriani: As we pass Labor Day and settle into the fall, I want to label a few of the most influential ideas about work in Western thought and invite you, my reader, to see which thoughts might be informing you and supplanting more biblical ideas about work. Without further ado: most Greeks thought work was a...

The Apocryphal 100th Episode: Theology on the Go reaches the big 1-0-0! To celebrate, we (with tongue in cheek) present this special edition of the podcast, as Jonathan and James search the Scriptures for the many and significant appearances of the number 100. You may be totally underwhelmed! Nonetheless, you're invited to join...

Karl Marx: Still Important?, by William D. Dennison: How do we evaluate the importance of Karl Marx (1818‒83) in the world? In May of this year, China commemorated his two-hundredth birthday (May 5) by donating a fourteen-foot statue of Marx to his birthplace, Trier, Germany. Indeed, hundreds of celebrations have been held throughout the world to...

Understanding the Single-Issue Voter, by Anne Chamberlin: Moving around often as a child and as an adult, I had the privilege of joining or regularly attending many different American churches in different regions. I've been a member of small (40-80 people), medium (200 members), and large (multi-service) churches. All of these churches have been non-...

Pitying Criminals and Imprisoning Society: In addition to the many rich theological insights one will glean from working through Herman Bavinck's Reformed Dogmatics, there are equally profound sociological observations from which we could benefit today. When he came to tackle the question of crime and punishment in a society that has cast...
Q: The reaction of methane and water is one way to prepare hydrogen:

    CH4(g) + H2O(g) -> CO(g) + 3 H2(g)    [molar masses: 16.04, 18.02, 28.01, 2.02 g/mol]

If you begin with 995 g of CH4 and 2510 g of water, what is the maximum possible yield of H2?

A (Aug 1, 2017): Methane is the limiting reactant, therefore the maximum possible yield of hydrogen gas is 376 g.

Explanation: This is a limiting-reactant stoichiometry problem. We determine whether methane or water is the limiting reactant; the limiting reactant gives the maximum possible yield of hydrogen gas.

Yield of H2 from 995 g CH4: divide the given mass by the molar mass, then apply the 3 mol H2 : 1 mol CH4 ratio from the balanced equation.

    995 g CH4 x (1 mol CH4 / 16.04 g CH4) = 62.0 mol CH4
    62.0 mol CH4 x (3 mol H2 / 1 mol CH4) = 186 mol H2

Yield of H2 from 2510 g H2O: same procedure, with the 3 mol H2 : 1 mol H2O ratio.

    2510 g H2O x (1 mol H2O / 18.02 g H2O) = 139 mol H2O
    139 mol H2O x (3 mol H2 / 1 mol H2O) = 417 mol H2

The limiting reactant is CH4, since it produces the smaller number of moles of H2. The maximum possible mass of H2 is therefore:

    186 mol H2 x (2.02 g H2 / 1 mol H2) = 376 g H2    (rounded to three significant figures)
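The arithmetic above can be checked in a few lines of Python (a sketch of the same calculation; the molar masses are those given in the problem):

```python
# CH4(g) + H2O(g) -> CO(g) + 3 H2(g)
M_CH4, M_H2O, M_H2 = 16.04, 18.02, 2.02   # g/mol, as given in the problem

mol_ch4 = 995 / M_CH4     # about 62.0 mol CH4
mol_h2o = 2510 / M_H2O    # about 139 mol H2O

# Each reactant yields 3 mol H2 per mole; the smaller amount is limiting.
mol_h2 = 3 * min(mol_ch4, mol_h2o)
mass_h2 = mol_h2 * M_H2

print(round(mass_h2))     # 376 (grams of H2, with methane limiting)
```

Since 62.0 mol of CH4 is less than 139 mol of H2O, methane limits the yield, matching the answer above.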
Q: Make davfs2 accept wildcard certificate (webDAV)

I have multiple servers that share files via webDAV. The connections are secured with TLS and the same wildcard certificate on all servers, and I have different subdomains pointing to the respective servers. However, I can't get davfs2 to accept my wildcard certificate; it keeps complaining:

    /sbin/mount.davfs: the server certificate does not match the server name

So for example I have ServerA.mydomain.com and ServerB.mydomain.com, and all servers have a certificate that covers *.mydomain.com plus a SAN for mydomain.com. Everything works fine, of course, if I use mydomain.com for webDAV, because that name is explicitly covered in the certificate. I could add all my subdomains as SANs to the certificate, but I can't keep reissuing certificates each time I put a machine up (or take one down). So is there any way to make davfs2 accept wildcard certificates?

A: The problem has in fact nothing to do with how webDAV handles wildcards, but everything to do with how X.509 certificate extensions are handled. As it turns out, Subject Alternative Name is a misnomer: according to RFC 5280 (section 4.2), an application MUST reject any extension it does not recognize if it is marked as critical (if it is marked as non-critical, it MAY be ignored), but if an application recognizes an extension, it MUST be used.

What this means is that when the client encounters a Subject Alternative Name, it checks that, and only that, against the server name. The Common Name with my wildcard in it is completely ignored. The Subject Alternative Name doesn't provide alternative or additional names; it must provide ALL identifying names. Thus, put all the domain names, including the wildcard, into the SAN.
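The matching rule itself is easy to sketch. This hypothetical Python helper (an illustration of leftmost-label wildcard matching, not davfs2's actual code; real clients follow the fuller rules of RFC 6125) shows why *.mydomain.com covers ServerA.mydomain.com but not the bare apex, which is why the extra SAN entry for mydomain.com is needed:

```python
def hostname_matches(pattern: str, hostname: str) -> bool:
    """Simplified TLS-style wildcard check: '*' may stand in for exactly
    one DNS label, so '*.mydomain.com' never matches the bare apex
    domain or a deeper subdomain. Illustrative only."""
    p_labels = pattern.lower().split(".")
    h_labels = hostname.lower().split(".")
    if len(p_labels) != len(h_labels):
        return False
    return all(p == "*" or p == h for p, h in zip(p_labels, h_labels))

print(hostname_matches("*.mydomain.com", "ServerA.mydomain.com"))  # True
print(hostname_matches("*.mydomain.com", "mydomain.com"))          # False
print(hostname_matches("*.mydomain.com", "a.b.mydomain.com"))      # False
```

Because the apex and deeper subdomains fail the one-label wildcard rule, they each need their own SAN entry.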
Q: Moving up and down the DOM in jQuery

Ok, so I am having a major issue figuring this out, and getting nowhere fast.

Question: Given the below HTML structure, use jQuery closest and find to go from the childOfSecond element to the third element and change its background color to orange when the second element is clicked.

I have to use these two methods, so I have tried multiple angles here. I feel like I am close, but just can't get it to behave. Thank you so much for any help!

<fieldset>
  <br /> <br />
  <div id="prob2">
    <div class="box first"></div>
    <div class="box second">
      <div class="child-of-second"></div>
    </div>
    <div class="box third"></div>
  </div>
  <script>
    $('#prob2 .box.second').on('click', function(){
      var childOfSecond = $("child-of-second").closest("div").find(".box.third").css("background", "orange");
    });
  </script>
  <br /> <br />
</fieldset>

A: Several problems. First, the selector $("child-of-second") is looking for a tag <child-of-second>, not a class; you need $(".child-of-second"). Second, closest() travels up the ancestors, while find() only looks at descendants, not siblings, so you have to climb to a common ancestor before descending. To satisfy the "closest and find" requirement, try:

$('.child-of-second').closest('#prob2').find('.box.third').css('background', 'orange');

Without those two methods, a shorter traversal is:

$(".child-of-second").parent().next().css("background", "orange");
FINAL SALE: ITEM CANNOT BE RETURNED OR EXCHANGED. Shift into another time period. Whatever the event, you'll stay stunning with this one. MOMO The Greatest of Gatsby Gold Sequin Dress features a deep v-neckline, low v-cut back, thick shoulder straps, an invisible side zipper closure, side slits at hem and a bead/sequin-covered body. Lined. Pair this metallic mini with velvet pumps, delicate gold jewelry & a red lip.
Tiebas-Muruarte de Reta is a municipality in Spain, in the province and autonomous community of Navarre, with an area of 21.35 km². In 2011 the municipality had a population of 659.
Ridge and Lasso Regression

Suppose we are predicting item sales for the Big Mart data set. A cost function is what defines and measures the error of a model, and the simplest model is the linear regression equation y = Θ0 + Θ1·x, a straight line in which Θ0 is the intercept and Θ1 the slope.

A few baselines first. Predicting the mean for all the data points gives a mean squared error of 29,11,799. Location plays a vital role in the sales of an item (sales of a car would be much higher in Delhi than in Varanasi), and predicting a separate mean per location brings the MSE down to 28,75,386. A linear regression on two continuous features does better still:

    from sklearn.model_selection import train_test_split
    from sklearn.linear_model import LinearRegression

    # splitting into training and cv for cross validation
    X = train.loc[:, ['Outlet_Establishment_Year', 'Item_MRP']]
    x_train, x_cv, y_train, y_cv = train_test_split(X, train.Item_Outlet_Sales)

This model reaches R² = 0.3354657 with MSE = 20,28,692. MRP has a high coefficient, meaning total sales are largely driven by that feature. (When preparing the full feature set, include the continuous variables and dummy variables for the categorical ones, dropping the original column after encoding, and excluding Item_Identifier and Item_Outlet_Sales.)

With many features, a model that fits the training data closely can still perform poorly on test data: high variance, i.e. overfitting. Regularization addresses this by penalizing large coefficients, the weights assigned to the features based on their importance. Ridge regression adds a penalty term equivalent to the square of the magnitude of the coefficients (the L2 norm):

    import numpy as np
    from sklearn.linear_model import Ridge

    ridgeReg = Ridge(alpha=0.05, normalize=True)
    ridgeReg.fit(x_train, y_train)
    pred_cv = ridgeReg.predict(x_cv)
    mse = np.mean((pred_cv - y_cv)**2)   # 1348171.96
    ridgeReg.score(x_cv, y_cv)           # 0.5691

By changing the value of alpha we control the penalty term: the higher the alpha, the greater the penalty and the smaller the magnitude of the coefficients. Alpha has to be chosen by iterating over a range of values and keeping the one that gives the best score; here R² is maximum at alpha = 0.05. Note that ridge shrinks coefficients toward zero but, even at large alpha, never makes them exactly zero.

The mathematics behind lasso regression is quite similar, the only difference being that instead of adding squares of Θ we add absolute values of Θ (the L1 norm). Unlike ridge, lasso sets the coefficients of uninformative variables exactly to zero, so the model it produces is sparse. This matters for high-dimensional data: with, say, 10,000 features, using all the variables is inefficient, since some impart redundant information, and a model that eliminates some features entirely is simpler and often performs better. (Other approaches to the same problem include stepwise methods; forward selection, for instance, starts with the most significant predictor and adds one variable at each step.)

Elastic net combines the two penalties. If a and b are the weights assigned to the L1 and L2 terms, then alpha = a + b and l1_ratio = a / (a + b); by changing alpha and l1_ratio we control the trade-off between L1 and L2. With l1_ratio = 1 the penalty is pure lasso, and with l1_ratio = 0 it is pure ridge.

Geometrically, each penalty constrains the coefficients to a region: for p = 2 the region is a circle, and for larger p it approaches a rounded square. The least-squares error is minimized at the unconstrained solution and increases quadratically as we move away from it, while the regularization term is minimized at the origin, where all the parameters are zero; the fitted coefficients lie where the two terms balance. As a final caution, check the standard regression assumptions as well: non-constant variance (heteroskedasticity) arises in the presence of outliers or extreme values, and shows up in the residual vs. fitted values plot.
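To make the ridge/lasso contrast concrete, here is a small self-contained sketch (not from the article above; it uses the closed-form solutions that hold for an orthonormal design, with penalty (lam/2)*beta^2 for ridge and lam*|beta| for lasso applied to each OLS coefficient):

```python
def ridge_shrink(beta_ols, lam):
    # Orthonormal-design ridge solution: uniform rescaling of the OLS
    # coefficient; it approaches zero but is never exactly zero.
    return beta_ols / (1.0 + lam)

def lasso_shrink(beta_ols, lam):
    # Orthonormal-design lasso solution (soft-thresholding): any OLS
    # coefficient with magnitude below lam is set exactly to zero.
    sign = 1.0 if beta_ols >= 0 else -1.0
    return sign * max(abs(beta_ols) - lam, 0.0)

for b in (3.0, 0.4, -2.0):
    print(b, ridge_shrink(b, 1.0), lasso_shrink(b, 1.0))
# 3.0 1.5 2.0
# 0.4 0.2 0.0
# -2.0 -1.0 -1.0
```

As the penalty weight grows, ridge shrinks every coefficient smoothly, whereas lasso zeroes out the weakest ones first; this is why lasso doubles as a feature selector.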
The Festa do Vinho (Wine Festival) is an annual event organized by the municipal council of the Portuguese municipality of Cartaxo. The first Festa do Vinho took place in 1988. It is one of the most notable events of the "Capital do vinho" ("Wine Capital") programme. Created to promote one of the main drivers of the municipality's development, the festival features musical performers in styles ranging from those aimed at young audiences to rural genres. Its main highlight is the wine, whose competitive quality motivated contests that gave rise to the Museu Rural e do Vinho (Rural and Wine Museum), created in 1984. Besides driving the local economy, the Festa do Vinho plays an important social role in preserving the cultural values maintained as a link between the wine and the population.
Coelorinchus mirus is a species of ray-finned fish in the grenadier family (Macrouridae). The scientific name of the species was first validly published in 1926 by McCulloch.
Just tested out my new horn, and all I can say is: holy smokes! This thing is like three times louder than I thought it would be. My poor neighbors.

I learned pretty quick not to test mine in the garage or in the middle of the night.

Your E.A. horn has a nice tone and ahoooooogah to it! I LIKE that very complete ahoogah note. I have a stock T horn mounted on the engine, but I'm looking for a 6V ahoogah to mount under the car, out of sight. I'm thinking maybe an E.A. might be the way to go now.

I've got a fire engine with a bell and a hand-crank siren. My granddaughter loves to exercise both! Neighbors don't seem to mind.

In the video, what was he using for a speedometer with the green display?

I have a horn like that out in the shop with a cut-down horn that's currently missing. It scared the crap out of me after a clean-up and a test. Zoiks! Hehehe!

Not T related, but my 26-year-old son just bought a fire truck. He's a '67-'79 Ford truck lover. He has two pickups, one of each of the two body styles, but just HAD to have this '72 F-500 fire truck. It's in rough shape aesthetically, but he is working on the mechanics of it. I'm sure, much to his neighbors' chagrin, the siren works. And so does the pump. He now knows where all the leaks are.

Does E.A. make a 6 volt? I want one.
{% extends "layout.html" %} {% block page_title %} Transaction list {% endblock %} {% block content %} <main id="content" role="main"> <div class="grid-row"> <!-- Main body --> <div class="column-two-thirds"> <h1 class="heading-xlarge"> <span class="heading-secondary">Roadmap overview</span>Gaap components</h1> <p class="lede">Creating a set of shared components that make world class services faster to assemble and cheaper to run.</p> </div> </div> <div class="grid-row"> <div class="column-one-quarter"> <h2 class="heading-medium component-type">Software</h2> <ul class="list list-lined"> <li>GOV.UK Verify</li> <li><a href="#notify">GOV.UK Notify</a></li> <li><a href="#pay">GOV.UK Pay</a></li> <li>Secure document exchange</li> </ul> </div> <div class="column-one-quarter"> <h2 class="heading-medium component-type">Patterns</h2> <ul class="list list-lined"> <li>GOV.UK Design Patterns</li> <li>Webchat</li> <li>Concessionary Travel</li> <li>Residents' parking permits</li> <li>Check before you start</li> </ul> </div> <div class="column-one-quarter"> <h2 class="heading-medium component-type">Data</h2> <ul class="list list-lined"> <li>Registers</li> <li>Data.gov.uk</li> <li>Private data exchange</li> </ul> </div> <div class="column-one-quarter"> <h2 class="heading-medium component-type">Tools</h2> <ul class="list list-lined"> <li><a href="#paas">GOV.UK Platform as a Service</a></li> <li><a href="#email-sub">Submit – Forms builder</a></li> <li>PQs and FOIs</li> <li>Surveys</li> </ul> </div> </div> <div class="grid-row"> <div class="column-full"> <div class="section-border-top"> </div> </div> <div class="column-three-quarters"> <h2 class="heading-large"><a name="notify">GOV.UK Notify</h2> <p class="beta status">Public Beta</p> <p>Making it easier to send emails, text messages and letters to your users.</p> </div> <div class="column-one-quarter"> <div class="product-overview-link"> <ul class="list"> <li><a href="https://www.notifications.service.gov.uk/">Product 
overview</a></li> <li><a href="https://governmentasaplatform.blog.gov.uk/2017/03/10/what-were-working-on/">Recent updates</a></li> <li><a href="#">Detailed roadmap</a></li> </ul> </div> </div> </div> <div class="grid-row"> <div class="column-full"> <h3 class="heading-medium">Roadmap</h3> <ul class="tabs"> <li><a href="component-overview-simple">Simple view</a></li> <li class="current"><a href="component-overview-detailed">Detailed view</a></li> </ul> </div> {% include "includes/detailed-roadmap-1.html" %} </div> <div class="grid-row"> <div class="column-full"> <div class="sub-section-border-top"> </div> </div> <div class="column-three-quarters"> <h2 class="heading-large"><a name="pay">GOV.UK Pay</h2> <p class="beta status">Private Beta</p> <p>GOV.UK Pay is a secure payment service that is easy to integrate with and designed to meet the Digital Service Standard. Give your users a trusted GOV.UK-branded payment experience.</p> </div> <div class="column-one-quarter"> <div <a class="product-overview-link"> <ul class="list"> <li><a href="https://www.payments.service.gov.uk/">Product overview</a></li> <li><a href="https://governmentasaplatform.blog.gov.uk/2017/03/10/what-were-working-on/">Recent updates</a></li> <li><a href="#">Detailed roadmap</a></li> </ul> </div> </div> </div> {% include "includes/simple-roadmap-1.html" %} <div class="grid-row"> <div class="column-full"> <div class="sub-section-border-top"> </div> </div> <div class="column-three-quarters"> <h2 class="heading-large"><a name="email-sub">Email Subscriptions</h2> <p class="discovery status">Discovery</p> </div> <div class="column-one-quarter"> <div <a class="product-overview-link"> <ul class="list"> <li><a href="https://governmentasaplatform.blog.gov.uk/2017/03/10/what-were-working-on/">Recent updates</a></li> </div> </div> </div> {% include "includes/simple-roadmap-1.html" %} <div class="grid-row"> <div class="column-full"> <div class="section-border-top"> </div> </div> <div 
class="column-three-quarters"> <h2 class="heading-large"><a name="paas">GOV.UK Platform as a Service</a></h2> <p class="beta status">Private Beta</p> </div> <div class="column-one-quarter"> <div class="product-overview-link"> <ul class="list"> <li><a href="https://www.cloud.service.gov.uk/">Product overview</a></li> <li><a href="https://governmentasaplatform.blog.gov.uk/2017/03/10/what-were-working-on/">Recent updates</a></li> </ul> </div> </div> </div> {% include "includes/simple-roadmap-1.html" %} </main> {% endblock %}
Sage Reese lives for her job. More precisely, she lives for her debonair boss, Parker Andersen. Sage handles everything for Parker, even as she fantasizes about the one thing that isn't in her job description: him. But when a high-stakes account crosses the line from shady to deadly, a tough cop starts giving Sage the attention she wishes Parker would . . . Detective Dean Ryker couldn't be more different from Parker. While Parker wears expensive suits like a second skin and drives a BMW, Ryker's uniform is leather jackets and jeans . . . and his ride of choice is a Harley. While Parker's sexiness is a reserved, slow burn, Ryker is completely upfront about what - and who - he's after. And Sage tops his list. Now, as Ryker digs deeper into the dark side of Parker's business, Sage finds herself caught between two men: the one she's always wanted - and the one who makes her feel wanted like never before . . .
\section{Introduction} \IEEEPARstart{T}{he} idea of an inverse problem is to reconstruct, or retrieve, information from a set of measurements. In many problems, the quantities we measure are different from the ones we wish to study, and this set of \emph{d} measurements may depend on several elements. Our goal is thus to reconstruct, from the data, that which we wanted to study. In essence, given an effect, what is the cause? For example: if you have measurements of the temperature on a surface, you may want to find the coefficient in the heat equation.\\ The nonlinearity and ill-posedness of this problem lends itself well to Markov Chain Monte Carlo algorithms. We detail this algorithm in later sections, but we note now that there has been much work done on Metropolis-Hastings MCMC algorithms. However, much of it has focused on determining optimal proposal densities (\cite{luengo},\cite{rosenthal}). Luengo and Martino (\cite{luengo}) treat this idea by defining an adaptive proposal density under the framework of Gaussian mixtures. Our work, however, is focused on improving the reconstruction given a proposal density.\\ We take no views on the optimality of the structure of the proposal density in our case, which we take from~\cite{fox}. We simply observe possible improvements to this density by normalizing its terms through context-independent formulations. Eventually, we would like to implement the GM-MH algorithm of~\cite{luengo} on our proposal density, and provide a rigorous definition of our construction in an analogous manner to their work.\\ The paper is structured as follows. We first present the framework of our problem in the subsection below. Section~\ref{sec:mhmcmc} presents the MHMCMC algorithm and proposal densities along with non-normalized results. The error analysis of those results (in Section~\ref{sec:error}) motivates this work, while Sections~\ref{sec:prelimnorm} to~\ref{sec:localnorm} present the new constructions and associated results.
\subsection{Heat Diffusion} In this problem, we attempt to reconstruct the conductivity $K$ in a steady-state heat equation for the cooling fin of a CPU. The heat is dissipated both by conduction along the fin and by convection with the air, which gives rise to our equation: \begin{equation}\label{eq:heatpde} u_{xx}+u_{yy}=\frac{2H}{K\delta}u \end{equation} with $H$ for convection, $K$ for conductivity, $\delta$ for thickness and $u$ for temperature. The CPU is connected to the cooling fin along the bottom half of the left edge of the fin. We use standard Robin boundary conditions, \begin{equation}\label{eq:robinbc} K\frac{\partial u}{\partial n}=Hu \end{equation} where $\frac{\partial u}{\partial n}$ denotes the derivative along the outward normal. Our data in this problem is the set of boundary points of the solution to (\ref{eq:heatpde}), which we compute using a standard Crank-Nicolson scheme for an $n \times m$ mesh (here $20 \times 20$). We denote the correct value of $K$ by $K_{\textrm{correct}}$ and the data by $d$. In order to reconstruct $K_{\textrm{correct}}$, we will take a guess $K'$, solve the forward problem using $K'$, obtaining $d'$, and compare those boundary points to $d$ by implementing the Metropolis-Hastings Markov Chain Monte Carlo algorithm (or MHMCMC). \section{Metropolis-Hastings MCMC} \label{sec:mhmcmc} Markov Chains produce a probability distribution of possible solutions (in this case conductivities) that are most likely given the observed data (the probability of reaching the next step in the chain is entirely determined by the current step). The algorithm is as follows (see~\cite{fox}). Given $K_n$, $K_{n+1}$ can be found using the following: \begin{enumerate} \item Generate a candidate state $K'$ from $K_n$ with some distribution $g(K'|K_n)$. We can pick any $g(K'|K_n)$ so long as it satisfies \begin{enumerate} \item $g(K'|K_n)=0 \Rightarrow g(K_n|K')=0$ \item $g(K'|K_n)$ is the transition matrix of the Markov Chain on the state space containing $K_n,K'$.
\end{enumerate} \item With probability \begin{equation}\label{eq:alpha} \alpha(K'|K_n)\equiv \min\left\{1,\frac{\Pr(K'|d)g(K_n|K')}{\Pr(K_n|d)g(K'|K_n)}\right\} \end{equation} set $K_{n+1}=K'$, otherwise set $K_{n+1}=K_n$ (i.e.\ accept or reject $K'$). Proceed to the next iteration. \end{enumerate} More formally, if $\alpha > u\sim U[0,1]$, then $K_{n+1}=K'$. Using the probability distributions of our example, (\ref{eq:alpha}) becomes \begin{multline}\label{eq:alpha2} \alpha (K'|K_n)\equiv\\ \min\left\{ 1,e^{\frac{-1}{2 \sigma^2}\sum_{i,j=1}^{n,m} \left[ \left( d_{ij}-d_{ij}' \right)^2 - \left( d_{ij}-d_{n_{ij}} \right)^2 \right] }\right\} \end{multline} where $d'$ and $d_n$ denote the set of boundary temperatures from $K'$ and $K_n$ respectively, and $\sigma=0.1$. To simplify (\ref{eq:alpha2}), collect the constants and separate the terms relating to $K'$ and $K_n$: \begin{align*} \frac{-1}{2 \sigma^2}\sum_{i,j=1}^{n,m}{\left[ \left( d_{ij}-d_{ij}' \right)^2 - \left( d_{ij}-d_{n_{ij}} \right)^2 \right]}&=\frac{-1}{2}\left[ f' - f_n \right]\\ &= -D_1 \end{align*} Now, (\ref{eq:alpha2}) reads \begin{equation} \label{eq:alpha3} \alpha (K'|K_n) \equiv \min\left\{ 1, e^{ -D_1 }\right\} \end{equation} Note that we are taking this formulation as given, and that the literature mentioned above (most notably Gaussian Mixture based algorithms) would perhaps go from~\eqref{eq:alpha} to~\eqref{eq:alpha2} differently. \subsection{Generating $K'$} To generate our candidate states, we will perturb $K_n$ by a uniform random number $\omega\in[-0.005,0.005]$. In the simplest case, where we are dealing with a constant $K_{\textrm{correct}}$, we could proceed by changing every point in the mesh by $\omega$, and the algorithm converges rapidly.\\ Looking at non-constant conductivities forces us to change our approach.
If we simply choose to change one randomly chosen point at a time, then we have a systemic issue with the boundary points, which exhibit odd behavior and hardly change value. To sidestep this, we will change a randomly chosen grid ($2\times 2$) of the mesh at once, thereby pairing up the troublesome boundary points with the well-behaved inner points. \subsection{Priors} While a gridwise change enables us to tackle non-constant conductivities, two issues remain. The first is that our reconstructions are still marred with ``spikes'' of instability. The second, more profound, is that the ill-posedness of the problem means there are in fact infinitely many solutions, and we must isolate the correct one. This brings us to the notion of priors. These can be thought of as weak constraints imposed on our reconstructions. However, we do not wish to rule out any possibilities, keeping our bias to a minimum. So we define \begin{multline} T' =\sum_{j=1}^{n}\sum_{i=2}^{m} \left( K'(i,j)-K'(i-1,j) \right)^2\\ + \sum_{i=1}^{m}\sum_{j=2}^{n} \left( K'(i,j)-K'(i,j-1) \right)^2 \end{multline} \begin{multline} T_n =\sum_{j=1}^{n}\sum_{i=2}^{m} \left( K_n(i,j)-K_n(i-1,j) \right)^2\\ + \sum_{i=1}^{m}\sum_{j=2}^{n} \left( K_n(i,j)-K_n(i,j-1) \right)^2 \end{multline} let $D_2=T'-T_n$, and modifying~\eqref{eq:alpha3}, we obtain \begin{equation}\label{eq:alphac} \alpha_c (K'|K_n) \equiv \min\left\{ 1, e^{-\lambda_1 D_1 -\lambda_2 D_2} \right\} \end{equation} By comparing the smoothness of $K'$ not in an absolute sense, but relative to the last accepted guess, we hope to keep as many solutions as possible open to us, while ensuring a fairly smooth result. We introduce one additional prior, this time imposing a condition on the gradient of our conductivity. The author explores the notion of priors more fully in~\cite{zambelli}, but much as we take the proposal density as given, the aim of this paper is not to examine priors per se.
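As an illustrative sketch only (the NumPy helper below is ours, not code from the paper), the smoothness term $T$ reduces to sums of squared first differences of the conductivity grid along both axes:

```python
import numpy as np

def smoothness_term(K):
    """Sum of squared first differences of the grid K along both axes,
    i.e. the T quantity of the smoothness prior (helper name is ours)."""
    dx = np.diff(K, axis=0)  # K(i,j) - K(i-1,j), i = 2..m
    dy = np.diff(K, axis=1)  # K(i,j) - K(i,j-1), j = 2..n
    return np.sum(dx**2) + np.sum(dy**2)

# A constant conductivity field is perfectly smooth: T = 0.
assert smoothness_term(np.ones((20, 20))) == 0.0
# A plane tilted along one axis accumulates one unit per vertical difference.
assert smoothness_term(np.add.outer(np.arange(3.0), np.zeros(3))) == 6.0
```

The prior term $D_2$ is then simply the difference of this quantity between the candidate and the last accepted guess.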
So we look at the mixed partial derivative of our candidate state and compare it to that of the last accepted guess \begin{multline} M'=\sum_{j=1}^{n}\sum_{i=2}^{m} \left( K_{xy}'(i,j)-K_{xy}'(i-1,j) \right)^2\\ +\sum_{i=1}^{m}\sum_{j=2}^{n} \left( K_{xy}'(i,j)-K_{xy}'(i,j-1) \right)^2 \end{multline} \begin{multline} M_n=\sum_{j=1}^{n}\sum_{i=2}^{m} \left( K_{n_{xy}}(i,j)-K_{n_{xy}}(i-1,j) \right)^2\\ + \sum_{i=1}^{m}\sum_{j=2}^{n} \left( K_{n_{xy}}(i,j)-K_{n_{xy}}(i,j-1) \right)^2 \end{multline} where $K_{xy}'$ and $K_{n_{xy}}$ are computed using central and forward/backward finite difference schemes. We let $D_3=M'-M_n$ and modify~\eqref{eq:alpha3} to get \begin{equation}\label{eq:alphas} \alpha_s(K'\mid K_n)\equiv\min\left\{1, e^{-\lambda_1D_1-\lambda_3D_3} \right\} \end{equation} We now take the acceptance step of our algorithm as \begin{equation}\label{eq:alphacs} \alpha=\max\left\{ \alpha_c,\alpha_s \right\} \end{equation} So the algorithm seeks to satisfy at least one of our conditions, though not necessarily both. We present some preliminary results in Figure~\ref{fig:tpfirst} and Figure~\ref{fig:gaussfirst} below. Note that we are clearly on the right path, with the algorithm approaching its mark, but not to a satisfying degree. \begin{figure}[h!] \centering \subfigure[Target.]{ \includegraphics[height=3.1cm]{tiltedplane_surf.jpg} } \subfigure[Reconstruction with\newline $\lambda_1=1,\ \lambda_2=100,\ \lambda_3=15$.]{ \includegraphics[height=3.1cm]{tiltedplane_report_helper2_surf.jpg}} \caption{Reconstruction of a tilted plane with priors, $10$ million iterations.} \label{fig:tpfirst} \end{figure} \begin{figure}[h!]
\centering \subfigure[Target.]{ \includegraphics[height=3.1cm]{gaussianwell_surf.jpg}} \subfigure[Reconstruction with\newline$\lambda_1=1,\ \lambda_2=10,\ \lambda_3=15$.]{ \includegraphics[height=3.1cm]{gaussian_report_helper_newslope_surf.jpg}} \caption{Reconstruction of a Gaussian well with priors, $10$ million iterations.} \label{fig:gaussfirst} \end{figure} \section{Error Analysis} \label{sec:error} Our work so far has looked at qualitative improvements to our reconstructions; now we seek to quantify those improvements and the performance of the algorithm in general. Several metrics can be used for this purpose, but we will focus our writeup on the following: the difference between the data and the output using our guess ($\delta$), given by \begin{equation*} \delta=\left( \delta_1 \ \cdots \ \delta_n \right) \quad \textrm{, with } \delta_i=\sum{(d-d_i')^2} \end{equation*} the sum of differences squared between $K_{\textrm{correct}}$ and $K_n$ ($\beta$), \begin{equation*} \beta=\left( \sum{(K_{\textrm{correct}}-K_1)^2}\ \cdots \ \sum{(K_{\textrm{correct}}-K_n)^2}\right) \end{equation*} and most importantly, the rate of acceptance of guesses ($\Gamma$), where \[ \Gamma_0=0\quad \textrm{and} \quad \Gamma_i = \begin{cases} \Gamma_{i-1}+1 & \text{if guess is accepted.}\\ \Gamma_{i-1} & \text{if guess not accepted.} \end{cases} \] for each subsequent iteration.\\ The form of $\Gamma$ is a step function, where accepting every guess would resemble a straight line of slope $1$, and accepting none of the guesses results in a slope of $0$. The shape of this function should tell us something about when the algorithm is performing best. \subsection{$\delta$, $\beta$, $\Gamma$ Results} The results of tests involving these parameters reveal some interesting information (see Figure~\ref{fig:errorimg}).
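Before examining the results, here is a small sketch of how these metrics can be tracked (the function names and the boolean accept-history representation are our own, not from the paper): $\Gamma$ is a cumulative sum, and the slope of a line fitted to it flags over-acceptance.

```python
import numpy as np

def gamma_curve(accepted):
    """Gamma_i: cumulative count of accepted guesses after i iterations."""
    return np.cumsum(np.asarray(accepted, dtype=int))

def gamma_slope(accepted):
    """Slope of a least-squares line fitted to the Gamma step function;
    a value near 1 means nearly every guess is being accepted."""
    g = gamma_curve(accepted)
    iters = np.arange(1, len(g) + 1)
    return np.polyfit(iters, g, 1)[0]

assert abs(gamma_slope([True] * 100) - 1.0) < 1e-9   # accept everything
assert abs(gamma_slope([False] * 100)) < 1e-9        # accept nothing
```

A fitted slope of $0.95$ or more, as reported below, is the warning sign that the acceptance probability is saturating.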
$\beta$ decreases, as expected, at a decreasing rate over time, slowing down around $6-7$ million iterations, which seems in line with the qualitative results.\\ On the other hand, $\delta$ decreases much more rapidly. The difference between the data and simulated temperatures becomes very small as early as $250000$ iterations. In a sense, this fits with the problem of ill-posedness: the data is only useful to a certain degree, and it will take much more to converge to a solution (and we have been converging beyond $250k$ iterations). \begin{figure}[h!] \centering \subfigure[$\beta$.]{ \includegraphics[height=3cm]{gaussian_report_helper_beta2.jpg}} \subfigure[$\delta$.]{ \includegraphics[height=3cm]{gaussian_report_helper_delta2.jpg}} \subfigure[$\Gamma$.]{ \includegraphics[height=3cm]{gaussian_report_helper_gamma2.jpg}} \caption{Plot of the error metrics without normalizations.} \label{fig:errorimg} \end{figure} The most important result, however, comes from $\Gamma$. If we fit a line to our step function, we get slopes of $0.95$ or more. This means we are accepting nearly every guess. While this could be troubling on its own, the fact that we are accepting at a constant rate as well is indicative of a deeper problem in our method.\\ Given that $\Gamma$ is dependent solely on the likelihood of accepting a guess, we take a look at $\alpha$ directly. What we find is that $\alpha$ evaluates to $1$ on almost every iteration. The quantities we are looking at within it (comparing data and smoothness) are simply too large. We need to normalize our distribution. \section{Preliminary Structure} \label{sec:prelimnorm} In the following sections, we examine the impact of normalizations on our data terms, and explore the motivations behind the various constructions.
More rigorous data is provided concerning the final form, while the earlier results focus on the concepts that guided their evolution.\\ One structural change which we will implement is to take equation~\eqref{eq:alphacs}, and change it to be more restrictive. Previously, it was looking for solutions which satisfied at least one of the prior conditions. Here we will instead look for solutions that satisfy all of them at once by setting \begin{equation} \alpha(K'\mid K_n)\equiv\min\left\{ 1,e^{-\left(\sum_{i=1}^3{\lambda_iz_iD_i}\right)} \right\} \end{equation} where $z_i$ are as-yet undetermined normalization terms. \subsection{Motivation} We first take a moment to examine the sensitivities $\lambda_i$, and impose the following condition: $\lambda_1>\lambda_2$ and $\lambda_1>\lambda_3$. Not doing so would mean the algorithm could give us some false positives. This leads us to notice that a key aspect of the MHMCMC method is information. Due to the ill-posed nature of the problem, we need to keep every piece of information that can be gleaned. We will keep this idea in mind throughout the later sections.\\ As for the normalizations proper, the naive approach to our problem would be to divide each data term by a constant value. In this formulation, our normalization terms would have the form \begin{equation} z_i=\frac{1}{c_i} \end{equation} where $c_i$ can be determined by looking at representative values of our data terms.\\ This approach has one advantage, which is that it retains information very well. The relationship between quantities is affected only by a constant factor, and its evolution is therefore preserved across iterations. Unfortunately, this method is very unstable, and is not particularly viable. One can think of the opposite of this method as dividing each data term by itself.
Clearly, this would erase all information contained within our results, but it would successfully normalize them, given a broad enough definition of success.\\ Concretely, we seek to find a normalization that delivers information about the evolution of our data terms, but bounds the results so that we may control their magnitudes and work with their relative relationships. \section{Normalized with Inertia} \label{sec:norminertia} We introduce the concept of inertia in this framework. Inertia can be thought of as the weight (call it $w$) being applied to either previous method. Though we do not want to divide by only a constant, there is merit to letting some information trickle through to us. If we do not bound the quantities we are examining, then we will obtain very small or very large values for $\alpha$, effectively $0$ or $1$, which undoes the work of the MHMCMC. We attempt to bound our likelihood externally. We define $\alpha_h$ such that \begin{equation} \alpha(K'\mid K_n)\equiv z_0\alpha_h=z_0e^{-\left(\sum_i{\lambda_iz_iD_i}\right)} \end{equation} \subsection{Global Normalizations} Even a cursory analysis of our early attempts at solving this heat conductivity problem has revealed a desperate need to correctly normalize our data in order to get meaningful likelihoods. One issue of note has been the idea that the inertia of the process, the value of previous guesses, contains information which is important to the successful convergence of our algorithm. Another is the fact that the variance of the data terms means that we require a strong normalization term, at the expense, perhaps, of information, if we are to obtain meaningful results.\\ Addressing the second point, we decide to deviate slightly from one aspect of our method, and use a global result. Computationally, we will only be tracking one variable, and this poses no problem.
But note that using a global result in computing $\alpha$ implies that our process is no longer a Markov process, as the probability of reaching the next step is dependent on the past and not just the present. \subsection{Formulation of $Z^{(1)}$} \label{sec:z1} First, let $ \alpha_{h,m}=\max_j\left\{ \alpha_{h,j} \right\} $ and $D_{i,m}=\max_j\left\{ D_{i,j} \right\}$. We denote $Z^{(1)}$ the normalization \begin{align} z_{0,j}^{(1)}&=w_0\frac{1}{\alpha_{h,j}}+(1-w_0)\frac{1}{\alpha_{h,m}} \\ z_{i,j}^{(1)}&=w\frac{1}{\abs{D_{i,j}}}+(1-w)\frac{1}{\abs{D_{i,m}}} \end{align} While this effectively bounds our acceptance probability within $[0,1]$, it does so at the expense of the Markov property of our algorithm. Removing this property exhibits some instability in the evolution of the algorithm. Namely, the chains appear to converge to false positives, an effect which must be explored more fully. \subsection{Restricted Random Interval} Examining the values of $\alpha$ that we now produce reveals that we have greatly tightened the spread. Almost all of our values are contained in a narrow band (which changes depending on parameters), say between $0.6$ and $0.75$. Again, this means we are losing information, as the differences in the values of $\alpha$ are lost by comparing them over the entire $[0,1]$ interval.\\ We change the second step in the MHMCMC algorithm, which was $\alpha>u\sim U[0,1]\Rightarrow K_{n+1}=K'$. We now restrict the interval over which we draw $u$, taking its lower and upper bounds at the $j$th iteration to be $[u_{\min},u_{\max}]$, where for some small constant $\zeta$, \begin{equation} u_{\min}=\min_{i<j}{\alpha_i}-\zeta \quad\wedge\quad u_{\max}=\max_{i<j}{\alpha_i}+\zeta \end{equation} While perhaps more restrictive, this formulation also greatly increases the speed at which the algorithm begins to converge by effectively selecting those guesses which are the most promising, relative to the past performance of the algorithm.
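A minimal sketch of the restricted draw (the helper name and the default value of $\zeta$ are ours, chosen for illustration):

```python
import random

def restricted_uniform(alpha_history, zeta=0.01):
    """Draw u over [min(alpha) - zeta, max(alpha) + zeta] rather than
    over [0, 1], so u is compared against alpha on the scale the chain
    actually produces (alpha_history holds the alphas seen so far)."""
    lo = min(alpha_history) - zeta
    hi = max(alpha_history) + zeta
    return random.uniform(lo, hi)

# With alphas confined to a narrow band, u now lands inside that band.
u = restricted_uniform([0.6, 0.75])
assert 0.59 <= u <= 0.76
```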
This method implies that we will not, with probability $1$, decide the outcome of a guess; guesses simply become (as per $\zeta$) extremely unlikely to be accepted or rejected. \section{Locally Focused Normalization} \label{sec:localnorm} We now attempt to modify $Z^{(1)}$ in order to retain the original Markov property of the algorithm. The property was violated in the second term, which unfortunately also guarantees we bound our results. \subsection{Formulation of $Z^{(2)}$} Denote a new normalization scheme $Z^{(2)}$, given by \begin{align} z_{0,j}^{(2)}&=w_0\frac{1}{\alpha_{h,j}}+(1-w_0)\frac{1}{\alpha_{h,j-1}} \\ z_{i,j}^{(2)}&=w\frac{1}{\abs{D_{i,j}}}+(1-w)\frac{1}{\abs{D_{i,j-1}}} \end{align} While we have recovered the Markov property, we must now contend with unbounded values for $\alpha$. We note now that preliminary attempts to use $z_{i,j}^{(2)}$ with $z_{0,j}^{(1)}$ did not yield promising results.\\ While this formulation provides good results, it does require us to find an empirical bound for $\alpha$, as it is no longer bounded by $z_0$. For the results presented below, we imposed $\alpha\in[0,1.5]$, setting \begin{equation}\label{eq:z4alpha} \alpha\left(K'\mid K_n\right)=\min\left\{ 1.5,z_0e^{-\left(\sum_{i=1}^3{\lambda_iz_iD_i}\right)} \right\} \end{equation} \subsection{Results} The parameters we have to determine are $\lambda_1,\lambda_2,\lambda_3,w,w_0$ and the cutoff for $\alpha$ as in~\eqref{eq:z4alpha}. We have concluded we must set $\lambda_1>\lambda_i,\ \forall i>1$, and we have by definition $w,w_0\in[0,1]$. The exact values of the sensitivities and inertia factors are at the moment heuristically chosen to be \begin{align*} \lambda_1=0.5,\ \lambda_2&=0.15,\ \lambda_3=0.45\\ w_0=0.1,\ w&=0.75,\ \alpha_{\textrm{cutoff}}=1.5 \end{align*} For the tilted plane, we obtain Figure~\ref{fig:tiltedz1z4}.
\begin{figure}[h] \centering \subfigure[Reconstruction using $Z^{(1)}$.]{ \includegraphics[height=3cm]{Z1_tilted_figure_surf.jpg}} \subfigure[Reconstruction using $Z^{(2)}$.]{ \includegraphics[height=3cm]{Z2_tilted_figure_surf.jpg}} \caption{$Z^{(1)}$ and $Z^{(2)}$ reconstructions of a tilted plane with priors, $2$ million iterations.} \label{fig:tiltedz1z4} \end{figure} As mentioned in Section~\ref{sec:z1}, we have some instability in the form of incorrect convergence for $Z^{(1)}$, which is apparent in Figure~\ref{fig:gaussz1z4} as well. On the other hand, $Z^{(2)}$ converges well and produces a smooth reconstruction. We can also note that it achieves slightly better results than the no-normalizations case in only $2$ million iterations. \begin{figure}[h] \centering \subfigure[Reconstruction using $Z^{(1)}$.]{ \includegraphics[height=3cm]{Z1_gauss_figure_surf.jpg}} \subfigure[Reconstruction using $Z^{(2)}$.]{ \includegraphics[height=3cm]{Z2_gauss_figure_surf.jpg}} \caption{$Z^{(1)}$ and $Z^{(2)}$ reconstructions of a Gaussian well with priors, $4$ million iterations.} \label{fig:gaussz1z4} \end{figure} The instability in $Z^{(1)}$ is again apparent, and leads us to conclude that the loss of the Markov property in the algorithm may be detrimental to its performance. However, the reconstruction of the Gaussian well has substantially improved when using $Z^{(2)}$. It achieves a smoother reconstruction as without normalizations (see Figure~\ref{fig:gaussfirst}), and in $4M$ iterations instead of $10M$.\\ Going back to our error metric $\Gamma$, we see the improvement manifest itself rather clearly, with acceptances being on the order of $\sim55\%$ instead $\sim95\%$ as they were before. 
\begin{figure}[h] \centering \subfigure[$\Gamma_{Z^{(2)}}$ for tilted plane.]{ \includegraphics[height=3cm]{Z2_tilted_gamma.jpg}} \subfigure[$\Gamma_{Z^{(2)}}$ for Gaussian well.]{ \includegraphics[height=3cm]{Z2_gauss_gamma.jpg}} \label{fig:gammaz1z4} \caption{Plots of $\Gamma$ for $Z^{(2)}$ reconstructions with priors.} \end{figure} \section{Conclusion} The need for normalizing factors arose from the variance in the magnitudes of data terms $D_i$ from one iteration to the next. In formulating those factors, we focused on conserving the information contained in $D_i$ while bounding our quantities, and we confirmed the importance of retaining the Markov property in this context. However, by using the $Z^{(2)}$ formulation, we were able to obtain faster and better reconstructions of the conductivity for both the tilted plane and the Gaussian well.\\ Despite the encouraging results, several avenues need to be explored more fully. The long-run behavior of $Z^{(2)}$ seems to exhibit some stagnation, seemingly having converged as best as it can. In addition, very preliminary results have been obtained for a scheme that lies between $Z^{(1)}$ and $Z^{(2)}$, which updates the $(1-w)$ terms only when a guess is accepted, has shown competitive performance relative to $Z^{(2)}$.\\ As the algorithm currently stands $\alpha_{\textrm{cutoff}}$, the sensitivities $\lambda_i$, and the inertia factors $w,w_0$ must be determined heuristically. It is possible we may be able to dynamically adjust the values as the algorithm runs, through a constrained optimization of the acceptance rate, but that remains to be studied.\\ Finally, we would like to implement Gaussian-Mixture based MCMC algorithms, that treat the proposal density as an unknown to be approximated, and combine this framework with our normalization schemes to observe the interaction of the two methods. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
{ "redpajama_set_name": "RedPajamaArXiv" }
6,425
Small and medium-sized enterprises (SMEs) will welcome the introduction of new measures to help reduce the burden of late payments. From 16th March, EU-member states will be expected to implement the Late Payments Directive, which has been set out to put a stop to large businesses postponing the payment of their invoices. Under the directive, public authorities will be required to pay for the goods and services they order from SMEs within 30 calendar days, extending to 60 in exceptional circumstances. While public authorities must comply with the changes, it will be at the discretion of the SME whether or not to enforce the rules, as they may wish to enable late payments to maintain strong relations with their clients. As a statement from the European Commission (EC) revealed, SMEs will be allowed to claim interest for late payments and can automatically pick up a minimum fixed amount of 40 euros as compensation for payment recovery costs. As part of the shake-up of the late payments culture, small businesses will be able to challenge "grossly unfair" terms and practices more easily before national courts, the EC said. Meanwhile, member states have been encouraged to establish prompt payment codes of practice to dissuade large firms from failing to pay smaller businesses within a timely manner. SMEs affected by late payments might also consider options like invoice finance to help them cope with the burden of delayed payments. Factoring, for instance, will ensure SMEs get up to 90 per cent of the value of their approved invoices, while the credit control team at Aldermore chases up debts so the small firm can concentrate on running its business. The Bank will also run credit checks, issue statements and provide collection services so SMEs really can feel free of the hassle of late payments that can bring down an otherwise burgeoning company.
{ "redpajama_set_name": "RedPajamaC4" }
7,104
Субатомна частица е елементарна или съставна частица, по-малка от атома. Субатомните частици са предмет на изучаване от атомната и ядрена физика, както и от физиката на елементарните частици. Те включват електрони, протони и неутрони, които са съставни частици – съставени са от кварки. Кварките са слепени заедно с помощта на глуони. Повечето от тези частици се намират в космическите лъчи и се проявяват при взаимодействие с материята и се наблюдават при процеси на разсейване в ускорителите на частици. През 2008 година е пуснат в експлоатация големият адронен ускорител в ЦЕРН, който ще спомогне за пълното разбиране на стандартния модел. Изучаването на субатомните частици води и до развитието на философията на науката. Атомна физика
{ "redpajama_set_name": "RedPajamaWikipedia" }
9,426
Opinion: Morning Bits: Collusion, clearances and wastefulness President Vladimir Putin of Russia in June 2016. (Alexander Zemlianichenko/Reuters) By Jennifer Rubin Columnist | AddFollow It's about time someone pointed this out. "There is 'ample evidence' that the Trump campaign colluded with Russians, but only special counsel Robert Mueller can decide if it's enough to prove a crime beyond a reasonable doubt, the senior Democrat on the House Intelligence Committee said Wednesday." Read the whole thing. It's time for the White House to come clean on this. "Six Democratic senators on Wednesday wrote to FBI Director Christopher Wray asking for a full list of White House staffers working without a full security clearance. Sens. Richard Blumenthal (D-Conn.), Tammy Baldwin (D-Wis.), Martin Heinrich (D-N.M.), Mazie Hirono (D-Hawaii), Tom Udall (D-N.M.) and Cory Booker (D-N.J.) noted recent reports that indicated dozens of White House officials and appointees have been working in the Trump administration with interim security clearances." Time has run out for people who cannot be permanently granted a clearance. "A senior official on the National Economic Council says he resigned on Tuesday after being informed that he would not receive a permanent security clearance, as the White House faces increasing scrutiny over the number of high-ranking officials allowed to work on interim clearances." Follow Jennifer Rubin's opinionsFollowAdd Once upon a time Mick Mulvaney would have gone nuts about wasting money like this. "Trump's military parade would cost between $10 million and $30 million, White House budget director Mick Mulvaney said on Wednesday." Now, is actually the time to increase the number of skilled immigrants. "Sen. Orrin Hatch introduced several immigration amendments this morning focusing on high-skilled worker visas that would, among other things, make it easier for H-1B visa holders to change jobs. 
The bill does not raise the number of H-1B workers permitted in the U.S., making the legislation more palatable for hardline Republicans worried about protecting American jobs. High-skilled worker visas have been largely left out of the immigration conversation so far, and the President's four pillars have caused enough headache in the Senate." Time for an actual Iran policy. "Iran is paying a very high price in Syria, it can't leave, but it is not likely to be able to snuff out the fighting and so reduce its costs soon. That creates a situation where the United States could instead raise Iran's costs by supporting its enemies." This will be a tough time for the stock market. "A closely watched gauge of U.S. consumer prices offered fresh evidence that a long run of very low inflation is ending, raising the possibility the Federal Reserve could pick up its pace of short-term interest rate increases." And that unnecessary tax cut for the rich and corporations may be part of the problem.
{ "redpajama_set_name": "RedPajamaCommonCrawl" }
2,882
{"url":"http:\/\/ac.cs.princeton.edu\/30mgf\/","text":"# 3. \u00a0 Combinatorial Parameters and Multivariate Generating Functions\n\n## Note III.17\n\nLeaves and node-degree profile in Cayley trees. For Cayley trees, the bivariate EGF with $u$ marking the number of leaves is the solution to $$T(z,u)=uz+z(e^{T(z,u)}-1).$$ (By Lagrange inversion, the distribution is expressible in terms of Stirling partition numbers.) The mean number of leaves in a random Cayley tree is asymptotic to $ne^{-1}$. More generally, the mean number of nodes of outdegree $k$ in a random Cayley tree of size~$n$ is asymptotic to $$n\\cdot e^{-1}\\, {1\\over k!}.$$ Degrees are thus approximately described by a Poisson law of rate 1.\n\n## Note III.21\n\nAfter Bhaskara Acharya (circa 1150 AD). Consider all the numbers formed in decimal with digit 1 used once, with digit 2 used twice, ..., with digit 9 used nine times. Such numbers all have 45 digits. Compute their sum $S$ and discover, much to your amazement that $S$ equals 45875559600006153219084769286399999999999999954124440399993846780915230713600000. This number has a long run of nines (and further nines are hidden!). Is there a simple explanation? This exercise is inspired by the Indian mathematician Bhaskara Acharya who discovered multinomial coefficients near 1150 AD.\n\n## Program III.1\n\nWrite a program that generates 1000 random permutations of size $N$ for $N$ = $10^3$, $10^4$, ... (going as far as you can) and plots the distribution of the number of cycles, validating that the mean is concentrated at $H_N$.\n\n## III.1\n\n(Exercise 5.13 in Analysis of Algorithms) What is the average number of 1 bits in a random bitstring of length $N$ having no 00?\n\n## III.2\n\n(E. Neyman) Define the cost of a ternary (base 3) string to be the sum of its digits. Derive an OBGF for ternary strings with this cost, and use it to compute the average cost of an $N$-digit string.\n\n## III.3\n\n(D. 
Mavrides) Derive an OBGF for the number of 1s in the set of ordered partitions of an integer, and use it to compute the expected number of 1s in a randomly selected partition of $N$.\n\n## III.4\n\n(M. Bahrani) A run in a bitstring is a maximal sequence of consecutive identical bits. For example, the string 11010100001 has 7 runs, with lengths 2, 1, 1, 1, 1, 4, 1 respectively. What is the average number of runs in a random binary string of length $N$? List the corresponding horizontal and vertical OBGFs. Is the distribution of the number of runs in binary strings concentrated?\n\n## III.5\n\n(T. Ratigan) Derive a combinatorial construction leading to an equation involving the OBGF for the number of vertices of degree $d$ in a Catalan tree with $n$ node Solve the equation for $d=1$ and give an asymptotic estimate of the number of degree-1 nodes in a random $n$-node Catalan tree.\n\n## III.6\n\n(M. Tyler) Define the unit $n$-hypercube to be the set of points $[0,1]^n \\subset \\mathbb{R}^n$. For example, the unit 0-hypercube is a point, and the unit 3-hypercube is the unit cube. Define a $k$-face of the unit $n$-hypercube to be a copy of the $k$-hypercube in the exterior of the $n$-hypercube. More formally, a $k$-face of the unit $n$-hypercube is a set of the form $\\prod_{1\\le i\\le n}S_i$ where $S_i$ is either $\\{0\\}$, $\\{1\\}$, or $[0, 1]$ for each $i$ between $1$ and $n$ and there are exactly $k$ indices $i$ such that $S_i = [0, 1]$. Derive a combinatorial construction leading to an OBGF and an explicit formula for the number of $k$-faces in the unit $n$-hypercube. 
Use the OBGF to derive the expected value of the dimension a random face of the unit $n$-hypercube.","date":"2017-11-21 18:57:47","metadata":"{\"extraction_info\": {\"found_math\": true, \"script_math_tex\": 0, \"script_math_asciimath\": 0, \"math_annotations\": 0, \"math_alttext\": 0, \"mathml\": 0, \"mathjax_tag\": 0, \"mathjax_inline_tex\": 1, \"mathjax_display_tex\": 1, \"mathjax_asciimath\": 0, \"img_math\": 0, \"codecogs_latex\": 0, \"wp_latex\": 0, \"mimetex.cgi\": 0, \"\/images\/math\/codecogs\": 0, \"mathtex.cgi\": 0, \"katex\": 0, \"math-container\": 0, \"wp-katex-eq\": 0, \"align\": 0, \"equation\": 0, \"x-ck12\": 0, \"texerror\": 0, \"math_score\": 0.8492659330368042, \"perplexity\": 377.5686295842313}, \"config\": {\"markdown_headings\": true, \"markdown_code\": true, \"boilerplate_config\": {\"ratio_threshold\": 0.18, \"absolute_threshold\": 10, \"end_threshold\": 15, \"enable\": true}, \"remove_buttons\": true, \"remove_image_figures\": true, \"remove_link_clusters\": true, \"table_config\": {\"min_rows\": 2, \"min_cols\": 3, \"format\": \"plain\"}, \"remove_chinese\": true, \"remove_edit_buttons\": true, \"extract_latex\": true}, \"warc_path\": \"s3:\/\/commoncrawl\/crawl-data\/CC-MAIN-2017-47\/segments\/1510934806422.29\/warc\/CC-MAIN-20171121185236-20171121205236-00480.warc.gz\"}"}
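Note III.21 can be verified by symmetry rather than enumeration (the following sketch is added for verification and is not part of the original notes): in any fixed position, digit $d$ appears in a $d/45$ fraction of the arrangements, so each position's digits sum to $N(1^2+\cdots+9^2)/45$, where $N$ is the multinomial number of arrangements, and the repunit $1+10+\cdots+10^{44}$ accounts for the 45 place values.

```python
from math import factorial

# Sum of all 45-digit numbers using digit d exactly d times (d = 1..9),
# computed by symmetry: there are far too many numbers to enumerate.

def bhaskara_sum():
    n = factorial(45)
    for d in range(1, 10):
        n //= factorial(d)                      # N = 45! / (1! 2! ... 9!)
    digit_square_sum = sum(d * d for d in range(1, 10))   # 1^2 + ... + 9^2 = 285
    repunit = (10 ** 45 - 1) // 9               # 1 + 10 + ... + 10^44
    # True sum is an integer, so the floor division below is exact.
    return n * digit_square_sum * repunit // 45

print(bhaskara_sum())
```

Running this reproduces the 80-digit value of $S$ quoted in the note.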
Enquiry from Online Galleries regarding "Antique George III Silver Escutcheon-Shaped Lisbon Label": an antique George III silver escutcheon-shaped Lisbon label, made by Richard Bingley, who was originally apprenticed to Sandylands Drinkwater, and by his wife Margaret Bingley.
Gory (Rogelio Lopez Marin)

Raised in post-revolutionary Havana, Cuba, Marin (b. 1953) studied at both the National School of Art (BFA, 1973) and the University of Havana (MA, 1978). For over fifteen years, he worked as a photographer for the Cuban Cultural Ministry Magazine. Since 2000, he has worked as an independent artist, with exhibitions including the Los Angeles County Museum of Art; the Grey Art Gallery at New York University; the Smithsonian Institution, Washington, D.C.; and the C. Grimaldis Gallery in Baltimore, MD. His works are included in the permanent collections of the Metropolitan Museum of Art, New York; the Los Angeles County Museum of Art; and the Biblioteca Nacional in La Habana, Cuba, among others.
The Yankees have said there is no room on the major league roster for Bernie Williams. If he came back, it would be under a minor league contract, and he would have to win a spot in Spring Training. Teammate Jorge Posada says Bernie wants to play one more year, not retire. That may have to be with another team.

The Cardinals should find out this week if Jeff Weaver will accept their two-year offer. The Cardinals came up some on the average salary per year, but are holding firm on the two years.

Minnesota is interested in Bruce Chen. The Dodgers are interested in P Joey Eischen. The Mets are one of about six teams interested in P Victor Zambrano. 1B Carlos Pena is close to signing a minor league deal with Tampa Bay.

Mets 1B Carlos Delgado may miss a few games at the beginning of the season due to the birth of his child; his wife is due within a few days of the April 1 start of the season.

The White Sox are ready to sign OF Darin Erstad with the news that Scott Podsednik will be out six to eight weeks after groin surgery.

Oakland traded P Kirk Saarloos to the Reds for P David Shafer. Toronto signed P Tomo Ohka to a one-year, $1.5 million contract. Boston signed P Brendan Donnelly to a one-year, $1.4 million contract. The Phillies signed P Antonio Alfonseca to a one-year, $380,000 contract. The Dodgers signed INF Fernando Tatis and P Rudy Seanez to minor league deals.

LAA 3B Dallas McPherson may be out all season due to surgery to remove a herniated disk in his lower back. Bud Selig is ready to announce that the 2008 All-Star Game will be played in Yankee Stadium.
Contents

• The Many Kinds of Mods
• How To Install Your Mods
• Combat Mods
• Mob Mods

The Many Kinds Of Mods

When someone makes a mod, they're actually going in and writing computer code to make changes to the game, which then needs to be loaded into the game's regular code. Because of this, mods can vary greatly in all respects, from how many changes they make and how big those changes are, to how big the files in the mod are and how they get loaded into the game's normal code.

There are tech mods that make Minecraft a world of extreme automation. (Image by Drullkus)

It's useful, when getting an idea of how mods work, what they can do for your game, and just generally what mods are out there, to separate them into different categories. We've done this in three different ways to help introduce you to the world of mods: first by breaking mods into sizes, second by giving each mod a complexity rating, and third by separating mods into categories by what they add. There are almost as many types of mods as there are mods themselves (of which there are thousands), and the lines can blur for some between the categories (like Thaumcraft, which adds items but does enough with systems to be included in that category), but most of them fit under one or more of these definitions.

And then there are mods that are all about plain fun and adding in cool things from other universes, like the Pixelmon mod seen here.

Mods of Different Sizes

Size is an important factor when it comes to mods, because a mod's size typically determines two things that players need to know: how difficult the mod will be to install, and how much it will change the vanilla game. How much it changes the game is something that matters quite a lot. Sometimes players are looking to change the entirety of their game, while others just want a little tweak. This is even more important when trying to mix various mods together in one game, something we'll talk about more in the next chapter.
BubbaDogface, the writer of this book's trusty Minecraft pup, owes his name and ability to be a Creeper hunter to a small mod called Doggy Talents. Isn't he a cute little dogface?

The sizes of mods:

• Tiny mods: Tiny mods are ones that make minimal, barely noticeable additions or changes to the game. Utility mods are often tiny mods, adding just a little change to the Heads Up Display, as are mods like our Odd Mod Spotlight mod Second Screen, which adds a very lightweight system that lets you chat and see data on your Minecraft server from another device. One very notable and desirable feature of tiny mods is that they are often so lightweight that they are very easy to run at the same time as other mods. In fact, they're so easy to install with other mods that tiny mods are often put in modpacks with many other, often much bigger mods.

• Small mods: Smaller mods usually tweak or add just one small thing or segment of the game. For instance, Fex's Random Stuff Mod just adds a bunch of items. Playing with small mods usually doesn't feel all that different from the regular game; there's just a little more to do or a few more entities to interact with. Small mods often work together very well, similar to tiny mods, if not quite so easily.

• Mid-sized mods: These are mods that make either one significant change or a few smaller changes together. An example of this would be the ICBM Mod, which really isn't a very large mod, but it does add items, a new set of crafting recipes, and a new way to do combat. Mythical Creatures is another mid-sized mod, really just adding mobs and a few items, but adding a ton of mobs that are very different from the regular ones and which can make a big difference to the game.

• Big mods: As opposed to small mods, you'll notice when you load up a game with a big mod. They tend to change large parts of the game significantly, making it a mostly or entirely new experience. As an example, one such mod is Tinkers' Construct, which takes the regular method of crafting tools and items through a Crafting Table and makes it much more complicated, adding multiple types of crafting stations, tables, and forges and making players use Patterns and build each individual part of the tool they want. Another example is the Aether mod, which adds an entire new dimension in the sky. These mods are ones which can't be ignored in a game.

• Full conversion mods: Full conversion mods are big mods that are the most noticeable, because they change the game in huge ways, usually so much so that the objective when playing them is something new. This category includes mods like The Crafting Dead, which aims to turn the Minecraft world into a zombie apocalypse wasteland. It includes guns, advanced zombies, and new systems for thirst, whether you can be seen or heard, temperature, and even whether you're bleeding. Even cooler, it also adds in new specially generated maps that simulate the world of a zombie apocalypse. Though you can (and should) build in The Crafting Dead, the goal is much more about surviving zombie attacks and living in a much harsher world than it is about mining and the like, making it a full conversion of the game.

• Modpacks: The biggest of all, modpacks are groups of mods that have been put together by players and/or mod creators in curated packages so that they all load together. These are the best place to start out when it comes to mods, as they are usually very easy to load, and they give you the chance to experience many of the best mods right away. Additionally, mods can be very picky about working together normally, but modpacks are specially put together so that they just work without you having to do much of anything. That being said, stacking mods very quickly changes the game heavily, and some of the modpacks can get a bit intimidating with all of their many, many new things (for instance, FTB Infinity with over 100 mods).
Here you see two types of mods mixed together: a biomes mod, and a few different visual mods, including shaders, a program that tweaks the visuals of Minecraft (Optifine), and an image-taking mod.

Mods of Different Types

We've organized our book based on a way of looking at mods which divides them by type. We did this because knowing a mod's type is what tells you the most about a mod, and by "type," we're referring to the primary thing that each mod actually does. It should be noted that many mods actually do a few things, and some of those things might fit in another category other than the one the mod is listed in with this book. The line between mods is often quite fluid, and it's easy to put many of these mods in multiple categories. Quite a few mods out there do quite a lot, but we've categorized mods as we have based on the main thing that they are known for and do. With that in mind, here are the primary categories for each type of mod:

• Utility mods: These are mods meant to be useful to the player in some specific way, making gameplay just a little easier or more informative, and they're usually very small. Utility mods do things like add a better map to your interface, tell you the exact amount of daylight/night left, or make it easier to find friends. They also tend to work with other mods easily, as they're often among the tiniest mods that are out there.

• Item mods: Item mods are those whose primary purpose is to do just what the title says and add a whole bunch of items. Some item mods also add in some systems, like Still Hungry's new cooking system on the Stove, but these are usually very simple and not the primary objective of the mod. Item mods are pretty popular among the categories because they don't require a lot of learning of new things and are very controllable by the player. Think of it as adding more toppings to a pizza: there's just more to experience, and the overall flavor is just that much more complex and exciting, but it's still pizza at its core.

• Land and biome design mods: The world around you in Minecraft is already pretty complex with its many biomes, but it can always get more interesting! These mods are those whose point is to make the land more complex or otherwise change it. Some add new biomes or dimensions, like Twilight Forest or ExtraBiomesXL, while others do littler tweaks, like Mineralogy's new way of distributing rock.

• Combat mods: More items to fight with, ways to fight better, and new ways to fight! Some people don't care much about combat in Minecraft, choosing to build, because the basic system really isn't very extensive. It shouldn't come as much of a surprise to anyone that some people have really wanted better combat in Minecraft, and that's exactly what combat mods provide. Some do this heavily, like Mine and Blade's new system of weapon holding and items, and some just add a little to basic combat, like BetterPvP.

• Mob mods: Who doesn't want more cuddly (or not so cuddly) creatures to play with in their Minecraft world? Mob mods are actually surprisingly few, perhaps because it can get pretty crowded pretty quick if you overdo it on creatures (which is also hard on your computer), but the ones that do this well give Minecraft a fuller feel, making the world feel like a place where things are constantly happening, and there's a potential friend or foe around every tree.

• Building mods: Many mods add blocks; in fact, a majority of them do. Because it's so common, and so many mods that do so belong more in another category, that's not what we mean when we talk about building mods. Instead, these are mods that do the building for you, making it much quicker to create structures. This isn't a highly common type of mod, but they are out there.

• Adventure enhancers: We use this term to refer to those mods which all have in common the fact that they make the "fantasy and adventure" part of Minecraft more detailed and complex. That's not really a technical term, of course, and this is a pretty wide category of mods, but we think it makes sense to group these mods together. That's partly because the sense of going on an adventure in a fantastical land is such a big part of Minecraft's charm, so it makes sense to focus on these mods as a category, and they also just work very well together. Some of our favorite mods fall into this category, as do many of the most downloaded mods on the internet.

Even a few minutes inside vanilla Minecraft would give you the knowledge that the tech you're seeing in this image by Drullkus just could never happen in regular Minecraft.

• Magic, tech, and crafting systems: Another of the most popular mod types, these mods add new systems of doing things, giving players something to learn beyond basic crafting and Redstone. Sometimes this is just one small system, like the Railcraft or ICBM mods, but sometimes it adds so much and/or is so dang complex that it makes figuring out Redstone wiring seem like child's play, like Thermal Expansion or Applied Energistics. Engineering types, those that like the idea of super-fancy bases with a lot of automation, figuring stuff out, and learning a new way of thinking, are often the kind of people that end up loving the mods in this category.

• Visual mods: We do love Minecraft's iconic, pared-down, pixely look, but mods give people the chance to change up the way the computer represents a Minecraft world onscreen and do it their way. Sometimes the vision of Minecraft's visual mod creators is a massive change to make the vanilla world look beautiful, such as the way Shaders reconfigure and heavily enhance the lighting system in Minecraft, and sometimes it's as simple as adding a way to take awesome screenshots.

Just look at all the new menus, systems, and items that just a couple of mods can add to Minecraft!

• Modloader packs: Modloaders are programs that you can use to launch big modpacks, which are collections of many mods. These loaders make the modpacks load smoothly without much input from the user (at least, that's the idea), which greatly cuts down on the work you have to do to mix mods. We've given the biggest modloaders their own category because many of the world's most popular modpacks are held under the umbrella of the two big modloaders, the Feed the Beast loader and the Technic loader, often exclusively. This is great for us players, as it means that the experts working for these loaders keep track of everything for us, making sure all mods work, are up to date, and work well together. Loaders are a great place both for new mod players to start out, as they make it easy to get mods going, and for mod veterans, as many of the best and most complex mods are in this category.

Mods of Different Complexity

As we've mentioned, not all mods are equal. Some do a lot but are pretty simple, others are small but complex, and there's everything in between. Because of this, we've added a little scale to this book that quickly tells you how complex a mod is. This is more of a general guideline to give you info on what to expect, and not a hard-and-fast technical rating, as people will often vary on what they think is complex from person to person. The idea is to let you know at a glance whether adding a mod will take a lot of learning, or whether it will be a quick and easy addition. To this end, we've rated each mod from 0 to 5 Diamonds, with 0 being the least complicated, and 5 being the most complicated mods or modpacks that there are.

How To Install Your Mods

Right, so here's the one tricky bit when it comes to mods: installing mods can be a bit of a pain. Mods are just plain fun, once you get them installed, that is.
We think that the experiences in modded Minecraft are as good as any others in gaming, bar none, and a lot of people out there agree with us.

If you've done your work right, you should be seeing custom modded versions of Minecraft just like this one in no time flat!

That being said, getting a mod installed can (but won't always) take a bit of work. Typically it's not hard at all. Most Minecraft mods are loaded the same way, and you just have to learn it once to get it forever. Those kinds of mods go through the Forge program, where you only have to worry about making sure all mods are for the same version of Minecraft and that you put the mod into the correct folder. It's only a matter of time before you start seeing Minecraft's world in a whole new way.

Sometimes, though, it takes a little doing. Each computer is different, and each situation is unique when getting a mod working, both for the computer and for the mod. This book and this chapter can guide you to a point, but it's so different from one mod to the next that you always want to read the mod's instructions on its page and follow them to the letter. Luckily almost all mod creators include detailed instructions at the mod's link, and we've included the links to every mod in this book, so you should be able to easily find each mod's specific installation instructions.

Sometimes even that isn't enough, though, and there's always the chance that you might have to ask for help in the Minecraft Forums or on the mod's page. Don't hesitate to do that, though remember to remain polite. This is also part of the mod culture; mods are made by fans, so they're rarely an exact science, and getting mods to run is a traditional part of the experience for all games, and all gamers seeking to mod them. Each mod has an online presence somewhere, and if you have the time to invest in it, you will be able to get almost every mod running on your own rig.
You'll become very familiar with pages like this one when you start getting into modding.

The hope is that you'll be able to avoid doing too much work for a mod, though. A little instruction in basic mod installation will be all you need 80% of the time, and that's what we've got for you here.

Where to Get Mods

We've included a link to every mod in this book (except for the ones that go through a modloader, which don't require a link), but in general when looking for a mod, there are four locations at which you'll find most mods:

• Planet Minecraft: The prime directory of all Minecraft creations, including maps, texture packs and, as is most pertinent to this book, mods. The vast majority of mods have a Planet Minecraft page that has a link to download them, info about the mod, photos and/or video of the mod, and a comment section. Not all PMC profiles are kept totally up-to-date, however, though many are.

The Minecraft Forums are an indispensable resource for anyone interested in mods, as you can not only get mods there, you can also learn how to install them and speak to the creators and community, which is especially great if you're having any trouble getting a mod to run.

• The Minecraft Forum: The Minecraft Forum is the other primary website where mods keep a major presence, along with Planet Minecraft. It is a bit less formal of a project-holding site, in that the entries for the mods are just forum posts (though often well-structured and heavily informed ones). At the current time it is more common for a mod to have an up-to-date, well-crafted Minecraft Forum post than a good Planet Minecraft post. Often, though, the big mods have both. In this book you'll find more Minecraft Forum links than any other, and you won't go wrong from following one when it comes to getting mods working.

• The Curse Page: The Curse company is extensively involved in online gaming, and they have a major presence within the Minecraft community. Not only does the Minecraft Wiki, the prime repository for Minecraft knowledge online, fall under the Curse banner, Curse also hosts downloads for many Minecraft mods. A lot of the big mods have a Curse page in addition to a PMC and/or Minecraft Forum page. Though these don't have the comment-section interaction or the pure Minecraft focus that the other mod links have, Curse pages are very reliable and consistently updated, and some mod creators consider their Curse page to be the best one to share with potential users.

This is an example of an individual mod creator page for the lovely food mod Still Hungry! Using these pages is one great way to help mod creators out a bit.

• Individual Mod Creator Pages: Mod creators don't make a lot of money doing what they do. They work on a game that already makes its money separately, and the only way they get compensated is from user donations or from getting hits on their mod links. By far the best way to support mod creators is to use their own website to download a mod, as it will directly register hits on their site and will direct you to the download link that pays them the most, both of which earn them money. Not only that, but creators' websites always have the up-to-the-minute download info, updates, and guides to mods, so they are always preferable when it comes to getting a new mod.

The rule is this: Google it, and try to find the personal page for the mod creator. If you can't, go with PMC, the Minecraft Forum, or Curse. Avoid most other sites at all costs, as few good mods have no presence at the main sites, and there are definitely sites out there with some dubious downloads on offer.

The links you need for Forge are down at the bottom of this page. There are a few different ones, but the Install link typically is the best one.

Forge: The Program That Makes Most Mods Work

http://files.minecraftforge.net/

The first step to getting Minecraft mods going on your computer is to install the Forge mod loading program. To do this, you'll have to download the Forge installer from the link above and then run it.

At the link above, simply scroll down to the "Downloads" section, and then pick the version of Minecraft you would like to run mods in. Note: this is most likely going to be Minecraft 1.8, 1.7.10, or 1.6.4. If you decide you want to run mods for a different version of Minecraft later, you'll just come back to this link and download and install that one.

To actually download the file, make sure you've clicked on the version you want, and then look below where it says "Download Recommended" (if it only says "Download Latest," look there instead). Click the "Installer" button (Windows users can also just use the "Installer-win" button, but either will work), and then save the file to your computer.

Forge really is quite simple to install, especially since it tells you if you did it right at the end!

Before you open that file and install it, you need to make sure that you have opened Minecraft at least once. Most likely you already have, but if you're jumping right into modding, make sure you open the game first and then close out again. This must be done because certain files and folders Forge needs aren't created until Minecraft runs for the first time.

When that's done, run the file you downloaded from Forge. It will pop up a window asking if you want to "Install Client," "Install Server," or "Extract." Select "Install Client," ignore the file location at the bottom, and just hit OK. It should then show that it's downloading and installing Forge, and if you did it right, you'll see it tell you that it was successful.

On a Mac, this is the menu you want to drop down while pressing the "alt" key in order to find the Library. On a PC, use the Run program and type in "%AppData%\.minecraft" to find the Minecraft folder.

Finding and Installing Mods for Forge

To actually get mods loaded into Forge, you will need to do a few things.

1. Get some mods!
This is the easy part: just go to one of the links in this book, or head to Planet Minecraft, the Minecraft Forums, or the Curse webpage and search around for mods. There are tons on each of these, and a good idea for which to pick is to go with those that have a lot of downloads (each site lists these differently, though for the Forums you'll just have to look at how many comments the mod's page has instead of download numbers). Make sure that you are getting the correct mod for the version of Forge you have installed (and remember that you can always download and install other versions of Forge).

There's that pesky Minecraft folder! You'll be coming back to this a lot if you get into modding, so remember how to find it quickly to save time.

Note: When installing mods for the first time, you should always add just one mod at a time. This way if a mod causes Minecraft to crash, you know which one did it. In fact, we highly recommend that you only install one mod, period, for your first go at modded Minecraft, and wait until you get the hang of how to install mods before adding any more to a single game.

2. Find the mods folder.

To run a Forge mod, you have to put it in a folder called "mods" inside the "minecraft" folder on your computer. There are two ways to find this folder:

A. Locate it manually by finding the folder at the following location, depending on your operating system:

• Windows: C:\Users\You\AppData\Roaming\.minecraft (or %AppData%\.minecraft)
• Linux: ~/.minecraft
• Mac: ~/Library/Application Support/minecraft

For Windows, you can open "Run" and simply type "%AppData%\.minecraft" into it, and it will bring it up.

Clicking that "Open resource pack folder" button can help you find the Minecraft folder if you're having a little trouble.

For Mac, open a Finder window, and navigate to the Library folder by holding the "alt" key and clicking "Go" up in the top menu bar. This will show the Library folder as an option; click on that, and then "Application Support," and then "minecraft."

B. Run Minecraft, and go to "Options." Click "Resource Packs," and then "Open resource pack folder." This is easiest done when in "Windowed" mode and not Fullscreen, but either way it will open up the "resource packs" folder in a new window in your operating system. All you have to do then is to go up one folder to the folder "resource packs" is in, and you're in the "minecraft" folder.

3. Move mods into the "mods" folder.

In the "minecraft" folder, there should be an empty folder called "mods." If you don't see this, go ahead and create it now. Once you've found it or created it, all you have to do is move the .jar file that you downloaded for each mod into the "mods" folder. Don't open or extract these files; just move them as they are. If the mod file is not a .jar and is a .zip, you may need to extract the .zip and get the .jar out of it, and then move that over to the "mods" folder.

Note the "Forge" profile loaded in the left corner, and the mention of Forge in the right-hand corner as well. This is a Minecraft game that's ready to play some mods!

4. Open the Minecraft launcher and select the Forge profile.

When you open the Minecraft launcher, you'll see on the bottom left that there is an option to change which "Profile" you are running. To run mods, you'll need to select and run the Forge profile, which should automatically show up there after you've installed Forge.

Note: If you need to, you can click "New Profile" and create a Forge profile yourself. To do this, you'll need to click the "Use version:" dropdown menu and then select the version of Forge you want to run. This can be useful to do if you want to have multiple Forge versions on your computer at once, creating a different profile for each.

5. Check the corner of the launcher.

If it says "Ready to play Minecraft" and then something that includes the word "Forge" after it, you're ready to go!

6. Click "Play," and launch Minecraft!

If it worked, you will see the regular Minecraft menu (or a new one, depending on the mod), but there will be information in the bottom left-hand corner that says Forge is loaded and gives a count of the mods you have running.

7. Start a game!

You can usually tell if mods are working by checking the Inventory screen.

This is what the Feed the Beast mod loader looks like. A little different from regular Minecraft, and it makes loading up complex modpacks a breeze.

Other Ways to Install Mods

Forge is by far the most commonly used program to run mods, but there are two other ways, the first of which is also pretty well-used by mod players.

1. The Technic and Feed the Beast Launchers: These are two programs that you can use instead of the regular Minecraft launcher, and each makes it very easy to load mods or modpacks. All you have to do is download the file for each launcher, and then open it (only run one at a time, of course) and log into your Minecraft account (the same login you use normally). There will be a list of mods that you can choose from, and all you have to do to play one is to select it and tell the launcher to "Play" or "Launch." It will then do all of the downloading, installing, and launching for you, and should kick you right into the modded game. If something goes wrong, try launching again, and if that doesn't work, head to the Technic or FTB sites to get assistance (they have a lot of people there to help).

This is the Technic loader, another major mod loader with a set of stellar mods and packs to play.

2. Manual installation: Mods can be installed manually without Forge or a launcher, and in fact that's how people used to have to do it before all of these convenient launchers and loaders existed.
The instructions to do this are pretty complicated though, and it's easy to mess up your game doing so. If you're interested in doing things the hard way, this is easy to Google, but we highly recommend using one of the more standard ways of installing mods.

Mixing Mods - A Few Tips

Some mods are big enough that people often play them on their own, but for the most part, people run more than one mod at the same time. We've touched on this a little bit throughout this chapter, but here are some tips to get the best experience when it comes to mixing mods:

• Always make sure all mods are for the same version of Minecraft.
• Always make sure you are running the correct version of Forge for your mods.

When you get mods going and actually get in the game, you can tell which ones loaded in correctly by checking the "Mods" button on the menu, which will get you a big list like this one.

• Install one mod at a time, and run Minecraft between each installation to make sure it works. This will show you which mod is breaking your game if it won't load, something that is very frustrating and tedious to figure out if you have loaded a ton of mods at once.
• Don't overload your computer. Very powerful computers can run a lot of mods at once, but anything less than top-of-the-line models will have trouble the more mods you load in. Keep it at a reasonable level for the kind of machine you have in order to make sure your game runs smoothly.
• Check online for lists of mods that work well together. Many mod creators actually list a few that work well with their mod on their mod's page, and other players have created lists online that go even further.

Troubleshooting Mods in General

Sometimes a mod just won't work, and there doesn't seem to be a reason why. You can do everything right in installing it, and it can still cause issues in your game or even cause it not to load at all.

What the mods folder can look like when you get a bunch of mods loaded into it correctly.
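The first two mixing tips above (every mod on the same Minecraft version, with a matching Forge build) can even be spot-checked with a small script. The sketch below is purely hypothetical, not an official tool: it assumes each Forge-era mod .jar bundles an mcmod.info JSON file declaring an "mcversion", which is true for most 1.6/1.7 mods but not guaranteed, and the function name is made up for illustration.

```python
import json
import zipfile
from pathlib import Path

def mod_mc_versions(mods_dir):
    """Report the Minecraft version each mod .jar claims to target,
    by reading the mcmod.info file most Forge-era mods bundle."""
    versions = {}
    for jar in sorted(Path(mods_dir).glob("*.jar")):
        try:
            with zipfile.ZipFile(jar) as zf:
                info = json.loads(zf.read("mcmod.info").decode("utf-8"))
        except (KeyError, ValueError, zipfile.BadZipFile):
            # No mcmod.info, or it isn't valid JSON / a valid zip.
            versions[jar.name] = "unknown (no readable mcmod.info)"
            continue
        # Some mods wrap their entry list as {"modList": [...]}.
        entries = info.get("modList") if isinstance(info, dict) else info
        if entries:
            versions[jar.name] = entries[0].get("mcversion", "unspecified")
        else:
            versions[jar.name] = "unspecified"
    return versions

if __name__ == "__main__":
    for name, version in mod_mc_versions("mods").items():
        print(f"{name}: {version}")
```

Run it from the folder that contains "mods" and compare the reported versions by eye; any odd one out is a likely suspect when the game won't load.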
This happens, and while each situation will be different, we can give you a few tips for if it happens to you:

• Make sure you know which mod is causing the crash. Remove all other mods and test it alone. If it works alone, it may be conflicting with another mod. If it still doesn't, you know which mod is the culprit.
• Save any information that Minecraft gives you when it crashes. It will usually give a crash report, and may even give an error in the game.

Sometimes mods will crash; that's ok! It happens to everyone, and it usually just means that one or more of your mods are in conflict with another, so you just have to figure out which mods aren't playing well together.

• Check the mod page. Someone else has probably had the same problem you are having, and there is always a lot of discussion on fixes in the comments on Planet Minecraft and the Minecraft Forums. Check there before asking for help.
• Google the problem as well. Sometimes the best troubleshooting discussions are on other pages away from the mod page.
• If you still have trouble, try posting your own request in the Forums. Don't direct message the mod creator though, they tend to hate this and are less likely to get back to you. When you post, make sure to include all information you can about the mod, what version you're trying to run, what computer you are on, what the game said when it crashed and anything else you can think of that might be relevant. Oh, and be nice and patient! People are there to help you, but they won't want to if you are a pest.

Combat Mods

Turning the Fights Up a Notch

Minecraft is a rare game in that combat is more a side thing and not the main focus of the game, but that is really only true in vanilla Minecraft. Even in plain ole 'Craft, people do get up to quite a lot of bashin' and smashin', and if you go to just about any online server, the amount of battling you do goes up quite a large amount.
Even though Minecraft isn't really built around combat, people have come up with some really awesome ways to do PvP in this game, and have gotten really good at it, and in no part of the game is that more true than when it comes to mod-aided Minecraft combat. Combat mods come in a lot of shapes and sizes, from those that just add a few little weapons, to those that deal more with helping you get more information about combat, to those that totally overhaul the whole weapons and fighting systems. Here, you'll find all of these types, as we've collected a group of mods that represent the best of the combat mods available today.

Mine and Blade Battlegear 2

Creator: Full Stack
In one sentence: Expands the weapons system in Minecraft significantly, including not only new weapons, but also the ability to use two at once, save weapon configurations and even use shields.
Version: 1.5.2-1.7.10 (1.8 upcoming, according to the forum page)
Installed Through: Forge (manually)
Where you can find it: http://bit.ly/MineandBlade2
Complexity Level: 2 Diamonds
Adds Items?: Yes
Adds Mobs?: No

Actually called Mine and Blade Battlegear 2 for this version, this is a Modpack that is named after the war simulation battle game Mount & Blade and which adds a very large amount of things to kill other Crafters and mobs with to your game. That is to say, it's chock full of weapons! While many Modpacks add weapons, Mine and Blade is considered one of the best, and it even adds the ability to use a shield and dual-wield some killing implements.

And like many mods, Mine and Blade adds quite a lot of new craftable items, like the expensive but deadly Diamond Arrow.

It also gives Crafters the ability to save weapon configurations in its new inventory set-up so they can easily switch between various implements, and it has an extra little feature that tells you if there is a new version of Mine & Blade out automatically, which is quite helpful.
Better PvP

Creator: xaero96
In one sentence: Gives you stats and other information on combat and combat-related things through the user interface, plus Xaero's minimap and some handy tweaks to make things like running and eating food much easier.
Version: 1.7.10, 1.8
Installed Through: Forge (manually)
Where you can find it: http://bit.ly/BetterPVP
Complexity Level: 1.5 Diamonds
Adds Items?: No
Adds Mobs?: No

This might be considered a utility mod, if it weren't so specifically for use in combat. Better PvP is designed to give you much, much more information on combat situations than you would normally have. Better PvP adds info like weapon and armor status, potion effects, Arrow counts, and enchantment statuses, as well as giving indicators of quick combat events, such as when you're being shot by arrows, when you gain or lose XP, or when something is about to explode near you. Additionally, this thing will cut down on your need to jump in and out of inventory or use inefficient button combinations in combat by giving you options like binding food eating to a certain key or making sprinting much easier. And to sweeten the deal, Xaero includes his very good little minimap (which you can find on its own in our Utility Mods chapter) with Better PvP, which is highly useful in combat both for its ability to show the enemies and land around and for its feature that shows you where your last death occurred. All in all, an immensely useful combat mod which doesn't actually change the way combat works in vanilla Minecraft by adding items or new systems.

Paintball Mod

Creator: IKinx
In one sentence: Play paintball in Minecraft with a variety of weapons and equipment like flags and base creators that create a minigame similar to real-life paintball.
Version: 1.6.2 through 1.8
Installed Through: Forge (manually)
Where you can find it: http://bit.ly/PaintballMod
Complexity Level: 2 Diamonds
Adds Items?: Yes
Adds Mobs?: No

If you've ever played real-life paintball with a good crew of friends at the right paintball place (and a bit of pain doesn't faze you), you know that the excitement and camaraderie and chaos can be just a huge amount of fun. Mod creator IKinx (who is looking to make a career of this, so make sure to support them if you can!) has brought this very thrill to our favorite building game with the awesome Paintball Mod, which features everything you need to get a game going. This includes a huge number of weapon types (pistols, sniper rifles, machine guns, shotguns, grenades and more), as well as ammo and items to set up your gameplay area, like automatic base creators and flags. All of this is craftable, and IKinx has conveniently provided downloads for server-side mod stuff on his site as well, if you have your own server goin'.

Ferullo's Guns Mod

Creator: Ferullo
In one sentence: Another old one, this is perhaps the primary gun-adding mod for Minecraft, adding many realistic guns to the game.
Version: 1.6.4
Installed Through: Forge (manually)
Where you can find it: http://bit.ly/FerullosGunsMod
Complexity Level: 2 Diamonds
Adds Items?: Yes
Adds Mobs?: No

Mod creator Ferullo has switched his efforts over to a new Counter-Strike-based mod for Minecraft, but his old Guns Mod is still one of the most commonly found and frequently downloaded mods that takes our tame lil Minecraft and adds all sorts of things that go "boom!" and "pew!" to it. The number of guns this thing adds is pretty tremendous, with many styles based on all sorts of real guns like high-caliber sniper rifles, ACOGs, M-16s and so much more, plus it even throws in some fun stuff like night vision goggles and realistic body armor.
If you'd like to make your Crafting a bit more mature and dangerous, and especially if you dig games like Call of Duty or Counter-Strike, Ferullo's Guns Mod is your go-to. Oh, and if you like the idea of zombies + guns, check out the Crafting Dead mod, which uses this Guns Mod, but adds in a whole zombie apocalypse thing (find it in our Technic Launcher chapter)!

Mob Mods

More Friends and More Foes

Every time Mojang releases a new mob with a Minecraft update, it's a big ole deal, and everyone goes crazy about the new creature roaming the lands of Minecraft. What modded Minecraft players know that those who just play vanilla might not, however, is that you can have that feeling times thirty or so all at once, right now! Mob mods (it's fun to say, isn't it?) are some of the most requested things that Minecraft players getting into the modding scene ask for, and that makes a lot of sense, as adding even one new creature into the game really does liven it up a big amount. Luckily, mod creators have gone out there and added just about every imaginable creature to Minecraft and jammed them all into mods that are among the easiest to install and the least likely to mess up your existing worlds. If you're lookin' for a few new friends to hang out with (or enemies to slay!), one or two of the mods in this chapter will set you right.

Mutant Creatures

Creator: thehippomaster21
In one sentence: Turns regular mobs into weird, bigger, way more dangerous versions of themselves.
Version: 1.7.10 and back
Installed Through: Forge (manually)
Where you can find it: http://bit.ly/MutantCreaturesMCMod, http://bit.ly/MutantCreatures1710Download
Complexity Level: 1.5 Diamonds
Adds Items?: No
Adds Mobs?: Changes mobs, but doesn't add

Ever looked at an Enderman and thought, "Enderman, you just aren't weird enough!"? With the Mutant Creatures mod, it's unlikely you'll be doing that ever again, as its whole premise is to take the plain versions of mobs and seriously weird them up.
Mobs get bigger, really freaky lookin', and have much more strength and better attacks. In a fun twist, this mod also allows you to create minion Creepers, which will attempt to aid you against the mutants' onslaught by blowin' them right up. Watch out though - they might get overexcited and accidentally catch you in the explosion too!

Doggy Talents

Creator: ProPercivalalb
In one sentence: Massively expands what your pet Dog can do in Minecraft, especially adding a huge number of "talents" you can train your dog in, such as hunting and herding.
Version: 1.8 and down
Installed Through: Forge (manually)
Where you can find it: http://bit.ly/DoggyTalentsMod
Complexity Level: 2 Diamonds
Adds Items?: Yes
Adds Mobs?: Changes mobs, but doesn't add

Taming a Wolf and getting your very own trusty puppy is one of the cuter, more cuddly things you can do in Minecraft, but let's be honest: the Dogs in vanilla Minecraft don't really do much. No longer, with the Doggy Talents mod! This thing is pretty lightweight, but surprisingly complex, and it really does turn your furry pal into a hugely useful friend. The way it works is that you craft special treats for your Dog and feed them to him or her to train them, which levels them up. Each time they level up, you can spend points in a massive variety of different talents, which include everything from making them move faster, have poison attacks, herd peaceful mobs and much more. This mod also allows you to name your pet, which shows up above their head, to tell your pet how to behave (such as to attack everything in sight or only what you tell it to), and it even adds special Dog items like beds and baths. This is the ideal mod for everyone who loves their digital pet and wants to make their relationship with it even stronger.

More Mobs Mod

Creator: SimJoo
In one sentence: All the mobs you could ever want, including new human mobs, and some neat items like Wings.
Version: 1.7.2, 1.7.10
Installed Through: Forge (manually)
Where you can find it: http://bit.ly/MoreMobsmod
Complexity Level: 2 Diamonds
Adds Items?: Yes
Adds Mobs?: Oh yes

This is the big one when it comes to adding new types of mobs to your Minecraft world, as it features well over 30 new mobs of all different shapes, sizes, and types. It also adds a few neat new craftable items like Fire tools and Wings, which make it a double whammy of a mod. Primarily though, this is mostly a big-time mob mod, and the list of the mobs this mod adds reads like a zoo directory. Check it out:

Mystical Creatures

Creator: FiskFile
In one sentence: An older but highly unique mob mod that adds new mobs that are mutant combinations of two mobs already in the game.
Version: 1.6.4
Installed Through: Forge (manually)
Where you can find it: http://bit.ly/MysticalCreatures
Complexity Level: 1.5 Diamonds
Adds Items?: Yes
Adds Mobs?: Yes

This is another one for older versions of the game, but we included it because it's just freakishly adorable and fun. Mystical Creatures can be thought of as loosely based around what would happen if an accident caused mobs in the Minecraft world to merge, creating all new, kinda horrifying creatures. For instance, it has the Cow and Squid merged into a crossbreed that is both ridable and can be milked to get Calamari Stew, and the Spider/Creeper that can climb, jump on your head and blow up. The other two mobs added are the Enderman/Skeleton and the Morbid Harvester (kind of a mash-up of the Spider and Enderman), and these last two drop Enderman Scales (to make armor) and the Morbid Harvester Arm (a powerful weapon), respectively.

Mythical Creatures

Creator: HurricaneSHLBHP
In one sentence: A creature mod that adds all manner of fun, fantastical creatures like dragons, crocodiles and some ponies you might recognize, as well as some item additions like extra-long swords and enchanted armor.
Version: 1.6.4
Installed Through: Forge (manually)
Where you can find it: http://bit.ly/MythicalCreaturesMod
Complexity Level: 2 Diamonds
Adds Items?: Yes
Adds Mobs?: Yes

This mod doesn't mess around with systems or anything fancy, but in terms of the kinds of mobs it adds, it's the most imaginative on this list. It's all fantasy stuff here, turning your Minecraft world into a land of mythical (thus the name!) and quite dangerous beasts. Plus, you get the fun addition of some cool items to play around with, including, as they say on their forum page, "stuff that can cause ridiculously huge explosions." This is another good one for when you don't want to add in anything too complex, you just want to make your existing world a little more flavorful.

Millénaire

Creator: Kinniken
In one sentence: Adds five new types of Villages to Minecraft, each of which has its own culture and an extensive amount of new items and Villager types, and which can grow, trade and interact with the player in a large number of ways.
Version: 1.7.2, 1.7.10
Installed Through: Forge (manually) or through the Millénaire mod installer
Where you can find it: http://millenaire.org/
Complexity Level: 3.5 Diamonds
Adds Items?: Yes
Adds Mobs?: Yes

The basic idea of Millénaire is simple, but in actuality this is one of the more complex and comprehensive mods out there, so much so that it would almost be a total conversion mod if it weren't otherwise set in vanilla Minecraft. The core concept is that there are various cultures of Villagers around the Minecraft world in randomly generated Villages, and these Villagers all have jobs like mining or chopping Wood (instead of just milling about). There is a leader, a trading hall and there are many ways to interact with the Villages, including helping them get items they need to build their Village and taking quests from them.
On top of this, you can also gain a reputation with these Villages and even become a leader, which in turn allows you the option to build your own Village that you can control (in terms of what they do and build). The Creation Quest you can get from the Villages is also a pretty big change to the game, as it adds in a little backstory for the Minecraft world and sends the player on an epic quest to gather items and information. All in all, Millénaire is a way to really spice up and populate the regular Minecraft world, turning it into more of an RPG builder game with a plot and characters, as opposed to being just about what the player wants to do. It's very cool, and one of the best ways to bring excitement back into the game if you're a little bored with regular Crafting.

Minecraft Comes Alive

Creator: WildBamaBoy and SheWolfDeadly
In one sentence: Another Villager mod, but this one is more about tweaking existing Villagers so they can be more intricately interacted with and so that they look and behave in a wider variety of ways.
Version: 1.6.4, 1.7.2, 1.7.10
Installed Through: Forge (manually), requires both MCA and the RadixCore
Where you can find it: http://www.radix-shock.com/mca--overview.html, http://bit.ly/MinecraftComesAliveCurse
Complexity Level: 3 Diamonds
Adds Items?: Yes
Adds Mobs?: Yes

Villagers are loud, annoying, almost useless little creatures in vanilla Minecraft (okay, they're pretty adorable too), and it always seems like you should be able to interact with them a lot more than you can. Where Millénaire is all about economy and interacting with Villagers on a bigger scale, Minecraft Comes Alive is about Villager personalities and having relationships with them. It replaces the strange-nosed, squawking brown Villager models that don't do much with a whole heap of Villager skins (about 200 of them!) that are much more human, and it changes the shape of Villages to be much more realistic.
Even better, you can now click on Villagers and interact with them in dozens of ways, including flirting, talking and even hiring them to do jobs! Each Villager has its own specialization, like farming or guarding, and its own personality, and they remember your interactions with them. You can become friends or, if you get close enough, even marry one. There are literally thousands of dialogue options with Villagers in Minecraft Comes Alive, and while it's not the world-changer that Millénaire is, it's probably the best semi-vanilla Minecraft mod for Villagers out there.

Copyright

Copyright © 2015 Triumph Books LLC

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form by any means, electronic, mechanical, photocopying, or otherwise, without prior written permission of the publisher, Triumph Books LLC, 814 North Franklin Street; Chicago, Illinois 60610.

This book is available in quantity at special discounts for your group or organization. For further information, contact:

Triumph Books LLC
814 North Franklin Street
Chicago, Illinois 60610
Phone: (312) 337-0747
www.triumphbooks.com

ISBN: 978-1-63319-543-1

This book is not authorized, sponsored, endorsed, approved or licensed by, or associated or connected with, Mojang AB. MINECRAFT is a registered trademark owned by Mojang AB; other company names and/or trademarks mentioned in this book are the property of their respective companies and are used for identification purposes only. All screenshots for criticism and review. Minecraft®™ & © 2009–2015 Mojang/Notch.
Q: BehaviorSubject adding the same value

Hi, I have a BehaviorSubject with a simple type int. I add the value 5 to it and then add another value 5. The stream listener sends me two events. How do I force a check of the values so that no event is sent if the value is equal to the last value? Sample code:

    class TestBloc {
      TestBloc() {
        testBehavior.stream.listen((event) {
          print('Event value = $event');
        });
        addValueToStream();
        addValueToStream();
      }

      final testBehavior = BehaviorSubject<int>();

      void addValueToStream() {
        testBehavior.add(5);
      }
    }

A: What you're looking for is the distinct() method of BehaviorSubject. Take a look at this from the documentation:

    Skips data events if they are equal to the previous data event. The returned stream provides the same events as this stream, except that it never provides two consecutive data events that are equal. That is, errors are passed through to the returned stream, and data events are passed through if they are distinct from the most recently emitted data event.

And here is how you implement it:

    class TestBloc {
      TestBloc() {
        testBehavior.distinct((a, b) => a == b).listen((event) {
          print('Event value = $event');
        });
        addValueToStream();
        addValueToStream();
      }

      final testBehavior = BehaviorSubject<int>();

      void addValueToStream() {
        testBehavior.add(5);
      }
    }
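For readers who don't use Dart, the contract distinct() implements (drop an event when it equals the immediately preceding one) is easy to state on its own. Here is a small, purely illustrative Python sketch of the idea; it is not RxDart, and the names are made up:

```python
_SENTINEL = object()  # marks "nothing emitted yet"

def distinct(events, are_equal=lambda a, b: a == b):
    """Yield events, skipping any event that equals the one
    emitted immediately before it (consecutive duplicates only)."""
    previous = _SENTINEL
    for event in events:
        if previous is _SENTINEL or not are_equal(previous, event):
            yield event
        previous = event

print(list(distinct([5, 5, 3, 3, 5])))  # -> [5, 3, 5]
```

Note that only consecutive duplicates are dropped: the trailing 5 still comes through because the event before it was 3, which matches the documentation quoted above.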
A SUPERMARKET chain has been told to open its planned Southampton store or give the keys to someone that will.

The ultimatum has been issued by city councillor Warwick Payne, who is demanding Morrisons get on with its planned supermarket in Woolston. He added that finally opening the 60,000sq ft Morrisons in the main unit of the Centenary Quay development will create more than 100 jobs for the area and give local shoppers "something fresh".

It comes more than three years after the chain was supposed to open in November 2015, after buying the unit. It had announced plans to open back in 2013. But, in June 2016, developer Crest Nicholson revealed that Morrisons bosses decided to cut costs, which indefinitely postponed the store's launch, and the unit has been empty ever since. The nearest Morrisons is in Totton.

Cllr Payne, who represents Woolston, said: "Morrisons has been sitting on this empty supermarket for three years, in which time more than 100 local people could have had much-needed jobs. This is wrong."

He added that local frustration had been growing, so a petition had been started. So far dozens of locals have signed it.

"I can't understand why Morrisons won't open its store," he added. "Its nearest outlet is ten miles away, so it can't take trade from itself, and while Lidl is booming in Woolston, none of the 'big four' supermarkets have entered the area.

"If Morrisons was going to open a supermarket anywhere, why not Woolston? It beggars belief.

"I'd love to make my case to Morrisons' executives because I believe Woolston is a prime candidate for a new supermarket."

Morrisons hasn't responded to requests for comment.
// TarArchive.cs
//
// Copyright (C) 2001 Mike Krueger
//
// This program is free software; you can redistribute it and/or
// modify it under the terms of the GNU General Public License
// as published by the Free Software Foundation; either version 2
// of the License, or (at your option) any later version.
//
// This program is distributed in the hope that it will be useful,
// but WITHOUT ANY WARRANTY; without even the implied warranty of
// MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
// GNU General Public License for more details.
//
// You should have received a copy of the GNU General Public License
// along with this program; if not, write to the Free Software
// Foundation, Inc., 59 Temple Place - Suite 330, Boston, MA  02111-1307, USA.
//
// Linking this library statically or dynamically with other modules is
// making a combined work based on this library.  Thus, the terms and
// conditions of the GNU General Public License cover the whole
// combination.
//
// As a special exception, the copyright holders of this library give you
// permission to link this library with independent modules to produce an
// executable, regardless of the license terms of these independent
// modules, and to copy and distribute the resulting executable under
// terms of your choice, provided that you also meet, for each linked
// independent module, the terms and conditions of the license of that
// module.  An independent module is a module which is not derived from
// or based on this library.  If you modify this library, you may extend
// this exception to your version of the library, but you are not
// obligated to do so.  If you do not wish to do so, delete this
// exception statement from your version.
using System;
using System.IO;
using System.Text;

namespace ICSharpCode.SharpZipLib.Tar
{
	/// <summary>
	/// Used to advise clients of 'events' while processing archives
	/// </summary>
	public delegate void ProgressMessageHandler(TarArchive archive, TarEntry entry, string message);

	/// <summary>
	/// The TarArchive class implements the concept of a
	/// 'Tape Archive'. A tar archive is a series of entries, each of
	/// which represents a file system object. Each entry in
	/// the archive consists of a header block followed by 0 or more data blocks.
	/// Directory entries consist only of the header block, and are followed by entries
	/// for the directory's contents. File entries consist of a
	/// header followed by the number of blocks needed to
	/// contain the file's contents. All entries are written on
	/// block boundaries. Blocks are 512 bytes long.
	///
	/// TarArchives are instantiated in either read or write mode,
	/// based upon whether they are instantiated with an InputStream
	/// or an OutputStream. Once instantiated TarArchives read/write
	/// mode can not be changed.
	///
	/// There is currently no support for random access to tar archives.
	/// However, it seems that subclassing TarArchive, and using the
	/// TarBuffer.CurrentRecord and TarBuffer.CurrentBlock
	/// properties, this would be rather trivial.
	/// </summary>
	public class TarArchive : IDisposable
	{
		/// <summary>
		/// Client hook allowing detailed information to be reported during processing
		/// </summary>
		public event ProgressMessageHandler ProgressMessageEvent;

		/// <summary>
		/// Raises the ProgressMessage event
		/// </summary>
		/// <param name="entry">The <see cref="TarEntry">TarEntry</see> for this event</param>
		/// <param name="message">message for this event. Null is no message</param>
		protected virtual void OnProgressMessageEvent(TarEntry entry, string message)
		{
			if (ProgressMessageEvent != null) {
				ProgressMessageEvent(this, entry, message);
			}
		}

		#region Constructors
		/// <summary>
		/// Constructor for a default <see cref="TarArchive"/>.
		/// </summary>
		protected TarArchive()
		{
		}

		/// <summary>
		/// Initialise a TarArchive for input.
		/// </summary>
		/// <param name="stream">The <see cref="TarInputStream"/> to use for input.</param>
		protected TarArchive(TarInputStream stream)
		{
			if (stream == null) {
				throw new ArgumentNullException("stream");
			}
			tarIn = stream;
		}

		/// <summary>
		/// Initialise a TarArchive for output.
		/// </summary>
		/// <param name="stream">The <see cref="TarOutputStream"/> to use for output.</param>
		protected TarArchive(TarOutputStream stream)
		{
			if (stream == null) {
				throw new ArgumentNullException("stream");
			}
			tarOut = stream;
		}
		#endregion

		#region Static factory methods
		/// <summary>
		/// The InputStream based constructors create a TarArchive for the
		/// purposes of extracting or listing a tar archive. Thus, use
		/// these constructors when you wish to extract files from or list
		/// the contents of an existing tar archive.
		/// </summary>
		/// <param name="inputStream">The stream to retrieve archive data from.</param>
		/// <returns>Returns a new <see cref="TarArchive"/> suitable for reading from.</returns>
		public static TarArchive CreateInputTarArchive(Stream inputStream)
		{
			if (inputStream == null) {
				throw new ArgumentNullException("inputStream");
			}
			return CreateInputTarArchive(inputStream, TarBuffer.DefaultBlockFactor);
		}

		/// <summary>
		/// Create TarArchive for reading setting block factor
		/// </summary>
		/// <param name="inputStream">Stream for tar archive contents</param>
		/// <param name="blockFactor">The blocking factor to apply</param>
		/// <returns>Returns a <see cref="TarArchive"/> suitable for reading.</returns>
		public static TarArchive CreateInputTarArchive(Stream inputStream, int blockFactor)
		{
			if (inputStream == null) {
				throw new ArgumentNullException("inputStream");
			}
			return new TarArchive(new TarInputStream(inputStream, blockFactor));
		}

		/// <summary>
		/// Create a TarArchive for writing to, using the default blocking factor
		/// </summary>
		/// <param name="outputStream">The <see cref="Stream"/> to write to</param>
		/// <returns>Returns a <see cref="TarArchive"/> suitable for writing.</returns>
		public static TarArchive CreateOutputTarArchive(Stream outputStream)
		{
			if (outputStream == null) {
				throw new ArgumentNullException("outputStream");
			}
			return CreateOutputTarArchive(outputStream, TarBuffer.DefaultBlockFactor);
		}

		/// <summary>
		/// Create a TarArchive for writing to
		/// </summary>
		/// <param name="outputStream">The stream to write to</param>
		/// <param name="blockFactor">The blocking factor to use for buffering.</param>
		/// <returns>Returns a <see cref="TarArchive"/> suitable for writing.</returns>
		public static TarArchive CreateOutputTarArchive(Stream outputStream, int blockFactor)
		{
			if (outputStream == null) {
				throw new ArgumentNullException("outputStream");
			}
			return new TarArchive(new TarOutputStream(outputStream, blockFactor));
		}
		#endregion

		/// <summary>
		/// Set the flag that determines whether existing files are
		/// kept, or overwritten during extraction.
		/// </summary>
		/// <param name="keepOldFiles">
		/// If true, do not overwrite existing files.
		/// </param>
		public void SetKeepOldFiles(bool keepOldFiles)
		{
			if (isDisposed) {
				throw new ObjectDisposedException("TarArchive");
			}
			this.keepOldFiles = keepOldFiles;
		}

		/// <summary>
		/// Get/set the ascii file translation flag. If ascii file translation
		/// is true, then the file is checked to see if it a binary file or not.
		/// If the flag is true and the test indicates it is ascii text
		/// file, it will be translated. The translation converts the local
		/// operating system's concept of line ends into the UNIX line end,
		/// '\n', which is the defacto standard for a TAR archive. This makes
		/// text files compatible with UNIX.
		/// </summary>
		public bool AsciiTranslate
		{
			get {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				return asciiTranslate;
			}
			set {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				asciiTranslate = value;
			}
		}

		/// <summary>
		/// Set the ascii file translation flag.
		/// </summary>
		/// <param name="asciiTranslate">
		/// If true, translate ascii text files.
		/// </param>
		[Obsolete("Use the AsciiTranslate property")]
		public void SetAsciiTranslation(bool asciiTranslate)
		{
			if (isDisposed) {
				throw new ObjectDisposedException("TarArchive");
			}
			this.asciiTranslate = asciiTranslate;
		}

		/// <summary>
		/// PathPrefix is added to entry names as they are written if the value is not null.
		/// A slash character is appended after PathPrefix
		/// </summary>
		public string PathPrefix
		{
			get {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				return pathPrefix;
			}
			set {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				pathPrefix = value;
			}
		}

		/// <summary>
		/// RootPath is removed from entry names if it is found at the
		/// beginning of the name.
		/// </summary>
		public string RootPath
		{
			get {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				return rootPath;
			}
			set {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				rootPath = value;
			}
		}

		/// <summary>
		/// Set user and group information that will be used to fill in the
		/// tar archive's entry headers. This information based on that available
		/// for the linux operating system, which is not always available on other
		/// operating systems. TarArchive allows the programmer to specify values
		/// to be used in their place.
		/// <see cref="ApplyUserInfoOverrides"/> is set to true by this call.
		/// </summary>
		/// <param name="userId">
		/// The user id to use in the headers.
		/// </param>
		/// <param name="userName">
		/// The user name to use in the headers.
		/// </param>
		/// <param name="groupId">
		/// The group id to use in the headers.
		/// </param>
		/// <param name="groupName">
		/// The group name to use in the headers.
		/// </param>
		public void SetUserInfo(int userId, string userName, int groupId, string groupName)
		{
			if (isDisposed) {
				throw new ObjectDisposedException("TarArchive");
			}
			this.userId = userId;
			this.userName = userName;
			this.groupId = groupId;
			this.groupName = groupName;
			applyUserInfoOverrides = true;
		}

		/// <summary>
		/// Get or set a value indicating if overrides defined by <see cref="SetUserInfo">SetUserInfo</see> should be applied.
		/// </summary>
		/// <remarks>If overrides are not applied then the values as set in each header will be used.</remarks>
		public bool ApplyUserInfoOverrides
		{
			get {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				return applyUserInfoOverrides;
			}
			set {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				applyUserInfoOverrides = value;
			}
		}

		/// <summary>
		/// Get the archive user id.
		/// See <see cref="ApplyUserInfoOverrides">ApplyUserInfoOverrides</see> for detail
		/// on how to allow setting values on a per entry basis.
		/// </summary>
		/// <returns>
		/// The current user id.
		/// </returns>
		public int UserId
		{
			get {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				return userId;
			}
		}

		/// <summary>
		/// Get the archive user name.
		/// See <see cref="ApplyUserInfoOverrides">ApplyUserInfoOverrides</see> for detail
		/// on how to allow setting values on a per entry basis.
		/// </summary>
		/// <returns>
		/// The current user name.
		/// </returns>
		public string UserName
		{
			get {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				return userName;
			}
		}

		/// <summary>
		/// Get the archive group id.
		/// See <see cref="ApplyUserInfoOverrides">ApplyUserInfoOverrides</see> for detail
		/// on how to allow setting values on a per entry basis.
		/// </summary>
		/// <returns>
		/// The current group id.
		/// </returns>
		public int GroupId
		{
			get {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				return this.groupId;
			}
		}

		/// <summary>
		/// Get the archive group name.
		/// See <see cref="ApplyUserInfoOverrides">ApplyUserInfoOverrides</see> for detail
		/// on how to allow setting values on a per entry basis.
		/// </summary>
		/// <returns>
		/// The current group name.
		/// </returns>
		public string GroupName
		{
			get {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				return this.groupName;
			}
		}

		/// <summary>
		/// Get the archive's record size. Tar archives are composed of
		/// a series of RECORDS each containing a number of BLOCKS.
		/// This allowed tar archives to match the IO characteristics of
		/// the physical device being used. Archives are expected
		/// to be properly "blocked".
		/// </summary>
		/// <returns>
		/// The record size this archive is using.
		/// </returns>
		public int RecordSize
		{
			get {
				if (isDisposed) {
					throw new ObjectDisposedException("TarArchive");
				}
				if (tarIn != null) {
					return tarIn.RecordSize;
				} else if (tarOut != null) {
					return tarOut.RecordSize;
				}
				return TarBuffer.DefaultRecordSize;
			}
		}

		/// <summary>
		/// Close the archive.
/// </summary> [Obsolete("Use Close instead")] public void CloseArchive() { Close(); } /// <summary> /// Perform the "list" command for the archive contents. /// /// NOTE That this method uses the <see cref="ProgressMessageEvent"> progress event</see> to actually list /// the contents. If the progress display event is not set, nothing will be listed! /// </summary> public void ListContents() { if ( isDisposed ) { throw new ObjectDisposedException("TarArchive"); } while (true) { TarEntry entry = this.tarIn.GetNextEntry(); if (entry == null) { break; } OnProgressMessageEvent(entry, null); } } /// <summary> /// Perform the "extract" command and extract the contents of the archive. /// </summary> /// <param name="destinationDirectory"> /// The destination directory into which to extract. /// </param> public void ExtractContents(string destinationDirectory) { if ( isDisposed ) { throw new ObjectDisposedException("TarArchive"); } while (true) { TarEntry entry = this.tarIn.GetNextEntry(); if (entry == null) { break; } this.ExtractEntry(destinationDirectory, entry); } } /// <summary> /// Extract an entry from the archive. This method assumes that the /// tarIn stream has been properly set with a call to GetNextEntry(). /// </summary> /// <param name="destDir"> /// The destination directory into which to extract. /// </param> /// <param name="entry"> /// The TarEntry returned by tarIn.GetNextEntry(). /// </param> void ExtractEntry(string destDir, TarEntry entry) { OnProgressMessageEvent(entry, null); string name = entry.Name; if (Path.IsPathRooted(name) == true) { // NOTE: // for UNC names... 
\\machine\share\zoom\beet.txt gives \zoom\beet.txt name = name.Substring(Path.GetPathRoot(name).Length); } name = name.Replace('/', Path.DirectorySeparatorChar); string destFile = Path.Combine(destDir, name); if (entry.IsDirectory) { EnsureDirectoryExists(destFile); } else { string parentDirectory = Path.GetDirectoryName(destFile); EnsureDirectoryExists(parentDirectory); bool process = true; FileInfo fileInfo = new FileInfo(destFile); if (fileInfo.Exists) { if (this.keepOldFiles) { OnProgressMessageEvent(entry, "Destination file already exists"); process = false; } else if ((fileInfo.Attributes & FileAttributes.ReadOnly) != 0) { OnProgressMessageEvent(entry, "Destination file already exists, and is read-only"); process = false; } } if (process) { bool asciiTrans = false; Stream outputStream = File.Create(destFile); if (this.asciiTranslate) { asciiTrans = !IsBinary(destFile); } StreamWriter outw = null; if (asciiTrans) { outw = new StreamWriter(outputStream); } byte[] rdbuf = new byte[32 * 1024]; while (true) { int numRead = this.tarIn.Read(rdbuf, 0, rdbuf.Length); if (numRead <= 0) { break; } if (asciiTrans) { for (int off = 0, b = 0; b < numRead; ++b) { if (rdbuf[b] == 10) { string s = Encoding.ASCII.GetString(rdbuf, off, (b - off)); outw.WriteLine(s); off = b + 1; } } } else { outputStream.Write(rdbuf, 0, numRead); } } if (asciiTrans) { outw.Close(); } else { outputStream.Close(); } } } } /// <summary> /// Write an entry to the archive. This method will call the putNextEntry /// and then write the contents of the entry, and finally call closeEntry() /// for entries that are files. For directories, it will call putNextEntry(), /// and then, if the recurse flag is true, process each entry that is a /// child of the directory. /// </summary> /// <param name="sourceEntry"> /// The TarEntry representing the entry to write to the archive. /// </param> /// <param name="recurse"> /// If true, process the children of directory entries. 
/// </param> public void WriteEntry(TarEntry sourceEntry, bool recurse) { if ( sourceEntry == null ) { throw new ArgumentNullException("sourceEntry"); } if ( isDisposed ) { throw new ObjectDisposedException("TarArchive"); } try { if ( recurse ) { TarHeader.SetValueDefaults(sourceEntry.UserId, sourceEntry.UserName, sourceEntry.GroupId, sourceEntry.GroupName); } InternalWriteEntry(sourceEntry, recurse); } finally { if ( recurse ) { TarHeader.RestoreSetValues(); } } } /// <summary> /// Write an entry to the archive. This method will call the putNextEntry /// and then write the contents of the entry, and finally call closeEntry() /// for entries that are files. For directories, it will call putNextEntry(), /// and then, if the recurse flag is true, process each entry that is a /// child of the directory. /// </summary> /// <param name="sourceEntry"> /// The TarEntry representing the entry to write to the archive. /// </param> /// <param name="recurse"> /// If true, process the children of directory entries. 
/// </param> void InternalWriteEntry(TarEntry sourceEntry, bool recurse) { bool asciiTrans = false; string tempFileName = null; string entryFilename = sourceEntry.File; TarEntry entry = (TarEntry)sourceEntry.Clone(); if ( applyUserInfoOverrides ) { entry.GroupId = groupId; entry.GroupName = groupName; entry.UserId = userId; entry.UserName = userName; } OnProgressMessageEvent(entry, null); if (this.asciiTranslate && !entry.IsDirectory) { asciiTrans = !IsBinary(entryFilename); if (asciiTrans) { tempFileName = Path.GetTempFileName(); using (StreamReader inStream = File.OpenText(entryFilename)) { using (Stream outStream = File.Create(tempFileName)) { while (true) { string line = inStream.ReadLine(); if (line == null) { break; } byte[] data = Encoding.ASCII.GetBytes(line); outStream.Write(data, 0, data.Length); outStream.WriteByte((byte)'\n'); } outStream.Flush(); } } entry.Size = new FileInfo(tempFileName).Length; entryFilename = tempFileName; } } string newName = null; if (this.rootPath != null) { if (entry.Name.StartsWith(this.rootPath)) { newName = entry.Name.Substring(this.rootPath.Length + 1 ); } } if (this.pathPrefix != null) { newName = (newName == null) ? 
this.pathPrefix + "/" + entry.Name : this.pathPrefix + "/" + newName; } if (newName != null) { entry.Name = newName; } tarOut.PutNextEntry(entry); if (entry.IsDirectory) { if (recurse) { TarEntry[] list = entry.GetDirectoryEntries(); for (int i = 0; i < list.Length; ++i) { InternalWriteEntry(list[i], recurse); } } } else { using (Stream inputStream = File.OpenRead(entryFilename)) { int numWritten = 0; byte[] localBuffer = new byte[32 * 1024]; while (true) { int numRead = inputStream.Read(localBuffer, 0, localBuffer.Length); if (numRead <=0) { break; } tarOut.Write(localBuffer, 0, numRead); numWritten += numRead; } } if ( (tempFileName != null) && (tempFileName.Length > 0) ) { File.Delete(tempFileName); } tarOut.CloseEntry(); } } /// <summary> /// Releases the unmanaged resources used by the FileStream and optionally releases the managed resources. /// </summary> /// <param name="disposing">true to release both managed and unmanaged resources; /// false to release only unmanaged resources.</param> protected virtual void Dispose(bool disposing) { if ( !isDisposed ) { isDisposed = true; if ( disposing ) { if ( tarOut != null ) { tarOut.Flush(); tarOut.Close(); } if ( tarIn != null ) { tarIn.Close(); } } } } /// <summary> /// Closes the archive and releases any associated resources. /// </summary> public virtual void Close() { Dispose(true); GC.SuppressFinalize(this); } /// <summary> /// Ensures that resources are freed and other cleanup operations are performed /// when the garbage collector reclaims the <see cref="TarArchive"/>. 
/// </summary> ~TarArchive() { Dispose(false); } #region IDisposable Members void IDisposable.Dispose() { Close(); } #endregion static void EnsureDirectoryExists(string directoryName) { if (!Directory.Exists(directoryName)) { try { Directory.CreateDirectory(directoryName); } catch (Exception e) { throw new TarException("Exception creating directory '" + directoryName + "', " + e.Message, e); } } } // TODO: TarArchive - Is there a better way to test for a text file? // It no longer reads entire files into memory but is still a weak test! // This assumes that byte values 0-7, 14-31 or 255 are binary // and that all non text files contain one of these values static bool IsBinary(string filename) { using (FileStream fs = File.OpenRead(filename)) { int sampleSize = System.Math.Min(4096, (int)fs.Length); byte[] content = new byte[sampleSize]; int bytesRead = fs.Read(content, 0, sampleSize); for (int i = 0; i < bytesRead; ++i) { byte b = content[i]; if ( (b < 8) || ((b > 13) && (b < 32)) || (b == 255) ) { return true; } } } return false; } #region Instance Fields bool keepOldFiles; bool asciiTranslate; int userId; string userName = string.Empty; int groupId; string groupName = string.Empty; string rootPath; string pathPrefix; bool applyUserInfoOverrides; TarInputStream tarIn; TarOutputStream tarOut; bool isDisposed; #endregion } } /* The original Java file had this header: ** Authored by Timothy Gerard Endres ** <mailto:time@gjt.org> <http://www.trustice.com> ** ** This work has been placed into the public domain. ** You may use this work in any way and for any purpose you wish. ** ** THIS SOFTWARE IS PROVIDED AS-IS WITHOUT WARRANTY OF ANY KIND, ** NOT EVEN THE IMPLIED WARRANTY OF MERCHANTABILITY. THE AUTHOR ** OF THIS SOFTWARE, ASSUMES _NO_ RESPONSIBILITY FOR ANY ** CONSEQUENCE RESULTING FROM THE USE, MODIFICATION, OR ** REDISTRIBUTION OF THIS SOFTWARE. ** */
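The byte test inside IsBinary is independent of the surrounding I/O code. As a language-neutral illustration, here is a minimal C sketch of the same heuristic (the function name is ours, not part of the library): a buffer is treated as binary if any sampled byte is 0-7, 14-31, or 255.

```c
#include <stddef.h>

/* Same heuristic as TarArchive.IsBinary: bytes 0-7, 14-31, or 255 mark
 * the sample as binary. Bytes 8-13 (backspace, tab, LF, VT, FF, CR) are
 * allowed because they occur in ordinary text files. */
static int is_binary_sample(const unsigned char *buf, size_t len)
{
    for (size_t i = 0; i < len; ++i) {
        unsigned char b = buf[i];
        if (b < 8 || (b > 13 && b < 32) || b == 255) {
            return 1;
        }
    }
    return 0;
}
```

As the TODO in the C# source notes, this is a weak test: a short sample of a binary file can happen to contain none of the flagged byte values.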
<?php

namespace Vardius\Bundle\ListBundle\Filter;

use Doctrine\Common\Collections\ArrayCollection;

/**
 * ListViewFilter
 *
 * @author Rafał Lorenz <vardius@gmail.com>
 */
class ListViewFilter
{
    /** @var mixed */
    protected $formType;

    /** @var callable|ArrayCollection */
    protected $filter;

    /**
     * @param mixed                    $formType
     * @param callable|ArrayCollection $filter
     */
    public function __construct($formType, $filter)
    {
        if (!is_callable($filter) && !$filter instanceof ArrayCollection) {
            throw new \InvalidArgumentException(
                'Expected argument of type "callable" or Collection of Vardius\Bundle\ListBundle\Filter\Filter, '
                . (is_object($filter) ? get_class($filter) : gettype($filter)) . ' given'
            );
        }

        $this->formType = $formType;
        $this->filter = $filter;
    }

    /**
     * @return mixed
     */
    public function getFormType()
    {
        return $this->formType;
    }

    /**
     * @param mixed $formType
     *
     * @return ListViewFilter
     */
    public function setFormType($formType): self
    {
        $this->formType = $formType;

        return $this;
    }

    /**
     * @return callable|ArrayCollection
     */
    public function getFilter()
    {
        return $this->filter;
    }

    /**
     * @param callable|ArrayCollection $filter
     *
     * @return ListViewFilter
     */
    public function setFilter($filter): self
    {
        $this->filter = $filter;

        return $this;
    }
}
I think this is a really good way of bringing back successful ad campaigns from the past. Most of today's adults will still remember those commercials for products they used to buy but may have drifted away from over the years. If today's parents become aware of the product again, their buying habits may be passed down to their children and to the younger generations.
https://www.khronos.org/registry/vulkan/specs/1.2-extensions/man/html/VkDescriptorType.html

## C Specification

The type of descriptors in a descriptor set is specified by VkWriteDescriptorSet::descriptorType, which must be one of the values:

// Provided by VK_VERSION_1_0
typedef enum VkDescriptorType {
    VK_DESCRIPTOR_TYPE_SAMPLER = 0,
    VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER = 1,
    VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE = 2,
    VK_DESCRIPTOR_TYPE_STORAGE_IMAGE = 3,
    VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER = 4,
    VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER = 5,
    VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER = 6,
    VK_DESCRIPTOR_TYPE_STORAGE_BUFFER = 7,
    VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC = 8,
    VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC = 9,
    VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT = 10,
    // Provided by VK_EXT_inline_uniform_block
    VK_DESCRIPTOR_TYPE_INLINE_UNIFORM_BLOCK_EXT = 1000138000,
    // Provided by VK_KHR_acceleration_structure
    VK_DESCRIPTOR_TYPE_ACCELERATION_STRUCTURE_KHR = 1000150000,
    // Provided by VK_NV_ray_tracing
    VK_DESCRIPTOR_TYPE_ACCELERATION_STRUCTURE_NV = 1000165000,
    // Provided by VK_VALVE_mutable_descriptor_type
    VK_DESCRIPTOR_TYPE_MUTABLE_VALVE = 1000351000,
} VkDescriptorType;

## Description

When a descriptor set is updated via elements of VkWriteDescriptorSet, members of pImageInfo, pBufferInfo and pTexelBufferView are only accessed by the implementation when they correspond to the descriptor type being defined - otherwise they are ignored. The members accessed are as follows for each descriptor type:

- For VK_DESCRIPTOR_TYPE_SAMPLER, only the sampler member of each element of VkWriteDescriptorSet::pImageInfo is accessed.
- For VK_DESCRIPTOR_TYPE_SAMPLED_IMAGE, VK_DESCRIPTOR_TYPE_STORAGE_IMAGE, or VK_DESCRIPTOR_TYPE_INPUT_ATTACHMENT, only the imageView and imageLayout members of each element of VkWriteDescriptorSet::pImageInfo are accessed.
- For VK_DESCRIPTOR_TYPE_COMBINED_IMAGE_SAMPLER, all members of each element of VkWriteDescriptorSet::pImageInfo are accessed.
- For VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER, VK_DESCRIPTOR_TYPE_STORAGE_BUFFER, VK_DESCRIPTOR_TYPE_UNIFORM_BUFFER_DYNAMIC, or VK_DESCRIPTOR_TYPE_STORAGE_BUFFER_DYNAMIC, all members of each element of VkWriteDescriptorSet::pBufferInfo are accessed.
- For VK_DESCRIPTOR_TYPE_UNIFORM_TEXEL_BUFFER or VK_DESCRIPTOR_TYPE_STORAGE_TEXEL_BUFFER, each element of VkWriteDescriptorSet::pTexelBufferView is accessed.

When updating descriptors with a descriptorType of VK_DESCRIPTOR_TYPE_INLINE_UNIFORM_BLOCK_EXT, none of the pImageInfo, pBufferInfo, or pTexelBufferView members are accessed; instead, the source data of the descriptor update operation is taken from the VkWriteDescriptorSetInlineUniformBlockEXT structure in the pNext chain of VkWriteDescriptorSet.

When updating descriptors with a descriptorType of VK_DESCRIPTOR_TYPE_ACCELERATION_STRUCTURE_KHR, none of the pImageInfo, pBufferInfo, or pTexelBufferView members are accessed; instead, the source data of the descriptor update operation is taken from the VkWriteDescriptorSetAccelerationStructureKHR structure in the pNext chain of VkWriteDescriptorSet.

When updating descriptors with a descriptorType of VK_DESCRIPTOR_TYPE_ACCELERATION_STRUCTURE_NV, none of the pImageInfo, pBufferInfo, or pTexelBufferView members are accessed; instead, the source data of the descriptor update operation is taken from the VkWriteDescriptorSetAccelerationStructureNV structure in the pNext chain of VkWriteDescriptorSet.
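The large extension enumerant values above are not arbitrary: the Vulkan registry reserves, for the extension with number N, a block of enum values starting at 1000000000 + 1000 × (N − 1). A small sketch can check this against the values quoted in the enum (the helper name is ours, and the extension numbers are taken from the registry's vk.xml, not from this page):

```c
/* Vulkan registry rule: extension number N owns enumerant values starting
 * at 1000000000 + 1000 * (N - 1); offset 0 is the first enumerant in the
 * block. Examples (extension numbers from vk.xml):
 *   VK_EXT_inline_uniform_block      (N = 139) -> 1000138000
 *   VK_KHR_acceleration_structure    (N = 151) -> 1000150000
 *   VK_NV_ray_tracing                (N = 166) -> 1000165000
 *   VK_VALVE_mutable_descriptor_type (N = 352) -> 1000351000
 */
static long vk_extension_enum_value(int extension_number, int offset)
{
    return 1000000000L + 1000L * (extension_number - 1) + offset;
}
```

This scheme keeps extension enumerants well away from the core values (0, 1, 2, ...) and from each other, so headers for different extensions can be combined without collisions.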