\documentclass[dissertation.tex]{subfiles}
\begin{document}
\chapter{Algorithms}\label{cha:algorithm}
In this chapter we analyze step-by-step the algorithms that implement
the different parts of the program. We do this with the help of the
test scene in \cref{fig:empty}.
\image{scrEmpty.png}{Initial scene.}{fig:empty}
The general idea is to use an open \bs curve of a certain degree
interpolating the chosen starting and ending points, whose
control polygon is a suitable modification of a polygonal chain
extracted from a graph
obtained with a \ac{VD} method. In \cref{sec:polChain} we
explain in detail how to
build such a polygonal chain. The chain, before being used as a control
polygon for the \bs, is refined and adjusted - as explained in
detail in \cref{sec:obsAvoid} and \cref{sec:degreeInc} - in order to
ensure that the associated \bs curve has no obstacle
collision. Furthermore, in \cref{sec:knotSel} we implement a method
for an optional adaptive
arrangement of the breakpoints of the \bs. Finally, \cref{sec:postPro}
is devoted to an optional post-processing of the path.
\section{Polygonal chain}\label{sec:polChain}\index{Polygonal chain}
In the first phase, the purpose is to extract from
the scene a suitable polygonal chain whose extremes coincide with the start point $\ve{s}$
and the end point $\ve{e}$. In particular, we are interested in chains
of short length. We calculate the shortest path
in a graph that is obtained by adapting to three
dimensions a well-known two-dimensional method
\cite{bhattacharya}\cite{ho-liu}\cite{seda-pich} that uses
\acp{VD} as a basis.
We choose a Voronoi method because it builds a structure roughly
equidistant from the obstacles, resulting in a low probability of
collisions between the curve and the obstacles.
\subsection{Base Graph}\label{sec:baseGraph}\index{Graph}\index{$G$}
First, we distribute points on the \acp{OTF} and on
an invisible bounding box, as in \cref{fig:sites}.
\imageDouble{scrSites-a.png}{scrSites-b.png}{Scene with Voronoi sites
(distributed only on the
obstacles surfaces on the left, and on obstacles and bounding box on
the right).}{fig:sites}
The sites are distributed using a recursive method: for each triangle
of the scene we add three points - one for each
vertex, if not already added before - and then we calculate the area
of the triangle. If the area is bigger than a threshold, we decompose
the triangle into four triangles, adding three more vertices at the
midpoints of the edges of the original triangle as in
\cref{fig:triangleDec}. We repeat the process recursively for
each new triangle.
\begin{myfig}{Decomposition of an \ac{OTF}.}{fig:triangleDec}
\begin{tikzpicture}[scale=2]
\coordinate (a1) at (1,0);
\coordinate (b1) at (2,1);
\coordinate (c1) at (3,0.3);
\path[obstacle] (a1) -- (b1) -- (c1) -- (a1);
\coordinate (dist) at (3.5,0);
\coordinate (a2) at ($ (a1) + (dist) $);
\coordinate (b2) at ($ (b1) + (dist) $);
\coordinate (c2) at ($ (c1) + (dist) $);
\path[obstacle] (a2) -- (b2) -- (c2) -- (a2);
\foreach \p in {a1,b1,c1,a2,b2,c2}
\filldraw[site] (\p) circle (2pt);
\coordinate (ab) at ($ (a2)!0.5!(b2) $);
\coordinate (bc) at ($ (b2)!0.5!(c2) $);
\coordinate (ac) at ($ (a2)!0.5!(c2) $);
\path[obstacleTract] (ab) -- (bc) -- (ac) -- (ab);
\foreach \p in {ab,bc,ac}
\filldraw[siteHigh] (\p) circle (2pt);
\end{tikzpicture}
\end{myfig}
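As an illustration, the recursive site-distribution step can be sketched as follows. This is a minimal sketch: the representation of points as coordinate tuples and the name of the area threshold are illustrative choices, not fixed by the text.

```python
import math

def triangle_area(a, b, c):
    # Area of a triangle in space via the cross-product magnitude.
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    cx = u[1] * v[2] - u[2] * v[1]
    cy = u[2] * v[0] - u[0] * v[2]
    cz = u[0] * v[1] - u[1] * v[0]
    return 0.5 * math.sqrt(cx * cx + cy * cy + cz * cz)

def midpoint(p, q):
    return tuple((p[i] + q[i]) / 2.0 for i in range(3))

def distribute_sites(a, b, c, max_area, sites):
    # Add the three vertices (the set avoids duplicates); if the area is
    # above the threshold, split at the edge midpoints and recurse on the
    # four sub-triangles.
    sites.update({a, b, c})
    if triangle_area(a, b, c) <= max_area:
        return
    ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    distribute_sites(a, ab, ca, max_area, sites)
    distribute_sites(ab, b, bc, max_area, sites)
    distribute_sites(ca, bc, c, max_area, sites)
    distribute_sites(ab, bc, ca, max_area, sites)
```

For instance, a triangle of area $8$ with threshold $2$ needs a single level of subdivision, producing its three vertices plus the three edge midpoints as sites.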
We construct the \ac{VD} using Fortune's algorithm
\cite{fortune} on
those points as input sites, and we build a graph
$$G=(V,E)$$
using the vertices
of the Voronoi cells as graph nodes in $V$, and the edges of the cells\footnote{Discarding potentially
unbounded edges.} as graph edges in $E$. Furthermore, we make $G$
denser by adding all the diagonals of every cell's face as edges; in
other words, we
connect every vertex of a face to every other vertex of that face.
Subsequently, we prune the graph, deleting every edge that
intersects an \ac{OTF} using the methods explained in
\cref{sec:intersections}. The edge-pruning process considers a margin
around the \acp{OTF} during the collision checks.
\image{scrGraph.png}{Scene with pruned graph.}{fig:graph}
The result, visible in \cref{fig:graph}, is a graph that embraces the
obstacles like a cobweb where the possible paths are roughly
equidistant from the obstacles.
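For concreteness, the segment/triangle test underlying the pruning step can be sketched as below: writing a segment point as $\ve{p}+t(\ve{q}-\ve{p})$ and a triangle point as $\ve{a}+u(\ve{b}-\ve{a})+v(\ve{c}-\ve{a})$ and equating them gives a $3\times 3$ linear system in $(t,u,v)$, solved here with Cramer's rule. This is only an illustrative sketch of the idea; the safety margin around the \acp{OTF} is omitted.

```python
def _det3(m):
    # Determinant of a 3x3 matrix given as nested lists.
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def segment_hits_triangle(p, q, a, b, c, eps=1e-12):
    # Solve p + t(q-p) = a + u(b-a) + v(c-a) for (t, u, v) with Cramer's
    # rule; a hit needs t in [0,1], u, v >= 0 and u + v <= 1.
    d = [q[i] - p[i] for i in range(3)]
    e1 = [b[i] - a[i] for i in range(3)]
    e2 = [c[i] - a[i] for i in range(3)]
    rhs = [a[i] - p[i] for i in range(3)]
    m = [[d[i], -e1[i], -e2[i]] for i in range(3)]
    det = _det3(m)
    if abs(det) < eps:          # segment parallel to the triangle plane
        return False
    def replace_col(col, vec):
        mm = [row[:] for row in m]
        for i in range(3):
            mm[i][col] = vec[i]
        return mm
    t = _det3(replace_col(0, rhs)) / det
    u = _det3(replace_col(1, rhs)) / det
    v = _det3(replace_col(2, rhs)) / det
    return 0.0 <= t <= 1.0 and u >= 0.0 and v >= 0.0 and u + v <= 1.0
```

An edge of $G$ is then deleted whenever this test reports a hit against any \ac{OTF}.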
\imageDouble{voronoi2d-a.eps}{voronoi2d-b.eps}{Voronoi graph in 2D
before (left) and after (right) pruning.}{fig:voronoi2d}
As visible
in \cref{fig:voronoi2d},
in the two-dimensional scenario the equivalent method consists of distributing
the sites (the blue dots) on the edges of the polygonal obstacles and
then pruning the
graph wherever an edge of the graph intersects an edge of an
obstacle. The result is a sparse graph composed of chains around the
obstacles (the green dots).
We extend the method to three dimensions by distributing points over
the whole \ac{OTF} surface. An alternative would be to distribute
points only along the edges of the obstacles.
We attach the desired start and end
points $\ve{s}$ and $\ve{e}$ to the obtained graph $G$, so that a path between the two points can be found with an
algorithm like Dijkstra's \cite{dijkstra}\cite{knuth}. To attach $\ve{s}$
we find the vertex $\ve{v_n}\in V_{vis}\subseteq V$ such
that $dist(\ve{s},\ve{v_n})\leq dist(\ve{s},\ve{v_i})$, $\forall
\ve{v_i}\in V_{vis}$, where
\begin{equation*}
V_{vis}=\{\ve{v}\in V\ :\ \overline{\ve{s} \ve{v}}\ \text{does not
intersect any obstacle}\},
\end{equation*}
then we add $\ve{s}$ to $V$ and the edge $(\ve{s},\ve{v_n})$ to $E$.
We proceed similarly for $\ve{e}$.
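A minimal sketch of this attachment step follows; the `visible` predicate stands in for the segment/obstacle intersection test, and the list-based graph representation is an illustrative assumption.

```python
import math

def attach_point(s, vertices, edges, visible):
    # Among the vertices visible from s, pick the nearest one and connect
    # s to it; `visible(p, q)` is an assumed segment/obstacle predicate.
    candidates = [v for v in vertices if visible(s, v)]
    if not candidates:
        return None                 # s cannot be connected to the graph
    nearest = min(candidates, key=lambda v: math.dist(s, v))
    vertices.append(s)
    edges.append((s, nearest))
    return nearest
```

The same function is called once for $\ve{s}$ and once for $\ve{e}$.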
Before using that path as a control polygon,
we need to take into account the degree of the \bs and the
position of the obstacles; the details are in \cref{sec:obsAvoid} and
\cref{sec:degreeInc}.
\subsubsection{Complexity considerations}\index{Complexity!$G$ creation}\index{$G$!complexity}
Fortune's algorithm runs in time $\bigO(|\sitesSet|\log |\sitesSet|)$ \cite{deberg},
where $\sitesSet$ is
the set
of input sites. If we impose a maximum area $A$ on the obstacles
\footnote{That is, inserting the obstacles in a progressive order, the area of the $i$-th
obstacle cannot grow as a function $f(i)$ of the number of
obstacles.} then $|\sitesSet|=\bigO(|\obsSet|)$, where $\obsSet$ is the set of
obstacles, because in the worst case $|\sitesSet|=C\cdot A\cdot
|\obsSet|$ for
some constant $C$ that depends on the chosen density of sites per area.
In conclusion, the time cost for the creation of the graph
is
\newcommand{\eqCostGraph}{\ensuremath{\bigO(|\obsSet|\log |\obsSet|)}}
\begin{equation}
\label{eq:costGraph}
\eqCostGraph
\end{equation}
and the number of the vertices in the graph
is
\begin{equation}
\label{eq:numV}
|V|=\bigO(|\sitesSet|)=\bigO(|\obsSet|)
\end{equation}
because the number of vertices in the resulting graph has the
same order of magnitude as the number of input sites.
If we formulate the hypothesis of a maximum degree $k$ in $G$ -
i.e. each vertex in $V$ is connected to at most $k$ other vertices -
then we have that
\begin{equation}
\label{eq:numE}
|E|=\bigO(k|V|)=\bigO(k|\obsSet|).
\end{equation}
In the worst case $k=|V|$ and $|E|=\bigO(|V|^2)$, but \acp{VD} in
the plane have the property that if $n$ input sites
lie on a circle, without any other site inside the
circle, then the center of the circle is a vertex shared
by $n$ cells (see \cref{sec:voronoi} for details). The same property holds
in the 3D case with respect to spheres.
We can make the assumption
that no more than three sites lie
on any circle - hence no vertex has more than three neighbours - and,
analogously, that no more than four sites lie on any sphere. This assumption is
plausible because we use floating point numbers for the coordinates of
the vertices of the obstacles, and it is unlikely that more than
four points lie exactly on a sphere.
Moreover, the average number of faces in a \ac{VD} cell and,
consequently, of vertices
in a face are
bounded by a constant \cite{okabe}. Thus, we can make the assumption
that we do not increase
the maximum
graph degree by more than a constant when we make the graph
denser by adding the faces' diagonals.
With the previous two assumptions $k$ is a constant,
and \cref{eq:numE} becomes
\begin{equation*}
|E|=\bigO(|V|)=\bigO(|\obsSet|).
\end{equation*}
To prune the graph of every edge that intersects an obstacle, we need
to solve a system of three equations in three unknowns for every edge
and every \ac{OTF}\footnote{See \cref{sec:intersectionST}.}, so
we have a cost of
\newcommand{\eqCostPruning}{\ensuremath{\bigO(k|\obsSet|^2)}}
\begin{equation}
\label{eq:costPruning}
\bigO(|E|\cdot|\obsSet|)=\eqCostPruning
\end{equation}
and, if we make the assumption of $k$ constant, it becomes
\begin{equation*}
\bigO(|\obsSet|^2).
\end{equation*}
\subsection{Graph transformation}\label{sec:trigraph}\index{Graph!triple's graph}\index{$G_t$}
Before calculating the shortest path on the chosen graph with Dijkstra's
algorithm
\cite{dijkstra}\cite{knuth}, we
transform it into a graph containing all the triples
of three adjacent vertices of the original graph. This is because we want
to filter the triples for collisions as described in
\cref{sec:inter1}. We call the transformed graph
$$G_t=(V_t,E_t)$$
where $V_t$ contains triples of vertices of $G$.
The original graph $G$ is undirected and weighted
with the vertex-to-vertex distances, whereas the transformed graph $G_t$ is
directed and weighted. If in $G$ the nodes $\ve{a}$
and $\ve{b}$ are
neighbouring, and $\ve{b}$ and $\ve{c}$ are neighbouring, then $G_t$
has the two nodes $(\ve{a},\ve{b},\ve{c})$ and $(\ve{c},\ve{b},\ve{a})$. In
$G_t$ a node $(\ve{a_1},\ve{b_1},\ve{c_1})$ is a predecessor of
$(\ve{a_2},\ve{b_2},\ve{c_2})$ if $\ve{b_1}=\ve{a_2}$ and $\ve{c_1}=\ve{b_2}$, and the weight of the arc
from $(\ve{a_1},\ve{b_1},\ve{c_1})$ to $(\ve{a_2},\ve{b_2},\ve{c_2})$ in $G_t$ is
equal to
the weight of the edge from $\ve{a_1}$ to $\ve{b_1}(=\ve{a_2})$ in $G$.
\begin{algo}{Create triples graph $G_t$}{alg:createTripleGraph}
\Function{createTriplesGraph}{$G$}
\State $V_t\Ass E_t\Ass \emptyset$
\ForAll{$(\ve{a},\ve{b})\in E$}\label{ln:tripleFor0}
\State $leftOut\Ass leftIn\Ass rightOut\Ass rightIn\Ass \emptyset$
\ForAll{$\ve{v}\in N_G(\ve{a})\setminus\{\ve{b}\}$}
\State $leftOut \Ass leftOut\cup \{(\ve{v},\ve{a},\ve{b})\}$
\State $leftIn \Ass leftIn\cup \{(\ve{b},\ve{a},\ve{v})\}$
\State $V_t \Ass V_t\cup \{(\ve{v},\ve{a},\ve{b}), (\ve{b},\ve{a},\ve{v})\}$
\EndFor
\ForAll{$\ve{v}\in N_G(\ve{b})\setminus\{\ve{a}\}$}
\State $rightOut \Ass rightOut\cup \{(\ve{v},\ve{b},\ve{a})\}$
\State $rightIn \Ass rightIn\cup \{(\ve{a},\ve{b},\ve{v})\}$
\State $V_t \Ass V_t\cup \{(\ve{v},\ve{b},\ve{a}), (\ve{a},\ve{b},\ve{v})\}$
\EndFor
\ForAll{$\ve{o}\in leftOut$}\label{ln:tripleFor1}
\ForAll{$\ve{i}\in rightIn$}
\State $E_t \Ass E_t\cup \{(\ve{o},\ve{i})\}$
\EndFor
\EndFor
\ForAll{$\ve{o}\in rightOut$}\label{ln:tripleFor2}
\ForAll{$\ve{i}\in leftIn$}
\State $E_t \Ass E_t\cup \{(\ve{o},\ve{i})\}$
\EndFor
\EndFor
\EndFor
\State $G_t\Ass(V_t,E_t)$
\State\Return $G_t$
\EndFunction
\end{algo}
The steps necessary to create $G_t$ are summarized in
\cref{alg:createTripleGraph}. The input $G$ is the base graph
with vertices $V$ and edges $E$; $N_G(\ve{a})$ is the set of
neighbours of the vertex $\ve{a}$ in $G$, and the output is $G_t$.
The transformation of the graph is needed only by the obstacle
avoidance algorithm of
\cref{sec:inter1}; in principle it is possible to bypass such
transformation for the algorithm described in \cref{sec:inter2}.
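\cref{alg:createTripleGraph} translates almost line by line into executable code. The sketch below assumes the base graph is given as an adjacency dictionary mapping each vertex to the set of its neighbours (an illustrative representation), and processes each undirected edge once.

```python
def create_triples_graph(adj):
    # adj: {vertex: set of neighbours} for the undirected base graph G.
    V_t, E_t = set(), set()
    seen = set()
    for a in adj:
        for b in adj[a]:
            if (b, a) in seen:      # handle each undirected edge once
                continue
            seen.add((a, b))
            left_out = {(v, a, b) for v in adj[a] - {b}}
            left_in = {(b, a, v) for v in adj[a] - {b}}
            right_out = {(v, b, a) for v in adj[b] - {a}}
            right_in = {(a, b, v) for v in adj[b] - {a}}
            V_t |= left_out | left_in | right_out | right_in
            # Connect outgoing triples to incoming ones: (v,a,b) -> (a,b,w).
            E_t |= {(o, i) for o in left_out for i in right_in}
            E_t |= {(o, i) for o in right_out for i in left_in}
    return V_t, E_t
```

On the path graph $\ve{a}-\ve{b}-\ve{c}-\ve{d}$ this yields four triples (two per internal vertex, one per direction) and two arcs.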
\subsubsection{Complexity considerations}\index{Complexity!$G_t$ creation}\index{$G_t$!complexity}
If we suppose a maximum degree $k$ for each vertex in the graph $G$ -
i.e. each vertex in $V$ has at most $k$ incident
edges - then the number of vertices in the transformed graph $G_t$ is
\begin{equation}
\label{eq:numTriples}
|V_t|\leq |V|\cdot k\cdot(k-1)=\bigO(k^2|V|)
\end{equation}
because for each vertex $\ve{v}$ in $G$ we need to
consider all the neighbours of $\ve{v}$ and the neighbours of those neighbours
(excluding $\ve{v}$).
Given how we define the neighbour rule for triples in $G_t$, we have
that each triple is a predecessor of at most $k-1$ other triples. For
instance, $(\ve{a},\ve{b},\ve{c})$ in $V_t$ is the predecessor of all the triples
$(\ve{b},\ve{c},*)$ where $*$ can be any of the $k$ neighbours
of $\ve{c}$ in $V$ excluding $\ve{b}$. Thus, the number of edges in $G_t$ is
\begin{equation}
\label{eq:numEdgesTriples}
|E_t|\leq |V_t|\cdot (k-1)=\bigO(k|V_t|)=\bigO(k^3|V|).
\end{equation}
Furthermore, the time cost for the creation of $G_t$ is
\newcommand{\eqCostVt}{\ensuremath{\bigO(k^3|\obsSet|)}}
\begin{equation}
\label{eq:costVt}
\bigO(k^2|E|)=\eqCostVt
\end{equation}
because \cref{alg:createTripleGraph} scans all the edges on
\cref{ln:tripleFor0} to create the transformed
graph, and in each
iteration
the dominant cost is due to the two \emph{for} loops on \cref{ln:tripleFor1} and
\cref{ln:tripleFor2}.
\section{Obstacle avoidance}\label{sec:obsAvoid}
Before using the polygonal chain extracted as
explained in \cref{sec:polChain} as a control polygon for the
\bs, we need to discuss a
problem: every possible path in the graph $G$ is
free from collisions by construction - in fact, we prune the graph of
every edge that intersects an obstacle - but this does not guarantee
that the associated curve will not cross any obstacle. This concept is
exemplified in
\cref{fig:intersect}.
\begin{myfig}{\bs that intersects an obstacle in the plane.}{fig:intersect}
\begin{tikzpicture}
\path[obstacle] (1,0) -- (2,1) -- (3,0) -- (1,0);
\draw[controlPoly] (0,0) -- (2,2) -- (4,0);
\draw[spline] (0,0) to [bend left=40] (4,0);
\filldraw[controlVert] (0,0) circle (2pt);
\filldraw[controlVert] (2,2) circle (2pt);
\filldraw[controlVert] (4,0) circle (2pt);
\end{tikzpicture}
\end{myfig}
In this chapter we formulate the hypothesis of using
quadratic \bss\footnote{\bs curves of degree 2.}; in
\cref{sec:degreeInc} we explain how it is possible to use curves of
higher degree. With this assumption, we can exploit the \acp{CHP}
explained in \cref{sec:bsplineProp} and assert that the
resulting curve is contained inside the union of all the triangles of
three consecutive control vertices of the control
polygon. Using that property we can solve the collision
problem by keeping all the triangles associated to the control
polygon free
from collision with \acp{OTF}. Note that the \ac{CHP} of quadratic
\bss is also valid in space; hence the convex hull is still composed
of triangles, like the faces of the
obstacles. This simplifies all the collision checks because they
are all between triangles in space, and we can use the methods
described in \cref{sec:intersections}.
We design two different algorithms to approach the collision
problem. The first solution, described in \cref{sec:inter1}, implements
a modified version of Dijkstra's
algorithm that finds the shortest path from start to end in the graph
such that all the triangles formed by three consecutive points in the path
are free from collisions. The second solution, described in
\cref{sec:inter2}, uses the classical Dijkstra's algorithm to find
the shortest path from $\ve{s}$ to $\ve{e}$ in the graph $G$, checking later for
collisions in the triangles formed by three consecutive points in such
path. When a collision is found, we add vertices to the path to resolve it.
\subsection{First solution: Dijkstra's algorithm in $G_t$}\label{sec:inter1}\index{Dijkstra}\index{Dijkstra in $G_t$}
The first solution to the problem exploits the graph $G_t$ obtained as
explained in \cref{sec:trigraph}. Before applying Dijkstra's algorithm
to $G_t$, all the triples are filtered, checking whether the
triangle composed of the vertices of the triple intersects an
\ac{OTF}. If a triple intersects an obstacle, it is removed from the
graph
so that a path cannot pass through those vertices in that order.
Note that if a triple $(\ve{a},\ve{b},\ve{c})$ is removed from $V_t$ - and
consequently also the triple $(\ve{c},\ve{b},\ve{a})$ - this does not
necessarily exclude
the three vertices $\ve{a}$, $\ve{b}$, $\ve{c}$ from being part of the final
polygonal chain. For instance, in
\cref{fig:exampleTriples} we have a graph $G$ with vertices
$\ve{a},\ve{b},\ve{c},\ve{d},\ve{e},\ve{f}$ and an obstacle that
intersects triples on the transformed graph\footnote{In the plane, this graph cannot be
obtained using the procedure based on \acp{VD} explained in
\cref{sec:voronoi}, but a similar situation is plausible
considering Voronoi cells in space.} $G_t$. The triples
$(\ve{a},\ve{b},\ve{c})$ and $(\ve{c},\ve{b},\ve{a})$ are removed from $G_t$ because
the corresponding triangle intersects the obstacle, and the path
$\ve{d}\rightarrow \ve{a}\rightarrow \ve{b}\rightarrow \ve{c}\rightarrow \ve{e}$ is not
admissible. This does not preclude the nodes $\ve{a}$, $\ve{b}$ and $\ve{c}$ from being part
of the final admissible path $\ve{d}\rightarrow \ve{a}\rightarrow \ve{b}\rightarrow \ve{e}\rightarrow \ve{c}\rightarrow \ve{f}$.
\begin{myfig}{Example of triples.}{fig:exampleTriples}
\begin{tikzpicture}
\coordinate (D) at (-1,1);
\coordinate (A) at (0,0);
\coordinate (B) at (2,2);
\coordinate (C) at (4,0);
\coordinate (E) at (3,2);
\coordinate (F) at (5,1);
\path[obstacle] (1,-0.5) -- (2,1) -- (3,-0.5) -- (1,-0.5);
\draw[controlPoly] (D) -- (A) -- (B) -- (C) -- (F);
\draw[controlPoly] (B) -- (E) -- (C);
\filldraw[controlVert] (D) circle (2pt);
\filldraw[controlVert] (A) circle (2pt);
\filldraw[controlVert] (B) circle (2pt);
\filldraw[controlVert] (C) circle (2pt);
\filldraw[controlVert] (E) circle (2pt);
\filldraw[controlVert] (F) circle (2pt);
\node[above=0.5em] at (D) {$\ve{d}$};
\node[below=0.5em] at (A) {$\ve{a}$};
\node[above=0.5em] at (B) {$\ve{b}$};
\node[below=0.5em] at (C) {$\ve{c}$};
\node[above=0.5em] at (E) {$\ve{e}$};
\node[above=0.5em] at (F) {$\ve{f}$};
\end{tikzpicture}
\end{myfig}
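The filtering step just described can be sketched as follows; `triangle_hits_obstacle` is an assumed predicate standing in for the triangle/triangle tests of \cref{sec:intersections}, and the set-based graph representation is illustrative.

```python
def filter_triples(V_t, E_t, triangle_hits_obstacle):
    # Remove every triple whose triangle hits an obstacle, together with
    # its reversed twin (both describe the same triangle), then drop the
    # arcs touching removed triples.
    bad = {t for t in V_t if triangle_hits_obstacle(*t)}
    bad |= {(c, b, a) for (a, b, c) in bad}
    V_ok = V_t - bad
    E_ok = {(o, i) for (o, i) in E_t if o in V_ok and i in V_ok}
    return V_ok, E_ok
```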
On the cleaned transformed graph it is possible to find the shortest
path
$$
P_t=(\ve{a_0},\ve{b_0},\ve{c_0}), (\ve{a_1},\ve{b_1},\ve{c_1}),\dots,(\ve{a_i},\ve{b_i},\ve{c_i}),\dots,(\ve{a_n},\ve{b_n},\ve{c_n})
$$
using
an algorithm like Dijkstra's. Then the shortest
path $P$ in $G$ is constructed by taking the central vertex $\ve{b_i}$
of every
triple $(\ve{a_i},\ve{b_i},\ve{c_i})$ of $P_t$, plus the extremes $\ve{a_0}$ and $\ve{c_n}$
of the first and last triple, obtaining
$$
P=\ve{a_0},\ve{b_0},\ve{b_1},\dots,\ve{b_i},\dots,\ve{b_{n-1}},\ve{b_n},\ve{c_n}.
$$
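Recovering $P$ from $P_t$ is then immediate; a minimal sketch:

```python
def path_from_triples(P_t):
    # Middle vertex of every triple, plus the outer extremes of the
    # first and last triples.
    if not P_t:
        return []
    return [P_t[0][0]] + [t[1] for t in P_t] + [P_t[-1][2]]
```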
\imageDouble{scrSolution1a.png}{scrSolution1b.png}{Effects of
application of solution one.}{fig:sol11}[
\node[imageLabel] at (0.7,0.62) {$\ve{a_1}$};
\node[imageLabel] (B) at (0.35,0.4) {$\ve{b_1}$};
\node[imageLabel] (C) at (0.2,0.4) {$\ve{c_1}$};
\node[imageLabel] (O) at (0.8,0.2) {$Obs$};
\path[imageArrow] (B) -- (0.36,0.555);
\path[imageArrow] (C) -- (0.3,0.585);
\path[imageArrow] (O) -- (0.6,0.3);
][
\node[imageLabel] (A) at (0.7,0.7) {$\ve{a_2}$};
\node[imageLabel] (B) at (0.5,0.4) {$\ve{b_2}$};
\node[imageLabel] (C) at (0.35,0.4) {$\ve{c_2}$};
\node[imageLabel] (D) at (0.2,0.4) {$\ve{d_2}$};
\node[imageLabel] (O) at (0.8,0.2) {$Obs$};
\path[imageArrow] (A) -- (0.58,0.61);
\path[imageArrow] (B) -- (0.44,0.52);
\path[imageArrow] (C) -- (0.36,0.555);
\path[imageArrow] (D) -- (0.3,0.585);
\path[imageArrow] (O) -- (0.6,0.3);
]
\imageDouble{scrSolution1a2.png}{scrSolution1b2.png}{Effects of
application of solution one, other viewpoint.}{fig:sol12}[
\node[imageLabel] at (0.2,0.1) {$\ve{a_1}$};
\node[imageLabel] at (0.45,0.6) {$\ve{b_1}$};
\node[imageLabel] at (0.65,0.85) {$\ve{c_1}$};
\node[imageLabel] at (0.8,0.2) {$Obs$};
][
\node[imageLabel] at (0.15,0.2) {$\ve{a_2}$};
\node[imageLabel] at (0.3,0.4) {$\ve{b_2}$};
\node[imageLabel] at (0.45,0.6) {$\ve{c_2}$};
\node[imageLabel] at (0.65,0.85) {$\ve{d_2}$};
\node[imageLabel] at (0.8,0.2) {$Obs$};
]
In \cref{fig:sol11} and \cref{fig:sol12} the effect of the
application of the first solution is shown. The triangle formed by the
vertices $\ve{a_1}$, $\ve{b_1}$, $\ve{c_1}$ in the left picture of \cref{fig:sol11}
is colliding with the obstacle $Obs$ in the back. The right picture
shows
the path $\ve{a_2},\ve{b_2},\ve{c_2},\ve{d_2}$ obtained by applying
the solution - in this case no
triangle in the
path collides with obstacles. In \cref{fig:sol12} the same
situation of \cref{fig:sol11} is shown from another point of view.
\subsubsection{Complexity considerations}\index{Complexity!Dijkstra's algorithm in $G_t$}\index{Dijkstra in $G_t$!complexity}
For each triple and each \ac{OTF} we need to solve three $3\times 3$
linear systems for the
collision check\footnote{See
\cref{sec:intersectionsTriangleTriangle}.}, hence, in total
the cost is
\begin{equation*}
\bigO(|V_t|\cdot |\obsSet|)
\end{equation*}
and by \cref{eq:numV} and \cref{eq:numTriples} this is equal to
\newcommand{\eqCostColl}{\ensuremath{\bigO(|\obsSet|^2 k^2)}}
\begin{equation}
\label{eq:costColl}
\eqCostColl .
\end{equation}
The cost of applying Dijkstra's algorithm\footnote{In the worst case
where no triples are removed in the cleaning phase.} in $G_t$ is \cite{bondy}\cite{lavalle}
\newcommand{\eqCostDijkstraTriples}{\ensuremath{\bigO(k^3|\obsSet|+k^2|\obsSet|\log(k^2|\obsSet|))}}
\begin{equation}
\label{eq:costDijkstraTriples}
\begin{split}
\bigO(|E_t|+|V_t|\log |V_t|) &= \bigO(k^3|V|+k^2|V|\log(k^2|V|))\\
&= \eqCostDijkstraTriples.
\end{split}
\end{equation}
Such cost has two special cases:
\begin{itemize}
\item if $G$ is a \emph{clique} - i.e. each
node in $V$ is connected to every other node \cite{bondy} - then
$k=|V|-1$ and the cost is
\begin{equation*}
\bigO(|V|^4);
\end{equation*}
\item if $k$ is constant - i.e. it does not grow with $|V|$ - the
cost is
\begin{equation*}
\bigO(|V|\log|V|).
\end{equation*}
\end{itemize}
The latter case is the more plausible one if we assume the hypothesis that
no more than four input sites in space can lie on the
same sphere; in that case
no Voronoi cell can have a vertex with more than four edges
connected to it (see \cref{sec:voronoi} for details).
If we sum all the costs we obtain:
\newcommand{\eqCostTotalOne}{\ensuremath{\bigO(k^2|\obsSet|^2+k^3|\obsSet|)}}
\begin{equation}\label{eq:costTotalOne}
\eqCostTotalOne
\end{equation}
where all the other terms are absorbed by those two. If $k$ is
constant, as argued before, then we have an overall cost of
\newcommand{\eqCostTotalOneK}{\ensuremath{\bigO(|\obsSet|^2)}}
\begin{equation}\label{eq:costTotalOneK}
\eqCostTotalOneK
\end{equation}
which originates from the collision checks.
We can improve
this result if we divide the algorithm into two parts:
\begin{enumerate}
\item first we
can construct the graph with cost $\bigO(|\obsSet|^2)$;
\item then we can
use the same graph in different situations\footnote{With specific starting
and ending points.} with cost $\bigO(|\obsSet|\log |\obsSet|)$, only for the
routing.
\end{enumerate}
\begin{table}
\centering
\begin{tabular}{|l|c|r|}
\hline
Description&Cost&Reference\\
\hline
\hline
Creation of $G$&\eqCostGraph&\cref{eq:costGraph}\\
Pruning of $G$&\eqCostPruning&\cref{eq:costPruning}\\
Creation of $G_t$&\eqCostVt&\cref{eq:costVt}\\
Pruning of $G_t$&\eqCostColl&\cref{eq:costColl}\\
Routing in $G_t$& \eqCostDijkstraTriples&\cref{eq:costDijkstraTriples}\\
\hline
Total&\eqCostTotalOne&\cref{eq:costTotalOne}\\
Total ($k$ constant)&\eqCostTotalOneK&\cref{eq:costTotalOneK}\\
\hline
\end{tabular}
\caption{Summary of the costs for solution one.}
\label{tab:costsSol1}
\end{table}
In \cref{tab:costsSol1} we summarize all the terms that contribute to
the total cost, and the total cost itself.
\subsection{Second solution: Dijkstra's algorithm in $G$}\label{sec:inter2}\index{Dijkstra}\index{Dijkstra in $G$}
The first solution is interesting from an algorithmic point of view,
but it is not very practical: it discards all the triples that
intersect an obstacle, and thus possible paths in $G$ are lost.
We develop a
solution that uses another approach: obtain the shortest path
from the Voronoi graph $G$ directly using Dijkstra's algorithm,
without removing any triple. On this path
- that we call $P$ - we
check every triple of consecutive vertices, and if it collides with an
\ac{OTF} then we take countermeasures (see
\cref{sec:intersectionsTriangleTriangle} for the procedure implemented
to identify collisions between two triangles). For instance, if the path
is composed of the vertices
\begin{equation*}
P=(\ve{v_0},\ve{v_1},\dots,\ve{v_n})
\end{equation*}
then we check every one of the triangles
\begin{eqnarray*}
T_0 &=& \triangle \ve{v_0}\ve{v_1}\ve{v_2}\\
T_1 &=& \triangle \ve{v_1}\ve{v_2}\ve{v_3}\\
&\cdots&\\
T_i &=& \triangle \ve{v_i}\ve{v_{i+1}}\ve{v_{i+2}}\\
&\cdots&\\
T_{n-3} &=& \triangle \ve{v_{n-3}}\ve{v_{n-2}}\ve{v_{n-1}}\\
T_{n-2} &=& \triangle \ve{v_{n-2}}\ve{v_{n-1}}\ve{v_n}
\end{eqnarray*}
for intersections with \acp{OTF}. $\triangle \ve{v_i}\ve{v_j}\ve{v_k}$
denotes the triangle having points $\ve{v_i}$, $\ve{v_j}$ and
$\ve{v_k}$ as vertices.
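In code, the enumeration of the triangles to check is a one-liner; a minimal sketch:

```python
def path_triangles(P):
    # Triples of consecutive path vertices: T_0 .. T_{n-2}.
    return [(P[i], P[i + 1], P[i + 2]) for i in range(len(P) - 2)]
```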
Recall that $G$ is pruned of all the edges that intersect any
obstacle; thus none of the edges of the triangles $T_i$ can intersect
an \ac{OTF}. The only possibility is that the edges\footnote{If we ignore
special cases, at most two
edges for each \ac{OTF}.} of an \ac{OTF} intersect a
triangle $T_i$. Hence for each $T_i$ we have a (possibly empty) set of points
of intersection between it and the edges of each \ac{OTF} - we call
that set $O$.
\begin{myfig}{$T_i$($=\triangle \ve{v_i}\ve{v_{i+1}}\ve{v_{i+2}}$) and the points
$\ve{o_1},\ve{o_2},\ve{o_3}$ of intersection between it and the edges of some \acp{OTF}.}{fig:triangleIntersection}
\begin{tikzpicture}
\coordinate (P) at (-1,0);
\coordinate (A) at (0,0);
\coordinate (B) at (4,5);
\coordinate (C) at (8,1);
\coordinate (D) at (9,1);
\coordinate (O1) at (barycentric cs:A=0.2,B=0.6,C=0.2);
\coordinate (O2) at (barycentric cs:A=0.5,B=0.3,C=0.2);
\coordinate (O3) at (barycentric cs:A=0.3,B=0.2,C=0.5);
\coordinate (W1) at (barycentric cs:A=0.4,B=0.6,C=0.);
\coordinate (W2) at (barycentric cs:A=0.,B=0.6,C=0.4);
\draw[controlPoly] (A) -- (B) -- (C);
\draw[controlPolyTract] (P) -- (A);
\draw[controlPolyTract] (C) -- (D);
\draw[controlPolyTractHigh] (W1) -- (W2);
\draw[controlPolyTractHigh] (A) -- (C);
\filldraw[controlVert] (A) circle (2pt);
\filldraw[controlVert] (B) circle (2pt);
\filldraw[controlVert] (C) circle (2pt);
\filldraw[obstaclePoint] (O1) circle (2pt);
\filldraw[obstaclePoint] (O2) circle (2pt);
\filldraw[obstaclePoint] (O3) circle (2pt);
\filldraw[controlVertHigh] (W1) circle (2pt);
\filldraw[controlVertHigh] (W2) circle (2pt);
\node[below=0.5em] at (A) {$\ve{v_i}$};
\node[above=0.5em] at (B) {$\ve{v_{i+1}}$};
\node[below=0.5em] at (C) {$\ve{v_{i+2}}$};
\node[below right=0.2em] (O1n) at (O1) {$\ve{o_1}$};
\node[below=0.2em] at (O1n) {$\scriptstyle(\equiv \ve{o_{near}})$};
\node[left=0.2em] at (O2) {$\ve{o_2}$};
\node[below=0.2em] at (O3) {$\ve{o_3}$};
\node[left=0.5em] at (W1) {$\ve{w_1}$};
\node[right=0.5em] at (W2) {$\ve{w_2}$};
\end{tikzpicture}
\end{myfig}
In \cref{fig:triangleIntersection} we have an example of the triangle
\begin{equation*}
T_i = \triangle \ve{v_i}\ve{v_{i+1}}\ve{v_{i+2}}
\end{equation*}
that is
intersected by obstacles in the points
\begin{equation*}
O = \{\ve{o_1},\ve{o_2},\ve{o_3}\}.
\end{equation*}
Each of the points in $O$ is expressed in barycentric
coordinates with respect to the vertices $\ve{v_i}$, $\ve{v_{i+1}}$ and $\ve{v_{i+2}}$ of the
triangle:
\begin{eqnarray*}
\ve{o_1}&=&\alpha_1 \ve{v_i}+\beta_1 \ve{v_{i+1}}+\gamma_1 \ve{v_{i+2}}\\
\ve{o_2}&=&\alpha_2 \ve{v_i}+\beta_2 \ve{v_{i+1}}+\gamma_2 \ve{v_{i+2}}\\
\ve{o_3}&=&\alpha_3 \ve{v_i}+\beta_3 \ve{v_{i+1}}+\gamma_3 \ve{v_{i+2}}
\end{eqnarray*}
where $\alpha_j+\beta_j+\gamma_j=1$ for $j=1,2,3$.
We want to avoid collisions by adding vertices to the control
polygon, such that the resulting consecutive triangles are free from obstacles. We
achieve this by adding two new control vertices:
\begin{itemize}
\item $\ve{w_1}$ between $\ve{v_i}$ and $\ve{v_{i+1}}$;
\item $\ve{w_2}$ between $\ve{v_{i+1}}$ and $\ve{v_{i+2}}$.
\end{itemize}
We add those
points in a way that makes the segment $\overline{\ve{w_1}\ve{w_2}}$
parallel to the segment $\overline{\ve{v_i}\ve{v_{i+2}}}$ and
$\overline{\ve{w_1}\ve{w_2}}$ pass just above the obstacle point
$\ve{o_{near}}$ that is the nearest
to $\ve{v_{i+1}}$ ($\ve{o_1}$ in
\cref{fig:triangleIntersection}). The degenerate triangles $\triangle \ve{v_i}\ve{w_1}\ve{v_{i+1}}$ and $\triangle
\ve{v_{i+1}}\ve{w_2}\ve{v_{i+2}}$, and the non-degenerate triangle
$\triangle \ve{w_1}\ve{v_{i+1}}\ve{w_2}$, replace the original triangle
$T_i$. They are built in a way that ensures they do not collide with obstacles.
When we check for collisions between a segment and a triangle, we
solve a system of three equations in
three unknowns and we extract the barycentric
coordinates of
the point of collision from the solutions. When we have all the
coordinates of the points
in $O$, we can obtain $\ve{o_{near}}$ by picking the one with the biggest $\beta$
and then, using the corresponding $\beta_{near}$, we can obtain
\begin{eqnarray*}
\ve{w_1}&=&\beta_{near} \ve{v_{i+1}}+(1-\beta_{near})\ve{v_i}\\
\ve{w_2}&=&\beta_{near} \ve{v_{i+1}}+(1-\beta_{near})\ve{v_{i+2}}.
\end{eqnarray*}
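A sketch of this construction, assuming each intersection point is given as its barycentric triple $(\alpha,\beta,\gamma)$ (the tuple-based representation is illustrative):

```python
def insert_vertices(v_i, v_i1, v_i2, O_bary):
    # Pick the intersection point nearest to v_{i+1}, i.e. with the
    # largest beta, then place w1 on side v_i--v_{i+1} and w2 on side
    # v_{i+2}--v_{i+1} at that barycentric level; the segment w1-w2 is
    # then parallel to v_i--v_{i+2}.
    beta_near = max(b for (_, b, _) in O_bary)
    lerp = lambda p, q, t: tuple((1 - t) * p[k] + t * q[k] for k in range(3))
    w1 = lerp(v_i, v_i1, beta_near)    # beta*v_{i+1} + (1-beta)*v_i
    w2 = lerp(v_i2, v_i1, beta_near)   # beta*v_{i+1} + (1-beta)*v_{i+2}
    return w1, w2
```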
\imageDouble{scrSolution2a.png}{scrSolution2b.png}{Effects of application of solution two.}{fig:sol21}[
\node[imageLabel] at (0.7,0.62) {$\ve{v_i}$};
\node[imageLabel] (B) at (0.45,0.4) {$\ve{v_{i+1}}$};
\node[imageLabel] (C) at (0.2,0.4) {$\ve{v_{i+2}}$};
\node[imageLabel] (O) at (0.8,0.2) {$Obs$};
\path[imageArrow] (B) -- (0.36,0.555);
\path[imageArrow] (C) -- (0.3,0.585);
\path[imageArrow] (O) -- (0.6,0.3);
][
\node[imageLabel] (W1) at (0.75,0.7) {$\ve{w_1}$};
\node[imageLabel] (W2) at (0.2,0.4) {$\ve{w_2}$};
\node[imageLabel] (O) at (0.8,0.2) {$Obs$};
\path[imageArrow] (W1) -- (0.61,0.6);
\path[imageArrow] (W2) -- (0.31,0.58);
\path[imageArrow] (O) -- (0.6,0.3);
]
\imageDouble{scrSolution2a2.png}{scrSolution2b2.png}{Effects of
application of solution two, other viewpoint.}{fig:sol22}[
\node[imageLabel] at (0.2,0.1) {$\ve{v_i}$};
\node[imageLabel] at (0.43,0.6) {$\ve{v_{i+1}}$};
\node[imageLabel] at (0.62,0.85) {$\ve{v_{i+2}}$};
\node[imageLabel] at (0.8,0.2) {$Obs$};
][
\node[imageLabel] at (0.2,0.1) {$\ve{v_i}$};
\node[imageLabel] at (0.43,0.6) {$\ve{v_{i+1}}$};
\node[imageLabel] at (0.62,0.85) {$\ve{v_{i+2}}$};
\node[imageLabel] at (0.8,0.2) {$Obs$};
\node[imageLabel] (W1) at (0.25,0.3) {$\ve{w_1}$};
\node[imageLabel] (W2) at (0.5,0.75) {$\ve{w_2}$};
\path[imageArrow] (W1) -- (0.3,0.15);
\path[imageArrow] (W2) -- (0.66,0.79);
]
In \cref{fig:sol21} and \cref{fig:sol22} we can see the effects of the
application of this
solution to a piece of the curve. The original piece of control
polygon is in the left pictures; the triangle
composed of those vertices
collides with the obstacle in the back. The two new vertices $\ve{w_1}$
and $\ve{w_2}$ are added to avoid the collision.
\subsubsection{Complexity considerations}\index{Complexity!Dijkstra's algorithm in $G$}\index{Dijkstra in $G$!complexity}
For this solution we still have the costs of \cref{eq:costGraph} and
\cref{eq:costPruning} for
the creation and pruning of the graph $G$. In addition, we need to
apply Dijkstra's
algorithm in $G$ to obtain $P$ with a cost \cite{bondy}\cite{lavalle}
\begin{equation*}
\bigO(|E|+|V|\log |V|).
\end{equation*}
By \cref{eq:numV} and \cref{eq:numE} this cost is equal to
\newcommand{\eqCostDijkstraG}{\ensuremath{\bigO(k|\obsSet|+|\obsSet|\log |\obsSet|)}}
\begin{equation}\label{eq:costDijkstraG}
\eqCostDijkstraG
\end{equation}
and if we make the assumption of $k$ constant we have a cost
\begin{equation*}
\bigO(|\obsSet|\log |\obsSet|).
\end{equation*}
To check and remove the collisions in the path, we need to
consider every obstacle face in $\obsSet$ for every
triangle in $P$. The
cost to do this is\footnote{If
$\#\acp{OTF}=\bigO(|\obsSet|)$ - i.e. the
number of \acp{OTF} does not grow faster than the number of obstacles.}
$\bigO(|P|\cdot|\obsSet|)$, where $|P|$ denotes the number of vertices
in $P$. In the worst case
$|P|=\bigO(|V|)=\bigO(|\obsSet|)$, hence we have a cost
\newcommand{\eqCostCleanPath}{\ensuremath{\bigO(|P|\cdot|\obsSet|)=\bigO(|\obsSet|^2)}}
\begin{equation}\label{eq:costCleanPath}
\eqCostCleanPath .
\end{equation}
Summing up all the costs, we have
\newcommand{\eqCostTotalTwo}{\ensuremath{\bigO(k|\obsSet|^2)}}
\begin{equation}\label{eq:costTotalTwo}
\eqCostTotalTwo
\end{equation}
and, if we consider $k$ constant,
\newcommand{\eqCostTotalTwoK}{\ensuremath{\bigO(|\obsSet|^2)}}
\begin{equation}\label{eq:costTotalTwoK}
\eqCostTotalTwoK .
\end{equation}
\begin{table}
\centering
\begin{tabular}{|l|c|r|}
\hline
Description&Cost&Reference\\
\hline
\hline
Creation of $G$&\eqCostGraph&\cref{eq:costGraph}\\
Pruning of $G$&\eqCostPruning&\cref{eq:costPruning}\\
Routing in $G$&\eqCostDijkstraG&\cref{eq:costDijkstraG}\\
Clean path&\eqCostCleanPath&\cref{eq:costCleanPath}\\
\hline
Total&\eqCostTotalTwo&\cref{eq:costTotalTwo}\\
Total ($k$ constant)&\eqCostTotalTwoK&\cref{eq:costTotalTwoK}\\
\hline
\end{tabular}
\caption{Summary of the costs for solution two.}
\label{tab:costsSol2}
\end{table}
In \cref{tab:costsSol2} we summarize all the terms that contribute to
the total cost, together with the total cost itself.
The cost is comparable with that of the first
solution. Furthermore, in
this case we can divide the algorithm into two parts:
\begin{enumerate}
\item first we can construct $G$ with cost $\bigO(|\obsSet|^2)$;
\item then we can use it for different
situations with cost $\bigO(|\obsSet|\log|\obsSet|+|P|\cdot|\obsSet|)$.
\end{enumerate}
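As a concrete illustration of the routing step in part two, the following Python sketch implements Dijkstra's algorithm on an adjacency-list graph. The graph representation and function names are illustrative, not the project's actual code; note also that a binary heap yields $\bigO(|E|\log|V|)$, while the $\bigO(|E|+|V|\log|V|)$ bound quoted above assumes a Fibonacci-heap priority queue.

```python
import heapq

def dijkstra(adj, source):
    """Shortest-path distances from source in a weighted graph.

    adj: dict mapping node -> list of (neighbour, weight) pairs.
    Returns (dist, prev): settled distances and predecessor links,
    from which the shortest path P can be reconstructed.
    """
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry: u was settled with a smaller d
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(heap, (nd, v))
    return dist, prev
```

Reconstructing the path is then a matter of following `prev` backwards from the end vertex to the start vertex.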
\section{Degree increase}\label{sec:degreeInc}\index{Degree
increase}\index{\bss!higher degree}
We have hitherto assumed we are dealing only with quadratic \bss{} -
i.e. of degree $2$ - because,
for the \ac{CHP} (\cref{sec:convexHull}), we need to check
intersections only between two triangles (one belonging to $P$ and the
other to the \acp{OTF}). If we want to use higher degree
curves, we can modify the
previous algorithms to deal with polyhedral convex hulls, but this
implies an increase in complexity.
We are interested in increasing the degree to achieve smooth
curves with continuous curvature and torsion. We adopt a compromise:
we adapt the path obtained from the previous algorithms by adding
vertices and forcing the curve to remain in the same convex
hull. However, this approach has the disadvantage that we cannot achieve a
good torsion\footnote{We can improve this with the post process.}
because the curve changes plane at an inflection point of
the curvature.
We modify
\begin{equation*}
P=(\ve{v_0},\dots,\ve{v_n})
\end{equation*}
adding a certain number of new, aligned vertices
$(\ve{w_0},\ve{w_1},\dots)$ between each pair
$(\ve{v_i},\ve{v_{i+1}})$ of vertices in $P$, for $i=0,\dots,n-1$. The
number of $\ve{w_j}$
between each pair $(\ve{v_i},\ve{v_{i+1}})$ depends on the desired
degree of the curve: we need $m-2$ new vertices between each pair
$(\ve{v_i},\ve{v_{i+1}})$ for
\bs curves of degree $m$.
Thus the final modified path for a \bs curve of degree $m\ge 3$ is
\begin{equation*}
\tilde{P}=(\ve{v_0},\ve{w_0},\dots,\ve{w_{m-3}},\ve{v_1},\dots,\ve{v_i},\ve{w_{i(m-2)}},\dots,\ve{w_{(i+1)(m-2)-1}},\ve{v_{i+1}},\dots,\ve{v_n}).
\end{equation*}
This strategy is used in this project only to lift the degree from $2$ to $3$ or $4$.
\begin{myfig}{Increase the degree $m$ from $2$ to $4$.}{fig:highDegree}
\begin{tikzpicture}
\coordinate (a) at (-2,0);
\coordinate (b) at (1,0);
\coordinate (c) at (3,3);
\coordinate (d) at (5,4);
\coordinate (e) at (7,1);
\coordinate (f) at (9,2);
\coordinate (g) at (12,2);
\coordinate (ab1) at ($ (a)!0.33!(b) $);
\coordinate (ab2) at ($ (b)!0.33!(a) $);
\coordinate (bc1) at ($ (b)!0.33!(c) $);
\coordinate (bc2) at ($ (c)!0.33!(b) $);
\coordinate (cd1) at ($ (c)!0.33!(d) $);
\coordinate (cd2) at ($ (d)!0.33!(c) $);
\coordinate (de1) at ($ (d)!0.33!(e) $);
\coordinate (de2) at ($ (e)!0.33!(d) $);
\coordinate (ef1) at ($ (e)!0.33!(f) $);
\coordinate (ef2) at ($ (f)!0.33!(e) $);
\coordinate (fg1) at ($ (f)!0.33!(g) $);
\coordinate (fg2) at ($ (g)!0.33!(f) $);
\foreach \x/\y/\z in {a/b/c,b/c/d,c/d/e,d/e/f,e/f/g}{
\path[convexHull] (\x) -- (\y) -- ($ (\y)!0.33!(\z) $) -- (\x);
\path[convexHull] ($ (\x)!0.33!(\y) $) -- (\y) -- ($ (\z)!0.33!(\y) $) -- ($ (\x)!0.33!(\y) $);
\path[convexHull] ($ (\y)!0.33!(\x) $) -- (\y) -- (\z) -- ($ (\y)!0.33!(\x) $);
}
\foreach \x/\y/\z in {a/b/c,b/c/d,c/d/e,d/e/f,e/f/g}{
\draw[convexHullBord] (\x) -- ($ (\y)!0.33!(\z) $);
\draw[convexHullBord] ($ (\x)!0.33!(\y) $) -- ($ (\z)!0.33!(\y) $);
\draw[convexHullBord] ($ (\y)!0.33!(\x) $) -- (\z);
}
\draw[controlPoly] (a) -- (b) -- (c) -- (d) -- (e) -- (f) -- (g);
\foreach \p in {a,b,c,d,e,f,g}
\filldraw[controlVert] (\p) circle (2pt);
\foreach \g in {ab1,ab2,bc1,bc2,cd1,cd2,de1,de2,ef1,ef2,fg1,fg2}
\filldraw[controlVertHigh] (\g) circle (2pt);
\node[below] at (a) {$\ve{v_{0}}$};
\node[below] at (b) {$\ve{v_{1}}$};
\node[above left] at (c) {$\ve{v_{2}}$};
\node[above] at (d) {$\ve{v_3}$};
\node[below right] at (e) {$\ve{v_4}$};
\node[above] at (f) {$\ve{v_5}$};
\node[below] at (g) {$\ve{v_6}$};
\node[below] at (ab1) {$\ve{w_0}$};
\node[below] at (ab2) {$\ve{w_1}$};
\node[below right] at (bc1) {$\ve{w_2}$};
\node[below right=2pt] at (bc2) {$\ve{w_3}$};
\node[above] at (cd1) {$\ve{w_4}$};
\node[above] at (cd2) {$\ve{w_5}$};
\node[above right] at (de1) {$\ve{w_6}$};
\node[below left=2pt] at (de2) {$\ve{w_7}$};
\node[below right] at (ef1) {$\ve{w_8}$};
\node[below right=2pt] at (ef2) {$\ve{w_9}$};
\node[above] at (fg1) {$\ve{w_{10}}$};
\node[above] at (fg2) {$\ve{w_{11}}$};
\end{tikzpicture}
\end{myfig}
An example of path
\begin{equation*}
P=(\ve{v_0},\ve{v_1},\ve{v_2},\ve{v_3},\ve{v_4},\ve{v_5},\ve{v_6})
\end{equation*}
is visible in \cref{fig:highDegree}.
The vertices of $P$ are drawn in green, the added vertices in red,
and the cyan area is the convex hull of the final curve.
We want to adapt $P$ to quartic \bs curves, hence we need to add two new
vertices between each pair of vertices $(\ve{v_i},\ve{v_{i+1}})$ for
$i=0,\dots,6$. Those new vertices are
\begin{equation*}
(\ve{w_0},\ve{w_1},\ve{w_2},\ve{w_3},\ve{w_4},\ve{w_5},\ve{w_6},\ve{w_7},\ve{w_8},\ve{w_9},\ve{w_{10}},\ve{w_{11}}).
\end{equation*}
Note that, with this algorithm, when we increase the degree from $2$
to $m\ge 3$, the convex hull containing a \bs curve of
degree $m$ in
$\tilde{P}$ is a subset of the convex hull containing a \bs curve
of degree $2$
in $P$. This happens because the polyhedra of $m+1$ consecutive vertices in
$\tilde{P}$ collapse into triangles contained inside the triangles
of consecutive vertices in $P$. For instance, in \cref{fig:highDegree}
the convex hull of the first $5$ vertices
$\ve{v_0},\ve{w_0},\ve{w_1},\ve{v_1},\ve{w_2}$ of $\tilde{P}$
coincides with the
triangle $\triangle \ve{v_0}\ve{v_1}\ve{w_2}$, which is contained inside
the triangle $\triangle \ve{v_0}\ve{v_1}\ve{v_2}$ of the first $3$
vertices of $P$.
One effect of the application of this method is that a curve of
degree $m$ in $\tilde{P}$ touches
every segment of the original control polygon $P$. This is because adding
$m-2$ aligned vertices between each pair $(\ve{v_i},\ve{v_{i+1}})$
results in $m$ aligned vertices on each original segment
(\cref{sec:alignedVertices}).
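The vertex-insertion step described above can be sketched as follows. This is a minimal Python illustration, not the project's implementation; it places the $m-2$ new vertices evenly between each consecutive pair.

```python
def elevate_path(P, m):
    """Insert m-2 evenly spaced vertices between each consecutive
    pair of vertices of P, producing the path used for a degree-m
    B-spline (no-op for m <= 2)."""
    if m <= 2:
        return list(P)
    out = []
    for p, q in zip(P, P[1:]):
        out.append(p)
        for j in range(1, m - 1):  # j = 1, ..., m-2
            t = j / (m - 1)
            out.append(tuple(a + t * (b - a) for a, b in zip(p, q)))
    out.append(P[-1])
    return out
```

For a path with $n+1$ vertices, the result has $(n+1)+n(m-2)$ vertices; with the $7$ vertices of \cref{fig:highDegree} and $m=4$ this gives $19$ vertices, matching the figure.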
\section{Knots selection}\label{sec:knotSel}\index{\bss!knot selection}
In the previous sections we never discussed the criterion adopted to
determine the extended knot vector $T$
\begin{equation*}
T=\{t_0,\dots,t_{m-1},t_{m},\dots,t_{n+1},t_{n+2},\dots,t_{n+m+1}\}
\end{equation*}
associated to the \bs curve.
In this section we discuss two
methods implemented to establish $T$.
First of all, we want
the curve to interpolate
the chosen start and end points that correspond to the extremes
$\ve{v_0}$ and $\ve{v_n}$ of the extracted path $P$. We see in
\cref{sec:clamped} that we can achieve such interpolation if we impose
\begin{equation}\label{eq:externalKnotsFix}
\begin{split}
&t_0 = t_1 = \dots = t_{m} = a\\
&t_{n+1} = t_{n+2} = \dots = t_{n+m+1} = b
\end{split}
\end{equation}
where $a$ and $b$ are the extremes of the parametric domain of
the curve.
The constraint of \cref{eq:externalKnotsFix} is mandatory,
thus we cannot change it. Regarding the parametric domain, we
choose it to be $[0,1]$, because changing the extremes does not change
the behavior of the curve: only changing the ratios of the distances
between the knots is effective \cite{farin}. We still need to choose how to select
the inner $n-m$ knots $t_{m+1},\dots,t_n$, and we develop two different
ways to do this:
\begin{enumerate}[label=\textbf{method \arabic*}]
\item\label[void]{en:uniform} Use a uniform partition, where $t_i-t_{i-1}=c$ for
$i=m+1,\dots,n+1$, with $c$ constant;
\item\label[void]{en:adaptive} Use an adaptive partition, where we try
to place dense knots in
correspondence with the regions of the curve that have dense control
vertices.
\end{enumerate}
\cref{en:uniform} is the easiest way to choose a knot vector
and it is a common first choice in textbooks \cite{farin}\cite{docarmo},
but it has the disadvantage of ignoring the geometry of the curve
\cite{farin}. The steps to accomplish \cref{en:uniform} are quite
straightforward: we pick the inner knots
\begin{equation*}
\frac{i}{n-m+1}
\end{equation*}
for $i=1,\dots,n-m$. Thus, we concentrate on \cref{en:adaptive}.
\begin{myfig}{Optimal case for a quadratic curve (we want uniform partition).}{fig:adaptive1}
\begin{tikzpicture}
\coordinate (v0) at (0,0);
\coordinate (v1) at (2,0);
\coordinate (v2) at (4,0);
\coordinate (v3) at (6,0);
\coordinate (v4) at (8,0);
\coordinate (v5) at (10,0);
\coordinate (t0) at (0,-1);
\coordinate (t4) at (10,-1);
\coordinate (t1) at ($ (t0)!0.25!(t4) $);
\coordinate (t2) at ($ (t0)!0.5!(t4) $);
\coordinate (t3) at ($ (t0)!0.75!(t4) $);
\foreach \v/\w in {v0/w0,v1/w1,v2/w2,v3/w3,v4/w4,v5/w5}{
\coordinate (\w) at ($(t0)!(\v)!(t4)$);
\draw[controlToKnot] (\v) -- (\w);
}
\draw[controlPoly] (v0) -- (v1) -- (v2) -- (v3) -- (v4) -- (v5);
\draw[knotPoly] (t0) -- (t1) -- (t2) -- (t3) -- (t4);
\foreach \v/\i in {v0/0,v1/1,v2/2,v3/3,v4/4,v5/5}{
\filldraw[controlVert] (\v) circle (2pt);
\node[above] at (\v) {$\ve{v_{\i}}$};
}
\foreach \t/\j/\pos in {t0/{0,1,2}/below,t1/3/above,t2/4/above,t3/5/above,t4/{6,7,8}/below}{
\filldraw[knot] (\t) circle (2pt);
\node[\pos] at (\t) {$t_{\j}$};
}
\foreach \w/\j in {w1/1,w2/2,w3/3,w4/4}{
\node[below] at (\w) {$\nu_{\j}$};
}
\node[below=15pt] at (w0) {$\nu_0$};
\node[below=15pt] at (w5) {$\nu_5$};
\node[left=20pt] at (v0) {$P$};
\node[left=20pt] at (t0) {$\tau$};
%\filldraw[color=yellow] ($ (w1)!0.25!(w2) $) circle (2pt);
\end{tikzpicture}
\end{myfig}
\begin{myfig}{General case for a quadratic curve (same relative distances between
$t_i$ and the enclosing $\nu_j$, $\nu_{j+1}$ as in \cref{fig:adaptive1}).}{fig:adaptive2}
\begin{tikzpicture}
\coordinate (v0) at (0,0);
\coordinate (v1) at (1,0);
\coordinate (v2) at (2,0);
\coordinate (v3) at (4,0);
\coordinate (v4) at (8,0);
\coordinate (v5) at (10,0);
\coordinate (t0) at (0,-1);
\coordinate (t4) at (10,-1);
\foreach \v/\w in {v0/w0,v1/w1,v2/w2,v3/w3,v4/w4,v5/w5}{
\coordinate (\w) at ($(t0)!(\v)!(t4)$);
\draw[controlToKnot] (\v) -- (\w);
}
\coordinate (t1) at ($ (w1)!0.25!(w2) $);
\coordinate (t2) at ($ (w2)!0.5!(w3) $);
\coordinate (t3) at ($ (w3)!0.75!(w4) $);
\draw[controlPoly] (v0) -- (v1) -- (v2) -- (v3) -- (v4) -- (v5);
\draw[knotPoly] (t0) -- (t1) -- (t2) -- (t3) -- (t4);
\foreach \v/\i in {v0/0,v1/1,v2/2,v3/3,v4/4,v5/5}{
\filldraw[controlVert] (\v) circle (2pt);
\node[above] at (\v) {$\ve{v_{\i}}$};
}
\foreach \t/\j/\pos in {t0/{0,1,2}/below,t1/3/above,t2/4/above,t3/5/above,t4/{6,7,8}/below}{
\filldraw[knot] (\t) circle (2pt);
\node[\pos] at (\t) {$t_{\j}$};
}
\foreach \w/\j in {w1/1,w2/2,w3/3,w4/4}{
\node[below] at (\w) {$\nu_{\j}$};
}
\node[below=15pt] at (w0) {$\nu_0$};
\node[below=15pt] at (w5) {$\nu_5$};
\node[left=20pt] at (v0) {$P$};
\node[left=20pt] at (t0) {$\tau$};
\end{tikzpicture}
\end{myfig}
We start from the idea that if we have a control polygon with
uniformly-spaced vertices -
i.e. $\norm{\ve{v_1}-\ve{v_0}}=\norm{\ve{v_2}-\ve{v_1}}=\cdots=\norm{\ve{v_n}-\ve{v_{n-1}}}$
- then it is natural to adopt a uniform partition of the knots
($t_{m+1}-t_m=t_{m+2}-t_{m+1}=\cdots=t_{n+1}-t_n$). In
\cref{fig:adaptive1} there is an example of a quadratic \bs curve with
a uniformly-spaced control polygon. The upper segment is a
\emph{rectified}
visualization of the control polygon with six control
vertices
$\ve{v_0},\dots,\ve{v_5}$. The lower segment represents the
partition of the domain from $a$ (on the left) to $b$
(on the right), with the projections $\nu_0,\dots,\nu_5$ of the control
vertices,
scaled in length to the parametric domain axis\footnote{$\ve{v_0}$ is
projected to $a$, $\ve{v_5}$ is projected to $b$, and the ratios
between the distances between vertices are preserved.}, and the
knots $t_0,\dots,t_8$ on it.
Starting from this situation, if
we have a generic control polygon with segments of different lengths, as
in \cref{fig:adaptive2}, then we want each $t_i$ to keep the same
relative distance from the surrounding $\nu_j$ and $\nu_{j+1}$
as in the optimal case. For instance, in \cref{fig:adaptive1}
$\frac{t_3-\nu_1}{\nu_2-\nu_1}=\frac{1}{4}$ and
$\frac{\nu_2-t_3}{\nu_2-\nu_1}=\frac{3}{4}$; the same values must be
preserved in \cref{fig:adaptive2}.
The problem now is how to calculate the values of $t_i$ in the general
case. We consider only the inner part $\tau$ of the partition vector,
including the extremes
\begin{equation*}
\tau_i = t_{i+m}\qquad i=0,\dots,n-m+1
\end{equation*}
where $\tau_0=a=0$ and $\tau_{n-m+1}=b=1$. In \cref{fig:adaptive1} and
\cref{fig:adaptive2},
$\tau=(t_2,t_3,t_4,t_5,t_6)$. Now we calculate the positions of all
$\tau_i$ with respect to the $\nu_j$ in the optimal case, using the
uniform distance $\nu_j-\nu_{j-1}$ as the unit of measure. We
specifically calculate
\begin{equation}\label{eq:adaptivePos}
\tau_i^\nu=\frac{n}{n-m+1}\cdot i\qquad i=0,\dots,n-m+1
\end{equation}
obtaining the numbers $\tau_i^\nu$ whose integer part
$\lfloor\tau_i^\nu\rfloor$ represents the index $j$ of the
$\nu_j$ that is to the left of $\tau_i$, and whose fractional part
$(\tau_i^\nu-\lfloor\tau_i^\nu\rfloor)$ represents the relative
distance from it: $\frac{\tau_i-\nu_j}{\nu_{j+1}-\nu_j}$.
Now we calculate the projections $\nu_i$ in the
\emph{generic} case.
We start calculating the incremental distances between
the vertices
\begin{equation*}
\begin{cases}
d_0=0&\\
d_i=d_{i-1}+\norm{\ve{v_i}-\ve{v_{i-1}}}&\qquad i=1,\dots,n
\end{cases}
\end{equation*}
and, remembering that the parametric domain is $[0,1]$, we have
\begin{equation}\label{eq:adaptiveProj}
\nu_i=\frac{d_i}{d_n}\qquad i=0,\dots,n.
\end{equation}
Applying the positions of \cref{eq:adaptivePos} to the projections of
\cref{eq:adaptiveProj}, we obtain the values
\begin{equation*}
\tau_i=\nu_{\lfloor\tau_i^\nu\rfloor}+(\tau_i^\nu-\lfloor\tau_i^\nu\rfloor)(\nu_{\lfloor\tau_i^\nu\rfloor+1}-\nu_{\lfloor\tau_i^\nu\rfloor})\qquad i=0,\dots,n-m+1.
\end{equation*}
Finally, adding the duplicated knots, we obtain
\begin{equation*}
t_i=\tau_{\min(n-m+1,\ \max(0,\ i-m))}\qquad i=0,\dots,n+m+1.
\end{equation*}
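Putting the formulas together, the adaptive knot selection (method 2) can be sketched as follows. This is an illustrative Python sketch, not the project's code; it combines the chord-length projections, the uniform-case positions, and the final clamping.

```python
import math

def adaptive_knots(P, m):
    """Clamped knot vector on [0, 1] for a degree-m B-spline whose
    inner knots follow the chord-length distribution of the control
    polygon P (a list of 3D points)."""
    n = len(P) - 1
    # cumulative chord lengths d_i and their normalizations nu_i
    d = [0.0]
    for p, q in zip(P, P[1:]):
        d.append(d[-1] + math.dist(p, q))
    nu = [di / d[-1] for di in d]
    # each tau_i sits between the enclosing nu_j, nu_{j+1} at the
    # same relative position it would occupy in the uniform case
    tau = []
    for i in range(n - m + 2):
        pos = n * i / (n - m + 1)    # tau_i^nu
        j = min(int(pos), n - 1)     # index of the nu to its left
        frac = pos - j               # relative distance from nu_j
        tau.append(nu[j] + frac * (nu[j + 1] - nu[j]))
    # clamp: repeat the extremes a = 0 and b = 1 with multiplicity m+1
    return [tau[min(n - m + 1, max(0, i - m))] for i in range(n + m + 2)]
```

On a uniformly-spaced control polygon this reproduces the uniform partition of \cref{fig:adaptive1}, as expected.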
\section{Post processing}\label{sec:postPro}\index{Post process}
The purpose of the post-processing phase is to simplify the
path $P=(\ve{v_0},\dots,\ve{v_n})$ obtained in the previous phase by
removing useless vertices, in order
to achieve a smoother path.
\begin{algo}{Post processing algorithm on path $P$.}{alg:postProcess}
\Procedure{postProcess}{$P$}
\For{$i\Ass 1,n-1$}
\If{$i=1$ \IfOr \IfNot $intersect\acs{OTF}(\triangle\ve{v_{i-2}}\ve{v_{i-1}}\ve{v_{i+1}})$}
\If{$i=n-1$ \IfOr \IfNot $intersect\acs{OTF}(\triangle\ve{v_{i-1}}\ve{v_{i+1}}\ve{v_{i+2}})$}
\State $P\Ass P\setminus\{\ve{v_i}\}$
\EndIf
\EndIf
\EndFor
\EndProcedure
\end{algo}
To obtain this, we implement \cref{alg:postProcess}, which
iterates through all the vertices, except the extremes, and checks whether each
$\ve{v_i}$ can be removed without consequences. By
consequences we mean that removing $\ve{v_i}$ would cause a triangle
in $P$ to intersect one of the \acp{OTF}.
\begin{myfig}{Example of post process check that removes $\ve{v_i}$.}{fig:postProcess}
\begin{tikzpicture}
\coordinate (l1) at (0,0);
\coordinate (a) at (1,0);
\coordinate (b) at (3,3);
\coordinate (c) at (5,4);
\coordinate (d) at (7,1);
\coordinate (e) at (9,2);
\coordinate (l2) at (10,2);
\coordinate (o1) at (4,0);
\coordinate (o2) at (5,1.5);
\coordinate (o3) at (6,0);
\path[convexHull] (a) -- (b) -- (d) -- (a);
\path[convexHull] (b) -- (d) -- (e) -- (b);
\draw[controlPoly] (a) -- (b);
\draw[controlPoly] (d) -- (e);
\draw[controlPolyTract] (l1) -- (a);
\draw[controlPolyTract] (e) -- (l2);
\draw[controlPolyTractHigh] (b) -- (d);
\draw[controlPolyHigh] (b) -- (c) -- (d);
\path[obstacle] (o1) -- (o2) -- (o3) -- (o1);
\node at (barycentric cs:o1=0.3,o2=0.3,o3=0.3) {\acs{OTF}};
\foreach \p in {a,b,d,e}
\filldraw[controlVert] (\p) circle (2pt);
\filldraw[controlVertHigh] (c) circle (3pt);
\node[below] at (a) {$\ve{v_{i-2}}$};
\node[above left] at (b) {$\ve{v_{i-1}}$};
\node[above] at (c) {$\ve{v_i}$};
\node[below right] at (d) {$\ve{v_{i+1}}$};
\node[above] at (e) {$\ve{v_{i+2}}$};
\end{tikzpicture}
\end{myfig}
To clarify the concept, consider the
simplification in 2-dimensional space in \cref{fig:postProcess}. The
path to process
is
\begin{equation*}
P=(\dots,\ve{v_{i-2}},\ve{v_{i-1}},\ve{v_{i}},\ve{v_{i+1}},\ve{v_{i+2}},\dots)
\end{equation*}
and we are
considering removing $\ve{v_i}$ obtaining a modified path
\begin{equation*}
\tilde{P}=(\dots,\ve{v_{i-2}},\ve{v_{i-1}},\ve{v_{i+1}},\ve{v_{i+2}},\dots).
\end{equation*}
Before doing this, we need to check if any triangle in $\tilde{P}$
intersects any \ac{OTF}. In detail, we need to check only the two
triangles $\triangle\ve{v_{i-2}}\ve{v_{i-1}}\ve{v_{i+1}}$ and
$\triangle\ve{v_{i-1}}\ve{v_{i+1}}\ve{v_{i+2}}$, because the other
triangles in $\tilde{P}$ are already present in $P$. For instance,
the obstacle in the figure does not intersect any of the triangles in
$P$, but it intersects $\triangle\ve{v_{i-2}}\ve{v_{i-1}}\ve{v_{i+1}}$ in
$\tilde{P}$.
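The check above can be sketched in Python as follows. The collision predicate `tri_hits_otf` is assumed to be supplied (it tests one triangle against all \acp{OTF}), and this sketch is a slight variant of \cref{alg:postProcess} that re-checks the current index after each removal.

```python
def post_process(P, tri_hits_otf):
    """Remove every interior vertex of P whose removal keeps both
    replacement triangles free of OTF collisions.

    tri_hits_otf(a, b, c): assumed predicate, True iff triangle
    abc intersects some obstacle triangular face."""
    i = 1
    while i < len(P) - 1:
        left_ok = (i == 1) or \
            not tri_hits_otf(P[i - 2], P[i - 1], P[i + 1])
        right_ok = (i == len(P) - 2) or \
            not tri_hits_otf(P[i - 1], P[i + 1], P[i + 2])
        if left_ok and right_ok:
            del P[i]   # v_i is useless: drop it and re-check here
        else:
            i += 1
    return P
```

In an obstacle-free scene the whole path collapses to its two extremes, which is the expected limit behavior.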
\subsection{Complexity considerations}\index{Post process!complexity}\index{Complexity!post process}
For every vertex of $P$, we need to check whether two triangles
intersect an \ac{OTF}, hence we have a complexity of
\begin{equation*}
\bigO(|P|\cdot|\obsSet|)=\bigO(|\obsSet|^2)
\end{equation*}
where $\obsSet$ is the set of obstacles and $|P|$ is the number of vertices
in $P$.
\section{Third solution: Simulated Annealing}\label{sec:inter3}\index{\acf{SA}}
The solutions in \cref{sec:inter1} and \cref{sec:inter2} have two
problems in common:
\begin{itemize}
\item both reject configurations conservatively,
considering only the control polygon;
\item neither optimizes
length nor any other quantity.
\end{itemize}
These solutions have also the following benefits:
\begin{itemize}
\item they produce paths that are obstacle-free by construction;
\item the application of the post-processing often produces a reduction
in the curve length.
\end{itemize}
In this section, we describe a third
approach based on
probabilistic computation.
We can consider the problem of finding the shortest path as a constrained
optimization problem, in which a certain configuration of the control
vertices (and consequently of the \bs) is the state of the system, and
we aim to
minimize both the length of the control polygon (and therefore of the
\bs\footnote{We also give the user the possibility of selecting the
arc length as the quantity to minimize.}) and the peaks in curvature and torsion of the \bs, under
the constraint that the \bs must not intersect the obstacles. We
are interested in optimizing the length of the curve and the maximum
peaks of both curvature and torsion because we want a path that is
short but also fair.
\subsection{\acf{LR} applied to the project}\index{\acf{LR}}
We can apply the concept explained in \cref{sec:lagrangianRelaxation}
to the project.
The variable space $X$ is composed of all the possible
configurations of the path or, in other words, it is defined by all
the possible values of the vector $P=(\ve{v_1},\dots,\ve{v_n})$ of all
$n$ ordered
vertices $\ve{v_i}=(x_i,y_i,z_i)$ of the
path. \cref{eq:opt} can be formulated as follows:
\begin{equation*}
\begin{aligned}
& \underset{P}{\text{minimize}}
& & \alpha\cdot maxCurv(P)+\beta\cdot
maxTors(P)+\gamma\cdot normLen(P) \\
& \text{subject to}
& & \left|bspline(P)\cap \bigcup_{i\in I}obstacle_i\right| = 0,
\end{aligned}
\end{equation*}
where $maxCurv(P)$ is the curvature peak of the \bs
constructed using $P$ as control polygon,
$maxTors(P)$ is the absolute value of the torsion peak and
$normLen(P)$ is the length of the control polygon
$P$ normalized as a percentage of the length of the initial
status\footnote{If the user chooses to minimize the
arc length, then $normLen(P)$ becomes the length of the \bs
curve.}. Here $\alpha$, $\beta$ and $\gamma$ are fixed
coefficients used to give different weights to
the curvature peak, torsion peak and length during the optimization
process. The normalization of
length is necessary to decouple the weight of the length from the
length of the path.
Curvature and torsion are obtained in discrete form: the \bs curve is
tabulated at a number of points proportional to the length of $P$,
and the curvature and
torsion values
are then calculated at each point.
Regarding the constraint, $bspline(P)$
is the set of points of the \bs that uses $P$ as
control polygon,
$obstacle_i$ is the region occupied by the $i^{th}$ of $m$ obstacles, and
$I=\{1,\dots,m\}$.
Thus, we need to build the Lagrangian function corresponding to
\cref{eq:lagrangianFun}.
The constraint function is non-negative and is calculated as the
ratio
\begin{equation}\label{eq:constraintLag}
constraint(P) = \frac{\left|\ \{\ve{p} \in spline(P)\ : \exists i
\text{ s.t. } \ve{p}\in
obstacle_i\}\ \right|}{\left|\ \{\ve{p} \in
spline(P)\}\ \right|}.
\end{equation}
The points $\ve{p}$ of the spline are calculated in discrete
form, like curvature and torsion. Thus, the constraint depends on the
tabulation of the curve, and
borderline cases are possible where the
constraint does not reflect the real situation\footnote{For instance, if
we have very thin obstacles, a curve can pass through them with only a few
points (or even none) inside.}.
Since the
function in \cref{eq:constraintLag} is non-negative, the
Lagrangian function corresponding to
\cref{eq:lagrangianFun} is
\begin{equation}\label{eq:lagrangianFunProj}
L_d(P,\lambda)=gain(P)+\lambda\cdot constraint(P)
\end{equation}
where, for convenience,
\begin{equation}\label{eq:gainLag}
gain(P) = \alpha\cdot maxCurv(P)+\beta\cdot
maxTors(P)+\gamma\cdot normLen(P).
\end{equation}
\subsection{Annealing phase}
The purpose of the simulated annealing phase is to find the minimum
saddle point in
the curve represented by the
\cref{eq:lagrangianFunProj}.
\begin{algo}{Annealing}{alg:annealing}
\Procedure{annealing}{$\ve{x}$}
\State $\lambda\Ass initialLambda$\label{alg:annealing:initialize}
\State $T\Ass initialTemperature$\label{alg:annealing:initialize2}
\While{not $terminationCondition()$}\label{alg:annealing:while}
\ForAll{number of trials}\label{alg:annealing:for}
\State $changeLambda\Ass\True$ with $changeLambdaProb$\label{alg:annealing:lambdaProb}
\If{$changeLambda$}
\State $\lambda'\Ass neighbour(\lambda)$\label{alg:annealing:changeLambda}
\State $\lambda\Ass \lambda'$ with probability $\me^{-([energy(\ve{x},\lambda)-energy(\ve{x},\lambda')]^+/T)}$
\Else
\State $\ve{x}'\Ass neighbour(\ve{x})$\label{alg:annealing:changeX}
\State $\ve{x}\Ass \ve{x}'$ with probability $\me^{-([energy(\ve{x}',\lambda)-energy(\ve{x},\lambda)]^+/T)}$
\EndIf
\EndFor
\State $T\Ass T\cdot warmingRatio$\label{alg:annealing:cooling}
\EndWhile
\EndProcedure
\end{algo}
\cref{alg:annealing} implements the annealing process; its input is the
initial status of the system $\ve{x}$ - i.e. the initial configuration
of the control polygon. It operates as follows:
\begin{enumerate}
\item $\lambda$ and the
temperature are initialized on
\cref{alg:annealing:initialize} and \cref{alg:annealing:initialize2}
respectively;
\item the \emph{while} on
\cref{alg:annealing:while} is the main loop and the terminating
condition is given by a minimum temperature or a minimum variation of
energy between two iterations;
\item the \emph{for} at
\cref{alg:annealing:for} repeats the annealing move for a certain
number of trials; on each iteration the algorithm probabilistically
tries to move the state of the system;
\begin{itemize}
\item first, on
\cref{alg:annealing:lambdaProb}, it chooses between moving in the
Lagrangian space or in the space of the path;
\item after that, based on the previous
choice, the algorithm probabilistically tries to move the system
in a neighbouring
state: in the
Lagrangian space at
\cref{alg:annealing:changeLambda} or in the path space at
\cref{alg:annealing:changeX};
\end{itemize}
\item finally, at the end of every trial set,
at \cref{alg:annealing:cooling}, the temperature $T$ is cooled by
a certain factor.
\end{enumerate}
The termination condition in \cref{alg:annealing:while} is triggered by a
minimum variation of energy $\Delta energy$ between two consecutive
iterations of the cycle. The termination is also triggered when a
minimum temperature is reached; this imposes a limit on the
number of cycles.
The choice of the neighbour is
probabilistic. If the energy increases in the
Lagrangian space or decreases in the path space, then the probability of
choosing the new state is $1$. If the energy decreases in the Lagrangian
space or increases in the path space, then the new state is accepted
with probability
\begin{equation*}
\exp\left(-\frac{\Delta energy}{T}\right).
\end{equation*}
The $neighbour$ function depends on the input:
\begin{itemize}
\item a neighbour of $\lambda$ is a value that is equal to $\lambda$
plus a uniform perturbation in range $[-maxLambdaPert, maxLambdaPert]$;
\item a neighbour of the path is obtained by randomly picking one of
the vertices $\ve{v_i}$ (except the extremes $\ve{v_0}$ and $\ve{v_n}$),
then uniformly choosing a direction and a distance
in a specific range and, finally, moving $\ve{v_i}$ by
the chosen values.
\end{itemize}
The $energy$ function is equivalent to $L_d$ in
\cref{eq:lagrangianFunProj}:
\begin{equation}
\label{eq:annealingEnergy}
energy(\ve{x},\lambda)=gain(P)+\lambda\cdot constraint(P).
\end{equation}
The annealing process
finds a saddle point by probabilistically increasing the energy when
$\lambda$ is moved, and
decreasing the energy when the points are moved.
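A compact Python sketch of \cref{alg:annealing} follows. The $\Delta energy$ termination test is omitted (only the minimum temperature is used), and every name and default parameter value is illustrative rather than the project's actual configuration.

```python
import math
import random

def anneal(x, lam, energy, neighbour_x, neighbour_lam,
           T=1.0, T_min=1e-3, cooling=0.95,
           trials=100, p_lambda=0.1):
    """Saddle-point search: probabilistically *raises* the energy
    when moving lambda and *lowers* it when moving the path x."""
    while T > T_min:
        for _ in range(trials):
            if random.random() < p_lambda:
                lam2 = neighbour_lam(lam)
                # a lambda move that increases energy is accepted
                # with probability 1 (delta clipped to 0)
                delta = max(energy(x, lam) - energy(x, lam2), 0.0)
                if random.random() < math.exp(-delta / T):
                    lam = lam2
            else:
                x2 = neighbour_x(x)
                # an x move that decreases energy is accepted
                # with probability 1 (delta clipped to 0)
                delta = max(energy(x2, lam) - energy(x, lam), 0.0)
                if random.random() < math.exp(-delta / T):
                    x = x2
        T *= cooling
    return x, lam
```

The clipping `max(..., 0.0)` implements the $[\,\cdot\,]^+$ of \cref{alg:annealing}: favourable moves are always accepted, unfavourable ones with probability $\exp(-\Delta energy/T)$.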
\subsubsection{Complexity considerations}\index{\acf{SA}!complexity}\index{Complexity!\acf{SA}}
For this solution, we still have the costs of \cref{eq:costGraph} and
\cref{eq:costPruning} for
the creation and pruning of the graph $G$. In addition, we need to
apply Dijkstra's
algorithm in $G$ to obtain the initial path $P$ with the cost of
\cref{eq:costDijkstraG}.
Regarding the annealing phase, for each \emph{step} (an
iteration of \cref{alg:annealing:while} in \cref{alg:annealing}) we
have a fixed number of \emph{trials} (the iterations of \cref{alg:annealing:for}). For each trial, we need to
calculate the value of the energy of \cref{eq:annealingEnergy}, which is
the sum of the gain and the constraint.
For the gain of \cref{eq:gainLag} we need to calculate the values of
curvature and torsion for every tabulated point. Furthermore, there is
also a cost\footnote{Only if
the user does not choose to minimize the arc length.} of
$\bigO(|P|)$ to
calculate the length of the control polygon. Thus, the cost for
calculating the gain is
\begin{equation*}
\bigO(|Sp|+|P|)
\end{equation*}
where $Sp$ is the set of the tabulated points of the curve. The number
of points in $Sp$ depends on the length of the control polygon
$len(P)$. Thus,
we have a cost of $\bigO(len(P)+|P|)$, but in the worst case
$|P|=\bigO(|V|)=\bigO(|\obsSet|)$, thus the cost is
\newcommand{\eqCostGain}{\ensuremath{\bigO(len(P)+|P|)=\bigO(len(P)+|\obsSet|)}}
\begin{equation}
\label{eq:costGain}
\eqCostGain.
\end{equation}
Regarding the constraint of \cref{eq:constraintLag}, we need to
check whether
each point
of the curve is inside an obstacle. This means a cost of
\newcommand{\eqCostConstraint}{\ensuremath{\bigO(len(P)|\obsSet|)}}
\begin{equation}
\label{eq:costConstraint}
\eqCostConstraint.
\end{equation}
Hence, the total cost for the calculation of the annealing phase is
\begin{equation*}
\bigO(\#steps\cdot\#trials\cdot(len(P)|\obsSet|)),
\end{equation*}
but the number of steps and trials are bounded by
constants\footnote{Although such constants can be very high.}. Thus,
the cost becomes
\newcommand{\eqCostAnneal}{\ensuremath{\bigO(len(P)|\obsSet|)}}
\begin{equation}
\label{eq:costAnneal}
\eqCostAnneal.
\end{equation}
The total cost for the solution is
\newcommand{\eqCostTotalThree}{\ensuremath{\bigO(k|\obsSet|^2+len(P)|\obsSet|)}}
\begin{equation}
\label{eq:costTotalThree}
\eqCostTotalThree.
\end{equation}
Similarly to the previous solutions, if $k$ is constant
then the total cost becomes
\newcommand{\eqCostTotalThreeK}{\ensuremath{\bigO(|\obsSet|^2+len(P)|\obsSet|)}}
\begin{equation}
\label{eq:costTotalThreeK}
\eqCostTotalThreeK.
\end{equation}
\begin{table}
\centering
\begin{tabular}{|l|c|r|}
\hline
Description&Cost&Reference\\
\hline
\hline
Creation of $G$&\eqCostGraph&\cref{eq:costGraph}\\
Pruning of $G$&\eqCostPruning&\cref{eq:costPruning}\\
Routing in $G$&\eqCostDijkstraG&\cref{eq:costDijkstraG}\\
Gain&\eqCostGain&\cref{eq:costGain}\\
Constraint&\eqCostConstraint&\cref{eq:costConstraint}\\
Annealing&\eqCostAnneal&\cref{eq:costAnneal}\\
\hline
Total&\eqCostTotalThree&\cref{eq:costTotalThree}\\
Total ($k$ constant)&\eqCostTotalThreeK&\cref{eq:costTotalThreeK}\\
\hline
\end{tabular}
\caption{Summary of the costs for solution three.}
\label{tab:costsSol3}
\end{table}
In \cref{tab:costsSol3} we summarize all the costs. It is
difficult to quantitatively compare the cost of this solution with
those of the previous
ones, due to the presence of the factor $len(P)$, which depends
on the geometry of the scene. However, we can affirm that this solution is
more complex than the previous two by a term
$\bigO(len(P)|\obsSet|)$.
Furthermore, in this solution
we can divide the algorithm in two parts:
\begin{enumerate}
\item first we can construct $G$ with cost $\bigO(|\obsSet|^2)$;
\item then we can use it for different scenarios with cost
$\bigO(|\obsSet|\log|\obsSet|+len(P)|\obsSet|)$.
\end{enumerate}
\end{document}
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "../dissertation"
%%% End:
|
|
\documentclass[14pt]{extarticle}
% \usepackage[style=authoryear,maxbibnames=9,maxcitenames=2,uniquelist=false,backend=biber,doi=false,url=false]{biblatex}
% \addbibresource{$BIB} % bibtex location
% \renewcommand*{\nameyeardelim}{\addcomma\space} % have comma in parencite
\usepackage{natbib}
\usepackage{xcolor}
\usepackage{amsmath}
\newcommand{\tuple}[1]{ \langle #1 \rangle }
%\usepackage{automata}
\usepackage{times}
\usepackage{ltablex}
%%%%%% Template
\usepackage{hyperref}
\hypersetup{colorlinks=true,allcolors=blue}
\usepackage{vmargin}
\setpapersize{USletter}
\setmarginsrb{1.0in}{1.0in}{1.0in}{0.6in}{0pt}{0pt}{0pt}{0.4in}
% HOW TO USE THE ABOVE:
%\setmarginsrb{leftmargin}{topmargin}{rightmargin}{bottommargin}{headheight}{headsep}{footheight}{footskip}
%\raggedbottom
% paragraphs indent & skip:
\parindent 0.3cm
\parskip -0.01cm
\usepackage{tikz}
\usetikzlibrary{backgrounds}
% hyphenation:
% \hyphenpenalty=10000 % no hyphen
% \exhyphenpenalty=10000 % no hyphen
\sloppy
% notes-style paragraph spacing and indentation:
\usepackage{parskip}
\setlength{\parindent}{0cm}
% let derivations break across pages
\allowdisplaybreaks
\newcommand{\orange}[1]{\textcolor{orange}{#1}}
\newcommand{\blue}[1]{\textcolor{blue}{#1}}
\newcommand{\red}[1]{\textcolor{red}{#1}}
\newcommand{\freq}[1]{{\bf \sf F}(#1)}
\newcommand{\datafreq}[2]{{{\bf \sf F}_{#1}(#2)}}
\def\qqquad{\quad\qquad}
\def\qqqquad{\qquad\qquad}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
% \setcounter{section}{}
\centerline{\huge\bf Problem Set 2}
\smallskip
\centerline{\LARGE Hui-Jun Chen}
\medskip
\section*{Instructions}
Due at 11:59 PM (Eastern Time) on Sunday, June 14, 2022.
Please answer this problem set on the Carmen quiz ``Problem Set 2''. In the following problems, the parts in \textbf{\red{red and bold}} appear in the order of the questions that should be answered on the Carmen quiz.
\section*{Problem 1}
Recall the example from Lecture 8.
Consumer: $ \max_{C, l} \ln C + \ln l \quad \text{subject to} \quad C \le w( 1-l ) + \pi $
%
\begin{align}
\text{FOC} \quad
& \frac{C}{l} = w
\label{eq:consumerFOC}
\\
\text{Binding budget constraint} \quad
& C = w ( 1-l ) + \pi
\label{eq:binding_budget}
\\
\text{Time constraint} \quad
& N^{s} = 1 - l
\label{eq:time_budget}
\end{align}
%
Firm: $ \max_{N^{d}} ( N^{d} )^{\frac{1}{2}} - w N^{d} $
%
\begin{align}
\text{FOC} \quad
& \frac{1}{2} ( N^{d} )^{- \frac{1}{2}} = w
\label{eq:firmFOC}
\\
\text{Output definition} \quad
& Y = ( N^{d} )^{\frac{1}{2}}
\label{eq:outputDef}
\\
\text{Profit definition} \quad
& \pi = Y - w N^{d}
\label{eq:profitDef}
\end{align}
%
Market clearing:
%
\begin{align}
N^{s} & = N^{d}
\label{eq:laborClear}
\end{align}
%
Fill in the following blanks in the step-by-step guide to the algebraic calculation:
\begin{enumerate}
\item Step 1: Impose the market clearing condition, shrinking all $ 7 $ equations to \textbf{\red{\underline{\quad 6 \quad}}} equations
Consumer: $ \max_{C, l} \ln C + \ln l \quad \text{subject to} \quad C \le w( 1-l ) + \pi $
%
\begin{align}
\text{FOC} \quad
& \frac{C}{l} = w
\\
\text{Binding budget constraint} \quad
& C = w N + \pi
\\
\text{Time constraint} \quad
& N = 1 - l
\end{align}
%
Firm: $ \max_{N} ( N )^{\frac{1}{2}} - w N $
%
\begin{align}
\text{FOC} \quad
& \frac{1}{2} ( N )^{- \frac{1}{2}} = w
\\
\text{Output definition} \quad
& Y = ( N )^{\frac{1}{2}}
\\
\text{Profit definition} \quad
& \pi = Y - w N
\end{align}
%
\item Step 2: Rewrite $ l $ in terms of $ N $ using $ l = 1-N $
Consumer: $ \max_{C, l} \ln C + \ln l \quad \text{subject to} \quad C \le w( 1-l ) + \pi $
%
\begin{align}
\text{FOC} \quad
& \frac{C}{(\textbf{\red{\underline{\quad $1-N$ \quad}}})} = w
\\
\text{Binding budget constraint} \quad
& C = w (\textbf{\red{\underline{\quad $N$ \quad}}}) + \pi
\end{align}
%
Firm: $ \max_{N} ( N )^{\frac{1}{2}} - w N $
%
\begin{align}
\text{FOC} \quad
& \frac{1}{2} ( N )^{- \frac{1}{2}} = w
\\
\text{Output definition} \quad
& Y = ( N )^{\frac{1}{2}}
\\
\text{Profit definition} \quad
& \pi = Y - w N
\end{align}
%
\item Step 3: Express $ \pi $ and $ Y $ in terms of $ N $
Consumer: $ \max_{C, l} \ln C + \ln l \quad \text{subject to} \quad C \le w( 1-l ) + \pi $
%
\begin{align}
\text{FOC} \quad
& \frac{C}{(\textbf{\red{\underline{\quad $1-N$ \quad}}})} = w
\\
\text{Binding budget constraint} \quad
& C = w (\textbf{\red{\underline{\quad $N$ \quad}}}) + \pi
\end{align}
%
Firm: $ \max_{N} ( N )^{\frac{1}{2}} - w N $
%
\begin{align}
\text{FOC} \quad
& \frac{1}{2} ( N )^{- \frac{1}{2}} = w
\\
\text{Profit definition} \quad
& \pi = (\textbf{\red{\underline{\quad $N^{\frac{1}{2}}$ \quad}}}) - w N
\end{align}
%
\item Step 4: Substitute $ \pi( N ) $ into the binding budget constraint and get
%
\begin{equation}
\label{eq:C_as_function_of_N}
C = (\textbf{\red{\underline{\quad $N^{\frac{1}{2}}$ \quad}}})
\end{equation}
%
\item Step 5: With the consumer's FOC and the firm's FOC both equal to $ w $, we can get another expression for $ C $:
%
\begin{equation}
\label{eq:C_as_function_of_N_ver_2}
C = (\textbf{\red{\underline{\quad $1-N$ \quad}}}) \times (\textbf{\red{\underline{\quad $\frac{1}{2} N^{-\frac{1}{2}}$ \quad}}})
\end{equation}
%
\item Step 6: Setting \eqref{eq:C_as_function_of_N} equal to \eqref{eq:C_as_function_of_N_ver_2}, we get $ N $ as
%
\begin{equation}
\label{eq:Nvalue}
N = (\textbf{\red{\underline{\quad $\frac{1}{3}$ \quad}}})
\end{equation}
%
\item Step 7: Tracing back to all unknowns given the value of $ N $, we get
%
\begin{align}
C
& = (\textbf{\red{\underline{\quad $\sqrt{\frac{1}{3}}$ \quad}}}) \approx 0.577
\\
l
& = (\textbf{\red{\underline{\quad $\frac{2}{3}$ \quad}}}) \approx 0.667
\\
Y
& = (\textbf{\red{\underline{\quad $\sqrt{\frac{1}{3}}$ \quad}}}) \approx 0.577
\\
\pi
& = (\textbf{\red{\underline{\quad $\sqrt{\frac{1}{3}} - \frac{1}{6} \sqrt{3}$ \quad}}}) \approx 0.289
\\
w
& = (\textbf{\red{\underline{\quad $\frac{1}{2} \sqrt{3}$ \quad}}}) \approx 0.866
\end{align}
%
\end{enumerate}
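The steps above can also be checked numerically. The following sketch (Python, with variable names chosen for this illustration) plugs the candidate $N = 1/3$ back into the firm's FOC and the definitions, recovers the remaining unknowns, and verifies that the consumer's FOC holds at the resulting allocation:

```python
from math import sqrt, isclose

# Candidate equilibrium labor supply from Step 6
N = 1 / 3
l = 1 - N                # time constraint: l = 1 - N
w = 0.5 * N ** -0.5      # firm's FOC: (1/2) N^(-1/2) = w
Y = sqrt(N)              # output definition: Y = N^(1/2)
pi = Y - w * N           # profit definition
C = w * N + pi           # binding budget constraint

# Consumer's FOC C / l = w must also hold at the equilibrium
assert isclose(C / l, w)

for name, value in [("C", C), ("l", l), ("Y", Y), ("pi", pi), ("w", w)]:
    print(f"{name} = {value:.3f}")
```

Running this reproduces the rounded values in Step 7 (e.g.\ $C \approx 0.577$, $w \approx 0.866$).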
\end{document}
\section{Evolution, Not Revolution}
\label{evolution}
Akka systems can be smoothly migrated to TAkka systems. In other words,
existing systems can evolve to introduce more types, rather than requiring a
revolution where all actors and interactions must be typed.
The above property is analogous to adding generics to Java programs. Java
generics are carefully designed so that programs without generic types can be
partially replaced by an equivalent generic version (evolution), rather than
requiring generic types everywhere (revolution) \citep{JGC}.
In previous sections, we have seen how to use Akka actors in an Akka
system (Figure \ref{fig:akkastring}) and how to use TAkka actors in a TAkka
system (Figure \ref{takkastring}). In the following, we will explain how to
use TAkka actors in an Akka system and how to use an Akka actor in a TAkka
system.
\begin{figure}[!h]
\begin{lstlisting}[language=scala, escapechar=?]
class TAkkaStringActor extends ?\textcolor{blue}{takka.actor.TypedActor[String]}? {
def ?\textcolor{blue}{typedReceive}? = {
case m:String => println("received message: "+m)
}
}
class MessageHandler(system: akka.actor.ActorSystem) extends akka.actor.Actor {
def receive = {
case akka.actor.UnhandledMessage(message, sender, recipient) =>
println("unhandled message:"+message);
}
}
object TAkkaInAkka extends App {
val akkasystem = akka.actor.ActorSystem("AkkaSystem")
val akkaserver = akkasystem.actorOf(
akka.actor.Props[TAkkaStringActor], "aserver")
val handler = akkasystem.actorOf(
akka.actor.Props(new MessageHandler(akkasystem)))
akkasystem.eventStream.subscribe(handler,
classOf[akka.actor.UnhandledMessage]);
akkaserver ! "Hello Akka"
akkaserver ! 3
val takkasystem = ?\textcolor{blue}{takka}?.actor.ActorSystem("TAkkaSystem")
val typedserver = takkasystem.actorOf(
takka.actor.Props[?\textcolor{blue}{String,}? TAkkaStringActor], "tserver")
  val untypedserver = typedserver.untypedRef
takkasystem.system.eventStream.subscribe(
handler,classOf[akka.actor.UnhandledMessage]);
untypedserver ! "Hello TAkka"
untypedserver ! 4
}
/*
Terminal output:
received message: Hello Akka
unhandled message:3
received message: Hello TAkka
unhandled message:4
*/
\end{lstlisting}
\caption{TAkka actor in Akka application}
\label{takkaINakka}
\end{figure}
\subsection{TAkka actor in Akka system}
It is often the case that an actor-based library is implemented by one
organization but used in a client application implemented by another
organization. If a developer decides to upgrade the library implementation
using TAkka actors, for example, by upgrading the Socko Web Server
\citep{SOCKO}, the Gatling \citep{Gatling} stress testing tool, or the core
library of the Play framework \citep{play_doc}, as we will do in Section
\ref{expressiveness}, will the upgrade affect client code, especially
legacy applications built using the Akka library? Fortunately, TAkka actors and actor
references are implemented using inheritance and delegation respectively so
that no changes are required for legacy applications.
TAkka actors inherit from Akka actors. In Figure \ref{takkaINakka},
the actor implementation is upgraded to the TAkka version as in Figure
\ref{takkastring}. The client code, lines 13 through 23, is the same as the
old Akka version given in Figure \ref{fig:akkastring}. That is, no changes are
required for the client application.
A TAkka actor reference delegates the task of message sending to an
Akka actor reference, its {\tt untypedRef} field. In line 29 in Figure
\ref{takkaINakka}, we get an untyped actor reference from {\tt typedserver}
and
use the untyped actor reference in code where an Akka actor reference is
expected. Because an untyped actor reference accepts messages of any type,
messages of unexpected type may be sent to TAkka actors if an Akka actor
reference is used. As a result, users who are interested in the {\tt
UnhandledMessage} event may subscribe to the event stream as in line 33.
\subsection{Akka Actor in TAkka system}
Sometimes, developers want to update the client code or the API before upgrading
the actor implementation. For example, a developer may not have access to
the actor code; or the library may be large, so the developer may want to
upgrade the library gradually.
Users can initialize a TAkka actor reference by providing an Akka actor
reference and a type parameter. In Figure \ref{akkaINtakka}, we re-use the
Akka actor, initialize the actor in an Akka actor system, and obtain an Akka
actor reference as in Figure \ref{fig:akkastring}. Then, we initialize a TAkka
actor reference, {\tt takkaServer}, which only accepts {\tt String} messages.
\begin{figure}[!h]
\begin{lstlisting}[language=scala, escapechar=?]
class AkkaStringActor extends akka.actor.Actor {
def receive = { case m:String => println("received message: "+m) }
}
object AkkaInTAkka extends App {
val system = akka.actor.ActorSystem("AkkaSystem")
val akkaserver = system.actorOf(
akka.actor.Props[AkkaStringActor], "server")
val takkaServer = new takka.actor.ActorRef?\textcolor{blue}{[String]}?{
val untypedRef = akkaserver
}
takkaServer ! "Hello World"
// takkaServer ! 3
// compile error: type mismatch; found : Int(3)
// required: String
}
/*
Terminal output:
received message: Hello World
*/
\end{lstlisting}
\caption{Akka actor in TAkka application}
\label{akkaINtakka}
\end{figure}
\documentclass[output=paper,biblatex,babelshorthands,newtxmath,draftmode,colorlinks,citecolor=brown]{langscibook}
\ChapterDOI{10.5281/zenodo.5599880}
\IfFileExists{../localcommands.tex}{%hack to check whether this is being compiled as part of a collection or standalone
\usepackage{../nomemoize}
\input{../localpackages}
\input{../localcommands}
\input{../locallangscifixes.tex}
\togglepaper[33]
}{}
\hyphenation{analy-sis}
\author{Richard Hudson\affiliation{University College London}}
\title{HPSG and Dependency Grammar}
\abstract{HPSG assumes Phrase Structure (PS), a partonomy, in contrast with Dependency Grammar (DG), which recognises Dependency Structure (DS), with direct relations between individual words and no multi-word phrases. The chapter presents a brief history of the two approaches, showing that DG matured in the late nineteenth century, long before the influential work by Tesnière, while Phrase Structure Grammar (PSG) started somewhat later with Bloomfield's enthusiastic adoption of Wundt's ideas. Since DG embraces almost as wide a range of approaches as PSG, the rest of the chapter focuses on one version of DG, Word Grammar. The chapter argues that classical DG needs to be enriched in ways that bring it closer to PSG: each dependent actually adds an extra node to the head, but the nodes thus created form a taxonomy, not a partonomy; coordination requires strings; and in some languages the syntactic analysis needs to indicate phrase boundaries. Another proposed extension to bare DG is a separate system of relations for controlling word order, which is reminiscent of the PSG distinction between dominance and precedence. The ``head-driven'' part of HPSG corresponds in Word Grammar to a taxonomy of dependencies which distinguishes grammatical functions, with complex combinations similar to HPSG's re-entrancy. The chapter reviews and rejects the evidence for headless phrases, and ends with the suggestion that HPSG could easily move from PS to DG.}
%\maketitle
\begin{document}
\maketitle
\label{chap-dg}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}
\label{sec:1}
HPSG\indexdgstart is firmly embedded, both theoretically and historically, in the phrase"=structure (PS) tradition of syntactic analysis, but it also has some interesting theoretical links to the dependency"=structure (DS) tradition. This is the topic of the present chapter, so after a very simple comparison of PS and DS and a glance at the development of these two traditions in the history of syntax, I consider a number of issues where the traditions interact.
The basis for PS analysis is the part-whole relation between smaller units (including words) and larger phrases, so the most iconic notation uses boxes \citep[6]{MuellerGT-Eng2}. In contrast, the basis for DS analysis is the asymmetrical dependency relation between two words, so in this case an iconic notation inserts arrows between words. (Although the standard notation in both traditions uses trees, these are less helpful because the lines are open to different interpretations.) The two analyses of a very simple sentence are juxtaposed in Figure~\ref{fig:1}. As in HPSG attribute-value matrices (AVMs), each rectangle represents a unit of analysis.
\begin{figure}
\centering
\begin{tikzpicture}[
every node/.style=frbox,
node distance=0.5em,
font=\strut,
]
\node(many) {many};
\node(students) [base right=of many]{students};
\node(enjoy) [base right=1.5em of students]{enjoy};
\node(syntax) [base right=of enjoy]{syntax};
\node(ms)[fit=(many)(students)]{};
\node(es)[fit=(enjoy)(syntax)]{};
\node[fit=(ms)(es)]{};
\end{tikzpicture}
\vspace{\baselineskip}
\begin{tikzpicture}[node distance=.2cm]
\node[draw](many) at (0,0){\strut{many}};
\node[draw](students) [right=of many]{\strut students};
\node[draw](enjoy) [right=of students]{\strut enjoy};
\node[draw](syntax) [right= of enjoy]{\strut syntax};
\draw[->] (enjoy)[out=north,in=north] to (syntax);
\draw[->] (enjoy)[out=north,in=north] to (students);
\draw[->] ([xshift=-.2cm]students.north)[out=north,in=north] to (many);
\end{tikzpicture}
%
\caption{Phrase structure and dependency structure contrasted}
\label{fig:1}
\end{figure}
In both approaches, each unit has properties such as a classification, a meaning, a form and
relations to other items, but these properties may be thought of in two different ways. In PS
analyses, an item contains its related items, so it also contains its other properties – hence the
familiar AVMs contained within the box for each item. But in DS analyses, an item's related items
are outside it, sitting alongside it in the analysis, so, for consistency, other properties may be
shown as a network in which the item concerned is just one atomic node. This isn't the only possible
notation, but it is the basis for the main DS theory that I shall juxtapose with HPSG, Word Grammar.
\largerpage
What, then, are the distinctive characteristics of the two traditions? In the following summary I
use \emph{item} to include any syntagmatic unit of analysis including morphemes, words and phrases
(though this chapter will not discuss the possible role of morphemes). The following generalisations
apply to classic examples of the two approaches: PS as defined by Chomsky in terms of labelled
bracketed strings \citep{Chomsky57a}, and DS as defined by \citeauthor{Tesniere59a-u}
(\citeyear{Tesniere59a-u,Tesniere2015a-u}). These generalisations refer to ``direct relations'',
which are shown by single lines in standard tree notation; for example, taking a pair of words such
as \emph{big book}, they are related directly in DS, but only indirectly via a mother phrase in
PS. A phenomenon such as agreement is not a relation in this sense, but it applies to word-pairs
which are identified by their relationship; so even if two sisters agree, this does not in itself
constitute a direct relation between them.
\begin{enumerate}
\item\label{it:1} Containment: in PS, but not in DS, if two items are directly related, one must
contain the other. For instance, a PS analysis of \emph{the book} recognises a direct relation (of
dominance) between \emph{book} and \emph{the book}, but not between \emph{book} and \emph{the},
which are directly related only by linear precedence. In contrast, a DS analysis does recognise a
direct relation between \emph{book} and \emph{the} (in addition to the linear precedence
relation).
\item\label{it:2} Continuity: therefore, in PS, but not in DS, all the items contained in a larger
one must be adjacent.
\item\label{it:3} Asymmetry: in both DS and PS, a direct relation between two items must be
asymmetrical, but in DS the relation (between two words) is dependency whereas in PS the relevant relation is the
part-whole relation.
\end{enumerate}
\largerpage
These generalisations imply important theoretical claims which can be tested; for instance,
\ref{it:2} claims that there are no discontinuous phrases, which is clearly false. On the other
hand, \ref{it:3} claims that there can be no exocentric or headless phrases, so DS has to consider
apparent counter-examples such as the NPN construction, coordination and verbless sentences (see
Sections~\ref{sec:4.2} and~\ref{sec:5.1} for discussion, and also
\crossrefchapteralt{coordination}).
The contrasts in \ref{it:1}--\ref{it:3} apply without reservation to ``plain vanilla''
\citep{Zwicky1985} versions of DS and PS, but as we shall see in the history section, very few
theories are plain vanilla. In particular, there are versions of HPSG that allow phrases to be
discontinuous \citep{Reape94a,Kathol2000a,Mueller95c,Babel}. Nevertheless, the fact is that HPSG
evolved out of more or less pure PS, that it includes \emph{phrase structure} in its name, and that
it is never presented as a version of DS.
On the other hand, the term \emph{head-driven} points immediately to dependency: an asymmetrical
relation driven by a head word. Even if HPSG gives some constructions a headless analysis
\citep[654--666]{MuellerGT-Eng2}, the fact remains that it treats most constructions as headed.
This chapter reviews the relations between HPSG and the very long DS tradition of grammatical
analysis. The conclusion will be that in spite of its PS roots, HPSG implicitly (and sometimes even
explicitly) recognises dependencies; and it may not be a coincidence that one of the main
power-bases of HPSG is Germany, where the DS tradition is also at its strongest
\citep[359]{MuellerGT-Eng2}.
Where, then, does this discussion leave the notion of a phrase? In PS, phrases are basic units of
the analysis, alongside words; but even DS recognises phrases indirectly because they are easily
defined in terms of dependencies as a word plus all the words which depend, directly or indirectly,
on it. Although phrases play no part in a DS analysis, it is sometimes useful to be able to refer to
them informally (in much the same way that some PS grammars refer to grammatical functions
informally while denying them any formal status).
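This DS view of phrases is easy to make concrete: a phrase is simply the projection of a word under the dependency relation. The following sketch (a hypothetical illustration in Python, not part of any cited formalism; the `deps` map and `phrase` function are invented here) recovers the phrases of the sentence from Figure~\ref{fig:1} given only a head-to-dependents map:

```python
# A "phrase" in DS terms: a word plus all words that depend on it,
# directly or indirectly. Head -> dependents map for
# "many students enjoy syntax", with 'enjoy' as the root:
deps = {"enjoy": ["students", "syntax"], "students": ["many"]}

def phrase(word):
    """Return the set of words in the projection of `word`."""
    words = {word}
    for dependent in deps.get(word, []):
        words |= phrase(dependent)
    return words

print(sorted(phrase("students")))  # ['many', 'students']: the noun phrase
print(sorted(phrase("enjoy")))     # the whole sentence
```

Nothing in the dependency analysis itself mentions these sets; they fall out of the dependencies for free, which is the sense in which DS recognises phrases only indirectly.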
Why, then, does HPSG use PS rather than DS? As far as I know, PS was simply default syntax in the
circles where HPSG evolved, so the choice of PS isn't the result of a conscious decision by the
founders, and I hope that this chapter will show that this is a serious question which deserves
discussion.\footnote{%
Indeed, I once wrote a paper (which was never published) called ``Taking the PS out of HPSG'' – a
title I was proud of until I noticed that PS was open to misreading, not least as ``Pollard and
Sag''. Carl and Ivan took it well, and I think Carl may even have entertained the possibility that
I might be right – possibly because he had previously espoused a theory called ``Head Grammar''
(HG). See also \crossrefchapterw[Section~\ref{evolution:sec-head-grammar}]{evolution} on
Head Grammar and the evolution of HPSG.
I hasten to add that while the PS view might have been the approach available at the time,
there have been many researchers thinking carefully about issues concerning general phrase structure
vs.\ dependency. For example, one general dependency structure is argued to be insufficient to
account for complex predicates\is{complex predicate} (\citealt{AG2010a-u}; \crossrefchapteralt{complex-predicates}) and \isi{negation}
(\citealt{KS2002a}; \crossrefchapteralt{negation}). See also \citew[Section~11.7]{MuellerGT-Eng4} for discussion of analyses of
\citet{Eroms2000a}, \citet{GO2009a}, and others, and a general comparison of phrase structure and dependency approaches.
}
Unfortunately, the historical roots and the general dominance of PS have so far discouraged discussion of this fundamental question.
\largerpage
HPSG is a theoretical package where PS is linked intimately to a collection of other assumptions;
and the same is true for any theory which includes DS, including my own Word Grammar
\citep{Hudson84a-u,Hudson90a-u,Hudson1998,Hudson2007a-u,Hudson2010b-u,Gisborne2010,GisborneTBA,Duran-Eppler2011,TraugottTrousdale2013}. Among
the other assumptions of HPSG I find welcome similarities, not least the use of default inheritance
in some versions of the theory. I shall argue below that inheritance offers a novel solution to one
of the outstanding challenges for the dependency tradition.
The next section sets the historical scene. This is important because it's all too easy for students
to get the impression (mentioned above) that PS is just default syntax, and maybe even the same as
``traditional grammar''. We shall see that grammar has a very long and rather complicated history in
which the default is actually DS rather than PS. Later sections then address particular issues
shared by HPSG and the dependency tradition.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Dependency and constituency in the history of syntax}
\label{sec:2}
The relevant history of syntax starts more than two thousand years ago in Greece. (Indian syntax may
have started even earlier, but it is hardly relevant because it had so little impact on the European
tradition.) Greek and Roman grammarians focused on the morphosyntactic properties of
individual words, but since these languages included a rich case system, they were aware of the
syntactic effects of verbs and prepositions governing particular cases. However, this didn't lead
them to think about syntactic relations, as such; precisely because of the case distinctions, they
could easily distinguish a verb's dependents in terms of their cases: ``its nominative'', ``its
accusative'' and so on \citep[29]{Robins1967}. Both the selecting verb or preposition and the item
carrying the case inflection were single words, so the Latin grammar of Priscian, written about 500
AD and still in use a thousand years later, recognised no units larger than the word: ``his model of
syntax was word-based – a dependency model rather than a constituency model''
\citep[91]{Law2003}. However, it was a dependency model without the notion of dependency as a
relation between words.
The dependency relation, as such, seems to have been first identified by the Arabic grammarian
Sibawayh in the eighth century \citep{Owens1988,Kouloughli1999}. However, it is hard to rule out the
possibility of influence from the then"=flourishing Paninian tradition in India, and in any case it
doesn't seem to have had any more influence on the European tradition than did Panini's syntax, so
it is probably irrelevant.
\largerpage[1.5]
In Europe, grammar teaching in schools was based on parsing (in its original sense), an activity
which was formalised in the ninth century \citep{Luhtala1994}. The activity of parsing was a
sophisticated test of grammatical understanding which earned the central place in school work that
it held for centuries – in fact, right up to the 1950s (when I myself did parsing at school) and
maybe beyond. In HPSG terms, school children learned a standard list of attributes for words of
different classes, and in parsing a particular word in a sentence, their task was to provide the
values for its attributes, including its grammatical function (which would explain its case). In the
early centuries the language was Latin, but more recently it was the vernacular (in my case,
English).
Alongside these purely grammatical analyses, the Ancient World had also recognised a logical one,
due to Aristotle, in which the basic elements of a proposition (\emph{logos}) are the logical
subject (\emph{onoma}) and the predicate (\emph{rhēma}). For Aristotle a statement such as
``Socrates ran'' requires the recognition both of the person Socrates and of the property of
running, neither of which could constitute a statement on its own \citep[30–31]{Law2003}. By the
twelfth century, grammarians started to apply a similar analysis to sentences; but in recognition of
the difference between logic and grammar they replaced the logicians' \emph{subiectum} and
\emph{praedicatum} by \emph{suppositum} and \emph{appositum} – though the logical terms would creep
into grammar by the late eighteenth century \citep[168]{Law2003}. This logical approach produced the
first top-down analysis in which a larger unit (the logician's proposition or the grammarian's
sentence) has parts, but the parts were still single words, so \emph{onoma} and \emph{rhēma} can now
be translated as ``noun'' and ``verb''. If the noun or verb was accompanied by other words, the
older dependency analysis applied.
The result of this confusion of grammar with logic was a muddled hybrid analysis in the
Latin/Greek tradition which combines a headless subject"=predicate analysis with a headed
analysis elsewhere, and which persists even today in some school grammars; this confusion took
centuries to sort out in grammatical theory. For the subject and verb, the prestige of Aristotle and
logic supported a subject-verb division of the sentence (or clause) in which the subject noun and
the verb were both equally essential – a very different analysis from modern first-order logic in
which the subject is just one argument (among many) which depends on the predicate. Moreover the
grammatical tradition even includes a surprising number of analyses in which the subject noun is the
head of the construction, ranging from the modistic grammarians of the twelfth century
\citep[83]{Robins1967}, through Henry Sweet \citep[17]{Sweet1891}, to no less a figure than Otto
Jespersen in the twentieth \citep{Jespersen37a-u}, who distinguished ``junction'' (dependency) from
``nexus'' (predication) and treated the noun in both constructions as ``primary''.
\largerpage
%\enlargethispage{8pt}
The first grammarians to recognise a consistently dependency-based analysis for the rest of the
sentence (but not for the subject and verb) seem to have been the French \emph{encyclopédistes} of
the eighteenth century \citep{Kahane2020a-u}, and, by the nineteenth century, much of Europe accepted a
theory of sentence structure based on dependencies, but with the subject-predicate analysis as an
exception – an analysis which by modern standards is muddled and complicated. Each of these units
was a single word, not a phrase, and modern phrases were recognised only indirectly by allowing the
subject and predicate to be expanded by dependents; so nobody ever suggested there might be such a
thing as a noun phrase until the late nineteenth century. Function words such as prepositions had no
proper position, being treated typically as though they were case inflections.
The invention of syntactic diagrams in the nineteenth century made the inconsistency of the hybrid
analysis obvious. The first such diagram was published in a German grammar of Latin for school
children \citep{Billroth1832}, and the nineteenth century saw a proliferation of diagramming
systems, including the famous Reed-Kellogg diagrams which are still taught (under the simple name
``diagramming'') in some American schools \citep{ReedKellog1890}; indeed, there is a website which
generates such diagrams, one of which is reproduced in Figure~\ref{fig:2}.%
%
\footnote{See a small selection of diagramming systems at
\url{http://dickhudson.com/sentence-diagramming/} (last access 2021-03-31), and the website
Sentence Diagrammer by 1aiway.}
%
The significant feature of this diagram is the special treatment given to the relation between the
subject and predicate (with the verb \emph{are} sitting uncomfortably between the two), with all the
other words in the sentence linked by more or less straightforward dependencies. (The geometry of
these diagrams also distinguishes grammatical functions.)
\begin{figure}
\centering
\begin{tikzpicture}
\draw (0,0) to [edge label=sentences] (4,0);
\draw (4,.5) to (4,-.5);
\draw (2,0) to [edge label=~~~like,sloped] (3,-1);
\draw (3,-1) to [edge label=this] (5,-1);
\draw (4,0) to [edge label=are] (6,0);
\draw (6,0) to (5.5,.5);
\draw (6,0) to [edge label=easy,sloped] (8,0);
\draw (7,0) to [edge label=to,sloped] (8,-1);
\draw (8,-1) to [edge label=diagram] (10,-1);
\end{tikzpicture}
\caption{Reed-Kellogg diagram by Sentence Diagrammer}
\label{fig:2}
\end{figure}
One particularly interesting (and relevant) fact about Reed and Kellogg is that they offer an analysis of \emph{that old wooden house} in which each modifier creates a new unit to which the next modifier applies: \emph{wooden house}, then \emph{old wooden house} \citep[18]{Percival1976} – a clear hint at more modern structures (including the ones proposed in Section~\ref{sec:4.1}), albeit one that sits uncomfortably with plain-vanilla dependency structure.
%\largerpage
However, even in the nineteenth century, there were grammarians who questioned the hybrid tradition
which combined the subject-predicate distinction with dependencies. Rather remarkably, three
different grammarians seem to have independently reached the same conclusion at roughly the same
time: hybrid structures can be replaced by a homogeneous structure if we take the finite verb as the
root of the whole sentence, with the subject as one of its dependents. This idea seems to have been
first proposed in print in 1873 by the Hungarian Sámuel Brassai
\citep{Imrenyi2013a,ImrenyiVladar2020a-u}; in 1877 by the Russian Aleksej Dmitrievsky
\citep{Seriot2004}; and in 1884 by the German Franz Kern \citep{Kern1884a-u}. Both Brassai and
Kern used diagrams to present their analyses, and used precisely the same tree structures which
Lucien Tesnière in France called \emph{stemmas} nearly fifty years later
\citep{Tesniere59a-u,Tesniere2015a-u}. The diagrams have both been redrawn here as
Figures~\ref{fig:3} and~\ref{fig:4}.
\begin{figure}
\centering
\begin{forest}
[\emph{tenebat}\\governing verb
[\emph{flentem}\\dependent]
[\emph{Uxor}\\dependent
[\emph{amans}\\attribute]
[\emph{ipsa}\\attribute]
[\emph{flens}\\attribute
[\emph{acrius}\\tertiary\\dependent]
]
]
[$\overbrace{\emph{imbre cadente}}$\\dependent
[\emph{usque}\\secondary\\dependent]
[$\overbrace{\emph{per genas}}$\\secondary\\dependent
[\emph{indignas}\\tertiary\\dependent]
]
]
]
\end{forest}
\caption{A verb-rooted tree published in 1873 by Brassai, quoted from \citew[\page 174]{ImrenyiVladar2020a-u}}\label{fig:3}
\end{figure}
Brassai's proposal is contained in a school grammar of Latin, so the example is also from Latin – an
extraordinarily complex sentence which certainly merits a diagram because the word order obscures the grammatical relations, which can be reconstructed only by paying attention to the morphosyntax. For example, \emph{flentem} and \emph{flens} both mean `crying', but their distinct case marking links them to different nouns, so the nominative \emph{flens} can modify nominative \emph{uxor} `wife', while the accusative \emph{flentem} denotes a distinct individual glossed as `the crying one'.
\ea
\label{ex:1}
\longexampleandlanguage{
\gll Uxor am-ans fl-ent-em fl-ens acr-ius ips-a ten-eb-at, imbr-e per in-dign-as usque cad-ent-e gen-as.\\
wife\textsc{.f.sg.nom} love\textsc{-ptcp.f.sg.nom} cry\textsc{-ptcp-m.sg.acc} cry\textsc{-ptcp.f.sg.nom} bitterly-more self\textsc{-f.sg.nom} hug\textsc{-pst-3sg} shower\textsc{-m.sg.abl} on un-becoming\textsc{-f.pl.acc} continuously fall\textsc{-ptcp-m.sg.abl} cheeks\textsc{-f.pl.acc}\\}{Latin}
%\gll Uxor amans flentem flens acr-ius ipsa tenebat, imbre per indignas usque cadente genas.\\
% wife love cry cry bitterly-more self hug shower on unbecoming continuously fall cheeks\\
\glt `The wife, herself even more bitterly crying, was hugging the crying one, while a shower [of tears] was falling on her unbecoming cheeks [i.e.\ cheeks to which tears are unbecoming].'
\z
Brassai's diagram, including grammatical functions as translated by the authors
\citep{ImrenyiVladar2020a-u}, is in Figure~\ref{fig:3}. The awkward horizontal braces should not be seen
as a nod in the direction of classical PS, given that the bracketed words are not even adjacent in
the sentence analysed. Kern's tree in Figure~\ref{fig:4}, on the other hand, is for the German
sentence in (\ref{ex:2}).
\ea
\label{ex:2}
\longexampleandlanguage{
\gll Ein-e stolz-e Krähe schmück-t-e sich mit d-en aus-ge-fall-en-en Feder-n d-er Pfau-en.\\
a\textsc{-f.sg.nom} proud\textsc{-f.sg.nom} crow(\textsc{f})\textsc{.sg.nom} decorate\textsc{-pst}\textsc{-3sg} self\textsc{.acc} with the\textsc{-pl.dat} out-\textsc{ptcp}-fall-\textsc{ptcp}-\textsc{pl.dat} feather\textsc{-pl.dat} the\textsc{-pl.gen} peacock-\textsc{pl.gen}\\}{German}
% \gll Eine stolze Krähe schmückte sich mit den ausgefallenen Federn der Pfauen.\\
% a proud crow decorated self.\textsc{acc} with the.\textsc{dat} out-fallen.\dat{} feathers.\dat{} the.\textsc{gen} peacocks.\textsc{gen}\\
\glt `A proud crow decorated himself with the dropped feathers of the peacocks.'
\z
Once again, the original diagram includes function terms, translated here into English.
\begin{figure}
\centering
\begin{forest}
[finite verb\\\emph{schmückte}
[subject word\\\emph{Krähe}
[counter\\\emph{eine}]
[attributive adjective\\\emph{stolze}]
]
[object\\\emph{sich}]
[case with preposition\\\emph{mit Federn}
[pointer\\\emph{den}]
[attributive adjective\\(participle)\\\emph{ausgefallenen}]
[genitive\\\emph{Pfauen}
[pointer\\\emph{der}]
]
]
]
\end{forest}
\caption{A verb-rooted tree from \citet[\page 30]{Kern1884a-u}}
\label{fig:4}
\end{figure}
Once again the analysis gives up on prepositions, treating \emph{mit Federn} `with feathers' as a
single word, but Figure~\ref{fig:4} is an impressive attempt at a coherent analysis which would have
provided an excellent foundation for the explosion of syntax in the next century. According to the
classic history of dependency grammar, in this approach,
\begin{quotation} [\dots] the sentence is not a basic grammatical unit, but merely results from
combinations of words, and therefore [\dots] the only truly basic grammatical unit is the word. A
language, viewed from this perspective, is a collection of words and ways of using them in
word-groups, i.e., expressions of varying length. \citep{Percival2007}
\end{quotation}
But the vagaries of intellectual history and geography worked against this intellectual
breakthrough. When Leonard Bloomfield was looking for a theoretical basis for syntax, he could have
built on what he had learned at school:
\begin{quotation} [\dots] we do not know and may never know what system of grammatical analysis
Bloomfield was exposed to as a schoolboy, but it is clear that some of the basic conceptual and
terminological ingredients of the system that he was to present in his 1914 and 1933 books were
already in use in school grammars of English current in the United States in the nineteenth
century. Above all, the notion of sentence ``analysis'', whether diagrammable or not, had been
applied in those grammars. \citep{Percival2007}
\end{quotation}
And when he visited Germany in 1913--1914, he might have learned about Kern's ideas, which were
already influential there. But instead, he adopted the syntax of the German psychologist
Wilhelm Wundt. Wundt's theory applied to meaning rather than syntax, and was based on a single idea:
that every idea consists of a subject and a predicate. For example, a phrase meaning ``a sincerely
thinking person'' has two parts: \emph{a person} and \emph{thinks sincerely}; and the latter breaks
down, regardless of the grammar, into the noun \emph{thought} and \emph{is sincere}
\citep[\page 239]{Percival1976}.
\largerpage
For all its reliance on logic rather than grammar, the analysis is a clear precursor to
neo-Bloomfieldian trees: it recognises a single consistent part-whole relationship (a partonomy)
which applies recursively. This, then, is the beginning of the PS tradition: an analysis based
purely on meaning as filtered through a speculative theory of cognition – an unpromising start for a
theory of syntax. However, Bloomfield's school experience presumably explains why he combined
Wundt's partonomies with the hybrid structures of Reed-Kellogg diagrams in his classification of
structures as endocentric (headed) or exocentric (headless). For him, exocentric constructions
include the subject-predicate structure and preposition phrases, both of which were problematic in
sentence analysis at school. Consequently, his Immediate Constituent Analysis (ICA) perpetuated the
old hybrid mixture of headed and headless structures.
The DS elements of ICA are important in evaluating the history of PS, because they contradict the
standard view of history expressed here:
\begin{quotation}
Within the Bloomfieldian tradition, there was a fair degree of consensus regarding the application
of syntactic methods as well as about the analyses associated with different classes of
constructions. Some of the \pagebreak{}general features of IC analyses find an obvious reflex in subsequent
models of analysis. Foremost among these is the idea that structure involves a part–whole relation
between elements and a larger superordinate unit, rather than an asymmetrical dependency relation
between elements at the same level. \citep[202–203]{BlevinsSag2013}
\end{quotation}
%
This quotation implies, wrongly, that ICA rejected DS altogether.
What is most noticeable about the story so far is that, even in the 1950s, we still haven't seen an
example of pure phrase structure. Every theory visited so far has recognised dependency relations in
at least some constructions. Even Bloomfieldian ICA had a place for dependencies, though it
introduced the idea that dependents might be phrases rather than single words and it rejected the
traditional grammatical functions such as subject and object. Reacting against the latter gap, and
presumably remembering their schoolroom training, some linguists developed syntactic theories which
were based on constituent structure but which did have a place for grammatical functions, though not
for dependency as such. The most famous of these theories are Tagmemics \citep{Pike1954} and
Systemic Functional Grammar \citep{Halliday1961,Halliday67b-u}. However, in spite of its very
doubtful parentage and its very brief history, by the 1950s virtually every linguist in America
seemed to accept without question the idea that syntactic structure was a partonomy.
This is the world in which Noam Chomsky introduced phrase structure, which he presented as a
formalisation of ICA, arguing that ``customarily, linguistic description on the syntactic level is
formulated in terms of constituent analysis (parsing)'' \citep[26]{Chomsky57a}. But such analysis
was only ``customary'' among the Bloomfieldians, and was certainly not part of the classroom
activity of parsing \citep[147]{Matthews1993}.
\largerpage
Chomsky's phrase structure continued the drive towards homogeneity which had led to most of the
developments in syntactic theory since the early nineteenth century. Unfortunately, Chomsky
dismissed both dependencies and grammatical functions as irrelevant clutter, leaving nothing but
part-whole relations, category-labels, continuity and sequential order.
Rather remarkably, the theory of phrase structure implied the (psychologically implausible) claim
that sideways relations such as dependencies between individual words are impossible in a syntactic
tree – or at least that, even if they are psychologically possible, they can (and should) be ignored
in a formal model. Less surprisingly, having defined PS in this way, Chomsky could easily prove that
it was inadequate and needed to be greatly expanded beyond the plain-vanilla version. His solution
was the introduction of transformations, but it was only thirteen years before he also recognised
the need for some recognition of head-dependent asymmetries in X-bar theory \citep{Chomsky70a}. At
the same time, others had objected to transformations and started to develop other ways of making PS
adequate. One idea was to include grammatical functions; this idea was developed variously in LFG
\citep{Bresnan78a,Bresnan2001a}, Relational Grammar \citep{PP83a-u,Blake1990} and Functional Grammar
\citep{Dik1989,Siewierska1991}. Another way forward was to greatly enrich the categories
\citep{Harman63a} as in GPSG \citep{GKPS85a} and HPSG \citep{ps2}.
Meanwhile, the European ideas about syntactic structure culminating in Kern's tree diagram developed
rather more slowly. Lucien Tesnière in France wrote the first full theoretical discussion of DS in
1939, but it was not published till 1959 \citep{Tesniere59a-u,Tesniere2015a-u}, complete with
stemmas looking like the diagrams produced seventy years earlier by Brassai and Kern. Somewhat
later, these ideas were built into theoretical packages in which DS was bundled with various other
assumptions about levels and abstractness. Here the leading players were from Eastern Europe, where
DS flourished: the Russian Igor Mel’čuk \citep{Melcuk88a-u}, who combined DS with multiple
analytical levels, and the Czech linguists Petr Sgall, Eva Hajičová and Jarmila Panevová
\citep{Sgall&co1986}, who included information structure. My own theory Word Grammar (developed,
exceptionally, in the UK), also stems from the 1980s
\citep{Hudson84a-u,Hudson90a-u,Sugayama2002,Hudson2007a-u,Gisborne2008,Rosta2008,Gisborne2010,Hudson2010b-u,Gisborne2011,Duran-Eppler2011,TraugottTrousdale2013,Duran-Eppler&co2016,Hudson2016,Hudson2017,Hudson2018a,Gisborne2019}. This
is the theory which I compare below with HPSG, but it is important to remember that other DS
theories would give very different answers to some of the questions that I raise.
\largerpage[2]
DS certainly has a low profile in theoretical linguistics, and especially so in anglophone countries, but there is an area of linguistics where its profile is much higher (and which is of particular interest to the HPSG community): natural-language processing \citep{KMcDN2009a-u}. For example:
\begin{itemize}
\item the Wikipedia entry for ``Treebank'' classifies 228 of its 274 treebanks as using DS.%
%
\footnote{\url{https://en.wikipedia.org/wiki/Treebank} (last access 2021-04-06).}%
%
\item The ``Universal Dependencies'' website lists almost 200 dependency-based treebanks
for over 100 languages.%
%
\footnote{\url{https://universaldependencies.org/} (last access 2021-04-06).}%
%
\item Google's n-gram facility allows searches based on dependencies.%
%
\footnote{\url{https://books.google.com/ngrams/info} and search for ``dependency'' (last access 2021-04-06).}%
%
\item The Stanford Parser \citep{ChenManning2014,deMarneffe&co2014} uses DS.%
%
\footnote{\url{https://nlp.stanford.edu/software/stanford-dependencies.shtml} (last access 2021-04-06).}%
%
\end{itemize}
The attraction of DS in NLP is that the only units of analysis are words, so at least these units
are given in the raw data and the overall analysis can immediately be broken down into a much
simpler analysis for each word. This is as true for a linguist building a treebank as it was for a
school teacher teaching children to parse words in a grammar lesson. Of course, as we all know, the
analysis actually demands a global view of the entire sentence, but at least in simple examples a
bottom-up word-based view will also give the right result.
To summarise this historical survey, PS is a recent arrival, and is not yet a hundred years
old. Previous syntacticians had never considered the possibility of basing syntactic analysis on a
partonomy. Instead, it had seemed obvious that syntax was literally about how words (not phrases)
combined with one another.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{HPSG and Word Grammar}
\label{sec:3}
The\is{Word Grammar|(} rest of this chapter considers a number of crucial issues that differentiate PS from DS by
focusing specifically on how they distinguish two particular manifestations of these traditions,
HPSG and Word Grammar (WG). The main question is, of course, how strong the evidence is for the PS
basis of HPSG, and how easily this basis could be replaced by DS.
The comparison requires some understanding of WG, so what follows is a brief tutorial on the parts
of the theory which will be relevant in the following discussion. Like HPSG, WG combines claims
about syntactic relations with a number of other assumptions; but for WG, the main assumption is the
Cognitive Principle:
\eanoraggedright
\label{it:CogPrin}
The Cognitive Principle:\\
Language uses the same general cognitive processes and resources as general cognition, and has access to all of them.
\z
%
\largerpage[2]
This principle is of course merely a hypothesis which may turn out to be wrong, but so far it seems
correct \citep[494]{MuellerGT-Eng2}, and it is more compatible with HPSG than with the innatist
ideas underlying Chomskyan linguistics \citep*{Berwick:Chomsky2013a-u}. In WG, it plays an important
part because it determines other parts of the theory.
On the one hand, cognitive psychologists tend to see knowledge as a network of related concepts
\citep[252]{Reisberg2007}, so WG also assumes that the whole of language, including grammar, is a
conceptual network (\citealt[1]{Hudson84a-u}; \citeyear[1]{Hudson2007a-u}). One of the consequences
is that the AVMs of HPSG are presented instead as labelled network links; for example, we can
compare the elementary example in (\mex{1}) of the HPSG lexical item for a \ili{German} noun
\citep[264]{MuellerGT-Eng2} with an exact translation using WG notation.
\ea
\label{fig:5}
AVM for the \ili{German} noun \emph{Grammatik}:\\
\avm{
[\type*{word}
phonology & <\type{Grammatik}\,> \\
syntax-semantics \ldots & [\type*{local}
category & [\type*{category}
head & [\type*{noun}
case & \1] \\
spr & < Det![case \1]! > \\
\ldots ] \\
content & \ldots[\type*{grammatik}
inst & X ] ] ]
}
%\vspace{-\baselineskip}
\z
HPSG regards AVMs as equivalent to networks, so translating this AVM into network notation is
straightforward; however, it is visually complicated, so I take it in two steps. First I introduce
the basic notation in Figure~\ref{fig:6}: a small triangle showing that the lexeme
\textsc{grammatik} ``isa'' word, and a headed arrow representing a labelled attribute (here,
``phonology'') and pointing to its value. The names of entities and attributes are enclosed in
rectangles and ellipses respectively.
\begin{figure}
\centering
\begin{tikzpicture}[node distance=3cm]
\node[draw](word) at (0,1.5){word};
\node[draw](grammatik) at (0,0){\textsc{grammatik}};
\node[draw,ellipse](phonology) [right of=grammatik]{phonology};
\node[draw](Grammatik) [right of=phonology]{\emph{Grammatik}};
\draw[<-,>=open triangle 90 reversed] (word) to (grammatik);
\draw (phonology) to (grammatik);
\draw[->] (phonology) to (Grammatik);
\end{tikzpicture}
\caption{Basic WG network notation, illustrated with the German noun \emph{Grammatik} `grammar'}
\label{fig:6}
\end{figure}
The rest of the AVM translates quite smoothly (ignoring the list for \spr), giving Figure~\ref{fig:7}, though an actual WG analysis would be rather different in ways that are irrelevant here.
\begin{figure}
\centering
\begin{tikzpicture}[node distance=1.8cm]
%relative placement of nodes starting from 1
\node[draw](1) at (0,0){1};
\node[draw,ellipse](case1)[above left of=1]{case};
\node[draw,ellipse](case2)[above right of=1]{case};
\node[draw](noun)[above left of=case1]{\emph{noun}};
\node[draw](det)[above right of=case2]{\emph{det}};
\node[draw,ellipse](head)[above right of=noun]{head};
\node[draw,ellipse](spr)[above left of=det]{spr};
\node[draw](category1)[above left of=spr]{\emph{category}};
\node[draw,ellipse](category2)[above right of=category1]{category};
\node[draw](local)[above right of=category2]{\emph{local}};
\node[draw,ellipse](content)[below right of=local]{content};
\node[draw](grammatik1)[below right of=content]{\emph{grammatik}};
\node[draw,ellipse](inst)[below of=grammatik1]{inst};
\node[draw](X)[below of=inst]{X};
\node[draw,ellipse](synsem)[above of=local]{syntax-semantics};
\node[draw](grammatik2)[above of=synsem]{\textsc{grammatik}};
%arrows (each arrow is split into two halves because I didn't find a way to whitefill the ellipses afterwards)
\draw (grammatik2) to (synsem);
\draw[->] (synsem) to (local);
\draw (local) to (category2);
\draw (local) to (content);
\draw[->] (category2) to (category1);
\draw[->] (content) to (grammatik1);
\draw (grammatik1) to (inst);
\draw[->] (inst) to (X);
\draw (category1) to (head);
\draw (category1) to (spr);
\draw[->] (head) to (noun);
\draw[->] (spr) to (det);
\draw (noun) to (case1);
\draw (det) to (case2);
\draw[->] (case1) to (1);
\draw[->] (case2) to (1);
\end{tikzpicture}
\caption{The German noun \emph{Grammatik} `grammar' in a WG network}
\label{fig:7}
\end{figure}
The other difference based on cognitive psychology between HPSG and WG is that many cognitive psychologists argue that concepts are built around prototypes \citep{Rosch1973,Taylor1995}, clear cases with a periphery of exceptional cases. This claim implies the logic of default inheritance \citep{BCdP93a-ed}, which is popular in AI, though less so in logic. In HPSG, default inheritance is accepted by some \citep{LC99a} but not by others \citep[403]{MuellerGT-Eng2}, whereas in WG it plays a fundamental role, as I show in Section~\ref{sec:4.1} below. WG uses the \rel{isa} relation to carry default inheritance, and avoids the problems of non-monotonic inheritance by restricting inheritance to node-creation \citep[18]{Hudson2018a}. Once again, the difference is highly relevant to the comparison of PS and DS because one of the basic questions is whether syntactic structures involve partonomies (based on whole:part relations) or taxonomies (based on the \rel{isa} relation). (I argue in Section~\ref{sec:4.1} that taxonomies exist within the structure of a sentence thanks to \rel{isa} relations between tokens and sub-tokens.)
\enlargethispage{3pt}
Default inheritance leads to an interesting comparison of the ways in which the two theories treat
attributes. On the one hand, they both recognise a taxonomy in which some attributes are grouped
together as similar; for example, the HPSG analysis in (\ref{fig:5}) classifies the attributes
\textsc{category} and \textsc{content} as \textsc{local}, and within \textsc{category} it
distinguishes the \textsc{head} and \textsc{specifier} attributes. In WG, attributes are called
relations, and they too form a taxonomy. The \pagebreak{}simplest examples to present are the
traditional grammatical functions, which are all subtypes of ``dependent''; for example, ``object''
\rel{isa} ``complement'', which \rel{isa} ``valent'', which \rel{isa} ``dependent'', as shown in
Figure~\ref{fig:8} (which begs a number of analytical questions such as the status of depictive
predicatives, which are not complements).
\begin{figure}
\centering
\begin{tikzpicture}[>=open triangle 90 reversed]
\node[shape=ellipse,draw](object) at (0,0) {object};
\node[shape=ellipse,draw](complement) at (0,1.5) {complement};
\node[shape=ellipse,draw](valent) at (0,3) {\strut{valent}};
\node[shape=ellipse,draw](dependent) at (0,4.5) {dependent};
\node[shape=ellipse,draw](predicative) at (3,0) {predicative};
\node[shape=ellipse,draw](subject) at (3,1.5) {subject};
\node[shape=ellipse,draw](adjunct) at (3,3) {adjunct};
\draw[<-] (dependent) to (valent);
\draw (adjunct) to ([yshift=-.13cm]dependent.south);
\draw[<-] (valent) to (complement);
\draw (subject) to ([yshift=-.13cm]valent.south);
\draw[<-] (complement) to (object);
\draw (predicative) to ([yshift=-.13cm]complement.south);
\end{tikzpicture}
\caption{A WG taxonomy of grammatical functions}
\label{fig:8}
\end{figure}
In spite of the differences in the categories recognised, the formal similarity is striking. On the
other hand, there is also an important formal difference in the roles played by these taxonomies. In
spite of interesting work on default inheritance \citep{LC99a}, most versions of HPSG allow
generalisations but not exceptions (``If one formulates a restriction on a supertype, this
automatically affects all of its subtypes''; \citealt[275]{MuellerGT-Eng2}), whereas in WG the usual
logic of default inheritance applies so exceptions are possible. These are easy to illustrate from
word order, which (as explained in Section~\ref{sec:4.4}) is normally inherited from dependencies: a
verb's subject normally precedes it, but an inverted subject (the subject of an inverted auxiliary
verb, as in \emph{did he}) follows it.
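The default-and-override logic just described can be sketched in a few lines of Python. This is my own toy illustration, not WG machinery: the \texttt{Node} class, its attribute names and the \texttt{position} values are all invented for the example; the only point it demonstrates is that a locally stated fact overrides whatever would otherwise be inherited up the \rel{isa} chain.

```python
# Toy sketch of default inheritance over an "isa" taxonomy (invented
# representation, not WG notation): each node stores locally stated
# facts plus a link to its "isa" parent.

class Node:
    def __init__(self, name, isa=None, **attrs):
        self.name = name
        self.isa = isa      # parent in the taxonomy, or None
        self.attrs = attrs  # locally stated (possibly exceptional) facts

    def get(self, attr):
        """Return attr from the nearest node up the isa chain that states
        it; a local value overrides (is an exception to) the default."""
        node = self
        while node is not None:
            if attr in node.attrs:
                return node.attrs[attr]
            node = node.isa
        raise KeyError(attr)

# Default: a verb's subject precedes it.
subject = Node("subject", position="before verb")
# Exception: the subject of an inverted auxiliary follows it.
inverted_subject = Node("inverted-subject", isa=subject,
                        position="after verb")
# A subtype with no local statement inherits the default unchanged.
clausal_subject = Node("clausal-subject", isa=subject)

print(subject.get("position"))           # before verb
print(inverted_subject.get("position"))  # after verb (override)
print(clausal_subject.get("position"))   # before verb (inherited)
```

The override is monotonic from the point of view of the subtype: \texttt{inverted\_subject} never consults the default because its own statement is found first.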
\largerpage
Another reason for discussing default inheritance and the \rel{isa} relation is to explain that WG,
just like HPSG, is a constraint-based theory. In HPSG, a sentence is grammatical if it can be
modelled given the structures and lexicon provided by the grammar, which are combined with each
other by inserting less complex structures into daughter slots of more complex
structures. Similarly, in WG it is grammatical if its word tokens can all be inherited from entries
in the grammar (which also includes the entire lexicon). Within the grammar, these may involve
overrides, but overrides between the grammar and the word tokens imply some degree of
ungrammaticality. For instance, \emph{He slept} is grammatical because all the properties of
\emph{he} and \emph{slept} (including their syntactic properties such as the word order that can be
inherited from their grammatical function) can be inherited directly from the grammar, whereas
*\emph{Slept he} is ungrammatical in that the order of words is exceptional, and the exception is
not licensed by the grammar.
This completes the tutorial on WG, so we are now ready to consider the issues that distinguish HPSG
from this particular version of DS. In preparation for this discussion, I return to the three
distinguishing assumptions about classical PS and DS theories given earlier as~\ref{it:1}
to~\ref{it:3}, and repeated here:
\begin{enumerate}
\item Containment: in PS, but not in DS, if two items are directly related, one must contain the
other.
\item Continuity: therefore, in PS, but not in DS, all the items contained in a larger one
must be adjacent.
\largerpage
\item Asymmetry: in both DS and PS, a direct relation between two items must be
asymmetrical, but in DS the relation (between two words) is dependency whereas in PS it is
the part-whole relation.
\end{enumerate}
These distinctions will provide the structure for the discussion:
\begin{itemize}
\item Containment and continuity:
\begin{itemize}
\item semantic phrasing
\item coordination
\item phrasal edges
\item word order
\end{itemize}
\item Asymmetry:
\begin{itemize}
\item structure sharing and raising/lowering
\item headless phrases
\item complex dependency
\item grammatical functions
\end{itemize}
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Containment and continuity (PS but not DS)}
\label{sec:4}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Semantic phrasing}
\label{sec:4.1}
One apparent benefit of PS is what I call ``semantic phrasing'' \citep[146–151]{Hudson90a-u}, in
which each dependent added to a word modifies that word's meaning, producing a new
meaning. For instance, the phrase \emph{typical French house} does not mean `house which is both
typical and French', but rather `French house which is typical (of French houses)'
\citep[\page 486]{Dahl80a}. In other words, even if the syntax does not need a node corresponding to the
combination \emph{French house}, the semantics does need one.
For HPSG, of course, this is not a problem, because every dependent is part of a new structure,
semantic as well as syntactic \citep{MuellerEvaluating}; so the syntactic phrase \emph{French house}
has a content which is `French house'. But for DS theories, this is not generally possible, because
there is no syntactic node other than those for individual words – so, in this example, one node for
\emph{house} and one for \emph{French} but none for \emph{French house}.
Fortunately for DS, there is a solution: create extra word nodes but treat them as a taxonomy, not a
partonomy \citep{Hudson2018a}. To appreciate the significance of this distinction, the connection
between the concepts ``finger'' and ``hand'' is a partonomy, but that between ``index finger'' and
``finger'' is a taxonomy; a finger is part of a hand, but it is not a hand, and conversely an index
finger is a finger, but it is not part of a finger.
In this analysis, then, the token of \emph{house} in \emph{typical French house} would be factored into three distinct nodes:
\begin{itemize}
\item \emph{house}: \label{it:house} an example of the lexeme \textsc{house}, with the inherited meaning `house'.
\item \emph{house+F}: \label{it:house+f} the word \emph{house} with \emph{French} as its dependent, meaning `French house'.
\item \emph{house+t}: \label{it:house+t} the word \emph{house+F} with \emph{typical} as its dependent, meaning `typical example of a French house'.
\end{itemize}
\noindent
(It is important to remember that the labels are merely hints to guide the analyst, and not part of
the analysis; so the last label could have been \emph{house+t+F} without changing the analysis at
all. One of the consequences of a network approach is that the only substantive elements in the
analysis are the links between nodes, rather than the labels on the nodes.) These three nodes can be
justified as distinct categories because each combines a syntactic fact with a semantic one: for
instance, \emph{house} doesn't simply mean `French house', but has that meaning because it has the
dependent \emph{French}. The alternative would be to add all the dependents and all the meanings to
a single word node as in earlier versions of WG \citep[146–151]{Hudson90a-u}, thereby removing all
the explanatory connections; this seems much less plausible psychologically. The proposed WG
analysis of \emph{typical French house} is shown in Figure~\ref{fig:9}, with the syntactic structure
on the left and the semantics on the right.
\begin{figure}
\centering
\begin{tikzpicture}[node distance=2cm]
%first row
\node[draw](typical) at (0,0){\emph{typical}};
\node[draw](french)[right of=typical]{\emph{French}};
\node[draw](house1)[right of=french]{\emph{house}};
\node[draw,ellipse](sense1)[right of=house1]{sense};
\node[draw](house2)[right of=sense1]{`house'};
%second row
\node[draw](housef)[above of=house1]{\emph{house}+\emph{F}};
\node[draw,ellipse](sense2)[right of=housef]{sense};
\node[draw, align=center](fhouse)[above of=house2]{`French\\house'};
%third row
\node[draw](houset)[above of=housef]{\emph{house}+\emph{t}};
\node[draw,ellipse](sense3)[right of=houset]{sense};
\node[draw, align=center](tfhouse)[above of=fhouse]{`typical\\French\\house'};
%arrows
\draw[<-,>=open triangle 90 reversed] (house1) to (housef);
\draw[<-,>=open triangle 90 reversed] (housef) to (houset);
\draw[<-,>=open triangle 90 reversed] (house2) to (fhouse);
\draw[<-,>=open triangle 90 reversed] (fhouse) to (tfhouse);
\draw (house1) to (sense1);
\draw (housef) to (sense2);
\draw (houset) to (sense3);
\draw[->] (sense1) to (house2);
\draw[->] (sense2) to (fhouse);
\draw[->] (sense3) to (tfhouse);
\draw[->] (housef) to[out=west,in=north] (french);
\draw[->] (houset) to[out=west,in=north] (typical);
\end{tikzpicture}
\caption{\emph{typical French house} in WG}
\label{fig:9}
\end{figure}
Unlike standard DG analyses \citep{MuellerEvaluating}, the number of syntactic nodes in this
analysis is the same as in an HPSG analysis, but crucially these nodes are linked by the \rel{isa}
relation, and not as parts to wholes – in other words, the hierarchy is a taxonomy, not a
partonomy. As mentioned earlier, the logic is default inheritance, and the default semantics has
\rel{isa} links parallel to those in syntax; thus the meaning of \emph{house+F} (\emph{house} as
modified by \emph{French}) \rel{isa} the meaning of \emph{house} – in other words, a French house is
a kind of house. But the default can be overridden by exceptions such as the meanings of adjectives
like \emph{fake} and \emph{former}, so a fake diamond is not a diamond (though it looks like one)
and a former soldier is no longer a soldier.\footnote{
See also \crossrefchapterw[Section~\ref{semantics-sec-adjunct-scope}]{semantics} on adjunct scope.
} The exceptional semantics is licensed by the grammar –
the stored network – so the sentence is fully grammatical. All this is possible because of the same
default inheritance that allows irregular morphology and syntax.
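The factoring of a token into sub-tokens can be simulated in a short Python sketch. Everything here (the \texttt{Token} class, the \texttt{modify} helper, the string-valued senses) is my own hypothetical illustration rather than WG notation: by default each new sub-token's sense is a subtype of its parent's, while an adjective like \emph{fake} supplies an exceptional sense that overrides the default.

```python
# Toy sketch: one node per added dependent, linked by "isa"
# (a taxonomy), not by part-whole links (a partonomy).

class Token:
    def __init__(self, form, sense, isa=None, dependent=None):
        self.form = form            # the word form ('house')
        self.sense = sense          # the meaning at this node
        self.isa = isa              # the sub-token this one elaborates
        self.dependent = dependent  # the dependent added at this node

def modify(token, dependent, exceptional_sense=None):
    """By default the new node's sense is a subtype of the parent's
    ('French house' isa 'house'); exceptional adjectives override it."""
    sense = exceptional_sense or f"{dependent} {token.sense}"
    return Token(token.form, sense, isa=token, dependent=dependent)

def senses_up_isa_chain(token):
    chain = []
    while token is not None:
        chain.append(token.sense)
        token = token.isa
    return chain

house = Token("house", "house")
house_f = modify(house, "French")      # house+F: 'French house'
house_t = modify(house_f, "typical")   # house+t: 'typical French house'
print(senses_up_isa_chain(house_t))
# ['typical French house', 'French house', 'house']

# An exceptional adjective overrides the default, so a fake diamond
# is not a diamond.
diamond = Token("diamond", "diamond")
fake_d = modify(diamond, "fake",
                exceptional_sense="imitation of a diamond")
print(fake_d.sense)  # imitation of a diamond
```

The sketch compresses the semantics into strings, but it preserves the crucial structural point: \texttt{house\_t} \rel{isa} \texttt{house\_f} \rel{isa} \texttt{house}, and no node is a \emph{part} of any other.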
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Coordination}
\label{sec:4.2}
Another potential argument for PS, and against DS, is based on coordination: coordination is a
symmetrical relationship, not a dependency, and it coordinates phrases rather than single words. For
instance, in (\ref{ex:3}) the coordination clearly links the VPs \emph{came in} to \emph{sat down}
and puts them on equal grammatical terms; and it is this equality that allows them to share the
subject \emph{Mary}.
\begin{exe}
\ex \label{ex:3} Mary came in and sat down.
\end{exe}
%
But of course, in a classic DS analysis \emph{Mary} is also attached directly to \emph{came},
without an intervening VP node, so \emph{came in} is not a complete syntactic item and this approach
to coordination fails, giving a prima facie case against DS. (For coordination in HPSG, see
\crossrefchapteralt{coordination}.)
Fortunately, there is a solution: sets \citep[404--421]{Hudson90a-u}. We know from the vast
experimental literature (as well as from everyday experience) that the human mind is capable of
representing ordered sets (strings) of words, so all we need to assume is that we can apply this
ability in the case of coordination. The members of a set are all equal, so their relation is
symmetrical; and the members may share properties (e.g.\ a person's children constitute a set united
by their shared relation to that person as well as by a multitude of other shared
properties). Moreover, sets may be combined into supersets, so both conjuncts such as \emph{came in}
and \emph{sat down} and coordinations (\emph{came in and sat down}) are lists (ordered sets). According to
this analysis, then, the two lists (\emph{came}, \emph{in}) and (\emph{sat}, \emph{down}) are united
by their shared subject, Mary, and combine into the coordination ((\emph{came}, \emph{in})
(\emph{sat}, \emph{down})). The precise status of the conjunction \emph{and} remains to be
determined. The proposed analysis is shown in network notation in Figure~\ref{fig:10}.
\begin{figure}
\centering
\begin{forest}
for tree={l sep=2cm}
[((\emph{came}{,} \emph{in}){,} (\emph{sat}{,} \emph{down})),s sep=1.25cm, l sep=1.5cm,draw,name=cisd
[(\emph{came}{,} \emph{in}),draw,name=ci
[\emph{came},draw,name=came]
[\emph{in},draw,name=in]
]
[(\emph{sat}{,} \emph{down}),draw,name=sd
[\emph{sat},draw,name=sat]
[\emph{down},draw,name=down]
]
]
\node[draw](and)[right of=in]{\strut\emph{and}};
\node[draw](mary)[left of=came,node distance=1.3cm]{\strut\emph{Mary}};
\draw[->](came) to[out=north,in=north]([xshift=-.2cm]in.north);
\draw[->](sat) to[out=north,in=north]([xshift=-.2cm]down.north);
\draw[->](came) to[out=north,in=north]([xshift=.4cm]mary.north);
\draw[->,dashed](sat) to[out=north,in=north]([xshift=.2cm]mary.north);
\draw[->,dashed](sd.west) to[out=south west,in=north](mary);
\draw[->](ci) to[out=west,in=north]([xshift=-.2cm]mary.north);
\draw[->](cisd) to[out=west,in=north]([xshift=-.4cm]mary.north);
\end{forest}
\caption{Coordination with sets}
\label{fig:10}
\end{figure}
\largerpage
Once again, inheritance plays a role in generating this diagram. The \rel{isa} links have been
omitted in Figure~\ref{fig:10} to avoid clutter, but they are shown in Figure~\ref{fig:11}, where
the extra \rel{isa} links are compensated for by removing all irrelevant matter and the dependencies
are numbered for convenience. In this diagram, the dependency d1 from \emph{came} to \emph{Mary} is
the starting point, as it is established during the processing of \emph{Mary came} –
long before the coordination is recognised; and the endpoint is the dependency d5 from \emph{sat} to
\emph{Mary}, which is simply a copy of d1, so the two are linked by \rel{isa}. (It will be recalled from
Figure~\ref{fig:8} that dependencies form a taxonomy, just like words and word classes, so \rel{isa}
links between dependencies are legitimate.) The conjunction \emph{and} creates the three set nodes,
and general rules for sets ensure that properties – in this case, dependencies – can be shared by
the two conjuncts.
It's not yet clear exactly how this happens, but one possibility is displayed in the diagram: d1
licenses d2 which licenses d3 which licenses d4 which licenses d5. Each of these licensing relations
is based on \rel{isa}. Whatever the mechanism, the main idea is that the members of a set can share a
property; for example, we can think of a group of people sitting in a room as a set whose members
share the property of sitting in the room. Similarly, the set of strings \emph{came in} and
\emph{sat down} share the property of having \emph{Mary} as their subject.
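The set-based analysis can be made concrete with a small sketch. The following Python fragment is a toy illustration only, not part of the WG formalism; the function and variable names are invented for exposition. It represents conjuncts as ordered tuples of words and lets every member of a coordination share a property such as its subject:

```python
# Toy illustration of property sharing in coordination (not WG machinery):
# conjuncts are ordered tuples of words, a coordination is a tuple of
# conjuncts, and a property assigned to the coordination is shared by
# every member of the set.

def share_property(coordination, prop, value):
    """Propagate a property (e.g. the subject) to each conjunct."""
    return {conjunct: {prop: value} for conjunct in coordination}

came_in = ("came", "in")
sat_down = ("sat", "down")
coordination = (came_in, sat_down)     # ((came, in), (sat, down))

props = share_property(coordination, "subject", "Mary")
# Both conjuncts now have Mary as their subject, as in
# "Mary came in and sat down."
```

The point of the sketch is simply that the members of a set are symmetrical and can inherit a shared property, exactly as the two conjuncts share \emph{Mary}.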
\begin{figure}
\centering
\begin{tikzpicture}[node distance=2.5cm]
%first row
\node[draw](mary) at(0,0){\emph{\strut Mary}};
\node[draw](came)[right of=mary]{\emph{\strut came}};
\node[draw](sat)[right of=came, node distance=3cm]{\emph{\strut sat}};
%other nodes
\node[draw](ci) at (3,2.8){(\emph{came}, \emph{in})};
\node[draw](sd) at (6,4){(\emph{sat}, \emph{down})};
\node[draw](cisd)[above left of=sd, node distance=2cm]{((\emph{came}, \emph{in}), (\emph{sat}, \emph{down}))};
%arrows
\draw[->] (came) to[in=north, out=north] node[draw,ellipse,fill=white](d1){d1} ([xshift=.4cm]mary.north);
\draw[->] (ci) to[in=north, out=west] node[draw,pos=.2,ellipse,fill=white](d2){d2} (mary);
\draw[->] (cisd) to[in=north, out=west] node[draw,pos=.214,ellipse,fill=white](d3){d3} ([xshift=-.4cm]mary.north);
%
\draw[dashed,->] (sd) to[in=north, out=west] node[draw,pos=.095, ellipse,solid,fill=white](d4){d4} (-.2,1)% here could go the specs of the triangle thingy
to ([xshift=-.2cm]mary.north);
\draw[dashed,->] (sat) to[in=north, out=north] node[draw,pos=.305,ellipse,solid,fill=white](d5){d5}
%(x,y)% triangle specs
([xshift=.2cm]mary.north);
%
\draw[<-,>=open triangle 90 reversed] (d1) to (d2);
\draw[<-,>=open triangle 90 reversed] (d2) to (d3);
\draw[<-,>=open triangle 90 reversed] (d3) to (d4);
\draw[<-,>=open triangle 90 reversed] (d4) to (d5);
\end{tikzpicture}
\caption{Coordination with inherited dependencies}
\label{fig:11}
\end{figure}
\largerpage
The proposed analysis may seem to have adopted phrases in all but name, but this is not so because
the conjuncts have no grammatical classification, so coordination is not restricted to coordination
of like categories. This is helpful with examples like (\ref{ex:4}) where an adjective is
coordinated with an NP and a PP.
\begin{exe}
\ex \label{ex:4} Kim was intelligent, a good linguist and in the right job.
\end{exe}
The possibility of coordinating mixed categories is a well-known challenge for PS-based analyses
such as HPSG: ``Ever since Sag et al. (1985), the underlying intuition was that what makes
Coordination of Unlikes acceptable is that each conjunct is actually well-formed when combined
individually with the shared rest'' \citep[61]{Crysmann2003c}. Put somewhat more precisely, the
intuition is that what coordinated items share is not their category but their function
\citep[414]{Hudson90a-u}. This is more accurate because simple combinability isn't enough; for
instance, \emph{we ate} can combine with an object or with an adjunct, but the functional difference
prevents them from coordinating:
\begin{exe}
\ex[]{We ate a sandwich.}\label{ex:5}
\ex[]{We ate at midday.}\label{ex:6}
\ex[*]{We ate a sandwich and at midday.}\label{ex:7}
\end{exe}
\noindent
Similarly, \emph{a linguist} can combine as dependent with many verbs, but these can only coordinate
if their relation to \emph{a linguist} is the same:
\begin{exe}
\ex[]{She became a linguist.}\label{ex:8}
\ex[]{She met a linguist.}\label{ex:9}
\ex[*]{She became and met a linguist.}\label{ex:10}
\end{exe}
\noindent
It is true that HPSG can accommodate the coordination of unlike categories by redefining categories
so that they define functions rather than traditional categories; for example, if ``predicative'' is
treated as a category, then the problem of (\ref{ex:4}) disappears because \emph{intelligent},
\emph{a good linguist} and \emph{in the right job} all belong to the category
``predicative''. However, this solution generates as many problems as it solves. For example, why is
the category ``predicative'' exactly equivalent to the function with the same name, whereas
categories such as ``noun phrase'' have multiple functions? And how does this category fit into a
hierarchy of categories so as to bring together an arbitrary collection of categories which are
otherwise unrelated: nominative noun phrase, adjective phrase and preposition phrase?
Moreover, since the WG analysis is based on arbitrary strings and sets rather than phrases, it
easily accommodates ``incomplete'' conjuncts (\citealt[405]{Hudson90a-u}; \citealt{Hudson1982})
precisely because there is no expectation that strings are complete phrases. This claim is borne out
by examples such as (\ref{ex:11}) (meaning `\dots\ and parties for foreign girls \dots').
\begin{exe}
\ex \label{ex:11} We hold parties for foreign \emph{boys on Tuesdays} and \emph{girls on Wednesdays}.
\end{exe}
%\largerpage
In this example, the first conjunct is the string (\emph{boys}, \emph{on}, \emph{Tuesdays}), which
is not a phrase defined by dependencies; the relevant phrases are \emph{parties for foreign boys}
and \emph{on Tuesdays}.
This sketch of a WG treatment of coordination ignores a number of important issues (raised by
reviewers) such as joint interpretation (\ref{ex:12}) and special choice of pronoun forms
(\ref{ex:13}).
\begin{exe}
\ex \label{ex:12} John and Mary are similar.
\ex \label{ex:13} Between you and I, she likes him.
\end{exe}
\noindent
These issues have received detailed attention in WG (\citealt[Chapter~5]{Hudson84a-u};
\citeyear{Hudson88a}; \citeyear[Chapter~14]{Hudson90a-u}; \citeyear{Hudson1995}; \citeyear[175--181,
304--307]{Hudson2010b-u}), but they are peripheral to this chapter.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Phrasal edges}
\label{sec:4.3}
One of the differences between PS and DS is that, at least in its classic form, PS formally
recognises phrasal boundaries, and a PS tree can even be converted to a bracketed string where the
phrase is represented by its boundaries. In contrast, although standard DS implies phrases (since a
phrase can be defined as a word and all the words depending on it either directly or indirectly), it
doesn't mark their boundaries.
This turns out to be problematic in dealing with \ili{Welsh} soft mutation
\citep{Tallerman2009}. Tallerman's article is one of the few serious discussions by a PS advocate of
the relative merits of PS and DS, so it deserves more consideration than space allows here. It
discusses examples such as (\ref{ex:14}) and (\ref{ex:15}), where the emphasised words are
morphologically changed by soft mutation in comparison with their underlying forms shown in
brackets.
\begin{exe}
\ex \label{ex:14}
\gll Prynodd y ddynes \emph{\smash{delyn}}. (telyn)\\
buy.\textsc{pst}.3\textsc{s} the woman harp\\\hfill(\ili{Welsh})
\glt `The woman bought a harp.'
\ex \label{ex:15}
\gll Gwnaeth y ddynes [\emph{werthu} telyn]. (gwerthu)\\
do.\textsc{pst}.3\textsc{s} the woman \spacebr{}sell.\textsc{inf} harp\\
\glt `The woman sold a harp.'
\end{exe}
\noindent
Soft mutation is sensitive to syntax, so although `harp' is the object of a preceding verb in both
examples, it is mutated when this verb is finite (\emph{prynodd}) and followed by a subject, but not
when the verb has no subject because it is non-finite (\emph{werthu}). Similarly, the non-finite
verb `sell' is itself mutated in example (\ref{ex:15}) because it follows a subject, in contrast
with the finite verbs which precede the subject and have no mutation.
A standard PS explanation for such facts (and many more) is the ``XP Trigger Hypothesis'': that soft
mutation is triggered on a subject or complement (but not an adjunct) immediately after an XP
boundary \citep[226]{BorsleyTallermanWillis2007}. The analysis contains two claims: that mutation
affects the first word of an XP, and that it is triggered by the end of another XP. The first claim
seems beyond doubt: the mutated word is simply the first word, and not necessarily the
head. Examples such as (\ref{ex:16}) are conclusive.
%
\ea
\label{ex:16}
\gll Dw i [\emph{lawn} mor grac â chi]. (llawn)\\
be.\textsc{prs}.1\textsc{s} I \spacebr{}full as angry as you\\\hfill(\ili{Welsh})
\glt `I'm just as angry as you.'
\z
%
\noindent
The second claim is less clearly correct; for instance, it relies on controversial assumptions about
null subjects and traces in examples such as (\ref{ex:17}) and (\ref{ex:18}) (where \emph{t} and
\emph{pro} stand for a trace and a null subject respectively, but have to be treated as full phrases
for purposes of the XP Trigger Hypothesis in order to explain the mutation following them).
%
\begin{exe}
\ex \label{ex:17}
\gll Pwy brynodd \emph{t} delyn? (telyn)\\
who buy.\textsc{pst}.3\textsc{s} {} harp\\\hfill(\ili{Welsh})
\glt `Who bought a harp?'
\ex \label{ex:18}
\gll Prynodd \emph{pro} delyn. (telyn)\\
buy.\textsc{pst}.3\textsc{s} {} harp\\
\glt `He/she bought a harp.'
\end{exe}
%
\noindent
But suppose both claims were true. What would this imply for DS? All they would show is that we need to be
able to identify the first word in a phrase (the mutated word) and the last word in a phrase (the
trigger). This is certainly not possible in WG as it stands, but the basic premise of WG is that the
whole of ordinary cognition is available to language, and it's very clear that ordinary cognition
allows us to recognise beginnings and endings in other domains, so why not also in language?
Moreover, beginnings and endings fit well in the framework of ideas about linearisation that are
introduced in the next subsection.
%\largerpage
The \ili{Welsh} data, therefore, do not show that we need phrasal nodes complete with attributes and
values. Rather, edge phenomena such as \ili{Welsh} mutation show that DS needs to be expanded, but
not that we need the full apparatus of PS. Exactly how to adapt WG is a matter for future research,
not for this chapter.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Word order}
\label{sec:4.4}
In both WG and some variants of HPSG, dominance and linearity are separated, but this separation
goes much further in WG. In basic HPSG, linearisation rules apply only to sisters, and if the binary
branching often assumed for languages such as \ili{German} \citep[Section~10.3]{MuellerGT-Eng2}
reduces these to just two, the result is clearly too rigid given the freedom of ordering found in
many languages. It is true that solutions are available \citep[Chapter~10]{MuellerGT-Eng2}, such as
allowing alternative binary branchings for the same word combinations \crossrefchapterp[Section~\ref{sec-binary-flat}]{order} or combining binary branching
with flat structures held in lists, but these solutions involve extra complexity in other parts of
the theory such as additional lists. For instance, one innovation is the idea of linearisation
domains \citep{Reape94a,Kathol2000a,Babel}, which allow a verb and its arguments and adjuncts to be
members of the same linearisation domain and hence to be realised in any order
(\citealt[302]{MuellerGT-Eng2}; \crossrefchapteralt[Section~\ref{sec-domains}]{order}). These
proposals bring HPSG nearer to DS, where flat structures are inevitable and free order is the
default (subject to extra order constraints).
WG takes the separation of linearity from dominance a step further by introducing two new syntactic
relations dedicated to word order: ``position'' and ``landmark'', each of which points to a node in
the overall network \citep{Hudson2018a}. As its name suggests, a word's landmark is the word from
which it takes its position, and is normally the word on which it depends (as in the HPSG list of
dependents); what holds phrases together by default is that dependents keep as close to their
landmarks as possible, because a general principle bans intersecting landmark relations. Moreover,
the word's ``position'' relative to its landmark may either be free or defined as either ``before''
or ``after''.
However, this default pattern allows exceptions, and because ``position'' and ``landmark'' are
properties, they are subject to default inheritance which allows exceptions such as raising and
extraction (discussed in Section~\ref{sec:5.2}). To give an idea of the flexibility allowed by these
relations, I start with the very easy \ili{English} example in Figure~\ref{fig:12}, where ``lm''
and ``psn'' stand for ``landmark'' and ``position'', and ``<'' and ``>'' mean ``before'' and
``after''.
\begin{figure}
\centering
\begin{tikzpicture}[node distance=2cm]
%fig 1 repeated
\node[draw](many) at (0,0){\strut{\emph{many}}};
\node[draw](students) [right of=many]{\strut \emph{students}};
\node[draw](enjoy) [right of=students]{\strut \emph{enjoy}};
\node[draw](syntax) [right of=enjoy]{\strut \emph{syntax}};
\draw[->] (enjoy)[out=north,in=north] to (syntax);
\draw[->] (enjoy)[out=north,in=north] to (students);
\draw[->] ([xshift=-.2cm]students.north)[out=north,in=north] to (many);
%lower ellipses
\node[draw,ellipse](psn1)[below of=many]{psn};
\node[draw,ellipse](psn2)[below of=students]{psn};
\node[draw,ellipse](psn3)[below of=enjoy]{psn};
\node[draw,ellipse](psn4)[below of=syntax]{psn};
\draw[dashed] (many) to (psn1);
\draw[dashed] (students) to (psn2);
\draw[dashed] (enjoy) to (psn3);
\draw[dashed] (syntax) to (psn4);
%lower rechtangles
\node[draw,inner sep=.3cm](1) [below of=psn1, node distance=1cm]{};
\node[draw,inner sep=.3cm](2) [below of=psn2, node distance=1cm]{};
\node[draw,inner sep=.3cm](3) [below of=psn3, node distance=1cm]{};
\node[draw,inner sep=.3cm](4) [below of=psn4, node distance=1cm]{};
\draw[->,dashed] (psn1) to (1);
\draw[->,dashed] (psn2) to (2);
\draw[->,dashed] (psn3) to (3);
\draw[->,dashed] (psn4) to (4);
%lowest ellipses
\node[draw,ellipse](1a)[right of=1, node distance=1cm]{<};
\node[draw,ellipse](2a)[right of=2, node distance=1cm]{<};
\node[draw,ellipse](3a)[right of=3, node distance=1cm]{>};
\draw[dashed] (1) to (1a);
\draw[->,dashed] (1a) to (2);
\draw[dashed] (2) to (2a);
\draw[->,dashed] (2a) to (3);
\draw[<-,dashed] (3) to (3a);
\draw[dashed] (3a) to (4);
%higher ellipses
\node[draw,ellipse](lm1)[above of=1a]{lm};
\node[draw,ellipse](lm2)[above of=2a]{lm};
\node[draw,ellipse](lm3)[above of=3a]{lm};
\draw[dashed] (many) to (lm1);
\draw[->,dashed] (lm1) to (students);
\draw[dashed] (students) to (lm2);
\draw[->,dashed] (lm2) to (enjoy);
\draw[<-,dashed] (enjoy) to (lm3);
\draw[dashed] (lm3) to (syntax);
\end{tikzpicture}
\caption{Basic word order in English}
\label{fig:12}
\end{figure}
\largerpage[-1]
It could be objected that this is a lot of formal machinery for such a simple matter as word order. However, it is important to recognise that the conventional left-right ordering of writing is just a written convention, and that a mental network (which is what we are trying to model in WG) has no left-right ordering. Ordering a series of objects (such as words) is a complex mental operation, which experimental subjects often get wrong, so complex machinery is appropriate.
Moreover, any syntactician knows that language offers a multiplicity of complex relations between
dependency structure and word order. To take an extreme example, non-configurational languages pose
problems for standard versions of HPSG (for which Bender suggests solutions) as illustrated by a
\ili{Wambaya} sentence, repeated here as (\ref{ex:19}) \parencites[\page
8]{Bender2008a}{Nordlinger1998}:\footnote{
See also \crossrefchapterw[Section~\ref{sec-free-without-domains}]{order} for a discussion of
Bender's approach and \crossrefchapterw[Section~\ref{sec-warlpiri}]{order} for an analysis
of the phenomenon in linearization-based HPSG.
}
\begin{exe}
\ex \label{ex:19}
\longexampleandlanguage{
\gll Ngaragana-nguja ngiy-a gujinganjanga-ni jiyawu ngabulu\\
grog\textsc{-prop}.\textsc{iv}.\textsc{acc} 3\textsc{sg}.\textsc{nm}.\textsc{a}-\textsc{pst} mother-\textsc{ii}.\textsc{erg} give milk.\textsc{iv}.\textsc{acc}\\}{Wambaya}
\glt `(His) mother gave (him) milk with grog in it.'
\end{exe}
\largerpage
\noindent
The literal gloss shows that both `grog' and `milk' are marked as accusative, which is enough to allow the former to modify the latter in spite of their separation. The word order is typical of many Australian non-configurational languages: totally free within the clause except that the auxiliary verb (glossed here as \textsc{3sg.pst}) comes second (after one dependent word or phrase). Such freedom of order is easily accommodated if landmarks are independent of dependencies: the auxiliary verb is the root of the clause's dependency structure (as in \ili{English}), and also the landmark for every word that depends on it, whether directly or (crucially) indirectly. Its second position is due to a rule which requires it to precede all these words by default, but to have just one ``preceder''. A simplified structure for this sentence (with \ili{Wambaya} words replaced by \ili{English} glosses) is shown in Figure~\ref{fig:13}, with dotted arrows below the words again showing landmark and position relations. The dashed horizontal line separates this sentence structure from the grammar that generates it. In words, an auxiliary verb requires precisely one preceder, which \rel{isa} descendant. ``Descendant'' is a transitive generalisation of ``dependent'', so a descendant is either a dependent or a dependent of a descendant. The preceder precedes the auxiliary verb, but all other descendants follow it.
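The aux-second generalisation can be restated as a simple licensing condition on clause orders. The following sketch is an invented illustration (function and variable names are mine; the \ili{Wambaya} words are replaced by their \ili{English} glosses, as in the figure):

```python
def licensed(order, aux, descendants):
    """A clause order is licensed iff it contains the auxiliary verb and
    all of its descendants, and exactly one descendant (the 'preceder')
    stands before the auxiliary; all other descendants follow it."""
    return (sorted(order) == sorted(descendants + [aux])
            and order.index(aux) == 1)

words = ["grog", "mother", "give", "milk"]   # glosses for the Wambaya words
# Any descendant may serve as the preceder; the rest are freely ordered:
ok1 = licensed(["grog", "3SG.PST", "mother", "give", "milk"], "3SG.PST", words)
ok2 = licensed(["milk", "3SG.PST", "give", "grog", "mother"], "3SG.PST", words)
# But the auxiliary may not come first:
bad = licensed(["3SG.PST", "grog", "mother", "give", "milk"], "3SG.PST", words)
```

The check captures exactly the two facts in the text: the auxiliary is the landmark for all its descendants (direct or indirect), and it has precisely one preceder.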
\begin{figure}
\centering
\begin{tikzpicture}[node distance= 3cm]
\draw[dashed] (-1,1) to (10,1);
%%%%%%%%%%%
%below dashed line
%first row
\node[draw](grog) at (0,-2){\strut \emph{grog}};
\node[draw](3rd)[right of=grog]{\strut \emph{3\textsc{sg.pst}}};
\node[draw](mother)[right of=3rd, node distance=2.5cm]{\strut \emph{mother}};
\node[draw](give)[right of=mother, node distance=2cm]{\strut \emph{give}};
\node[draw](milk)[right of=give, node distance=2cm]{\strut \emph{milk}};
%squares
\node[draw,inner sep=.3cm](1)[below of=grog]{};
\node[draw,inner sep=.3cm](2)[below of=3rd]{};
\node[draw,inner sep=.3cm](3)[below of=mother]{};
\node[draw,inner sep=.3cm](4)[below of=3, node distance=1.25cm]{};
\node[draw,inner sep=.3cm](5)[below of=4, node distance=1.25cm]{};
%arrows
\draw[->,dashed] (grog) to (1);
\draw[->,dashed] (3rd) to (2);
\draw[->,dashed] (mother) to (3);
\draw[->,dashed] (give.south) to (4.north);
\draw[->,dashed] (milk.south) to (5.north);
%
\draw[->,dashed] (milk.south) to[in=300,out=225] ([xshift=.2cm]3rd.south);
\draw[->,dashed] (give.south) to[in=300, out=225] ([xshift=.4cm]3rd.south);
\draw[->,dashed] (mother.south) to[in=300, out=225] ([xshift=.6cm]3rd.south);
\draw[->,dashed] (grog.south) to[in=south, out=south] ([xshift=-.2cm]3rd.south);
%arrow with small circle
\draw[->] ([xshift=-.2cm]3rd.north) to[in=40, out=135] node[draw,circle,fill=white,inner sep=.2cm](circ){} ([xshift=.2cm]grog.north);
%
\draw[->] (3rd.north) to[in=north, out=north] (give);
\draw[->] ([xshift=-.2cm]give.north) to[in=north, out=north] (mother);
\draw[->] ([xshift=.2cm]give.north) to[in=135, out=40] (milk.north);
\draw[->] ([xshift=.2cm]milk.north) to[in=40, out=135] ([xshift=-.2cm]grog.north);
%arrows with ellipses
\draw[->,dashed] (1) to node[draw,solid,ellipse,fill=white](a){<} (2);
\draw[->,dashed] (3) to (2);
\node[draw,solid,ellipse,fill=white](b)[left of=3, node distance=.78cm]{>};
\draw[->,dashed] (4) to node[draw,near start,solid,ellipse,fill=white](c){>} (2);
\draw[->,dashed] (5) to node[draw,near start,solid,ellipse,fill=white](d){>} (2.south east);
%above dashed line
%%%%%%%%%%%
%middle row
\node[draw](one) at (0,4){\strut\ ~1 ~ };
\node[draw](aux)[right of=one]{\strut aux verb};
\node[draw,inner sep=.3cm](1a)[right of=aux]{};
%lower row
\node[draw,inner sep=.3cm](1b)[below of=1a, node distance=2cm]{};
\node[draw,inner sep=.3cm](1c) at (3.5,2){};
\node[draw,inner sep=.3cm](1d) at (1,2){};
%arrows with ellipses
%ellipses with words
\draw[->] (aux) to[in=north, out=north] node[draw,ellipse,fill=white](prec){preceder} (one);
\draw[->] (aux) to[in=north, out=north] node[draw,ellipse,fill=white](desc){descendant} (1a);
%ellipses with <
\draw[->,dashed] (1b) to node[draw,solid,ellipse,fill=white](aa){>} (1c);
\draw[->,dashed] (1d) to node[draw,solid,ellipse,fill=white](ab){<} (1c);
%triangle "arrows"
\draw[<-,>=open triangle 90 reversed] (one) to (grog);
\draw[<-,>=open triangle 90 reversed] (prec) to (circ);
\draw[<-,>=open triangle 90 reversed] (desc) to (prec);
\draw[<-,>=open triangle 90 reversed] (aux) to (3rd);
%arrows
\draw[->,dashed] (1a) to (1b);
\draw[->,dashed] ([xshift=.5cm]aux.south) to (1c);
\draw[->,dashed] (one) to (1d);
\draw[->,dashed] (one.south east) to[in=south west, out=south east] (aux.south west);
\draw[->,dashed] (1a.south west) to[in=south east, out=south west] (aux.south east);
\end{tikzpicture}
\caption{A non-configurational structure}
\label{fig:13}
\end{figure}
Later sections will discuss word order, and will reinforce the claims of this subsection: that
plain-vanilla versions of either PS or DS are woefully inadequate and need to be supplemented in
some way.
This completes the discussion of ``containment'' and ``continuity'', the characteristics of
classical PS which are missing in DS. We have seen that the continuity guaranteed by PS is also
provided by default in WG by a general ban on intersecting landmark relations; but, thanks to
default inheritance, exceptions abound. HPSG offers a similar degree of flexibility but using
different machinery such as word-order domains \citep{Reape94a}; see also
\crossrefchapterw{order}. An approach to \ili{Wambaya} not using linearisation domains but rather
projection of valence information is discussed in Section~\ref{sec-free-without-domains} of \citew{chapters/order}. Moreover, WG offers a great deal of flexibility in other relations: for
example, a word may be part of a string (as in coordination) and its phrase's edges may need to be
recognised structurally (as in \ili{Welsh} mutation).
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Asymmetry and functions}
\label{sec:5}
This section considers the characteristics of DS which are missing from classical PS: asymmetrical relations between words and their dependents. Does syntactic theory need these notions? It's important to distinguish here between two different kinds of asymmetry that are recognised in HPSG. One is the kind inherent to PS: the part-whole relation between a phrase and its parts. The other is inherent to DS but an optional extra in PS: the functional asymmetry between the head and its dependents. HPSG, like most other theories of syntax, does recognise this asymmetry and indeed builds it into the name of the theory, but more recently this assumption has come under fire within the HPSG community for reasons considered below in Section~\ref{sec:5.1}.
But if the head/dependent distinction is important, are there any other functional distinctions between parts that ought to be explicit in the analysis? In other words, what about grammatical functions such as subject and object? As Figure~\ref{fig:8} showed, WG recognises a taxonomy of grammatical functions which carry important information about word order (among other things), so functions are central to WG analyses. Many other versions of DS also recognise functional distinctions; for example, Tesnière distinguished actants from circumstantials, and among actants he distinguished subjects, direct objects and indirect objects \citep[xlvii]{Tesniere2015a-u}. But the only functional distinction which is inherent to DS is the one between head and dependents, so other such distinctions are an optional extra in DS – just as they are in PS, where many theories accept them. But HPSG leaves them implicit in the order of elements in \argst (like phrases in DS), so this is an issue worth raising when comparing HPSG with the DS tradition.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Headless phrases}
\label{sec:5.1}
Bloomfield assumed that phrases could be either headed (endocentric) or not (exocentric). According to WG (and other DS theories), there are no headless phrases. Admittedly, utterances may contain unstructured lists (e.g.\ \emph{one two three four} \dots), and quotations may be unstructured strings, as in (\ref{ex:20}), but presumably no-one would be tempted to call such strings ``phrases'', or at least not in the sense of phrases that a grammar should generate.
\begin{exe}
\ex \label{ex:20} He said ``One, two, three, testing, testing, testing.''
\end{exe}
%
Such strings can be handled by the mechanism already introduced for coordination, namely ordered sets.
The WG claim, then, is that when words hang together syntactically, they form phrases which always
have a head. Is this claim tenable? There are a number of potential counterexamples including
(\ref{ex:21})--(\ref{ex:24}):
\settowidth\jamwidth{(Arnold \& Borsley, 2014)}
\eal
\ex \label{ex:21} \emph{The rich} get richer.\footnote{\citew[403]{MuellerGT-Eng2}}
\ex \label{ex:22} \emph{The more you eat}, the fatter you get.\footnote{\citew[\page 164]{Fillmore1986}}
\ex \label{ex:23} In they came, \emph{student after student}.\footnote{\citew[8]{Jackendoff2008a}}
\ex \label{ex:24} \emph{However intelligent the students}, a lecture needs to be
clear.\footnote{Adapted from \citew[\page 28]{AB2014a-u}.}
\zl
\noindent
All these examples can in fact be given a headed analysis, as I shall now explain, starting with
(\ref{ex:21}). \emph{The rich} is allowed by \emph{the}, which has a special sub-case which allows a
single adjective as its complement, meaning either ``generic people'' or some contextually defined
notion (such as ``apples'' in \emph{the red} used when discussing apples); this is not possible with
any other determiner. In the determiner-headed analysis of standard WG, this is unproblematic as the
head is \emph{the}.
The comparative correlative in (\ref{ex:22}) is clearly a combination of a subordinate clause
followed by a main clause \citep{CJ99a-u}, but what are the heads of the two clauses? The obvious
dependency links the first \emph{the} with the second (hence ``correlative''), so it is at least
worth considering an analysis in which this dependency is the basis of the construction and, once
again, the head is \emph{the}. Figure~\ref{fig:14} outlines a possible analysis, though it should be
noted that the dependency structures are complex. The next section discusses such complexities,
which are a reaction to complex functional pressures; for example, it is easy to see that the
fronting of \emph{the less} reduces the distance between the two correlatives. Of course, there is
no suggestion here that this analysis applies unchanged to every translation equivalent of our
comparative correlative; for instance, \ili{French} uses a coordinate structure without an
equivalent of \emph{the}: \emph{Plus \dots\ et plus \dots} (\citealt{Abeille:Borsley:08}; \crossrefchapteralt[Section~\ref{coord:sec-comparative-correlatives}]{coordination}).
\begin{figure}
\centering
\begin{forest}
wg
[,phantom,for tree={font=\it}
[the]
[more]
[you]
[eat]
[the]
[fatter]
[you]
[get]
]
\draw[->] (the)[out=north,in=north] to (more);
\draw[->] (the)[out=north,in=north] to ([xshift=.2cm]eat.north);
\draw[->] ([xshift=-.2cm]the2.north)[out=north,in=north] to ([xshift=-.2cm]the.north);
\draw[->] (eat)[out=north,in=north] to (you);
\draw[->] (eat)[out=north,in=north] to ([xshift=.2cm]more.north);
\draw[->] ([xshift=.2cm]the2.north)[out=north,in=north] to (fatter);
%\draw[->] (the2)[out=north,in=north] to ([xshift=.2cm]get.north);
\draw[->] (get)[out=north,in=north] to (the2);
\draw[->] (get)[out=north,in=north] to (you2);
\draw[->] (get)[out=north,in=north] to ([xshift=.2cm]fatter.north);
\end{forest}
\caption{A WG sketch of the comparative correlative}
\label{fig:14}
\end{figure}
\largerpage
Example\label{dg:page-npn-construction} (\ref{ex:23}) is offered by Jackendoff as a clear case of
headlessness, but there is an equally obvious headed analysis of \emph{student after student} in
which the structure is the same as in commonplace NPN examples like \emph{box of matches}. The only
peculiarity of Jackendoff's example is the lexical repetition, which is beyond most theories of
syntax. For WG, however, the solution is easy: the second N token \rel{isa} the first, which allows
default inheritance. This example illustrates an idiomatic but generalisable version of the NPN
pattern in which the second N \rel{isa} the first and the meaning is special; as expected, the pattern is
recursive. The grammatical subnetwork needed to generate the syntactic structure for such examples
is shown (with solid lines) in Figure~\ref{fig:15}; the semantics is harder and needs more
research. What this diagram shows is that there is a subclass of nouns called here
``noun\textsubscript{npn}'', which is special in having as its complement a preposition with the
special property of having another copy of the same noun\textsubscript{npn} as its complement. The
whole construction is potentially recursive because the copy itself inherits the possibility of a
preposition complement, but the recursion is limited by the fact that this complement is optional
(shown as ``1,0'' inside the box, meaning that its quantity is either 1 (present) or 0
(absent)). Because the second noun \rel{isa} the first, if it has a prepositional complement this is also
a copy of the first preposition – hence \emph{student after student after student}, whose structure
is shown in Figure~\ref{fig:15} with dashed lines.
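The bounded recursion just described can be sketched as a tiny generator (a toy illustration, not WG machinery; the function name \texttt{npn} is invented here): the optional complement corresponds exactly to the choice between stopping (quantity 0) and recursing once more (quantity 1).

```python
# Sketch of the recursive NPN pattern ("student after student ..."):
# the second noun is a copy of the first, and its prepositional
# complement is optional, so the recursion can stop at any depth.
def npn(noun, prep, depth):
    """Build an NPN string with `depth` repetitions of the P-N tail."""
    if depth == 0:
        return noun  # complement absent: quantity 0
    # complement present (quantity 1): a copy of the same P and N
    return f"{noun} {prep} {npn(noun, prep, depth - 1)}"

print(npn("student", "after", 2))
# student after student after student
```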
\begin{figure}
\centering
\begin{tikzpicture}[node distance=1.5cm]
\node[draw](noun) at (0,0){\strut noun};
%first row
\node[draw](nounnpn)[below of=noun, node distance=2.5cm]{\strut noun\textsubscript{npn}};
\node[draw,ellipse](c1)[above right of=nounnpn]{c};
\node[draw](1a)[below right of=c1]{1};
\node[draw](prep)[above of=1a, node distance=2.5cm]{\strut preposition};
\node[draw,ellipse](c2)[above right of=1a]{c};
\node[draw](1b)[below right of=c2]{1};
\node[draw,ellipse](c3)[above right of=1b]{c};
\node[draw](10)[below right of=c3]{1,0};
%second row
\node[draw,dashed](student1)[below of=nounnpn, node distance=2.6cm]{\strut \emph{student}};
\node[draw,ellipse,dashed](c4)[above right of=student1]{c};
\node[draw,dashed](after1)[below right of=c4]{\strut \emph{after}};
\node[draw,ellipse,dashed](c5)[above right of=after1]{c};
\node[draw,dashed](student2)[below right of=c5]{\strut \emph{student}};
\node[draw,ellipse,dashed](c6)[above right of=student2]{c};
\node[draw,dashed](after2)[below right of=c6]{\strut \emph{after}};
\node[draw,ellipse,dashed](c7)[above right of=after2]{c};
\node[draw,dashed](student3)[below right of=c7]{\strut \emph{student}};
%%arrows
\draw[<-,>=open triangle 90 reversed] (noun) to (nounnpn);
\draw[<-,>=open triangle 90 reversed] (prep) to (1a);
\draw[<-,>=open triangle 90 reversed,dashed] (nounnpn) to (student1);
\draw (1b.south west) to[out=south west,in=south east] ([yshift=-.13cm]nounnpn.south);
%first row
\draw (nounnpn) to (c1);
\draw[->] (c1) to (1a.north west);
\draw (1a.north east) to (c2);
\draw[->] (c2) to (1b.north west);
\draw (1b.north east) to (c3);
\draw[->] (c3) to (10.north west);
%second row
\draw[dashed] (student1) to (c4);
\draw[->,dashed] (c4) to (after1);
\draw[dashed] (after1) to (c5);
\draw[->,dashed] (c5) to (student2);
\draw[dashed] (student2) to (c6);
\draw[->,dashed] (c6) to (after2);
\draw[dashed] (after2) to (c7);
\draw[->,dashed] (c7) to (student3);
\end{tikzpicture}
\caption{The NPN construction in Word Grammar}
\label{fig:15}
\end{figure}
The ``exhaustive conditional'' or ``unconditional'' in (\ref{ex:24}) clearly has two parts:
\emph{however smart} and \emph{the students}, but which is the head? A verb could be added, giving
\emph{however smart the students are}, so if we assumed a covert verb, that would provide a head,
but without a verb it is unclear – and indeed this is precisely the kind of subject-predicate
structure that stood in the way of dependency analysis for nearly two thousand years.
However, there are good reasons for rejecting covert verbs in general. For instance, in \ili{Arabic}
a predicate adjective or nominal is in different cases according to whether ``be'' is overt:
accusative when it is overt, nominative when it is covert. Moreover, the word order is different in
the two constructions: the verb normally precedes the subject, but the verbless predicate follows
it. In \ili{Arabic}, therefore, a covert verb would simply complicate the analysis; but if an
analysis without a covert verb is possible for \ili{Arabic}, it is also possible in \ili{English}.
\largerpage
Moreover, even \ili{English} offers an easy alternative to the covert verb based on the structure
where the verb \textsc{be} is overt. It is reasonably uncontroversial to assume a raising analysis
for examples such as (\ref{ex:25}) and (\ref{ex:26}), so (\ref{ex:27}) invites a similar analysis
\citep{MuellerPredication,MuellerCopula}.
\eal
\ex \label{ex:25} He keeps talking.
\ex \label{ex:26} He is talking.
\ex \label{ex:27} He is cold.
\zl
\noindent
But a raising analysis implies a headed structure for \emph{he ... cold} in which \emph{he} depends (as subject) on \emph{cold}. Given this analysis, the same must be true even where there is no verb, as in example (\ref{ex:24})'s \emph{however smart the students} or so-called ``Mad-Magazine sentences'' like (\ref{ex:28}) \citep{Lambrecht:90}.%
%
\footnote{A reviewer asks what excludes alternatives such as *\emph{He smart?} and *\emph{Him smart.} (i.e.\ as a statement). The former is grammatically impossible because \emph{he} is possible only as the subject of a tensed verb, but presumably the latter is excluded by the pragmatic constraints on the ``Mad-magazine'' construction.}%
%
\begin{exe}
\ex \label{ex:28} What, him smart? You're joking!
\end{exe}
\noindent
Comfortingly, the facts of exhaustive conditionals support this analysis because the subject is
optional, confirming that the predicate is the head:
\begin{exe}
\ex \label{ex:29} However smart, nobody succeeds without a lot of effort.
\end{exe}
\noindent
In short, where there is just a subject and a predicate, without a verb, then the predicate is the
head.
Clearly it is impossible to prove the non-existence of headless phrases, but the examples considered
have been offered as plausible examples, so if even they allow a well-motivated headed analysis, it
seems reasonable to hypothesise that all phrases have heads.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Complex dependency}
\label{sec:5.2}\label{dg-sec-complex-dependency}
The differences between HPSG and WG raise another question concerning the geometry of sentence structure, because the possibilities offered by the part-whole relations of HPSG are more limited than those offered by the word-word dependencies of WG. How complex can dependencies be? Is there a theoretical limit such that some geometrical patterns can be ruled out as impossible? Two particular questions arise:
\begin{enumerate}
\item \label{it:4} Can a word depend on more than one other word? This is of course precisely what structure sharing allows, but this only allows ``raising'' or ``lowering'' within a single chain of dependencies. Is any other kind of ``double dependency'' possible?
\item \label{it:5} Is mutual dependency possible?
\end{enumerate}
\noindent
The answer to both questions is yes for WG, but is less clear for HPSG.
Consider the dependency structure for an example such as (\ref{ex:30}).
\begin{exe}
\ex \label{ex:30} I wonder who came.
\end{exe}
\noindent
In a dependency analysis, the only available units are words, so the clause \emph{who came} has no status in the analysis and is represented by its head. In WG, this is \emph{who}, because this is the word that links \emph{came} to the rest of the sentence.
Of interest in (\ref{ex:30}) are three dependencies:
\begin{enumerate}
\item \label{it:6} \emph{who} depends on \emph{wonder} because \emph{wonder} needs an interrogative complement – i.e.\ an interrogative word such as \emph{who} or \emph{whether}; so \emph{who} is the object of \emph{wonder}.
\item \label{it:7} \emph{who} also depends on \emph{came}, because it is the subject of \emph{came}.
\item \label{it:8} \emph{came} depends on \emph{who}, because interrogative pronouns allow a following finite verb (or, for most but not all pronouns, an infinitive, as in \emph{I wonder who to invite}). Since this is both selected by the pronoun and optional (as in \emph{I wonder who}), it must be the pronoun's complement, so \emph{came} is the complement of \emph{who}.
\end{enumerate}
\noindent
Given the assumptions of DS, and of WG in particular, each of these dependencies is quite obvious and uncontroversial when considered in isolation. The problem, of course, is that they combine in an unexpectedly complicated way; in fact, this one example illustrates both the complex conditions defined above: \emph{who} depends on two words which are not otherwise syntactically connected (\emph{wonder} and \emph{came}), and \emph{who} and \emph{came} are mutually dependent. A WG analysis of the relevant dependencies is sketched in Figure~\ref{fig:16} (where ``s'' and ``c'' stand for ``subject'' and ``complement'').
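The point that these three dependencies yield a non-tree geometry can be made concrete with a toy rendering of the structure as labelled word-word edges (illustrative Python, not part of either formalism):

```python
# Dependencies of "I wonder who came" as (head, function, dependent) edges.
edges = [
    ("wonder", "s", "I"),     # I is the subject of wonder
    ("wonder", "c", "who"),   # who is the complement (object) of wonder
    ("came",   "s", "who"),   # who is also the subject of came
    ("who",    "c", "came"),  # came is the complement of who
]

# Double dependency: "who" depends on two otherwise unconnected words.
heads_of_who = {h for h, _, d in edges if d == "who"}
print(sorted(heads_of_who))  # ['came', 'wonder']

# Mutual dependency: who and came each head the other (a 2-cycle).
pairs = {(h, d) for h, _, d in edges}
print(("who", "came") in pairs and ("came", "who") in pairs)  # True
```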
\begin{figure}
\centering
\begin{tikzpicture}[node distance=1.3cm]
\node[draw](I) at (0,0){\strut ~~\emph{I}~~ };
\node[draw,ellipse](s1)[above right of=I]{s};
\node[draw](wonder)[below right of=s1]{\strut \emph{wonder}};
\node[draw,ellipse](c1)[above right of=wonder]{c};
\node[draw](who)[below right of=c1]{\strut \emph{who}};
\node[draw,ellipse](c2)[above right of=who]{c};
\node[draw](came)[below right of=c2]{\strut \emph{came}};
\node[draw,ellipse](s2)[above of=c2]{s};
\draw (wonder) to (s1);
\draw[->] (s1) to (I);
\draw (wonder) to (c1);
\draw[->] (c1) to (who);
\draw (who) to (c2);
\draw[->] (c2) to (came);
\draw (came) to (s2);
\draw[->] (s2) to (who);
\end{tikzpicture}
\caption{Complex dependencies in a relative clause}
\label{fig:16}
\end{figure}
A similar analysis applies to relative clauses. For instance, in (\ref{ex:31}), the relative pronoun \emph{who} depends on the antecedent \emph{man} as an adjunct and on \emph{called} as its subject, while the ``relative verb'' \emph{called} depends on \emph{who} as its obligatory complement.
\begin{exe}
\ex \label{ex:31} I knew the man who called.
\end{exe}
Pied-piping presents well-known challenges. Take, for example, (\ref{ex:32}) \citep[212]{ps2}.
\begin{exe}
\ex \label{ex:32} Here's the minister [[in [the middle [of [whose sermon]]]] the dog barked]
\end{exe}
\noindent
According to WG, \emph{whose} (which as a determiner is head of the phrase \emph{whose sermon}) is both an adjunct of its antecedent \emph{minister} and also the head of the relative verb \emph{barked}, just as in the simpler example. The challenge is to explain the word order: how can \emph{whose} have dependency links to both \emph{minister} and \emph{barked} when it is surrounded, on both sides, by words on which it depends? Normally, this would be impossible, but pied-piping is special. The WG analysis \citep{Hudson2018a} locates the peculiarities of pied-piping entirely in the word order, invoking a special relation ``pipee'' which transfers the expected positional properties of the relative pronoun (the ``piper'') up the dependency chain – in this case, to the preposition \emph{in}.
And so we finish this review of complex dependencies by answering the question that exercised the
minds of the Arabic grammarians in the Abbasid Ca\-liph\-ate: is mutual dependency possible?
The arrow notation of WG allows grammars to generate the relevant structures, so the answer is yes,
and HPSG can achieve the same effect by means of re-entrancy (see
\citew[\page 50]{ps2} for the mutual selection of determiner and noun); so this conclusion reflects
another example of theoretical convergence.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Grammatical functions}
\label{sec:5.3}
%\largerpage
As I have already explained, more or less traditional grammatical functions such as subject and
adjunct play a central part in WG, and more generally, they are highly compatible with any version
of DS, because they are all sub-divisions of the basic function ``dependent''. This being so, we can
define a taxonomy of functions such as the one in Figure~\ref{fig:8}, parts of which are developed in Figure~\ref{fig:17} to accommodate an example of the very specific functions which are needed in any complete grammar: the second complement of \emph{from}, as in \emph{from London to Edinburgh}, which may be unique to this particular preposition.
\begin{figure}
\centering
\begin{tikzpicture}[node distance=1.5cm]
%first row
\node[shape=ellipse,draw](object) at (0,0){object};
\node[shape=ellipse,draw](comp2)[right of=object, node distance=4cm]{2\textsuperscript{nd} complement of \emph{from}};
%second row
\node[shape=ellipse,draw](subject)[above of=object]{subject};
\node[shape=ellipse,draw](comp1)[above of=comp2]{complement};
%third row
\node[shape=ellipse,draw](adj)[above of=subject]{adjunct};
\node[shape=ellipse,draw](valent)[above of=comp1]{valent};
%fourth row
\node[shape=ellipse,draw](dep)[above of=valent]{dependent};
%arrows
\draw[<-,>=open triangle 90 reversed] (dep) to (valent);
\draw[<-,>=open triangle 90 reversed] (valent) to (comp1);
\draw[<-,>=open triangle 90 reversed] (comp1) to (comp2);
%
\draw (object.north) to ([yshift=-.13cm]comp1.south);
\draw (subject.north) to ([yshift=-.13cm]valent.south);
\draw (adj.north) to ([yshift=-.13cm]dep.south);
\end{tikzpicture}
\caption{A taxonomy of grammatical functions}
\label{fig:17}
\end{figure}
HPSG also recognises a taxonomy of functions by means of three lists attached to any head word:
\begin{description}
\item[\textnormal{\textsc{spr}:}] \label{it:spr} the word's specifier, i.e.\ for a noun its determiner and (in some versions of HPSG) for a verb its subject
\item[\textnormal{\textsc{subj}:}] \label{it:subj} the word's subject, i.e.\ the subject of a verb
(in some versions of HPSG) and the subject of certain other predicates (\eguk adjectives)
\item[\textnormal{\textsc{comps}:}] \label{it:comps} its complements
\item[\textnormal{\textsc{arg-st}:}] \label{it:arg-st} its specifier, its subject, and its complements, i.e.\ in
WG terms, its valents.
\end{description}
\noindent
The third list concatenates the first two, so the same analysis could be achieved in WG by a
taxonomy in which \textsc{spr} and \textsc{comps} both \rel{isa} \textsc{arg-st}. However, there are
also two important differences: in HPSG, adjuncts have a different status from other dependents, and
these three general categories are lists.
Adjuncts are treated differently in the two theories. In WG, they are dependents, and located in the
same taxonomy as valents; so in HPSG terms they would be listed among the head word's attributes,
along with the other dependents but differentiated by not being licensed by the head. But HPSG
reverses this relationship by treating the head as a \textsc{mod} (``modified'') of the adjunct. For
example, in (\ref{ex:33}) \emph{she} and \emph{it} are listed in the \textsc{arg-st} of \emph{ate}
but \emph{quickly} is not mentioned in the AVM of \emph{ate}; instead, \emph{ate} is listed as
\textsc{mod} of \emph{quickly}.
%\largerpage
\begin{exe}
\ex \label{ex:33} She ate it quickly.
\end{exe}
\noindent
This distinction, inherited from Categorial Grammar, correctly reflects the facts of government:
\emph{ate} governs \emph{she} and \emph{it}, but not \emph{quickly}. It also reflects one possible
analysis of the semantics, in which \emph{she} and \emph{it} provide built-in arguments of the
predicate ``eat'', while \emph{quickly} provides another predicate ``quick'', of which the whole
proposition \emph{eat}(\emph{she}, \emph{it}) is the argument. Other semantic analyses are of course
possible, including one in which ``manner'' is an optional argument; but the proposed analysis is
consistent with the assumptions of HPSG.
On the other hand, HPSG also recognises a \textsc{head-daughter} in schemata like
the Specifier-Head, the Filler-Head, the Head"=Complement and the Head"=Adjunct Schema and in
the construction which includes \emph{quickly}, the latter is not the head. So what unifies
arguments and adjuncts is the fact of not being heads (being members of the \textsc{non-head-dtrs}
list in some versions of HPSG). In contrast, DS theories (including WG) agree
in recognising adjuncts as dependents, so arguments and adjuncts are unified by this category, which
is missing from most versions of HPSG, though not from all \citep*{BMS2001a}. The DS analysis follows
from the assumption that dependency isn't just about government, nor is it tied to a logical
analysis based on predicates and arguments. At least in WG, the basic characteristic of a dependent
is that it modifies the meaning of the head word, so that the resultant meaning is (typically) a
hyponym of the head's unmodified meaning. Given this characterisation, adjuncts are core dependents;
for instance \emph{big book} is a hyponym of \emph{book} (i.e.\ ``big book'' \rel{isa} ``book''), and
\emph{she ate it quickly} is a hyponym of \emph{she ate it}. The same characterisation also applies
to arguments: \emph{ate it} is a hyponym of \emph{ate}, and \emph{she ate it} is a hyponym of
\emph{ate it}. (Admittedly hyponymy is merely the default, and as explained in Section~\ref{sec:4.1}
it may be overridden by the details of particular adjuncts such as \emph{fake} as in \emph{fake
diamonds}; but exceptions are to be expected.)
Does the absence in HPSG of a unifying category ``dependent'' matter? So long as \textsc{head} is
available, we can express word-order generalisations for head-final and head-initial languages, and
maybe also for ``head-medial'' languages such as \ili{English} \citep[172]{Hudson2010b-u}. At least
in these languages, adjuncts and arguments follow the same word-order rules, but although it is
convenient to have a single cover term ``dependent'' for them, it is probably not essential. So
maybe the presence of \textsc{head} removes the need for its complement term, \textsc{dependent}.
The other difference between HPSG and WG lies in the way in which the finer distinctions among
complements are made. In HPSG they are shown by the ordering of elements in a list, whereas WG
distinguishes them as further subcategories in a taxonomy. For example, in HPSG the direct object is
identified as the second NP in the \textsc{arg-st} list, but in WG it is a sub-category of
``complement'' in the taxonomy of Figure~\ref{fig:17}. In this case, each approach seems to offer
something which is missing from the other.
On the one hand, the ordered lists of HPSG reflect the attractive ranking of dependents offered by
Relational Grammar \citep{PP83a-u,Blake1990} in which arguments are numbered from 1 to 3 and can be
``promoted'' or ``demoted'' on this scale. The scale had subjects at the top and remote adjuncts at
the bottom, and appeared to explain a host of facts from the existence of argument-changing
alternations such as passivisation \citep{Levin93a-u} to the relative accessibility of different
dependents to relativisation \citep{KC77a}. An ordered list, as in \textsc{arg-st}, looks like a
natural way to present this ranking of dependents.
On the other hand, the taxonomy of WG functions has the attraction of open-endedness and
flexibility, which seems to be in contrast with the HPSG analysis which assumes a fixed and universal list of
dependency types defined by the order of elements in the various categories discussed previously
(\textsc{spr}, \textsc{comps} and \textsc{arg-st}). A universal list of categories seems to require an
explanation: Why a universal list? Why this particular list? How does the list develop in a
learner's mind? In contrast, a taxonomy can be learned entirely from experience, can vary across
languages, and can accommodate any amount of minor variation. Of these three attractions, the
easiest to illustrate briefly is the third. Take once again the \ili{English} preposition
\emph{from}, as in (\ref{ex:34}).
%
\begin{exe}
\ex \label{ex:34} From London to Edinburgh is four hundred miles.
\end{exe}
%
\noindent
Here \emph{from} seems to have two complements: \emph{London} and \emph{to Edinburgh}. Since they
have different properties, they must be distinguished, but how? The easiest and arguably correct
solution is to create a special dependency type just for the second complement of \emph{from}. This
is clearly unproblematic in the flexible WG approach, where any number of special dependency types
can be added at the foot of the taxonomy, but much harder if every complement must fit into a
universal list. So, HPSG seems to have a problem here, but on closer inspection this is not the
case: first, there is no claim that \argst is universal. For example, \citet{KM15a-u} discuss \ili{Oneida}
(Iroquoian) and argue that this language does not have syntactic valence and hence it would not make
sense to assume an \argstl, which entails that \argst is not universal. (See also
\citew{MuellerCoreGram} and
\crossrefchapteralt[Section~\ref{sec-empty-els-innate-knowledge}]{minimalism} on the non-assumption
of innate language-specific knowledge in HPSG.) \citet{KC77a} discussed
the obliqueness order as a universal tendency and it plays a role in various phenomena:
relativization, case assignment, agreement, pronoun binding (see the chapters on these phenomena by
\citealt{chapters/case}, \citealt{chapters/agreement}, \citealt{chapters/binding}) and an order is also
needed for capturing generalizations on linking \citep*{chapters/arg-st}. But apart from this there
is no label or specific category information attached to say the third element in the
\argstl. The general setting also allows for subjectless \argst lists as needed in grammars of
German. The respective lexemes would have an object at the first position of the \argstl.
English \emph{from} is also unproblematic: the second element in an \argstl can be
anything. A respective specification can be lexeme specific or specific for a class of lexemes (see
Chapters by \citet*{chapters/idioms} on idioms and by \citet*{chapters/arg-st} on linking).
To summarise the discussion, therefore, HPSG and WG offer fundamentally different treatments of
grammatical functions with two particularly salient differences. In the treatment of adjuncts, there
are reasons for preferring the WG approach in which adjuncts and arguments are grouped together
explicitly as dependents. But in distinguishing different types of complement, the HPSG lists seem
to complement the taxonomy of WG, each approach offering different benefits. This is clearly an area
needing further research.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{HPSG without PS?}
\label{sec:6}
This chapter on HPSG and DS raises a fundamental question for HPSG: does it really need PS? Most
introductory textbooks present PS as an obvious and established approach to syntax, but it is only
obvious because these books ignore the DS alternative: the relative pros and cons of the two
approaches are rarely assessed. Even if PS is in fact better than DS, this can't be described as
``established'' (in the words of one of my reviewers) until its superiority has been
demonstrated. This hasn't yet happened. The historical sketch showed very clearly that nearly two
thousand years of syntactic theory assumed DS, not PS, with one exception: the subject-predicate
analysis of the proposition (later taken to be the sentence). Even when PS was invented by
Bloomfield, it was combined with elements of DS, and Chomsky's PS, purified of all DS elements, only
survived from 1957 to 1970.
A reviewer also argues that HPSG is vindicated by the many large-scale grammars that use it (see
also \crossrefchapterw[Section~\ref{cl:resources}]{cl} for an overview). These
grammars are indeed impressive, but DS theories have also been implemented in the equally
large-scale projects listed in Section~\ref{sec:2}. In any case, the question is not whether HPSG is
a good theory, but rather whether it might be even better without its PS assumptions. The challenge
for HPSG, then, is to explain why PS is a better basis than DS. The debate has hardly started, so
its outcome is unpredictable; but suppose the debate favoured DS. Would that be the end of HPSG? Far
from it. It could survive almost intact, with just two major changes.
The first would be in the treatment of grammatical functions. It would be easy to bring all
dependents together in a list called \textsc{deps} \citep{BMS2001a} with \textsc{adjuncts} and
\textsc{comps} as sub-lists, or even with a separate subcategory for each sub-type of dependent
\citep{Hellan2017}.
The other change would be the replacement of phrasal boxes by a single list of words. (\ref{ex:35})
gives a list for the example with which we started (with round and curly brackets for ordered and
unordered sets, and a number of sub-tokens for each word):
\begin{exe}
\ex \label{ex:35} (\{\emph{many}, \emph{many}+\emph{h}\}, \{\emph{students}, \emph{students}+\emph{a}\}, \{\emph{enjoy}, \emph{enjoy}+\emph{o}, \emph{enjoy}+\emph{s}\}, \emph{syntax})
\end{exe}
\noindent
Each word in this list stands for a whole box of attributes which include syntactic dependency links
to other words in the list. The internal structure of the boxes would otherwise look very much like
standard HPSG, as in the schematic neo-HPSG structure in Figure~\ref{fig:18}. (To improve
readability by minimizing crossing lines, attributes and their values are separated as usual by a
colon, but may appear in either order.)
\begin{figure}
\centering
\begin{tikzpicture}[node distance=2.5cm]
%first row
\node[draw, align=center](many1) at (0,0){\emph{many}\\\textsc{mod:} \{~~\}};
\node[draw, align=center](students1)[right of=many1]{\emph{students}\\\{~~\} :\textsc{deps}};
\node[draw, align=center](enjoy1)[right of=students1]{\emph{enjoy}\\\{~~\} :\textsc{sbj}\\\textsc{obj}: \{~~\}\\\textsc{deps}: \{~~\}};
\node[draw, align=center](syntax)[right of=enjoy1]{\emph{syntax}\\\textsc{deps}: \{~~\}};
%second row
\node[draw, align=center](many2)[above of=many1, node distance=3cm]{\emph{many$+$h}\\\textsc{mod}: \{~~\}};
\node[draw, align=center](students2)[right of=many2]{\emph{students$+$a}\\\{~~\} :\textsc{deps}};
\node[draw, align=center](enjoy2)[right of=students2]{\emph{enjoy}\\\{~~\} :\textsc{sbj}\\\textsc{obj}: \{~~\}\\\textsc{deps}: \{~~\}};
%third row
\node[draw, align=center](enjoy3)[above of=enjoy2, node distance=3cm]{\emph{enjoy}\\\{~~\} :\textsc{sbj}\\\textsc{obj}: \{~~\}\\\textsc{deps}: \{~~\}};
%arrows
\draw[<-,>=open triangle 90 reversed] (many1) to (many2);
\draw[<-,>=open triangle 90 reversed] (students1) to (students2);
\draw[<-,>=open triangle 90 reversed] (enjoy1) to (enjoy2);
\draw[<-,>=open triangle 90 reversed] (enjoy2) to (enjoy3);
%%%
\draw[->] (.4,2.8) to (students2);
\draw[->] (2,2.7) to[in=south east, out=south] (many2);
\draw[->] (4.5,6.2) to[in=north, out=west] (students2);
\draw[->] (5.5,2.7) to[in=north, out=east] (syntax);
%%%
\draw[->] (5.5,-.7) to (5.4,-.2);
\draw[->] (5.4,-.6) to[in=south, out=north] (4.7,.2);
%
\draw[->] (5.5,2.3) to (5.4,2.8);
\draw[->] (5.4,2.4) to[in=south, out=north] (4.7,3.2);
%
\draw[->] (5.5,5.3) to (5.4,5.8);
\draw[->] (5.4,5.4) to[in=south, out=north] (4.7,6.2);
%braces
\node(b1)[left of=many1, node distance=1cm]{(};
\node(b2) [right of=syntax, node distance=1cm]{)};
\end{tikzpicture}
\caption{A neo-HPSG analysis}
\label{fig:18}
\end{figure}
Figure~\ref{fig:18} can be read as follows:
\begin{itemize}
\item The items at the bottom of the structure (\emph{many}, \emph{students}, \emph{enjoy} and
\emph{syntax}) are basic types stored in the grammar, available for modification by the
dependencies. These four words are the basis for the ordered set in (\ref{ex:35}), and shown here
by the round brackets, with the ordering shown by the left-right dimension. This list replaces the
ordered partonomy of HPSG.
\item Higher items in the vertical taxonomy are tokens and sub-tokens, whose names show the
dependency that defines them (\emph{h} for ``head'', \emph{a} for ``adjunct'', and so on). The
taxonomy above \emph{enjoy} shows that \emph{enjoy}+\emph{s} \rel{isa} \emph{enjoy}+\emph{o} which \rel{isa}
\emph{enjoy}, just as in an HPSG structure where each dependent creates a new representation of
the head by satisfying and cancelling a valency need and passing the remaining needs up to the new
representation.
\item The taxonomy above \emph{students} shows that \emph{students}+\emph{a} is a version of
\emph{students} that results from modification by \emph{many}, while the parallel one above
\emph{many} shows that (following HPSG practice) \emph{many}+\emph{h} has the function of
modifying \emph{students}.
\end{itemize}
\noindent
Roughly speaking, each boxed item in this diagram corresponds to an AVM in a standard HPSG analysis.
In short, modern HPSG could easily be transformed into a version of DS, with a separate AVM for each
word. As in DS, the words in a sentence would be represented as an ordered list interrelated partly
by the ordering and partly by the pairwise dependencies between them. This transformation is
undeniably possible. Whether it is desirable remains to be established by a programme of research
and debate which will leave the theory more robust and immune to challenge.%
\indexdgend
\is{Word Grammar|)}
\section*{Abbreviations}
\begin{tabularx}{.99\textwidth}{@{}lX}
%\textsc{a} & agent\\ is in LGR
\textsc{nm} & non-masc. (class II--IV)\\
\textsc{ii} & noun class II\\
\textsc{iv} & noun class IV\\
\textsc{prop} & proprietive\\
\end{tabularx}
\section*{\acknowledgmentsEN}
I would like to take this opportunity to thank Stefan Müller for his unflagging insistence on getting everything right.
{\sloppy
\printbibliography[heading=subbibliography,notkeyword=this]
}
\end{document}
% <!-- Local IspellDict: en_GB-ise-w_accents -->
%%% Local Variables:
%%% mode: latex
%%% TeX-master: t
%%% TeX-engine: xetex
%%% End:
\myChapter{Generation of bots for RTS games using Genetic Programming}\label{chap:rts}
\begin{flushright}{\slshape
This last week with Fry has been great. Beneath his warm,
\\soft exterior beats the cold, mechanical heart of a robot. } \\ \medskip
--- {Bender. I, Roomate. Futurama.}
\end{flushright}
\minitoc\mtcskip
\vfill
\lettrine{T}{he} last application that fulfills \textsc{Objective 4} uses OSGiLiath to obtain competitive bots for RTS games. In this application, Genetic Programming (explained in Section \ref{subsec:classicEAs}) is used to validate whether the genericity of the evolutionary model explained in Section \ref{sec:distributed:design} can also be adapted to SOA. SOA-EA and OSGiLiath are used to study the influence of tree depth in GP when creating competitive bots for RTS games. New individuals, new implementations of mutation and crossover, and services to execute remote environments outside OSGi are developed with SOA-EA.
\section{Background}
\lettrine{R}{eal} Time Strategy (RTS) games are a type of videogame
where the play takes place in real time (that is, there are no
turns, as in chess). Well-known games of this genre are Age of
Empires\texttrademark~ or Warcraft\texttrademark. In this kind of game
the players have units, structures and resources, and they have to
confront other players to win battles. Artificial Intelligence
(AI) in these games is usually very complex, because it has to deal
with many actions and strategies at the same time.
The {\em Planet Wars} game, presented in the Google AI Challenge 2010\footnote{\url{http://planetwars.aichallenge.org/}}, has been used by several authors for the study of computational intelligence in RTS games
\cite{Lara2013mapgenerator,Mora2012Genebot,FernandezAres2012adaptive}. This
is because it is a simplification of the elements that are present in
the complex games previously mentioned (only one type of resource and
one type of unit).
Although this game has been described in previous works
\cite{Lara2013mapgenerator,Mora2012Genebot,FernandezAres2012adaptive},
% pero no en esta tesis, así que resúmelo (si al final vas a meter
% esto) - JJ
we summarize it by saying that the objective of the player is to conquer
enemy and neutral planets in a space-like simulator. Each player has
planets (resources) that produce ships (units) depending on a
growth rate. The player must send these ships to other planets
(literally, crashing them into the planet) to conquer them. A player wins
when he owns all the planets. As requirements, there is a limit of only one
second to calculate the next actions (this time window is
called a {\em turn}\footnote{Although in this work we use this
term, note that the game always runs in real time.}), and no
memory of the previous turns may be used. Figure \ref{fig:naves}
shows a screen capture of the game.
\begin{SCfigure}
\includegraphics[scale =0.7] {gfx/rts/naves.pdf}
\caption{Example of a run of the Planet Wars game. White planets and ships are owned by the player, and dark gray ones are controlled by the enemy. Light gray planets are neutral (not invaded).}
\label{fig:naves}
\end{SCfigure}
In this chapter we use Genetic Programming (GP) to obtain agents that play
the Planet Wars game. The aim is to obtain agents without injecting any human knowledge, deriving the rules to play automatically.
The objective of GP is to create functions or programs that solve given problems. Individuals are usually represented as trees formed by operators (or {\em primitives}) and variables ({\em terminals}). These sets are usually fixed and known. The genome size is therefore variable, but the maximum size (depth) of the individuals is usually fixed to avoid high evaluation costs.
We address the following questions:
\begin{itemize}
\item Can the tree-generated behaviour of an agent defeat an agent hand-coded by an experienced player, whose parameters have also been optimized?
\item Can this agent beat a more complicated opponent that adapts to the environment?
\item How does the maximum tree depth affect the results?
\end{itemize}
RTS games have been used extensively in the computational intelligence area (see \cite{Lara2013review} for a review).
Among other techniques, Evolutionary Algorithms (EAs) have been widely used in computational intelligence in RTS games \cite{Lara2013review}. For example, for parameter optimization \cite{Esparcia10FPS}, learning \cite{Kenneth2005neuroevolution} or content generation \cite{Mahlmann2012MapGeneration}.
One of these techniques, genetic programming, has proved to be a good tool for developing strategies in games, achieving results comparable to human or human-based competitors \cite{Sipper2007gameplaying}. GP-produced solvers have also ranked higher than solvers produced by other techniques, even beating high-ranking humans \cite{Elyasaf2012FreeCell}. GP has also been used in different kinds of games, such as board games \cite{Benbassat2012Reversi}, (in principle) simpler games such as Ms. Pac-Man \cite{Brandstetter2012PacMan} and Spoof \cite{Wittkamp2007spoof}, and even modern video games such as First-Person Shooters (FPS) (for example, Unreal\texttrademark~ \cite{Esparcia2013GPunreal}).
Planet Wars, the game we use in this chapter, has been used as an experimental environment for testing agents in other works. For example, in
\cite{Mora2012Genebot} the authors programmed the behaviour of a {\em bot} (a computer-controlled player) with a decision tree of 3 levels. The values of these rules were then optimized using a genetic algorithm to tune the strategy rates and percentages.
Results showed good performance against other bots
provided by the Google AI Challenge.
In \cite{FernandezAres2012adaptive} the authors improved this agent by
optimizing it on different types of maps and selecting the set of optimized
parameters depending on the map where the game takes place,
using a tree of 5 levels. This version outperformed the previous
bot with 87\% of victories.
In this chapter we use GP to create the decision tree,
instead of modelling it from our own gaming experience, and we compare
the resulting agent with the two bots presented above.
\section{Application of SOA-EA}
\subsection{Identification}
As in previous examples, in the {\em Problem domain} an {\em Initializer} of individuals and a {\em FitnessCalculator} service need to be provided. The former needs to generate individuals codified as trees of decisions and operations ({\em TreeGenome}), and the latter integrates the Planet Wars environment into OSGiLiath.
The operators of the {\em Algorithm domain} that deal with this new codification of individuals need to be created: a Crossover and a Mutation. However, some of the operators defined in the previous chapter (such as the {\em Parent Selector}) do not need modification.
Finally, inside the {\em Infrastructure domain}, services to test each individual ({\em IndividualTester}) and to convert the codification of the individual into the appropriate codification for the different playing environments ({\em Conversor}) need to be created.
\subsection{Specification}
The proposed agent receives a tree to be executed. The generated tree
is a binary tree of expressions formed by two different types of nodes:
\begin{itemize}
\item {\em Decision}: a logical expression formed by a variable, a less-than operator ($<$), and a number between 0 and 1. Decisions are the equivalent of the ``primitives'' in the field of GP.
\item {\em Action}: the leaves of the tree (therefore, the ``terminals''). Each action is the name of the method to call, which indicates to which planet a percentage of the available ships (from 0 to 1) is sent from the planet that executes the tree.
\end{itemize}
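As an illustration, the two node types above can be sketched as a tiny Java class hierarchy. This is a hedged sketch: the class and method names ({\em Node}, {\em execute}) are hypothetical, since the actual {\em TreeGenome} internals are not listed in this chapter.

```java
import java.util.Map;

// Hypothetical sketch of the two node types described above.
interface Node {
    // Returns the textual form of the chosen action, e.g. "attackNearestEnemyPlanet(0.819)".
    String execute(Map<String, Double> ratios);
}

// Decision: "variable < threshold" selects the left or right subtree.
class Decision implements Node {
    final String variable;   // e.g. "myShipsEnemyRatio"
    final double threshold;  // a number in [0, 1]
    final Node ifTrue, ifFalse;

    Decision(String variable, double threshold, Node ifTrue, Node ifFalse) {
        this.variable = variable;
        this.threshold = threshold;
        this.ifTrue = ifTrue;
        this.ifFalse = ifFalse;
    }

    public String execute(Map<String, Double> ratios) {
        return ratios.get(variable) < threshold
                ? ifTrue.execute(ratios)
                : ifFalse.execute(ratios);
    }
}

// Action: a leaf naming the method to call and the percentage of ships to send.
class Action implements Node {
    final String method;     // e.g. "attackWeakestNeutralPlanet"
    final double percentage; // fraction of available ships, in [0, 1]

    Action(String method, double percentage) {
        this.method = method;
        this.percentage = percentage;
    }

    public String execute(Map<String, Double> ratios) {
        return method + "(" + percentage + ")";
    }
}
```

Executing such a tree walks the decisions top-down until a leaf action is reached, exactly as in the generated code of Figure \ref{fig:java}.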
The different variables for the decisions are:
\begin{itemize}
\item {\em myShipsEnemyRatio}: Ratio between the player's ships and enemy's ships.
\item {\em myShipsLandedFlyingRatio}: Ratio between the player's landed and flying ships.
\item {\em myPlanetsEnemyRatio}: Ratio between the number of player's planets and the enemy's ones.
\item {\em myPlanetsTotalRatio}: Ratio between the number of player's planets and the total number of planets (neutral and enemy included).
\item {\em actualMyShipsRatio}: Ratio between the number of ships in the specific planet that evaluates the tree and player's total ships.
\item {\em actualLandedFlyingRatio}: Ratio between the number of ships landed and flying from the specific planet that evaluates the tree and player's total ships.
\end{itemize}
The list of actions is:
\begin{itemize}
\item {\em Attack Nearest (Neutral|Enemy|NotMy) Planet}: The objective is the nearest planet.
\item {\em Attack Weakest (Neutral|Enemy|NotMy) Planet}: The objective is the planet with the fewest ships.
\item {\em Attack Wealthiest (Neutral|Enemy|NotMy) Planet}: The objective is the planet with the highest growth rate.
\item {\em Attack Beneficial (Neutral|Enemy|NotMy) Planet}: The objective is the most beneficial planet, that is, the one with the highest growth rate divided by its number of ships.
\item {\em Attack Quickest (Neutral|Enemy|NotMy) Planet}: The objective is the planet that is easiest to conquer: the one with the lowest product of the distance from the planet that executes the tree and the number of ships in the objective planet.
\item {\em Attack (Neutral|Enemy|NotMy) Base}: The objective is the planet with the most ships (that is, the base).
\item {\em Attack Random Planet}.
\item {\em Reinforce Nearest Planet}: Reinforce the player's planet nearest to the planet that executes the tree.
\item {\em Reinforce Base}: Reinforce the player's planet with the highest number of ships.
\item {\em Reinforce Wealthiest Planet}: Reinforce the player's planet with the highest growth rate.
\item {\em Do nothing}.
\end{itemize}
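To make one of these rules concrete, the {\em Attack Quickest Planet} selection can be sketched in Java as picking the candidate that minimizes distance times defending ships. The {\em Planet} fields shown here are assumptions for illustration, not the game's actual API.

```java
import java.util.Comparator;
import java.util.List;

// Hypothetical candidate planet, as seen from the planet executing the tree.
class Planet {
    final String name;
    final double distance; // distance from the source planet
    final int ships;       // ships defending the candidate planet

    Planet(String name, double distance, int ships) {
        this.name = name;
        this.distance = distance;
        this.ships = ships;
    }
}

class TargetSelection {
    // "Quickest" target: lowest product of distance and defending ships.
    static Planet quickest(List<Planet> candidates) {
        return candidates.stream()
                .min(Comparator.comparingDouble(p -> p.distance * p.ships))
                .orElseThrow();
    }
}
```

The other attack rules differ only in the scoring function (fewest ships, highest growth rate, growth rate over ships, and so on).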
An example of a possible tree is shown in Figure \ref{fig:java}. This example tree has a total of 5 nodes, with 2 decisions and 3 actions, and a depth of 3 levels.
\newsavebox{\javaboxrts}
\begin{lrbox}{\javaboxrts}
\begin{minipage}{10cm}
\begin{minted}[mathescape,
linenos,
frame=lines,
framesep=2mm]{java}
if(myShipsLandedFlyingRatio<0.796)
if(actualMyShipsRatio<0.201)
attackWeakestNeutralPlanet(0.481);
else
attackNearestEnemyPlanet(0.913);
else
attackNearestEnemyPlanet(0.819);
\end{minted}
\end{minipage}
\end{lrbox}
\begin{SCfigure}[tb]
\usebox{\javaboxrts}
\caption{Example of a generated Java tree.}
\label{fig:java}
\end{SCfigure}
The bot behaviour is explained in Figure \ref{alg:turn}.
\newsavebox{\algoartsbox}
\begin{lrbox}{\algoartsbox}
\fbox{
\begin{minipage}{10cm}
\begin{algorithmic}
%\COMMENT{At the beginning of the execution the agent receives the tree}
\STATE tree $\gets$ readTree()
\WHILE{game not finished}
\STATE{start the turn}
\STATE {calculateGlobalPlanets()}%\tcp{e.g. Base or Enemy Base}\ %ARREGLAR
\STATE {calculateGlobalRatios()}%\tcp{e.g. myPlanetsEnemyRatio}\
\FOR{each p in PlayerPlanets}
\STATE calculateLocalPlanets(p)%\tcp{e.g. NearestNeutralPlanet to p}\
\STATE calculateLocalRatios(p)%\tcp{e.g actualMyShipsRatio}\
\STATE executeTree(p,tree)%\tcp{Send a percentage of ships to destination}\
\ENDFOR
\ENDWHILE
\end{algorithmic}
\end{minipage}
}
\end{lrbox}
\begin{SCfigure}[20][htb]
\usebox{\algoartsbox}
\caption{Pseudocode of the proposed agent. The tree is fixed during the agent's entire execution.}
\label{alg:turn}
\end{SCfigure}
A hierarchical fitness (a {\em HierarchicalFitness} implementation) is used, as proposed in \cite{Mora2012Genebot}. An individual is better than another if it wins in a higher number of maps. In case of equal victories, the individual that takes more turns to be defeated (i.e. it is stronger) is considered better. The {\em PlanetWarsFitnessCalculator} confronts each individual with other agents a number of times.
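This hierarchical comparison can be sketched as a Java comparator over a hypothetical {\em Fitness} pair (the real {\em HierarchicalFitness} service may be structured differently): victories are compared first, and more turns survived breaks ties.

```java
import java.util.Comparator;

// Hypothetical sketch of the hierarchical fitness described above:
// more victories wins; on a tie, the individual that survived more
// turns before being defeated is considered better.
class Fitness {
    final int victories; // maps won (0..5 during evolution)
    final int turns;     // turns survived in the lost games

    Fitness(int victories, int turns) {
        this.victories = victories;
        this.turns = turns;
    }

    // Orders individuals from best to worst.
    static final Comparator<Fitness> BEST_FIRST =
            Comparator.comparingInt((Fitness f) -> f.victories)
                      .thenComparingInt(f -> f.turns)
                      .reversed();
}
```

A lexicographic comparator like this is a common way to encode hierarchical fitness without collapsing the two criteria into a single weighted score.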
\subsection{Implementation and Deployment}
The {\em TreeGenome} individual is composed of {\em Decisions} and {\em Actions} codified as strings matching the actions the agent can execute, according to the description in the previous section. The {\em Conversor} implementation translates the tree into a string of Java code to be executed in the Planet Wars environment by the {\em IndividualTester}, using the {\em Javassist}\footnote{\url{http://www.csg.ci.i.u-tokyo.ac.jp/~chiba/javassist/}} library to compile the string into executable Java bytecode.
The rest of the implementations used are the ones available in OSGiLiath (previously explained in Chapter \ref{chap:osgiliath}): {\em ListPopulation}, {\em DeterministicTournamentSelector}, {\em NGenerationsStopCriterion} and {\em DistributedFitness} to execute several simulations at the same time. Besides using Genetic Programming, the flow that orchestrates the previous services is the {\em EvolutionaryAlgorithm} implementation used in previous chapters.
\section{Experimental Setup}
\label{sec:experiments}
\lettrine{S}{ub-tree} crossover and 1-node mutation evolutionary operators have been used, following other researchers' proposals that have obtained good results with these operators \cite{Esparcia2013GPunreal}. In our case, the mutation randomly changes the decision of a node or mutates its value with a step-size of 0.25 (an adequate value, empirically tested). Each configuration is executed 30 times, with a population of 32 individuals and a 2-tournament selector for a pool of 16 parents.
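The value part of the 1-node mutation can be sketched as follows. This is a hedged illustration, assuming the step is applied in either direction and the result is clamped to $[0,1]$; the actual operator implementation is not listed in this chapter.

```java
import java.util.Random;

// Hypothetical sketch of the numeric half of the 1-node mutation:
// nudge a node's value by +/- the step-size and clamp to [0, 1].
class Mutation {
    static final double STEP = 0.25;

    static double mutateValue(double value, Random rng) {
        double mutated = value + (rng.nextBoolean() ? STEP : -STEP);
        return Math.max(0.0, Math.min(1.0, mutated)); // keep in [0, 1]
    }
}
```

The other half of the operator (swapping the decision or action held by the node) only requires drawing a replacement from the fixed primitive and terminal sets.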
To test each individual during the evolution, a battle against a previously created bot is performed in 5 different (but representative) maps provided by Google. The maximum fitness is, therefore, 5 victories and 0 turns. Also, as proposed in \cite{Mora2012Genebot}, and due to the noisy fitness effect, all individuals are re-evaluated in every generation.
Two publicly available bots have been chosen for our experiments\footnote{Both can be downloaded from \url{https://github.com/deantares/genebot}}. The first bot to confront is {\em GeneBot}, proposed in \cite{Mora2012Genebot}. This bot was trained using a GA to optimize the 8 parameters of a set of hand-made rules derived from the experience of an expert human player. The second one is an advanced version of the previous one, called {\em Exp-Genebot} (Expert Genebot) \cite{FernandezAres2012adaptive}, which widely outperformed Genebot. The Exp-Genebot bot analyses the distribution of the planets in the map to choose a set of parameters previously optimized by a GA. Both bots are the best individual obtained across all runs of their respective algorithms (not an average one).
After running our algorithm without a tree depth limitation, it has also been executed with the lowest and average depths obtained for the best individuals (3 and 7, respectively) to study whether this number has any effect on the results. Table \ref{tab:parameters} summarizes all the parameters used.
\begin{SCtable}[][tb]
\resizebox{11cm}{!}{
\begin{tabular}{ll}
\hline
\rowcolor{colorCorporativoSuave}{\em Parameter Name} & {\em Value} \\\hline\hline
\rowcolor{colorCorporativoMasSuave}Population size & 32 \\\hline
\rowcolor{colorCorporativoSuave}Crossover type & Sub-tree crossover \\ \hline
\rowcolor{colorCorporativoMasSuave}Crossover rate & 0.5\\ \hline
\rowcolor{colorCorporativoSuave}Mutation & 1-node mutation\\ \hline
\rowcolor{colorCorporativoMasSuave}Mutation step-size & 0.25 \\ \hline
\rowcolor{colorCorporativoSuave}Selection & 2-tournament \\ \hline
\rowcolor{colorCorporativoMasSuave}Replacement & Steady-state\\ \hline
\rowcolor{colorCorporativoSuave}Stop criterion & 50 generations \\ \hline
\rowcolor{colorCorporativoMasSuave}Maximum Tree Depth & 3, 7 and unlimited \\ \hline
\rowcolor{colorCorporativoSuave}Runs per configuration & 30 \\ \hline
\rowcolor{colorCorporativoMasSuave}Evaluation & Playing versus Genebot \cite{Mora2012Genebot} and Exp-Genebot \cite{FernandezAres2012adaptive} \\ \hline
\rowcolor{colorCorporativoSuave}Maps used in each evaluation & map76.txt map69.txt map7.txt map11.txt map26.txt \\ \hline
\end{tabular}
}
\caption{Parameters used in the experiments.}
\label{tab:parameters}
\end{SCtable}
After all the executions, we have evaluated the best individuals obtained in all runs by confronting them with the bots in a larger set of maps (the 100 maps provided by Google) to study the behaviour of the algorithm and how good the obtained bots are in maps that have not been used for training.
\section{Results}
\lettrine{T}{ables} \ref{tab:resultsGenebot} and \ref{tab:resultsExpgenebot} summarize the results of the execution of our EA. These tables also show the average age, depth and number of nodes of the best individuals obtained, as well as the population average at the end of the run. The average turns rows are calculated taking into account only the individuals with fewer than 5 victories, because this number is 0 if they have won all five battles.
\newcommand{\SetRowColor}[1]{\noalign{\gdef\RowColorName{#1}}\rowcolor{\RowColorName}}
\newcommand{\mymulticolumn}[3]{\multicolumn{#1}{>{\columncolor{\RowColorName}}#2}{#3}}
\newcommand{\mymultirow}[3]{\multirow{#1}{>{\rowcolor{\RowColorName}}#2}{#3}}
\begin{SCtable}[][tb]
\resizebox{11cm}{!}{
\begin{tabular}{ccccc} \hline
\SetRowColor{colorCorporativoSuave}\mymulticolumn{2}{c}{} & {\em Depth 3} & {\em Depth 7} & {\em Unlimited Depth} \\ \hline \hline
\rowcolor{colorCorporativoMasSuave} & Victories & \textbf{4.933} $\pm$ 0.25 & 4.83 $\pm$ 0.53 & 4.9 $\pm$ 0.30 \\ \cline{2-5}
\SetRowColor{colorCorporativoMasSuave}\multirow{-2}{*}{Best Fitness} & Turns & 244.5 $\pm$ 54.44 & 466 $\pm$ 205.44 & 266.667 $\pm$ 40.42 \\ \hline
\rowcolor{colorCorporativoSuave} & Victories & \textbf{4.486}$\pm$ 0.52 & 4.43 $\pm$ 0.07 & 4.711 $\pm$ 0.45 \\ \cline{2-5}
\SetRowColor{colorCorporativoSuave}\multirow{-2}{*}{Population Ave. Fitness} & Turns & 130.77$\pm$ 95.81 & 139.43 $\pm$ 196.60 & 190.346 $\pm$ 102.92\\ \hline
\rowcolor{colorCorporativoMasSuave} & Best & 3 $\pm$ 0 & 5.2 $\pm$ 1.78 & 6.933 $\pm$ 4.05 \\ \cline{2-5}
\SetRowColor{colorCorporativoMasSuave}\multirow{-2}{*}{Depth} & Population & 3 $\pm$ 0 & 5.267 $\pm$ 1.8 & 7.353 $\pm$ 3.11 \\ \hline
\rowcolor{colorCorporativoSuave} & Best & 7 $\pm$ 0 & 13.667 $\pm$ 7.68 & 22.133 $\pm$ 22.21 \\ \cline{2-5}
\SetRowColor{colorCorporativoSuave}\multirow{-2}{*}{Nodes} & Population & 7 $\pm$ 0 & 13.818 $\pm$ 5.86 & 21.418 $\pm$ 13.81 \\ \hline
\rowcolor{colorCorporativoMasSuave} & Best & \textbf{8.133} $\pm$ 3.95 & 5.467 $\pm$ 2.95 & 5.066 $\pm$ 2.11 \\ \cline{2-5}
\SetRowColor{colorCorporativoMasSuave}\multirow{-2}{*}{Age} & Population & \textbf{4.297} $\pm$ 3.027 & 3.247 $\pm$ 0.25 & 3.092 $\pm$ 1.27 \\ \hline
\end{tabular}
}
\caption{Average results obtained from each configuration versus Genebot. Each one has been tested 30 times.}
\label{tab:resultsGenebot}
\end{SCtable}
\begin{SCtable}[][tb]
\resizebox{11cm}{!}{
\begin{tabular}{ccccc} \hline
\SetRowColor{colorCorporativoSuave}\mymulticolumn{2}{c}{} & {\em Depth 3} & {\em Depth 7} & {\em Unlimited Depth} \\ \hline \hline
\rowcolor{colorCorporativoMasSuave} & Victories & 4.133 $\pm$ 0.50 & 4.2 $\pm$ 0.48 & \textbf{4.4} $\pm$ 0.56 \\ \cline{2-5}
\SetRowColor{colorCorporativoMasSuave}\multirow{-2}{*}{Best Fitness} & Turns & 221.625 $\pm$ 54.43 & 163.667 $\pm$ 106.38 & 123.533 $\pm$ 112.79\\ \hline
\rowcolor{colorCorporativoSuave} & Victories & 3.541 $\pm$ 0.34 & 3.689 $\pm$ 0.37 & \textbf{4.043} $\pm$ 0.38 \\ \cline{2-5}
\SetRowColor{colorCorporativoSuave}\multirow{-2}{*}{Population Ave. Fitness} & Turns & 200.086 $\pm$ 50.79 & 184.076 $\pm$ 57.02 & 159.094 $\pm$ 61.84 \\ \hline
\rowcolor{colorCorporativoMasSuave} & Best & 3 $\pm$ 0 & 5.2 $\pm$ 1.84 & 6.966 $\pm$ 4.44 \\ \cline{2-5}
\SetRowColor{colorCorporativoMasSuave}\multirow{-2}{*}{Depth} & Population & 3 $\pm$ 0 & 5.216 $\pm$ 0.92 & 6.522 $\pm$ 1.91 \\ \hline
\rowcolor{colorCorporativoSuave} & Best & 7 $\pm$ 0 & 12.6 $\pm$ 6.44 & 18.466 $\pm$ 15.46 \\ \cline{2-5}
\SetRowColor{colorCorporativoSuave}\multirow{-2}{*}{Nodes} & Population & 7 $\pm$ 0 & 13.05 $\pm$ 3.92 & 16.337 $\pm$ 7.67 \\ \hline
\rowcolor{colorCorporativoMasSuave} & Best & 4.266 $\pm$ 5.01 & 4.133 $\pm$ 4.26 & \textbf{4.7} $\pm$ 4.72 \\ \cline{2-5}
\SetRowColor{colorCorporativoMasSuave}\multirow{-2}{*}{Age} & Population & 3.706 $\pm$ 0.58 & 3.727 $\pm$ 0.62 & \textbf{3.889} $\pm$ 0.71 \\ \hline
\end{tabular}
}
\caption{Average results obtained from each configuration versus Exp-Genebot. Each one has been tested 30 times.}
\label{tab:resultsExpgenebot}
\end{SCtable}
As can be seen, versus Genebot the population average fitness is nearer to the optimum than versus Exp-Genebot, even with the lowest depth. The highest permanence in the population also occurs with a depth of 3 levels. On the contrary, when confronting Exp-Genebot, the configuration with unlimited depth achieves better results. This makes sense because more decisions have to be taken, since the enemy can behave differently in each map.
In the second experiment, we have confronted the 30 bots obtained in each configuration again with Genebot and Exp-Genebot, but in the 100 maps provided by Google. This validates whether the individuals obtained by our method can be competitive in terms of quality in maps not used for evaluation. Results are shown in Table \ref{tab:allmaps} and boxplots in Figure \ref{fig:victories}. It can be seen that, on average, the bots produced by our algorithm perform equal to or better than the best ones obtained by the previous authors. Note that even individuals with maximum fitness (5 victories) that have been kept in the population for several generations (as presented
before in Tables \ref{tab:resultsGenebot} and \ref{tab:resultsExpgenebot}) may not be representative of an extremely good bot in a wider set of maps that have not been used for training. As the distributions are not normal, a Kruskal-Wallis test has been used, obtaining significant differences in turns for the experiment versus Genebot (p-value = 0.0028) and in victories versus Exp-Genebot (p-value = 0.02681). Therefore, there are differences when using a maximum depth in the generation of bots. In both experiments, the trees created with a maximum of 7 levels of depth have obtained the best results.
To explain why results versus Genebot (a weaker bot than Exp-Genebot) are slightly worse than versus Exp-Genebot, even when the best individuals produced by the GP have higher fitness, we have to analyse how the best individual and the population evolve. Figure \ref{fig:gens} shows that the best individual versus Genebot reaches the optimum before the one versus Exp-Genebot, and also that the average population converges more quickly. This could lead to over-specialization: that is, the generated bots are over-trained to win in the five maps and, because of re-evaluation, these individuals keep changing after they have reached the optimum.
\begin{SCtable*}
\centering{
\begin{tabular}{ccccccc} \hline
\rowcolor{colorCorporativoSuave}{\em Configuration} & {\em Average maps won} & {\em Average turns} \\ \hline\hline
\SetRowColor{colorCorporativoMasSuave} \mymulticolumn{3}{c}{Versus Genebot} \\ \hline
\rowcolor{colorCorporativoSuave} Depth 3 & 47.033 $\pm$ 10.001 & 133.371 $\pm$ 16.34 \\ \hline
\rowcolor{colorCorporativoMasSuave} Depth 7 & 48.9 $\pm$ 10.21 & \textbf{141.386} $\pm$ 15.54 \\ \hline
\rowcolor{colorCorporativoSuave} Unlimited Depth & 50.23 $\pm$ 11.40 & 133.916 $\pm$ 10.55 \\ \hline
\SetRowColor{colorCorporativoMasSuave} \mymulticolumn{3}{c}{Versus Exp-Genebot} \\ \hline
\rowcolor{colorCorporativoSuave} Depth 3 & 52.367 $\pm$ 13.39 & 191.051 $\pm$ 67.79 \\ \hline
\rowcolor{colorCorporativoMasSuave} Depth 7 & \textbf{58.867} $\pm$ 7.35 & 174.694$\pm$ 47.50 \\ \hline
\rowcolor{colorCorporativoSuave} Unlimited Depth & 52.3 $\pm$ 11.57 & 197.492 $\pm$ 72.30 \\ \hline
\end{tabular}
\caption{Results of confronting the 30 best bots obtained from each configuration in the 100 maps.}
\label{tab:allmaps}
}
\end{SCtable*}
\begin{SCfigure}[htb]
\centering
\begin{tabular}{c}
\subfloat[Victories]{
\includegraphics[scale =0.50] {gfx/rts/victories.pdf}
\label{fig:subfigvictories}
}
\\
\subfloat[Turns]{
\includegraphics[scale =0.50] {gfx/rts/turns.pdf}
\label{fig:subfigturns}
}
\end{tabular}
\caption{Average of executing the 30 best bots in each configuration (3, 7 and U) versus Genebot (G) and Exp-Genebot (E).}
\label{fig:victories}
\end{SCfigure}
\begin{SCfigure}[htb]
\centering
\includegraphics[scale =0.60] {gfx/rts/generations.png}
\caption{Evolution of the best individual and the average population during one run for depth 7 versus Genebot and Exp-Genebot.}
\label{fig:gens}
\end{SCfigure}
\section{Conclusions}
\label{sec:conclusion}
\lettrine{T}{his} chapter has presented a service-oriented Genetic Programming algorithm that generates
agents for playing the Planet Wars game without using human knowledge. OSGiLiath has been used to obtain relevant results in this field, adding services to manipulate individuals codified as trees. All the services developed follow the genericity in EA development and the SOA requirements. The independence of the individual representation from the existing services, which facilitates their reuse, has been shown.
|
|
%!TEX TS-program = lualatex
%!TEX encoding = UTF-8 Unicode
\documentclass[12pt, hidelinks, addpoints]{exam}
\usepackage{graphicx}
\graphicspath{{/Users/goby/Pictures/teach/163/lab/}
{img/}} % set of paths to search for images
\usepackage{geometry}
\geometry{letterpaper, left=1.5in, bottom=1in}
%\geometry{landscape} % Activate for for rotated page geometry
\usepackage[parfill]{parskip} % Activate to begin paragraphs with an empty line rather than an indent
\usepackage{amssymb, amsmath}
\usepackage{mathtools}
\everymath{\displaystyle}
\usepackage{fontspec}
\setmainfont[Ligatures={TeX}, BoldFont={* Bold}, ItalicFont={* Italic}, BoldItalicFont={* BoldItalic}, Numbers={OldStyle}]{Linux Libertine O}
\setsansfont[Scale=MatchLowercase,Ligatures=TeX, Numbers=OldStyle]{Linux Biolinum O}
%\setmonofont[Scale=MatchLowercase]{Inconsolatazi4}
\usepackage{microtype}
% To define fonts for particular uses within a document. For example,
% This sets the Libertine font to use tabular number format for tables.
%\newfontfamily{\tablenumbers}[Numbers={Monospaced}]{Linux Libertine O}
% \newfontfamily{\libertinedisplay}{Linux Libertine Display O}
\usepackage{booktabs}
\usepackage{multicol}
\usepackage[normalem]{ulem}
\usepackage{longtable}
%\usepackage{siunitx}
\usepackage{array}
\newcolumntype{L}[1]{>{\raggedright\let\newline\\\arraybackslash\hspace{0pt}}p{#1}}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash\hspace{0pt}}p{#1}}
\newcolumntype{R}[1]{>{\raggedleft\let\newline\\\arraybackslash\hspace{0pt}}p{#1}}
\usepackage{enumitem}
\setlist{leftmargin=*}
\setlist[1]{labelindent=\parindent}
\setlist[enumerate]{label=\textsc{\alph*}.}
\setlist[itemize]{label=\color{gray}\textbullet}
\usepackage{hyperref}
%\usepackage{placeins} %PRovides \FloatBarrier to flush all floats before a certain point.
\usepackage{hanging}
\usepackage[sc]{titlesec}
%% Commands for Exam class
\renewcommand{\solutiontitle}{\noindent}
\unframedsolutions
\SolutionEmphasis{\bfseries}
\renewcommand{\questionshook}{%
\setlength{\leftmargin}{-\leftskip}%
}
\newcommand{\hidepoints}{%
\pointsinmargin\pointformat{}
}
\newcommand{\showpoints}{%
\nopointsinmargin\pointformat{(\thepoints)}
}
%Change \half command from 1/2 to .5
\renewcommand*\half{.5}
\pagestyle{headandfoot}
\firstpageheader{\textsc{bi}\,063 Evolution and Ecology}{}{\ifprintanswers\textbf{KEY}\else Name: \enspace \makebox[2.5in]{\hrulefill}\fi}
\runningheader{}{}{\footnotesize{pg. \thepage}}
\footer{}{}{}
\runningheadrule
\newcommand*\AnswerBox[2]{%
\parbox[t][#1]{0.92\textwidth}{%
\vspace{-0.5\baselineskip}\begin{solution}\textbf{#2}\end{solution}}
% \vspace*{\stretch{1}}
}
\newenvironment{AnswerPage}[1]
{\begin{minipage}[t][#1]{0.92\textwidth}%
\begin{solution}}
{\end{solution}\end{minipage}
\vspace*{\stretch{1}}}
\newlength{\basespace}
\setlength{\basespace}{5\baselineskip}
%\printanswers
\hidepoints
\begin{document}
\subsection*{Quiz: did you read the scientific paper? (5~points)}
You were required to read a paper before lab today. Let us find out whether you did.
\bigskip \bigskip
\begin{questions}
\question
What kind of organism was studied?
\AnswerBox{2\baselineskip}{A plant.}
% Obolaria virginica
\question
Where was the study conducted?
\AnswerBox{2\baselineskip}{Trail of Tears State Park, Cape Girardeau County, MO}
\question
Identify one variable that was measured for the study.
\AnswerBox{2\basespace}{Possible answers are stem length, flower count, or number of individuals for density or dispersion.}
\end{questions}
\end{document}
|
|
\documentclass{acm_proc_article-sp}
\title{3D Printing in Lock Picking}
% NOTE FROM SIGITE WEBSITE: "All other submissions (papers, lightning talks, and posters) should be anonymous, with all author information removed."
% \numberofauthors{3}
% \author{
% % First author
% \alignauthor Byron Doyle\\
% \affaddr{Brigham Young University}\\
% \affaddr{School of Technology}\\
% \affaddr{265 Crabtree Building}\\
% \affaddr{Provo, UT 84602}\\
% \email{byrondoyle@gmail.com}
% % Second author
% \alignauthor Colby Goettel\\
% \affaddr{Brigham Young University}\\
% \affaddr{School of Technology}\\
% \affaddr{265 Crabtree Building}\\
% \affaddr{Provo, UT 84602}\\
% \email{colby.goettel@gmail.com}
% % Third author
% \alignauthor Dale Rowe\\
% \affaddr{Brigham Young University}\\
% \affaddr{School of Technology}\\
% \affaddr{265 Crabtree Building}\\
% \affaddr{Provo, UT 84602}\\
% \email{dale\_rowe@byu.edu}
% }
% Force footnotes to stay on the same page and not bleed over. Long footnotes should be placed in the appendix.
\interfootnotelinepenalty=10000
\begin{document}
\maketitle
\begin{abstract}
Physical security analysts have always sought to overcome challenges in security infrastructure using novel approaches and new technology. One of these challenges is preset, mechanical lock mechanisms.\footnote{A locking mechanism that is opened by a predefined key.} 3D printing technology provides a valuable tool for those interested in attacking or bypassing high-security locks. This technology can allow such practitioners to create key blanks or replicas from key data such as physical key measurements or photographic evidence.
\end{abstract}
\category{K.6.5}{Management of Computing and Information Systems}{Security and Protection}[Unauthorized access]
\category{I.4.9}{Image processing and computer vision}{Applications}
\category{J.6}{Computer-Aided Engineering}{Computer-aided manufacturing}[CAM]
\terms{Security}
\keywords{Physical security, penetration testing, physical lock, 3D printing, computer aided design}
\section{Introduction}
Preset, mechanical locks are generally vulnerable to a variety of attacks, but due to the enormous variety of designs and technologies in use today, each lock typically requires a different technique to exploit or bypass. For example, simple pin and wafer locks can be picked with moderate skill, but more complicated locks with sidebar mechanisms make picking impractical without specialized tools and a high degree of skill. Information security practitioners must recognize these factors because physical access controls for sensitive infrastructure are just as important as logical access controls. No amount of digital security is enough if attackers can bypass the physical security and gain direct access to hardware.
Impressioning is a common lock picking technique that allows an attacker to create a copy of the key for the target lock. However, it requires a decent amount of skill and key blanks specific to the target lock type. Another option is to have the key copied, but there are countermeasures in place to make this difficult. These countermeasures include controlling the key blanks and cutting facilities for high-security locks. Additionally, it is inherently difficult to obtain the bitting\footnote{A code that defines the key cuts that will properly open the target lock.} of the original key.
3D printing can make all of these attacks more effective, increasing the risk that high-security locks may be circumvented. To understand how these manufacturing techniques can be used, a few methods of 3D printing will first be discussed, including their benefits and drawbacks. Next, some popular attacks on preset, mechanical lock systems will be examined. Finally, the two approaches will be combined to understand how 3D printing technology can enhance the picking of high-security locks.
\section{3D Printing Techniques}
3D printing, a form of rapid manufacturing, is a broad field with various methods of producing products in a variety of materials. Each of these techniques has pros and cons for penetrating physical security systems. Notable techniques include fused filament fabrication, stereolithography, and direct metal laser sintering.
\subsection{Fused Filament Fabrication}
Fused filament fabrication (FFF)\footnote{Also called fused deposition modeling (FDM).} is one of the most common and least expensive 3D printing techniques. Relatively high quality models are built out of various types of plastic by a machine that lays down traces of material in patterns, building up layers in the $z$-axis \cite{VALAVAARA}.
Fused filament fabrication is one of the largest areas of new material development due to the relatively high adoption rate in the consumer market. Typically, prints are produced in either ABS\footnote{Acrylonitrile butadiene styrene, a common thermoplastic used in the manufacturing of cheap, mass-produced parts.} or PLA,\footnote{Polylactic acid, a biodegradable thermoplastic made out of various renewable resources such as corn starch and sugar cane.} but many materials can be used for a given application. FFF-manufactured parts have different properties depending on the material they are made from, so it is important to choose the right material for the application. ABS plastic offers a good balance of hardness and flexibility, both of which are important when producing parts like thin key blanks. PLA is more rigid, but also more likely to snap rather than bend.
This method is particularly useful because of its availability: a 3D printer with the accuracy required to produce basic key blanks can be purchased for under \$500. However, if that option is not available, there are many online services that offer high-quality and fairly-priced prints using this method.
Of particular interest to this topic may be nylon. Nylon materials specific to 3D printing are designed to offer a wide range of characteristics depending on the temperature they are printed at; this builds off the innate properties of the material. Warmer temperatures will yield an extremely strong bond and hard part. Cooler temperatures result in a more flexible part with weaker bonds. In addition, nylon is extremely abrasion-resistant which is important for working in locks without leaving behind plastic shavings.
\subsection{Stereolithography}
Stereolithography (SLA) is the original 3D printing process, first patented in 1984. This process is characterized by the transformation of a liquid photopolymer into a solid by a laser or other curing light element \cite{HULL}. In most situations, this process produces some of the highest-resolution models available.
The cost of stereolithography equipment is generally more than FFF equipment. Consumer-level printers range from \$2000 to \$3000. These machines provide a build volume\footnote{The total volume in which a 3D printer can construct a part; the limiting factor in the size of any one print job.} large enough for keys while still maintaining a relatively low cost. Online 3D printing services can provide larger, higher quality prints as well which could be valuable when mass-producing parts.
High resolution is paramount in the design of complex keys and key blanks. The higher resolution and bonding method of SLA can produce stronger models, though strength varies with the type of material used. This printing process necessitates the addition of support sprues to the model; these must be taken into account when designing precision-driven projects such as keys, as the connection points between the part and the sprues leave small bits of material that must be carefully filed away.
Materials for SLA machines are more limited in variety than their FFF counterparts. Photopolymers for SLA printing generally resemble ABS after being cured. Unlike thermoplastics, however, photopolymers will continue to cure as long as they are exposed to ultraviolet light. Depending on the material used, parts made with photopolymer liquids can become brittle over time as they over-cure through exposure. Materials research continues to mitigate this effect, but because of the material development cost, the relatively low demand, and the relatively high production cost, liquid photopolymer materials are often quite costly.
\subsection{Direct Metal Laser Sintering}
Direct metal laser sintering (DMLS) is a manufacturing method that creates solid metal parts by exposing and bonding fine metal powders with a high-power laser \cite{DAS}. This process differs from the others in that it can create metal parts which rival or exceed the strength of cast parts and, in some cases, even forged parts. Parts produced with this process are extremely accurate and typically have a very smooth finish.
The benefits of this process may be less obvious as DMLS equipment is not available directly to consumers. However, DMLS is now a common format available through online 3D printing services, making it especially useful when custom one-off tools are required or when a controlled key blank distribution system (such as with many high-security locks) must be circumvented.
Since DMLS parts are typically expensive, it is usually better to use another form of 3D printing to produce early prototypes. Only after the design is finalized should the part design be sent off for DMLS manufacturing. DMLS should be thought of as an option only for high fidelity, high strength prototypes and finished products.
Many different metals can be used with DMLS; the material only needs to have stable thermal properties over its melting point. As the technology advances, more metals are made available for manufacturing. Titanium is a very popular material as it is light, strong, and corrosion resistant. Steel and brass are also popular, but are more often offered online as casts from a 3D printed wax or sand model. These parts typically offer good strength, but their surface finishes are not as accurate.
\section{Popular Attacks on Preset Mechanical Lock Systems}
\subsection{Lock Picking}
Lock picking is the most famous attack on preset mechanical lock systems, though not always the most effective. A small torque is applied to the keyway or cylinder of the lock, and the pins of the lock are pushed up slowly until the pin tumblers sitting on top of the pins engage the shear line between the lock cylinder and the lock body \cite{TOOOL1}.
\begin{figure}[htb]
\centering
\includegraphics{lockpicking}
\caption{Side view of the lock picking process \cite{TOOOL1}.}
\label{lockpicking}
\end{figure}
This attack can also be used, with the appropriate tools, on wafer locks, tube locks, and others. Wafer locks are opened in much the same way as pin tumbler locks, but with one key difference: the shear line is wide and the wafers all engage a single large slot. This allows for a large amount of play in the workings of the lock, making picking much easier. Tube locks are easier to pick for a different reason: they only require a special tool, conveniently called a ``tube lock pick.'' The tool is engaged with the lock, and torque is applied on and off while the tool is gently pressed down. The binding of the pins against the shear line of the lock impressions the bitting of the tube lock into the lock pick.
Raking is performed by dragging a tool lightly across the bottom of the pins while applying light torque to the lock cylinder. As the tool bumps against the pins, they rebound against the pin tumblers which should rise above the shear line and set. This can be done very quickly to open simple locks. This technique varies greatly depending on the tools used, the target lock, and the attacker's preferences and skill. Many different pick shapes can be used for raking, but typically half-diamond, hook, and snake shapes are preferred. The target lock's geometry affects how raking is performed and if it is useful. This is typically a function of the pick shape in contrast to the keyway\footnote{The cut in the lock cylinder through which the body of the key passes.} shape.
Scrubbing is a variation of raking: instead of dragging a tool across the pins, a wide, flat tool is used to push groups of pins up and down in a tooth brushing motion. In this way the attacker is effectively attempting to pick multiple pins at once. This technique is extremely useful when raking fails because of restrictive keyway shapes or when the target lock's pins are very close together. Scrubbing may also be preferred if the spring tension on the pins is high~--- a common trait of padlocks and other outdoor locks. Scrubbing is sometimes used instead of raking because of preference; however, both raking and scrubbing are equally useful.
Bumping is a variation of the raking technique using a specialized key blank, cut just past the deepest pin depth on all pins with ridges in-between. This tool is called a bump key, or 999 key.\footnote{So-called because the key is cut to a modified all-nines bitting.} The bumping technique is performed by pulling the bump key slightly out of the lock, applying light torque, and then lightly bumping the key with a wooden hammer or other apparatus in order to strike all the pins at once. This forces all the pin tumblers above the shear line at once, opening the lock.
Raking and scrubbing generally fall together as the two techniques experienced lock pickers use to quickly open easy-to-bypass locks. These techniques are most useful, however, when combined with skillful single-pin picking: problematic pins can be set, and then raking or scrubbing can be used to set the rest. Bumping expands on raking by using specialized key blanks made specifically to set all the pins in the target lock at once. With the proper tools and some practice, lock bumping can open hard-to-pick locks.
\subsection{Impressioning}
Physical decoding of a target lock is performed through impressioning. A specially-prepared key blank is used to make a copy of the key for the target lock. This key reflects the lock's bitting. The blank is placed in the lock, torque is applied, and the key is moved up and down against the pins; any pin at the improper height will be bound against the sides of the lock body and cylinder. This binding friction slightly marks the pins on the blank. The key is then removed from the lock, inspected for marks, and cut with a file where they are found. Cuts are made one bit-depth at a time, and the process is repeated. This can be done for all pins in the lock at once under normal circumstances. If the attack is successful, the attacker will end up with a working key. The only caveat is that the attacker must apply the proper torque and force on the pins: too little torque or too much force and the pins will slip, causing missed bittings; too much torque or too little force and the pins will bind in the wrong places, causing false positives. Either of these mistakes will damage the target lock.
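The feedback loop described above is essentially iterative refinement, and its structure can be made concrete with a toy simulation. All numbers here (the depth codes and the one-increment-per-pass filing rule) are hypothetical illustration values, not measurements of any real lock.

```python
# Toy simulation of the impressioning feedback loop: a blank starts with
# every cut at depth 0, and each insert/mark/file cycle deepens every
# marked position by one bit-depth until no pin marks the blank.

def impression(target_bitting, max_depth=9):
    """Iteratively file a blank until its cuts match the target bitting.

    Returns the finished bitting and the number of mark/file cycles.
    """
    blank = [0] * len(target_bitting)
    rounds = 0
    while blank != target_bitting:
        # A pin marks the blank wherever its cut is still too shallow.
        marks = [i for i, (cut, tgt) in enumerate(zip(blank, target_bitting))
                 if cut < tgt]
        for i in marks:
            # File one bit-depth at a time, as described in the text.
            blank[i] = min(blank[i] + 1, max_depth)
        rounds += 1
    return blank, rounds

key, passes = impression([3, 5, 1, 4, 2])
print(key)     # the finished key matches the target bitting
print(passes)  # 5 cycles -- bounded by the deepest cut
```

Because every marked position is filed in each pass, the number of cycles is bounded by the deepest cut in the bitting, which matches the intuition that impressioning a lock with shallow cuts is faster.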
Attackers should use caution with this technique because improper use can lead to positive indications of an attack on the lock. In some cases, locks may degrade quickly and seize or bind because many shear forces are being applied to the inner bearing surfaces of the lock in an unusual manner. If discretion is required, care should also be taken to thoroughly clean the filed blank before re-inserting it into the target lock. Leaving loose metal shavings inside the lock is a fairly obvious giveaway, as filings from regular use typically differ from those resulting from filing a key blank. The permanence of impressioning is also very useful for quiet attacks: an attacker can slowly impression a lock over a space of days or weeks by taking one sample at a time and then leaving, only to return later.
\subsection{Copying}
The biggest difficulty when making a copy of a key is exactly duplicating the key's bitting. This can be done via certain lock picking techniques, such as using a tube lock pick, impressioning, using a pin lock decoder, or by gaining physical access to the key in order to take measurements or a mold. This is difficult because all of these techniques can be quite intrusive.
One technique for decoding keys from a distance was developed by Laxton, Wang, and Savage \cite{LAXTON}. It involves taking photos of a key from a distance and then using computer vision algorithms to decode the bitting. This proved equally successful up close and at a distance using long-focus lenses. This technique removes the attacker from the immediate vicinity of the key owner and is much less intrusive. More powerful optics could be used to not only capture key information without the risk of notifying the target, but also to keep the attacker completely out of view. This entirely undermines the security benefit of sole key ownership for common locks because key copying facilities are widely available. Key blanks are freely available and copies can often be manufactured without providing the original key.
Copies of keys from impressioning are also extremely useful for providing the bitting for a lock. Because they must open the lock for the attack to be successful, the original impressioned key can be regarded as the lock's decoded bitting. This can then be used to create copies of the target key that appear genuine in order to facilitate a more successful attack. This is especially true of keys for high-security locks; typically, high-security blanks are only provided by the lock manufacturer, resulting in every key looking similar. Depending on the manufacturing techniques used, impressioned keys can be copied with the minutest details intact, producing incredibly realistic functional facsimiles.
In order to fool people, attackers need to consider that although an entirely new key is produced during the copying process, it probably should not look brand new. High-security keys, however, have more exploitable features, so care must be taken when making copies to replicate not only the working parts of the key but the aesthetic portions as well. If photos of a target key are taken, an exact replica can be made: the replica can be weathered to match the original, serial numbers can be matched, and logos can be copied. These features add up to a key that has history in a social context; it can be used in social engineering attacks as well as to open doors.
\section{Augmenting Attacks with 3D Printing}
The attacks described above can all be augmented by 3D printing in a variety of ways, depending on the printing processes and materials used. One notable use of the technology is the ease of making specialized tooling to attack high-security locks such as the ASSA Twin Series. These locks have a coded sidebar preventing the lock from turning independently of the top pins. Blanks, spare parts, and discarded cylinders are all carefully controlled.
\begin{figure}[htb]
\centering
\includegraphics{spool}
\caption{Internal diagram of ASSA Twin Series lock \cite{TOOOL2}.}
\label{spool}
\end{figure}
\subsection{Lock picking}
3D printing allows the cheap manufacturing of specialized tools for lock picking. To facilitate a lock picking attack against a Twin Series lock, a special torsion wrench, with the correct sidebar bitting built in, could be printed. The top bitting may be unknown as it is unique to each lock, but sidebar bittings are set by region. Thus any sample key from the target site may lead to a more complete exploitation of the security system.
In the case of torsion wrenches or other force-applying tools, a stronger-bonding polymer process (like SLA) or a metal printing process (like DMLS) could be more useful than the cheaper fused filament fabrication process found in consumer-level 3D printers. The designer must decide how much torque is required for the application and how that torque will be applied through the tool. The designer can then choose a manufacturing process with this information.
For permanent or reusable tool manufacturing it is useful to design and implement a prototype version in a plastic format because it is cheaper and more readily producible. These prototypes can then be evaluated and iterated upon as needed. To provide longevity and strength in the tool, a final version can be made using a metal printing process. This might be especially useful for creating specialized tools that are not specific to a certain lock bitting, but are specific to an application; for example, a bypass tool for a lock with a common vulnerability found during the picking process.
Bump key manufacturing benefits greatly from this process as well, especially in the case of high-security locks. Bump keys are normally made out of stock key blanks; however, manufacturer controls limit this process in the case of high-security locks. To circumvent this, a complete bump key can be designed, tested, and then printed in metal. Since bump keys are only specific to the type of lock, and not the bitting, such tools remain useful even after the original target lock has been exploited.
\subsection{Impressioning}
The applications of 3D printing for impressioning are a natural extension of the applications in lock picking. ASSA is deliberately protective of key blank distribution and key cutting for their locks, as this control increases security. However, with 3D printing, a blank can be manufactured for the lock based on any cut key sample and rough specifications of the lock; both of these are easily acquired via proper reconnaissance, or photography and computer vision.
In practice, creating blanks for high-security locks using low-cost 3D printing tools is very easy. Figure~\ref{model} shows a 3D model created from physical measurements taken from samples of ASSA Twin Series keys; this model has been used to successfully print key blanks on a consumer-level FFF 3D printer. These key blanks were printed in PLA, a stiff but malleable plastic. Impressioning with plastic blanks manufactured this way should work quite well if used in conjunction with a torsion wrench to apply torque to the lock after the sidebar bitting is modeled into the blank.
\begin{figure}[htb]
\centering
\includegraphics{model}
\caption{3D model of an ASSA Twin Series key blank.}
\label{model}
\end{figure}
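Translating a decoded bitting into the cut geometry such a model needs is straightforward parametric arithmetic. The sketch below uses hypothetical spacing and depth values for illustration; they are not ASSA specifications.

```python
# Minimal sketch of turning a decoded bitting into the 2D cutting profile
# a parametric CAD key model needs. All dimensions are invented
# illustration values, not any manufacturer's key specification.

FIRST_CUT = 4.0    # mm from the shoulder to the first cut centre
SPACING   = 4.0    # mm between adjacent cut centres
BLADE_H   = 8.0    # mm uncut blade height
DEPTH_INC = 0.6    # mm of material removed per bitting step

def cut_profile(bitting):
    """Return the (x, y) centre point of each cut along the blade."""
    return [(FIRST_CUT + i * SPACING, BLADE_H - code * DEPTH_INC)
            for i, code in enumerate(bitting)]

for x, y in cut_profile([3, 5, 1, 4, 2]):
    print(f"cut at x={x:.1f} mm, remaining blade height {y:.2f} mm")
```

Because the geometry is fully determined by a handful of parameters plus the bitting, one CAD model can serve an entire lock family: only the bitting list changes between targets.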
In some high-security locks it may be possible to impression the sidebar when the top-bitting is known but the sidebar bitting is not (for example, if a picture of the back of an original key is used to decode the lock). In this case, a key with a coded top bitting and a blank sidebar would be printed, and impressioning would be performed as usual. Depending on the lock's sidebar mechanism this may not work~--- the ASSA Twin Series uses a sidebar mechanism where this is not possible. Other locks with sidebar mechanisms, such as Medeco high-security locks, may have more success.
If correctly designed, blanks can be used repeatedly for the same type of lock without care for specific fitment within the same lock model. Once such a design is achieved for a high-security lock, the blank can be publicly released. Public release of blank models removes the security advantages of limiting blank distribution. Easy access to 3D printing facilities ensures that attackers can easily access blanks. Blank distribution limitations may actually become a liability as the availability of printed blanks could exceed the availability of legitimate ones, putting control of distribution firmly out of the hands of the manufacturer.
\subsection{Copying}
After a lock or its key is decoded by impressioning, computer vision, or other means, a true copy of the key can be made via 3D printing as well. At this point, a metal printing process should be used if the goal is to have a permanent key; otherwise, a copy could be printed and torque applied with a torsion wrench to open the lock. In both cases this implies a persistent threat to the preset system: even if the copy is recovered by the affected organization, another can be made without need for special tooling or blanks, just a CAD file.
The usefulness of 3D printing in key copying is perhaps the most obvious, but warrants extra consideration. When fine detail is needed, an exact duplicate can be made, down to the numbering, logos, and simulated wear, if properly modeled. During a penetration test this may be extremely useful as a social element: in addition to the key working, it can also be shown to personnel as proof that the attacker is supposed to be there. For example, a copy of a key can be made, worn down, and then taken to a facilities office under the pretense that it no longer works. The key can then be traded for a new, properly serialized key from the organization~--- the forged key is no longer in circulation, the attacker has a real key, and the original key will stay in circulation until it is turned in or audited.
\section{Conclusion}
The capabilities of 3D printing technology have posed a persistent threat to organizations' physical infrastructures for some time. This threat arises from the clever combination of 3D printing with lock picking, impressioning, copying, and other techniques. Essentially, the supply chain control paradigm for high-security physical locks will no longer stop a determined attacker from gaining physical access to what those locks are protecting. For this reason, new security models need to be put in place for physical security, and lock manufacturers need to stop relying on outdated supply chain control and innovate.
The methods presented here can be used to create custom tools to address even the most obscure physical security platforms. A clever attacker no longer has to worry about security through obscurity in the physical space and can fill their toolbox with items meant specifically to address the physical security weaknesses of a targeted organization. These tools afford the same flexibility to the attacker in the physical space as they already have in the digital space. A physical key can be copied and used without notifying the original owner, compromised in the same manner a digital key would be. A physical system can be reverse-engineered without leaving evidence behind, quietly probing a secure system over a long period of time. Security access controls can be bypassed entirely, as convincing props for social attacks can be produced from collected data alone.
3D printing is advanced enough that complete copies of legitimate keys can be made, akin to copying organizational IDs or badges. These keys can be used in social attacks, as well as to open doors and gain access to physical spaces. For security professionals, this means looking at physical security threats in a new way, as attackers are not necessarily impeded by physical access restrictions as they once were. With a little time and raw material, security professionals now have a toolset that gives them the ability to produce what was once a trusted credential. A serial number and an earnest demeanor are not as dependable as they once were.
The rise of cryptographically-keyed, electronically-controlled physical locking systems provides an alternative that avoids many of the vulnerabilities presented here. Increasing adoption of these systems continues to drive down their cost and improves secure operation best practices. As with any access control system, the responsibility falls on the adopting party to ensure that the system is sound both in terms of overall secure implementation and the system's individual parts. This includes finding alternatives to standard, physical keys (e.g., smart cards and smart card readers), the supporting server infrastructure, and the related locking mechanisms themselves (e.g., magnetic or electromechanical locks). With careful planning, design, testing, and deployment, the advantage can be tilted back in favor of active defenders so long as they are willing to consider the security of the system as a whole and not only its parts.
\bibliographystyle{acm_proc_article-sp}
\bibliography{references}
\nocite{*}
\balancecolumns
\end{document}
\documentclass[xcolor=dvipsnames,beamer,unknownkeysallowed]{beamer} %handout,notes=show
\usepackage{textcomp}
\usepackage[utf8]{inputenc}
% \usepackage{default}
\usepackage{graphicx}
% \usepackage[pdftex]{hyperref}
\usepackage{url}
\usepackage{amsmath}
\usepackage{xcolor}
% frames have to be fragile
\newif\ifnotes
\input{tmpnotessettings}
%\notestrue
\ifnotes
%\setbeamertemplate{note page}[plain]
\setbeamertemplate{note page}[compress]
\setbeamerfont{note page}{size=\large}
\setbeameroption{show only notes}
%\setbeameroption{show notes}
\usepackage{pgfpages}
\pgfpagesuselayout{2 on 1}[a4paper,border shrink=5mm]%
\else
%\setbeameroption{hide notes}
\fi
%\notesfalse
% typewriter font settings
%\usepackage{courier}
%\usepackage{lmodern}
%\renewcommand*\ttdefault{txtt}
\DeclareFontShape{OT1}{cmtt}{bx}{n}{<5><6><7><8><9><10><10.95><12><14.4><17.28><20.74><24.88>cmttb10}{}
% \usepackage{verbatim}
\usepackage[absolute,overlay]{textpos}
\usepackage{listings}
% \usepackage{courier}
\definecolor{grey}{RGB}{70,70,70}
\definecolor{green}{RGB}{0,255,0}
\definecolor{red}{RGB}{202,53,53}
\definecolor{lightGrey}{RGB}{250,250,250}
\definecolor{darkGrey}{RGB}{50,50,50}
\usepackage{color}
\definecolor{lightgray}{rgb}{.9,.9,.9}
\definecolor{darkgray}{rgb}{.4,.4,.4}
\definecolor{purple}{rgb}{0.65, 0.12, 0.82}
%\usetheme{Boadilla}
\usetheme{Goettingen}
% \usetheme{Montpellier}
% \usetheme{Warsaw}
% \usetheme{Madrid}
% \usetheme{Szeged}
% \useoutertheme{infolines}
% \usecolortheme[named=MidnightBlue]{structure}
%\usecolortheme[named=PineGreen]{structure}
\usecolortheme[named=NavyBlue]{structure}
\setbeamertemplate{navigation symbols}{}
\title[IWMI - UoM.LK]
{\ \\
\ \\
An Open Source Hardware \& Software\\
Online Grid of Weather Stations\\
For Sri Lanka}
%\subtitle{SVO\v{C}}
%\pdforstring{}{}
\author[Chemin, Bandara]
{\vspace{30pt}\\
Yann Chemin$^{1}$, Niroshan Bandara$^{2,3}$}
\institute[IWMI - U of Moratuwa]
{$^1$International Consulting Scientist, \textcolor{orange}{yann.chemin@gmail.com}\\
\vspace{5pt}
$^2$University of Moratuwa - Town and Country Planning Department\\
\vspace{5pt}
$^3$Osaka-City University, \textcolor{orange}{nsanj88@gmail.com}\\
\begin{center}
% \includegraphics[width=4cm]{foss4g2013logo_s}
\end{center}
}
\date{\tiny May 15th, 2015}
%\AtBeginSection[]{\begin{frame}\frametitle{Contents}%
%\tableofcontents[currentsection ]\end{frame}}
%\AtBeginSubsection[]
%{
% \begin{frame}<beamer>
% \frametitle{Contents}
% \tableofcontents[currentsection,currentsubsection]
% \end{frame}
%}
\setbeamercovered{transparent}
\hypersetup{%
pdfauthor={Yann Chemin},%
pdfsubject={Presentation},%
pdfkeywords={FOSS4G, OSHW, RaspberryPI, ET, Raingauge, Road Condition, OSGEO}
}
\input{nastavenilst}
\newcommand{\overovaciref}[1]{{\scriptsize(\ref{#1})}}
\usepackage{tipa}
\newcommand{\pron}[2]{#1 [#2]}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% TOC frame setup
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\usepackage{multicol}
\colorlet{mycolor}{orange!80!black}% change this color to suit your needs
\AtBeginSection[]{
\setbeamercolor{section in toc shaded}{use=structure,fg=structure.fg}
\setbeamercolor{section in toc}{fg=mycolor}
\setbeamercolor{subsection in toc shaded}{fg=black}
\setbeamercolor{subsection in toc}{fg=mycolor}
\frame<beamer>{\begin{multicols}{2}
\frametitle{Outline}
\setcounter{tocdepth}{2}
\tableofcontents[currentsection,subsections]
\end{multicols}
}
}
\setbeamercolor{author in head/foot}{fg=white}
\setbeamercolor{title in head/foot}{fg=white}
\setbeamercolor{section in head/foot}{fg=mycolor}
\setbeamertemplate{section in head/foot shaded}{\color{white!70!black}\insertsectionhead}
\setbeamercolor{subsection in head/foot}{fg=mycolor}
\setbeamertemplate{subsection in head/foot shaded}{\color{white!70!black}\insertsubsectionhead}
\setbeamercolor{frametitle}{fg=white}
\setbeamercolor{framesubtitle}{fg=white}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}
\maketitle
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Contents}
\begin{multicols}{2}
\setcounter{tocdepth}{2}
\tableofcontents
\end{multicols}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{CGIAR}
%
%Consultative Group for International Agricultural Research\\
%Ratified on October 2nd, 2013\\
%Full Open Access \& Open Source\\
%Research data and publication
%
%\begin{columns}
%\column{0.5\textwidth}
%\begin{center}
%\begin{itemize}
% \item International Public Goods
% \item Public Domain
% \item Publications Open Access
% \item FOSS models and algorithms
%\end{itemize}
%\end{center}
%
%\column{0.5\textwidth}
%\begin{center}
% \includegraphics[width=1.5cm]{CGIAR_Green}
% \hspace{5mm}
% \includegraphics[width=2cm]{WLE_and_partners-vertical_logo_strip.png}
%\end{center}
%\end{columns}
%\vspace{5mm} 2018: all 15 CG centres, already FOSS4G Lab:
%(\href{http://gsl.worldagroforestry.org}{gsl.worldagroforestry.org})
%\end{frame}
\section{Introduction}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Overview}
For agricultural and hazard monitoring, WMO-level accuracy of weather data is not needed. We introduce a low-cost weather station based on Arduino to extend the national network of the Meteorological Department in Sri Lanka.
\vspace{5mm}
\begin{itemize}
\item Low-cost, locally-made, OSHW weather station
\item National Distributed Monitoring Grid
\item Online Aggregation
\item Mobile/Web Apps
\end{itemize}
\end{frame}
%\section{Early Prototyping}
%\subsection{Rationale}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{Rationale}
%
%\begin{center}
% \includegraphics[width=10cm]{MWS_v1_deltaT_rationale_0}
%\end{center}
%
%\end{frame}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{Rationale}
%
%\begin{center}
% \includegraphics[width=10cm]{MWS_v1_deltaT_rationale_1}
%\end{center}
%
%\end{frame}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{Rationale}
%
%\begin{center}
% \includegraphics[width=10cm]{MWS_v1_deltaT_rationale_2}
%\end{center}
%
%\end{frame}
%
%\subsection{ $\delta T$ Tower}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{Open Source Hardware Micro Weather Station v0}
%
%\textbf{Micro Weather Station v0:}\\
%Temperature Profiler for ET models calibration
%\vspace{5mm}
%\begin{itemize}
% \item Arduino Pro 3.3V
% \item Water-proof Digital Temperature Sensors
% \item Li-ion Battery + Solar Panel
% \item OpenLog data logger with SD card
% \item Cost $<$ 100 USD
%\end{itemize}
%\begin{flushright}
% \includegraphics[width=5cm]{MWS}
% \hspace{5mm}
% \includegraphics[width=2cm]{MWS_radshield}
%\end{flushright}
%\end{frame}
%
%\subsection{ $\delta T$ parts}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{Open Source Hardware Micro Weather Station v0}
%
%\begin{center}
%OpenLog + Arduino Pro\\
%\vspace{5mm}
%\includegraphics[width=5cm]{Arduino_OpenLog}
%\end{center}
%
%\begin{flushright}
% \includegraphics[width=5cm]{MWS}
% \hspace{5mm}
% \includegraphics[width=2cm]{MWS_radshield}
%\end{flushright}
%\end{frame}
%
%\subsection{ $\delta T$ Setup}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{MWS Setup}
%
%\begin{center}
% \includegraphics[width=10cm]{MWS_v1_deltaT_sketch_hot}
%\end{center}
%
%\end{frame}
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\begin{frame}[fragile]{ $\delta T$ Setup}
%
%\begin{center}
% \includegraphics[width=10cm]{MWS_v1_deltaT_sketch_cold}
%\end{center}
%
%\end{frame}
\section{MWS Tower}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Open Source Hardware Micro Weather Station v1}
\textbf{Micro Weather Station v1:}\\
Meteorological support for the Irrigation Department of Sri Lanka, enabling faster management of rural reservoir spilling during high-intensity rainfall.
\vspace{5mm}
\begin{itemize}
\item Lakduino (\href{http://www.lakduino.com}{\textit{www.lakduino.com}})
\item Weather Sensor Board
\item GPRS Modem Board
\item Data logger with 16~GB micro-SD card
\item Motorcycle battery + Solar Panel
\end{itemize}
\begin{flushright}
\includegraphics[width=5cm]{MWSv1}
\hspace{5mm}
% \includegraphics[width=2cm]{MWS_radshield}
\end{flushright}
\end{frame}
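The wind sensors and the tipping-bucket rain gauge deliver raw pulse counts that the firmware must convert into physical units. A minimal sketch of that conversion, in Python for readability rather than Arduino C, assuming typical hobbyist-meter calibration constants (not this station's measured values):

```python
# Assumed calibration constants (typical for hobbyist weather meters):
TIP_MM = 0.2794          # rainfall depth (mm) per tipping-bucket tip
WIND_KMH_PER_HZ = 2.4    # wind speed (km/h) per anemometer pulse per second

def rainfall_mm(tips: int) -> float:
    """Accumulated rainfall depth for a number of bucket tips."""
    return tips * TIP_MM

def wind_speed_kmh(pulses: int, seconds: float) -> float:
    """Average wind speed over a sampling window of the given length."""
    return (pulses / seconds) * WIND_KMH_PER_HZ
```

On the station itself, the pulse counts would come from interrupt counters; the arithmetic is the same.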
\subsection{Power Supply}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Open Source Hardware Micro Weather Station v1}
\begin{center}
\includegraphics[width=4.5cm]{MWSv1_power}
\end{center}
\end{frame}
\subsection{Wind Sensors}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Wind Sensors}
\begin{center}
\includegraphics[width=7cm]{MWSv1_sensors}
\end{center}
\end{frame}
\subsection{Raingauge 1}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Raingauge 1}
\begin{center}
\includegraphics[width=6cm]{MWSv1_rain1}\\
\vspace{5mm}
3D view of a Chinese-made rain gauge\\
from scion.lk
\end{center}
\end{frame}
\subsection{Raingauge 2}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Raingauge 2}
\begin{center}
\includegraphics[width=4cm]{oshw_raingauge}\\
\vspace{5mm}
Public Domain, locally-designed rain gauge\\
\url{https://grabcad.com/library/rain-gauge-design-1}\\
from scion.lk
\end{center}
\end{frame}
\subsection{Electronics}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Electronics}
\begin{center}
\includegraphics[width=10cm]{MWSv1_annotated}
\end{center}
\end{frame}
\subsection{GPRS}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{GPRS}
\begin{center}
\includegraphics[height=5cm]{aptinex_GPRS_0}\\
GPRS shield now designed by a local SME\\
\vspace{2mm}
\includegraphics[height=1cm]{aptinex}
\end{center}
\end{frame}
\subsection{Weather Shield}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Weather Shield}
\begin{center}
\includegraphics[width=5.5cm]{WeatherShield}\\
\vspace{5mm}
Made in-country by a local SME, A\&T Labs.\\
Picture credit: Neil Palmer (IWMI)
\end{center}
\end{frame}
\section{Initial work}
\subsection{Irrigation Dept.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Set up}
\begin{center}
\includegraphics[width=5.5cm]{IMG_20140902_151739}\\
\vspace{5mm}
Picture credit: Niroshan Bandara (UoM)
\end{center}
\end{frame}
\subsection{Met.Dept.}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Meteorological Department of Sri Lanka}
\begin{center}
\includegraphics[width=4.5cm]{LKmetdept}
\hspace{5mm}
\includegraphics[width=4.5cm]{LKmetdept1}
\end{center}
COSTI (\href{http://www.costi.gov.lk}{www.costi.gov.lk}) is catalysing the proposal for the National Climate Observatory.\\
Test in the Met. Dept. in Colombo (on-going).
\end{frame}
\section{Adoption}
\subsection{LRWHF}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Lanka Rainwater Harvesting Forum}
\begin{center}
\includegraphics[height=2cm]{lrwhf_logo}
\hspace{2mm}
\includegraphics[width=4.5cm]{lrwhf1}
\hspace{5mm}
\includegraphics[width=4.5cm]{lrwhf3}\\
\end{center}
LRWHF built 10 units (+5 units for spare parts) under their main USAID project, for monitoring drinking-rainwater assistance to villages stricken by CKDu.
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Lanka Rainwater Harvesting Forum}
\begin{center}
\includegraphics[width=4.5cm]{tanuja1}
\hspace{5mm}
\includegraphics[width=4.5cm]{tanuja2}\\
\end{center}
\begin{center}
4 deployed with training in May 2015.\\
6 more in July, across the country.\\
Operation/maintenance trainings, schools \& outreach.
\end{center}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Lanka Rainwater Harvesting Forum}
\begin{center}
\includegraphics[width=4.7cm]{2}
\hspace{2mm}
\includegraphics[width=4.7cm]{3}\\
\end{center}
\begin{center}
Operation/maintenance training in Monaragala\\
for school students and teachers, who went back with a unit.
\end{center}
\end{frame}
\subsection{SMEs}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Electronic SMEs}
Electronic SMEs and start-ups were engaged from the beginning of our search for locally available components and parts.
\begin{center}
\includegraphics[width=8cm]{LK_SMEs}
\end{center}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Electronic SMEs}
One start-up developed its own version and won a national innovation prize for it.
\begin{center}
\includegraphics[width=10cm]{thilina}
\end{center}
\end{frame}
\section{Media}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Media}
Local and international media coverage helped our business partners' marketing outlook and growth.
\begin{center}
\includegraphics[width=8cm]{MWS_press}
\end{center}
\end{frame}
\section{Future}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{On-going discussions}
\begin{center}
\includegraphics[height=3cm]{suparco_logo}
\hspace{1mm}
\includegraphics[height=3cm]{wwf_logo}
\hspace{1mm}
\includegraphics[height=3cm]{undp_logo} \\
\vspace{5mm}
\includegraphics[height=3cm]{icrc_logo} \\
\vspace{3mm}
\includegraphics[height=1cm]{wb_logo}
\end{center}
\end{frame}
\section{Conclusions}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Conclusions}
\begin{block}{An Open Source Hardware/Software\\ Low-Cost Weather Station}
\begin{itemize}
\item {\bf Arduino:} Micro-controller
\item {\bf Sensors:} Rain, wind, temperature, humidity
\item {\bf Local:} 90+\% made in the country of use by SMEs
\item {\bf Local:} Maintenance \& spare parts with local SMEs
\item {\bf Local:} Local shop sells rural solar power kit
\item {\bf Local:} Local blacksmith for steel work
\end{itemize}
We work with a rural tank manager from the Irrigation Department on real-time rain alerts.\\The Red Cross is evaluating the concept for a project in Togo; other countries are evaluating it for other applications.
\end{block}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Thank You}
\begin{center}
\includegraphics[height=6cm]{WeatherShieldSmall}
\end{center}
\begin{flushright}
\includegraphics[height=0.9cm]{iwmi}
\hspace{5mm}
\includegraphics[height=1cm]{uoMoratuwa}
\hspace{5mm}
\includegraphics[height=1cm]{uoMoratuwa_foa}
\end{flushright}
\end{frame}
\end{document}
\chapter{Gnarled Woods}
\index{Gnarled Woods} \index{Gnarled} \index{Woods}
\startMonsterName
Assassin Vine \CMTags{Solitary, Stealthy, Gibbous}
\stopMonsterName
Thorns (d10 damage 1 piercing) 15 HP 1 Armor
\CMTags{Close, Reach, Messy}
\startMonsterQualities
{\bf Special Qualities:} Plant
\stopMonsterQualities
\startMonsterDescription
Among the animals there exists a clear division ‘tween hunter and hunted. All it takes is a glance to know—by fangs and glowing eyes or claws or venomous sting—which of the creatures of this world are meant to kill and which stand to be killed. Such a split, if you have the eyes to see it, cuts the world of leaves and flowers in twain, as well. Druids in their forest circles know it. Rangers, too, might spot such a plant before it’s too late. Lay folk, though, they wander where they oughtn’t—paths into the deep woods covered in creeping vines and with a snap, these hungry ropes pull tight, dragging their meaty prey into the underbrush. Mind your feet, traveller. {\em Instinct} : To grow
\stopMonsterDescription
\startitemize[1,packed]
\item Shoot forth new growth
\item Attack the unwary
\stopitemize
\startMonsterName
Blink Dog \CMTags{Group, Small, Magical, Organized}
\stopMonsterName
Bite (d8 damage) 6 HP 4 Armor
\CMTags{Close}
\startMonsterQualities
{\bf Special Qualities:} Illusion
\stopMonsterQualities
\startMonsterDescription
Now you see it, now you don’t. Hounds once owned by a sorcerer lord and imbued with a kind of illusory cloak, they escaped into the woods around his lair and began to breed with wolves and wild dogs of the forest. You can spot them, if you’re lucky, by the glittering silver of their coats and their strange, ululating howls. They have a remarkable talent for being not-quite where they appear to be and use it to take down prey much stronger than themselves. If you find yourself facing a pack of blink dogs you might well close your eyes and fight. You’ll have an easier time when not betrayed by your natural sight. By such sorceries are the natural places of the world polluted with unnatural things. {\em Instinct} : To hunt
\stopMonsterDescription
\startitemize[1,packed]
\item Give the appearance of being somewhere they're not
\item Summon the pack
\item Move with amazing speed
\stopitemize
\startMonsterName
Centaur \CMTags{Horde, Large, Organized, Intelligent}
\stopMonsterName
Bow (d6+2 damage 1 piercing) 11 HP 1 Armor
\CMTags{Close, Reach, Near}
\startMonsterQualities
{\bf Special Qualities:} Half-horse, Half-man
\stopMonsterQualities
\startMonsterDescription
It will be a gathering of clans unseen in this age. Call Stormhoof and Brightspear. Summon Whitemane and Ironflanks. Sound the horn and we shall begin our meeting—we shall speak the words and bind our people together. Too long have the men cut the ancient trees for their ships. The elves are weak and cowardly, friend to these mannish slime. It will be a cleansing fire from the darkest woods. Raise the red banner of war! Today we strike back against these apes and retake what is ours! {\em Instinct} : To rage
\stopMonsterDescription
\startitemize[1,packed]
\item Overrun them
\item Move with unrelenting speed
\stopitemize
\startMonsterName
Chaos Ooze \CMTags{Solitary, Planar, Terrifying, Gibbous}
\stopMonsterName
Warping touch (d10 damage ignores armor) 23 HP 1 Armor
\CMTags{Close}
\startMonsterQualities
{\bf Special Qualities:} Ooze, Fragments of other planes embedded in it
\stopMonsterQualities
\startMonsterDescription
The barrier between Dungeon World and the Elemental Planes is not, as you might hope, a wall of stone. It’s much more porous. Thin-like, with holes. Places where the civil races do not often tread can sometimes, how to put this, spring a leak. Like a dam come just a little loose. Bits and pieces of the chaos spill out. Sometimes, they’ll congeal like an egg on a pan—that’s where we get the material for many of the Guild’s magical trinkets. Useful, right? Sometimes, though, it squirms and squishes around a bit and stays that way, warping all it touches into some other, strange form. Chaos begets chaos, and it grows. {\em Instinct} : To change
\stopMonsterDescription
\startitemize[1,packed]
\item Cause a change in appearance or substance
\item Briefly bridge the planes
\stopitemize
\startMonsterName
Cockatrice \CMTags{Group, Small, Hoarder}
\stopMonsterName
Peck (d8 damage) 6 HP 1 Armor
\CMTags{Close}
\startMonsterQualities
{\bf Special Qualities:} Stone touch
\stopMonsterQualities
\startMonsterDescription
I ain’t ever seen such a thing, sir. Rodrick thought it a chicken, maybe. Poor Rodrick. I figured it to be a lizard of a sort, though he was right—it had a beak and grey feathers like a chicken. Right, well, see, we found it in the woods, in a nest at the foot of a tree while we were out with the sow. Looking for mushrooms, sir. I told Rodrick we were—yes, sir, right sir, the bird—see, it was glaring at Rodrick and he tried to scare it off with a stick to steal the eggs but the thing pecked his hand. Quick it was, too. I tried to get him away but he just got slower and slower and…yes, as you see him now, sir. All frozen up like when we left the dog out overnight in winter two years back. Poor, stupid Rodrick. Weren’t no bird nor lizard, were it, sir? {\em Instinct} : To defend the nest
\stopMonsterDescription
\startitemize[1,packed]
\item Start a slow transformation to stone
\stopitemize
\startMonsterName
Dryad \CMTags{Solitary, Magical, Intelligent, Devious, Gibbous}
\stopMonsterName
Crushing vines (w[2d10] damage) 23 HP 5 Armor
\CMTags{Close}
\startMonsterQualities
{\bf Special Qualities:} Plant
\stopMonsterQualities
\startMonsterDescription
More beautiful by far than any man or woman born in the civil realms. To gaze upon one is to fall in love. Deep and punishing, too. Thing is, they don’t love. Not the fleshy folk who often find them, though. Their love is a primal thing, married to the woods—to a great oak that serves as home and mother and sacred place to them. It’s a curse to see one, too, they’ll never love you back. No matter what you do. No matter how you pledge yourself to them, they’ll always spurn you. If ever their oak comes to harm, you’ve not only the dryad’s wrath to contend with, but in every nearby village there’s a score of men with a secret longing in their heart, ready to murder you where you sleep for just a smile from such a creature. {\em Instinct} : To love nature passionately
\stopMonsterDescription
\startitemize[1,packed]
\item Entice a mortal
\item Merge into a tree
\item Turn nature against them
\stopitemize
\startMonsterName
Eagle Lord \CMTags{Group, Large, Organized, Intelligent}
\stopMonsterName
Claw (b[2d8]+1 damage 1 piercing) 10 HP 1 Armor
\CMTags{Close, Reach}
\startMonsterQualities
{\bf Special Qualities:} Mighty wings
\stopMonsterQualities
\startMonsterDescription
Some the size of horses. Bigger, even—the Kings and Queens of the Eagles. Their cry pierces the mountain sky and woe to those who fall under the shadow of their mighty wings. The ancient wizards forged a pact with them in the primordial days. Men would take the plains and valleys and leave the mountaintops to the Eagle Lords. These sacred pacts should be honored, lest they set their talons into you. Lucky are the elves, for the makers of their bonds yet live and when danger comes to Elvish lands, the Eagle Lords often serve as spies and mounts for the elves. Long-lived and proud, some might be willing to trade their ancient secrets for the right price, too. {\em Instinct} : To rule the heights
\stopMonsterDescription
\startitemize[1,packed]
\item Attack from the sky
\item Pull someone into the air
\item Call on ancient oaths
\stopitemize
\startMonsterName
Elvish Warrior \CMTags{Horde, Intelligent, Organized}
\stopMonsterName
Sword (b[2d6] damage) 3 HP 2 Armor
\CMTags{Close}
\startMonsterQualities
{\bf Special Qualities:} Sharp senses
\stopMonsterQualities
\startMonsterDescription
Like all the elves do, war is an art. I saw them fight, once. The Battle of Astrid’s Veil. Yes, I am that old, boy, now hush. She was clad in plate that shone like the winter sky. White hair streaming and a pennant of ocean blue tied to her spear. She seemed to glide across between the trees the way an angel might, striking out and bathing her blade in blood that steamed in the cold air. I never felt so small before. I trained with the master-at-arms of Battlemoore, you know. I’ve held a sword longer than you’ve been alive, boy, and in that one moment I knew that my skill meant nothing. Thank the gods the elves were with us then. A more beautiful and terrible thing I have not seen since. {\em Instinct} : To seek perfection
\stopMonsterDescription
\startitemize[1,packed]
\item Strike at a weak point
\item Set ancient plans in motion
\item Use the woods to advantage
\stopitemize
\startMonsterName
Elvish High Arcanist \CMTags{Solitary, Magical, Intelligent, Organized}
\stopMonsterName
Flame (d10 damage ignores armor) 12 HP 0 Armor
\CMTags{Near, Far}
\startMonsterQualities
{\bf Special Qualities:} Sharp senses
\stopMonsterQualities
\startMonsterDescription
True elvish magic isn’t like the spells of men. Mannish wizardry is all rotes and formulas. They cheat to find the arcane secrets that resound all around them. They are deaf to the arcane symphony that sings in the woods. Elvish magic is a fine ear to hear it and the voice with which to sing. To harmonize with what is already resounding. Men bind the forces of magic to their will; Elves simply pluck the strings and hum along. The High Arcanists, in a way, have become more and less than any elf. The beat of their blood is the throbbing of all magic in this world. {\em Instinct} : To unleash power
\stopMonsterDescription
\startitemize[1,packed]
\item Work the magic that wants to be worked
\item Cast forth the elements
\stopitemize
\startMonsterName
Griffin \CMTags{Group, Large, Organized}
\stopMonsterName
Claw (d8+3 damage) 10 HP 1 Armor
\CMTags{Close, Reach, Forceful}
\startMonsterQualities
{\bf Special Qualities:} Wings
\stopMonsterQualities
\startMonsterDescription
At first glance, one might mistake the Griffin for another magical mistake like the Manticore or the Chimera. It looks the part, doesn’t it? These creatures have the regal haughtiness of a lion and the arrogant bearing of an eagle but temper it with the unshakeable loyalty of both. To earn the friendship of a Griffin is to have an ally all your living days. Truly a gift, that. If you’re ever lucky enough to meet one, be respectful and deferential above all else. It may not seem it but they can tell and answer perceived slights with a sharp beak and talons. {\em Instinct} : To serve allies
\stopMonsterDescription
\startitemize[1,packed]
\item Carry an ally aloft
\item Strike from above
\stopitemize
\startMonsterName
Ogre \CMTags{Group, Large, Intelligent}
\stopMonsterName
Club (d8+5 damage) 10 HP 1 Armor
\CMTags{Close, Reach, Forceful}
\startMonsterDescription
A tale, then. Somewhere in the not-so-long history of the Mannish race there was a divide. In days when men were merely dwellers-in-the-mud with no magic to call their own, they split in two: one camp left their caves and the dark forests and built the First City to honor the gods. The others, a wild and savage lot, retreated into darkness. They grew, there. In the deep woods a grim loathing for their softer kin gave them strength. They found dark gods of their own, there in the woods and hills. Ages passed and they bred tall and strong and full of hate. We have forged steel and they match it with their savagery. We may have forgotten our common roots, but somewhere, deep down, the Ogres remember. {\em Instinct} : To return the world to darker days
\stopMonsterDescription
\startitemize[1,packed]
\item Destroy something
\item Topple trees
\item Bring down the roof
\stopitemize
\startMonsterName
Hill Giant \CMTags{Group, Huge, Intelligent, Organized}
\stopMonsterName
Rock (d8+3 damage) 10 HP 1 Armor
\CMTags{Reach, Near, Far, Forceful}
\startMonsterDescription
Ever seen an ogre before? Bigger than that. Dumber and meaner, too. Hope you like having cows thrown at you. {\em Instinct} : To hurl
\stopMonsterDescription
\startitemize[1,packed]
\item Throw something
\item Shake the earth
\stopitemize
\startMonsterName
Razor Boar \CMTags{Solitary}
\stopMonsterName
Bite (d10 damage 3 piercing) 16 HP 1 Armor
\CMTags{Close, Messy}
\startMonsterDescription
The tusks of the razor boar shred metal plate like so much tissue. Voracious, savage and unstoppable, they tower over their mundane kin. To kill one? A greater trophy of bravery and skill is hard to name, though I hear a razor boar killed the Drunkard King in a single thrust. You think you’re a better hunter than he? {\em Instinct} : To shred
\stopMonsterDescription
\startitemize[1,packed]
\item Rip them apart
\item Rend armor and weapons
\stopitemize
\startMonsterName
Sprite \CMTags{Horde, Tiny, Stealthy, Magical, Devious, Intelligent}
\stopMonsterName
Dagger (w[2d4] damage) 3 HP 0 Armor
\CMTags{Hand}
\startMonsterQualities
{\bf Special Qualities:} Wings, Fey Magic
\stopMonsterQualities
\startMonsterDescription
I’d classify them elementals, except that “Being Annoying” isn’t an element. {\em Instinct} : To play tricks
\stopMonsterDescription
\startitemize[1,packed]
\item Play a trick to expose someone's true nature
\item Confuse their senses
\item Craft an illusion
\stopitemize
\startMonsterName
Treant \CMTags{Group, Huge, Intelligent, Gibbous}
\stopMonsterName
Wallop (d8+5 damage) 21 HP 4 Armor
\CMTags{Reach, Forceful}
\startMonsterQualities
{\bf Special Qualities:} Wooden
\stopMonsterQualities
\startMonsterDescription
Old and tall and thick of bark,
walk amidst the tree-lined dark.
Strong and slow and forest-born,
the treants' anger quick, we warned.
If to the woods with axe ye go,
know the treants be thy foe.
{\em Instinct} : To protect nature
\stopMonsterDescription
\startitemize[1,packed]
\item Move with implacable strength
\item Set down roots
\item Spread old magic
\stopitemize
\startMonsterName
Werewolf \CMTags{Solitary, Intelligent}
\stopMonsterName
Bite (d10+2 damage 1 piercing) 12 HP 1 Armor
\CMTags{Close, Messy}
\startMonsterQualities
{\bf Special Qualities:} Weak to silver
\stopMonsterQualities
\startMonsterDescription
Beautiful, isn’t it? The moon, I mean. She’s watching us, you know? Her pretty silver eyes watch us while we sleep. Mad, too—like all the most beautiful ones. If she were a woman, I’d bend my knee and make her my wife on the spot. No, I didn’t ask you here to speak about her, though. The chains? For your safety, not mine. I’m cursed, you see. You must have suspected. The sorcerer-kings called it “lycanthropy” in their day—passed on by a bite to make more of our kind. No, I could find no cure. Please, Don’t be scared. You have the arrows I gave you? Silver, yes. Ah, you begin to understand. Don’t cry, sister. You must do this for me. I cannot bear more blood on my hands. You must end this. For me. {\em Instinct} : To shed the appearance of civilization
\stopMonsterDescription
\startitemize[1,packed]
\item Transform to pass unnoticed as beast or man
\item Strike from within
\item Hunt like man and beast
\stopitemize
\startMonsterName
Worg \CMTags{Horde, Organized}
\stopMonsterName
Bite (d6 damage) 3 HP 1 Armor
\CMTags{Close}
\startMonsterDescription
As horses are to the civil races, so are worgs to the goblins: mounts fierce in battle, ridden only by the bravest and most dangerous, found and bred in the forest primeval to serve the goblins in their wars on men. The only safe worg is a pup, separated from its mother. If you can find one of these, or make orphans of a litter with a sharp sword, you’ve got what could become a loyal protector or hunting hound in time. Train it well, mind you, for the worg are smart and never quite free of their primal urges. {\em Instinct} : To serve
\stopMonsterDescription
\startitemize[1,packed]
\item Carry a rider into battle
\item Give its rider an advantage
\stopitemize
\startMonsterName
Satyr \CMTags{Group, Devious, Magical, Hoarder}
\stopMonsterName
Charge (w[2d8] damage) 10 HP 1 Armor
\CMTags{Close}
\startMonsterQualities
{\bf Special Qualities:} Enchantment
\stopMonsterQualities
\startMonsterDescription
One of only a very few creatures to be found in the old woods that don’t right out want to maim, kill, or eat us. They dwell in glades pierced by the sun, and dance on their funny goat-legs to enchanting music played on pipes made of bone and silver. They smile easily and, so long as you please them with jokes and sport, will treat our kind with friendliness. They’ve a mean streak, though, so if you cross them, make haste elsewhere; very few things hold a grudge like the stubborn Satyr. {\em Instinct} : To enjoy
\stopMonsterDescription
\startitemize[1,packed]
\item Pull others into revelry through magic
\item Force gifts upon them
\item Play jokes with illusions and tricks
\stopitemize
% SPDX-FileCopyrightText: © 2021 Martin Michlmayr <tbm@cyrius.com>
% SPDX-License-Identifier: CC-BY-4.0
\setchapterimage[9.5cm]{images/window}
\chapter{Transparency}
\labch{transparency}
Projects that accept donations have an obligation to be transparent about the ways the funds are being used.
Several projects and foundations publish annual reports that cover their activities for the year. Contractors funded to work on open source projects often publish monthly reports in which they document their accomplishments.
Many organizations also publish detailed financial reports about income and expenses. Charities based in the US have to file a public annual tax return (Form 990) if their receipts exceed a certain threshold for the year. The filing contains information about income and expenses, and it often gives a good overview of an organization's activities.
Transparency also applies to other areas of an organization, such as the publication of by-laws and other governance documents.
\begin{kaobox}[frametitle=Transparency of FOSS foundations]
Many FOSS foundations publish annual reports, public filings, audited financial statements, and other materials.
Some examples include:
\begin{itemize}
\item Mozilla Foundation: \href{https://www.mozilla.org/en-US/foundation/annualreport}{annual reports, public filings, and audited financial statements}
\item NumFOCUS: \href{https://numfocus.org/community/mission/annual-reports}{annual reports} and \href{https://numfocus.org/legal}{public filings}
\item Open Infrastructure Foundation: \href{https://openinfra.dev/about/}{annual reports}
\item Software Freedom Conservancy: \href{https://sfconservancy.org/about/filings/}{public filings and audited financial statements}
\item The Document Foundation: \href{https://www.documentfoundation.org/foundation/financials/}{annual reports and accounting ledgers}
\end{itemize}
There's also a \href{https://gitlab.com/floss-foundations/npo-public-filings}{repository} where public filings from FOSS foundations are archived.
\end{kaobox}
\chapter{Implementation}\label{ch:implementation}
To fulfill the goals of this work, the authors prepared a system that collects and verifies mouse dynamics data.
The high-level flow of the system is presented in Fig.~\ref{fig:overall_system_structure} and provides the basis for further considerations.
\begin{figure}[!hbt]
\includegraphics[width=\linewidth]{resources/overall_diagram.png}
\captionof{figure}{Overall system structure diagram}
\label{fig:overall_system_structure}
\end{figure}
The entry point to the system is the user's browser, which gives both human users and human-impersonating bots access to the prepared website hosted on the Internet.
The \mbox{Data Collection}\upperref{itm:data-collection} module, which is persisted and operates in the cloud, acts as a mouse dynamics data collector: every mouse event generated by the user on the website is intercepted, transformed, and stored in the underlying database (Fig.~\ref{fig:overall_system_structure}, pt. 1 and pt. 2).
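The intercept-transform-store step can be sketched as follows (a Python sketch with an assumed record schema; the module's actual field names may differ):

```python
from dataclasses import dataclass, asdict

@dataclass
class MouseEvent:
    session_id: str  # hypothetical schema: one record per browser mouse event
    kind: str        # "move", "down", "up", ...
    x: int           # viewport coordinates
    y: int
    t_ms: int        # client-side timestamp in milliseconds

def transform(raw: dict, session_id: str) -> dict:
    """Normalize a raw browser event payload into the record to be stored."""
    return asdict(MouseEvent(
        session_id=session_id,
        kind=raw["type"],
        x=int(raw["clientX"]),
        y=int(raw["clientY"]),
        t_ms=int(raw["timeStamp"]),
    ))
```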
The administrator of the presented system can retrieve the data from the database at any time (pt. 3).
The downloaded data can be further processed (pt. 4) into the dataset used in the machine learning stage.
The administrator of the system should prepare a machine learning model and upload it to the Git repository (pt. 5); the computing cluster then fetches it and performs the computation using the model from the observed branch.
This approach lets many data scientists work on the solution simultaneously and accelerates the research.
The dataset should be uploaded to the computing cluster and persisted in the group's storage, so that many parallel computations can use it (omitted from the diagram for readability).
Each computation is requested from the local computer using the prepared Git `deploy' alias (described in Section~\ref{sec:prometheus-computing-cluster}), which performs a sequence of operations: establishing the connection to the cluster, fetching the current version of the code from the repository, and submitting the job to the SLURM\upperref{itm:slurm} workload manager (pt. 6, 7, 8).
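The operations behind such an alias can be sketched in Python (the hostname, repository path, and job-script name below are placeholders, not the actual cluster configuration):

```python
import subprocess

def deploy(host: str = "prometheus", repo: str = "~/project", job: str = "job.sh") -> list:
    """Build the command the hypothetical `deploy' alias runs: ssh to the
    cluster, pull the current code, and submit the job script to SLURM."""
    remote = f"cd {repo} && git pull --ff-only && sbatch {job}"
    cmd = ["ssh", host, remote]
    # subprocess.run(cmd, check=True)  # uncomment to actually submit
    return cmd
```

Returning the command list rather than executing it directly keeps the sequence inspectable and testable without cluster access.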
The cluster operates fully asynchronously: each job is queued, so its completion time is unknown.
This is inconvenient, as there is no mechanism in place that reports when a job has finished.
To overcome this limitation, a notification system is proposed as part of the Bot Detection\upperref{itm:bot-detection} module.
It presents the results of a completed job in a message, together with graphs prepared from the job's output.
To make it possible to send images, the notification system uses the external image-hosting service Imgur\upperref{itm:imgur}, which allows pictures to be uploaded and hosted under a generated \gls{url} (pt. 11).
The results and the graphs are combined into a Slack\upperref{itm:slack} message and sent to a previously prepared channel using the webhook \gls{api} (pt. 12, 13).
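A minimal sketch of this notification step, assuming Slack's standard incoming-webhook API; the message fields and URLs are illustrative, not the system's actual ones:

```python
import json
import urllib.request

def build_message(job_id: str, accuracy: float, image_urls: list) -> dict:
    """Combine job results and hosted graph links into one Slack payload."""
    lines = [f"Job {job_id} finished: accuracy={accuracy:.3f}"]
    lines += [f"Graph: {url}" for url in image_urls]
    return {"text": "\n".join(lines)}

def send_to_slack(webhook_url: str, payload: dict) -> None:
    """POST the JSON payload to a Slack incoming webhook."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)  # Slack replies with "ok" on success
```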
The following subsections cover the implementation details.
First, the Data Collection module is described, along with the cloud configuration and performance tests; then the bot that impersonates a human user; and finally the machine learning model, together with supporting tools such as the notification module.
%\documentclass[10pt,flushrt,preprint]{aastex}
\documentclass[iop]{emulateapj}
%\documentclass[manuscript]{aastex}
%\documentclass[preprint2]{aastex}
\usepackage{graphicx}
\usepackage[space]{grffile}
\usepackage{latexsym}
\usepackage{amsfonts,amsmath,amssymb}
\usepackage{url}
\usepackage[utf8]{inputenc}
\usepackage{fancyref}
\usepackage{hyperref}
\usepackage{textcomp}
\usepackage{longtable}
\usepackage{multirow,booktabs}
\usepackage{subfigure}
\usepackage{natbib}
%\usepackage{pdflscape}
\newcommand{\ZFIRE}{{\scshape ZFIRE}}
\newcommand{\ZFOURGE}{{\scshape ZFOURGE}}
\newcommand{\Ks}{K$_{\rm s}$}
\newcommand{\Halpha}{H$\alpha$}
\newcommand{\Hbeta}{H$\beta$}
\newcommand{\degree}{\hbox{$^\circ$}}
\newcommand{\msol}{M$_\odot$}
\newcommand{\hubble}{{\it Hubble}}
\newcommand{\kms}{km~s$^{-1}$}
\newcommand{\SII}{[\hbox{{\rm S}\kern 0.1em{\sc ii}}]}
\newcommand{\AlIII}{\hbox{{\rm Al}\kern 0.1em{\sc iii}}}
\newcommand{\NII}{[\hbox{{\rm N}\kern 0.1em{\sc ii}}]}
\newcommand{\OII}{[\hbox{{\rm O}\kern 0.1em{\sc ii}}]}
\newcommand{\OIII}{[\hbox{{\rm O}\kern 0.1em{\sc iii}}]}
\newcommand{\MgII}{\hbox{{\rm Mg}\kern 0.1em{\sc ii}}}
\newcommand{\MgI}{\hbox{{\rm Mg}\kern 0.1em{\sc i}}}
\newcommand{\FeII}{\hbox{{\rm Fe}\kern 0.1em{\sc ii}}}
\newcommand{\CIII}{\hbox{{\rm C}\kern 0.1em{\sc iii}}}
\newcommand{\CIV}{\hbox{{\rm C}\kern 0.1em{\sc iv}}}
\newcommand{\HII}{{\ion{H}{2}}}
\newcommand{\OIIIHb}{[{\ion{O}{3}}]/H$\beta$}
\newcommand{\NIIHa}{[\ion{N}{2}]/H$\alpha$}
\newcommand{\CII}{\hbox{{\rm C}\kern 0.1em{\sc ii}}}
\newcommand{\OI}{\hbox{{\rm O}\kern 0.1em{\sc i}}}
\newcommand{\NeIII}{[\hbox{{\rm Ne}\kern 0.1em{\sc iii}}] }
\newcommand{\NeII}{[\hbox{{\rm Ne}\kern 0.1em{\sc ii}}] }
\newcommand{\NaI}{[\hbox{{\rm Na}\kern 0.1em{\sc i}}] }
\newcommand{\around}{$\sim$}
\newcommand{\AvSED}{$\mathrm{Av_{\mathrm{SED}}}$}
\newcommand{\NMAD}{$\sigma_{\mathrm{NMAD}}$}
\newcommand{\zspec}{$z_{\mathrm{spec}}$}
\newcommand{\zgrism}{$z_{\mathrm{grism}}$}
\newcommand{\zphoto}{$z_{\mathrm{photo}}$}
\newcommand{\mass}{M$_*$/M$_\odot$}
\def\KG#1{{[\bf KG: #1]}}
%% preprint2 produces a double-column, single-spaced document:
%\documentclass[preprint2]{aastex}
%% Sometimes a paper's abstract is too long to fit on the
%% title page in preprint2 mode. When that is the case,
%% use the longabstract style option.
%% \documentclass[preprint2,longabstract]{aastex}
%% If you want to create your own macros, you can do so
%% using \newcommand. Your macros should appear before
%% the \begin{document} command.
%%
%% If you are submitting to a journal that translates manuscripts
%% into SGML, you need to follow certain guidelines when preparing
%% your macros. See the AASTeX v5.x Author Guide
%% for information.
%% You can insert a short comment on the title page using the command below.
\slugcomment{To appear in The Astrophysical Journal Supplements (ApJS)}
%% If you wish, you may supply running head information, although
%% this information may be modified by the editorial offices.
%% The left head contains a list of authors,
%% usually a maximum of three (otherwise use et al.). The right
%% head is a modified title of up to roughly 44 characters.
%% Running heads will not print in the manuscript style.
\shorttitle{The ZFIRE\ Survey}
\shortauthors{Nanayakkara et al.}
%% This is the end of the preamble. Indicate the beginning of the
%% paper itself with \begin{document}.
\begin{document}
%% LaTeX will automatically break titles if they run longer than
%% one line. However, you may use \\ to force a line break if
%% you desire.
\title{ZFIRE: A KECK/MOSFIRE Spectroscopic Survey of Galaxies in Rich
Environments at $z\sim2$}
% spectroscopy of galaxies in rich environments at $z\sim2$:
% Catalogue Release I and a comparison of spectroscopic and photometric derived properties of galaxies}
%% Use \author, \affil, and the \and command to format
%% author and affiliation information.
%% Note that \email has replaced the old \authoremail command
%% from AASTeX v4.0. You can use \email to mark an email address
%% anywhere in the paper, not just in the front matter.
%% As in the title, use \\ to force line breaks.
\author{Themiya Nanayakkara\altaffilmark{1,*} }
\author{Karl Glazebrook\altaffilmark{1}}
\author{Glenn G. Kacprzak\altaffilmark{1}}
\author{Tiantian Yuan\altaffilmark{2}}
\author{Kim-Vy Tran\altaffilmark{3}}
\author{Lee Spitler\altaffilmark{5,6}}
\author{Lisa Kewley\altaffilmark{2}}
\author{Caroline Straatman\altaffilmark{4}}
\author{Michael Cowley\altaffilmark{5,6}}
\author{David Fisher\altaffilmark{1}}
\author{Ivo Labbe\altaffilmark{4}}
\author{Adam Tomczak\altaffilmark{3}}
\author{Rebecca Allen\altaffilmark{1,6}}
\author{Leo Alcorn\altaffilmark{3}}
\altaffiltext{1}{Centre for Astrophysics and Supercomputing, Swinburne University of Technology, Hawthorn, Victoria 3122, Australia.}
\altaffiltext{*}{tnanayak@astro.swin.edu.au}
\altaffiltext{2}{Research School of Astronomy and Astrophysics, The Australian National University, Cotter Road, Weston Creek, ACT 2611, Australia.}
\altaffiltext{3}{George P. and Cynthia W. Mitchell Institute for Fundamental Physics and Astronomy, Department of Physics and Astronomy, Texas A \& M University, College Station, TX 77843.}
\altaffiltext{4}{Leiden Observatory, Leiden University, PO Box 9513, 2300
RA Leiden, Netherlands.}
\altaffiltext{5}{Department of Physics \& Astronomy, Macquarie University,
Sydney, NSW 2109, Australia.}
\altaffiltext{6}{Australian Astronomical Observatory, PO Box 915, North
Ryde, NSW 1670, Australia.}
\begin{abstract}
We present an overview and the first data release of
ZFIRE, a spectroscopic
redshift survey of star-forming galaxies that utilizes the MOSFIRE
instrument on Keck-I to study galaxy properties in rich environments
at $1.5<z<2.5$. ZFIRE measures accurate spectroscopic redshifts and
basic galaxy properties derived from multiple emission lines. The
galaxies are selected from a stellar mass limited sample based on deep
near infrared imaging ($\mathrm{K_{AB}<25}$) and precise photometric
redshifts from the ZFOURGE and UKIDSS surveys as well as grism
redshifts from 3DHST. Between 2013 and 2015 ZFIRE has observed the COSMOS and UDS
legacy fields over 13 nights and has obtained 211 galaxy redshifts over
$1.57<z<2.66$ from a combination of nebular emission lines (such as
\Halpha, \NII, \Hbeta, \OII, \OIII, \SII) observed at 1--2\micron.
Based on our medium-band near infra-red photometry, we are able to
spectrophotometrically flux calibrate our spectra to
\around10\% accuracy. ZFIRE reaches $5\sigma$ emission line flux
limits of \around$\mathrm{3\times10^{-18}~erg/s/cm^2}$ with a
resolving power of $R=3500$ and reaches masses down to
\around10$^{9}$\msol.
We confirm that the primary input survey,
ZFOURGE, has produced photometric redshifts for star-forming galaxies
(including highly attenuated ones) accurate to $\Delta
z/(1+z_{\mathrm{spec}})=0.015$ with $0.7\%$ outliers. We measure a
slight redshift bias of $<0.001$, and we note that the redshift bias tends to
be larger at higher masses. We also examine the
role of redshift on the derivation of rest-frame colours and stellar
population parameters from SED fitting techniques.
The ZFIRE survey extends spectroscopically confirmed $z\sim 2$ samples
across a richer range of
environments, here we make available the first public release of
the data for use by the community.\footnote{\url{http://zfire.swinburne.edu.au}}
\end{abstract}
% delta(z) = zspec-zphot
%% Keywords should appear after the \end{abstract} command. The uncommented
%% example has been keyed in ApJ style. See the instructions to authors
%% for the journal to which you are submitting your paper to determine
%% what keyword punctuation is appropriate.
\keywords{galaxies: catalogs --- galaxies: clusters ---
galaxies: distances and redshifts --- galaxies: general--- galaxies:
high-redshift--- surveys}
\section{Introduction}
The rapid development of very deep multi-wavelength imaging surveys from the ground and space in the past decade has greatly enhanced our understanding of important questions in galaxy evolution particularly through the provision of `photometric redshift' estimates (and hence the evolutionary sequencing of galaxies) from multi-band spectral energy distribution (SED)
fitting \citep{Whitaker2011,McCracken2012,Skelton2014}. Studies using data from these surveys have led to a more detailed understanding of topics such as the evolution of the galaxy mass function \citep[e.g.,][]{Marchesini2010,Muzzin2013,Tomczak2014,Grazian2015}, stellar population properties \citep[e.g.,][]{Maseda2014,Spitler2014,Pacifici2015}, evolution of galaxy morphology \citep[e.g.,][]{Huertas-Company2015,Papovich2015}, and the growth of the large-scale structure in the universe \citep{Adelberger2005,Wake2011}.
\subsection{Advances with Deep Near-IR Imaging Surveys}
Near-infrared data is vital for this endeavour, both for photometric redshift estimation \citep{Dahlen2013,Rafelski2015} and provision of stellar mass estimates \citep{Brinchmann2000,Muzzin2009}.
Stellar mass is especially useful for tracking galaxy evolution as it increases monotonically with time, but data at near-infrared wavelengths
are needed to estimate it accurately at high-redshift \citep[][Straatman et al. in press]{Whitaker2011}. New surveys have been made possible by the recent development of relatively wide-field sensitive near infrared (NIR) imagers on 4--8m telescopes such as FourStar \citep{Persson2013}, HAWK-I \citep{Pirard2004}, NEWFIRM \citep{NEWFIRM} and VIRCAM \citep{Dalton2006}. Surveys such as ZFOURGE (Straatman et al., in press), the NEWFIRM medium-band Survey (NMBS) \citep{Whitaker2011}, and ULTRAVISTA \citep{McCracken2012} have obtained deep imaging over relatively large sky areas (up to 1.5 deg$^2$). The introduction of near-infrared medium-band filters ($\Delta\lambda\sim 1000$\AA) has resulted in photometric redshifts with accuracies of \around2\% \citep{Whitaker2011} and enabled galaxy properties to be accurately derived by SED fitting techniques such as EAZY \citep{Brammer2008} and FAST \citep{Kriek2009}.
These photometric redshift surveys have greatly enhanced our understanding of the universe at $z\sim2$, which is a critical epoch in the evolution of the universe. At this redshift, the universe was only 3 billion years old and was at the peak of cosmic star formation rate activity \citep{Hopkins2006,Lee2015}. We see the presence of massive, often dusty, star-forming galaxies
\citep{Spitler2014,Reddy2015} which were undergoing rapid evolution and the development of a significant
population of massive, quiescent galaxies \citep{vanDokkum2008,Damjanov2009}.
Galaxy clusters have also now been identified at $z\sim2$, and results
indicate that this may be the epoch when environment starts to influence galaxy evolution \citep{Gobat2011,Spitler2012,Yuan2014,Casey2015}.
\subsection{Need for Spectroscopy}
Even though immense progress on understanding galaxy evolution has been made possible by deep imaging surveys, the spectroscopy of galaxies remains critically important. Spectroscopy provides the basic, precision redshift information that can be used to investigate the accuracy of photometric redshifts derived via SED fitting techniques. The galaxy properties derived via photometry have a strong dependence on the redshifts, and quantifying any systematic biases will help constrain the derived galaxy properties and understand associated errors.
Spectral emission and absorption lines also provide a wealth of information on physical processes and kinematics within galaxies \citep{Shapley2009}. Spectroscopy also provides accurate environmental information (for example, the velocity dispersions of proto-clusters; e.g. \cite{Yuan2014}) beyond the resolution of photometric redshifts.
Rest-frame ultraviolet (UV) spectroscopy of galaxies provides information on the properties of massive stars in galaxies and the composition and kinematics of the galaxies' interstellar medium \citep[ISM;][]{Dessauges2010,Quider2010}.
Rest-frame optical absorption lines are vital to determine the older stellar population properties of the galaxies \citep[e.g.,][]{vandeSande2011,Belli2014}. Rest-frame optical emission lines provide information on the state of the ionized gas in galaxies, its density, ionization degree, and metallicity \citep{Pettini2004,Steidel2014,Kacprzak2015,Kewley2016,Shimakawa2015}.
\subsection{Spectroscopy of $z\lesssim1$ Galaxies}
Large-scale spectroscopy is now routine in the low-redshift universe.
Surveys such as the Sloan Digital Sky Survey \citep[][]{York2000}, the 2-Degree Field Galaxy Redshift Survey \citep[][]{Colless2001}, and the Galaxy and Mass Assembly Survey \citep[][]{Driver2009} extensively explored the $z\la 0.2$ universe ($10^5$--$10^6$ galaxies). At $z\sim 1$
the DEEP2 Galaxy Redshift Survey \citep{Newman2013}, the VIMOS VLT Deep Survey \citep{LeFevre2005}, the VIMOS Public Extragalactic Survey \citep{Garilli2014}, and zCOSMOS \citep{Lilly2007} have produced large spectroscopic samples ($10^4$--$10^5$ galaxies).
The large number of galaxies sampled in various environmental and physical conditions by these surveys has placed strong constraints on galaxy models at $z<1$ while revealing rare phases and mechanisms of galaxy evolution \citep[e.g.,][]{Cooper2007,Coil2008,Cheung2012,Newman2013}.
\subsection{Spectroscopy of $z\sim2$ Galaxies}
At $z\gtrsim1.5$, rest-frame optical features are redshifted into the NIR regime, and accessing these diagnostics therefore becomes more challenging. Historically, the spectroscopy of galaxies at these redshifts focussed on the follow up of Lyman break galaxies, which are rest-frame UV selected using the distribution of the objects in $\cal{U}$, $\cal{G}$, and $\cal{R}$ colour space \citep{Steidel1992}. This technique takes advantage of the discontinuity of the SEDs near the Lyman limit. \citet{Steidel2003} used this technique to target these candidates with multi-object optical spectrographs to obtain rest frame UV spectra for \around1000 galaxies at $z\sim3$.
Furthermore, $\cal{U}$, $\cal{G}$, and $\cal{R}$ selections can be modified to select similar star-forming galaxies between $1.5<z<2.5$ via their U-band excess flux \citep{Steidel2004}.
Such sample selections are biased toward UV bright sources and do not yield homogeneous mass complete samples. Surveys such as the Gemini Deep Deep Survey \citep[][]{Abraham2004} and the Galaxy Mass Assembly ultra-deep Spectroscopic Survey \citep[][]{Kurk2013} have attempted to address this by using the IR selection of galaxies (hence much closer to mass-complete samples) before obtaining optical spectroscopy.
The K20 survey \citep{Cimatti2002} used a selection based on Ks magnitude (Ks$<20$) to obtain optical spectroscopy of extremely dusty galaxies at $z\sim1$.
These surveys have provided redshift information, but only rest-frame UV spectral diagnostics, and many red galaxies are extremely faint in the rest-UV requiring very long exposure times.
The development of near-IR spectrographs has given us access to rest-frame optical spectroscopy of galaxies at $z\gtrsim1.5$, but the ability to perform spectroscopy of a large number of galaxies has been hindered due to low sensitivity and/or unavailability of multiplexed capabilities.
For example the MOIRCS Deep Survey \citep{Kajisawa2006} had to compromise between area, sensitivity, number of targets, and resolution due to instrumental limits with MOIRCS in Subaru \citep{Ichikawa2006}.
The Subaru FMOS galaxy redshift survey \citep{Tonegawa2015} yielded mostly bright line emitters due to limitations in the sensitivity of FMOS \citep{Kimura2010}.
Furthermore, FMOS does not cover the longer K-band regime which places an upper limit for \Halpha\ detections at $z\sim1.7$. Sensitive long slit spectrographs such as GNIRS \citep{Elias2006} and XShooter \citep{Vernet2011} have been utilised to observe limited samples of massive galaxies at $z\sim2$.
NIR-grism surveys from the \emph{Hubble Space Telescope (HST)} have yielded large samples such as in the 3DHST survey \citep{Momcheva2015,Treu2015} but have low spectral resolution ($R\sim70-300$) and do not probe wavelengths $>$ 2\micron.
With the introduction of the Multi-object Spectrometer for infrared Exploration (MOSFIRE), a cryogenic configurable multislit system on the 10m Keck telescope \citep{McLean2012}, we are now able to obtain high-quality near-infrared spectra of galaxies in large quantities \citep{Kulas2013,Steidel2014,Kriek2015,Wirth2015}.
The Team Keck Redshift Survey 2 observed a sample of 97 galaxies at $z\sim2$ to test the performance of the new instrument \citep{Wirth2015} and investigate the ionization parameters of galaxies at $z\sim2$.
The Keck Baryonic Structure Survey is an ongoing survey of galaxies currently with 179 galaxy spectra, which is primarily aimed to investigate the physical processes between baryons in the galaxies and the intergalactic medium \citep{Steidel2014}.
The MOSFIRE Deep Evolution Field (MOSDEF) survey is near-infrared selected and aims to observe \around1500 galaxies $1.5<z<3.5$ to study stellar populations, Active Galactic Nuclei, dust, metallicity, and gas physics using nebular emission lines and stellar absorption lines \citep{Kriek2015}.
\subsection{The ZFIRE Survey}
In this paper, we present the ZFIRE survey, which utilizes MOSFIRE to observe galaxies in rich environments at $z>1.5$ with a complementary sample of field galaxies. A mass/magnitude complete study of rich galaxy environments is essential to overcome selection bias.
Galaxy clusters are the densest galaxy environments in the universe and are formed via various physical processes \citep{Kravtsov2012}.
They are a proxy for the original matter density fields of the universe and can be used to constrain fundamental cosmological parameters. Focusing on these rich environments at high-redshift provides access to numerous galaxies with various physical conditions that are rapidly evolving and interacting with their environments.
These galaxies can be used to study the formation mechanisms of local galaxy clusters in a period where they are undergoing extreme evolutionary processes. Such environments are rare at $z\sim 2$ \citep{Gobat2011,Newman2014,Yuan2014}: for example, we target the \cite{Spitler2012} cluster at $z=2.1$, which was the only such massive structure found in the 0.1 deg$^2$ ZFOURGE survey (and that with only a 4\% probability; \citealt{Yuan2014}). Hence, a pointed survey on such clusters and their environs is highly complementary to other field surveys being performed with MOSFIRE.
Here we present the ZFIRE\ survey overview and first data release.
We release data for two cluster fields: one at $z=2.095$ \citep{Spitler2012,Yuan2014} and the other at $z=1.62$ \citep{Papovich2010,Tanaka2010}.
The structure of the paper is as follows:
in Section \ref{sec:survey}, we describe the ZFIRE survey design, target selection and data reduction.
In Section \ref{sec:results}, we present our data and calculate the completeness and detection limits of the survey.
We investigate the accuracy of photometric redshifts of different surveys that cover the ZFIRE\ fields in Section \ref{sec:photometric_redshifts}. In Section \ref{sec:implications}, we study the role of photometric redshift accuracy on galaxy physical parameters derived via common SED fitting techniques and how spectroscopic accuracy affects cluster membership identification.
A brief description of the past/present work and the future direction of the survey is presented in Section \ref{sec:summary}.
We assume a cosmology with H$_0=70$ \kms\ Mpc$^{-1}$, $\Omega_\Lambda=0.7$, and $\Omega_m=0.3$.
Unless explicitly stated we use AB magnitudes throughout the paper.
Stellar population model fits assume a \citet{Chabrier2003} initial mass function (IMF), \citet{Calzetti2001} dust law and solar metallicity. We define \zspec\ as the spectroscopic redshift, \zphoto\ as the photometric redshift, and \zgrism\ as the grism redshift from 3DHST \citep{Momcheva2015}. We express stellar mass (M$_*$) in units of solar mass (M$_\odot$).
Data analysis was performed using \texttt{iPython} \citep{Perez2007}, \texttt{astropy} \citep{Astropy2013}, and \texttt{matplotlib} \citep{Hunter2007}; the code needed to reproduce the figures will be available online\footnote{https://github.com/themiyan/zfire\_survey}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{ZFIRE\ Observations and Data Reduction}
\label{sec:survey}
MOSFIRE \citep{McLean2008,McLean2010,McLean2012} operates from 0.97--2.41 microns (i.e., corresponding to the atmospheric $YJHK$ bands, one band at a
time) and provides a 6.1$'\times 6.1'$ field of view with a resolving power of $R$\around3500. It is equipped with a cryogenic configurable slit unit that can include up to 46 slits and be configured in
\around6 minutes. MOSFIRE has a Teledyne H2RG HgCdTe detector with
2048 $\times$ 2048 pixels ($0''.1798$/pix) and can be used as a multi-object spectrograph and a wide-field imager by removing the masking bars from the field of view. ZFIRE\ utilizes the multi-object spectrograph capabilities of MOSFIRE.
The galaxies presented in this paper consist of observations of two cluster fields from the Cosmic Evolution Survey (COSMOS) field \citep{Scoville2007} and the Hubble Ultra Deep Survey (UDS) Field \citep{Beckwith2006}.
These clusters are the \cite{Yuan2014} cluster at \zspec=2.095 and IRC 0218 cluster \citep{Papovich2010,Tanaka2010,Tran2015} at \zspec=1.62.
\cite{Yuan2014} spectroscopically confirmed the cluster, which was identified by \citet{Spitler2012} using photometric redshifts and deep Ks band imaging from ZFOURGE.
The IRC 0218 cluster was confirmed independently by \citet{Papovich2010} and \citet{Tanaka2010}.
Field galaxies neighbouring on the sky, or in redshift shells, are also observed and provide a built-in comparison sample.
\subsection{ZFIRE\ Survey Goals and Current Status}
The primary science questions addressed by the ZFIRE\ survey are as follows:
\begin{enumerate}
\item What are the ISM physical conditions of the galaxies?
We test the Mappings IV models by using \Halpha, \NII, \Hbeta, \OII, \OIII, and \SII\ nebular emission lines to study the evolution of chemical enrichment and the ISM as a function of redshift \citep{Kewley2016}.
\item What is the IMF of galaxies?
We use the \Halpha\ equivalent width as a proxy for the IMF of star-forming galaxies at z\around2 (T. Nanayakkara et al., in preparation).
\item What are the stellar and gas kinematics of galaxies?
Using \Halpha\ rotation curves we derive accurate kinematic parameters of the galaxies. Using the Tully-Fisher relation \citep{Tully1977} we track how stellar mass builds up inside dark matter halos to provide a key observational constraint on galaxy formation models \citep[][C. Straatman et al., in preparation]{Alcorn2016}.
\item How do fundamental properties of galaxies evolve to $z\sim2$ ?
Cluster galaxies at z\around2 include massive star-forming members that are absent in lower redshift clusters.
We measure their physical properties and determine how these members must evolve to match the galaxy populations in clusters at $z<$1 \citep{Tran2015, Kacprzak2015}.
%Furthermore, we will investigate the role of AGN in galaxy clusters and the quenching mechanisms of galaxies in dense high-redshift environments.
\end{enumerate}
Previous results from ZFIRE\ have already been published.
\citet{Yuan2014} showed that the galaxy cluster identified by ZFOURGE \citep{Spitler2012} at $z=2.095$ is a progenitor for a Virgo like cluster. \citet{Kacprzak2015} found no significant environmental effect on the stellar MZR for galaxies at $z\sim2$. \citet{Tran2015} investigated \Halpha\ SFRs and gas phase metallicities at a lower redshift of $z\sim1.6$ and found no environmental imprint on gas metallicity but detected quenching of star formation in cluster members.
\citet{Kewley2016} investigated the ISM and ionization parameters of galaxies at $z\sim2$ to show significant differences of galaxies at $z\sim2$ with their local counterparts.
Here the data used to address the above questions in past and future papers is presented.
\subsection{Photometric Catalogues}
Galaxies in the COSMOS field are selected from the ZFOURGE survey (Straatman et al. in press) which is
a 45 night deep Ks band selected photometric legacy survey carried out using the 6.5 meter Magellan Telescopes located at Las Campanas observatory in Chile.
The survey covers 121 arcmin$\mathrm{^2}$ each in COSMOS, CDFS, and UDS cosmic fields using the near-IR medium-band filters of the FourStar imager \citep{Persson2013}.
All fields have \emph{HST} coverage from the CANDELS survey \citep{Grogin2011,Koekemoer2011} and a wealth of multi-wavelength legacy data sets \citep{Giacconi2002,Capak2007,Lawrence2007}.
For the ZFIRE\ survey, galaxy selections were made from v2.1 of the internal ZFOURGE catalogues. A catalogue comparison between v2.1 and the updated ZFOURGE public data release 3.1 is provided in Appendix \ref{sec:ZFOURGE comparison}.
The v2.1 data release reaches a $5\sigma$ limiting depth of $Ks=25.3$ in FourStar imaging of the COSMOS field \citep{Spitler2012} which is used to select the ZFIRE K-band galaxy sample.
\emph{HST} WFC3 imaging was used to select the ZFIRE H-band galaxy sample.
EAZY \citep{Brammer2008} was used to derive photometric redshifts by fitting linear combinations of nine SED templates to the observed SEDs\footnote{An updated version of EAZY is used in this analysis compared to what is published by \citet{Brammer2008}. Refer to \citet{Skelton2014}, Section 5.2, for further information on the changes. The updated version is available at \url{https://github.com/gbrammer/eazy-photoz}.}.
With the use of medium-band imaging and the availability of multi-wavelength data spanning from UV to Far-IR (0.3-8$\mu$m in the observed frame), ZFOURGE produces photometric redshifts accurate to $1-2\%$ \cite[Straatman et al., in press;][]{Kawinwanichakij2014,Tomczak2014}.
Galaxy properties for the ZFOURGE catalogue objects are derived using FAST \citep{Kriek2009} with synthetic stellar populations from \citet{Bruzual2003} using a $\chi^2$ fitting algorithm to derive ages, star-formation time-scales, and dust content of the galaxies.
Full information on the ZFOURGE imaging survey can be found in Straatman et al. (in press).
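The quoted $1$--$2\%$ photometric-redshift accuracy is conventionally summarized by the scatter statistic \NMAD\ together with a catastrophic-outlier fraction. A minimal sketch of both statistics (the mock sample below is illustrative, not survey data, and the $0.15$ outlier threshold is a common convention, not necessarily the one used here):

```python
import numpy as np

def sigma_nmad(z_spec, z_phot):
    """Normalized median absolute deviation of dz/(1+z_spec),
    the scatter statistic commonly quoted for photometric redshifts."""
    dz = (z_phot - z_spec) / (1.0 + z_spec)
    return 1.48 * np.median(np.abs(dz - np.median(dz)))

def outlier_fraction(z_spec, z_phot, cut=0.15):
    """Fraction of catastrophic outliers with |dz|/(1+z_spec) > cut."""
    dz = np.abs(z_phot - z_spec) / (1.0 + z_spec)
    return np.mean(dz > cut)

# Illustrative mock sample: photo-z scattered about spec-z at the ~2% level
rng = np.random.default_rng(0)
z_spec = rng.uniform(1.5, 2.5, 500)
z_phot = z_spec + 0.02 * (1.0 + z_spec) * rng.standard_normal(500)
print(sigma_nmad(z_spec, z_phot), outlier_fraction(z_spec, z_phot))
```

For a pure Gaussian scatter of $2\%$ in $\Delta z/(1+z)$, \NMAD\ recovers \around0.02 and the outlier fraction is negligible; real samples deviate from this through their outlier tails.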
The IRC 0218 cluster is not covered by the ZFOURGE survey. Therefore publicly available UKIDSS imaging \citep{Lawrence2007} of the UDS field is used for sample selection.
The imaging covers 0.77 deg$^2$ of the UDS field and reaches a $5\sigma$ limiting depth of $\rm K_{AB}= 25$ (DR10; \citealt{UDS_DR10}).
Similar to ZFOURGE, public K-band selected catalogues of UKIDSS were used with EAZY and FAST to derive photometric redshifts and galaxy properties \citep{Quadri2012}.
\subsection{Spectroscopic Target Selection}
\label{sec:sample_def}
In the first ZFIRE\ observing run, the COSMOS field between redshifts $2.0<$\zphoto$<2.2$ was surveyed to spectroscopically confirm the overdensity of galaxies detected by \cite{Spitler2012}.
The main selection criteria were that the \Halpha\ emission line falls within the NIR atmospheric windows and within the coverage of the MOSFIRE filter set.
For each galaxy, H and K filters were used to obtain multiple emission lines to constrain the parameters of interest.
Nebular emission lines such as \Halpha\ are strong in star-forming galaxies and hence are much quicker to detect than the underlying continuum features of the galaxies.
Therefore, rest frame UVJ colour selections \citep{Williams2009} were used to select primarily star-forming galaxies in the cluster field for spectroscopic follow up.
While local clusters are dominated by passive populations, it is known that high-$z$ clusters contain a higher fraction of star-forming galaxies \citep{Wen2011,Tran2010,Saintonge2008}.
This justifies our use of the K band to probe the strong emission lines of star-forming galaxies; however, given the absence of prominent absorption features falling in the K band at $z\sim2$, we note that our survey could be incomplete because it may miss weakly star-forming and/or quiescent cluster galaxies.
The primary goal was to build a large sample of redshifts to identify the underlying structure of the galaxy overdensity; therefore, explicitly choosing star-forming galaxies increased the efficiency of the observing run.
Quiescent galaxies were selected either as fillers for the masks or because they were considered to be the brightest cluster galaxies (BCG).
Rest-frame U$-$V and V$-$J colours of galaxies are useful to distinguish star-forming galaxies from quenched galaxies \citep{Williams2009}.
The rest-frame UVJ diagram and the photometric redshift distribution of the selected sample is shown in the left panel of Figure \ref{fig:UVJ_selection}.
All rest-frame colours have been derived using photometric redshifts using EAZY with special dustier templates as per \citet{Spitler2014}.
Out of the galaxies selected to be observed by ZFIRE, \around83\% are (blue) star-forming. The rest of the population comprises \around11\% dusty (red) star-formers and \around6\% quiescent galaxies.
For all future analysis in this paper, the \citet{Spitler2014} EAZY templates are replaced with the default EAZY templates in order to allow direct comparison with other surveys.
More information on UVJ selection criteria is explained in Section \ref{sec:UVJ}.
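In code, a UVJ classification of this kind reduces to a pair of rest-frame colour cuts. The sketch below follows the spirit of the \citet{Williams2009} selection; the boundary values are illustrative assumptions, not the exact cuts used by the survey:

```python
def uvj_class(u_v, v_j):
    """Rough UVJ classifier in the spirit of Williams et al. (2009).

    u_v, v_j: rest-frame U-V and V-J colours (AB magnitudes).
    The numeric boundaries below are illustrative assumptions only.
    """
    # Quiescent wedge: red in U-V but not excessively red in V-J
    quiescent = (u_v > 1.3) and (v_j < 1.6) and (u_v > 0.88 * v_j + 0.59)
    if quiescent:
        return "quiescent"
    # Star-formers split into blue vs red (dusty) by a vertical V-J cut,
    # mirroring the dashed vertical line in Figure 1
    return "dusty star-forming" if v_j > 1.2 else "blue star-forming"

print(uvj_class(1.0, 0.5))  # blue star-forming
print(uvj_class(1.9, 1.0))  # quiescent
```

The three return values correspond to the blue star-forming, red (dusty) star-forming, and quiescent classes quoted in the sample fractions above.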
\begin{figure*}
\includegraphics[scale=0.60]{figures/Rest_frame_UVJ_pre_observed.pdf}
\includegraphics[scale=0.60]{figures/Rest_frame_UVJ_pre_observed_UDS.pdf}
\caption{ Rest frame UVJ diagram of the galaxy sample selected from ZFOURGE and UKIDSS surveys to be observed.
Quiescent, blue star-forming, and red (dusty) star-forming galaxies are selected using \citet{Spitler2014} criteria which are shown as red, blue, and orange stars, respectively.
Galaxies above the outlined section are considered to be quiescent. The remaining galaxies are divided to blue and red star-forming galaxies by the dashed vertical line.
Photometric redshifts are used to derive the rest-frame colours using EAZY. The photometric redshift distribution of the selected sample is shown by the histogram in the inset.
{\bf Left:} the ZFOURGE sample in the COSMOS field selected to be observed by ZFIRE.
The logarithmic (2D density) greyscale histogram shows the total UVJ distribution of the ZFOURGE galaxies between 1.90$<$\zphoto$<$2.66.
In the sample selection, priority is given for the star-forming galaxies that lie below the outlined section in the diagram.
{\bf Right:} similar, but now for the UKIDSS sample in the UDS field with galaxies within $10'$ radii from the cluster BCG and at redshifts $1.57<$\zphoto$<1.67$ shown as the greyscale.
}
\label{fig:UVJ_selection}
\end{figure*}
The COSMOS sample at $z\sim2$ requires K-band observations from MOSFIRE to detect \Halpha\ emission lines.
A subset of the K-band selected galaxies are then followed up in H-band to retrieve \Hbeta\ and \OIII\ emission lines.
During the first observing run, object priorities for the galaxies in the COSMOS field were assigned as follows.
\begin{enumerate}
\item K-band observations for rest frame UVJ selected star-forming K$<$24 galaxies with 2.0$<$\zphoto$<$2.2.
\item K-band observations for rest frame UVJ selected star-forming K$>$24 galaxies with 2.0$<$\zphoto$<$2.2.
\item K-band observations for rest frame UVJ selected non-star-forming galaxies with 2.0$<$\zphoto$<$2.2.
\item Galaxies outside the redshift range to be used as fillers.
\end{enumerate}
In subsequent observing runs, the following criteria were used to assign priorities.
\begin{enumerate}
\item H-band observations for galaxies with \Halpha\ and \NII\ detections from K-band.
\item H-band observations for galaxies with only \Halpha\ detection for follow up spectroscopic redshift verification with \Hbeta\ and/or \OIII\ emission lines.
\item K-band observations for galaxies with only \Halpha\ emission lines for deeper spectroscopic redshift verification and gas phase metallicity study with deeper \NII\ emission lines.
\end{enumerate}
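The feasibility bookkeeping behind these priorities, namely which redshifted nebular line lands in which MOSFIRE band, can be sketched as follows (the band edges in microns are approximate assumptions, not instrument specifications):

```python
# Rest wavelengths in Angstroms for lines used by the survey
REST = {"Hbeta": 4861.0, "OIII": 5007.0, "Halpha": 6563.0, "NII": 6583.0}

# Approximate MOSFIRE bandpass edges in microns (illustrative values)
BANDS = {"Y": (0.97, 1.12), "J": (1.15, 1.35), "H": (1.46, 1.81), "K": (1.92, 2.40)}

def band_for_line(line, z):
    """Return the band (if any) containing the redshifted line."""
    lam_um = REST[line] * (1.0 + z) / 1e4  # observed wavelength in microns
    for band, (lo, hi) in BANDS.items():
        if lo <= lam_um <= hi:
            return band
    return None  # the line falls between bands or outside coverage

# At z ~ 2.1, Halpha lands in K and Hbeta/[OIII] in H;
# at z = 1.62, Halpha lands in H and [OIII] in J
print(band_for_line("Halpha", 2.1), band_for_line("Hbeta", 2.1))
print(band_for_line("Halpha", 1.62), band_for_line("OIII", 1.62))
```

This reproduces the band choices described in the text: K then H follow-up for the COSMOS $z\sim2.1$ sample, and J plus H for the $z=1.62$ UDS cluster.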
The UDS sample was selected from the XMM-LSS J02182-05102 cluster \citep{Papovich2010,Tanaka2010} in order to obtain \OIII, \Halpha\ and \NII\ emission lines. At $z=1.62$, these nebular emission lines are redshifted to J and H-bands.
Cluster galaxies were specifically targeted to complement with the Keck Low Resolution Imaging Spectrometer (LRIS) observations \citep{Tran2015}.
Y-band spectra were obtained for a subset of galaxies in the cluster in order to detect \MgII\ absorption features and the D4000 break.
The UVJ diagram and the photometric redshift distribution of the selected sample is shown by the right panel of Figure \ref{fig:UVJ_selection}. In the selected sample, \around65\% of galaxies are star-forming while dusty star-forming and quiescent galaxies are each \around17\%.
The highest object priorities for the UDS sample were assigned as follows.
\begin{enumerate}
\item BCGs of the \citet{Papovich2010} cluster.
\item LRIS detections with \zspec$\sim$1.6 by \cite{Tran2015}.
\item Grism spectra detections with $z_{\mathrm{grism}}\sim1.6$ \citep[3DHST;][]{Momcheva2015}.
\item Cluster galaxy candidates within R$<1$ Mpc and \zphoto$\sim1.6$ \citep{Papovich2010}.
\end{enumerate}
For further information on target selection, refer to \citet{Tran2015}.
\subsection{Slit Configurations with MAGMA}
\label{sec:mask_design}
MOSFIRE slit configurations are made through the publicly available MOSFIRE Automatic GUI-based Mask Application (MAGMA\footnote{http://www2.keck.hawaii.edu/inst/mosfire/magma.html}) slit configuration design tool.
The primary purpose of MAGMA is to design slit configurations to be observed with MOSFIRE and to execute them in real time at the telescope.
Once the user specifies a target list with a priority for each object, the software iteratively dithers the pointing within user-defined limits to determine the most optimized slit configuration.
%An optimized configuration is defined as the configuration with the highest total priority score, which is calculated by adding the assigned priorities of the objects selected to the slit configuration.
%The dithering process is done iteratively and the user is provided with the overall highest priority score and the associated slit configuration with the object list.
%The number of iterations can be defined by the user. Higher the number of iterations the more optimized the slit configuration is expected to be.
The slit configurations can then be executed during MOSFIRE observing; with MAGMA, the physical configuration of a mask can be completed in under 15 minutes.
For the objects in the COSMOS field, \around10,000 iterations were used to select objects from a target list comprising \around2000 objects.
\citet{vanderWel2012} used \emph{HST} imaging to derive position angles of galaxies in the CANDELS sample using GALFIT \citep{Peng2010b}.
The number of slits within $\pm30^{\circ}$ of the galaxy major axis was maximized by cross-matching the \citet{vanderWel2012} position angle catalogue with ZFOURGE.
Due to the object prioritization, a subset of galaxies was observed in multiple observing runs. These galaxies were included in different masks and hence have different position angles. When possible, position angles of these slits were deliberately varied to allow coverage of a different orientation of the galaxy.
\subsection{MOSFIRE Observations}
Between 2013 and 2016, 15 MOSFIRE nights were awarded to the ZFIRE program by a
combination of Swinburne University (Program IDs- 2013A\_W163M, 2013B\_W160M, 2014A\_W168M, 2015A\_W193M, 2015B\_W180M), Australian National University (Program IDs- 2013B\_Z295M, 2014A\_Z225M, 2015A\_Z236M, 2015B\_Z236M), and NASA (Program IDs- 2013A\_N105M, 2014A\_N121M) telescope time allocation committees.
Data for 13 nights observed between 2013 and 2015 are released with this paper; six of these nights resulted in useful data.
Observations during 2013 December resulted in two nights of data in excellent conditions, while four nights in 2014 February were observed in varying conditions. Exposure times and observing conditions are presented in Table \ref{tab:observing_details}.
With this paper, data for 10 masks observed in the COSMOS field and four masks observed in the UDS field are released. An example of on-sky orientations of slit mask designs used for K-band observations in the COSMOS field is shown in Figure \ref{fig:masks}.
Standard stars were observed at the beginning, middle, and end of each observing night.
The line spread functions were calculated using Ne arc lamps in the K-band and were found to be \around2.5 pixels. The dispersion per pixel (FITS keyword CD1\_1) in the Y, J, H, and K-bands is 1.09 \AA/pixel, 1.30 \AA/pixel, 1.63 \AA/pixel, and 2.17 \AA/pixel, respectively.
Slits of $0.7''$ width were used for objects in science masks and the telluric standard, while a slit of width $3''$ was used for the flux standard star to minimize slit loss. On average, $\sim$30 galaxies were included per mask. A flux monitor star was included in all of the science frames to monitor the variation of the seeing and atmospheric transparency. In most cases, only frames in which the flux monitor star had a FWHM of $\lesssim0''.8$ were used. A standard two-position ABBA dither pattern was used.\footnote{For more information, see: \url{http://www2.keck.hawaii.edu/inst/mosfire/dither\_patterns.html\#patterns}}
\begin{deluxetable*}{llllrrr}
\tabletypesize{\scriptsize}
\tablecaption{ ZFIRE\ Data Release 1: Observing details}
\tablecomments{ This table presents information on all the masks observed by ZFIRE\ between 2013 and 2015 with the integration times and observing conditions listed.
\label{tab:observing_details}}
\tablecolumns{6}
\tablewidth{0pt}
\startdata
\hline \hline \\ [+1ex]
Field & Observing & Mask & Filter & Exposure & Total Integra- & Average \\
 & Run & Name & & Time (s) & tion Time (h) & Seeing ($''$) \\ [+1ex] \hline \\ [+1ex]
COSMOS & Dec2013 & Shallowmask1 (SK1) & K & 180 & 2.0 & 0$''$.70\\
COSMOS & Dec2013 & Shallowmask2 (SK2) & K & 180 & 2.0 & 0$''$.68\\
COSMOS & Dec2013 & Shallowmask3 (SK3) & K & 180 & 2.0 & 0$''$.70\\
COSMOS & Dec2013 & Shallowmask4 (SK4) & K & 180 & 2.0 & 0$''$.67\\
COSMOS & Feb2014 & KbandLargeArea3 (KL3) & K & 180 & 2.0 & 1$''$.10\\
COSMOS & Feb2014 & KbandLargeArea4 (KL4) & K & 180 & 2.0 & 0$''$.66\\
COSMOS & Feb2014 & DeepKband1 (DK1) & K & 180 & 2.0 & 1$''$.27\\
COSMOS & Feb2014 & DeepKband2 (DK2) & K & 180 & 2.0 & 0$''$.70\\
COSMOS & Feb2014 & Hbandmask1 (H1) & H & 120 & 5.3 & 0$''$.90\\
COSMOS & Feb2014 & Hbandmask2 (H2) & H & 120 & 3.2 & 0$''$.79\\
UDS & Dec2013 & UDS1 (U1H) & H & 120 & 1.6 & 0$''$.73\\
UDS & Dec2013 & UDS2 (U2H) & H & 120 & 1.6 & 0$''$.87\\
UDS & Dec2013 & UDS3 (U3H) & H & 120 & 0.8 & 0$''$.55\\
UDS & Dec2013 & UDS1 (U1J) & J & 120 & 0.8 & 0$''$.72\\
UDS & Dec2013 & UDS2 (U2J) & J & 120 & 0.8 & 0$''$.90\\
UDS & Dec2013 & UDS3 (U3J) & J & 120 & 0.8 & 0$''$.63\\
UDS & Feb2014 & uds-y1 (UY) & Y & 180 & 4.4 & 0$''$.80\\
\enddata
\end{deluxetable*}
%The COSMOS field was observed in 6 masks in the K-band with $\sim$ 2 hours of on source integration time with 180s exposures and 2 masks in the H-band with $\sim$ 5.3 \& 3.2 hours of on source integration time with 120s exposures.
%UDS field was observed in 3 masks in J and H-bands with 120s exposures and 1 mask in the Y-band with 180s exposures. The J-band masks and 1 H-band mask was observed for 0.8 hours per mask while the remaining 2 H masks were observed for 1.6 hours each. The Y-band mask was observed for $\sim$ 4.4 hours.
\begin{figure*}
\includegraphics[scale=0.61]{figures/masks.pdf}
\caption{MOSFIRE slit configurations for the 6 K-band masks in the COSMOS field.
The blue lines show each individual slit.
Each slit in a mask is expected to target a single galaxy. However, some galaxies are targeted in multiple masks.
The red boxes are the individual masks.
The inverse greyscale image is from the Ks imaging from FourStar obtained as a part of the ZFOURGE survey.}
\label{fig:masks}
\end{figure*}
\subsection{MOSFIRE Spectroscopic Reduction}
\label{data_reduction}
The data were reduced in two steps.
Firstly, a slightly modified version of the publicly available 2015A MOSFIRE DRP release\footnote{A few bug fixes were applied, along with an extra function to implement barycentric corrections to the spectra. This version is available at \url{https://github.com/themiyan/MosfireDRP\_Themiyan}.} was used to reduce the raw data from the telescope.
Secondly, a custom-made IDL package was used to apply telluric corrections and flux calibrations to the data and to extract 1D spectra. Both steps are described below.
Extensive tests were performed on the MOSFIRE DRP while it was in a beta stage, and multiple versions of the DRP were used to test the quality of the outputs.
The accuracy of the error spectrum generated by the DRP was investigated by comparing the noise expected from the scatter of the sky values with the DRP noise.
The following steps are currently performed by the modified MOSFIRE DRP.
\begin{enumerate}
\item Produce a pixel flat image and identify the slit edges.
\item For K-band: remove the thermal background produced by the telescope dome.
\item Wavelength calibrate the spectra. This is performed using the sky lines. For K-band: due to the lack of strong sky lines at the red end of the spectra, a combination of night sky lines along with Neon and/or Argon\footnote{As of version 2015A, using both Ar and Ne lamps together with sky line wavelength calibration is not recommended. See the MOSFIRE DRP github issues page for more details.} arc lamp spectra are used to produce per pixel wavelength calibration.
\item Apply barycentric corrections to the wavelength solution.
\item Remove the sky background from the spectra. This is done in two steps. Firstly, the different nod positions of the telescope are used to subtract most of the background.
Secondly, any residual sky features are removed following the prescription by \citet{Kelson2003}.
\item Rectify the spectra.
\end{enumerate}
All the spectra from the DRP were calibrated to vacuum wavelengths with a typical residual error of $<$ 0.1 \AA.
The customized IDL package was used to continue the data reduction process using outputs of the public DRP. The same observed standard star was used to derive telluric sensitivity and flux calibration curves to be applied to the science frames as follows.
\begin{enumerate}
\item The 1D standard star spectrum was extracted from the wavelength calibrated 2D spectra.
\item Intrinsic hydrogen absorption lines in the stellar atmosphere were removed from the telluric A0 standard by fitting Gaussian profiles and then interpolating over the filled region.
\item The observed spectrum was divided by a theoretical black body function corresponding to the temperature of the star.
\item The resulting spectrum was then normalised and smoothed to be used as the sensitivity curve, i.e., the wavelength-dependent sensitivity that is caused by the atmosphere and telescope-instrument response.
\item The sensitivity curve was used on the flux standard star to derive the flux conversion factor by comparing it to its 2MASS magnitude \citep{Skrutskie2006}.
\end{enumerate}
These corrections are applied to the 2D science frames to produce telluric corrected, flux calibrated spectra.
Further information is provided in Appendix \ref{sec:MOSFIRE cals}.
The derived response curves that were applied to all data include corrections for the MOSFIRE response function, the telescope sensitivity, and atmospheric absorption.
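As an illustration of the black body division and smoothing steps above, the following is a minimal sketch, not the actual IDL implementation; the function names, the assumed A0V effective temperature of 9500 K, and the boxcar smoothing length are assumptions for illustration only.

```python
import numpy as np

def planck_lambda(wave_ang, t_kelvin):
    """Black body spectral radiance B_lambda in cgs units; absolute
    normalisation is irrelevant since the curve is normalised later."""
    h, c, k = 6.626e-27, 2.998e10, 1.381e-16   # cgs constants
    wave_cm = wave_ang * 1e-8
    return (2 * h * c**2 / wave_cm**5) / (np.exp(h * c / (wave_cm * k * t_kelvin)) - 1)

def sensitivity_curve(wave, star_flux, t_eff=9500.0, smooth_pix=25):
    """Divide the observed standard-star spectrum by a black body of
    the star's temperature, then normalise and boxcar-smooth the
    result to produce a wavelength-dependent sensitivity curve."""
    curve = star_flux / planck_lambda(wave, t_eff)
    curve /= np.nanmedian(curve)
    kernel = np.ones(smooth_pix) / smooth_pix
    return np.convolve(curve, kernel, mode="same")
```

Applying the inverse of this curve to a science frame removes the combined atmosphere and telescope-instrument response, up to the flux zero-point set by the 2MASS comparison.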
If a mask was observed on multiple nights, the calibrated 2D spectra were co-added, weighted by the variance spectrum. Extensive visual inspections were performed on the 2D spectra to identify possible emission-line-only detections and to flag false detections due to, e.g., sky line residuals.
To extract 1D spectra, Gaussian fits to the spatial profile were used to determine its FWHM. If an object was too faint compared to the sky background, the profile of the flux monitor star in the respective mask was used to perform the extraction.
The same extraction procedure was performed for any secondary or tertiary objects that fall within any given slit.
Depending on how object priorities were handled, some objects were observed during multiple observing runs in different masks. There were 37 such galaxies.
Due to variations in the position angles between different masks, these objects were co-added in 1D after applying the spectrophotometric calibration explained in Section \ref{sec:sp calibration}.
\subsection{Spectrophotometric Flux Calibration}
\label{sec:sp calibration}
\subsubsection{COSMOS Legacy Field}
Next, zero-point adjustments were derived for each mask to account for any change in atmospheric transmission between the mask and standard star observations. Synthetic slit aperture magnitudes were computed from the ZFOURGE survey to calibrate the total magnitudes of the spectra, which also allowed us to account for any slit losses due to the $0''.7$ slit width used during the observing.
The filter response functions for FourStar \citep{Persson2013} were used to integrate the total flux in each of the 1D calibrated spectra.
For each mask in a given filter, all objects with a photometric error $>0.1$ mag were first removed.
Then, background-subtracted Ks and F160W (H-band) images from ZFOURGE were used, with the seeing convolved from $0''.4$ to $0''.7$ to match the average Keck seeing.
Rectangular apertures of various heights, which resemble the slits, were overlaid on the images to integrate the total counts within each aperture.
Any apertures that contain multiple objects or had bright sources close to the slit edges were removed.
Integrated counts were used to calculate the photometric magnitude to compare with the spectroscopy.
A slit-box aligned with similar PA to the respective mask with a size of $0''.7 \times 2''.8$ was found to give the best balance between the spectrophotometric comparison and the number of available slits with good photometry per mask.
Next, the median offset between the magnitudes from photometry and spectroscopy was calculated by selecting objects with a photometric magnitude brighter than 24 in the respective filters.
This offset was used as the scaling factor and was applied to all spectra in the mask. Typical offsets for the K and H bands were $\sim\pm0.1$ mag.
We then performed 1000 iterations of bootstrap re-sampling of the objects in each mask to calculate the scatter of the median values. We parametrized the scatter using the normalized median absolute deviation (\NMAD), which is defined as 1.48$\times$ the median absolute deviation.
The median \NMAD\ scatter in K and H-bands for these offsets are \around0.1 and \around0.04 mag, respectively.
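The bootstrap scatter estimate described above can be sketched as follows. This is a hypothetical stand-in for the actual implementation; `offsets` is assumed to hold the per-object magnitude differences (photometric minus spectroscopic) in one mask.

```python
import numpy as np

def nmad(x):
    """Normalized median absolute deviation: 1.48 x MAD."""
    x = np.asarray(x, dtype=float)
    return 1.48 * np.median(np.abs(x - np.median(x)))

def bootstrap_median_scatter(offsets, n_boot=1000, seed=0):
    """Scatter of the median offset from n_boot bootstrap re-samples
    (with replacement), parametrized by the NMAD of the medians."""
    offsets = np.asarray(offsets, dtype=float)
    rng = np.random.default_rng(seed)
    medians = [np.median(rng.choice(offsets, size=len(offsets), replace=True))
               for _ in range(n_boot)]
    return nmad(medians)
```

The returned scatter is what is plotted as the error bar on each mask's median offset.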
The median offset values per mask before and after the scaling process, with their associated errors, are shown in the top panel of Figure \ref{fig:scaling_values}. Typical offsets are of the order of $\lesssim0.1$ mag, which is consistent with the expected slit loss
and the small amount of cloud variation seen during the observations.
The offset value after the scaling process is shown as green stars with its bootstrap error.
The scaling factor was applied as a multiplicative factor to the flux values of the 2D spectra following Equation \ref{eq:scale_single_masks},
\begin{subequations}
\label{eq:scale_single_masks}
\begin{equation}
F_i = f_i \times \hbox{scale}_{\mathrm{mask}}
\end{equation}
\begin{equation}
\Sigma_i = \sigma_i \times \hbox{scale}_{\mathrm{mask}}
\end{equation}
\end{subequations}
where $f_i$ and $\sigma_i$ are, respectively, the flux and error per pixel before scaling, and $\mathrm{scale_{mask}}$ is the calculated scaling factor.
1D spectra were then extracted using the same extraction aperture as before.
The bootstrap errors after the scaling process are \around0.08 mag (median) for the COSMOS field, which we take as the final uncertainty of the spectrophotometric calibration process. Once a uniform scaling was applied to all the objects in a given mask, the agreement between the photometric slit-box magnitudes and the spectroscopic magnitudes improved.
As mentioned above, if an object was observed in multiple masks in the same filter, the corresponding mask scaling factor was first applied, and the spectra were then optimally co-added in 1D such that a higher weight was given to objects from masks with lower scaling values (i.e., better transmission). The procedure is shown in Equation \ref{eq:1D_scale_and_coadd},
\begin{subequations}
\label{eq:1D_scale_and_coadd}
\begin{equation}
F_i = \frac{\sum\limits_{j=1}^n (P_j/\sigma_{ji})^2 (F_{ji}/P_j)}{\sum\limits_{j=1}^n(P_j/\sigma_{ji})^2 }
\end{equation}
\begin{equation}
\sigma_i^2 = \frac{\sum \limits_{j=1}^n \big\{(P_j/\sigma_{ji})^2 (F_{ji}/P_j)\big\}^2} {\big\{\sum \limits_{j=1}^n(P_j/\sigma_{ji})^2\big\}^2 }
\end{equation}
\end{subequations}
where $P$ is the 1/scale value, $i$ is the pixel number, and $j$ is the observing run. Further examples for the spectrophotometric calibration process are shown in Appendix \ref{sec:MOSFIRE cals}.
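A minimal sketch of the weighted co-addition is given below. It implements Equation \ref{eq:1D_scale_and_coadd} exactly as printed, with $P_j = 1/\mathrm{scale}_j$; the function name and array layout are assumptions, not the ZFIRE code.

```python
import numpy as np

def coadd_1d(fluxes, errors, scales):
    """Co-add per-run 1D spectra following the weighting in the text:
    P_j = 1/scale_j and weight_ji = (P_j / sigma_ji)^2.

    fluxes, errors : (n_runs, n_pix) arrays of scaled spectra
    scales         : (n_runs,) per-mask scaling factors
    """
    fluxes = np.asarray(fluxes, dtype=float)
    errors = np.asarray(errors, dtype=float)
    p = (1.0 / np.asarray(scales, dtype=float))[:, None]   # P_j, broadcast over pixels
    w = (p / errors) ** 2                                  # (P_j / sigma_ji)^2
    flux = np.sum(w * fluxes / p, axis=0) / np.sum(w, axis=0)
    var = np.sum((w * fluxes / p) ** 2, axis=0) / np.sum(w, axis=0) ** 2
    return flux, np.sqrt(var)
```

With a single run and unit scale, the co-added flux reduces to the input flux, as expected.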
\begin{figure}
\includegraphics[scale=0.57]{figures/scaling_values_cosmos.pdf}
\includegraphics[scale=0.57]{figures/scaling_values_uds.pdf}
\caption{ Spectrophotometric calibration of the ZFIRE masks. The median offsets between spectroscopic flux and the photometric flux before and after the scaling process is shown in the figure. Filter names correspond to the names in Table \ref{tab:observing_details}.
The grey stars denote the median offsets for the standard star flux calibrated data before any additional scaling is applied.
The median mask sensitivity factors are applied to all objects in the respective masks to account for slit loss. The green stars show the median offsets after the flux corrections are applied. The errors are the \NMAD\ scatter of the median offsets calculated via bootstrap re-sampling of individual galaxies.
{\bf Top:} all COSMOS masks. Photometric data are from a slit-box aligned with similar PA to the respective mask with a size of $0''.7 \times 2''.8$.
{\bf Bottom:} all UDS masks. Photometric data are total fluxes from UKIDSS.}
\label{fig:scaling_values}
\end{figure}
\subsubsection{UDS Legacy Field}
The filter response functions for WFCAM \citep{Casali2007} were used to integrate the total flux in each of the 1D calibrated spectra in the UDS field.
The {\it total} photometric fluxes from the UKIDSS catalogue were used for comparison with the integrated flux from the spectra, since images were not
available to simulate slit apertures.
To calculate the median offset, a magnitude limit of 23 was used.
This limit is brighter than the one used for the COSMOS data because the median photometric magnitude of the UDS data is \around0.5 mag brighter than that of COSMOS.
Typical median offsets between photometric and spectroscopic magnitudes were \around0.4 mag.
The lower panel of Figure \ref{fig:scaling_values} shows the median offset values per mask before and after the scaling process, with their associated errors.
The median of the bootstrap errors for the UDS masks after scaling is \around0.06 mag.
Compared with the COSMOS offsets, the UDS values are heavily biased toward a positive offset.
This behaviour is expected for the UDS data because the broadband total fluxes from UKIDSS are used, and therefore the flux through the finite MOSFIRE slit should be less than the total flux detected by UKIDSS.
Since UDS objects are not observed in multiple masks in the same filter, only Equation \ref{eq:scale_single_masks} is applied to scale the spectra. \\
\subsection{Measuring Emission Line Fluxes}
\label{sec:line_fits}
A custom-made IDL routine was used to fit nebular emission lines in the scaled 1D spectra by fitting Gaussian profiles to user-defined emission lines.
The code identifies the location of the emission line in wavelength space and calculates the redshift.
In emission line fitting, if multiple emission lines were detected for the same galaxy in a given band, the line centre and velocity width were constrained to be the same for all lines. Emission lines with velocity structure were visually identified and fit with multiple-component Gaussian profiles.
If the line was narrower than the instrumental resolution, the line width was set to match the instrument resolution.
The code calculated the emission line fluxes ($f$) by integrating the Gaussian fits to the emission lines. The corresponding errors for the line fluxes ($\sigma(f)$) were calculated by integrating the error spectrum using the same Gaussian profile. The code further fits a 1$\sigma$ upper limit for the flux values ($f_{\rm limit}$). The signal-to-noise ratio (SNR) of a line flux was defined as the line flux divided by its corresponding error.
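The single-line case can be sketched as follows. This is a simplified stand-in for the custom IDL routine: the vacuum rest wavelength of \Halpha, the $\pm3\sigma$ error-integration window, and the initial-guess width are assumptions, and the real code additionally ties line centres and widths across lines.

```python
import numpy as np
from scipy.optimize import curve_fit

HALPHA_REST = 6564.61   # assumed vacuum rest wavelength of H-alpha (Angstrom)

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_line(wave, flux, err, mu_guess):
    """Fit a single Gaussian emission line; integrate it analytically
    for the line flux, and integrate the error spectrum over the same
    profile region for the flux error."""
    popt, _ = curve_fit(gaussian, wave, flux, sigma=err,
                        p0=[flux.max(), mu_guess, 3.0])
    amp, mu, sig = popt
    line_flux = amp * abs(sig) * np.sqrt(2.0 * np.pi)   # Gaussian integral
    dlam = np.median(np.diff(wave))
    in_line = np.abs(wave - mu) < 3.0 * abs(sig)        # +/- 3 sigma window
    flux_err = np.sqrt(np.sum(err[in_line] ** 2)) * dlam
    return {"flux": line_flux, "flux_err": flux_err,
            "z": mu / HALPHA_REST - 1.0, "snr": line_flux / flux_err}
```

The fitted line centre gives the redshift directly, and the SNR is the ratio of the integrated flux to its error, as defined in the text.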
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Properties of ZFIRE Galaxies}
\label{sec:results}
\subsection{Spectroscopic Redshift Distribution}
\label{sec:Q flags}
Using nebular emission lines, 170 galaxy redshifts were identified in the COSMOS field and 62 in the UDS field.
A combination of visual identifications in the 2D spectra and emission line fitting procedures explained in Section \ref{sec:line_fits} were used to identify these redshifts.
The redshift quality is defined using three specific flags:
\begin{itemize}
\item Q$_z$ Flag $=$ 1: objects with no emission line detected at SNR $>5$. These objects are not included in our final spectroscopic sample.
\item Q$_z$ Flag $=$ 2: objects with one emission line detected at SNR $>5$ and $|$\zspec $-$ \zphoto$|$ $> 0.2$.
\item Q$_z$ Flag $=$ 3: objects with more than one emission line detected at SNR $>5$, or one emission line detected at SNR $>5$ with $|$\zspec $-$ \zphoto$|$ $< 0.2$.
\end{itemize}
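The flag assignment above reduces to a simple rule. The following sketch is an illustration of that rule, not the actual ZFIRE selection code:

```python
def redshift_quality(n_lines_snr5, zspec, zphoto):
    """Assign a ZFIRE redshift quality flag Q_z from the number of
    emission lines detected at SNR > 5 and the photometric redshift."""
    if n_lines_snr5 == 0:
        return 1            # no detection: excluded from the final sample
    if n_lines_snr5 >= 2 or abs(zspec - zphoto) < 0.2:
        return 3            # secure spectroscopic redshift
    return 2                # single line, discrepant with the photo-z
```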
The redshift distribution of all ZFIRE\ Q$_z$=2 and Q$_z$=3 detections is shown in Figure \ref{fig:zspec}. Of the 62 galaxy redshifts detected in the UDS field, 60 have Q$_z$=3 and two have Q$_z$=2. Similarly, the COSMOS field contains 161 Q$_z$=3 objects and nine Q$_z$=2 objects.
\begin{figure}
\includegraphics[scale=0.85]{figures/zfire_zspec.pdf}
\caption{Redshift distribution of the ZFIRE data release. All detected galaxies with Q$_z$=2 and Q$_z$=3 from UDS (light green) and COSMOS (dark green) are shown in the figure. The two dashed vertical lines at x=1.620 and x=2.095 show the locations of the IRC 0218 cluster \citep{Tran2015} and the COSMOS cluster \citep{Yuan2014}, respectively.}
\label{fig:zspec}
\end{figure}
The systematic error of the redshift measurement was estimated by comparing Q$_z$=3 objects with a SNR $>$ 10 in both H and K-bands in the COSMOS field.
\citet{Yuan2014} showed that the agreement between the redshifts in the two bands is $\Delta z$(median) = 0.00005 with a rms of $\Delta z$(rms) = 0.00078. Therefore, the error in redshift measurement is quoted as $\Delta z$(rms) = 0.00078/$\sqrt 2$= 0.00055, which corresponds to $\sim\mathrm{53km~s^{-1}}$ at $z=2.1$.
This is $\sim$2 times the spectral resolution of MOSFIRE, which is $\sim\mathrm{26km~s^{-1}}$ \citep{Yuan2014}. However, in the \citet{Yuan2014} analysis, barycentric corrections were not applied to the redshifts, and the H and K masks were observed on different runs. Once individual mask redshifts were corrected for barycentric velocity, the rest-frame velocity uncertainty decreased to $\sim\mathrm{15km~s^{-1}}$.
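The quoted uncertainty follows from simple propagation; as a quick check, using the values from the text:

```python
import math

C_KMS = 2.998e5                 # speed of light in km/s

dz_rms = 0.00078                # rms between H- and K-band redshifts
dz = dz_rms / math.sqrt(2)      # single-band measurement error
v_rest = C_KMS * dz / (1 + 2.1) # rest-frame velocity error at z = 2.1
# dz is approximately 0.00055 and v_rest approximately 53 km/s,
# consistent with the numbers quoted in the text
```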
A few example spectra are shown in Figure \ref{fig:spectra}. Object 5829 is observed in both H and K-bands with strong emission lines detected in both instances. Object 3622 has strong H-band detections, while 3883 has only one emission line detection. Therefore, 3883 is assigned a Q$_z$ of 2. The 2D spectrum of object 3633 shows two emission line detections around \Halpha\ at different y pixel positions, which occur due to multiple objects falling within the slit. Object 9593 shows no emission line or continuum detection.
Objects 7547 and 5155 have strong continuum detections with no nebular emission lines. These galaxies were selected to be the BCGs of the D and A substructures by \citet{Yuan2014} and \citet{Spitler2012}, respectively, and have absorption line redshifts from \citet{Belli2014}.
\begin{figure*}
\includegraphics[scale=0.85]{figures/spectra.pdf}
\caption{Example MOSFIRE H and K-band spectra from the COSMOS field.
In the 1D spectra, the flux is shown in blue and the corresponding error in red. The 1$\sigma$ scatter of the flux value parametrized by the error level is highlighted around the flux value in cyan.
Each 1D spectrum is accompanied by the corresponding 2D spectrum covering the same wavelength range.
Each panel shows the name of the object, the wavelength it was observed in, and the redshift quality of the object. Vertical dashed lines show where strong optical emission lines ought to lie given the spectroscopic redshift.}
\label{fig:spectra}
\end{figure*}
The ZFIRE data release catalogue format is given in Table \ref{tab:catalogue}.
An overview of the released data is provided in the table; the catalogue, along with the 1D spectra, is available online at \url{zfire.swinburne.edu.au}.
Galaxy stellar mass and dust extinction values are from ZFOURGE, but for Q$_z>1$ galaxies these values are rederived using the spectroscopic redshifts with FAST.
The ZFIRE-COSMOS galaxy sample comprises both field and cluster galaxies selected in the Ks band with an 80\% mass completeness down to $\log_{10}($\mass$)>9.30$ (Figure \ref{fig:detection_limits}).
The survey selection for this data release was done using the ZFOURGE internal catalogues, and therefore the results presented here onwards could vary slightly from the ZFOURGE public data release.
For the 2016 ZFOURGE public data release, the catalogue was upgraded by including pre-existing public K-band imaging in the source detection image.
This increased the number of galaxies in the COSMOS field by \around 50\%, driven by the addition of fainter, lower-mass galaxies. In Appendix \ref{sec:ZFOURGE comparison}, a comparison between the internal ZFOURGE catalogue and the public data release version is shown.
\begin{deluxetable*}{ || l | l || }
\tabletypesize{\scriptsize}
\tablecaption{ The ZFIRE\ v1.0 data release }
%\tablenotemark{a}
%\tablenotetext{a}{}
\tablecomments{ This table presents an overview of the data available online.
All galaxy properties and nebular emission line values of the galaxies targeted by ZFIRE between 2013 to 2015 are released with this paper.
\label{tab:catalogue}}
\tablecolumns{2}
\tablewidth{0pt}
\tablewidth{0pt}
\startdata
\hline
& \\
ID & Unique ZFIRE\ identifier. \\ [+1ex]
RA & Right ascension (J2000) \\ [+1ex]
DEC & Declination (J2000) \\ [+1ex]
Field & COSMOS or UDS \\ [+1ex]
\Ks \tablenotemark{a} & \Ks\ magnitude from ZFOURGE \\ [+1ex]
$\mathrm{\sigma}$\Ks & Error in \Ks\ magnitude. \\ [+1ex]
\zspec & ZFIRE\ spectroscopic redshift. \\ [+1ex]
$\sigma$(\zspec) & Error in spectroscopic redshift. \\ [+1ex]
Q$_z$ & ZFIRE\ redshift quality flag (see Section \ref{sec:Q flags}) \\ [+1ex]
Cluster\tablenotemark{b} & Cluster membership flag \\ [+1ex]
Mass\tablenotemark{c} & Stellar mass from FAST. \\ [+1ex]
Av & Dust extinction from FAST. \\ [+1ex]
AGN\tablenotemark{d} & AGN flag. \\[+1ex]
\Halpha \tablenotemark{e} & Emission line \Halpha\ flux from ZFIRE\ spectrum \\ [+1ex]
$\sigma$(\Halpha)\tablenotemark{f} & Error in \Halpha\ flux. \\ [+1ex]
\Halpha$_{\mathrm{limit}}$\tablenotemark{g} & 1$\sigma$ upper limit for the \Halpha\ flux detection \\ [+1ex]
\NII \tablenotemark{e} & Emission line \NII\ flux (6585\AA) from ZFIRE\ spectrum \\ [+1ex]
$\sigma$(\NII) \tablenotemark{f} & Error in \NII\ flux \\ [+1ex]
\NII$_{\mathrm{limit}}$\tablenotemark{g} & 1$\sigma$ upper limit for the \NII\ flux detection \\ [+1ex]
\Hbeta \tablenotemark{e} & Emission line \Hbeta\ flux from ZFIRE\ spectrum \\ [+1ex]
$\sigma$(\Hbeta) \tablenotemark{f} & Error in \Hbeta\ flux \\ [+1ex]
\Hbeta$_{\rm limit}$\tablenotemark{g} & 1$\sigma$ upper limit for the \Hbeta\ flux detection \\ [+1ex]
\OIII \tablenotemark{e} & Emission line \OIII\ flux (5008\AA) from ZFIRE\ spectrum \\ [+1ex]
$\sigma$(\OIII) \tablenotemark{f} & Error in \OIII\ flux \\ [+1ex]
\OIII$_{\rm limit}$\tablenotemark{g} & 1$\sigma$ upper limit for the \OIII\ flux detection. \\
\enddata
\tablenotetext{a}{Magnitudes are given in the AB system.}
\tablenotetext{b}{Cluster=True objects are spectroscopically confirmed cluster members in either the COSMOS \citep{Yuan2014} or UDS \citep{Tran2015} fields.}
\tablenotetext{c}{Stellar mass (M$_*$) is in units of $\mathrm{log_{10}}$\msol\ as measured by FAST.}
\tablenotetext{d}{AGNs are flagged following \citet{Cowley2016} and/or \citet{Coil2015} selection criteria.}
\tablenotetext{e}{The nebular emission line fluxes (along with errors and limits) are given in units of $10^{-17}ergs/s/cm^2$.}
\tablenotetext{f}{The error of the line fluxes are from the integration of the error spectrum within the same limits used for the emission line extraction.}
\tablenotetext{g}{Limits are $1\sigma$ upper limits from the Gaussian fits to the emission lines.}
\end{deluxetable*}
\subsection{Spectroscopic Completeness}
\label{sec:completeness}
The main sample of galaxies in the COSMOS field was selected to include \Halpha\ emission in the MOSFIRE K-band, corresponding to a redshift range of $1.90<$\zphoto$<2.66$. Due to multiple objects falling in the slits and the object priorities explained in Section \ref{sec:mask_design}, nine galaxies fell outside this redshift range.
We assess completeness against an expectation computed from the photometric redshift likelihood functions ($P(z)$) from EAZY, i.e., the expected number of galaxies in the ZFIRE-COSMOS sample with \Halpha\ within the bandpass, taking into account the slightly different wavelength coverage of each slit.
There were 203 galaxies targeted in the K-band, of which 10 had spectroscopic redshifts outside the redshift range of interest ($1.90<$\zspec$<2.66$).
The $P(z)$s of the remaining 193 detected and non-detected galaxies were stacked.
Figure \ref{fig:completeness} shows the average $P(z)$ of the 193 stacked galaxies.
If the \Halpha\ emission line falls on a sky line, the emission line may not be detected. Therefore, in the $P(z)$ of each galaxy, sky line regions (of width $\pm$5.5\AA, parametrized by the MOSFIRE K-band spectral resolution) were masked out.
We then calculated the area of each galaxy's $P(z)$ that falls within the detectable limits of the K-band, depending on the exact wavelength range of its slit.
Since each $P(z)$ is normalized to 1, this area gives the probability of an \Halpha\ detection in the K-band for that galaxy.
The expected detection probability, averaged over all 193 galaxies, is \around73\%.
141 galaxies are detected with \Halpha\ SNR $>5$, which is a \around73\% detection rate.
As seen by the overlaid histogram in Figure \ref{fig:completeness}, the detected redshift distribution of the ZFIRE-COSMOS sample is similar to the expected redshift distribution from $P(z)$.
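The per-galaxy detection probability described above can be sketched as follows. The inputs are hypothetical; the real computation uses the EAZY $P(z)$ grids, the sky line list, and per-slit wavelength limits from the mask design.

```python
import numpy as np

def detection_probability(z_grid, pz, z_min, z_max, sky_line_z=(),
                          half_width=5.5 / 6564.61):
    """Probability that H-alpha lands in the usable K-band range for
    one galaxy: sum its normalised P(z) over [z_min, z_max], masking
    +/- 5.5 A (converted to redshift units at rest-frame H-alpha)
    around each sky line."""
    z_grid = np.asarray(z_grid, dtype=float)
    pz = np.asarray(pz, dtype=float)
    dz = np.median(np.diff(z_grid))      # assumes a uniform redshift grid
    pz = pz / (pz.sum() * dz)            # normalise P(z) to unit area
    keep = (z_grid >= z_min) & (z_grid <= z_max)
    for z_sky in sky_line_z:
        keep &= np.abs(z_grid - z_sky) > half_width
    return pz[keep].sum() * dz
```

Averaging this quantity over all targeted galaxies gives the expected detection rate quoted in the text.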
%Invoking a lower detection threshold of \Halpha\ SNR $>3$, we get 135 \Halpha\ detected galaxies increasing the detection rate to ~73\%. conf=5 galaxy is removed: shows a positive \Halpha\ detection due to high continuum level but there is no evident emission line.
\begin{figure}[h!]
\includegraphics[trim = 10 0 5 5, clip, scale=0.9]{figures/pz.pdf}
\caption{Stacked probability distribution functions of the photometric redshifts for galaxies targeted in the ZFIRE-COSMOS field (shown by the black solid line).
The black dotted lines show the redshift limits for \Halpha\ detection in the K-band.
The wavelength coverage is corrected by the slit positions for each of the galaxies and the total probability that falls within the detectable range is calculated to be \around73\%.
The actual \Halpha\ detection rate in the COSMOS field is \around73\%.
The bias toward z=2.1 is due to the object priorities weighting heavily towards the cluster galaxies.
The green histogram shows the distribution of \zspec\ values for galaxies with \Halpha\ detections in K-band in the COSMOS field.
}
\label{fig:completeness}
\end{figure}
Figure \ref{fig:Halpha} shows the \Halpha\ luminosity (left) and SNR distribution (middle) of the ZFIRE-COSMOS galaxies with \Halpha\ detections.
The detection threshold is set to SNR $\geq 5$, shown by the vertical dashed line in the centre panel. There are 134 galaxies in the Q$_{z}$=3 sample and 7 in the Q$_{z}$=2 sample.
The \Halpha\ luminosity distribution in Figure \ref{fig:Halpha} (left panel) peaks at $\sim10^{42}$ergs/s. From the SNR distribution it is evident that the majority of detected galaxies have \Halpha\ SNR $>10$, with the histogram peaking at an SNR of \around20.
Flux-limited astronomical samples are normally dominated by low-SNR detections near the limit, and it is unlikely that objects with SNR $<20$ are systematically missed here.
Our interpretation of this distribution is that, because the sample is mass-selected, low-flux \Halpha\ objects drop off because the sample probes the region below the stellar mass--SFR main sequence \citep{Tomczak2014} at $z\sim 2$.
This is shown in Figure \ref{fig:Halpha}, where we make a simple conversion of \Halpha\ to SFR assuming the \citet{Kennicutt1998} calibration and stellar extinction values from FAST,
which we convert to nebular extinction using the \citet{Calzetti2000} prescription with $R_V=4.05$.
It is indeed evident that the ZFIRE-COSMOS sample probes down to the faint end of the star-forming main sequence at $z\sim2$, with a $3\sigma$ \Halpha\ SFR detection threshold of $\sim4$ \msol/yr. A more detailed analysis of the \Halpha\ main sequence will be presented in a future paper (K. Tran et al., in preparation).
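The \Halpha-to-SFR conversion described above can be sketched as follows. This is a minimal illustration, not the survey code: the \citet{Kennicutt1998} coefficient, the Calzetti-curve value $k(\mathrm{H}\alpha)\approx3.33$ at $R_V=4.05$, and the standard factor of 0.44 relating stellar to nebular attenuation are all assumptions drawn from the cited prescriptions rather than from this paper.

```python
def halpha_sfr(L_halpha, A_v_stellar):
    """Sketch of an Halpha-to-SFR conversion (illustrative, not the
    survey pipeline).

    L_halpha    : observed Halpha luminosity in erg/s
    A_v_stellar : stellar continuum attenuation from SED fitting (mag)
    """
    # Convert stellar to nebular attenuation (Calzetti et al. 2000:
    # the stellar continuum sees ~0.44x the nebular dust column).
    A_v_nebular = A_v_stellar / 0.44
    # Calzetti (2000) curve with R_V = 4.05 gives k(Halpha) ~ 3.33,
    # so A(Halpha) = k * A_V,nebular / R_V.
    k_halpha, R_v = 3.33, 4.05
    A_halpha = k_halpha * A_v_nebular / R_v
    # De-redden the observed luminosity.
    L_corr = L_halpha * 10 ** (0.4 * A_halpha)
    # Kennicutt (1998): SFR [Msun/yr] = 7.9e-42 * L(Halpha) [erg/s]
    return 7.9e-42 * L_corr
```

With zero attenuation this reduces to the bare Kennicutt calibration; dust corrections only increase the inferred SFR.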
%#################
\begin{figure*}
\includegraphics[trim = 10 0 5 5, clip, scale=0.6]{figures/Halpha_luminosity.pdf}
\includegraphics[trim = 10 0 5 5, clip, scale=0.6]{figures/Halpha_SNR.pdf}
\includegraphics[trim = 10 0 5 5, clip, scale=0.6]{figures/lmass_vs_HaSFR.pdf}
\caption{ {\bf Left:} the distribution of \Halpha\ luminosity of all ZFIRE-COSMOS galaxies in log space. The green histogram (with horizontal lines) is for galaxies with a quality flag of 3, while the ivory histogram is for galaxies with a quality flag of 2.
The vertical dotted line marks the \Halpha\ luminosity corresponding to a typical \Halpha\ SNR of \around5 at z=2.1.
{\bf Middle:} similar to the left figure, but the distribution of \Halpha\ SNR of all ZFIRE-COSMOS detections are shown. The dashed vertical line is SNR = 5, which is the \Halpha\ detection threshold for ZFIRE.
{\bf Right:} the \Halpha\ SFR vs. stellar mass distributions for the objects shown in the left histograms. The stellar masses and dust extinction values are derived from FAST. The dashed line is the star-forming main sequence from \citet{Tomczak2014}. The horizontal dotted line is the \Halpha\ SFR for a typical \Halpha\ SNR of \around 5 at z=2.1.
}
\label{fig:Halpha}
\end{figure*}
\subsection{Magnitude and Stellar Mass Detection Limits}
The ZFIRE-COSMOS detection limits in Ks magnitude and stellar mass are estimated using ZFOURGE photometry.
Out of the 141 objects with \Halpha\ detections (Q$_{z}$=2 or Q$_{z}$=3) and $1.90<$\zspec$<2.66$, galaxies identified as UVJ quiescent are removed, since the spectroscopic sample does not significantly sample these (see Section \ref{sec:UVJ}). The remaining sample comprises 140 UVJ blue (low dust attenuation) and red (high dust attenuation) star-forming galaxies.
Similarly, galaxies from the ZFOURGE survey are selected with redshifts between $1.90<$\zspec$<2.66$ and all UVJ quiescent galaxies are removed. The Ks magnitude and the stellar mass distributions of the remaining 1106 ZFOURGE galaxies with the selected ZFIRE sample are compared in Figure \ref{fig:detection_limits}.
The top panel of Figure \ref{fig:detection_limits} demonstrates that the \Halpha\ detected galaxies reach Ks$>$24.
80\% of the detected ZFIRE-COSMOS galaxies have Ks$\leq$24.11. The ZFOURGE input sample reaches deeper to
Ks$\leq$24.62 (80\%-ile). The photometric detection completeness limit of ZFOURGE is discussed in detail in Straatman
et al. (2014), but we note that at $K=24.62$, 97\% of objects are detected. It is important to understand whether the distribution in Ks of the spectroscopic sample is biased relative to the photometric sample. A two-sample K-S test for Ks$\leq$24.1 is performed, yielding a $p$ value of 0.03, suggesting that there is no significant bias between the samples.
Similarly, the mass distribution of the \Halpha\ detected sample is investigated in the bottom panel of Figure \ref{fig:detection_limits}. Galaxies are detected down to $\log_{10}($\mass$)\sim9$.
80\% of the \Halpha\ detected galaxies have stellar masses $\log_{10}($\mass$)>9.3$. A K-S test on the two distributions for galaxies with $\log_{10}($\mass$)>9.3$ gives a $p$ value of 0.30; therefore, similar to the Ks magnitude distributions, the spectroscopic sample shows no bias in stellar mass compared to the ZFOURGE photometric sample.
This shows that the ZFIRE-COSMOS detected sample of UVJ star-forming galaxies has a similar distribution in magnitude and stellar mass as the ZFOURGE distributions except at the very extreme ends.
Removing UVJ dusty galaxies from the star-forming sample does not significantly change this conclusion.
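The two-sample K-S comparisons above can be reproduced schematically as below; the magnitude arrays are hypothetical placeholders standing in for the ZFIRE and ZFOURGE Ks distributions, not the actual catalogues.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Placeholder Ks-magnitude draws (hypothetical numbers) standing in for
# the ZFIRE spectroscopic and ZFOURGE photometric samples.
ks_zfire = rng.normal(23.0, 0.8, 140)
ks_zfourge = rng.normal(23.1, 0.9, 1106)

# Compare only galaxies brighter than the common completeness cut,
# as done for the Ks <= 24.1 test in the text.
cut = 24.1
stat, p = ks_2samp(ks_zfire[ks_zfire <= cut], ks_zfourge[ks_zfourge <= cut])
```

A large $p$ value indicates the two magnitude distributions are consistent with being drawn from the same parent population.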
A final test is to evaluate the photometric magnitude at which continuum emission in the spectra can be typically detected. To estimate this, a constant continuum level is fit to blank sky regions across the whole $K$-band spectral range. This shows that the $2\sigma$ spectroscopic continuum detection limit for the ZFIRE-COSMOS sample is Ks$\simeq 24.1$ ($0.05 \times 10^{-17} \mathrm{erg/s/cm^2/\AA}$). More detailed work on this will be presented in the IMF analysis (T. Nanayakkara et al., in preparation).
\begin{figure}[h!]
\includegraphics[trim = 10 10 10 5, clip, scale=0.58]{figures/detected_limits.pdf}
\caption{The Ks magnitude and mass distribution of the $1.90<z<2.66$ galaxies from ZFOURGE (cyan) overlaid with the ZFIRE (green) detected sample for the COSMOS field. The ZFOURGE distribution is derived using the photometric redshifts and spectroscopic redshifts (when available). The ZFIRE histogram uses the spectroscopic redshifts.
The histograms are normalized for area. UVJ quiescent galaxies (only 1 in ZFIRE) are removed from both the samples.
{\bf Top:} Ks magnitude distribution. The black dashed line (Ks=24.11) is the limit in which 80\% of the detected sample lies below.
{\bf Bottom:} stellar mass distribution of the galaxies in log space as a fraction of solar mass. Masses are calculated using FAST and spectroscopic redshifts are used where available. The black dashed line ($\mathrm{Log}_{10}($\mass$)=9.3$) is the limit down to where the detected sample is 80\% mass complete.
}
\label{fig:detection_limits}
\end{figure}
\subsection{Rest frame UVJ colours}
\label{sec:UVJ}
The rest-frame UVJ colours are used to assess the stellar populations of the detected galaxies.
In rest frame U$-$V and V$-$J colour space, star-forming galaxies and quenched galaxies show strong bimodal dependence \citep{Williams2009}. Old quiescent stellar populations with strong 4000\AA\ and/or Balmer breaks show redder U$-$V colours and bluer V$-$J colours, while effects from dust contribute to redder V$-$J colours.
Figure \ref{fig:UVJ} shows the UVJ selection of the COSMOS sample, which lies in the redshift range $1.99<$\zspec$<2.66$.
The selection criteria are adopted from \citet{Spitler2014} and are as follows.
Quiescent galaxies are selected by (U$-$V)$>$1.3, (V$-$J)$<$1.6, and (U$-$V)$>$0.867$\times$(V$-$J)$+$0.563.
Galaxies which lie below these limits are considered to be star-forming.
These star-forming galaxies are further subdivided into two groups depending on their dust content. Red galaxies with (V$-$J)$>$1.2 are selected to be dusty star-forming galaxies, which correspond to A$_{v}\gtrsim$1.6. Blue galaxies with (V$-$J)$<$1.2 are considered to be relatively unobscured. MOSFIRE detected galaxies are shown as green stars while the non-detections (selected using \zphoto\ values) are shown as black filled circles.
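The UVJ cuts quoted above translate directly into a small classifier; this is a sketch of the \citet{Spitler2014} criteria as stated in the text, not the survey pipeline.

```python
def uvj_class(u_v, v_j):
    """Classify a galaxy from rest-frame U-V and V-J colours using the
    Spitler et al. (2014) cuts quoted in the text (a sketch)."""
    # Quiescent wedge: red U-V, blue V-J, above the diagonal cut.
    if (u_v > 1.3) and (v_j < 1.6) and (u_v > 0.867 * v_j + 0.563):
        return "quiescent"
    # Star-forming galaxies are split by dust content at V-J = 1.2.
    return "dusty star-forming" if v_j > 1.2 else "star-forming"
```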
In total, \around23\% of the targeted galaxies in this redshift bin are non-detections.
\around82\% of the blue star-forming galaxies and \around70\% of the dusty star-forming galaxies were detected, but only 1 quiescent galaxy was detected out of the potential 12 candidates in this redshift bin.
Galaxies in the red sequence are expected to be quenched, with little or no star formation and hence without strong \Halpha\ features; the low detection rate of the quiescent population is therefore expected. \citet{Belli2014} have shown that \around8 hours of exposure time is needed to detect the continua of quiescent galaxies with J\around22 using MOSFIRE.
The prominent absorption features occur in the H-band at $z\sim2$. ZFIRE currently does not reach such integration times per object in any of the observed bands and none of the quiescent galaxies show strong continuum detections. We note that this is a bias of the ZFIRE survey, which may have implications on the identification of weak star-forming and quiescent cluster members by \citet{Yuan2014}.
For comparison MOSDEF and VUDS detections in the COSMOS field with matched ZFOURGE candidates are overlaid in Figure \ref{fig:UVJ}. All rest-frame UVJ colours for the spectroscopic samples are derived from photometry using the spectroscopic redshifts. The MOSDEF sample, which is mainly H-band selected,
primarily includes star-forming galaxies independently of the dust obscuration level.
VUDS survey galaxies are biased toward blue star-forming galaxies, which is expected because it is an optical spectroscopic survey. This explains why their spectroscopic sample does not include any rest-frame UVJ selected dusty star-forming or quiescent galaxies.
\begin{figure}[h!]
\includegraphics[trim=12.5 10 0 0, clip, scale=0.625]{figures/Rest_frame_UVJ.pdf}
\caption{ The rest frame UVJ diagram of the ZFIRE-COSMOS sample with redshifts $1.90<z<2.66$.
Quiescent, star-forming, and dusty star-forming galaxies are selected using \citet{Spitler2014} criteria.
The green stars are ZFIRE detections (filled$\rightarrow$Q$_z=3$, empty$\rightarrow$Q$_z=2$) and the black circles are the non-detections.
Pink diamonds and yellow triangles are MOSDEF and VUDS detected galaxies respectively, in the same redshift bin with matched ZFOURGE counterparts.
Rest frame colours are derived using spectroscopic redshifts where available.
}
\label{fig:UVJ}
\end{figure}
\subsection{Spatial distribution}
\label{sec:spatial}
The COSMOS sample is primarily selected from a cluster field. The spatial distribution of the field is shown in Figure \ref{fig:detection_map}.
(The ZFOURGE photometric redshifts are replaced with our spectroscopic values where available.) A redshift cut between $2.0<z<2.2$ is used to select galaxies in the cluster redshift range.
After applying the necessary ZFOURGE catalogue quality cuts, there are 378 galaxies within this redshift window.
Following \citet{Spitler2012}, these galaxies are used to produce a seventh nearest neighbour density map.
Similar density distributions are calculated for the redshift windows immediately above and below $2.0<z<2.2$. These neighbouring distributions are used to calculate the mean and the standard deviation of the densities. The density map is plotted in units of standard deviations above the mean of the densities of the neighbouring bins, similar to \citet{Spitler2012}. Similar density maps were also made by \citet{Allen2015}.
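A seventh-nearest-neighbour surface density estimate of the kind described above can be sketched as follows. The flat-sky treatment of the coordinates and the estimator $\Sigma_7 = 7/(\pi d_7^2)$ are illustrative assumptions, not the exact \citet{Spitler2012} implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def seventh_nn_density(x, y):
    """Sketch of a 7th-nearest-neighbour density map: Sigma_7 =
    7 / (pi * d7^2), with d7 the projected distance to the seventh
    neighbour. Coordinates are treated as flat (small-field
    approximation); not the survey code."""
    pts = np.column_stack([x, y])
    # Query k=8 because the closest return for each point is itself.
    d, _ = cKDTree(pts).query(pts, k=8)
    d7 = d[:, 7]
    return 7.0 / (np.pi * d7 ** 2)
```

The resulting densities could then be expressed in standard deviations above the mean of neighbouring redshift bins, as the text describes.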
The figure shows that ZFIRE has achieved a thorough sampling of the underlying density
structure at $z\sim2$ in the COSMOS field. Between
$1.90<z_\mathrm{spec}<2.66$, in the COSMOS field the sky density of
ZFIRE is 1.47 galaxies/arcmin$^2$. For MOSDEF and VUDS it is 1.06
galaxies/arcmin$^2$ and 0.26 galaxies/arcmin$^2$, respectively. A
detailed spectroscopic analysis of the cluster from
ZFIRE redshifts has been published in \citet{Yuan2014}.
\begin{figure*}
\includegraphics[trim = 10 20 10 5, clip, scale=1.00]{figures/detection_map.pdf}
\caption{ Spatial distribution of the ZFIRE-COSMOS sample.
Galaxies that fall within $2.0<z<2.2$ are used to produce the underlying seventh nearest neighbour density map. The units are in standard deviations above the mean of the neighbouring redshift bins (see Section~\ref{sec:spatial}). The white crosses are the ZFOURGE galaxies with M$>$10$^{9.34}$\msol, which is the 80\% mass completeness limit of the ZFIRE\ detections.
Spectroscopically detected galaxies with redshifts between $1.90<z_\mathrm{spec}<2.66$ have been overlaid on this plot.
The stars are ZFIRE-COSMOS detections (green filled$\rightarrow$Q$_z=3$, white filled $\rightarrow$Q$_z=2$) and the black circles are the non-detections. Galaxies outlined in bright pink are the confirmed cluster members by \citet{Yuan2014}.
The light pink filled diamonds are detections from the MOSDEF survey. Yellow triangles are from the VUDS survey.
}
\label{fig:detection_map}
\end{figure*}
Figure \ref{fig:density_hist} shows the relative density distribution of the $1.90<$\zspec$<2.66$ galaxies. The MOSDEF sample is overlaid on the left panel, and Gaussian best-fit functions are fit to both the ZFIRE (cluster and field) and MOSDEF samples. It is evident from the distributions that, in general, ZFIRE galaxies are observed in significantly higher density environments (as defined by the Spitler et al. metric) compared to MOSDEF.
Because of the explicit targeting of `cluster candidate' fields, this is expected.
In the right panel, the density distribution of the confirmed
cluster members of \citet{Yuan2014} is shown.
\begin{figure*}
\includegraphics[scale=0.87]{figures/density_hist_all.pdf}
\includegraphics[scale=0.87]{figures/density_hist_cluster.pdf}
\caption{ {\bf Left:} the relative galaxy density distribution of the galaxies with confident redshift detections in the COSMOS field. Galaxies with $1.90<z_\mathrm{spec}<2.66$ in ZFIRE (green) and MOSDEF (pink) surveys are shown in the histogram. The density calculated is similar to what is shown in Figure \ref{fig:detection_map}.
Gaussian fits have been performed to both the samples. The density of the ZFIRE sample is distributed in logarithmic space around $\mu=0.369$ and $\sigma=0.180$, which is shown by the green dashed line. Similarly, the fit for the MOSDEF sample shown by the pink dashed line has $\mu=0.175$ and $\sigma=0.301$. Compared to MOSDEF, ZFIRE probes galaxies in richer environments.
{\bf Right:} similar to the left plot, but only the confirmed cluster members of \citet{Yuan2014} are shown in the histogram. The normalisation is lower because the cluster identification of \citet{Yuan2014} came from a smaller, earlier sample. (MOSDEF has only detected two cluster members and hence only the ZFIRE sample is shown in the figure.) The Gaussian best fit shown by the green dashed line has $\mu=0.404$ and $\sigma=0.180$.
}
\label{fig:density_hist}
\end{figure*}
%------------------------------------------------------------
\section{Comparing ZFIRE Spectroscopic Redshifts to the Literature}
\label{sec:photometric_redshifts}
The new spectroscopic sample, which lies in well-studied deep fields, is ideal for testing the redshift accuracy of some of the most important photometric redshift surveys, including the ZFOURGE survey from which it is selected.
\subsection{Photometric Redshifts from ZFOURGE and UKIDSS}
The comparison of photometric redshifts and the spectroscopic redshifts for the ZFIRE-COSMOS sample is shown by the left panel of Figure \ref{fig:specz_photoz}. The photometric redshifts of the v3.1 ZFOURGE catalogue are used for this purpose because they represent the best calibration and photometric-redshift performance of the imaging.
Of the 42 secondary objects detected in the slits, 25 galaxies are matched with Ks-selected ZFOURGE candidates.
Deep HST F160W band selected catalogues from ZFOURGE show probable candidates for eight of these galaxies.
Five galaxies cannot be confidently identified. HST imaging shows unresolved blends for four of these galaxies, which are listed as single objects in ZFOURGE.
Only galaxies uniquely identified in ZFOURGE are shown in the figure.
Straatman et al. (in press) have determined that the photometric redshifts are accurate to $<$2\% based on previous spectroscopic redshifts.
Results from ZFIRE\ agree within this estimate.
This error level is shown as a grey shaded region in Figure \ref{fig:specz_photoz} (left panel). Defining $\Delta z=\mathrm{z_{spec}-z_{photo}}$ (which will be used throughout this paper),
galaxies with $|\Delta z$/(1+\zspec)$|>$ 0.2 are considered to be ``drastic outliers''. There is one drastic outlier in the Q$_{z}$=3 sample.
The advantage of medium-band NIR imaging lies in probing the D4000 spectral feature at $z>1.6$ with the J1, J2, and J3 filters, which span \around1--1.3\micron.
Drastic outliers may arise from blue star-forming galaxies with power-law-like SEDs, which lack a D4000 break \citep{Bergh1963}, leading to uncertain photometric redshifts at $z\sim2$, and also from confusion between the Balmer and Lyman breaks. Furthermore, blending of multiple sources in ground-based imaging can also lead to drastic outliers.
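The residual and drastic-outlier definitions above amount to the following short helper (a sketch; array-like inputs assumed):

```python
import numpy as np

def redshift_residuals(z_spec, z_photo):
    """dz/(1+z_spec) residuals and the |.| > 0.2 'drastic outlier'
    flags, per the definitions in the text (illustrative helper)."""
    z_spec = np.asarray(z_spec, dtype=float)
    z_photo = np.asarray(z_photo, dtype=float)
    dz = (z_spec - z_photo) / (1.0 + z_spec)
    return dz, np.abs(dz) > 0.2
```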
The inset in Figure \ref{fig:specz_photoz} (left panel) is a histogram showing the residuals for the Q$_{z}$=3 sample.
A Gaussian best fit is performed for these galaxies (excluding drastic outliers). The $\sigma$ of the Gaussian fit is taken to be the accuracy of the photometric redshift estimates for a typical galaxy. The Q$_{z}$=3 sample is bootstrapped 100 times with replacement and the \NMAD\ scatter is calculated, which is taken as the error on $\sigma$.
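One plausible reading of the robust-scatter and bootstrap procedure described above is sketched below; the use of \texttt{np.std} over the bootstrap draws as the quoted error, and the function names, are assumptions for illustration rather than the survey code.

```python
import numpy as np

def nmad(x):
    # Normalized median absolute deviation (sigma_NMAD), the robust
    # scatter estimate used for the redshift residuals in the text.
    x = np.asarray(x, dtype=float)
    return 1.4826 * np.median(np.abs(x - np.median(x)))

def bootstrap_scatter_err(residuals, n_boot=100, seed=0):
    # Resample the residuals with replacement n_boot times, recompute
    # the scatter each time, and take the spread of the draws as the
    # error on the scatter (one plausible reading of the text).
    rng = np.random.default_rng(seed)
    residuals = np.asarray(residuals, dtype=float)
    draws = [nmad(rng.choice(residuals, size=residuals.size, replace=True))
             for _ in range(n_boot)]
    return float(np.std(draws))
```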
The photometric redshift accuracy of the ZFOURGE-COSMOS sample is $1.5\pm0.2\%$, which is very high.
The bright Ks $<23$ Q$_{z}$=3 galaxies show better redshift accuracy, but are within error limits of the redshift accuracy of the total sample.
Furthermore, the Q$_{z}$=3 blue and red star-forming galaxies (as shown by Figure \ref{fig:UVJ}) also show similar redshift accuracy within error limits.
The Q$_{z}$=2 ZFOURGE-COSMOS sample comprises 8 galaxies with a redshift accuracy of 14$\pm$12\%.
In Figure \ref{fig:specz_photoz} (right panel), a similar redshift analysis is performed to investigate the accuracy of the UKIDSS photometric redshift values with the ZFIRE-UDS spectroscopic sample. For the Q$_{z}$=3 objects, there are four drastic outliers (which give a rate of $\sim7\%$) and the accuracy is calculated to be 1.4$\pm$0.8\%. There are 12 Q$_{z}$=2 objects with one drastic outlier (which gives a rate of $\sim14\%$) and an accuracy of $3\pm12\%$.
UKIDSS, which does not contain medium-band imaging, has an accuracy comparable to the ZFOURGE medium-band survey. This likely arises from the lower redshifts probed by UKIDSS compared to ZFOURGE.
\begin{figure*}
\includegraphics[trim = 15 0 5 5, clip, scale=0.62]{figures/specz_vs_photo_z_COSMOS_v3.1.pdf}
\includegraphics[trim = 15 0 5 5, clip, scale=0.62]{figures/specz_vs_photo_z_UDS.pdf}
\caption{ Comparison between the photometrically derived redshifts from ZFOURGE and UKIDSS with the ZFIRE Q$_{z}$=3 spectroscopic redshifts.
{\bf Upper left:} $z_{\mathrm{photo}}$ vs. $z_{\mathrm{spec}}$ for the COSMOS field. $z_{\mathrm{photo}}$ values are from ZFOURGE v3.1 catalogue.
The black dashed line is the one-to-one line. The grey shaded region represents the 2\% error level expected by the photometric redshifts (Straatman et al., in press).
The dashed dotted line shows the $\mid$$\Delta z$/(1+$z_{\mathrm{spec}}$)$\mid$ $>$ 0.2 drastic outlier cutoff.
The inset histogram shows the histogram of the $\Delta z$/(1+$z_{\mathrm{spec}}$) values and Gaussian fits as described in the text.
Only galaxies with $1.90<z_{\mathrm{spec}}<2.70$ are shown in the figure.
{\bf Lower left:} similarly for the residual $\Delta z / (1+z_{\mathrm{spec}})$ between photometric and spectroscopic redshifts plotted against the spectroscopic redshift.
{\bf Right:} similar to left panels but for the UDS field. $z_{\mathrm{photo}}$ values are from UKIDSS.
}
\label{fig:specz_photoz}
\end{figure*}
\subsection{Photometric Redshifts from NMBS and 3DHST}
Figure \ref{fig:photo_z_comp} shows a redshift comparison of the 3DHST photometric-redshift input sample \citep{Skelton2014} and the NMBS \citep{Whitaker2011} survey with the ZFIRE Q$_{z}$=3 spectroscopic redshifts. The 3DHST sample comes from the photometric data release of \citet{Skelton2014}.
The catalogues are compared to ZFOURGE by matching objects within a 0$''$.7 radius.
The ZFOURGE survey is much deeper than NMBS, so comparison to NMBS is only possible for a smaller number of brighter objects. 3DHST and ZFOURGE are similarly deep, with much better overlap.
The residuals between the photometric redshifts and spectroscopic redshifts are calculated using the same methods as for ZFOURGE.
Table \ref{tab:photo_z_comparision} shows the Gaussian best-fit values, redshift accuracies, and the drastic outlier fractions of all comparisons.
All surveys show high photometric redshift accuracy. In particular, at $z\sim2$ some comparisons can be made between the ZFOURGE, 3DHST, and NMBS surveys. NMBS has the worst performance in scatter, bias, and outlier fraction, presumably because of the shallower data set, which also includes fewer filters (no HST-CANDELS data).
NMBS samples brighter objects, and in ZFOURGE such bright objects show better photometric redshift performance than the main sample (for galaxies with $K<23$, the photometric redshift accuracies for ZFOURGE and NMBS are, respectively, $1.3\pm0.2\%$ and $2\pm1\%$).
3DHST fares better in all categories. ZFOURGE performs the best of the three in this comparison.
This is attributed to the much better seeing and depth of ZFOURGE NIR medium-band imaging, which is consistent with the findings of Straatman et al., (in press).
\begin{deluxetable*}{lrrcccccc}
\tabletypesize{\scriptsize}
\tablecaption{
Photometric (P)/Grism (G) redshift comparison results for ZFIRE Q$_{z}$=3 galaxies.
\label{tab:photo_z_comparision}}
\tablecolumns{8}
\tablewidth{0pt}
\tablehead{
\colhead{Survey}&
\colhead{ N (Q$_z=3$)\tablenotemark{a}}&
\colhead{ $\mu$ ($\Delta z$/(1+$z_{\mathrm{spec}}$))} &
\colhead{ $\sigma$ ($\Delta z$/(1+$z_{\mathrm{spec}}$))} &
\colhead{ $z_{\mathrm{err}}$\tablenotemark{b}} &
\colhead{$\Delta z_{\mathrm{err}}$\tablenotemark{c}} &
\colhead{ Drastic Outliers \tablenotemark{d}} &
\colhead{ N$\mathrm{_{Q_z=3}\ Ks<23}$ \tablenotemark{e}}
}
\startdata
ZFOURGE (P)-Total & 147 & 0.002 & 0.016 & 1.5\% & $\pm$0.2\% & $0.7\%$ & 53 \\
ZFOURGE (P)-Ks $<23$ & 53 & 0.004 & 0.013 & 1.3\% & $\pm$0.2\% & $2.0\%$ & -- \\
& & & & & & &\\
\hline
& & & & & & &\\
NMBS (P) & 67 & -0.014 & 0.030 & 3.0\% & $\pm$0.8\% & 10.0\% & 48 \\
3DHST (P) & 127 & -0.002 & 0.025 & 2.5\% & $\pm$0.3\% & 3.2\% & 49 \\
3DHST (P+G) & 64 & -0.001 & 0.009 & 0.9\% & $\pm$0.2\% & 4.7\% & 43 \\
& & & & & & &\\
UKIDSS (P) & 58 & -0.006 & 0.014 & 1.4\% & $\pm$0.8\% & 7.0\% & 38
\enddata
\tablenotetext{a}{The number of spectroscopic objects matched with each photometric/grism catalogue.}
\tablenotetext{b}{The accuracy of the photometric redshifts. }
\tablenotetext{c}{The corresponding bootstrap error for the redshift accuracy.}
\tablenotetext{d}{Drastic outliers defined as $|\Delta z$/(1+$z_{\mathrm{spec}}$)$| >0.2$. They are given as a percentage of the total matched sample $(N)$ for each photometric/grism catalogue. Limits correspond to having $<1$ outlier.}
\tablenotetext{e}{The number of bright galaxies with Ks$<$23.}
\end{deluxetable*}
\begin{figure*}
\includegraphics[trim = 15 0 5 5, clip, scale=0.62]{figures/specz_vs_photo_z_NMBS.pdf}
\includegraphics[trim = 15 0 5 5, clip, scale=0.62]{figures/specz_vs_photo_z_3DHST_photo.pdf}
\caption{ Comparison between photometric redshifts derived by NMBS and 3DHST photometric \citep{Skelton2014} with the ZFIRE\ spectroscopic sample.
Lines and inset figures are similar to Figure \ref{fig:specz_photoz}. }
\label{fig:photo_z_comp}
\end{figure*}
\subsection{Grism Redshifts from 3DHST}
3DHST grism data are used to investigate the improvement in redshift accuracy when grism spectra are introduced into the SED fitting technique. \citet{Momcheva2015} use a combination of grism spectra and multi-wavelength photometric data to constrain the redshifts of the galaxies.
\citet{Momcheva2015} state that the 3DHST grism data quality has been assessed by two independent users. All objects flagged as good quality by both users are selected for comparison with the ZFIRE sample.
This gives 175 common galaxies, of which 123 have Q$_{z}$=3 and 64 of those also pass the 3DHST grism quality test.
The \zgrism\ vs. \zspec\ distributions of these 64 galaxies are shown in Figure \ref{fig:specz_3DHST_grism}.
There are three drastic outliers, which have been identified as low-redshift galaxies by the 3DHST grism data with \zgrism$<0.5$; the ZFIRE \zspec\ of these outliers are $>$2.
Compared with the 3DHST redshifts derived from photometric data alone, it is evident that the introduction of grism data improves the redshift accuracy by a factor of \around3, to $0.9\pm0.1$\%. The \zgrism\ accuracy is lower than the \around0.4\% accuracy computed by \citet{Bezanson2016} for grism redshifts. We note that the \citet{Bezanson2016} analysis is performed for galaxies with $H_{F160W}<24$ and that the ZFIRE-COSMOS sample probes much fainter magnitudes.
\begin{figure}
\includegraphics[trim = 15 0 5 5, clip, scale=0.59]{figures/specz_vs_3DHST_grism.pdf}
\caption{ Spectroscopic redshift comparison between ZFIRE and 3DHST grism + photometric redshifts.
This figure is similar to Figure \ref{fig:specz_photoz} with the exception of all photometric redshifts being replaced with the 3DHST \citet{Momcheva2015} data. The Gaussian fit to $\Delta$z/(1+z) has a $\mu$=-0.0011 and $\sigma$=0.009$\pm$0.001. Only galaxies with 1.90$<z_{\mathrm{spec}}<$2.70 are shown in the figure.
}
\label{fig:specz_3DHST_grism}
\end{figure}
\subsection{Spectroscopic Redshifts from MOSDEF and VUDS}
\label{sec:specz_comparisions}
The final comparison is with other public spectroscopic redshifts in these fields. Galaxies from MOSDEF \citep{Kriek:2014fk} and VUDS \citep{Cassata2015} surveys are matched with the ZFIRE sample within a 0$''$.7 aperture.
The MOSDEF overlap comprises 84 galaxies in the COSMOS field with high confidence redshift detections, out of which 74 galaxies are identified with matching partners from the ZFOURGE survey. In the ZFOURGE matched sample, 59 galaxies are at redshifts between 1.90$<$\zspec$<2.66$.
Seven galaxies are identified to be in common between the ZFIRE and MOSDEF detections.
The RMS scatter between the spectroscopically derived redshifts is \around0.0007.
This corresponds to a rest-frame velocity uncertainty of \around67 km s$^{-1}$, which is attributed to barycentric redshift corrections not being applied to the MOSDEF sample.
We note that barycentric velocities should be corrected as a part of the wavelength solution by the DRP for each observing night, and therefore we are unable to apply such corrections to the MOSDEF data. Considering ZFIRE data, once the barycentric correction is applied we find, by analysing repeat observations in K band, that our redshifts are accurate to $\pm13$ km s$^{-1}$.
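The conversion between redshift scatter and rest-frame velocity used above is simply $\Delta v = c\,\Delta z/(1+z)$; as a sketch:

```python
# Rest-frame velocity offset corresponding to a redshift offset:
# dv = c * dz / (1 + z).
C_KMS = 299792.458  # speed of light, km/s

def dz_to_velocity(dz, z):
    return C_KMS * dz / (1.0 + z)

# An RMS redshift scatter of ~0.0007 near the cluster redshift z ~ 2.1
# gives roughly 68 km/s, consistent with the ~67 km/s quoted above.
```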
Similarly, the VUDS COSMOS sample comprises 144 galaxies with redshift detections $>3\sigma$ confidence, out of which 76 galaxies have ZFOURGE detections. In the ZFOURGE matched sample, 43 galaxies lie within $1.90<$\zspec$<2.66$.
There are two galaxies in common between the ZFIRE and VUDS detections, and their redshifts agree within 96 km s$^{-1}$ and 145 km s$^{-1}$. The redshift confidence for the two matched galaxies is $<\mathrm{2}\sigma$ in the VUDS survey, while ZFIRE has multiple emission line detections for those galaxies. Furthermore, the VUDS survey employs VIMOS in low-resolution mode ($R\sim200$), leading to absolute redshift accuracies of $\sim200$ km s$^{-1}$. Therefore, we expect the ZFIRE redshifts of the matched galaxies to be more accurate than the VUDS redshifts.
Figure \ref{fig:survey_depth_comp} shows the distribution of the redshifts of the ZFIRE sample as a function of Ks magnitude and stellar mass. ZFIRE detections span a wide range of Ks magnitudes and stellar masses at $z\sim2$. The subset of galaxies observed at $z\sim3$ are fainter and are of lower mass. MOSDEF and VUDS samples are also shown for comparison. VUDS provides all auxiliary stellar population parameters, which are extracted from the CANDELS survey and hence all data are included. However, MOSDEF only provides the spectroscopic data and thus, only galaxies with identified ZFOURGE counterparts are shown in the figure, which is $\sim$90\% of the MOSDEF COSMOS field galaxies with confident redshift detections.
In Figure \ref{fig:survey_depth_comp}, MOSDEF detections follow a similar distribution to ZFIRE. Since both the surveys utilize strong emission lines in narrow NIR atmospheric passbands, similar distributions are expected. VUDS, however, samples a different range of redshifts as it uses optical spectroscopy. We note the strong \zspec=2.095 overdensity due to the cluster in the ZFIRE sample, but not in the others.
\begin{figure*}
\includegraphics[trim = 0 0 10 5, clip, scale=0.90]{figures/zspec_vs_Ks.pdf}
\includegraphics[trim = 0 0 10 5, clip, scale=0.90]{figures/zspec_vs_mass.pdf}
\caption{ Redshift comparison as a function of Ks magnitude and stellar mass. Along with the ZFIRE\ Q$_{z}$=3 detections, the MOSDEF and VUDS samples are shown for comparison.
For the MOSDEF sample, only galaxies with identified ZFOURGE detections are shown. All VUDS galaxies with $\mathrm{z_{spec}}>1.8$ with $>3\sigma$ detections are shown.
Note that VUDS observes galaxies in the optical regime, while ZFIRE\ and MOSDEF observe in the NIR.
{\bf Left:} $\mathrm{z_{spec}}$ vs. Ks magnitude for the spectroscopically detected galaxies. The VUDS sample is plotted as a function of K magnitude.
{\bf Right:} $\mathrm{z_{spec}}$ vs. stellar mass for the same samples of galaxies.
}
\label{fig:survey_depth_comp}
\end{figure*}
%------------------------------------------------------------
\section{Broader Implications}
\label{sec:implications}
The large spectroscopic sample presented here can be used to assess the fundamental accuracy of galaxy physical parameters (such as stellar mass, SFR, and galaxy
SED classification) commonly derived from photometric redshift surveys. It can also be used to evaluate the performance of the earlier photometric cluster selection.
\subsection{Galaxy Cluster Membership}
The completeness and purity of galaxy cluster membership of the $z=2.1$ cluster based on photometric redshifts is next investigated and compared with spectroscopic results.
First, photometric redshifts are used to compute a seventh nearest neighbour density map as shown in Figure \ref{fig:detection_map}.
Any galaxy that lies in a region with density $>3\sigma$ is assumed to be a photometric cluster candidate.
From the ZFOURGE photometric redshifts in the COSMOS field (coverage of $\sim 11'\times11'$) for $2.0<$\zphoto$<2.2$, there are 66 such candidates. All of these galaxies have been targeted to obtain spectroscopic redshifts.
\citet{Yuan2014} cluster galaxies are chosen to be within $3\sigma$ of the Gaussian fit to the galaxy peak at $z = 2.095$.
Only 25 of the photometric candidates are identified to be part of the \citet{Yuan2014} cluster, which corresponds to a \around38\% success rate.
The other 32 spectroscopically confirmed cluster galaxies at $z=2.095$ from Yuan et al. are not selected as cluster members using photometric redshifts, i.e., membership identification based on photometric redshifts and the seventh nearest
neighbour density is \around56\% incomplete.
\citet{Yuan2014} find the velocity dispersion of the cluster structure to be $\sigma_{\mathrm{v1D}}= 552\pm52$ km s$^{-1}$, while the photometric redshift accuracy of ZFOURGE at $z=2.1$ corresponds to $\sim4500$ km s$^{-1}$. Therefore, even with high-quality photometric redshifts such as those from ZFOURGE, we are unable to precisely identify cluster members, which demonstrates that spectroscopic redshifts are crucial for identifying and studying cluster galaxy populations at $z\sim2$.
\subsection{Luminosity, Stellar Mass, and Star Formation Rate}
\label{sec:M-SFR-dz}
An important question in utilising photometric redshifts is whether their accuracy depends on key galaxy properties such as luminosity, stellar mass, and/or SFR. This could lead
to biases in galaxy evolution studies.
The Ks total magnitudes and stellar masses from ZFOURGE (v2.1 catalogue) are used for this comparison, which is shown in Figure \ref{fig:delta_z_vs_param}.
The redshift error is plotted as a function of Ks magnitude and stellar mass for all Q$_z$=3 ZFIRE galaxies. The sample is binned into redshift bins and further subdivided into star-forming, dusty star-forming, and quiescent galaxies depending on their rest-frame UVJ colour.
The least squares best-fit lines for the Ks magnitude and stellar mass are $y=-0.001 (\pm0.003)x+0.05 (\pm0.06)$ and $y=0.010 (\pm0.005)x-0.08 (\pm0.05)$, respectively.
Therefore, there is a slight trend of photometric redshift accuracy with stellar mass, with more massive galaxies showing positive offsets in $\Delta z/(1+$\zspec$)$.
However, the relationship of $\Delta z/(1+$\zspec$)$ with Ks magnitude is not statistically significant.
The typical \NMAD\ of $\Delta z/(1+$\zspec$)$ is 0.022 with a median of 0.009.
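Throughout this comparison, \NMAD\ refers to the normalized median absolute deviation; the convention we assume here is the standard one,
\[
\sigma_{\mathrm{NMAD}} = 1.48 \times \mathrm{median}\left(\left|\,\Delta z/(1+z_{\mathrm{spec}}) - \mathrm{median}\big(\Delta z/(1+z_{\mathrm{spec}})\big)\,\right|\right),
\]
which matches the standard deviation of a Gaussian distribution while being robust to outliers.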
Note that the $\Delta z/(1+$\zspec$)$ scatter parametrized here differs from the \zphoto\ vs. \zspec\ comparison in Figure \ref{fig:specz_photoz} for the ZFOURGE sample. We use ZFOURGE catalogue version 2.1 for the $\Delta z/(1+$\zspec$)$ vs. mass and magnitude comparison, while for the \zphoto\ vs. \zspec\ comparison we use v3.1. Furthermore, the scatter here is calculated using \NMAD, while in Figure \ref{fig:specz_photoz} a Gaussian function is fit to $\Delta z/(1+$\zspec$)$ after removing the drastic outliers. The changes in \zphoto\ between v2.1 and v3.1 are driven by the introduction of improved SED templates. This comparison is expanded on in Appendix \ref{sec:ZFOURGE comparison}.
\begin{figure}
\includegraphics[trim = 10 10 10 10, clip, scale=0.58]{figures/delta_z_vs_mass_mag.pdf}
\caption{ Photometric redshift accuracies as a function of Ks magnitude and stellar mass.
All Q$_z=3$ ZFIRE-COSMOS galaxies with redshifts between $1.90<z<2.66$ have been selected.
All galaxies are divided into blue star-forming, red (dusty) star-forming, and quiescent galaxies, which are shown with different symbols. Galaxies are further sub-divided into redshifts and are colour coded as shown.
{\bf Top:} $\Delta z/(1+z_\mathrm{spec}$) vs. Ks total magnitude from ZFOURGE.
{\bf Bottom:} similar to above but with stellar mass on the x-axis.
The median $\Delta z/(1+z_\mathrm{spec}$) is 0.009.
The grey shaded region in both the plots shows the \NMAD\ of the $\Delta z/(1+z_\mathrm{spec}$) scatter (0.022) around the median of the selected galaxies. The solid lines are the least squares best-fit lines for the data.
}
\label{fig:delta_z_vs_param}
\end{figure}
Galaxy properties derived via SED fitting techniques should depend on $\Delta z$. Figure \ref{fig:delta_param_vs_delta_z} shows the change of stellar mass and SFR (both calculated using FAST with either photometric or spectroscopic redshifts) as a function of $\Delta z$. To first order, an analytic calculation of the expected residual can be made.
SED fitting techniques estimate galaxy stellar masses from luminosities and
mass-to-light ratios. The luminosity calculated from the flux will depend on the redshift used, and hence the mass and redshift change should correlate.
Ignoring changes in mass-to-light ratios and K-correction effects, from the change in luminosity distance we expect
\begin{subequations}
\begin{equation}
\frac{d[\log_{10}(M)]}{dz} = \frac{2}{D_L} \left(\frac{dD_L}{dz}\right)_{z=2}
\end{equation}
where $M$ is the stellar mass of the galaxy and $D_L$ is the luminosity distance. Evaluating for $z=2$, with $D_L=15.5$ Gpc:
\begin{equation}
\label{eq:delta_m_z}
\Delta \log_{10}(M) = 0.67 \Delta z
\end{equation}
\end{subequations}
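The step leading to Equation (\ref{eq:delta_m_z}) can be made explicit. At fixed observed flux, the inferred luminosity scales as $L \propto D_L^2$, and at fixed mass-to-light ratio $M \propto L$, so
\[
\frac{\Delta M}{M} = 2\,\frac{\Delta D_L}{D_L}, \qquad \Delta D_L = \left(\frac{dD_L}{dz}\right)\Delta z .
\]
Evaluating $dD_L/dz$ at $z=2$ for the adopted cosmology then gives the numerical coefficient in Equation (\ref{eq:delta_m_z}).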
Equation (\ref{eq:delta_m_z}) is plotted in Figure \ref{fig:delta_param_vs_delta_z}. The top panel of the figure shows that the mass and redshift changes correlate approximately as expected with a \NMAD\ of 0.017 dex. SED SFRs are also calculated from luminosities, albeit with a much greater
weight to the UV section of the SED, and thus should scale similarly to mass.
The \NMAD\ scatter around this expectation is 0.086 dex, which is higher than the mass scatter with a much greater number of outliers. To fully comprehend the role of outliers in the scatter, we fit a Gaussian function to the deviation of $\Delta\log_{10}$(Mass) and $\Delta\log_{10}$(SFR) for each galaxy from its theoretical expectation. The $\Delta\log_{10}$(SFR) shows a larger scatter of $\sigma=0.2$ in the Gaussian fit compared to the $\sigma=0.03$ of $\Delta\log_{10}$(Mass).
It is likely that the higher scatter in $\Delta\log_{10}$(SFR) is because the rest-frame UV luminosity is much more sensitive to the star formation history and dust extinction encoded in the best-fit SED than the stellar mass.
Photometric-redshift derived stellar masses are evidently robust against the typical redshift errors; however, caution is warranted when using SED-based SFRs with
photometric redshifts, because they are much more sensitive to small redshift changes
(in our sample \around26\% of galaxies have $|\Delta\log_{10}$SFR$|>0.3$ even though the photometric redshifts have good precision).
Studies that investigate galaxy properties relying solely on photometric redshifts may reach inaccurate conclusions about inherent galaxy properties; it is therefore imperative that they are supported by spectroscopic studies. It should be noted that previous ZFOURGE papers have extensively used photometric-redshift derived stellar masses (for example, the mass function evolution of \citet{Tomczak2014}), which we find to be reliable, but not SED-based SFRs. Most commonly, the best-fit SEDs are used to derive the UV+IR fluxes from which SFRs are computed, since SFRs derived directly via FAST templates \citep[e.g.,][]{Maraston2010} are degenerate with age, metallicity, and dust law. See \citet{Conroy2013} for a review on this topic.
\begin{figure}
\includegraphics[trim = 10 10 10 10, clip, scale=0.58]{figures/delta_mass_lsfr_vs_delta_z.pdf}
\caption{ Effect of $\Delta z$ on galaxy stellar mass and SFR derived by FAST.
All ZFIRE-COSMOS galaxies with redshifts between $1.90<z<2.66$ have been selected. All galaxies are divided into blue star-forming, red (dusty) star-forming and quiescent galaxies which are shown as different symbols. Galaxies are further sub-divided into redshifts and are colour coded as shown.
The diagonal solid lines are Equation (\ref{eq:delta_m_z}), which is the simplified theoretical expectation for mass/SFR correlation with redshift error.
The grey shaded regions correspond to the $\sigma$ values of the best-fit Gaussian functions that describe the deviation of the observed values from the theoretical expectation.
{\bf Top:} $\Delta\log_{10}$Mass vs. $z_\mathrm{spec}-z_\mathrm{photo}$.
{\bf Bottom:} similar to top but with $\Delta\log_{10}$(SFR) on the y axis.
}
\label{fig:delta_param_vs_delta_z}
\end{figure}
\subsection{Rest-Frame UVJ Colours}
ZFOURGE rest frame UVJ colours are derived using photometric redshifts.
UVJ colours from \zphoto\ are commonly used to identify the evolutionary stage of a galaxy \citep{Williams2009}. Here we investigate the effect of photometric redshift
accuracy on the UVJ colour derivation of galaxies.
Figure \ref{fig:UVJ} shows the rest frame UVJ colours of Q$\mathrm{_z}$=3 objects re-derived using spectroscopic redshifts from the same SED template library.
Figure \ref{fig:delta_UVJ} shows the change of location of the galaxies in rest frame UVJ colour when ZFIRE redshifts are used to re-derive them (the lack of quiescent galaxies overall is a bias in the ZFIRE\ sample selection as noted earlier).
Only one or two galaxies out of the total sample of 149 change their classification.
The inset histograms show the change of (U$-$V) and (V$-$J) colours. Gaussian functions are fit to the histograms; the scatter in (U$-$V) colours ($\sigma$=0.03) is higher than that of (V$-$J) colours ($\sigma$=0.02), and (U$-$V) has a greater number of outliers.
The conclusion is that the U$-$V rest-frame colours are more sensitive to redshift compared to V$-$J colours by \around50\%, which may contribute to a selection bias in high-redshift samples. This sensitivity of the UV part of the SED is in accordance with the results of Section~\ref{sec:M-SFR-dz}.
To further quantify the higher sensitivity of U magnitude on redshift, Gaussian fits are performed on the $\Delta$U, $\Delta$V, and $\Delta$J magnitudes of the ZFIRE galaxies, by calculating the difference of the magnitudes computed when using \zphoto\ and \zspec. $\Delta$U shows a larger scatter of $\sigma=0.04$, while $\Delta$V and $\Delta$J show a scatter of $\sigma=0.01$. This further validates our conclusion that the UV part of the SED has larger sensitivity to redshift.
\begin{figure}
\includegraphics[trim = 0 0 0 0, clip, scale=0.925]{figures/Rest_frame_UVJ_arrows.pdf}
\caption{ Effect of $\Delta z$ on rest frame UVJ colours.
All ZFIRE-COSMOS galaxies are shown in the redshift bin $1.90<z<2.66$.
The green stars are rest frame UVJ colours derived using photometric redshifts from EAZY. The rest frame colours are re-derived using spectroscopic redshifts from ZFIRE. The brown arrows denote the change of the position of the galaxies in the rest frame UVJ colour space when $z_{\mathrm{spec}}$ is used. The large arrows (one of which moves outside the plot range) are driven by $\Delta z$ outliers.
The two inset histograms show the change in (V$-$J) and (U$-$V) colours for this sample of galaxies. Gaussian fits with $\sigma$ of 0.02 and 0.03 are performed, respectively, for the (V$-$J) and (U$-$V) colour differences.
}
\label{fig:delta_UVJ}
\end{figure}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Summary}
\label{sec:summary}
Here we present the ZFIRE survey of galaxies in rich environments and our first public data release. A detailed description of the data reduction used by ZFIRE is provided. The use of a flux standard star along with photometric data from ZFOURGE and UKIDSS has made it possible to flux calibrate the spectra to $\lesssim10$\% accuracy. The ZFIRE-COSMOS sample spans a wide range in Ks magnitude and stellar mass and secures redshifts for UVJ star-forming galaxies to Ks=24.1 and stellar masses of $\log_{10}($\mass$)>9.3$. We show that selecting using rest-frame UVJ colours is an effective method for identifying \Halpha-emitting galaxies at $z\sim2$ in rich environments. Redshifts have been measured for 232 galaxies of which 87 are identified as members of the rich clusters we have targeted in COSMOS and UDS fields.
Photometric redshift probability density functions from EAZY are used to show that the expected \Halpha\ detections are similar to the ZFIRE detection rate in the COSMOS field. In the COSMOS field, the ZFIRE survey has detected \around80\% of the targeted star-forming galaxies. We also show that the density structure discovered by \citet{Spitler2012} has been thoroughly sampled by ZFIRE.
Using spectroscopic redshifts from ZFIRE with ZFOURGE and other public photometric survey data, we investigated the accuracies of photometric redshifts. The use of medium-band imaging in SED fitting techniques can result in photometric redshift accuracies of $\sim1.5\%$. ZFIRE calculations of photometric redshift accuracies are consistent with the expectations of the ZFOURGE survey (Straatman et al., in press) but are slightly less accurate than the NMBS \citep{Whitaker2011} and 3DHST \citep{Skelton2014} survey results. The higher redshift errors can be attributed to sampling differences, which arise from the deeper NIR medium-band imaging in ZFOURGE compared to the other surveys (i.e., the galaxies that overlap between surveys tend to be fainter than typical NMBS galaxies).
If we select a brighter subset of NMBS (Ks $<23$) we find that the redshift accuracy increases by 30\%.
Using UKIDSS, \citet{Quadri2012} shows that the photometric redshift accuracy is dependent on redshift and that at higher redshifts the photometric redshift error is higher. Between UKIDSS at $z\sim1.6$ and ZFOURGE at $z\sim2$ the photometric redshift accuracies are similar. Therefore, the use of medium-band imaging in ZFOURGE has resulted in more accurate redshifts at $z\sim2$, due to finer sampling of the D4000 spectral feature by the J1, J2, and J3 NIR medium-band filters. The introduction of medium-bands in the K band in future surveys may allow photometric redshifts to be determined to higher accuracies at $z\gtrsim4$.
The importance of spectroscopic surveys to probe the large-scale structure of the universe is very clear. For the COSMOS \citet{Yuan2014} cluster, we compute a 38\% success rate (i.e., 38\% of galaxies in $3\sigma$ overdensity regions are identified spectroscopically as cluster galaxies) and a 56\% incompleteness (56\% of spectroscopic cluster galaxies are not identified from data based on purely photometry) using the best photometric redshifts (with seventh nearest neighbour algorithms) to identify clustered galaxies.
We find a systematic trend in photometric redshift accuracy with stellar mass: more massive galaxies show positive offsets of up to $\sim$0.05 in $\Delta z/(1+z_\mathrm{spec}$). However, we find no statistically significant trend with galaxy luminosity.
Results also suggest that the stellar mass and SFR correlate with redshift error. This is driven by the change in the calculated galaxy luminosity as a function of the assigned redshift, and we show that the values correlate approximately as theoretically expected. SFR shows larger scatter than stellar mass in this parameter space, which can be attributed to the stronger weight given in the SFR derivation to the UV flux, which is very sensitive to the underlying model.
This stronger correlation of the UV flux with redshift error is further evident when comparing the change in (U$-$V) and (V$-$J) colour with change in redshift. When rest-frame U,V, and J colours are re-derived using spectroscopic redshifts, our results show a stronger change in (U$-$V) colour compared to the (V$-$J) colour. Therefore, a redshift error may introduce an extra selection bias on rest-frame UVJ selected galaxies. Further studies using larger samples of quiescent and dusty star-forming galaxies at $z\sim2$ are needed to quantify this bias.
Clearly the use of photometric redshifts can lead to biases even when using the same SED template set. However, it is important to acknowledge the underlying uncertainties that lie in deriving galaxy properties even with spectroscopic redshifts.
Future work could consider the role of SED templates used in SED fitting techniques. Generally the templates used are empirically derived, which limits the capability to understand the inherent properties of the observed galaxies. With the use of physically motivated models such as MAGPHYS \citep{daCunha2008}, more statistically meaningful relationships between different physical parameters of the observed galaxies could be obtained. Improving such models in the future to include photo-ionization will allow us to make direct comparisons of star-forming galaxies at $z\sim2$, which will be vital for studying inherent galaxy properties.
Furthermore, the accuracy of underlying assumptions used in SED fitting techniques such as the IMF, dust properties, and star formation histories at $z\sim2$ should be investigated. These assumptions are largely driven by observed relationships at $z\sim0$, and if the galaxies at higher redshifts are proven to be inherently different from the local populations, results obtained via current SED fitting techniques may be inaccurate. Future work should focus on the physical understanding of the galaxy properties at $z\gtrsim2$ with large spectroscopic surveys to better constrain the galaxy evolution models. The recent development of sensitive NIR integral field spectrographs with multiplexed capabilities will undoubtedly continue to add a wealth of more information on this topic over the next few years.
The ZFIRE survey will continue focusing on exploring the large spectroscopic sample of galaxies in rich environments at $1<z<3$ to investigate galaxy properties in rich environments. Upcoming papers
include analyses of the IMF (T. Nanayakkara et al. 2016, in preparation), kinematic scaling relations (\citealt{Alcorn2016}; C. Straatman et al. 2016, in preparation), the mass--metallicity fundamental plane \citep{Kacprzak2016}, and galaxy growth in cluster and field samples (K. Tran et al., in preparation).
\acknowledgements
The data presented herein were obtained
at the W.M. Keck Observatory, which is operated as a scientific
partnership among the California Institute of Technology, the
University of California and the National Aeronautics and Space
Administration. The Observatory was made possible by the generous
financial support of the W.M. Keck Foundation.
The authors wish to recognize and acknowledge the very significant cultural role and
reverence that the summit of Mauna Kea has always had within the
indigenous Hawaiian community. We are most fortunate to have the
opportunity to conduct observations from this mountain and we hope we
will be able to continue to do so.
We thank Nick Konidaris and the Keck observatory support staff for the
extensive and generous help given during the observing and data
reduction. We thank Gabriel Brammer for providing us the updated EAZY
and help with several issues. T.N., K.G., and G.G.K. acknowledge
Swinburne-Caltech collaborative Keck time, without which this survey
would not have been possible. K.G acknowledges the support of the
Australian Research Council through Discovery Proposal awards
DP1094370, DP130101460, and DP130101667. G.G.K. acknowledges the support
of the Australian Research Council through the award of a Future
Fellowship (FT140100933). K.T. acknowledges the support of the
National Science Foundation under Grant \#1410728. This work was
supported by a NASA Keck PI Data Award administered by the NASA
Exoplanet Science Institute.
Facilities: \facility{Keck:I (MOSFIRE)}
\bibliographystyle{apj}
%\bibliography{../../../papers/bibliography.bib}
\bibliography{bibliography}
\clearpage
\newpage
\appendix
\section{A: MOSFIRE calibrations}
\label{sec:MOSFIRE cals}
\subsection{Telluric Corrections}
Additional figures related to the MOSFIRE data reduction process are shown in this section.
Figure \ref{fig:sensitivity} shows an example set of derived sensitivity curves and the normalized 1D spectra applied to all observed bands.
\begin{figure*}[h!]
\includegraphics[trim = 10 10 10 10, clip, scale=0.45]{figures/sensitivity.pdf}
\caption{Example set of derived sensitivity curves for MOSFIRE filters. From left to right, in the top panels we show the Y and J bands and in the bottom panels we show the H and K bands. Pre-ship spectroscopic throughput for MOSFIRE is shown in blue. This takes into account the instrument response and the telescope throughput, and \cite{McLean2012} shows that these predictions agree extremely well with the measured values. The green line is the measured atmospheric transmission provided by the University of Hawai'i (private communication).
The normalized spectra of the observed 1D standard stars before any corrections are applied are shown in brown.
We remove the stellar atmospheric hydrogen lines and fit the spectrum with a blackbody emission curve.
We use this derived spectrum as a sensitivity curve (shown in black) and multiply our galaxy spectra by it to apply telluric corrections.
We multiply the observed standard star spectrum by the derived sensitivity curve to obtain a telluric-corrected normalized standard star spectrum, which is shown in cyan.
Each panel is accompanied by a 2D spectrum of the standard star as given by the DRP. The black and white lines are the negative and positive images. Strong telluric features can be seen in regions where the intensity of the 2D spectrum drops rapidly.
All 1D curves are normalized to a maximum value of 1.}
\label{fig:sensitivity}
\end{figure*}
\clearpage
\subsection{Spectrophotometric Calibrations}
As mentioned in Section \ref{sec:sp calibration}, for the COSMOS field, we overlaid synthetic slit apertures with varying slit heights on the ZFOURGE imaging to count the integrated flux within each aperture. The main purpose of the process was to account for the light lost due to the finite slit size.
Figure \ref{fig:scaling_values_varying_slit_boxes} shows the change of median offset values with aperture size for each of the COSMOS masks. As is evident from the figure, when the slit height increases from $1''.4$ to $2''.8$, most of the light emitted by the galaxies is included within the slit aperture. For any slit height beyond that, there is no significant change to the integrated counts, so larger apertures mainly add noise. For this reason, we choose the $0''.7\times2''.8$ slit size to perform the spectrophotometric calibrations.
We show the magnitude distribution of two example masks in Figure \ref{fig:mask_scaling_example}. Once a uniform scaling is applied to all the objects in a given mask, the agreement between the photometric slit-box magnitude and the spectroscopic magnitude increases.
\begin{figure}[h!]
\includegraphics[scale=1.20]{figures/scaling_values_with_diff_slit_sizes_bs.pdf}
\caption{ The median offset values for different aperture sizes for the COSMOS field masks. This figure is similar to Figure \ref{fig:scaling_values} top panel, but shows the median offset values computed for all slit-box like aperture sizes considered in our spectrophotometric calibration process.
Filter names correspond to the names in Table \ref{tab:observing_details}.
The green stars in different shades for a given mask show the median offset between the spectroscopic magnitudes of the objects in the mask and their photometric magnitudes computed using ZFOURGE and HST imaging with varying aperture sizes.
The errors are the \NMAD\ scatter of the median offsets calculated via bootstrap re-sampling of individual galaxies.
The vertical lines are for visual purposes to show data points belonging to each mask.
}
\label{fig:scaling_values_varying_slit_boxes}
\end{figure}
\begin{figure*}
\includegraphics[scale=0.9]{figures/magcomp_KL3_bs_comp.pdf}
\includegraphics[scale=0.9]{figures/magcomp_KL3_as_comp.pdf}
\includegraphics[scale=0.9]{figures/magcomp_H1_bs_comp.pdf}
\includegraphics[scale=0.9]{figures/magcomp_H1_as_comp.pdf}
\caption{Two example masks showing the comparison between spectroscopically derived magnitude to the photometrically derived magnitude using a $0''.7 \times 2''.8$ slit box.
{\bf Top left:} K-band mask (KL3) before spectrophotometric calibration. The legend shows the median offset of galaxies with slit magnitude $<$24 and the corresponding bootstrap error.
{\bf Top right:} similar to left panel but after the spectrophotometric calibration has been applied. Since the scaling factor is now applied to the data, the median offset for galaxies with slit magnitude $<$24 is now 0. The inset shows the bootstrap error after the scaling is applied. This is considered to be the error of the spectrophotometric calibration process.
{\bf Bottom:} similar to the top panels but for a H-band mask (H1).
The grey shaded area in all the panels is the bootstrap error.
Error bars are from the ZFOURGE photometric catalogue. The flux monitor stars have been removed from the figure to focus on the value range occupied by the galaxies. }
\label{fig:mask_scaling_example}
\end{figure*}
\clearpage
\section{B: Differences between ZFOURGE versions}
\label{sec:ZFOURGE comparison}
Here we show the effect of minor changes between different versions of ZFOURGE catalogues.
ZFIRE\ sample selection was performed using an internal data release intended for the ZFOURGE team (v2.1). In this version, detection maps were made from Ks band photometry from FourStar imaging.
The 5$\sigma$ depth for this data release is Ks $\leq$25.3 in AB magnitude (Straatman et al., in press; This is 24.8 in \citet{Spitler2012}).
All results shown in the paper, except for the photometric redshift analysis, are from v2.1.
ZFOURGE COSMOS field has now been upgraded by combining the FourStar imaging with VISTA/K from UltraVISTA \citep{McCracken2012} to reach a 5$\sigma$ significance of 25.6 in AB magnitude (v3.1). This has increased the number of detected objects of the total COSMOS field by \around50\%.
All ZFIRE\ galaxies identified by v2.1 of the catalogue are also identified with matching partners by v3.1.
Figure \ref{fig:detection_limits_newcat} shows the distribution of the Ks magnitude and masses of the updated ZFOURGE catalogue in the redshift bin $1.90<z<2.66$.
The 80\%-ile limit of ZFOURGE in this redshift bin increases by 0.4 magnitude to $Ks = 25.0$.
Similarly, the 80\% mass limit is \around$10^9$ \msol\ which is an increase of 0.2 dex in sensitivity.
It is evident from the histograms that the significant increase of the detectable objects in this redshift bin has largely been driven by faint smaller mass galaxies.
The 80\% limit for the ZFIRE-COSMOS galaxies is Ks=24.15 with the new catalogue. The change is due to the change of photometry between the two catalogues.
Figure \ref{fig:cat_differences} shows the ZFOURGE catalogue differences between Ks total magnitude and the photometric redshift of the ZFIRE\ targeted galaxies.
The Ks magnitude values may change due to the following reasons.
\begin{enumerate}
\item The detection image is deeper and different, which causes subtle changes in the location and the extent of the galaxies.
\item The zero point corrections applied to the data uses an improved method and therefore the corrections are different between the versions.
\item The correction for the total flux is applied using the detection image, rather than the Ks image. Due to subtle changes mentioned in 1, this leads to a different correction factor.
\end{enumerate}
The \NMAD\ of the scatter for the Ks total magnitude is \around0.1 mag and is shown by the grey shaded region. There are a few strong outliers.
Two of the three catastrophic outliers are classified as dusty galaxies. One of them is close to a bright star and has an SNR of \around5 in v2.1.
With the updated catalogue, the SNR has increased by \around30\% and therefore the new measurement is expected to be more robust.
For the remaining galaxy, we see no obvious reason for the difference.
Figure \ref{fig:specz_photoz_newcat} shows the redshift comparison between ZFIRE spectroscopy and the v2.1 of the ZFOURGE catalogue. In v3.1, the photometric redshifts were updated by the introduction of high \Halpha\ equivalent width templates to EAZY and improved zero-point corrections to the photometric bands.
These changes along with the extra Ks depth have driven the increase in accuracy of the photometric redshifts from \around2.0\% in v2.1 to \around1.6\% in v3.1.
\begin{figure}[b]
\centering
\includegraphics[trim = 10 10 10 5, clip, scale=0.90]{figures/detected_limits_newcat.pdf}
\caption{The Ks magnitude and mass distribution of the 1.90$<z<$2.66 galaxies from ZFOURGE overlaid on the ZFIRE\ detected sample.
This figure is similar to Figure \ref{fig:detection_limits}, but the ZFOURGE data has been replaced with the updated deeper ZFOURGE catalogue (v3.1) and shows all ZFOURGE and ZFIRE detected galaxies in this redshift bin (In Figure \ref{fig:detection_limits} the quiescent sample is removed to show only the red and blue star-forming galaxies).
}
\label{fig:detection_limits_newcat}
\end{figure}
\begin{figure}
\includegraphics[trim = 10 10 10 5, clip, scale=0.925]{figures/cat_differences.pdf}
\caption{Ks magnitude and the photometric redshift differences of ZFOURGE catalogues.
Only galaxies targeted by ZFIRE\ are shown.
{\bf Left:} the Ks band total magnitude difference between v2.1 and v3.1 of the ZFOURGE catalogues.
{\bf Right:} the photometric redshift difference between v2.1 and v3.1 of the ZFOURGE catalogues.
In both panels, the grey shaded region denotes the \NMAD\ of the distribution, which is respectively 0.09 magnitude and 0.03.
}
\label{fig:cat_differences}
\end{figure}
\begin{figure}
\includegraphics[trim = 10 10 10 5, clip, scale=0.65]{figures/specz_vs_photo_z_COSMOS_v2.1.pdf}
\caption{ Photometric and spectroscopic redshift comparison between ZFOURGE v2.1 and ZFIRE.
This Figure is similar to Figure \ref{fig:specz_photoz} with the exception of all photometric redshifts now being from v2.1 of the ZFOURGE catalogue.
}
\label{fig:specz_photoz_newcat}
\end{figure}
\end{document}
%!TEX root=cargo_reference.tex
\subsection{Classes}
The following classes are defined in \code{include/libcargo/classes.h}:
\subsubsection{Stop}
A \code{Stop} represents a customer or vehicle origin or destination. It is
the basic unit for \emph{schedules}. It is \textbf{immutable}.
\hi{Constructor:} \code{Stop(...)} takes 6 parameters, 1 with default:
\begin{itemize}
\item[] \code{TripId} -- corresponds to ID of the stop owner
\item[] \code{NodeId} -- corresponds to the location of the stop
\item[] \code{StopType} -- type of the stop
\item[] \code{ErlyTime} -- early time window bound
\item[] \code{LateTime} -- late time window bound
\item[] \code{SimlTime v=-1} -- when the stop was visited ($-1=$ not visited)
\end{itemize}
\hi{Example:}
The following creates \code{Stop A\_o} for Customer A's origin.
\code{Stop A\_o(A.id(), A.orig(), StopType::CustOrig, 0, 600);}
\hi{Getters:}
\begin{itemize}
\item[] \code{owner()} -- returns ID of the stop owner (\code{TripId})
\item[] \code{loc()} -- returns location of the stop (\code{NodeId})
\item[] \code{type()} -- returns type of the stop (\code{StopType})
\item[] \code{early()} -- returns early time window bound (\code{ErlyTime})
\item[] \code{late()} -- returns late time window bound (\code{LateTime})
\item[] \code{visitedAt()} -- returns when the stop was visited (\code{SimlTime})
\end{itemize}
\hi{Equality Comparator:}
Two \code{Stop}s are equal if their owners and locations are equal.
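For readers without the library at hand, the immutability pattern and the equality rule described above can be sketched as a minimal self-contained class. This is a hypothetical illustration, not libcargo code: the name \code{MiniStop} and the use of plain \code{int} in place of the \code{TripId}/\code{NodeId}/\code{ErlyTime}/\code{LateTime} typedefs are our assumptions.

```cpp
#include <cassert>

// Hypothetical sketch of libcargo's Stop pattern (not the real class).
// All state is fixed in the constructor and exposed via const getters,
// making the object immutable after construction.
enum class StopType { CustOrig, CustDest, VehlOrig, VehlDest };

class MiniStop {
 public:
  MiniStop(int owner, int loc, StopType type, int early, int late,
           int visited_at = -1)  // -1 = not visited, as in the docs
      : owner_(owner), loc_(loc), type_(type),
        early_(early), late_(late), visited_at_(visited_at) {}

  int owner() const { return owner_; }
  int loc() const { return loc_; }
  StopType type() const { return type_; }
  int early() const { return early_; }
  int late() const { return late_; }
  int visitedAt() const { return visited_at_; }

  // Mirrors the documented comparator: two stops are equal if their
  // owners and locations are equal (type and windows are ignored).
  bool operator==(const MiniStop& rhs) const {
    return owner_ == rhs.owner_ && loc_ == rhs.loc_;
  }

 private:
  const int owner_, loc_;
  const StopType type_;
  const int early_, late_, visited_at_;
};
```

Note in particular that two stops with different types and time windows still compare equal if owner and location match, which is why a customer's origin and destination (different locations) never collide.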
\subsubsection{Schedule}
A \code{Schedule} is a sequence of stops that vehicles visit in order.
Schedules are associated with a particular vehicle. YOU MUST ensure a vehicle's
schedule and its route are synchronized (\emph{i.e.} all stops in the schedule
are visited by the route). Schedules are \textbf{immutable}. To change a
vehicle's existing schedule, create a new one and commit it into the database
using method \code{assign} (\code{rsalgorithm.h}).
\hi{Constructor:} \code{Schedule(...)} takes 2 parameters:
\begin{itemize}
\item[] \code{VehlId} -- corresponds to ID of the schedule owner
\item[] \code{vec\_t<Stop>} -- the raw sequence of \code{Stop}s
\end{itemize}
\hi{Example:}
The following creates schedule \code{sch} for vehicle \code{I} traveling from
its origin, then to pick up customer \code{A} at \code{A\_o}, then to
drop off the customer at \code{A\_d}, then to arrive at \code{I\_d}.
\code{\\ vec\_t<Stop> I\_sch \{I\_o, A\_o, A\_d, I\_d\}; \\ Schedule sch(I.id(), I\_sch);}
\hi{Getters:}
\begin{itemize}
\item[] \code{owner()} -- returns ID of the schedule owner (\code{VehlId})
\item[] \code{data()} -- returns raw sequence of stops (\code{vec\_t<Stop>})
\item[] \code{at(SchIdx)} -- returns stop at index \code{SchIdx} (\code{Stop})
\item[] \code{front()} -- returns first stop in the schedule (\code{Stop})
\item[] \code{size()} -- returns number of stops (\code{size\_t})
\item[] \code{print()} -- prints schedule to standard out (\code{void})
\end{itemize}
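\hi{Example:} As a sketch (assuming the schedule \code{sch} constructed in the
example above), the pickup stop for customer \code{A} can be retrieved by index:
\code{\\ Stop pickup = sch.at(1); \\ NodeId loc = pickup.loc();}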
\subsubsection{Route}
A \code{Route} is a sequence of waypoints that a vehicle travels along. Routes
are associated with a particular vehicle. Routes are \textbf{immutable}. The
function \code{route\_through(...)} (\code{functions.h}) creates a shortest
route given a schedule.
\hi{Constructor:} \code{Route(...)} takes 2 parameters:
\begin{itemize}
\item[] \code{VehlId} -- corresponds to ID of the route owner
\item[] \code{vec\_t<Wayp>} -- the raw sequence of \code{Wayp}s
\end{itemize}
\hi{Example:} The following creates route \code{rte} using the schedule in the
above example for vehicle \code{I}.
\code{\\ vec\_t<Wayp> I\_rte; \\ route\_through(I\_sch, I\_rte); \\ Route rte(I.id(), I\_rte);}
\hi{Getters:}
\begin{itemize}
\item[] \code{owner()} -- returns ID of the route owner (\code{VehlId})
\item[] \code{data()} -- returns raw sequence of waypoints (\code{vec\_t<Wayp>})
\item[] \code{node\_at(RteIdx)} -- returns ID of node at index \code{RteIdx} (\code{NodeId})
\item[] \code{dist\_at(RteIdx)} -- returns distance to node at index (\code{DistInt})
\item[] \code{cost()} -- returns total distance remaining (\code{DistInt})
\item[] \code{at(RteIdx)} -- returns waypoint at index \code{RteIdx} (\code{Wayp})
\item[] \code{size()} -- returns number of waypoints (\code{size\_t})
\item[] \code{print()} -- prints route to standard out (\code{void})
\end{itemize}
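\hi{Example:} As a sketch (assuming the route \code{rte} constructed in the
example above), the first node and the total remaining distance can be queried with:
\code{\\ NodeId first = rte.node\_at(0); \\ DistInt remaining = rte.cost();}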
\subsubsection{Trip}
Class \code{Trip} is the base class representing customers and vehicles. It is
\textbf{immutable}.
\hi{Constructor:} \code{Trip(...)} takes 6 parameters:
\begin{itemize}
\item[] \code{TripId} -- corresponds to ID of the trip
\item[] \code{OrigId} -- location of the trip origin
\item[] \code{DestId} -- location of the trip destination
\item[] \code{ErlyTime} -- early time window bound
\item[] \code{LateTime} -- late time window bound
\item[] \code{Load} -- load of the trip; negative indicates capacity
\end{itemize}
\hi{Getters:}
\begin{itemize}
\item[] \code{id()} -- returns ID of the trip (\code{TripId})
\item[] \code{orig()} -- returns location of the origin (\code{OrigId})
\item[] \code{dest()} -- returns location of the destination (\code{DestId})
\item[] \code{early()} -- returns early time window bound (\code{ErlyTime})
\item[] \code{late()} -- returns late time window bound (\code{LateTime})
\item[] \code{load()} -- returns load of the trip (\code{Load})
\end{itemize}
\subsubsection{Customer}
A \code{Customer} represents a ridesharing customer. Usually it is not
constructed by the user. It is \textbf{immutable}.
\hi{Constructor:} \code{Customer(...)} takes 8 parameters, with 1 default parameter:
\begin{itemize}
\item[] \code{CustId} -- corresponds to ID of the customer
\item[] \code{OrigId} -- location of the origin
\item[] \code{DestId} -- location of the destination
\item[] \code{ErlyTime} -- early time window bound
\item[] \code{LateTime} -- late time window bound
\item[] \code{Load} -- load of the customer; should be positive
\item[] \code{CustStatus} -- customer status
\item[] \code{VehlId a=-1} -- ID of the vehicle the customer is assigned to ($-1$ = unassigned; this parameter defaults to $-1$)
\end{itemize}
\hi{Getters:} In addition to getters for \code{Trip}, \code{Customer} has the
following getters:
\begin{itemize}
\item[] \code{status()} -- returns status of the customer (\code{CustStatus})
\item[] \code{assignedTo()} -- returns vehicle customer is assigned to (\code{VehlId})
\item[] \code{assigned()} -- returns true if customer is assigned (\code{bool})
\item[] \code{print()} -- prints customer to standard out (\code{void})
\end{itemize}
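\hi{Example:} A common pattern (a sketch, assuming a \code{Customer} object
\code{A}) is to skip customers that already have a vehicle:
\code{\\ if (A.assigned()) \\ \ \ std::cout << "already with " << A.assignedTo() << std::endl;}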
\hi{Equality Comparator:}
Two \code{Customer}s are equal if their IDs are equal.
\subsubsection{Vehicle}
A \code{Vehicle} represents a ridesharing vehicle. Usually it is not
constructed by the user. It is \textbf{immutable}.
\hi{Constructors:} \code{Vehicle} has two constructors. The first takes
7 parameters:
\begin{itemize}
\item[] \code{VehlId} -- corresponds to ID of the vehicle
\item[] \code{OrigId} -- location of the origin
\item[] \code{DestId} -- location of the destination
\item[] \code{ErlyTime} -- early time window bound
\item[] \code{LateTime} -- late time window bound
\item[] \code{Load} -- load of the vehicle; should be negative
\item[] \code{G\_Tree\&} -- which g-tree index to use for constructing
vehicle's default route
\end{itemize}
\hi{} The second constructor takes 12 parameters. The first 6 are the same
as in the first constructor. The remaining 6 are:
\begin{itemize}
\item[] \code{Load} -- number of customers queued to be served
\item[] \code{DistInt} -- distance to next node in its route
\item[] \code{Route} -- specific route for the vehicle
\item[] \code{Schedule} -- specific schedule for the vehicle
\item[] \code{RteIdx} -- index of vehicle's last-visited node in its route
\item[] \code{VehlStatus} -- vehicle status
\end{itemize}
\hi{Getters:} In addition to getters for \code{Trip}, \code{Vehicle} has the
following getters:
\begin{itemize}
\item[] \code{next\_node\_distance()} -- returns distance to next node in the route (\code{DistInt})
\item[] \code{route()} -- returns route (\code{Route})
\item[] \code{idx\_last\_visited\_node()} -- returns index of vehicle's last-visited node in its route (\code{RteIdx})
\item[] \code{last\_visited\_node()} -- returns last-visited node (\code{NodeId})
\item[] \code{status()} -- returns status (\code{VehlStatus})
\item[] \code{queued()} -- returns number of customers queued to be served (\code{Load})
\item[] \code{capacity()} -- returns maximum capacity (\code{Load})
\item[] \code{print()} -- prints vehicle to standard out (\code{void})
\end{itemize}
\hi{Equality Comparator:}
Two \code{Vehicle}s are equal if their IDs are equal.
\subsubsection{MutableVehicle}
A \code{MutableVehicle} is a \code{Vehicle} that can be modified.
\hi{Constructor:} A \code{MutableVehicle} is constructed from an existing vehicle via a copy constructor:
\begin{itemize}
\item[] \code{Vehicle\&} -- the \code{Vehicle} to create a mutable copy of
\end{itemize}
\hi{Methods:} In addition to getters for \code{Vehicle}, \code{MutableVehicle} has the
following methods:
\begin{itemize}
\item[] \code{set\_rte(const vec\_t<Wayp>\&)} -- set new route (\code{void})
\item[] \code{set\_rte(const Route\&)} -- set new route (\code{void})
\item[] \code{set\_sch(const vec\_t<Stop>\&)} -- set new schedule (\code{void})
\item[] \code{set\_sch(const Schedule\&)} -- set new schedule (\code{void})
\item[] \code{set\_nnd(const DistInt\&)} -- set next-node distance (\code{void})
\item[] \code{set\_lvn(const RteIdx\&)} -- set index of last-visited node in route (\code{void})
\item[] \code{reset\_lvn()} -- set index of last-visited node to 0 (\code{void})
\item[] \code{incr\_queued()} -- increase by 1 \# of customers queued to be served (\code{void})
\item[] \code{decr\_queued()} -- decrease by 1 \# of customers queued to be served (\code{void})
\end{itemize}
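\hi{Example:} A typical update (a sketch, assuming vehicle \code{I} and the
schedule and route from the earlier examples) copies the \code{Vehicle},
installs the new schedule and route, and records the queued customer:
\code{\\ MutableVehicle mv(I); \\ mv.set\_sch(I\_sch); \\ mv.set\_rte(I\_rte); \\ mv.incr\_queued();}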
\subsubsection{ProblemSet}
A \code{ProblemSet} represents a ridesharing problem instance. Usually it is
not constructed by the user.
\hi{Constructor:} \code{ProblemSet()} has an empty constructor.
\hi{Methods:}
\begin{itemize}
\item[] \code{name()} -- returns name of the instance (\code{std::string\&})
\item[] \code{road\_network()} -- returns name of road network (\code{std::string\&})
\item[] \code{set\_trips(const dict<ErlyTime, vec\_t<Trip>>\&)} -- store trips (\code{void})
\end{itemize}
\documentclass[fontsize=9pt]{scrartcl}
\usepackage{custom}
\title{{'{'}}{{cookiecutter.description}}{{'}'}}
\author[1]{ {{cookiecutter.author_name}}\thanks{ {{cookiecutter.email}} } }
% \author[1,2]{Random Name\thanks{r.name@uni.co.uk}}
% \affil[1]{School of Electronics Engineering and Computer Science, Queen Mary, University of London, London E1 4NS, United Kingdom}
\affil[1]{School of things, University of stuff, City XXXXX, Country}
% \affil[2]{Company, City XXXXX, Country}
\addbibresource{{'{'}}{{cookiecutter.repo_name}}.bib}
\begin{document}
\maketitle
\begin{abstract}
\lipsum[1]
\end{abstract}
\begin{multicols}{2}
\lipsum[2]
\section{First section}
\subsection{First subsection}
Testing \textbf{bold}, \textit{italics}, \textit{\textbf{bold italics}}.
Trying $a^3$
\lipsum
Let us try to cite this article \cite{knuth:ct:a}, which is really good. Also,
there are things in \fig{fig:1}. \\
Here is some math:
\begin{equation}
B'=-\nabla \times E
\end{equation}
\begin{figure}[H]
\includegraphics[width=1\columnwidth]{img}
\caption{Quisque ullamcorper placerat ipsum. Cras nibh. Morbi
vel justo vitae lacus tincidunt ultrices. Lorem ipsum dolor
sit amet, consectetuer adipiscing elit. In hac habitasse
platea dictumst. Integer tempus convallis augue. Etiam facilisis.}
\label{fig:1}
\end{figure}
\begin{table}[H]
\centering
\begin{tabular}{||c c c c||}
\hline
Col1 & Col2 & Col2 & Col3 \\ [0.5ex]
\hline\hline
1 & 6 & 87837 & 787 \\
2 & 7 & 78 & 5415 \\
3 & 545 & 778 & 7507 \\
4 & 545 & 18744 & 7560 \\
5 & 88 & 788 & 6344 \\ [1ex]
\hline
\end{tabular}
\caption{Lacus tincidunt ultrices. Lorem ipsum dolor
sit amet, consectetuer adipiscing elit.}
\end{table}
\lipsum[2-6]
\vspace*{1em}
\printbibliography[title=References]
%
\end{multicols}
\end{document}
\chapter{The \texttt{loop} macro}
Our \texttt{loop} module uses all available Common Lisp functions for
its analysis of syntax and semantics. We believe this is not a
problem, even though we assume the existence of \texttt{loop} for many
other modules, because the code in this module will be executed during
macro-expansion time, and for a new Common Lisp system, it would be
executed during cross compilation by another full Common Lisp
implementation.
Our \texttt{loop} module uses only standard Common Lisp code in its
resulting expanded code, so that macro-expanded uses of \texttt{loop}
will not require any other \sysname{} module in order to work.
The code for the \sysname{} \texttt{loop} macro is located in the
directory \texttt{Code/Loop}.
For parsing a \texttt{loop} expression, we use a technique called
\emph{combinatory parsing}, except that we do not handle arbitrary
backtracking. Luckily, arbitrary backtracking is not required to
parse the fairly simple syntax of \texttt{loop} clauses.
\section{Current state}
All \texttt{loop} clauses have been tested with the test cases
provided by Paul Dietz's ANSI \commonlisp{} test suite.
Future work includes providing an alternative parser to be used when
the normal parser fails. The purpose of the alternative parser is to
provide good error messages to the programmer.
\section{Protocol}
\subsection{Package}
The symbols documented in this section, and that are not in the
package \texttt{common-lisp}, are defined in the package named
\texttt{sicl-loop}.
\subsection{Classes}
\Defclass {clause}
This class is the base class for all clauses.
\Defclass {subclauses-mixin}
This class is a superclass of all classes of clauses that accept
subclauses joined by the \texttt{loop} keyword \texttt{and}.
\Defclass {var-and-type-spec-mixin}
This class is a superclass of all classes of clauses and subclauses
that take a \texttt{var-spec} and a \texttt{type-spec}.
\Defclass {compound-forms-mixin}
This class is a superclass of all classes of clauses that take a list
of compound forms.
\Defclass {loop-return-clause-mixin}
This class is a superclass of all classes of clauses that can make the
loop return a value.
\subsection{Functions}
\Defgeneric{bound-variables} {clause}
The purpose of this generic function is to generate a list of all
bound variables in a clause. The same variable occurs as many times
in the list as the number of times it is bound in the clause.
\Defgeneric{accumulation-variables} {clause}
The purpose of this generic function is to generate a list of all the
accumulation variables in a clause. Each element of the list is
itself a list of three elements. The first element is the name of a
variable used in an \texttt{into} clause, or \texttt{nil} if the
clause has no \texttt{into}. The second element determines the kind
of accumulation, and can be one of the symbols \texttt{list},
\texttt{count/sum}, or \texttt{max/min}. The third element is a type
specifier which can be \texttt{t}.
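For example (a hedged reading of this protocol, not taken from the code), for
the clause \texttt{sum x into s}, the result would contain the element
\texttt{(s count/sum t)}: the accumulation variable is \texttt{s}, the kind of
accumulation is \texttt{count/sum}, and since no type was specified, the type
specifier is \texttt{t}.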
\Defgeneric{declarations} {clause}
The purpose of this generic function is to extract a list of
declaration specifiers from the clause. Notice that it is a list of
declaration specifiers, not a list of declarations. In other words,
the symbol \texttt{declare} is omitted.
\Defgeneric {initial-bindings} {clause}
The purpose of this generic function is to extract the outermost level
of bindings required by the clause.
\Defgeneric {final-bindings} {clause}
The purpose of this generic function is to extract the innermost level
of bindings required by the clause.
\Defgeneric {bindings} {clause}
The default method of this generic function appends the result of
calling \texttt{initial-bindings} and that of calling
\texttt{final-bindings}.
\documentclass{beamer}
\input{preamble.tex}
\usefonttheme[onlymath]{serif}
\title[Introduction to \LaTeX]{An Introduction to \LaTeX\thanks{Adapted from
    \href{https://docs.google.com/file/d/0B1WX73woAuwgR2hHTE9ybmk1WG8/edit}{``An
      interactive introduction to \LaTeX''} by John Lees-Miller.}}
\author{Luiz Rafael dos Santos}
\institute{IFC-Camboriú}
\date{\today}
\subtitle{Part 2: Structured Documents \& Much More}
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}
\titlepage
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Structured Documents}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Outline}
\begin{multicols}{2}
\tableofcontents[currentsection]
\end{multicols}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{\insertsection}
\begin{itemize}
\item In Part 1, we learned about commands and environments for typesetting text and mathematics.
\item Now we will learn about commands and environments for structured documents.
\item You can try out the new commands in write\LaTeX{}:
\end{itemize}
\vskip 2em
\begin{center}
\fbox{\href{\wlnewdoc{basics.tex}}{%
Click here to open the example document in \wllogo{}}}
\\[1ex]\scriptsize{}Or go to this URL: \url{http://bit.ly/1cU9qBC}\\
For best results, please use \href{http://www.google.com/chrome}{Google Chrome} or \href{http://www.mozilla.org/en-US/firefox/new/}{Firefox}.
\end{center}
\vskip 2ex
\begin{itemize}
\item Let's get started!
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Title and Abstract}
\begin{frame}[fragile]{\insertsubsection}
\begin{itemize}{\small
\item Tell \LaTeX{} the title and author in the preamble, using the \cmdbs{title} and \cmdbs{author} commands.
\item Then use \cmdbs{maketitle} in the document to actually create the title.
\item Use the \bftt{abstract} environment to create an abstract.
}\end{itemize}
\begin{minipage}{0.55\linewidth}
\inputminted[fontsize=\scriptsize,frame=single,resetmargins]{latex}%
{structure-title.tex}
\end{minipage}
\begin{minipage}{0.35\linewidth}
\includegraphics[width=\textwidth,clip,trim=2.2in 7in 2.2in 2in]{structure-title.pdf}
\end{minipage}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Sections}
\begin{frame}{\insertsubsection}
\begin{itemize}{\small
\item Just use the \cmdbs{section} and \cmdbs{subsection} commands for sections and subsections.
\item Can you guess what the \cmdbs{section*} and \cmdbs{subsection*} commands do?
}\end{itemize}
\begin{minipage}{0.55\linewidth}
\inputminted[fontsize=\scriptsize,frame=single,resetmargins]{latex}%
{structure-sections.tex}
\end{minipage}
\begin{minipage}{0.35\linewidth}
\includegraphics[width=\textwidth,clip,trim=1.5in 6in 4in 1in]{structure-sections.pdf}
\end{minipage}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Labels and Cross-References}
\begin{frame}[fragile]{\insertsubsection}
\begin{itemize}{\small
\item Use \cmdbs{label} to label things and \cmdbs{ref} for automatic cross-referencing.
\item The \bftt{amsmath} package provides the \cmdbs{eqref} command for referencing equations.
}\end{itemize}
\begin{minipage}{0.55\linewidth}
\inputminted[fontsize=\scriptsize,frame=single,resetmargins]{latex}%
{structure-crossref.tex}
\end{minipage}
\begin{minipage}{0.35\linewidth}
\includegraphics[width=\textwidth,clip,trim=1.8in 6in 1.6in 1in]{structure-crossref.pdf}
\end{minipage}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Exercise}
\begin{frame}[fragile]{Structured Document Exercise}
\begin{block}{Typeset a short article in \LaTeX:
\footnote{Translated from \url{http://pdos.csail.mit.edu/scigen/}, a random paper generator.}}
\begin{center}
\fbox{\href{\fileuri/structure-exercise-solution.pdf}{%
Click here to open the article}}
\end{center}
Make your article look like this one. Use the \cmdbs{ref} and \cmdbs{eqref} commands to avoid writing section and equation numbers explicitly in the text.
\end{block}
\vskip 2ex
\begin{center}
\fbox{\href{\wlnewdoc{structure-exercise.tex}}{%
Click here to open this exercise in \wllogo{}}}
\end{center}
\begin{itemize}
\item Once you have tried it,
\fbox{\href{\wlnewdoc{structure-exercise-solution.tex}}{%
click here to see the solution}}.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Figures and Tables}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Outline}
\begin{multicols}{2}
\tableofcontents[currentsection]
\end{multicols}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Graphics}
\begin{frame}[fragile]{\insertsubsection}
\begin{itemize}
\item Requires the \bftt{graphicx} package, which provides the
\cmdbs{includegraphics} command.
\item Supported graphics formats include JPEG, PNG and PDF (usually).
\end{itemize}
\begin{exampletwouptiny}
\includegraphics[
width=0.5\textwidth]{big_chick}
\includegraphics[
width=0.3\textwidth,
angle=270]{big_chick}
\end{exampletwouptiny}
\tiny{Image from \url{http://www.andy-roberts.net/writing/latex/importing_images}}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{Interlude: Optional Arguments}
\begin{itemize}
\item We use square brackets \keystrokebftt{[} \keystrokebftt{]} for optional
arguments, instead of braces \keystrokebftt{\{} \keystrokebftt{\}}.
\item \cmdbs{includegraphics} accepts optional arguments that allow you to transform the
image when it is included. For example, \bftt{width=0.3\cmdbs{textwidth}} makes
the image take up 30\% of the width of the surrounding text (\cmdbs{textwidth}).
\item \cmdbs{documentclass} accepts optional arguments, too. Example:
\mint{latex}|\documentclass[12pt,twocolumn]{article}|
\vskip 3ex
makes the text bigger (12pt) and puts it into two columns.
\item Where do you find out about these? See the slides at the end of this
presentation for links to more information.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Floats}
\begin{frame}[fragile]{\insertsubsection}
\begin{itemize}
\item Allow \LaTeX{} to decide where the figure will go (it can ``float'').
\item You can also give the figure a caption, which can be referenced with
\cmdbs{ref}.
\end{itemize}
\begin{minipage}{0.55\linewidth}
\inputminted[fontsize=\scriptsize,frame=single,resetmargins]{latex}%
{media-graphics.tex}
\end{minipage}
\begin{minipage}{0.35\linewidth}
\includegraphics[width=\textwidth,clip,trim=2in 5in 3in 1in]{media-graphics.pdf}
\end{minipage}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Tables}
\begin{frame}[fragile]{\insertsubsection}
\begin{itemize}
\item Tables in \LaTeX{} take some getting used to.
\item Use the built-in \bftt{tabular} environment (the \bftt{tabularx} package provides an extended version).
\item The argument specifies column alignment --- \textbf{l}eft, \textbf{r}ight, \textbf{r}ight.
\begin{exampletwouptiny}
\begin{tabular}{lrr}
Item & Qty & Unit \$ \\
Widget & 1 & 199.99 \\
Gadget & 2 & 399.99 \\
Cable & 3 & 19.99 \\
\end{tabular}
\end{exampletwouptiny}
\item It also specifies vertical lines; use \cmdbs{hline} for horizontal lines.
\begin{exampletwouptiny}
\begin{tabular}{|l|r|r|} \hline
Item & Qty & Unit \$ \\\hline
Widget & 1 & 199.99 \\
Gadget & 2 & 399.99 \\
Cable & 3 & 19.99 \\\hline
\end{tabular}
\end{exampletwouptiny}
\item Use an ampersand \keystrokebftt{\&} to separate columns and a double backslash \keystrokebftt{\bs}\keystrokebftt{\bs} to start a new row (like in the \bftt{align*} environment that we saw in part 1).
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\addtocontents{toc}{\newpage}
\section{Bibliographies}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Outline}
\begin{multicols}{2}
\tableofcontents[currentsection]
\end{multicols}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Bib\TeX}
\begin{frame}[fragile]{\insertsubsection{} 1}
\begin{itemize}
\item Put your references in a \bftt{.bib} file in `bibtex' database format:
\inputminted[fontsize=\scriptsize,frame=single]{latex}{bib-example.bib}
\item Most reference managers can export to bibtex format.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{\insertsubsection{} 2}
\begin{itemize}
\item Each entry in the \bftt{.bib} file has a \emph{key} that you can use to
reference it in the document. For example, \bftt{Jacobson1999Towards} is the key for this article:
\begin{minted}[fontsize=\small,frame=single]{latex}
@Article{Jacobson1999Towards,
author = {Van Jacobson},
...
}
\end{minted}
\item It's a good idea to use a key based on the name, year and title.
\item \LaTeX{} can automatically format your in-text citations and generate a
list of references; it knows most standard styles, and you can design your own.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}[fragile]{\insertsubsection{} 3}
\begin{itemize}
\item Use the \bftt{natbib} package (recommended).
\item Use \cmdbs{citet} and \cmdbs{citep} to insert citations by key.
\item Reference \cmdbs{bibliography} at the end, and specify a \cmdbs{bibliographystyle}.
\end{itemize}
\begin{minipage}{0.55\linewidth}
\inputminted[fontsize=\scriptsize,frame=single,resetmargins]{latex}%
{bib-example.tex}
\end{minipage}
\begin{minipage}{0.35\linewidth}
\includegraphics[width=\textwidth,clip,trim=1.8in 5in 1.8in 1in]{bib-example.pdf}
\end{minipage}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Exercise}
\begin{frame}[fragile]{Exercise: Putting it All Together}
Add an image and a bibliography to the paper from the previous exercise.
\begin{enumerate}
\item Download these example files to your computer.
\begin{center}
\fbox{\href{\fileuri/big_chick.png?dl=1}{Click to download example image}}
\fbox{\href{\fileuri/bib-exercise.bib?dl=1}{Click to download example bib file}}
\end{center}
\item Upload them to writeLaTeX (use the files menu).
\item (To find the keys in the \bftt{.bib} file, you'll have to open it in
Notepad on your computer --- you can't view it online in writeLaTeX, yet.)
\end{enumerate}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{What's Next?}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}{Outline}
\begin{multicols}{2}
\tableofcontents[currentsection]
\end{multicols}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{More Neat Things}
\begin{frame}[fragile]{\insertsubsection}
\begin{itemize}
\item Add the \cmdbs{tableofcontents} command to generate a table of contents
from the \cmdbs{section} commands.
\item Change the \cmdbs{documentclass} to
\mint{latex}!\documentclass{scrartcl}!
or
\mint{latex}!\documentclass[12pt]{IEEEtran}!
\item Define your own command for a complicated equation:
\begin{exampletwouptiny}
\newcommand{\rperf}{%
\rho_{\text{perf}}}
$$
\rperf = {\bf c}'{\bf X} + \varepsilon
$$
\end{exampletwouptiny}
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{More Neat Packages}
\begin{frame}{\insertsubsection}
\begin{itemize}
\item \bftt{beamer}: for presentations (like this one!)
\item \bftt{todonotes}: comments and TODO management
\item \bftt{tikz}: make amazing graphics
\item \bftt{pgfplots}: create graphs in \LaTeX
\item \bftt{spreadtab}: create spreadsheets in \LaTeX
\item \bftt{gchords}, \bftt{guitar}: guitar chords and tabulature
\item \bftt{cwpuzzle}: crossword puzzles
\end{itemize}
See \url{https://www.writelatex.com/examples} and \url{http://texample.net} for
examples of (most of) these packages.
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Installing \LaTeX{}}
\begin{frame}{\insertsubsection}
\begin{itemize}
\item To run \LaTeX{} on your own computer, you'll want to use a \LaTeX{}
\emph{distribution}. A distribution includes a \bftt{latex} program
and (typically) several thousand packages.
\begin{itemize}
\item On Windows: \href{http://miktex.org/}{Mik\TeX}
\item On Linux: \href{http://tug.org/texlive/}{\TeX Live}
\item On Mac: \href{http://tug.org/mactex/}{Mac\TeX}
\end{itemize}
\item You'll also want a text editor with \LaTeX{} support. See \url{http://en.wikipedia.org/wiki/Comparison_of_TeX_editors} for a list of (many) options.
\item You'll also have to know more about how \bftt{latex} and its related tools
work --- see the resources on the next slide.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Online Resources}
\begin{frame}{\insertsubsection}
\begin{itemize}
\item \href{http://en.wikibooks.org/wiki/LaTeX}{The \LaTeX{} Wikibook} ---
excellent tutorials and reference material.
\item \href{http://tex.stackexchange.com/}{\TeX{} Stack Exchange} --- ask
questions and get excellent answers incredibly quickly
\item \href{http://www.latex-community.org/}{\LaTeX{} Community} --- a large
online forum
\item \href{http://ctan.org/}{Comprehensive \TeX{} Archive Network (CTAN)} ---
over four thousand packages plus documentation
\item Google will usually get you to one of the above.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{frame}
\begin{center}
Thanks, and happy \TeX{}ing!
\end{center}
\end{frame}
\end{document}
% -- latex understands words, sentences and paragraphs
Words are separated by one or more spaces. Paragraphs are separated by
one or more blank lines. The output is not affected by adding extra
spaces or extra blank lines to the input file.
Double quotes are typed like this: ``quoted text''.
Single quotes are typed like this: `single-quoted text'.
Emphasized text is typed like this: \emph{this is emphasized}.
Bold text is typed like this: \textbf{this is bold}.
-- Adding structure to your document
\section{Hello}
\subsection{World}
\subsection{Foo}
\subsubsection*{Stuff} % star form
\subsubsection*{Results}
-- Labels and cross-references
\label{sec:intro}
\label{sec:method}
\ref{sec:method}
--> maybe introduce the prettyref package here.
-- Mathematics
Inline mathematics: $x + y < 7$.
'Displayed' mathematics:
\begin{equation}
\end{equation}
\begin{equation*}
\end{equation*}
\begin{align}
\end{align}
-- Figures
- Need the graphicx package.
- here we can start introducing options
\includegraphics[width=\textwidth]{}
- where do you find out about these options? --> link to the Wikibook
-- Floating Figures
\begin{figure}
\includegraphics{...}
\caption{\label{}Here is a caption.}
\end{figure}
-- Tables
- not the nicest part of LaTeX
\usepackage{tabularx}
\begin{tabular}{llrr}
Item & Quantity & Price (\$) & Amount \\
Widget & 1 &
\end{tabular}
Bonus points: check out the fp package and the spreadtab package.
-- Document Classes
a .cls file
article
some journal templates come with one
-- Bibliographies
-- For Typesetting Geeks
- dashes: -, --, ---
- ellipsis.
- controlling spaces: ~, \ , \,, \@
- spacing after periods (et al., etc.)
- Nested quotation marks: ``\,`
\vskip 2ex
\item Use the \emph{star form} to display an equation without a number.
\begin{exampletwouptiny}
\begin{equation*}
F(x) = \int_{a}^{x}{f(t) dt}
\end{equation*}
\end{exampletwouptiny}
\begin{itemize}
\item \bftt{equation} and \bftt{equation*} are called \emph{environments}.
\begin{itemize}
\item The \cmdbs{begin} and \cmdbs{end} commands define the environment.
\item The \cmd{\$} also starts and ends an environment.
\item Some commands are defined only within certain environments.
\item Some commands behave differently in different environments.
\end{itemize}
\end{itemize}
\end{block}
\begin{center}
\fbox{\href{http://ctan.org/}{The Comprehensive \TeX{} Archive Network (CTAN)}}
\end{center}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Typography tweaks}
\begin{frame}{\insertsubsection}
\begin{tabular}{lll}
& character name & used mainly for \ldots \\\hline
\bftt{\bs} & backslash & commands, tables \\
\bftt{\{} & open brace & commands \\
\bftt{\}} & close brace & commands \\
\bftt{\%} & percent sign & comments \\
\bftt{\#} & hash (pound / sharp) sign & custom commands \\
\bftt{\$} & dollar sign & equations \\
\bftt{\_} & underscore & equations (subscripts) \\
\bftt{\^{}} & caret & equations (superscripts) \\
\bftt{\&} & ampersand & tables \\
\bftt{\~{}} & tilde & spacing \\
\end{tabular}
\end{frame}
%\item We've used several environments:
%\vskip 1ex
%{\scriptsize
%\begin{tabular}{ll}
%\cmdbs{begin}\bftt{\{document\}}\ldots\cmdbs{end}\bftt{\{document\}} &
% document environment \\
%\cmdbs{begin}\bftt{\{itemize\}}\ldots\cmdbs{end}\bftt{\{itemize\}} &
% itemized list environment \\
%\bftt{\$\ldots\$} & \emph{in-text} math environment \\
%\bftt{\$\$\ldots\$\$} & \emph{displayed} math environment \\
%\cmdbs{begin}\bftt{\{equation\}}\ldots\cmdbs{end}\bftt{\{equation\}} &
% displayed math environment w/ number
%\end{tabular}
%}
|
|
\documentclass{report}
\begin{document}
\chapter{Introduction}
The goal of the language is to let you write code efficiently and easily while remaining extensible, even at the core-language level.
The language is thereby inspired by Rust, C\#, Python and C/C++.
\chapter{Keywords}
\section{Variables}
\paragraph{const} Defines a constant
\paragraph{let} Defines a variable
\section{Flow Control keywords}
\subsection {Conditional}
\paragraph{if} Indicates an if statement
\paragraph{else} Indicates an else clause of an if statement
\paragraph{when} Indicates a pattern-matching construct
\subsection{Procedures}
\paragraph{fn} Function: a pure function, without side effects
\paragraph{rt} Routine: an impure function that may have side effects
\section{Structures}
\paragraph{struct} Indicates a data structure
\paragraph{enum} Indicates an enumeration
\section {Other}
\paragraph{infix} Indicates that a function can be placed between two operands (constants or variables).
\paragraph{assoc} Indicates that a procedure is associated with a structure.
\chapter{Operators}
\section{Logical}
\paragraph{$||$} Logical or
\paragraph{$\&\&$} Logical and
\paragraph{$!$} not
\section{Assignment}
\paragraph{$=$} Assigns a certain value to a variable or constant.
\subsection{Combined}
\begin{itemize}
\item{+}
\item{-}
\item{*}
\item{/}
\item{}
\end{itemize}
\chapter{Grammar}
\section{Comments}
\begin{verbatim}
// I'm a single line comment
/*
* I'm a multi-line
* comment
 */
\end{verbatim}
\subsection{Variables}
\begin{verbatim}
let a = 0;
let b, c = 1;
let d = 2, e = 3;
\end{verbatim}
\subsection{Constants}
Constants are immutable variables.
\begin{verbatim}
const a = 0;
const b, c = 1;
const d = 2, e = 3;
\end{verbatim}
\section{Flow control}
\subsection{if-statement}
\begin{verbatim}
if a == b {
// code...
}
\end{verbatim}
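An \texttt{else} clause (assuming it combines with the brace syntax shown above, following the \texttt{else} keyword from the keywords chapter) might look like:
\begin{verbatim}
if a == b {
// code...
} else {
// code...
}
\end{verbatim}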
\subsection {Pattern match}
\begin{verbatim}
when x {
0 => print("Hello World");
1 => print("Hello John");
}
\end{verbatim}
\subsection{Variable declaration}
\begin{verbatim}
rt hello(x: isize) {
const a = when x {
0 => "Aap";
1 => "Noot";
2 => "Mies";
}
print(a);
}
\end{verbatim}
\subsection{Variable assignment}
\section{Typing}
\subsection {Structs}
\begin{verbatim}
struct Structure {
a, b: int;
c: float;
}
\end{verbatim}
\subsection {Enums}
\begin{verbatim}
enum Seasons {
winter,
winter,
spring,
summer,
autumn
}
\end{verbatim}
\section{Literals}
\subsection {Strings}
\section {Routines}
\subsection {Callables}
Callables are functions or routines that can be called from somewhere else.
\begin{verbatim}
rt print_hello(){
print("Hello")
}
\end{verbatim}
\subsection {Functions}
Functions are pure and therefore cannot have side effects:
\begin{verbatim}
fn sum(lhs: int, rhs: int) {
return lhs + rhs;
}
\end{verbatim}
\subsection{Shorthand notation}
\begin{verbatim}
rt print_hello => print("Hello");
fn sum(a: int, b: int) => a + b;
\end{verbatim}
\subsection {Infix}
\begin{verbatim}
infix fn + (a, b)
\end{verbatim}
\chapter{Examples}
\section{Hello World}
\begin{verbatim}
rt main() => print("Hello World!");
\end{verbatim}
\section{Operator Overloading}
Operator overloading can be achieved by defining the operator as an infix callable.
\begin{verbatim}
struct Vector [int:2];
infix fn + (rhs: Vector, lhs: Vector) => [ lhs[0] + rhs[0], lhs[1] + rhs[1] ]
rt main() {
const a = Vector[1, 2];
const b = Vector[3, 4];
print(a + b); // Vector[4, 6]
}
\end{verbatim}
\section {Custom operators}
\begin{verbatim}
infix fn % (rhs: int, lhs: int) => rhs - (rhs / lhs) * lhs; // integer division truncates
rt main() {
print(5 % 2); // 1
}
\end{verbatim}
\section{Associated callable declaration}
\begin{verbatim}
struct Vector [int:2];
assoc rt Vector::print() => print(self[0], self[1]);
rt main() {
const a = Vector[1, 2];
a.print();
}
\end{verbatim}
\end{document}
|
|
\section{Conclusion}
In this paper, we demonstrated that the dispersion of velocities in the
direction of galactic latitude, \vb, can be used as an age proxy, by showing
that there is no strong evidence for mass-dependent heating in low-mass
\kepler\ dwarfs: the velocity dispersions of K and M dwarfs, whose
main-sequence lifetimes are longer than around 11 Gyr, do not appear to
increase with decreasing mass.
Although {\it vertical} velocity, \vz, is the quantity that most strongly
traces time-dependent orbital heating in the disc of the Galaxy, most stars
with measured rotation periods do not yet have radial velocities, so we used
velocity in the direction of Galactic latitude, \vb, as a proxy for \vz.
Using stars in the GUMS simulation, we showed that using \vb\ as a proxy for
\vz\ introduces an additional velocity dispersion, which increases with
increasing Galactic latitude.
For this reason we did not attempt to convert \vb\ dispersions into ages using
an age-velocity dispersion relation.
However, after removing high-latitude ($b>15^\circ$) stars from the sample, we
confirmed that using \vb\ instead of \vz\ does not introduce any
mass-dependent velocity dispersion bias into the sample.
We therefore assumed that \vb\ velocity dispersion can be used to accurately
rank stars by age, \ie\ a group of stars with a large velocity dispersion is,
on average, older than a group of stars with a small velocity dispersion.
% We found that old groups of cool dwarfs, selected to be coeval using the
% \citet{angus2019} gyrochronology relation, do {\it not} have the same velocity
% dispersion across all temperatures.
We used the \vb\ velocity dispersions of stars in the \mct\ catalog to explore
the evolution of stellar rotation period as a function of effective
temperature and age.
We found that the \citet{angus2019} relation, which is based on the
period-color relation of the 650 Myr Praesepe cluster, does not correctly
describe the period-age-\teff\ relation for old stars.
At young ages, rotation period is anti-correlated with \teff: cooler stars
spin more slowly than hotter stars of the same age.
However, at intermediate ages the relation flattens out and K dwarfs rotate at
the same rate, regardless of mass.
At old ages, it seems that cooler K dwarfs spin more rapidly than hotter K
dwarfs of the same age.
We showed that the period-\teff\ relations change shape over time in a way
that qualitatively agrees with theoretical models which include a
mass-dependent core-envelope angular momentum transport \citep{spada2019}.
% The period-color/\teff\ relation seems to start flattening out after $\sim$1
% Gyr (see figure \ref{fig:age_cut}), potentially around the same age, or just
% older than the period gap which is located at a gyrochronal age of around
% $\sim$1.1 Gyr (see figure \ref{fig:dispersion_period_teff}).
We also found that the oldest stars in the \mct\ catalog are cooler than 4500
K, which suggests that lower-mass stars remain active for longer, allowing
their rotation periods to be measured at older ages.
We speculate that the rotation period gap \citep{mcquillan2014} may separate
a young regime where stellar rotation periods decrease with increasing mass
from an old regime where periods increase with increasing mass, however more
data are needed to provide a conclusive result.
The velocity dispersions of stars increase smoothly across the rotation period
gap, indicating that the gap does not separate two distinct populations.
Finally, we used kinematics to reveal a population of synchronized binaries
with rotation periods less than around 10 days.
% We outlined a number of scenarios which could provide alternative
% explanations for these observations, including incorrect dust corrections for
% the lowest-mass stars and an excess of companions increasing the velocity
% dispersion for these stars.
% If the period-color/\teff\ relation does invert at old ages as our results
% suggest, this would be a paradigm shift for gyrochronology.
% Stellar spin-down rate is thought to be directly tied to magnetic field
% strength, and the deeper convection zones of cooler stars generate stronger
% magnetic fields which {\it should} lead to more efficient angular momentum
% loss.
% However, the micro- and macro-physics of stellar structure and evolution and
% magnetic dynamo models are extremely complicated and a lot is still unknown
% about the magnetic behavior of stars.
% Observations like these can provide useful constraints for physical models,
% and may help to reveal new physical processes at work in stars like our own
% Sun and other planet hosts.
|
|
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{array}
\usepackage{geometry}
\usepackage{stix}
\usepackage{microtype}
\usepackage[export]{adjustbox}
\usepackage{float}
%%%%%%%%%%%%%%%%%%
%% SETTINGS %%
%%%%%%%%%%%%%%%%%%
\geometry{%
left=2.5cm,
right=2.5cm,
top=1.5cm,
bottom=2.0cm
}
\setlength{\parindent}{0pt}
\pagestyle{empty}
\title{Dataset v0.5.0}
\begin{document}
\maketitle
\section{Description}
Each dataset is provided as R1, R2 and interleaved files.\par
Base qualities are generated randomly from three ranges: good (25--30), medium (18--21) and bad (1--3).
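For illustration, quality strings in these ranges can be generated as follows (a minimal Python sketch assuming the standard Phred+33 FASTQ encoding; the actual generator used to build the dataset may differ):
\begin{verbatim}
import random

def quality_string(length, qmin, qmax, seed=None):
    """Draw Phred scores uniformly in [qmin, qmax] and encode them
    as Phred+33 ASCII characters, as on a FASTQ quality line."""
    rng = random.Random(seed)
    return "".join(chr(rng.randint(qmin, qmax) + 33) for _ in range(length))

# "good" range used in this dataset: Phred 25-30
print(quality_string(10, 25, 30, seed=1))
\end{verbatim}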
\section{GoldInput}
\subsection{Big - BIG}
Several groups of sequences with different length ranges:
\begin{itemize}
\item 1500 sequences of 30 bp
\item 2000 sequences of 40 bp
\item 3000 sequences of 50 bp
\item 1500 sequences of 75 bp
\item 2000 sequences of 100 bp
\end{itemize}
The R1 and R2 reads of a pair have the same length.\par
Length groups are chosen randomly.\par
Good quality is used.\par
\subsection{Length Minimum - LENMIN}
The sequences are as follows:
\begin{table}[H]
\begin{tabular}{|l|c|c|}\hline
\textbf{Record} & \textbf{R1} (bp) & \textbf{R2} (bp) \\ \hline
1 & 20 & 19 \\ \hline
2 & 20 & 20 \\ \hline
3 & 20 & 21 \\ \hline
4 & 49 & 50 \\ \hline
5 & 51 & 50 \\ \hline
6 & 50 & 50 \\ \hline
7 & 100 & 99 \\ \hline
8 & 100 & 100 \\ \hline
9 & 101 & 101 \\ \hline
\end{tabular}
\end{table}
Good quality is used.
\subsection{Quality Sliding Window - QUALSLD}
The sequences are as follows:
\begin{table}[H]
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline
\textbf{Record} & \multicolumn{4}{c|}{R1} & \multicolumn{4}{c|}{R2} \\ \hline
Quality & Good & Medium & Bad & Total length & Good & Medium & Bad & Total length\\ \hline
1 & 0-5 & & & 5 & 0-15 & & & 15 \\ \hline
2 & 0-15 & & & 15 & 0-5 & & & 5 \\ \hline
3 & 0-74,81-100 & 75-80 & & 100 & 0-100 & & & 100 \\ \hline
4 & 0-100 & & & 100 & 0-74,81-100 & 75-80 & & 100 \\ \hline
5 & 0-74,81-100 & 75-80 & & 100 & 0-74,81-100 & 75-80 & & 100 \\ \hline
6 & 0-49,71-100 & 50-70 & & 100 & 0-100 & & & 100 \\ \hline
7 & 0-100 & & & 100 & 0-69,81-100 & 70-80 & & 100 \\ \hline
8 & & 0-2,7-100 & 3-6 & 100 & 0-100 & & & 100 \\ \hline
\end{tabular}
\end{table}
\subsection{Quality Tail - QUALTAIL}
The sequences are as follows:
\begin{table}[H]
\begin{tabular}{|l|c|c|c|c|c|c|c|c|} \hline
\textbf{Record} & \multicolumn{4}{c|}{R1} & \multicolumn{4}{c|}{R2} \\ \hline
Quality & Good & Medium & Bad (Q30) & Total length & Good & Medium & Bad (Q30) & Total length\\ \hline
1 & 0-5 & & & 5 & 0-15 & & & 15 \\ \hline
2 & 0-99 & & 5 & 15 & 0-5 & & & 5 \\ \hline
3 & 0-99 & & 100- (5) & 150 & 0-150 & & & 150 \\ \hline
4 & 0-150 & & & 150 & 0-99 & & 100- (5) & 150 \\ \hline
5 & 0-99 & & 100- (5) & 150 & 0-99 & & 100- (5) & 150 \\ \hline
6 & 0-119 & & 120- (10) & 150 & 0-150 & & & 150 \\ \hline
7 & & 0-69,81-100 & 70-80 & 150 & & 0-69,81-100 & 70-80 & 150 \\ \hline
\end{tabular}
\end{table}
\subsection{Information Dust - INFODUST}
The sequences are as follows:
\begin{table}[H]
\begin{tabular}{|l|c|c|c|c|} \hline
\textbf{Record} & \multicolumn{2}{c|}{R1} & \multicolumn{2}{c|}{R2} \\ \hline
Quality & Score & Length & Score & Length\\ \hline
1 & 1.06 & 50 & 1.06 & 50 \\ \hline
2 & 1.06 & 50 & 1.42 & 50 \\ \hline
3 & 1.98 & 150 & 2.40 & 150 \\ \hline
4 & 2.97 & 200 & 4.45 & 300 \\ \hline
5 & 6.03 & 300 & 2.40 & 150 \\ \hline
6 & 3.97 & 200 & 4.45 & 300 \\ \hline
\end{tabular}
\end{table}
Good quality is used.
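For reference, a DUST-like low-complexity score can be sketched as below (an illustrative formula based on repeated 3-mers; the exact formula used to produce the scores in the table above is not specified here):
\begin{verbatim}
from collections import Counter

def dust_like_score(seq):
    """Count repeated 3-mers: sum c*(c-1)/2 over 3-mer counts,
    normalised by the number of 3-mer positions.
    Higher score = lower complexity."""
    triplets = [seq[i:i + 3] for i in range(len(seq) - 2)]
    counts = Counter(triplets)
    n = len(triplets)
    return sum(c * (c - 1) / 2 for c in counts.values()) / n if n else 0.0

print(dust_like_score("AAAAAAAAAA"))        # homopolymer: high score
print(dust_like_score("ACGTACGGTCAGTCAT"))  # mixed sequence: low score
\end{verbatim}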
\subsection{Information N - INFON}
The sequences are as follows:
\begin{table}[H]
\begin{tabular}{|l|c|c|c|c|} \hline
\textbf{Record} & \multicolumn{2}{c|}{R1} & \multicolumn{2}{c|}{R2} \\ \hline
Quality & N & Length & N & Length\\ \hline
1 & 0 & 50 & 0 & 50 \\ \hline
2 & 1 & 50 & 0 & 150 \\ \hline
3 & 0 & 150 & 0 & 150 \\ \hline
4 & 2 & 200 & 3 & 300 \\ \hline
5 & 4 & 200 & 4 & 300 \\ \hline
6 & 0 & 300 & 0 & 300 \\ \hline
\end{tabular}
\end{table}
\section{GoldOutput}
\subsection{BIG}
\textbf{BIG-A}: keep sequences of at least 35 bp\\
\textbf{BIG-B}: keep sequences of at least 55 bp
\subsection{LENGTHMIN}
\textbf{LENGTHMIN-A}: remove sequences shorter than 20 bp\\
\textbf{LENGTHMIN-B}: remove sequences shorter than 50 bp\\
\textbf{LENGTHMIN-C}: remove sequences shorter than 100 bp
\subsection{QUALSLD}
\textbf{QUALSLD-A} : R1+R2 : 1, 2, 7, 8 removed\\
\textbf{QUALSLD-B} : R1+R2 : 1, 2 removed, R2 : 7 truncated\\
\textbf{QUALSLD-C} : R1+R2 : 1, 2, 8 removed, R2 : 7 truncated\\
\textbf{QUALSLD-D} : R1+R2 : 1, 2, 8 removed, R2 : 7 truncated\\
\textbf{QUALSLD-E} : R1 : 8 truncated, R2 : 7 truncated\\
\textbf{QUALSLD-F} : R2 : 7 truncated
\subsection{QUALTAIL}
\textbf{QUALTAIL-A} : 6 truncated\\
\textbf{QUALTAIL-B} : 3, 7 truncated, R1 : 4 truncated, R2 : 5 truncated\\
\textbf{QUALTAIL-C} : without 7, 3 truncated, R1 : 4 truncated, R2 : 5 truncated\\
\textbf{QUALTAIL-D} : without 3, 4, 5, 7\\
\textbf{QUALTAIL-E} : without 1,2 ; 8 truncated\\
\textbf{QUALTAIL-F} : without 1, 2, 8\\
\subsection{INFODUST}
\textbf{INFODUST-A} : under 3, without 3, 4, 5\\
\textbf{INFODUST-B} : under 5, without 4
\subsection{INFON}
\textbf{INFON-A} : without 2, 4, 5\\
\textbf{INFON-B} : without 5\\
\subsection{Other}
\textbf{EMPTY.fastq} : empty file
\section{Tests}
\subsection{Description}
Tests fall into two main groups, General and Trimmer, each divided into several categories.\\
Categories: GenTrim, GenDiscard, GenFormat, GenThread, GenCompress, TrimLengthMin, TrimQualTail, TrimQualSld, TrimInfoDust, TrimInfoN\par
For tests in the General group no fixed rules apply; the most suitable combinations are used.\\
For tests in the Trimmer group, only paired trimming is tested, with 4 datasets: 1 without discarded sequences and 3 with sequences removed.
For each test we state its goal, the input used and the output file to compare against.
\subsection{GenTrim}
Goal: test that trimming works properly for single R1 and paired R1--R2 reads.\par
Gold Input : LENGTHMIN\\
Sequencing: paired input/output\par
Single sequencing:
\begin{table}[H]
\begin{tabular}{|l|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Output} \\ \hline
\textbf{A} & Absence of trimming & LenMin - 10 & Same input\\ \hline
\textbf{B} & 3 sequences trimmed & LenMin - 20 & LENGTH-A\\ \hline
\textbf{C} & 5 sequences trimmed & LenMin - 50 & LENGTH-B\\ \hline
\end{tabular}
\end{table}
Paired sequencing
\begin{table}[H]
\begin{tabular}{|l|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Output}\\ \hline
\textbf{D} & Absence of trimming & LenMin - 10 & Same input\\ \hline
\textbf{E} & R1+R2 : 2, R1 : 1 & LenMin - 20 & LENGTH-A\\ \hline
\textbf{F} & R1+R2 : 3, R1 : 1, R2, 1 & LenMin - 50 & LENGTH-B\\ \hline
\end{tabular}
\end{table}
\subsection{GenDiscard}
Goal: check un-kept (discarded) sequences.\\
Gold Input : LENGTHMIN\\
Sequencing : paired input/output
\begin{table}[H]
\begin{tabular}{|l|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Output} \\ \hline
\textbf{A} & Absence of trimming & LenMin - 10 & Empty file \\ \hline
\textbf{B} & R1+R2 : 2, R1 : 1 & LenMin - 20 & LENGTHMIN-A.R1.single.discard.fastq \\ \hline
\textbf{C} & R1+R2 : 3, R1 : 1, R2, 1 & LenMin - 50 & LENGTHMIN-B.discard.fastq \\ \hline
\end{tabular}
\end{table}
\subsection{GenFormat}
Goal: check that whether input and output are paired or interleaved makes no difference.\\
Gold Input : LENGTHMIN
\begin{table}[H]
\begin{tabular}{|l|c|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Input} & \textbf{Output} \\ \hline
\textbf{A} & Absence of trimming & LenMin - 10 & Input - Paired & Input - Interleaved \\ \hline
\textbf{B} & Absence of trimming & LenMin - 10 & Input - Interleaved & Input - Paired \\ \hline
\textbf{C} & R1+R2 : 3, R1 : 1, R2, 1 & LenMin - 50 & Input - Paired & LENGTHMIN-B - Interleaved\\ \hline
\textbf{D} & R1+R2 : 3, R1 : 1, R2, 1 & LenMin - 50 & Input - Interleaved & LENGTHMIN-B - Paired\\ \hline
\end{tabular}
\end{table}
\subsection{GenThread}
Goal: test that threading changes neither the trimmed data nor the order of sequences.\\
Gold Input : BIG\\
Sequencing : paired input/output
\begin{table}[H]
\begin{tabular}{|l|c|c|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Read batch} & \textbf{Threads} & \textbf{Output} \\ \hline
\textbf{A} & Absence of trimming & LenMin - 10 & 100 & 2 & Same input \\ \hline
\textbf{B} & Absence of trimming & LenMin - 10 & 100 & 4 & Same input\\ \hline
\textbf{C} & Absence of trimming & LenMin - 10 & 100 & 6 & Same input\\ \hline
\textbf{D} & Absence of trimming & LenMin - 10 & 100 & 8 & Same input\\ \hline
\textbf{E} & 6500 sequences trimmed & LenMin - 55 & 100 & 2 & BIG-B\\ \hline
\textbf{F} & 6500 sequences trimmed & LenMin - 55 & 100 & 4 & BIG-B\\ \hline
\textbf{G} & 6500 sequences trimmed & LenMin - 55 & 100 & 6 & BIG-B\\ \hline
\textbf{H} & 6500 sequences trimmed & LenMin - 55 & 100 & 8 & BIG-B \\ \hline
\end{tabular}
\end{table}
\subsection{GenCompress}
Goal: test gzip compression.\\
Gold Input : BIG\\
Sequencing : paired input/output
\begin{table}[H]
\begin{tabular}{|l|c|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Input} & \textbf{Output}\\ \hline
\textbf{A} & Absence of trimming & LenMin - 10 & Paired - raw & Paired - gzip\\ \hline
\textbf{B} & Absence of trimming & LenMin - 10 & Paired - gzip & Paired - raw\\ \hline
\textbf{C} & 6500 sequences trimmed & LenMin - 55 & Paired - raw & BIG-B - gzip\\ \hline
\end{tabular}
\end{table}
\subsection{TrimLengthMin}
Goal: test the minimum-length trimmer.\\
Gold Input : LENGTHMIN\\
Sequencing : paired input/output
\begin{table}[H]
\begin{tabular}{|l|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Output} \\ \hline
\textbf{A} & Absence of trimming & 10 & Same input \\ \hline
\textbf{B} & 5 sequences trimmed, 3 from paired, 1 from R1, 1 from R2 & 50 & LENGTH-B\\ \hline
\textbf{C} & 8 sequences trimmed & 100 & LENGTH-C\\ \hline
\end{tabular}
\end{table}
\subsection{TrimQualSld}
Goal: test the quality sliding-window trimmer at the end of reads.\\
Gold Input : QUALSLD\\
Sequencing : paired input/output
\begin{table}[H]
\begin{tabular}{|l|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Output} \\ \hline
\textbf{A} & Absence of trimming & 1:5 & Same input \\ \hline
\textbf{B} & Remove on R1 and R2 & 20:25 & QUALSLD-A\\ \hline
\textbf{C} & Remove on both, truncated on 1 side & 10:10 & QUALSLD-B\\ \hline
\textbf{D} & Remove on both, truncated on 1 side & 17:4 & QUALSLD-C\\ \hline
\textbf{E} & Remove on both, truncated on 1 side & 17:10 & QUALSLD-D\\ \hline
\textbf{F} & Truncated on both side & 4:1 & QUALSLD-E\\ \hline
\textbf{G} & Truncated only on 1 side & 4:4 & QUALSLD-F\\ \hline
\end{tabular}
\end{table}
\subsection{TrimQualTail}
Goal: test the tail-quality trimmer.\\
Gold Input : QUALTAIL\\
Sequencing : paired input/output
\begin{table}[H]
\begin{tabular}{|l|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Output} \\ \hline
\textbf{A} & Absence of trimming & 2:2 & Same input \\ \hline
\textbf{B} & 3 sequences truncated & 5:2 & QUALTAIL-A\\ \hline
\textbf{C} & 3 sequences truncated & 5:2:60 & QUALTAIL-A\\ \hline
\textbf{D} & 3 sequences truncated & 5:5:60 & QUALTAIL-A\\ \hline
\textbf{E} & 3 sequences trimmed & 5:5:70 & QUALTAIL-B\\ \hline
\textbf{F} & 2 sequences trimmed, 1 sequence truncated & 3:10:46 & QUALTAIL-E\\ \hline
\textbf{G} & 1 sequence trimmed & 3:10:81 & QUALTAIL-C\\ \hline
\end{tabular}
\end{table}
\subsection{TrimInfoDust}
Goal: test the dust low-complexity trimmer.\\
Gold Input : INFODUST\\
Sequencing : paired input/output
\begin{table}[H]
\begin{tabular}{|l|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Output} \\ \hline
\textbf{A} & Absence of trimming & 1 & Same input\\ \hline
\textbf{B} & 3 sequences trimmed (1,2,3) & 2 & INFODUST-A\\ \hline
\textbf{C} & 5 sequences trimmed (1,2,3,4,5) & 3 & INFODUST-B\\ \hline
\end{tabular}
\end{table}
\subsection{TrimInfoN}
Goal: test the N-count trimmer.\\
Gold Input : INFON\\
Sequencing : paired input/output
\begin{table}[H]
\begin{tabular}{|l|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Output} \\ \hline
\textbf{A} & Absence of trimming & 5 & Same input\\ \hline
\textbf{B} & 3 sequences trimmed & 1 & INFON-A\\ \hline
\textbf{C} & 1 sequence trimmed & 4 & INFON-B\\ \hline
\end{tabular}
\end{table}
\subsection{Report file}
Goal: test the data in the report file.\\
Gold Input : QUALTAIL\\
Sequencing : paired input/output
\begin{table}[H]
\begin{tabular}{|l|c|c|c|} \hline
\textbf{Name} & \textbf{Test} & \textbf{Trimmer} & \textbf{Output} \\ \hline
\textbf{E} & 3 sequences trimmed & 5:5:70 & QUALTAIL-B\\ \hline
\textbf{F} & 2 sequences trimmed, 1 sequence truncated & 3:10:46 & QUALTAIL-E\\ \hline
%\textbf{G} & 1 sequence trimmed & 3:10:81 & QUALTAIL-C\\ \hline
\end{tabular}
\end{table}
\end{document}
|
|
% Part: lambda-calculus
% Chapter: lambda-definability
\documentclass[../../../include/open-logic-chapter]{subfiles}
\begin{document}
\chapter{Lambda Definability}
\begin{editorial}
This chapter is experimental. It needs more explanation, and the
material should be structured better into definitions and
propositions with proofs, and more examples.
\end{editorial}
\olimport{introduction}
\olimport{arithmetical-functions}
\olimport{pairs}
\olimport{truth-values}
%\olimport{lists}
\olimport{primitive-recursive-functions}
\olimport{fixpoints}
\olimport{minimization}
\olimport{partial-recursive-functions}
\olimport{lambda-definable-recursive}
\OLEndChapterHook
\end{document}
|
|
\documentclass[a4paper]{article}
\usepackage[utf8]{inputenc}
\usepackage[affil-it]{authblk}
\usepackage[justification=centering]{caption}
\usepackage[numbers]{natbib}
\usepackage{amssymb}
\usepackage{color}
\usepackage{mathtools}
\newcommand{\red}[1]{\textcolor{red}{#1}}
\newcommand{\R}{{\mathbb{R}}}
\newcommand{\norm}[1]{\left\lVert #1 \right\rVert}
\newcommand{\overbar}[1]{%
  \mkern 1.5mu\overline{\mkern-1.5mu#1\mkern-1.5mu}\mkern 1.5mu%
}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\argmax}{arg\,max}
\begin{document}
\title{Neural Machine Translation}
\author{Arul Selvam}
\author{Ehsan Nezhadian}
\affil{Rheinische Friedrich-Wilhelms-Universität Bonn}
\date{July 25, 2017}
\maketitle
\begin{abstract}
Machine translation (MT) is the task of automatically translating a text from
one natural language into another. Ever since the invention of modern computers,
MT has been one of the applications envisioned to be automated. In
recent years, progress in MT has been tremendous. In today's mobile-phone
era, the need for MT systems is only growing, and the demand has grown beyond
text translation alone: live translation of video conferences is possible with
current MT systems. In this article, we discuss the latest improvements in
MT research and highlight the most important breakthroughs.
\end{abstract}
\section{Deep Learning}
In the recent years, Deep learning has been making a big impact in various
fields of computer science. This impact is profound in the perceptual learning
task in the fields like computer vision, natural language processing, etc
\cite{lecun2015deep}. Though the idea of training a multi layered neural network
to perform function approximation is known to the research community for at
least a couple of decades\cite{schmidhuber2015deep}, due to the nature of
training problem requiring large computational resources, the multi layered
neural network remained less practical. With the advent of general purpose
graphical processing units (GPGPUs) and the availability of training data, the
neural networks are now a practical solution for many perceptual tasks. One of
the early success of deep learning was in the field of image classification. In
the annual ImageNet classification challenge \cite{deng2009imagenet}, AlexNet
\cite{krizhevsky2012imagenet} showed a remarkable improvement in the state-of-
the-art accuracy. Within a few years, more sophisticated architectures like
Google Inception Network \cite{szegedy2016rethinking}, Deep Residual Network
\cite{he2016deep}, etc., have improved the accuracy to be comparable with a
human in that task. The generality of the neural network made it easily possible
to be used for a wide variety of tasks. With more people working on deep
learning and the ideas arising in solving the problems being easily
transferable, the deep learning is the state-of-the-art for many learning tasks.
In the following section, we will briefly summarize the basics of deep learning.
\begin{figure}
\includegraphics[width=.99\linewidth]{img/artificial.jpg}
\caption{ An artificial neuron (perceptron)}
\label{fig:ann}
\end{figure}
The basic building block of a multilayer network, or in more technical terms a
multilayer perceptron (MLP), is the perceptron. The perceptron shown in
Fig.~\ref{fig:ann} loosely resembles a neuron in a human brain. A perceptron
takes a set of input values, computes a weighted sum of the inputs, and outputs
a scalar value after applying an activation function (usually non-linear).
The weights for computing the weighted sum and the threshold in the activation
function are initially set to random values and are learned from the training
data. Mathematically, a perceptron with weight vector $A$ and threshold
$c$ for a step activation function outputs, for an input vector $x$, the
following:
\[
f(x) =
\begin{cases}
1, & \text{if } A x > c \\
0, & \text{otherwise}
\end{cases}
\]
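A minimal Python sketch of this step-activation perceptron (the weights and input below are arbitrary example values):
\begin{verbatim}
import numpy as np

def perceptron(A, c, x):
    """Step-activation perceptron: weighted sum A.x thresholded at c."""
    return 1 if A @ x > c else 0

A = np.array([0.5, -0.2, 0.3])  # example weights
x = np.array([1.0, 2.0, 1.0])   # example input
print(perceptron(A, c=0.2, x=x))  # prints 1, since A.x = 0.4 > 0.2
\end{verbatim}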
A layer contains many neurons, and several layers are stacked one on top of the
other to form an MLP. An MLP with many layers is called a Deep Neural Network
(DNN). In the following section we discuss the properties of DNNs and how
they are useful in the context of supervised machine learning.
\begin{figure}
\includegraphics[width=.99\linewidth]{img/mlp.png}
\caption{Layered representation in a MLP}
\label{fig:mlp}
\end{figure}
\subsection{Supervised learning}
Supervised learning is the problem of predicting a new output $y^\prime$ for a
new input $x^\prime$, and set of training data that contains a set of input and
output $\{x \mapsto y\}$. Most of the supervised learning problems can be
formulated as either a classification or regression problem. Classification is
the problem of predicting the label for a given $x^\prime$ among a set of given
labels $\{Y\}$, while regression is predicting a continuous valued output for a
given input. The usual way of doing supervised learning is to extract features
from the given data and train a model to perform classification or regression.
\begin{equation*}
x
\xrightarrow[\text{Feature extraction}]{
\mathbb{D}(x)
}
x^\prime
_{intermediate}
\xrightarrow[\text{classifier/regressor}]{
\mathbb{F}(x^\prime | \theta)
}
y
\end{equation*}
The power of DNNs lies in the fact that the model learns the features
directly from the given training data, avoiding hand-engineered features.
\begin{equation*}
x \xrightarrow[ \text{hierarchical}] { \mathbb{M}(x | \Theta) } y
\end{equation*}
DNNs in which the neurons in one layer are connected to all the neurons in the
next layer are called feed-forward networks. Feed-forward networks are
hard to train since they have a large number of parameters. To make DNNs learn
useful representations for supervised learning, we need special types
of connections between the neurons. Two of the most commonly used DNN
architectures are Convolutional Neural Networks (CNNs) and Recurrent Neural
Networks (RNNs). In the following sections we discuss these architectures in
detail.
\subsubsection{Convolutional Neural Networks}
Convolutional neural networks are a class of feed-forward networks in which the
hidden layers are either convolutional, pooling, or fully connected. A
convolutional layer applies a convolution operation to the input and passes the
result to the next layer. A pooling layer combines the outputs of a number of
neurons from the previous layer into a single value. The most common pooling
operations are max-pooling, which selects the maximum value from its receptive
field, and average-pooling, which computes the average of the values in its
receptive field. A fully connected layer connects every neuron in one layer to
every neuron in the next layer. Fig.~\ref{fig:cnn} depicts a CNN.
\begin{figure}
\includegraphics[width=.99\linewidth]{img/cnn.png}
\caption{A CNN with a fully connected neurons in the last layer.}
\label{fig:cnn}
\end{figure}
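To make the pooling operation concrete, here is a minimal numpy sketch of non-overlapping $2 \times 2$ max-pooling (an illustration only, not the implementation of any particular framework):
\begin{verbatim}
import numpy as np

def max_pool2d(x, k=2):
    """Non-overlapping k x k max-pooling over a 2-D array."""
    h, w = x.shape
    x = x[:h - h % k, :w - w % k]   # crop to a multiple of k
    return x.reshape(h // k, k, w // k, k).max(axis=(1, 3))

x = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [9, 1, 2, 3],
              [4, 5, 6, 7]])
print(max_pool2d(x))  # [[4 8]
                      #  [9 7]]
\end{verbatim}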
\subsubsection{Recurrent Neural Networks}
The neurons in RNNs have recurrent self-connections, i.e. the outputs of the
neurons are fed back into their inputs. Thus the output at time step $t_1$ is
computed taking the output at time step $t_0$ as an input. The recurrent
connections are shown in Fig.\ref{fig:rnn}. This enables the RNNs to have an
internal memory. RNNs are trained with a modified version of back
propagation called \textbf{Back Propagation Through Time (BPTT)}
\cite{werbos1990backpropagation}.
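The recurrence can be sketched as follows; the weight shapes and the random
values are illustrative assumptions, not a trained model.

```python
# Sketch of the RNN recurrence h_t = f(x_t, h_{t-1}) with a tanh
# nonlinearity.
import numpy as np

def rnn_step(x_t, h_prev, W_xh, W_hh, b):
    """One time step: the output depends on the current input AND the
    previous output fed back through the recurrent connection."""
    return np.tanh(x_t @ W_xh + h_prev @ W_hh + b)

rng = np.random.default_rng(0)
W_xh = rng.normal(size=(3, 4)) * 0.1   # input -> hidden
W_hh = rng.normal(size=(4, 4)) * 0.1   # hidden -> hidden (recurrence)
b = np.zeros(4)

h = np.zeros(4)                        # initial state
for x_t in rng.normal(size=(5, 3)):    # unroll over 5 time steps
    h = rnn_step(x_t, h, W_xh, W_hh, b)
```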
RNNs in general are harder to train, and this is particularly evident when
BPTT is done over a large number of time steps. This is because the gradients
simply converge to zero after a few time steps. This problem, known as the
\textbf{vanishing gradients problem}, is a well studied phenomenon
\cite{bengio1994learning}.
To alleviate the vanishing gradients problem, a special architecture called
\textbf{Long Short Term Memory (LSTM)} \cite{hochreiter1997long} is used.
An LSTM cell has a separate internal memory vector and three gates: a forget
gate, an add (input) gate, and an output gate. To understand the mathematical
intuition behind these gates, let us consider an LSTM cell with hidden state $h$
and internal cell memory vector $c_{t-1}$. At time $t$, the cell takes the input
$x_t$ and the previous output $h_{t-1}$, updates the cell memory to $c_{t}$, and
produces the output $h_{t}$. The forget gate computes a vector of the size of
$c$ with entries between 0 and 1. This vector determines what information should
be preserved and what should be forgotten based on the current input $x_t$, as
shown in Fig.\ref{fig:lstm_forget}. This vector is later multiplied element-wise
with the cell memory: if the vector is all zeros, the multiplication makes the
cell state forget everything, while all ones preserves everything. Then the add
gate computes which parts of the cell state are to be updated, as shown in
Fig.\ref{fig:lstm_add}, and the new data to be written, as shown in
Fig.\ref{fig:lstm_add1}. Finally the output gate generates the output $h_t$
without modifying the cell state, as shown in Fig.\ref{fig:lstm_output}.
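The three gates can be sketched directly from the description above; the weight
shapes and random values below are illustrative assumptions, not the parameters
of any trained model.

```python
# NumPy sketch of one LSTM step with forget, add (input), and output gates.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """W, U, b each hold parameters for the 4 transforms:
    forget gate, add gate, candidate values, output gate."""
    f = sigmoid(x_t @ W[0] + h_prev @ U[0] + b[0])   # forget gate in (0,1)
    i = sigmoid(x_t @ W[1] + h_prev @ U[1] + b[1])   # add gate: what to update
    g = np.tanh(x_t @ W[2] + h_prev @ U[2] + b[2])   # new candidate content
    o = sigmoid(x_t @ W[3] + h_prev @ U[3] + b[3])   # output gate
    c_t = f * c_prev + i * g    # forget old memory, write new content
    h_t = o * np.tanh(c_t)      # output; the cell memory itself stays internal
    return h_t, c_t

rng = np.random.default_rng(1)
d_in, d_h = 3, 4
W = rng.normal(size=(4, d_in, d_h)) * 0.1
U = rng.normal(size=(4, d_h, d_h)) * 0.1
b = np.zeros((4, d_h))
h, c = np.zeros(d_h), np.zeros(d_h)
h, c = lstm_step(rng.normal(size=d_in), h, c, W, U, b)
```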
\begin{figure}
\includegraphics[width=.99\linewidth]{img/rnn.png}
\caption{Recurrent connections in a RNN}
\label{fig:rnn}
\end{figure}
\begin{figure}
\includegraphics[width=.99\linewidth]{img/lstm_forget.png}
\caption{Forget gate in LSTM}
\label{fig:lstm_forget}
\end{figure}
\begin{figure}
\includegraphics[width=.99\linewidth]{img/lstm_add.png}
\caption{Add gate computing parts to be to modified in LSTM}
\label{fig:lstm_add}
\end{figure}
\begin{figure}
\includegraphics[width=.99\linewidth]{img/lstm_add1.png}
\caption{Add gate computing data to be added to cell state in LSTM}
\label{fig:lstm_add1}
\end{figure}
\begin{figure}
\includegraphics[width=.99\linewidth]{img/lstm_output.png}
\caption{Output gate computing the output in LSTM}
\label{fig:lstm_output}
\end{figure}
\begin{figure}
\includegraphics[width=.99\linewidth]{img/enc_dec.png}
\caption{Simple Encoder-Decoder architecture for machine translation}
\label{fig:enc_dec}
\end{figure}
\section{Traditional Machine Translation}
\subsection{Evaluation score}
Due to its inherent complexity, translation is a task for which it is hard to
define an evaluation score measuring how well a model is performing. The most
widely used evaluation metric is called Bilingual Evaluation Understudy (BLEU)
\cite{papineni2002bleu}. It depends on modified n-gram precision (or
co-occurrence) and needs many reference sentences for good results. Despite
being the most widely used metric, many researchers have expressed concern about
the effectiveness of the metric \cite{zhang2004interpreting},
\cite{callison2006re}, \cite{ananthakrishnan2007some}. To understand how the
BLEU score works, consider the following translation candidates
\begin{itemize}
\item Candidate 1: It is a guide to action which ensures that the military
      always obey the commands of the party.
\item Candidate 2: It is to insure the troops forever hearing the activity
      guidebook that party direct.
\end{itemize}
to be evaluated against the set of three reference translations.
\begin{itemize}
\item Reference 1: It is a guide to action that ensures that the military will
forever heed Party commands.
\item Reference 2: It is the guiding principle which guarantees the military
forces always being under the command of the Party.
\item Reference 3: It is the practical guide for the army always to heed
directions of the party.
\end{itemize}
The core idea of BLEU is to count the number of n-gram matches. The matches
are position-independent, and a reference n-gram may be matched multiple times.
These steps are linguistically motivated.
Candidate 1: \red{It is a guide to action which ensures that the military
always} obey \red{the commands of the party}. \\
Reference 1: \red{It is a guide to action} that \red{ensures that the military}
will forever heed \red{Party commands}. \\
Reference 2: It is the guiding principle \red{which} guarantees \red{the}
military forces \red{always} being under \red{the} command \red{of} the Party.
\\
Reference 3: It is the practical guide for the army always to heed directions of
the party. \\
N-gram Precision: 17 \\
Candidate 2: \red{It is to} insure \red {the} troops \red {forever} hearing \red
{the} activity guidebook \red{that party} direct. \\
Reference 1: \red {It is} a guide \red {to} action \red {that} ensures that \red
{the} military will \red{forever} heed \red{Party} commands. \\ Reference 2: It
is \red{the} guiding principle which guarantees the
military forces always being under the command of the Party. \\ Reference 3: It
is the practical guide for the army always to heed directions of the party. \\
N-gram Precision: 8 \\
Thus candidate 1 is better.
\textbf{Issues with N-gram precision:} \\
Candidate: \red{the the the the the the the.} \\
Reference 1: \red{The} cat is on the mat. \\
Reference 2: There is a cat on the mat. \\
\textbf{\red{N-gram Precision: 7 and BLEU: 1}} \\
This result is very misleading. Thus the following modified BLEU score is often
used.
\begin{table}[h]
\centering
\begin{tabular}{ll}
\hline
\textbf{Algorithm} & \textbf{Example} \\
\hline
Count the max number of times & Ref 1: The cat is on the mat. \\
a word occurs in any single reference & Ref 2: There is a cat on the mat. \\
& “the” has max count 2 \\
\hline
Clip the total count of & Unigram count = 7 \\
each candidate word & Clipped unigram count = 2 \\
&Total no. of counts = 7 \\
\hline
Modified N-gram & Modified-ngram precision: \\
Precision equal to & Clipped count = 2 \\
Clipped count/ & Total no. of counts =7 \\
Total no. of candidate word & Modified-ngram precision = 2/7\\
\hline
\end{tabular}
\caption{Computation of the modified n-gram precision}
\end{table}
N-grams with different values of N are used, but N = 4 is the most common
choice.
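The clipping procedure from the table can be implemented in a few lines;
applied to the degenerate ``the the the \ldots'' candidate above, it reproduces
the 2/7 precision.

```python
# Modified unigram precision with clipping, as in the table above.
from collections import Counter

def modified_ngram_precision(candidate, references, n=1):
    """Clip each candidate n-gram count by its maximum count in any
    single reference, then divide by the total candidate count."""
    def ngrams(tokens):
        return Counter(tuple(tokens[i:i + n])
                       for i in range(len(tokens) - n + 1))
    cand = ngrams(candidate.lower().split())
    refs = [ngrams(r.lower().split()) for r in references]
    clipped = sum(min(count, max(r[gram] for r in refs))
                  for gram, count in cand.items())
    return clipped / sum(cand.values())

p = modified_ngram_precision(
    "the the the the the the the",
    ["The cat is on the mat", "There is a cat on the mat"])
# "the" occurs 7 times in the candidate but at most 2 times in any
# single reference, so the clipped precision is 2/7.
```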
\subsection{Phrase based Machine Translation (PBMT)}
Traditionally there exists two types of machine translation systems:
\begin{itemize}
\item \textbf{The Rule-based Approach}
The source language text is analyzed using parsers and/or morphological
tools and transformed into intermediary representation. This
representation is used to generate target sentence. The rules are
written by human experts. As a large number of rules is required to
capture the phenomena of natural language, this is a time consuming
process. As the set of rules grows over time, it gets more and more
complicated to extend it and ensure consistency.
\item \textbf{The Data-driven Approach}
In the data-driven approach, bilingual and monolingual corpora are used
as main knowledge source. In the statistical approach, MT is treated as
a decision problem: given the source language sentence, we have to
decide for the target language sentence that is the best translation.
Then, Bayes rule and statistical decision theory are used to address
this decision problem.
\end{itemize}
For a given sentence in the source language $f_1^{J} = f_1\ldots f_j\ldots
f_{J}$, we need to find the sentence $e_1^{I} = e_1\ldots e_i\ldots e_{I}$ in
the target language by applying Bayes' rule to find the sentence that minimizes
the expected loss
\begin{equation*}
\hat{e}_1^{I} = \argmin_{I,e_1^{I}} \Bigg\{
\sum_{I^{\prime}, e^{\prime I^{\prime}}_1}
Pr(e^{\prime I^{\prime}}_1 | f_1^J) \cdot L(e_1^I, e^{\prime I^{\prime}}_1)
\Bigg\}
\end{equation*}
Here $L(e_1^I, e^{\prime I^{\prime}}_1)$ denotes the loss function under
consideration. It measures the loss (or errors) of a candidate translation
$e_1^{I}$ assuming the correct translation is $e^{\prime I^{\prime}}_1$. If the
loss function is assumed to be the 0--1 loss, meaning zero if the translation is
correct and one if it is wrong, then the decision rule can be simplified to
\begin{equation*}
\hat{e}_1^{I} = \argmax_{I,e_1^{I}} \Bigg\{Pr (e_1^{I} | f_1^J) \Bigg\}
\end{equation*}
PBMT was the most widely used scalable approach at the beginning of the
millennium \cite{koehn2003statistical}. The model works at the level of phrases
instead of words and has many individual components, but the core idea is
learning statistical patterns from the training data.
Some of the major components used in the PBMT were
\begin{itemize}
\item Sentence alignment: Gale and Church Algorithm based on Dynamic
programming \cite{lewis1994sequential}.
\item Word alignment: Expectation Maximization.
\item Phrases generation: Heuristic based complex algorithms.
\item Phrase lookup: Statistical matching.
\item Beam search: For generating target sentence. Beam search is a generic
algorithm that is used even in the latest NMT systems.
\end{itemize}
Beam search in general is a heuristic search algorithm that explores a graph by
expanding the most promising nodes in a limited set. In machine translation,
beam search is used to generate a set of the most likely sentences given an
input sentence by retaining only the most promising partial translations. The
number of retained hypotheses (the beam width) is kept constant.
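The pruning idea can be sketched as follows. The toy \texttt{next\_scores}
table standing in for a real decoder's next-token distribution is a
hypothetical stand-in.

```python
# Beam-search sketch: at each step keep only the `beam_width` highest
# scoring partial sentences.
import heapq
import math

def beam_search(next_scores, start, eos, beam_width=2, max_len=4):
    """next_scores(prefix) -> {token: log_prob}; returns the best
    complete hypothesis found."""
    beam = [(0.0, [start])]                 # (cumulative log-prob, tokens)
    complete = []
    for _ in range(max_len):
        candidates = []
        for logp, toks in beam:
            if toks[-1] == eos:
                complete.append((logp, toks))
                continue
            for tok, lp in next_scores(tuple(toks)).items():
                candidates.append((logp + lp, toks + [tok]))
        if not candidates:
            break
        beam = heapq.nlargest(beam_width, candidates)  # prune: keep top-k
    complete.extend(c for c in beam if c[1][-1] == eos)
    return max(complete)[1] if complete else max(beam)[1]

# Toy next-token log-probabilities (hypothetical decoder output).
table = {
    ("<s>",): {"a": math.log(0.6), "b": math.log(0.4)},
    ("<s>", "a"): {"</s>": math.log(0.3), "c": math.log(0.7)},
    ("<s>", "b"): {"</s>": math.log(0.9), "c": math.log(0.1)},
    ("<s>", "a", "c"): {"</s>": math.log(1.0)},
    ("<s>", "b", "c"): {"</s>": math.log(1.0)},
}
best = beam_search(lambda p: table[p], "<s>", "</s>")
# -> ['<s>', 'a', 'c', '</s>'] with probability 0.6 * 0.7 * 1.0 = 0.42
```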
\section{Neural Machine Translation (NMT)}
Though the use of neural networks did not yield promising results in the early
years, recurrent neural networks started to achieve performance comparable
to PBMT, as in \cite{kalchbrenner2013recurrent} and
\cite{hermann2013multilingual}. Most of the early architectures were simple
encoder-decoder architectures. A simple encoder-decoder architecture
for machine translation is shown in Fig. \ref{fig:enc_dec}. The architecture
has an encoder that takes a sentence in the source language and encodes it into
a vector $S$ of fixed length. The decoder takes the embedding $S$ as input and
generates the sentence in the target language. Some of the major drawbacks of
this simple architecture are:
\begin{enumerate}
\item The encoder has to encode all the information in the source sentence
into fixed size embedding $S$.
\item The decoder never sees the actual input sentence and has to rely
completely on $S$ for generating target sentence.
\item Having a fixed-size embedding vector makes the architecture less
      flexible. A smaller size means less information, whereas a larger vector
      means that for shorter sentences we need zero padding and more
      computation time.
\end{enumerate}
\begin{figure}
\includegraphics[width=.99\linewidth]{img/context.png}
\caption{Context for machine translation}
\label{fig:context}
\end{figure}
\subsection{Jointly learning to align and translate} \label{sec:JT}
The paper \cite{bahdanau2014neural} addresses this issue of having a fixed-size
embedding by introducing the idea of context, as shown in
Fig.\ref{fig:context}. The main proposals were:
\begin{enumerate}
\item Encoder outputs a hidden representation for each word in the source
sentence $F_s$.
\item One context vector $C$ of the size of the input sentence, with values
      between 0 and 1.
\item Each element of the embedding $F_s$ is multiplied with one element in
$C$.
\end{enumerate}
The whole embedding is made available to the decoder, and $C$ describes which
part of the embedding should be focused on to generate the current word based
on the previous word.
The encoder RNN, at each input step $t$, generates a hidden state $h_t = f(x_t,
h_{t-1})$. Unlike in the previous models, where only the last hidden state is
made available to the decoder, this paper provides all the hidden states and
also a context vector that holds a weight for each hidden vector. The context
vector helps the decoder to focus on a part of the sentence: $c =
q(\{h_1,\cdots,h_{T_x}\})$.
The decoder is trained to predict the next word $y_t$ given the context vector
$c$ and all previously predicted words $\{ y_1, \cdots, y_{t-1}\}$
\begin{equation*}
p(y) = \prod^{T}_{t=1} p(y_t | \{ y_1, \cdots, y_{t-1}\}, c)
\end{equation*}
With RNN, each conditional probability is modeled as,
\begin{equation*}
p(y_t | \{ y_1, \cdots, y_{t-1}\}, c) = g(y_{t-1}, s_t, c)
\end{equation*}
where $s_t$ is the hidden state of the RNN. The context vector for a input
sentence $i$, is computed as a weighted sum of hidden states of the encoder
(also known as \textbf{annotations})
\begin{equation*}
c_i = \sum_{j=1}^{T_{x}} \alpha_{ij} h_j
\end{equation*}
\begin{equation*}
\alpha_{ij} = \frac{ \exp (e_{ij})}{ \sum_{k=1}^{T_x} \exp(e_{ik})}
\end{equation*}
where $$ e_{ij} = a(s_{i-1}, h_j) $$ is the alignment model that scores how
well the inputs around position $j$ and the output at position $i$ match.
A feedforward neural network is used as the alignment model and is
\textbf{jointly trained} with the rest of the NMT system as a whole.
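The context computation above can be sketched in NumPy. The one-hidden-layer
feedforward scorer below is an illustrative instance of the general form
$a(s_{i-1}, h_j)$; all shapes and values are assumptions.

```python
# Context vector c_i = sum_j alpha_ij h_j with a small feedforward
# alignment model.
import numpy as np

def softmax(e):
    e = np.exp(e - e.max())
    return e / e.sum()

def context(s_prev, H, W_s, W_h, v):
    """H: (T_x, d) encoder annotations; returns (c_i, alpha_i)."""
    # e_ij = a(s_{i-1}, h_j): a one-hidden-layer feedforward scorer
    e = np.tanh(s_prev @ W_s + H @ W_h) @ v      # (T_x,)
    alpha = softmax(e)                           # attention weights, sum to 1
    return alpha @ H, alpha                      # weighted sum of annotations

rng = np.random.default_rng(2)
T_x, d, d_a = 5, 4, 3
H = rng.normal(size=(T_x, d))
c_i, alpha = context(rng.normal(size=d), H,
                     rng.normal(size=(d, d_a)),
                     rng.normal(size=(d, d_a)),
                     rng.normal(size=d_a))
```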
\begin{figure}
\center
\includegraphics[height=8cm]{img/contextres.png}
\caption{Visualization of the context in action}
\label{fig:contextvis}
\end{figure}
The functionality of the context vector is visualized in Fig.
\ref{fig:contextvis}. For example, in French the words
\textquotedblleft European Economic Area\textquotedblright\ appear in the
reverse order as \textquotedblleft zone \'{e}conomique
europ\'{e}enne\textquotedblright. The context learns to focus on the correct
order for the decoder.
The idea of context used in the paper was adopted by the MT research community
under the name of attention. Several attention mechanisms were proposed
\cite{luong2015effective}, \cite{cho2014learning}, \cite{gregor2015draw}. The
context proposed in section \ref{sec:JT}, aligning the hidden state $h_t$ with
each source hidden state $\bar{h_s}$, can be formulated as
\begin{equation}
\begin{split}
a_t(s) & = align(h_t,\bar{h_s}) \\
& = \frac{\exp(score(h_t,\bar{h_s}))}{\sum_{s'} \exp(score(h_t,\bar{h_{s'}}))}
\end{split}
\end{equation}
The newly proposed score functions are
\begin{equation*}
score(h_t,\bar{h_s}) = \begin{cases}
h_t^T\bar{h_s} & \text{dot} \\
h_t^TW_a\bar{h_s} & \text{general} \\
v_a^T \tanh(W_a[h_t;\bar{h_s}]) & \text{concat}
\end{cases}
\end{equation*}
These mechanisms are shown to perform better in certain scenarios, but none of
them is shown to be the best overall.
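The three score variants can be written directly; matrix shapes and random
values are illustrative assumptions.

```python
# The dot, general, and concat score functions for a decoder state h_t
# and a source state h_s.
import numpy as np

def score_dot(h_t, h_s):
    return h_t @ h_s                               # plain inner product

def score_general(h_t, h_s, W_a):
    return h_t @ W_a @ h_s                         # bilinear form

def score_concat(h_t, h_s, W_a, v_a):
    # one-hidden-layer scorer over the concatenated states
    return v_a @ np.tanh(W_a @ np.concatenate([h_t, h_s]))

rng = np.random.default_rng(3)
d = 4
h_t, h_s = rng.normal(size=d), rng.normal(size=d)
s1 = score_dot(h_t, h_s)
s2 = score_general(h_t, h_s, rng.normal(size=(d, d)))
s3 = score_concat(h_t, h_s, rng.normal(size=(d, 2 * d)),
                  rng.normal(size=d))
```

All three return a single scalar score per source position, which is then
normalized with a softmax exactly as in the alignment equation above.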
\begin{figure}
\includegraphics[width=0.9\linewidth]{img/birnn.png}
\caption{Bi-directional Encoder}
\label{fig:birnn}
\end{figure}
This paper also made use of bi-directional LSTMs in the layers of the
encoder. Just as an LSTM has an internal memory to comprehend the past
states, a bi-directional LSTM also comprehends the words that come next in the
sentence. It has two internal states---one for the past and one for the future.
A bi-directional LSTM is shown in Fig.\ref{fig:birnn}.
The whole model is trained with standard maximum-likelihood error minimization,
$\mathbb{O}_{ML}(\Theta) = \sum_{i=1}^N \log P_\Theta(Y^{*(i)} | X^{(i)}) $,
with stochastic gradient descent (SGD) on mini-batches of 80 sentences. For the
first time, the BLEU score was comparable to phrase-based machine translation
systems. The fact that there are no individual hand-built pieces in the model
was a great benefit, and the research community started to consider neural
systems a viable alternative to phrase-based machine translation.
\subsection{Sequence to Sequence models}
\label{sec:seq} Within a year, more deep and powerful models became
computationally feasible. Taking advantage of the available computational
power, a new framework for sequence to sequence learning was
proposed. A sequence to sequence model is formulated as $$ p(y_1, \cdots, y_T|
x_1, \cdots, x_T) = \prod_{t=1}^{T} p(y_t| v, y_1, \cdots, y_{t-1})$$ where $v$
is the internal memory of the RNNs. With more powerful RNNs, this model can be
used to learn a variety of tasks like speech recognition, handwritten digit
recognition, machine translation, etc., as shown in the paper
\cite{sutskever2014sequence}.
\begin{figure}
\centering
\includegraphics[width=.49\linewidth]{img/seq2seq.pdf}
\caption{Sequence to sequence model}
\label{fig:seq}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.49\linewidth]{img/seq2seq_deep.pdf}
\caption{A deeper, more powerful sequence to sequence model}
\label{fig:seqdeep}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{img/wordvec.png}
\caption{
Two-dimensional PCA projection of the 1000-dimensional word embedding.
}
\label{fig:wordvec}
\end{figure}
A simple sequence to sequence model is shown in Fig.\ref{fig:seq}. Just by
making the model deeper, as in Fig.\ref{fig:seqdeep}, without any special
treatment for machine translation, the paper achieved state-of-the-art results.
The model is trained on the WMT English to French dataset with 12M sentences
consisting of 348M French words and 304M English words, using 160,000 of the
most frequent words for the source language and 80,000 of the most frequent
words for the target language. Every out-of-vocabulary word was replaced with a
special \textbf{UNK} token. The network has 4 LSTM layers with 1000 LSTM cells
in each layer. The paper used 1000-dimensional word embeddings to represent the
words as vectors, following the word2vec paper \cite{mikolov2013distributed}.
word2vec formulated the problem of learning word embeddings as an energy
maximization problem using a simple neural network with just one hidden layer.
The energy maximized is called negative sampling. The resulting vector
embeddings are empirically shown to have arithmetic properties, i.e.
$\,$Paris - France + Germany is roughly equal to Berlin, as shown in
Fig.\ref{fig:wordvec}. The paper reported an impressive BLEU score of 33.3 even
without doing anything specific for machine translation.
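The additive property can be illustrated with hand-constructed toy vectors in
which the commonly cited capital-of relation (vec(Paris) - vec(France) +
vec(Germany) $\approx$ vec(Berlin)) holds by construction; real word2vec
embeddings are learned and 100--1000 dimensional.

```python
# Toy illustration of word-vector arithmetic; these 3-d vectors are
# hand-constructed so the relation holds exactly.
import numpy as np

emb = {
    # dims: [capital-ness, France-ness, Germany-ness]
    "France":  np.array([0.0, 1.0, 0.0]),
    "Paris":   np.array([1.0, 1.0, 0.0]),
    "Germany": np.array([0.0, 0.0, 1.0]),
    "Berlin":  np.array([1.0, 0.0, 1.0]),
    "Rome":    np.array([1.0, 0.0, 0.0]),
}

def nearest(v, exclude):
    """Closest embedding to v by Euclidean distance."""
    return min((w for w in emb if w not in exclude),
               key=lambda w: np.linalg.norm(emb[w] - v))

query = emb["Paris"] - emb["France"] + emb["Germany"]
word = nearest(query, exclude={"Paris", "France", "Germany"})
# -> "Berlin": the capital-of relation is a consistent vector offset
```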
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{img/gnmt_1.png}
\caption{The core architecture of GNMT }
\label{fig:gnmt1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{img/residual.png}
\caption{Residual connections}
\label{fig:residual}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=.9\linewidth]{img/residual_gnmt.png}
\caption{GNMT with residual connections}
\label{fig:gnmtres}
\end{figure}
\subsection{Google Neural Machine System}
Google most likely runs the world's biggest machine translation service,
supporting 103 languages (at the time of writing) and serving about a
billion queries every day. The Google Translate team published a detailed paper
describing their neural machine translation system \cite{wu2016google}. In the
following section we will discuss the key ideas described in the paper. This
paper is unique in the sense that it is much more than an academic paper.
The paper explains most of the components of the system, including many
engineering details, and also has many components that are inspired by
other recent breakthroughs in machine translation and deep learning research in
general. The basic architecture is heavily inspired by jointly learning to
align and translate (sections \ref{sec:JT} and \ref{sec:seq}). The core
architecture is shown in Fig.\ref{fig:gnmt1}. It combines the idea of context
with a deeper sequence to sequence encoder-decoder model with bi-directional
LSTMs in the initial layers of the encoder. Both the encoder and the decoder
contain eight layers. It also makes use of the residual connections introduced
in \cite{he2016deep}. Residual connections help alleviate the vanishing
gradients problem in an interesting manner, as shown in
Fig.\ref{fig:residual}. In a DNN, each layer can be interpreted as transforming
the input $x$ to a new manifold by learning $F(x)$, but the layers of a
residual network learn only the change to be applied to the input ($x + F(x)$).
There is always an identity skip connection between layers, helping the
constant gradient flow which otherwise might vanish. Residual learning enables
training very deep networks, as shown in Fig.\ref{fig:gnmtres}.
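The identity skip connection can be sketched as follows; layer sizes and
weights are illustrative assumptions.

```python
# Residual block sketch: the layer learns only the change F(x) and the
# identity skip connection adds x back.
import numpy as np

def layer_F(x, W, b):
    """The residual branch: an ordinary nonlinear transform."""
    return np.tanh(x @ W + b)

def residual_block(x, W, b):
    return x + layer_F(x, W, b)   # identity skip: gradient flows via "+ x"

rng = np.random.default_rng(4)
d = 4
x = rng.normal(size=d)
W, b = rng.normal(size=(d, d)) * 0.1, np.zeros(d)
y = residual_block(x, W, b)
# With small weights F(x) is near zero, so the block starts close to
# the identity map.
```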
The authors also tried to use an enhanced cost function. The usual cost
function maximized by the training process is the maximum likelihood
$$\mathbb{O}_{ML}(\Theta) = \sum_{i=1}^N \log P_\Theta (Y^{*(i)} | X^{(i)}) $$
but this does not directly correspond to the BLEU score. To improve the BLEU
score the authors proposed the following cost function: $$
\mathbb{O}_{RL}(\Theta) = \sum_{i=1}^N \sum_{Y \in \mathbb{Y}}
P_\Theta (Y | X^{(i)}) \, r(Y, Y^{*(i)} ) $$ where $r(\cdot)$ is a per-sentence
score computed as an expectation over all $Y$ up to a certain length.
In terms of the training process, just one encoder and one decoder are used for
all language pairs. The authors reported several nice properties of this joint
language training. One highlight is that it enables zero-shot learning: the
system is able to translate between language pairs it never saw in the
training data. The languages for which huge training data does not exist also
benefited from the joint training. The zero-shot learning property is discussed
in detail in the paper \cite{johnson2016google}. To enable a single decoder to
generate target sentences for all the languages, the input text is prefixed
with additional tokens like $<\_\_EN\_\_>, <\_\_FR\_\_>, <\_\_DE\_\_>,
<\_\_ES\_\_>$ indicating the target language to be generated.
Another interesting aspect of GNMT is the use of a quantizable model and
quantized inference. Since GNMT has to serve a huge volume of users in
production, the service is very computationally intensive, making low
latency translation difficult. Quantized inference is the process of using low
precision arithmetic operations during inference. The idea of using low
precision arithmetic for inference has been tried since the early days of
neural networks. In recent years, the authors of \cite{wu2016quantized}
successfully demonstrated that CNN training can be sped up by a factor of 4--6
with minimal loss of accuracy in the classification task. More interestingly,
\cite{li2016ternary} showed that the weights of neural networks can be
quantized to just three states: -1, 0, 1. But most of these successes were
restricted to CNNs. In the GNMT paper the team did the same for RNNs. To
achieve this, the model was constrained during training time. The nature of
LSTMs, remembering state across multiple time steps, poses additional
challenges in quantizing RNNs. The forward computation of an LSTM stack with
residual connections is modified to the following:
\begin{equation}
\begin{split}
c^{\prime i}_t, m^i_t & = LSTM_i(c^{i}_{t-1}, m^i_{t-1}, x^{i-1}_t; W^i ) \\
c^{i}_t & = \max(-\delta, \min(\delta, c^{\prime i}_t )) \\
x^{\prime i}_t & = m^{i}_t + x^{i-1}_t \\
x^{i}_t & = \max(-\delta, \min(\delta, x^{\prime i}_t )) \\
c^{\prime i+1}_t, m^{i+1}_t & = LSTM_{i+1}(c^{i+1}_{t-1},
m^{i+1}_{t-1}, x^{i}_t; W^{i+1} ) \\
c^{i+1}_t & = \max(-\delta, \min(\delta, c^{\prime i+1}_t ))
\end{split}
\end{equation}
and, to keep the notation simple, we drop the superscript $i$ and expand
further as
\begin{equation}
\begin{split}
W & = [W_1, W_2, \ldots, W_8] \\
i_t & = \operatorname{sigmoid}(W_1x_t + W_2m_t) \\
i^{\prime}_t & = \tanh(W_3x_t + W_4m_t) \\
f_t & = \operatorname{sigmoid}(W_5x_t + W_6m_t) \\
o_t & = \operatorname{sigmoid}(W_7x_t + W_8m_t) \\
c_t & = c_{t-1} \odot f_t + i^{\prime}_t \odot i_t \\
m_t & = c_t \odot o_t
\end{split}
\end{equation}
All the operations in the above equations are performed only with fixed point 8
bit or 16 bit integer multiplications.
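The clip-then-quantize idea can be sketched as follows; the value of $\delta$
and the fixed-point scale below are illustrative assumptions, not the exact
scheme of the paper.

```python
# Clipping to [-delta, delta] during training lets inference map the
# values onto 8-bit signed integers.
import numpy as np

DELTA = 1.0

def clip(x, delta=DELTA):
    """max(-delta, min(delta, x)), as in the equations above."""
    return np.maximum(-delta, np.minimum(delta, x))

def quantize(x, delta=DELTA, bits=8):
    """Map clipped floats onto signed fixed-point integers."""
    scale = (2 ** (bits - 1) - 1) / delta          # 127 for 8 bits
    return np.round(clip(x) * scale).astype(np.int8)

def dequantize(q, delta=DELTA, bits=8):
    scale = (2 ** (bits - 1) - 1) / delta
    return q.astype(np.float64) / scale

x = np.array([-3.0, -0.5, 0.25, 2.0])
q = quantize(x)            # int8 values in [-127, 127]
x_hat = dequantize(q)      # close to clip(x)
```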
\subsection{A Convolutional Encoder Model for NMT}
Despite all the success that recurrent neural networks have achieved in neural
machine translation, they have some shortcomings which make them less appealing
for this task. Recurrent neural networks have an inherently serial structure,
meaning that their computations cannot be fully parallelized. This contrasts
with other types of networks, specifically convolutional neural networks, which
operate over a fixed-size input sequence, enabling simultaneous computation of
all the features. Furthermore, recurrent neural networks need to traverse the
full distance of the serial path between two features to reach from one to the
other, which makes these models harder to train. Once again this is not the
case for some other models such as convolutional neural networks, in which a
succession of convolutional layers provides shorter paths between features
\cite{DBLP:journals/corr/GehringAGD16}.
Such desired characteristics of convolutional neural networks have resulted in
a recent trend to exploit their power in neural machine translation. In this
and the next sections we review a number of proposed techniques.
\begin{figure}
\center
\includegraphics[width=0.5\textwidth]{img/CE}
\caption{The architecture of convolutional encoder model}
\label{fig:convenc}
\end{figure}
\citet{DBLP:journals/corr/GehringAGD16} proposes an encoder-decoder model
similar to that of \cite{bahdanau2014neural} in which the encoder is replaced
by a convolutional neural network. In this model, as in
\cite{bahdanau2014neural}, the decoder is a long short-term memory recurrent
neural network, and the encoder consists of two one-dimensional convolutional
networks: one network (CNN-a) generates a sequence containing the encoded
information about fixed-sized contexts (of kernel size), and the other (CNN-c)
computes the conditional input for the soft-attention mechanism connecting the
encoder and the decoder. Fig. \ref{fig:convenc} schematically illustrates this
architecture.
The convolutional networks in the encoder only contain convolutional layers
without any pooling layer. This allows the full input sequence length to be
retained through the encoder. Moreover, stacking multiple convolutional layers
on top of each other increases the context width for each token of the input
sequence.
As convolutional layers do all their calculations in parallel and therefore
cannot distinguish the positions of the input tokens, it is necessary to
encode the positional information in the input sequence before feeding it to
the encoder. To this end, \citet{DBLP:journals/corr/GehringAGD16} proposes to
feed a sequence $e_1, e_2, ..., e_m$ to the encoder in which
\begin{equation*}
e_i = w_i + l_i
\end{equation*}
where $w_i$ is a word embedding on the vocabulary of the source language and
$l_i$ is a positional embedding of the words of the source sentence.
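The sum $e_i = w_i + l_i$ can be sketched with random stand-ins for the learned
embedding tables; the vocabulary and dimension sizes are illustrative
assumptions.

```python
# Position-aware input: add a word embedding and a positional embedding.
import numpy as np

rng = np.random.default_rng(5)
vocab, max_len, d = 10, 8, 4
word_table = rng.normal(size=(vocab, d))   # w: one row per word id
pos_table = rng.normal(size=(max_len, d))  # l: one row per position

def embed(token_ids):
    ids = np.asarray(token_ids)
    return word_table[ids] + pos_table[np.arange(len(ids))]

e = embed([3, 1, 4, 1])   # shape (4, d); the two occurrences of word 1
                          # now differ because their positions differ
```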
The decoder aims to capture a distribution over the possible target vocabulary
by transforming its LSTM hidden state $h_i$ via a linear layer with weights
$W_o$ and bias $b_o$,
\begin{equation*}
p(y_{i+1} \vert y_1, ..., y_i, \textbf{x}) =
\text{softmax}(W_o h_{i + 1} + b_o)
\end{equation*}
The conditional input $c_i$ to the LSTM is computed via a simple dot-product
soft-attention mechanism, i.e. a dot-product between the set of calculated
attention weights and the sequence coming from CNN-c. In order to calculate the
attention scores, first the hidden state $h_i$ is transformed by a linear
transformation with weights $W_d$ and bias $b_d$ to match the size of the
embedding of the previous target word $g_i$, and then it is summed with that
embedding. Then this vector is used to calculate each attention score via a
softmax over the products of this vector and the sequence coming from
CNN-a:
\begin{align*}
d_i &= W_d h_i + b_d + g_i \\
z_i &= \text{CNN-a}(e_i) \\
c_i &= \sum_{j = 1}^{T}{a_{ij}\text{CNN-c}(e_j)} \\
a_{ij} &= \frac{\exp(d_i^T z_j)}{\sum_{t = 1}^{m}{\exp(d_i^T z_t)}}
\end{align*}
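The four equations above can be written as a short NumPy function; the CNN
outputs are replaced by random stand-ins and all shapes are illustrative
assumptions.

```python
# Dot-product attention over CNN encoder outputs: d_i scores each CNN-a
# output z_j, and c_i is the matching weighted sum over CNN-c outputs.
import numpy as np

def softmax(e):
    e = np.exp(e - e.max())
    return e / e.sum()

def conditional_input(h_i, g_i, Z_a, Z_c, W_d, b_d):
    """Z_a: CNN-a outputs (T, d), Z_c: CNN-c outputs (T, d)."""
    d_i = W_d @ h_i + b_d + g_i        # combine decoder state + prev word
    a_i = softmax(Z_a @ d_i)           # a_ij = softmax_j(d_i . z_j)
    return a_i @ Z_c, a_i              # c_i = sum_j a_ij CNN-c(e_j)

rng = np.random.default_rng(6)
T, d, dh = 6, 4, 5
c_i, a_i = conditional_input(rng.normal(size=dh), rng.normal(size=d),
                             rng.normal(size=(T, d)),
                             rng.normal(size=(T, d)),
                             rng.normal(size=(d, dh)), np.zeros(d))
```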
The experimental results show that this architecture's performance is
comparable to that of bi-directional recurrent neural networks while achieving
a twofold speedup in the generation of the target translation. Table
\ref{tab:convseqblue} lists the BLEU scores achieved by this model for a few
datasets.
\begin{table}
\center
\begin{tabular}{lrrr}
\hline
Encoder & WMT'16 en-ro & WMT'15 en-de & WMT'14 en-fr \\
\hline
BiLSTM & 27.4 & 23.2 & 34.6 \\
Convolutional & 27.8 & 24.3 & 35.7 \\
\hline
\end{tabular}
\caption{Performance comparison of the convolutional encoder model with a
bi-directional neural network in terms of BLEU score}
\label{tab:convseqblue}
\end{table}
Another interesting observation is that in recurrent neural networks the learned
attention scores are sharp but do not necessarily represent a correct alignment,
while for convolutional encoders the scores are less focused but still indicate
an approximate source location. Fig. \ref{fig:atts} illustrates the learned
attention scores for a sentence from WMT'15 English-German translation using a
2-layer BiLSTM and a convolutional encoder with 15-layer CNN-a and 5-layer
CNN-c.
\begin{figure}
\center
\includegraphics[width=\textwidth]{img/atts}
\caption{Attention scores for a BiLSTM (top) and a convolutional encoder
(bottom)}
\label{fig:atts}
\end{figure}
\subsection{Convolutional Sequence to Sequence Learning}
Based on \cite{DBLP:journals/corr/GehringAGD16},
\citet{DBLP:journals/corr/GehringAGYD17} further developed the idea of
convolutional encoders and introduced a fully convolutional encoder-decoder
architecture. In this model both encoder and decoder networks share a simple
convolution block structure that computes intermediate states based on a fixed
number of input elements. Each block consists of a one-dimensional convolution
followed by a non-linearity. Once again, the relationship between distant words
in the input sequence is captured by stacking multiple such blocks on top of
one another, and the non-linearities in the blocks allow the network to exploit
the full input field, or to focus on fewer elements if needed.
\citet{DBLP:journals/corr/GehringAGYD17} proposes the use of gated linear units
(GLU) as the non-linearity.
Each convolution kernel of size $k$ is parameterized as $W \in \R^{2d \times
kd}$, $b_w \in \R^{2d}$ and takes as input $X \in \R^{k \times d}$, which is a
concatenation of $k$ input elements embedded in $d$ dimensions, and maps them
to a single element $Y = [A\; B] \in \R^{2d}$. Then applying a gated linear
unit non-linearity $v([A\; B]) = A \otimes \sigma(B)$, where $A, B \in \R^d$
and $\otimes$ is point-wise multiplication, results in a single output element
of size $d$. Subsequent layers operate in the same manner over $k$ such
elements from the previous layer. Fig. \ref{fig:convseq2seq} illustrates the
architecture of the model.
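The gated linear unit itself is a one-liner; the input values below are chosen
(an illustrative assumption) so that the gate's effect is visible.

```python
# Gated linear unit: split the 2d output Y = [A B] of the convolution
# and compute v([A B]) = A * sigmoid(B).
import numpy as np

def glu(y):
    """y: (..., 2d) -> (..., d)."""
    a, b = np.split(y, 2, axis=-1)
    return a * (1.0 / (1.0 + np.exp(-b)))   # gate in (0, 1) scales A

y = np.array([2.0, -4.0, 0.0, 1000.0])   # d = 2: A = [2, -4], B = [0, 1000]
out = glu(y)
# sigmoid(0) = 0.5 and sigmoid(1000) ~ 1, so out ~ [1.0, -4.0]
```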
\begin{figure}
\center
\includegraphics[width=0.6\textwidth]{img/CS2S}
\caption{Architecture of convolutional sequence to sequence learning.}
\label{fig:convseq2seq}
\end{figure}
Another improvement over the previous model is multi-step attention, in
which a separate conditional input is calculated for each layer of the
decoder. To compute the attention, the current hidden state $h^l_i$ of the
$l$-th layer of the decoder is combined with an embedding of the previous
target element $g_i$. Once the conditional input $c_i^l$ has been computed, it
is simply added to the output of the corresponding layer $h^l_i$.
\begin{align*}
y_i^l &= h_i^l + c_i^l \\
c_i^l &= \sum_{j = 1}^{m}{a_{ij}^l (z_j^u + e_j)} \\
a_{ij}^l &= \frac{\exp (d_i^l \cdot z_j^u)}
{\sum_{t = 1}^{m}{\exp (d_i^l \cdot z_t^u)}} \\
d_i^l &= W_d^l h_i^l + b_d^l + g_i
\end{align*}
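The equations above translate almost directly into NumPy. The following sketch computes one decoder layer's attention for toy inputs (variable names follow the equations; matrix shapes are my assumption for illustration):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multistep_attention(h, g, z, e, W_d, b_d):
    """One decoder layer's multi-step attention.

    h: decoder hidden states (n, d); g: previous-target embeddings (n, d)
    z: last encoder layer outputs (m, d); e: encoder input embeddings (m, d)
    """
    d_l = h @ W_d.T + b_d + g          # d_i^l = W_d^l h_i^l + b_d^l + g_i
    a = softmax(d_l @ z.T, axis=1)     # a_ij^l over source positions j
    c = a @ (z + e)                    # c_i^l = sum_j a_ij^l (z_j^u + e_j)
    return h + c                       # y_i^l = h_i^l + c_i^l
```

Note that the values attended over are $z_j^u + e_j$, i.e. encoder states plus input embeddings, so the conditional input carries both contextual and positional information.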
Experimental results show that this translation model achieves similar or
higher BLEU scores than GNMT while bringing almost an order of magnitude
speed-up in translation generation. Table \ref{tab:convseq2seqbleu} lists the
BLEU scores achieved by this model on a few datasets.
\begin{table}
\center
\begin{tabular}{lrrr}
\hline
Model & WMT'16 en-ro & WMT'15 en-de & WMT'14 en-fr \\
\hline
GNMT & - & 24.61 & 39.92 \\
ConvS2S & 29.88 & 25.16 & 40.46 \\
\hline
\end{tabular}
\caption{Performance comparison of convolutional sequence to sequence model
with GNMT in terms of BLEU score}
\label{tab:convseq2seqbleu}
\end{table}
\subsection{Depthwise Separable Convolutions for NMT}
Another convolution-based translation model is due to
\citet{DBLP:journals/corr/KaiserGC17}. Even though convolutions, unlike
recurrent neural networks, provide efficient non-local referencing across time,
they still suffer from computational complexity and a large parameter count.
\citet{DBLP:journals/corr/KaiserGC17} address this issue by introducing a model
based upon so-called depthwise separable convolutions. A depthwise separable
convolution consists of a depthwise convolution, i.e.\ an operator which
performs a spatial convolution independently over each channel of the input,
followed by a pointwise convolution, i.e.\ a regular $1 \times 1$ convolution.
The rationale behind this definition is the experimentally verified assumption
that the 2D and 3D inputs that convolutions operate on will feature both fairly
independent channels and highly correlated spatial locations
\cite{DBLP:journals/corr/KaiserGC17}. As a result, one can simplify a regular
convolution's feature learning over a joint ``space-cross-channels realm'' into
two simpler steps, a spatial feature learning step and a channel combination
step, while reducing the number of parameters. More formally, a
depthwise separable convolution can be written as follows:
\begin{align*}
Conv (W, y)_{i,j} &= \sum_{k,l,m}^{K,L,M}{W_{k,l,m} \cdot y_{i+k,j+l,m}} \\
PointwiseConv (W, y)_{i,j} &= \sum_{m}^{M}{W_m \cdot y_{i,j,m}} \\
DepthwiseConv (W, y)_{i,j} &= \sum_{k,l}^{K,L}{W_{k,l} \cdot y_{i+k,j+l}} \\
SepConv (W_p, W_d, y)_{i,j} &=
PointwiseConv_{i,j}(W_p, DepthwiseConv_{i,j}(W_d, y))
\end{align*}
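For the one-dimensional case used in translation, the two steps can be sketched as follows (a toy NumPy version for illustration only, with no padding or striding):

```python
import numpy as np

def depthwise_conv1d(x, W_d):
    # x: (T, c), W_d: (k, c) -- one filter per channel, applied independently
    k, c = W_d.shape
    T_out = x.shape[0] - k + 1
    out = np.empty((T_out, c))
    for t in range(T_out):
        out[t] = (x[t:t + k] * W_d).sum(axis=0)
    return out

def pointwise_conv1d(x, W_p):
    # W_p: (c_out, c_in) -- a 1x1 convolution is a per-position linear map
    return x @ W_p.T

def sep_conv1d(x, W_d, W_p):
    # depthwise spatial step, then pointwise channel-combination step
    return pointwise_conv1d(depthwise_conv1d(x, W_d), W_p)
```

With $c$ input and output channels, the parameter count is $k \cdot c + c^2$ here, versus $k \cdot c^2$ for a full convolution, matching the counts in Table \ref{tab:convpar}.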
Table \ref{tab:convpar} lists the number of parameters for a regular
convolution as well as a depthwise separable convolution with $c$ channels and
filters and a one-dimensional kernel of size $k$. As can be seen, for larger
kernel sizes
the number of parameters in depthwise separable convolution is significantly
less than regular convolutions.
\begin{table}
\center
\begin{tabular}{lr}
\hline
Convolution & Number of parameters\\
\hline
Conv & $k \cdot c^2$ \\
SepConv & $k \cdot c + c^2$ \\
\hline
\end{tabular}
\caption{Number of parameters in convolution operations}
\label{tab:convpar}
\end{table}
The SliceNet model introduced in \cite{DBLP:journals/corr/KaiserGC17} has a
similar structure to other encoder-decoder models. It first embeds the inputs
and outputs with two independent networks, concatenates them, and then feeds
the result into a decoder. At each step, the decoder produces a new output
prediction given the encoded inputs and the encoding of the previously produced
outputs. Fig. \ref{fig:sn} illustrates the architecture of the SliceNet model.
\begin{figure}
\center
\includegraphics[width=\textwidth]{img/SN}
\caption{The architecture of SliceNet}
\label{fig:sn}
\end{figure}
The convolution modules in SliceNet use rectified linear units (ReLU) as
non-linearities, and the ReLU activations are followed by a normalization
layer. A complete convolution module is therefore defined as
\begin{align*}
ConvStep_{d,s}(W, x) &= LN (SepConv_{d,s}(W, ReLU (x))) \\
LN (x) &= \frac{G}{\sigma (x)}(x - \mu (x)) + B
\end{align*}
In order to encode positional information, SliceNet adds a so-called timing
signal to the input sequence. This signal is defined as follows:
\begin{align*}
timing (t, 2d) &= \sin (t/1000^{2d/depth}) \\
timing (t, 2d + 1) &= \cos (t/1000^{2d/depth})
\end{align*}
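The timing signal can be generated directly from these formulas (a sketch using the constant 1000 exactly as written above; other formulations of this signal use 10000):

```python
import numpy as np

def timing_signal(length, depth):
    # depth is assumed even; even channels get sin, odd channels get cos
    t = np.arange(length)[:, None]       # positions 0..length-1
    d = np.arange(0, depth, 2)[None, :]  # the even channel indices 2d
    rates = 1.0 / 1000.0 ** (d / depth)
    sig = np.empty((length, depth))
    sig[:, 0::2] = np.sin(t * rates)
    sig[:, 1::2] = np.cos(t * rates)
    return sig
```

Each channel oscillates at a different frequency, so every position $t$ receives a distinct, smoothly varying fingerprint that the convolutions can use to recover word order.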
For a given source of shape $[m, depth]$ and target of shape $[n, depth]$, the
attention mechanism used in SliceNet computes the feature-vector similarities
at each position and re-scales them according to the depth:
\begin{align*}
attention(source, target) &= \\
Attend(source, &ConvStep_{4,1}(W_{a1}^{5 \times 1}, attention1(target)))
\end{align*}
\begin{align*}
attention1(x) &= ConvStep_{1,1}(W_{a1}^{5 \times 1}, x + timing) \\
Attend(source, target) &= \frac{1}{\sqrt{depth}} \cdot
softmax(target \cdot source^T) \cdot source
\end{align*}
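The $Attend$ operation itself is a scaled dot-product attention. A direct NumPy transcription of the formula (with the $1/\sqrt{depth}$ factor applied after the softmax, exactly as written above):

```python
import numpy as np

def attend(source, target):
    # source: (m, depth), target: (n, depth)
    depth = source.shape[-1]
    scores = target @ source.T
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)            # softmax over source positions
    return (w @ source) / np.sqrt(depth)
```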
The experimental results presented in \cite{DBLP:journals/corr/KaiserGC17} show
that SliceNet improves the BLEU score in comparison to both GNMT and the
convolutional sequence to sequence model. Table \ref{tab:snbleu} lists the BLEU
scores for one dataset.
\begin{table}
\center
\begin{tabular}{lr}
\hline
Model & newstest 14 \\
\hline
GNMT & 24.6 \\
ConvS2S & 25.1 \\
SliceNet & 26.1 \\
\hline
\end{tabular}
\caption{Performance comparison of SliceNet with other models in terms of BLEU
score}
\label{tab:snbleu}
\end{table}
\bibliographystyle{plainnat}
\bibliography{literature}
\end{document}
|
|
\documentclass{article}
\usepackage{amsmath}
\input{artisynthDoc}
\setcounter{tocdepth}{5}
\setcounter{secnumdepth}{3}
\title{ArtiSynth Coding Standard}
\author{John Lloyd}
\setpubdate{May 9, 2012}
\iflatexml
\date{}
\fi
\begin{document}
\maketitle
\iflatexml{\large\pubdate}\fi
\tableofcontents
\section{Introduction}
It is notoriously difficult to create a coding standard that satisfies
all developers, let alone create one that is actually adhered to.
Nonetheless, there is an ArtiSynth coding standard which all
contributors are encouraged to follow.
The standard is similar in appearance to the
\href{http://java.sun.com/docs/codeconv}{coding rules provided by Sun},
although it does differ slightly. For users of Eclipse, there is a code
style file located in
\begin{lstlisting}[]
$ARTISYNTH_HOME/support/eclipse/artisynthCodeFormat.xml
\end{lstlisting}
that supports most of the conventions, though not all, as noted
below.
\section{Cuddle braces and indent by 3}
As with the Sun coding rules, all braces are cuddled. Basic indentation
is set to 3 to keep code from creeping too far across the page.
For example:
%
\begin{lstlisting}[]
public static RootModel getRootModel() {
if (myWorkspace != null) {
return myWorkspace.getRootModel();
}
else {
return null;
}
}
\end{lstlisting}
\section{Always use braced blocks}
As in the Sun rules, code following control constructs is always
enclosed in a braced block, even when this is not necessary. That
means that
%
\begin{lstlisting}[]
for (int i=0; i<cnt; i++) {
if (i % 2 == 0) {
System.out.println ("Even");
}
else {
System.out.println ("Odd");
}
}
\end{lstlisting}
should be used instead of
\begin{lstlisting}[]
for (int i=0; i<cnt; i++)
if (i % 2 == 0)
System.out.println ("Even");
else
System.out.println ("Odd");
\end{lstlisting}
The reason for this is to provide greater uniformity and to
make it easier to add and remove statements from the blocks.
\section{Do not use tabs}
Code should be indented with spaces only. Tabs should not be used
since the spacing associated with tabs varies too much between coding
environments and cannot always be controlled.
\section{Keep line widths to 80 columns}
Again, as with the Sun rules, code should be kept to 80 columns. The
idea here is that it is easier to read code that doesn't creep too far
across the page. However, to maintain an 80 column width, it will
often be necessary to wrap lines.
\section{Break lines at the beginning of argument lists}
If breaking a line is necessary, the beginning of an argument list is
a good place to do it. The break should be indented by the usual 3 spaces.
The result can look a bit like Lisp:
\begin{lstlisting}[]
public String getKeyBindings() {
return artisynth.core.util.TextFromFile.getTextOrError (
ArtisynthPath.getResource (
getDirectoryName()+KEY_BINDINGS_FILE));
}
\end{lstlisting}
If at all possible, do not break lines at the '{\tt .}' used for method or
field references. For clarity, it is better to keep these together with
their objects. Therefore, please \emph{do not} do this:
\begin{lstlisting}[]
public String getKeyBindings() {
return artisynth.core.util.TextFromFile
.getTextOrError (ArtisynthPath
.getResource (getDirectoryName()+KEY_BINDINGS_FILE));
}
\end{lstlisting}
Note that Eclipse will not generally enforce these breaking
conventions, so you need to do this yourself.
\section{Break lines after assignment operators}
Another good place for a line break is after an assignment
operator, particularly if the left side is long:
\begin{lstlisting}[]
LinkedList<ClipPlaneControl> myClipPlaneControls =
new LinkedList<ClipPlaneControl>();
if (hasMuscle) {
ComponentList<MuscleBundle> muscles =
((MuscleTissue)tissue).getMuscleList();
}
\end{lstlisting}
Again, Eclipse will not generally enforce this, so it must be done
manually.
\section{Align conditional expressions with the opening parentheses}
When line wrapping is used inside a conditional expression,
the expression itself should be aligned with the opening
parentheses, with operators placed at the right:
\begin{lstlisting}[]
if (e.getSource() instanceof Rotator3d ||
e.getSource() instanceof Transrotator3d ||
e.getSource() instanceof Scaler3d) {
centerTransform (transform, dragger.getDraggerToWorld());
}
\end{lstlisting}
Again, note that Eclipse will not generally enforce this, and will instead
tend to produce output like this:
\begin{lstlisting}[]
if (e.getSource() instanceof Rotator3d
|| e.getSource() instanceof Transrotator3d
|| e.getSource() instanceof Scaler3d) {
centerTransform (transform, dragger.getDraggerToWorld());
}
\end{lstlisting}
\section{No space before empty argument lists}
No spaces are placed before empty argument lists, as in
\begin{lstlisting}[]
getMainFrame().getMenuBarHandler().enableShowPlay();
public ViewerManager getViewerManager() {
return myViewerManager;
}
\end{lstlisting}
This is largely to improve readability, particularly when
accessors are chained together in a single statement.
\end{document}
|
|
\chapter{Introduction}
%%recent advances in machine learning
%% alphago, more examples.
Lee Sedol, one of the best Go players in the world, was beaten by the Go engine AlphaGo in a match with a score of 4 to 1. The engine was clearly stronger \cite{leesedol}.
AlphaGo learned from a big dataset of games and got stronger only by playing with itself.
Artificial Intelligence is quite popular nowadays because of its many use cases: self-driving cars, playing Atari games, robotics and more.
But how do these engines learn how to get so good at their areas? The answer lies in reinforcement learning (RL), an area of machine learning.
\vspace{0.5cm}
The idea of RL is to have a state that an agent is in and actions that it can choose from. Each action results in different states and amounts of earned values that are called rewards. Rewards are used by the agent to measure how good an action was. This process is repeated, which results in the agent learning which actions in each state are better.
Imagine you are a soccer player. You are standing in front of the goal (which is the state you are in). You can either shoot or pass the ball (which are your available actions). You choose to shoot, but the ball is blocked by the goalkeeper (you get a low reward). So the next time you are in front of the goal, you are more likely to try to pass the ball to a teammate. This time your teammate scores a goal (you get a high reward). From these experiences you learn that it is probably better to pass the ball if you are standing in front of the goal.
The concept of RL can be used in a variety of environments, for example robotic arms.
\vspace{0.5cm}
%%robotic arms uses
Already in the 14th century, Leonardo da Vinci made blueprints of robotic arms \cite{roboarmhistory}.
A robotic arm resembles a human arm. It consists of segments which are connected by joints \cite{howroboarmworks}.
The number of joints corresponds to what is called Degrees of Freedom (DOF). A robotic arm with 5 joints would have 5 DOF because it can pivot in 5 ways. Each joint is connected to a step motor. Step motors supply the energy needed by the robotic arm and make the robot move very precisely.
The equivalent to a human hand is the end effector. The end effector can vary depending on the task that the robotic arm has to solve.
\vspace{0.5cm}
Robotic arms have many advantages. They are very accurate and consistent, which is why they are mostly used for repetitive tasks or tasks that require an accuracy that is hard for humans to achieve \cite{roboarmuk}.
This is the main reason why they are used in laboratories and hospitals for surgeries. Being able to work automatically without any human makes robotic arms useful in manufacturing and assembly lines.
\vspace{0.5cm}
Humans still have to teach the robotic arms how to move when
setting them up. For path planning of a robotic arm, a sequence of actions has to be found that solves the task. This sequence is saved and repetitively executed by the robotic arm. Finding the path still requires human labor. To improve path planning, many complicated algorithms have been developed. For example, Klanke et al. \cite{dynpath} developed a dynamic path planning algorithm for a 7 DOF robotic arm. They solved it by reducing the task to a 6 DOF problem, which is easier to solve.
A robotic arm needs 6 DOF to be able to move its end effector in every direction and orientation. This also means that robotic arms with more DOF do not have a unique path to solve the tasks. There are different paths which can vary in length and energy consumption.
To improve the quality of the path and to do path planning without a human, using RL for robotic arms is a logical approach.
\vspace{0.5cm}
%%problems with reinforcement learning because of sparse rewards
There is an issue that prevents robotic arms from learning with RL: it is hard to construct a suitable reward function for the tasks where robotic arms are used.
So either a suitable reward function has to be constructed, which is time- and cost-consuming, or the simplest reward function, a binary and sparse one, has to be used. Both approaches have issues.
\newline
Constructing a reward function can be quite complicated. Also, an individual reward function has to be made for each task. Someone has to do this work, which defeats the purpose of using RL for robotic arms over path planning by hand. Depending on the case, it might be easier to just plan the path without RL.
\newline
Using only a sparse reward for robotic arms works as follows. A reward is given if the goal is reached, and no reward is given if the goal is not reached. Robotic arms usually have many DOF, so there is a huge action space for the robotic arm. It is quite unlikely for robotic arms to fulfill the task by doing random movements. Tasks like moving an object are near impossible to solve with random actions.
So it is very unlikely for the robotic arm to earn a reward and learn. It takes a very long time to train a robotic arm with sparse rewards.
But recently hindsight experience replay (HER) has been introduced by Andrychowicz et al. \cite{herpaper}.
HER enables a high learning rate even with sparse rewards by using training samples more efficiently.
\vspace{0.5cm}
%%her introduces
HER is inspired by the human ability to learn not only from successes but also from failures \cite{herpaper}. After each episode of training, the actions taken and the states that the agent was in are saved to a replay buffer. In case of a failure, the terminating state and the goal state are different, so the earned reward is negative. When replaying the episodes in the replay buffer, the goal is replaced by the terminating state or a state close to it. When replaying that episode, the agent is therefore successful and learns how to reach that state. By doing this, the agent does not learn how to reach the originally desired goal, but it learns another goal, which might be helpful in learning how to reach the desired one. Andrychowicz et al. \cite{herpaper} found that HER is especially useful for tasks with multiple possible goals, but they showed that learning performance improved for tasks with a single goal as well.
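The relabeling idea can be sketched in a few lines of Python (a minimal illustration of the "future" relabeling strategy, not the authors' implementation; the tuple layout and `reward_fn` are assumptions for this sketch):

```python
import random

def her_relabel(episode, reward_fn, k=4):
    """Minimal sketch of hindsight relabeling ('future' strategy).

    episode: list of (state, action, achieved_goal, desired_goal) tuples,
    where achieved_goal is the goal state actually reached after the action.
    reward_fn(achieved, goal) is a sparse reward, e.g. 0 if equal else -1.
    """
    transitions = []
    for i, (s, a, ag, g) in enumerate(episode):
        # store the original transition with its (likely negative) reward
        transitions.append((s, a, g, reward_fn(ag, g)))
        # additionally store copies whose goal is a state achieved later on,
        # so that some stored transitions carry a positive learning signal
        future = episode[i:]
        for _ in range(k):
            _, _, future_ag, _ = random.choice(future)
            transitions.append((s, a, future_ag, reward_fn(ag, future_ag)))
    return transitions
```

Even a completely failed episode thus produces transitions with non-negative reward, which is exactly what makes learning with sparse rewards tractable.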
\vspace{0.5cm}
%%many endeavors to use and improve her
There are many endeavors to improve HER.
Most of the improvements are based on the selection of experiences that should be replayed. This is based on the idea that some experiences are more valuable for learning the task than others.
\newline
Zhao and Tresp propose a curiosity-driven experience prioritization approach in which rarer experiences are rated more valuable and are therefore prioritized for hindsight replay. \cite{curiousher}
\newline
Fang et al. propose a similar approach with their ``Curriculum-guided HER''. At earlier stages of training, curiosity-driven experience prioritization is used to enforce exploration of potential approaches to solve the task. At later stages, experiences with higher proximity to the actual goals are prioritized to reinforce actually finding paths that solve the task. \cite{curricher}
\newline
Zhao and Tresp also propose an energy-based prioritization of hindsight experiences. They hypothesize that episodes with higher trajectory energy are more useful for learning than those with lower energy. Their experiments have shown that this approach improves performance and sample efficiency over state-of-the-art approaches. \cite{energyher}
\newline
Ren et al. introduce Hindsight Goal Generation, an approach that modifies the replay buffer to generate goals that are easy to achieve but still valuable to guide the agent to learn the actual goal. \cite{hgg}
\newline
Lanka and Wu propose ARCHER, a modification of HER. They explain that replaying an experience is biased: the modified new goal would influence the actions of the agent, and by forcing the agent to take the same actions, these actions are biased. There is thus a difference between replaying a hindsight experience and a real experience, because in the real experience the agent might choose a different action, as it is not forced. To counter this bias, they use aggressive, high rewards for hindsight experiences so that the agent is more likely to take the same action in the same situation in a real experience. \cite{archer}
Most of the research on HER revolves around improving the performance of HER using simple environments.
%% some robotic arm tasks , eg. stacking stones
%%mostly work on improving her
\vspace{0.5cm}
%%using harder environments for her in this thesis to see if it also works.
In this thesis the performance of HER is examined for harder environments. Two environments, which are prototypes of golf and basketball, will be used for the experiments. In the first environment, the task is for a robotic arm to roll a ball to a point that is far outside of its reach. In the second environment, the robotic arm has to toss a ball into a box.
\vspace{0.5cm}
This thesis is structured as follows:
Chapter 2 describes the theoretical background on RL in general, artificial neural networks (ANN), the RL algorithm deep deterministic policy gradients (DDPG), and HER.
Chapter 3 explains the methodology used for this thesis. The simulation environment is showcased.
In chapter 4, the experiments are presented and the results are discussed.
In the last chapter, the results are summarized and suggestions for further work are provided.
|
|
\section{Pip tricks}
\subsection{Selecting a fast source mirror}
\begin{verbatim}
pip install dnspython -i http://pypi.douban.com/simple --trusted-host pypi.douban.com
\end{verbatim}
Here is a list of mirrors in mainland China:
\begin{verbatim}
http://pypi.douban.com/
http://pypi.hustunique.com/
http://pypi.sdutlinux.org/
http://pypi.mirrors.ustc.edu.cn/
http://mirrors.aliyun.com/
\end{verbatim}
In the file \verb|~/.pip/pip.conf|:
\begin{verbatim}
[global]
trusted-host=mirrors.aliyun.com
index-url=http://mirrors.aliyun.com/pypi/simple
\end{verbatim}
\subsection{requirements freezing}
\begin{verbatim}
pip freeze > requirements.txt
pip install -r requirements.txt
\end{verbatim}
\subsection{create a local mirror}
\begin{verbatim}
https://aboutsimon.com/blog/2012/02/24/Create-a-local-PyPI-mirror.html
\end{verbatim}
\subsection{package installation}
To download package wheels without installing it:
\begin{verbatim}
pip download --platform=manylinux1_x86_64 --python-version 27
--only-binary=:all: -r requirement.txt
\end{verbatim}
To download package source tars without installing it:
\begin{verbatim}
pip download --no-binary=:all: virtualenv
\end{verbatim}
In an offline environment, packages can be installed from pre-downloaded installers like this:
\begin{verbatim}
pip install --no-index --find-links="~/download" fabric
\end{verbatim}
\subsection{show package information}
\begin{verbatim}
[user@host]$ pip show fabric
Name: fabric
Version: 2.1.3
Summary: High level SSH command execution
Home-page: http://fabfile.org
Author: Jeff Forcier
Author-email: jeff@bitprophet.org
License: BSD
Location: /Users/mingzhe/anaconda/lib/python3.6/site-packages
Requires: cryptography, paramiko, invoke
\end{verbatim}
The direct dependencies of the package are listed. For a more detailed dependency graph of installed packages, the pipdeptree tool can be used.
\begin{verbatim}
pip install pipdeptree
pipdeptree -p fabric
\end{verbatim}
\subsection{Anaconda as pip}
Anaconda can act as pip, even in offline environment:
\begin{verbatim}
conda install fabric --offline python=2.7
conda list
\end{verbatim}
By specifying the Python version when creating a virtual environment, we effectively have a way for Python 2/Python 3 coexistence.
Anaconda can be configured to use a customized download site in order to accelerate package downloading. An example \verb|~/.condarc| is:
\begin{verbatim}
channels:
- https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
- defaults
show_channel_urls: true
\end{verbatim}
|
|
In this section we present a short evaluation of our findings from running
benchmarks in the benchmarking framework. After a discussion of the evaluated
implementations and datasets, we present four questions that we answer
using the framework. Subsequently, we discuss the answers to
these questions, present some observations regarding the build time of
indexes and their ability to answer batched queries, and summarise our
findings.
\begin{table}[t]
% \setlength{\tabcolsep}{0.5em} % for the horizontal padding
% {\renewcommand{\arraystretch}{1.2}% for the vertical padding
\begin{tabular}{l l}
\textbf{Principle} & \textbf{Algorithms} \\ \hline \hline
$k$-NN graph & \texttt{KGraph (KG)} \cite{kgraph}, \texttt{SWGraph (SWG)} \cite{swgraph,nmslib}, \texttt{HNSW} \cite{hnsw,nmslib}, \texttt{PyNNDescent (NND)} \cite{pynndescent}, \\
& \texttt{PANNG} \cite{ngt,Iwasaki16}, \texttt{ONNG} \cite{ngt, Iwasaki18} \\
tree-based & \textit{FLANN} \cite{flann}, \textit{BallTree (BT)} \cite{nmslib}, \texttt{Annoy (A)} \cite{annoy}, \texttt{RPForest (RPF)} \cite{rpforest}, \texttt{MRPT} \cite{Hyvonen2016} \\
LSH & \texttt{FALCONN (FAL)} \cite{falconn}, \textit{MPLSH} \cite{mplsh,nmslib}%, \texttt{FAISS (FAI)} \cite{faiss} (+PQ codes)
\\
other %& \texttt{DOLPHINN (DOL)} \cite{dolphinn} (Projections + Hypercube)\\
& \texttt{Multi-Index Hashing (MIH)} \cite{mihalgo} (exact Hamming search), \\
& \texttt{FAISS-IVF (FAI)} \cite{faiss} (inverted file)
\end{tabular}
%}
\caption{Overview of tested algorithms (abbr. in parentheses). Implementations in \textit{italics} have ``recall'' as quality measure
provided as an input parameter.}
\label{tab:algorithms}
\end{table}
\begin{table}[t]
%\setlength{\tabcolsep}{0.5em} % for the horizontal padding
%{\renewcommand{\arraystretch}{1.2}% for the vertical padding
\begin{tabular}{l l l r l}
\textbf{Dataset} & \textbf{Data/Query Points} & \textbf{Dimensionality} & \textbf{LID} & \textbf{Metric} \\ \hline \hline
\textsf{SIFT} & 1\,000\,000 / 10\,000 & 128 & 21.9 & Euclidean \\
\textsf{GIST} & 1\,000\,000 / 10\,000 & 960 & 48.0 & Euclidean \\
\textsf{GLOVE} & 1\,183\,514 / 10\,000 & 100 & 18.0 & Angular/Cosine \\
\textsf{NYTimes} & 234\,791 / 10\,000 & 256 & 18.8 & Euclidean \\
\textsf{Rand-Euclidean} & 1\,000\,000 / 10\,000 & 128 & 6.8 & Angular/Cosine \\
\textsf{SIFT-Hamming} & 1\,000\,000 / 1\,000& 256 & 12.8 & Hamming \\
\textsf{Word2Bits} & 399\,000 / 1\,000 & 800 & 24.7 & Hamming \\
\end{tabular}
%}
\caption{Datasets under consideration with their local intrinsic dimensionality (LID) computed by MLE \cite{Amsaleg15} from the 100-NN of the queries.}
\label{tab:datasets}
\end{table}
%
\medskip
\noindent{\textbf{Experimental setup.}} All experiments were run in Docker
containers on \emph{Amazon EC2}
\emph{c5.4xlarge} instances that are equipped with Intel Xeon Platinum 8124M CPU (16 cores available, 3.00 GHz, 25.0MB Cache) and
32GB of RAM running Amazon Linux.
Every experiment was repeated multiple times to verify that performance was reliable, and the results were compared with those obtained on a 4-core Intel Core i7-4790 clocked at 3.6 GHz with 32GB RAM. While the latter was a little faster, the relative order of algorithms remained stable. For each parameter setting and dataset, the algorithm was given
five hours to build the index and answer the queries.
\medskip
\noindent{\textbf{Tested Algorithms.}} Table~\ref{tab:algorithms} summarizes the algorithms that are used in the evaluation; see the references provided for details. The framework has support for more implementations and many of these were included
in the experiments, but they
turned out to be either non-competitive or too similar to other implementations.\footnote{For example, the framework contains three different
implementations of \texttt{HNSW}: the original one from NMSlib, a standalone variant inspired by that one, and an implementation in \texttt{FAISS} that is
again inspired by the implementation in NMSlib. The first two implementations perform almost indistinguishably, while the implementation provided in \texttt{FAISS} was a bit slower. For the sake of brevity, we also omit the two random projection forest-based methods \texttt{RPForest} and \texttt{MRPT} since they were always slower than \texttt{Annoy}. } The scripts that set up the framework automatically fetch the most current version found in each algorithm's repository.
In general, the implementations under evaluation can be separated into three main algorithmic principles: graph-based, tree-based, and hashing-based algorithms. \emph{Graph-based algorithms} build a graph in which vertices are the points in the dataset and edges connect vertices that are true nearest neighbors of each other, forming the so-called $k$-NN graph. Given a query point, close neighbors are found by traversing the graph in a greedy, implementation-specific fashion \cite{kgraph,swgraph,hnsw,pynndescent,Iwasaki16}.
\emph{Tree-based
algorithms} use a collection of trees as their data structure. In these trees, each node splits the dataset into subsets that are then processed in the children of the node. If the dataset associated with a node is small enough, it is directly stored in the node which is then a leaf in the tree. For example, \texttt{Annoy} \cite{annoy} and \texttt{RPForest} \cite{rpforest} choose in each node a random hyperplane to split the dataset. Given a query point, the collection of trees is traversed to obtain a set of
candidate points from which the closest to the query are returned. \emph{Hashing-based algorithms} apply hash functions such as locality-sensitive hashing \cite{IndykM98} to map data points to hash values. At query time, the query point is hashed and keys colliding with it, or not too far from it using the
multi-probe approach \cite{mplsh}, are retrieved. Among them, those closest to the query point are returned. Different implementations are mainly distinguished by their choice of underlying locality-sensitive hash function.
\medskip
\noindent{\textbf{Datasets.}} The datasets used in this evaluation are summarized in Table~\ref{tab:datasets}. More information on these datasets, as well as results for other datasets, can be found on the framework's website. The \textsf{NYTimes} dataset was generated by building tf-idf descriptors from the bag-of-words version, and embedding them into a lower dimensional space using the Johnson-Lindenstrauss Transform \cite{JohnsonL86}. The Hamming space version of \textsf{SIFT}
was generated by applying Spherical Hashing \cite{HeoLHCY15} using the implementation provided by the authors of \cite{HeoLHCY15}. The dataset \textsf{Word2Bits} comes from
the quantized word vector approach described in \cite{Lam18} using the top-400\,000 words in the English Wikipedia from 2017.
The dataset \textsf{Rand-Euclidean} is generated as follows: Assume that we want to generate a dataset with $n$ data points, $n'$ query points, and are interested in finding the $k$ nearest neighbors for each query point. For an even dimension $d$, we generate $n - k\cdot n'$ data points of the form $(v, \mathbf{0})$, where $v$ is a random unit length vector of dimension $d/2$, and $\mathbf{0}$ is the vector containing $d/2$ $0$ entries. We call the first $d/2$ components the \emph{first part} and the following $d/2$ components the \emph{second part} of the vector.
From these points, we randomly pick $n'$ points $(v_1,\ldots,v_{n'})$. For each point $v_i$, we replace its second part with a random vector of length $1/\sqrt{2}$. The resulting point is the query point $q_i$. For each $q_i$, we insert $k$ random points at varying distance increasing from $0.1$ to $0.5$ to $q_i$ into the original dataset. The idea behind such a dataset is that the vast majority of the dataset looks like a random dataset with little structure for the algorithm to exploit, while
each query point has $k$ neighbors that are with high probability well separated from the rest of the data points. This means that the queries are easy to answer locally, but they should be difficult to answer if the algorithm wants to exploit a global structure.
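The generation procedure can be sketched as follows (a small-scale NumPy illustration of the construction described above; the default parameters are illustrative, not the ones used in the benchmark):

```python
import numpy as np

def rand_euclidean(n=1000, n_query=10, k=5, d=20, seed=0):
    """Sketch of the Rand-Euclidean generation procedure."""
    rng = np.random.default_rng(seed)

    def unit(m, dim):
        v = rng.normal(size=(m, dim))
        return v / np.linalg.norm(v, axis=1, keepdims=True)

    half = d // 2
    base = n - k * n_query
    # bulk of the data: (v, 0) with v a random unit vector in the first part
    data = np.hstack([unit(base, half), np.zeros((base, half))])
    # queries: picked data points whose second part is replaced by a
    # random vector of length 1/sqrt(2)
    idx = rng.choice(base, size=n_query, replace=False)
    queries = data[idx].copy()
    queries[:, half:] = unit(n_query, half) / np.sqrt(2)
    # plant k neighbours per query at distances increasing from 0.1 to 0.5
    planted = np.vstack([
        q + r * unit(1, d)[0]
        for q in queries
        for r in np.linspace(0.1, 0.5, k)
    ])
    return np.vstack([data, planted]), queries
```

The planted neighbours are, with high probability, much closer to their query than any of the bulk points, which is what makes the queries locally easy but globally uninformative.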
\medskip
\noindent{\textbf{Parameters of Algorithms.}} Most algorithms do not allow the user to explicitly specify a quality target---in fact, only
three implementations from Table~\ref{tab:algorithms} provide ``recall'' as an input parameter. We used our framework to test many parameter settings at once. The detailed settings tested for each algorithm can be found on the framework's website.
\medskip
\noindent{\textbf{Status of \texttt{FALCONN}.}} While preparing this full version, we noticed that the performance of \texttt{FALCONN} has drastically decreased in recent versions. We communicated this to the authors of \cite{falconn}, who are, as of publication time, still working on a fix. For reference, we include the results from the conference version of this paper verbatim in Figure~\ref{plot:performance}, but we do not discuss results related to \texttt{FALCONN}.
\subsection{Objectives of the Experiments}
We used the benchmarking framework to find answers to the following questions:
\noindent\textbf{(Q1) Performance.} Given a dataset, a quality measure and a number $k$ of nearest neighbors to return, how do algorithms compare to each other with respect
to different performance measures, such as query time or index size?\\
\noindent\textbf{(Q2) Robustness.} Given an algorithm $\mathcal{A}$, how is its performance
and result quality influenced by the dataset and the number of returned neighbors?\\
\noindent\textbf{(Q3) Approximation.} Given a dataset, a number $k$ of nearest neighbors to return,
and an algorithm $\mathcal{A}$, how does its performance improve when the returned neighbors can be an approximation? Is the effect comparable for different algorithms? \\
%\item[\textbf{(Q4)}] \textbf{Query Generation.} Usually, queries are generated by splitting a dataset at random. However, many datasets come with a pre-made set of queries. Is their a qualitative difference between splitting the dataset at random and answering these pre-made queries?
\noindent\textbf{(Q4) Embeddings.} Equipped with a framework with many different datasets and distance metrics, we can try interesting combinations. How do algorithms targeting Euclidean space or Cosine similarity perform in, say, Hamming space? How does replacing the internals of an algorithm with Hamming space related techniques improve its performance?
\begin{figure}[t!]
\input{plot-performance}
\end{figure}
\subsection{Discussion}
\noindent{\textbf{(Q1) Performance.}} Figure~\ref{plot:performance} shows the relationship between an algorithm's achieved recall and the number of queries it can answer per second (its QPS) on the two datasets \textsf{GLOVE} (Cosine similarity) and \textsf{SIFT} (Euclidean distance) for $10$- and $100$-nearest neighbor queries.
For \textsf{GLOVE}, we observe that the graph-based algorithms clearly outperform the tree-based approaches. It is noteworthy that all implementations, except \texttt{FLANN},
achieve close to perfect recall. Over all recall values, \texttt{HNSW} and \texttt{ONNG}
are fastest. However, at high recall values they are closely matched by \texttt{KGraph}. Next comes \texttt{FAISS-IVF}, only losing to the
graph-based approaches at very high recall values. For 100 nearest neighbors, the picture is very similar. We note, however, that most
graph-based indexes were not able to build indexes for nearly perfect
recall values within the five-hour time limit.
On \textsf{SIFT}, all tested algorithms %except \texttt{FAISS}
can achieve
close to perfect recall. Again,
the graph-based algorithms are fastest; they are followed by \texttt{Annoy} and
\texttt{FAISS-IVF}. \texttt{FLANN} and \texttt{BallTree} are at the end.
In particular, \texttt{FLANN} was not able to finish its auto-tuning for high
recall values within the five-hour time limit.
Very few of these algorithms can tune themselves to produce a particular recall
value. In particular, almost all of the fastest algorithms on the \textsf{GLOVE} dataset
expose many parameters, leaving the user to find the combination that works
best. The
\texttt{KGraph} algorithm, on the other hand, uses only a single parameter,
which---even in its ``smallest'' choice---still gives high recall on \textsf{GLOVE} and \textsf{SIFT}.
\texttt{FLANN} manages to tune itself for a particular recall value well. However, at
high recall values, the tuning does not complete within the time
limit, especially with 100-NN.
%
\begin{figure}[t!]
\input{plot-index-size}
\end{figure}
Figure~\ref{plot:index:size} relates an algorithm's performance to its index
size. (Note that here down and to the right is better.) High recall can be achieved with small indexes by probing many points;
however, this probing is expensive, and so the QPS drops dramatically.
To reflect this performance cost, we scale the size of the index by the QPS it
achieves for a particular run.
This reveals that, on \textsf{SIFT}, most implementations perform similarly under
this metric. \texttt{HNSW} and \texttt{ONNG} are best (due to the QPS they achieve), but most of the other
algorithms achieve similar cost. In particular, \texttt{FAISS-IVF} and \texttt{FLANN} do well. \texttt{NND}, \texttt{Annoy}, and \texttt{BallTree} achieve their QPS at the cost of relatively large indexes, reflected in a rather large gap between them and their competition. On \textsf{GLOVE}, we see a much wider spread of index size performance. Here, \texttt{FAISS-IVF} and \texttt{HNSW} perform nearly indistinguishably. Next follow the other graph-based algorithms, with \texttt{FLANN} among
them. Again, \texttt{Annoy} and \texttt{BallTree} perform worst in this measure.
\noindent{\textbf{(Q2) Robustness.}} Figure~\ref{plot:random} plots recall against QPS on the dataset \textsf{Rand-Euclidean}. Recall from our earlier discussion of datasets that this dataset contains easy queries, but requires an algorithm to exploit the local structure of the data rather than some global structure, cf.~\textbf{Datasets}. We see very different behavior than before: there is a large difference between the graph-based approaches. While \texttt{ONNG}, \texttt{KGraph}, and
\texttt{NND}
can solve the task easily with high QPS, both \texttt{HNSW} and \texttt{SW-Graph} fail at this task. This means that the ``small-world'' structure of these two methods \emph{hurts} performance on such a dataset. In particular, no tested parameter setting for \texttt{HNSW} achieves recall beyond .86. \texttt{Annoy} performs best at exploiting the local structure of the dataset and is the fastest algorithm. The dataset is also easy for \texttt{FAISS-IVF}, which also has very good performance.
Let us turn our focus to how the algorithms perform on a wide variety of datasets. Figure~\ref{plot:robustness} plots recall against QPS for \texttt{Annoy}, \texttt{FAISS-IVF}, and \texttt{HNSW} over a range of datasets. Interestingly, implementations agree on the ``difficulty'' of a dataset most of the time, i.e., the relative order
of performance is the same among the algorithms. Notable exceptions are \textsf{Rand-Euclidean}, which is very easy for \texttt{Annoy} and \texttt{FAISS-IVF}, but difficult for \texttt{HNSW} (see above), and \textsf{NYTimes}, where \texttt{FAISS-IVF} fails to achieve recall above .7 for the tested parameter settings. Although all algorithms take a performance hit for high recall values, \texttt{HNSW} is least affected. On the other hand, \texttt{HNSW} shows the biggest slowdown in answering 100-NN
compared to 10-NN queries among the different algorithms.
\begin{figure}[t!]
\input{plot-random}
\input{plot-robustness}
\end{figure}
\begin{figure}[t!]
\input{plot-epsilon}
\end{figure}
\begin{figure}[t!]
\input{plot-hamming}
\end{figure}
\noindent{\textbf{(Q3) Approximation.}} Figure~\ref{plot:approximation} relates achieved QPS to the (approximate) recall of an algorithm. The plots show
results on the \textsf{GIST} dataset with 100-NN for recall with no
approximation and approximation factors of $1.01$ and $1.1$, respectively. Despite its high
dimensionality, all considered algorithms achieve close to perfect recall (left). For an
approximation factor of $1.01$, i.e., distances to true nearest neighbors are allowed to differ by $1\%$, all curves move to the right, as expected. Also, the relative difference between the performance of algorithms does not change. However, we see a clear difference between the candidate sets that are returned by algorithms at low recall. For example, the data point for \texttt{MRPT} around .5 recall on the left achieves roughly .6 recall
as a $1.01$ approximation, which means that roughly 10 new candidates are considered true approximate nearest neighbors. On the other hand, \texttt{HNSW}, \texttt{FAISS-IVF}, and \texttt{Annoy} improve by around 25 candidates being counted as approximate nearest neighbors. We see that allowing a slack of $10\%$ in the distance renders the queries too simple: almost all algorithms achieve near-perfect recall for all of their parameter choices. Interestingly, \texttt{Annoy} becomes the
second-fastest algorithm for $1.1$ approximation. This means that its candidates at very low recall values were a bit better than the ones obtained by its competitors.
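The notion of approximate recall used above can be expressed compactly: a returned point counts as a hit if its distance to the query is within a factor $c$ of the distance to the true $k$-th nearest neighbor. A minimal sketch of this measure (our own helper, assuming `true_neighbors` is sorted by distance; the framework's exact definition may differ, e.g. in tie handling):

```python
import math

def approx_recall(returned, true_neighbors, query, c=1.0):
    """Fraction of returned points within factor c of the true k-th
    nearest-neighbor distance (true_neighbors sorted by distance)."""
    k = len(true_neighbors)
    # distance to the true k-th nearest neighbor, relaxed by factor c
    threshold = c * math.dist(query, true_neighbors[-1])
    hits = sum(1 for p in returned
               if math.dist(query, p) <= threshold + 1e-12)
    return hits / k
```

With $c = 1$ this reduces to plain recall (up to ties); $c = 1.01$ and $c = 1.1$ correspond to the middle and right plots.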
%\noindent{\textbf{(Q4) Query Generation.}} The plot in Figure~\ref{plot:premade:queries} queries shows a comparison between the performance of algorithms on \textsf{SIFT} with the query set provided with the dataset, and a version where the queries are added to the dataset and the joined dataset is split randomly into 1M data points and 10\,000 queries. As the plot indicates, there is no significant difference between these two versions.
\noindent{\textbf{(Q4) Embeddings.}} Figure~\ref{plot:hamming} shows a comparison between selected algorithms on the binary version of \textsf{SIFT} and a version of the Wikipedia dataset generated by \textsf{Word2Bits}, which is an embedding of \texttt{word2vec} vectors~\cite{Mikolov13} into binary vectors. The
performance plot for \texttt{Annoy} in the original Euclidean-space
version of \textsf{SIFT} is also shown.
On \textsf{SIFT}, algorithms perform much faster
in the embedded Hamming space version compared to
the original Euclidean-space version (see Figure~\ref{plot:performance}), which indicates that the queries are
easier to answer in the embedded space. (Note here that the dimensionality is actually twice as large.)
Multi-index
hashing \cite{mihalgo}, an exact algorithm for Hamming space, shows good performance on \textsf{SIFT} with around 460 QPS.
We created a Hamming space-aware version of \texttt{Annoy}, using
\texttt{popcount} for distance computations, and sampling single bits
(as in Bitsampling LSH \cite{IndykM98}) instead of choosing hyperplanes. This version is two to three times faster on \textsf{SIFT} until high recall, where the Hamming space version and the Euclidean space version converge in running time.
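The two ingredients of this modification are cheap to sketch: a popcount-based Hamming distance on packed bit vectors, and a node split that samples a single bit position instead of computing a hyperplane. The following is an illustrative sketch of the idea, not code from the modified \texttt{Annoy}:

```python
import random

def hamming(a, b):
    # XOR then popcount gives the Hamming distance between packed
    # bit vectors (Python 3.10+ could use (a ^ b).bit_count())
    return bin(a ^ b).count("1")

def bitsample_split(points, bits, rng):
    """Split a node on one randomly sampled bit position, as in
    Bitsampling LSH, instead of a data-dependent hyperplane."""
    pos = rng.randrange(bits)
    left = [p for p in points if not (p >> pos) & 1]
    right = [p for p in points if (p >> pos) & 1]
    return pos, left, right
```

Both operations avoid floating-point arithmetic entirely, which is where the two-to-three-fold speedup on \textsf{SIFT} comes from.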
On the 800-dimensional \textsf{Word2Bits} dataset the opposite is true and the original version of \texttt{Annoy} is faster than the dedicated Hamming space approach. This means that the original data-dependent node splitting in \texttt{Annoy} adapts better to the query structure than the node splitting by data-independent Bitsampling for this dataset. The dataset seems to be hard in general: \texttt{MIH} achieves only around 20 QPS on \textsf{Word2Bits}.
We remark that setting the parameters for \texttt{MIH} correctly is crucial;
even though the recall will always be 1, different parameter settings can give
wildly different QPS values.
The embedding into Hamming space does have some consistent benefits that we do
not show here. Hamming space-aware algorithms should always have smaller index
sizes, for example, due to the compactness of bit vectors.
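A back-of-the-envelope calculation illustrates the index-size benefit (illustrative dimension, not a measured index size):

```python
# Storage for one d-dimensional vector: 32-bit floats vs. packed bits.
d = 256
float_bytes = d * 4   # 4 bytes per float component
bit_bytes = d // 8    # 8 bits packed into each byte
assert float_bytes // bit_bytes == 32   # 32x smaller per vector
```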
\subsection{Index build time remarks}
\label{sec:build:time}
Figure~\ref{fig:buildtime} compares different implementations with respect to the time it takes to build the index. We see a huge difference in the index building time among implementations, ranging from \texttt{FAISS-IVF} (around 2 seconds to build the index) to \texttt{HNSW} (almost 5 hours). In general, building the nearest neighbor graph and building a tree data structure takes considerably longer than the inverted file approach taken by \texttt{FAISS-IVF}. Shorter build times make it much quicker to search for the best parameter choices for a dataset. Although all indexes achieve recall of at least 0.9, we did not normalize by the queries per second as in Figure~\ref{plot:index:size}. For example, \texttt{HNSW} also achieves its highest QPS with these indexes, but \texttt{FAISS-IVF} needs a larger index to achieve the performance from Figure~\ref{plot:performance} (which takes around 13 seconds to build).
As an aside, \texttt{FAISS}'s implementation of \texttt{HNSW} is much faster than the original here, building an index that achieved recall .9 in only 1\,700 seconds.
\input{result_tables/build_bar}
\input{plot-qps-build-sqrt}
Another perspective on build time and query time is given by Figure~\ref{plot:qps-build}. There, we plot queries per second \emph{times} the time it took to build the index, which is the build time divided by the average query time. This gives an amortization point: \emph{How many queries must be performed to make it worth building the index structure?}
The plot shows that the amortization point decreases for all methods as the recall increases. The differences in building time discussed in Figure~\ref{fig:buildtime} translate to very different curves.
For example, \texttt{FAISS-IVF} amortizes the time spent building the index after around 100\,000 queries for recall 0.9, or 1\,000\,000 queries at recall 0.75. By comparison, \texttt{HNSW} needs around a factor of 100 more queries to amortize for its longer build times: around 10 million at recall 0.9 and 100 million at recall 0.75.
We remark that it makes little sense to optimize for this cost measure; it is merely useful when deciding whether or not a particular use case justifies building a certain index.
\subsection{Batched Queries}
We turn our focus to batched queries. In this setting, each algorithm is given the whole set of query points at once and has to return closest neighbors for each point. This allows for several optimizations: in a GPU setting, for example, copying query points to, and results from, the GPU's memory is expensive, and being able to copy everything at once drastically reduces this overhead.
The following experiments have been carried out on an Intel Xeon CPU E5-1650 v3 @ 3.50GHz with 6 physical cores, 15MB L3 Cache, 64 GB RAM, and equipped with an NVIDIA Titan XP GPU.
\begin{figure}
\input{plot-batch}
\end{figure}
Figure~\ref{plot:batch} reports on our results with regard to algorithms in batch mode. \texttt{FAISS}' inverted file index on the GPU is by far the fastest index, answering around 655\,000 queries per second for .7 recall, and 61\,000 queries per second for recall .99. It is around 20 to 30 times faster than the respective data structure running on the CPU. Comparing \texttt{HNSW}'s performance with batched queries against non-batched queries shows a speedup by a factor of roughly 3 at .5 recall, and a factor of nearly
5 at recall .99 in favor of batched queries.
It is particularly interesting to see that even the simplest GPU-driven approach, \texttt{FAISS}' brute-force variant, can handle nearly 25\,000 queries per second.
\subsection{Summary}
\noindent\textbf{Which method to choose?} From the evaluation, we see that graph-based algorithms provide by far the highest number of queries per second on most of the datasets. \texttt{HNSW} and \texttt{ONNG} are often the fastest algorithms, with \texttt{ONNG} being more robust if there is no global structure in the dataset, according to the experiments presented here. The downside of graph-based approaches is the high preprocessing time needed to build their data structures. This could mean that they might not be the preferred choice if the dataset changes regularly. When it comes to small and quick-to-build index data structures, \texttt{FAISS}' inverted file index provides a suitable choice that still gives good performance in answering queries, as discussed in Section~\ref{sec:build:time}.
\medskip
\noindent\textbf{How well do these results generalize?} In our experiments, we observed that, for the standard datasets under consideration, algorithms usually agree on
\begin{itemize}
\item [(i)] the order of how well they perform on datasets, i.e., if algorithm \texttt{A} answers queries on dataset \textsf{X} faster than on dataset \textsf{Y}, then so will algorithm \texttt{B}; and
\item [(ii)] their relative order to each other, i.e., if algorithm \texttt{A} is faster than algorithm \texttt{B} on dataset \textsf{X}, this will most likely be the order for dataset \textsf{Y}.
\end{itemize}
There are exceptions to this rule, e.g., the dataset \textsf{Rand-Euclidean} described above.
\medskip
\noindent\textbf{How robust are parameter choices?} With very few exceptions (see Table~\ref{tab:algorithms}), users often have to set many parameters themselves. Our framework allows them to choose the best parameter choice by exploring the interactive plots that contain the parameter choices that achieve certain quality guarantees.
In general, the \emph{build parameters} can be used to estimate the size of the index\footnote{As an
example, the developers of \texttt{FAISS} provide a detailed description of the space usage of their indexes at \url{https://github.com/facebookresearch/faiss/wiki/Faiss-indexes}.},
while the \emph{query parameters} suggest the amount of effort that is put into searching the index.
We will concentrate for a moment on Figure~\ref{plot:parameters}. This figure presents a scatter plot of selected algorithms for \textsf{GLOVE} on 10-NN, cf. the Pareto curve in Figure~\ref{plot:performance} (in the top left). Each algorithm has a very distinctive parameter space plot.
For \texttt{HNSW}, almost all data points lie on the Pareto curve. This means that the different build parameters blend seamlessly into each other.
For \texttt{Annoy}, we see that data points are grouped into clusters of three points each, which represent exactly the three different index
choices that are built by the algorithm.
For low recall, there is a big performance penalty for choosing a too large index; at high recall, the different build parameters blend almost into each other.
For \texttt{SW-Graph}, we see two groups of data points, representing two different index choices.
We see that with the index choice to the left, only very low recall is achieved on the dataset.
Extrapolating from the curve, choosing query parameters that would explore a large part of the index will probably lead to low QPS. No clear picture is visible for \texttt{FAISS-IVF} from the plot. This is chiefly because we test many different build parameters -- recall that the index building time is very low. Each build parameter has its very own curve with respect to the different query parameters.
As a rule of thumb, when aiming for high recall values, a larger index performs
better than a smaller index and is more robust to the choice of query parameters.
\begin{figure}
\input{plot-parameters}
\end{figure}
\medskip
\noindent\textbf{How do these results generalize to lower-dimensional datasets?} While our focus has been on high-dimensional datasets, one might reasonably wonder to what extent the observations made above are true for lower-dimensional datasets. To this end, we embedded the \textsf{NYTimes} dataset so that each vector is in $\mathbb{R}^6$. The relative performance of the implementations discussed earlier is nearly unaffected by this change, i.e., graph-based approaches still provide a better QPS/recall tradeoff than other approaches. However, the exact KD-tree
implementation provided with Python's \texttt{sklearn} --
which performs worse than a linear scan on all of the other datasets in our evaluation --
becomes very competitive, achieving around 7\,000 QPS -- lower than \texttt{HNSW} ($\sim$40\,000 QPS at recall .9984) and \texttt{FAISS-IVF} ($\sim$10\,000 QPS at recall .9983), but faster, for example, than \texttt{PyNNDescent}, which achieves around 2\,000 QPS at recall .999.
This suggests that exact algorithms are worth considering when working with lower-dimensional datasets.
\section{Bayesian inference of species tree}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Species \& gene trees}
\begin{frame}\frametitle{Species tree}
\begin{itemize}
\item Species tree --- the phylogeny representing the relationships among a group of species
\begin{figure}[h!]
\includegraphics[height=4cm]{figures/primateSpeciesTree}
\figureCaption{\cite{Rogers:2014ka}}{}
\end{figure}
\item Gene tree --- the phylogeny for sequences at a particular gene locus from those species
\end{itemize}
\end{frame}
\begin{frame}\frametitle{Gene tree discordance}
\begin{itemize}
\item Incomplete lineage sorting
\end{itemize}
\begin{figure}[h!]
\includegraphics[height=4cm]{figures/incompleteLineageSorting}
\figureCaption{\cite{Patterson:2006cm}}{}
\end{figure}
\end{frame}
\begin{frame}\frametitle{Gene tree discordance}
\begin{itemize}
\item Horizontal gene transfer
\item Gene duplication and loss
\begin{figure}[h!]
\includegraphics[height=3.5cm]{figures/HGTandDuplicationLoss}
\figureCaption{\cite{Degnan:2009hr}}{}
\end{figure}
\end{itemize}
\end{frame}
\begin{frame}\frametitle{Gene tree discordance}
\begin{itemize}
\item Hybridization
\begin{figure}[h!]
\includegraphics[height=6cm]{figures/hybridization}
\figureCaption{\cite{Li:2016ko}}{}
\end{figure}
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{*BEAST}
\begin{frame}\frametitle{Species tree inference and *BEAST}
\begin{itemize}
\item A Bayesian method to infer species tree from multilocus sequence data \cite{Heled:2010ia}
\item *BEAST, a functionality of BEAST2
\end{itemize}
\begin{columns}
\column{4.6cm}
\begin{itemize}
\item Gene trees are embedded in the species tree under the multispecies coalescent model \cite{Rannala:2003vt}
\begin{itemize}
\item incomplete lineage sorting
\end{itemize}
\item Gene trees are independent among loci
\end{itemize}
\column{4.7cm}
\begin{figure}[h!]
\includegraphics[width=1.0\textwidth]{figures/geneTreesInSpeciesTree}
\end{figure}
\end{columns}
\end{frame}
%\begin{frame}\frametitle{Species tree inference and *BEAST}
% \begin{itemize}
% \item TODO: draw a graphical model representation
% \end{itemize}
%\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Species tree prior}
\begin{frame}\frametitle{Species tree prior}
\begin{itemize}
\item The prior for species tree $S$ has two parts:
\[ P(S) = P(S_T)P(N) \]
\begin{itemize}
\item $S_T$ --- species time tree
\item $N$ --- population size functions
\end{itemize}
\item $P(S_T)$ --- typically a Yule (pure-birth) or birth-death prior
\begin{itemize}
\item we can assign a hyperprior for the speciation (birth) rate (and extinction (death) rate, if birth-death)
\end{itemize}
\item $P(N)$ --- constant or continuous-linear
\end{itemize}
\end{frame}
\begin{frame}\frametitle{Species tree prior}
\begin{itemize}
\item Constant population sizes
\begin{figure}[h!]
\includegraphics[height=5cm]{figures/popFunctionConstant}
\figureCaption{\cite{Drummond:2015tz}}{}
\end{figure}
\end{itemize}
\end{frame}
\begin{frame}\frametitle{Species tree prior}
\begin{itemize}
\item Continuous-linear population sizes
\begin{figure}[h!]
\includegraphics[height=5.3cm]{figures/popFunctionLinear}
\figureCaption{\cite{Drummond:2015tz}}{}
\end{figure}
\end{itemize}
\end{frame}
\begin{frame}\frametitle{Species tree prior}
\begin{itemize}
\item In *BEAST, the prior type for $N$ is fixed to gamma
\item The gamma shape parameter $k$ is fixed to 2, but we can assign a hyperprior for $\psi$, the scale parameter of the gamma
\item (This $\psi$ parameter is called ``population mean'' in Beauti, but the prior mean is actually $2\psi$ when the population sizes are constant)
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Multispecies coalescent}
\begin{frame}\frametitle{Multispecies coalescent model}
\begin{itemize}
\item The prior for gene tree $g$, given species tree $S$
\end{itemize}
\begin{figure}[h!]
\includegraphics[height=5.5cm]{figures/mscOverview}
\figureCaption{\cite{Drummond:2015tz}}{}
\end{figure}
\end{frame}
\begin{frame}\frametitle{Multispecies coalescent model}
\begin{itemize}
\item The probability distribution of gene time tree $g$, given species tree $S$, is:
\[ P(g|S) = \prod_{j=1}^{2s-1} P(L_j(g)|N_j(t)) \]
\end{itemize}
\begin{columns}
\column{5.4cm}
\begin{itemize}
\item $s$ --- number of extant species ($2s-1$ branches in total)
\item $N_j(t)$ --- population size function (linear)
\item $L_j(g)$ --- coalescent intervals for genealogy $g$ that are contained in the $j$'th branch of species tree $S$
\end{itemize}
\column{4cm}
\begin{figure}[h!]
\includegraphics[width=1.0\textwidth]{figures/mscPopulation4}
\end{figure}
\end{columns}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Molecular clock model}
\begin{frame}\frametitle{Molecular clock model}
\begin{itemize}
\item $P(c)$ --- prior for the molecular clock model of genealogy $g$
\begin{itemize}
\item strict clock --- typically fixed to 1.0 for the first locus, with relative clock rates inferred for the remaining loci
\item relaxed clock
\end{itemize}
\end{itemize}
\begin{itemize}
\item $P(\theta)$ --- prior for the substitution model parameters
\item e.g. HKY85,
\begin{itemize}
\item Prior for transition/transversion rate ratio ($\kappa$), e.g. gamma(2,1)
\item Prior for base frequencies ($\pi_T, \pi_C, \pi_A, \pi_G$), e.g. Dirichlet(1,1,1,1)
\end{itemize}
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Felsenstein likelihood}
\begin{frame}\frametitle{Felsenstein likelihood}
\begin{itemize}
\item The probability (likelihood) of data $d_i$ (alignment at locus $i$), given the gene time tree $g_i$, molecular clock $c_i$, and substitution model $\theta_i$, is:
\[ P(d_i|g_i,c_i,\theta_i) \]
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Posterior distribution}
\begin{frame}\frametitle{Priors and likelihood}
\begin{itemize}
\item $P(S)$ --- prior for species tree
\vskip 0.4cm
\item $P(g_i|S)$ --- prior for gene tree $i$ (multispecies coalescent)
\vskip 0.4cm
\item $P(c_i)$ --- prior for clock rate of locus $i$
\vskip 0.4cm
\item $P(\theta_i)$ --- prior for substitution parameters of locus $i$
\vskip 0.4cm
\item $P(d_i|g_i,c_i,\theta_i)$ --- likelihood of data at locus $i$
\end{itemize}
\end{frame}
\begin{frame}\frametitle{Posterior}
\begin{itemize}
\item The posterior distribution of the species tree $S$ and other parameters given data $D$ is:
\[ P(S, \mathbf{g, c}, \Theta|D) \propto P(S) \prod_{i=1}^n P(g_i|S) P(c_i) P(\theta_i) P(d_i|g_i,c_i,\theta_i) \]
\item The data $D = \{d_1, d_2, \dots, d_n\}$ is composed of $n$ alignments, one per locus.
\end{itemize}
\end{frame}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{starBEAST2}
\begin{frame}\frametitle{Integrating out population sizes}
\begin{itemize}
\item Assume constant population sizes
\item Assign i.i.d.\ inverse-gamma($\alpha$, $\beta$) priors for $N_j$
\begin{itemize}
\item mean = $\beta / (\alpha -1)$
\end{itemize}
\item The population sizes $N$ can be integrated out from $P(g|S)$ \cite{Jones:2015jf}
\vskip 0.4cm
\item Specify $\alpha$ and $\beta$ in the inverse-gamma prior (instead of $\psi$ in the gamma prior)
\end{itemize}
\end{frame}
\begin{frame}\frametitle{starBEAST2}
\begin{itemize}
\item A more efficient implementation and an upgrade of *BEAST
\begin{itemize}
\item Population sizes integrated out \cite{Jones:2015jf}
\item Relaxed molecular clock per species tree branch (instead of per gene tree branch)
\item More efficient MCMC proposals for the species tree and gene trees (coordinated operators) \cite{Jones:2015jf, Rannala:2015wk}
\end{itemize}
\item Available at \url{github.com/genomescale/starbeast2}, will be released soon (as a BEAST2 add-on)
\end{itemize}
\end{frame}
% Chapter Template
\chapter{Description of online meetings} % Main chapter title
\label{Meetings} % Change X to a consecutive number; for referencing this chapter elsewhere, use \ref{ChapterX}
\section{Meeting 1}
\begin{center}
\begin{tabular}{| c | c |}
\hline
Date & 31.03.2017 \\
\hline
Location & Skype \\
\hline
Attendees & All the team members \\
\hline
Duration & 1h \\
\hline
\end{tabular}
\end{center}
After considering Wiggio, we settled on Skype for the meetings because it offers good chat functions, video calls, and screen sharing.
We had the following agendas for the meeting:
\begin{itemize}
\item Introductory round
\item Facilitator and minute taker roles
\item Discuss Meeting schedule
\end{itemize}
Once everyone had successfully connected to the meeting, the very first thing we did was to get to know each other in an introductory round. This is very important in a team whose members do not know each other, in order to make everyone feel comfortable.
The second decision was to use Slack for communication; we chose Slack because it is well suited to this kind of project and team. I set up a team on Slack, \href{https://remc17white.slack.com}{remc17white.slack.com}, since I had experience with Slack from past projects.
We decided to rotate the facilitator and minute taker roles for each meeting.
We also talked the project through among ourselves to make sure that everybody understood the course requirements.
\section{Meeting 2}
\begin{center}
\begin{tabular}{| c | c |}
\hline
Date & 04.04.2017\\
\hline
Location & Skype \\
\hline
Attendees & All the team members \\
\hline
Duration & 1h \\
\hline
\end{tabular}
\end{center}
The colleagues from Germany found an implementation for collecting the tweets, which they also described in a presentation. I contributed a tool for scanning Facebook profiles.
\textbf{New theme for the project}
\begin{itemize}
\item We discussed (on my suggestion) the possibility of choosing another theme for the project, one for which only Twitter would be necessary
\item A current topic was mentioned: a game called Blue Whale that was influencing people to commit suicide
\item One of the suggestions was to use Twitter to find possible suicides (not necessarily influenced by the game)
\end{itemize}
\textbf{Agreement:}
\begin{itemize}
\item The conclusion about the suggestion was that it could be too complicated to label these messages
\item It would require knowledge and research in topics outside our expertise (e.g.\ psychology)
\end{itemize}
\section{Meeting 3}
\begin{center}
\begin{tabular}{| c | c |}
\hline
Date & 07.04.2017 \\
\hline
Location & Skype \\
\hline
Attendees & All the team members \\
\hline
Duration & 1h \\
\hline
\end{tabular}
\end{center}
At the beginning of the third meeting we explored the idea of a new approach for retrieving and analyzing the data: Paul suggested that we could use another web application, called ``sentiment vz'', instead of Keyhole.
The problem is that this application retrieves data solely from Twitter, which conflicts with our project goal of comparing the sentiment analysis of two different social platforms.
For the topic to be used for collecting the data, we discussed current events such as Brexit or the happenings in Syria.
Another topic we discussed was the French presidential election. A possible problem with this topic is that most of the data might be in French, which would make it difficult to analyze, since most sentiment analysis tools work with the English language.
\section{Meeting 4}
\begin{center}
\begin{tabular}{| c | c |}
\hline
Date & 11.04.2017 \\
\hline
Location & Skype \\
\hline
Attendees & All the team members \\
\hline
Duration & 1h \\
\hline
\end{tabular}
\end{center}
Bernardo started the conversation and asked every participant about the research progress they had made since the previous meeting.
All members shared what they had done so far. Paul shared the keywords and hashtags for extracting data from both social media platforms.
\section{Meeting 5}
\begin{center}
\begin{tabular}{| c | c |}
\hline
Date & 25.04.2017 \\
\hline
Location & Skype \\
\hline
Attendees & All the team members \\
\hline
Duration & 1h \\
\hline
\end{tabular}
\end{center}
Bernardo started the discussion to go over the results obtained from the data gathered since the last meeting. Benjamin shared his approach for getting the data from YouTube comments about ``Brexit'' and his experience with some useless comments caused by irrelevant content in the videos.
Diana explained that Paul got the data from Twitter and that Bogdan was responsible for the sentiment analysis of that data.
\section{Meeting 6}
\begin{center}
\begin{tabular}{| c | c | c }
\hline
Date & 28.04.2017 \\
\hline
Location & Skype \\
\hline
Attendees & All the team members \\
\hline
Duration & 1h \\
\hline
\end{tabular}
\end{center}
\textbf{Organisation}
\begin{itemize}
\item talked about how the graphical data should look in the final paper;
\item talked about gender-specific data and how we could include this parameter in our research paper;
\item the YouTube team talked about gender-specific data; for example, it turned out that women tend to be less negative/hateful.
\end{itemize}
\section{Meeting 7}
\begin{center}
\begin{tabular}{| c | c |}
\hline
Date & 02.05.2017 \\
\hline
Location & Skype \\
\hline
Attendees & All the team members \\
\hline
Duration & 1h \\
\hline
\end{tabular}
\end{center}
Unfortunately, I was not able to participate in this meeting. I had informed my colleagues beforehand that I had an important meeting at work.
\section{Meeting 8}
\begin{center}
\begin{tabular}{| c | c |}
\hline
Date & 05.05.2017 \\
\hline
Location & Skype \\
\hline
Attendees & All the team members \\
\hline
Duration & 1h \\
\hline
\end{tabular}
\end{center}
\textbf{Reviewing the chapter “Introduction”}
We discussed the content of the introduction chapter
of our research paper. It was suggested to add the purpose
of the research paper to our introduction, because this part
of a paper must state the overall goal.
It was asked whether information such as the importance of social
media in the world of work should be part of the
introduction. The reason for these doubts was that
it did not seem relevant to our topic.
Another suggestion that was discussed was comparing the
social platforms themselves, for example platforms
that focus more on file sharing versus those that focus
more on communication.
Furthermore, it was said that comparing behavior on
social platforms with interaction in the real
world could also be part of the introduction. This may also
be linked to the degree to which private information is exposed.
\section{Meeting 9}
\begin{center}
\begin{tabular}{| c | c |}
\hline
Date & 09.05.2017 \\
\hline
Location & Skype \\
\hline
Attendees & All the team members \\
\hline
Duration & 1h \\
\hline
\end{tabular}
\end{center}
In the last meeting we discussed the research paper by proofreading each part to remove grammatical mistakes. Every group member studied their individual part of the paper
and gave suggestions to replace certain phrases. I provided the Twitter results in both graphical and tabular form, after cleaning the tweet data, and
added them to the results section of the research paper.
|
|
\section{Cream of Mushroom Soup}
\label{creamOfMushroomSoup}
\setcounter{secnumdepth}{0}
Time: 30 minutes (10 minutes prep, 20 minutes cooking)\\
Serves: 6
\begin{multicols}{2}
\subsection*{Ingredients}
\begin{itemize}
\item 1 recipe of light \nameref{roux}
\item 8 ounces mushrooms, sliced
\item 16 ounces (2 cups) heavy cream
\item salt and black pepper to taste (maybe \( \frac{1}{4} \) teaspoon of each)
\end{itemize}
\subsection*{Hardware}
\begin{itemize}
\item Stock pot
\item Ladle
\end{itemize}
\clearpage
\subsection*{Instructions}
\begin{enumerate}
\item Make the light roux per the standard recipe.
\item Add 8 ounces sliced mushrooms to the pan.
\item Add 16 ounces (2 cups) heavy cream.
\item Stir to combine, until thickened.
\item Salt and pepper to taste.
\end{enumerate}
\subsection*{Notes}
\begin{itemize}
\item This is my own recipe.
\item I tend to use this in other recipes, such as \nameref{americanCottagePie} and \nameref{greenBeanCasserole}.
\end{itemize}
\end{multicols}
\clearpage
|
|
\documentclass[xelatex,ja=standard]{bxjsarticle}
% error level change (cf. http://bit.ly/36dakLt)
\RequirePackage[l2tabu, orthodox]{nag} % for warning old packages and commands
\usepackage[all, warning]{onlyamsmath} % for amsmath checker
% fonts (cf. http://bit.ly/36dakLt)
\usepackage{newtxtext,newtxmath}
% AMS styles
\usepackage{amsmath, amssymb, amsfonts}
% for physics and SI unit
\usepackage{physics, siunitx}
% figures (cf. http://bit.ly/36dakLt)
\usepackage{graphicx} % XeLaTeX drives xdvipdfmx automatically; the [dvipdfmx] option would conflict
% \usepackage{subcaption} % caption for subfigures
%% for links
% \usepackage{hyperref, url}
%% comment environment
% \usepackage{comment}
%% custom enumerate and itemize
% \usepackage{enumitem}
%% author and affiliation
% \usepackage{authblk}
\title{}
\author{Masashi Yamazaki}
\begin{document}
\maketitle
\tableofcontents
\appendix
\section{An appendix section}
\end{document}
|
|
%!TEX root = ../main.tex
\subsection{Initial and Terminal}
\objective{Simplify angles and their coterminal synonyms, and apply them to the three basic trigonometric functions}
Angles are a measure of turning. Since Babylonian times, it has been customary to divide
the circle into 360 parts, beginning directly off to the right, and proceeding counter-clockwise.
The beginning ray pointing right is known as the \textbf{initial side}, whereas the ray pointing
off where the angle has turned to is called the \textbf{terminal side}.
Because one direction of spin has been designated as positive, it is therefore true that
there exist negative angles. These also begin at the initial side of $0^\circ$, but proceed
\emph{clockwise}. Very quickly, there will be multiple names for the same angle.
Angles that end in the same place are called \textbf{coterminal}, from the Latin for ``ending together.''
Finding coterminal angles equal to a given angle is simply a matter of
adding or subtracting $360^\circ$ as many or as few times as desired.
Some processes in mathematics produce very large angle measures, which can become
cumbersome if dealt with by hand. While it might be easy in some cases to simply
add or subtract $360^\circ$ until the angle is reasonable, this can become time prohibitive.
It is most efficient to find the remainder when an angle is divided by 360.
Degrees/minutes/seconds will be dealt with in §16.3, on sexagesimal numbers.
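The division method can be shown with a quick worked example: reducing $1475^\circ$ by repeated subtraction would take four steps, while one division finds the coterminal angle at once.
\[
1475 = 4 \cdot 360 + 35 \qquad\Longrightarrow\qquad 1475^\circ \text{ is coterminal with } 35^\circ.
\]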
\subsection{Reference Angles}
Every angle can be thought of as a turn from the closest horizontal axis. In the
first quadrant, this is the angle itself, without modification. In the fourth quadrant,
this is reversed, a certain distance down from $0^\circ$, or better, back from
$360^\circ$. For example, $330^\circ$ is an upside-down version of a $30^\circ$
angle, which is to say, the reference angle for $330^\circ$ is $30^\circ$.
In the second quadrant, things are not upside down, but mirrored. What is the reference
angle for $150^\circ$? Well the closest horizontal axis is not $0^\circ/360^\circ$, but
$180^\circ$. The reference angle for $150^\circ$ is also $30^\circ$. The third quadrant
is the hardest, being both flipped left-right, and up-down. But $30^\circ$ past
$180^\circ$ is $210^\circ$.
\begin{figure}[h]
\begin{center}
\includegraphics{\chapdir/pics/reference}
\caption{Reference angles, sometimes called $\theta'$ (nothing to do with derivatives!)}
\end{center}
\end{figure}
\subsection{Trigonometric functions}
Also since ancient times, it has been exceedingly helpful to reference the ratios of
various components of an angle. On a right triangle, these ratios are often memorized with
the helpful acronym S.O.H.C.A.H.T.O.A., short for sine = opposite/hypotenuse, cosine =
adjacent/hypotenuse, tangent = opposite/adjacent.
\begin{figure}[h]
\includegraphics[scale=0.5]{\chapdir/pics/TrigonometryTriangle.png}
\caption{The names of the sides of a right triangle}
\end{figure}
These definitions can be extended via reference angles to the other quadrants. In
such a context, the sine of an angle
becomes the signed vertical displacement divided by the distance from the origin, the cosine of an
angle becomes the signed horizontal displacement divided by the distance from the origin,
and the tangent of the angle becomes its slope.
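As a brief worked example of this extension, take $210^\circ$ from the previous subsection: its reference angle is $30^\circ$, and in the third quadrant both the vertical and horizontal displacements are negative, so
\[
\sin 210^\circ = -\sin 30^\circ = -\tfrac{1}{2}, \qquad \cos 210^\circ = -\cos 30^\circ = -\tfrac{\sqrt{3}}{2}.
\]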
While no longer of much use or interest, there are names for the reciprocals of
the three main trigonometric functions. The reciprocal of cosine is called
secant, the reciprocal of sine is called cosecant, and the reciprocal of tangent
is called cotangent. Of these, only secant is commonly used (outside of math
classrooms!).
|
|
%!TEX TS-program = pdflatexmk
\documentclass[oneside,12pt]{amsart}
\usepackage{amsmath,amsfonts,amsthm,amssymb}
\usepackage[left=1in,top=1in,right=1in,bottom=1in,footskip = 0.333in]{geometry}
\usepackage[T1]{fontenc}
\usepackage{fancyhdr}
\usepackage{url}
\usepackage{setspace}
\usepackage{calc}
\usepackage{graphicx}
\usepackage{tabularx}
\usepackage{pdflscape}
\usepackage{rotating}
\usepackage{pdfpages}
\usepackage{wrapfig}
\usepackage{fancyvrb}
%caption setup
\usepackage[margin=0.5cm]{caption}
\usepackage{subcaption}
%% Draft Watermark Toggle
%\usepackage{draftwatermark}
%\SetWatermarkText{DRAFT}
%\SetWatermarkScale{5}
%%if using bibtex:
%\usepackage[round]{natbib}
%\renewcommand{\bibsection}{} %clear ref title
%if using biblatex:
\usepackage[citestyle=authoryear,bibstyle=numeric, natbib=true, backend=bibtex]{biblatex}
\addbibresource{Main.bib}
%set up header
\setlength{\headheight}{0.2in}
\pagestyle{fancy}
\fancyhf{}
\lhead{\small \sc PI Name}
\chead{\small \sc Program Call}
\rhead{\footnotesize \thepage}
\renewcommand{\headrulewidth}{0pt}
%set linespacing
\renewcommand{\baselinestretch}{1}\normalsize
%fix toc
\setcounter{tocdepth}{3}% to get subsubsections in toc
\makeatletter
\def\@tocline#1#2#3#4#5#6#7{\relax
\ifnum #1>\c@tocdepth % then omit
\else
\par \addpenalty\@secpenalty\addvspace{#2}%
\begingroup \hyphenpenalty\@M
\@ifempty{#4}{%
\@tempdima\csname r@tocindent\number#1\endcsname\relax
}{%
\@tempdima#4\relax
}%
\parindent\z@ \leftskip#3\relax \advance\leftskip\@tempdima\relax
\rightskip\@pnumwidth plus4em \parfillskip-\@pnumwidth
#5\leavevmode\hskip-\@tempdima
\ifcase #1
\or\or \hskip 1em \or \hskip 2em \else \hskip 3em \fi%
#6\nobreak\relax
\dotfill\hbox to\@pnumwidth{\@tocpagenum{#7}}\par
\nobreak
\endgroup
\fi}
\makeatother
%section numbering
\renewcommand{\thesubsection}{\alph{subsection}}
\renewcommand{\thesubsubsection}{\roman{subsubsection}}
\makeatletter
\renewcommand{\p@subsection}{\thesection.}
\renewcommand{\p@subsubsection}{\thesection.\thesubsection.}
\makeatother
\newcommand\invisiblesection[1]{%
\refstepcounter{section}%
%\refstepcounter{page} %%only if numbering pages within sections
\addcontentsline{toc}{section}{\protect\numberline{\thesection.}#1}%
\sectionmark{#1}}
\newcommand\invisiblesubsection[1]{%
\refstepcounter{subsection}%
\addcontentsline{toc}{subsection}{\thesubsection.{\hskip 1em}#1}%
\sectionmark{#1}}
%%page numbering by section (i.e. 1-1, 1-2, 2-1, etc)
%\numberwithin{page}{section}
%\renewcommand{\thepage}{\thesection-\arabic{page}}
%
%% Make sure that page starts from 1 with every \section
%\usepackage{etoolbox}
%\makeatletter
%\patchcmd{\@sect}% <cmd>
% {\protected@edef}% <search>
% {\def\arg{#1}\def\arg@{section}%
% \ifx\arg\arg@\stepcounter{page}\fi%
% \protected@edef}% <replace>
% {}{}% <success><failure>
%\makeatother
% private defs
\def\mf{\mathbf}
\def\mb{\mathbb}
\def\mc{\mathcal}
\newcommand{\R}{\mathbf{r}}
\newcommand{\bc}{\mathbf{b}}
\newcommand{\mfbar}[1]{\mf{\bar{#1}}}
\newcommand{\mfhat}[1]{\mf{\hat{#1}}}
\newcommand{\bmu}{\boldsymbol{\mu}}
\newcommand{\blam}{\boldsymbol{\Lambda}}
\newcommand{\refeq}[1]{Equation (\ref{#1})}
\newcommand{\reftable}[1]{Table \ref{#1}}
\newcommand{\refch}[1]{Chapter \ref{#1}}
\newcommand{\reffig}[1]{Figure \ref{#1}}
\newcommand{\refcode}[1]{Listing \ref{#1}}
\newcommand{\intd}[1]{\ensuremath{\,\mathrm{d}#1}}
\newcommand{\leftexp}[2]{{\vphantom{#2}}^{#1}\!{#2}}
\newcommand{\leftsub}[2]{{\vphantom{#2}}_{#1}\!{#2}}
\newcommand{\fddt}[1]{\ensuremath{\leftexp{\mathcal{#1}}{\frac{\mathrm{d}}{\mathrm{d}t}}}}
\newcommand{\fdddt}[1]{\ensuremath{\leftexp{\mathcal{#1}}{\frac{\mathrm{d}^2}{\mathrm{d}t^2}}}}
\newcommand{\omegarot}[2]{\ensuremath{\leftexp{\mathcal{#1}}{\boldsymbol{\omega}}^{\mathcal{#2}}}}
\begin{document}
\begin{center}
\textsc{\textbf{Title}\\
PI: \\
}
\end{center}
\section{Table of Contents}
\renewcommand\contentsname{}
\tableofcontents
\clearpage
\begin{landscape}
\thispagestyle{empty}
\invisiblesection{Overview Chart}
%\includepdf[landscape]{quadchart}
\end{landscape}
\section{Scientific/Technical/Management}
\subsection{Relevance}\label{sec:relevance}
\subsubsection{Responsiveness}\label{sec:response}
\subsubsection{State of the Art}\label{sec:soa}
\subsubsection{Innovation}\label{sec:innovation}
\subsubsection{Technology Transition}\label{sec:infusion}
\subsection{Technical Approach}\label{sec:tech}
\subsubsection{Available Facilities and Proposer Capabilities}
\subsection{Work Plan}\label{sec:work}
\subsubsection{Data Management Plan}\label{sec:data}
\subsection{TRL}\label{sec:trl}
\subsection{Management Structure}\label{sec:management}
\clearpage
\section{References}
%%if using bibtex:
%\bibliographystyle{apalike}
%\bibliography{Main}
%if using biblatex:
\printbibliography[heading=none]
\clearpage
\invisiblesection{Biographical Sketches}
\invisiblesubsection{Name1}
%\includepdf[pages=-,pagecommand={}]{cv1}
\clearpage
\invisiblesubsection{Name2}
%\includepdf[pages=-,pagecommand={}]{cv2}
\clearpage
%% For ECF only
%\invisiblesection{Department Letter}
%\includepdf[pages=-]{Savransky_NASA_ECF_3-29-17_Department_Letter}
%
%\clearpage
\invisiblesection{Current and Pending Support}
\invisiblesubsection{Name1}
%\includepdf[pages=-,pagecommand={}]{c_and_p}
%\invisiblesubsection{Name2}
%\includepdf[pages=-,pagecommand={}]{c_and_p}
\clearpage
\section{Statements of Commitment and Letters of Support}
Statements of commitment are acknowledged electronically through NSPIRES.
\clearpage
\invisiblesection{Budget Justification}
%\includepdf[pages=-,pagecommand={}]{budget_justification}
\end{document}
|
|
\hypertarget{section}{%
\section{1}\label{section}}
\bibverse{1} In the eighth month, in the second year of Darius, the
LORD's\footnote{1:1 When rendered in ALL CAPITAL LETTERS, ``LORD'' or
``GOD'' is the translation of God's Proper Name.} word came to the
prophet Zechariah the son of Berechiah, the son of Iddo, saying,
\bibverse{2} ``The LORD was very displeased with your fathers.
\bibverse{3} Therefore tell them, the LORD of Armies says: `Return to
me,' says the LORD of Armies, `and I will return to you,' says the LORD
of Armies. \bibverse{4} Don't you be like your fathers, to whom the
former prophets proclaimed, saying: The LORD of Armies says, `Return now
from your evil ways and from your evil doings;' but they didn't hear nor
listen to me, says the LORD. \bibverse{5} Your fathers, where are they?
And the prophets, do they live forever? \bibverse{6} But my words and my
decrees, which I commanded my servants the prophets, didn't they
overtake your fathers?
``Then they repented and said, `Just as the LORD of Armies determined to
do to us, according to our ways and according to our practices, so he
has dealt with us.'\,''
\bibverse{7} On the twenty-fourth day of the eleventh month, which is
the month Shebat, in the second year of Darius, the LORD's word came to
the prophet Zechariah the son of Berechiah, the son of Iddo, saying,
\bibverse{8} ``I had a vision in the night, and behold,\footnote{1:8
``Behold'', from ``הִנֵּה'', means look at, take notice, observe, see,
or gaze at. It is often used as an interjection.} a man riding on a
red horse, and he stood amongst the myrtle trees that were in a ravine;
and behind him there were red, brown, and white horses. \bibverse{9}
Then I asked, `My lord, what are these?'\,''
The angel who talked with me said to me, ``I will show you what these
are.''
\bibverse{10} The man who stood amongst the myrtle trees answered,
``They are the ones the LORD has sent to go back and forth through the
earth.''
\bibverse{11} They reported to the LORD's angel who stood amongst the
myrtle trees, and said, ``We have walked back and forth through the
earth, and behold, all the earth is at rest and in peace.''
\bibverse{12} Then the LORD's angel replied, ``O LORD of Armies, how
long will you not have mercy on Jerusalem and on the cities of Judah,
against which you have had indignation these seventy years?''
\bibverse{13} The LORD answered the angel who talked with me with kind
and comforting words. \bibverse{14} So the angel who talked with me said
to me, ``Proclaim, saying, `The LORD of Armies says: ``I am jealous for
Jerusalem and for Zion with a great jealousy. \bibverse{15} I am very
angry with the nations that are at ease; for I was but a little
displeased, but they added to the calamity.'' \bibverse{16} Therefore
the LORD says: ``I have returned to Jerusalem with mercy. My house shall
be built in it,'' says the LORD of Armies, ``and a line shall be
stretched out over Jerusalem.''\,'
\bibverse{17} ``Proclaim further, saying, `The LORD of Armies says: ``My
cities will again overflow with prosperity, and the LORD will again
comfort Zion, and will again choose Jerusalem.''\,'\,''
\bibverse{18} I lifted up my eyes and saw, and behold, four horns.
\bibverse{19} I asked the angel who talked with me, ``What are these?''
He answered me, ``These are the horns which have scattered Judah,
Israel, and Jerusalem.''
\bibverse{20} The LORD showed me four craftsmen. \bibverse{21} Then I
asked, ``What are these coming to do?''
He said, ``These are the horns which scattered Judah, so that no man
lifted up his head; but these have come to terrify them, to cast down
the horns of the nations that lifted up their horn against the land of
Judah to scatter it.''
\hypertarget{section-1}{%
\section{2}\label{section-1}}
\bibverse{1} I lifted up my eyes, and saw, and behold, a man with a
measuring line in his hand. \bibverse{2} Then I asked, ``Where are you
going?''
He said to me, ``To measure Jerusalem, to see what is its width and what
is its length.''
\bibverse{3} Behold, the angel who talked with me went out, and another
angel went out to meet him, \bibverse{4} and said to him, ``Run, speak
to this young man, saying, `Jerusalem will be inhabited as villages
without walls, because of the multitude of men and livestock in it.
\bibverse{5} For I,' says the LORD, `will be to her a wall of fire
around it, and I will be the glory in the middle of her.
\bibverse{6} Come! Come! Flee from the land of the north,' says the
LORD; `for I have spread you abroad as the four winds of the sky,' says
the LORD. \bibverse{7} `Come, Zion! Escape, you who dwell with the
daughter of Babylon.' \bibverse{8} For the LORD of Armies says: `For
honour he has sent me to the nations which plundered you; for he who
touches you touches the apple of his eye. \bibverse{9} For, behold, I
will shake my hand over them, and they will be a plunder to those who
served them; and you will know that the LORD of Armies has sent me.
\bibverse{10} Sing and rejoice, daughter of Zion! For behold, I come and
I will dwell within you,' says the LORD. \bibverse{11} Many nations
shall join themselves to the LORD in that day, and shall be my people;
and I will dwell amongst you, and you shall know that the LORD of Armies
has sent me to you. \bibverse{12} The LORD will inherit Judah as his
portion in the holy land, and will again choose Jerusalem. \bibverse{13}
Be silent, all flesh, before the LORD; for he has roused himself from
his holy habitation!''
\hypertarget{section-2}{%
\section{3}\label{section-2}}
\bibverse{1} He showed me Joshua the high priest standing before the
LORD's angel, and Satan standing at his right hand to be his adversary.
\bibverse{2} The LORD said to Satan, ``The LORD rebuke you, Satan! Yes,
the LORD who has chosen Jerusalem rebuke you! Isn't this a burning stick
plucked out of the fire?''
\bibverse{3} Now Joshua was clothed with filthy garments, and was
standing before the angel. \bibverse{4} He answered and spoke to those
who stood before him, saying, ``Take the filthy garments off him.'' To
him he said, ``Behold, I have caused your iniquity to pass from you, and
I will clothe you with rich clothing.''
\bibverse{5} I said, ``Let them set a clean turban on his head.''
So they set a clean turban on his head, and clothed him; and the LORD's
angel was standing by.
\bibverse{6} The LORD's angel solemnly assured Joshua, saying,
\bibverse{7} ``The LORD of Armies says: `If you will walk in my ways,
and if you will follow my instructions, then you also shall judge my
house, and shall also keep my courts, and I will give you a place of
access amongst these who stand by. \bibverse{8} Hear now, Joshua the
high priest, you and your fellows who sit before you, for they are men
who are a sign; for, behold, I will bring out my servant, the Branch.
\bibverse{9} For, behold, the stone that I have set before Joshua: on
one stone are seven eyes; behold, I will engrave its inscription,' says
the LORD of Armies, `and I will remove the iniquity of that land in one
day. \bibverse{10} In that day,' says the LORD of Armies, `you will
invite every man his neighbour under the vine and under the fig
tree.'\,''
\hypertarget{section-3}{%
\section{4}\label{section-3}}
\bibverse{1} The angel who talked with me came again and wakened me, as
a man who is wakened out of his sleep. \bibverse{2} He said to me,
``What do you see?''
I said, ``I have seen, and behold, a lamp stand all of gold, with its
bowl on the top of it, and its seven lamps on it; there are seven pipes
to each of the lamps which are on the top of it; \bibverse{3} and two
olive trees by it, one on the right side of the bowl, and the other on
the left side of it.''
\bibverse{4} I answered and spoke to the angel who talked with me,
saying, ``What are these, my lord?''
\bibverse{5} Then the angel who talked with me answered me, ``Don't you
know what these are?''
I said, ``No, my lord.''
\bibverse{6} Then he answered and spoke to me, saying, ``This is the
LORD's word to Zerubbabel, saying, `Not by might, nor by power, but by
my Spirit,' says the LORD of Armies. \bibverse{7} Who are you, great
mountain? Before Zerubbabel you are a plain; and he will bring out the
capstone with shouts of `Grace, grace, to it!'\,''
\bibverse{8} Moreover the LORD's word came to me, saying, \bibverse{9}
``The hands of Zerubbabel have laid the foundation of this house. His
hands shall also finish it; and you will know that the LORD of Armies
has sent me to you. \bibverse{10} Indeed, who despises the day of small
things? For these seven shall rejoice, and shall see the plumb line in
the hand of Zerubbabel. These are the LORD's eyes, which run back and
forth through the whole earth.''
\bibverse{11} Then I asked him, ``What are these two olive trees on the
right side of the lamp stand and on the left side of it?''
\bibverse{12} I asked him the second time, ``What are these two olive
branches, which are beside the two golden spouts that pour the golden
oil out of themselves?''
\bibverse{13} He answered me, ``Don't you know what these are?''
I said, ``No, my lord.''
\bibverse{14} Then he said, ``These are the two anointed ones who stand
by the Lord\footnote{4:14 The word translated ``Lord'' is ``Adonai.''}
of the whole earth.''
\hypertarget{section-4}{%
\section{5}\label{section-4}}
\bibverse{1} Then again I lifted up my eyes and saw, and behold, a
flying scroll. \bibverse{2} He said to me, ``What do you see?''
I answered, ``I see a flying scroll; its length is twenty
cubits,\footnote{5:2 A cubit is the length from the tip of the middle
finger to the elbow on a man's arm, or about 18 inches or 46
centimetres.} and its width ten cubits.''
\bibverse{3} Then he said to me, ``This is the curse that goes out over
the surface of the whole land, for everyone who steals shall be cut off
according to it on the one side; and everyone who swears falsely shall
be cut off according to it on the other side. \bibverse{4} I will cause
it to go out,'' says the LORD of Armies, ``and it will enter into the
house of the thief, and into the house of him who swears falsely by my
name; and it will remain in the middle of his house, and will destroy it
with its timber and its stones.''
\bibverse{5} Then the angel who talked with me came forward and said to
me, ``Lift up now your eyes and see what this is that is appearing.''
\bibverse{6} I said, ``What is it?''
He said, ``This is the ephah\footnote{5:6 An ephah is a measure of
volume of about 22 litres, 5.8 U. S. gallons, or about 2/3 of a
bushel.} basket that is appearing.'' He said moreover, ``This is their
appearance in all the land--- \bibverse{7} and behold, a lead cover
weighing one talent\footnote{5:7 A talent is about 30 kilograms or 66 pounds.}
was lifted up---and there was a woman sitting in the middle of the
ephah\footnote{5:7 1 ephah is about 22 litres or about 2/3 of a bushel.} basket.''
\bibverse{8} He said, ``This is Wickedness;'' and he threw her down into
the middle of the ephah basket; and he threw the lead weight on its
mouth.
\bibverse{9} Then I lifted up my eyes and saw, and behold, there were
two women; and the wind was in their wings. Now they had wings like the
wings of a stork, and they lifted up the ephah basket between earth and
the sky. \bibverse{10} Then I said to the angel who talked with me,
``Where are these carrying the ephah basket?''
\bibverse{11} He said to me, ``To build her a house in the land of
Shinar. When it is prepared, she will be set there in her own place.''
\hypertarget{section-5}{%
\section{6}\label{section-5}}
\bibverse{1} Again I lifted up my eyes, and saw, and behold, four
chariots came out from between two mountains; and the mountains were
mountains of bronze. \bibverse{2} In the first chariot were red horses.
In the second chariot were black horses. \bibverse{3} In the third
chariot were white horses. In the fourth chariot were dappled horses,
all of them powerful. \bibverse{4} Then I asked the angel who talked
with me, ``What are these, my lord?''
\bibverse{5} The angel answered me, ``These are the four winds of the
sky, which go out from standing before the Lord of all the earth.
\bibverse{6} The one with the black horses goes out towards the north
country; and the white went out after them; and the dappled went out
towards the south country.'' \bibverse{7} The strong went out, and
sought to go that they might walk back and forth through the earth. He
said, ``Go around and through the earth!'' So they walked back and forth
through the earth.
\bibverse{8} Then he called to me, and spoke to me, saying, ``Behold,
those who go towards the north country have quieted my spirit in the
north country.''
\bibverse{9} The LORD's word came to me, saying, \bibverse{10} ``Take of
them of the captivity, even of Heldai, of Tobijah, and of Jedaiah; and
come the same day, and go into the house of Josiah the son of Zephaniah,
where they have come from Babylon. \bibverse{11} Yes, take silver and
gold, and make crowns, and set them on the head of Joshua the son of
Jehozadak, the high priest; \bibverse{12} and speak to him, saying, `The
LORD of Armies says, ``Behold, the man whose name is the Branch! He will
grow up out of his place; and he will build the LORD's temple.
\bibverse{13} He will build the LORD's temple. He will bear the glory,
and will sit and rule on his throne. He will be a priest on his throne.
The counsel of peace will be between them both. \bibverse{14} The crowns
shall be to Helem, to Tobijah, to Jedaiah, and to Hen the son of
Zephaniah, for a memorial in the LORD's temple.
\bibverse{15} Those who are far off shall come and build in the LORD's
temple; and you shall know that the LORD of Armies has sent me to you.
This will happen, if you will diligently obey the LORD your God's
voice.''\,'\,''\footnote{6:15 The Hebrew word rendered ``God'' is
``אֱלֹהִ֑ים'' (Elohim).}
\hypertarget{section-6}{%
\section{7}\label{section-6}}
\bibverse{1} In the fourth year of king Darius, the LORD's word came to
Zechariah in the fourth day of the ninth month, the month of Chislev.
\bibverse{2} The people of Bethel sent Sharezer and Regem Melech and
their men to entreat the LORD's favour, \bibverse{3} and to speak to the
priests of the house of the LORD of Armies and to the prophets, saying,
``Should I weep in the fifth month, separating myself, as I have done
these so many years?''
\bibverse{4} Then the word of the LORD of Armies came to me, saying,
\bibverse{5} ``Speak to all the people of the land and to the priests,
saying, `When you fasted and mourned in the fifth and in the seventh
month for these seventy years, did you at all fast to me, really to me?
\bibverse{6} When you eat and when you drink, don't you eat for
yourselves and drink for yourselves? \bibverse{7} Aren't these the words
which the LORD proclaimed by the former prophets when Jerusalem was
inhabited and in prosperity, and its cities around her, and the South
and the lowland were inhabited?'\,''
\bibverse{8} The LORD's word came to Zechariah, saying, \bibverse{9}
``Thus has the LORD of Armies spoken, saying, `Execute true judgement,
and show kindness and compassion every man to his brother. \bibverse{10}
Don't oppress the widow, the fatherless, the foreigner, nor the poor;
and let none of you devise evil against his brother in your heart.'
\bibverse{11} But they refused to listen, and turned their backs, and
stopped their ears, that they might not hear. \bibverse{12} Yes, they
made their hearts as hard as flint, lest they might hear the law and the
words which the LORD of Armies had sent by his Spirit by the former
prophets. Therefore great wrath came from the LORD of Armies.
\bibverse{13} It has come to pass that, as he called and they refused to
listen, so they will call and I will not listen,'' said the LORD of
Armies; \bibverse{14} ``but I will scatter them with a whirlwind amongst
all the nations which they have not known. Thus the land was desolate
after them, so that no man passed through nor returned; for they made
the pleasant land desolate.''
\hypertarget{section-7}{%
\section{8}\label{section-7}}
\bibverse{1} The word of the LORD of Armies came to me. \bibverse{2} The
LORD of Armies says: ``I am jealous for Zion with great jealousy, and I
am jealous for her with great wrath.''
\bibverse{3} The LORD says: ``I have returned to Zion, and will dwell in
the middle of Jerusalem. Jerusalem shall be called `The City of Truth;'
and the mountain of the LORD of Armies, `The Holy Mountain.'\,''
\bibverse{4} The LORD of Armies says: ``Old men and old women will again
dwell in the streets of Jerusalem, every man with his staff in his hand
because of their old age. \bibverse{5} The streets of the city will be
full of boys and girls playing in its streets.''
\bibverse{6} The LORD of Armies says: ``If it is marvellous in the eyes
of the remnant of this people in those days, should it also be
marvellous in my eyes?'' says the LORD of Armies.
\bibverse{7} The LORD of Armies says: ``Behold, I will save my people
from the east country and from the west country. \bibverse{8} I will
bring them, and they will dwell within Jerusalem. They will be my
people, and I will be their God, in truth and in righteousness.''
\bibverse{9} The LORD of Armies says: ``Let your hands be strong, you
who hear in these days these words from the mouth of the prophets who
were in the day that the foundation of the house of the LORD of Armies
was laid, even the temple, that it might be built. \bibverse{10} For
before those days there was no wages for man nor any wages for an
animal, neither was there any peace to him who went out or came in,
because of the adversary. For I set all men everyone against his
neighbour. \bibverse{11} But now I will not be to the remnant of this
people as in the former days,'' says the LORD of Armies. \bibverse{12}
``For the seed of peace and the vine will yield its fruit, and the
ground will give its increase, and the heavens will give their dew. I
will cause the remnant of this people to inherit all these things.
\bibverse{13} It shall come to pass that, as you were a curse amongst
the nations, house of Judah and house of Israel, so I will save you, and
you shall be a blessing. Don't be afraid. Let your hands be strong.''
\bibverse{14} For the LORD of Armies says: ``As I thought to do evil to
you when your fathers provoked me to wrath,'' says the LORD of Armies,
``and I didn't repent, \bibverse{15} so again I have thought in these
days to do good to Jerusalem and to the house of Judah. Don't be afraid.
\bibverse{16} These are the things that you shall do: speak every man
the truth with his neighbour. Execute the judgement of truth and peace
in your gates, \bibverse{17} and let none of you devise evil in your
hearts against his neighbour, and love no false oath; for all these are
things that I hate,'' says the LORD.
\bibverse{18} The word of the LORD of Armies came to me. \bibverse{19}
The LORD of Armies says: ``The fasts of the fourth, fifth, seventh, and
tenth months shall be for the house of Judah joy, gladness, and cheerful
feasts. Therefore love truth and peace.''
\bibverse{20} The LORD of Armies says: ``Many peoples and the
inhabitants of many cities will yet come. \bibverse{21} The inhabitants
of one will go to another, saying, `Let's go speedily to entreat the
favour of the LORD, and to seek the LORD of Armies. I will go also.'
\bibverse{22} Yes, many peoples and strong nations will come to seek the
LORD of Armies in Jerusalem and to entreat the favour of the LORD.''
\bibverse{23} The LORD of Armies says: ``In those days, ten men out of
all the languages of the nations will take hold of the skirt of him who
is a Jew, saying, `We will go with you, for we have heard that God is
with you.'\,''
\hypertarget{section-8}{%
\section{9}\label{section-8}}
\bibverse{1} A revelation. The LORD's word is against the land of
Hadrach, and will rest upon Damascus--- for the eye of man and of all
the tribes of Israel is towards the LORD--- \bibverse{2} and Hamath,
also, which borders on it, Tyre and Sidon, because they are very wise.
\bibverse{3} Tyre built herself a stronghold, and heaped up silver like
the dust, and fine gold like the mire of the streets. \bibverse{4}
Behold, the Lord will dispossess her, and he will strike her power in
the sea; and she will be devoured with fire. \bibverse{5} Ashkelon will
see it, and fear; Gaza also, and will writhe in agony; as will Ekron,
for her expectation will be disappointed; and the king will perish from
Gaza, and Ashkelon will not be inhabited. \bibverse{6} Foreigners will
dwell in Ashdod, and I will cut off the pride of the Philistines.
\bibverse{7} I will take away his blood out of his mouth, and his
abominations from between his teeth; and he also will be a remnant for
our God; and he will be as a chieftain in Judah, and Ekron as a
Jebusite. \bibverse{8} I will encamp around my house against the army,
that no one pass through or return; and no oppressor will pass through
them any more: for now I have seen with my eyes. \bibverse{9} Rejoice
greatly, daughter of Zion! Shout, daughter of Jerusalem! Behold, your
King comes to you! He is righteous, and having salvation; lowly, and
riding on a donkey, even on a colt, the foal of a donkey. \bibverse{10}
I will cut off the chariot from Ephraim and the horse from Jerusalem.
The battle bow will be cut off; and he will speak peace to the nations.
His dominion will be from sea to sea, and from the River to the ends of
the earth. \bibverse{11} As for you also, because of the blood of your
covenant, I have set free your prisoners from the pit in which is no
water. \bibverse{12} Turn to the stronghold, you prisoners of hope! Even
today I declare that I will restore double to you. \bibverse{13} For
indeed I bend Judah as a bow for me. I have loaded the bow with Ephraim.
I will stir up your sons, Zion, against your sons, Greece, and will make
you like the sword of a mighty man. \bibverse{14} The LORD will be seen
over them. His arrow will flash like lightning. The Lord GOD will blow
the trumpet, and will go with whirlwinds of the south. \bibverse{15} The
LORD of Armies will defend them. They will destroy and overcome with
sling stones. They will drink, and roar as through wine. They will be
filled like bowls, like the corners of the altar. \bibverse{16} The LORD
their God will save them in that day as the flock of his people; for
they are like the jewels of a crown, lifted on high over his land.
\bibverse{17} For how great is his goodness, and how great is his
beauty! Grain will make the young men flourish, and new wine the
virgins.
\hypertarget{section-9}{%
\section{10}\label{section-9}}
\bibverse{1} Ask of the LORD rain in the spring time, The LORD who makes
storm clouds, and he gives rain showers to everyone for the plants in
the field. \bibverse{2} For the teraphim\footnote{10:2 teraphim were
household idols that may have been associated with inheritance rights
to the household property.} have spoken vanity, and the diviners have
seen a lie; and they have told false dreams. They comfort in vain.
Therefore they go their way like sheep. They are oppressed, because
there is no shepherd. \bibverse{3} My anger is kindled against the
shepherds, and I will punish the male goats, for the LORD of Armies has
visited his flock, the house of Judah, and will make them as his
majestic horse in the battle. \bibverse{4} From him will come the
cornerstone, from him the tent peg, from him the battle bow, from him
every ruler together. \bibverse{5} They will be as mighty men, treading
down muddy streets in the battle. They will fight, because the LORD is
with them. The riders on horses will be confounded. \bibverse{6} ``I
will strengthen the house of Judah, and I will save the house of Joseph.
I will bring them back, for I have mercy on them. They will be as though
I had not cast them off, for I am the LORD their God, and I will hear
them. \bibverse{7} Ephraim will be like a mighty man, and their heart
will rejoice as through wine. Yes, their children will see it and
rejoice. Their heart will be glad in the LORD. \bibverse{8} I will
signal for them and gather them, for I have redeemed them. They will
increase as they were before. \bibverse{9} I will sow them amongst the
peoples. They will remember me in far countries. They will live with
their children and will return. \bibverse{10} I will bring them again
also out of the land of Egypt, and gather them out of Assyria. I will
bring them into the land of Gilead and Lebanon; and there won't be room
enough for them. \bibverse{11} He will pass through the sea of
affliction, and will strike the waves in the sea, and all the depths of
the Nile will dry up; and the pride of Assyria will be brought down, and
the sceptre of Egypt will depart. \bibverse{12} I will strengthen them
in the LORD. They will walk up and down in his name,'' says the LORD.
\hypertarget{section-10}{%
\section{11}\label{section-10}}
\bibverse{1} Open your doors, Lebanon, that the fire may devour your
cedars. \bibverse{2} Wail, cypress tree, for the cedar has fallen,
because the stately ones are destroyed. Wail, you oaks of Bashan, for
the strong forest has come down. \bibverse{3} A voice of the wailing of
the shepherds! For their glory is destroyed---a voice of the roaring of
young lions! For the pride of the Jordan is ruined.
\bibverse{4} The LORD my God says: ``Feed the flock of slaughter.
\bibverse{5} Their buyers slaughter them and go unpunished. Those who
sell them say, `Blessed be the LORD, for I am rich;' and their own
shepherds don't pity them. \bibverse{6} For I will no more pity the
inhabitants of the land,'' says the LORD; ``but, behold, I will deliver
every one of the men into his neighbour's hand and into the hand of his
king. They will strike the land, and out of their hand I will not
deliver them.''
\bibverse{7} So I fed the flock to be slaughtered, especially the
oppressed of the flock. I took for myself two staffs. The one I called
``Favour'' and the other I called ``Union'', and I fed the flock.
\bibverse{8} I cut off the three shepherds in one month; for my soul was
weary of them, and their soul also loathed me. \bibverse{9} Then I said,
``I will not feed you. That which dies, let it die; and that which is to
be cut off, let it be cut off; and let those who are left eat each
other's flesh.'' \bibverse{10} I took my staff Favour and cut it apart,
that I might break my covenant that I had made with all the peoples.
\bibverse{11} It was broken in that day; and thus the poor of the flock
that listened to me knew that it was the LORD's word. \bibverse{12} I
said to them, ``If you think it best, give me my wages; and if not, keep
them.'' So they weighed for my wages thirty pieces of silver.
\bibverse{13} The LORD said to me, ``Throw it to the potter---the
handsome price that I was valued at by them!'' I took the thirty pieces
of silver and threw them to the potter in the LORD's house.
\bibverse{14} Then I cut apart my other staff, Union, that I might break
the brotherhood between Judah and Israel.
\bibverse{15} The LORD said to me, ``Take for yourself yet again the
equipment of a foolish shepherd. \bibverse{16} For, behold, I will raise
up a shepherd in the land who will not visit those who are cut off,
neither will seek those who are scattered, nor heal that which is
broken, nor feed that which is sound; but he will eat the meat of the
fat sheep, and will tear their hoofs in pieces. \bibverse{17} Woe to the
worthless shepherd who leaves the flock! The sword will strike his arm
and his right eye. His arm will be completely withered, and his right
eye will be totally blinded!''
\hypertarget{section-11}{%
\section{12}\label{section-11}}
\bibverse{1} A revelation of the LORD's word concerning Israel: The
LORD, who stretches out the heavens and lays the foundation of the
earth, and forms the spirit of man within him says: \bibverse{2}
``Behold, I will make Jerusalem a cup of reeling to all the surrounding
peoples, and it will also be on Judah in the siege against Jerusalem.
\bibverse{3} It will happen in that day that I will make Jerusalem a
burdensome stone for all the peoples. All who burden themselves with it
will be severely wounded, and all the nations of the earth will be
gathered together against it. \bibverse{4} In that day,'' says the LORD,
``I will strike every horse with terror and his rider with madness. I
will open my eyes on the house of Judah, and will strike every horse of
the peoples with blindness. \bibverse{5} The chieftains of Judah will
say in their heart, `The inhabitants of Jerusalem are my strength in the
LORD of Armies their God.'
\bibverse{6} In that day I will make the chieftains of Judah like a pan
of fire amongst wood, and like a flaming torch amongst sheaves. They
will devour all the surrounding peoples on the right hand and on the
left; and Jerusalem will yet again dwell in their own place, even in
Jerusalem.
\bibverse{7} The LORD also will save the tents of Judah first, that the
glory of David's house and the glory of the inhabitants of Jerusalem not
be magnified above Judah. \bibverse{8} In that day the LORD will defend
the inhabitants of Jerusalem. He who is feeble amongst them at that day
will be like David, and David's house will be like God, like the LORD's
angel before them. \bibverse{9} It will happen in that day, that I will
seek to destroy all the nations that come against Jerusalem.
\bibverse{10} I will pour on David's house and on the inhabitants of
Jerusalem the spirit of grace and of supplication. They will look to
me\footnote{12:10 After ``me'', the Hebrew has the two letters ``Aleph
Tav'' (the first and last letters of the Hebrew alphabet), not as a
word, but as a grammatical marker.} whom they have pierced; and they
shall mourn for him as one mourns for his only son, and will grieve
bitterly for him as one grieves for his firstborn. \bibverse{11} In that
day there will be a great mourning in Jerusalem, like the mourning of
Hadadrimmon in the valley of Megiddo. \bibverse{12} The land will mourn,
every family apart; the family of David's house apart, and their wives
apart; the family of the house of Nathan apart, and their wives apart;
\bibverse{13} the family of the house of Levi apart, and their wives
apart; the family of the Shimeites apart, and their wives apart;
\bibverse{14} all the families who remain, every family apart, and their
wives apart.
\hypertarget{section-12}{%
\section{13}\label{section-12}}
\bibverse{1} ``In that day there will be a fountain opened to David's
house and to the inhabitants of Jerusalem, for sin and for uncleanness.
\bibverse{2} It will come to pass in that day, says the LORD of Armies,
that I will cut off the names of the idols out of the land, and they
will be remembered no more. I will also cause the prophets and the
spirit of impurity to pass out of the land. \bibverse{3} It will happen
that when anyone still prophesies, then his father and his mother who
bore him will tell him, `You must die, because you speak lies in the
LORD's name;' and his father and his mother who bore him will stab him
when he prophesies. \bibverse{4} It will happen in that day that the
prophets will each be ashamed of his vision when he prophesies; they
won't wear a hairy mantle to deceive, \bibverse{5} but he will say, `I
am no prophet, I am a tiller of the ground; for I have been made a
bondservant from my youth.' \bibverse{6} One will say to him, `What are
these wounds between your arms?' Then he will answer, `Those with which
I was wounded in the house of my friends.' \bibverse{7} ``Awake, sword,
against my shepherd, and against the man who is close to me,'' says the
LORD of Armies. ``Strike the shepherd, and the sheep will be scattered;
and I will turn my hand against the little ones. \bibverse{8} It shall
happen that in all the land,'' says the LORD, ``two parts in it will be
cut off and die; but the third will be left in it. \bibverse{9} I will
bring the third part into the fire, and will refine them as silver is
refined, and will test them like gold is tested. They will call on my
name, and I will hear them. I will say, `It is my people;' and they will
say, `The LORD is my God.'\,''
\hypertarget{section-13}{%
\section{14}\label{section-13}}
\bibverse{1} Behold, a day of the LORD comes, when your plunder will be
divided within you. \bibverse{2} For I will gather all nations against
Jerusalem to battle; and the city will be taken, the houses rifled, and
the women ravished. Half of the city will go out into captivity, and the
rest of the people will not be cut off from the city. \bibverse{3} Then
the LORD will go out and fight against those nations, as when he fought
in the day of battle. \bibverse{4} His feet will stand in that day on
the Mount of Olives, which is before Jerusalem on the east; and the
Mount of Olives will be split in two from east to west, making a very
great valley. Half of the mountain will move towards the north, and half
of it towards the south. \bibverse{5} You shall flee by the valley of my
mountains, for the valley of the mountains shall reach to Azel. Yes, you
shall flee, just like you fled from before the earthquake in the days of
Uzziah king of Judah. The LORD my God will come, and all the holy ones
with you.\footnote{14:5 Septuagint reads ``him'' instead of ``you''.}
\bibverse{6} It will happen in that day that there will not be light,
cold, or frost. \bibverse{7} It will be a unique day which is known to
the LORD---not day, and not night; but it will come to pass that at
evening time there will be light.
\bibverse{8} It will happen in that day that living waters will go out
from Jerusalem, half of them towards the eastern sea, and half of them
towards the western sea. It will be so in summer and in winter.
\bibverse{9} The LORD will be King over all the earth. In that day the
LORD will be one, and his name one.
\bibverse{10} All the land will be made like the Arabah, from Geba to
Rimmon south of Jerusalem; and she will be lifted up and will dwell in
her place, from Benjamin's gate to the place of the first gate, to the
corner gate, and from the tower of Hananel to the king's wine presses.
\bibverse{11} Men will dwell therein, and there will be no more curse;
but Jerusalem will dwell safely.
\bibverse{12} This will be the plague with which the LORD will strike
all the peoples who have fought against Jerusalem: their flesh will
consume away while they stand on their feet, and their eyes will consume
away in their sockets, and their tongue will consume away in their
mouth. \bibverse{13} It will happen in that day that a great panic from
the LORD will be amongst them; and they will each seize the hand of his
neighbour, and his hand will rise up against the hand of his neighbour.
\bibverse{14} Judah also will fight at Jerusalem; and the wealth of all
the surrounding nations will be gathered together: gold, silver, and
clothing, in great abundance.
\bibverse{15} A plague like this will fall on the horse, on the mule, on
the camel, on the donkey, and on all the animals that will be in those
camps.
\bibverse{16} It will happen that everyone who is left of all the
nations that came against Jerusalem will go up from year to year to
worship the King, the LORD of Armies, and to keep the feast of booths.
\bibverse{17} It will be that whoever of all the families of the earth
doesn't go up to Jerusalem to worship the King, the LORD of Armies, on
them there will be no rain. \bibverse{18} If the family of Egypt doesn't
go up and doesn't come, neither will it rain on them. This will be the
plague with which the LORD will strike the nations that don't go up to
keep the feast of booths. \bibverse{19} This will be the punishment of
Egypt and the punishment of all the nations that don't go up to keep the
feast of booths.
\bibverse{20} In that day there will be inscribed on the bells of the
horses, ``HOLY TO THE LORD''; and the pots in the LORD's house will be
like the bowls before the altar. \bibverse{21} Yes, every pot in
Jerusalem and in Judah will be holy to the LORD of Armies; and all those
who sacrifice will come and take of them, and cook in them. In that day
there will no longer be a Canaanite\footnote{14:21 or, merchant} in the
house of the LORD of Armies.
\documentclass[../main.tex]{subfiles}
\begin{document}
\section{Curves \& Arc Length}
\begin{definition}{Smoothness}{}
A vector valued function \(\vec{r}(t)\) is \emph{smooth} on an interval \((a,b)\) if \(\vec{r}'(t)\) exists \emph{and} \(\vec{r}'(t)\neq\vec{0}\) on the interval.
\end{definition}
In the above definition, the case where \(\vec{r}'(t)=\vec{0}\) corresponds to coming to a full stop, at which point motion may resume in any direction while maintaining the existence of the derivative, so a stronger condition is necessary to eliminate these cases.
\begin{definition}{Closure}{}
A vector valued function \(\vec{r}(t)\) is \emph{closed} on an interval \((a,b)\) if \(\vec{r}(a)=\vec{r}(b)\).
\end{definition}
\begin{definition}{Simplicity}{}
A vector valued function \(\vec{r}(t)\) is \emph{simple} on an interval \((a,b)\) if it has no self-intersections on the interval, except possibly where \(\vec{r}(a)=\vec{r}(b)\).
\end{definition}
Given a curve described by some function \(\vec{r}(t)\) on some interval \((a,b)\), we may wish to find its arc length. We assume that \(\vec{r}\) is continuous, smooth, and 1-to-1. This may be done by approximating the curve as a sequence of line segments, and taking the limit as their length goes to \(0\). We consider some set of points \(\{t_0,\ldots,t_n\}\), where \(a=t_0 < t_1 < \ldots < t_n=b\), which yield the corresponding line segments \(\{\vec{r}(t_1)-\vec{r}(t_0),\ldots,\vec{r}(t_n)-\vec{r}(t_{n-1})\}\) whose lengths may be summed up to obtain an approximation.
\begin{definition}{Rectifiability}{}
A curve is rectifiable if there exists some \(k>0\) such that the length of every approximation of the curve in terms of line segments is less than \(k\), regardless of the number of line segments. In other words, the lengths of the approximations approach a finite limit as the partition is refined.
\end{definition}
If a curve is rectifiable with some \(k\), then the smallest such \(k\) is the length of the curve. Formulaically,
\[
L = \int_a^b \left| \vec{r}'(t) \right|\, dt = \int_a^b v(t)\,dt
\]
which may be proven by noting that the length of the line segment between \(\vec{r}(t_i)\) and \(\vec{r}(t_{i-1})\) is \(|\vec{r}(t_i)-\vec{r}(t_{i-1})|\), which approaches \(|\vec{r}'(t_i)|\,(t_i-t_{i-1})\) as \(t_i\) and \(t_{i-1}\) become very close.
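This construction is easy to try numerically. The sketch below (using the unit circle \(\vec{r}(t)=(\cos t,\sin t)\) as a sample curve, which is not one of the notes' examples) sums the chord lengths and converges to the exact length \(2\pi\):

```python
import math

def polyline_length(r, a, b, n):
    """Sum the lengths of the n chords r(t_{i-1}) -> r(t_i) on [a, b]."""
    ts = [a + (b - a) * i / n for i in range(n + 1)]
    return sum(math.dist(r(t0), r(t1)) for t0, t1 in zip(ts, ts[1:]))

# Sample curve: the unit circle, exact length 2*pi.
circle = lambda t: (math.cos(t), math.sin(t))
approx = polyline_length(circle, 0.0, 2 * math.pi, 10_000)
```

For a smooth curve the chord approximation error shrinks like \(1/n^2\), so doubling \(n\) roughly quarters the error.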
\begin{example}{}{}
Find the length of the helix described by \(\vec{r}(t)=a\cos{t}\i+a\sin{t}\j+bt\) on the interval \(0\leq t \leq T\).
\tcblower
\(\vec{r}'(t)=-a\sin{t}\i+a\cos{t}\j+b\k\), so \(v(t)=\sqrt{a^2\sin^2{t}+a^2\cos^2{t}+b^2}=\sqrt{a^2+b^2}\) and
\[
L = \int_0^T \sqrt{a^2+b^2}\,dt = \left[\sqrt{a^2+b^2}t\right]_0^T = T\sqrt{a^2+b^2}
\]
\end{example}
\begin{example}{}{}
Find the length of the curve described by \(\vec{r}(t)=2t\i+t^2\j+\frac{1}{3}t^3\k\) on the interval \(1\leq t \leq 2\).
\tcblower
\(\vec{r}'(t)=2\i+2t\j+t^2\k\), so \(v(t)=\sqrt{4+4t^2+t^4}=\sqrt{(2+t^2)^2}=2+t^2\), and the length of the curve is
\[
L = \int_1^2 2+t^2\,dt = \left[2t+\frac{t^3}{3}\right]_1^2 = \frac{13}{3}
\]
\end{example}
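Both worked examples can be double-checked by integrating the speed numerically; the values \(a=3\), \(b=4\), \(T=2\) below are arbitrary sample choices, not values from the text:

```python
import math

def arc_length(v, a, b, n=100_000):
    """Trapezoid-rule approximation of the integral of the speed v over [a, b]."""
    h = (b - a) / n
    total = 0.5 * (v(a) + v(b)) + sum(v(a + i * h) for i in range(1, n))
    return total * h

# Helix: v(t) = sqrt(a^2 + b^2), so L = T * sqrt(a^2 + b^2) = 10 here.
a, b, T = 3.0, 4.0, 2.0
L_helix = arc_length(lambda t: math.hypot(a, b), 0.0, T)

# Second example: v(t) = 2 + t^2 on [1, 2], so L = 13/3.
L_curve = arc_length(lambda t: 2.0 + t * t, 1.0, 2.0)
```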
Let's say that \(\vec{r}:[a,b]\to\mathbb{R}^d\) is a smooth parametric curve, and \(s(t)\) is the length of the curve from \(\vec{r}(a)\) to \(\vec{r}(t)\). We may then use \(s\) to parametrize the curve instead of \(t\), which yields a parameterization with a constant speed of \(1\).
Note that
\[
s(t) = \int_a^t v(u)\,du
\]
so
\[
\frac{ds}{dt}=v(t)
\]
by the Fundamental Theorem of Calculus.
\begin{theorem}{}{}
An arc length parametrized curve has constant speed \(1\).
\tcblower
Note that
\[
\left|\frac{d\vec{r}}{ds}\right|=\left|\frac{d\vec{r}}{dt}\frac{dt}{ds}\right|
\]
but by the definition of arc length parameterization we have \(\frac{dt}{ds}=\frac{1}{v(t)}\), which is always defined because \(\vec{r}\) is smooth, so \(v(t)\neq 0\). In that case,
\[
\left|\frac{d\vec{r}}{ds}\right|=v(t)\cdot\frac{1}{v(t)}=1
\]
\end{theorem}
\begin{example}{}{}
Find the arc length parameterization of the helix described by \(\vec{r}(t)=a\cos{t}\i+a\sin{t}\j+bt\) on the interval \(0\leq t \leq T\).
\tcblower
\(\vec{r}'(t)=-a\sin{t}\i+a\cos{t}\j+b\k\), so \(v(t)=\sqrt{a^2+b^2}\) and
\[
s(t) = \int_0^t \sqrt{a^2+b^2}\,du = t\sqrt{a^2+b^2}
\]
which we may use to reparameterize the curve in the form
\[
\vec{r}=a\cos\left(\frac{s}{\sqrt{a^2+b^2}}\right)\i + a\sin\left(\frac{s}{\sqrt{a^2+b^2}}\right)\j + b\frac{s}{\sqrt{a^2+b^2}}\k
\]
\end{example}
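A finite-difference sketch (again with the arbitrary sample values \(a=3\), \(b=4\)) confirms that the arc length parameterization of the helix has speed \(1\):

```python
import math

a, b = 3.0, 4.0
c = math.hypot(a, b)  # sqrt(a^2 + b^2)

def r_of_s(s):
    """Arc length parameterization of the helix."""
    return (a * math.cos(s / c), a * math.sin(s / c), b * s / c)

def speed(s, h=1e-6):
    """Central-difference estimate of |dr/ds|."""
    return math.dist(r_of_s(s - h), r_of_s(s + h)) / (2 * h)

speeds = [speed(s) for s in (0.5, 1.0, 5.0, 20.0)]
```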
\begin{example}{}{}
Find the arc length parameterization of the curve described by \(\vec{r}(t)=2t\i+t^2\j+\frac{t^3}{3}\k\) on the interval \(0 \leq t \leq T\).
\tcblower
\(\vec{r}'(t)=2\i+2t\j+t^2\k\), so \(v(t)=t^2+2\) and
\[
s(t) = \int_0^tu^2+2\,du = \frac{t^3}{3}+2t
\]
which may be solved for \(t\) to find a parameterization. The algebra involved is tedious, and thus elided.
\end{example}
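Although the closed form is tedious, the inversion is straightforward numerically: \(s(t)=\frac{t^3}{3}+2t\) is strictly increasing, so \(t(s)\) can be recovered by bisection. A sketch (not the closed form the text alludes to):

```python
def s_of_t(t):
    """Arc length along the curve: s(t) = t^3/3 + 2t."""
    return t ** 3 / 3 + 2 * t

def t_of_s(s, lo=0.0, hi=100.0, iters=200):
    """Invert the strictly increasing s(t) by bisection on [lo, hi]."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if s_of_t(mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

t_recovered = t_of_s(s_of_t(1.7))  # round trip should recover t = 1.7
```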
\end{document}
\chapter{CT Block Diagrams}
\section{The Four Basic Motifs}
Understanding complex systems, with many interconnections, is aided by graphical representations, generally called block diagrams\footnote{There is a closely related graphical approach called \emph{signal flow graphs} that you may learn about in upper-level courses. They are equivalent to block diagrams, but are more amenable to computer representation and manipulation.}. They are a hybrid graphical-analytical approach.
There are just four basic motifs needed to build any block diagram. Let $\mathcal{S}_i$ denote a (sub) system. Then the four motifs are:
\begin{itemize}
\item A single block.\\[1em]
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
% We start by placing the blocks
\node [input, name=input] {};
\node [block, right of=input] (system) {$\mathcal{S}_1$};
\node [output, right of=system] (output) {};
% Once the nodes are placed, connecting them is easy.
\draw [draw,->] (input) -- node {$x(t)$} (system);
\draw [->] (system) -- node {$y(t)$} (output);
\end{tikzpicture}
\item A {\it series} connection of two blocks\\[1em]
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
% We start by placing the blocks
\node [input, name=input] {};
\node [block, right of=input] (system1) {$\mathcal{S}_1$};
\node [block, right of=system1,node distance=4cm] (system2) {$\mathcal{S}_2$};
\node [output, right of=system2] (output) {};
% Once the nodes are placed, connecting them is easy.
\draw [draw,->] (input) -- node {$x(t)$} (system1);
\draw [->] (system1) -- (system2);
\draw [->] (system2) -- node {$y(t)$} (output);
\end{tikzpicture}
\item A {\it parallel} connection of two blocks\\[1em]
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
% We start by placing the blocks and inputs
\node[shape=coordinate] at (1,1) (input1) {};
\node[block] at (3,1) (block1) {$\mathcal{S}_1$};
\node[shape=coordinate] at ($(block1.east)+(0.5,0)$) (output1) {};
\draw[->] (input1) -- (block1);
\draw (block1) -- (output1);
\node[shape=coordinate] at (1,-1) (input2) {};
\node[block] at (3,-1) (block2) {$\mathcal{S}_2$};
\node[shape=coordinate] at ($(block2.east)+(0.5,0)$) (output2) {};
\draw[->] (input2) -- (block2);
\draw (block2) -- (output2);
\node [input, name=input] at (0,0) {};
\node [input, name=conn] at (1,0) {};
\draw (conn) -- (input1);
\draw (conn) -- (input2);
\node [sum, right of=input,node distance=5cm] (sum) {$\Sigma$};
\draw [->] (output1) -| (sum);
\draw [->] (output2) -| (sum);
\draw [draw] (input) -- node {$x(t)$} (conn);
\node [output, right of=sum] (output) {};
\draw [->] (sum) -- node {$y(t)$} (output);
\end{tikzpicture}
\item A {\it feedback} connection\\[1em]
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
% We start by placing the blocks
\node[block] at (4,0) (block1) {$\mathcal{S}_1$};
\node[block] at (4,-2) (block2) {$\mathcal{S}_2$};
\node[shape=coordinate] at (6,-2) (input2) {};
\node [input, name=input] at (0,0) {};
\node [shape=coordinate, name=conn] at (6,0) {};
\draw (block1) -- (conn);
\draw (conn) -- (input2);
\draw [->] (input2) -- (block2);
\node [sum, right of=input,node distance=2cm] (sum) {$\Sigma$};
\draw [->] (block2) -| node[pos=0.95] {$-$} (sum);
\draw [draw,->] (input) -- node {$x(t)$} (sum);
\draw [->] (sum) -- (block1);
\node [output, right of=conn] (output) {};
\draw [->] (conn) -- node {$y(t)$} (output);
\end{tikzpicture}
\end{itemize}
Note the feedback is negative (the minus sign on the feedback summation input). These can be used in various combinations, as we shall see shortly.
\section{Connections to Convolution}
Each subsystem, $\mathcal{S}_i$, can be represented by a basic time-domain operation (e.g. derivatives, integrals, addition, and scaling) or more generally by its impulse response $h_i(t)$.
For example, a block representing a system acting as an integrator is typically drawn as
\begin{center}
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
% We start by placing the blocks
\node [input, name=input] {};
\node [block, right of=input] (system) {$\int$};
\node [output, right of=system] (output) {};
% Once the nodes are placed, connecting them is easy.
\draw [draw,->] (input) -- node {$x(t)$} (system);
\draw [->] (system) -- node[pos=3] {$y(t) = \int\limits_{-\infty}^t x(\tau) \; d\tau$} (output);
\end{tikzpicture}
\end{center}
This is equivalent to an impulse response $h(t) = u(t)$ so that it might also be drawn as
\begin{center}
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
% We start by placing the blocks
\node [input, name=input] {};
\node [block, right of=input] (system) {$h(t) = u(t)$};
\node [output, right of=system] (output) {};
% Once the nodes are placed, connecting them is easy.
\draw [draw,->] (input) -- node {$x(t)$} (system);
\draw [->] (system) -- node[pos=3] {$y(t) = x(t) * u(t) = \int\limits_{-\infty}^t x(\tau) \; d\tau$} (output);
\end{tikzpicture}
\end{center}
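The same equivalence can be sampled in discrete time: convolving with the sampled unit step reduces to a running sum, which (scaled by the step size) approximates the integral. A sketch with the hypothetical input \(x(t)=\cos t\), whose integral is \(\sin t\):

```python
import math

dt = 1e-3                                 # sample spacing
n = 5000
x = [math.cos(k * dt) for k in range(n)]  # sampled input x(t) = cos(t)

# (x * u)[k] = sum_{j <= k} x[j]: convolution with the unit step is a
# running sum, and scaling by dt approximates the continuous integral.
y, acc = [], 0.0
for xk in x:
    acc += xk * dt
    y.append(acc)

t_end = n * dt   # the integral of cos from 0 to t_end is sin(t_end)
```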
We can connect each block diagram motif to a property of convolution:
\begin{itemize}
\item A single block is equivalent to convolution with the impulse response for that subsystem\\[1em]
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
% We start by placing the blocks
\node [input, name=input] {};
\node [block, right of=input] (system) {$h_1(t)$};
\node [output, right of=system] (output) {};
% Once the nodes are placed, connecting them is easy.
\draw [draw,->] (input) -- node {$x(t)$} (system);
\draw [->] (system) -- node[pos=2] {$y(t) = h_1(t)*x(t)$} (output);
\end{tikzpicture}
\item Using the associative property, a series connection of two blocks becomes
\begin{center}
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
% We start by placing the blocks
\node [input, name=input] {};
\node [block, right of=input] (system1) {$h_1(t)$};
\node [block, right of=system1,node distance=4cm] (system2) {$h_2(t)$};
\node [output, right of=system2] (output) {};
\draw [draw,->] (input) -- node {$x(t)$} (system1);
\draw [->] (system1) -- (system2);
\draw [->] (system2) -- node[pos=3] {$y(t) = \left[h_1(t)*h_2(t)\right]*x(t)$} (output);
\end{tikzpicture}
\end{center}
which can be reduced to a single convolution $y(t) = h_3(t)*x(t)$ where $h_3(t) = h_1(t)*h_2(t)$.
\item Using the distributive property, a parallel connection of two blocks becomes
\begin{center}
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
\node[shape=coordinate] at (1,1) (input1) {};
\node[block] at (3,1) (block1) {$h_1(t)$};
\node[shape=coordinate] at ($(block1.east)+(0.5,0)$) (output1) {};
\draw[->] (input1) -- (block1);
\draw (block1) -- (output1);
\node[shape=coordinate] at (1,-1) (input2) {};
\node[block] at (3,-1) (block2) {$h_2(t)$};
\node[shape=coordinate] at ($(block2.east)+(0.5,0)$) (output2) {};
\draw[->] (input2) -- (block2);
\draw (block2) -- (output2);
\node [input, name=input] at (0,0) {};
\node [input, name=conn] at (1,0) {};
\draw (conn) -- (input1);
\draw (conn) -- (input2);
\node [sum, right of=input,node distance=5cm] (sum) {$\Sigma$};
\draw [->] (output1) -| (sum);
\draw [->] (output2) -| (sum);
\draw [draw] (input) -- node {$x(t)$} (conn);
\node [output, right of=sum] (output) {};
\draw [->] (sum) -- node[pos=3] {$y(t)= \left[h_1(t)*x(t)\right] + \left[h_2(t)*x(t)\right] = \left[h_1(t)+h_2(t)\right]*x(t)$} (output);
\end{tikzpicture}
\end{center}
which is equivalent to a single convolution $y(t) = h_3(t)*x(t)$ where $h_3(t) = h_1(t) + h_2(t)$.
\item In the feedback connection let $w(t)$ be the output of the summation
\begin{center}
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
% We start by placing the blocks
\node[block] at (4.5,0) (block1) {$h_1(t)$};
\node[block] at (4,-2) (block2) {$h_2(t)$};
\node[shape=coordinate] at (6,-2) (input2) {};
\node [input, name=input] at (0,0) {};
\node [shape=coordinate, name=conn] at (6,0) {};
\draw (block1) -- (conn);
\draw (conn) -- (input2);
\draw [->] (input2) -- (block2);
\node [sum, right of=input,node distance=2cm] (sum) {$\Sigma$};
\draw [->] (block2) -| node[pos=0.95] {$-$} (sum);
\draw [draw,->] (input) -- node {$x(t)$} (sum);
\draw [->] (sum) -- (block1);
\node [output, right of=conn] (output) {};
\draw [->] (conn) -- node {$y(t)$} (output);
\draw node at (3,0.3) {$w(t)$};
\end{tikzpicture}
\end{center}
Then $y(t) = h_1(t)*w(t)$ and $w(t) = x(t) - h_2(t)*y(t)$. Substituting the latter into the former gives $y(t) = h_1(t)*\left(x(t)-h_2(t)*y(t)\right)$. Using the distributive property we get $y(t) = h_1(t)*x(t) - h_1(t)*h_2(t)*y(t)$. Isolating the input on the right-hand side and using $y(t) = \delta(t)*y(t)$ we get
\[
y(t) + h_1(t)*h_2(t)*y(t) = \left[\delta(t) + h_1(t)*h_2(t)\right]*y(t) = h_1(t)*x(t)
\]
We can solve this for $y(t)$ using the concept of inverse systems. Let $h_3(t)* \left[\delta(t) + h_1(t)*h_2(t)\right]= \delta(t)$, i.e. $h_3$ is the inverse system of $\delta(t) + h_1(t)*h_2(t)$. Then
\[
y(t) = h_3(t)*h_1(t)*x(t)
\]
\end{itemize}
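All three reductions can be verified with finite discrete sequences, where the same associative and distributive identities hold for discrete convolution, and the feedback identity can be solved sample-by-sample. The taps and input below are arbitrary sample values:

```python
def conv(f, g):
    """Full discrete convolution of two finite sequences."""
    out = [0.0] * (len(f) + len(g) - 1)
    for i, fi in enumerate(f):
        for j, gj in enumerate(g):
            out[i + j] += fi * gj
    return out

h1 = [1.0, 0.5, 0.25]      # sample FIR taps
h2 = [2.0, -1.0, 0.5]
x  = [1.0, 2.0, 3.0, 4.0]  # sample input

# Series: [h1 * h2] * x == h1 * [h2 * x]   (associativity).
series_a = conv(conv(h1, h2), x)
series_b = conv(h1, conv(h2, x))

# Parallel: h1*x + h2*x == [h1 + h2] * x   (distributivity).
parallel_a = [p + q for p, q in zip(conv(h1, x), conv(h2, x))]
parallel_b = conv([p + q for p, q in zip(h1, h2)], x)

# Feedback: solve [delta + h1*h2] * y = h1 * x by forward substitution;
# this works because the combined kernel g has g[0] != 0.
h1h2 = conv(h1, h2)
g = [1.0 + h1h2[0]] + h1h2[1:]   # delta is [1] in discrete time
rhs = conv(h1, x)
y = []
for m in range(len(rhs)):
    acc = rhs[m]
    for k in range(1, min(m + 1, len(g))):
        acc -= g[k] * y[m - k]
    y.append(acc / g[0])
```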
Recall, when the system is instantaneous (memoryless) the impulse response is $a\delta(t)$ for some constant $a$. This is the same as scaling the signal by $a$. We typically drop the block in such cases and draw the input-output operation as
\begin{center}
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
\node [input, name=input] at (0,0) {};
\node [output, name=system] at (2,0) {};
\node [output, name=output] at (4,0) {};
\draw [draw,->] (input) -- node {$x(t)$} (system);
\draw [draw,->] (system) -- node[pos=1] {$y(t) = ax(t)$} (output);
\draw [->] (input) -- node {$a$} (output);
\end{tikzpicture}
\end{center}
These properties allow us to perform transformations, either breaking up a system into subsystems, or reducing a system to a single block.
\begin{example}
Consider a second-order system with impulse response
\[
h(t) = \left(e^{-3t} - e^{-t}\right)\, u(t)
\]
We can express this as a block diagram consisting of two parallel blocks
\begin{center}
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
\node[shape=coordinate] at (1,1) (input1) {};
\node[block] at (3,1) (block1) {$h_1(t) = e^{-3t}u(t)$};
\node[shape=coordinate] at ($(block1.east)+(0.5,0)$) (output1) {};
\draw[->] (input1) -- (block1);
\draw (block1) -- (output1);
\node[shape=coordinate] at (1,-1) (input2) {};
\node[block] at (3,-1) (block2) {$h_2(t) = -e^{-t}u(t)$};
\node[shape=coordinate] at ($(block2.east)+(0.5,0)$) (output2) {};
\draw[->] (input2) -- (block2);
\draw (block2) -- (output2);
\node [input, name=input] at (0,0) {};
\node [input, name=conn] at (1,0) {};
\draw (conn) -- (input1);
\draw (conn) -- (input2);
\node [sum, right of=input,node distance=5cm] (sum) {$\Sigma$};
\draw [->] (output1) -| (sum);
\draw [->] (output2) -| (sum);
\draw [draw] (input) -- node {$x(t)$} (conn);
\node [output, right of=sum] (output) {};
\draw [->] (sum) -- node[pos=1] {$y(t)$} (output);
\end{tikzpicture}
\end{center}
\end{example}
\begin{example}
Consider a system with block diagram
\begin{center}
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
\node[shape=coordinate] at (1,1) (input1) {};
\node[block] at (3,1) (block1) {$h_1(t) = e^{-2t}u(t)$};
\node[shape=coordinate] at ($(block1.east)+(0.5,0)$) (output1) {};
\draw[->] (input1) -- (block1);
\draw (block1) -- (output1);
\node[shape=coordinate] at (1,-1) (input2) {};
\node[block] at (3,-1) (block2) {$h_2(t) = -e^{-4t}u(t)$};
\node[shape=coordinate] at ($(block2.east)+(0.5,0)$) (output2) {};
\draw[->] (input2) -- (block2);
\draw (block2) -- (output2);
\node[block] at (8,0) (block3) {$h_3(t) = e^{-6t}u(t)$};
\node [input, name=input] at (0,0) {};
\node [input, name=conn] at (1,0) {};
\draw (conn) -- (input1);
\draw (conn) -- (input2);
\node [sum, right of=input,node distance=5cm] (sum) {$\Sigma$};
\draw [->] (output1) -| (sum);
\draw [->] (output2) -| (sum);
\draw [draw] (input) -- node {$x(t)$} (conn);
\node [output, right of=block3] (output) {};
\draw [->] (sum) -- (block3);
\draw [->] (block3) -- node[pos=1] {$y(t)$} (output);
\end{tikzpicture}
\end{center}
We can determine the overall impulse response of this system using the distributive and associative properties
\begin{align*}
h(t) &= \left[ h_1(t) + h_2(t)\right]*h_3(t)\\
&= h_1(t)*h_3(t) + h_2(t)*h_3(t)\\
&= \left[ e^{-2t}u(t)\right]*\left[ e^{-6t}u(t)\right] + \left[-e^{-4t}u(t) \right]*\left[ e^{-6t}u(t)\right]
\end{align*}
Using the convolution table from Lecture 8 we get the overall impulse response
\[
h(t) = \frac{e^{-2 t}-e^{-6 t}}{4}u(t) - \frac{e^{-4 t}-e^{-6 t}}{2}u(t) = \frac{1}{4}e^{-2t}u(t) -\frac{1}{2}e^{-4t}u(t) + \frac{1}{4}e^{-6t}u(t)
\]
\end{example}
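The closed form above can be sanity-checked numerically by approximating the continuous convolution $\left[h_1(t)+h_2(t)\right]*h_3(t)$ with a Riemann sum; the step size and evaluation points below are arbitrary choices.

```python
# Numerical check of the worked example: approximate the CT convolution
# [h1 + h2] * h3 by a Riemann sum and compare against the closed form
#   h(t) = (1/4) e^{-2t} - (1/2) e^{-4t} + (1/4) e^{-6t},  t >= 0.
import math

dt, N = 0.001, 4000                                     # grid: t = 0 .. 4 s
t = [k * dt for k in range(N)]
h12 = [math.exp(-2*tk) - math.exp(-4*tk) for tk in t]   # h1(t) + h2(t)
h3  = [math.exp(-6*tk) for tk in t]

def conv_at(f, g, n):
    """Riemann-sum approximation of (f*g)(n*dt) for causal signals."""
    return dt * sum(f[k] * g[n - k] for k in range(n + 1))

def h_exact(tk):
    return 0.25*math.exp(-2*tk) - 0.5*math.exp(-4*tk) + 0.25*math.exp(-6*tk)

for n in (100, 500, 1000, 2000):
    assert abs(conv_at(h12, h3, n) - h_exact(n * dt)) < 1e-3
```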
\section{Connections to LCCDE}
The other system representation we have seen is the linear, constant-coefficient differential equation (LCCDE). These can be expressed as combinations of derivative and/or integration blocks.
\subsection*{First-Order System}
To illustrate this consider the first-order LCCDE
\[
\frac{dy}{dt}(t) + ay(t) = x(t)
\]
We can solve this for $y(t)$
\[
y(t) = -\frac{1}{a} \frac{dy}{dt}(t) + \frac{1}{a}x(t)
\]
and can express this as a feedback motif
\begin{center}
\begin{tikzpicture}[auto]
\node[block] at (4,-2) (block2) {$\frac{1}{a}\frac{d}{dt}$};
\node[shape=coordinate] at (6,-2) (input2) {};
\node [input, name=input] at (0,0) {};
\node [shape=coordinate, name=conn] at (6,0) {};
\node [sum, right of=input,node distance=2cm] (sum) {$\Sigma$};
\draw (sum) -- (conn);
\draw (conn) -- (input2);
\draw [->] (input2) -- (block2);
\draw [->] (block2) -| node[pos=0.95] {$-$} (sum);
\draw [draw,->] (input) -- node {$\frac{1}{a}$} (sum);
\node [left of=input, node distance=2em] {$x(t)$};
\node [output, right of=conn] (output) {};
\draw [->] (conn) -- (output);
\node [right of=output, node distance=2em] {$y(t)$};
\end{tikzpicture}
\end{center}
Alternatively we could integrate the differential equation
\begin{align*}
\frac{dy}{dt}(t) + ay(t) &= x(t)\\
\int\limits_{-\infty}^t \frac{dy}{dt}(\tau)\; d\tau + a\int\limits_{-\infty}^t y(\tau)\; d\tau &= \int\limits_{-\infty}^t x(\tau)\; d\tau\\
y(\tau) \Big|_{-\infty}^t + a\int\limits_{-\infty}^t y(\tau)\; d\tau &= \int\limits_{-\infty}^t x(\tau)\; d\tau\\
\end{align*}
Under the assumption $y(-\infty) = 0$ we can solve this for $y(t)$ to get
\[
y(t) = -a\int\limits_{-\infty}^t y(\tau)\; d\tau + \int\limits_{-\infty}^t x(\tau)\; d\tau
\]
which can be expressed as the block diagram
\begin{center}
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto]
\node [input, name=input] at (0,0) {};
\node[block] at (2,0) (block1) {$\int$};
\node[block] at (6,-1) (block2) {$\int$};
\node[shape=coordinate] at (6,-2) (input2) {};
\node [shape=coordinate, name=conn] at (6,0) {};
\node [shape=coordinate, name=conn2] at (4,-2) {};
\node [shape=coordinate, name=conn3] at (6,-2) {};
\node [sum, right of=block1,node distance=2cm] (sum) {$\Sigma$};
\node [output, right of=conn] (output) {};
\draw (sum) -- (conn);
\draw (conn) -- (block2);
\draw (block2) -- (conn3);
\draw (conn3) -- node {$a$} (conn2);
\draw [->] (conn2) -| node[pos=0.95] {$-$} (sum);
\draw [draw,->] (input) -- node {$x(t)$} (block1);
\draw [->] (block1) -- (sum);
\draw [->] (conn) -- node {$y(t)$} (output);
\end{tikzpicture}
\end{center}
We can simplify this block diagram, by noting
\begin{align*}
y(t) &= -a\int\limits_{-\infty}^t y(\tau)\; d\tau + \int\limits_{-\infty}^t
x(\tau)\; d\tau\\
&= \int\limits_{-\infty}^t \left(-a y(\tau) + x(\tau)\right)\; d\tau\\
\end{align*}
which requires only a single integrator
\begin{center}
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto]
\node [input, name=input] at (0,0) {};
\node[block] at (4,-1) (block2) {$\int$};
\node [shape=coordinate, name=conn] at (4,0) {};
\node [shape=coordinate, name=conn2] at (2,-2) {};
\node [shape=coordinate, name=conn3] at (4,-2) {};
\node [sum, right of=input,node distance=2cm] (sum) {$\Sigma$};
\node [output, right of=conn3] (output) {};
\draw (sum) -- (conn);
\draw (conn) -- (block2);
\draw (block2) -- (conn3);
\draw (conn3) -- node {$a$} (conn2);
\draw [->] (conn2) -| node[pos=0.95] {$-$} (sum);
\draw [draw,->] (input) -- node {$x(t)$} (sum);
\draw [->] (conn3) -- node[pos=1] {$y(t)$} (output);
\end{tikzpicture}
\end{center}
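To see that this loop computes the right output, it can be simulated with forward Euler and compared against the analytic impulse response $e^{-at}u(t)$ of the first-order system; the value of $a$, the step size, and the $1/dt$ impulse approximation below are test choices, not part of the derivation.

```python
# Forward-Euler simulation of the single-integrator feedback loop
#   y(t) = integral of [x(tau) - a*y(tau)] dtau
# driven by a unit impulse (approximated as height 1/dt on the first
# sample), compared against the analytic impulse response e^{-a t} u(t).
import math

a, dt, N = 2.0, 1e-3, 2000
y = 0.0
ys = []
for k in range(N):
    x = 1.0 / dt if k == 0 else 0.0   # approximate delta input
    y += dt * (x - a * y)             # summation feeding the integrator
    ys.append(y)

for n in (10, 500, 1999):
    t = (n + 1) * dt
    assert abs(ys[n] - math.exp(-a * t)) < 5e-3
```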
The choice between derivative and integrator blocks is not arbitrary in practice. Derivatives are sensitive to high-frequency noise (for reasons we will see later in the semester), so integrators perform much better when implemented in hardware.
\subsection*{Second-Order System}
Now consider the second-order system
\[
\frac{d^2y}{dt^2}(t) + a\frac{dy}{dt}(t) + by(t)= x(t)
\]
Using a similar process to the first-order system, we can express this as (dropping the limits of integration for clarity):
\[
y(t) = -a \int y(\tau)\; d\tau + \int\int \left( -by(\tau) + x(\tau) \right) \; d\tau^2
\]
which has the block diagram
\begin{center}
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto]
\node [input, name=input] at (0,0) {};
\node [block, right of=input,node distance=2cm] (block1) {$\int$};
\node [block, right of=block1,node distance=2cm] (block2) {$\int$};
\node [sum, right of=block2,node distance=2cm] (sum) {$\Sigma$};
\node [sum, below of=sum,node distance=2cm] (sum2) {$\Sigma$};
\node[block] at (8,-1) (block3) {$\int$};
\node[block] at (8,-3) (block4) {$\int$};
\node [shape=coordinate, name=conn1] at (8,0) {};
\node [shape=coordinate, name=conn2] at (8,-2) {};
\node [shape=coordinate, name=conn3] at (8,-4) {};
\node [shape=coordinate, name=conn4] at (6,-4) {};
\node [output, right of=conn1] (output) {};
\draw [->] (input) -- node {$x(t)$} (block1);
\draw [->] (block1) -- (block2);
\draw [->] (block2) -- (sum);
\draw (sum) -- (conn1);
\draw [->] (conn1) -- (block3);
\draw (block3) -- (conn2);
\draw [->] (conn2) -- (block4);
\draw [->] (conn2) -- node {$a$} (sum2);
\draw (block4) -- (conn3);
\draw (conn3) -- node {$b$} (conn4);
\draw [->] (conn3) -| (sum2);
\draw [->] (sum2) -- node[pos=0.95] {$-$} (sum);
\draw [->] (conn1) -- node[pos=1] {$y(t)$} (output);
\end{tikzpicture}
\end{center}
This is equivalent to two systems in series
\begin{center}
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto]
\node [input, name=input1] at (0,0) {};
\node [block, right of=input1,node distance=2cm] (block1) {$\int$};
\node [block, right of=block1,node distance=2cm] (block2) {$\int$};
\node [output, right of=block2] (output) {};
\draw [->] (input1) -- node {$x(t)$} (block1);
\draw [->] (block1) -- (block2);
\draw [->] (block2) -- node[pos=1] {$z(t)$} (output);
\node [input, name=input2] at (6,0) {};
\node [sum, right of=input2,node distance=2cm] (sum) {$\Sigma$};
\node [sum, below of=sum,node distance=2cm] (sum2) {$\Sigma$};
\node[block] at (10,-1) (block3) {$\int$};
\node[block] at (10,-3) (block4) {$\int$};
\node [shape=coordinate, name=conn1] at (10,0) {};
\node [shape=coordinate, name=conn2] at (10,-2) {};
\node [shape=coordinate, name=conn3] at (10,-4) {};
\node [shape=coordinate, name=conn4] at (8,-4) {};
\node [output, right of=conn1] (output) {};
\draw [->] (input2) -- node {$z(t)$} (sum);
\draw (sum) -- (conn1);
\draw [->] (conn1) -- (block3);
\draw (block3) -- (conn2);
\draw [->] (conn2) -- (block4);
\draw [->] (conn2) -- node {$a$} (sum2);
\draw (block4) -- (conn3);
\draw (conn3) -- node {$b$} (conn4);
\draw [->] (conn3) -| (sum2);
\draw [->] (sum2) -- node[pos=0.95] {$-$} (sum);
\draw [->] (conn1) -- node[pos=1] {$y(t)$} (output);
\end{tikzpicture}
\end{center}
Recall that, from the commutative property of convolution, the order of systems in series can be swapped
\begin{center}
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto]
\node [input, name=input] at (0,0) {};
\node [sum, right of=input,node distance=2cm] (sum) {$\Sigma$};
\node [sum, below of=sum,node distance=2cm] (sum2) {$\Sigma$};
\node[block] at (4,-1) (block3) {$\int$};
\node[block] at (4,-3) (block4) {$\int$};
\node [shape=coordinate, name=conn1] at (4,0) {};
\node [shape=coordinate, name=conn2] at (4,-2) {};
\node [shape=coordinate, name=conn3] at (4,-4) {};
\node [shape=coordinate, name=conn4] at (2,-4) {};
\node [output, right of=conn1] (output) {};
\draw [->] (input) -- node {$x(t)$} (sum);
\draw (sum) -- (conn1);
\draw [->] (conn1) -- (block3);
\draw (block3) -- (conn2);
\draw [->] (conn2) -- (block4);
\draw [->] (conn2) -- node {$a$} (sum2);
\draw (block4) -- (conn3);
\draw (conn3) -- node {$b$} (conn4);
\draw [->] (conn3) -| (sum2);
\draw [->] (sum2) -- node[pos=0.95] {$-$} (sum);
\draw [->] (conn1) -- node[pos=1] {$z(t)$} (output);
\node [input, name=input] at (6,0) {};
\node [block, right of=input,node distance=2cm] (block1) {$\int$};
\node [block, right of=block1,node distance=2cm] (block2) {$\int$};
\node [output, right of=block2] (output) {};
\draw [->] (input) -- node {$z(t)$} (block1);
\draw [->] (block1) -- (block2);
\draw [->] (block2) -- node[pos=1] {$y(t)$} (output);
\end{tikzpicture}
\end{center}
We then note that the signal $z$ and the outputs of the integrator blocks are the same in both systems, so they can be combined into a single block diagram as follows, reducing the number of integrators by two
\begin{center}
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto]
\node [input, name=input] at (0,0) {};
\node [sum, right of=input,node distance=2cm] (sum) {$\Sigma$};
\node [sum, below of=sum,node distance=2cm] (sum2) {$\Sigma$};
\node[block] at (4,-1) (block3) {$\int$};
\node[block] at (4,-3) (block4) {$\int$};
\node [shape=coordinate, name=conn1] at (4,0) {};
\node [shape=coordinate, name=conn2] at (4,-2) {};
\node [shape=coordinate, name=conn3] at (4,-4) {};
\node [shape=coordinate, name=conn4] at (2,-4) {};
\node [output, right of=conn3] (output) {};
\draw [->] (input) -- node {$x(t)$} (sum);
\draw (sum) -- (conn1);
\draw [->] (conn1) -- (block3);
\draw (block3) -- (conn2);
\draw [->] (conn2) -- (block4);
\draw [->] (conn2) -- node {$a$} (sum2);
\draw (block4) -- (conn3);
\draw (conn3) -- node {$b$} (conn4);
\draw [->] (conn3) -| (sum2);
\draw [->] (sum2) -- node[pos=0.95] {$-$} (sum);
\draw [->] (conn3) -- node[pos=1] {$y(t)$} (output);
\end{tikzpicture}
\end{center}
\section{Implementing a System in Hardware}
One of the most powerful uses of block diagrams is the implementation of a CT system in hardware. As we shall see later in the semester, designing CT systems for a particular purpose leads to a mathematical description that is equivalent to either an impulse response or an LCCDE. We have seen how these can be represented as block diagrams. Once we have reduced a system to blocks consisting of simple operations, we can convert the block diagram to a circuit.
\begin{tabular}{cc}
Block & Typical Circuit\\
\hline
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto]
\node [input, name=input] at (0,0) {};
\node [shape=coordinate, name=signal1] at (1,0) {};
\node [shape=coordinate, name=signal2] at (2,0) {};
\node [output, right of=signal2] (output) {};
\draw (input) -- node {$x(t)$} (signal1);
\draw (signal1) -- node {$a < 0$} (signal2);
\draw [->] (signal2) -- node[pos=1] {$y(t)$} (output);
\end{tikzpicture}
&
\begin{circuitikz}[american voltages,scale=0.8, every node/.style={transform shape}]
\draw
(5,3.5) node[op amp] (opamp1) {}
(0,4) to[R,l=$R_1$,o-] (4,4)
(4,4) to[short] (opamp1.-)
(opamp1.+) to[short] (3.8,2)
(0,2) to[short,o-o] (8,2)
(opamp1.out) to[short] (6.2,5)
(3.5,5) to[R,l=$R_2$] (6.2,5)
(3.5,4) to[short] (3.5,5)
(opamp1.out) to[short,-o] (8,3.5)
(0,4) to[open, v=$x(t)$] (0,2)
(8,3.5) to[open, v=$y(t)$] (8,2);
\end{circuitikz}
\\[2em]
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto]
\node [input, name=input] at (0,0) {};
\node [shape=coordinate, name=signal1] at (1,0) {};
\node [shape=coordinate, name=signal2] at (2,0) {};
\node [output, right of=signal2] (output) {};
\draw (input) -- node {$x(t)$} (signal1);
\draw (signal1) -- node {$a > 1$} (signal2);
\draw [->] (signal2) -- node[pos=1] {$y(t)$} (output);
\end{tikzpicture}
&
\begin{circuitikz}[american voltages,scale=0.8, every node/.style={transform shape}]
\draw
(7,3.5) node[op amp] (opamp1) {}
(4,0) to[short,o-o] (12,0)
(4,4) to[short,o-] (opamp1.-)
(opamp1.+) to[short] (5.8,1.75)
(5.8,1.75) to[short] (8.2,1.75)
(opamp1.out) to[R, l=$R_1$] (8.2,1.75)
(8.2,1.75) to[R, l=$R_2$] (8.2,0)
(opamp1.out) to[short, -o] (12,3.5)
(4,4) to[open, v=$x(t)$] (4,0)
(12,3.5) to[open, v=$y(t)$] (12,0);
\end{circuitikz}
\\[2em]
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto]
\node [input, name=input1] at (0,0) {};
\node [input, name=input2] at (0,-1) {};
\node [sum] at (2,0) (sum1) {$\Sigma$};
\node [output, right of=sum1] (output) {};
\draw [->] (input1) -- node[pos=0] {$x_1(t)$} (sum1);
\draw [->] (input2) -| node[pos=0] {$x_2(t)$} (sum1);
\draw [->] (sum1) -- node[pos=1] {$y(t)$} (output);
\end{tikzpicture}
&
\begin{circuitikz}[american voltages,scale=0.8, every node/.style={transform shape}]
\draw
(9,3.5) node[op amp] (opamp1) {}
(2,0) to[short,o-o] (12,0)
(2,4) to[short,o-] (5,4)
(4.5,2) to[short,o-] (5,2)
(5,4) to[R, l=$R$] (7,4)
(5,2) to[R, l=$R$] (7,2)
(7,2) to[short] (7,4)
(7,4) to[short] (opamp1.-)
(opamp1.+) to[short] (7.8,1.75)
(7.8,1.75) to[short] (10.2,1.75)
(opamp1.out) to[short] (10.2,1.75)
(opamp1.out) to[short, -o] (12,3.5)
(2,4) to[open, v=$x_1(t)$] (2,0)
(4.5,2) to[open, v=$x_2(t)$] (4.5,0)
(12,3.5) to[open, v=$y(t)$] (12,0);
\end{circuitikz}
\\[2em]
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto]
\node [input, name=input] at (0,0) {};
\node[block] at (2,0) (block1) {$-\int$};
\node [output, right of=block1] (output) {};
\draw [->] (input) -- node {$x(t)$} (block1);
\draw [->] (block1) -- node[pos=1] {$y(t)$} (output);
\end{tikzpicture}
&
\begin{circuitikz}[american voltages,scale=0.8, every node/.style={transform shape}]
\draw
(5,3.5) node[op amp] (opamp1) {}
(0,4) to[R,l=$R$,o-] (4,4)
(4,4) to[short] (opamp1.-)
(opamp1.+) to[short] (3.8,2)
(0,2) to[short,o-o] (8,2)
(opamp1.out) to[short] (6.2,5)
(3.5,5) to[C,l=$C$] (6.2,5)
(3.5,4) to[short] (3.5,5)
(opamp1.out) to[short,-o] (8,3.5)
(0,4) to[open, v=$x(t)$] (0,2)
(8,3.5) to[open, v=$y(t)$] (8,2);
\end{circuitikz}\\
\hline
\end{tabular}
\newpage
\section*{Solved Problems}
\begin{enumerate}
\item Consider a system with the following block diagram:
\begin{center}
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
\node [input, name=input] at (0,0) {};
\node [block] at (4.5,0) (block1) {$\int$};
\node [sum] at (2,0) (sum) {$\Sigma$};
\node [output, name=feedback] at (6,0) {};
\node [output, name=feedback2] at (6,1) {};
\node [output, name=output] at (8,-5) {};
\node [block] at (6,-2) (block2) {$\int$};
\node [output, name=output2] at (6,-3) {};
\node [block] at (6,-4) (block3) {$\int$};
\node [output, name=output3] at (6,-5) {};
\draw [->] (input) -- (sum);
\draw [->] (sum) -- (block1);
\draw (block1) -- (feedback);
\draw (feedback) -- (feedback2);
\draw [->] (feedback2) -| node[pos=0.95] {$-$} (sum);
\draw [->] (output3) -- node {$b$} (output);
\draw [->] (feedback) -- (block2);
\draw [->] (block2) -- (block3);
\draw [->] (output2) -| node[pos=0.95] {$-$} (sum);
\draw (block3) -- (output3);
\draw node at (-0.5,0) {$x(t)$};
\draw node at (8.5,-5) {$y(t)$};
\draw node at (4,-2.75) {$a$};
\end{tikzpicture}
\end{center}
Determine the differential equation representation of this system.\\[1em]
\textbf{Solution:} We can convert this back to a differential equation representation as follows. First label the output of each block as a signal (called the internal states of the system), which we denote as $u(t)$, $v(t)$, $w(t)$, and $z(t)$ below.
\begin{center}
\tikzstyle{block} = [draw, fill=gray!20, rectangle,
minimum height=2em, minimum width=2em]
\tikzstyle{sum} = [draw, fill=gray!20, circle, node distance=1cm]
\tikzstyle{input} = [coordinate]
\tikzstyle{output} = [coordinate]
\tikzstyle{pinstyle} = [pin edge={to-,thin,black}]
\begin{tikzpicture}[auto, node distance=2cm,>=latex',scale=1, every node/.style={transform shape}]
\node [input, name=input] at (0,0) {};
\node [block] at (4.5,0) (block1) {$\int$};
\node [sum] at (2,0) (sum) {$\Sigma$};
\node [output, name=feedback] at (6,0) {};
\node [output, name=feedback2] at (6,1) {};
\node [output, name=output] at (8,-5) {};
\node [block] at (6,-2) (block2) {$\int$};
\node [output, name=output2] at (6,-3) {};
\node [block] at (6,-4) (block3) {$\int$};
\node [output, name=output3] at (6,-5) {};
\draw [->] (input) -- (sum);
\draw [->] (sum) -- (block1);
\draw (block1) -- (feedback);
\draw (feedback) -- (feedback2);
\draw [->] (feedback2) -| node[pos=0.95] {$-$} (sum);
\draw [->] (output3) -- node {$b$} (output);
\draw [->] (feedback) -- (block2);
\draw [->] (block2) -- (block3);
\draw [->] (output2) -| node[pos=0.95] {$-$} (sum);
\draw (block3) -- (output3);
\draw node at (-0.5,0) {$x(t)$};
\draw node at (8.5,-5) {$y(t)$};
\draw node at (4,-2.75) {$a$};
\draw node at (6.5,0) {$u(t)$};
\draw node at (6.5,-3) {$v(t)$};
\draw node at (3,-0.3) {$w(t)$};
\draw node at (5.5,-5) {$z(t)$};
\end{tikzpicture}
\end{center}
Now we can read off the input-output relationships moving from input to output. Starting with the output of the summation
\[
w(t) = x(t) - u(t) -a\,v(t) \; .
\]
The outputs of each integrator are:
\[
u(t) = \int\limits_{-\infty}^t w(\tau) \; d\tau\;, \;
v(t) = \int\limits_{-\infty}^t u(\tau) \; d\tau\;, \mbox{ and }\;
z(t) = \int\limits_{-\infty}^t v(\tau) \; d\tau
\]
or equivalently
\[
\frac{du}{dt}(t) = w(t)\;,\; \frac{dv}{dt}(t) = u(t)\; ,\; \mbox{ and }\; \frac{dz}{dt}(t) = v(t)
\]
Finally, the output is:
\[
y(t) = b\, z(t)\; .
\]
We now do a series of derivatives and substitutions
\begin{align*}
y(t) &= b\, z(t)\\
\frac{dy}{dt}(t) &= b\, \frac{dz}{dt}(t)\\
&= b\, v(t)\\
\frac{d^2y}{dt^2}(t) &= b\, \frac{dv}{dt}(t)\\
&= b\, u(t)\\
\frac{d^3y}{dt^3}(t) &= b\, \frac{du}{dt}(t)\\
&= b\, w(t)\\
&= b\left( x(t) - u(t) -a\,v(t)\right)
\end{align*}
Rearranging the last equation to isolate the input on the right hand side gives
\[
\frac{d^3y}{dt^3}(t) + b\,u(t) +ab\,v(t) = b\,x(t)\; \mbox{ (Eqn.~1)}
\]
We can now note from above
\[
u(t) = \frac{dv}{dt}(t) = \frac{d^2z}{dt^2}(t) = \frac{1}{b} \frac{d^2y}{dt^2}(t) \mbox{ and }
\]
\[
v(t) = \frac{dz}{dt}(t) = \frac{1}{b} \frac{dy}{dt}(t)\; .
\]
Substituting these back into Eqn.~1 gives
\[
\frac{d^3y}{dt^3}(t) + \frac{d^2y}{dt^2}(t) +a\,\frac{dy}{dt}(t) = b\,x(t)
\]
which is an LCCDE.\\
$\blacksquare$
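As a cross-check of this solution, we can simulate the block diagram's internal states $(u, v, z)$ and, separately, the derived LCCDE in state form, then confirm that the two outputs agree. The values $a = 0.5$, $b = 2.0$, the step size, and the sinusoidal input are arbitrary test choices.

```python
# Cross-check of the solved problem: simulate the block-diagram states
# (u, v, z) and, in parallel, the derived LCCDE
#   y''' + y'' + a y' = b x
# written in state form (y1, y2, y3) = (y, y', y''), using forward Euler
# and the same input. The two simulated outputs should coincide.
import math

a, b, dt, N = 0.5, 2.0, 1e-3, 2000

u = v = z = 0.0          # block-diagram integrator outputs
y1 = y2 = y3 = 0.0       # LCCDE states: y, y', y''
for k in range(N):
    x = math.sin(0.01 * k)            # arbitrary input signal
    # block diagram: w = x - u - a v, then three integrators in series
    w = x - u - a * v
    u, v, z = u + dt * w, v + dt * u, z + dt * v
    # LCCDE in state form: y3' = b x - y3 - a y2
    d3 = b * x - y3 - a * y2
    y1, y2, y3 = y1 + dt * y2, y2 + dt * y3, y3 + dt * d3

# output of the block diagram is y = b z; it must match the LCCDE output
assert abs(b * z - y1) < 1e-9
```

The agreement follows because the substitutions $y = b\,z$, $y' = b\,v$, $y'' = b\,u$ turn one set of update equations into the other.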
\end{enumerate}
\subsection{Objects}
\index{objects}
\subsubsection{Spheres}
\index{objects!spheres}
Spheres are the simplest object supported by \RAY\ and they are
also the fastest object to render. Spheres are defined as one would expect,
with a {\bf CENTER}, {\bf RAD} (radius), and a texture. The texture may
be defined along with the object as discussed earlier, or it may be declared
and assigned a name.
Here's a sphere definition using a previously defined "NitrogenAtom" texture:
\begin{verbatim}
SPHERE CENTER 26.4 27.4 -2.4 RAD 1.0 NitrogenAtom
\end{verbatim}
A sphere with an inline texture definition is declared like this:
\begin{verbatim}
Sphere center 1.0 0.0 10.0
Rad 1.0
Texture Ambient 0.2 Diffuse 0.8 Specular 0.0 Opacity 1.0
Color 1.0 0.0 0.5
TexFunc 0
\end{verbatim}
Notice that in this example I used mixed case for the keywords; this is allowable.
Review the section on textures if the texture definitions are confusing.
\subsubsection{Triangles}
\index{objects!triangles}
Triangles are also fairly simple objects, constructed by listing the
three vertices of the triangle and its texture. The order of the
vertices isn't important; the triangle object is "double sided", so the
surface normal always points back in the direction of the incident ray.
The triangle vertices are listed as {\bf V0}, {\bf V1}, and {\bf V2}, each of
which is an X, Y, Z coordinate. An example of a triangle is shown below:
\begin{verbatim}
TRI
V0 0.0 -4.0 12.0
V1 4.0 -4.0 8.0
V2 -4.0 -4.0 8.0
TEXTURE
AMBIENT 0.1 DIFFUSE 0.2 SPECULAR 0.7 OPACITY 1.0
COLOR 1.0 1.0 1.0
TEXFUNC 0
\end{verbatim}
\subsubsection{Smoothed Triangles}
\index{objects!smoothed triangles}
Smoothed triangles are just like regular triangles, except that the
surface normal at each of the three vertices is used to determine the
surface normal across the triangle by linear interpolation.
Smoothed triangles yield curved-looking objects and have nice
reflections.
\begin{verbatim}
STRI
V0 1.4 0.0 2.4
V1 1.35 -0.37 2.4
V2 1.36 -0.32 2.45
N0 -0.9 -0.0 -0.4
N1 -0.8 0.23 -0.4
N2 -0.9 0.27 -0.15
TEXTURE
AMBIENT 0.1 DIFFUSE 0.2 SPECULAR 0.7 OPACITY 1.0
COLOR 1.0 1.0 1.0
TEXFUNC 0
\end{verbatim}
\subsubsection{Infinite Planes}
\index{objects!planes}
Useful for things like desert floors, backgrounds, skies, etc., the infinite
plane is pretty easy to use. An infinite plane consists of only two pieces
of information: the {\bf CENTER} of the plane, and a {\bf NORMAL} to the plane.
The center of the plane is just any point on the plane, such that the point
combined with the surface normal defines the equation for the plane.
As with triangles, planes are double sided. Here is an example of an
infinite plane:
\begin{verbatim}
PLANE
CENTER 0.0 -5.0 0.0
NORMAL 0.0 1.0 0.0
TEXTURE
AMBIENT 0.1 DIFFUSE 0.9 SPECULAR 0.0 OPACITY 1.0
COLOR 1.0 1.0 1.0
TEXFUNC 1
CENTER 0.0 -5.0 0.0
ROTATE 0. 0.0 0.0
SCALE 1.0 1.0 1.0
\end{verbatim}
\subsubsection{Rings}
\index{objects!rings}
Rings are a simple object; they are really a not-so-infinite plane.
A ring is simply an infinite plane cut into a washer-shaped ring, infinitely
thin just like a plane. A ring only requires two more pieces of information
than an infinite plane does: an inner and an outer radius. Here's an example
of a ring:
\begin{verbatim}
Ring
Center 1.0 1.0 1.0
Normal 0.0 1.0 0.0
Inner 1.0
Outer 5.0
MyNewRedTexture
\end{verbatim}
\subsubsection{Infinite Cylinders}
\index{objects!infinite cylinders}
Infinite cylinders are quite simple. They are defined by a center, an
axis, and a radius. An example of an infinite cylinder is:
\begin{verbatim}
Cylinder
Center 0.0 0.0 0.0
Axis 0.0 1.0 0.0
Rad 1.0
SomeRandomTexture
\end{verbatim}
\subsubsection{Finite Cylinders}
\index{objects!finite cylinders}
Finite cylinders are almost the same as infinite ones, but the
center and length of the axis determine the extents of the cylinder.
The finite cylinder is also really a shell; it doesn't have any
caps. If you need to close off the ends of the cylinder, use two
ring objects, with the inner radius set to 0.0 and the normal set
to be the axis of the cylinder. Finite cylinders are built this
way to enhance speed.
\begin{verbatim}
FCylinder
Center 0.0 0.0 0.0
Axis 0.0 9.0 0.0
Rad 1.0
SomeRandomTexture
\end{verbatim}
This defines a finite cylinder with radius 1.0, going from 0.0 0.0 0.0, to
0.0 9.0 0.0 along the Y axis. The main difference between an infinite cylinder
and a finite cylinder is in the interpretation of the {\bf AXIS} parameter.
In the case of the infinite cylinder, the length of the axis vector is
ignored. In the case of the finite cylinder, the axis parameter is used
to determine the length of the overall cylinder.
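For example, the cylinder above could be closed off with two rings like this
(reusing the same placeholder texture name):
\begin{verbatim}
FCylinder
Center 0.0 0.0 0.0
Axis 0.0 9.0 0.0
Rad 1.0
SomeRandomTexture
Ring
Center 0.0 0.0 0.0
Normal 0.0 1.0 0.0
Inner 0.0
Outer 1.0
SomeRandomTexture
Ring
Center 0.0 9.0 0.0
Normal 0.0 1.0 0.0
Inner 0.0
Outer 1.0
SomeRandomTexture
\end{verbatim}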
\subsubsection{Axis Aligned Boxes}
\index{objects!axis-aligned boxes}
Axis aligned boxes are fast, but of limited usefulness. As such, I'm
not going to waste much time explaining 'em. An axis aligned box is
defined by a {\bf MIN} point, and a {\bf MAX} point. The volume between
the min and max points is the box. Here's a simple box:
\begin{verbatim}
BOX
MIN -1.0 -1.0 -1.0
MAX 1.0 1.0 1.0
Boxtexture1
\end{verbatim}
\subsubsection{Fractal Landscapes}
\index{objects!fractal landscapes}
Currently fractal landscapes are a built-in function. In the near future
I'll allow the user to load an image map for use as a heightfield.
Fractal landscapes are currently forced to be axis aligned. Any suggestions
on how to make them more appealing to users are welcome. A fractal landscape
is defined by its "resolution", which is the number of grid points along
each axis, and by its scale and center. The "scale" is how large the
landscape is along the X and Y axes in world coordinates. Here's a simple
landscape:
\begin{verbatim}
SCAPE
RES 30 30
SCALE 80.0 80.0
CENTER 0.0 -4.0 20.0
TEXTURE
AMBIENT 0.1 DIFFUSE 0.9 SPECULAR 0.0 OPACITY 1.0
COLOR 1.0 1.0 1.0
TEXFUNC 0
\end{verbatim}
The example above generates a square landscape made of 1,800 triangles.
When time permits, the heightfield code will be rewritten to be more
general and to increase rendering speed.
\subsubsection{Arbitrary Quadric Surfaces}
\index{objects!arbitrary quadrics}
Docs soon. I need to add these into the parser, must have forgotten
before ;-)
\subsubsection{Volume Rendered Scalar Voxels}
\index{objects!grids of scalar voxels}
These are a little trickier than the average object :-)
These are likely to change substantially in the very near future so I'm not
going to get too detailed yet.
A volume rendered data set is described by its axis aligned bounding box, and
its resolution along each axis. The final parameter is the voxel data
file. If you are seriously interested in messing with these, get hold of
me and I'll give you more info. Here's a quick example:
\begin{verbatim}
SCALARVOL
MIN -1.0 -1.0 -0.4
MAX 1.0 1.0 0.4
DIM 256 256 100
FILE /cfs/johns/vol/engine.256x256x110
TEXTURE
AMBIENT 1.0 DIFFUSE 0.0 SPECULAR 0.0 OPACITY 8.1
COLOR 1.0 1.0 1.0
TEXFUNC 0
\end{verbatim}
\chapter{\Pelectron \Pmuon scattering}
This time the scattering between electrons and muons will be considered with spin taken into account. This calculation can be easily extended to similar scattering situations.
\section{Electron in an EM field}
As in Section \ref{sec:EMdynamics} we replace $p^\mu \rightarrow p^\mu + eA^\mu$. Then the components transform as
\begin{align}
E \rightarrow & E + eV \\
\vec{p} \rightarrow & \vec{p} + e\vec{A}
\end{align}
Starting from the free particle Dirac equation, $(\vec{\alpha}\cdot\vec{p} + \beta m)\psi = E\psi$, after the substitution we have
\begin{equation}
\left( \vec{\alpha}\cdot\vec{p} + \beta m + e\left[ \vec{\alpha}\cdot\vec{A} - VI_4 \right] \right) \psi = E\psi
\end{equation}
and we may identify $V_D$, the Dirac potential, to be $e\left( \vec{\alpha}\cdot\vec{A} - VI_4 \right)$.
\section{Current-potential formulation}
Consider the scattering of a particle with wavefunction $\psi_i$ off a potential $V_D$ to wavefunction $\psi_f$. The amplitude is given by
\begin{align}
T_{fi} &= -i \int \psi_f^\dagger \, V_D \, \psi_i \, \dd[4]{x} \\
&= -ie \int \psi_f^\dagger \, \gamma^0 \, \gamma^0 \left( -VI_4 + \alpha^k A_k \right) \psi_i \, \dd[4]{x} \nonumber \\
&= -ie \int \, \overline{\psi}_f \left( -\gamma^0 V + \gamma^k A_k \right) \psi_i\dd[4]{x} \nonumber \\
&= ie \int \overline{\psi}_f \, \gamma^\mu A_\mu \psi_i \, \dd[4]{x}.
\end{align}
Recall that in the current-potential formulation, the scattering amplitude is given by
\begin{equation}
T_{fi} = -i \int j^\mu_{fi} \, A_\mu \, \dd[4]{x}
\end{equation}
so we identify the current
\begin{equation}
j^\mu_{fi} = -e \, \overline{\psi}_f \, \gamma^\mu \, \psi_i = -e \, \overline{u}_f \, \gamma^\mu \, u_i \, e^{i(p_f - p_i)x}
\end{equation}
where the second equality comes from the plane wavefunction.
\section{Scattering amplitude}
Now we can consider the full electron-muon scattering process.
\begin{figure}[th]
\centering
\include{figures/DiracEMuScatter}
\caption{Elastic scattering of spin-$\frac{1}{2}$ electrons and muons. This is a $t$-channel process. \label{fig:DiracEMuScatter}}
\end{figure}
Using the currents $j_1$, $j_2$ and propagator, the transition amplitude is
\begin{align}
T_{fi} &= -i \int j_\mu^1 \, \frac{-1}{q^2} \, j^\mu_2 \, \dd[4]{x} \\
&= -i \int \left( -e \, \overline{u}(k^\prime) \, \gamma_\mu \, u(k) \, e^{i(k^\prime - k)x} \right) \frac{-1}{q^2} \left( -e \, \overline{u}(p^\prime) \, \gamma^\mu \, u(p) \, e^{i(p^\prime - p)x} \right) \, \dd[4]{x} \nonumber \\
&= \frac{ie^2}{q^2} \int \left[ \overline{u}(k^\prime) \, \gamma_\mu \, u(k) \right]\left[ \overline{u}(p^\prime) \, \gamma^\mu \, u(p) \right] \, e^{i(k^\prime + p^\prime - k - p)x} \, \dd[4]{x}
\end{align}
As before, in calculating $\abs{T_{fi}}^2$, one exponential term becomes the phase space factor and the other becomes the 4-space volume. The result is
\begin{equation}
\abs{T_{fi}}^2 = \frac{e^4}{q^4} \, \left[ \overline{u}(k^\prime) \, \gamma_\mu \, u(k) \right]^\dagger \left[ \overline{u}(p^\prime) \, \gamma^\mu \, u(p) \right]^\dagger \left[ \overline{u}(k^\prime) \, \gamma_\nu \, u(k) \right]\left[ \overline{u}(p^\prime) \, \gamma^\nu \, u(p) \right]
\end{equation}
Evaluating the Hermitian conjugate (recall that $(\gamma^0)^\dagger = \gamma^0$, $(\gamma^k)^\dagger = -\gamma^k$, and $\gamma^0\gamma^k = -\gamma^k \gamma^0$),
\begin{align}
\left[ \overline{u}(p^\prime) \, \gamma^0 \, u(p) \right]^\dagger &= \left[ u^\dagger(p^\prime) \, \gamma^0 \, \gamma^0 \, u(p) \right]^\dagger \nonumber \\
&= u^\dagger(p) \, \gamma^0 \, \gamma^0 \, u(p^\prime) \nonumber \\
&= \overline{u}(p) \, \gamma^0 \, u(p^\prime)
\end{align}
\begin{align}
\left[ \overline{u}(p^\prime) \, \gamma^k \, u(p) \right]^\dagger &= \left[ u^\dagger(p^\prime) \, \gamma^0 \, \gamma^k \, u(p) \right]^\dagger \nonumber \\
&= -u^\dagger(p) \, \gamma^k \, \gamma^0 \, u(p^\prime) \nonumber \\
&= u^\dagger(p) \, \gamma^0 \, \gamma^k \, u(p^\prime) \nonumber \\
&= \overline{u}(p) \, \gamma^k \, u(p^\prime)
\end{align}
So $\left[ \overline{u}(p^\prime) \, \gamma^\mu \, u(p) \right]^\dagger = \overline{u}(p) \, \gamma^\mu \, u(p^\prime)$ and similarly $\left[ \overline{u}(k^\prime) \, \gamma_\mu \, u(k) \right]^\dagger = \overline{u}(k) \, \gamma_\mu \, u(k^\prime)$.
Now the transition amplitude is
\begin{align}
\abs{T_{fi}}^2 &= \frac{e^4}{q^4} \, \left[ \overline{u}(k) \, \gamma_\mu \, u(k^\prime) \right] \left[ \overline{u}(p) \, \gamma^\mu \, u(p^\prime) \right] \left[ \overline{u}(k^\prime) \, \gamma_\nu \, u(k) \right]\left[ \overline{u}(p^\prime) \, \gamma^\nu \, u(p) \right] \nonumber \\
&= \frac{e^4}{q^4} \, ^e\!L_{\mu\nu} \, ^\mu\!L^{\mu\nu}
\end{align}
where
\begin{equation}
^e\!L_{\mu\nu} = \left[ \overline{u}(k) \, \gamma_\mu \, u(k^\prime) \right]\left[ \overline{u}(k^\prime) \, \gamma_\nu \, u(k) \right]
\end{equation}
is the electron tensor, and
\begin{equation}
^\mu\!L^{\mu\nu} = \left[ \overline{u}(p) \, \gamma^\mu \, u(p^\prime) \right]\left[ \overline{u}(p^\prime) \, \gamma^\nu \, u(p) \right]
\end{equation}
is the muon tensor.
\subsection{Sum over spins}
For the full process we must sum over the final spins and average over the initial spins. The electron tensor then becomes
\begin{equation}
^e\!L_{\mu\nu} = \frac{1}{2} \sum_S \sum_{S^\prime} \left[ \overline{u}(k) \, \gamma_\mu \, u(k^\prime) \right]\left[ \overline{u}(k^\prime) \, \gamma_\nu \, u(k) \right].
\end{equation}
Writing out the matrix indices,
\begin{align}
^e\!L_{\mu\nu} &= \frac{1}{2}\sum_S \sum_{S^\prime} \overline{u}(k)_\alpha \, \gamma^{\alpha\beta}_\mu \, u(k^\prime)_\beta \, \overline{u}(k^\prime)_\epsilon \, \gamma_\nu^{\epsilon\sigma} \, u(k)_\sigma \nonumber \\
&= \frac{1}{2} \sum_S \sum_{S^\prime} u(k^\prime)_\beta \, \overline{u}(k^\prime)_\epsilon \, \gamma_\nu^{\epsilon\sigma} \, u(k)_\sigma \, \overline{u}(k)_\alpha \, \gamma_\mu^{\alpha\beta} \nonumber \\
&= \frac{1}{2} \left( \fsl{k^\prime} + m \right)_{\beta\epsilon} \, \gamma_\nu^{\epsilon\sigma} \, \left( \fsl{k} + m \right)_{\sigma\alpha} \, \gamma_\mu^{\alpha\beta}
\end{align}
where we have used the completeness relation, \eqref{eq:completeness}, in the last step. This may be written as a trace, such that
\begin{align}
^e\!L_{\mu\nu} &= \frac{1}{2} \Tr[(\fsl{k}^\prime + m) \, \gamma_\nu \, (\fsl{k} + m) \, \gamma_\mu] \\
^\mu\!L^{\mu\nu} &= \frac{1}{2} \Tr[(\fsl{p}^\prime + M) \, \gamma^\nu \, (\fsl{p} + M) \, \gamma^\mu].
\end{align}
where the last equation follows from an identical calculation with the muon tensor, and $m$ and $M$ are the electron and muon masses, respectively.
Now the scattering probability is given by
\begin{equation}
\abs{T_{fi}}^2 = \frac{e^4}{q^4} \, \frac{1}{2} \Tr[(\fsl{k}^\prime + m) \, \gamma_\nu \, (\fsl{k} + m) \, \gamma_\mu] \, \frac{1}{2} \Tr[(\fsl{p}^\prime + M) \, \gamma^\nu \, (\fsl{p} + M) \, \gamma^\mu].
\end{equation}
Using the trace theorems from Section \ref{sec:Trace}, the only non-zero terms are those with two or four $\gamma$-matrices:
\begin{align}
\abs{T_{fi}}^2 &= \frac{e^4}{4q^4} \, \Tr[\gamma_\alpha \, \gamma_\nu \, \gamma_\beta \, \gamma_\mu \, k^{\prime\alpha} \, k^\beta + \gamma_\nu\, \gamma_\mu \, m^2] \, \Tr[\gamma^\alpha \, \gamma^\nu \, \gamma^\beta \, \gamma^\mu \, p^\prime_\alpha \, p_\beta + \gamma^\nu\, \gamma^\mu \, M^2] \nonumber \\
&= \frac{4e^4}{q^4} \, \left[ \left( g_{\alpha\nu} \, g_{\beta\mu} - g_{\alpha\beta} \, g_{\nu\mu} + g_{\alpha\mu} \, g_{\nu\beta} \right) k^{\prime\alpha} \, k^\beta + g_{\nu\mu} m^2 \right]\left[ \left( g^{\alpha\nu} \, g^{\beta\mu} - g^{\alpha\beta} \, g^{\nu\mu} + g^{\alpha\mu} \, g^{\nu\beta} \right) p^\prime_\alpha \, p_\beta + g^{\nu\mu} M^2 \right] \nonumber \\
&= \frac{8e^4}{q^4} \left[ (k^\prime \cdot p^\prime)(k \cdot p) + (k^\prime \cdot p)(k \cdot p^\prime) - m^2(p^\prime \cdot p) - M^2(k^\prime \cdot k) + 2m^2M^2 \right]
\end{align}
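As an independent sanity check of the trace theorems used in this step, the two identities can be verified numerically with explicit $4\times4$ Dirac matrices. The pure-Python sketch below (matrices in the Dirac representation) is our own addition, not part of the text:

```python
# Numerical check of the trace identities used above, with explicit 4x4
# gamma matrices in the Dirac representation.
G = [1.0, -1.0, -1.0, -1.0]  # metric signature (+,-,-,-), diagonal entries

I2 = [[1, 0], [0, 1]]
SIGMA = [  # Pauli matrices
    [[0, 1], [1, 0]],
    [[0, -1j], [1j, 0]],
    [[1, 0], [0, -1]],
]
Z2 = [[0, 0], [0, 0]]

def block(a, b, c, d):
    """Assemble a 4x4 matrix from four 2x2 blocks [[a, b], [c, d]]."""
    return [a[i] + b[i] for i in range(2)] + [c[i] + d[i] for i in range(2)]

neg = lambda m: [[-x for x in row] for row in m]

# gamma^0 = diag(I, -I), gamma^k = [[0, sigma^k], [-sigma^k, 0]]
GAMMA = [block(I2, Z2, Z2, neg(I2))] + [block(Z2, s, neg(s), Z2) for s in SIGMA]

def mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def tr(m):
    return sum(m[i][i] for i in range(4))

def g(i, j):
    return G[i] if i == j else 0

# Tr[gamma^a gamma^b] = 4 g^{ab}
for a in range(4):
    for b in range(4):
        expect = 4 * G[a] if a == b else 0
        assert abs(tr(mul(GAMMA[a], GAMMA[b])) - expect) < 1e-12

# Tr[gamma^a gamma^n gamma^b gamma^m]
#   = 4 (g^{an} g^{bm} - g^{ab} g^{nm} + g^{am} g^{nb})
for a in range(4):
    for n in range(4):
        for b in range(4):
            for m in range(4):
                lhs = tr(mul(mul(GAMMA[a], GAMMA[n]), mul(GAMMA[b], GAMMA[m])))
                rhs = 4 * (g(a, n) * g(b, m) - g(a, b) * g(n, m)
                           + g(a, m) * g(n, b))
                assert abs(lhs - rhs) < 1e-12
print("trace identities verified")
```

Both identities hold exactly in any representation of the Clifford algebra; the explicit Dirac basis is used here only because it is easy to write down.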
\section{Differential cross section}
In the ultrarelativistic limit the masses become negligible and the scattering probability simplifies to
\begin{equation}
\abs{T_{fi}}^2 = \frac{8e^4}{q^4} \left[ (k^\prime \cdot p^\prime)(k \cdot p) + (k^\prime \cdot p)(k \cdot p^\prime) \right]
\end{equation}
Now we wish to express this in terms of the Mandelstam variables. The process is $t$-channel, so we have that
\begin{align}
t &= q^2 \\
&= (k - k^\prime)^2 = (p - p^\prime)^2 \nonumber \\
&\approx -2k\cdot k^\prime \approx -2p\cdot p^\prime \nonumber
\end{align}
The centre of momentum energy is
\begin{align}
s &= (k + p)^2 = (k^\prime + p^\prime)^2 \nonumber \\
&\approx 2k\cdot p \approx 2k^\prime \cdot p^\prime
\end{align}
and finally
\begin{align}
u &= (k - p^\prime)^2 = (p - k^\prime)^2 \nonumber \\
&\approx -2k\cdot p^\prime \approx -2p \cdot k^\prime
\end{align}
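Spelling out the intermediate step, each product of invariants maps onto a quarter of a squared Mandelstam variable:
\begin{equation}
(k^\prime \cdot p^\prime)(k \cdot p) \approx \frac{s^2}{4} \ , \qquad (k^\prime \cdot p)(k \cdot p^\prime) \approx \frac{u^2}{4} \ , \qquad q^4 = t^2 \ .
\end{equation}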
Using these relations, the scattering probability may be expressed as
\begin{align}
\abs{T_{fi}}^2 &= \frac{2e^4}{t^2} \left( s^2 + u^2 \right)
\end{align}
We employ the formula for the differential cross section for elastic scattering,
\begin{equation}
\dv{\sigma}{\Omega} = \frac{1}{64\pi^2} \frac{\abs{T_{fi}}^2}{s}
\end{equation}
so for the $t$-channel scattering of two spin-half particles with electron charge (e.g.~electrons and muons),
\begin{equation}\boxed{
\dv{\sigma}{\Omega} = \frac{e^4}{32\pi^2 s} \frac{s^2 + u^2}{t^2}
}\end{equation}
\documentclass[10pt,aps,twocolumn,secnumarabic,balancelastpage,amsmath,amssymb,nofootinbib,floatfix]{revtex4}
%%\usepackage{setspace}
%%\setstretch{1.25}
\usepackage{graphicx} % tools for importing graphics
%\usepackage{lgrind} % convert program code listings to a form
% includable in a LaTeX document
%\usepackage{xcolor} % produces boxes or entire pages with
% colored backgrounds
%\usepackage{longtable} % helps with long table options
%\usepackage{epsf} % old package handles encapsulated postscript issues
\usepackage{bm} % special bold-math package. usge: \bm{mathsymbol}
%\usepackage{asymptote} % For typesetting of mathematical illustrations
%\usepackage{thumbpdf}
\usepackage[colorlinks=true]{hyperref} % this package should be added after
% all others.
% usage: \url{http://web.mit.edu/8.13}
\renewcommand{\baselinestretch}{1.0}
\begin{document}
\title{Photoelectrons, Statistics and Pulse Analysis}
\author{MIT Department of Physics}
\email{nobody\@mit.edu}
\homepage{http://web.mit.edu/8.13/}
\date{\today}
%\affiliation{MIT Department of Physics}
%-------------------------------------------------------------------------------
\begin{abstract}
In this experiment, we learn to use the oscilloscope and investigate the photoelectric effect and its statistical behaviour. In the first part of the experiment, we acquaint ourselves with the basic functionality of the scope and learn to manipulate different waveforms from a function generator. In the second part, we use an LED and a photo-multiplier tube to determine the probability distribution of the number of photons hitting the photo-multiplier in a given time interval. By changing the rate of events, we explore both Poisson and Gaussian distributions. We also investigate the probability distribution of time intervals between successive events.
{\bf This year this experiment will have to rely on recorded data. The description has been expanded to include a more thorough analysis of the recorded pulses. Please take into account that you will not be able to perform the data recording yourself, so please skip over the instructions for the recording. It is nevertheless useful to read those as it makes it more clear what the data is that has been recorded.}
\end{abstract}
\maketitle
%-------------------------------------------------------------------------------
\section*{Preparatory Questions}
\begin{enumerate}
\item In a sentence or two, describe the function of the following oscilloscope controls: a) trigger, b) time base and offset, c) input channels with AC-DC coupling.
\item If a green photon ($\lambda$ = 532~nm) penetrates the glass window of the photo-multiplier tube and hits the bi-alkali layer ($\phi$ = 2.1~eV) on the inside emitting a photo-electron, what is the maximum energy it could have? Compare this energy to the energy of electron due to acceleration by $\sim$ 300~V between the cathode and the first dynode.
\item You detect a triangular signal with a peak of 20~mV and a time base of 20~ns. Calculate the gain if the signal is terminated by a 50~Ohm resistor.
\item What is the most likely time difference between two consecutive events? Evaluate the probability that two consecutive events occurring at a rate of 100~events/$\mu$s fall within 20~ns of each other.
\item What is the uncertainty on the mean of a Gaussian distribution? How does it improve when adding more data?
\item Using a computer program or online tool, create a 100-event, Poisson distributed synthetic histogram with mean, $\mu$ = 2. What does this distribution look like? What does the variance appear to be?
% \item {\bf Challenge Question - you may need to have taken 8.044 to do this one!}: Noise is thermal electrons that have been liberated from the photocathode, which then enter the photomultiplier tube and are amplified, just like an event. Give a rough estimate of the noise rate at 297~K, for a monolayer 2" diameter cathode, where the atoms have a density of 1 atom/nm$^2$. Hint: use the Maxwell-Boltzmann distribution.
\end{enumerate}
%-------------------------------------------------------------------------------
\section{Theory}
\subsection{The Photoelectric Effect}
The photoelectric effect was discovered by Heinrich Hertz~\cite{buchwald1994} in 1887, who observed that when ultraviolet light fell on a negative electrode, electricity flowed to the anode. This experimental observation prompted Albert Einstein to study the idea proposed by Max Planck in 1900 that electromagnetic radiation carries energy in `packets' we call photons, where each photon carries energy equal to $h\nu$~\cite{planck1900}. In 1905, using the experimental data on the photoelectric effect, Einstein published a paper reiterating the idea that light travels in discrete quantized packets of energy~\cite{einstein1905}.
\begin{figure*}
\centering
\includegraphics[width=1.0\linewidth]{figs/apparatus1.png}
\caption{The schematic of the experimental setup}
\label{fig:apparatus1}
\end{figure*}
The photoelectric effect is the emission of electrons when light hits a material. Electrons emitted by this process are called photoelectrons. Each material has a characteristic work function $\phi$ (electron binding energy) which is the amount of energy required to remove an electron from its surface. If the incoming photon has an energy higher than the work function $\phi$, the electron will be ejected with a maximum kinetic energy given by
%%
\begin{equation}
K = h\nu - \phi
\end{equation}
%%
where $h$ is Planck's constant with a value of $6.626\times10^{-34}$~J\,s or $4.135\times10^{-15}$~eV\,s.
\subsection{Poisson Statistics}
The Poisson probability distribution describes the probability of a given number of events occurring within a fixed time or space interval if these events occur with a given average rate and independently of one another~\cite{WP-poisson-distribution}. Common examples include the number of decay events from a radioactive source, the number of typos on a page of this document, the number of meteors larger than a meter striking Earth in a year, or the number of calls received by a call center in an hour. Can you think of some processes that do not follow a Poisson distribution? Do you think the number of airplanes arriving at Boston Logan Airport in a five-minute interval follows a Poisson distribution?
The probability distribution of a Poisson random variable $X$ representing the number of events occurring in a given time interval or space interval is given by
%%
\begin{equation}
P(X = x) = \frac{e^{-\lambda} \lambda^{x}}{x!} \ ,
\end{equation}
%%
where $\lambda$ is the average number of events occurring in a fixed interval of time. In the Poisson distribution, both the expected value and variance are equal to $\lambda$.
\begin{equation}
E(X) = \lambda ~~~~\text {and} ~~~~ \text{Var}(X) = \sigma^2 = \lambda \ .
\end{equation}
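As a concrete illustration of these formulas (and of the synthetic histogram asked for in the preparatory questions), here is a minimal pure-Python sketch; the function names and the inverse-CDF sampler are our own choices, not part of the lab software package:

```python
import math
import random

def poisson_pmf(x, lam):
    """P(X = x) = exp(-lam) * lam**x / x! for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam**x / math.factorial(x)

def poisson_sample(lam, rng=random):
    """Draw one Poisson variate by inverting the CDF (adequate for small lam)."""
    u = rng.random()
    x, cdf = 0, poisson_pmf(0, lam)
    while cdf < u:
        x += 1
        cdf += poisson_pmf(x, lam)
    return x

# Synthetic 100-event sample with mean mu = 2, as in the preparatory questions.
random.seed(0)
counts = [poisson_sample(2.0) for _ in range(100)]
mean = sum(counts) / len(counts)
var = sum((c - mean) ** 2 for c in counts) / len(counts)
print(f"mean ~ {mean:.2f}, variance ~ {var:.2f}")  # both should come out near 2
```

For a Poisson distribution the sample mean and sample variance should both land near $\lambda = 2$, up to statistical fluctuations of order $\sqrt{\lambda/N}$.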
%-------------------------------------------------------------------------------
\section{Goals for the experiment}
For the photoelectron experiment we have the following goals:
\begin{enumerate}
\item Learn how to operate an oscilloscope.
\item Learn about the photoelectric effect.
\item {\bf Learn about Poisson and Gaussian distributions.}
\end{enumerate}
%-------------------------------------------------------------------------------
\section{Experimental Setup}
The components of this experiment include:
\begin{enumerate}
\item Green LED: The green LED takes voltage pulses from the function generator, and when the voltage is above a certain threshold, it produces green photons ($\lambda$ = 560 nm). These photons pass through a black attenuating cloth. Those that make it through the attenuator impinge onto the photocathode.
\item Photocathode: Photons incident on the photocathode cause electrons to be emitted via the \textbf{photoelectric effect}. These electrons are drawn to the first dynode of the photomultiplier tube.
\item Photomultiplier Tube (PMT): The photomultiplier tube amplifies the signal of a single electron, increasing the number of electrons by several orders of magnitude via the use of dynodes. The dynodes have a low work function and easily emit secondaries that hit the next dynode. The PMT is connected to the High Voltage (HV) supply, which provides $\sim$1877~V to the PMT. This voltage should not be changed throughout the experiment. What does the high voltage from the HV supply affect?
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figs/apparatus2.png}
\caption{The setup of apparatus in the lab}
\label{fig:apparatus2}
\end{figure}
\item Function Generator: The function generator outputs a periodic function whose shape, amplitude, and period can be adjusted. For this experiment, we will be using the function generator to create square pulses with a period of about 5~$\mu$s.
\item Oscilloscope: The oscilloscope displays signals from the generator and the PMT, allowing for visual diagnostics and quantitative analysis. The function generator and the PMT should be connected to the oscilloscope (see Figure \ref{fig:apparatus1}) \cite{tds2000}.
\end{enumerate}
%-------------------------------------------------------------------------------
\section{Experiment}
\subsection{Scope exercises}
For the oscilloscope exercise please follow the steps outlined below carefully. The hardware can be damaged if not operated properly.
\begin{enumerate}
\item Switch on the oscilloscope, the function generator and the HV supply. The voltage should read $\sim$1877~V. Please do not change the voltage for the rest of the experiment.
\item Check that the function generator output is sent to channel 1 on the scope and the output of the PMT is sent to channel 2 through a 50~Ohm terminator. This terminator is quite important; can you think of a reason why?
\item Ensure that the function generator is sending in square-waves and the range is set to `100~K'. If there is no signal on the scope or if the signal appears frozen, press the `RUN/STOP' button on the top right corner.
\item Move the `TRIGGER LEVEL' until a stable signal is achieved. Can you understand what the `TRIGGER' does?
\item You should now be able to see a train of square-wave pulses and the photo-electron signal. Find the period and amplitude of the square-wave pulses and write them down in your notebook.
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figs/pulsetrain.png}
\caption{The pulse train expected to be seen on the oscilloscope.}
\label{fig:pulsetrain}
\end{figure}
\item Play around with the `VERTICAL' position and scale for both channel 1 and channel 2 on the scope. Does changing the position affect the period and amplitude of the signal? Does changing scale affect the period and amplitude?
\item Now do the same for `HORIZONTAL' position and scale. How does that affect the period and amplitude of the signal? Which channel's signal does it affect?
\item Try changing to a different waveform on the function generator. Can you explain what happens? Now change the amplitude on the function generator. What do you see?
\item Why is the signal from the PMT upside down (negative)?
\item Use `RUN/STOP' to record the trace on a USB stick. You will find more instructions on collecting data with a USB stick in the user manuals for the scopes, available right next to the scope. Print out your trace and verify that you got what you wanted.
\item Go back to generating square waveform from the function generator. Zoom into the time domain using the horizontal scale until the scale reads on the order of 20~ns. Ensure that you are triggering on channel 2 with a falling slope. By adjusting the position knobs, you should be able to see a single peak in detail.
\item Assuming that the signal shape can be approximated by a triangle, compute the gain of the PMT as you did for the preparatory problem. Do not forget about the 50~Ohm terminator! Repeat the calculation a few times. You should observe that the collected charge $Q$ varies by $\sim$30\% depending on where the electron emerged from the photo-cathode. {\it Pro-tip: No number in experimental physics without its uncertainty! - Prof. Becker}
\end{enumerate}
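The gain estimate in the last step can be sketched numerically. The numbers below (20~mV peak, 20~ns base, 50~Ohm termination) are the ones quoted in the preparatory questions; treating the whole pulse as one photoelectron is our simplifying assumption:

```python
# Estimate the PMT gain from a triangular pulse: the collected charge is the
# pulse area divided by the termination resistance, and the gain is that
# charge expressed in units of the elementary charge (one photoelectron).
E_CHARGE = 1.602e-19  # elementary charge in coulombs

def pmt_gain(v_peak, t_base, r_term=50.0):
    """Gain for a triangular pulse of height v_peak (V) and base t_base (s)."""
    area = 0.5 * v_peak * t_base  # integral of V(t) dt, in volt-seconds
    charge = area / r_term        # Q = (1/R) * integral of V dt
    return charge / E_CHARGE      # number of electrons per photoelectron

gain = pmt_gain(v_peak=20e-3, t_base=20e-9)
print(f"gain ~ {gain:.2e}")  # of order 10^7, a typical PMT gain
```

A gain of order $10^7$ is consistent with roughly ten dynode stages, each multiplying the electron count by a factor of a few.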
\subsection{Photo-electron statistics exercises}
\begin{enumerate}
\item Pause acquisition on the oscilloscope and adjust the channel 2 scale (signal from the PMT) until you can clearly see individual peaks. This means that you are detecting single photons, and we are now ready to record data.
\item Adjust the function generator's frequency until the average number of peaks per function maximum (bin) is between 1 and 5 and record 100 intervals.
%\item {\bf Use the recorded data and count the number of negative voltage peaks that are at least 10 times larger than the noise, in the corresponding time. What do you think the taller peaks signify, and how should they affect your count per bin?}
%\item {\bf Plot a histogram of the number of peaks per bin and calculate its variance. Fit both a Poisson and Gaussian distribution to the data. Qualitatively, which is a better fit?}
\item Now, adjust the oscilloscope until the average count is greater than ten and record 100 intervals.
%\item {\bf Analyze the recorded data as previously and plot another histogram and calculate the variance. Fit both a Poisson and Gaussian to this data. Qualitatively, which is a better fit?}
\end{enumerate}
%-------------------------------------------------------------------------------
\section{Reading recorded oscilloscope data}
After recording a number of intervals as described above, we proceed to a more detailed analysis of the pulse data. Start by installing the software package following the instructions in Reference~\cite{cite:pulses}.
\subsection{Understanding single oscilloscope traces}
Use the data in the 'data' directory:
\begin{enumerate}
\item Select two typical but different plots of pulses and explain what the features are that you see.
\item The baseline and jitter of an oscilloscope recording must be determined before a signal can be identified. The baseline is the average reading expected with no signal present, while the jitter is the fluctuation of that reading. Determine the baseline and the jitter for the two plots you showed in the last part. Explain what you did.
\item Fit a Landau function~\cite{cite:plandau} to the peak and determine the mean values and the corresponding uncertainties of the position, the width and the normalization of the peak. The analysis package already does this, but does not include the statistical uncertainties for each reading. Please modify the code to include the uncertainties and compare the results with ones where no uncertainties are included. Calculate the $\chi^2$ and determine the number of degrees of freedom. Use those two numbers to determine the probability that the observed peak originates from the fitted Landau distribution.
\end{enumerate}
\subsection{Analyze two datasets of oscilloscope traces}
The two datasets, each consisting of a number of oscilloscope recordings, are given in the directories 'data-a' and 'data-b' and have been recorded as described above. Please, perform the following tasks on each dataset.
\begin{enumerate}
\item Determine the number of photons seen per recording and produce a plot showing the distribution of the full dataset.
\item Determine the average number of photons by fitting the distribution obtained before using a Poisson and a Gaussian distribution. Please, include the statistical uncertainty.
\item Compare the result with the average rate obtained when dividing the total number of counts (sum of all recordings) by the total time (sum of the duration of all recordings).
\item Produce a plot of the cumulative average, {\it i.e.} updating the running average as each recording is added. Please also keep the statistical uncertainties up to date.
\end{enumerate}
%\bibliography{photon_pulses}
%-------------------------------------------------------------------------------
%\bibliographystyle{yahapj}
%\bibliography{photon_statisitics}
\begin{thebibliography}{99}
\bibitem{WP-photo-electric}
\href{https://en.wikipedia.org/wiki/Photoelectric\_effect}{Wikipedia: Photoelectric effect}
\bibitem{WP-poisson-distribution}
\href{https://en.wikipedia.org/wiki/Poisson\_distribution}{Wikipedia: Poisson distribution},
\href{https://towardsdatascience.com/the-poisson-distribution-and-poisson-process-explained-4e2cb17d459}%
{Towards data science web site.}
\bibitem{buchwald1994}
J.Z.~Buchwald.
``The Creation of Scientific Effects'',
\href{https://press.uchicago.edu/ucp/books/book/chicago/C/bo3636276.html}%
{The University of Chicago Press, Book, 1994}
\bibitem{einstein1905}
A.~Einstein,
``\"Uber einen die Erzeugung und Verwandlung des Lichtes betreffenden heuristischen Gesichtspunkt,''
\href{http://users.physik.fu-berlin.de/~kleinert/files/eins\_lq.pdf}%
{Annalen der Physik Vol. {\bf 322}, Issue 6 (1905) 132-148.}, and
A.~Peppard,
``Einstein's Proposal of the Photon Concept-a Translation of the Annalen der Physik Paper of 1905,''
\href{https://aapt.scitation.org/doi/10.1119/1.1971542}%
{American Journal of Physics, Volume {\bf 33}, Issue 5 (1965) 367-374}
\bibitem{planck1900}
M.~Planck,
``Zur Theorie des Gesetzes der Energieverteilung im Normalspectrum,''
\href{https://archive.org/stream/verhandlungende01goog#page/n247/mode/2up}{Verhandlungen der Deutschen Physikalischen Gesellschaft 2 (1900) 237–245}, and
D. ter Haar,
``The Old Quantum Theory,''
\href{https://openlibrary.org/books/OL5997151M/The\_old\_quantum\_theory}{Pergamon Press: 82 (1967) LCCN 66029628}.
\bibitem{tsitsiklis2018}
J.~Tsitsiklis,
``Definition of the Poisson Process,''
\href{https://www.youtube.com/watch?v=D\_EGYzqmapc}{MIT RES.6-012 Introduction to Probability, Spring 2018}
\bibitem{h-value}
\href{https://physics.nist.gov/cgi-bin/cuu/Value?h}{NIST web site.}, or
\href{http://www.physics.rutgers.edu/~abrooks/342/constants.html}{Useful constants and conversions.}
\bibitem{tds2000}
\href{https://www.tek.com/oscilloscope/tds2000-digital-storage-oscilloscope}{Tektronics scope: TDS2000}
\bibitem{cite:pulses}
\href{https://github.com/JLabMit/JLabExperiments/tree/master/Pulses/python}{Python package for the Pulses experiment.}
\bibitem{cite:plandau}
\href{https://pypi.org/project/pylandau/}{Python implementation of the Landau function.}
\end{thebibliography}
%-------------------------------------------------------------------------------
\clearpage
\appendix
\section{More on the photomultiplier and oscilloscope}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figs/photomultipliertube.png}
\caption{The schematic of signal amplification by the photomultiplier}
\label{fig:photomultiplier-tube}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{figs/morescope.png}
\caption{The different knobs and functions of the scope}
\label{fig:morescope}
\end{figure}
To save your data into a USB stick:
\begin{enumerate}
\item Insert the USB into the port once you are ready to save the files.
\item Press the `SAVE' button on the scope and wait until the files are all saved.
\item The files saved into the USB stick contain a .CSV file to access your data and .JPG file which is an image of the scope output.
\end{enumerate}
\end{document}
\documentclass[output=paper]{langscibook}
\ChapterDOI{10.5281/zenodo.5524274}
\author{Emily A. Hanink\affiliation{The University of Manchester}}
\title{Restructuring and nominalization size}
\abstract{This paper addresses the interaction between restructuring and nominalization in Washo (isolate, USA). An overview of the basics of restructuring in Washo is provided, and then two types of thematic nominalizations -- subject and object -- are compared with respect to their underlying structure and the availability of restructuring. Particular attention is paid to predictions determining the availability of both functional and lexical restructuring; with specific regard to the latter, the Washo data offer preliminary evidence that the height of the nominalization must contain at least VoiceP to facilitate agent sharing (\citealt{WurmbrandShimamura2017}).}
\begin{document}
\SetupAffiliations{mark style=none}
\tikzset{every tree node/.style={align=center,anchor=north}}
\maketitle
\section{Introduction}
This paper addresses the interaction between restructuring and nominalization size in Washo (isolate, USA). While Washo allows for restructuring in some nominalizations, it is shown that sufficient structure must be projected. I demonstrate this with a comparison between two types of thematic nominalizations in the language, subject and object, which differ in their underlying structure. The interaction between restructuring and nominalization is not well-studied, but offers an exciting venue for future research. The modest aim of this paper is therefore to offer some discussion of the basics of restructuring in Washo (\sectref{haninksec:2}), and to highlight some questions regarding the relationship between nominalization height and the availability of restructuring, based on currently available data (\sectref{haninksec:3}--\sectref{haninksec:4}).
\section{Restructuring in Washo}\label{haninksec:2}
The term {\itshape restructuring} refers to constructions in which an ``embedded predicate is transparent
for properties which are otherwise clause-bound'' \citep[248]{wurmbrand2015}. For example, one common diagnostic for restructuring comes from the availability of clitic climbing, as shown with the Italian contrast in (\ref{haninkitaliana}--\ref{haninkitalianb}) \citep[991--992]{Wurmbrand2004}:
\ea Italian\\
\ea[]{
\gll Lo volevo $[$ vedere {\itshape t}$_{\text{cl}}$ subito\ $]$.\\
him I-wanted {} see {} immediately\\
\glt `I wanted to see him immediately.' \hfill {\itshape Restructuring}\label{haninkitaliana}
}
\ex[*]{
\gll Lo detesto $[$ vedere {\itshape t}$_{\text{cl}}$ in quello stato\ $]$.\\
him I-detest {} see {} in that state\\
\glt Intended: `I detest seeing him in that state' \hfill {\itshape Non-restructuring}\label{haninkitalianb}
}
\z
\z
While restructuring phenomena have largely been studied in analytic-type languages, agglutinative-type languages likewise display restructuring effects. This is illustrated for example in (\ref{haninkjapanese}) with Japanese, in which the restructuring verb {\itshape wasure} `forget' occurs as an affix on the non-finite verb {\itshape tabe} `eat' within the same predicate. Such predicates instantiate restructuring in that they exhibit monoclausal effects; see \citealt{shimamurawurmbrand2014} for more details.
\ea Japanese\\
\gll
John-wa subete-no ringo-o tabe\textit{-wasure}-ta.\\
John-{\scshape top} all-{\scshape gen} apple-{\scshape acc} eat\textit{-forget}-{\scshape pst}\\
\glt `John forgot to eat all the apples.' \hfill \citep[2]{shimamurawurmbrand2014} \label{haninkjapanese}
\z
In Washo, a head-final language like Japanese, restructuring verbs are likewise affixed onto a non-finite (tenseless) verb to form a complex predicate (\ref{haninkrest}).\footnote{Washo (iso: was) is an endangered isolate spoken in several communities of California and Nevada surrounding Lake Tahoe. Some typologists group Washo within the Hokan family, see e.g., \citet{campbell1997} and \citet{mithun1999} for discussion. Orthography is adapted from \citet{jacobsen1964}; non-IPA symbols in this paper are L [l̥], š [ʃ], and y [j]. Stress is represented with an acute accent. Unless otherwise stated, the Washo data come from the author's fieldwork.}
\ea\label{haninkrest} Washo\\
\gll l-éšɨm\textit{-dugá:gu}-yi\\
1-sing\textit{-not.know.how}-\textsc{ind}\\
\glt `I don't know how to sing.'\footnote{Some verbs in Washo are inherently negative, as is the case with {\itshape dugá:gu} `not know how'.}
\z
\noindent Here, clause-bound transparency is revealed by the presence of a single agreement morpheme at the left periphery (prefixal agreement is only for person). Agreement morphology may not appear on both verbs, which I take as evidence for the reduced and non-finite status of the embedded verbal domain. In the same vein, just one set of TAM marking is observed at the right periphery; negation must likewise be clause-peripheral, and may not intervene between the verbs.
This strategy stands in contrast for example to finite embedding in the language, which comes in the form of either a clausal nominalization (\ref{haninkadele}) or a bare (non-nominalized) clause (\ref{haninkdream}), depending on the embedding predicate (\citealt{haninkbochnak2018}). Independent tense and mood marking are permitted in both of these clause types.\footnote{Washo is an optional tense language (\citealt{bochnak2016}), and tense marking often does not appear.} Clausal nominalizations further provide evidence for a CP-layer in that they exhibit switch reference morphology (see \citealt{arregihanink2018}). The upshot is that both of these embedding strategies involve finite clauses.
\ea Finite embedding of a clausal nominalization (nominalized CP)\label{haninkadele}\\
\gll Adele $[$ \textit{pro} daláʔak ʔ-í:gi\textit{-yi-$\varnothing$}-ge $]$ hámup'a-yé:s-i\\
Adele $[$ \textit{pro} mountain 3/3-see-\textit{\textsc{ind-ss}}\textsc{-nm.acc} $]$ 3/3.forget-\textsc{neg-ind}\\
\glt`Adele remembers that she saw the mountain.'\footnote{`Remember' in Washo can only be expressed by negating `forget'.}
\ex Finite embedding of a bare clause (MoodP)\label{haninkdream}\\
\gll \textit{pro} $[$ \textit{pro} di-yé-{iʔ}iš{\itshape-aʔ} $]$ di-gum-su{ʔ}ú{ʔ}uš-i{ʔ}-i \\
\textit{pro} $[$ \textit{pro} 1-fly-forward\textit{-\textsc{dep}} $]$ 1-\textsc{refl}-dream-\textsc{attr-ind}\\
\glt `I dreamt that I was flying.' \hfill Washo Archive
\z
\subsection{Restructuring in Washo}
Restructuring in Washo is found with a range of aspectual suffixes (\ref{haninkaspect}), as well as with modal `know how to' (\ref{haninkmodal}) and desiderative `want' (\ref{haninkwant}) (which can also mean `like'). Below I have classified a subset of these verbs (a term used loosely here, see \sectref{haninksec:vs}) based on Grano's (\citeyear{grano2012diss}: 16) sorting of Landau's (\citeyear{landau2000}) classes; Grano draws from the set of restructuring verbs in \citet[342]{wurmbrand2001}. The examples in (\ref{haninkother}) list some verbs in Washo that do not fall clearly into any of these categories.
\ea Aspectual\label{haninkaspect}\\
\ea {\gll zí:gɨn l-éʔw\textit{-gáŋa}-leg-i\\
chicken 1/3-eat\textit{-start}-{\scshape rec.pst-ind}\\
\glt `I started to eat the chicken.' \hfill Washo Archive}
\ex \gll mí:-lé:we di-dulé:k'ɨl\textit{-mámaʔ}-ášaʔ-i\\
2.{\scshape pro}-for 1-cook\textit{-finish}-{\scshape prosp-ind}\\
\glt `I'll finish cooking for you.'
\ex \gll háʔaš\textit{-dúweʔ}-i\\
3.rain\textit{-be.about.to}-{\scshape ind}\\
\glt `It's about to rain.'\label{haninkrain}
\ex \label{haninksmoke}\gll t'é:liwhu báŋkuš\textit{-íweʔ}-i\\
man 3.smoke\textit{-stop}-{\scshape ind}\\
\glt `The man stopped smoking.' \hfill Washo Archive
\z
\ex Modal\\
{\gll t'é:liwhu bašáʔ\textit{-dugá:gu}-yi\\
man 3.write\textit{-not.know.how}-{\scshape ind}\\
\glt `The man doesn't know how to write.'} \label{haninkmodal}
\ex Desiderative\label{haninkwant}\\
\ea {\gll di-gé:gel\textit{-gaʔlám}-i\\
1-sit\textit{-want}-{\scshape ind}\\
\glt `I want to sit.'\hfill Washo Archive}
\ex \gll l-éšɨm\textit{-gaʔlám}-i\\
1-sing\textit{-like}-{\scshape ind}\\
\glt `I like to sing.'
\z
\ex Other\label{haninkother}\\
\ea \gll di-bamušéʔeš\textit{-tamugáyʔliʔ}-i\\
1-read\textit{-be.tired.of}-{\scshape ind}\\
\glt `I'm tired of reading.'
\ex \gll l-éšɨm\textit{-duwéʔweʔ}-ášaʔ-i\\
1-sing\textit{-try}-{\scshape prosp-ind}\\
\glt `I'm going to try to sing.'\footnote{The verb `try' is the reduplicated form of the aspectual verb `be about to' (\ref{haninkrain}). This is an unusual instance of reduplication, which generally indicates plurality in Washo (see \citealt{yu2005,yu2012}).}
\ex \gll di-gum-yá:gɨm\textit{-ŋáŋa}-hu-yaʔ\\
1-{\scshape refl}-smoke\textit{-pretend}-{\scshape pl.incl-dep}\\
\glt `Let's pretend to smoke one another.' \hfill Bear and Deer Story
\z
\z
\subsection{Lexical vs. functional restructuring} \label{haninksec:vs}\largerpage[1.75]
\citet{wurmbrand2001} argues for a distinction between {\itshape lexical} and {\itshape functional} restructuring (see also \citealt{Wurmbrand2004}; cf. \citealt{cinque2001,cinque2004,grano2012diss}), which depends on whether the restructuring element is a lexical verb or a functional head, e.g., Asp or Mod. I show in this section that this distinction, which will come up in the discussion of nominalizations, appears to be motivated in Washo.
\citet{Wurmbrand2004} lays out several diagnostics for lexical vs. functional restructuring. First, only lexical restructuring verbs show flexibility in selection. In Washo, this is observed in that lexical verbs may select for a nominal argument (\ref{haninkshoes}); this is however not possible in functional restructuring (\ref{haninkbook}).
\ea Variation in selection
\ea[] {\gll $[$ di-mók'o $]$ di\textit{-tamugáyʔliʔ}-i\\
$[$ 1-shoe $]$ 1/3\textit{-be.tired.of}-{\scshape ind}\\
\glt `I'm tired of my shoes.'\label{haninkshoes}}
\ex[*] {\gll $[$ ʔitbamušéʔeš $]$ di\textit{-gáŋaʔ}-i\\
$[$ book $]$ 1/3\textit{-start}-{\scshape ind}\\
\glt Intended: `I started the book.'\label{haninkbook} }
\z
\z
\begin{sloppypar}Second, functional restructuring is compatible with weather subjects (\ref{haninkstopcold}), while lexical restructuring is not (\ref{haninktiredcold}):\end{sloppypar}
\ea Weather verbs
\ea[*] {\gll baŋáya wa-métuʔ\textit{-tamugáyʔliʔ}-i\\
outside {\scshape stat}-be.cold{\itshape -be.tired.of}{\scshape -ind}\\
\glt Intended: `It's tired of being cold outside.'\label{haninktiredcold}}
\ex[] {\gll baŋáya wa-métuʔ\textit{-iweʔ}-i\\
outside {\scshape stat}-be.cold{\itshape -stop}{\scshape -ind}\\
\glt `It stopped being cold outside.'\label{haninkstopcold}}
\z
\z
Additionally, Washo exhibits cross-linguistically rare object control in restructuring (cf. \citealt{cinque2001}), exemplified in (\ref{haninkobjectcontrol}) with the verb {\itshape méwɨl} (`ask (someone) to do something'). Such examples pose a problem for accounts in which restructuring is limited entirely to functional heads, as such heads are predicted not to be able to select for internal arguments.
\ea
\gll Adele l-é:biʔ{\itshape -méwɨl}-i\\
Adele 1/3-come{\itshape -ask}-{\scshape ind}\\
\glt `I asked Adele to come.' \label{haninkobjectcontrol}
\z
Finally, variation is observed in possible orderings of the causative morpheme. In cases of lexical restructuring, the causative morpheme may appear as a suffix on the lower verb (\ref{hanink1}), or at the periphery of both verbs (\ref{hanink2}).\footnote{This may in fact be a diagnostic for the optionality of lexical restructuring.} In cases of functional restructuring, it may only appear in a right-peripheral position (\ref{haninkfunccaus}).\footnote{The position of the causative morpheme in Washo is sensitive to phonological factors, see e.g., \citealt{jacobsen1973,benz2018}, but that is not what is driving the contrast here.}
\ea Position of the causative in lexical restructuring
\ea \gll dímeʔ di-yák'aš{\itshape-ha}-gaʔlám-i\\
water 1/3-be.warm-\textit{\textsc{caus}}-want{\scshape-ind}\\
\glt `I want to warm the water up.' \label{hanink1}
\ex \gll dímeʔ di-yák'aš-gaʔlám{\itshape -ha}-yi\\
water 1/3-be.warm-want\textit{\textsc{-caus}}{\scshape-ind}\\
\glt `I want to warm up the water.' \label{hanink2}
\z
\ex Position of the causative in functional restructuring\label{haninkfunccaus}
\ea[]{ \gll dímeʔ di-yák'aš-gáŋa{\itshape-ha}-yi\\
water 1/3-be.warm-start\textit{\textsc{-caus}}{\scshape-ind}\\
\glt `I'm starting to warm the water up.' \label{hanink3}}
\ex[*] {\gll dímeʔ di-yák'aš{\itshape-ha}-gáŋaʔ-i \\
water 1/3-be.warm-\textit{\textsc{caus}}-start-{\scshape ind}\\
\glt Intended: `I'm starting to warm the water up.' \label{hanink4} }
\z
\z
While a precise analysis explaining the range of such effects awaits future research, moving forward I follow \citeauthor{wurmbrand2001} (\citeyear{wurmbrand2001}, et seq.) in treating functional restructuring as involving functional heads in the clausal spine such as Asp/Mod (\citealt{cinque2001,cinque2004,grano2012diss}), represented in (\ref{haninkvoice}) below as ``F'', but lexical restructuring as involving lexical verbs that select for an embedded VoiceP (\ref{haninkvoice2}), in a way to be made more precise in the next subsection.
\begin{multicols}{2}\raggedcolumns
\ea Functional restructuring\label{haninkvoice}\\
\begin{tikzpicture}[baseline]
\tikzset{level distance=22pt,sibling distance=15pt}
%\tikzset{level 1/.style={sibling distance=-125pt}}
\Tree [.FP [.VoiceP \qroof{\dots}.vP [.Voice ] ] [.F ] ]
\end{tikzpicture}
\z\columnbreak
\ea Lexical restructuring\label{haninkvoice2}\\
\begin{tikzpicture}[baseline]
\tikzset{level distance=22pt,sibling distance=15pt}
%\tikzset{level 1/.style={sibling distance=-125pt}}
\Tree [.VP [.VoiceP \qroof{\dots}.vP [.Voice ] ] [.V ] ]
\end{tikzpicture}
\z
\end{multicols}
\subsection{Lexical restructuring involves agent sharing}\label{haninksec:agent}
Relevant for the discussion of nominalizations moving forward is the proposal that lexical restructuring involves the selection of VoiceP by a restructuring verb (\citealt{wurmbrand2015,WurmbrandShimamura2017}), rather than the selection of a bare VP (e.g., \citealt{wurmbrand2001,Wurmbrand2004}).
This proposal is motivated by languages that show a variety of effects of Voice in restructuring environments.\footnote{While voice distinctions play a large role here, Washo lacks a passive (\citealt{jacobsen1979}).} I briefly summarize their approach and show how it extends to Washo.
Adopting the proposal that (causative) $v$ co-occurs with Voice within a split-voice domain (i.a. \citealt{bowers2002,folliharley2005,alexiadouetal2006,marantz2008}), \citet{WurmbrandShimamura2017} offer the following derivation of a matrix clause with active voice (\figref{haninkvoice3}). In this structure, the Voice head introduces the agent and bears both agent and accusative case features, while $v$ carries transitivity information. The valuation of interpretable $\varphi$-features as well as feature sharing between the DP argument and Voice corresponds to theta-assignment.
\begin{figure}
\caption{\label{haninkvoice3}Feature sharing between DP and Voice \citep{WurmbrandShimamura2017}}
\begin{tikzpicture}[baseline]
\tikzset{level distance=22pt,sibling distance=15pt}
%\tikzset{level 1/.style={sibling distance=-125pt}}
\Tree [.VoiceP \qroof{{\itshape i}$\varphi$: val}.DP [.Voice$^\prime$ [.Voice\\{\scshape agent, acc}\\{{\itshape i}$\varphi$:\underline{val}} ][.vP [.v\\{\scshape tr/in, (caus)} ] \qroof{$\dots$}.VP ]] ]
\draw[semithick,dashed,*-*] (-2,-2.25)..controls +(south east:1) and +(south west:1)..(.35,-3.15);
\end{tikzpicture}
\end{figure}
\citet{WurmbrandShimamura2017} adopt moreover a valuation approach to Agree (\citealt{pesetskytorrego2007}), formulated in (\ref{haninkreverse}) as Reverse Agree, which accounts for the downward valuation of the agent's features onto Voice.
\ea Reverse Agree \citep{Wurmbrand2014}\label{haninkreverse}\\
A feature {\scshape F}:\underline{\ \ \ } on $\alpha$ is valued by a feature {\scshape f}: val on $\beta$ iff
\ea $\beta$ c-commands $\alpha$ and
\ex $\alpha$ is {\itshape accessible to $\beta$}
\ex $\alpha$ does not value \{a feature of $\beta$\}/\{a feature {\scshape f} of $\beta$\}
\z
\z
In restructuring configurations (see below), the restructuring verb selects for VoiceP. Crucially, matrix Voice agrees with the DP subject in its specifier before valuing {\itshape i}$\varphi$ on the lower Voice head (see \citealt{wurmbrand2015,WurmbrandShimamura2017} for distinctions between voice matching and default voice languages). No embedded subject is projected; this proposal therefore accounts for the fact that an overt subject is not allowed in the embedded VoiceP. Instead, feature sharing results in agent sharing between Voice heads.
Evidence for the presence of embedded VoiceP in Washo comes from the appearance of the causative morpheme {\itshape -ha} between the lower and higher verbs, indicating that the complement of the restructuring verb is larger than VP. Adopting Wurmbrand \& Shimamura's (\citeyear{WurmbrandShimamura2017}) proposal for Washo, the structure for an example such as (\ref{haninkwarmup}) is then as in \figref{fig:haninkwarmupstructure} (schematized without head movement). No embedded subject is projected; instead, embedded Voice enters into a dependency with the higher Voice head, whose features it then shares.
% (as is also the case in Acehnese, \citealt{legate2014}, apud \citealt{wurmbrandshimamura2017}),
\ea \label{haninkwarmup}
\gll dímeʔ di-yák'aš-ha-tamugáyʔliʔ-i\\
water 1/3-be.warm-{\scshape caus}-be.tired.of-{\scshape ind}\\
\glt `I'm tired of warming up the water.'
%\gll géwe di-yúli-duwéʔweʔ-ha-yášaʔ-i\\
%coyote 1/3-die-try.{\scshape fut}-{\scshape caus-near.fut-ind}\\
%\glt `I'm going to try to kill the coyote.'
\z
\begin{figure}
\caption{General schematic for restructuring in Washo\label{fig:haninkwarmupstructure}}
\begin{tikzpicture}
\tikzset{level distance=22pt,sibling distance=15pt}
\tikzset{level 1/.style={sibling distance=-125pt}}
\tikzset{level 2/.style={sibling distance=15pt}}
\tikzset{level 3/.style={sibling distance=-10pt}}
\tikzset{level 4/.style={sibling distance=0pt}}
\tikzset{level 5/.style={sibling distance=15pt}}
\Tree [.VoiceP [.{\itshape pro}\\{i$\varphi$:val} ] [.Voice$^{\prime}$ [.vP [.VP [.VoiceP [.vP \qroof{\itshape dímeʔ yák'aš}.VP$_2$ [.v\\{\scshape caus}\\{\itshape -ha} ] ] [.Voice\\{i$\varphi$: \underline{val$_{ag}$}} ] ] [.V\\{\itshape -tamugáyʔliʔ} ] ] [.v ] ] [.Voice\\{i$\varphi$: val$_{ag}$} ] ] ]
\draw[semithick,dashed,*-*] (-.5,-5.25)..controls +(south east:1.5) and +(south west:1)..(3.75,-2.65);
\end{tikzpicture}
%\begin{tikzpicture}
%\tikzset{level distance=25pt,sibling %distance=20pt}\Tree[.vP [.VP$_1$ [.vP$_R$ \qroof{\itshape %duwéʔweʔ}.VP$_2$ [.$<$v$_R$$>$ ] %] [.V$_1$ [.V$_1$\\{\itshape -yuli} ] [.v$_{R}$\\{\itshape -ha} ] %] ] [.v ] ] \end{tikzpicture}
\end{figure}
\section{Restructuring in nominalizations}\label{haninksec:3}
I now turn to the interaction between restructuring and nominalization. Beyond the sentential level, restructuring is also observed in certain nominalizations; by contrasting subject and object nominalizations, I show below that the height of the nominalization determines whether restructuring is possible. Functional restructuring requires higher aspectual heads to be present in order to obtain, while the proposal put forward in \sectref{haninksec:agent} predicts that the projection of at least VoiceP within the nominalization is required for lexical restructuring. %I show that the availability of restructuring in subject nominalizations is consistent with this prediction.
\subsection{Thematic subject nominalizations}
The first nominalization type I discuss is thematic subject nominalizations, characterized in Washo by a lack of TAM marking as well as the presence of the phonologically conditioned prefix {\itshape t'-/d\textsuperscript{e}-} (\citealt{jacobsen1964}):
\ea Thematic subject nominalizations
\ea \gll{\itshape da}-mt'áʔŋaʔ\\
\textit{\textsc{3.un}}-hunt\\
\glt `hunter' \hfill Washo Archive
\ex \gll dé:guš {\itshape t'}-í:k'eʔ\\
potato \textit{\textsc{3.un}}-grind\\
\glt `potato grinder' ({\itshape man's name}) \hfill \citep[354]{jacobsen1964}
\z
\z
Much of the literature on subject nominalizations has focused on {\itshape -er} nominals (\citealt{rappaporthovavlevin1992,bakervinokurova2009,alexiadouschafer2010}), which are generally limited to external arguments cross-lin\-guis\-ti\-cal\-ly (though see \citealt{alexiadouschafer2008,alexiadouschafer2010}), exemplified in (\ref{haninkenglish}):
\ea a dazzled $[$ admir{\itshape -er} of Washington\ $]$ \hfill (\citealt{rappaporthovavlevin1992})\label{haninkenglish}
\z
\noindent\citet{bakervinokurova2009} argue that other subject nominalizations are distinguishable from {\itshape -er} nominals by the availability of: (i) direct objects and (ii) unaccusative subjects. In their analysis, deverbal -{\itshape er} nominals do not project beyond VP (cf. \citealt{alexiadouschafer2010}), precluding accusative case licensing as well as external arguments in this nominalization type (-{\itshape er} is a nominal Voice head (cf. \citealt{kratzer1996}), explaining the restriction to external arguments).
On the first point, (\ref{haninkhealer}) shows that accusative direct objects are licensed in Washo {\itshape t'-/d\textsuperscript{e}-} nominalizations ({\itshape t'ánu} `people'; note that accusative is unmarked on nouns), while the presence of v and Voice is diagnosed by the availability of the causative suffix {\itshape -ha}. On the second point, unaccusative subjects are also possible (\ref{haninkbroken}), consistent with the fact that the nominalizer does not take the place of an agentive subject, as on Baker \& Vinokurova's (\citeyear{bakervinokurova2009}) analysis.\footnote{Unaccusativity is diagnosed by the ability to undergo the inchoative/causative alternation.}
\ea \gll t'ánu t'-íšiw-ha\\
person 3.\textsc{un}-get.well-{\scshape caus}\\
\glt `person healer' (Lit. `one who heals people') \label{haninkhealer}
\ex \gll da-gótaʔ\\
{\scshape 3.un}-break\\
\glt `something that is broken' \label{haninkbroken}
\z
\noindent Relatedly, evidence for a syntactically-projected subject in VoiceP (beyond accusative licensing) comes from the availability of reflexives (\ref{haninkcall}), for which {\scshape pro} serves as a licit antecedent (cf. \citealt{bakervinokurova2009} on Gĩkũyũ (Bantu)). \il{Gĩkũyũ}
\ea \gll Ramona de{\itshape -gum}-díʔyeʔ L-éʔ-i\\
Ramona {\scshape 3.un}-\textit{\textsc{refl}}-call 1-be-{\scshape ind}\\
\glt `My name is Ramona.' (Lit. `{one who calls herself Ramona}') \label{haninkcall}
\z
Subject nominalizations in Washo are therefore not of the {\itshape -er} type, and, based on the above behaviors from complementation and subject flexibility, can be taken to contain at least VoicePs (cf. \citealt{bochnaketal2011}). I note moreover that they are in fact even larger, as there is preliminary evidence that aspectual suffixes are also permitted, as in (\ref{haninkalways}), which contains the progressive suffix {\itshape -giš}:%\footnote{See \citealt{bochnak2015sula} for evidence that this is a grammatical aspect morpheme.}
\ea \gll t'ánu da-báŋkuš-i{\itshape-giš} k'-é{ʔ}-i\\
person {\scshape 3.un}-tobacco-{\scshape attr}\textit{\textsc{-prog}} 3-be-{\scshape ind}\\
\glt `People are always smoking.' (Lit. `{ones who are continually with tobacco}')\label{haninkalways}
\z
I now turn to the predictions for restructuring. Beginning with functional restructuring, the prediction is that at least AspP/ModP must be projected for restructuring to obtain. We saw in (\ref{haninkalways}) that there is in fact evidence for an AspP layer in these nominalizations, leading to the prediction that functional restructuring should be possible. (\ref{haninknotsing}) shows that this prediction is borne out: functional restructuring with e.g., aspectual {\itshape -íwe} `stop' is permitted:
\ea Functional restructuring in subject nominalizations\\
{\gll t'-íšɨm\textit{-íwe}-yé:s\\
{\scshape 3.un}-sing-\textit{stop}-{\scshape neg}\\
\glt `one who doesn't stop singing'} \label{haninknotsing}
\z
The availability of functional restructuring follows straightforwardly from the fact that these nominalizations may contain functional layers such as AspP. This is schematized in \figref{haninksubjectfunctional} for the example in (\ref{haninknotsing}) (shown without negation):\footnote{Note that the presence of PossP in these structures is due to the fact that the prefix {\itshape t'-/d\textsuperscript{e}-} is not an invariant nominalizer, but in fact a form of possessor agreement that appears with covert third person possessors. I do not go into this any further here, but see \citet{hanink2020}.}
\begin{figure}
\caption{Functional restructuring in subject nominalizations\label{haninksubjectfunctional}}
\begin{tikzpicture}[baseline]
\tikzset{level distance=22pt,sibling distance=15pt}
\Tree [.PossP [.AspP \qroof{{\scshape pro} \itshape íšɨm}.VoiceP [.Asp\\{\itshape -íwe} ] ] [.Poss\\{\itshape t'-} ] ] \node (c) at (-4.5,-1.05) {{\itshape Height of nominalization $\rightarrow$}};
\end{tikzpicture}
\end{figure}
Turning to lexical restructuring, the prediction is specific to VoiceP. On the account presented in \sectref{haninksec:agent}, lexical restructuring requires agent sharing across Voice heads; the height of nominalization must therefore be at least VoiceP. We saw above that subject nominalizations do involve VoiceP as well as a projected subject, leading to the prediction that restructuring should be possible. This is again borne out, as demonstrated in (\ref{haninknoteat}) with the lexical verb {\itshape -gaʔlám} `like':
\ea Lexical restructuring in subject nominalizations\\
\gll t'-émlu\textit{-gaʔlám}-é:s\\
{\scshape 3.un}-eat\textit{-like}-{\scshape neg}\\
\glt `one who doesn't like to eat' \label{haninknoteat}\hfill Washo Archive
\z
\begin{figure}
\caption{Lexical restructuring in subject nominalizations\label{fig:haninksubjectstructure}}
\begin{tikzpicture}
\tikzset{level distance=22pt,sibling distance=15pt}
\tikzset{level 1/.style={sibling distance=-20pt}}
\tikzset{level 2/.style={sibling distance=-50pt}}
\tikzset{level 3/.style={sibling distance=-75pt}}
\tikzset{level 4/.style={sibling distance=15pt}}
\tikzset{level 5/.style={sibling distance=0pt}}
\tikzset{level 6/.style={sibling distance=0pt}}
\Tree [.PossP [.AspP [.VoiceP [.{\scshape pro} ] [.Voice$^\prime$ [.vP [.VP [.VoiceP \qroof{\itshape émlu}.vP [.Voice\\{i$\varphi$:\underline{val$_{\textsc{ag}}$}} ] ] [.V\\{\itshape -gáʔlam} ] ] [.v ] ] [.Voice\\{i$\varphi$: val$_{\textsc{ag}}$} ] ] ] [.Asp ]] [.Poss\\{\itshape t'-} ] ]
\node (c) at (-5.25,-1.05) {{\itshape Height of nominalization $\rightarrow$}};
\draw[semithick,dashed,*-*] (-3.35,-6.65)..controls +(south east:1.5) and +(south west:1)..(.75,-4.25);
\end{tikzpicture}
\end{figure}
Unlike functional restructuring, lexical restructuring relies on agent sharing. As the nominalization targets (at least) VoiceP, this is possible because the $\varphi$-features on embedded Voice can be valued by the higher Voice head (see \figref{fig:haninksubjectstructure}, cp. \figref{fig:haninkwarmupstructure}).
In sum, that thematic subject nominalizations in Washo support both functional and lexical restructuring is consistent with the fact that their structure is quite large. Note that if \citet{bakervinokurova2009} are correct that agent nominalizations contain only VP, then restructuring should not be possible in {\itshape -er}-nominals cross-linguistically, as higher functional heads will not be present, nor will agent sharing be possible. Restructuring thus provides a further diagnostic to distinguish between different types of subject nominalizations.
\subsection{Unexpressed theme nominalizations}
I now move on from subject nominalizations to a type of {\itshape object} nominalization in Washo, which I term {\itshape unexpressed theme nominalizations}. This class of nominalizations is characterized by the invariant nominalizing prefix {\itshape d-}, as in (\ref{haninkinternal}):
\ea Unexpressed theme nominalizations\label{haninkinternal}\\
\ea \gll {\itshape d-}íšɨm\\
\textit{\textsc{nmlz}}-sing\\
\glt `song'
\ex \gll {\itshape d-}á:muʔ\\
\textit{\textsc{nmlz}}-wear.dress\\
\glt `dress'
%\ex \gll {\itshape da-}háʔaš\\
%\textit{\textsc{nmlz}}-rain\\
%\glt `rain' %\hfill 5-23-19
\z
\z
This type of nominalization refers to an unexpressed internal argument (essentially a cognate object, cf. \citealt{barker1998} on -{\itshape ee} nominalizations), and can only apply to unergative verbs, not transitives or unaccusatives; Washo distinguishes between transitive/intransitive variants for several of these verbs (\ref{haninkeat}), even with object drop (\ref{haninkeating}), but only the intransitive form may be nominalized by {\itshape d-} (\ref{haninkfood}).
\ea Intransitive vs. transitive `eat'\label{haninkeat}
\ea \gll m-émlu-yi\\
2-eat.{\scshape in}-{\scshape ind}\\
\glt `You're eating.'
\ex \gll t'á:daš m-íʔw-i\\
meat 2/3-eat.{\scshape tr}-{\scshape ind}\\
\glt `You're eating meat.'
\ex \gll m-íʔw-i\\
2/3-eat.{\scshape tr}-{\scshape ind}\\
\glt `You're eating it.' \hfill \citep[149]{jacobsen1979} \label{haninkeating}
\z
\ex Nominalization of intransitive vs. transitive `eat'\label{haninkfood}
\ea \gll {\itshape d-}émlu\\
\textit{\textsc{nmlz}}-eat.{\scshape in}\\
\glt `food'
\ex[*]{\gll {\itshape d}-íʔw\\
\textit{\textsc{nmlz}}-eat.{\scshape tr}\\
\glt Intended: `food' }
\z
\z
It is crucial here that unexpressed theme nominalizations differ from subject nominalizations in that they are deficient in verbal structure and do not license overt arguments. With this in mind, one way of deriving the meaning for this nominalization type is to treat {\itshape d-} as a root-selecting nominalizer that also introduces a theme (\ref{haninkdenoted}). This would rule out categorization of transitive and unaccusative roots by {\itshape d-}, as they are lexically specified as having a theme and are therefore of type \textit{$\langle$e, $\langle$v, t$\rangle$$\rangle$}. The resulting meaning for the nominalization is then the set of individuals that are the themes of generic eating events, i.e., {\itshape food}.
\ea
\ea {\denote{$\sqrt{emlu}$}: $\lambda e_v[$eat$(e)]$}
\ex {\denote{\itshape d-}: $\lambda P_{\langle v,t\rangle}\lambda x_e.$Gen $e[P(e)$ \& {\scshape theme}$(x)(e)]$}\label{haninkdenoted}
\ex {\denote{\itshape d-} (\denote{$\sqrt{emlu}$}): $\lambda x_e.$Gen $e[$eat$(e)$ \& {\scshape theme}$(x)(e)]$}
\z
\z
\begin{sloppypar}
The treatment of {\itshape d-}nominalizations as root nominalizations rather than nominalizations of some verbal structure is further corroborated by Marantz's (\citeyear{marantz2001}) diagnostics distinguishing {\itshape root-cycle} vs. {\itshape outer-cycle} attachment. For example, merger with a root is not only consistent with idiosyncratic meanings (\ref{haninkwater}), but also implies that the resulting meaning depends on the semantics of the root itself, rather than on argument structure. Given that the argument structure of unergative verbs does not entail a syntactically projected internal argument, the semantics of this nominalization must be sensitive to the meaning of the root instead.
\end{sloppypar}
\ea \gll {\itshape d-}ímeʔ\\
\textit{\textsc{nmlz}}-drink\\
\glt `water' ({\itshape not} `(a) drink') \label{haninkwater}
\z
I therefore propose that the nominalizations in (\ref{haninkinternal}) have the structure in \figref{haninkdstructure}.
\begin{figure}
\caption{\label{haninkdstructure}Unexpressed theme nominalizations}
\begin{tikzpicture}[baseline]
\tikzset{level distance=22pt,sibling distance=15pt}
\Tree [.nP [.$\sqrt{\textit{emlu}}$ ] [.n\\{\itshape d-} ] ]
\node (c) at (-3.5,-1) {{\itshape Height of nominalization $\rightarrow$}};
\end{tikzpicture}
\end{figure}
Relevant for our purposes is that neither functional nor lexical restructuring is ever possible in this type of nominalization (\ref{haninknor}), unlike in the deverbal nominalizations described in the previous subsections. This fact follows immediately if {\itshape d-}nominalizations are root nominalizations, and therefore do not in fact project any verbal structure (\figref{haninkdstructure}) despite their superficially deverbal appearance.
\ea No restructuring in unexpressed theme nominalizations\label{haninknor}
\ea[*]{\gll d-émlu-gaʔlám\\
\textit{\textsc{nmlz}}-eat.{\scshape in}-like\\
\glt Intended: `food that is liked/wanted'}
\ex[*]{ \gll d-émlu-mámaʔ\\
\textit{\textsc{nmlz}}-eat.{\scshape in}-finish\\
\glt Intended: `finished food' }
\z
\z
To summarize, unexpressed theme nominalizations do not permit restructuring, a fact immediately predicted by their status as root nominalizations, which lack verbal structure altogether. While both subject and object nominalizations superficially appear to be deverbal, the availability of restructuring in the former but not the latter corroborates independently observed differences in the amount of structure they project.
\section{Other nominalizations in Washo} \label{haninksec:4}
We have seen in the previous section that subject nominalizations in Washo are large enough to allow for restructuring, while object nominalizations are not. Before concluding, I turn briefly to two further types of nominalizations in Washo~-- gerunds and instrumental nominalizations -- that lead to predictions about the availability of restructuring, but for which relevant data is lacking at this time.
\subsection{Gerunds}\largerpage
Gerunds in Washo, like subject nominalizations, lack TAM marking and do not make use of an overt nominalizer. Unlike subject nominalizations however, gerunds allow overt subjects and therefore show normal prefixal agreement, which I again treat as possessor agreement resulting from the presence of Poss (I return to this below).\footnote{Washo exhibits portmanteau agreement marking for subject/object (\citealt{jacobsen1964}), which in this case can be understood as possessor/possessum.} One environment in which gerunds occur is as the subject of the underspecified modal {\itshape éʔ} (\ref{haninkgive}), which is otherwise a copula (\citealt{bochnak2015wscla,bochnak2015nels}). Another is as the complement of certain verbs, e.g., `want' (\ref{haninksleep}).
\ea Gerunds \label{haninkgerunds}
\ea \gll $[$ hútiweʔ lem-íšɨl $]$ k'-éʔ-i\\
$[$ something 2/1-give $]$ 3-be-{\scshape ind}\\
\glt `You have to give me something.' \\ (Lit. `{Your giving me something is necessary}.') \label{haninkgive}
\ex \gll $[$ l-élšɨm $]$ di-gaʔlám-i\\
$[$ 1-sleep $]$ 1/3-want-{\scshape ind}\\
\glt `I want to sleep.' (Lit. `{I want my sleeping}.')\label{haninksleep}
\z
\z
Based on this distribution, I treat this construction as a type of -{\itshape ing} nominalization. Within the domain of {\itshape ing}-nominalizations, \citet{kratzer1996} distinguishes between `poss'-{\itshape ing} and `of'-{\itshape ing} constructions (see also \citealt{abney1987,alexiadou2005,harley2009}), which differ for example in whether the complement of the verb is introduced as a direct object (\ref{haninkposs}), or by the preposition {\itshape of} (\ref{haninkof}).
\ea -\textit{ing}-nominalizations
\ea We remember his building the barn. \label{haninkposs}
\ex His rebuilding of the barn took five months. \label{haninkof} \hfill \citep[126--127]{kratzer1996}
\z
\z
Kratzer argues that `poss'-{\itshape ing} nominalizations must include at least a VoiceP layer, as accusative case is licensed on the direct object. This is the case in Washo gerunds, as shown by the availability of the accusative pronoun {\itshape gé:} in (\ref{haninkeddy}):
\ea \gll Eddy ʔwáʔ ʔ-éʔ-é:s-i-š-ŋa $[$ {\itshape gé:} l-í:gi\ $]$ k'-éʔ-i\\
Eddy here 3-be-{\scshape neg-ind-ds}-but $[$ \textit{\textsc{3.pro.acc}} 1/3-see $]$ 3-be-{\scshape ind}\\
\glt `Eddy isn't here but I need to see him.' $[$=`My seeing him is necessary'$]$\label{haninkeddy}
\z
%Further, as in the case of subject nominalizations, further evidence for the presence of articulated argument structure comes from the presence of instrumental, agentive adjuncts as in (\ref{haninkhammer}).
%\ea \gll $[$ Adele {\itshape déʔek-lu} dákɨš $]$ l-émc'i-ha-yi\\
%$[$ Adele \textit{rock-\textsc{inst}} 3.hammer $]$ 3/1-wake.up-{\scshape caus-ind}\\
%\glt `Adele's hammering with a rock woke me up.'\label{haninkhammer}
%\z
Further, as with subject nominalizations, there is again evidence that AspP is also present in such structures, as suggested by examples such as in (\ref{haninkkeep}), which contains the progressive morpheme -{\itshape giš}:
\ea\gll ʔum-lóʔc'iw{\itshape-giš} k'-éʔ-i\\
2-run\textit{\textsc{-prog}} 3-be-{\scshape ind}\\
\glt `You need to keep running.' (Lit. `Your continuing to run is necessary.') \label{haninkkeep}
\z
Based on these characteristics, I adopt the structure in \figref{fig:haninkgerundstructure} for gerunds in Washo, building on \citet{kratzer1996}.\footnote{I assume again here that these nominalizations involve PossP, on the assumption that the agreement is in fact a form of agreement triggered by Poss, rather than T. Possessor agreement and verbal agreement are identical in almost all cases; I unfortunately do not have available the relevant data that might distinguish them. Note also that the case of the possessor is nominative/unmarked; the absence of case marking on the gerund's subject is therefore not surprising. See e.g., \citet{pires2007} for tests distinguishing clausal gerunds (treated as TPs) from poss-{\itshape ing} nominalizations (see also \citealt{chomsky1970,abney1987}). Fieldwork/research is ongoing.}
\begin{figure}
\caption{General schematic for gerunds in Washo\label{fig:haninkgerundstructure}}
\begin{tikzpicture}
\tikzset{level distance=22pt,sibling distance=10pt}
\tikzset{level 2/.style={sibling distance=0pt}}
\Tree [.PossP [.AspP [.VoiceP \qroof{Subject}.DP [.Voice$^\prime$ [.vP \qroof{...}.VP [.v\\{\scshape acc} ] ] [.Voice\\{i$\varphi$: val$_{\textsc{ag}}$} ] ] ] [.Asp ]] [.Poss ] ]
\node (c) at (-5.25,-1.05) {{\itshape Height of nominalization $\rightarrow$}};
\end{tikzpicture}
\end{figure}
The presence of AspP in the structure again predicts that functional restructuring should be possible in gerunds. This prediction is borne out, as shown with the aspectual suffixes `start' and `finish' in (\ref{haninkshould}--\ref{haninkfinishread}), respectively:
\ea Gerunds with restructuring \label{haninkgerundr}
\ea\gll $[$ mé:hu šáwlamhu wagay-áŋa\textit{-gáŋaʔ} $]$ k-éʔ-i\\
$[$ boy girl 3.talk-{\scshape appl}{\itshape-start} $]$ 3-be-{\scshape ind}\\
\glt `The boy should start talking to the girl.'\\(Lit. `The boy's starting to talk to the girl should be.') \label{haninkshould}
\ex \label{haninkfinishread}
\gll $[$ di-bamušéʔeš\textit{-mámaʔ} $]$ di-gaʔlám-i\\
$[$ 1-read\textit{-finish} $]$ 1/3-want-{\scshape ind}\\
\glt `I want to finish reading.' (Lit. `I want my finishing to read.')
\z
\z
Regarding lexical restructuring, the presence of VoiceP in gerunds likewise predicts agent sharing to be possible (barring semantic anomaly), leading to the availability of lexical restructuring in gerunds. I unfortunately do not have data to test this prediction at present, and so I must leave this question to future work.
\subsection{Instrumental nominalizations}
Another nominalization type for which restructuring remains to be tested is the instrumental nominalization, formed with the prefix {\itshape ʔit-} (\ref{haninkinst}). As demonstrated by the availability of direct objects (\ref{haninkfly}), the causative morpheme (\ref{haninkfly}--\ref{haninkrouge}), and reflexive marking (\ref{haninkrouge}), such nominalizations target at least VoiceP.
\ea Instrumental nominalizations\label{haninkinst}\\
\ea \gll pú:t'eʔ ʔit-yúli-ha\\
fly {\scshape inst}-to.die-{\scshape caus}\\
\glt `fly swatter' (Lit. `something to kill flies with') \hfill Washo Archive \label{haninkfly}
\ex \gll ʔit-gum-p'áʔlu-šóšoŋ-ha\\
{\scshape inst-refl}-on.cheeks-be.red-{\scshape caus}\\
\glt `rouge' (Lit. `something to make one's cheeks red with')\newline\phantom{x}\hfill Washo Archive \label{haninkrouge}
\z
\z
Due to the presence of VoiceP, it is predicted that lexical restructuring should be possible; functional restructuring is predicted to be allowed should it turn out that aspectual suffixes are also permitted. Here again I must test these predictions in future work. I note as well that an interesting case would be a type of nominalization with an intermediate size, smaller than VoiceP but larger than a root nominalization. I am unfortunately unaware of any such nominalizations in Washo, but this points to an open empirical question for cross-linguistic research.
\section{Conclusion}
%In this paper I have offered a preliminary overview of restructuring and nominlization in Washo, paying particular attention to the interaction between restructuring and nominalization size. Supporting the proposal put forward in \citet{wurmbrandshimamura2017}, I have shown that restructuring is only possible in a nominalization if that nominalization projects at least to the level of VoiceP. As a result, restructuring is possible in thematic subject nominalizations and gerunds, but not in unexpressed theme nominalizations.
Susi Wurmbrand's rich work over the years has opened the door to many fascinating questions about the way that restructuring manifests cross-linguistically. While I have only scratched the surface of this topic, I hope to have demonstrated that examining the interaction between restructuring and nominalization cross-linguistically is a useful tool for understanding both of these constructions.
\section*{Acknowledgments}
I would like to thank Adele James, Melba Rakow, and Ramona Dick\textsuperscript{†}, who have patiently worked with me over the years on the Washo language. I also thank Karlos Arregi, Andrew Koontz-Garboden, and the audience at GLOW 43 for helpful discussion of various aspects of the ideas presented here, as well as the two anonymous reviewers of this paper. All errors and shortcomings are my own.
\section*{Abbreviations}
\begin{tabularx}{.5\textwidth}{@{}lQ}
{\scshape acc} & accusative\\
{\scshape attr} & attributive\\
{\scshape appl} & applicative\\
{\scshape caus} & causative\\
{\scshape dep} & dependent mood\\
{\scshape ds} & different subject (switch reference)\\
{\scshape in} & intransitive \\
{\scshape incl} & inclusive\\
{\scshape ind} & independent mood\\
{\scshape inst} & instrumental nominalizer\\
{\scshape neg} & negation\\
\end{tabularx}%
\begin{tabularx}{.5\textwidth}{lQ@{}}
{\scshape nm} & clausal nominalizer\\
{\scshape nmlz} & nominalizer\\
{\scshape pl} & plural\\
{\scshape prog} & progressive\\
{\scshape prosp} & prospective aspect\\
{\scshape rec.pst} & recent past\\
{\scshape refl} & reflexive\\
{\scshape ss} & same subject\\
{\scshape stat} & static\\
{\scshape tr} & transitive\\
{\scshape un} & unexpressed possessor agreement\\
\end{tabularx}
{\sloppy\printbibliography[heading=subbibliography,notkeyword=this]}
\end{document}
\chapter{Case Study}
% \label{ch:relatedwork}
In this case study, we will discuss how energy consumption is affected if we were to use a containerized environment
rather than running an application natively. We will also briefly discuss how Lingua Franca, a polyglot coordination
language for distributed programming, may also contribute towards optimizing energy consumption \textemdash
albeit for only very specific use cases.
\section{Docker}
As discussed in Section II, Docker \cite{turnbull_2014} is a container framework which has gained massive
popularity and real-world application in the past few years. From applications to distributions, one can run
almost anything inside a containerized environment. Figure \ref{fig:dockerfile} displays a sample
Dockerfile, which is a script that configures the containerized environment. A Dockerfile can be customized
to one's liking, which is one of the reasons why Docker has such diverse applications. \\
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{Figs/dockerfile.png}
\end{center}
\caption{A sample Dockerfile}
\label{fig:dockerfile}
\end{figure}
For this study, we ran \texttt{TempSensClient} as a Docker container on the Raspberry Pi \textemdash we will call
it \texttt{TempSensClientDocker}. The only thing extra that
needed to be done was forwarding the port for socket communication when starting up the container. This is due to the
fact that Docker abstracts the networking layer so the container ports are different from the host ports. The
assumption for this study was that \texttt{TempSensClientDocker} would consume more energy than \texttt{TempSensClient}.
However, since we are primarily interested in the communication cost and the energy associated with it, we
compared the two applications on that basis. Figure \ref{fig:dockerenergy} shows the EPB for \texttt{TempSensClientDocker} for
each chunk size. \\
\begin{figure}
\begin{center}
\includegraphics[scale=0.23]{Figs/dockerenergy.png}
\end{center}
\caption{EPB for different chunk sizes (Chunk Size (x-axis) and Energy in microJ (y-axis)) - Raspberry Pi 4 (Docker)}
\label{fig:dockerenergy}
\end{figure}
This experiment shows that the optimal chunk size for \texttt{TempSensClientDocker} is $2^{15}$ unlike \texttt{TempSensClient}
that had an optimal chunk size of $2^{14}$. Although we expected some difference in energy consumption,
we did not anticipate a completely different optimal chunk size for the same application on
the same hardware configuration. Running an application inside a container adds an energy overhead but, on the
other hand, provides better maintainability and process isolation. However, the different optimal chunk size is not
a result of that. As discussed before, Docker abstracts the networking layer (among other things, such as the file
system), and in order for \texttt{TempSensClientDocker} to communicate with \texttt{TempSensServer} over
WiFi, we needed to forward the port that the application listens on. Recall that in order to compute
energy consumption, we need both power and time. The power factor here is static and the same as for \texttt{TempSensClient};
however, the abstraction and port forwarding add to the time factor. \texttt{TempSensClientDocker} therefore takes
slightly longer to communicate than \texttt{TempSensClient} and, as a result, has a different optimal chunk size. \\
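The relationship between power, time, and energy per byte can be made concrete with a small Python sketch. The power draw, bandwidth, and per-chunk overhead below are illustrative placeholders, not measured values from our setup:

```python
# Energy per byte (EPB) for a transfer: energy = power * time, EPB = energy / bytes.
# All constants are illustrative placeholders, not measurements.

def energy_per_byte(power_watts, transfer_time_s, total_bytes):
    """Return energy per byte in microjoules."""
    energy_joules = power_watts * transfer_time_s
    return energy_joules / total_bytes * 1e6  # J -> microJ

PAYLOAD = 2 ** 20          # 1 MiB payload
POWER = 3.5                # static power draw in watts (illustrative)
PER_CHUNK_OVERHEAD = 1e-4  # fixed per-chunk cost in seconds (illustrative)
BANDWIDTH = 10e6           # bytes per second (illustrative)

def transfer_time(chunk_size):
    """Total time: raw transfer time plus a fixed overhead per chunk."""
    chunks = PAYLOAD // chunk_size
    return PAYLOAD / BANDWIDTH + chunks * PER_CHUNK_OVERHEAD

epb = {2 ** k: energy_per_byte(POWER, transfer_time(2 ** k), PAYLOAD)
       for k in range(10, 17)}
best = min(epb, key=epb.get)
```

In this toy model the EPB decreases monotonically with chunk size, because the per-chunk overhead is the only competing effect; the measured curves in the figures are non-monotonic because buffering and protocol behaviour introduce further factors.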
One question we would like to ask is whether it is viable to use Docker to achieve energy efficiency. From
our experiment, it is clear that the optimal chunk size is different and the energy consumption corresponding
to that is higher than \texttt{TempSensClient} as well. We believe that for the sake of running a single application
in a containerized environment, it is not worth the trade-off of consuming more energy, since maintaining a
single application is a comparatively easy task. However, if multiple applications run inside the same container,
some of the overhead costs can be amortized, yielding a trade-off between maintainability and energy consumption
that depends on the use case. Still, the energy consumption when using Docker would remain greater than when running the
application on a bare-metal OS. This is because containers are based on virtualization,
and the I/O system calls that interact with the machine, as well as the abstractions (such as the network layer),
carry a specific overhead. This effect is also discussed by Santos et al. \cite{DBLP:journals/corr/abs-1011-0686} and explains why energy consumption
differs in a container environment.
\section{Lingua Franca}
Lingua Franca is a framework developed to provide a coordination language for distributed systems. There are a
lot of things it is capable of but for the sake of this study, we will focus on features that best relate to
our work. \\
One of the most interesting aspects is that, after writing an LF program, the tool generates efficient code
for the target programming language. Moreover, LF introduces the notions of logical and physical time. Logical time
is initialized with the system clock's value the moment the program starts, which is simply the physical time
at that point. However, logical time progresses differently from physical time: unlike physical time, logical
time does not advance during the execution of a reaction, which means that all reactions inside a specific reactor
are logically instantaneous. In other words, two reactions can run concurrently as long as variable values are handled properly.
Our assumption was that this could make the code execute faster (where possible), provided the necessary variables
had their values instantiated in time or the wait time was minimal. This would in turn
reduce energy consumption if the program took less execution time than usual (with the one-time overhead
of writing the LF equivalent and generating the application code). The code below \cite{lfsource} is an example of two
reactions running concurrently, where the second reaction prints the same result but its behaviour depends on the
first reaction's output. Put simply, if \texttt{out} is set before \texttt{out.is\_present} is called in the
second reaction, it doubles the output and prints 42. Otherwise, it prints 42 directly (from the \texttt{else} branch), since there is no logical
information available to determine the wait time in this example. This also showcases how non-determinism can be
avoided. But how much of this actually helps when the application in question is sequential
and fairly straightforward? And how does the efficient code generation impact energy consumption? \\
\begin{lstlisting}[language=Python]
reactor Source {
    output out;
    preamble {=
        import random
    =}
    reaction(startup) -> out {=
        # Set a seed for random number generation based on the current time.
        self.random.seed()
        # Randomly produce an output or not.
        if self.random.choice([0,1]) == 1:
            out.set(21)
    =}
    reaction(startup) -> out {=
        if out.is_present:
            out.set(2 * out.value)
        else:
            out.set(42)
    =}
}
\end{lstlisting}
We implemented \texttt{TempSens} purely as an LF application with \texttt{TempSensClientLingua} and
\texttt{TempSensServerLingua}. The EPB results were similar to Figure \ref{fig:epbpi} because the Python
source code generated by LF from the above versions was about 99\% identical to our original version. This
is due to the fact that our application was very simple in nature and also had none of the distributed elements
that LF could take advantage of. However, we implemented a basic matrix multiplication program in Python
utilizing parallelism via multi-threading, and then created an LF version for it (see Appendix A). The LF generated
version was 1.742 seconds slower than the original Python program. There are a number of reasons behind this.
First, LF has its own overhead because it keeps track of logical time, among other things. Second, true thread-level
parallelism is not possible for CPU-bound Python code, since CPython's global interpreter lock allows only one
thread to execute Python bytecode at a time; the only type of parallelism that can be obtained is via multi-processing.
And to the best of our knowledge, there is currently no LF construct for pooling and collecting data from multiple processes. \\
The motivation behind this study was to determine whether LF-generated programs are more energy-efficient.
From our study, we found that this is not the case for sequential programs with no distributed element
to them. \texttt{TempSensClientLingua} took more overall time for communication, and hence consumed more energy, than
\texttt{TempSensClient}, even though the communication cost was comparable. We believe that, since LF is designed for
distributed applications, it would generate more efficient code for such workloads. However, we also believe that
the communication cost itself would not be affected, as it is not a distributed programming construct; rather,
the overall program would potentially become more energy-efficient due to the ease of implementation and
understanding that LF provides. \\
We wanted to conduct a study for the Fall Detection application as well but were unable to do so due to
technical limitations. If the Fall Detection application were implemented as a distributed application,
we believe that the source code generated by LF would be more energy-efficient. This stems from the fact that
LF programs are highly specific in terms of functionality, without any code bloat, and
distributed concepts are easier to translate into them. This results in less ambiguity and redundancy in the generated
code; with fewer instructions to process, it would consume less energy than its natively written
counterpart. \\
We will discuss the technical limitations in detail in the next section where we will conclude our thesis.
\documentclass[./\jobname.tex]{subfiles}
\begin{document}
\chapter{State of the Art}
\label{chap:state_of_the_art}
This chapter provides an overview of the current state of the art in solving \gls{pde}s. Included are the widely used \gls{fem} as well as heuristic optimisation methods. Further, an introduction to the \gls{de} framework is given, which provides the basis for the algorithms described in this thesis.
\section{Finite Element Method}
Currently, the Finite Element Method is the go-to approach for solving partial differential equations. The domain $\Omega$ on which the \gls{pde} is posed is discretised into multiple smaller elements, as the name suggests. Thus, \gls{fem} belongs to the category of meshed methods. The underlying solution function $u(\mathbf{x})$ of the PDE is then approximated by so-called ``basis functions'' $\Phi(\mathbf{x})$ restricted to these finite elements. This thesis uses the open-source Netgen/NGSolve \gls{fem} package (\cite{schoberl_ngsolvengsolve_2020}).
The general steps taken to solve a PDE with an FEM solver are:
\begin{enumerate}
\item \underline{Step: Strong Form} \\
This is the standard formulation of the linear \gls{pde}. $\mathbf{L}$ and $\mathbf{B}$ are linear differential operators that include the derivatives. \\
\begin{equation}
\label{eq: strong form}
\begin{split}
\mathbf{x} \in \mathbb{R}^2 \\
u(\mathbf{x}), f(\mathbf{x}), g(\mathbf{x}): \Omega \rightarrow \mathbb{R} \\
\mathbf{L} u(\mathbf{x}) = f(\mathbf{x}) \text{ on $\Omega$} \\
\mathbf{B} u(\mathbf{x}) = g(\mathbf{x}) \text{ on $\partial \Omega$}
\end{split}
\end{equation}
Further, only Dirichlet boundary conditions are considered, thus the boundary operator is always the identity $\mathbf{B} = \mathbb{I}$. Therefore, the linear operator on the boundary $\mathbf{B}$ can be disregarded, resulting in \\
\begin{equation}
u(\mathbf{x})|_{\partial \Omega} = g(\mathbf{x}) .
\end{equation}
\item \underline{Step: Weak Form} \\
The next step is to reformulate the strong form into a usable weak form. This is equivalent to the strong form but written in integral notation. In this equation, $A$, $b$ and $c$ correspond to the constant factors of the derivatives in the strong form. For the sake of completeness, this is kept abstract; in the \gls{pde}s considered in this work, $A = \mathbb{I}$, $b=\mathbf{0}$ and $c = 0$. For now, the so-called test-function $v(\mathbf{x})$ is an arbitrary function, but it has to vanish on the boundary, $v(\mathbf{x})|_{\partial \Omega} = 0$. The choice of the test-function corresponds to different \gls{fem} types (\cite[p. 6f]{shen_spectral_2011}).\\
\begin{equation}
\label{eq: weak form}
\begin{split}
\underbrace{\int_{\Omega} - (\nabla^T A \nabla) u(\mathbf{x}) v(\mathbf{x}) dV - \int_{\Omega} b^T \nabla u(\mathbf{x}) v(\mathbf{x}) dV + \int_{\Omega} c u(\mathbf{x}) v(\mathbf{x}) dV}_{a(u,v)} \\ = \underbrace{\int_{\Omega} f(\mathbf{x}) v(\mathbf{x}) dV}_{F(v)}
\end{split}
\end{equation}
\item \underline{Step: Discretisation of $\Omega$} \\
Create a mesh of finite elements that span the whole domain. Usually these are triangles. Thus, this step is sometimes called ``triangulation''.
\item \underline{Step: Basis functions} \\
Choose a basis function $\Phi(\mathbf{x})$ that can be used to approximate the solution $u(\mathbf{x}) \approx u_{h}(\mathbf{x}) = \sum_{i = 1}^{N} u_i \Phi_i(\mathbf{x})$. A common choice are Lagrange or Chebyshev polynomials. In the Galerkin type \gls{fem}, the test-function $v(\mathbf{x})$ is chosen from the same space as the trial-function, thus $v(\mathbf{x}) = \sum_{j = 1}^{N} v_j \Phi_j(\mathbf{x})$. The choice of the basis function $\Phi(\mathbf{x})$ largely influences the computational effort. $\Phi(\mathbf{x})$ should have small support, to produce a sparsely populated matrix $\mathbf{A}$ in the linear system of equations \eqref{eq:linear_system_of_equations} below.
\item \underline{Step: Solution} \\
In the weak form, as seen in equation \eqref{eq: weak form}, $a(u,v)$ is a continuous bilinear form and $F(v)$ is a continuous linear functional. Substituting $u$ and $v$ with their corresponding approximation from \mbox{step 4} results in
\begin{equation}
\sum_{j=1}^{N} v_j \sum_{i=1}^{N} u_i a(\Phi_i, \Phi_j) = \sum_{j=1}^{N} v_j F(\Phi_j).
\end{equation}
Dividing by the $v_j$ values on both sides results in a linear system of equations, where the constant factors $u_i$ need to be determined.
\begin{equation}
\label{eq:linear_system_of_equations}
\underbrace{\sum_{i=1}^{N} u_i a(\Phi_i, \Phi_j)}_{\mathbf{A u}} = \underbrace{F(\Phi_j)}_{\mathbf{b}} \text{ for $j=1,\dots,N$}
\end{equation}
\end{enumerate}
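As a minimal illustration of steps 3--5, the following Python sketch assembles and solves the 1D Poisson problem $-u''(x) = 1$ on $(0,1)$ with homogeneous Dirichlet boundaries, using linear hat functions on a uniform mesh. This is a toy stand-in for what NGSolve does in higher dimensions, not NGSolve code:

```python
# 1D linear FEM for -u'' = 1 on (0,1) with u(0) = u(1) = 0.
# Hat basis functions on a uniform mesh give the tridiagonal system
# (1/h) * (2 u_i - u_{i-1} - u_{i+1}) = h, solved with the Thomas algorithm.

def solve_poisson_1d(n_interior):
    h = 1.0 / (n_interior + 1)
    # Tridiagonal stiffness matrix entries and load vector.
    diag = [2.0 / h] * n_interior
    off = [-1.0 / h] * (n_interior - 1)  # sub- and superdiagonal are equal
    b = [h * 1.0] * n_interior           # integral of f = 1 against each hat

    # Thomas algorithm: forward elimination, then back substitution.
    c = off[:]  # superdiagonal (kept unmodified)
    d = b[:]
    for i in range(1, n_interior):
        m = off[i - 1] / diag[i - 1]
        diag[i] -= m * c[i - 1]
        d[i] -= m * d[i - 1]
    u = [0.0] * n_interior
    u[-1] = d[-1] / diag[-1]
    for i in range(n_interior - 2, -1, -1):
        u[i] = (d[i] - c[i] * u[i + 1]) / diag[i]
    return u  # nodal values at x_i = (i+1)*h
```

For this particular problem, linear FEM is nodally exact: the computed values coincide with the analytical solution $u(x) = x(1-x)/2$ at the mesh nodes.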
Modern solvers include more complex and advanced techniques to further improve the solution error and the computation time. Some of the most important concepts that are also available in NGSolve are listed here.
\begin{itemize}
\item \underline{Static Condensation}: \\
Depending on the number of discrete elements, the $\mathbf{A}$ matrix can be very large. Inverting large matrices is very time consuming. Static condensation, also called Guyan reduction (\cite{guyan_reduction_1965}), reduces this dimensionality by exploiting the structure of $\mathbf{A}$.
\item \underline{Preconditioner}: \\
Instead of computing $\mathbf{A}^{-1}$ exactly, it can be approximated by a matrix that is similar to $\mathbf{A}^{-1}$; the actual inverse is then approached iteratively. NGSolve implements multiple different preconditioners and even allows users to create their own.
\item \underline{Adaptive Mesh Refinement}: \\
The accuracy of a FEM-approximated solution mainly depends on the density of the mesh. Typically, finer meshes tend to produce more accurate solutions, but the computation time is longer. This trade-off can be overcome by a self-adaptive mesh. NGSolve implements that in an adaptive loop that executes:
\begin{itemize}
\item Solve PDE (with coarse mesh)
\item Estimate Error (for every element)
\item Mark Elements (that have the greatest error)
\item Refine Elements (that were previously marked)
\item Repeat until degrees of freedom exceed a specified $N$
\end{itemize}
\end{itemize}
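The adaptive loop above can be sketched generically in Python. Here the 1D ``mesh'' is simply a list of intervals, the error estimator is a hypothetical stand-in (a real solver would derive it from the computed solution on each element), and marked elements are bisected until the element budget $N$ is reached:

```python
# Generic mark-and-refine loop on a 1D mesh of intervals.
# The error estimator is a placeholder that concentrates error near x = 0;
# a real FEM solver would estimate it from the computed PDE solution.

def estimate_error(a, b):
    """Hypothetical per-element error indicator: larger for wide elements near x = 0."""
    return (b - a) ** 2 * (1.0 / (0.01 + a))

def adapt(max_elements):
    elements = [(0.0, 1.0)]
    while len(elements) < max_elements:
        # Estimate error for every element, then mark the worst 25 percent.
        ranked = sorted(elements, key=lambda e: estimate_error(*e), reverse=True)
        n_mark = max(1, len(elements) // 4)
        marked = set(ranked[:n_mark])
        # Refine marked elements by bisection; keep the rest unchanged.
        refined = []
        for (a, b) in elements:
            if (a, b) in marked:
                mid = 0.5 * (a + b)
                refined.extend([(a, mid), (mid, b)])
            else:
                refined.append((a, b))
        elements = refined
    return elements

mesh = adapt(32)
```

Because the placeholder indicator peaks at the left boundary, the refined mesh clusters small elements near $x = 0$ while the rest of the domain stays coarse, mirroring how a real estimator concentrates effort where the solution error is largest.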
\section{Computational Intelligence Methods}
\label{chap:literature_overview}
The research community interested in computational intelligence solvers for differential equations has been steadily growing over the past 20 years. This chapter summarises the most important work done in the general field of developing and applying such statistical numerical solvers. Table \ref{tab:literature_research} gives a brief overview of these papers and orders them chronologically.
In general, all of these papers from the table use the \gls{wrm}, or some variant of that concept, to transform their differential equation into an optimisation problem. This serves as the fitness function and is necessary to evaluate a possible candidate solution and perform the evolutionary selection. The fitness function is the function to be optimised. It is also called objective function and these terms are used interchangeably in this thesis. In short, the residual $R$ is defined through the differential equation itself and can be calculated by $R(u(\mathbf{x})) = \mathbf{L}u(\mathbf{x}) - f(\mathbf{x})$. The residual can be thought of as a functional that substitutes $u(\mathbf{x})$ with an approximate solution $u_{apx}(\mathbf{x})$ and returns a numerical score. The \gls{wrm} method is further described in chapter \ref{chap:opt_problem}.
\cite{howard_genetic_2001} is one of the first advances in this field. They approximate a subset of the convection-diffusion equations with \gls{gp} (\cite{koza_genetic_1992}). Their main idea is to use a polynomial of variable length as the candidate solution that is forced to satisfy the boundary condition. Their fitness value, as seen in equation \eqref{eq:howard_fitness_2001}, is calculated by squaring the residual $R$ and integrating it over the domain. Since the polynomials are known, and the problems are restricted to a specific differential equation, the integral can be evaluated analytically.
\begin{equation}
\label{eq:howard_fitness_2001}
F(u_{apx}(\mathbf{x})) = -\int_{\Omega} R(u_{apx}(\mathbf{x}))^2 dx
\end{equation}
\cite{kirstukas_hybrid_2005} proposes a three-step procedure. The first step is time consuming and employs \gls{gp} techniques to find basis functions that span the solution space. The second step is faster and uses a Gram–Schmidt algorithm to compute the basis function multipliers to develop a complete solution for a given set of boundary conditions. Using linear solver methods, a set of coefficients is found that produces a single function that both satisfies the differential equation and the boundary or initial conditions at distinct points over the domain. These points are further called collocation points.
\cite{tsoulos_solving_2006} use \gls{ge} (\cite{ryan_grammatical_1998}) to find solutions to various differential equations. In contrast to \gls{gp}, \gls{ge} uses vectors instead of trees to represent the candidate. The solution is evaluated as an analytical string, constructed from the functions $\sin$, $\cos$, $\exp$ and $\log$, as well as all digits and the four basic arithmetic operations. Because the \gls{ge} step could result in virtually any function, the fitness integral from equation \eqref{eq:howard_fitness_2001} can not be calculated analytically. Thus, the integral is approximated by evaluating the residual at collocation points within the domain, as seen in equation \eqref{eq:fit_func_tsoulos}. The algorithm was tested on multiple \gls{ode} problems, systems of ODEs and \gls{pde}s. Only the results for ODEs were promising.
\begin{equation}
\label{eq:fit_func_tsoulos}
F(u_{apx}(\mathbf{x})) = \sum_{i=1}^{n_C} ||R(u_{apx}(\mathbf{x}_i))||^2
\end{equation}
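As a concrete example of such a collocation-based fitness function, the sketch below evaluates the summed squared residual of a candidate for the ODE $u'(x) = u(x)$, $u(0) = 1$, at a set of collocation points; the derivative is taken by a central finite difference, and the boundary condition is added as a penalty term. The test ODE and all names are chosen for illustration only, not taken from the cited papers:

```python
# Collocation fitness in the spirit of the equation above:
# F(u_apx) = sum_i ||R(u_apx(x_i))||^2 with R(u) = u' - u for the ODE u' = u.
# Illustrative sketch; the ODE and helper names are not from the cited papers.
import math

def residual(u, x, eps=1e-6):
    """R(u)(x) = u'(x) - u(x), derivative via central finite difference."""
    du = (u(x + eps) - u(x - eps)) / (2 * eps)
    return du - u(x)

def fitness(u, collocation_points, boundary_penalty=100.0):
    """Sum of squared residuals plus a penalty enforcing u(0) = 1."""
    inner = sum(residual(u, x) ** 2 for x in collocation_points)
    boundary = (u(0.0) - 1.0) ** 2
    return inner + boundary_penalty * boundary

points = [0.1 * i for i in range(1, 10)]
exact = fitness(math.exp, points)          # near zero: exp solves u' = u, u(0) = 1
bad = fitness(lambda x: 1.0 + x, points)   # a clearly worse candidate
```

An evolutionary algorithm would minimise this score over a parameterised family of candidate functions; the exact solution drives it to (numerically) zero.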
\cite{mastorakis_unstable_2006} couples a \gls{ga} (\cite{holland_outline_1962}) with a \gls{ds} method (\cite{nelder_simplex_1965}) for local solution refinement. The candidates are represented as polynomials of order 5 whose coefficients are optimised. The boundary condition is directly incorporated into the candidate, thus simplifying the objective function to equation \eqref{eq:fit_func_tsoulos}. The focus here is on unstable ODEs that can not be solved with finite difference methods.
\cite{sobester_genetic_2008} tried a radically different approach to incorporating the boundary condition into the solution. They found that using \gls{gp} for the inner domain is only effective if the algorithm does not have to consider the boundary. They split the solution $u_{apx}(\mathbf{x})$ into two parts, where $u_{GP}(\mathbf{x})$ represents the solution for the inner domain and $u_{RBF}(\mathbf{x})$ ensures the boundary condition
\begin{equation}
\label{eq:solution_sobester}
u(\mathbf{x})_{apx} = u_{GP}(\mathbf{x}) + u(\mathbf{x})_{RBF}.
\end{equation}
At first, the \gls{gp} step produced a trial solution according to the objective function \eqref{eq:fit_func_tsoulos}. After the \gls{gp} procedure, a linear combination of radial basis functions $u(\mathbf{x})_{RBF} = \sum_{j=1}^{n_B} \alpha_j \Phi (||\mathbf{x}-\mathbf{x}_{j}||)$ is specifically tailored to $u_{GP}(\mathbf{x})$ that ensures the boundary condition at all $\mathbf{x}_{j}$ points on $\partial \Omega$. Finding the parameters $\alpha_j$ can be formulated as a least squares problem.
\cite{howard_genetic_2011} use a \gls{gp} scheme to find the solution to a specific set of simplified convection-diffusion equations. They represent a candidate as discrete function value points over the domain. The function between these points is interpolated. The fitness function is similar to equation \eqref{eq:fit_func_tsoulos} with the exception that the $n_C$ points are not predetermined. These points are sampled randomly in the domain, thus allowing the algorithm to approximate the solution aside from fixed base points.
\cite{chaquet_solving_2012} use a simple self-adaptive \gls{es} (as developed by \cite{schwefel_evolutionsstrategien_1977} and \cite{rechenberg_evolutionsstrategien_1978}) to evolve the coefficients of a partial Fourier series. The fitness function is expressed in equation \eqref{eq:fit_func_chaquet_2012}. This is similar to the fitness function \eqref{eq:fit_func_tsoulos}, but it extends the definition of the boundary to also include Neumann conditions by introducing the linear differential operator $\mathbf{B}$. The limit $n_C$ denotes the number of inner collocation points $\mathbf{x}_i$ within the domain $\Omega$, whereas $n_B$ is the number of discrete points $\mathbf{x}_j$ on the boundary $\partial \Omega$. Further, a penalty factor $\phi$ shifts the focus of the fitness to the boundary. Additionally, this objective function can also represent systems of differential equations, where the number of equations is denoted by $m$. To reduce the search dimension (given by the number of harmonics), they developed a scheme that optimises only one harmonic at a time and freezes the other coefficients. This scheme is based on the often observed principle that lower frequencies are more important in reconstructing a signal than higher ones. Although this concept might not be valid for all possible functions, it worked on all differential equations of their testbed.
\begin{equation}
\label{eq:fit_func_chaquet_2012}
F(u_{apx}(\mathbf{x})) = \frac{\sum_{i=1}^{n_C} || \mathbf{L}u_{apx}(\mathbf{x}_i) - f(\mathbf{x}_i)||^2 + \phi \sum_{j=1}^{n_B} || \mathbf{B}u_{apx}(\mathbf{x}_j) - g(\mathbf{x}_j)||^2}{m (n_C + n_B)}
\end{equation}
\cite{babaei_general_2013} takes a similar approach. They approximate a solution using a partial Fourier series. The optimal parameters for the candidates are found using a \gls{pso} algorithm (\cite{kennedy_particle_1995}). The fitness function consists of two parts, one for the inner area (equation \eqref{eq:inner_WRF}) and one for the boundary (equation \eqref{eq:boundary_penalty}). These are added together resulting in equation \eqref{eq:inner_and_boundary_fitness}.
The weighted residual integral WRF is exactly the formulation of the \gls{wrm} from chapter \ref{chap:opt_problem}. $W$ is an arbitrary weighting function. The absolute values of $W$ and $R$ ensure that only positive values count towards the fitness. Instead of using a sum over collocation points, the integral is evaluated using a numerical integration scheme.
\begin{equation}
\label{eq:inner_WRF}
WRF(u_{apx}(\mathbf{x})) = \int_{\Omega} |W(\mathbf{x})| |R(u_{apx}(\mathbf{x}))| dx
\end{equation}
The boundary condition is incorporated by summing up its normed violations at distinct points $\mathbf{x}_j$. The $K_j$ are penalty multipliers that shift the focus to different points of the boundary. The concept of this penalty function originates from \cite{rajeev_discrete_1992}.
\begin{equation}
\label{eq:boundary_penalty}
PFV(u_{apx}(\mathbf{x})) = WRF(u_{apx}(\mathbf{x})) \cdot \sum_{j=1}^{n_B} K_j \left(\frac{u_{apx}(\mathbf{x}_j)}{g(\mathbf{x}_j)} - 1\right)
\end{equation}
\begin{equation}
\label{eq:inner_and_boundary_fitness}
F(u_{apx}(\mathbf{x})) = WRF(u_{apx}(\mathbf{x})) + PFV(u_{apx}(\mathbf{x}))
\end{equation}
\cite{panagant_solving_2014} use polynomials as candidate representation. They do not specify the order or the type of the polynomial. They test five different simple variants of the optimisation algorithm \gls{de} (\cite{storn_differential_1997}). Further, they introduce a so-called DE-New that increases the population size after every generation. Their proposition is that larger population sizes are better at finding good solutions.
\cite{sadollah_metaheuristic_2017} compares three different optimisation algorithms to approximate differential equations: \gls{pso}, \gls{hs} (\cite{geem_new_2001}) and \gls{wca} (\cite{eskandar_water_2012}). They use the formulation in equation \eqref{eq:inner_WRF}, where the weighting function is the same as the residual $|W(\mathbf{x})| = |R(u_{apx}(\mathbf{x}))| \rightarrow WRF = \int_{\Omega} |R(u_{apx}(\mathbf{x}))|^2 dx$. The integral is again approximated using a numerical integration scheme. They find that the \gls{pso} is slightly better at producing low error solutions, however \gls{wca} is better at satisfying the boundary condition.
In their paper \cite{chaquet_using_2019} describe an algorithm that approximates a solution with a linear combination of Gaussian \gls{rbf} as kernels:
\begin{equation}
u_{apx}(\mathbf{x}) = \sum_{i=1}^{N} \omega_i e^{\gamma_i \left\|\mathbf{x} - \mathbf{c}_i\right\|^2}
\end{equation}
The approximated function $u_{apx}(\mathbf{x})$ can be fully determined by a finite number of parameters: $\omega_i, \gamma_i, \mathbf{c}_i$. These are stacked together into a vector $\mathbf{p}_{apx}$ and called the decision variables, which are optimised by the algorithm.
The objective function can be seen in equation \eqref{eq:fit_func_chaquet}. This is an update of the objective function in equation \eqref{eq:fit_func_chaquet_2012}, where the inner collocation points are additionally scaled by a weighting function $\xi(\mathbf{x}_i)$.
\begin{equation}
\label{eq:fit_func_chaquet}
F(u_{apx}(\mathbf{x})) = \frac{\sum_{i=1}^{n_C} \xi (\mathbf{x}_i) || \mathbf{L}u_{apx}(\mathbf{x}_i) - f(\mathbf{x}_i)||^2 + \phi \sum_{j=1}^{n_B} || \mathbf{B}u_{apx}(\mathbf{x}_j) - g(\mathbf{x}_j)||^2}{m (n_C + n_B)}
\end{equation}
The multipliers $\xi(\mathbf{x}_i)$ and $\phi$ are weighting factors for either the inner or the boundary term. The whole term is normalised with the number of collocation points.
The parameters of the kernels are determined via a \gls{cma_es} (\cite{hansen_reducing_2003}). To further improve the solution, the evolutionary algorithm is coupled with a \gls{ds} method to carry out the local search. The authors show empirically that the local search significantly improves the performance by testing the algorithm on a set of 32 differential equations.
\cite{fateh_differential_2019} use a simple variant of \gls{de} where candidates are represented as discrete function value points within the domain. The function values between the grid points are linearly interpolated. This is a radical brute-force approach that results in a massive search space dimension. Yet, the main advantage is that the solution is not limited to a decomposition into kernel functions and thus even non-smooth functions can be approximated. Since this approach does not produce an analytical solution, the differential equation and the boundary condition are incorporated into the fitness function by taking the sum of squared residuals at every grid point, as seen in equation \eqref{eq:fit_fateh}. The derivatives within the residual are calculated between two neighbouring points by the difference quotient.
\begin{equation}
\label{eq:fit_fateh}
F(\mathbf{x}) = \sqrt{\sum_{i=0}^{n} R(\mathbf{x}_i)^2}
\end{equation}
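A minimal sketch of this grid-based fitness, assuming the test problem $u''(x) + u(x) = 0$ with $u(0) = 0$, $u(\pi/2) = 1$ (exact solution $\sin x$), with the boundary violations treated as two additional residuals:

```python
import numpy as np

def grid_fitness(u, h, g0, g1):
    """Grid fitness sqrt(sum_i R(x_i)^2) for the assumed test problem
    u''(x) + u(x) = 0 with u(a) = g0, u(b) = g1."""
    # interior residuals via the central difference quotient for u''
    r_inner = (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2 + u[1:-1]
    # boundary violations enter as two additional residuals
    r_bnd = np.array([u[0] - g0, u[-1] - g1])
    return np.sqrt(np.sum(r_inner**2) + np.sum(r_bnd**2))

x = np.linspace(0.0, np.pi / 2, 41)
h = x[1] - x[0]
exact = grid_fitness(np.sin(x), h, 0.0, 1.0)   # exact solution sin(x)
wrong = grid_fitness(x**2, h, 0.0, 1.0)        # an arbitrary candidate
assert exact < wrong
```

The exact solution scores close to zero (up to the $O(h^2)$ discretisation error of the difference quotient), while the arbitrary candidate is heavily penalised.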
\begin{table}[H]
\centering
\noindent\adjustbox{max width=\linewidth}{
\begin{tabular}{|c|c|c|c|}
\hline
\rowcolor[HTML]{\farbeTabA}
Paper & Algorithm & Representation & Problems \\ \hline
\multilinecell{\cite{howard_genetic_2001}} & \multilinecell{\gls{gp}} & \multilinecell{polynomial of \\ arbitrary length} & \multilinecell{one-dimensional \\ steady-state \\ model of \\ convection-diffusion \\ equation} \\ \hline
\multilinecell{\cite{kirstukas_hybrid_2005}} & \multilinecell{\gls{gp}} & \multilinecell{algebraic \\ expression} & \multilinecell{heating of thin rod \\ heating by current} \\ \hline
\multilinecell{\cite{tsoulos_solving_2006}} & \multilinecell{\gls{ge}} & \multilinecell{algebraic term} & \multilinecell{set of ODEs \\ system of ODEs \\ and PDEs} \\ \hline
\multilinecell{\cite{mastorakis_unstable_2006}} & \multilinecell{\gls{ga}\\(global); \\ \gls{ds}\\(local)} & \multilinecell{5th order \\ polynomial}& \multilinecell{unstable \\ ODEs} \\ \hline
\multilinecell{\cite{sobester_genetic_2008}} & \multilinecell{\gls{gp} \\ and \\ RBF-NN} & \multilinecell{algebraic term \\ for inner; \\ RBF for boundary} & elliptic PDEs \\ \hline
\multilinecell{\cite{howard_genetic_2011}} & \multilinecell{\gls{gp}} & function value grid & \multilinecell{convection–diffusion \\ equation \\ at different \\ Peclet numbers } \\ \hline
\multilinecell{\cite{chaquet_solving_2012}} & \multilinecell{\gls{es}} & \multilinecell{partial sum \\ of Fourier series} & \multilinecell{testbench of \\ ODEs \\ system of ODEs \\ and PDEs} \\ \hline
\multilinecell{\cite{babaei_general_2013}} & \multilinecell{\gls{pso}} & \multilinecell{partial sum\\of Fourier series} & \multilinecell{integro-differential equation\\system of linear ODEs \\ Brachistochrone \\ nonlinear Bernoulli} \\ \hline
\multilinecell{\cite{panagant_solving_2014}} & \multilinecell{\gls{de}} & \multilinecell{polynomial of \\ unspecified order} & \multilinecell{set of 6 \\ different PDEs} \\ \hline
\multilinecell{\cite{sadollah_metaheuristic_2017}} & \multilinecell{\gls{pso}\\\gls{hs}\\\gls{wca}} & \multilinecell{partial sum\\of Fourier series} & \multilinecell{singular BVP} \\ \hline
\multilinecell{\cite{chaquet_using_2019}} & \multilinecell{\gls{cma_es}\\(global); \\ \gls{ds}\\(local)} & \multilinecell{linear combination \\ of Gaussian kernels} & \multilinecell{testbench of \\ ODEs \\ system of ODEs \\ and PDEs}\\ \hline
\multilinecell{\cite{fateh_differential_2019}} & \multilinecell{\gls{de}} & \multilinecell{function value\\grid} & elliptic PDEs \\ \hline
\end{tabular}
}
\unterschrift{Literature research on the general topic of stochastic solvers and their applications. The papers are sorted by date of release.}{}{}
\label{tab:literature_research}
\end{table}
\section{Differential Evolution}
The differential evolution framework was first introduced in \cite{storn_differential_1997}. Due to its simple and flexible structure, it quickly became one of the most successful evolutionary algorithms. Over the years, several adaptations to the original framework have been proposed, and some of them are currently among the best-performing algorithms, as the 100-Digit Challenge at GECCO 2019 (\cite{suganthan_suganthancec2019_2020}) shows.
The main \gls{de} framework consists of three necessary steps that continuously update a population of possible solutions. The population can be interpreted as a matrix, where each row vector $\mathbf{x}_i$, also called an individual, represents a point within the search domain and has a fitness value given by the fitness function $f: \mathbb{R}^n \rightarrow \mathbb{R}$. The goal is to minimise the fitness function. These steps are performed in a loop until a predefined termination condition is reached. Each individual step is controlled by a user-defined parameter:
\begin{itemize}
\item \underline{Mutation}: \\
Mutation strength parameter F;\\
The mutation uses the information from within the population to create a trial vector $v_i$. This is done by scaling the difference between some vectors in the population - hence the name \textit{differential} evolution. The \textit{current-to-pbest/1} mutation operator can be seen in equation \eqref{eq:mut_rand_1}, where $x_i$ is the current individual, $x_{best}^p$ is a random vector among the top p\% of the population, $x_{r1}$ is a random vector from the population, and $\tilde{x}_{r2}$ is randomly chosen from the union of the population and the archive. $x_{r1}$ and $\tilde{x}_{r2}$ must not be the same individual.
\begin{equation}
\label{eq:mut_rand_1}
v_i = x_{i} + F_i(x_{best}^p - x_{i}) + F_i(x_{r1} - \tilde{x}_{r2})
\end{equation}
\item \underline{Crossover}: \\
Crossover probability parameter CR;\\
The crossover procedure randomly mixes the information between the trial vector $v_i$ and a random candidate from the population $x_{i}$ to create a new trial vector $u_i$. The binomial crossover from equation \eqref{eq:crs_bin} randomly takes elements from both vectors, where $K$ is a random index to ensure that at least one element from the trial vector $v_i$ is taken.
\begin{equation}
\label{eq:crs_bin}
u_{ij}=\begin{cases}
v_{ij}, &\text{if $j = K \lor rand[0,1] \leq CR$}\\
x_{ij}, &\text{otherwise}
\end{cases}
\end{equation}
\item \underline{Selection}: \\
Population size N;\\
The selection replaces the old candidate $x_i$ if the trial candidate $u_i$ is better as measured by the fitness function. This is performed for every individual in the population, then the next generation is started.
\end{itemize}
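The three steps above can be sketched as follows. The external archive of \textit{current-to-pbest/1} is omitted for brevity, and the fixed parameter values, the sphere test function, and the population size are assumptions of this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def de_generation(pop, fit, f, F=0.5, CR=0.9, p=0.2):
    """One DE generation: current-to-pbest/1 mutation, binomial
    crossover, and greedy selection (external archive omitted)."""
    N, n = pop.shape
    pbest = np.argsort(fit)[:max(1, int(p * N))]   # top p% of the population
    for i in range(N):
        # mutation: v = x_i + F (x_pbest - x_i) + F (x_r1 - x_r2)
        others = [j for j in range(N) if j != i]
        r1, r2 = rng.choice(others, size=2, replace=False)
        v = pop[i] + F * (pop[rng.choice(pbest)] - pop[i]) \
                   + F * (pop[r1] - pop[r2])
        # binomial crossover; index K guarantees one element from v
        K = rng.integers(n)
        mask = rng.random(n) <= CR
        mask[K] = True
        u = np.where(mask, v, pop[i])
        # selection: the trial vector replaces x_i only if it is better
        fu = f(u)
        if fu <= fit[i]:
            pop[i], fit[i] = u, fu
    return pop, fit

sphere = lambda x: float(np.sum(x**2))   # assumed test function
pop = rng.uniform(-5.0, 5.0, size=(30, 5))
fit = np.array([sphere(x) for x in pop])
for _ in range(200):
    pop, fit = de_generation(pop, fit, sphere)
```

With self-adaptation as in JADE or SHADE, F and CR would additionally be resampled per individual from a learned distribution rather than held fixed.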
In modern \gls{de} variants, these parameters are self-adapted during the evolutionary process. This means that the algorithms can balance out between exploration of the search-space and exploitation of promising locations.
A prominent example of a modern \gls{de} with self-adaptation is JADE, which was developed by \cite{zhang_jade_2009}. The adaptation is performed by taking successful F and CR values of the last generation into account. If a certain setting is successful in generating better candidates, newly selected F and CR values gravitate towards that setting. The pseudocode is presented in appendix \ref{chap:pscode_jade}.
This idea was later refined by \cite{tanabe_success-history_2013}. They propose a similar self-adaptive scheme but extend the ``memory'' for good F and CR parameters over multiple generations. This improves the robustness compared to JADE. The pseudocode in appendix \ref{chap:pscode_shade} shows the outline of this so-called SHADE algorithm.
The latest iteration of SHADE is called L-SHADE (\cite{tanabe_improving_2014}), which improves the performance by including a deterministic adaptive concept for the population size. L-SHADE starts with a large population size and reduces the number of individuals in a linear fashion by deleting bad candidates. This reduces the number of unnecessary function evaluations. The code is displayed in appendix \ref{chap:pscode_lshade}.
\end{document}
\chapter{Environment}
\label{chap-environment}
\section{Introduction}
Translating a \commonlisp{} program from source code into an abstract
syntax tree is done in constant interaction with an
\emph{environment}. The \hs{} stipulates%
\footnote{See Section 3.2.1 of the \hs{}.}
that there are four different environments that are relevant to
compilation:
\begin{itemize}
\item The \emph{startup environment}. This environment is the global
environment of the \commonlisp{} system when the compiler was
invoked.
\item The \emph{compilation environment}. This environment is the
local environment in which forms are compiled. It is also the
environment that is passed to macro expanders.
\item The \emph{evaluation environment}. According to the \hs{}, this
environment is ``a run-time environment in which macro expanders and
code specified by \texttt{eval-when} to be evaluated are
evaluated''.
\item The \emph{run-time environment}. This environment is used when
the resulting compiled program is executed.
\end{itemize}
The \hs{} does not specify how environments are represented, and there
is no specified protocol for manipulating environments. As a result,
each implementation has its own representation and its own protocols.
\sysname{} uses an external library called \emph{Trucler} for
representing the compile-time environment.
When \sysname{} is asked to convert a
form to an abstract syntax tree, client code must supply an object
that represents the startup environment. During the conversion
process, \sysname{} will call the functions documented in
??? to augment the startup environment
with information introduced by binding forms to create an augmented
compilation environment. To determine the meaning of the program
elements in the form to be converted, \sysname{} will call the
functions documented in ???.
Client code must supply methods on the functions in
??? to augment the environment and
return the resulting environment. Client code must also supply
methods on the functions in ??? that
will query the environment (whether the startup environment or the
augmented environment) and return the relevant information to
\sysname{}.
It might seem that \sysname{} could represent the \emph{local part} of
the environment (i.e., the part of the environment that is temporarily
introduced when nested forms are compiled) in whatever way it pleases,
but this is not the case. The reason is that the full environment
must be passed as an argument to macro expanders that are defined in
the startup environment, and those macro expanders are implementation
specific. It is also not possible for \sysname{} to define its own
version of \texttt{macroexpand}, because a globally defined
implementation-specific macro expander may call the
implementation-specific version of \texttt{macroexpand} which would
fail if given an environment other than the one defined by the
implementation.
%% LocalWords: startup expanders expander subclasses
\chapter{Normal hidden Markov models}
\section{Forward-backward variables}
Let $(X_t)_{t = 1}^T$ be a homogeneous Markov chain over the state
space $S = \{1, \ldots, s\}$ with transition matrix $P = [p_{ij}]$,
$i, j \in S$, and initial state distribution $p = [p_i]$, $i \in S$.
Then, for each $i_1, \ldots, i_T \in S$,
\[
\Pr(X_1 = i_1, X_2 = i_2, \ldots, X_T = i_T) = p_{i_1}p_{i_1i_2}
\ldots p_{i_{T - 1}i_T}.
\]
For each state $i \in S$, let $f_i(y, \theta_i)$ be a corresponding
probability density function. In each moment $t = 1, \ldots, T$, a
value $y_t$ of a random variable $Y_t$ is observed which comes from
the density $f_{i_t}$. The likelihood of the sample $y_1, \ldots, y_T$
is
\begin{eqnarray*}
\lefteqn{\mathcal{L} = L(p, P, \theta_1, \ldots, \theta_s) =} \\ & =
& \sum_{i_1, \ldots, i_T = 1}^s p_{i_1}f_{i_1}(y_1, \theta_{i_1})
p_{i_1i_2}f_{i_2}(y_2, \theta_{i_2}) \ldots p_{i_{T - 1}i_T}
f_{i_T}(y_T, \theta_{i_T}) = \\ & = & \sum_{i_1 = 1}^s
p_{i_1}f_{i_1}(y_1, \theta_{i_1}) \sum_{i_2 = 1}^s
p_{i_1i_2}f_{i_2}(y_2, \theta_{i_2}) \ldots \sum_{i_T = 1}^s p_{i_{T
- 1}i_T}f_{i_T}(y_T, \theta_{i_T}).
\end{eqnarray*}
The last expression can be calculated using forward variables
\begin{eqnarray}
\label{eq:alpha1}
\alpha_1(j) & = & p_j f_j(y_1, \theta_j), \hspace{1em} j \in S, \\
\label{eq:alphat}
\alpha_t(j) & = & \sum_{i = 1}^s (\alpha_{t - 1}(i)p_{ij})
f_j(y_t, \theta_j),
\hspace{1em} j\in S, \hspace{1em} t = 2, \ldots, T
\end{eqnarray}
or backward variables
\begin{eqnarray}
\label{eq:betaT}
\beta_T(i) & = & 1, \hspace{1em} i \in S, \\
\label{eq:betat}
\beta_t(i) & = & \sum_{j = 1}^s p_{ij}f_j(y_{t + 1},
\theta_j)\beta_{t + 1}(j),
\hspace{1em} i \in S, \hspace{1em} t = T - 1, \ldots, 1
\end{eqnarray}
or both as
\begin{equation}
\label{eq:likelihood}
\mathcal{L} = \sum_{i = 1}^s \alpha_T(i) = \sum_{i = 1}^s p_i f_i(y_1,
\theta_i) \beta_1(i) = \sum_{i = 1}^s
\alpha_t(i)\beta_t(i), \hspace{1em} t = 1, \ldots, T.
\end{equation}
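The forward recursion~(\ref{eq:alpha1})--(\ref{eq:alphat}) and the first equality of~(\ref{eq:likelihood}) can be sketched as follows; the two-state example values are arbitrary, and the densities $f_j(y_t, \theta_j)$ are assumed to be precomputed in an array:

```python
import itertools
import numpy as np

def forward_likelihood(p, P, dens):
    """Unscaled forward recursion; dens[t, j] = f_j(y_t, theta_j)."""
    alpha = p * dens[0]                  # alpha_1(j) = p_j f_j(y_1)
    for t in range(1, dens.shape[0]):
        # alpha_t(j) = sum_i alpha_{t-1}(i) p_ij * f_j(y_t)
        alpha = (alpha @ P) * dens[t]
    return alpha.sum()                   # L = sum_i alpha_T(i)

p = np.array([0.6, 0.4])
P = np.array([[0.7, 0.3], [0.2, 0.8]])
dens = np.array([[0.2, 0.9], [0.8, 0.1], [0.5, 0.5], [0.3, 0.6]])
L = forward_likelihood(p, P, dens)
# brute-force sum over all s^T state sequences gives the same likelihood
brute = sum(p[q[0]] * dens[0, q[0]] *
            np.prod([P[q[t - 1], q[t]] * dens[t, q[t]] for t in range(1, 4)])
            for q in itertools.product([0, 1], repeat=4))
assert np.isclose(L, brute)
```

The recursion needs $O(Ts^2)$ operations, while the defining sum over all $s^T$ state sequences is exponential in $T$.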
Moreover, if we define
\begin{eqnarray}
\label{eq:gamma}
\gamma_t(i) & = & \alpha_t(i)\beta_t(i) / \mathcal{L}, \hspace{1em} t
= 1, \ldots, T, \hspace{1em} i \in S, \\
\label{eq:xi}
\xi_t(i, j) & = &
\alpha_t(i)\beta_{t + 1}(j)p_{ij} f_j(y_{t + 1}, \theta_j) / \mathcal{L},
\hspace{1em}
t = 1, \ldots, T - 1, \hspace{1em} i, j \in S,
\end{eqnarray}
the following interpretations are possible:
\begin{eqnarray*}
  \alpha_t(i) & = & \Pr(Y_1 = y_1, \ldots, Y_t = y_t, X_t = i),
  \\ \beta_t(i) & = & \Pr(Y_{t + 1} = y_{t + 1}, \ldots, Y_T = y_T\ |\
  X_t = i), \\
  \gamma_t(i) & = & \Pr(X_t = i\ |\ Y_1 = y_1, \ldots, Y_T = y_T), \\
  \xi_t(i, j) & = & \Pr(X_t = i, X_{t + 1} = j\ |\ Y_1 = y_1, \ldots,
  Y_T = y_T),
\end{eqnarray*}
where $\Pr$ denotes the likelihood of the respective event, with
densities used in place of probabilities for the continuous
observations $Y_t$.
\section{Baum-Welch algorithm}
If $f_i(y, \theta_i) = \phi((y - \mu_i) / \sigma_i)$, where $\phi$ is
standard normal probability density, the following formulas can be
used for $i, j \in S$ to increase the likelihood $\mathcal{L}$:
\begin{eqnarray}
\label{eq:baumwelchp}
\overline{p}_i & = & \gamma_1(i), \\
\label{eq:baumwelchP}
\overline{p}_{ij} & = & \frac{\sum_{t = 1}^{T - 1} \xi_t(i,
j)}{\sum_{t = 1}^{T - 1} \gamma_t(i)}, \\
\label{eq:baumwelchmu}
\overline{\mu}_{i} & = & \frac{\sum_{t = 1}^T \gamma_t(i)y_t}{\sum_{t
= 1}^T \gamma_t(i)}, \\
\label{eq:baumwelchsigma}
\overline{\sigma}_i^2 & = & \frac{\sum_{t = 1}^T \gamma_t(i)(y_t -
\overline{\mu}_i)^2}{\sum_{t = 1}^T \gamma_t(i)}.
\end{eqnarray}
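The update formulas~(\ref{eq:baumwelchp})--(\ref{eq:baumwelchsigma}) can be sketched as follows, using the unscaled forward and backward variables and therefore only suitable for short series; the two-state data and starting values are arbitrary assumptions of this sketch:

```python
import numpy as np

def normal_pdf(y, mu, sigma):
    return np.exp(-0.5 * ((y - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def baum_welch_step(y, p, P, mu, sigma):
    """One update of p, P, mu, sigma; returns the likelihood of the inputs."""
    T, s = len(y), len(p)
    dens = normal_pdf(y[:, None], mu[None, :], sigma[None, :])  # f_j(y_t)
    alpha = np.zeros((T, s))
    beta = np.ones((T, s))
    alpha[0] = p * dens[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ P) * dens[t]
    for t in range(T - 2, -1, -1):
        beta[t] = P @ (dens[t + 1] * beta[t + 1])
    L = alpha[-1].sum()
    gamma = alpha * beta / L
    xi = alpha[:-1, :, None] * P[None] * (dens[1:] * beta[1:])[:, None, :] / L
    # re-estimation formulas
    p_new = gamma[0]
    P_new = xi.sum(axis=0) / gamma[:-1].sum(axis=0)[:, None]
    mu_new = (gamma * y[:, None]).sum(axis=0) / gamma.sum(axis=0)
    sigma_new = np.sqrt((gamma * (y[:, None] - mu_new) ** 2).sum(axis=0)
                        / gamma.sum(axis=0))
    return p_new, P_new, mu_new, sigma_new, L

y = np.array([0.1, 2.1, 1.9, 0.0, 2.2, 0.3])
p = np.array([0.5, 0.5]); P = np.array([[0.6, 0.4], [0.4, 0.6]])
mu = np.array([0.0, 2.0]); sigma = np.array([1.0, 1.0])
p, P, mu, sigma, L1 = baum_welch_step(y, p, P, mu, sigma)
p, P, mu, sigma, L2 = baum_welch_step(y, p, P, mu, sigma)
assert L2 >= L1   # each EM update does not decrease the likelihood
```

For realistic series lengths the scaled variables of the next sections must be used, since the unscaled $\alpha_t$ and $\beta_t$ underflow quickly.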
\section{Viterbi algorithm}
Having found $p$, $P$, $\theta_1$, \ldots, $\theta_s$, one may need to
find the best sequence of states, that is a sequence
\begin{equation} \label{eq:bestseq}
i_1, \ldots, i_T
\end{equation}
which maximizes
\begin{equation} \label{eq:bestseqprob}
p_{i_1}f_{i_1}(y_1, \theta_{i_1}) p_{i_1i_2}f_{i_2}(y_2, \theta_{i_2})
\ldots p_{i_{T - 1}i_T}f_{i_T}(y_T, \theta_{i_T}).
\end{equation}
The Viterbi algorithm proceeds as follows. Let
\begin{displaymath}
\delta_1(i) = p_i f_i(y_1, \theta_i), \hspace{1em} \psi_1(i) =
0, \hspace{1em} i \in S.
\end{displaymath}
For $t = 2, \ldots, T$, let
\begin{displaymath}
\delta_t(j) = \max_{i \in S} (\delta_{t - 1}(i) p_{ij}) f_j(y_t,
\theta_j),
\hspace{1em}
  \psi_t(j) = \argmax_{i \in S} (\delta_{t - 1}(i) p_{ij}),
  \hspace{1em} j \in S.
\end{displaymath}
Then the maximized probability~(\ref{eq:bestseqprob}) is equal to
$\max_{i \in S} \delta_T(i)$ and the best sequence~(\ref{eq:bestseq})
can be backtracked by
\begin{displaymath}
i_T = \argmax_{i \in S} \delta_T(i),
\hspace{1em}
i_t = \psi_{t + 1}(i_{t + 1}), \hspace{1em} t = T - 1, \ldots, 1.
\end{displaymath}
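A sketch of the recursion and the backtracking, with the densities $f_j(y_t, \theta_j)$ assumed precomputed in an array; for long series one would maximize logarithms instead to avoid underflow:

```python
import itertools
import numpy as np

def viterbi(p, P, dens):
    """Most probable state sequence; dens[t, j] = f_j(y_t, theta_j)."""
    T, s = dens.shape
    delta = p * dens[0]                      # delta_1(i)
    psi = np.zeros((T, s), dtype=int)
    for t in range(1, T):
        trans = delta[:, None] * P           # delta_{t-1}(i) p_ij
        psi[t] = trans.argmax(axis=0)        # psi_t(j)
        delta = trans.max(axis=0) * dens[t]  # delta_t(j)
    # backtracking: i_T, then i_t = psi_{t+1}(i_{t+1})
    path = np.empty(T, dtype=int)
    path[-1] = delta.argmax()
    for t in range(T - 2, -1, -1):
        path[t] = psi[t + 1, path[t + 1]]
    return path, delta.max()

rng = np.random.default_rng(2)
p = np.array([0.5, 0.5]); P = np.array([[0.9, 0.1], [0.3, 0.7]])
dens = rng.random((5, 2))
path, prob = viterbi(p, P, dens)
# brute force over all 2^5 state sequences confirms the maximizer
best = max(itertools.product([0, 1], repeat=5),
           key=lambda q: p[q[0]] * dens[0, q[0]] *
               np.prod([P[q[t - 1], q[t]] * dens[t, q[t]] for t in range(1, 5)]))
assert tuple(path) == best
```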
\section{Scaling}
If the forward and backward variables are scaled, i.~e.
\begin{eqnarray}
\label{eq:alpha1scaled}
\hat{\alpha}_1(j) & = & c_1 \alpha_1(j), \hspace{1em} j \in S, \\
\label{eq:alphatscaled}
\hat{\alpha}_t(j) & = & c_t \sum_{i = 1}^s (\hat{\alpha}_{t -
1}(i)p_{ij}) f_j(y_t, \theta_j),
\hspace{1em} j \in S, \hspace{1em} t = 2, \ldots, T,
\end{eqnarray}
and
\begin{eqnarray}
\label{eq:betaTscaled}
\hat{\beta}_T(i) & = & d_T \beta_T(i), \hspace{1em} i \in S, \\
\label{eq:betatscaled}
\hat{\beta}_t(i) & = & d_t \sum_{j = 1}^s p_{ij}f_j(y_{t + 1},
\theta_j)\hat{\beta}_{t + 1}(j),
\hspace{1em} i \in S, \hspace{1em} t = T - 1, \ldots, 1
\end{eqnarray}
are calculated instead of~(\ref{eq:alpha1}--\ref{eq:betat}), where
\begin{eqnarray}
\label{eq:c1}
c_1^{-1} & = & \sum_{j = 1}^s \alpha_1(j), \\
\label{eq:ct}
c_t^{-1} & = & \sum_{j = 1}^s \sum_{i = 1}^s (\hat{\alpha}_{t -
1}(i)p_{ij}) f_j(y_t, \theta_j), \hspace{1em} t = 2, \ldots, T, \\
\label{eq:dT}
d_T^{-1} & = & \sum_{i = 1}^s \beta_T(i) = s, \\
\label{eq:dt}
d_t^{-1} & = & \sum_{i = 1}^s \sum_{j = 1}^s p_{ij}f_j(y_{t + 1},
\theta_j)\hat{\beta}_{t + 1}(j),
\hspace{1em} t = T - 1, \ldots, 1,
\end{eqnarray}
then
\begin{eqnarray}
\label{eq:alphabyscaled}
\hat{\alpha}_t(j) & = & c_1 \ldots c_t \alpha_t(j) =
\frac{\alpha_t(j)}{\sum_{j = 1}^s \alpha_t(j)}, \\
\label{eq:betabyscaled}
\hat{\beta}_t(i) & = & d_T \ldots d_t \beta_t(i) =
\frac{\beta_t(i)}{\sum_{i = 1}^s \beta_t(i)},
\end{eqnarray}
for $i, j \in S$, $t = 1, \ldots, T$. The logarithm of likelihood may be
calculated using the first equality~(\ref{eq:likelihood})
and~(\ref{eq:alphabyscaled}) for $t = T$ as
\begin{equation}
\label{eq:loglikelihood}
\log \mathcal{L} = - \sum_{t = 1}^T \log c_t,
\end{equation}
since $\sum_{i = 1}^s \alpha_T(i) = (c_1 \ldots c_T)^{-1}$.
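A sketch of the scaled forward recursion~(\ref{eq:alpha1scaled})--(\ref{eq:alphatscaled}) that accumulates the log-likelihood~(\ref{eq:loglikelihood}); the example values are arbitrary:

```python
import numpy as np

def log_likelihood(p, P, dens):
    """Scaled forward recursion; log L = -sum_t log c_t."""
    alpha_hat = p * dens[0]
    c_inv = alpha_hat.sum()                  # c_1^{-1}
    alpha_hat = alpha_hat / c_inv
    logL = np.log(c_inv)
    for t in range(1, dens.shape[0]):
        alpha_hat = (alpha_hat @ P) * dens[t]
        c_inv = alpha_hat.sum()              # c_t^{-1}
        alpha_hat = alpha_hat / c_inv
        logL += np.log(c_inv)
    return logL

p = np.array([0.6, 0.4]); P = np.array([[0.7, 0.3], [0.2, 0.8]])
dens = np.array([[0.2, 0.9], [0.8, 0.1], [0.5, 0.5]])
# agrees with the logarithm of the unscaled forward recursion on a short series
alpha = p * dens[0]
for t in range(1, 3):
    alpha = (alpha @ P) * dens[t]
assert np.isclose(log_likelihood(p, P, dens), np.log(alpha.sum()))
```

Unlike the unscaled recursion, the scaled version keeps $\hat{\alpha}_t$ normalised at every step and therefore does not underflow for long series.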
The values~(\ref{eq:gamma}) and~(\ref{eq:xi}) may be calculated as
\begin{eqnarray}
\label{eq:gammabyscaled}
\gamma_t(i) & = & \frac{\hat{\alpha}_t(i)\hat{\beta}_t(i)}{\sum_{i =
1}^s \hat{\alpha}_t(i) \hat{\beta}_t(i)}, \hspace{1em} t = 1,
\ldots, T, \\
\label{eq:xibyscaled}
\xi_t(i, j) & = & d_t\frac{\hat{\alpha}_t(i)\hat{\beta}_{t +
1}(j)p_{ij} f_j(y_{t + 1}, \theta_j)}{\sum_{i = 1}^s
\hat{\alpha}_t(i) \hat{\beta}_t(i)},
\hspace{1em} t = 1, \ldots, T - 1
\end{eqnarray}
for $i, j \in S$.
The Baum-Welch
adjustments~(\ref{eq:baumwelchp}--\ref{eq:baumwelchsigma}) can be
calculated as above except for~(\ref{eq:baumwelchP}), which should be
calculated as
\begin{equation}
\label{eq:baumwelchPscaled}
\overline{p}_{ij} = \frac{\sum_{t = 1}^{T - 1} \frac{\hat{\alpha}_t(i)
\hat{\beta}_{t + 1}(j) p_{ij} f_j(y_{t + 1}, \theta_j)}{\sum_{i =
1}^s \hat{\alpha}_t(i) \hat{\beta}_t(i)}d_t }{\sum_{t = 1}^{T - 1}
\gamma_t(i)}
\end{equation}
for $i, j \in S$.
The Viterbi algorithm does not need scaling, but the logarithm
of~(\ref{eq:bestseqprob}) should be maximized rather
than~(\ref{eq:bestseqprob}) itself.
% TODO: Put citations in proper places.
References: \cite{rabiner-1989}, \cite{baum-petrie-soules-weiss-1970},
\cite{cappe-moulines-ryden-2005}.
\section{Forecast normal pseudo-residuals}
Forecast normal pseudo-residuals are defined as follows \cite [p.~97]
{zucchini-macdonald-2009}. If $X_t$ is a continuous random variable
with distribution function $F_{X_t}$, then $F_{X_t}(X_t)$ is uniformly
distributed on $(0, 1)$ and $u_t = \Pr(X_t \leq x_t) = F_{X_t}(x_t)$
is the uniform pseudo-residual. The random variable
$\Phi^{-1}(F_{X_t}(X_t))$ is distributed standard normal and
\[
z_t = \Phi^{-1}(u_t) = \Phi^{-1}(F_{X_t}(x_t))
\]
is the normal pseudo-residual. If we take
\[
F_{X_t}(x_t) = \Pr(X_t \leq x_t\ |\ \mathbf{X}^{(t - 1)} =
\mathbf{x}^{(t - 1)}),
\]
we get forecast normal pseudo-residuals, while taking
\[
F_{X_t}(x_t) = \Pr(X_t \leq x_t\ |\ \mathbf{X}^{(-t)} =
\mathbf{x}^{(-t)}),
\]
we get ordinary normal pseudo-residuals. Therefore, we calculate the
density of the forecast distribution according to the formula
\[
\Pr(X_t = x\ |\ \mathbf{X}^{(t - 1)} = \mathbf{x}^{(t - 1)})
= \frac{\alpha_{t - 1} \Gamma P(x) 1^T}{\alpha_{t - 1} 1^T}.
\]
% !TEX TS-program = xelatex
% !TEX encoding = UTF-8
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% SIMPLE-RESUME-CV
%% <https://github.com/zachscrivena/simple-resume-cv>
%% This is free and unencumbered software released into the
%% public domain; see <http://unlicense.org> for details.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% INSTRUCTIONS FOR COMPILING THIS DOCUMENT ("CV.tex")
%% TeX ---(XeLaTeX)---> PDF:
%%
%% Method 1: Use latexmk for fully automated document generation:
%% latexmk -xelatex "CV.tex"
%% (add the -pvc switch to automatically recompile on changes)
%%
%% Method 2: Use XeLaTeX directly:
%% xelatex "CV.tex"
%% (run multiple times to resolve cross-references if needed)
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% \documentclass[a4paper,10pt,oneside]{article}
\documentclass[letterpaper,10pt,oneside]{article}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% TYPESETTING OPTIONS.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\TypesetInNonStopMode}{1}
\newcommand{\TypesetInDraftMode}{0}
\newcommand{\BreakLine}{
\hspace*{-3.2cm}
\noindent\rule{0.98\textwidth}{0.4pt}
}
\newcommand{\DoubleBreakLine}{
\hspace*{-3.1cm}
\hrule width \hsize
}
\newcommand{\DoubleBreakLineOther}{
\hspace*{-3.1cm}
\hrule width \hsize \kern 1mm \hrule width \hsize height 2pt
}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% PREAMBLE.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\input{CV-Preamble.tex}
% CV Info (to be customized).
\newcommand{\CVAuthor}{Ammar Kothari}
\newcommand{\CVWebpage}{http://people.oregonstate.edu/~KothariA/}
\newcommand{\CVGitHub}{https://github.com/amarsbars}
\newcommand{\CVLinkedIn}{https://www.linkedin.com/in/ammarikothari/}
% PDF settings and properties.
\hypersetup{
pdftitle={},
pdfauthor={\CVAuthor},
pdfsubject={\CVWebpage},
pdfcreator={XeLaTeX},
pdfproducer={},
pdfkeywords={},
pdfpagemode={},
bookmarks=true,
unicode=true,
bookmarksopen=true,
pdfstartview=FitH,
pdfpagelayout=OneColumn,
pdfpagemode=UseOutlines,
hidelinks,
breaklinks}
% Shorthand.
\newcommand{\CodeCommand}[1]{\mbox{\textbf{\textbackslash{#1}}}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% ACTUAL DOCUMENT.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%%%%%%%%%%%%%%%
% TITLE BLOCK %
%%%%%%%%%%%%%%%
\title{\CVAuthor}
\begin{subtitle}
{636 NW 21st Street, Corvallis, OR 97330}
\par
\href{mailto:KothariA@oregonstate.edu}
{KothariA@oregonstate.edu}
\,\SubBulletSymbol\,
+1\,(408)\,891-0392
\,\SubBulletSymbol\,
\href{\CVWebpage}
{\CVWebpage}\\
\href{\CVGitHub}
{\CVGitHub}
\,\SubBulletSymbol\,
\href{\CVLinkedIn}
{\CVLinkedIn}
\end{subtitle}
\begin{body}
%%%%%%%%%%%%%%%
%% EDUCATION %%
%%%%%%%%%%%%%%%
\section{Education}{Education}
{PDF:Education}
\href{http://oregonstate.edu/}
{\textbf{Oregon State University}},
Corvallis, Oregon, USA
\GapNoBreak
\BulletItem
Masters of Science (M.S.) in
\href{http://robotics.oregonstate.edu/}
{Robotics}
\hfill
\DatestampYM{2016}{9} --
Current
%\BulletItem
%Doctor of Philosophy (Ph.D.) in
%\href{http://robotics.oregonstate.edu/}
%{Robotics}
%\hfill
%\DatestampYM{2016}{9} --
%Current
\begin{detail}
% \SubBulletItem
% Thesis:
% % \href{http://www.example.com/my-phd-thesis}
% % {A Statistical Approach to Quantifying Climate Change}
% \SubBulletItem
% Adviser:
% Professor Jonathan Public
\SubBulletItem
\textbf{Research areas}:
Robot control focusing on manipulation inspired by expert demonstrations which can generalize to a large number of objects in real world settings.
\SubBulletItem
\textbf{Course Work}: Multiagent systems, Sequential Decision Making, Deep Learning, Applied Robotics, Human Robot Interaction, Geometric Mechanics, Learning Based Control
\SubBulletItem
\textbf{Teaching}: ME250-Intro to Manufacturing Processes. Each class is an hour of lecture and an hour of demonstration to teach students basics of manufacturing and practical fabrication skills. I help students develop comfort and abilities with machines and processes.
\end{detail}
\BigGap
\href{http://www.berkeley.edu/}
{\textbf{University of California, Berkeley}},
Berkeley, California, USA
\GapNoBreak
\BulletItem
Bachelor of Science (B.S.) in
\href{http://www.me.berkeley.edu/}
{Mechanical Engineering}
\hfill
\DatestampYM{2007}{08} --
\DatestampYM{2011}{05}
\begin{detail}
\SubBulletItem
\textbf{Course Work}: 3D Modeling, control theory, vehicle dynamics, energy conversion principles
\end{detail}
% \BreakLine
%%%%%%%%%%%%%%%%%%%%%%%%%%%
%% WORK EXPERIENCE %%
%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section
{Industry Experience}
{Industry Experience}
{PDF:IndustryWorkExperience}
\href{http://www.rethinkrobotics.com/}
{\textbf{Rethink Robotics}},
Boston, Massachusetts, USA
\GapNoBreak
\textit{Manufacturing Engineer}
\hfill
\DatestampYMD{2015}{12}{01} --
\DatestampYMD{2016}{06}{24}
\newline
\textit{Electromechanical Technician}
\hfill
\DatestampYMD{2015}{01}{04} --
\DatestampYMD{2015}{11}{30}
\begin{detail}
I leaped into an expanding company as a technician to help build a new generation of collaborative robots. Immersed in the manufacturing process, I learned extensively about robot design, development, and deployment.
\SubBulletItem
Managed and built 20 late prototype robots while supervising 2 assemblers.
\SubBulletItem
Investigated root causes of new failures and identified solutions.
\SubBulletItem
Created majority of procedures to facilitate transfer of assembly to contract manufacturer.
\SubBulletItem
Understood expectations for embedded software, mechanical design, and controls to deliver functional robots.
\SubBulletItem
Spent extensive time understanding and adapting bash and Python code for ROS.
\end{detail}
\BigGap
\href{http://www.acelity.com/products/snap-therapy-system}
{\textbf{Spiracur Inc.}},
Sunnyvale, California, USA
\GapNoBreak
\textit{Manufacturing Engineer}
\hfill
\DatestampYMD{2012}{07}{01} --
\DatestampYMD{2014}{04}{24}
\begin{detail}
I supported my product line and developed manufacturing processes to improve functionality of the primary product.
I also commonly worked on quality and R\&D projects.
\SubBulletItem
Designed, documented, implemented, and iterated fixtures and assembly processes for existing and new products. Reduced process times by 20\% on average. Improved data collection and analysis to provide insight into project effectiveness and prioritization.
\SubBulletItem
Cultivated dialogue with assemblers to better understand issues unknown to engineers.
\SubBulletItem
Engaged vendors to achieve desired quality and functionality for components that reduced scrap costs by over \$10,000.
\SubBulletItem
Performed testing and reporting for R\&D and Quality activities when required.
\end{detail}
\BigGap
\textbf{Other Positions}
\newline
\textit{Immune Tech} -- Project Engineer \hfill Foster City, CA, 3/12--6/12
\newline
\textit{Triazz Systems} -- Product Development Intern \hfill San Jose, CA, 6/09--8/09
\BreakLine
%%%%%%%%%%%%%%%%%%
%% Publications %%
%%%%%%%%%%%%%%%%%%
\section
{Publications}
{Publications}
{PDF:Publications}
\small{\textbf{Kothari, A.}, Morrow, J., Thrasher, V., Engle, K., Balasubramanian, R., Grimm, C. ``Grasping objects big and small: Human heuristics relating grasp-type and object size.'' Robotics and Automation (ICRA), 2018 IEEE International Conference on. IEEE, 2018.

Branyan, C., Fleming, C., Remaley, J., \textbf{Kothari, A.}, Tumer, K., Hatton, R., and Yigit Menguc. ``Soft Snake Robots: Mechanical Design and Geometric Gait Implementation.'' Robotics and Biomimetics (ROBIO), 2017 IEEE International Conference on. IEEE, 2017.
}
\BreakLine
%%%%%%%%%%%%
%% Awards %%
%%%%%%%%%%%%
\section
{Awards}
{Awards}
{PDF:Awards}
OSU MIME Fellowship, ICRA Travel Grant
\BreakLine
%%%%%%%%%%%%
%% SKILLS %%
%%%%%%%%%%%%
\section
{Skills}
{Skills}
{PDF:Skills}
Python, ROS, Solidworks, MATLAB, C++; Machining, basic Spanish
% HTML/CSS, Six Sigma Green Belt Certified; Solidworks Introductory Course
\BreakLine
%%%%%%%%%%%%%%%%%%
%% VOLUNTEERING %%
%%%%%%%%%%%%%%%%%%
\section
{Volunteering}
{Volunteering}
{PDF:Volunteering}
FIRST Tech Challenge, Berkeley Project, Corvallis Bike Collective
\BreakLine
%\DoubleBreakLine
% %%%%%%%%%%%%%%%%%%%%%%%%%
% %% RESEARCH EXPERIENCE %%
% %%%%%%%%%%%%%%%%%%%%%%%%%
% \section
% {Research Experience}
% {Research Experience}
% {PDF:ResearchExperience}
% \href{http://www.example.com/my-institute}
% {\textbf{Institute for Advanced Research}},
% Science College
% \GapNoBreak
% \BulletItem
% Undergraduate Research Student, Science Department
% \hfill
% \DatestampYMD{2004}{05}{15} --
% \DatestampYMD{2005}{05}{15}
% \begin{detail}
% \SubBulletItem
% Project:
% Investigations on the Use of Lasers to Measure Climate Change
\end{body}
\label{LastPage}~
\end{document}
\chapter{Atomic Structure and Periodic Trends}
\section{Electron-Configuration Notation}
A system of numbers and letters is used to designate electron configuration.
For example: \ce{1s^2 2s^2 2p^6 3s^2 3p^2}.
\begin{enumerate}
\item the principal energy-level number is written first
\item followed by the letter designating the shape of the orbital
\item a superscript is used to represent the number of electrons in that
specific orbital.
\end{enumerate}
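The notation rules above can be turned into a short program. The following is a minimal sketch (an illustration, not part of the original notes), assuming the standard Madelung (aufbau) filling order and ignoring exceptional configurations such as those of Cr and Cu:

```python
# Subshells in the standard Madelung (aufbau) filling order,
# enough to cover the first five periods.
SUBSHELLS = ["1s", "2s", "2p", "3s", "3p", "4s", "3d", "4p", "5s", "4d", "5p"]
CAPACITY = {"s": 2, "p": 6, "d": 10, "f": 14}  # max electrons per subshell


def electron_configuration(z):
    """Build the configuration string for an atom with z electrons."""
    parts = []
    for subshell in SUBSHELLS:
        if z <= 0:
            break
        # Fill each subshell up to its capacity before moving on.
        n = min(z, CAPACITY[subshell[-1]])
        parts.append(f"{subshell}^{n}")
        z -= n
    return " ".join(parts)


# Silicon (Z = 14) reproduces the configuration from the text:
# 1s^2 2s^2 2p^6 3s^2 3p^2
```

Given the configuration of the preceding noble gas, the noble-gas notation is then obtained by replacing the shared prefix with the bracketed noble-gas symbol.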
\subsection{Noble-gas Notation}
To simplify writing out all of these numbers, you can write the symbol of the
noble gas that most closely precedes the element you are trying to describe,
and then describe only the remaining electrons using the notation above. For
example: \ce{[Ne] 3s^2 3p^2}.
\begin{description}
\item[Highest occupied level] the electron-containing main energy level with
the highest principal quantum number
\item[Inner-shell electrons] the electrons that are contained in levels with a
lower principal quantum number
\item[Noble-gas configuration] an outer main-energy level that is fully
occupied, usually by eight electrons.
\end{description}
\section{Periodic Table and Trends}
\subsection{The Periodic Table}
\subsubsection{Mendeleev's Table}
In 1860, the first International Conference of Chemists was held, and at that
time the first periodic table was discussed. In 1869, nine years later, that
table was published in a textbook for college students.
Mendeleev was the first to generate a periodic table. He used atomic mass as
the basis for his table, arranging the elements by increasing mass. Some
elements, however, were out of place when compared against the periodic tables
of today. Despite these faults, he was able to see periodic trends, which
allowed him to predict the properties of yet-undiscovered elements with a high
degree of accuracy.
\subsubsection{Improvements}
Moseley observed that the elements were better ordered by their nuclear
charge than by atomic mass. The nuclear charge is due to the number of
protons, which led to the use of the atomic number as the basis for the table.
The \textbf{Periodic Law} states that the physical and chemical properties of
the elements are periodic functions of their atomic numbers.
\subsection{Modern-day}
Today, the \textit{Periodic Table} is an arrangement of the elements in order
of their atomic numbers, such that elements of similar properties fall in the
same column, or group. There are, however, a few differences from Mendeleev's
original table:
\begin{enumerate}
\item There are more elements now than when the periodic table was first
introduced
\item The noble gases have since been discovered, and the lanthanides and
actinides synthesized
\item The general arrangement has changed over time as well
\end{enumerate}
\section{Periodic Trends}
\subsection{Atomic Radii}
Atomic radius is defined as one half of the distance between the nuclei of
identical atoms that are bonded together. Atomic radii decrease as we move
across a period (increasing positive nuclear charge) and increase as we
descend a family (increasing main energy level).
\subsection{Ionization Energy}
Ionization energy is the energy required to remove one electron from a neutral
atom of an element. It is defined by the following chemical equation: \ce{A +
energy -> A^+ + e^-}. Ionization energy increases as you move right across a
period, and decreases as you move down a family.
\subsubsection{Multiple Ionization Energies}
There are multiple ionization energies, each of which involves removing a
further electron from the atom. These are referred to as $IE_2$, $IE_3$, and
so on.
\subsection{Electron Affinity}
Electron affinity is the amount of energy change that results when an electron
is acquired by a neutral atom.
\subsection{Ionic Radii}
This trend represents the radius of an ion of an element. Cations have ionic
radii that are smaller than the neutral atom, whereas anions have ionic radii
that are typically larger than the neutral atom. This happens because
electrons are removed from the outer shell to form cations and added to it to
form anions.
\subsection{Electronegativity}
Electronegativity is the measure of the ability of an atom in a chemical
compound to attract electrons.
\chapter{Figures}
\label{sec-pcr-pro}
\section{Single Image Configuration}
You can use a PDF image as well.
\begin{figure}[!h]
\centering
\includegraphics[width=.9\linewidth]{pdf/Result.pdf}
\caption{Single Image Configuration.}
\label{fig:single_image_Configuration}
\end{figure}
\newpage
\section{1-1 Configuration}
\begin{figure*}[!h]
\captionsetup[subfigure]{width=.9\linewidth}
\subfloat[First Image] {\label{fig:1_1_Configuration_1}
\includegraphics[width=.9\linewidth]{images/std.jpg}
}
\captionsetup[subfigure]{width=.9\linewidth}
\subfloat[Second Image.] {\label{fig:1_1_Configuration_2}
\includegraphics[width=.9\linewidth]{images/std_pcr_pro.jpg}
}
\caption{1-1 Configuration.}
\label{fig:1_1_Configuration}
\end{figure*}
\newpage
\section{2-1 Configuration}
\begin{figure}[!h]
\captionsetup[subfigure]{width=.48\linewidth}
\subfloat[Image A.]{\label{fig:2_1_Configuration_1}
\includegraphics[width=.48\linewidth]{images/Result1/original_poses.jpg}
}
\subfloat[Image B.]{\label{fig:2_1_Configuration_2}
\includegraphics[width=.48\linewidth]{images/Result1/slam.jpg}
} \\
\captionsetup[subfigure]{width=.95\linewidth}
\subfloat[Image C.]{\label{fig:2_1_Configuration_3}
\includegraphics[width=.98\linewidth]{images/Result1/fullview.jpg}
}
\caption{2-1 Configuration.}
\label{fig:2_1_Configuration}
\end{figure}
\newpage
\section{4 x 4 Configuration}
\begin{figure}[!h]
\captionsetup[subfigure]{width=.45\linewidth}
\subfloat[Image 21.]{\label{fig:4_4_Configuration_1}
\includegraphics[width=.45\linewidth]{images/fig21.jpg}
}
\subfloat[Image 23.]{\label{fig:4_4_Configuration_2}
\includegraphics[width=.45\linewidth]{images/fig23.jpg}
} \\
\subfloat[Image 22.]{\label{fig:4_4_Configuration_3}
\includegraphics[width=.45\linewidth]{images/fig22.jpg}
}
\subfloat[Image 24.]{\label{fig:4_4_Configuration_4}
\includegraphics[width=.45\linewidth]{images/fig24.jpg}
}
\caption{4 x 4 Configuration.}
\label{fig:4_4_Configuration}
\end{figure}
\newpage
\section{3 x 2 Configuration}
\begin{figure*}[!h]
\centering
\captionsetup[subfigure]{width=.32\linewidth}
\subfloat[Image 1.]{\label{fig:3_2_Configuration_a}
\includegraphics[width=.32\linewidth]{images/Result3/bundle_three_odom_threepoint_ueye5_ueye6.jpg}
}
\subfloat[Image 2.]{\label{fig:3_2_Configuration_b}
\includegraphics[width=.32\linewidth]{images/Result3/loop_box_three_odom_threepoint_ueye5_ueye6.jpg}
}
\subfloat[Image 3.]{\label{fig_loop_box_pgo_parking2_c}
\includegraphics[width=.32\linewidth]{images/Result3/full_odom.jpg}
}\hfil
\subfloat[Image 4.]{\label{fig:3_2_Configuration_d}
\includegraphics[width=.32\linewidth]{images/Result4/bundle_three_odom_threepoint_ueye5_ueye6.jpg}
}
\subfloat[Image 5.]{\label{fig:3_2_Configuration_e}
\includegraphics[width=.32\linewidth]{images/Result4/loop_box_three_odom_threepoint_ueye5_ueye6.jpg}
}
\subfloat[Image 6.]{\label{fig_loop_box_pgo_parking2_f}
\includegraphics[width=.32\linewidth]{images/Result4/full_odom.jpg}
}
\caption{Panels (a)--(c) and (d)--(f) correspond to the 3 x 2 Configuration.}
\label{fig:3_2_Configuration}
\end{figure*}
\newpage
\section{1-2-1 Configuration}
\begin{figure}[!h]
\centering
\captionsetup[subfigure]{width=.96\linewidth}
\subfloat[Image 1.]{\label{fig:1_2_1_Configuration_1}
\includegraphics[width=.98\linewidth]{images/Result2/matching.jpg}
}
\hfil
\captionsetup[subfigure]{width=.45\linewidth}
\subfloat[Image 2.]{\label{fig:1_2_1_Configuration_2}
\includegraphics[width=.48\linewidth]{images/Result2/poses.jpg}
}
\subfloat[Image 3.]{\label{fig:1_2_1_Configuration_3}
\includegraphics[width=.48\linewidth]{images/Result2/point_SALM.jpg}
}
\hfil
\captionsetup[subfigure]{width=.98\linewidth}
\subfloat[Image 4.]{\label{fig:1_2_1_Configuration_4}
\includegraphics[width=.98\linewidth]{images/Result2/full_map.jpg}
}
\caption{\text{1-2-1} Configuration.}
\label{fig:1_2_1_Configuration}
\end{figure}
\newpage
\section{Appendix}
\label{sec.appen}
\subsection{Appendix subsection}
\label{appen_d}
\subsubsection{Problem definition}
\newpage
\renewcommand*{\bibname}{\section{References}}
\bibliographystyle{ieeetr}
\bibliography{Thesis}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This work is licensed under the Creative Commons Attribution 4.0 International %
% License. To view a copy of this license, visit %
% http://creativecommons.org/licenses/by/4.0/. %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[11pt]{article}
\usepackage[cm]{fullpage}
%%AVC PACKAGES
\usepackage{avcgreek}
\usepackage{avcfonts}
\usepackage{avcmath}
\usepackage[numberby=section,skip=9pt plus 2pt minus 5pt]{avcthm}
\usepackage{qcmacros}
\usepackage{goldstone}
%%MACROS FOR THIS DOCUMENT
\numberwithin{equation}{section}
\usepackage[
margin=1.5cm,
includefoot,
footskip=30pt,
headsep=0.2cm,headheight=1.3cm
]{geometry}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhf{}
\fancyhead[LE,RO]{Quiz 6, Handout 1: Perturbation Theory}
\fancyfoot[CE,CO]{\thepage}
\usepackage{url}
\makeatother
\newcommand{\resolventline}[2][1]{
\tikz[overlay]{
\draw[thick,flexdotted] (0,-1ex) to ++(0,#1*4.5ex) node[above,inner sep=1pt] {#2};
}
}
\begin{document}
\setcounter{section}{5}
\section{Perturbation theory}
\begin{dfn}
\thmtitle{Model Hamiltonian}
The electronic Hamiltonian\footnote{For the sake of brevity I will here refer to $H_\mr{c}$ as ``the electronic Hamiltonian''. We could also use $H=E_0+H_\mr{c}$, which will simply shift some of the equations by a constant.} can be expressed as the sum of a \textit{zeroth order} or \textit{``model''~Hamiltonian} $H_0$ and a \textit{perturbation} $V_\mr{c}$, known as the \textit{fluctuation potential}.
For well-behaved electronic systems, a common choice for the model Hamiltonian is the diagonal part of the Fock operator.
\begin{align}
\label{eq:diagonal-fock-model-hamiltonian}
H_0
\equiv
f_p^p
\tl{a}^p_p
&&
V_\mr{c}
\equiv
f_p^q
(
1
-
\d_p^q
)
\tl{a}^p_q
+
\tfr{1}{4}
\ol{g}_{pq}^{rs}
\tl{a}^{pq}_{rs}
\end{align}
This choice of $H_0$ brings the advantage that its eigenbasis is the standard basis of determinants.
\begin{align}
\label{eq:model-problem}
H_0
\F
=
0\,
\F
&&
H_0
\F_{i_1\cd i_k}^{a_1\cd a_k}
=
\mc{E}_{i_1\cd i_k}^{a_1\cd a_k}
\F_{i_1\cd i_k}^{a_1\cd a_k}
&&
\mc{E}_{q_1\cd q_k}^{p_1\cd p_k}
\equiv
\sum_{r=1}^k
f_{p_r}^{p_r}
-
\sum_{r=1}^k
f_{q_r}^{q_r}
\end{align}
In general the model Hamiltonian is chosen to make the matrix representation of $H_\mr{c}$ in the model eigenbasis diagonally dominant.\footnote{See \url{https://en.wikipedia.org/wiki/Diagonally_dominant_matrix}.}
Our choice of $H_0$ is appropriate for \textit{weakly correlated systems}, where the reference determinant can be chosen to satisfy $\ip{\F|\Y}\gg \ip{\F_{i_1\cd i_k}^{a_1\cd a_k}|\Y}$ for all substituted determinants.
In this context it is convenient to employ intermediate normalization for the wavefunction, which will be assumed from here on out.
\end{dfn}
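As a concrete instance of the eigenvalue relations in eq~\ref{eq:model-problem}: for a single substitution, the model eigenvalue reduces to the familiar orbital-energy difference.

```latex
\begin{align*}
H_0
\F_i^a
=
\mc{E}_i^a
\F_i^a
&&
\mc{E}_i^a
=
f_a^a
-
f_i^i
\end{align*}
```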
\begin{dfn}
\thmtitle{Perturbation theory}
\textit{Perturbation theory} analyzes the polynomial order with which the wavefunction and its observables depend on the fluctuation potential.
For this purpose, we define a continuous series of Hamiltonians
$
H(\la)
\equiv
H_0
+
\la
V_\mr{c}
$
parametrized by a \textit{strength parameter} $\la$ that smoothly interpolates between the model Hamiltonian at $\la=0$ and the exact one at $\la=1$.
The \textit{$m\eth$-order contribution} to a quantity $X$ is then defined as the $m\eth$ coefficient in its Taylor series about $\la=0$, denoted $X\ord{m}$.
In particular, the wavefunction and correlation energy can be expanded as follows.
\begin{align}
\label{eq:series-schrodinger-equation}
\Y
=
\sum_{m=0}^\infty
\Y\ord{m}
&&
E_\mr{c}
=
\sum_{m=0}^\infty
E_\mr{c}\ord{m}
&&
\Y\ord{m}
\equiv
\fr{1}{m!}
\left.
\pd{^m\Y(\la)}{\la^m}
\right|_{\la=0}
&&
E_\mr{c}\ord{m}
\equiv
\fr{1}{m!}
\left.
\pd{^mE(\la)}{\la^m}
\right|_{\la=0}
&&
H(\la)
\Y(\la)
=
E(\la)
\Y(\la)
\end{align}
The order(s) at which a term contributes to the wavefunction or energy provides one measure of its relative importance.
\end{dfn}
\begin{rmk}
Projecting the Schr\"odinger equation by $\F$ and using eq~\ref{eq:model-problem}, along with intermediate normalization, implies
\begin{align}
E_\mr{c}
=
\ip{\F|V_\mr{c}|\Y}
\hspace{20pt}
\implies
\hspace{20pt}
E_\mr{c}\ord{m+1}
=
\ip{\F|V_\mr{c}|\Y\ord{m}}
\end{align}
where the equation on the right follows from generalizing the energy expression to
$
E(\la)
=
\ip{\F|\la V_\mr{c}|\Y(\la)}
$.
In words, this says that the $m\eth$-order wavefunction contribution determines the $(m+1)\eth$-order energy contribution.
This immediately identifies the first-order energy as
$
E_\mr{c}\ord{1}
=
\ip{\F|V_\mr{c}|\F}
=
0
$,
since $V_\mr{c}$ consists of $\F$-normal-ordered operators.
\end{rmk}
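The projection step can be spelled out in one line. A short sketch, using $H_\mr{c}\Y=E_\mr{c}\Y$, intermediate normalization ($\ip{\F|\Y}=1$), the hermiticity of $H_0$, and $H_0\F=0$:

```latex
\begin{align*}
E_\mr{c}
=
E_\mr{c}
\ip{\F|\Y}
=
\ip{\F|H_0+V_\mr{c}|\Y}
=
\ip{H_0\F|\Y}
+
\ip{\F|V_\mr{c}|\Y}
=
\ip{\F|V_\mr{c}|\Y}
\end{align*}
```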
\begin{dfn}
\thmtitle{Model space projection operator}
The projection onto the reference determinant, $P=\kt{\F}\br{\F}$, is termed the \textit{model space projection operator}.
Its complement is the \textit{orthogonal space projection operator}.\footnote{$1_n\equiv 1|_{\mc{F}_n}$ is the identity on $\mc{F}_n$, which is equivalent to a projection onto this subspace. For our purposes, this is the identity.}
\begin{align}
\label{eq:orthogonal-space-projection-operator}
Q
\equiv
1_n
-
P
=
\sum_k
\pr{
\tfr{1}{k!}
}^2
\sum_{\substack{a_1\cd a_k\\i_1\cd i_k}}
\kt{\F_{i_1\cd i_k}^{a_1\cd a_k}}
\br{\F_{i_1\cd i_k}^{a_1\cd a_k}}
\end{align}
Note that $P$ and $Q$ satisfy the following relationships, which are characteristic of complementary projection operators.
\begin{align}
P
+
Q
=
1_n
&&
P^2
=
P
&&
Q^2
=
Q
&&
PQ
=
QP
=
0
\end{align}
Due to intermediate normalization, we also have that
$
P\Y
=
\F
$
and
$
Q\Y
=
\Y
-
\F
$.
\end{dfn}
\begin{samepage}
\begin{dfn}
\thmtitle{Resolvent}
The \textit{resolvent},
$
R_0
\equiv
(-H_0)^{-1}Q
$, is the negative\footnote{The annoying sign factor is required for consistency with $R(\zeta)\equiv(\zeta-H_0)^{-1}Q$, which is a more general definition of the resolvent.} inverse of $H_0$ in the orthogonal space.\footnote{Note that this implies $R_0P=0$ and $R_0Q=R_0$.}
\begin{align}
\label{eq:resolvent-spectral-decomposition}
R_0
\F
=
0
\F
&&
R_0
\F_{i_1\cd i_k}^{a_1\cd a_k}
=
(\mc{E}_{a_1\cd a_k}^{i_1\cd i_k})^{-1}
\F_{i_1\cd i_k}^{a_1\cd a_k}
&&
R_0
=
\sum_k
\pr{\tfr{1}{k!}}^2
\sum_{\substack{a_1\cd a_k\\i_1\cd i_k}}
\fr{
\kt{\F_{i_1\cd i_k}^{a_1\cd a_k}}
\br{\F_{i_1\cd i_k}^{a_1\cd a_k}}
}{
\mc{E}_{a_1\cd a_k}^{i_1\cd i_k}
}
\end{align}
The equation on the right is the spectral decomposition of the resolvent.\footnote{This follows from the eigenvalue equations, but you can derive it explicitly by substituting equation~\ref{eq:orthogonal-space-projection-operator} into $R_0=(-H_0)^{-1}Q$.}
Restriction to the orthogonal space is necessary because $H_0$ is singular in the model space, which means that $H_0^{-1}$ does not exist there.
\end{dfn}
\end{samepage}
\begin{samepage}
\begin{rmk}
\thmtitle{A recursive solution to the Schr\"odinger equation}
Operating $R_0$ on $H(\la)\Y(\la)=E(\la)\Y(\la)$ gives\footnote{This follows from $R_0H_0\Y=-Q\Y=-\Y+\F$.}
\begin{align}
\label{eq:lambda-dependent-recursive-series}
\Y(\la)
=
\F
+
R_0
(
\la V_\mr{c}
-
E(\la)
)
\Y(\la)
\end{align}
which provides a recursive equation for $\Y(\la)$ that can be used to solve for wavefunction contributions order by order.
\end{rmk}
\end{samepage}
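To sketch the derivation, apply $R_0$ to both sides of $H(\la)\Y(\la)=E(\la)\Y(\la)$ and use $R_0H_0=-Q$ together with $Q\Y(\la)=\Y(\la)-\F$:

```latex
\begin{align*}
R_0
H_0
\Y(\la)
+
\la
R_0
V_\mr{c}
\Y(\la)
=
E(\la)
R_0
\Y(\la)
\implies
\F
-
\Y(\la)
=
-
R_0
(
\la V_\mr{c}
-
E(\la)
)
\Y(\la)
\end{align*}
```

which rearranges to eq~\ref{eq:lambda-dependent-recursive-series}.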
\begin{ex}
The first two derivatives of equation~\ref{eq:lambda-dependent-recursive-series} are given by
\begin{align*}
\pd{\Y(\la)}{\la}
=&
R_0
\pr{
V_\mr{c}
-
\pd{E(\la)}{\la}
}
\Y(\la)
+
R_0
(
\la V_\mr{c}
-
E(\la)
)
\pd{\Y(\la)}{\la}
\\
\pd{^2\Y(\la)}{\la^2}
=&
-
R_0
\pd{^2E(\la)}{\la^2}
\Y(\la)
+
2
R_0
\pr{
V_\mr{c}
-
\pd{E(\la)}{\la}
}
\pd{\Y(\la)}{\la}
+
R_0
(
\la V_\mr{c}
-
E(\la)
)
\pd{^2\Y(\la)}{\la^2}
\end{align*}
which can be used to determine the first- and second-order wavefunction contributions.
\begin{align}
\label{eq:first-and-second-order-principal-terms}
\Y\ord{1}
=
\left.
\pd{\Y(\la)}{\la}
\right|_{\la=0}
=
R_0
V_\mr{c}
\F
&&
\Y\ord{2}
=
\left.
\fr{1}{2}
\pd{^2\Y(\la)}{\la^2}
\right|_{\la=0}
=
R_0
V_\mr{c}
\Y\ord{1}
=
R_0
V_\mr{c}
R_0
V_\mr{c}
\F
\end{align}
Here we have used $E_\mr{c}\ord{0}=E_\mr{c}\ord{1}=0$ and $R_0\F=0$ to simplify the result.
\end{ex}
\begin{ex}
\label{ex:first-order-wavefunction-expansion-unsimplified}
Plugging in the spectral decomposition for $R_0$ allows us to expand $\Y\ord{1}$ in the determinant basis.
\begin{align}
\label{eq:first-order-wavefunction-expansion-unsimplified}
\Y\ord{1}
=
R_0
V_\mr{c}
\F
=
\sum_{\substack{a\\i}}
\F_i^a
\fr{\ip{\F_i^a|V_\mr{c}|\F}}{\mc{E}_a^i}
+
(\tfr{1}{2!})^2
\sum_{\substack{ab\\ij}}
\F_{ij}^{ab}
\fr{\ip{\F_{ij}^{ab}|V_\mr{c}|\F}}{\mc{E}_{ab}^{ij}}
\end{align}
The expansion truncates at double excitations because the maximum excitation level of $V_\mr{c}$ is $+2$.
\end{ex}
\begin{ex}
The numerators in example~\ref{ex:first-order-wavefunction-expansion-unsimplified} are easily evaluated using Slater's rules, which leads to the following.
\begin{align*}
\label{eq:first-order-wavefunction-expansion}
\Y\ord{1}
=
\sum_{\substack{a\\i}}
\F_i^a
\fr{f_a^i}{\mc{E}_a^i}
+
(\tfr{1}{2!})^2
\sum_{\substack{ab\\ij}}
\F_{ij}^{ab}\,
\fr{\ol{g}_{ab}^{ij}}{\mc{E}_{ab}^{ij}}
\hspace{20pt}
\implies
\hspace{20pt}
E_\mr{c}\ord{2}
=
\ip{\F|V_\mr{c}|\Y\ord{1}}
=
\sum_{\substack{a\\i}}
\fr{f_i^af_a^i}{\mc{E}_a^i}
+
(\tfr{1}{2!})^2
\sum_{\substack{ab\\ij}}
\fr{\ol{g}_{ij}^{ab}\,\ol{g}_{ab}^{ij}}{\mc{E}_{ab}^{ij}}
\end{align*}
Note that the singles contribution vanishes for canonical Hartree-Fock references, since $f_a^i=0$.
These extra terms are required for non-canonical orbitals, such as those obtained from restricted open-shell Hartree-Fock (ROHF) theory.
\end{ex}
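The second-order energy expression can be checked numerically. The following is a minimal pure-Python sketch (an illustration, not code from the handout), assuming a canonical Hartree-Fock reference ($f_a^i=0$, so only the doubles term survives), real antisymmetrized integrals `g[i][j][a][b]` $=\ol{g}_{ij}^{ab}$, and orbital energies `eps_occ`, `eps_vir`:

```python
def mp2_doubles_energy(g, eps_occ, eps_vir):
    """Doubles contribution to E_c^(2):
    (1/4) * sum_{ijab} g[i][j][a][b]**2 / (e_i + e_j - e_a - e_b).

    The denominator is negative for a ground-state reference
    (occupied orbitals lie below virtuals), so the result is negative.
    """
    e2 = 0.0
    for i, ei in enumerate(eps_occ):
        for j, ej in enumerate(eps_occ):
            for a, ea in enumerate(eps_vir):
                for b, eb in enumerate(eps_vir):
                    e2 += 0.25 * g[i][j][a][b] ** 2 / (ei + ej - ea - eb)
    return e2
```

For real orbitals $\ol{g}_{ij}^{ab}\,\ol{g}_{ab}^{ij}=(\ol{g}_{ij}^{ab})^2$, which is why a single squared tensor suffices here.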
\begin{dfn}\label{dfn:resolvent-line}
\thmtitle{Resolvent line}
We can generalize our previous definition of the \textit{resolvent line} as follows
\begin{align}
\,\resolventline[0.7]{}\,
Y
\equiv
\sum_k
\pr{\tfr{1}{k!}}^2
\sum_{\substack{a_1\cd a_k\\i_1\cd i_k}}
\fr{y_{a_1\cd a_k}^{i_1\cd i_k}}{\mc{E}_{a_1\cd a_k}^{i_1\cd i_k}}
\tl{a}^{a_1\cd a_k}_{i_1\cd i_k}
&&
Y
=
Y_{n\rightarrow n}
+
Y_{n\not\rightarrow n}
&&
Y_{n\rightarrow n}
=
y_0
+
\sum_k
\pr{\tfr{1}{k!}}^2
\sum_{\substack{p_1\cd p_k\\q_1\cd q_k}}
y_{p_1\cd p_k}^{q_1\cd q_k}
\tl{a}^{p_1\cd p_k}_{q_1\cd q_k}
\end{align}
where $Y$ is an arbitrary operator.
The last equation is the Wick expansion of $Y_{n\rightarrow n}$, which denotes the purely particle-number-conserving part\footnote{The component that maps $\mc{F}_n\rightarrow\mc{F}_n$ for all $n$, which can always be written as a linear combination of excitation operators.} of $Y$.
This definition immediately implies
$
\,\resolventline[0.7]{}\,\,
\kt{\Y}
=
R_0
\kt{\Y}
$
for all $\Y$.\footnote{Since any $\kt{\Y}$ can be written as $Y\kt{\F}$, this follows from applying eq~\ref{eq:resolvent-spectral-decomposition} to each term in the Wick expansion of $Y$ in $R_0Y\kt{\F}$.}
Other expressions are defined by giving resolvent lines priority in the order of operations, with maximum priority given to the rightmost resolvent.
\begin{align}
Y_1
\,\resolventline[0.7]{}\,
Y_2
\cd
\,\resolventline[0.7]{}\,
Y_n
\equiv
Y_1
\pr{
\,\resolventline[0.7]{}\,
Y_2
\pr{
\cd
\pr{
\,\resolventline[0.7]{}\,
Y_n
\vphantom{Y_Y^Y}
}\cd}}
&&
\gno{\ol{
Y_1
\,\resolventline[0.8]{}\,
Y_2
\cd
\,\resolventline[0.8]{}\,
Y_n
}}
\equiv
\gno{\ol{
\,\,
Y_1
\pr{
\,\resolventline[0.7]{}\,
Y_2
\pr{
\cd
\pr{
\,\resolventline[0.7]{}\,
Y_n
\vphantom{Y_Y^Y}
}\cd}}
}}
\end{align}
This definition also specifies the interpretation rule for graphs with resolvent lines, which are formally defined below.
\end{dfn}
\begin{cor}\label{cor:wicks-theorem-for-pt}
\thmtitle{Wick's theorem for perturbation theory}
\thmstatement{
$
YR_0Y_1\cd R_0 Y_m
\kt{\F}
=
\pr{
\gno{
Y
\,\resolventline[0.8]{}\,
Y_1
\cd
\,\resolventline[0.8]{}\,
Y_m
}
+
\gno{\ol{
Y
\,\resolventline[0.8]{}\,
Y_1
\cd
\,\resolventline[0.8]{}\,
Y_m
}}
}
\kt{\F}
$%
}%
\vspace{5pt}
\thmproof{
This follows directly from Wick's theorem and definition~\ref{dfn:resolvent-line}.
}
\end{cor}
\begin{dfn}\label{dfn:resolvent-graph}
\thmtitle{Resolvent graph}
A \textit{resolvent graph} represents a normal-ordered product of operators and resolvents.
Graphs with disconnected parts that don't share any resolvent lines are considered products of separate resolvent graphs.
Vertical spaces between resolvent lines in a resolvent graph are termed \textit{levels}, which are numbered from the bottom starting at zero.
Therefore, an operator lies in the $k\eth$ level if there are $k$ resolvent lines below it.
Formally, then, an \textit{$m$-level resolvent graph} $G(\rh,m)\equiv(G,\rh,m)$ associates each operator $o$ in $G$ with a specific level $\rh(o)=\rh_o$ in $\mb{Z}_m=\{0,1,\ld,m-1\}$ through the \textit{level map} $\rh$.\,\footnote{
Note that an $m$-level resolvent graph contains $m-1$ resolvents.
}
Therefore, each line $l$ in $G$ crosses resolvents
$
\mr{min}(\rh_{h(l)},\rh_{t(l)}) + 1
$
through
$
\mr{max}(\rh_{h(l)},\rh_{t(l)})
$.
\end{dfn}
\begin{ex}
In diagram notation, $\Y\ord{1}$ and $E_\mr{c}\ord{2}$ can be expressed as follows.
\begin{align}
\Y\ord{1}
=
\diagram[bottom]{
\draw
(0,-0.5)
node[circlep] {}
to
++(0.5,0)
node[ddot] (f1) {};
\draw[->-]
(f1)
to
++(-0.25,1);
\draw[-<-]
(f1)
to
++(+0.25,1);
\draw[thick,flexdotted] (0.2,+0.25) to ++(0.6,0);
}
+
\diagram[bottom]{
\interaction{2}{g}{(0,-0.5)}{ddot}{sawtooth};
\draw[->-]
(g1)
to
++(-0.25,1);
\draw[-<-]
(g1)
to
++(+0.25,1);
\draw[->-]
(g2)
to
++(-0.25,1);
\draw[-<-]
(g2)
to
++(+0.25,1);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
\draw[thick,flexdotted] (-0.3,+0.25) to ++(1.6,0);
}
&&
E_\mr{c}\ord{2}
=
\diagram{
%top
\draw
(0,+0.5)
node[circlep] {}
to
++(0.5,0)
node[ddot] (1f1) {};
%bottom
\draw
(0,-0.5)
node[circlep] {}
to
++(0.5,0)
node[ddot] (2f1) {};
\draw[->-=0.4,bend left ] (2f1) to (1f1);
\draw[-<-=0.6,bend right] (2f1) to (1f1);
\draw[thick,flexdotted] (0.2,0) to ++(0.6,0);
}
+
\diagram{
%top
\interaction{2}{1g}{(0,+0.5)}{ddot}{sawtooth};
%bottom
\interaction{2}{2g}{(0,-0.5)}{ddot}{sawtooth};
%lines
\draw[->-=0.4,bend left ] (2g1) to (1g1);
\draw[-<-=0.6,bend right] (2g1) to (1g1);
\draw[->-=0.4,bend left ] (2g2) to (1g2);
\draw[-<-=0.6,bend right] (2g2) to (1g2);
\draw[thick,flexdotted] (-0.3,0) to ++(1.6,0);
}
\end{align}
\end{ex}
\begin{ex}
The expansion for $\Y\ord{2}$ can be evaluated using \cref{cor:wicks-theorem-for-pt}.
Assuming Brillouin's theorem for simplicity,
\begin{align}
\nonumber
\Y\ord{2}
=
R_0
V_\mr{c}
R_0
V_\mr{c}
\kt{\F}
=&\
\nonumber
\diagram[bottom]{
\interaction{2}{1g}{(0,-0.5)}{ddot}{sawtooth};
\node[ddot] (2g2) at (1,0) {};
\draw[-<-] (1g1) to ++(-0.25,+1);
\draw[->-=0.25,->-=0.75] (1g1) to node[midway,ddot] (2g1) {} ++(+0.25,+1);
\draw[sawtooth] (2g1) to (2g2);
\draw[->-,bend left =45] (1g2) to (2g2);
\draw[-<-,bend right=45] (1g2) to (2g2);
\draw[thick,flexdotted] (-0.3,-0.27) to ++(1.6,0);
\draw[thick,flexdotted] (-0.3,+0.35) to ++(0.7,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
}
+
\diagram[bottom]{
\interaction{2}{1g}{(0,-0.5)}{ddot}{sawtooth};
\node[ddot] (2g2) at (1,0) {};
\draw[->-] (1g1) to ++(-0.25,+1);
\draw[-<-=0.25,-<-=0.75] (1g1) to node[midway,ddot] (2g1) {} ++(+0.25,+1);
\draw[sawtooth] (2g1) to (2g2);
\draw[->-,bend left =45] (1g2) to (2g2);
\draw[-<-,bend right=45] (1g2) to (2g2);
\draw[thick,flexdotted] (-0.3,-0.27) to ++(1.6,0);
\draw[thick,flexdotted] (-0.3,+0.35) to ++(0.7,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
}
+
\diagram[bottom]{
\interaction{2}{t}{(0,-0.5)}{ddot}{sawtooth};
\draw[->-=0.25,->-=0.75] (t1) to node[midway,ddot] (g1) {}
++(-0.25,1);
\draw[-<-=0.7] (t1) to ++(+0.25,1);
\draw[->-=0.25,->-=0.75] (t2) to node[midway,ddot] (g2) {}
++(-0.25,1);
\draw[-<-=0.7] (t2) to ++(+0.25,1);
\draw[sawtooth] (g1)--(g2);
\draw[thick,flexdotted] (-0.3,-0.27) to ++(1.6,0);
\draw[thick,flexdotted] (-0.4,+0.35) to ++(1.8,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
}
+
\diagram[bottom]{
\interaction{2}{t}{(0,-0.5)}{ddot}{sawtooth};
\draw[-<-=0.25,-<-=0.75] (t1) to node[midway,ddot] (g1) {}
++(-0.25,1);
\draw[->-=0.7] (t1) to ++(+0.25,1);
\draw[-<-=0.25,-<-=0.75] (t2) to node[midway,ddot] (g2) {}
++(-0.25,1);
\draw[->-=0.7] (t2) to ++(+0.25,1);
\draw[sawtooth] (g1)--(g2);
\draw[thick,flexdotted] (-0.3,-0.27) to ++(1.6,0);
\draw[thick,flexdotted] (-0.4,+0.35) to ++(1.8,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
}
+
\diagram[bottom]{
\interaction{2}{t}{(0,-0.5)}{ddot}{sawtooth};
\interaction{2}{g}{(1,+0.0)}{ddot}{sawtooth};
\draw[->-] (t1) to ++(-0.25,1);
\draw[-<-] (t1) to ++(+0.25,1);
\draw[->-,bend left] (t2) to (g1);
\draw[-<-,bend right] (t2) to (g1);
\draw[->-] (g2) to ++(-0.25,0.5);
\draw[-<-] (g2) to ++(+0.25,0.5);
\draw[thick,flexdotted] (-0.3,-0.27) to ++(1.6,0);
\draw[thick,flexdotted] (-0.4,+0.35) to ++(2.8,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
}
\\&\
\label{eq:second-order-wavefunction-graphical}
+
\diagram[bottom]{
\interaction{2}{1g}{(0,-0.5)}{ddot}{sawtooth};
\interaction{2}{2g}{(1.125,0)}{ddot}{sawtooth};
\draw[-<-] (1g1) to ++(-0.25,1);
\draw[->-] (1g1) to ++(+0.25,1);
\draw[-<-] (1g2) to ++(-0.25,1);
\draw[->-=0.25,->-=0.75] (1g2) to ++(+0.25,1);
\draw[-<-] (2g2) to ++(-0.25,0.5);
\draw[->-] (2g2) to ++(+0.25,0.5);
\draw[thick,flexdotted] (-0.4,-0.27) to ++(2.9,0);
\draw[thick,flexdotted] (-0.4,+0.35) to ++(2.9,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
}
+
\diagram[bottom]{
\interaction{2}{1g}{(0,-0.5)}{ddot}{sawtooth};
\interaction{2}{2g}{(1.125,0)}{ddot}{sawtooth};
\draw[->-] (1g1) to ++(-0.25,1);
\draw[-<-] (1g1) to ++(+0.25,1);
\draw[->-] (1g2) to ++(-0.25,1);
\draw[-<-=0.25,-<-=0.75] (1g2) to ++(+0.25,1);
\draw[->-] (2g2) to ++(-0.25,0.5);
\draw[-<-] (2g2) to ++(+0.25,0.5);
\draw[thick,flexdotted] (-0.4,-0.27) to ++(2.9,0);
\draw[thick,flexdotted] (-0.4,+0.35) to ++(2.9,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
}
+
\diagram[bottom]{
\interaction{2}{1g}{(0,-0.5)}{ddot}{sawtooth};
\interaction{2}{2g}{(2,0)}{ddot}{sawtooth};
\draw[-<-] (1g1) to ++(-0.25,1);
\draw[->-] (1g1) to ++(+0.25,1);
\draw[-<-] (1g2) to ++(-0.25,1);
\draw[->-] (1g2) to ++(+0.25,1);
\draw[-<-] (2g1) to ++(-0.25,0.5);
\draw[->-] (2g1) to ++(+0.25,0.5);
\draw[-<-] (2g2) to ++(-0.25,0.5);
\draw[->-] (2g2) to ++(+0.25,0.5);
\draw[thick,flexdotted] (-0.3,-0.27) to ++(1.6,0);
\draw[thick,flexdotted] (-0.4,+0.35) to ++(3.9,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
}
\\
=&\
\tfr{1}{2}
\sum_{\substack{abc\\ij}}
\F_i^a\,
\fr{
\ol{g}_{aj}^{bc}
\ol{g}_{bc}^{ij}
}{
\mc{E}_a^i
\mc{E}_{bc}^{ij}
}
-
\tfr{1}{2}
\sum_{\substack{ab\\ijk}}
\F_i^a
\fr{
\ol{g}_{jk}^{ib}
\ol{g}_{ab}^{jk}
}{
\mc{E}_a^i
\mc{E}_{ab}^{jk}
}
+
\tfr{1}{2^3}
\sum_{\substack{abcd\\ij}}
\F_{ij}^{ab}
\fr{
\ol{g}_{ab}^{cd}
\ol{g}_{cd}^{ij}
}{
\mc{E}_{ab}^{ij}
\mc{E}_{cd}^{ij}
}
+
\tfr{1}{2^3}
\sum_{\substack{ab\\ijkl}}
\F_{ij}^{ab}
\fr{
\ol{g}_{kl}^{ij}
\ol{g}_{ab}^{kl}
}{
\mc{E}_{ab}^{ij}
\mc{E}_{ab}^{kl}
}
\nonumber
\\&\
+
\sum_{\substack{abc\\ijk}}
\F_{ij}^{ab}
\fr{
\ol{g}_{ac}^{ik}
\ol{g}_{kb}^{cj}
}{
\mc{E}_{ab}^{ij}
\mc{E}_{ac}^{ik}
}
+
\tfr{1}{2^2}
\sum_{\substack{abcd\\ijk}}
\F_{ijk}^{abc}
\fr{
\ol{g}_{ad}^{ij}
\ol{g}_{bc}^{dk}
}{
\mc{E}_{abc}^{ijk}
\mc{E}_{ad}^{ij}
}
-
\tfr{1}{2^2}
\sum_{\substack{abc\\ijkl}}
\F_{ijk}^{abc}
\fr{
\ol{g}_{ab}^{il}
\ol{g}_{lc}^{jk}
}{
\mc{E}_{abc}^{ijk}
\mc{E}_{ab}^{il}
}
+
\tfr{1}{2^4}
\sum_{\substack{abcd\\ijkl}}
\F_{ijkl}^{abcd}
\fr{
\ol{g}_{ab}^{ij}
\ol{g}_{cd}^{kl}
}{
\mc{E}_{abcd}^{ijkl}
\mc{E}_{ab}^{ij}
}
\end{align}
where the operators in the final diagram do not form an equivalent pair because they pass through different resolvent lines.
The third-order contribution to the correlation energy can be evaluated as the complete contractions of $V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}$
\begin{align}
E_\mr{c}\ord{3}
\,{=}\,
\diagram{
%top
\interaction{2}{1g}{(0,+0.5)}{ddot}{sawtooth};
%bottom
\interaction{2}{2g}{(0,-0.5)}{ddot}{sawtooth};
\draw[->-=0.25,->-=0.75, bend left]
(2g1)
to
node[midway,ddot] (g1) {}
(1g1);
\draw[-<-=0.65,bend right] (2g1) to (1g1);
\draw[->-=0.25,->-=0.75, bend left]
(2g2)
to
node[midway,ddot] (g2) {}
(1g2);
\draw[-<-=0.65,bend right] (2g2) to (1g2);
\draw[sawtooth] (g1)--(g2);
\draw[thick,flexdotted] (-0.3,-0.3) to ++(1.6,0);
\draw[thick,flexdotted] (-0.3,+0.3) to ++(1.6,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
}
{+}
\diagram{
%top
\interaction{2}{1g}{(0,+0.5)}{ddot}{sawtooth};
%bottom
\interaction{2}{2g}{(0,-0.5)}{ddot}{sawtooth};
\draw[-<-=0.25,-<-=0.75, bend left]
(2g1)
to
node[midway,ddot] (g1) {}
(1g1);
\draw[->-=0.65,bend right] (2g1) to (1g1);
\draw[-<-=0.25,-<-=0.75, bend left]
(2g2)
to
node[midway,ddot] (g2) {}
(1g2);
\draw[->-=0.65,bend right] (2g2) to (1g2);
\draw[sawtooth] (g1)--(g2);
\draw[thick,flexdotted] (-0.3,-0.3) to ++(1.6,0);
\draw[thick,flexdotted] (-0.3,+0.3) to ++(1.6,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
}
{+}
\diagram{
%top
\draw[sawtooth]
(0,+0.5)
node[ddot] (1g1) {}
to
++(2,0)
node[ddot] (1g2) {};
%middle
\interaction{2}{g}{(1,+0.0)}{ddot}{sawtooth};
%bottom
\interaction{2}{2g}{(0,-0.5)}{ddot}{sawtooth};
\draw[->-,bend left ] (2g1) to (1g1);
\draw[-<-,bend right] (2g1) to (1g1);
\draw[->-,bend left ] (2g2) to (g1);
\draw[-<-,bend right] (2g2) to (g1);
\draw[->-,bend left ] (g2) to (1g2);
\draw[-<-,bend right] (g2) to (1g2);
\draw[thick,flexdotted] (-0.3,-0.3) to ++(1.6,0);
\draw[thick,flexdotted] (-0.3,+0.3) to ++(2.6,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
}
{=}
\tfr{1}{2^3}
\sum_{\substack{abcd\\ij}}
\fr{
\ol{g}_{ij}^{ab}
\ol{g}_{ab}^{cd}
\ol{g}_{cd}^{ij}
}{
\mc{E}_{ab}^{ij}
\mc{E}_{cd}^{ij}
}
{+}
\tfr{1}{2^3}
\sum_{\substack{ab\\ijkl}}
\fr{
\ol{g}_{ij}^{ab}
\ol{g}_{kl}^{ij}
\ol{g}_{ab}^{kl}
}{
\mc{E}_{ab}^{ij}
\mc{E}_{ab}^{kl}
}
{+}
\sum_{\substack{abc\\ijk}}
\fr{
\ol{g}_{ij}^{ab}
\ol{g}_{ac}^{ik}
\ol{g}_{kb}^{cj}
}{
\mc{E}_{ab}^{ij}
\mc{E}_{ac}^{ik}
}
\end{align}
which is equivalent to contracting the doubles contributions to $\Y\ord{2}$ with $\tfr{1}{4}\ol{g}_{ij}^{ab}\tl{a}^{ij}_{ab}$.
Note that $E_\mr{c}\ord{m+1}$ depends only on the doubles contribution to $\Y\ord{m}$, but the doubles coefficients themselves may involve triples, quadruples, and higher contributions from wavefunction components of order less than $m$.
\end{ex}
\begin{ex}
Using
${}\ord{m}c_{ab\cd}^{ij\cd}=\ip{\F_{ij\cd}^{ab\cd}|\Y\ord{m}}$, the second order CI coefficients can be determined from eq~\ref{eq:second-order-wavefunction-graphical} by contracting a bare excitation operator with the top of each diagram.
Interpreting these graphs gives the following.
\begin{align*}
{}\ord{2}c_a^i
&=
\tfr{1}{2}
\sum_{\substack{bc\\j}}
\fr{
\ol{g}_{aj}^{bc}
\ol{g}_{bc}^{ij}
}{
\mc{E}_a^i
\mc{E}_{bc}^{ij}
}
-
\tfr{1}{2}
\sum_{\substack{b\\jk}}
\fr{
\ol{g}_{jk}^{ib}
\ol{g}_{ab}^{jk}
}{
\mc{E}_a^i
\mc{E}_{ab}^{jk}
}
\\
{}\ord{2}c_{ab}^{ij}
&=
\tfr{1}{2}
\sum_{\substack{cd}}
\fr{
\ol{g}_{ab}^{cd}
\ol{g}_{cd}^{ij}
}{
\mc{E}_{ab}^{ij}
\mc{E}_{cd}^{ij}
}
+
\tfr{1}{2}
\sum_{\substack{kl}}
\fr{
\ol{g}_{kl}^{ij}
\ol{g}_{ab}^{kl}
}{
\mc{E}_{ab}^{ij}
\mc{E}_{ab}^{kl}
}
+
\op{P}_{(a/b)}^{(i/j)}
\sum_{\substack{c\\k}}
\fr{
\ol{g}_{ac}^{ik}
\ol{g}_{kb}^{cj}
}{
\mc{E}_{ab}^{ij}
\mc{E}_{ac}^{ik}
}
\\
{}\ord{2}c_{abc}^{ijk}
&=
\op{P}_{(a/bc)}^{(ij/k)}
\sum_{\substack{d}}
\fr{
\ol{g}_{ad}^{ij}
\ol{g}_{bc}^{dk}
}{
\mc{E}_{abc}^{ijk}
\mc{E}_{ad}^{ij}
}
-
\op{P}_{(ab/c)}^{(i/jk)}
\sum_{\substack{l}}
\fr{
\ol{g}_{ab}^{il}
\ol{g}_{lc}^{jk}
}{
\mc{E}_{abc}^{ijk}
\mc{E}_{ab}^{il}
}
\\
{}\ord{2}c_{abcd}^{ijkl}
&=
\op{P}_{(ab/cd)}^{(ij/kl)}
\fr{
\ol{g}_{ab}^{ij}
\ol{g}_{cd}^{kl}
}{
\mc{E}_{abcd}^{ijkl}
\mc{E}_{ab}^{ij}
}
\end{align*}
Note that the second order quadruples coefficient is disconnected.
Prop.~\ref{prop:second-order-c4} shows that the second-order quadruples operator is actually a simple product of first-order doubles operators.
This fact was an early motivation for coupled-pair many-electron theory,\footnote{This is the original name for coupled-cluster doubles.} since it justifies approximating
$
\Y_\mr{CIDQ}
=
(1+C_2+C_4)\F
$
by
$
\Y_\mr{CPMET}
=
(1 + C_2 + \tfr{1}{2}C_2^2)\F
$.
\end{ex}
\begin{prop}
\label{prop:second-order-c4}
\thmstatement{
$
{}\ord{2} C_4
=
\tfr{1}{2}
{}\ord{1} C_2^2
$
}
\thmproof{
This follows from rearranging the resolvent denominator.
\begin{align*}
\fr{1}{\mc{E}_{abcd}^{ijkl}\mc{E}_{ab}^{ij}}
+
\fr{1}{\mc{E}_{abcd}^{ijkl}\mc{E}_{cd}^{kl}}
=
\fr{
\mc{E}_{cd}^{kl} + \mc{E}_{ab}^{ij}
}{
\mc{E}_{abcd}^{ijkl}
\mc{E}_{ab}^{ij}
\mc{E}_{cd}^{kl}
}
=
\fr{
1
}{
\mc{E}_{ab}^{ij}
\mc{E}_{cd}^{kl}
}
\implies
{}\ord{2}C_4
=
\pr{\tfr{1}{2}}^4
\sum_{\substack{abcd\\ijkl}}
\tl{a}^{abcd}_{ijkl}
\fr{
\ol{g}_{ab}^{ij}
\ol{g}_{cd}^{kl}
}{
\mc{E}_{abcd}^{ijkl}
\mc{E}_{ab}^{ij}
}
=
\tfr{1}{2}\cdot
\pr{\tfr{1}{2}}^4
\sum_{\substack{abcd\\ijkl}}
\tl{a}^{abcd}_{ijkl}
\fr{
\ol{g}_{ab}^{ij}
\ol{g}_{cd}^{kl}
}{
\mc{E}_{ab}^{ij}
\mc{E}_{cd}^{kl}
}
=
\tfr{1}{2}
{}\ord{1}C_2^2
\end{align*}
}
\end{prop}
\begin{lem}\label{lem:energy-substitution}
\thmtitle{The Energy Substitution Lemma}
\thmstatement{
$\Y\ord{m}$ equals the sum of a ``principal term''
$(R_0V_\mr{c})^m\F$
plus all possible substitutions of adjacent factors $(R_0V_\mr{c})^{r_i}$ in the principal term by $R_0E_\mr{c}\ord{r_i}$.
Each term in the sum is weighted by a sign factor $(-)^k$, where $k$ is the number of substitutions.
}
\thmproof{
See \cref{app:linked-diagram-theorem}.
}
\end{lem}
\begin{ex}
Lemma~\ref{lem:energy-substitution} is consistent with equation~\ref{eq:first-and-second-order-principal-terms}: substitution of the rightmost factors in the principal term leaves a resolvent acting on the reference determinant, and the first-order energy contribution equals zero.
The first non-trivial examples of the energy substitution lemma begin at third order.
\begin{align}
\label{eq:energy-substitution-psi-3}
\Y\ord{3}
=&\
R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}\F
-
R_0E_\mr{c}\ord{2}R_0V_\mr{c}\F
\\
\label{eq:energy-substitution-psi-4}
\Y\ord{4}
=&\
R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}\F
-
R_0E_\mr{c}\ord{2}R_0V_\mr{c}R_0V_\mr{c}\F
-
R_0V_\mr{c}R_0E_\mr{c}\ord{2}R_0V_\mr{c}\F
-
R_0E_\mr{c}\ord{3}R_0V_\mr{c}\F
\\
\nonumber
\Y\ord{5}
=&\
R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}\F
-
R_0E_\mr{c}\ord{2}R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}\F
-
R_0V_\mr{c}R_0E_\mr{c}\ord{2}R_0V_\mr{c}R_0V_\mr{c}\F
\\&\
\nonumber
-
R_0V_\mr{c}R_0V_\mr{c}R_0E_\mr{c}\ord{2}R_0V_\mr{c}\F
+
R_0E_\mr{c}\ord{2}R_0E_\mr{c}\ord{2}R_0V_\mr{c}\F
-
R_0E_\mr{c}\ord{3}R_0V_\mr{c}R_0V_\mr{c}\F
\\&\
\label{eq:energy-substitution-psi-5}
-
R_0V_\mr{c}R_0E_\mr{c}\ord{3}R_0V_\mr{c}\F
-
R_0E_\mr{c}\ord{4}R_0V_\mr{c}\F
\end{align}
\end{ex}
\begin{thm}\label{thm:bracketing-theorem}
\thmtitle{The Bracketing Theorem}
\thmstatement{
$\Y\ord{m}$ equals the principal term plus all possible insertions of nested brackets into the principal term.
Each term in the sum is weighted by $(-)^k$ where $k$ is the total number of brackets.\footnote{
The ``brackets'' here are reference expectation values: $\ip{W}\equiv\ip{\F|W|\F}$.
}
}
\thmproof{
See \cref{app:linked-diagram-theorem}.
}
\end{thm}
\begin{ex}
Equations~\ref{eq:energy-substitution-psi-3} and~\ref{eq:energy-substitution-psi-4} are clearly consistent with \cref{thm:bracketing-theorem}, since $E_\mr{c}\ord{2}{=}\,\ip{V_\mr{c}R_0V_\mr{c}}$ and $E_\mr{c}\ord{3}{=}\,\ip{V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}}$.
\begin{align}
\label{eq:bracketing-psi-3}
\Y\ord{3}
=&\
R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}\F
-
R_0\ip{V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}\F
\\
\Y\ord{4}
=&\
R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}\F
-
R_0\ip{V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}R_0V_\mr{c}\F
-
R_0V_\mr{c}R_0\ip{V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}\F
-
R_0\ip{V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}\F
\\
\intertext{
The first non-vanishing terms with nested brackets appear at fifth order:
}
\nonumber
\Y\ord{5}
=&\
R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}\F
-
R_0\ip{V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}\F
-
R_0V_\mr{c}R_0\ip{V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}R_0V_\mr{c}\F
\\&\
\nonumber
-
R_0V_\mr{c}R_0V_\mr{c}R_0\ip{V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}\F
+
R_0\ip{V_\mr{c}R_0V_\mr{c}}R_0\ip{V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}\F
-
R_0\ip{V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}R_0V_\mr{c}\F
\\&\
-
R_0V_\mr{c}R_0\ip{V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}\F
-
R_0\ip{V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}\F
+
R_0\ip{V_\mr{c}R_0\ip{V_\mr{c}R_0V_\mr{c}}R_0V_\mr{c}}R_0V_\mr{c}\F
\end{align}
which follows from substituting equation~\ref{eq:bracketing-psi-3} into $E_\mr{c}\ord{4}=\ip{\F|V_\mr{c}|\Y\ord{3}}$ in the energy substitution expansion of $\Y\ord{5}$.
\end{ex}
\begin{ex}
Assuming Brillouin's theorem, the simplest non-vanishing term with an inserted bracket appears in $\Y\ord{3}$.
\begin{align*}
R_0
\ip{V_\mr{c}R_0V_\mr{c}}
R_0
V_\mr{c}
\F
=
\diagram{
\interaction{2}{1g}{(0,-0.5)}{ddot}{sawtooth};
\draw[->-] (1g1) to ++(-0.25,1);
\draw[-<-] (1g1) to ++(+0.25,1);
\draw[->-] (1g2) to ++(-0.25,1);
\draw[-<-] (1g2) to ++(+0.25,1);
\interaction{2}{2g}{(2,-0.5)}{ddot}{sawtooth};
\interaction{2}{3g}{(2,+0.5)}{ddot}{sawtooth};
\draw[->-=0.4,bend left ] (2g1) to (3g1);
\draw[-<-=0.6,bend right] (2g1) to (3g1);
\draw[->-=0.4,bend left ] (2g2) to (3g2);
\draw[-<-=0.6,bend right] (2g2) to (3g2);
\draw[thick,flexdotted] (-0.3,-0.25) to ++(1.6,0);
\draw[thick,flexdotted] (-0.4,+0.35) to ++(1.8,0);
\draw[opacity=0] (0.5,-0.5) circle (0.125cm);
\draw[thick,flexdotted] (1.7,0) to ++(1.6,0);
\padborder{5pt};
\draw[double, thick] (current bounding box.south west)--(current bounding box.south east);
\node at (0.5,-1) {remainder};
\node at (2.5,-1) {insertion};
\node[inner sep=0pt] at (-1.5,0) {
\begin{tabular}{c}
level of the\\insertion
\end{tabular}
};
\draw[->] (-2.5,0) to (-0.3,0);
\node[inner sep=0pt] at (+4,+0.4) {$1\rst$\,level};
\node[inner sep=0pt] at (+4,-0.3) {$0\eth$\,level};
}
\end{align*}
\end{ex}
\begin{prop}
\thmtitle{Wigner's (2n+1) rule}
\end{prop}
\newpage
\appendix
\section{Proof of the Linked-Diagram Theorem}\label{app:linked-diagram-theorem}
\begin{ntt}\label{ntt:operator-combinations}
Let
``$Y^m$ choose $Z^k$'', denoted ${}^mC_k(Y:Z)$,
refer to a sum over the $m$ choose $k$ permutations of $Y^{m-k}Z^k$,\,\footnote{For example,
$
{}^4C_2(Y:Z)
=
Y^2Z^2
+
YZYZ
+
YZ^2Y
+
ZY^2Z
+
ZYZY
+
Z^2Y^2
$.
}
where $Y$ and $Z$ are operators that may or may not commute.\,\footnote{
If they do commute, then ${}^mC_k(Y:Z)={m\choose k}Y^{m-k}Z^k$.
}
This defines a generalization of the binomial theorem.
\begin{align}
\label{eq:generalized-binomial-theorem}
(
Y
+
Z
)^m
=
\sum_{k=0}^m
{}^mC_k(Y:Z)
\end{align}
Furthermore, let
$
{}^mC(Y:Z_1,\ld,Z_k)
$
be a sum over permutations of
$
Y^{m-k}
Z_1\cd Z_k
$ that preserve the ordering of the $Z_i$'s.\,\footnote{
For example,
$
{}^4C(Y:Z_1,Z_2)
=
Y^2Z_1Z_2
+
YZ_1YZ_2
+
YZ_1Z_2Y
+
Z_1Y^2Z_2
+
Z_1YZ_2Y
+
Z_1Z_2Y^2
$.
}
When all of the $Z_i$'s equal $Z$, we can write
$
{}^mC(Y:Z_1,\ld,Z_k)
=
{}^mC_k(Y:Z)
$.
\end{ntt}
\begin{prop}
\label{prop:wavefunction-infinite-recursion}
\thmstatement{
$\ds{
\Y(\la)
=
\sum_{m=0}^\infty
\pr{
R_0
(\la V_\mr{c} - E(\la))
}^m
\F
}$
}
\thmproof{
This follows by infinite recursion of equation~\ref{eq:lambda-dependent-recursive-series} with the assumption
$\ds{
\lim_{m\rightarrow\infty}
\pr{
R_0
(\la V_\mr{c} - E(\la))
}^m
\Y(\la)
=
0
}$.
}
\end{prop}
\begin{dfn}\label{dfn:integer-compositions}
\thmtitle{Integer compositions}
The \textit{compositions} of an integer $m$ are the ways of writing $m$ as a sum of positive integers.
The full set of integer compositions of $m$ is given by
$
\mc{C}(m)
=
\mc{C}_1(m)
\cup
\mc{C}_2(m)
\cup
\cd
\cup
\mc{C}_m(m)
$
where
$
\mc{C}_k(m)
=
\{
(r_1,\ld,r_k)\in\mb{N}_{\geq 1}^k
\,|\,
r_1+\cd+r_k
=
m
\}
$
are the integer compositions of $m$ into $k$ parts.
\end{dfn}
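\begin{ex}
For example, the compositions of $m{=}3$ are
$
\mc{C}(3)
=
\{(3)\}
\cup
\{(1,2),(2,1)\}
\cup
\{(1,1,1)\}
$,
and in general $|\mc{C}(m)|=2^{m-1}$, since each composition corresponds to a choice of cut points among the $m-1$ gaps between $m$ unit blocks.
\end{ex}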
\begin{lem}
\label{lem:energy-substitution-proof}
\thmtitle{The Energy Substitution Lemma}
\thmstatement{
$\Y\ord{m}$ equals the sum of a ``principal term''
$(R_0V_\mr{c})^m\F$
plus all possible substitutions of adjacent factors $(R_0V_\mr{c})^{r_i}$ in the principal term by $R_0E_\mr{c}\ord{r_i}$.
Each term in the sum is weighted by a sign factor $(-)^k$, where $k$ is the number of substitutions.
}\vspace{5pt}
\thmproof{
Using equation~\ref{eq:generalized-binomial-theorem} and a double sum identity\footnote{
Reverse double-sum reduction:
$\ds{
\sum_{m=0}^\infty
\sum_{k=0}^m
t_{m-k,k}
=
\sum_{k'=0}^\infty
\sum_{k=0}^\infty
t_{k',k}
}$.
See
\url{http://functions.wolfram.com/GeneralIdentities/12/}.
} in the infinite recursion formula for $\Y(\la)$ gives the following.
{\footnotesize
\begin{align*}
\Y(\la)
=
\sum_{m=0}^\infty
(
R_0
(
\la V_\mr{c}
-
E(\la)
)
)^m
\F
=
\sum_{m=0}^\infty
\sum_{k=0}^m
\la^{m-k}
(-)^k\,\,
{}^mC_k
(
R_0V_\mr{c}:
R_0E(\la)
)
\F
=
\sum_{k'=0}^\infty
\sum_{k=0}^\infty
\la^{k'}
(-)^{k}\,\,
{}^{k'+k}\hspace{-1pt}C_k
(
R_0V_\mr{c}:
R_0E(\la)
)
\F
\end{align*}}%
The $k'=0$ term vanishes because no operator separates $\F$ from the rightmost resolvent and $R_0\F=0$.
Taylor expansion of the energies gives
{\footnotesize
\begin{align*}
\Y(\la)
=&\
\sum_{k=0}^\infty
\sum_{k'=1}^\infty
{\sum_{p_1=1}^\infty}
\cd
{\sum_{p_k=1}^\infty}
\la^{k' + p_1 + \cd + p_k}
(-)^{k}\,\,
{}^{k'+k}\hspace{-1pt}C
(
R_0V_\mr{c}:
R_0E_\mr{c}\ord{p_1},\ld,
R_0E_\mr{c}\ord{p_k}
)
\F
\\
=&\
\sum_{m=1}^\infty
\sum_{k=0}^{m-1}
\sum_{(r_1,\ld,r_{k+1})}^{\mc{C}_{k+1}(m)}
\la^m
(-)^k\,\,
{}^{k+r_1}\hspace{-1pt}C
(
R_0V_\mr{c}:
R_0E_\mr{c}\ord{r_2},\ld,
R_0E_\mr{c}\ord{r_{k+1}}
)
\F
\end{align*}}%
where we have grouped powers of $\la$ using a multi-sum reduction.
Writing the inner sums as a sum over $\mc{C}(m)$ we find
\begin{align}
\Y\ord{m}
=
\left.
\fr{1}{m!}
\pd{^m\Y(\la)}{\la^m}
\right|_{\la=0}
=
\sum_{(r_1,\ld,r_{k+1})}^{\mc{C}(m)}
(-)^k\,\,
{}^{k+r_1}\hspace{-1pt}C
(
R_0V_\mr{c}:
R_0E_\mr{c}\ord{r_2},\ld,
R_0E_\mr{c}\ord{r_{k+1}}
)
\F
\end{align}
which, given \cref{ntt:operator-combinations} and definition~\ref{dfn:integer-compositions}, is an algebraic statement of the proposition, completing the proof.
}
\end{lem}
\begin{thm}
\thmtitle{The Bracketing Theorem}
\thmstatement{
$\Y\ord{m}$ equals the principal term plus all possible insertions of nested brackets into the principal term.
Each term in the sum is weighted by $(-)^k$ where $k$ is the total number of brackets.
}\vspace{5pt}
\thmproof{
The proposition holds for $m=1$ because
$\Y\ord{1}=R_0V_\mr{c}\F$ and there are no possible bracketings.
Assume it holds for $m-1$.
Then by the energy substitution lemma it also holds for $m$ because $E_\mr{c}\ord{r_i}$
equals
$\ip{\F|V_\mr{c}|\Y\ord{r_i-1}}$
which, by our inductive assumption, equals
$\ip{V_\mr{c}(R_0V_\mr{c})^{r_i-1}}$
plus all nested bracketings weighted by appropriate sign factors.
}
\end{thm}
\end{document}
|
|
\section{Frequency mode 14}
\subsection{Spectra}
\subsection{Sideband leakage}
\subsection{Seasonality}
|
|
\begin{algorithm}[H]
\caption{Build Solutions}\label{algo:build-solutions}
\begin{algorithmic}[1]
\REQUIRE $inst: \text{MIP instance}, config: \text{Configuration}$
\STATE {$ur = 0.01;$} \refcomment{sec:usage-ratio}
\STATE {$solutions = empty\_list();$} \label{build-solutions:line:empty-list}
\STATE {$time\_out = config.min\_time; $}
\FOR {$ i\ \textbf{in}\ count(config.count)$}
\STATE {$ submodel = generate\_random\_submodel(inst,\ ur)^{\ref{algo:submodel-build}};$}
\STATE {$ solution, status = solve\_submodel^{\ref{algo:solve-submodel}}(submodel, time\_out); $}
\IF {$ status\ \textbf{is}\ INFEASIBLE $}
\STATE $ur = \sqrt[5]{ur^4};$ \refcomment{eq:ur-infease-grow}
\ELSIF {$status\ \textbf{is}\ LIN\_FEASIBLE$}
\STATE $ur = 1.1 \times ur;$ \refcomment{eq:ur-lin-fease-grow}
\STATE $insert\_solution(solutions, solution, status);$
\ELSIF {$status\ \textbf{is}\ INT\_FEASIBLE$}
\STATE $ur = 0.9 \times ur;$ \refcomment{eq:ur-fease-grow}
\STATE $insert\_solution(solutions, solution, status);$
\ELSIF {$status\ \textbf{is}\ TIMEOUT$}
\IF {$ time\_out < config.max\_time $}
\STATE {$ time\_out = time\_out + 1;$}
\ENDIF
\ENDIF
\ENDFOR
\RETURN $solutions$
\end{algorithmic}
\end{algorithm}
\paragraph{Description} \Cref{algo:build-solutions} is a core component of Feature Kernel. It solves each submodel to determine its feasibility status and then controls
the two main inner parameters of the method: the $Usage\ Ratio$, see \Cref{eq:usage-ratio}, and the timeout, see \Cref{sec:timeout}.
\Cref{algo:build-solutions} requires as input a MIP instance and a configuration. The configuration is just a collection of named values
(e.g. a \href{https://en.wikipedia.org/wiki/Struct_(C_programming_language)}{C struct} or a
\href{https://docs.python.org/3/library/collections.html#namedtuple-factory-function-for-tuples-with-named-fields}{Python namedtuple}),
so it is possible to pass all parameters under a single name. The configuration contains every useful parameter, such as time limits, the submodel count, and the split
criterion.
On line \ref{build-solutions:line:empty-list} the function $empty\_list$ creates a new empty dynamic vector
(i.e. a \href{https://en.wikipedia.org/wiki/List_(abstract_data_type)}{list}) of the appropriate type to store solutions. This list can be used as the first argument of
$insert\_solution$, which under the hood creates a new record and appends it at the end of the list.
Then \Cref{algo:build-solutions} generates and solves the given number of submodels, and for each submodel it checks the output status and acts accordingly:
\begin{itemize}
\item If the model is feasible, the solution is stored and $ur$ is updated as described in \Cref{eq:ur-fease-grow} and \Cref{eq:ur-lin-fease-grow}
\item If the model is infeasible, the solution is empty, so it is ignored, and $ur$ is updated as described in \Cref{eq:ur-infease-grow}
\item If the model ends with a timeout, the solution is empty, so it is ignored, and $timeout$ is updated as described in \Cref{sec:timeout}
\end{itemize}
Once all the submodels have been created and tested, \Cref{algo:build-solutions} returns the list of valid solutions.
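The usage-ratio update rules above can be sketched in Python (a minimal sketch: the status names mirror the pseudocode and the update factors come from the referenced equations; the surrounding submodel generation and solve routines are omitted):

```python
def update_usage_ratio(ur, status):
    """Update the Usage Ratio based on a submodel's solve status.

    Update rules mirror the pseudocode: infeasible submodels grow ur
    as ur^(4/5), linearly-feasible ones grow it by 10%, integer-feasible
    ones shrink it by 10%, and a timeout leaves ur unchanged (the
    timeout parameter grows instead).
    """
    if status == "INFEASIBLE":
        return ur ** 0.8          # ur = (ur^4)^(1/5): moves ur toward 1
    if status == "LIN_FEASIBLE":
        return 1.1 * ur           # grow: linear relaxation was feasible
    if status == "INT_FEASIBLE":
        return 0.9 * ur           # shrink: integer-feasible submodels suffice
    return ur                     # TIMEOUT: ur unchanged

# Starting from the initial value used in the algorithm:
ur = 0.01
ur = update_usage_ratio(ur, "INFEASIBLE")  # grows toward 1 since 0 < ur < 1
```

Note that for $0<ur<1$ the infeasibility rule $ur^{4/5}$ always increases $ur$, which matches the intent of enlarging subsequent submodels after an infeasible draw.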
|
|
% !TeX spellcheck = de_CH
\chapter{Test}
\section{Unit Test}
\section{System Test}
\section{End User Test}
|
|
\documentclass{article}
\usepackage{amsmath}
\usepackage{booktabs}
\usepackage{fullpage}
\usepackage{parskip}
\usepackage{tikz}
\usetikzlibrary{calc, shapes, patterns}
\begin{document}
\section*{\centering{Moran process}}
\subsection*{\centering{Fitness}}
\[
N=3
\text{ and }
A =
\begin{pmatrix}
0 & 3 \\
1 & 2
\end{pmatrix}
\]
\vspace{1cm}
\begin{center}
\begin{tabular}{r|c|c}
\toprule
& \(f(\text{Hawk})\) & \(f(\text{Dove})\) \\
\midrule
& & \\
1 Hawk, 2 Doves & \(0\times 0 + 3\times 2=6\) & \phantom{\(0\times 0 + 3\times 2=6\)} \\
& & \\
\midrule
& & \\
2 Hawks, 1 Dove & & \\
& & \\
\bottomrule
\end{tabular}
\end{center}
\subsection*{\centering{Probabilities}}
\begin{center}
\begin{tabular}{r|r|c|c}
\toprule
& Select & Selection: Birth & Selection: Death \\
\midrule
& & & \\
&Hawk & \(\frac{f(\text{Hawk})}{f(\text{Hawk}) + 2f(\text{Dove})}=\frac{6}{12}\) & \(\frac{1}{3}\)\\
& & & \\
1 Hawk, 2 Doves & & & \\
& & & \\
&Dove & & \\
& & & \\
\midrule
& & & \\
&Hawk & & \\
& & & \\
2 Hawks, 1 Dove & & & \\
& & & \\
&Dove & & \\
& & & \\
\bottomrule
\end{tabular}
\end{center}
\newpage
\section*{\centering{Simulation}}
Use the appropriate dice to simulate 1 Hawk taking over a population of Doves.
Decide what dice you will use to sample birth/death selection at all possible
states:
\begin{center}
\begin{tabular}{r|c|c|c|c}
\toprule
State & Birth: dice used & Select Hawk values & Death: dice used &
Select Hawk values \\
\midrule
& & & & \\
1 Hawk & 6 & \(\{1, 2, 3\}\) & 6 & \(\{1, 2\}\) \\
& & & & \\
\midrule
& & & & \\
2 Hawks & & & & \\
& & & & \\
\bottomrule
\end{tabular}
\end{center}
\subsection*{\centering{Example}}
\begin{center}
\begin{tabular}{r|c|c|c|c|c}
\toprule
State & Birth: dice used & Birth: value rolled & Death: dice used & Death: value rolled & Next state \\
\midrule
& & & & & \\
1 Hawk & 6 & 2 & 6 & 1& 1 Hawk \\
& & (Select Hawk)& & (Select Hawk)& \\
& & & & & \\
1 Hawk & 6 & 3 & 6 & 5& 2 Hawks \\
& & (Select Hawk)& & (Select Dove)& \\
& & & & & \\
2 Hawks & 4 & 4 & 6 & 2& 1 Hawk \\
& & (Select Dove)& & (Select Hawk)& \\
& & & & & \\
1 Hawk & 6 & 4 & 6 & 1& \framebox{0 Hawks} \\
& & (Select Dove)& & (Select Hawk)& \\
\bottomrule
\end{tabular}
\end{center}
\subsection*{\centering{Activity}}
Every time you arrive at 0 \textbf{or} 3 Hawks:
\begin{enumerate}
\item Stop;
\item Circle your final state;
\item Draw a line in the table (next page);
\item Start again.
\end{enumerate}
\newpage
\begin{center}
\begin{tabular}{r|c|c|c|c|c}
\toprule
Current state & Birth: dice used & Birth: value rolled & Death: dice used & Death: value rolled & Next state \\
\midrule
1 Hawk & 6 & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
& & & & & \\
\end{tabular}
\end{center}
\section*{\centering{Computation}}
\begin{center}
\begin{tikzpicture}[
dove/.style={circle, pattern=north west lines, pattern color=blue!70, draw=blue},
hawk/.style={circle, pattern=north east lines, pattern color=red!70, draw=red},
]
\node (N1) at (1, 1) [dove] {D};
\node (N2) at (0, 0) [dove] {D};
\node (N3) at (2, 0) [dove] {D};
\draw [thick, <-] ($(N1)!0.5!(N2) + (2.5, -.25)$) -- node [below] {\(p_{10}\)} ++(1, 0);
\node (N1) at ($(N1) + (5, 0)$) [dove] {D};
\node (N2) at ($(N2) + (5, 0)$) [dove] {D};
\node (N3) at ($(N3) + (5, 0)$) [hawk] {H};
\draw [thick, ->] ($(N1)!0.5!(N2) + (2.5, -.25)$) -- node [below] {\(p_{12}\)} ++(1, 0);
\draw [thick, <-] ($(N1)!0.5!(N2) + (2.5, .25)$) -- node [above] {\(p_{21}\)} ++(1, 0);
\node (N1) at ($(N1) + (5, 0)$) [dove] {D};
\node (N2) at ($(N2) + (5, 0)$) [hawk] {H};
\node (N3) at ($(N3) + (5, 0)$) [hawk] {H};
\draw [thick, ->] ($(N1)!0.5!(N2) + (2.5, .25)$) -- node [above] {\(p_{23}\)} ++(1, 0);
\node (N1) at ($(N1) + (5, 0)$) [hawk] {H};
\node (N2) at ($(N2) + (5, 0)$) [hawk] {H};
\node (N3) at ($(N3) + (5, 0)$) [hawk] {H};
\end{tikzpicture}
\end{center}
\vspace{1cm}
Which gives:
\[
p_{10}=\frac{6}{12}\frac{1}{3}=\frac{1}{6}\qquad
p_{12}=\phantom{\frac{6}{12}\frac{2}{3}=\frac{1}{3}}\qquad
p_{21}=\phantom{\frac{2}{8}\frac{2}{3}=\frac{1}{6}}\qquad
p_{23}=\phantom{\frac{6}{8}\frac{1}{3}=\frac{1}{4}}
\]
\end{document}
|
|
\chapter{Programs}
\label{ch:programs}
In the following we give a brief description of the main programs used to
implement the GNSS analysis used in the platform. Note that all of the programs
depend on the \nameref{ch:pybern} module, hence \ul{make sure that you have
the latest version of pybern installed} (see \nameref{sec:pybern-installation}).
A complete list of the available programs can be found in the \verb|bin| folder.
The following remarks apply to the programs:
\begin{itemize}
\item all of the programs are standalone
\item all of the programs include a \href{https://en.wikipedia.org/wiki/Shebang_(Unix)}{shebang} line,
hence you should be able to run them directly, omitting the interpreter (i.e.
\verb|syncwbern52.py| is equivalent to \verb|python syncwbern52.py|)
\item all of the programs include a help message, triggered by \verb|-h| or
\verb|--help|
\item all of the programs can be run in \emph{verbose} mode, printing debug messages, via
the \verb|--verbose| switch
\item for Python programs, no version is enforced; the shebang calls the default
interpreter/version. Should you want to run a program with a different version/interpreter,
you are free to do so
\end{itemize}
\section{syncwbern52}
\label{sec:programs-syncwbern52}
\verb|syncwbern52| synchronizes (i.e. mirrors) a local folder, which should normally
be the local \verb|/GEN|\index{GEN} folder, with the (remote) AIUB \verb|/GEN| folder
located at \url{ftp.aiub.unibe.ch/BSWUSER52/GEN}. This process
\ul{excludes all *.EPH files}, which are system-dependent, binary files.
See the help message for more information.
\section{get\_vmf1\_grd}
\label{sec:programs-get-vmf1-grd}
\verb|get_vmf1_grd| downloads VMF1\index{VMF1} grid files to be used in the GNSS analysis
for troposphere estimation/mitigation. Grid files are downloaded from
\url{https://vmf.geo.tuwien.ac.at/trop_products/GRID/2.5x2/VMF1}.
The script allows for downloading both final and forecast grid files, but note that
for the latter you will need \nameref{ch:credentials}. See the
help message for more information.
\subsection{Examples}
\label{ssec:programs-get-vmf1-grd-examples}
Download final VMF1\index{VMF1} grid files, for the date 01/01/2021, merge them (all four) to
a file named \verb|VMF_01012021.GRD| and delete the individual hourly files.\\
\verb|$>get_vmf1_grd.py -y 2021 -d 1 -m VMF_01012021.GRD --del-after-merge|\\
\\
Download (forecast) VMF1\index{VMF1} grid files for today, merge them (all four) to a file
named \verb|VMF_today.GRD| and delete the individual hourly files. This will need
credentials to access the forecast VMF1 files, see \nameref{ch:credentials}.\\
\verb|$>date +"%Y-%j" ## get year and doy in Unix-like systems|\\
\verb|2021-268|\\
\verb|## call the program with today's date|\\
\verb|$>get_vmf1_grd.py -y 2021 -d 268 -f -c dso_credentials -m VMF_today.GRD --del-after-merge|\\
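The \verb|date +"%Y-%j"| call above is Unix-specific; an equivalent way to obtain the year and day-of-year pair in Python (useful on systems without a Unix \verb|date| command, and producing the same \verb|YYYY-DDD| format) is:

```python
import datetime

# Print today's year and zero-padded day of year, e.g. "2021-268",
# matching the output format of `date +"%Y-%j"`.
today = datetime.date.today()
doy = today.timetuple().tm_yday
print(f"{today.year}-{doy:03d}")
```

The two values printed are exactly what the \verb|-y| and \verb|-d| switches expect.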
\section{getdcb}
\label{sec:programs-getdcb}
\verb|getdcb| is a program that allows downloading of GPS/GNSS Code Differential
Bias (aka DCB\index{DCB}) files published by AIUB. These files can contain
numerous combinations of biases related to different satellite systems, code observables,
day of estimation, etc. An exhaustive description is thus not possible. The program
can download any of them, given the correct combination of command line parameters
by the user.
To check the available DCB files that can be downloaded, you can pass in the \verb|-l| or
\verb|--list-products| switch and the program will print out a list of the available
DCBs and how to target them (it will download nothing though).
\subsection{Examples}
\label{ssec:programs-getdcb}
Let's say we want to download the file \verb|CODE_FULL.DCB|, which contains biases
for both GPS and GLONASS between the code observables P1P2, P1C1 and P2C2. The easiest way
is to first list our options:
\verb|$>getdcb.py -l|\\
This will print out a list of files and directions; we target the ones of interest, aka:
\begin{verbatim}
_Available files in FTP____________________________________________________
[...]
CODE_FULL.DCB Combination of P1P2.DCB, P1C1.DCB (GPS satellites),
P1C1_RINEX.DCB (GLONASS satellites), and P2C2_RINEX.DCB
[...]
_Arguments for Products____________________________________________________
[9] type=current, span=monthly, obs=full | CODE_FULL.DCB (merged [2], [3], [6] and [7])
\end{verbatim}
So, we can now run:\\
\verb|$>getdcb.py -y 2021 -d 268 --type current --time-span monthly --code-type full|\\
\\
Let's say we want the combined GPS and GLONASS monthly P1-P2 DCB values for a past
date, aka 01/01/2021, that is the first month of 2021. We list again (\verb|$>getdcb.py -l|)
and see that we have to run (for the \verb|P1P2yymm_ALL.DCB.Z| file):\\
\verb|getdcb.py -y 2021 -d 1 --type final --time-span monthly --code-type p1p2all|\\
\emph{Note that in both above examples we could have skipped the} \verb|--time-span monthly| \emph{option,
since this is the default (you can see this in the help message). We only add it here for completeness.}
\section{geterp}
\label{sec:programs-geterp}
\verb|geterp| is a program that allows downloading of Earth Rotation Parameter information
files (aka ERP\index{ERP}), published by AIUB. Users can choose between a variety of
such files (depending e.g. on the product type --final, rapid, etc-- and the time-span
they cover). The program can download any of them, given the correct combination of
command line parameters by the user.
To check the available ERP files that can be downloaded, you can pass in the \verb|-l| or
\verb|--list-products| switch and the program will print out a list of the available
ERPs and how to target them (it will download nothing though).
\subsection{Examples}
\label{ssec:programs-geterp}
Let's say we want to download weekly ERP for a past date, namely 01/01/2021; we
can run the command:\\
\verb|geterp.py -y 2021 -d 1 --time-span weekly -t final|\\
which will download the file \verb|COD21387.ERP.Z|. If instead we want the respective
(final) file but with daily (not weekly) records, we can run:\\
\verb|geterp.py -y 2021 -d 1 --time-span daily -t final|\\
which will download the file \verb|COD21385.ERP.Z|.
\\
If the user wants to download a file for a date that is too close to the current one (which
means that the final solution is probably not yet available), they can choose a
number of solution types; the program will try each solution type and, if the respective
file is available, it will be downloaded. E.g.\\
\verb|geterp.py -y 2021 -d 266 -t final,final-rapid,early-rapid,ultra-rapid,prediction|\\
which will try (in turn) to download files for \emph{final, final-rapid, early-rapid, ultra-rapid}
and finally \emph{prediction} solutions. Obviously, users can target each of these
types individually, using the target solution as the option to the \verb|type| switch.
\section{getion}
\label{sec:programs-getion}
\verb|getion| is a program that allows downloading of ionospheric information
files published by AIUB. Ionospheric information files are published in one of two
formats, namely IONEX\index{IONEX} format (normally with extension \verb|INX| or
\verb|YYI|, with YY the two-digit year) or an internal Bernese format (aka
\verb|ION|\index{ION}). Users can choose the format they prefer via the \verb|--format|
switch. A variety of ionospheric products is made available by AIUB, and the ones
available for downloading can be listed using the \verb|-l| switch.
\subsection{Examples}
\label{ssec:programs-getion}
Let's say we want to download ionospheric information in Bernese format for a past date,
namely 01/01/2021; we can run the command:\\
\verb|getion.py -y 2021 -d 1 --format bernese --type final|\\
which will download the file \verb|COD21385.ION.Z|.
\\
If the user wants to download a file for a date that is too close to the current one (which
means that the final solution is probably not yet available), they can choose a
number of solution types; the program will try each solution type and, if the respective
file is available, it will be downloaded. E.g.\\
\verb|getion.py -y 2021 -d 266 -f bernese -t final,rapid,ultra-rapid,prediction|\\
which will try (in turn) to download files for \emph{final, rapid, ultra-rapid} and
finally \emph{prediction} solutions. Obviously, users can target each of these types individually,
using the target solution as the option to the \verb|type| switch.
|
|
\section{Sample metadata}
\subsection{General Sample Info}
This section contains information regarding where samples were acquired from. This corresponds to the ``Sample'' label in Table \ref{tab:sample_metadata}.
\begin{enumerate}
\item NA12878 - female of European ancestry; purchased through \url{https://www.coriell.org/0/Sections/Search/Sample_Detail.aspx?Ref=NA12878&Product=DNA}
\item HG002-HG004 - son and parents of Eastern Europe Ashkenazi Jewish ancestry; purchased through \url{https://www-s.nist.gov/srmors/view_detail.cfm?srm=8392}
\item HG005 - male of Chinese ancestry; purchased through \url{https://www-s.nist.gov/srmors/view_detail.cfm?srm=8393}
\end{enumerate}
\subsection{Samples}
This section contains information regarding the specific samples used for analysis.
This data is automatically pulled from a sample JSON file containing sample names, sample types (i.e. which GIAB sample), and how the sample was prepared.
Table \ref{tab:sample_metadata} contains the list of metadata as pulled from the JSON.
Note: HG006 and HG007 were used only for testing.
% NOTE TO USER: if samples are added, make sure the sample JSON is updated or it won't get caught
\begin{longtable}{|l|r|r|r|}
\hline
{{ FORMAT.HEADER_COLOR }}\textbf{Library}
{% for fk in FORMAT.METADATA_ORDER %}
&{{ '\\textbf{'+FORMAT.METADATA.get(fk, {}).get('label', fk)+'}' }}
{% endfor %}
\\ \hline
\endhead
{% for sample in sorted(METADATA.keys()) %}
{{ sample.replace('_', '\\_') }}
{% for fk in FORMAT.METADATA_ORDER %}
&{{ FORMAT.METADATA.get(fk, {}).get('format', FORMAT.METADATA['default']['format']).format(METADATA.get(sample, {}).get(fk, 'NOT IN METADATA')) }}
{% endfor %}
\\ \hline
{% endfor %}
\caption{This table contains metadata regarding each sequenced sample. The GIAB sample label and prep type are currently the two pieces of tracked metadata regarding each sample.}
\label{tab:sample_metadata}
\end{longtable}
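As a hedged illustration of the metadata flow (the field names and JSON structure below are assumptions for illustration; the real sample JSON schema may differ), the template above effectively iterates over a mapping like this:

```python
import json

# Hypothetical sample JSON mapping library names to tracked metadata.
sample_json = '''
{
  "HG002_run1": {"sample": "HG002", "prep": "PCR-free"},
  "HG005_run1": {"sample": "HG005", "prep": "PCR-free"}
}
'''

METADATA = json.loads(sample_json)

# Emit one table row per library, in sorted order as in the template loop.
rows = [(lib, METADATA[lib]["sample"], METADATA[lib]["prep"])
        for lib in sorted(METADATA)]
for lib, sample, prep in rows:
    print(lib, sample, prep)
```

If samples are added, the JSON must be updated accordingly or the new rows will not appear in the table.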
|
|
%% Copyright (C) 2005, 2011 Carnegie Mellon University and others.
%%
%% The first version of this file was contributed to the Ipopt project
%% on Aug 1, 2005, by Yoshiaki Kawajiri
%% Department of Chemical Engineering
%% Carnegie Mellon University
%% Pittsburgh, PA 15213
%%
%% Since then, the content of this file has been updated significantly by
%% Carl Laird and Andreas Waechter IBM
%% Stefan Vigerske GAMS
%%
%%
%% $Id$
%%
\documentclass[10pt]{article}
\setlength{\textwidth}{6.3in} % Text width
\setlength{\textheight}{9.4in} % Text height
\setlength{\oddsidemargin}{0.1in} % Left margin for even-numbered pages
\setlength{\evensidemargin}{0.1in} % Left margin for odd-numbered pages
\setlength{\topmargin}{-0.5in} % Top margin
\renewcommand{\baselinestretch}{1.1}
\usepackage{amsfonts}
\usepackage{amsmath}
\PassOptionsToPackage{hidelinks,pdftitle={Ipopt documentation},pdflang=en}{hyperref}
\usepackage{url}
\usepackage{html}
\usepackage{xspace}
%\usepackage{showlabels}
\newcommand{\RR}{\mathbb{R}}
\newcommand{\Ipopt}{\textsc{Ipopt}\xspace}
\newcommand{\JIpopt}{\textsc{JIpopt}\xspace}
\newcommand{\ipoptr}{\texttt{ipoptr}\xspace}
\newcommand{\sIpopt}{\textsc{sIpopt}\xspace}
\newcommand{\Matlab}{\textsc{MATLAB}\xspace}
%\htmltitle{Ipopt documentation}
\begin{document}
\title{Introduction to \Ipopt:\\
A tutorial for downloading, installing, and using \Ipopt}
\author{Revision number of this document: $Revision$}
%\date{\today}
\maketitle
\begin{abstract}
This document is a guide to using \Ipopt 3.12. It includes
instructions on how to obtain and compile \Ipopt, a description of
the interface, user options, etc., as well as a tutorial on how to
solve a nonlinear optimization problem with \Ipopt.
\end{abstract}
\section*{History of this document}
The initial version of this document was created by Yoshiaki
Kawajiri\footnote{then Department of Chemical Engineering, Carnegie Mellon
University, Pittsburgh PA} as a course project for \textit{47852
Open Source Software for Optimization}, taught by Prof. Fran\c{c}ois
Margot at Tepper School of Business, Carnegie Mellon University.
After this, Carl Laird\footnote{then Department of Chemical
Engineering, Carnegie Mellon University, Pittsburgh PA} has added
significant portions, including the very nice tutorials. The current
version is maintained by Stefan Vigerske\footnote{GAMS Software GmbH} and
Andreas W\"achter\footnote{Department of Industrial Engineering and
Management Sciences, Northwestern University}.
\tableofcontents
\vspace{\baselineskip}
\begin{small}
\noindent
The following names used in this document are trademarks or registered
trademarks: AMPL, IBM, Intel, Matlab, Microsoft, MKL, Visual Studio C++,
Visual Studio C++ .NET
\end{small}
\section{Introduction}
\Ipopt (\underline{I}nterior \underline{P}oint \underline{Opt}imizer,
pronounced ``Eye--Pea--Opt'') is an open source software package for
large-scale nonlinear optimization. It can be used to solve general
nonlinear programming problems of the form
%\begin{subequations}\label{NLP}
\begin{eqnarray}
\min_{x\in\RR^n} &&f(x) \label{eq:obj} \\
\mbox{s.t.} \; &&g^L \leq g(x) \leq g^U \label{eq:constraints}\\
&&x^L \leq x \leq x^U, \label{eq:bounds}
\end{eqnarray}
%\end{subequations}
where $x \in \RR^n$ are the optimization variables (possibly with
lower and upper bounds, $x^L\in(\RR\cup\{-\infty\})^n$ and
$x^U\in(\RR\cup\{+\infty\})^n$), $f:\RR^n\longrightarrow\RR$ is the
objective function, and $g:\RR^n\longrightarrow \RR^m$ are the general
nonlinear constraints. The functions $f(x)$ and $g(x)$ can be linear
or nonlinear and convex or non-convex (but should be twice
continuously differentiable). The constraints, $g(x)$, have lower and
upper bounds, $g^L\in(\RR\cup\{-\infty\})^m$ and
$g^U\in(\RR\cup\{+\infty\})^m$. Note that equality constraints of the
form $g_i(x)=\bar g_i$ can be specified by setting
$g^L_{i}=g^U_{i}=\bar g_i$.
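As a small illustration (not taken from the \Ipopt sources), consider a
problem with $m=2$ constraints, where $g_1(x) = x_1 + x_2^2$ must equal $4$
and $g_2(x) = x_1 x_2$ must be at least $0$ with no upper bound. In the
notation of (\ref{eq:constraints}) this corresponds to
\[
g^L = \begin{pmatrix} 4 \\ 0 \end{pmatrix}, \qquad
g^U = \begin{pmatrix} 4 \\ +\infty \end{pmatrix},
\]
where the first component uses $g^L_1 = g^U_1 = 4$ to express the equality.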
\subsection{Mathematical Background}
\Ipopt implements an interior point line search filter method that
aims to find a local solution of (\ref{eq:obj})-(\ref{eq:bounds}). The
mathematical details of the algorithm can be found in several
publications
\cite{NocWaeWal:adaptive,WaechterPhD,WaecBieg06:mp,WaeBie05:filterglobal,WaeBie05:filterlocal}.
\subsection{Availability}
The \Ipopt package is available from COIN-OR
(\url{http://www.coin-or.org}) under the EPL (Eclipse Public License)
open-source license and includes the source code for \Ipopt. This
means it is available free of charge, including for commercial purposes.
However, if you give away software including \Ipopt code (in source
code or binary form) and you made changes to the \Ipopt source code,
you are required to make those changes public and to clearly indicate
which modifications you made. After all, the goal of open source
software is the continuous development and improvement of software.
For details, please refer to the Eclipse Public License.
Also, if you are using \Ipopt to obtain results for a publication, we
politely ask you to point out in your paper that you used \Ipopt, and
to cite the publication \cite{WaecBieg06:mp}. Writing high-quality
numerical software takes a lot of time and effort, and usually does
not translate into a large number of publications; therefore, we
believe this request is only fair :). We also have space at the
\Ipopt project home page where we list publications, projects, etc.,
in which \Ipopt has been used. We would be very happy to hear about
your experiences.
\subsection{Prerequisites}\label{sec:prerequisites}
In order to build \Ipopt, some third party components are required:
\begin{itemize}
\item BLAS (Basic Linear Algebra Subroutines). Many vendors of
compilers and operating systems provide precompiled and optimized
libraries for these dense linear algebra subroutines. You can also
get the source code for a simple reference implementation from {\tt
www.netlib.org} and have the \Ipopt distribution compile it
automatically. However, it is strongly recommended to use an
optimized BLAS implementation; for large problems this can make a
runtime difference of an order of magnitude!
Examples for efficient BLAS implementations are:
\begin{itemize}
\item From hardware vendors:
\begin{itemize}
\item ACML (AMD Core Math Library) by AMD
\item ESSL (Engineering Scientific Subroutine Library) by IBM
\item MKL (Math Kernel Library) by Intel
\item Sun Performance Library by Sun
\end{itemize}
\item Generic:
\begin{itemize}
\item Atlas (Automatically Tuned Linear Algebra Software)
\item GotoBLAS
\end{itemize}
\end{itemize}
You can find more information about these libraries on the web.
Note: BLAS libraries distributed with Linux are usually not
optimized.
\item LAPACK (Linear Algebra PACKage). Also for LAPACK, some vendors
offer precompiled and optimized libraries. But like with BLAS, you
can get the source code from \url{http://www.netlib.org} and have the
\Ipopt distribution compile it automatically.
Note that currently LAPACK is only required if you intend to use the
quasi-Newton options in \Ipopt. You can compile the code without
LAPACK, but an error message will then occur if you try to run the
code with an option that requires LAPACK. Currently, the LAPACK
routines that are used by \Ipopt are only {\tt DPOTRF}, {\tt
DPOTRS}, and {\tt DSYEV}.
Note: LAPACK libraries distributed with Linux are usually not
optimized.
\item A sparse symmetric indefinite linear solver. \Ipopt needs
to obtain the solution of sparse, symmetric, indefinite linear
systems, and for this it relies on third-party code.
Currently, the following linear solvers can be used:
\begin{itemize}
\item MA27 from the HSL Mathematical Software Library\\ (see \url{http://www.hsl.rl.ac.uk}).
\item MA57 from the HSL Mathematical Software Library\\ (see \url{http://www.hsl.rl.ac.uk}).
\item HSL\_MA77 from the HSL Mathematical Software Library\\ (see \url{http://www.hsl.rl.ac.uk}).
\item HSL\_MA86 from the HSL Mathematical Software Library\\ (see \url{http://www.hsl.rl.ac.uk}).
\item HSL\_MA97 from the HSL Mathematical Software Library\\ (see \url{http://www.hsl.rl.ac.uk}).
\item MUMPS (MUltifrontal Massively Parallel sparse direct Solver)\\
(see \url{http://graal.ens-lyon.fr/MUMPS})
\item The Parallel Sparse Direct Solver (PARDISO)\\ (see \url{http://www.pardiso-project.org}).%\\
%Note: The Pardiso version in Intel's MKL library does not yet
%support the features necessary for \Ipopt.
\item The Watson Sparse Matrix Package (WSMP)\\ (see \url{http://researcher.ibm.com/view_project.php?id=1426})
\end{itemize}
You should include at least one of the linear solvers above in order
to run \Ipopt, and if you want to be able to switch easily between
different alternatives, you can compile \Ipopt with all of them.
The \Ipopt library also has mechanisms to load the linear solvers MA27,
MA57, HSL\_MA77, HSL\_MA86, HSL\_MA97, and Pardiso from a shared library at
runtime, if the library has not been compiled with them (see
Section~\ref{sec:linear_solver_loader}).
\textbf{NOTE: The solution of the linear systems is a central
ingredient in \Ipopt and the optimizer's performance and
robustness depends on your choice. The best choice depends on
your application, and it makes sense to try different options.
Most of the solvers also rely on efficient BLAS code (see above),
so you should use a good BLAS library tailored to your system.
Please keep this in mind, particularly when you are comparing
\Ipopt with other optimization codes.}
If you are compiling MA57, HSL\_MA77, HSL\_MA86, HSL\_MA97, or MUMPS within
the \Ipopt build
system, you should also include the METIS linear system ordering package.
Interfaces to other linear solvers might be added in the future; if
you are interested in contributing such an interface please contact
us! Note that \Ipopt requires that the linear solver is able to
provide the inertia (number of positive and negative eigenvalues) of
the symmetric matrix that is factorized.
\item Furthermore, \Ipopt can also use the HSL package MC19
for scaling of the linear systems before they are passed to the
linear solver. This may be particularly useful if \Ipopt is used
with MA27 or MA57. However, it is not required to have MC19 to
compile \Ipopt; if this routine is missing, the scaling is never
performed.
%\footnote{There are more recent scaling routines in the
% HSL, but they have not (yet) been integrated. Contributions are
% welcome!}.
\item ASL (AMPL Solver Library). The source code is available at {\tt
www.netlib.org}, and the \Ipopt makefiles will automatically
compile it for you if you put the source code into a designated
space. NOTE: This is only required if you want to use \Ipopt from
AMPL and want to compile the \Ipopt AMPL solver executable.
\end{itemize}
For more information on third-party components and how to obtain them,
see Section~\ref{ExternalCode}.
Since the \Ipopt code is written in C++, you will need a C++ compiler
to build the \Ipopt library. We tried very hard to write the code to
be as platform- and compiler-independent as possible.
In addition, the configuration script also searches for a Fortran
compiler, since some of the dependencies above are written in Fortran.
If all third party dependencies are available as self-contained
libraries, those compilers are in principle not necessary. Also, it
is possible to use the Fortran-to-C compiler {\tt f2c} from
\url{http://www.netlib.org/f2c} to convert Fortran 77 code to C, and compile the
resulting C files with a C compiler and create a library containing
the required third party dependencies. %We have tested and used this
%in connection with the Microsoft Visual C++ compiler, and instructions
%on how to use it in this context are given below.
When using GNU compilers, we recommend you use the same version numbers for {\tt gcc}, {\tt g++}, and {\tt gfortran}. For {\tt gfortran} specifically, we recommend versions newer than 4.5.2 (versions 4.5.1, 4.5.2, and before 4.2.0 are known to have bugs that caused issues with some of the newer Fortran 90 HSL linear solvers).
\subsection{How to use \Ipopt}
If desired, the \Ipopt distribution generates an executable for the
modeling environment AMPL. You can also link your problem
statement with \Ipopt using interfaces for C++, C, or Fortran.
\Ipopt can be used with most Linux/Unix environments, and on Windows
using Visual Studio .NET, Cygwin or MSYS/MinGW. In
Section~\ref{sec:tutorial-example} this document demonstrates how to
solve problems using \Ipopt. This includes installation and
compilation of \Ipopt for use with AMPL as well as linking with your
own code.
Additionally, the \Ipopt distribution includes interfaces for
\begin{itemize}
\item {\tt CUTEr}\footnote{see \url{http://cuter.rl.ac.uk/cuter-www}} (for
solving problems modeled in SIF),
\item {\tt Java}, which allows you to use \Ipopt from Java, see the files in the
\texttt{Ipopt/contrib/JavaInterface} directory,
\item {\tt Matlab} (mex interface), which allows you to use \Ipopt from Matlab, see
\centerline{\url{https://projects.coin-or.org/Ipopt/wiki/MatlabInterface},}
\item and the {\tt R} project for statistical computing, see the files in the
\texttt{Ipopt/contrib/RInterface} directory.
\end{itemize}
There is also software that facilitates use of \Ipopt maintained by other people, among them are:
\begin{itemize}
\item ADOL-C (automatic differentiation)
ADOL-C facilitates the evaluation of first and higher derivatives of
vector functions that are defined by computer programs written in C or C++.
It comes with examples that show how to use it in connection with \Ipopt, see
\url{https://projects.coin-or.org/ADOL-C}.
\item AIMMS (modeling environment)
The AIMMSlinks project on COIN-OR, maintained by Marcel Hunting,
provides an interface for \Ipopt within the AIMMS modeling tool, see
\url{https://projects.coin-or.org/AIMMSlinks}.
\item APMonitor
MATLAB, Python, and Web Interface to Ipopt for Android, Linux, MacOS X,
and Windows, see \url{http://apmonitor.com}.
\item CasADi
CasADi is a symbolic framework for automatic differentiation and
numeric optimization and comes with \Ipopt, see
\url{http://casadi.org}.
\item CppAD (automatic differentiation)
Given a C++ algorithm that computes function values, CppAD generates an
algorithm that computes corresponding derivative values (of arbitrary
order using either forward or reverse mode).
It comes with an example that shows how to use it in connection with \Ipopt, see
\url{https://projects.coin-or.org/CppAD}.
\item GAMS (modeling environment)
The GAMSlinks project on COIN-OR includes a GAMS interface for \Ipopt, see
\url{https://projects.coin-or.org/GAMSlinks}.
\item JuliaOpt
Julia is a high-level, high-performance dynamic programming language for technical computing.
JuliaOpt, see \url{http://juliaopt.org}, is an umbrella group for Julia-based optimization-
related projects. It includes the algebraic modeling language JuMP
(\url{https://github.com/JuliaOpt/JuMP.jl}) and an interface to \Ipopt
(\url{https://github.com/JuliaOpt/Ipopt.jl}).
\item mexIPOPT
A rewrite of the above mentioned MATLAB Interface:
\url{https://github.com/ebertolazzi/mexIPOPT}
\item MADOPT (Modelling and Automatic Differentiation for Optimisation)
Light-weight C++ and Python modelling interfaces implementing
expression building using operator overloading and automatic differentiation, see
\url{https://github.com/stanle/madopt}
\item .NET :
An interface to the C\# language is available here:
\url{https://github.com/cureos/csipopt}
\item OPTimization Interface (OPTI) Toolbox
OPTI is a free {\tt Matlab} toolbox for constructing and solving linear,
nonlinear, continuous, and discrete optimization problems, and it comes with
\Ipopt.
\item Optimization Services
The Optimization Services (OS) project provides a set of standards
for representing optimization instances, results, solver options,
and communication between clients and solvers, incl.\ \Ipopt, in a
distributed environment using Web Services, see
\url{https://projects.coin-or.org/OS}.
\item PyIpopt:
An interface to the python language is available here:
\url{https://github.com/xuy/pyipopt}
\item Scilab (free Matlab-like environment):
%Edson Cordeiro do Valle has written an interface to use \Ipopt from
%Scilab.\url{http://www.scilab.org/contrib/displayContribution.php?fileID=839}
A Scilab interface is available here:
\url{http://forge.scilab.org/index.php/p/sci-ipopt}
\end{itemize}
\subsection{More Information and Contributions}
More and up-to-date information can be found at the \Ipopt homepage,
\begin{center}
\url{http://projects.coin-or.org/Ipopt}.
\end{center}
Here, you can find FAQs, some (hopefully useful) hints, a bug report
system, etc. The website is managed as a wiki, which means that every
user can edit the webpages from a regular web browser. {\bf In
particular, we encourage \Ipopt users to share their experiences
and usage hints on the ``Success Stories'' and ``Hints and Tricks''
pages, or to list the publications discussing applications of
\Ipopt in the ``Papers related to Ipopt'' page}\footnote{Since we
had some malicious hacker attacks destroying the content of the web
pages in the past, you are now required to enter a user name and
password; simply follow the instructions on top of the main project
page.}. In particular, if you have trouble getting \Ipopt to work
well for your optimization problem, you might find some ideas here.
Also, if you had difficulties solving a problem and found a way
around it (e.g., by reformulating your problem or by using certain
\Ipopt options), it would be very nice if you help other users by
sharing your experience at the ``Hints and Tricks'' page.
\Ipopt is an open source project, and we encourage people to
contribute code (such as interfaces to appropriate linear solvers,
modeling environments, or even algorithmic features). If you are
interested in contributing code, please have a look at the COIN-OR
contributions webpage\footnote{see \url{http://www.coin-or.org/contributions.html}} and contact the \Ipopt
project leader.
There is also a mailing list for \Ipopt, available from the webpage
\begin{center}
\url{http://list.coin-or.org/mailman/listinfo/ipopt},
\end{center}
where you can subscribe to get notified of updates, to ask general
questions regarding installation and usage, or to share your
experience with \Ipopt. You might want to look at the archives before
posting a question. An easy way to search the archive with Google is
to specify
\url{site:http://list.coin-or.org/pipermail/ipopt}
in addition to your keywords in the search string.
We try to answer questions posted to the mailing list in a reasonable
manner. Please understand that we cannot answer all questions in
detail, and because of time constraints, we are not able to help
you model and debug your particular optimization problem.
Another way for discussion and contributing to \Ipopt is via its presence
on GitHub:
\begin{center}
\url{https://github.com/coin-or/Ipopt}
\end{center}
The git repository of \Ipopt at GitHub is only a mirror of the
subversion repository at the COIN-OR server, but the page allows you to
file issues and send small patches. This may be suitable for users who
find using the conventional mailing list and bug tracking system too
cumbersome.
A short tutorial on getting started with \Ipopt is also available
\cite{Waechter90Minutes}.
\subsection{History of \Ipopt}
The original \Ipopt (Fortran version) was a product of the
dissertation research of Andreas W\"achter \cite{WaechterPhD}, under
the supervision of Lorenz T. Biegler at the Chemical Engineering
Department at Carnegie Mellon University. The code was made open
source and distributed by the COIN-OR initiative, which is now a
non-profit corporation. \Ipopt has been actively developed under
COIN-OR since 2002.
To allow continued natural extension of the code and easy addition of
new features, IBM Research decided to invest in an open source
re-write of \Ipopt in C++. With the help of Carl Laird, who came to
the Mathematical Sciences Department at IBM Research as a summer
intern in 2004 and 2005 during his PhD studies, the code was
re-implemented from scratch.
The new C++ version of the \Ipopt optimization code (\Ipopt 3.0.0
and beyond) was maintained at IBM Research and remains part of the
COIN-OR initiative. The development on the Fortran version has
ceased, but the source code can still be downloaded from \url{http://www.coin-or.org/download/source/Ipopt-Fortran}.
\section{Installing \Ipopt}\label{Installing}
The following sections describe the installation procedures on
UNIX/Linux systems. For installation instructions on Windows
see Section~\ref{WindowsInstall}.
Additional hints on installing \Ipopt and its various interfaces are available
on the \Ipopt and CoinHelp wiki pages, in particular
\begin{itemize}
\item \Ipopt compilation hints:\\
\url{https://projects.coin-or.org/Ipopt/wiki/CompilationHints}
\item Current configuration and installation issues for COIN-OR projects:\\
\url{https://projects.coin-or.org/BuildTools/wiki/current-issues}
\end{itemize}
\subsection{Getting System Packages (Compilers, ...)}
Many Linux distributions will come with all necessary tools. All you should need to do is check the compiler versions. On a Debian-based distribution, you can obtain all necessary tools with the following command:
\begin{verbatim}
sudo apt-get install gcc g++ gfortran subversion patch wget
\end{verbatim}
Replace {\tt apt-get} with your relevant package manager, e.g. {\tt yum} for Red Hat-based distributions, {\tt zypper} for SUSE, etc. The {\tt g++} and {\tt gfortran} compilers may need to be specified respectively as {\tt gcc-c++} and {\tt gcc-gfortran} with some package managers.
On Mac OS X, you need either the Xcode Command Line Tools, available at \url{https://developer.apple.com/downloads} after registering as an Apple Developer, or a community alternative such as \url{https://github.com/kennethreitz/osx-gcc-installer/downloads} to install the gcc and g++ compilers.
It has been reported that gcc/g++ 4.2 and older are not sufficient for using the HSL codes.
If you have a recent version of Xcode installed, the Command Line Tools are available under Preferences, Downloads. In Xcode 3.x, the Command Line Tools are contained in the optional item ``UNIX Dev Support'' during Xcode installation.
%If you are using OS X 10.6 or earlier, then see the instructions at \url{http://kitcambridge.tumblr.com/post/17778742499/installing-the-xcode-command-line-tools-on-snow-leopard} on how to install the Xcode Command Line Tools without needing to download the rest of Xcode.
These items unfortunately do not come with a Fortran compiler, but you can get {\tt gfortran} from {\tt http://gcc.gnu.org/wiki/GFortranBinaries\#MacOS}. We have been able to compile \Ipopt using default Xcode versions of {\tt gcc} and {\tt g++} and a newer version of {\tt gfortran} from this link, but consistent version numbers may be an issue in future cases.
\subsection{Getting the \Ipopt Code}
\Ipopt is available from the COIN-OR subversion repository and the COIN-OR
group at GitHub (for expert users). You can
either download the code using \texttt{svn} (the
\textit{subversion} client) or \texttt{git} or
simply retrieve a tarball (compressed archive file). While the
tarball is an easy method to retrieve the code, using the
\textit{subversion} and \textit{git} system allows users the benefits of the version
control system, including easy updates and revision control.
\subsubsection{Getting the \Ipopt code via subversion}
Of course, the \textit{subversion} client must be installed on your
system if you want to obtain the code this way (the executable is
called \texttt{svn}); it is already installed by default for many
recent Linux distributions. Information about \textit{subversion} and
how to download it can be found at
\url{http://subversion.apache.org}.
To obtain the \Ipopt source code via subversion, change into the
directory in which you want to create a subdirectory {\tt CoinIpopt} with
the \Ipopt source code. Then follow the steps below:
\begin{enumerate}
\item{Download the code from the repository}\\
{\tt \$ svn co https://projects.coin-or.org/svn/Ipopt/stable/3.12 CoinIpopt} \\
Note: The {\tt \$} indicates the command line
prompt, do not type {\tt \$}, only the text following it.
\item Change into the root directory of the \Ipopt distribution\\
{\tt \$ cd CoinIpopt}
\end{enumerate}
In the following, ``\texttt{\$IPOPTDIR}'' will refer to the directory in
which you are right now (output of \texttt{pwd}).
\subsubsection{Getting the \Ipopt code via git (expert users only)}
Of course, the \textit{git} client must be installed on your
system if you want to obtain the code this way (the executable is
called \texttt{git}). Information about \textit{git} and
how to download it can be found at
\url{http://git-scm.com}.
\textbf{NOTE:} Currently, cloning the code from the GitHub mirror does
not automatically retrieve the \Ipopt dependencies that are essential
for building \Ipopt (BuildTools and build system for external codes).
You will have to obtain them manually.
To obtain the \Ipopt source code via git, change into the
directory in which you want to create a subdirectory {\tt CoinIpopt} with
the \Ipopt source code. Then follow the steps below:
\begin{enumerate}
\item{Download the code from the repository}\\
{\tt \$ git clone -b stable/3.12 https://github.com/coin-or/Ipopt.git CoinIpopt} \\
Note: The {\tt \$} indicates the command line
prompt, do not type {\tt \$}, only the text following it.
\item Change into the root directory of the \Ipopt distribution\\
{\tt \$ cd CoinIpopt}
\end{enumerate}
In the following, ``\texttt{\$IPOPTDIR}'' will refer to the directory in
which you are right now (output of \texttt{pwd}).
\subsubsection{Getting the \Ipopt code as a tarball}
To use the tarball, follow the steps below:
\begin{enumerate}
\item Download the desired tarball from
\url{http://www.coin-or.org/download/source/Ipopt}, it has the form {\tt
Ipopt-{\em x.y.z}.tgz}, where {\tt\em x.y.z} is the version
number, such as {\tt 3.12.0}. There might also be daily snapshot
from the stable branch. The number of the latest official release
can be found on the \Ipopt Trac page.
\item Issue the following commands to unpack the archive file: \\
\texttt{\$ gunzip Ipopt-{\em x.y.z}.tgz} \\
\texttt{\$ tar xvf Ipopt-{\em x.y.z}.tar} \\
Note: The {\tt \$} indicates the command line
prompt, do not type {\tt \$}, only the text following it.
\item Rename the directory you just extracted:\\
\texttt{\$ mv Ipopt-{\em x.y.z} CoinIpopt}
\item Change into the root directory of the \Ipopt distribution\\
{\tt \$ cd CoinIpopt}
\end{enumerate}
In the following, ``\texttt{\$IPOPTDIR}'' will refer to the directory in
which you are right now (output of \texttt{pwd}).
\subsection{Download External Code}\label{ExternalCode}
\Ipopt uses a few external packages that are not included in the
\Ipopt source code distribution, namely ASL (the AMPL Solver Library,
needed only if you want to compile the \Ipopt AMPL solver executable),
BLAS, and LAPACK.
\Ipopt also requires at least one linear solver for sparse symmetric
indefinite matrices. There are
different possibilities, see Sections~\ref{sec:HSL}--\ref{sec:WSMP}.
{\bf It is important to keep in mind that usually
the largest fraction of computation time in the optimizer is spent for
solving the linear system, and that your choice of the linear solver
impacts \Ipopt's speed and robustness. It might be worthwhile to try
different linear solvers to find out which one is best for your
application.}
Since this third party software is released under different licenses than
\Ipopt, we cannot distribute their code together with the \Ipopt
packages and have to ask you to go through the hassle of obtaining it
yourself (even though we tried to make it as easy for you as we
could). Keep in mind that it is still your responsibility to ensure
that your downloading and usage of the third party components conforms
with their licenses.
Note that you only need to obtain the ASL if you intend to use \Ipopt
from AMPL. It is not required if you want to specify your
optimization problem in a programming language (C++, C, or Fortran).
Also, currently, Lapack is only required if you intend to use the
quasi-Newton options implemented in \Ipopt.
\subsubsection{Download BLAS, LAPACK and ASL}
Note: It is \textbf{highly recommended that you obtain an efficient
implementation of the BLAS library}, tailored to your hardware;
Section~\ref{sec:prerequisites} lists a few options. Assuming that
your precompiled efficient BLAS library is \texttt{libmyblas.a} in
\texttt{\$HOME/lib}, you need to add the flag
\texttt{--with-blas="-L\$HOME/lib -lmyblas"} when you run
\texttt{configure} (see Section~\ref{sec.comp_and_inst}). Some of
those libraries also include LAPACK.
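Concretely, with the placeholder names from the paragraph above ({\tt
\$HOME/lib} and {\tt libmyblas.a} are only examples, not real file names),
the relevant part of the {\tt configure} invocation would look like
\begin{verbatim}
$ ./configure --with-blas="-L$HOME/lib -lmyblas"
\end{verbatim}
where any further flags needed for your build are appended to the same call.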
If you have the download utility \texttt{wget} installed on your
system (or \texttt{ftp} on Mac OS X), retrieving source code for
BLAS (the inefficient reference
implementation, not required if you have a precompiled library), as
well as LAPACK and ASL is straightforward using scripts included with
the ipopt distribution. These scripts download the required files
from the Netlib Repository
(\url{http://www.netlib.org}).
\medskip
\noindent
{\tt \$ cd \$IPOPTDIR/ThirdParty/Blas}\\
{\tt \$ ./get.Blas}\\
{\tt \$ cd ../Lapack}\\
{\tt \$ ./get.Lapack}\\
{\tt \$ cd ../ASL}\\
{\tt \$ ./get.ASL}
\medskip
\noindent
If you do not have \texttt{wget} (or \texttt{ftp} on Mac OS X) installed
on your system, please read the \texttt{INSTALL.*} files in the
\texttt{\$IPOPTDIR/ThirdParty/Blas},
\texttt{\$IPOPTDIR/ThirdParty/Lapack} and
\texttt{\$IPOPTDIR/ThirdParty/ASL} directories for alternative
instructions.
If you are having firewall issues with {\tt wget}, try opening the {\tt get.<library>} scripts and replace the line {\tt wgetcmd=wget} with {\tt wgetcmd="wget --passive-ftp"}.
If you are getting permissions errors from tar, try opening the {\tt get.<library>} scripts and replace any instances of {\tt tar xf} with {\tt tar --no-same-owner -xf}.
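These two edits can also be applied with {\tt sed}. The following is a stand-in demonstration only: {\tt get.demo} is a fabricated two-line miniature of a {\tt get.<library>} script, not the real file.

```shell
# Create a fabricated two-line stand-in for a get.<library> script.
printf 'wgetcmd=wget\ntar xf blas.tgz\n' > get.demo
# Switch wget to passive FTP mode, as suggested for firewall issues.
sed -i 's/^wgetcmd=wget$/wgetcmd="wget --passive-ftp"/' get.demo
# Make tar ignore the file ownership recorded in the archive.
sed -i 's/tar xf/tar --no-same-owner -xf/' get.demo
cat get.demo
```

When editing the real scripts, back up each {\tt get.<library>} file before running {\tt sed -i} on it.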
\subsubsection{Download HSL Subroutines}
\label{sec:HSL}
\noindent
There are two versions of HSL available:
\begin{description}
\item[HSL Archive] contains outdated codes that are freely available for
personal commercial or non-com\-mer\-cial usage. Note that you may not
redistribute these codes in either source or binary form without purchasing a
licence from the authors. This version includes MA27, MA28, and MC19.
\item[HSL 2011] contains more modern codes that are freely available for
academic use only. This version includes the codes from the HSL Archive and
additionally MA57, HSL\_MA77, HSL\_MA86, and HSL\_MA97. \Ipopt supports the
HSL 2011 codes from 2012 and 2013, the support for the versions from 2012 may be
dropped in a future release.
\end{description}
% The use of alternative linear solvers is described in
% Sections~\ref{sec:MUMPS}--\ref{sec:WSMP}. You do not necessarily
% have to use a HSL code such as MA27 as described in this section, but at least
% one linear solver is required for \Ipopt to function.
To obtain the HSL code, follow these steps:
\begin{enumerate}
\item Go to \url{http://hsl.rl.ac.uk/ipopt}.
\item Choose whether to download either the Archive code or the HSL 2011
code. To download, select the relevant
``source'' link.
\item Follow the instructions on the website, read the license, and
submit the registration form.
\item Wait for an email containing a download link (this should take no
more than one working day).
\end{enumerate}
\noindent
You may either:
\begin{itemize}
\item Compile the HSL code as part of \Ipopt. See the instructions below.
\item Compile the HSL code separately either before or after the \Ipopt code
and use the shared library loading mechanism. See the documentation
distributed with the HSL package for information on how to do so.
\end{itemize}
To compile the HSL code as part of \Ipopt, unpack the archive, then move and
rename the resulting directory so that it becomes
{\tt \$IPOPTDIR/ThirdParty/HSL/coinhsl}. \Ipopt may then be configured as
normal.
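The move-and-rename step can be sketched as follows. This is a stand-in demonstration: {\tt coinhsl-x.y.z} is a placeholder for whatever directory name your HSL archive actually unpacks to, and both directories are created with {\tt mkdir} here instead of coming from a real archive or source tree.

```shell
# Stand-in for the directory produced by unpacking the HSL archive,
# e.g. via: gunzip coinhsl-x.y.z.tar.gz && tar xf coinhsl-x.y.z.tar
mkdir -p coinhsl-x.y.z
# Stand-in for $IPOPTDIR/ThirdParty/HSL in a real source tree.
mkdir -p IPOPTDIR/ThirdParty/HSL
# Move and rename so that configure finds ThirdParty/HSL/coinhsl.
mv coinhsl-x.y.z IPOPTDIR/ThirdParty/HSL/coinhsl
ls IPOPTDIR/ThirdParty/HSL
```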
Note: Whereas it is essential to have at least one linear solver, the
package MC19 could be omitted (with the consequence that you cannot
use this method for scaling the linear systems arising inside the
\Ipopt algorithm). By default, MC19 is only used to scale the linear
system when using one of the HSL solvers, but it can also be
switched on for other linear solvers (which usually have internal
scaling mechanisms).
The package MA28 can also be omitted, since it is used only
in the experimental dependency detector, which is not used by default.
Note: If you are an academic or a student, we recommend you download the
HSL 2011 package as this ensures you have access to the full range of solvers.
MA57 can be considerably faster than MA27 on some problems.
Yet another note: If you have a precompiled library containing the
HSL codes, you can specify the directory with the header files and
the linker flags for this library with the \verb|--with-hsl-incdir| and
\verb|--with-hsl-lib| flags for the {\tt configure} script described in
Section~\ref{sec.comp_and_inst}.
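For illustration, a configure invocation with these flags might look as below; the installation paths and library name are hypothetical and must be adapted to where your precompiled HSL library actually lives:

```shell
# Hypothetical locations of a precompiled HSL library; adjust to your system.
HSL_INCDIR=/opt/coinhsl/include
HSL_LIB="-L/opt/coinhsl/lib -lcoinhsl"
# The real call would be run from your build directory, e.g.:
#   $IPOPTDIR/configure --with-hsl-incdir=$HSL_INCDIR --with-hsl-lib="$HSL_LIB"
# Here we only assemble and print the command line:
echo "./configure --with-hsl-incdir=$HSL_INCDIR --with-hsl-lib=\"$HSL_LIB\""
```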
\subsubsection{Obtaining the MUMPS Linear Solver}\label{sec:MUMPS}
You can also use the (public domain) sparse linear solver MUMPS.
Please visit the MUMPS home page \url{http://graal.ens-lyon.fr/MUMPS}
for more information about the solver. MUMPS is provided as Fortran 90
and C source code. You need to have a Fortran 90 compiler (for
example, the GNU compiler {\tt gfortran} is a free one) to be able to
use it.
You can obtain the MUMPS code by running the script
{\tt \$IPOPTDIR/ThirdParty/Mumps/get.Mumps} if you have {\tt wget}
(or {\tt ftp} on Mac OS X) installed in your system.
Alternatively, you can get the latest version
from the MUMPS home page and extract the archive in the
directory {\tt \$IPOPTDIR/ThirdParty/Mumps}. The extracted
directory usually has the MUMPS version number in it, so you need to
rename it to {\tt MUMPS} such that you have a file called
{\tt \$IPOPTDIR/ThirdParty/Mumps/MUMPS/README}.
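The rename can be sketched as follows; the MUMPS version number is a hypothetical example, and a temporary directory stands in for the real {\tt \$IPOPTDIR}:

```shell
# Sketch of renaming the extracted MUMPS directory.
# The version number 4.10.0 is a hypothetical example.
IPOPTDIR=$(mktemp -d)                        # stands in for your real $IPOPTDIR
mkdir -p "$IPOPTDIR/ThirdParty/Mumps/MUMPS_4.10.0"
touch "$IPOPTDIR/ThirdParty/Mumps/MUMPS_4.10.0/README"
# Rename the versioned directory to plain 'MUMPS':
mv "$IPOPTDIR/ThirdParty/Mumps/MUMPS_4.10.0" "$IPOPTDIR/ThirdParty/Mumps/MUMPS"
test -f "$IPOPTDIR/ThirdParty/Mumps/MUMPS/README" && echo "MUMPS in place"
```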
Once you put the MUMPS source code into the correct place, the \Ipopt
configuration scripts will automatically detect it and compile MUMPS
together with \Ipopt, \emph{if your Fortran compiler is able to compile
Fortran 90 code}.
Note: MUMPS will perform better with METIS, see
Section~\ref{sec:METIS}.
Note: MUMPS internally uses a dummy implementation of MPI. If you are
using \Ipopt within an MPI program together with MUMPS, the code will
not run. You will have to modify the MUMPS sources so that the MPI
symbols inside the MUMPS code are renamed.
\subsubsection{Obtaining the Linear Solver Pardiso}\label{sec:Pardiso}
If you would like to compile \Ipopt with the Parallel Sparse Direct
Linear Solver (Pardiso), you need to obtain either Intel's MKL library
or the Pardiso library from \url{http://www.pardiso-project.org} for
your operating system.
From \url{http://www.pardiso-project.org},
you can obtain a limited time license of Pardiso for academic or
evaluation purposes or buy a non-profit or commercial
license. Make sure you read the license agreement before filling out
the download form.
%Note: Pardiso is included in Intel's MKL library. However, that
%version does not include the changes done by the Pardiso developers to
%make the linear solver work smoothly with \Ipopt.
Please consult Appendix~\ref{ExpertInstall} to find out how to
configure your \Ipopt installation to work with Pardiso.
\subsubsection{Obtaining the Linear Solver WSMP}\label{sec:WSMP}
If you would like to compile \Ipopt with the Watson Sparse Matrix
Package (WSMP), you need to obtain the WSMP library for your operating
system. Information about WSMP can be found at
\url{http://www.research.ibm.com/projects/wsmp}.
%http://researcher.ibm.com/view_project.php?id=1426
At this website you can download the library for several operating systems
including a trial license key for 90 days that allows you to use WSMP
for ``educational, research, and benchmarking purposes by
non-profit academic institutions'' or evaluation purposes by commercial
organizations;
make sure you read the license agreement before using the library.
Once you have obtained the library and license, please check whether the
version number of the library matches the one on the WSMP website.
If a newer version is announced on that website, you can (and
probably should) request the current version by sending a message to
\verb|wsmp@watson.ibm.com|. Please include the operating system and
other details to describe which particular version of WSMP you need.
% You can use the bugfix releases with the license you obtained from alphaWorks.
Note: Only the interface to the shared-memory version of WSMP is
currently supported.
Please consult Appendix~\ref{ExpertInstall} to find out how to
configure your \Ipopt installation to work with WSMP.
\subsubsection{Using the Linear Solver Loader}\label{sec:linear_solver_loader}
By default, \Ipopt will be compiled with a mechanism, the Linear
Solver Loader, which can dynamically load shared libraries with MA27,
MA57, HSL\_MA77, HSL\_MA86, HSL\_MA97, or the Pardiso linear solver at
runtime\footnote{This is not
enabled if you compile \Ipopt with the MS Visual Studio project files
provided in the \Ipopt distribution. Further, if you have problems
compiling this new feature, you can disable this by specifying
\texttt{--disable-linear-solver-loader} for the \texttt{configure}
script}. This means that if you obtain one of these solvers after you
have compiled \Ipopt, you don't need to recompile \Ipopt to
use it. Instead, you can just put a shared library called
\texttt{libhsl.so} or \texttt{libpardiso.so} into the shared library
search path, \texttt{LD\_LIBRARY\_PATH}. These are the names on most
UNIX platforms, including Linux. On Mac OS X, the names are
\texttt{libhsl.dylib}, \texttt{libpardiso.dylib}, and
\texttt{DYLD\_LIBRARY\_PATH}. On Windows, the names are \texttt{libhsl.dll},
\texttt{libpardiso.dll}, and \texttt{PATH}.
The Pardiso shared library can be downloaded from the Pardiso website.
To create a shared library containing the HSL linear solvers, read the
instructions in \texttt{\$IPOPTDIR/ThirdParty/HSL/INSTALL.HSL}.
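On Linux, making such a library visible to the loader can be sketched as follows; an empty file stands in for the real compiled {\tt libhsl.so}:

```shell
# Sketch: putting libhsl.so on the shared library search path (Linux).
# The empty file is a stand-in for your real compiled library.
LIBDIR=$(mktemp -d)
touch "$LIBDIR/libhsl.so"
export LD_LIBRARY_PATH="$LIBDIR${LD_LIBRARY_PATH:+:$LD_LIBRARY_PATH}"
# Ipopt's Linear Solver Loader would now find libhsl.so at runtime.
echo "$LD_LIBRARY_PATH" | tr ':' '\n' | grep -qx "$LIBDIR" && echo "libhsl.so visible"
```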
\subsubsection{Obtaining METIS}\label{sec:METIS}
The linear solvers MA57, HSL\_MA77, HSL\_MA86, HSL\_MA97, and MUMPS can make
use of the matrix ordering algorithms implemented in METIS (see
\url{http://glaros.dtc.umn.edu/gkhome/metis/metis/overview}). If
you are using one of these linear solvers, you should obtain the METIS
source code and put it into \texttt{\$IPOPTDIR/ThirdParty/Metis}.
Read the \texttt{INSTALL.Metis} file in that directory, and if you
have the \texttt{wget} utility (or \texttt{ftp} on Mac OS X) installed on your
system, you can download the code by running the \texttt{./get.Metis} script.
Note that {\bf only the older METIS 4.x version\footnote{\url{http://glaros.dtc.umn.edu/gkhome/fetch/sw/metis/OLD/metis-4.0.3.tar.gz}} is supported} by
MA57, HSL\_MA77, HSL\_MA86, HSL\_MA97, MUMPS, and the build system.
The \texttt{./get.Metis} script takes care of downloading the right METIS version.
\subsection{Compiling and Installing \Ipopt} \label{sec.comp_and_inst}
\Ipopt can be easily compiled and installed with the usual {\tt
configure}, {\tt make}, {\tt make install} commands. We follow the
procedure that is used for most of the COIN-OR projects, based on the
GNU autotools. At \url{https://projects.coin-or.org/CoinHelp}
you can find a general description of the tools.
Below are the basic steps for the \Ipopt compilation that should work
on most systems. For special compilations and for some
troubleshooting see Appendix~\ref{ExpertInstall} and consult the
generic COIN-OR help page
\url{https://projects.coin-or.org/CoinHelp} before submitting a
ticket or sending a message to the mailing list.
\begin{enumerate}
\item Create a directory where you want to compile \Ipopt, for example\\
{\tt \$ mkdir \$IPOPTDIR/build}\\
and go into this directory\\
{\tt \$ cd \$IPOPTDIR/build}
Note: You can choose any location, including {\tt \$IPOPTDIR}
itself, as the location of your compilation. However, for COIN-OR projects we
recommend keeping the source and compiled files separate.
\item Run the configure script\\
{\tt \$ \$IPOPTDIR/configure}
One might have to give options to the configure script, e.g., in
order to choose a non-default compiler, or to tell it where some
third party code is installed, see Appendix~\ref{ExpertInstall}.
If the last output line reads ``\texttt{configure:\ Main configuration of Ipopt successful}'' then everything worked
fine. Otherwise, look at the screen output, have a look at the
\texttt{config.log} output files and/or consult
Appendix~\ref{ExpertInstall}.
The default configure (without any options) is sufficient for most
users who have downloaded the source code for a linear solver. If you
want to see the configure options, consult
Appendix~\ref{ExpertInstall}, and also visit the generic COIN-OR
configuration instruction page at
\centerline{\url{https://projects.coin-or.org/CoinHelp/wiki/user-configure}}
\item Build the code \\
{\tt \$ make}
Note: If you are using GNU make, you can also try to speed up the
compilation by using the {\tt -jN} flag (e.g., {\tt make -j3}),
where {\tt N} is the number of parallel compilation jobs. A good
number for {\tt N} is the number of available processors plus one.
Under some circumstances, this fails, and you might have to re-issue
the command, or omit the {\tt -j} flag.
\item If you want, you can run a short test to verify that the
compilation was successful. For this, you just
enter\\
{\tt \$ make test}\\
This will test if the AMPL solver executable works (if you got the
ASL code) and if the included C++, C, and Fortran examples work.
Note: The {\tt configure} script is not able to automatically
determine the C++ runtime libraries for the C++ compiler. For
certain compilers we enabled default values for this, but those
might not exist or be wrong for your compiler. In that case, the C
and Fortran example in the test will most probably fail to compile.
If you don't want to hook up the compiled \Ipopt library to some
Fortran or C code that you wrote, you don't need to worry about this.
If you do want to link the \Ipopt library with a C or Fortran
compiler, you need to find out the C++ runtime libraries (e.g., by
running the C++ compiler in verbose mode for a simple example
program) and run {\tt configure} again, and this time specify all
C++ runtime libraries with the {\tt CXXLIBS} variable (see also
Appendix~\ref{ExpertInstall}).
\item Install \Ipopt \\
{\tt \$ make install}\\
This installs
\begin{itemize}
\item the \Ipopt AMPL solver executable (if ASL source was
downloaded) in \texttt{\$IPOPTDIR/build/bin},
\item the \Ipopt library (\texttt{libipopt.so}, \texttt{libipopt.a}
or similar) and all its dependencies (MUMPS, HSL, Metis libraries)
in \texttt{\$IPOPTDIR/build/lib},
\item text files {\tt ipopt\_addlibs\_cpp.txt}, {\tt ipopt\_addlibs\_c.txt},
and {\tt ipopt\_addlibs\_f.txt} in\\
\texttt{\$IPOPTDIR/build/share/coin/doc/Ipopt} each
containing a line with linking flags that are required for
linking code with the \Ipopt library for C++, C, and Fortran main
programs, respectively. (This is only for convenience if you want
to find out what additional flags are required, for example, to
include the Fortran runtime libraries with a C++ compiler.)
\item the necessary header files in
\texttt{\$IPOPTDIR/build/include/coin}.
\end{itemize}
You can change the default installation directory (here
\texttt{\$IPOPTDIR/build}) to something else (such as \texttt{/usr/local})
by using the \verb|--prefix| switch for \texttt{configure}.
\item (Optional) Install \Ipopt for use with {\tt CUTEr}\\
If you have {\tt CUTEr} already installed on your system and you
want to use \Ipopt as a solver for problems modeled in {\tt SIF},
type\\
{\tt \$ make cuter}\\
This assumes that you have the environment variable {\tt MYCUTER}
defined according to the {\tt CUTEr} instructions. After this, you
can use the script {\tt sdipo} as the {\tt CUTEr} script to solve a
{\tt SIF} model.
\end{enumerate}
Note: The above procedures show how to compile the code in directories
separate from the source files. This comes in handy when you want to
compile the code with different compilers, compiler options, or on
different operating systems that share a common file system. To use
this feature, change into the directory where you want to compile the
code, and then type {\tt \$IPOPTDIR/configure} with all the options.
For this, the directories with the \Ipopt source must not contain any
configuration or compiled code.
\subsection{Installation on Windows}\label{WindowsInstall}
There are several ways to install \Ipopt on Windows systems. The
first two options, described in Sections~\ref{CygwinInstall} and
\ref{CygwinInstallNative}, are to use Cygwin (see
\url{http://www.cygwin.com}), which offers a comprehensive UNIX-like
environment on Windows and in which the installation procedure
described earlier in this section can be used. If you want to use the
(free) GNU compilers, follow the instructions in
Section~\ref{CygwinInstall}. If you have the Microsoft C++ compiler
and possibly a ``native'' Fortran compiler (e.g., the Intel Fortran
compiler) and want to use those to compile \Ipopt, please see
Section~\ref{CygwinInstallNative}. If you use MSYS/MinGW (a
light-weight UNIX-like environment for Windows), please consider the
notes in Section~\ref{MinGWInstall}.
If you want to compile the \Ipopt\ {\tt mex} interface to \Matlab, then we recommend to use the MSYS/MinGW option.
%The \Ipopt distribution also includes projects files for
%Microsoft Visual Studio. % (see Section~\ref{VisualStudioInstall}).
Note: Some binaries for \Ipopt are available on the COIN-OR website at
\url{http://www.coin-or.org/download/binary/Ipopt}.
Precompiled versions of \Ipopt as DLLs (generated from
the MSVS solution in \Ipopt's subdirectory
\texttt{\$IPOPTDIR/Ipopt/MSVisualStudio/v8-ifort}) are also available there.
Look at the \texttt{README} files for details. An example of how to use these DLLs
from your own MSVS project is in
\texttt{\$IPOPTDIR/Ipopt/MSVisualStudio/BinaryDLL-Link-Example}.
\subsubsection{Installation with Cygwin using GNU compilers}\label{CygwinInstall}
Cygwin is a Linux-like environment for Windows; if you don't know what
it is you might want to have a look at the Cygwin homepage,
\url{http://www.cygwin.com}.
It is possible to build the \Ipopt AMPL solver executable in Cygwin
for general use in Windows. You can also hook up \Ipopt to your own
program if you compile it in the Cygwin environment\footnote{It is
also possible to build an \Ipopt DLL that can be used from
non-cygwin compilers, but this is not (yet?) supported.}.
If you want to compile \Ipopt under Cygwin, you first have to install
Cygwin on your Windows system. This is straightforward; you
simply download the ``setup'' program from
\url{http://www.cygwin.com} and start it.
Then you do the following steps (assuming here that you don't have any
complications with firewall settings, etc.; in that case you might have
to choose some connection settings differently):
\begin{enumerate}
\item Click next
\item Select ``install from the internet'' (default) and click next
\item Select a directory where Cygwin is to be installed (you can
leave the default) and choose all other things to your liking, then
click next
\item Select a temp dir for Cygwin setup to store some files (if you
put it on your desktop you will later remember to delete it)
\item Select ``direct connection'' (default) and click next
\item Select some mirror site that seems close by to you and click next
\item OK, now comes the complicated part:\\
You need to select the packages that you want to have installed. By
default, there are already selections, but the compilers are usually
not pre-chosen. You need to make sure that you select the GNU
compilers (for Fortran, C, and C++), Subversion, and some additional tools.
For this, get the following packages from the associated branches:
\begin{itemize}
\item ``Devel'': {\tt gcc4}
\item ``Devel'': {\tt gcc4-fortran}
\item ``Devel'': {\tt pkg-config}
\item ``Devel'': {\tt subversion}
\item ``Archive'': {\tt unzip}
\item ``Utils'': {\tt patch}
\item ``Web'': {\tt wget}
\end{itemize}
When a Resolving Dependencies window comes up, be sure to
``Select required packages (RECOMMENDED)''.
This will automatically also select some other packages.
\item\label{it:cyg_done} Then you click on next, and Cygwin will be
installed (follow the rest of the instructions and choose everything
else to your liking). At a later point you can easily add/remove
packages with the setup program.
\item The version of the GNU Make utility provided by the Cygwin installer
will not work. Therefore, you need to download the fixed version from
\url{http://www.cmake.org/files/cygwin/make.exe} and save it to {\tt C:$\backslash$cygwin$\backslash$bin}.
Double-check this new version by typing {\tt make --version} in a Cygwin
terminal (see next point).
If you get an error {\tt -bash: /usr/bin/make: Bad address}, then try
\url{http://www.cmake.org/files/cygwin/make.exe-cygwin1.7} instead, rename
it to {\tt make.exe} and move it to {\tt C:$\backslash$cygwin$\backslash$bin}.
(Replace {\tt C:$\backslash$cygwin} with your installation location if different.)
\item Now that you have Cygwin, you can open a Cygwin window, which is
like a UNIX shell window.
\item\label{it:cyg_inst} Now you just follow the instructions in the
beginning of Section~\ref{Installing}: You download the \Ipopt
code into your Cygwin home directory (from the Windows explorer that
is usually something like
\texttt{C:$\backslash$Cygwin$\backslash$home$\backslash$your\_user\_name}).\
After that you obtain the third party code (as on Linux/UNIX),
type
\texttt{./configure}
and
\texttt{make install}
in the correct directories, and hopefully that will work. The
\Ipopt AMPL solver executable will be in the subdirectory
\texttt{bin} (called ``\texttt{ipopt.exe}''). If you want to test
the installation, type
\texttt{make test}
% I think this is outdated nowadays:
% \textbf{NOTE:} By default, the compiled binaries (library and
% executables) will be ``Cygwin-native'', i.e., in order to run
% executables using this, the {\tt Cygwin1.dll} has to be present
% (e.g., in a Cygwin window). If you want to compile things in a way
% that allow your executables to run outside of Cygwin, e.g., in a
% regular DOS prompt, you need to specify the option ``{\tt
% --enable-doscompile}'' when you run {\tt configure}.
\end{enumerate}
\subsubsection{Installation with Cygwin using the MSVC++ compiler}
\label{CygwinInstallNative}
This section describes how you can compile \Ipopt with the Microsoft
Visual C++ compiler under Cygwin. Here you have two options for
compiling the Fortran code in the third party dependencies:
\begin{itemize}
\item Using a Windows Fortran compiler, e.g., the Intel Fortran
compiler, which is also able to compile Fortran 90 code. This would
allow you to compile the MUMPS linear solver if you desire to do so.
\item Using the {\tt f2c} Fortran to C compiler, available for free at
Netlib (see \url{http://www.netlib.org/f2c}). This can only compile
Fortran 77 code (i.e., you won't be able to compile MUMPS). Before
doing the following installation steps, you need to follow the
instructions in\\ {\tt\$IPOPTDIR/BuildTools/compile\_f2c/INSTALL}.
\end{itemize}
\noindent
Once you have settled on this, do the following:
\begin{enumerate}
\item Follow the instructions in Section~\ref{CygwinInstall} until
Step~\ref{it:cyg_inst} and stop after you have downloaded the third
party code.
\item\label{it:setupmsvcpath} Now you need to make sure that Cygwin
knows about the native compilers. For this you need to edit the
file {\tt cygwin.bat} in the Cygwin base directory (usually {\tt
C:$\backslash$cygwin}). Here you need to add a line like the
following:
\texttt{call "C:$\backslash$Program Files$\backslash$Microsoft Visual
Studio 8$\backslash$VC$\backslash$vcvarsall.bat"}
On my computer, this sets the environment variables so that I can use
the MSVC++ compiler.
If you want to use also a native Fortran compiler, you need to
include something like this
\texttt{call "C:$\backslash$Program
Files$\backslash$Intel$\backslash$Fortran$\backslash$compiler80$\backslash$IA32$\backslash$BIN$\backslash$ifortvars.bat"}
You might have to search around a bit. The important thing is that,
after your change, you can type ``{\tt cl}'' in a newly opened
Cygwin window, and it finds the Microsoft C++ compiler (and if you
want to use it, the Fortran compiler, such as the Intel's {\tt
ifort}).
\item Run the configuration script, and tell it that you want to use
the native compilers:
\texttt{./configure --enable-doscompile=msvc}
Make sure the last message is
\texttt{Main Ipopt configuration successful}
%\item\label{it:ASLcompile} If want to compile the AMPL solver
% executable, you need to compile the ASL library from a script. For
% this you need to change into the ASL compilation directory, execute
% the script \texttt{compile\_MS\_ASL}, and go back to the directory
% where you were:
%
% \texttt{cd ThirdParty/ASL}
%
% \texttt{./compile\_MS\_ASL}
%
% \texttt{cd -}
\item Now you can compile the code with
\texttt{make},
test the installation with
\texttt{make test},
and install everything with
\texttt{make install}
\end{enumerate}
\subsubsection{Installation with MSYS/MinGW}\label{MinGWInstall}
You can compile \Ipopt also under MSYS/MinGW, which is another, more
light-weight UNIX-like environment for Windows. It can be obtained
from \url{http://www.mingw.org/}.
If you want to use MSYS/MinGW to compile \Ipopt with native Windows
compilers (see Section~\ref{CygwinInstallNative}), all you need to
install is the basic version\footnote{a convenient Windows install
program is available from \url{http://sourceforge.net/projects/mingw/files/Installer/mingw-get-inst/}}.
If you also want to use the GNU
compilers, you need to install those as well, of course.
A compilation with the GNU compilers works just like with any other
UNIX system, as described in Section~\ref{sec.comp_and_inst}.
For this, during the MinGW installation, select (at least) the C Compiler, C++ Compiler, Fortran Compiler, MSYS Basic System, and the MinGW Developer ToolKit.
Additionally, {\tt wget} and {\tt unzip} should be installed with the following command in an MSYS terminal:
\begin{verbatim}
mingw-get install msys-wget msys-unzip
\end{verbatim}
If you want to use the native MSVC++ compiler (with {\tt f2c} or a native
Fortran compiler), you essentially follow the steps outlined in
Section~\ref{CygwinInstallNative}.
Additionally, you need to make sure that the environment variables are set for
the compilers (see step~\ref{it:setupmsvcpath}), this time adding the line to
the {\tt msys.bat} file.
For a 64-bit build, you will need to install also a MinGW-64 distribution.
We recommend TDM-GCC, which is available from \url{http://sourceforge.net/projects/tdm-gcc/files/TDM-GCC\%20Installer/tdm-gcc-webdl.exe/download}.
Install MinGW-64 in a different folder than your existing 32-bit MinGW installation!
The components you need are: {\tt core} (under {\tt gcc}), {\tt c++} (under {\tt gcc}), {\tt fortran} (under {\tt gcc}), {\tt openmp} (under {\tt gcc}, necessary if you want to use any multi-threaded linear solvers), {\tt binutils}, and {\tt mingw64-runtime}.
After MinGW-64 is installed, open the file {\tt C:$\backslash$MinGW$\backslash$msys$\backslash$1.0$\backslash$etc$\backslash$fstab}, and replace the line
\begin{verbatim}
C:\MinGW\ /mingw
\end{verbatim}
with
\begin{verbatim}
C:\MinGW64\ /mingw
\end{verbatim}
(Replace paths with your installation locations if different.)
%Also, you need to run the
%\texttt{compile\_MS\_ASL} script in the \texttt{ThirdParty/ASL}
%immediately after you run the configuration script.
%\subsubsection{Using Microsoft Visual Studio}\label{VisualStudioInstall}
%\textbf{NEW:} Some binaries for \Ipopt are available on the COIN-OR website at
%\url{http://www.coin-or.org/download/binary/Ipopt}.
%There are also precompiled versions of Ipopt as DLLs (generated from
%the MSVC solution in \Ipopt's subdirectory
%\texttt{\$IPOPTDIR/Ipopt/MSVisualStudio/v8-ifort}). Look at the
%\texttt{README} files for details. An example how to use these DLLs
%from your own MSVC project is in\\
%\texttt{\$IPOPTDIR/Ipopt/MSVisualStudio/BinaryDLL-Link-Example}.
%The \Ipopt distribution includes project files that can be used to
%compile the \Ipopt library, the AMPL solver executable, and a C++
%example within the Microsoft Visual Studio. The project files have
%been created with Microsoft Visual 8 Express. Fortran files in the
%third party dependencies need to be converted to C code using the {\tt
% f2c} Fortran to C compiler\footnote{Projects files for a previous
% version of \Ipopt that used the Intel Fortran compiler are in
% {\$IPOPTDIR$\backslash$Ipopt$\backslash$NoLongerMaintainedWindows},
% but they are probably outdated, and you will have to correct them.}.
%In order to use those project files, download the \Ipopt source code,
%as well as the required third party code (put it into the {\tt
% ThirdParty$\backslash$Blas}, {\tt ThirdParty$\backslash$Lapack},
%{\tt ThirdParty$\backslash$HSL}, {\tt ThirdParty$\backslash$ASL}
%directories). Detailed step-by-step instructions on how to install
%{\tt f2c}, translate the Fortran code to C files, and further details
%are described in the file
%\texttt{\$IPOPTDIR$\backslash$Ipopt$\backslash$MSVisualStudio$\backslash$v8$\backslash$README.TXT}
%After that, you can open the solution file
%\texttt{\$IPOPTDIR$\backslash$Ipopt$\backslash$MSVisualStudio$\backslash$v8$\backslash$Ipopt.sln}
%If you are compiling \Ipopt with different linear solvers, you need
%to edit the configuration header file
%\texttt{Ipopt$\backslash$src$\backslash$Common$\backslash$IpoptConfig.h},
%in the section after
%\begin{verbatim}
%/***************************************************************************/
%/* HERE DEFINE THE CONFIGURATION SPECIFIC MACROS */
%/***************************************************************************/
%\end{verbatim}
%and include the corresponding source files in
%\texttt{Ipopt$\backslash$src$\backslash$Algorithm$\backslash$LinearSolvers}
%and add the corresponding libraries to your project.
\subsection{Compiling and Installing the Java Interface \JIpopt}
\label{sec.jipopt.build}
\hfill \textit{based on documentation by Rafael de Pelegrini Soares}%
\footnote{VRTech Industrial Technologies}
\medskip
\JIpopt uses the Java Native Interface (JNI), which is a programming framework
that allows Java code running in the Java Virtual Machine (JVM) to call and be
called by native applications and libraries written in languages such as C and
C++.
\JIpopt requires Java 5 or higher.
After building and installing \Ipopt, the \JIpopt interface can be built by
setting the environment variable {\tt JAVA\_HOME} to the directory that
contains your JDK, changing to the \JIpopt directory in your \Ipopt build, and
issuing {\tt make}, e.g.,
\begin{verbatim}
export JAVA_HOME=/usr/lib/jvm/java-1.5.0
cd $IPOPTDIR/build/Ipopt/contrib/JavaInterface
make
\end{verbatim}
This will generate the Java class {\tt org/coinor/Ipopt.class}, which you will
need to make available in your Java code (i.e., add {\tt \$IPOPTDIR/build/Ipopt/contrib/JavaInterface}
to your {\tt CLASSPATH}) and the shared object
{\tt lib/libjipopt.so} (on Linux/UNIX) or {\tt lib/libjipopt.dylib} (on Mac OS
X) or the DLL {\tt lib/jipopt.dll} (on Windows).
In order to test your \JIpopt library you can run two example problems by
issuing the command {\tt make test} inside the \JIpopt directory.
\textbf{NOTE}: The \JIpopt build procedure currently cannot deal with spaces
in the path to the JDK. If you are on Windows and have Java in a path like
\verb|C:\Program Files\Java|, try setting {\tt JAVA\_HOME} to the DOS
equivalent \verb|C:\Progra~1| (or similar).
\textbf{NOTE}: \JIpopt needs to be able to load the \Ipopt library dynamically
at runtime. Therefore, \Ipopt must have been compiled with the {\tt -fPIC}
compiler flag. While by default an Ipopt shared library is compiled with
this flag, for a configuration of \Ipopt in debug mode ({\tt --enable-debug})
or as a static library ({\tt --disable-shared}), the configure flag
{\tt --with-pic} needs to be used to enable compilation with {\tt -fPIC}.
\subsection{Compiling and Installing the R Interface \ipoptr}
\label{sec.ipoptr.build}
The \ipoptr interface can be built after \Ipopt has been built and installed.
In the best case, it is sufficient to execute the following command in R:
\begin{verbatim}
install.packages('$IPOPTDIR/build/Ipopt/contrib/RInterface', repos=NULL, type='source')
\end{verbatim}
In certain situations, however, it can be necessary to set up the dynamic
library load path to point to the directory where the \Ipopt library has been
installed, e.g.,
\begin{verbatim}
LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$IPOPTDIR/build/lib
\end{verbatim}
\textbf{NOTE}: R needs to be able to load the \Ipopt library dynamically
at runtime. Therefore, \Ipopt must have been compiled with the {\tt -fPIC}
compiler flag. While by default an Ipopt shared library is compiled with
this flag, for a configuration of \Ipopt in debug mode ({\tt --enable-debug})
or as a static library ({\tt --disable-shared}), the configure flag
{\tt --with-pic} needs to be used to enable compilation with {\tt -fPIC}.
After installation of the \ipoptr package, it should be possible to load the
package in R and to view the help page:
\begin{verbatim}
> library('ipoptr')
> ?ipoptr
\end{verbatim}
\subsection{Compiling and Installing the \Matlab interface}
\label{sec.matlab.build}
\hfill \textit{based on documentation by Peter Carbonetto\footnote{University of British Columbia}, Tony Kelman\footnote{University of California, Berkeley}, and Ray Zimmerman}%
\medskip
The \Matlab interface to \Ipopt uses the {\tt mex} interface of \Matlab.
It has been tested on \Matlab versions 7.2 through 7.7. It might very well
work on earlier versions of \Matlab, but there is also a good chance that it
will not. It is unlikely that the software will run with versions prior to
\Matlab 6.5.
\textbf{NOTE}: The \Matlab interface \textbf{does not support \Matlab 8.3} (aka R2014a), as
it has not been adapted to changes in the \Matlab build system for MEX files.
First, note that some binaries of \Ipopt\ {\tt mex} files are available for download at
\url{http://www.coin-or.org/download/binary/Ipopt}.
Further, the OPTI Toolbox (\url{http://www.i2c2.aut.ac.nz/Wiki/OPTI}) comes with a
precompiled \Matlab interface for \Ipopt on Windows.
\subsubsection{Setting up {\tt mex}}
To build the interface by yourself, you will need to have \Matlab installed on
your system and have it configured to build {\tt mex} files, see
\url{http://www.mathworks.com/support/tech-notes/1600/1605.html} for details on
how to set this up.
Ipopt 3.11 added Makefile options to automate fixes for commonly encountered
issues with building the \Matlab interface. On Mac OS X or Windows, the file
{\tt mexopts.sh} ({\tt mexopts.bat} on Windows) will need to be modified.
This is performed automatically by calling {\tt make mexopts} in the
{\tt \$IPOPTDIR/build/Ipopt/contrib/MatlabInterface/src} directory. No changes
will be made if you already have a {\tt mexopts.sh} file in that directory.
If you need to make these modifications manually, follow the steps below.
For Mac OS X, the following procedure has been reported: First, one executes a command like
\begin{verbatim}
/Applications/MATLAB_R2012.app/bin/mex -setup
\end{verbatim}
This creates a {\tt mexopts.sh} file in the \verb|~/.matlab/R2010| directory.
Copy that file to the directory {\tt \$IPOPTDIR/build/Ipopt/contrib/MatlabInterface/src}
and modify it as follows.
\begin{itemize}
\item In the {\tt maci} section (32 bit builds) or the {\tt maci64} section
(64 bit builds), change both instances of {\tt libgfortran.dylib} to
{\tt libgfortran.a} in the {\tt FC\_LIBDIR} line (in case your Fortran
compiler only comes with static libraries).
\item Remove all occurrences of {\tt -isysroot \$SDKROOT} or
{\tt -Wl,-syslibroot,\$SDKROOT} in case the hard-coded version of the Xcode
SDK that Matlab expects does not match what you have installed on your system.
\item Remove all occurrences of {\tt -arch \$ARCHS} in case you are using a
GNU compiler that does not recognize these Apple-specific flags.
\end{itemize}
On Windows, if you are using the GNU compilers via MinGW, then you will need
to use the {\tt gnumex} project. First, execute the script {\tt ./get.Gnumex}
from the {\tt \$IPOPTDIR/Ipopt/contrib/MatlabInterface} directory. Then, after
configuring \Ipopt, go to {\tt \$IPOPTDIR/build/contrib/MatlabInterface/src}
and execute {\tt make gnumex}. This will start an instance of \Matlab and open
the {\tt gnumex} tool. Check that the options are filled out appropriately for
your MinGW installation, click ``Make options file'', then close this new instance
of \Matlab after {\tt gnumex} has created {\tt mexopts.bat}.
Calling {\tt make mexopts} will automatically make the necessary changes to this
new {\tt mexopts.bat} file. If you would like to do so manually, the changes are
as follows.
\begin{itemize}
\item Change {\tt COMPILER=gcc} to {\tt COMPILER=g++}
\item Change {\tt GM\_MEXLANG=c} to {\tt GM\_MEXLANG=cxx}
\item Add the contents of the {\tt LIBS=} line from the \Matlab interface Makefile
to {\tt GM\_ADD\_LIBS}
\item If you want to statically link the standard libraries into the
{\tt mex} file, add {\tt -static} to {\tt GM\_ADD\_LIBS}
\end{itemize}
\subsubsection{Adjusting configuration and build of \Ipopt}
The configure script of \Ipopt attempts to automatically locate the directory
where \Matlab is installed by querying the location of the {\tt matlab}
executable. You can also manually specify the \Matlab home directory when
calling the configure script with the flag {\tt --with-matlab-home}. You can
determine this home directory by the command {\tt matlabroot} within \Matlab.
In practice, it has been found easier to install and use the \Matlab interface
by disabling compilation of the shared libraries, and use only static
libraries instead. However, these static libraries need to be built in a way
that allows using them in a shared library, i.e., they need to be built with
position-independent code.
This is achieved with the configure script flags
\begin{verbatim}
--disable-shared --with-pic
\end{verbatim}
On Mac OS X, it has been reported that additionally the following flags for
configure should be used:
\begin{verbatim}
ADD_CFLAGS="-fno-common -fexceptions -no-cpp-precomp"
ADD_CXXFLAGS="-fno-common -fexceptions -no-cpp-precomp"
ADD_FFLAGS="-fexceptions -fbackslash"
\end{verbatim}
With \Ipopt 3.11, a \emph{site script for configure} has been added to the
\Matlab interface. This script takes care of setting configure options in a
way that is appropriate for building an \Ipopt {\tt mex} file that is usable via
\Matlab. Therefore, instead of setting configure options as described in the
previous section, it should be sufficient to create a directory {\tt \$IPOPTDIR/build/share}, copy the site file {\tt \$IPOPTDIR/contrib/MatlabInterface/MatlabInterface.site} to that directory, and rename it to {\tt config.site} before running configure.
Alternatively, you can set an environment variable
{\tt CONFIG\_SITE} that points to the site file.
This site script sets the configure flags (if not specified by the user)
{\tt --disable-shared --with-pic --with-blas=BUILD --with-lapack=BUILD}.
The first two flags are discussed above. We also specify that the reference
versions of BLAS and LAPACK should be used by default because of a commonly
observed issue on 64-bit Linux systems. If \Ipopt configure finds BLAS and/or
LAPACK libraries already installed then it will use them. However, \Matlab
includes its own versions of BLAS and LAPACK, which on 64-bit systems are
incompatible with the expected interface used by \Ipopt and the BLAS and LAPACK
packages available in many Linux distributions. If the \Ipopt {\tt mex} file is
compiled in such a way that the BLAS and LAPACK libraries are dynamically linked
as shared libraries (as found in installed Linux packages), those library
dependencies will be overridden by \Matlab's incompatible versions. This can
be avoided by statically linking BLAS and LAPACK into the \Ipopt {\tt mex} file,
which the above combination of configure flags will do. Note that this issue does
not appear to affect Mac OS X versions of \Matlab, so if you would like to use
the Apple optimized BLAS and LAPACK libraries you can override these settings and
specify {\tt --with-blas='-framework vecLib' --with-lapack='-framework vecLib'}.
The site script also tests whether the compilers on your system are capable of
statically linking the standard C++ and Fortran libraries into a shared library.
This is possible with GCC versions 4.5.0 or newer on Mac OS X or Windows, and 4.7.3 or newer
(when GCC itself is built {\tt --with-pic}) on Linux. If this is the case, then
the site script will set appropriate configure flags and options in the \Matlab
interface {\tt Makefile} to statically link all standard libraries into the \Ipopt {\tt mex}
file. This should allow a single {\tt mex} file to work with a variety of versions of
\Matlab, and on computers that do not have the same compiler versions installed.
If this static linking of standard libraries causes any issues, you can disable
it with the configure flag {\tt --disable-matlab-static}.
\subsubsection{Building the \Matlab interface}
After configuring, building, and installing \Ipopt itself, it is time to build
the \Matlab interface.
For that, \Ipopt's configure has setup a directory
{\tt \$IPOPTDIR/build/contrib/MatlabInterface/src} which contains a
{\tt Makefile}.
You may need to edit this file a little to suit your system setup. You
will find that most of the variables such as {\tt CXX} and {\tt CXXFLAGS} have
been automatically (and hopefully, correctly) set according to the flags
specified during your initial call to the configure script.
However, you may need to modify {\tt MATLAB\_HOME}, {\tt MEXSUFFIX} and
{\tt MEX} as explained in the comments of the Makefile.
For example, on Mac OS X, it has been reported that all duplicates of strings
like {\tt -L/usr/lib/gcc/i686-apple-darwin11/4.2.1/../../..} should be removed
from the {\tt LIBS} line.
Once you think you've set up the {\tt Makefile} properly, type
{\tt make install} in the same directory as the {\tt Makefile}. If you didn't
get any errors, then you should have ended up with a {\tt mex} file. The {\tt mex}
file will be called {\tt ipopt.\$MEXEXT}, where {\tt \$MEXEXT} is {\tt mexglx}
for 32-bit Linux, {\tt mexa64} for 64-bit Linux, {\tt mexw32} for 32-bit Windows, etc.
\subsubsection{Making \Matlab aware of the {\tt mex} file}
In order to use the {\tt mex} file in \Matlab, you need to tell \Matlab where
to find it. The best way to do this is to type
\begin{verbatim}
addpath sourcedir
\end{verbatim}
in the \Matlab command prompt, where {\tt sourcedir} is the location of the
{\tt mex} file you created. (For more information, type {\tt help addpath} in
\Matlab.) You can also achieve the same thing by modifying the {\tt MATLABPATH}
environment variable in the UNIX command line, using either the {\tt export}
command (in Bash shell) or the {\tt setenv} command (in C-shell).
%There's a great possibility you will encounter problems with the installation instructions we have just described here. I'm afraid some resourcefulness will be required on your part, as the installation will be slightly different for each person. Please consult the troubleshooting section on this webpage, and the archives of the IPOPT mailing list. If you can't find the answer at either of these locations, try sending an email to the IPOPT mailing list.
\subsubsection{Additional notes}
Starting with version 7.3, \Matlab can handle 64-bit
addressing, and the authors of \Matlab have modified the implementation of
sparse matrices to reflect this change. However, the row and column indices in
the sparse matrix are converted to signed integers, and this could potentially
cause problems when dealing with large, sparse matrices on 64-bit platforms
with \Matlab version 7.3 or greater.
As \Matlab (version R2008a or newer) includes its own HSL MA57
library, \Ipopt's configure can be setup to enable using this library in
\Ipopt's MA57 solver interface. To enable this, one should specify the
configure option {\tt --enable-matlab-ma57}. Note that using this option is
not advisable if one also provides the source for MA57 via ThirdParty/HSL.
\subsubsection{Troubleshooting}
The installation procedure described above does involve a certain amount of
expertise on the part of the user. If you are encountering problems, it is
highly recommended that you follow the standard installation of \Ipopt first,
and then test the installation by running some examples, either in C++ or in
AMPL.
What follows is a list of common errors encountered, along with suggested remedies.
\medskip
\textbf{Problem:} compilation is successful, but \Matlab crashes
\textbf{Remedy:} Even if you didn't get any errors during compilation, there's
still a possibility that you didn't link the {\tt mex} file properly. In this
case, executing \Ipopt in \Matlab will cause \Matlab to crash. This is a
common problem, and usually arises because you did not choose the proper
compiler or set the proper compilation flags (e.g. {\tt --with-pic}) when you
ran the configure script at the very beginning.
\medskip
\textbf{Problem:} \Matlab fails to link to \Ipopt shared library
\textbf{Remedy:} You might encounter this problem if you try to execute one of
the examples in \Matlab, and \Matlab complains that it cannot find the \Ipopt
shared library. The installation script has been set up so that the {\tt mex}
file you are calling knows where to look for the \Ipopt shared library.
However, if you moved the library then you will run into a problem. One way to
fix this problem is to modify the {\tt LDFLAGS} variable in the \Matlab
interface {\tt Makefile} (see above) so that the correct path of the \Ipopt
library is specified. Alternatively, you could modify the
{\tt LD\_LIBRARY\_PATH} environment variable so that the location of the
\Ipopt library is included in the path. If none of this is familiar to you,
you might want to do a web search to find out more.
\medskip
\textbf{Problem:} {\tt mwIndex} is not defined
\textbf{Remedy:} You may get a compilation error that says something to the
effect that {\tt mwIndex} is not defined. This error will surface on a version
of \Matlab prior to 7.3. The solution is to add the flag {\tt -DMWINDEXISINT}
to the {\tt CXXFLAGS} variable in the \Matlab interface {\tt Makefile} (see
above).
%\subsubsection{More Information}
%The difficulties with building and using the \Matlab interface of \Ipopt have
%lead to several installation instructions on the web:
%\begin{itemize}
%\item Peter Carbonetto's original instructions on how to build the \Matlab
%interface:\\ \url{https://projects.coin-or.org/Ipopt/wiki/MatlabInterface}
%\item Ray Zimmerman's instructions on how to build the \Matlab interface on
%Mac OS X:\\ \url{https://projects.coin-or.org/Ipopt/wiki/Ipopt_on_Mac_OS_X}
%\item Giacomo Perantoni's experiences and help with the \Matlab interface on
%Windows:\\ \url{http://users.ox.ac.uk/~newc3480}
%\end{itemize}
\subsection{Expert Installation Options for \Ipopt}\label{ExpertInstall}
The configuration script and Makefiles in the \Ipopt distribution
have been created using GNU's {\tt autoconf} and {\tt automake}. They
attempt to automatically adapt the compiler settings etc.\ to the
system they are running on. We tested the provided scripts for a
number of different machines, operating systems and compilers, but you
might run into a situation where the default setting does not work, or
where you need to change the settings to fit your particular
environment.
In general, you can see the list of options and variables that can be
set for the {\tt configure} script by typing \verb/configure --help/.
Also, the generic COIN-OR help pages are a valuable source of
information:
\centerline{\url{https://projects.coin-or.org/CoinHelp}}
Below a few particular options are discussed:
\begin{itemize}
\item The {\tt configure} script tries to determine automatically whether
  you have BLAS and/or LAPACK already installed on your system (trying
  a few default libraries), and if it does not find them, it makes
  sure that you put the source code in the required place.
However, you can specify a BLAS library (such as your local ATLAS
library\footnote{see \url{http://math-atlas.sourceforge.net}})
explicitly, using the \verb/--with-blas/ flag for {\tt configure}.
For example,
\verb|./configure --with-blas="-L$HOME/lib -lf77blas -lcblas -latlas"| %$
To tell the configure script to compile and use the downloaded BLAS
source files even if a BLAS library is found on your system, specify
\verb|--with-blas=BUILD|.
Similarly, you can use the \verb/--with-lapack/ switch to specify
the location of your LAPACK library, or use the keyword {\tt BUILD}
to force the \Ipopt makefiles to compile LAPACK together with
\Ipopt.
\item Similarly, if you have a precompiled library containing the
HSL packages, you can specify the directory with the
\texttt{CoinHslConfig.h} header file with the \verb|--with-hsl-incdir| flag and
the linker flags with the \verb|--with-hsl-lib| flag.
Analogously, use \verb|--with-asl-incdir| and \verb|--with-asl-lib| for
building against a precompiled AMPL solver library.
\item The HSL codes HSL\_MA86 and HSL\_MA97 can run in parallel if
compiled with OpenMP support. This is currently not enabled by
\Ipopt's configure by default. To enable OpenMP with GNU compilers, it
has been reported that the following configure flags should be used:
\verb|ADD_CFLAGS=-fopenmp ADD_FFLAGS=-fopenmp ADD_CXXFLAGS=-fopenmp|
\item If you want to compile \Ipopt with the linear solver Pardiso
(see Section~\ref{sec:Pardiso}) from the Pardiso project website,
you need to specify the link flags
for the library with the \verb|--with-pardiso| flag, including
required additional libraries and flags. For example, if you want
to compile \Ipopt with the parallel version of Pardiso (located in
{\tt \$HOME/lib}) on an AIX system in 64bit mode, you should add the
flag
\verb|--with-pardiso="-qsmp=omp $HOME/lib/libpardiso_P4AIX51_64_P.so"| %$
If you are using the parallel version of Pardiso, you need to
specify the number of processors it should run on with the
environment variable \verb|OMP_NUM_THREADS|, as described in the
Pardiso manual.
If you want to compile \Ipopt with the Pardiso library that is included
in Intel MKL, it should be sufficient to ensure that MKL is used for
the linear algebra routines (BLAS/LAPACK). On some systems, configure
is able to find MKL automatically when looking for BLAS. On other systems,
one has to specify the MKL libraries with the \verb|--with-blas| option.
\item If you want to compile \Ipopt with the linear solver WSMP (see
Section~\ref{sec:WSMP}), you need to specify the link flags for the
library with the \verb|--with-wsmp| flag, including required
additional libraries and flags. For example, if you want to compile
\Ipopt with WSMP (located in {\tt \$HOME/lib}) on an Intel IA32
Linux system, you should add the flag
\verb|--with-wsmp="$HOME/lib/wsmp/wsmp-Linux/lib/IA32/libwsmp.a -lpthread"| %$
\item If you want to compile \Ipopt with a precompiled MUMPS library
(see Section~\ref{sec:MUMPS}), you need to specify the directory containing
the MUMPS header files with the \verb|--with-mumps-incdir| flag,
e.g.,
\verb|--with-mumps-incdir="$HOME/MUMPS/include"| %$
and you also need to provide the link flags for MUMPS with the
\verb|--with-mumps-lib| flag.
\item If you want to use particular compilers, you can do so by adding
  the variable definitions for {\tt CXX}, {\tt CC}, and {\tt F77} to the
  {\tt ./configure} command line, to specify the C++, C, and Fortran
  compiler, respectively.
For example,
{\tt ./configure CXX=g++-4.2.0 CC=gcc-4.2.0 F77=gfortran-4.2.0}
In order to set the compiler flags, you should use the variables
{\tt CXXFLAGS}, {\tt CFLAGS}, {\tt FFLAGS}. Note that the \Ipopt
code uses ``{\tt dynamic\_cast}''. Therefore, it is necessary that
the C++ code is compiled with RTTI (Run-Time Type Information)
enabled. Some compilers need to be given special flags to do that
(e.g., ``{\tt -qrtti=dyna}'' for the AIX {\tt xlC} compiler).
Please also check the generic COIN-OR help page at
\centerline{\url{https://projects.coin-or.org/CoinHelp/wiki/user-configure\#GivingOptions}}
for the description of more variables that can be set for {\tt
configure}.
\item By default, the \Ipopt library is compiled as a shared library,
on systems where this is supported. If you want to generate a
static library, you need to specify the {\tt --disable-shared}
flag. If you want to compile both shared and static libraries, you
should specify the {\tt --enable-static} flag.
\item If you want to link the \Ipopt library with a main program
written in C or Fortran, the C and Fortran compiler doing the
linking of the executable needs to be told about the C++ runtime
libraries. Unfortunately, the current version of {\tt autoconf}
does not provide the automatic detection of those libraries. We
have hard-coded some default values for some systems and compilers,
but this might not work all the time.
If you have problems linking your Fortran or C code with the \Ipopt
library {\tt libipopt.a} and the linker complains about missing
symbols from C++ (e.g., the standard template library), you should
specify the C++ libraries with the {\tt CXXLIBS} variable. To find out
what those libraries are, it is probably helpful to link a simple C++
program with verbose compiler output.
For example, for the Intel compilers on a Linux system, you
might need to specify something like
{\tt ./configure CC=icc F77=ifort CXX=icpc $\backslash$\\ \hspace*{14ex} CXXLIBS='-L/usr/lib/gcc-lib/i386-redhat-linux/3.2.3 -lstdc++'}
\item Compilation in 64bit mode sometimes requires some special
consideration. For example, for compilation of 64bit code on AIX,
we recommend the following configuration
{\tt ./configure AR='ar -X64' NM='nm -X64' $\backslash$\\
\hspace*{14ex} CC='xlc -q64' F77='xlf -q64' CXX='xlC
-q64'$\backslash$\\ \hspace*{14ex} CFLAGS='-O3
-bmaxdata:0x3f0000000'
$\backslash$\\ \hspace*{14ex} FFLAGS='-O3 -bmaxdata:0x3f0000000' $\backslash$\\
\hspace*{14ex} CXXFLAGS='-qrtti=dyna -O3 -bmaxdata:0x3f0000000'}
(Alternatively, a simpler solution for AIX is to set the environment variable {\tt OBJECT\_MODE} to 64.)
% \item To build library/archive files (with the ending {\tt .a})
% including C++ code in some environments, it is necessary to use the
% C++ compiler instead of {\tt ar} to build the archive. This is for
% example the case for some older compilers on SGI and SUN. For this,
% the {\tt configure} variables {\tt AR}, {\tt ARFLAGS}, and {\tt
% AR\_X} are provided. Here, {\tt AR} specifies the command for the
% archiver for creating an archive, and {\tt ARFLAGS} specifies
% additional flags. {\tt AR\_X} contains the command for extracting
% all files from an archive. For example, the default setting for SUN
% compilers for our configure script is
% {\tt AR='CC -xar' ARFLAGS='-o' AR\_X='ar x'}
\item It is possible to compile the \Ipopt library in a debug
configuration, by specifying \verb|--enable-debug|. Then the
compilers will use the debug flags (unless the compilation flag
variables are overwritten in the {\tt configure} command line).
Also, you can tell \Ipopt to do some additional runtime sanity
checks, by specifying the flag {\tt --with-ipopt-checklevel=1}.
This usually leads to a significant slowdown of the code, but might
be helpful when debugging something.
% We assume vpath installations already during this chapter.
%\item It is not necessary to produce the binary files in the
% directories where the source files are. If you want to compile the
% code on different systems or with different compilers/options on a
% shared file system, you can keep one single copy of the source files
% in one directory, and the binary files for each configuration in
% separate directories. For this, simply run the configure script in
% the directory where you want the base directory for the \Ipopt
% binary files. For example:
%
% {\tt \$ mkdir \$HOME/Ipopt-objects}\\
% {\tt \$ cd \$HOME/Ipopt-objects}\\
% {\tt \$ \$HOME/CoinIpopt/configure}
\end{itemize}
\section{Interfacing your NLP to \Ipopt}
\label{sec:tutorial-example}
\Ipopt has been designed to be flexible for a wide variety of
applications, and there are a number of ways to interface with \Ipopt
that allow specific data structures and linear solver
techniques. Nevertheless, the authors have included a standard
representation that should meet the needs of most users.
This tutorial will discuss six interfaces to \Ipopt, namely the AMPL
modeling language \cite{FouGayKer:AMPLbook} interface, and the C++, C,
Fortran, Java, and R code interfaces. AMPL is a third-party modeling language
tool that allows users to write their optimization problem in a syntax
that resembles the way the problem would be written mathematically.
Once the problem has been formulated in AMPL, the problem can be
easily solved using the (already compiled) \Ipopt AMPL solver
executable, {\tt ipopt}. Interfacing your problem by directly linking
code requires more effort to write, but can be far more efficient for
large problems.
We will illustrate how to use these interfaces on an
example problem, number 71 from the Hock-Schittkowski test suite \cite{HS},
%\begin{subequations}\label{HS71}
\begin{eqnarray}
\min_{x \in \Re^4} &&x_1 x_4 (x_1 + x_2 + x_3) + x_3 \label{eq:ex_obj} \\
\mbox{s.t.} &&x_1 x_2 x_3 x_4 \ge 25 \label{eq:ex_ineq} \\
&&x_1^2 + x_2^2 + x_3^2 + x_4^2 = 40 \label{eq:ex_equ} \\
&&1 \leq x_1, x_2, x_3, x_4 \leq 5, \label{eq:ex_bounds}
\end{eqnarray}
%\end{subequations}
with the starting point
\begin{equation}
x_0 = (1, 5, 5, 1) \label{eq:ex_startpt}
\end{equation}
and the optimal solution
\[
x_* = (1.00000000, 4.74299963, 3.82114998, 1.37940829). \nonumber
\]
You can find further, less documented examples for using \Ipopt from
your own source code in the {\tt Ipopt/examples} subdirectory.
\subsection{Using \Ipopt through AMPL} \label{sec.ipoptampl}
Using the AMPL solver executable is by far the easiest way to
solve a problem with \Ipopt. The user must simply formulate the problem
in AMPL syntax, and solve the problem through the AMPL environment.
There are drawbacks, however. AMPL is a third-party package and, as
such, must be appropriately licensed (a free student version for
limited problem size is available from the AMPL website,
\url{http://www.ampl.com}). Furthermore, the AMPL environment may be
prohibitive for very large problems. Nevertheless, formulating the problem in
AMPL is straightforward and even for large problems, it is often used as a
prototyping tool before using one of the code interfaces.
This tutorial is not intended as a guide to formulating models in
AMPL. If you are not already familiar with AMPL, please consult
\cite{FouGayKer:AMPLbook}.
The problem presented in equations
(\ref{eq:ex_obj})--(\ref{eq:ex_startpt}) can be solved with \Ipopt with
the following AMPL model.
\begin{verbatim}
# tell ampl to use the ipopt executable as a solver
# make sure ipopt is in the path!
option solver ipopt;
# declare the variables and their bounds,
# set notation could be used, but this is straightforward
var x1 >= 1, <= 5;
var x2 >= 1, <= 5;
var x3 >= 1, <= 5;
var x4 >= 1, <= 5;
# specify the objective function
minimize obj:
x1 * x4 * (x1 + x2 + x3) + x3;
# specify the constraints
s.t.
inequality:
x1 * x2 * x3 * x4 >= 25;
equality:
x1^2 + x2^2 + x3^2 +x4^2 = 40;
# specify the starting point
let x1 := 1;
let x2 := 5;
let x3 := 5;
let x4 := 1;
# solve the problem
solve;
# print the solution
display x1;
display x2;
display x3;
display x4;
\end{verbatim}
The line ``{\tt option solver ipopt;}'' tells AMPL to use \Ipopt as
the solver. The \Ipopt executable (installed in
Section~\ref{sec.comp_and_inst}) must be in the {\tt PATH} for AMPL to
find it. The remaining lines specify the problem in AMPL format. The
problem can now be solved by starting AMPL and loading the mod file:
\begin{verbatim}
$ ampl
> model hs071_ampl.mod;
.
.
.
\end{verbatim}
%$
The problem will be solved using \Ipopt and the solution will be
displayed.
At this point, AMPL users may wish to skip the sections about
interfacing with code, but should read Section \ref{sec:options}
concerning \Ipopt options, and Section \ref{sec:output} which
explains the output displayed by \Ipopt.
\subsubsection{Using \Ipopt from the command line}
It is possible to solve AMPL problems with \Ipopt directly from
the command line. However, this requires a file in {\tt .nl} format
produced by {\tt ampl}. If you have a model and data loaded in
AMPL, you can create the corresponding {\tt .nl} file with name,
say, {\tt myprob.nl} by using the AMPL command:
{\tt write gmyprob}
There is a small {\tt .nl} file available in the \Ipopt distribution. It is
located at {\tt Ipopt/test/mytoy.nl}.
We use this file in the remainder of this section. We assume that the file
{\tt mytoy.nl} is in the current directory and that the command
{\tt ipopt} is a shortcut for running the {\tt ipopt} binary available
in the {\tt bin} directory of the installation of \Ipopt.
We list below commands to perform basic tasks from the Linux prompt.
\begin{itemize}
\item To solve {\tt mytoy.nl} from the Linux prompt, use:
{\tt ipopt mytoy}
\item To see all command line options for \Ipopt, use:
{\tt ipopt -=}
\item To see more detailed information on all options for \Ipopt:
{\tt ipopt mytoy 'print\_options\_documentation yes'}
\item To run {\tt ipopt}, setting the maximum number of iterations to 2 and
print level to 4:
{\tt ipopt mytoy 'max\_iter 2 print\_level 4'}
\end{itemize}
If many options are to be set, they can be collected in a file {\tt
ipopt.opt} that is automatically read by \Ipopt if present in
the current directory, see Section \ref{sec:options}.
\subsection{Interfacing with \Ipopt through code}
\label{sec.required_info}
In order to solve a problem, \Ipopt needs more information than just
the problem definition (for example, the derivative information). If
you are using a modeling language like AMPL, the extra information is
provided by the modeling tool and the \Ipopt interface. When
interfacing with \Ipopt through your own code, however, you must
provide this additional information.
The following information is required by \Ipopt:
\begin{enumerate}
\item Problem dimensions \label{it.prob_dim}
\begin{itemize}
\item number of variables
\item number of constraints
\end{itemize}
\item Problem bounds
\begin{itemize}
\item variable bounds
\item constraint bounds
\end{itemize}
\item Initial starting point
\begin{itemize}
\item Initial values for the primal $x$ variables
\item Initial values for the multipliers (only
required for a warm start option)
\end{itemize}
\item Problem Structure \label{it.prob_struct}
\begin{itemize}
\item number of nonzeros in the Jacobian of the constraints
\item number of nonzeros in the Hessian of the Lagrangian function
\item sparsity structure of the Jacobian of the constraints
\item sparsity structure of the Hessian of the Lagrangian function
\end{itemize}
\item Evaluation of Problem Functions \label{it.prob_eval} \\
Information evaluated using a given point ($x,
\lambda, \sigma_f$ coming from \Ipopt)
\begin{itemize}
\item Objective function, $f(x)$
\item Gradient of the objective $\nabla f(x)$
\item Constraint function values, $g(x)$
\item Jacobian of the constraints, $\nabla g(x)^T$
\item Hessian of the Lagrangian function,
$\sigma_f \nabla^2 f(x) + \sum_{i=1}^m\lambda_i\nabla^2
g_i(x)$ \\
(this is not required if a quasi-Newton option is chosen to
approximate the second derivatives)
\end{itemize}
\end{enumerate}
The problem dimensions and bounds are
straightforward and come solely from the problem definition. The
initial starting point is used by the algorithm when it begins
iterating to solve the problem. If \Ipopt has difficulty converging, or
if it converges to a locally infeasible point, adjusting the starting
point may help. Depending on the starting point, \Ipopt may also
converge to different local solutions.
Providing the sparsity structure of derivative matrices is a bit more
involved. \Ipopt is a nonlinear programming solver that is designed
for solving large-scale, sparse problems. While \Ipopt can be
customized for a variety of matrix formats, the triplet format is used
for the standard interfaces in this tutorial. For an overview of the
triplet format for sparse matrices, see Appendix~\ref{app.triplet}.
Before solving the problem, \Ipopt needs to know the number of
nonzero elements and the sparsity structure (row and column indices of
each of the nonzero entries) of the constraint Jacobian and the
Lagrangian function Hessian. Once defined, this nonzero structure MUST
remain constant for the entire optimization procedure. This means that
the structure needs to include entries for any element that could ever
be nonzero, not only those that are nonzero at the starting point.
As \Ipopt iterates, it will need the values for
Item~\ref{it.prob_eval} in Section \ref{sec.required_info} evaluated at
particular points. Before we can begin coding the interface, however,
we need to work out the details of these equations symbolically for
example problem (\ref{eq:ex_obj})-(\ref{eq:ex_bounds}).
The gradient of the objective $f(x)$ is given by
\[
\left[
\begin{array}{c}
x_1 x_4 + x_4 (x_1 + x_2 + x_3) \\
x_1 x_4 \\
x_1 x_4 + 1 \\
x_1 (x_1 + x_2 + x_3)
\end{array}
\right]
\]
and the Jacobian of the constraints $g(x)$ is
\[
\left[
\begin{array}{cccc}
x_2 x_3 x_4 & x_1 x_3 x_4 & x_1 x_2 x_4 & x_1 x_2 x_3 \\
2 x_1 & 2 x_2 & 2 x_3 & 2 x_4
\end{array}
\right].
\]
We also need to determine the Hessian of the Lagrangian\footnote{If a
quasi-Newton option is chosen to approximate the second derivatives,
this is not required. However, if second derivatives can be
computed, it is often worthwhile to let \Ipopt use them, since the
algorithm is then usually more robust and converges faster. More on
the quasi-Newton approximation in Section~\ref{sec:quasiNewton}.}.
The Lagrangian function for the NLP
(\ref{eq:ex_obj})-(\ref{eq:ex_bounds}) is defined as $f(x) + g(x)^T
\lambda$ and the Hessian of the Lagrangian function is, technically, $
\nabla^2 f(x_k) + \sum_{i=1}^m\lambda_i\nabla^2 g_i(x_k)$. However,
we introduce a factor ($\sigma_f$) in front of the objective term
so that \Ipopt can ask for the Hessian of the objective or the
constraints independently, if required.
%
Thus, for \Ipopt the symbolic form of the Hessian of the
Lagrangian is
\begin{equation}\label{eq:IpoptLAG}
\sigma_f \nabla^2 f(x_k) + \sum_{i=1}^m\lambda_i\nabla^2 g_i(x_k)
\end{equation}
and for the example problem this becomes
%\begin{eqnarray}
%{\cal L}(x,\lambda) &{=}& f(x) + c(x)^T \lambda \nonumber \\
%&{=}& \left(x_1 x_4 (x_1 + x_2 + x_3) + x_3\right)
%+ \left(x_1 x_2 x_3 x_4\right) \lambda_1 \nonumber \\
%&& \;\;\;\;\;+ \left(x_1^2 + x_2^2 + x_3^2 + x_4^2\right) \lambda_2
%- \displaystyle \sum_{i \in 1..4} z^L_i + \sum_{i \in 1..4} z^U_i
%\end{eqnarray}
\[%\begin{equation}
\sigma_f \left[
\begin{array}{cccc}
2 x_4 & x_4 & x_4 & 2 x_1 + x_2 + x_3 \\
x_4 & 0 & 0 & x_1 \\
x_4 & 0 & 0 & x_1 \\
2 x_1+x_2+x_3 & x_1 & x_1 & 0
\end{array}
\right]
+
\lambda_1
\left[
\begin{array}{cccc}
0 & x_3 x_4 & x_2 x_4 & x_2 x_3 \\
x_3 x_4 & 0 & x_1 x_4 & x_1 x_3 \\
x_2 x_4 & x_1 x_4 & 0 & x_1 x_2 \\
x_2 x_3 & x_1 x_3 & x_1 x_2 & 0
\end{array}
\right]
+
\lambda_2
\left[
\begin{array}{cccc}
2 & 0 & 0 & 0 \\
0 & 2 & 0 & 0 \\
0 & 0 & 2 & 0 \\
0 & 0 & 0 & 2
\end{array}
\right]
\]%\end{equation}
where the first term comes from the Hessian of the objective function,
and the second and third terms come from the Hessians of the constraints
(\ref{eq:ex_ineq}) and (\ref{eq:ex_equ}), respectively. Therefore, the
dual variables $\lambda_1$ and $\lambda_2$ are the multipliers
for constraints (\ref{eq:ex_ineq}) and (\ref{eq:ex_equ}), respectively.
\vspace{\baselineskip}
The remaining sections of the tutorial will lead you through
the coding required to solve example problem
(\ref{eq:ex_obj})--(\ref{eq:ex_bounds}) using first C++, then C, and finally
Fortran. Completed versions of these examples can be found in {\tt
\$IPOPTDIR/Ipopt/examples} under {\tt hs071\_cpp}, {\tt hs071\_c}, and {\tt
hs071\_f}.
As a user, you are responsible for coding two sections of the program
that solves a problem using \Ipopt: the main executable (e.g., {\tt
main}) and the problem representation. Typically, you will write an
executable that prepares the problem, and then passes control over to
\Ipopt through an {\tt Optimize} or {\tt Solve} call. In this call,
you will give \Ipopt everything that it requires to call back to your
code whenever it needs functions evaluated (like the objective
function, the Jacobian of the constraints, etc.). In each of the
three sections that follow (C++, C, and Fortran), we will first
discuss how to code the problem representation, and then how to code
the executable.
\subsection{The C++ Interface} \label{sec.cppinterface}
This tutorial assumes that you are familiar with the C++ programming
language; however, we will lead you through each step of the
implementation. For the problem representation, we will create a class
that inherits from the pure virtual base class {\tt TNLP} ({\tt
IpTNLP.hpp}). For the executable (the {\tt main} function) we will
make the call to \Ipopt through the {\tt IpoptApplication} class
({\tt IpIpoptApplication.hpp}). In addition, we will also be using the
{\tt SmartPtr} class ({\tt IpSmartPtr.hpp}) which implements a reference
counting pointer that takes care of memory management (object
deletion) for you (for details, see Appendix~\ref{app.smart_ptr}).
After ``\texttt{make install}'' (see Section~\ref{sec.comp_and_inst}),
the header files are installed in \texttt{\$IPOPTDIR/include/coin}
(or in \texttt{\$PREFIX/include/coin} if the switch
\verb|--prefix=$PREFIX| was used for {\tt configure}). %$
\subsubsection{Coding the Problem Representation}\label{sec.cpp_problem}
We provide the required information
by coding the {\tt HS071\_NLP} class, a specific implementation of the
{\tt TNLP} base class. In the executable, we will create an instance
of the {\tt HS071\_NLP} class and give this class to \Ipopt so it can
evaluate the problem functions through the {\tt TNLP} interface. If
you have any difficulty as the implementation proceeds, have a look at
the completed example in the {\tt Ipopt/examples/hs071\_cpp} directory.
Start by creating a new directory {\tt MyExample} under {\tt examples} and
create the files {\tt hs071\_nlp.hpp} and {\tt
hs071\_nlp.cpp}. In {\tt hs071\_nlp.hpp}, include {\tt IpTNLP.hpp}
(the base class), tell the compiler that we are using the \Ipopt
namespace, and create the declaration of the {\tt HS071\_NLP} class,
inheriting from {\tt TNLP}. Have a look at the {\tt TNLP} class in
{\tt IpTNLP.hpp}; you will see eight pure virtual methods that we must
implement. Declare these methods in the header file. Implement each
of the methods in {\tt hs071\_nlp.cpp} using the descriptions given
below. In {\tt hs071\_nlp.cpp}, first include the header file for your
class and tell the compiler that you are using the \Ipopt namespace.
A full version of these files can be found in the {\tt
Ipopt/examples/hs071\_cpp} directory.
It is very easy to make mistakes in the implementation of the function
evaluation methods, in particular regarding the derivatives. \Ipopt
has a feature that can help you to debug the derivative code, using
finite differences, see Section~\ref{sec:deriv-checker}.
Note that the return value of any {\tt bool}-valued function should be
{\tt true}, unless an error occurred, for example, because the value of
a problem function could not be evaluated at the required point.
\paragraph{Method {\texttt{get\_nlp\_info}}} with prototype
\begin{verbatim}
virtual bool get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
Index& nnz_h_lag, IndexStyleEnum& index_style)
\end{verbatim}
Give \Ipopt the information about the size of the problem (and hence,
the size of the arrays that it needs to allocate).
\begin{itemize}
\item {\tt n}: (out), the number of variables in the problem (dimension of $x$).
\item {\tt m}: (out), the number of constraints in the problem (dimension of $g(x)$).
\item {\tt nnz\_jac\_g}: (out), the number of nonzero entries in the Jacobian.
\item {\tt nnz\_h\_lag}: (out), the number of nonzero entries in the Hessian.
\item {\tt index\_style}: (out), the numbering style used for row/col entries in the sparse matrix
format ({\tt C\_STYLE}: 0-based, {\tt FORTRAN\_STYLE}: 1-based; see
also Appendix~\ref{app.triplet}).
\end{itemize}
\Ipopt uses this information when allocating the arrays that
it will later ask you to fill with values. Be careful in this method
since incorrect values will cause memory bugs which may be very
difficult to find.
Our example problem has 4 variables ({\tt n}) and 2 constraints ({\tt m}). The
constraint Jacobian for this small problem is actually dense and has 8
nonzeros (we still need to represent this Jacobian using the sparse
matrix triplet format). The Hessian of the Lagrangian has 10
``symmetric'' nonzeros (i.e., nonzeros in the lower left triangular
part). Keep in mind that the number of nonzeros is the total number
of elements that may \emph{ever} be nonzero, not just those that are
nonzero at the starting point. This information is set once for the
entire problem.
\begin{footnotesize}
\begin{verbatim}
bool HS071_NLP::get_nlp_info(Index& n, Index& m, Index& nnz_jac_g,
Index& nnz_h_lag, IndexStyleEnum& index_style)
{
// The problem described in HS071_NLP.hpp has 4 variables, x[0] through x[3]
n = 4;
// one equality constraint and one inequality constraint
m = 2;
// in this example the Jacobian is dense and contains 8 nonzeros
nnz_jac_g = 8;
// the Hessian is also dense and has 16 total nonzeros, but we
// only need the lower left corner (since it is symmetric)
nnz_h_lag = 10;
// use the C style indexing (0-based)
index_style = TNLP::C_STYLE;
return true;
}
\end{verbatim}
\end{footnotesize}
\paragraph{Method {\texttt{get\_bounds\_info}}} with prototype
\begin{verbatim}
virtual bool get_bounds_info(Index n, Number* x_l, Number* x_u,
Index m, Number* g_l, Number* g_u)
\end{verbatim}
Give \Ipopt the value of the bounds on the variables and constraints.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of $x$).
\item {\tt x\_l}: (out) the lower bounds $x^L$ for $x$.
\item {\tt x\_u}: (out) the upper bounds $x^U$ for $x$.
\item {\tt m}: (in), the number of constraints in the problem (dimension of $g(x)$).
\item {\tt g\_l}: (out) the lower bounds $g^L$ for $g(x)$.
\item {\tt g\_u}: (out) the upper bounds $g^U$ for $g(x)$.
\end{itemize}
The values of {\tt n} and {\tt m} that you specified in {\tt
get\_nlp\_info} are passed to you for debug checking. Setting a
lower bound to a value less than or equal to the value of the option
\htmlref{\tt nlp\_lower\_bound\_inf}{opt:nlp_lower_bound_inf} will
cause \Ipopt to assume no lower bound. Likewise, specifying the upper
bound above or equal to the value of the option
\htmlref{\tt nlp\_upper\_bound\_inf}{opt:nlp_upper_bound_inf} will cause \Ipopt to
assume no upper bound. These options,
\htmlref{\tt nlp\_lower\_bound\_inf}{opt:nlp_lower_bound_inf}
and \htmlref{\tt nlp\_upper\_bound\_inf}{opt:nlp_upper_bound_inf},
are set to $-10^{19}$ and $10^{19}$,
respectively, by default, but may be modified by changing the options
(see Section \ref{sec:options}).
In our example, the first constraint has a lower bound of $25$ and no upper
bound, so we set the lower bound of constraint {\tt [0]} to $25$ and
the upper bound to some number greater than $10^{19}$. The second
constraint is an equality constraint and we set both bounds to
$40$. \Ipopt recognizes this as an equality constraint and does not
treat it as two inequalities.
\begin{footnotesize}
\begin{verbatim}
bool HS071_NLP::get_bounds_info(Index n, Number* x_l, Number* x_u,
Index m, Number* g_l, Number* g_u)
{
// here, the n and m we gave IPOPT in get_nlp_info are passed back to us.
// If desired, we could assert to make sure they are what we think they are.
assert(n == 4);
assert(m == 2);
// the variables have lower bounds of 1
for (Index i=0; i<4; i++)
x_l[i] = 1.0;
// the variables have upper bounds of 5
for (Index i=0; i<4; i++)
x_u[i] = 5.0;
// the first constraint g1 has a lower bound of 25
g_l[0] = 25;
// the first constraint g1 has NO upper bound, here we set it to 2e19.
// Ipopt interprets any number greater than nlp_upper_bound_inf as
// infinity. The default value of nlp_upper_bound_inf and nlp_lower_bound_inf
// is 1e19 and can be changed through ipopt options.
g_u[0] = 2e19;
// the second constraint g2 is an equality constraint, so we set the
// upper and lower bound to the same value
g_l[1] = g_u[1] = 40.0;
return true;
}
\end{verbatim}
\end{footnotesize}
\paragraph{Method {\texttt{get\_starting\_point}}} with prototype
\begin{verbatim}
virtual bool get_starting_point(Index n, bool init_x, Number* x,
bool init_z, Number* z_L, Number* z_U,
Index m, bool init_lambda, Number* lambda)
\end{verbatim}
Give \Ipopt the starting point before it begins iterating.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of $x$).
\item {\tt init\_x}: (in), if true, this method must provide an initial value for $x$.
\item {\tt x}: (out), the initial values for the primal variables, $x$.
\item {\tt init\_z}: (in), if true, this method must provide an initial value
for the bound multipliers $z^L$ and $z^U$.
\item {\tt z\_L}: (out), the initial values for the bound multipliers, $z^L$.
\item {\tt z\_U}: (out), the initial values for the bound multipliers, $z^U$.
\item {\tt m}: (in), the number of constraints in the problem (dimension of $g(x)$).
\item {\tt init\_lambda}: (in), if true, this method must provide an initial value
for the constraint multipliers, $\lambda$.
\item {\tt lambda}: (out), the initial values for the constraint multipliers, $\lambda$.
\end{itemize}
The variables {\tt n} and {\tt m} are passed in for your convenience.
These variables will have the same values you specified in {\tt
get\_nlp\_info}.
Depending on the options that have been set, \Ipopt may or may not
require initial values for the primal variables $x$, the bound multipliers
$z^L$ and $z^U$, and the constraint multipliers $\lambda$. The boolean
flags {\tt init\_x}, {\tt init\_z}, and {\tt init\_lambda} tell you
whether or not you should provide initial values for $x$, $z^L$, $z^U$, or
$\lambda$ respectively. The default options only require an initial
value for the primal variables $x$. Note, the initial values for
bound multiplier components for ``infinity'' bounds
($x_L^{(i)}=-\infty$ or $x_U^{(i)}=\infty$) are ignored.
In our example, we provide initial values for $x$ as specified in the
example problem. We do not provide any initial values for the dual
variables, but use an assert to immediately let us know if we are ever
asked for them.
\begin{footnotesize}
\begin{verbatim}
bool HS071_NLP::get_starting_point(Index n, bool init_x, Number* x,
bool init_z, Number* z_L, Number* z_U,
Index m, bool init_lambda,
Number* lambda)
{
// Here, we assume we only have starting values for x, if you code
// your own NLP, you can provide starting values for the dual variables
// if you wish to use a warmstart option
assert(init_x == true);
assert(init_z == false);
assert(init_lambda == false);
// initialize to the given starting point
x[0] = 1.0;
x[1] = 5.0;
x[2] = 5.0;
x[3] = 1.0;
return true;
}
\end{verbatim}
\end{footnotesize}
\paragraph{Method {\texttt{eval\_f}}} with prototype
\begin{verbatim}
virtual bool eval_f(Index n, const Number* x,
bool new_x, Number& obj_value)
\end{verbatim}
Return the value of the objective function at the point $x$.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension
of $x$).
\item {\tt x}: (in), the values for the primal variables, $x$, at which
$f(x)$ is to be evaluated.
\item {\tt new\_x}: (in), false if any evaluation method was
previously called with the same values in {\tt x}, true otherwise.
\item {\tt obj\_value}: (out) the value of the objective function
($f(x)$).
\end{itemize}
The boolean variable {\tt new\_x} will be false if the last call to
any of the evaluation methods ({\tt eval\_*}) used the same $x$
values. This can be helpful when users have efficient implementations
that calculate multiple outputs at once. \Ipopt internally caches
results from the {\tt TNLP} and generally, this flag can be ignored.
The variable {\tt n} is passed in for your convenience. This variable
will have the same value you specified in {\tt get\_nlp\_info}.
For our example, we ignore the {\tt new\_x} flag and calculate the objective.
\begin{footnotesize}
\begin{verbatim}
bool HS071_NLP::eval_f(Index n, const Number* x, bool new_x, Number& obj_value)
{
assert(n == 4);
obj_value = x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2];
return true;
}
\end{verbatim}
\end{footnotesize}
\paragraph{Method {\texttt{eval\_grad\_f}}} with prototype
\begin{verbatim}
virtual bool eval_grad_f(Index n, const Number* x, bool new_x,
Number* grad_f)
\end{verbatim}
Return the gradient of the objective function at the point $x$.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of $x$).
\item {\tt x}: (in), the values for the primal variables, $x$, at which
$\nabla f(x)$ is to be evaluated.
\item {\tt new\_x}: (in), false if any evaluation method was previously called
with the same values in {\tt x}, true otherwise.
\item {\tt grad\_f}: (out) the array of values for the gradient of the
objective function ($\nabla f(x)$).
\end{itemize}
The gradient array is in the same order as the $x$ variables (i.e., the
gradient of the objective with respect to {\tt x[2]} should be put in
{\tt grad\_f[2]}).
The boolean variable {\tt new\_x} will be false if the last call to
any of the evaluation methods ({\tt eval\_*}) used the same $x$
values. This can be helpful when users have efficient implementations
that calculate multiple outputs at once. \Ipopt internally caches
results from the {\tt TNLP} and generally, this flag can be ignored.
The variable {\tt n} is passed in for your convenience. This
variable will have the same value you specified in {\tt
get\_nlp\_info}.
In our example, we ignore the {\tt new\_x} flag and calculate the
values for the gradient of the objective.
\begin{footnotesize}
\begin{verbatim}
bool HS071_NLP::eval_grad_f(Index n, const Number* x, bool new_x, Number* grad_f)
{
assert(n == 4);
grad_f[0] = x[0] * x[3] + x[3] * (x[0] + x[1] + x[2]);
grad_f[1] = x[0] * x[3];
grad_f[2] = x[0] * x[3] + 1;
grad_f[3] = x[0] * (x[0] + x[1] + x[2]);
return true;
}
\end{verbatim}
\end{footnotesize}
\paragraph{Method {\texttt{eval\_g}}} with prototype
\begin{verbatim}
virtual bool eval_g(Index n, const Number* x,
bool new_x, Index m, Number* g)
\end{verbatim}
Return the value of the constraint function at the point $x$.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of $x$).
\item {\tt x}: (in), the values for the primal variables, $x$, at
which the constraint functions,
$g(x)$, are to be evaluated.
\item {\tt new\_x}: (in), false if any evaluation method was previously called
with the same values in {\tt x}, true otherwise.
\item {\tt m}: (in), the number of constraints in the problem (dimension of $g(x)$).
\item {\tt g}: (out) the array of constraint function values, $g(x)$.
\end{itemize}
The values returned in {\tt g} should be only the $g(x)$ values,
do not add or subtract the bound values $g^L$ or $g^U$.
The boolean variable {\tt new\_x} will be false if the last call to
any of the evaluation methods ({\tt eval\_*}) used the same $x$
values. This can be helpful when users have efficient implementations
that calculate multiple outputs at once. \Ipopt internally caches
results from the {\tt TNLP} and generally, this flag can be ignored.
The variables {\tt n} and {\tt m} are passed in for your convenience.
These variables will have the same values you specified in {\tt
get\_nlp\_info}.
In our example, we ignore the {\tt new\_x} flag and calculate the
values of constraint functions.
\begin{footnotesize}
\begin{verbatim}
bool HS071_NLP::eval_g(Index n, const Number* x, bool new_x, Index m, Number* g)
{
assert(n == 4);
assert(m == 2);
g[0] = x[0] * x[1] * x[2] * x[3];
g[1] = x[0]*x[0] + x[1]*x[1] + x[2]*x[2] + x[3]*x[3];
return true;
}
\end{verbatim}
\end{footnotesize}
\paragraph{Method {\texttt{eval\_jac\_g}}} with prototype
\begin{verbatim}
virtual bool eval_jac_g(Index n, const Number* x, bool new_x,
Index m, Index nele_jac, Index* iRow,
Index *jCol, Number* values)
\end{verbatim}
Return either the sparsity structure of the Jacobian of the
constraints, or the values for the Jacobian of the constraints at the
point $x$.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension of $x$).
\item {\tt x}: (in), the values for the primal variables, $x$, at which
the constraint Jacobian, $\nabla g(x)^T$, is to be evaluated.
\item {\tt new\_x}: (in), false if any evaluation method was previously called
with the same values in {\tt x}, true otherwise.
\item {\tt m}: (in), the number of constraints in the problem (dimension of $g(x)$).
\item {\tt nele\_jac}: (in), the number of nonzero elements in the
Jacobian (dimension of {\tt iRow}, {\tt jCol}, and {\tt values}).
\item {\tt iRow}: (out), the row indices of entries in the Jacobian of the constraints.
\item {\tt jCol}: (out), the column indices of entries in the Jacobian of the constraints.
\item {\tt values}: (out), the values of the entries in the Jacobian of the constraints.
\end{itemize}
The Jacobian is the matrix of derivatives where the derivative of
constraint $g^{(i)}$ with respect to variable $x^{(j)}$ is placed in
row $i$ and column $j$. See Appendix \ref{app.triplet} for a
discussion of the sparse matrix format used in this method.
If the {\tt iRow} and {\tt jCol} arguments are not {\tt NULL}, then
\Ipopt wants you to fill in the sparsity structure of the Jacobian
(the row and column indices only). At this time, the {\tt x} argument
and the {\tt values} argument will be {\tt NULL}.
If the {\tt x} argument and the {\tt values} argument are not {\tt
NULL}, then \Ipopt wants you to fill in the values of the Jacobian
as calculated from the array {\tt x} (using the same order as you used
when specifying the sparsity structure). At this time, the {\tt iRow}
and {\tt jCol} arguments will be {\tt NULL}.
The boolean variable {\tt new\_x} will be false if the last call to
any of the evaluation methods ({\tt eval\_*}) used the same $x$
values. This can be helpful when users have efficient implementations
that calculate multiple outputs at once. \Ipopt internally caches
results from the {\tt TNLP} and generally, this flag can be ignored.
The variables {\tt n}, {\tt m}, and {\tt nele\_jac} are passed in for
your convenience. These arguments will have the same values you
specified in {\tt get\_nlp\_info}.
In our example, the Jacobian is actually dense, but we still
specify it using the sparse format.
\begin{footnotesize}
\begin{verbatim}
bool HS071_NLP::eval_jac_g(Index n, const Number* x, bool new_x,
Index m, Index nele_jac, Index* iRow, Index *jCol,
Number* values)
{
if (values == NULL) {
// return the structure of the Jacobian
// this particular Jacobian is dense
iRow[0] = 0; jCol[0] = 0;
iRow[1] = 0; jCol[1] = 1;
iRow[2] = 0; jCol[2] = 2;
iRow[3] = 0; jCol[3] = 3;
iRow[4] = 1; jCol[4] = 0;
iRow[5] = 1; jCol[5] = 1;
iRow[6] = 1; jCol[6] = 2;
iRow[7] = 1; jCol[7] = 3;
}
else {
// return the values of the Jacobian of the constraints
values[0] = x[1]*x[2]*x[3]; // 0,0
values[1] = x[0]*x[2]*x[3]; // 0,1
values[2] = x[0]*x[1]*x[3]; // 0,2
values[3] = x[0]*x[1]*x[2]; // 0,3
values[4] = 2*x[0]; // 1,0
values[5] = 2*x[1]; // 1,1
values[6] = 2*x[2]; // 1,2
values[7] = 2*x[3]; // 1,3
}
return true;
}
\end{verbatim}
\end{footnotesize}
\paragraph{Method {\texttt{eval\_h}}} with prototype
\begin{verbatim}
virtual bool eval_h(Index n, const Number* x, bool new_x,
Number obj_factor, Index m, const Number* lambda,
bool new_lambda, Index nele_hess, Index* iRow,
Index* jCol, Number* values)
\end{verbatim}
Return either the sparsity structure of the Hessian of the Lagrangian, or the values of the
Hessian of the Lagrangian (\ref{eq:IpoptLAG}) for the given values for $x$,
$\sigma_f$, and $\lambda$.
\begin{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension
of $x$).
\item {\tt x}: (in), the values for the primal variables, $x$, at which
the Hessian is to be evaluated.
\item {\tt new\_x}: (in), false if any evaluation method was previously called
with the same values in {\tt x}, true otherwise.
\item {\tt obj\_factor}: (in), factor in front of the objective term
in the Hessian, $\sigma_f$.
\item {\tt m}: (in), the number of constraints in the problem (dimension of $g(x)$).
\item {\tt lambda}: (in), the values for the constraint multipliers,
$\lambda$, at which the Hessian is to be evaluated.
\item {\tt new\_lambda}: (in), false if any evaluation method was
previously called with the same values in {\tt lambda}, true
otherwise.
\item {\tt nele\_hess}: (in), the number of nonzero elements in the
Hessian (dimension of {\tt iRow}, {\tt jCol}, and {\tt values}).
\item {\tt iRow}: (out), the row indices of entries in the Hessian.
\item {\tt jCol}: (out), the column indices of entries in the Hessian.
\item {\tt values}: (out), the values of the entries in the Hessian.
\end{itemize}
The Hessian matrix that \Ipopt uses is defined in
(\ref{eq:IpoptLAG}). See Appendix \ref{app.triplet} for a
discussion of the sparse symmetric matrix format used in this method.
If the {\tt iRow} and {\tt jCol} arguments are not {\tt NULL}, then
\Ipopt wants you to fill in the sparsity structure of the Hessian
(the row and column indices for the lower or upper triangular part
only). In this case, the {\tt x}, {\tt lambda}, and {\tt values}
arrays will be {\tt NULL}.
If the {\tt x}, {\tt lambda}, and {\tt values} arrays are not {\tt
NULL}, then \Ipopt wants you to fill in the values of the Hessian
as calculated using {\tt x} and {\tt lambda} (using the same order as
you used when specifying the sparsity structure). In this case, the
{\tt iRow} and {\tt jCol} arguments will be {\tt NULL}.
The boolean variables {\tt new\_x} and {\tt new\_lambda} will both be
false if the last call to any of the evaluation methods ({\tt
eval\_*}) used the same values. This can be helpful when users have
efficient implementations that calculate multiple outputs at once.
\Ipopt internally caches results from the {\tt TNLP} and generally,
these flags can be ignored.
The variables {\tt n}, {\tt m}, and {\tt nele\_hess} are passed in for
your convenience. These arguments will have the same values you
specified in {\tt get\_nlp\_info}.
In our example, the Hessian is dense, but we still specify it using the
sparse matrix format. Because the Hessian is symmetric, we only need to
specify the lower left corner.
\begin{footnotesize}
\begin{verbatim}
bool HS071_NLP::eval_h(Index n, const Number* x, bool new_x,
Number obj_factor, Index m, const Number* lambda,
bool new_lambda, Index nele_hess, Index* iRow,
Index* jCol, Number* values)
{
if (values == NULL) {
// return the structure. This is a symmetric matrix, fill the lower left
// triangle only.
// the Hessian for this problem is actually dense
Index idx=0;
for (Index row = 0; row < 4; row++) {
for (Index col = 0; col <= row; col++) {
iRow[idx] = row;
jCol[idx] = col;
idx++;
}
}
assert(idx == nele_hess);
}
else {
// return the values. This is a symmetric matrix, fill the lower left
// triangle only
// fill the objective portion
values[0] = obj_factor * (2*x[3]); // 0,0
values[1] = obj_factor * (x[3]); // 1,0
values[2] = 0; // 1,1
values[3] = obj_factor * (x[3]); // 2,0
values[4] = 0; // 2,1
values[5] = 0; // 2,2
values[6] = obj_factor * (2*x[0] + x[1] + x[2]); // 3,0
values[7] = obj_factor * (x[0]); // 3,1
values[8] = obj_factor * (x[0]); // 3,2
values[9] = 0; // 3,3
// add the portion for the first constraint
values[1] += lambda[0] * (x[2] * x[3]); // 1,0
values[3] += lambda[0] * (x[1] * x[3]); // 2,0
values[4] += lambda[0] * (x[0] * x[3]); // 2,1
values[6] += lambda[0] * (x[1] * x[2]); // 3,0
values[7] += lambda[0] * (x[0] * x[2]); // 3,1
values[8] += lambda[0] * (x[0] * x[1]); // 3,2
// add the portion for the second constraint
values[0] += lambda[1] * 2; // 0,0
values[2] += lambda[1] * 2; // 1,1
values[5] += lambda[1] * 2; // 2,2
values[9] += lambda[1] * 2; // 3,3
}
return true;
}
\end{verbatim}
\end{footnotesize}
\paragraph{Method \texttt{finalize\_solution}} with prototype
\begin{verbatim}
virtual void finalize_solution(SolverReturn status, Index n,
const Number* x, const Number* z_L,
const Number* z_U, Index m, const Number* g,
const Number* lambda, Number obj_value,
const IpoptData* ip_data,
IpoptCalculatedQuantities* ip_cq)
\end{verbatim}
This is the only method that is not mentioned in Section
\ref{sec.required_info}. This method is called by \Ipopt after the
algorithm has finished (successfully, or after most kinds of errors).
\begin{itemize}
\item {\tt status}: (in), gives the status of the algorithm as
specified in {\tt IpAlgTypes.hpp},
\begin{itemize}
\item {\tt SUCCESS}: Algorithm terminated successfully at a locally
optimal point, satisfying the convergence tolerances (can be
specified by options).
\item {\tt MAXITER\_EXCEEDED}: Maximum number of iterations exceeded
(can be specified by an option).
\item {\tt CPUTIME\_EXCEEDED}: Maximum number of CPU seconds exceeded
(can be specified by an option).
\item {\tt STOP\_AT\_TINY\_STEP}: Algorithm proceeds with very
little progress.
\item {\tt STOP\_AT\_ACCEPTABLE\_POINT}: Algorithm stopped at a
  point that converged not to the ``desired'' tolerances, but to the
  ``acceptable'' tolerances (see the {\tt acceptable-...} options).
\item {\tt LOCAL\_INFEASIBILITY}: Algorithm converged to a point of
local infeasibility. Problem may be infeasible.
\item {\tt USER\_REQUESTED\_STOP}: The user call-back function {\tt
intermediate\_callback} (see Section~\ref{sec:add_meth})
returned {\tt false}, i.e., the user code requested a premature
termination of the optimization.
\item {\tt DIVERGING\_ITERATES}: It seems that the iterates diverge.
\item {\tt RESTORATION\_FAILURE}: Restoration phase failed,
algorithm doesn't know how to proceed.
\item {\tt ERROR\_IN\_STEP\_COMPUTATION}: An unrecoverable error
occurred while \Ipopt tried to compute the search direction.
\item {\tt INVALID\_NUMBER\_DETECTED}: Algorithm received an
invalid number (such as {\tt NaN} or {\tt Inf}) from the NLP; see
also option \htmlref{\tt check\_derivatives\_for\_naninf}{opt:check_derivatives_for_naninf}.
\item {\tt
INTERNAL\_ERROR}: An unknown internal error occurred. Please
contact the \Ipopt authors through the mailing list.
\end{itemize}
\item {\tt n}: (in), the number of variables in the problem (dimension
of $x$).
\item {\tt x}: (in), the final values for the primal variables, $x_*$.
\item {\tt z\_L}: (in), the final values for the lower bound
multipliers, $z^L_*$.
\item {\tt z\_U}: (in), the final values for the upper bound
multipliers, $z^U_*$.
\item {\tt m}: (in), the number of constraints in the problem
(dimension of $g(x)$).
\item {\tt g}: (in), the final value of the constraint function
values, $g(x_*)$.
\item {\tt lambda}: (in), the final values of the constraint
multipliers, $\lambda_*$.
\item {\tt obj\_value}: (in), the final value of the objective,
$f(x_*)$.
\item {\tt ip\_data} and {\tt ip\_cq} are provided for expert users.
\end{itemize}
This method gives you the return status of the algorithm ({\tt
SolverReturn}) and the values of the variables, the objective, and the
constraint functions at the point where the algorithm exited.
In our example, we will print the values of some of the variables to
the screen.
\begin{footnotesize}
\begin{verbatim}
void HS071_NLP::finalize_solution(SolverReturn status,
Index n, const Number* x, const Number* z_L,
const Number* z_U, Index m, const Number* g,
const Number* lambda, Number obj_value,
const IpoptData* ip_data, IpoptCalculatedQuantities* ip_cq)
{
// here is where we would store the solution to variables, or write to a file, etc
// so we could use the solution.
// For this example, we write the solution to the console
printf("\n\nSolution of the primal variables, x\n");
for (Index i=0; i<n; i++) {
printf("x[%d] = %e\n", i, x[i]);
}
printf("\n\nSolution of the bound multipliers, z_L and z_U\n");
for (Index i=0; i<n; i++) {
printf("z_L[%d] = %e\n", i, z_L[i]);
}
for (Index i=0; i<n; i++) {
printf("z_U[%d] = %e\n", i, z_U[i]);
}
printf("\n\nObjective value\n");
printf("f(x*) = %e\n", obj_value);
}
\end{verbatim}
\end{footnotesize}
This is all that is required for our {\tt HS071\_NLP} class and
the coding of the problem representation.
\subsubsection{Coding the Executable (\texttt{main})}
Now that we have a problem representation, the {\tt HS071\_NLP} class,
we need to code the main function that will call \Ipopt and ask \Ipopt
to find a solution.
Here, we must create an instance of our problem ({\tt HS071\_NLP}),
create an instance of the \Ipopt solver (\texttt{IpoptApplication}),
initialize it, and ask the solver to find a solution. We always use
the \texttt{SmartPtr} template class instead of raw C++ pointers when
creating and passing \Ipopt objects. To find out more information
about smart pointers and the {\tt SmartPtr} implementation used in
\Ipopt, see Appendix \ref{app.smart_ptr}.
Create the file {\tt MyExample.cpp} in the MyExample directory.
Include the header files {\tt hs071\_nlp.hpp} and {\tt IpIpoptApplication.hpp}, tell
the compiler to use the {\tt Ipopt} namespace, and implement the {\tt
main} function.
\begin{footnotesize}
\begin{verbatim}
#include "IpIpoptApplication.hpp"
#include "hs071_nlp.hpp"
using namespace Ipopt;
int main(int argc, char* argv[])
{
// Create a new instance of your nlp
// (use a SmartPtr, not raw)
SmartPtr<TNLP> mynlp = new HS071_NLP();
// Create a new instance of IpoptApplication
// (use a SmartPtr, not raw)
// We are using the factory, since this allows us to compile this
// example with an Ipopt Windows DLL
SmartPtr<IpoptApplication> app = IpoptApplicationFactory();
// Change some options
// Note: The following choices are only examples, they might not be
// suitable for your optimization problem.
app->Options()->SetNumericValue("tol", 1e-9);
app->Options()->SetStringValue("mu_strategy", "adaptive");
app->Options()->SetStringValue("output_file", "ipopt.out");
  // Initialize the IpoptApplication and process the options
ApplicationReturnStatus status;
status = app->Initialize();
if (status != Solve_Succeeded) {
printf("\n\n*** Error during initialization!\n");
return (int) status;
}
// Ask Ipopt to solve the problem
status = app->OptimizeTNLP(mynlp);
if (status == Solve_Succeeded) {
printf("\n\n*** The problem solved!\n");
}
else {
printf("\n\n*** The problem FAILED!\n");
}
// As the SmartPtrs go out of scope, the reference count
// will be decremented and the objects will automatically
// be deleted.
return (int) status;
}
\end{verbatim}
\end{footnotesize}
The first line of code in {\tt main} creates an instance of {\tt
HS071\_NLP}. We then create an instance of the \Ipopt solver, {\tt
IpoptApplication}. You could use \texttt{new} to create a new
application object, but if you want to make sure that your code would
also work with a Windows DLL, you need to use the factory, as done in
the example above. The call to {\tt app->Initialize(...)} will
initialize that object and process the options (particularly the output
related options). The call to {\tt app->OptimizeTNLP(...)} will
run \Ipopt and try to solve the problem. By default, \Ipopt will
write its progress to the console and return an {\tt
ApplicationReturnStatus} code.
\subsubsection{Compiling and Testing the Example}
Our next task is to compile and test the code. If you are familiar
with the compiler and linker used on your system, you can build the
code, telling the linker about the
\Ipopt library ({\tt libipopt.so} or {\tt libipopt.a}), as well as the other
necessary libraries, as listed
in the {\tt ipopt\_addlibs\_cpp.txt} file.
If you are using the autotools based build system, then a sample makefile
created by configure already exists. Copy {\tt
Ipopt/examples/hs071\_cpp/Makefile} into your {\tt MyExample}
directory. This makefile was created for the {\tt hs071\_cpp} code,
but it can be easily modified for your example problem. Edit the file,
making the following changes,
\begin{itemize}
\item change the {\tt EXE} variable \\
{\tt EXE = my\_example}
\item change the {\tt OBJS} variable \\
{\tt OBJS = HS071\_NLP.o MyExample.o}
\end{itemize}
and the problem should compile easily with, \\
{\tt \$ make} \\
Now run the executable,\\
{\tt \$ ./my\_example} \\
and you should see output resembling the following,
\begin{footnotesize}
\begin{verbatim}
******************************************************************************
This program contains Ipopt, a library for large-scale nonlinear optimization.
Ipopt is released as open source code under the Eclipse Public License (EPL).
For more information visit http://projects.coin-or.org/Ipopt
******************************************************************************
Number of nonzeros in equality constraint Jacobian...:        4
Number of nonzeros in inequality constraint Jacobian.:        4
Number of nonzeros in Lagrangian Hessian.............:       10

Total number of variables............................:        4
                     variables with only lower bounds:        0
                variables with lower and upper bounds:        4
                     variables with only upper bounds:        0
Total number of equality constraints.................:        1
Total number of inequality constraints...............:        1
        inequality constraints with only lower bounds:        1
   inequality constraints with lower and upper bounds:        0
        inequality constraints with only upper bounds:        0
iter    objective    inf_pr   inf_du lg(mu)  ||d||  lg(rg) alpha_du alpha_pr  ls
   0  1.6109693e+01 1.12e+01 5.28e-01   0.0 0.00e+00    -  0.00e+00 0.00e+00   0
   1  1.7410406e+01 8.38e-01 2.25e+01  -0.3 7.97e-01    -  3.19e-01 1.00e+00f  1
   2  1.8001613e+01 1.06e-02 4.96e+00  -0.3 5.60e-02   2.0 9.97e-01 1.00e+00h  1
   3  1.7199482e+01 9.04e-02 4.24e-01  -1.0 9.91e-01    -  9.98e-01 1.00e+00f  1
   4  1.6940955e+01 2.09e-01 4.58e-02  -1.4 2.88e-01    -  9.66e-01 1.00e+00h  1
   5  1.7003411e+01 2.29e-02 8.42e-03  -2.9 7.03e-02    -  9.68e-01 1.00e+00h  1
   6  1.7013974e+01 2.59e-04 8.65e-05  -4.5 6.22e-03    -  1.00e+00 1.00e+00h  1
   7  1.7014017e+01 2.26e-07 5.71e-08  -8.0 1.43e-04    -  1.00e-00 1.00e+00h  1
   8  1.7014017e+01 4.62e-14 9.09e-14  -8.0 6.95e-08    -  1.00e+00 1.00e+00h  1
Number of Iterations....: 8
Number of objective function evaluations = 9
Number of objective gradient evaluations = 9
Number of equality constraint evaluations = 9
Number of inequality constraint evaluations = 9
Number of equality constraint Jacobian evaluations = 9
Number of inequality constraint Jacobian evaluations = 9
Number of Lagrangian Hessian evaluations = 8
Total CPU secs in IPOPT (w/o function evaluations) = 0.220
Total CPU secs in NLP function evaluations = 0.000
EXIT: Optimal Solution Found.
Solution of the primal variables, x
x[0] = 1.000000e+00
x[1] = 4.743000e+00
x[2] = 3.821150e+00
x[3] = 1.379408e+00
Solution of the bound multipliers, z_L and z_U
z_L[0] = 1.087871e+00
z_L[1] = 2.428776e-09
z_L[2] = 3.222413e-09
z_L[3] = 2.396076e-08
z_U[0] = 2.272727e-09
z_U[1] = 3.537314e-08
z_U[2] = 7.711676e-09
z_U[3] = 2.510890e-09
Objective value
f(x*) = 1.701402e+01
*** The problem solved!
\end{verbatim}
\end{footnotesize}
This completes the basic C++ tutorial, but see Section
\ref{sec:output} which explains the standard console output of \Ipopt
and Section \ref{sec:options} for information about the use of options
to customize the behavior of \Ipopt.
The {\tt Ipopt/examples/ScalableProblems} directory contains other NLP
problems coded in C++.
\subsubsection{Additional methods in {\tt TNLP}}\label{sec:add_meth}
The following methods provide additional features that are
not used in the example. Default implementations for those
methods are provided, so that a user can safely ignore them, unless
she wants to make use of those features. Of these features, only the
intermediate callback is also available in the C and Fortran interfaces.
\paragraph{Method \texttt{intermediate\_callback}} with prototype
\begin{verbatim}
virtual bool intermediate_callback(AlgorithmMode mode,
                                   Index iter, Number obj_value,
                                   Number inf_pr, Number inf_du,
                                   Number mu, Number d_norm,
                                   Number regularization_size,
                                   Number alpha_du, Number alpha_pr,
                                   Index ls_trials,
                                   const IpoptData* ip_data,
                                   IpoptCalculatedQuantities* ip_cq)
\end{verbatim}
It is not required to implement (overload) this method. This method
is called once per iteration (during the convergence check), and can
be used to obtain information about the optimization status while
\Ipopt solves the problem, and also to request a premature
termination.
The information provided by the entities in the argument list
corresponds to what \Ipopt prints in the iteration summary (see also
Section~\ref{sec:output}). Further information can be obtained from
the {\tt ip\_data} and {\tt ip\_cq} objects (in the C++ interface and for experts only :)).
If you let this method return {\tt false}, \Ipopt will terminate
with the {\tt User\_Requested\_Stop} status. If you do not implement
this method (as we do in this example), the default implementation
always returns {\tt true}.
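As an illustration of the early-termination idea, the decision logic such a callback might implement can be sketched as a small self-contained function. The names {\tt keep\_iterating}, {\tt max\_iter}, and {\tt inf\_pr\_goal} are invented for this sketch and are not part of the \Ipopt API:

```cpp
#include <cassert>

// Hedged sketch (not the Ipopt API): the kind of decision logic an
// intermediate_callback override might use to request early termination.
// keep_iterating, max_iter, and inf_pr_goal are made-up names.
static bool keep_iterating(int iter, double inf_pr,
                           int max_iter = 50, double inf_pr_goal = 1e-6)
{
   // Returning false from intermediate_callback makes Ipopt stop
   // with the User_Requested_Stop status.
   if (iter >= max_iter)      return false;
   if (inf_pr <= inf_pr_goal) return false;
   return true;
}
```

In a real {\tt TNLP} subclass, this logic would sit inside the overloaded {\tt intermediate\_callback} method with the full argument list shown above.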
A frequently asked question is how to access the values of the primal and dual variables in this callback. The values are stored in the {\tt ip\_cq} object for the \emph{internal representation} of the problem.
To access the values in a form that corresponds to those used in the evaluation routines, the user has to request \Ipopt's {\tt TNLPAdapter} object to ``resort'' the data vectors and to fill in information about possibly filtered out fixed variables.
The {\tt TNLPAdapter} can be accessed as follows.
First, add the following includes to your {\tt TNLP} implementation:
\begin{verbatim}
#include "IpIpoptCalculatedQuantities.hpp"
#include "IpIpoptData.hpp"
#include "IpTNLPAdapter.hpp"
#include "IpOrigIpoptNLP.hpp"
\end{verbatim}
Next, add the following code to your implementation of the {\tt intermediate\_callback}:
\begin{verbatim}
Ipopt::TNLPAdapter* tnlp_adapter = NULL;
if( ip_cq != NULL )
{
  Ipopt::OrigIpoptNLP* orignlp;
  orignlp = dynamic_cast<OrigIpoptNLP*>(GetRawPtr(ip_cq->GetIpoptNLP()));
  if( orignlp != NULL )
    tnlp_adapter = dynamic_cast<TNLPAdapter*>(GetRawPtr(orignlp->nlp()));
}
\end{verbatim}
Note that retrieving the {\tt TNLPAdapter} will fail (i.e., {\tt orignlp} will be {\tt NULL}) if \Ipopt is currently in restoration mode.
If, however, {\tt tnlp\_adapter} is not {\tt NULL}, it can be used to obtain the primal variable values $x$ and the dual values for the constraints (\ref{eq:constraints}) and the variable bounds (\ref{eq:bounds}) as follows.
\begin{verbatim}
double* primals = new double[n];
double* dualeqs = new double[m];
double* duallbs = new double[n];
double* dualubs = new double[n];
tnlp_adapter->ResortX(*ip_data->curr()->x(), primals);
tnlp_adapter->ResortG(*ip_data->curr()->y_c(), *ip_data->curr()->y_d(), dualeqs);
tnlp_adapter->ResortBnds(*ip_data->curr()->z_L(), duallbs,
                         *ip_data->curr()->z_U(), dualubs);
\end{verbatim}
Additionally, information about scaled violation of constraint
\ref{eq:constraints} and violation of complementarity constraints can be
obtained via
\begin{verbatim}
tnlp_adapter->ResortG(*ip_cq->curr_c(), *ip_cq->curr_d_minus_s(), ...)
tnlp_adapter->ResortBnds(*ip_cq->curr_compl_x_L(), ...,
                         *ip_cq->curr_compl_x_U(), ...)
tnlp_adapter->ResortG(*ip_cq->curr_compl_s_L(), ...)
tnlp_adapter->ResortG(*ip_cq->curr_compl_s_U(), ...)
\end{verbatim}
\paragraph{Method \texttt{get\_scaling\_parameters}} with prototype
\begin{verbatim}
virtual bool get_scaling_parameters(Number& obj_scaling,
                                    bool& use_x_scaling, Index n,
                                    Number* x_scaling,
                                    bool& use_g_scaling, Index m,
                                    Number* g_scaling)
\end{verbatim}
This method is called if the {\tt nlp\_scaling\_method} is chosen as
{\tt user-scaling}. The user has to provide scaling factors for
the objective function as well as for the optimization variables
and/or constraints. The return value should be {\tt true}, unless an
error occurred, in which case the program is aborted.
The value returned in {\tt obj\_scaling} determines how \Ipopt
should internally scale the objective function. For example, if this
number is chosen to be 10, then \Ipopt solves internally an
optimization problem that has 10 times the value of the original
objective function provided by the {\tt TNLP}. In particular, if this
value is negative, then \Ipopt will maximize the objective function
instead of minimizing it.
The scaling factors for the variables can be returned in {\tt
x\_scaling}, which has the same length as {\tt x} in the other {\tt
TNLP} methods, and the factors are ordered like {\tt x}. You need
to set {\tt use\_x\_scaling} to {\tt true} if you want \Ipopt to scale
the variables. If it is {\tt false}, no internal scaling of the
variables is done. Similarly, the scaling factors for the constraints
can be returned in {\tt g\_scaling}, and this scaling is activated by
setting {\tt use\_g\_scaling} to {\tt true}.
As a guideline, we suggest scaling the optimization problem (either
directly in the original formulation, or by using scaling factors)
so that all sensitivities, i.e., all non-zero first partial
derivatives, are typically of the order $0.1$--$10$.
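As a hedged sketch of this guideline, one could derive candidate {\tt x\_scaling} factors from representative gradient magnitudes so that each scaled sensitivity is of order one. The helper name {\tt suggest\_x\_scaling} and the sample values are illustrative only and not part of \Ipopt:

```cpp
#include <cmath>
#include <vector>

// Illustrative helper (not part of Ipopt): pick scaling[i] = 1/|df/dx_i|
// so that each scaled first partial derivative has magnitude about 1.
// Variables with a (near-)zero sample derivative keep a factor of 1.
std::vector<double> suggest_x_scaling(const std::vector<double>& grad_sample)
{
   std::vector<double> scaling(grad_sample.size(), 1.0);
   for (std::size_t i = 0; i < grad_sample.size(); ++i) {
      double g = std::fabs(grad_sample[i]);
      if (g > 1e-12)
         scaling[i] = 1.0 / g;
   }
   return scaling;
}
```

The factors returned by such a helper would then be copied into the {\tt x\_scaling} array and {\tt use\_x\_scaling} set to {\tt true}.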
\paragraph{Method \texttt{get\_number\_of\_nonlinear\_variables}} with prototype
\begin{verbatim}
virtual Index get_number_of_nonlinear_variables()
\end{verbatim}
This method is only important if the limited-memory quasi-Newton
option is used, see Section~\ref{sec:quasiNewton}. It is used
to return the number of variables that appear nonlinearly in the
objective function or in at least one constraint function. If a
negative number is returned, \Ipopt assumes that all variables are
nonlinear.
If the user doesn't overload this method in her implementation of the
class derived from {\tt TNLP}, the default implementation returns -1,
i.e., all variables are assumed to be nonlinear.
\paragraph{Method \texttt{get\_list\_of\_nonlinear\_variables}} with prototype
\begin{verbatim}
virtual bool get_list_of_nonlinear_variables(Index num_nonlin_vars,
                                             Index* pos_nonlin_vars)
\end{verbatim}
This method is called by \Ipopt only if the limited-memory
quasi-Newton option is used and if the {\tt
get\_number\_of\_nonlinear\_variables} method returns a positive
number; this number is then identical with {\tt num\_nonlin\_vars} and
the length of the array {\tt pos\_nonlin\_vars}. In this call, you
need to list the indices of all nonlinear variables in {\tt
pos\_nonlin\_vars}, where the numbering starts with 0 or 1,
depending on the numbering style determined in {\tt get\_nlp\_info}.
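For instance, in a hypothetical problem where only variables 0 and 2 appear nonlinearly (assuming C-style, 0-based numbering), the array would be filled as sketched below; {\tt fill\_nonlinear\_vars} is a made-up name, not an \Ipopt function:

```cpp
#include <cassert>

// Sketch only: fill pos_nonlin_vars for a hypothetical problem in which
// variables 0 and 2 are the only nonlinear ones (0-based C-style indices).
// fill_nonlinear_vars is not part of the Ipopt API.
static int fill_nonlinear_vars(int* pos_nonlin_vars)
{
   pos_nonlin_vars[0] = 0;
   pos_nonlin_vars[1] = 2;
   return 2;  // what get_number_of_nonlinear_variables would report
}
```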
\paragraph{Method \texttt{get\_variables\_linearity}} with prototype
\begin{verbatim}
virtual bool get_variables_linearity(Index n,
                                     LinearityType* var_types)
\end{verbatim}
This method is never called by \Ipopt, but is used by \textsc{Bonmin} to get information about which variables occur only in linear terms.
\Ipopt passes a {\tt var\_types} array of size {\tt n}, which the user should fill with the appropriate linearity type of the variables ({\tt TNLP::LINEAR} or {\tt TNLP::NON\_LINEAR}).
If the user doesn't overload this method in her implementation of the class derived from {\tt TNLP}, the default implementation returns {\tt false}.
\paragraph{Method \texttt{get\_constraints\_linearity}} with prototype
\begin{verbatim}
virtual bool get_constraints_linearity(Index m,
                                       LinearityType* const_types)
\end{verbatim}
This method is never called by \Ipopt, but is used by \textsc{Bonmin} to get information about which constraints are linear.
\Ipopt passes a {\tt const\_types} array of size {\tt m}, which the user should fill with the appropriate linearity type of the constraints ({\tt TNLP::LINEAR} or {\tt TNLP::NON\_LINEAR}).
If the user doesn't overload this method in her implementation of the class derived from {\tt TNLP}, the default implementation returns {\tt false}.
\paragraph{Method \texttt{get\_var\_con\_metadata}} with prototype
\begin{verbatim}
virtual bool get_var_con_metadata(Index n,
                                  StringMetaDataMapType& var_string_md,
                                  IntegerMetaDataMapType& var_integer_md,
                                  NumericMetaDataMapType& var_numeric_md,
                                  Index m,
                                  StringMetaDataMapType& con_string_md,
                                  IntegerMetaDataMapType& con_integer_md,
                                  NumericMetaDataMapType& con_numeric_md)
\end{verbatim}
This method is used to pass meta data about variables or constraints to
\Ipopt. The data can be either of integer, numeric, or string type.
\Ipopt passes this data on to its internal problem representation.
The meta data type is a {\tt std::map} with {\tt std::string} as key type and
a {\tt std::vector} as value type.
So far, \Ipopt itself makes use only of string meta data under the key {\tt
idx\_names}. With this key, variable and constraint names can be passed to
\Ipopt, which are shown when printing internal vector or matrix data structures
if \Ipopt is run with a high value for the \htmlref{\tt print\_level}{opt:print_level}
option. This allows a user to identify the original variables and constraints
corresponding to \Ipopt's internal problem representation.
If the user doesn't overload this method in her implementation of the class
derived from {\tt TNLP}, the default implementation does not set any meta data
and returns {\tt false}.
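A minimal sketch of filling the {\tt idx\_names} string meta data follows. The typedef mirrors the map-of-vectors shape described above (string key, vector value); the helper name {\tt make\_var\_names} is invented for this illustration:

```cpp
#include <map>
#include <string>
#include <vector>

// Sketch: a map with std::string keys and std::vector values, like the
// one used for Ipopt's string meta data.  Under the key "idx_names" one
// stores one name per variable.  make_var_names is not an Ipopt function.
typedef std::map<std::string, std::vector<std::string> > StringMetaDataMapLike;

StringMetaDataMapLike make_var_names()
{
   StringMetaDataMapLike var_string_md;
   std::vector<std::string> names;
   names.push_back("x1");
   names.push_back("x2");
   names.push_back("x3");
   names.push_back("x4");
   var_string_md["idx_names"] = names;
   return var_string_md;
}
```

In an actual {\tt get\_var\_con\_metadata} implementation, such entries would be stored into the {\tt var\_string\_md} argument and the method would return {\tt true}.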
\paragraph{Method \texttt{finalize\_metadata}} with prototype
\begin{verbatim}
virtual void finalize_metadata(Index n,
                               const StringMetaDataMapType& var_string_md,
                               const IntegerMetaDataMapType& var_integer_md,
                               const NumericMetaDataMapType& var_numeric_md,
                               Index m,
                               const StringMetaDataMapType& con_string_md,
                               const IntegerMetaDataMapType& con_integer_md,
                               const NumericMetaDataMapType& con_numeric_md)
\end{verbatim}
This method is called just before {\tt finalize\_solution} and is used to
return any meta data collected during the algorithm's run, including the meta
data provided by the user with the {\tt get\_var\_con\_metadata} method.
If the user doesn't overload this method in her implementation of the class
derived from {\tt TNLP}, the default implementation does nothing.
\paragraph{Method \texttt{get\_warm\_start\_iterate}} with prototype
\begin{verbatim}
virtual bool get_warm_start_iterate(IteratesVector& warm_start_iterate)
\end{verbatim}
Overload this method to provide an \Ipopt iterate which is already in the form
\Ipopt requires internally for a warm start.
This method is only for expert users.
If the user doesn't overload this method in her implementation of the class
derived from {\tt TNLP}, the default implementation does not provide a warm
start iterate and returns {\tt false}.
\subsection{The C Interface} \label{sec.cinterface}
The C interface for \Ipopt is declared in the header file {\tt
IpStdCInterface.h}, which is found in\\
\texttt{\$IPOPTDIR/include/coin} (or in
\texttt{\$PREFIX/include/coin} if the switch
\verb|--prefix=$PREFIX| was used for {\tt configure}); while %$
reading this section, it will be helpful to have a look at this file.
In order to solve an optimization problem with the C interface, one
has to create an {\tt IpoptProblem}\footnote{{\tt IpoptProblem} is a
pointer to a C structure; you should not access this structure
directly, only through the functions provided in the C interface.}
with the function {\tt CreateIpoptProblem}, which later has to be
passed to the {\tt IpoptSolve} function.
The {\tt IpoptProblem} created by {\tt CreateIpoptProblem} contains
the problem dimensions, the variable and constraint bounds, and the
function pointers for callbacks that will be used to evaluate the NLP
problem functions and their derivatives (see also the discussion of
the C++ methods {\tt get\_nlp\_info} and {\tt get\_bounds\_info} in
Section~\ref{sec.cpp_problem} for information about the arguments of
{\tt CreateIpoptProblem}).
The prototypes for the callback functions, {\tt Eval\_F\_CB}, {\tt
Eval\_Grad\_F\_CB}, etc., are defined in the header file {\tt
IpStdCInterface.h}. Their arguments correspond one-to-one to the
arguments for the C++ methods discussed in
Section~\ref{sec.cpp_problem}; for example, for the meaning of $\tt
n$, $\tt x$, $\tt new\_x$, $\tt obj\_value$ in the declaration of {\tt
Eval\_F\_CB} see the discussion of ``{\tt eval\_f}''. The callback
functions should return {\tt TRUE}, unless there was a problem doing
the requested function/derivative evaluation at the given point {\tt
x} (then it should return {\tt FALSE}).
Note the additional argument of type {\tt UserDataPtr} in the callback
functions. This pointer argument is available for you to communicate
information between the main program that calls {\tt IpoptSolve} and
any of the callback functions. This pointer is simply passed
unmodified by \Ipopt among those functions. For example, you can
use this to pass constants that define the optimization problem and
are computed before the optimization in the main C program to the
callback functions.
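To illustrate this mechanism, the following sketch shows a callback recovering problem constants from the user-data pointer. The names {\tt MyData} and {\tt my\_eval\_f} are invented here; only the {\tt void*} pass-through idea mirrors the C interface:

```cpp
#include <cassert>

// Illustrative sketch of the UserDataPtr idea: the main program packs
// problem constants into a struct and passes its address as void*; the
// callback casts it back.  MyData and my_eval_f are made-up names.
struct MyData
{
   double coeff;  // a constant defining the objective
};

static bool my_eval_f(int n, const double* x, double* obj_value,
                      void* user_data)
{
   const MyData* data = static_cast<const MyData*>(user_data);
   *obj_value = data->coeff * x[0];  // use the constant from user_data
   return true;   // FALSE would signal an evaluation error to the solver
}
```

In an actual program using the C interface, the struct's address would be supplied as the last argument of {\tt IpoptSolve} and would then arrive unmodified in every callback.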
After an {\tt IpoptProblem} has been created, you can set algorithmic
options for \Ipopt (see Section~\ref{sec:options}) using the {\tt
AddIpopt...Option} functions. Finally, the \Ipopt algorithm is
called with {\tt IpoptSolve}, giving \Ipopt the {\tt IpoptProblem},
the starting point, and arrays to store the solution values (primal
and dual variables), if desired. Finally, after everything is done,
you should call {\tt FreeIpoptProblem} to release internal memory that
is still allocated inside \Ipopt.
In the remainder of this section we discuss how the example problem
(\ref{eq:ex_obj})--(\ref{eq:ex_bounds}) can be solved using the C
interface. A completed version of this example can be found in {\tt
Ipopt/examples/hs071\_c}.
\vspace{\baselineskip}
In order to implement the example problem on your own, create a new
directory {\tt MyCExample} and create a new file, {\tt
hs071\_c.c}. Here, include the interface header file {\tt
IpStdCInterface.h}, along with other necessary header files, such as
{\tt stdlib.h} and {\tt assert.h}. Add the prototypes and
implementations for the five callback functions. Have a look at the
C++ implementation for {\tt eval\_f}, {\tt eval\_g}, {\tt
eval\_grad\_f}, {\tt eval\_jac\_g}, and {\tt eval\_h} in Section
\ref{sec.cpp_problem}. The C implementations have somewhat different
prototypes, but are implemented almost identically to the C++ code.
See the completed example in {\tt Ipopt/examples/hs071\_c/hs071\_c.c} if you
are not sure how to do this.
We now need to implement the {\tt main} function, create the {\tt
IpoptProblem}, set options, and call {\tt IpoptSolve}. The {\tt
CreateIpoptProblem} function requires the problem dimensions, the
variable and constraint bounds, and the function pointers to the
callback routines. The {\tt IpoptSolve} function requires the {\tt
IpoptProblem}, the starting point, and allocated arrays for the
solution. The {\tt main} function from the example is shown next and
discussed below.
\begin{verbatim}
int main()
{
  Index n=-1;                          /* number of variables */
  Index m=-1;                          /* number of constraints */
  Number* x_L = NULL;                  /* lower bounds on x */
  Number* x_U = NULL;                  /* upper bounds on x */
  Number* g_L = NULL;                  /* lower bounds on g */
  Number* g_U = NULL;                  /* upper bounds on g */
  IpoptProblem nlp = NULL;             /* IpoptProblem */
  enum ApplicationReturnStatus status; /* Solve return code */
  Number* x = NULL;                    /* starting point and solution vector */
  Number* mult_x_L = NULL;             /* lower bound multipliers at the solution */
  Number* mult_x_U = NULL;             /* upper bound multipliers at the solution */
  Number obj;                          /* objective value */
  Index i;                             /* generic counter */

  /* set the number of variables and allocate space for the bounds */
  n=4;
  x_L = (Number*)malloc(sizeof(Number)*n);
  x_U = (Number*)malloc(sizeof(Number)*n);
  /* set the values for the variable bounds */
  for (i=0; i<n; i++) {
    x_L[i] = 1.0;
    x_U[i] = 5.0;
  }

  /* set the number of constraints and allocate space for the bounds */
  m=2;
  g_L = (Number*)malloc(sizeof(Number)*m);
  g_U = (Number*)malloc(sizeof(Number)*m);
  /* set the values of the constraint bounds */
  g_L[0] = 25; g_U[0] = 2e19;
  g_L[1] = 40; g_U[1] = 40;

  /* create the IpoptProblem */
  nlp = CreateIpoptProblem(n, x_L, x_U, m, g_L, g_U, 8, 10, 0,
                           &eval_f, &eval_g, &eval_grad_f,
                           &eval_jac_g, &eval_h);

  /* We can free the memory now - the values for the bounds have been
     copied internally in CreateIpoptProblem */
  free(x_L);
  free(x_U);
  free(g_L);
  free(g_U);

  /* set some options */
  AddIpoptNumOption(nlp, "tol", 1e-9);
  AddIpoptStrOption(nlp, "mu_strategy", "adaptive");

  /* allocate space for the initial point and set the values */
  x = (Number*)malloc(sizeof(Number)*n);
  x[0] = 1.0;
  x[1] = 5.0;
  x[2] = 5.0;
  x[3] = 1.0;

  /* allocate space to store the bound multipliers at the solution */
  mult_x_L = (Number*)malloc(sizeof(Number)*n);
  mult_x_U = (Number*)malloc(sizeof(Number)*n);

  /* solve the problem */
  status = IpoptSolve(nlp, x, NULL, &obj, NULL, mult_x_L, mult_x_U, NULL);

  if (status == Solve_Succeeded) {
    printf("\n\nSolution of the primal variables, x\n");
    for (i=0; i<n; i++)
      printf("x[%d] = %e\n", i, x[i]);

    printf("\n\nSolution of the bound multipliers, z_L and z_U\n");
    for (i=0; i<n; i++)
      printf("z_L[%d] = %e\n", i, mult_x_L[i]);
    for (i=0; i<n; i++)
      printf("z_U[%d] = %e\n", i, mult_x_U[i]);

    printf("\n\nObjective value\nf(x*) = %e\n", obj);
  }

  /* free allocated memory */
  FreeIpoptProblem(nlp);
  free(x);
  free(mult_x_L);
  free(mult_x_U);

  return 0;
}
\end{verbatim}
Here, we declare all the necessary variables and set the dimensions of
the problem. The problem has 4 variables, so we set {\tt n} and
allocate space for the variable bounds (don't forget to call {\tt
free} for each of your {\tt malloc} calls before the end of the
program). We then set the values for the variable bounds.
The problem has 2 constraints, so we set {\tt m} and allocate space
for the constraint bounds. The first constraint has a lower bound of
$25$ and no upper bound. Here we set the upper bound to
\texttt{2e19}. \Ipopt interprets any number greater than or equal to
\htmlref{\tt nlp\_upper\_bound\_inf}{opt:nlp_upper_bound_inf} as infinity.
The default value of \htmlref{\tt nlp\_lower\_bound\_inf}{opt:nlp_lower_bound_inf}
and \htmlref{\tt nlp\_upper\_bound\_inf}{opt:nlp_upper_bound_inf} is
\texttt{-1e19} and \texttt{1e19}, respectively, and can be changed
through \Ipopt options. The second constraint is an equality with
right hand side 40, so we set both the upper and the lower bound to
40.
We next create an instance of the {\tt IpoptProblem} by calling {\tt
CreateIpoptProblem}, giving it the problem dimensions and the variable
and constraint bounds. The arguments {\tt nele\_jac} and {\tt
nele\_hess} are the number of elements in Jacobian and the Hessian,
respectively. See Appendix~\ref{app.triplet} for a description of the
sparse matrix format. The {\tt index\_style} argument specifies whether
we want to use C style indexing for the row and column indices of the
matrices or Fortran style indexing. Here, we set it to {\tt 0} to
indicate C style. We also include the references to each of our
callback functions. \Ipopt uses these function pointers to ask for
evaluation of the NLP when required.
After freeing the bound arrays that are no longer required, the next
two lines illustrate how you can change the value of options through
the interface. \Ipopt options can also be changed by creating a {\tt
ipopt.opt} file (see Section~\ref{sec:options}). We next allocate
space for the initial point and set the values as given in the problem
definition.
The call to {\tt IpoptSolve} can provide us with information about the
solution, but most of this is optional. Here, we want values for the
bound multipliers at the solution and we allocate space for these.
We can now make the call to {\tt IpoptSolve} and find the solution of
the problem. We pass in the {\tt IpoptProblem}, the starting point
{\tt x} (\Ipopt will use this array to return the solution or final
point as well). The next 5 arguments are pointers so \Ipopt can fill
in values at the solution. If these pointers are set to {\tt NULL},
\Ipopt will ignore that entry. For example, here, we do not want the
constraint function values at the solution or the constraint
multipliers, so we set those entries to {\tt NULL}. We do want the
value of the objective, and the multipliers for the variable bounds.
The last argument is a {\tt void*} for user data. Any pointer you give
here will also be passed to you in the callback functions.
The return code is an {\tt ApplicationReturnStatus} enumeration; see
the header file {\tt ReturnCodes\_inc.h}, which is installed alongside {\tt
IpStdCInterface.h} in the \Ipopt include directory.
After the optimizer terminates, we check the status and print the
solution if successful. Finally, we free the {\tt IpoptProblem} and
the remaining memory and return from {\tt main}.
\subsection{The Fortran Interface}
The Fortran interface is essentially a wrapper of the C interface
discussed in Section~\ref{sec.cinterface}. The way to hook up \Ipopt
in a Fortran program is very similar to how it is done for the C
interface, and the functions of the Fortran interface correspond
one-to-one to those of the C and C++ interfaces, including their
arguments. You can find an implementation of the example problem
(\ref{eq:ex_obj})--(\ref{eq:ex_bounds}) in {\tt
\$IPOPTDIR/Ipopt/examples/hs071\_f}.
The only special things to consider are:
\begin{itemize}
\item The return value of the function {\tt IPCREATE} is of an {\tt
INTEGER} type that must be large enough to capture a pointer
on the particular machine. This means that you have to declare
the ``handle'' for the IpoptProblem as {\tt INTEGER*8} if your
program is compiled in 64-bit mode. All other {\tt INTEGER}-type
variables must be of the regular type.
\item For the call of {\tt IPSOLVE} (which is the function that is to
be called to run \Ipopt), all arrays, including those for the dual
variables, must be given (in contrast to the C interface). The
return value {\tt IERR} of this function indicates the outcome of
the optimization (see the include file {\tt IpReturnCodes.inc} in
the \Ipopt include directory).
\item The return {\tt IERR} value of the remaining functions has to be
set to zero, unless there was a problem during execution of the
function call.
\item The callback functions ({\tt EV\_*} in the example) include the
arguments {\tt IDAT} and {\tt DAT}, which are {\tt INTEGER} and {\tt
DOUBLE PRECISION} arrays that are passed unmodified between the
main program calling {\tt IPSOLVE} and the evaluation subroutines
{\tt EV\_*} (similarly to {\tt UserDataPtr} arguments in the C
interface). These arrays can be used to pass ``private'' data
between the main program and the user-provided Fortran subroutines.
The last argument of the {\tt EV\_*} subroutines, {\tt IERR}, is to
be set to 0 by the user on return, unless there was a problem
during the evaluation of the optimization problem
function/derivative for the given point {\tt X} (then it should
return a non-zero value).
\end{itemize}
\subsection{The Java Interface}
\hfill \textit{based on documentation by Rafael de Pelegrini Soares}%
\footnote{VRTech Industrial Technologies}
\medskip
The Java interface offers an abstract base class {\tt Ipopt} with basic
methods to specify an NLP, set a number of \Ipopt options, to request \Ipopt
to solve the NLP, and to retrieve a found solution, if any.
HTML documentation of all available interface methods of the {\tt Ipopt}
class can be generated via {\tt javadoc} by executing {\tt make doc} in the
\JIpopt build directory.
In the following, we discuss necessary steps to implement the HS071 example
with \JIpopt.
First, we create a new directory and therein the subdirectory {\tt
org/coinor/}. Into {\tt org/coinor/} we copy the file {\tt Ipopt.java}, which
contains the Java code of the interface, from the corresponding
\JIpopt source directory
(\verb|$IPOPTDIR/Ipopt/contrib/JavaInterface/org/coinor|). %$
Further, we create a directory {\tt lib} next to the {\tt org} directory and
place the previously built \JIpopt library into it ({\tt libjipopt.so} on
Linux/UNIX, {\tt libjipopt.dylib} on Mac OS X, {\tt jipopt.dll} on Windows),
see also Section \ref{sec.jipopt.build}.
Next, we create a new Java source file {\tt HS071.java} and define a class
{\tt HS071} that extends the class {\tt Ipopt} of \JIpopt.
In the class constructor, we call the {\tt create()} method of \JIpopt, which
works analogously to {\tt get\_nlp\_info()} of the C++ interface.
It initializes an {\tt IpoptApplication} object and informs \JIpopt about the
problem size (number of variables, constraints, nonzeros in Jacobian and
Hessian).
\begin{verbatim}
/** Initialize the bounds and create the native Ipopt problem. */
public HS071() {
   /* Number of nonzeros in the Jacobian of the constraints */
   int nele_jac = 8;
   /* Number of nonzeros in the Hessian of the Lagrangian (lower or
    * upper triangular part only) */
   int nele_hess = 10;
   /* Number of variables */
   int n = 4;
   /* Number of constraints */
   int m = 2;
   /* Index style for the irow/jcol elements */
   int index_style = Ipopt.C_STYLE;

   /* create the IpoptProblem */
   create(n, m, nele_jac, nele_hess, index_style);
}
\end{verbatim}
\noindent Next, we add callback functions that are called by \JIpopt to obtain
variable bounds, constraint sides, and a starting point:
\begin{verbatim}
protected boolean get_bounds_info(int n, double[] x_L, double[] x_U,
                                  int m, double[] g_L, double[] g_U) {
   /* set the values of the variable bounds */
   for( int i = 0; i < x_L.length; i++ ) {
      x_L[i] = 1.0;
      x_U[i] = 5.0;
   }

   /* set the values of the constraint bounds */
   g_L[0] = 25.0;
   g_U[0] = 2e19;
   g_L[1] = 40.0;
   g_U[1] = 40.0;

   return true;
}

protected boolean get_starting_point(int n, boolean init_x, double[] x,
                                     boolean init_z, double[] z_L, double[] z_U,
                                     int m, boolean init_lambda, double[] lambda) {
   assert init_z == false;
   assert init_lambda == false;

   if( init_x ) {
      x[0] = 1.0;
      x[1] = 5.0;
      x[2] = 5.0;
      x[3] = 1.0;
   }

   return true;
}
\end{verbatim}
\noindent In the following, we implement the evaluation methods in a way that
is very similar to the C++ interface:
\begin{verbatim}
protected boolean eval_f(int n, double[] x, boolean new_x, double[] obj_value) {
   obj_value[0] = x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2];
   return true;
}

protected boolean eval_grad_f(int n, double[] x, boolean new_x, double[] grad_f) {
   grad_f[0] = x[0] * x[3] + x[3] * (x[0] + x[1] + x[2]);
   grad_f[1] = x[0] * x[3];
   grad_f[2] = x[0] * x[3] + 1;
   grad_f[3] = x[0] * (x[0] + x[1] + x[2]);
   return true;
}

protected boolean eval_g(int n, double[] x, boolean new_x, int m, double[] g) {
   g[0] = x[0] * x[1] * x[2] * x[3];
   g[1] = x[0] * x[0] + x[1] * x[1] + x[2] * x[2] + x[3] * x[3];
   return true;
}
protected boolean eval_jac_g(int n, double[] x, boolean new_x, int m, int nele_jac,
                             int[] iRow, int[] jCol, double[] values) {
   if( values == null ) {
      /* return the structure of the jacobian */
      /* this particular jacobian is dense */
      iRow[0] = 0; jCol[0] = 0;
      iRow[1] = 0; jCol[1] = 1;
      iRow[2] = 0; jCol[2] = 2;
      iRow[3] = 0; jCol[3] = 3;
      iRow[4] = 1; jCol[4] = 0;
      iRow[5] = 1; jCol[5] = 1;
      iRow[6] = 1; jCol[6] = 2;
      iRow[7] = 1; jCol[7] = 3;
   }
   else {
      /* return the values of the jacobian of the constraints */
      values[0] = x[1]*x[2]*x[3]; /* 0,0 */
      values[1] = x[0]*x[2]*x[3]; /* 0,1 */
      values[2] = x[0]*x[1]*x[3]; /* 0,2 */
      values[3] = x[0]*x[1]*x[2]; /* 0,3 */
      values[4] = 2*x[0];         /* 1,0 */
      values[5] = 2*x[1];         /* 1,1 */
      values[6] = 2*x[2];         /* 1,2 */
      values[7] = 2*x[3];         /* 1,3 */
   }
   return true;
}
protected boolean eval_h(int n, double[] x, boolean new_x, double obj_factor,
                         int m, double[] lambda, boolean new_lambda,
                         int nele_hess, int[] iRow, int[] jCol, double[] values) {
   int idx = 0; /* nonzero element counter */
   int row = 0; /* row counter for loop */
   int col = 0; /* col counter for loop */

   if( values == null ) {
      /* return the structure. This is a symmetric matrix, fill the lower left
       * triangle only. */
      /* the hessian for this problem is actually dense */
      idx = 0;
      for( row = 0; row < 4; row++ ) {
         for( col = 0; col <= row; col++ ) {
            iRow[idx] = row;
            jCol[idx] = col;
            idx++;
         }
      }
   }
   else {
      /* return the values. This is a symmetric matrix, fill the lower left
       * triangle only */

      /* fill the objective portion */
      values[0] = obj_factor * (2*x[3]);               /* 0,0 */
      values[1] = obj_factor * (x[3]);                 /* 1,0 */
      values[2] = 0;                                   /* 1,1 */
      values[3] = obj_factor * (x[3]);                 /* 2,0 */
      values[4] = 0;                                   /* 2,1 */
      values[5] = 0;                                   /* 2,2 */
      values[6] = obj_factor * (2*x[0] + x[1] + x[2]); /* 3,0 */
      values[7] = obj_factor * (x[0]);                 /* 3,1 */
      values[8] = obj_factor * (x[0]);                 /* 3,2 */
      values[9] = 0;                                   /* 3,3 */

      /* add the portion for the first constraint */
      values[1] += lambda[0] * (x[2] * x[3]); /* 1,0 */
      values[3] += lambda[0] * (x[1] * x[3]); /* 2,0 */
      values[4] += lambda[0] * (x[0] * x[3]); /* 2,1 */
      values[6] += lambda[0] * (x[1] * x[2]); /* 3,0 */
      values[7] += lambda[0] * (x[0] * x[2]); /* 3,1 */
      values[8] += lambda[0] * (x[0] * x[1]); /* 3,2 */

      /* add the portion for the second constraint */
      values[0] += lambda[1] * 2; /* 0,0 */
      values[2] += lambda[1] * 2; /* 1,1 */
      values[5] += lambda[1] * 2; /* 2,2 */
      values[9] += lambda[1] * 2; /* 3,3 */
   }
   return true;
}
\end{verbatim}
\noindent Finally, we add a main routine to run this example. The main
routine creates an instance of our class and calls the solve method
{\tt OptimizeNLP}:
\begin{verbatim}
public static void main(String[] args) {
   // Create the problem
   HS071 hs071 = new HS071();

   // solve the problem
   int status = hs071.OptimizeNLP();

   // print the status and the optimal objective value
   System.out.println("Status = " + status);
   System.out.println("Obj Value = " + hs071.getObjectiveValue());
}
\end{verbatim}
The {\tt OptimizeNLP} method returns the \Ipopt solve status as an integer,
which indicates whether the problem was solved successfully.
Further, the methods {\tt getObjectiveValue()}, {\tt getVariableValues()},
{\tt getConstraintMultipliers()}, {\tt getLowerBoundMultipliers()}, and
{\tt getUpperBoundMultipliers()} can be used to obtain the objective value,
the primal solution values of the variables, and the dual solution values,
respectively.
\subsection{The R Interface}
\hfill \textit{based on documentation by Jelmer Ypma}%
\footnote{University College London}
\medskip
The \ipoptr package (see Section \ref{sec.ipoptr.build} for installation
instructions) offers an R function {\tt ipoptr} which takes an NLP
specification, a starting point, and \Ipopt options as input and returns
information about an \Ipopt run (status, message, ...) and a solution point.
In the following, we discuss necessary steps to implement the HS071 example
with \ipoptr.
A more detailed documentation of \ipoptr is available in
{\tt Ipopt/contrib/RInterface/inst/doc/ipoptr.pdf}.
First, we define the objective function and its gradient
\begin{verbatim}
> eval_f <- function( x ) {
    return( x[1]*x[4]*(x[1] + x[2] + x[3]) + x[3] )
  }

> eval_grad_f <- function( x ) {
    return( c( x[1] * x[4] + x[4] * (x[1] + x[2] + x[3]),
               x[1] * x[4],
               x[1] * x[4] + 1.0,
               x[1] * (x[1] + x[2] + x[3]) ) )
  }
\end{verbatim}
Then we define a function that returns the values of the two constraints. The
bounds of the constraints (in this case $g_L = (25, 40)$ and
$g_U = (\infty, 40)$) are defined later.
\begin{verbatim}
> # constraint functions
> eval_g <- function( x ) {
    return( c( x[1] * x[2] * x[3] * x[4],
               x[1]^2 + x[2]^2 + x[3]^2 + x[4]^2 ) )
  }
\end{verbatim}
Then we define the structure of the Jacobian, which is a dense matrix in this
case, and a function to evaluate it:
\begin{verbatim}
> eval_jac_g_structure <- list( c(1,2,3,4), c(1,2,3,4) )

> eval_jac_g <- function( x ) {
    return( c( x[2]*x[3]*x[4],
               x[1]*x[3]*x[4],
               x[1]*x[2]*x[4],
               x[1]*x[2]*x[3],
               2.0*x[1],
               2.0*x[2],
               2.0*x[3],
               2.0*x[4] ) )
  }
\end{verbatim}
The Hessian is also dense, but it looks slightly more complicated because we
have to take into account the Hessian of the objective function and of the
constraints at the same time, although you could write a function to calculate
them both separately and then return the combined result in \texttt{eval\_h}.
\begin{verbatim}
> # The Hessian for this problem is actually dense.
> # This is a symmetric matrix, fill the lower left triangle only.
> eval_h_structure <- list( c(1), c(1,2), c(1,2,3), c(1,2,3,4) )

> eval_h <- function( x, obj_factor, hessian_lambda ) {
    values <- numeric(10)
    values[1] = obj_factor * (2*x[4]) # 1,1
    values[2] = obj_factor * (x[4])   # 2,1
    values[3] = 0                     # 2,2
    values[4] = obj_factor * (x[4])   # 3,1
    values[5] = 0                     # 3,2
    values[6] = 0                     # 3,3
    values[7] = obj_factor * (2*x[1] + x[2] + x[3]) # 4,1
    values[8] = obj_factor * (x[1])   # 4,2
    values[9] = obj_factor * (x[1])   # 4,3
    values[10] = 0                    # 4,4

    # add the portion for the first constraint
    values[2] = values[2] + hessian_lambda[1] * (x[3] * x[4]) # 2,1
    values[4] = values[4] + hessian_lambda[1] * (x[2] * x[4]) # 3,1
    values[5] = values[5] + hessian_lambda[1] * (x[1] * x[4]) # 3,2
    values[7] = values[7] + hessian_lambda[1] * (x[2] * x[3]) # 4,1
    values[8] = values[8] + hessian_lambda[1] * (x[1] * x[3]) # 4,2
    values[9] = values[9] + hessian_lambda[1] * (x[1] * x[2]) # 4,3

    # add the portion for the second constraint
    values[1] = values[1] + hessian_lambda[2] * 2   # 1,1
    values[3] = values[3] + hessian_lambda[2] * 2   # 2,2
    values[6] = values[6] + hessian_lambda[2] * 2   # 3,3
    values[10] = values[10] + hessian_lambda[2] * 2 # 4,4

    return ( values )
  }
\end{verbatim}
After the hard part is done, we only have to define the initial values, the
lower and upper bounds of the control variables, and the lower and upper
bounds of the constraints. If a variable or a constraint does not have lower
or upper bounds, the values \texttt{-Inf} or \texttt{Inf} can be used. If the
upper and lower bounds of a constraint are equal, Ipopt recognizes this as an
equality constraint and acts accordingly.
\begin{verbatim}
> # initial values
> x0 <- c( 1, 5, 5, 1 )
> # lower and upper bounds of control
> lb <- c( 1, 1, 1, 1 )
> ub <- c( 5, 5, 5, 5 )
> # lower and upper bounds of constraints
> constraint_lb <- c( 25, 40 )
> constraint_ub <- c( Inf, 40 )
\end{verbatim}
Finally, we can call \Ipopt with the {\tt ipoptr} function.
In order to redirect the \Ipopt output into a file, we use \Ipopt's
\htmlref{\tt output\_file}{opt:output_file} and
\htmlref{\tt print\_level}{opt:print_level} options.
\begin{verbatim}
> opts <- list("print_level" = 0,
               "file_print_level" = 12,
               "output_file" = "hs071_nlp.out")

> print( ipoptr( x0 = x0,
                 eval_f = eval_f,
                 eval_grad_f = eval_grad_f,
                 lb = lb,
                 ub = ub,
                 eval_g = eval_g,
                 eval_jac_g = eval_jac_g,
                 constraint_lb = constraint_lb,
                 constraint_ub = constraint_ub,
                 eval_jac_g_structure = eval_jac_g_structure,
                 eval_h = eval_h,
                 eval_h_structure = eval_h_structure,
                 opts = opts) )
Call:
ipoptr(x0 = x0, eval_f = eval_f, eval_grad_f = eval_grad_f, lb = lb,
ub = ub, eval_g = eval_g, eval_jac_g = eval_jac_g,
eval_jac_g_structure = eval_jac_g_structure, constraint_lb = constraint_lb,
constraint_ub = constraint_ub, eval_h = eval_h, eval_h_structure = eval_h_structure,
opts = opts)
Ipopt solver status: 0 ( SUCCESS: Algorithm terminated
successfully at a locally optimal point, satisfying the
convergence tolerances (can be specified by options). )
Number of Iterations....: 8
Optimal value of objective function: 17.0140171451792
Optimal value of controls: 1 4.743 3.82115 1.379408
\end{verbatim}
To pass additional data to the evaluation routines, one can either supply
additional arguments to the user defined functions and {\tt ipoptr} or define
an environment that holds the data and pass this environment to {\tt ipoptr}.
Both methods are shown in the file \texttt{tests/parameters.R} that comes with
\ipoptr.
As a very simple example, suppose we want to find the minimum of
\[
f( x ) = a_1 x^2 + a_2 x + a_3
\]
for different values of the parameters $a_1$, $a_2$ and $a_3$.
First, we define the objective function and its gradient, assuming that
there is some variable \texttt{params} that contains the values of the
parameters.
\begin{verbatim}
> eval_f_ex1 <- function(x, params) {
    return( params[1]*x^2 + params[2]*x + params[3] )
  }

> eval_grad_f_ex1 <- function(x, params) {
    return( 2*params[1]*x + params[2] )
  }
\end{verbatim}
Note that the first argument should always be the control variable. All of
the user-defined functions should take the same set of additional
parameters: you have to supply them as input arguments to all functions,
even if some of the functions do not use them.
Then we can solve the problem for a specific set of parameters, in this case
$a_1=1$, $a_2=2$ and $a_3=3$, from initial value $x_0=0$, with the following
command
\begin{verbatim}
> # solve using ipoptr with additional parameters
> ipoptr(x0 = 0,
         eval_f = eval_f_ex1,
         eval_grad_f = eval_grad_f_ex1,
         opts = list("print_level"=0),
         params = c(1,2,3) )
Call:
ipoptr(x0 = 0, eval_f = eval_f_ex1, eval_grad_f = eval_grad_f_ex1,
opts = list(print_level = 0), params = c(1, 2, 3))
Ipopt solver status: 0 ( SUCCESS: Algorithm terminated
successfully at a locally optimal point, satisfying the
convergence tolerances (can be specified by options). )
Number of Iterations....: 1
Optimal value of objective function: 2
Optimal value of controls: -1
\end{verbatim}
For the second method, we don't have to supply the parameters as additional
arguments to the function.
\begin{verbatim}
> eval_f_ex2 <- function(x) {
    return( params[1]*x^2 + params[2]*x + params[3] )
  }

> eval_grad_f_ex2 <- function(x) {
    return( 2*params[1]*x + params[2] )
  }
\end{verbatim}
Instead, we define an environment that contains specific values of
\texttt{params}
\begin{verbatim}
> # define a new environment that contains params
> auxdata <- new.env()
> auxdata$params <- c(1,2,3)
\end{verbatim}
To solve this we supply \texttt{auxdata} as an argument to \texttt{ipoptr},
which will take care of evaluating the functions in the correct environment,
so that the auxiliary data is available.
\begin{verbatim}
> # pass the environment that should be used to evaluate functions to ipoptr
> ipoptr(x0 = 0,
         eval_f = eval_f_ex2,
         eval_grad_f = eval_grad_f_ex2,
         ipoptr_environment = auxdata,
         opts = list("print_level"=0) )
Call:
ipoptr(x0 = 0, eval_f = eval_f_ex2, eval_grad_f = eval_grad_f_ex2,
opts = list(print_level = 0), ipoptr_environment = auxdata)
Ipopt solver status: 0 ( SUCCESS: Algorithm terminated
successfully at a locally optimal point, satisfying the
convergence tolerances (can be specified by options). )
Number of Iterations....: 1
Optimal value of objective function: 2
Optimal value of controls: -1
\end{verbatim}
\subsection{The \Matlab Interface}
\hfill \textit{based on documentation by Peter Carbonetto\footnote{University of British Columbia}}%
\medskip
See Section \ref{sec.matlab.build} for instructions on how to build a
{\tt mex} file of the \Matlab interface for \Ipopt and how to make it known to
\Matlab.
The {\tt \$IPOPTDIR/contrib/MatlabInterface/examples} directory contains
several illustrative examples on how to use the \Matlab interface.
The best way to understand how to use the interface is to carefully go over
these examples.
For more information, type {\tt help ipopt} in the \Matlab prompt.
Further, Jonas Asprion assembled information about the \Matlab\ {\tt ipopt}
function and its arguments:
\url{http://www.idsc.ethz.ch/Downloads/IPOPT_InstallMatlab/IPOPT_MatlabInterface_V0p1.pdf}
Note that this document refers to \Ipopt versions before 3.11.
With 3.11, the {\tt auxdata} option has been removed from the {\tt mex} code. The new wrapper function {\tt ipopt\_auxdata} implements the same functionality as the previous {\tt ipopt} function, but uses \Matlab function handles to do so.
\section{Special Features}
\subsection{Derivative Checker}\label{sec:deriv-checker}
When writing code for the evaluation of derivatives it is very easy to
make mistakes (much easier than writing it correctly the first time
:)). As a convenient feature, \Ipopt provides the option to run a
simple derivative checker, based on finite differences, before the
optimization is started.
To use the derivative checker, you need to use the option
\htmlref{\tt derivative\_test}{opt:derivative_test}.
By default, this option is set to {\tt none},
i.e., no finite difference test is performed. If it is set to {\tt
first-order}, then the first derivatives of the objective function
and the constraints are verified, and for the setting {\tt
second-order}, the second derivatives are tested as well.
The verification is done by a simple finite differences approximation,
where each component of the user-provided starting point is perturbed
one after the other. The relative size of the perturbation is determined
by the option \htmlref{\tt derivative\_test\_perturbation}{opt:derivative_test_perturbation}. The default value
($10^{-8}$, about the square root of the machine precision) is
probably fine in most cases, but if you believe that you see wrong
warnings, you might want to play with this parameter. When the test is
performed, \Ipopt prints out a line for every partial derivative for
which the user-provided derivative value deviates too much from the
finite difference approximation. The relative tolerance for deciding
when a warning should be issued is determined by the option
\htmlref{\tt derivative\_test\_tol}{opt:derivative_test_tol}.
If you want to see the user-provided and
estimated derivative values with the relative deviation for each
single partial derivative, you can switch the
\htmlref{\tt derivative\_test\_print\_all}{opt:derivative_test_print_all}
option to {\tt yes}.
A typical output is:
\begin{footnotesize}
\begin{verbatim}
Starting derivative checker.
* grad_f[ 2] = -6.5159999999999991e+02 ~ -6.5559997134793468e+02 [ 6.101e-03]
* jac_g [ 4, 4] = 0.0000000000000000e+00 ~ 2.2160643690464592e-02 [ 2.216e-02]
* jac_g [ 4, 5] = 1.3798494268463347e+01 v ~ 1.3776333629422766e+01 [ 1.609e-03]
* jac_g [ 6, 7] = 1.4776333636790881e+01 v ~ 1.3776333629422766e+01 [ 7.259e-02]
Derivative checker detected 4 error(s).
\end{verbatim}
\end{footnotesize}
The star (``\verb|*|'') in the first column indicates that this line
corresponds to some partial derivative for which the error tolerance
was exceeded. Next, we see which partial derivative is concerned in
this output line. For example, in the first line, it is the second
component of the objective function gradient (or the third, if the
{\tt C\_STYLE} numbering is used, i.e., when counting of indices
starts with 0 instead of 1). The first floating point number is the
value given by the user code, and the second number (after
``\verb|~|'') is the finite differences estimation. Finally, the
number in square brackets is the relative difference between these two
numbers.
For constraints, the first index after {\tt jac\_g} is the index of
the constraint, and the second one corresponds to the variable index
(again, the choice of the numbering style matters).
Since the sparsity structure of the constraint Jacobian also has to be
provided by the user, it can be faulty as well. To detect this, the ``{\tt
v}'' after a user-provided derivative value indicates that this
component of the Jacobian is part of the user-provided sparsity
structure. If there is no ``{\tt v}'', it means that the user did not
include this partial derivative in the list of non-zero elements. In
the above output, the partial derivative ``{\tt jac\_g[4,4]}'' is
non-zero (based on the finite difference approximation), but it is not
included in the list of non-zero elements (missing ``{\tt v}''), so
that the user probably made a mistake in the sparsity structure. The
other two Jacobian entries are provided in the non-zero structure but
their values seem to be off.
For second derivatives, the output looks like:
\begin{footnotesize}
\begin{verbatim}
* obj_hess[ 1, 1] = 1.8810000000000000e+03 v ~ 1.8820000036612328e+03 [ 5.314e-04]
* 3-th constr_hess[ 2, 4] = 1.0000000000000000e+00 v ~ 0.0000000000000000e+00 [ 1.000e+00]
\end{verbatim}
\end{footnotesize}
There, the first line shows the deviation of the user-provided partial
second derivative in the Hessian of the objective function, and the
second line shows an error in a partial derivative for the Hessian of
the third constraint (again, the numbering style matters).
Since the second derivatives are approximated by finite differences of
the first derivatives, you should first correct errors for the first
derivatives. Also, since the finite difference approximations are
quite expensive, you should try to debug a small instance of your
problem if you can.
Another useful option is
\htmlref{\tt derivative\_test\_first\_index}{opt:derivative_test_first_index},
which allows you to start the derivative test at variables with a larger
index.
%
Finally, it is of course always a good idea to run your code through
some memory checker, such as {\tt valgrind} on Linux.
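The comparison performed by the checker can be sketched in plain Java on the
HS071 objective from the previous sections. This is only an illustration of the
finite-difference test, not \Ipopt's actual implementation; the perturbation
and tolerance values mirror the defaults of the options mentioned above.

```java
/*
 * Sketch of a first-order derivative check for the HS071 objective
 * f(x) = x0*x3*(x0+x1+x2) + x2 (illustrative only, not Ipopt's code).
 */
public class DerivCheckSketch {
   static double f(double[] x) {
      return x[0] * x[3] * (x[0] + x[1] + x[2]) + x[2];
   }

   /* user-provided analytic gradient, as in eval_grad_f above */
   static double[] gradF(double[] x) {
      return new double[] {
         x[0] * x[3] + x[3] * (x[0] + x[1] + x[2]),
         x[0] * x[3],
         x[0] * x[3] + 1.0,
         x[0] * (x[0] + x[1] + x[2])
      };
   }

   public static void main(String[] args) {
      double[] x = { 1.0, 5.0, 5.0, 1.0 }; // user-provided starting point
      double pert = 1e-8; // cf. derivative_test_perturbation
      double tol = 1e-4;  // cf. derivative_test_tol

      double[] g = gradF(x);
      for (int i = 0; i < x.length; i++) {
         /* perturb one component at a time, relative to its magnitude */
         double h = pert * Math.max(1.0, Math.abs(x[i]));
         double[] xp = x.clone();
         xp[i] += h;
         double fd = (f(xp) - f(x)) / h; // forward difference estimate
         double relErr = Math.abs(g[i] - fd) / Math.max(1.0, Math.abs(fd));
         if (relErr > tol)
            System.out.println("* grad_f[" + i + "] relative error " + relErr);
      }
      System.out.println("derivative check finished");
   }
}
```

Since the analytic gradient here is correct, no starred lines are printed; a
deliberately wrong entry in {\tt gradF} would produce output resembling the
checker lines shown above.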
\subsection{Quasi-Newton Approximation of Second Derivatives}
\label{sec:quasiNewton}
\Ipopt has an option to approximate the Hessian of the Lagrangian by
a limited-memory quasi-Newton method (L-BFGS). You can use this
feature by setting the \htmlref{\tt hessian\_approximation}{opt:hessian_approximation}
option to the value {\tt limited-memory}. In this case, it is not necessary to
implement the Hessian computation method {\tt eval\_h} in {\tt TNLP}.
If you are using the C or Fortran interface, you still need to
implement these functions, but they should return {\tt false} or {\tt
IERR=1}, respectively, and don't need to do anything else.
In general, when second derivatives can be computed with reasonable
computational effort, it is usually a good idea to use them, since
then \Ipopt normally converges in fewer iterations and is more
robust. An exception might be cases where your optimization
problem has a dense Hessian, i.e., a large percentage of non-zero entries
in the Hessian. In such a case, using the quasi-Newton approximation might be
better, even if it increases the number of iterations, since with exact
second derivatives the computation time per iteration might be significantly
higher due to the very large number of non-zero elements in the linear systems
that \Ipopt solves in order to compute the search direction.
Since the Hessian of the Lagrangian is zero for all variables that
appear only linearly in the objective and constraint functions, the
Hessian approximation should only take place in the space of all
nonlinear variables. By default, it is assumed that all variables are
nonlinear, but you can tell \Ipopt explicitly which variables are
nonlinear, using the {\tt get\_number\_of\_nonlinear\_variables} and
{\tt get\_list\_of\_nonlinear\_variables} method of the {\tt TNLP}
class, see Section~\ref{sec:add_meth}. (Those methods have been
implemented for the AMPL interface, so you would automatically only
approximate the Hessian in the space of the nonlinear variables, if
you are using the quasi-Newton option for AMPL models.) Currently,
those two methods are not available through the C or Fortran
interface.
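As a reminder of the underlying idea (standard quasi-Newton theory, not an
\Ipopt-specific formula): the approximation $B_k$ of the Hessian of the
Lagrangian $\mathcal{L}$ is updated in each iteration so that it satisfies the
secant condition
\[
  B_{k+1} s_k = y_k, \qquad
  s_k = x_{k+1} - x_k, \qquad
  y_k = \nabla_x \mathcal{L}(x_{k+1}, \lambda_{k+1})
      - \nabla_x \mathcal{L}(x_k, \lambda_{k+1}),
\]
where in the limited-memory variant only a small number of recent pairs
$(s_k, y_k)$ is stored and $B_k$ is never formed explicitly.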
\subsection{Warm-Starting Capabilities via AMPL}
\hfill \textit{based on documentation by Victor M. Zavala}\footnote{Department of Chemical Engineering, Carnegie Mellon University}
\medskip
Warm-starting an interior-point algorithm is an important issue. One of the main
difficulties arises from the fact that full-space variable information is required to
generate the warm-starting point. While \Ipopt is currently equipped to retrieve and receive
this type of information through the {\tt TNLP} interface, there exist some communication
barriers in the AMPL interface. When the user solves the problem
\eqref{eq:obj}--\eqref{eq:bounds}, \Ipopt will only return the optimal values of the primal
variables $x$ and of the constraint multipliers corresponding to the active bounds of $g(x)$
(see \eqref{eq:constraints}). The constraint multiplier values can be accessed through the
{\tt .dual} suffix or through the {\tt .sol} file. If this information is used to solve the
same problem again, you will notice that \Ipopt takes several iterations to find the
same solution. The reason for this is that we are missing the input information of the
multipliers $z^L$ and $z^U$ corresponding to the variable bounds (see \eqref{eq:bounds}).
However, \Ipopt also passes the values of the bound multipliers $z^L$ and $z^U$ to AMPL.
This will be communicated to the AMPL user through the suffixes {\tt ipopt\_zL\_out} and
{\tt ipopt\_zU\_out}, respectively. The user does not need to declare these suffixes, they
will be generated automatically in the AMPL interface. The user can use the suffix values to
initialize the bound multipliers for subsequent calls. In order to pass this information to
\Ipopt, the user will need to declare and assign values to the suffixes {\tt ipopt\_zL\_in}
and {\tt ipopt\_zU\_in}. For instance, for a given variable {\tt x[i]}, this can be done by
setting:
\begin{verbatim}
let x[i].ipopt_zL_in := x[i].ipopt_zL_out;
let x[i].ipopt_zU_in := x[i].ipopt_zU_out;
\end{verbatim}
If the user does not specify some of these values, \Ipopt will set these multipliers to 1.0
(as before). In order to make the warm-start effective, the user has control over the
following options from AMPL:\\
\htmlref{\tt warm\_start\_init\_point}{opt:warm_start_init_point} \\
\htmlref{\tt warm\_start\_bound\_push}{opt:warm_start_bound_push} \\
\htmlref{\tt warm\_start\_mult\_bound\_push}{opt:warm_start_mult_bound_push}
Note that the use of this feature is far from solving the complicated issue of
warm-starting interior-point algorithms. As general advice, this feature will be useful if the
user observes that the solution of subsequent problems (i.e., for different data instances)
preserves the same set of active inequalities and bounds (monitor the values of $z^L$ and
$z^U$ for subsequent solutions). In this case, initializing the bound multipliers and
setting \htmlref{\tt warm\_start\_init\_point}{opt:warm_start_init_point} to {\tt yes} and
setting \htmlref{\tt warm\_start\_bound\_push}{opt:warm_start_bound_push},
\htmlref{\tt warm\_start\_mult\_bound\_push}{opt:warm_start_mult_bound_push} and
\htmlref{\tt mu\_init}{opt:mu_init} to a small value ($10^{-6}$ or so) will reduce
significantly the number of iterations. This is particularly useful in setting up on-line
applications and high-level optimization strategies in AMPL.
If active-set changes are observed between subsequent solutions, then this strategy might
not decrease the number of iterations (in some cases, it might even tend to increase the
number of iterations).
You might also want to try the adaptive barrier update (instead of the default
monotone one, for which above we chose the initial value $10^{-6}$) when doing
the warm start. This can be activated by setting the
\htmlref{\tt mu\_strategy}{opt:mu_strategy} option to {\tt adaptive}. Also the
option \htmlref{\tt mu\_oracle}{opt:mu_oracle} gives some alternative choices.
In general, the adaptive choice often leads to fewer iterations, but the
computational cost per iteration might be higher.
The file {\tt \$IPOPTDIR/Ipopt/doc/hs071\_warmstart.mod} illustrates the use of the warm-start feature on the HS071 problem, see also Section \ref{sec.ipoptampl}.
\subsection{\sIpopt: Optimal Sensitivity Based on \Ipopt}
\hfill \textit{based on documentation by Hans Pirnay\footnote{RWTH Aachen, {\tt hans.pirnay@avt.rwth-aachen.de}} and Rodrigo L\'opez-Negrete\footnote{Carnegie Mellon University, {\tt rln@cmu.edu}}}
\medskip
The \sIpopt project provides a toolbox that uses NLP sensitivity theory to generate fast approximations to solutions when parameters in the problem change. It has been developed primarily by Hans Pirnay (RWTH-Aachen), Rodrigo L\'opez-Negrete (CMU), and Lorenz Biegler (CMU).
Sensitivity of nonlinear programming problems is a key step in any optimization study. Sensitivity provides information on regularity and curvature conditions at KKT points, assesses which variables play dominant roles in the optimization, and provides first order estimates for parametric nonlinear programs. Moreover, for NLP algorithms that use exact second derivatives, sensitivity can be implemented very efficiently within NLP solvers and provide valuable information with very little added computation. This implementation provides \Ipopt with the capabilities to calculate sensitivities, and approximate perturbed solutions with them.
The basic sensitivity strategy implemented here is based on the application of the Implicit Function Theorem (IFT) to the KKT conditions of the NLP. As shown by Fiacco (1983), sensitivities can be obtained from a solution with suitable regularity conditions merely by solving a linearization of the KKT conditions. More details can be found in \cite{PLNB:sIpopt}.
If you are using \sIpopt for your research, please cite \cite{PLNB:sIpopt}.
The \sIpopt project is available in the \Ipopt repository under
{\tt \$IPOPTDIR/Ipopt/contrib/sIPOPT}.
After having installed \Ipopt successfully, \sIpopt can be built and installed by changing to the directory {\tt \$IPOPTDIR/build/Ipopt/contrib/sIPOPT} and executing {\tt make install}.
This should copy the generated libraries {\tt libsipopt.*} to {\tt \$IPOPTDIR/build/lib} and an AMPL executable {\tt ipopt\_sens} to {\tt \$IPOPTDIR/build/bin}.
The files {\tt \$IPOPTDIR/Ipopt/contrib/sIPOPT/examples/parametric\_ampl/parametric.\{mod,run\}} are an example that shows how to use \sIpopt to solve the NLP
\begin{align}
\min\quad & x_1^2 + x_2^2 + x_3^2, \\
\mathrm{such~that}\quad & 6x_1 + 3x_2 + 2x_3 = p_1, \\
& p_2 x_1 + x_2 - x_3 = 1, \\
& x_1, x_2, x_3 \geq 0,
\end{align}
where we perturb the parameters $p_1$ and $p_2$ from $p_a = (p_1, p_2) = (5, 1)$ to $p_b = (4.5, 1)$.
Note that \sIpopt has been developed under the constraint that it must work with the regular \Ipopt code. Due to this constraint, some compromises had to be made.
However, there is an ongoing effort to develop \sIpopt 2, which is a fork of the \Ipopt code that allows for the explicit definition of parametric NLPs. This code can be found at \url{https://github.com/athrpf/sipopt2}. If you have questions about \sIpopt 2, please contact Hans Pirnay.
\subsection{Inertia-Free Curvature Test}
\hfill \textit{contributed by Nai-Yuan Chiang\footnote{Argonne National Laboratory, \url{http://www.mcs.anl.gov/~nychiang/}} and Victor M. Zavala Tejeda\footnote{University of Wisconsin-Madison, \url{http://zavalab.engr.wisc.edu/}}}
\medskip
In a filter line-search setting it is necessary to detect the presence of
negative curvature and to regularize the Hessian of the Lagrangian when
it is present. Regularization ensures that the computed step is a descent
direction for the objective function when the constraint violation is
sufficiently small, which in turn is necessary to guarantee global
convergence.
To detect the presence of negative curvature, the default method
implemented in \Ipopt requires inertia information of the augmented system.
The inertia of the augmented system is the number of positive, negative,
and zero eigenvalues. Inertia is currently estimated using symmetric
indefinite factorization routines implemented in powerful packages such as
MA27, MA57, or Pardiso. When more general linear
algebra strategies/packages are used (e.g., iterative, parallel
decomposition), however, inertia information is difficult (if not
impossible) to obtain.
In \cite{ChiangZavala2014}, we present acceptance tests for the search step
that do not require inertia information of the linear system and prove that
such tests are sufficient to ensure global convergence. Similar tests were
proposed in the exact penalty framework reported in \cite{CouHubSchWae:inexact}.
The inertia-free approach also enables the use of a wider range of linear
algebra strategies and packages. We have performed significant benchmarks
and found satisfactory performance compared to the inertia-based
counterpart. Moreover, we have found that this test can yield significant
improvements in computing time because it provides more flexibility to
accept steps. This flexibility is particularly beneficial in problems that
are inherently ill-conditioned and require significant amounts of
regularization.
The inertia-free capability implemented in \Ipopt is controlled by the
options \htmlref{\tt neg\_curv\_test\_tol}{opt:neg_curv_test_tol} and
\htmlref{\tt neg\_curv\_test\_reg}{opt:neg_curv_test_reg}.
\section{\Ipopt Options}\label{sec:options}
\Ipopt has many (maybe too many) options that can be adjusted for the
algorithm. Options are all identified by a string name, and their
values can be of one of three types: Number (real), Integer, or
String. Number options are used for things like tolerances, integer
options are used for things like maximum number of iterations, and
string options are used for setting algorithm details, like the NLP
scaling method. Options can be set through code, through the AMPL
interface if you are using AMPL, or by creating an {\tt ipopt.opt}
file in the directory from which you are executing \Ipopt.
The {\tt ipopt.opt} file is read line by line and each line should
contain the option name, followed by whitespace, and then the
value. Comments can be included with the {\tt \#} symbol. For example,
\begin{verbatim}
# This is a comment
# Turn off the NLP scaling
nlp_scaling_method none
# Change the initial barrier parameter
mu_init 1e-2
# Set the max number of iterations
max_iter 500
\end{verbatim}
is a valid {\tt ipopt.opt} file.
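For illustration, the line format just described ({\tt name}, whitespace,
{\tt value}, with {\tt \#} starting a comment) can be sketched as a toy reader;
this is illustrative only and is not \Ipopt's actual option-file parser, which
also validates names and values.

```java
import java.util.LinkedHashMap;
import java.util.Map;

/* Toy reader for the ipopt.opt line format described above
 * (name, whitespace, value; '#' starts a comment). Illustrative only. */
public class OptFileSketch {
   static Map<String, String> parse(String text) {
      Map<String, String> opts = new LinkedHashMap<>();
      for (String line : text.split("\n")) {
         int hash = line.indexOf('#');
         if (hash >= 0)
            line = line.substring(0, hash); // strip comment
         line = line.trim();
         if (line.isEmpty())
            continue; // blank or comment-only line
         String[] parts = line.split("\\s+", 2);
         if (parts.length < 2)
            continue; // malformed line: no value
         opts.put(parts[0], parts[1].trim());
      }
      return opts;
   }

   public static void main(String[] args) {
      String file = "# Turn off the NLP scaling\n"
                  + "nlp_scaling_method none\n"
                  + "mu_init 1e-2\n"
                  + "max_iter 500\n";
      System.out.println(parse(file));
      // prints {nlp_scaling_method=none, mu_init=1e-2, max_iter=500}
   }
}
```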
Options can also be set in code. Have a look at the examples to see
how this is done.
A subset of \Ipopt options are available through AMPL. To set options
through AMPL, use the internal AMPL command {\tt options}. For
example, \\
{\tt options ipopt\_options "nlp\_scaling\_method=none mu\_init=1e-2
max\_iter=500"} \\
is a valid options command in AMPL. The most important options are
referenced in Appendix~\ref{app.options_ref}. To see which options are
available through AMPL, you can run the AMPL solver executable with
the ``{\tt -=}'' flag from the command prompt. To specify other
options when using AMPL, you can always create {\tt ipopt.opt}. Note that
the {\tt ipopt.opt} file is given preference when setting options.
This way, you can easily override any options set in a particular
executable or AMPL model by specifying new values in {\tt ipopt.opt}.
For a list of the most important valid options, see
Appendix~\ref{app.options_ref}. You can print the documentation for all \Ipopt
options by using the option
\medskip
\htmlref{\tt print\_options\_documentation}{opt:print_options_documentation} {\tt~yes}
\medskip
and running \Ipopt (like the AMPL solver executable, for
instance). This will output the documentation of almost all options to the
console.
\section{\Ipopt Output}\label{sec:output}
This section describes the standard \Ipopt console output with the
default setting for \htmlref{\tt print\_level}{opt:print_level}. The output is designed to
provide a quick summary of each iteration as \Ipopt solves the problem.
Before \Ipopt starts to solve the problem, it displays the problem
statistics (number of nonzero-elements in the matrices, number of
variables, etc.). Note that if you have fixed variables (both upper
and lower bounds are equal), \Ipopt may remove these variables from
the problem internally and not include them in the problem statistics.
Following the problem statistics, \Ipopt will begin to solve the
problem and you will see output resembling the following,
\begin{verbatim}
iter objective inf_pr inf_du lg(mu) ||d|| lg(rg) alpha_du alpha_pr ls
0 1.6109693e+01 1.12e+01 5.28e-01 0.0 0.00e+00 - 0.00e+00 0.00e+00 0
1 1.8029749e+01 9.90e-01 6.62e+01 0.1 2.05e+00 - 2.14e-01 1.00e+00f 1
2 1.8719906e+01 1.25e-02 9.04e+00 -2.2 5.94e-02 2.0 8.04e-01 1.00e+00h 1
\end{verbatim}
and the columns of output are defined as,
\begin{description}
\item[{\tt iter}:] The current iteration count. This includes regular
iterations and iterations during the restoration phase. If the
algorithm is in the restoration phase, the letter {\tt r} will be
appended to the iteration number.
\item[{\tt objective}:] The unscaled objective value at the current
point. During the restoration phase, this value remains the unscaled
objective value for the original problem.
\item[{\tt inf\_pr}:] The unscaled constraint violation at the current
point. This quantity is the infinity-norm (max) of the (unscaled)
constraints (\ref{eq:constraints}). During the restoration phase,
this value remains the constraint violation of the original problem
at the current point. The option \htmlref{\tt inf\_pr\_output}{opt:inf_pr_output} can
be used to switch to the printing of a different quantity.
\item[{\tt inf\_du}:] The scaled dual infeasibility at the current
point. This quantity measures the infinity-norm (max) of the
internal dual infeasibility, Eq.~(4a) in the implementation paper
\cite{WaecBieg06:mp}, including inequality constraints reformulated
using slack variables and problem scaling. During the restoration
phase, this is the value of the dual infeasibility for the
restoration phase problem.
\item[{\tt lg(mu)}:] $\log_{10}$ of the value of the barrier parameter
$\mu$.
\item[{\tt ||d||}:] The infinity norm (max) of the primal step (for
the original variables $x$ and the internal slack variables $s$).
During the restoration phase, this value includes the values of
additional variables, $p$ and $n$ (see Eq.~(30) in
\cite{WaecBieg06:mp}).
\item[{\tt lg(rg)}:] $\log_{10}$ of the value of the regularization
term for the Hessian of the Lagrangian in the augmented system
($\delta_w$ in Eq.~(26) and Section 3.1 in \cite{WaecBieg06:mp}). A
dash (``\texttt{-}'') indicates that no regularization was done.
\item[{\tt alpha\_du}:] The stepsize for the dual variables
($\alpha^z_k$ in Eq.~(14c) in \cite{WaecBieg06:mp}).
\item[{\tt alpha\_pr}:] The stepsize for the primal
variables ($\alpha_k$ in Eq.~(14a) in \cite{WaecBieg06:mp}). The
number is usually followed by a character for additional diagnostic
information regarding the step acceptance criterion, see
Table~\ref{tab:alpha_pr}.
\item[{\tt ls}:] The number of backtracking line search steps (does
not include second-order correction steps).
\end{description}
\begin{table}
\centering
\begin{tabular}{ll}
Tag & Description \\
\hline
f & f-type iteration in the filter method w/o second order correction \\
F & f-type iteration in the filter method w/ second order correction \\
h & h-type iteration in the filter method w/o second order correction \\
H & h-type iteration in the filter method w/ second order correction \\
k & penalty value unchanged in merit function method w/o second order correction \\
K & penalty value unchanged in merit function method w/ second order correction \\
n & penalty value updated in merit function method w/o second order correction \\
N & penalty value updated in merit function method w/ second order correction \\
R & Restoration phase just started \\
w & in watchdog procedure \\
s & step accepted in soft restoration phase \\
t/T & tiny step accepted without line search \\
r & some previous iterate restored \\
% (forgot right now what that means ;-) )
\hline \\
\end{tabular}
\caption{Diagnostic output in {\tt alpha\_pr} column.}
\label{tab:alpha_pr}
\end{table}
Note that the step acceptance mechanisms in \Ipopt consider the
barrier objective function (Eq~(3a) in \cite{WaecBieg06:mp}) which is
usually different from the value reported in the \texttt{objective}
column. Similarly, for the purposes of the step acceptance, the
constraint violation is measured for the internal problem formulation,
which includes slack variables for inequality constraints and
potentially scaling of the constraint functions. This value, too, is
usually different from the value reported in \texttt{inf\_pr}. As a
consequence, a new iterate might have worse values both for the
objective function and the constraint violation as reported in the
iteration output, seemingly contradicting the globalization procedure.
When the algorithm terminates, \Ipopt will output a message to the
screen based on the return status of the call to {\tt Optimize}. The following
is a list of the possible return codes, their corresponding output message
to the console, and a brief description.
\begin{description}
\item[{\tt Solve\_Succeeded}:] $\;$ \\
Console Message: {\tt EXIT: Optimal Solution Found.} \\
This message indicates that \Ipopt found a (locally) optimal point
within the desired tolerances.
\item[{\tt Solved\_To\_Acceptable\_Level}:] $\;$ \\
Console Message: {\tt EXIT: Solved To Acceptable Level.} \\
This indicates that the algorithm did not converge to the
``desired'' tolerances, but that it was able to obtain a point
satisfying the ``acceptable'' tolerance level as specified by the
\htmlref{\tt acceptable\_*}{opt:acceptable_tol} options.
This may happen if the desired tolerances
are too small for the current problem.
\item[{\tt Feasible\_Point\_Found}:] $\;$ \\
Console Message: {\tt EXIT: Feasible point for square problem found.} \\
This message is printed if the problem is ``square'' (i.e., it has
as many equality constraints as free variables) and \Ipopt found a
feasible point.
\item[{\tt Infeasible\_Problem\_Detected}:] $\;$ \\
Console Message: {\tt EXIT: Converged to a point of
local infeasibility. Problem may be infeasible.} \\
The restoration phase converged to a point that is a minimizer for
the constraint violation (in the $\ell_1$-norm), but is not feasible
for the original problem. This indicates that the problem may be
infeasible (or at least that the algorithm is stuck at a locally
infeasible point). The returned point (the minimizer of the
constraint violation) might help you to find which constraint is
causing the problem. If you believe that the NLP is feasible,
it might help to start the optimization from a different point.
\item[{\tt Search\_Direction\_Becomes\_Too\_Small}:] $\;$ \\
Console Message: {\tt EXIT: Search Direction is becoming Too Small.} \\
This indicates that \Ipopt is calculating very small step sizes and
is making very little progress. This could happen if the problem has
been solved to the best numerical accuracy possible given the
current scaling.
\item[{\tt Diverging\_Iterates}:] $\;$ \\
Console Message: {\tt EXIT: Iterates diverging; problem might be
unbounded.} \\
This message is printed if the max-norm of the iterates becomes
larger than the value of the option
\htmlref{\tt diverging\_iterates\_tol}{opt:diverging_iterates_tol}.
This can happen if the problem is unbounded below and the iterates
are diverging.
\item[{\tt User\_Requested\_Stop}:] $\;$ \\
Console Message: {\tt EXIT: Stopping optimization at current point
as requested by user.} \\
This message is printed if the user call-back method {\tt
intermediate\_callback} returned {\tt false} (see
Section~\ref{sec:add_meth}).
\item[{\tt Maximum\_Iterations\_Exceeded}:] $\;$ \\
Console Message: {\tt EXIT: Maximum Number of Iterations Exceeded.} \\
This indicates that \Ipopt has exceeded the maximum number of
iterations as specified by the option \htmlref{\tt max\_iter}{opt:max_iter}.
\item[{\tt Maximum\_CpuTime\_Exceeded}:] $\;$ \\
Console Message: {\tt EXIT: Maximum CPU time exceeded.} \\
This indicates that \Ipopt has exceeded the maximum number of
CPU seconds as specified by the option \htmlref{\tt max\_cpu\_time}{opt:max_cpu_time}.
\item[{\tt Restoration\_Failed}:] $\;$ \\
Console Message: {\tt EXIT: Restoration Failed!} \\
This indicates that the restoration phase failed to find a feasible
point that was acceptable to the filter line search for the original
problem. This could happen if the problem is highly degenerate, does
not satisfy the constraint qualification, or if your NLP code
provides incorrect derivative information.
\item[{\tt Error\_In\_Step\_Computation}:] $\;$ \\
Console Output:\ {\tt EXIT:\ Error in step computation (regularization becomes too large?)!} \\
This message is printed if \Ipopt is unable to compute a search
direction, despite several attempts to modify the iteration matrix.
Usually, the value of the regularization parameter then becomes too
large. One situation where this can happen is when values in the
Hessian are invalid ({\tt NaN} or {\tt Inf}). You can check whether this
is true by using the
\htmlref{\tt check\_derivatives\_for\_naninf}{opt:check_derivatives_for_naninf} option.
\item[{\tt Invalid\_Option}:] $\;$ \\
Console Message: (details about the particular error
will be output to the console) \\
This indicates that there was some problem specifying the options.
See the specific message for details.
\item[{\tt Not\_Enough\_Degrees\_Of\_Freedom}:] $\;$ \\
Console Message: {\tt EXIT: Problem has too few degrees of freedom.} \\
This indicates that your problem, as specified, has too few degrees
of freedom. This can happen if you have too many equality
constraints, or if you fix too many variables (\Ipopt removes fixed
variables by default, see also the \htmlref{\tt fixed\_variable\_treatment}{opt:fixed_variable_treatment} option).
\item[{\tt Invalid\_Problem\_Definition}:] $\;$ \\
Console Message: (no console message, this is a return code for the
C and Fortran interfaces only.) \\
This indicates that there was an exception of some sort when
building the {\tt IpoptProblem} structure in the C or Fortran
interface. Likely there is an error in your model or the {\tt main}
routine.
\item[{\tt Unrecoverable\_Exception}:] $\;$ \\
Console Message: (details about the particular error
will be output to the console) \\
This indicates that \Ipopt has thrown an exception that does not
have an internal return code. See the specific message for details.
\item[{\tt NonIpopt\_Exception\_Thrown}:] $\;$ \\
Console Message: {\tt Unknown Exception caught in Ipopt} \\
An unknown exception was caught in \Ipopt. This exception could have
originated from your model or any linked in third party code.
\item[{\tt Insufficient\_Memory}:] $\;$ \\
Console Message: {\tt EXIT: Not enough memory.} \\
An error occurred while trying to allocate memory. The problem may
be too large for your current memory and swap configuration.
\item[{\tt Internal\_Error}:] $\;$ \\
Console: {\tt EXIT: INTERNAL ERROR: Unknown SolverReturn
value - Notify IPOPT Authors.} \\
An unknown internal error has occurred. Please notify the authors of
\Ipopt via the mailing list.
\end{description}
\subsection{Diagnostic Tags for \Ipopt}
To print additional diagnostic tags for each iteration of \Ipopt, set
the option \htmlref{\tt print\_info\_string}{opt:print_info_string}
to \texttt{yes}. With this, a tag will appear at the end of each
iteration line. These tags have diagnostic meanings that are useful
for flagging difficulties in a particular \Ipopt run. A list of
possible strings is given in
Table~\ref{tab:info_string}.
\begin{table}\centering
\begin{tabular}{@{}lll@{}}
Tag & Description & Reference \\
\hline
! & Tighten resto tolerance if only slightly infeasible & Section 3.3 in \cite{WaecBieg06:mp} \\
A & Current iteration is acceptable & Alternate termination \\
a & Perturbation for PD singularity impossible, assume singular & Section 3.1 in \cite{WaecBieg06:mp}\\
C & Second Order Correction taken & Section 2.4 in \cite{WaecBieg06:mp} \\
Dh & Hessian degenerate based on multiple iterations & Section 3.1 in \cite{WaecBieg06:mp}\\
Dhj & Hessian/Jacobian degenerate based on multiple iterations & Section 3.1 in \cite{WaecBieg06:mp}\\
Dj & Jacobian degenerate based on multiple iterations & Section 3.1 in \cite{WaecBieg06:mp}\\
dx & $\delta_x$ perturbation too large & Section 3.1 in \cite{WaecBieg06:mp}\\
e & Cutting back $\alpha$ due to evaluation error & in backtracking line search \\
F- & Filter should be reset, but maximal resets exceeded & Section 2.3 in \cite{WaecBieg06:mp} \\
F+ & Resetting filter due to last few rejections of filter & Section 2.3 in \cite{WaecBieg06:mp} \\
L & Degenerate Jacobian, $\delta_c$ already perturbed & Section 3.1 in \cite{WaecBieg06:mp}\\
l & Degenerate Jacobian, $\delta_c$ perturbed & Section 3.1 in \cite{WaecBieg06:mp}\\
M & Magic step taken for slack variables & in backtracking line search \\
Nh & Hessian not yet degenerate & Section 3.1 in \cite{WaecBieg06:mp}\\
Nhj & Hessian/Jacobian not yet degenerate & Section 3.1 in \cite{WaecBieg06:mp}\\
Nj & Jacobian not yet degenerate & Section 3.1 in \cite{WaecBieg06:mp}\\
NW & Warm start initialization failed & in Warm Start Initialization \\
q & PD system possibly singular, attempt improving sol.\ quality & Section 3.1 in \cite{WaecBieg06:mp}\\
R & Solution of restoration phase & Section 3.3 in \cite{WaecBieg06:mp} \\
S & PD system possibly singular, accept current solution & Section 3.1 in \cite{WaecBieg06:mp}\\
s & PD system singular & Section 3.1 in \cite{WaecBieg06:mp}\\
s & Square Problem. Set multipliers to zero & Default initialization routine \\
Tmax & Trial $\theta$ is larger than $\theta_{max}$ & filter parameter, see (21) in \cite{WaecBieg06:mp} \\
W & Watchdog line search procedure successful & Section 3.2 in \cite{WaecBieg06:mp} \\
w & Watchdog line search procedure unsuccessful, stopped & Section 3.2 in \cite{WaecBieg06:mp} \\
Wb & Undoing most recent SR1 update & Section 5.4.1 in \cite{Biegler:nlpbook} \\
We & Skip Limited-Memory Update in restoration phase & Section 5.4.1 in \cite{Biegler:nlpbook} \\
Wp & Safeguard $B^0 = \sigma I$ for Limited-Memory Update & Section 5.4.1 in \cite{Biegler:nlpbook} \\
Wr & Resetting Limited-Memory Update & Section 5.4.1 in \cite{Biegler:nlpbook} \\
Ws & Skip Limited-Memory Update since $s^Ty$ is not positive & Section 5.4.1 in \cite{Biegler:nlpbook} \\
WS & Skip Limited-Memory Update since $\Delta x$ is too small & Section 5.4.1 in \cite{Biegler:nlpbook} \\
y & Dual infeasibility, use least square multiplier update & during ipopt algorithm \\
z & Apply correction to bound multiplier if too large & during ipopt algorithm \\
\end{tabular}
\caption{Diagnostic output appended using \texttt{print\_info\_string}.}
\label{tab:info_string}
\end{table}
\appendix
\section{Triplet Format for Sparse Matrices}\label{app.triplet}
\Ipopt was designed for optimizing large sparse nonlinear programs.
Because of problem sparsity, the required matrices (like the
constraints Jacobian or Lagrangian Hessian) are not stored as dense
matrices, but rather in a sparse matrix format. For the tutorials in
this document, we use the triplet format. Consider the matrix
\begin{equation}
\label{eqn.ex_matrix}
\left[
\begin{array}{ccccccc}
1.1 & 0 & 0 & 0 & 0 & 0 & 0.5 \\
0 & 1.9 & 0 & 0 & 0 & 0 & 0.5 \\
0 & 0 & 2.6 & 0 & 0 & 0 & 0.5 \\
0 & 0 & 7.8 & 0.6 & 0 & 0 & 0 \\
0 & 0 & 0 & 1.5 & 2.7 & 0 & 0 \\
1.6 & 0 & 0 & 0 & 0.4 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0.9 & 1.7 \\
\end{array}
\right]
\end{equation}
A standard dense matrix representation would need to store $7 \cdot
7 = 49$ floating point numbers, where many entries would be zero. In
triplet format, however, only the nonzero entries are stored. The
triplet format records the row number, the column number, and the
value of all nonzero entries in the matrix. For the matrix above, this
means storing $14$ integers for the rows, $14$ integers for the
columns, and $14$ floating point numbers for the values. While this
does not seem like a huge space saving over the $49$ floating point
numbers stored in the dense representation, for larger matrices, the
space savings are very dramatic\footnote{For an $n \times n$ matrix,
the dense representation grows with the square of $n$, while the
sparse representation grows linearly in the number of nonzeros.}.
The parameter {\tt index\_style} in {\tt get\_nlp\_info} tells \Ipopt if
you prefer to use C style indexing (0-based, i.e., starting the
counting at 0) for the row and column indices or Fortran style
(1-based). Tables \ref{tab.fortran_triplet} and \ref{tab.c_triplet}
below show the triplet format for both indexing styles, using the
example matrix (\ref{eqn.ex_matrix}).
\begin{footnotesize}
\begin{table}[ht]%[!h]
\begin{center}
\begin{tabular}{c c c}
row & col & value \\
\hline
{\tt iRow[0] = 1} & {\tt jCol[0] = 1} & {\tt values[0] = 1.1} \\
{\tt iRow[1] = 1} & {\tt jCol[1] = 7} & {\tt values[1] = 0.5} \\
{\tt iRow[2] = 2} & {\tt jCol[2] = 2} & {\tt values[2] = 1.9} \\
{\tt iRow[3] = 2} & {\tt jCol[3] = 7} & {\tt values[3] = 0.5} \\
{\tt iRow[4] = 3} & {\tt jCol[4] = 3} & {\tt values[4] = 2.6} \\
{\tt iRow[5] = 3} & {\tt jCol[5] = 7} & {\tt values[5] = 0.5} \\
{\tt iRow[6] = 4} & {\tt jCol[6] = 3} & {\tt values[6] = 7.8} \\
{\tt iRow[7] = 4} & {\tt jCol[7] = 4} & {\tt values[7] = 0.6} \\
{\tt iRow[8] = 5} & {\tt jCol[8] = 4} & {\tt values[8] = 1.5} \\
{\tt iRow[9] = 5} & {\tt jCol[9] = 5} & {\tt values[9] = 2.7} \\
{\tt iRow[10] = 6} & {\tt jCol[10] = 1} & {\tt values[10] = 1.6} \\
{\tt iRow[11] = 6} & {\tt jCol[11] = 5} & {\tt values[11] = 0.4} \\
{\tt iRow[12] = 7} & {\tt jCol[12] = 6} & {\tt values[12] = 0.9} \\
{\tt iRow[13] = 7} & {\tt jCol[13] = 7} & {\tt values[13] = 1.7}
\end{tabular}
\caption{Triplet Format of Matrix (\ref{eqn.ex_matrix})
with {\tt index\_style=FORTRAN\_STYLE}}
\label{tab.fortran_triplet}
\end{center}
\end{table}
\begin{table}[ht]%[!h]
\begin{center}
\begin{tabular}{c c c}
row & col & value \\
\hline
{\tt iRow[0] = 0} & {\tt jCol[0] = 0} & {\tt values[0] = 1.1} \\
{\tt iRow[1] = 0} & {\tt jCol[1] = 6} & {\tt values[1] = 0.5} \\
{\tt iRow[2] = 1} & {\tt jCol[2] = 1} & {\tt values[2] = 1.9} \\
{\tt iRow[3] = 1} & {\tt jCol[3] = 6} & {\tt values[3] = 0.5} \\
{\tt iRow[4] = 2} & {\tt jCol[4] = 2} & {\tt values[4] = 2.6} \\
{\tt iRow[5] = 2} & {\tt jCol[5] = 6} & {\tt values[5] = 0.5} \\
{\tt iRow[6] = 3} & {\tt jCol[6] = 2} & {\tt values[6] = 7.8} \\
{\tt iRow[7] = 3} & {\tt jCol[7] = 3} & {\tt values[7] = 0.6} \\
{\tt iRow[8] = 4} & {\tt jCol[8] = 3} & {\tt values[8] = 1.5} \\
{\tt iRow[9] = 4} & {\tt jCol[9] = 4} & {\tt values[9] = 2.7} \\
{\tt iRow[10] = 5} & {\tt jCol[10] = 0} & {\tt values[10] = 1.6} \\
{\tt iRow[11] = 5} & {\tt jCol[11] = 4} & {\tt values[11] = 0.4} \\
{\tt iRow[12] = 6} & {\tt jCol[12] = 5} & {\tt values[12] = 0.9} \\
{\tt iRow[13] = 6} & {\tt jCol[13] = 6} & {\tt values[13] = 1.7}
\end{tabular}
\caption{Triplet Format of Matrix (\ref{eqn.ex_matrix})
with {\tt index\_style=C\_STYLE}}
\label{tab.c_triplet}
\end{center}
\end{table}
\end{footnotesize}
The individual elements of the matrix can be listed in any order, and
if there are multiple items for the same nonzero position, the values
provided for those positions are added.
The Hessian of the Lagrangian is a symmetric matrix. In the case of a
symmetric matrix, you only need to specify the lower left triangular
part (or, alternatively, the upper right triangular part). For
example, given the matrix,
\begin{equation}
\label{eqn.ex_sym_matrix}
\left[
\begin{array}{ccccccc}
1.0 & 0 & 3.0 & 0 & 2.0 \\
0 & 1.1 & 0 & 0 & 5.0 \\
3.0 & 0 & 1.2 & 6.0 & 0 \\
0 & 0 & 6.0 & 1.3 & 9.0 \\
2.0 & 5.0 & 0 & 9.0 & 1.4
\end{array}
\right]
\end{equation}
the triplet format is shown in Tables \ref{tab.sym_fortran_triplet}
and \ref{tab.sym_c_triplet}.
\begin{footnotesize}
\begin{table}[ht]%[!h]
\begin{center}
\begin{tabular}{c c c}
row & col & value \\
\hline
{\tt iRow[0] = 1} & {\tt jCol[0] = 1} & {\tt values[0] = 1.0} \\
{\tt iRow[1] = 2} & {\tt jCol[1] = 2} & {\tt values[1] = 1.1} \\
{\tt iRow[2] = 3} & {\tt jCol[2] = 1} & {\tt values[2] = 3.0} \\
{\tt iRow[3] = 3} & {\tt jCol[3] = 3} & {\tt values[3] = 1.2} \\
{\tt iRow[4] = 4} & {\tt jCol[4] = 3} & {\tt values[4] = 6.0} \\
{\tt iRow[5] = 4} & {\tt jCol[5] = 4} & {\tt values[5] = 1.3} \\
{\tt iRow[6] = 5} & {\tt jCol[6] = 1} & {\tt values[6] = 2.0} \\
{\tt iRow[7] = 5} & {\tt jCol[7] = 2} & {\tt values[7] = 5.0} \\
{\tt iRow[8] = 5} & {\tt jCol[8] = 4} & {\tt values[8] = 9.0} \\
{\tt iRow[9] = 5} & {\tt jCol[9] = 5} & {\tt values[9] = 1.4}
\end{tabular}
\caption{Triplet Format of Matrix (\ref{eqn.ex_sym_matrix})
with {\tt index\_style=FORTRAN\_STYLE}}
\label{tab.sym_fortran_triplet}
\end{center}
\end{table}
\begin{table}[ht]%[!h]
\begin{center}
\begin{tabular}{c c c}
row & col & value \\
\hline
{\tt iRow[0] = 0} & {\tt jCol[0] = 0} & {\tt values[0] = 1.0} \\
{\tt iRow[1] = 1} & {\tt jCol[1] = 1} & {\tt values[1] = 1.1} \\
{\tt iRow[2] = 2} & {\tt jCol[2] = 0} & {\tt values[2] = 3.0} \\
{\tt iRow[3] = 2} & {\tt jCol[3] = 2} & {\tt values[3] = 1.2} \\
{\tt iRow[4] = 3} & {\tt jCol[4] = 2} & {\tt values[4] = 6.0} \\
{\tt iRow[5] = 3} & {\tt jCol[5] = 3} & {\tt values[5] = 1.3} \\
{\tt iRow[6] = 4} & {\tt jCol[6] = 0} & {\tt values[6] = 2.0} \\
{\tt iRow[7] = 4} & {\tt jCol[7] = 1} & {\tt values[7] = 5.0} \\
{\tt iRow[8] = 4} & {\tt jCol[8] = 3} & {\tt values[8] = 9.0} \\
{\tt iRow[9] = 4} & {\tt jCol[9] = 4} & {\tt values[9] = 1.4}
\end{tabular}
\caption{Triplet Format of Matrix (\ref{eqn.ex_sym_matrix})
with {\tt index\_style=C\_STYLE}}
\label{tab.sym_c_triplet}
\end{center}
\end{table}
\end{footnotesize}
\section{The Smart Pointer Implementation: {\tt SmartPtr<T>}} \label{app.smart_ptr}
The {\tt SmartPtr} class is described in {\tt IpSmartPtr.hpp}. It is a
template class that takes care of counting references to objects and
deleting them when they are no longer referenced. Instead of pointing to an object
with a raw C++ pointer (e.g. {\tt HS071\_NLP*}), we use a {\tt
SmartPtr}. Every time a {\tt SmartPtr} is set to reference an
object, it increments a counter in that object (see the {\tt
ReferencedObject} base class if you are interested). If a {\tt
SmartPtr} is done with the object, either by leaving scope or being
set to point to another object, the counter is decremented. When the
count of the object goes to zero, the object is automatically deleted.
{\tt SmartPtr}s are very simple; just use them as you would use a
standard pointer.
It is very important to use {\tt SmartPtr}s instead of raw pointers
when passing objects to \Ipopt. Internally, \Ipopt uses smart
pointers for referencing objects. If you use a raw pointer in your
executable, the object's counter will NOT get incremented. Then, when
\Ipopt uses smart pointers inside its own code, the counter will get
incremented. However, before \Ipopt returns control to your code, it
will decrement as many times as it incremented earlier, and the
counter will return to zero. Therefore, \Ipopt will delete the
object. When control returns to you, you now have a raw pointer that
points to a deleted object.
This might sound difficult to anyone not familiar with the use of
smart pointers, but just follow one simple rule: always use a {\tt SmartPtr}
when creating or passing an \Ipopt object.
\section{Options Reference} \label{app.options_ref}
Options can be set using {\tt ipopt.opt}, through your own code, or
through the AMPL {\tt ipopt\_options} command. See Section
\ref{sec:options} for an explanation of how to use these commands.
Shown here is a list of the most important options for \Ipopt. To view
the full list of options, you can set the option
\htmlref{\tt print\_options\_documentation}{opt:print_options_documentation}
to {\tt yes} or simply run the \Ipopt AMPL solver executable as
\begin{verbatim}
ipopt --print-options
\end{verbatim}
Usually, option values are identical for the regular mode of \Ipopt
and the restoration phase. However, to set an option value
specifically for the restoration phase, the prefix ``\texttt{resto.}''
should be appended. For example, to set the acceptable tolerance for
the restoration phase, use the keyword
``\texttt{resto.acceptable\_tol}''.
\medskip
\noindent
The most common options are:
\input{options.tex}
\section{Options available via the AMPL Interface}
The following is a list of options that are available via AMPL:
\input{options_ampl.tex}
%\bibliographystyle{plain}
%\bibliography{/home/andreasw/tex/andreas}
\input{documentation.bbl}
\end{document}
\chapter{Performance Evaluation}\label{ch:perf_eval}
\graphicspath{{Chapter6-PerformanceEvaluation/Figs/Vector/}{Chapter6-PerformanceEvaluation/Figs/}}
This chapter provides a comprehensive performance evaluation of the key concepts introduced in this thesis. The evaluation focuses on five issues:
\begin{enumerate}[label=\roman*]
\item The scalability of the \textsf{AmaaS} model over P2P.
\item A performance comparison between \textsf{ABcast} and TOA when utilised in the context of \textsf{AmaaS}.
\item The performance of \textsf{ABcast} when requests are frequent over a long period of time.
\item The probability $R$ of ordering correctness by \textsf{ABcast}, more specifically \textsf{Aramis}.
\item The effectiveness of \textsf{ABcast}'s non-blocking message delivery in the interim period between a node crashing and the GM protocol publishing a new view.
\end{enumerate}
The remainder of this chapter is structured as follows: First we detail an experiment that emulates Infinispan's distributed transactions, and is used to evaluate (i) and (ii) ($\S$ \ref{sec:emulated_transactions}). This is followed by an experiment that replicates the inner workings of the \textsf{SCast} protocol, in order to simulate an \textsf{AmaaS} service operating at maximum capacity, which is used to evaluate (iii) and (iv) ($\S$ \ref{sec:infinite_clients_eval}). Finally, we introduce an experiment that evaluates (iv) and (v) by crashing a node while \emph{abcast}s are sent between nodes ($\S$ \ref{sec:infini_crashed_node}).
% Probing validation experiments
%\section{DMC Validation}
\section{AmaaS}\label{sec:emulated_transactions}
To test our hypothesis that the \textsf{AmaaS} model can improve the scalability of Infinispan's distributed transactions, we developed an experiment that emulates the workflow of these transactions by replicating the \emph{amcast} messages sent by Infinispan when executing total order transactions ($\S$ \ref{sec:to_commit}). This experiment does not utilise Infinispan, or implement a basic transaction manager, rather it focuses purely on replicating the underlying communication stages required by Infinispan transactions.
Existing research \citep{Ruivo:2011:ETO:2120967.2121604} has already shown the benefits of utilising a total order protocol instead of 2PC, therefore our experiments concentrate on the performance of the underlying \emph{amcast} protocol used to coordinate these transactions.
In our experiments, if a new \emph{amcast} protocol can demonstrably increase throughput and reduce latency of \emph{amcast} messages as the number of destinations increase, then we can infer that the scalability of the Infinispan system will be improved by adopting this protocol. Therefore, if our experiments show that the \textsf{AmaaS} model consistently outperforms P2P, then we assume our hypothesis to be true.
In order to compare and contrast the performance of the \textsf{AmaaS} and P2P approach, it was necessary for two experiments to be created.
The first experiment was designed to evaluate the latency and throughput of both a \textsf{SCast} and \textsf{PSCast} service. This experiment allows the performance of the \textsf{AmaaS} model to be evaluated, whilst also enabling the performance of the underlying \emph{abcast} protocols, which are utilised by the services for state machine replication, to be contrasted. Both the \textsf{SCast} and \textsf{PSCast} services utilise a simplified version of the \textsf{SCast} Protocol ($\S$ \ref{sec:scast_protocol}) to coordinate interactions between $c$-nodes and $s$-nodes.
The second experiment was designed to measure the performance of \emph{amcast} requests when utilising the P2P approach. This experiment utilises the same workloads and parameters as the first experiment, however, as per the P2P model, no $s$-nodes are present and consequently there is no need for the \textsf{SCast} protocol. Instead, the TOA protocol is executed directly between $c$-nodes when emulating transactions.
Utilising the same experiment structure and workloads across both sets of experiments allows us to compare the performance of the two system models across a consistent environment. This consistency enables us to contrast the performance of the TOA protocol, when utilised in both the P2P model and the \textsf{SCast} service, with the \textsf{ABcast} protocol utilised by the \textsf{PSCast} service.
\subsection{Experimentation}\label{ssec:emulated_transaction_experiments}
\subsubsection*{\textsf{SCast} and \textsf{PSCast} Services}
We implemented our \textsf{AmaaS} services using the JGroups \citep{JGroups} framework with $n=2$ and $n=3$ $s$-nodes. All nodes in the experiment utilised commodity PCs with a \emph{3.4GHz Intel Core i7-3770} CPU and 8GB of RAM, running \emph{Fedora 20} and communicating over Gigabit Ethernet. The $s$-nodes and $c$-nodes utilised in our experiments are part of a large university cluster; hence, communication delays between nodes can be quite volatile, as they are influenced by other network traffic and processes launched by other users.
Our experiments are based upon a heavily modified version of an existing performance test available in the JGroups\citep{JGroups} framework, which mimics the partial replication of key/values in Infinispan\citep{Infinispan}. In these experiments we utilise ten $c$-nodes in the same cluster, each of which emulates a transaction system which is reliant on an \textsf{AmaaS} service for transaction ordering. Each $c$-node operates 25 concurrent threads to initiate and coordinate transactions, and a transaction $Tx$ involves a set $|Tx.dst| = 3,4,\ldots,10$ $c$-nodes; where $|Tx.dst|$ includes $Tx.c$. A thread coordinating a transaction starts its next transaction, $Tx'$, as soon as it executes a commit/abort decision for the currently active $Tx$. Thus, at any moment, $250$ transactions are in different stages of execution.
All of the emulated transactions consist purely of key/value write operations and thus require \emph{amcast} messages for coordination. Infinispan's read requests ($get(k)$) are not emulated, as the retrieval of key/values occurs before $Tx.c$ \emph{amcast}s its $prepare(k)$ message; hence, read operations have no bearing on the performance of the underlying \emph{amcast} protocol.
Both the \textsf{SCast} and \textsf{PSCast} services utilise a modified version of the \textsf{SCast} protocol defined in section \ref{sec:scast_protocol} to dictate the interactions between $c$-nodes and $s$-nodes. In our implementation $s$-nodes utilise message bundling to reduce the total number of \emph{abcast} messages required.
\textbf{\emph{Omission:}} Stage 1 of the \textsf{SCast} protocol has been omitted from this implementation because we only compare the performance of the two approaches in a crash-free scenario. Our rationale for removing this stage is that the fault-tolerance provisions described in \textsf{SCast} are only one possible means of ensuring that the \emph{amcast} protocol can continue to execute if the original coordinator crashes during a multicast; alternative solutions are not obliged to utilise this additional communication stage. Furthermore, in our experiments we compare \textsf{SCast} to the \textsf{TOA} protocol, which does not implement any mechanism to cope with a crashed message originator; removing Stage 1 of the \textsf{SCast} protocol therefore makes for a fairer comparison of the two protocols.
\textbf{Experiment Workflow:} The workflow of a transaction in our experiments is as follows:
\begin{enumerate}
\item A coordinator thread submits its \emph{amcast} request for $Tx$, denoted as $req(Tx)$, to some $s$-node, which stores the request in FIFO order within its ARP.
\item The $s$-node's \emph{Send} thread retrieves requests stored in its ARP and places them into a message bundle $mb$, which can have a maximum payload of $1kB$ \footnote{In the experiments that utilise the \textsf{ABcast} protocol, we \emph{pad} the contents of the message bundle to ensure that it is always equal to $1kB$. This ensures that all messages \emph{abcast} by the protocol are approximately the same size, which increases the accuracy of the DMC's predictions at the expense of redundant bandwidth.}, then \emph{abcast}s $mb$ to all other $s$-nodes.
\textbf{Note:} If no requests exist in the ARP, then the \emph{Send} thread waits for it to become non-empty before initiating the next $mb'$. Hence, the number of requests bundled in any $mb$ varies depending on the arrival rate of requests.
\item Once $req(Tx)$ has been \emph{abcast} to all $s$-nodes, a response message, $Rsp(Tx)$, is sent to $Tx.c$, which disseminates this message to $Tx.dst$ as $mcast(Tx)$.
\item When all $d \in Tx.dst$ have received and delivered $mcast(Tx)$, as per the delivery conditions of the \textsf{SCast} protocol, the transaction is considered complete and the coordinator thread can start executing $Tx'$.
\end{enumerate}
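The bundling in step 2 can be sketched as a simple FIFO drain. The queue structure, the fixed request size, and the function name below are illustrative assumptions for this sketch, not the JGroups implementation.

```python
from collections import deque

MAX_BUNDLE_BYTES = 1024  # maximum bundle payload of 1kB, as in the experiments

def next_bundle(arp, request_size):
    """Drain requests from the FIFO ARP into one message bundle,
    stopping once the 1kB payload limit would be exceeded.
    Assumes a fixed serialised size per request for simplicity."""
    bundle = []
    used = 0
    while arp and used + request_size <= MAX_BUNDLE_BYTES:
        bundle.append(arp.popleft())
        used += request_size
    return bundle

# With 256-byte requests, at most four requests fit in a single bundle;
# the remaining requests stay queued for the next bundle.
arp = deque(f"req{i}" for i in range(10))
mb = next_bundle(arp, 256)
```

As the note in step 2 observes, the number of requests per bundle varies with the arrival rate: a sparsely filled ARP simply yields a smaller bundle.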
In our experiments that utilise \textsf{ABcast} (\textsf{PSCast} service), an additional phase is required before the experiments can begin. Prior to accepting requests from $c$-nodes, $s$-nodes must participate in an initialisation period that lasts approximately 1-2 seconds. During this period, the clocks of the $s$-nodes are synchronised and each $s$-node broadcasts $10^3$ \emph{probe} messages, with a payload of $1kB$, to all other $s$-nodes. The purpose of these probe messages is to record the $NT_P$ latencies required by \textsf{ABcast}'s DMC.
Finally, all of our experiments with \textsf{ABcast} (\textsf{PSCast} service) utilise the following constant values. The DMC utilises $R=0.9999$ ($\S$ \ref{ssec:dmc}), and AFC utilises $\delta_{min}$ and $\delta_{max}$ values equal to $1ms$ and $10ms$, respectively ($\S$ \ref{sec:afc_protocol}).
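To make the role of $R$ concrete, the sketch below treats the DMC's output as the $R$-quantile of the recorded probe latencies. The quantile rule and the function name are assumptions made for illustration; they are not the DMC's actual computation.

```python
def delta_from_latencies(latencies, R=0.9999):
    """Return a Delta estimate: the smallest recorded latency such that
    a fraction R of all observations fall at or below it."""
    s = sorted(latencies)
    idx = min(len(s) - 1, int(R * len(s)))  # clamp to the largest sample
    return s[idx]

# 1000 synthetic probe latencies between 1.0ms and ~3.0ms: with
# R = 0.9999 the estimate sits at the very tail of the sample.
probes = [1.0 + 0.002 * i for i in range(1000)]
delta = delta_from_latencies(probes, R=0.9999)
```

A larger $R$ pushes the estimate further into the tail, trading delivery latency for a lower probability of ordering violations.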
\subsubsection*{P2P}
In order to test the performance of P2P total order transactions, we repeated the experiments detailed above; however, as per the P2P model, all $c$-nodes coordinate transactions between themselves without utilising any $s$-nodes. In these experiments, a transaction is considered complete when it has been successfully \emph{amcast} to all $d \in Tx.dst$ by the P2P protocol; where success is defined as all correct destinations delivering the \emph{amcast} message.
\textbf{Note:} The same cluster of machines were used for both the P2P and \textsf{AmaaS} experiments to ensure a fair comparison between protocols.
\subsection{Results}\label{sec:AmaaS_results}
Our performance evaluation focuses on the comparison of the TOA protocol, utilised in a traditional P2P scenario (\emph{TOA-P2P}), with two different \textsf{AmaaS} services that utilise the \textsf{SCast} protocol. The \textsf{SCast} service utilises the deterministic protocol TOA for state machine replication, whilst the \textsf{PSCast} service utilises the probabilistic protocol \textsf{ABcast}; hence we refer to these two services as the \emph{TOA-Service} and the \emph{ABService}.
The performance of all three approaches is measured based upon the average transaction latency and throughput rate. In both the \emph{TOA-Service} and the \emph{ABService}, latency is measured as the time elapsed between a $c$-node's initial transmission of $req(Tx)$ to some $s$-node and \textbf{all} members of $Tx.dst$ delivering $mcast(Tx)$ to the experiment application. In TOA-P2P, latency is measured as the time taken for all $Tx.dst$ to deliver $Tx$ to the experiment application. In all cases, throughput is measured as the average number of \emph{abcast}s delivered by the experiment application per second at each $c$-node.
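Stated operationally, a transaction's latency is bounded by its slowest destination. The helpers below capture the two metrics; the timestamp representation and names are assumptions for this sketch.

```python
def transaction_latency(req_sent_at, delivered_at):
    """Elapsed time from the coordinator sending req(Tx) until *all*
    members of Tx.dst have delivered mcast(Tx); `delivered_at` maps
    each destination to its delivery timestamp."""
    return max(delivered_at.values()) - req_sent_at

def throughput(total_delivered, duration_s):
    """Average number of deliveries per second over an experiment."""
    return total_delivered / duration_s

# The slowest destination (d2 at t=11.9) determines the latency.
lat = transaction_latency(10.0, {"d1": 11.2, "d2": 11.9, "d3": 11.5})
```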
\textbf{Note:} All of our experiments were conducted in isolation, in order to prevent any side effects caused by simultaneously executing multiple experiments on the same cluster; however, we conducted all experiments over approximately the same time period to ensure that the network was under similar loads throughout.
Figures \ref{fig:LatencyGraph} and \ref{fig:ThroughputGraph} show the latency and throughput results for our experiments, with $2N$ and $3N$ representing an \textsf{AmaaS} service that consists of two and three $s$-nodes, respectively. Each plot on the graph is an average of three \emph{crash-free} trials, where a trial consists of each $c$-node completing $10^4$ transactions for a specific value of $|Tx.dst|$. Thus, in all three trials the \emph{TOA-Service} and \emph{ABService} each receive a total of $10^5$ \emph{amcast} requests. In TOA-P2P, each $c$-node initiates $10^4$ TOA executions between its peers ($10$ $c$-nodes $\times 10^4 = 10^5$).
Concerning \textsf{AmaaS} performance, Table \ref{table:emulated_transaction_averages} shows the average number of client requests received, the average number of \emph{abcast} messages sent and the average number of requests bundled into each \emph{abcast}, based upon all of our experiments that utilised an \textsf{AmaaS} service. All of these average values are calculated from the statistics recorded by each $s$-node utilised during our experiments.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
Experiment & $\#$ Client Requests & $\#$ \emph{abcast}s & Bundle Size \\ \hline \hline
ABService-2N & 50000 & 12763.4 & 4 \\ \hline
TOA-Service-2N & 50000 & 16632 & 3 \\ \hline
ABService-3N & 33333.3 & 13416.4 & 2.5 \\ \hline
TOA-Service-3N & 33333.3 & 13507.8 & 2.5 \\ \hline
\end{tabular}
\caption{Average Node Statistics for Emulated Transaction Experiments}
\label{table:emulated_transaction_averages}
\end{center}
\end{table}
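As a sanity check, the bundle-size column of Table \ref{table:emulated_transaction_averages} is (approximately) the ratio of the first two columns, and the gap between the two $2N$ rows is $\approx 3869$ \emph{abcast}s per node:

```python
rows = {
    # experiment: (avg client requests, avg abcasts, reported bundle size)
    "ABService-2N":   (50000.0, 12763.4, 4.0),
    "TOA-Service-2N": (50000.0, 16632.0, 3.0),
    "ABService-3N":   (33333.3, 13416.4, 2.5),
    "TOA-Service-3N": (33333.3, 13507.8, 2.5),
}

# Implied bundle size: requests received divided by abcasts sent.
implied = {k: reqs / abcasts for k, (reqs, abcasts, _) in rows.items()}

# Extra abcasts sent per node by TOA-Service-2N relative to ABService-2N.
extra = rows["TOA-Service-2N"][1] - rows["ABService-2N"][1]
```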
Table \ref{table:emulated_transcation_aramis_deliveries} shows the performance of the \textsf{ABcast} protocol in both the ABService-2N and 3N experiments. It shows the average number of \emph{abcast}s sent per node and the average number of these messages that were delivered by the Aramis protocol, as well as providing the total percentage of \emph{abcast}s that were delivered via Aramis. Furthermore, this table details the ratio of $s$-nodes that delivered an \emph{abcast} via Aramis compared to the total number of $s$-nodes utilised. For example, in the case of ABService-2N we performed $24$ experiments, therefore we have statistics for $48$ $s$-nodes and our records show that only $3$ of these nodes utilised Aramis to deliver one or more \emph{abcast} messages.
The very small number of \textsf{Aramis} deliveries is understandable, as the $\Delta_m$ of \textsf{Aramis} is estimated pessimistically and no crashes occur. In fact, it is surprising that some \emph{abcast}s were delivered by \textsf{Aramis} faster than \textsf{Base} at all; this aspect is discussed further in subsection \ref{ssec:emulated_eval}.
\begin{table}[h]
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
Experiment & $\#$ \emph{abcast}s & Nodes Affected & Avg Aramis Deliveries & $\%$ Aramis Deliveries \\ \hline \hline
ABService-2N & 12763.4 & 3:48 & 10.8 & $0.085\%$ \\ \hline
ABService-3N & 13416.4 & 30:72 & 15.3 & $0.114\%$ \\ \hline
\end{tabular}
\caption{Average ABcast Statistics per Node}
\label{table:emulated_transcation_aramis_deliveries}
\end{center}
\end{table}
\begin{figure}[h]
% \centering
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio, clip, trim={2cm 3.25cm 2cm 3cm}]{Latency2}
\caption{AmaaS Latency Comparison}
\label{fig:LatencyGraph}
\end{figure}
\begin{figure}[h]
% \centering
\includegraphics[width=\textwidth,height=\textheight,keepaspectratio, clip, trim={2cm 3.25cm 2cm 3cm}]{Throughput2}
\caption{AmaaS Throughput Comparison}
\label{fig:ThroughputGraph}
\end{figure}
\clearpage
\subsection{Evaluation}\label{ssec:emulated_eval}
This section is split into three distinct subsections. First we directly compare the performance of the \textsf{AmaaS} service and the P2P approach with both experiments utilising the same TOA protocol. We then evaluate the performance of the ABService in contrast with the previous two approaches, focusing on the differences between the performance of the ABcast and TOA based service. Finally, we evaluate the performance of \textsf{ABcast}, focusing on how often the \textsf{Aramis} protocol was utilised to deliver messages and its ability to maintain ordering correctness.
\subsubsection*{AmaaS vs P2P}
In Figure \ref{fig:LatencyGraph} we can see that when $|Tx.dst| \geq 4$, TOA-P2P's \emph{abcast} latencies increase considerably when compared to the two TOA-Service experiments, with TOA-P2P experiencing approximately a $25\%$ and a $50\%$ increase in average latency relative to TOA-Service-3N and TOA-Service-2N, respectively. This indicates that \emph{amcast}ing is best provided as a service as the number of clients involved in a transaction increases. Comparing throughput in Figure \ref{fig:ThroughputGraph} leads to similar conclusions, with the steady throughput observed as $|Tx.dst| \rightarrow 10$ also suggesting an absence of node saturation.
TOA-P2P's superior performance when $|Tx.dst| < 4$ can be attributed to the additional stages involved when utilising the \textsf{AmaaS} model. For example, when TOA-Service utilises two $s$-nodes ($2N$) the following stages are required: $Tx.c$ sends a request, the \emph{multicast service} \emph{abcast}s it with $|m.dst| = 2$ to all $s$-nodes and returns it to $Tx.c$, who must then multicast $mcast(Tx)$ to $Tx.dst$. Ignoring the individual message cost of each stage, the total number of stages is four, whereas in TOA-P2P the only step required is the \emph{amcast}ing of $Tx$. So although $|m.dst|$ for each \emph{amcast} is smaller in TOA-Service ($|m.dst| = 2$) than in TOA-P2P ($|m.dst| = 3$), the overhead of sending a request to the \emph{multicast service} and back is much greater than the savings offered by reducing $|m.dst|$ by one node. However, as $|Tx.dst|$ increases, the overhead of TOA-P2P's increased $|m.dst|$ becomes significant, to the point where TOA-Service's additional communication stages become less of an overhead than the cost of TOA-P2P \emph{amcast}ing to a large $m.dst$.
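This trade-off can be made concrete with a toy unicast-count model. We assume a two-phase TOA broadcast costs $d(d-1)$ unicasts among $d$ nodes and count one unicast each for the request and response stages; both assumptions are simplifications, but under them the crossover falls exactly at $|Tx.dst| = 4$.

```python
def toa_unicasts(d):
    """Two-phase broadcast among d nodes: d*(d-1) unicasts in total."""
    return d * (d - 1)

def service_unicasts(n, d):
    """Toy per-transaction cost via the service: one request unicast, a
    TOA abcast among the n s-nodes, one response unicast, and the final
    multicast to the d destinations."""
    return 1 + toa_unicasts(n) + 1 + d

def p2p_unicasts(d):
    """Toy per-transaction cost via TOA-P2P: one amcast among d nodes."""
    return toa_unicasts(d)

# With n = 2 s-nodes: P2P wins at d = 3, the service wins from d = 4 on.
```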
\subsubsection*{ABService vs TOA-Service}
In Figure \ref{fig:LatencyGraph} we can see that the latencies encountered by the ABService-2N and TOA-Service-2N experiments are very similar, regardless of the number of clients involved in a transaction, with the maximum difference between any two plots being no greater than $0.3$ milliseconds. Interestingly, our results show that in the majority of cases the ABService outperforms the TOA-Service. This superior performance can be attributed to a combination of two factors: the number of requests that are bundled on average per \emph{abcast}, and the overall message cost associated with the underlying \emph{abcast} protocol.
The average number of client requests bundled into a single \emph{abcast} can play a decisive role in the latency and throughput of an \textsf{AmaaS} service, as the higher the average bundle rate, the lower the total number of \emph{abcast}s required. As the \emph{abcast}ing of requests between $s$-nodes is the most expensive operation, in terms of bandwidth and latency, in the \textsf{AmaaS} model, reducing its frequency reduces the average latency encountered by client requests and therefore the total duration of a transaction.
Table \ref{table:emulated_transaction_averages} shows that the average bundle rate for ABService-2N was $4$ messages, whilst it was only $3$ for TOA-Service-2N. Therefore, on average a node in TOA-Service-2N sends $\approx 3869$ more \emph{abcast}s than its counterpart in ABService-2N, which partially explains the difference in performance between the two approaches.
The difference in overall message cost between the two \emph{abcast} protocols is a consequence of the two different approaches to solving \emph{abcast} and of the optimisations present in the \textsf{ABcast} protocol ($\S$ \ref{ssec:atomic_broadcast} $\&$ \ref{ssec:base_ack_piggyback}). The \textsf{ABcast} protocol piggybacks any outstanding message acknowledgements on subsequent message broadcasts, enabling \emph{abcast}s to be executed in a single phase when all nodes are frequently sending \emph{abcast}s. In contrast, the JGroups implementation of the TOA protocol does not implement any such optimisations; thus, each broadcast always consists of two phases, increasing the average latency encountered by transaction requests.
Correspondingly, it is possible to observe that the average and maximum differences between the latencies encountered in the ABService-3N and TOA-Service-3N experiments are greater than those observed when $N = 2$. The improving performance of the ABService can be attributed to the \textsf{ABcast} optimisations becoming more effective as the number of $s$-nodes increases. For example, if $N = 3$ and \textsf{ABcast} sends a broadcast, the total message cost for that single \emph{abcast} is only $2$ unicasts when piggybacking occurs, whereas with TOA the total cost is always $6$ unicasts. Clearly, such an optimisation will have a positive effect on the performance of the ABService implementation, especially when service requests are evenly distributed amongst $s$-nodes and arrive frequently, which is ideal for the \textsf{ABcast} optimisations.
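The $2$-versus-$6$ unicast figures quoted for $N = 3$ follow from counting fan-out per phase. The acknowledgement pattern assumed below is a simplification, chosen to be consistent with those totals rather than taken from either implementation:

```python
def two_phase_cost(n):
    """Unoptimised two-phase broadcast: the sender fans out to n-1
    nodes, then each of the n-1 receivers acknowledges to the other
    n-1 nodes, giving (n-1) + (n-1)**2 = n*(n-1) unicasts."""
    return (n - 1) + (n - 1) ** 2

def piggybacked_cost(n):
    """With acknowledgements piggybacked on later broadcasts, a single
    abcast costs only its initial fan-out of n-1 unicasts."""
    return n - 1
```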
Interestingly, in Table \ref{table:emulated_transaction_averages} we can see that the average bundle rate of ABService-3N and TOA-Service-3N are almost the same, yet the difference between the observed latencies in the two approaches has increased. This suggests that in these experiments the average bundle rate has no significant impact on the performance of the two approaches.
The large difference between the average bundle rates observed in ABService-2N and 3N is a direct consequence of the DMC's calculations and of how AFC ($\S$ \ref{sec:afc_protocol}) manages broadcast rates. Recall that the delay imposed by AFC, for an \emph{abcast} message, increases when latencies start to exceed the previously calculated $x_{mx}$ value, and decreases to $\delta_{min}$ when no such latencies are observed. When $2$ $s$-nodes are utilised, the observed $x_{mx}$ is typically lower than with $3$ $s$-nodes, as the number of unicasts sent between $s$-nodes is smaller; hence the probability of large delays being observed is reduced. The smaller the average $x_{mx}$ value, the more susceptible the system is to delays periodically exceeding $x_{mx}$. Therefore, when $2$ $s$-nodes are utilised, the probability of the calculated AFC delay regularly exceeding $\delta_{min}$ increases, which in turn reduces the node's broadcast rate. Consequently, the number of requests which can accumulate between \emph{abcast}s will increase, and hence the average bundle rate also increases. When $3$ $s$-nodes are utilised, the DMC's observations are typically more stable, resulting in fewer \emph{outlier} latencies being recorded and a more stable broadcast rate; hence an average bundle rate that is approximately the same as that of TOA-Service-3N.
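The AFC dynamics described here can be caricatured as a controller that grows the delay while latencies exceed $x_{mx}$ and resets it to $\delta_{min}$ otherwise. The doubling rule below is an illustrative assumption, not the actual AFC update:

```python
DELTA_MIN, DELTA_MAX = 1.0, 10.0  # ms, the values used in these experiments

def next_delay(delay, latency, x_mx):
    """Grow the broadcast delay (capped at delta_max) whenever an
    observed latency exceeds x_mx; otherwise fall back to delta_min."""
    if latency > x_mx:
        return min(DELTA_MAX, delay * 2.0)
    return DELTA_MIN

# A burst of latencies above x_mx = 10.0 ratchets the delay up,
# throttling the broadcast rate; a quiet observation resets it.
d = DELTA_MIN
history = []
for lat in [5.0, 12.0, 15.0, 4.0]:
    d = next_delay(d, lat, 10.0)
    history.append(d)
```

Under this caricature, a lower $x_{mx}$ makes the "grow" branch fire more often, which throttles sends and lets more requests accumulate per bundle, matching the behaviour described above for the $2N$ case.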
The throughput of the ABService and TOA-Service, for both $2N$ and $3N$, follows a very similar pattern to that observed when analysing their latencies. This is not surprising, as the average transaction latency has a direct impact on the average rate of throughput. Combining the results shown in Figures \ref{fig:LatencyGraph} and \ref{fig:ThroughputGraph}, it is clear that the ABService provides comparable performance to that of the TOA-Service, and that both of these \textsf{AmaaS} solutions consistently outperform TOA-P2P when $|Tx.dst| > 3$.
\textbf{Note:} While the overall performance of ABService and TOA-Service is similar, the \textsf{ABcast} protocol used by ABService provides non-blocking message delivery in the event of node failures, as well as stronger guarantees on message ordering than TOA. Recall that TOA does not provide \emph{uniform agreement} in the event of a message originator crashing ($\S$ \ref{ssec:TOA_limations}); therefore it is not unreasonable to imagine that the performance gap between the two protocols would increase, in favour of ABService, if the TOA protocol were adapted to provide \emph{uniform agreement}.
\subsubsection*{ABcast}
In Table \ref{table:emulated_transcation_aramis_deliveries} we can see that only $3$ of the $48$ nodes utilised by ABService-2N delivered an \emph{abcast} via the \textsf{Aramis} protocol, with the average number of messages being $\approx 11$; only $0.085\%$ of all messages. Hence, the $\Delta_m$ value calculated by the DMC was sufficient for $99.915\%$ of \emph{abcast}s. The results of the ABService-3N experiments show that as the number of $s$-nodes increased, the total number of \textsf{Aramis} deliveries also increased. Almost $50\%$ of nodes delivered at least one message via \textsf{Aramis}, with an overall average of $\approx 16$ messages per node. Although this is a large increase in the number of nodes requiring \textsf{Aramis}, the protocol still only accounts for $0.114\%$ of all \emph{abcast}s sent.
The increase in \textsf{Aramis} deliveries as the number of $s$-nodes increases can be attributed to the DMC recording each latency anonymously (without regard for the source of the message) and calculating $\Delta_m$ based upon these latencies. In the experiments where $n=2$, we know that all of the latencies recorded by node $Ns_1$ will be from messages originating at $Ns_2$. Therefore, when node $Ns_1$ broadcasts message $m$, it is guaranteed that $\Delta_m$ has been calculated using latencies representative of $Ns_2$'s past performance. Whereas, when $n=3$, $Ns_1$ will have calculated $\Delta_m$ based upon latencies recorded from both $Ns_2$ and $Ns_3$; therefore, if $Ns_3$ is slower than $Ns_2$, the latencies recorded from $Ns_2$ can dilute the larger latencies recorded from messages originating at $Ns_3$. Thus, the calculated $\Delta_m$ could be smaller than the value required by the slower node $Ns_3$.
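The dilution effect can be shown with a pooled-quantile sketch on synthetic latencies (the quantile rule is the same simplification used in the earlier DMC sketch): mixing a fast source into the sample pulls the estimate below what the slow source alone would require.

```python
def quantile(xs, q):
    """Smallest sample value with at least a fraction q of xs at or below it."""
    s = sorted(xs)
    return s[min(len(s) - 1, int(q * len(s)))]

fast = [5.0 + 0.1 * i for i in range(100)]   # latencies from a fast Ns2-like source
slow = [20.0 + 0.1 * i for i in range(100)]  # latencies from a slow Ns3-like source

pooled = quantile(fast + slow, 0.95)  # Delta estimated over both sources
needed = quantile(slow, 0.95)         # what the slow source alone requires
```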
\textbf{Note:} None of the experiments that delivered a message via \textsf{Aramis} suffered an \textsf{ABcast} ordering violation, and hence no \textsf{SCast} ordering violations occurred at any client nodes. Furthermore, we repeated our experiments with delivery condition $D1_B$ of the \textsf{Base} protocol disabled, which causes all \emph{abcast}s to be delivered via \textsf{Aramis}, in order to evaluate the accuracy of $\Delta_m$. We found that, for both $n=2$ and $n=3$, the calculated $\Delta_m$ was sufficient for all $s$-nodes to deliver messages without a single ordering violation occurring. As expected, latencies were so large that a single experiment (emulating $10^5$ transactions) took several minutes to complete. Obviously such large latencies are not practical, however these experiments provide evidence of $\Delta_m$'s ability to prevent ordering violations.
\subsection{Summary}
When deploying a large-scale distributed transaction system that executes transactions spanning several nodes ($|Tx.dst| > 3$), higher throughput and lower latency can be achieved by utilising the \textsf{AmaaS} model for \emph{amcast} messages. Furthermore, such a service can provide non-blocking \emph{amcast}s when the \textsf{ABcast} protocol is utilised for state machine replication, whilst maintaining similar levels of performance to when a GM based protocol is utilised.
\section{ABcast - Infinite Clients for Extreme Load Conditions}\label{sec:infinite_clients_eval}
In the previous section, we tested the performance of the \textsf{AmaaS} approach whilst utilising the \textsf{ABcast} protocol. Our results showed that the \textsf{Aramis} protocol was rarely required to deliver messages, accounting for only $0.085\%$ and $0.114\%$ of messages when the number of $s$-nodes was two and three, respectively. However, in these experiments the total number of \emph{abcast} messages was, on average, relatively low for each node; typically less than $2 \times 10^4$. Furthermore, each $s$-node's rate of \emph{abcast}s would vary depending on the restrictions of the AFC protocol and the rate at which requests were being received from $c$-nodes.
Due to the number of client nodes being relatively small, it is probable that at times an $s$-node's ARP could have been empty. Therefore, in order to test the performance of \textsf{ABcast} under extremely heavy loads, it was necessary to develop a new experiment. The purpose of these experiments is twofold. First, they allow us to measure how often \textsf{Aramis} is required to deliver messages and the frequency of order violations. Secondly, they allow us to monitor the values calculated by the DMC during high levels of network load and to determine their effect on the resulting $\Delta_m$.
In order to test the performance of \textsf{ABcast} under heavy loads, we could simply increase the number of client nodes used in our previous experiment; however, this would require a large amount of resources and would be cumbersome to orchestrate. Furthermore, such an approach does not guarantee that the ARP of a given $s$-node will always have a request to process.
We propose a new experiment, which we refer to as an \emph{infinite client system}, as it represents the performance of an \textsf{AmaaS} ordering service if each $s$-node always has a full ARP. This experiment does not utilise client nodes at all; instead, it simply consists of $n$ nodes initiating \emph{abcast}s \emph{as fast as} \textsf{AFC} permits. These are the same steps required by \textsf{SCast} for state machine replication, however we do not have the overhead of maintaining the data structures required by \textsf{SCast} at each node, \emph{i.e.} $order\_history[]$. Therefore the delay between subsequent \emph{abcast}s will be smaller in this experiment, hence the \textsf{ABcast} protocol will be under a heavier load than is possible in a \emph{complete} \textsf{SCast} implementation.
\subsection{Experimentation}\label{ssec: infinite_experimentation}
The infinite client experiment was implemented using the JGroups framework and the same \textsf{ABcast} implementations as the experiments detailed in $\S$ \ref{ssec:emulated_transaction_experiments}. Furthermore, our experiments utilised the same computer cluster and specification of machine as our previous experiments.
An individual \emph{infinite client} experiment consists of $3$ nodes sending $10^6$ \emph{abcast}s between themselves; with each individual node sending $\frac{10^6}{n}$ messages with a payload of $1kB$. The workflow of our experiments is as follows:
\begin{enumerate}
\item A node broadcasts its requests as fast as possible using a single thread, which represents the \emph{sender} thread utilised in \textsf{SCast}.
\item As soon as a message has been sent, another \textsf{ABcast} is initiated; where the sending of a message $m$ consists of $m$ being sent down the JGroups stack, processed and delayed by AFC, before being unicast to all $n$ nodes.
\item An experiment is considered complete when each node has delivered $10^6$ messages or, if one or more order violations ($\#violations$) have occurred, $(10^6 - \#violations)$ messages \footnote{As none of our experiments maintain a state at the application level, \emph{abcast}s that cause order violations are not \emph{delivered} to the application; instead, their occurrence is simply recorded.}.
\end{enumerate}
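The loop above is, in essence, a sender throttled only by AFC. A schematic version follows; the delay callback and the message accounting are placeholders, not the JGroups implementation:

```python
def run_sender(total, n, afc_delay):
    """Initiate `total` abcasts back to back: each message is delayed
    by the current AFC value and then unicast to all n nodes; the next
    abcast starts as soon as the previous one has been sent."""
    unicasts = 0
    for seq in range(total):
        _delay = afc_delay(seq)  # AFC throttling applied before each send
        unicasts += n            # one unicast per node in the group
    return unicasts

# Scaled-down run: in the real experiments each of the 3 nodes sends
# 10**6 / 3 messages, each with a 1kB payload.
sent = run_sender(1000, 3, lambda seq: 0.001)
```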
For all of our experiments the \textsf{ABcast} protocol used the following constant values: $R = 0.9999$, $\delta_{min} = 1ms$ and $\delta_{max} = 10ms$. Furthermore, we utilise the same initialisation period from the \textsf{AmaaS} experiments.
\subsection{Results}
The experiment detailed in $\S$ \ref{ssec: infinite_experimentation} was executed a total of ten times, utilising the same machines for each experiment. Table \ref{table:infinite_clients_rejections} presents the results of each of these experiments based upon each node's individual performance, as well as the performance of the cluster as a whole; where $Ns_1, Ns_2, Ns_3$ correspond to the values recorded by an individual node and we define the cluster as the combined performance of $\{Ns_1,Ns_2,Ns_3\}$. For each node in an experiment, we show the total number of \emph{abcast}s that were delivered by \textsf{Aramis} and, in brackets, the number of order violations. We also show the total number of \emph{abcast}s delivered by \textsf{Aramis} across the cluster, and the percentage of all \emph{abcast}s that were delivered by \textsf{Aramis}.
\begin{table}[p]
\begin{center}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Experiment & $Ns_1$ & $Ns_2$ & $Ns_3$ & Total & $\%$ of all \emph{abcast}s \\ \hline \hline
1 & 9220, (0) & 7929, (0) & 6434, (0) & 23538 & 2.36 \\ \hline
2 & 3348, (0) & 4555, (0) & 5008, (0) & 12911 & 1.29 \\ \hline
3 & 4496, (0) & 4920, (0) & 1952, (0) & 11368 & 1.14 \\ \hline
4 & 5832, (0) & 6439, (0) & 4801, (0) & 17072 & 1.71 \\ \hline
5 & 5320, (0) & 5757, (0) & 4066, (0) & 15143 & 1.51 \\ \hline
6 & 4181, (0) & 3286, (0) & 4157, (0) & 11624 & 1.16 \\ \hline
7 & 1743, (0) & 2237, (0) & 2235, (0) & 6215 & 0.62 \\ \hline
8 & 4188, (0) & 1846, (0) & 5421, (0) & 11455 & 1.15 \\ \hline
9 & 5621, (0) & 4242, (0) & 5291, (0) & 15154 & 1.52 \\ \hline
10 & 2953, (0) & 5014, (0) & 3192, (0) & 11159 & 1.12 \\ \hline \hline
Total & 46902, (0) & 46225, (0) & 42557, (0) & 135684 & 1.36 \\ \hline
\end{tabular}
\caption[Aramis deliveries for Infinite Clients - $\rho_{min}$ = 1]{Aramis deliveries (Order Violations) for infinite clients - $\rho_{min}$ = 1}
\label{table:infinite_clients_rejections}
\end{center}
\end{table}
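The percentage column of Table \ref{table:infinite_clients_rejections} follows directly from the totals, since each of the ten runs comprises $10^6$ \emph{abcast}s:

```python
per_node_totals = [46902, 46225, 42557]  # Aramis deliveries per node over all ten runs
total = sum(per_node_totals)
pct = 100.0 * total / 10**7  # ten experiments of 10**6 abcasts each
```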
Table \ref{table:infinite_clients_aramis_latencies} presents the average delivery latency encountered by all \emph{abcast}s sent via \textsf{ABcast} (including those delivered by \textsf{Aramis}), as well as the average $\Delta_m$ value calculated by each node. Each node records its delivery delay as $Dt_m - m.ts$, where $m.ts$ is the timestamp allocated to an \emph{abcast} message $m$ when the \emph{abcast} is initiated and $Dt_m$ is the time at which $m$ is passed up to the application. The average $\Delta_m$ value is recorded using a given node's own calculations of $\Delta$, not those recorded by others; hence $Ns_1$'s average is calculated using only $\Delta$ values calculated by $Ns_1$'s DMC. Therefore, the ``overall'' entry in the table provides the average $\Delta$ value of all nodes in the cluster.
\begin{table}[p]
\begin{center}
\renewcommand{\arraystretch}{1.3}
\begin{tabular}{|l|c|c|c|}
\hline
Node & Avg Delivery Latency (ms) & Avg $\Delta_m$ (ms) \\ \hline \hline
$Ns_1$ & $21.48$ & $710.34$ \\ \hline
$Ns_2$ & $23.47$ & $687.29$ \\ \hline
$Ns_3$ & $25.45$ & $767.74$ \\ \hline \hline
Overall & $23.47$ & $721.79$ \\ \hline
\end{tabular}
\caption{Average \textsf{ABcast} Latencies and Calculated $\Delta_m$ - $\rho_{min}$ = 1}
\label{table:infinite_clients_aramis_latencies}
\end{center}
\end{table}
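The overall row of Table \ref{table:infinite_clients_aramis_latencies} is simply the mean of the three per-node rows:

```python
latencies = [21.48, 23.47, 25.45]   # per-node average delivery latency (ms)
deltas = [710.34, 687.29, 767.74]   # per-node average Delta_m (ms)

overall_latency = sum(latencies) / len(latencies)
overall_delta = sum(deltas) / len(deltas)
```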
\subsection{Evaluation}
In Table \ref{table:infinite_clients_rejections} we can see that, out of all $10^7$ messages, only $1.36\%$ of \emph{abcast}s were delivered by \textsf{Aramis}. Furthermore, out of these $135684$ \textsf{Aramis} deliveries there was not a single order violation; therefore, \textsf{ABcast}'s guarantees were maintained even when the rate of requests was very high. This lack of order violations implies that the calculated $\Delta_m$ is sufficiently large to prevent messages being missed, whilst still being small enough for some \emph{abcast}s ($1.36\%$) to be delivered via \textsf{Aramis} before the \textsf{Base} protocol could complete. Thus, for $1.36\%$ of \emph{abcast}s the \textsf{ABcast} protocol reduces latency and prevents message blocking even in the absence of node failures, when compared to traditional GM based protocols. Finally, the lack of order violations indicates that the protocol is able to handle a large number of \emph{abcast} requests without compromising message ordering.
Correspondingly, Table \ref{table:infinite_clients_aramis_latencies} shows that the average delivery latency encountered by \emph{abcast} messages remains low even when the network is heavily loaded. Furthermore, it shows that the average $\Delta_m$ value remains below $800ms$ for each node.
\textsf{ABcast}'s ability to provide low-latency message delivery in such conditions is crucial, as the speed of the \emph{abcast} protocol utilised in an \textsf{AmaaS} service ultimately determines the response time for each client request. More significantly, the low average $\Delta_m$ value shows that, even under the heaviest of loads, the DMC is able to calculate an average $\Delta_m$ below $1$ second whilst all messages are still delivered without a single order violation. This is a \textbf{\emph{vital}} result, because if the $\Delta_m$ value became increasingly large as the load increased, $\Delta_m$ would start to exceed the typical delay required by the GM service to publish a new view after a node crash, therefore rendering our hybrid approach redundant.
\subsection{Summary}
The \textsf{ABcast} protocol is capable of providing low-latency \emph{abcast}s over a sustained period of time in conditions representative of those found in an \textsf{AmaaS} service. In such conditions, the DMC consistently calculates a $\Delta_m$ value that is small enough to outperform GM services, whilst being sufficiently large to ensure that no violations of \emph{abcast} guarantees occur when messages are delivered by \textsf{Aramis}.
\section{ABcast - Fault Tolerance}\label{sec:infini_crashed_node}
In our previous experiments with \textsf{ABcast} we evaluated the performance of the protocol in the context of an \textsf{AmaaS} service where no node failures occur. However, as \textsf{ABcast} has been designed to complement the low-latency performance of GM protocols, by allowing for non-blocking message delivery when node crashes occur, it is necessary to ensure that $\Delta_m$ is sufficiently small for messages to be delivered in the interim period between a node crash and the GM service publishing a new view.
Ultimately, if the GM service is able to publish a new view before any messages are delivered via the \textsf{Aramis} protocol, then the hybrid approach we have taken is unnecessary. In such a case, a traditional GM based protocol would be more suitable as order violations are not possible. Therefore in order to determine the effectiveness of \textsf{ABcast}'s hybrid approach, it was necessary to create an experiment that monitors the number of messages, if any, that are delivered by \textsf{ABcast} in the interim period between a node crashing and the GM service publishing a new view.
Such an experiment also enables us to explore the impact of utilising different values for \textsf{ABcast}'s configuration parameters, such as $\rho_{min}$ and $R$, on the number of messages delivered in this interim period. More specifically, these experiments allow us to explore the impact of these configuration parameters on the average $\Delta_m$ value calculated by a node, and how these variations affect the observed number of order violations.
\subsection{Experimentation}\label{ssec:crash_experiment}
In order to test the performance of \textsf{ABcast} when a node crashes, we reuse the experiments detailed in $\S$ \ref{ssec: infinite_experimentation}. However, in these experiments, instead of all $3$ nodes sending a total of $10^6$ \emph{abcast}s, only $2$ of the nodes complete their broadcasts; the third node, $Ns_3$, is crashed after sending $50000$ \emph{abcast}s.
\textbf{Note:} As JGroups is implemented in the Java programming language, we crash $Ns_3$, by crashing the underlying Java Virtual Machine (JVM), not the physical machine.
In order to understand this experiment, and why crashing the underlying JVM is necessary, it is important to recall the design of JGroups' GMS and associated \emph{failure detection} protocols presented in section \ref{ssec:jgroups_gms}. Recall that the \emph{failure detection} protocol \texttt{FD\_SOCK} is particularly effective at detecting node crashes, with crashes typically detected within seconds. Therefore, in order for \textsf{ABcast} to deliver messages before the GMS protocol becomes aware of a node crash, the calculated $\Delta_m$ would need to remain relatively small ($\lesssim 2$ seconds) throughout our experiments.
Due to \texttt{FD\_SOCK}'s use of Java shutdown hooks, it was not possible for the crashed node in our experiments to exit as a normal Java application, as this would result in the terminating node sending a leave message to all members in the view, alerting GMS almost instantly that the node was leaving the current view. Clearly such a message cannot be sent when a node crashes unintentionally. Therefore, it was necessary for us to terminate the JVM in the most disruptive manner possible, in order to replicate the untimely occurrence of a real node crash. We achieved this by using reflection to access the \emph{sun.misc.Unsafe} API and crash the JVM. The code used to crash the JVM is shown below:
\noindent
\begin{minipage}{\linewidth}
\hfill
\begin{lstlisting}
import java.lang.reflect.Field;
import sun.misc.Unsafe;
// dereference address 0 via Unsafe to segfault the JVM
Field theUnsafe = Unsafe.class.getDeclaredField("theUnsafe");
theUnsafe.setAccessible(true);
((Unsafe) theUnsafe.get(null)).getByte(0);
\end{lstlisting}
\hfill
\end{minipage}
As previously stated, we crash the node $Ns_3$ after it has initiated $50000$ \emph{abcast} requests. Therefore, we consider each experiment to be complete when both $Ns_1$ and $Ns_2$ have delivered $(666666 + 50000 - \#violations)$ \emph{abcast}s. For all of our experiments, we utilise the AFC values $\delta_{min} = 1$ms and $\delta_{max} = 10$ms. We execute our experiments utilising $\rho_{min} = 1,2,3$ and $R=0.9999$ in order to determine the effect of increasing $\Delta_m$ on the number of messages delivered before the GM service publishes a new view, and on the number of order violations. Similarly, we also execute our experiments utilising $\rho_{min} = 1$ and $R=0.99999$, to see the effect of increasing $R$ on $\Delta_m$ and the number of ordering violations.
Once again, we utilise the same initialisation period for \textsf{ABcast} as in our previous experiments.
\subsection{Results}
Tables \ref{table:crashed_node_rho1}, \ref{table:crashed_node_rho2} and \ref{table:crashed_node_rho3} show the performance of the \textsf{ABcast} protocol in the experiments described in \ref{ssec:crash_experiment}, when $\rho_{min}$ is equal to $1, 2$ and $3$, respectively; each table shows the results of ten experiments executed with the specified $\rho_{min}$ value. Each table shows the average $\Delta_m$ value calculated for messages originating at both $Ns_1$ and $Ns_2$, as well as the total number of \emph{abcast}s, $\#abcast$, delivered by \textsf{Aramis} in the interim period between node $Ns_3$ crashing and the GM publishing a new view\footnote{If a column contains $-$, it indicates that no \textsf{Aramis} deliveries occurred before GMS detected $Ns_3$'s crash.}. Furthermore, the value in brackets next to this total represents the number of order violations that occurred in the interim period. Finally, $\#abcast$ shows the throughput gain provided by utilising the probabilistic \textsf{Aramis} protocol, as these \emph{abcast}s would not have been delivered until GMS detected $Ns_3$'s crash if a GM-based protocol had been used for \emph{abcast}ing.
Table \ref{table:crashed_node_R.99999} shows the performance of \textsf{ABcast} in the same experiments, but with $\rho_{min} = 1$ and the constant $R = 0.99999$. The fields and columns presented in this table are equivalent to those described above.
\begin{table}[h]
\begin{center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{$\rho_{min}$} & \multirow{2}{*}{$R$} & \multirow{2}{2cm}{$\#$ Violation Free Runs} & \multicolumn{2}{|c|}{$\#$ Violations} \\ \cline{4-5}
& & & $Ns_1$ & $Ns_2$ \\ \hline \hline
3 & 0.9999 & $10/10$ & $-$ & $-$ \\ \hline
2 & 0.9999 & $9/10$ & $-$ & $\frac{1}{19485}$ \\ \hline
\multirow{2}{*}{1} & \multirow{2}{*}{0.9999} & \multirow{2}{*}{$8/10$} & $-$ & $\frac{1}{12483}$ \\
& & & $-$ & $\frac{1}{10544}$ \\ \hline
1 &0.99999 & $9/10$ & $-$ & $\frac{1}{17016}$ \\ \hline
\end{tabular}
\caption{Summary of $\rho_{min}$ and $R$ when node crashes occur}
\label{table:crashed_node_summary}
\end{center}
\end{table}
Table \ref{table:crashed_node_summary} provides a summary of all of these tables, with each set of experiments represented as a single row, uniquely identified by the combination of $R$ and $\rho_{min}$ values used in the experiments. For each set, we show the number of experiments that encountered no order violations over the total number of experiments; where violations did occur, we present the number of order violations over the number of successful \textsf{Aramis} deliveries that occurred before GMS detected $Ns_3$'s crash\footnote{In Table \ref{table:crashed_node_summary}, $-$ indicates that no \textsf{Aramis} order violations occurred.}.
\begin{table}[p]
\begin{center}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{Experiment} & \multicolumn{2}{|c|}{$Ns_1$} & \multicolumn{2}{|c|}{$Ns_2$} \\ \cline{2-5}
& $\Delta_m$&\textsf{Aramis} & $\Delta_m$&\textsf{Aramis} \\ \hline \hline
1 & 240 & 10544, (0) & 212 & 10544, (1) \\ \hline
2 & 553 & 6874, (0) & 527 & 6874, (0) \\ \hline
3 & 517 & 17452, (0) & 402 & 17452, (0) \\ \hline
4 & 334 & 18487, (0) & 274 & 18483, (0) \\ \hline
5 & 426 & 12483, (0) & 322 & 12483, (1) \\ \hline
6 & 717 & 4723, (0) & 429 & 4723, (0) \\ \hline
7 & 491 & 8936, (0) & 816 & 8936, (0) \\ \hline
8 & 510 & 393, (0) & 475 & 392, (0) \\ \hline
9 & 478 & 3798, (0) & 931 & 3798, (0) \\ \hline
10 & 234 & 17341, (0) & 290 & 17805, (0) \\ \hline \hline
$R_{ex}$ & \multicolumn{2}{|c|}{1} & \multicolumn{2}{|c|}{0.9999803} \\ \hline
\end{tabular}
\caption[\textsf{Aramis} deliveries before GMS detects node crash ($R=0.9999$, $\rho_{min}=1$)]{\textsf{Aramis} deliveries (Order Violations) before GMS detects $Ns_3$ has crashed \\ $R=0.9999$, $\rho_{min}=1$}
\label{table:crashed_node_rho1}
\end{center}
\end{table}
\begin{table}[p]
\begin{center}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{Experiment} & \multicolumn{2}{|c|}{$Ns_1$} & \multicolumn{2}{|c|}{$Ns_2$} \\ \cline{2-5}
& $\Delta_m$&\textsf{Aramis} & $\Delta_m$&\textsf{Aramis} \\ \hline \hline
1 & 664 & 5509, (0) & 580 & 5509, (0) \\ \hline
2 & 636 & 13697, (0) & 555 & 13697, (0) \\ \hline
3 & 1020 & 2688, (0) & 496 & 2688, (0) \\ \hline
4 & 320 & 19481, (0) & 279 & 19485, (1) \\ \hline
5 & 331 & 19012, (0) & 400 & 19106, (0) \\ \hline
6 & 456 & 2669, (0) & 466 & 2669, (0) \\ \hline
7 & 432 & 10823, (0) & 939 & 10823, (0) \\ \hline
8 & 271 & 18412, (0) & 272 & 18414, (0) \\ \hline
9 & 498 & 5440, (0) & 362 & 5449, (0) \\ \hline
10 & 716 & 3611, (0) & 376 & 3611, (0) \\ \hline \hline
$R_{ex}$ & \multicolumn{2}{|c|}{1} & \multicolumn{2}{|c|}{0.9999901} \\ \hline
\end{tabular}
\caption[\textsf{Aramis} deliveries before GMS detects node crash ($R=0.9999$, $\rho_{min}=2$)]{\textsf{Aramis} deliveries (Order Violations) before GMS detects $Ns_3$ has crashed \\ $R=0.9999$, $\rho_{min}=2$}
\label{table:crashed_node_rho2}
\end{center}
\end{table}
\begin{table}[p]
\begin{center}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{Experiment} & \multicolumn{2}{|c|}{$Ns_1$} & \multicolumn{2}{|c|}{$Ns_2$} \\ \cline{2-5}
& $\Delta_m$&\textsf{Aramis} & $\Delta_m$&\textsf{Aramis} \\ \hline \hline
1 & 452 & 17651, (0) & 451 & 21064, (0) \\ \hline
2 & 475 & $-$ & 679 & $-$ \\ \hline
3 & 754 & 3911, (0) & 515 & 3911, (0) \\ \hline
4 & 355 & 16516, (0) & 515 & 3911, (0) \\ \hline
5 & 214 & 17620, (0) & 503 & 17619, (0) \\ \hline
6 & 386 & 12968, (0) & 694 & 12968, (0) \\ \hline
7 & 453 & 7311, (0) & 345 & 7311, (0) \\ \hline
8 & 632 & 12613, (0) & 546 & 12613, (0) \\ \hline
9 & 356 & 18030, (0) & 569 & 18034, (0) \\ \hline
10 & 695 & 13907, (0) & 511 & 13907, (0) \\ \hline \hline
$R_{ex}$ & \multicolumn{2}{|c|}{1} & \multicolumn{2}{|c|}{1} \\ \hline
\end{tabular}
\caption[\textsf{Aramis} deliveries before GMS detects node crash ($R=0.9999$, $\rho_{min}=3$)]{\textsf{Aramis} deliveries (Order Violations) before GMS detects $Ns_3$ has crashed \\ $R=0.9999$, $\rho_{min}=3$}
\label{table:crashed_node_rho3}
\end{center}
\end{table}
\begin{table}[p]
\begin{center}
\renewcommand{\arraystretch}{1.25}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{Experiment} & \multicolumn{2}{|c|}{$Ns_1$} & \multicolumn{2}{|c|}{$Ns_2$} \\ \cline{2-5}
& $\Delta_m$&\textsf{Aramis} & $\Delta_m$&\textsf{Aramis} \\ \hline \hline
1 & 387 & 17982, (0) & 453 & 17016, (1) \\ \hline
2 & 7507 & 255, (0) & 3804 & 742, (0) \\ \hline
3 & 2019 & 8117, (0) & 1676 & 8117, (0) \\ \hline
4 & 3094 & $-$ & 1899 & $-$ \\ \hline
5 & 264 & 10876, (0) & 416 & 10880, (0) \\ \hline
6 & 683 & 9262, (0) & 605 & 9262, (0) \\ \hline
7 & 244 & 18224, (0) & 301 & 18222, (0) \\ \hline
8 & 1160 & 2207, (0) & 830 & 2207, (0) \\ \hline
9 & 334 & 19058, (0) & 278 & 19060, (0) \\ \hline
10 & 233 & 17588, (0) & 421 & 17586, (0) \\ \hline \hline
$R_{ex}$ & \multicolumn{2}{|c|}{1} & \multicolumn{2}{|c|}{0.9999903} \\ \hline
\end{tabular}
\caption[\textsf{Aramis} deliveries before GMS detects node crash ($R=0.99999$, $\rho_{min}=1$)]{\textsf{Aramis} deliveries (Order Violations) before GMS detects $Ns_3$ has crashed \\ $R=0.99999$, $\rho_{min}=1$}
\label{table:crashed_node_R.99999}
\end{center}
\end{table}
\subsection{Evaluation}
From Tables \ref{table:crashed_node_rho1}, \ref{table:crashed_node_rho2}, \ref{table:crashed_node_rho3} and \ref{table:crashed_node_R.99999} we can clearly see that the \textsf{ABcast} protocol allows a large number of \emph{abcast}s to be delivered in the interim period between a node crashing and the GMS protocol detecting it: an individual node delivers, on average, more than $10^4$ \emph{abcast}s, and in one case more than double that amount. Furthermore, out of $40$ experiments there were only two instances in which there was no benefit to using the \textsf{ABcast} protocol, and these occurred when the protocol utilised the more conservative values $R=0.99999$ or $\rho_{min}=3$.
In Table \ref{table:crashed_node_summary}, we can clearly see that increasing $\rho_{min}$ has a direct impact on the reliability of \textsf{Aramis}, as the number of order violations reaches zero when $\rho_{min}$ is at its largest. This can be explained by a larger $\rho_{min}$ increasing the calculated $\Delta_m$ value for each \emph{abcast} (as seen in Tables \ref{table:crashed_node_rho1}, \ref{table:crashed_node_rho2} and \ref{table:crashed_node_rho3})\footnote{The difference in calculated $\Delta_m$ values between $\rho_{min}=1,2,3$ is not significant in our results; however, this can be attributed to the varying state of the underlying network. Our experiments were conducted in sets based upon their constant values, e.g.\ all ten experiments that utilised $\rho_{min}=1$ and $R=0.9999$ were performed one after the other. As each experiment takes several minutes, the time required to conduct all of the experiments was significant, and as a consequence they were spread over several days; the load on the underlying network therefore varied between sets. However, we can still attribute the reduced number of order violations to an increase in $\Delta_m$, as this variable is calculated from latencies that represent the network's current state. Therefore, if a smaller $\rho_{min}$ value had been utilised under the exact same network conditions as the $\rho_{min}=3$ experiments, the calculated $\Delta_m$ value would have been significantly smaller.}.
Conversely, when we increase $R$ from $0.9999$ to $0.99999$, with $\rho_{min} = 1$, the number of violations is reduced from two to one, at the expense of a greatly enlarged $\Delta_m$ (compared to $\rho_{min}=1,2,3$ when $R=0.9999$).
From its initial conception, \textsf{ABcast} has been designed with pessimistic assumptions in order to minimise the chances of $\Delta_m$ being exceeded by a given \emph{abcast}. This pessimism is reflected in our experiments, with Tables \ref{table:crashed_node_rho1}, \ref{table:crashed_node_rho2}, \ref{table:crashed_node_rho3} and \ref{table:crashed_node_R.99999} all showing that the experienced reliability, denoted $R_{ex}$, is greater than the user-specified $R$. $R_{ex}$ for a given set of experiments is calculated as
\begin{equation}\label{eq:afc_mu_norm}
\begin{aligned}
1 - R_{ex} = \frac{\sum\limits_{1}^{10} \quad \text{ Number of Order Violations}}{\sum\limits_{1}^{10} \quad \text{ Messages Delivered by Aramis}}
\end{aligned}
\end{equation}
\noindent unless the number of messages delivered by \textsf{Aramis} is zero, in which case $R_{ex} = 1$.
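For clarity, the calculation of $R_{ex}$ can be expressed as a short Python sketch (the function name and the example counts below are illustrative only, not values from our result tables):
\begin{lstlisting}
def experienced_reliability(violations, deliveries):
    # violations: order-violation counts, one entry per experiment run
    # deliveries: Aramis delivery counts, one entry per experiment run
    total_delivered = sum(deliveries)
    if total_delivered == 0:
        return 1.0  # no Aramis deliveries before GMS detected the crash
    return 1.0 - sum(violations) / total_delivered

# e.g. one violation across 100000 Aramis deliveries gives R_ex = 0.99999
r_ex = experienced_reliability([0, 1], [50000, 50000])
\end{lstlisting}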
Finally, while our experiments show that a large number of \emph{abcast}s are delivered in the interim period between node failure and detection, we believe that in the event of a \textquoteleft{}real\textquoteright{} crash this value could be much higher. In our experiments we crash the JVM instantly, which results in the TCP sockets utilised by the \texttt{FD\_SOCK} protocol being closed immediately. This means that it is almost certainly the \texttt{FD\_SOCK} protocol that detects the failure of $Ns_3$ each time. If a crash was preceded by a slowing-down period where node responses became more staggered and the node was unresponsive, but still running and maintaining an open TCP socket, it is highly probable that the total number of \emph{abcast}s sent in the interim period would be much larger, as the alternative failure detection protocol \texttt{FD\_ALL} has a default timeout period of $40$ seconds.
\subsection{Summary}
We have found that utilising the \textsf{ABcast} protocol for \emph{abcast}s allows a significant number of messages ($> 10^4$) to be delivered in the interim period between a node crash and the GM protocol publishing a new view. Furthermore, we have found that increasing either $\rho_{min}$ or $R$ reduces the chances of order violations occurring in the presence of node crashes. Even when $\rho_{min} = 1$, $R_{ex}$ is much larger than the specified $R$, with the difference increasing as $\rho_{min}$ becomes larger. However, a larger $\rho_{min}$ can occasionally risk \textsf{Aramis} not being able to deliver any \emph{abcast}s before the GM publishes a new view, so it is recommended to keep $\rho_{min}=1$.
\section{Summary}
In this chapter, we have presented a thorough performance evaluation of both the \textsf{AmaaS} model and the \textsf{ABcast} protocol. We have shown that, as the number of nodes involved in a transaction increases, the \textsf{AmaaS} model, coupled with the \textsf{SCast} protocol, can improve the average latency and throughput of distributed transactions when compared to the existing P2P approach.
Additionally, we have shown that the \textsf{ABcast} protocol can provide comparable performance to existing deterministic protocols, such as TOA, when utilised within an \textsf{AmaaS} service. Crucially, this performance does not come at the expense of \textsf{ABcast}'s guarantees, as we have demonstrated that these guarantees can be met when handling large numbers of \emph{abcast}s. Furthermore, we have shown that with the correct configuration parameters, it is possible to avoid order violations when node crashes occur. Finally, we have shown that \textsf{ABcast}'s non-blocking message delivery enables a significant number of \emph{abcast} messages to be delivered in the interim period between a node crashing and a GM protocol detecting it.
|
|
For both experiments, 15$\%$ of the data was held out for the test set and another 15$\%$ for the validation set. The exact breakdowns of the sets can be seen in~\Cref{appendix:data-info}. Recall was used to measure performance (the $\%$ of true positives in the top 10 and top 1000 predictions). Grid searches were used to run models with all possible combinations of the chosen configurations. The models were also evaluated with early stopping (training stopped when the model exhibited maximum performance on the validation set) to prevent over-fitting.
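The recall measure used above (the fraction of true positives that appear in the top-$k$ predictions) can be sketched as follows; this helper is illustrative and is not the evaluation code used in the experiments:
\begin{minted}{python}
def recall_at_k(scores, labels, k):
    """Fraction of true positives whose score ranks in the top k."""
    # indices sorted by descending model score
    order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    top_k = set(order[:k])
    positives = [i for i, y in enumerate(labels) if y == 1]
    return sum(1 for i in positives if i in top_k) / len(positives)

# two of the three positives rank in the top 3, so recall@3 = 2/3
scores = [0.9, 0.1, 0.8, 0.7, 0.2]
labels = [1, 1, 0, 1, 0]
\end{minted}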
\subsection{Experimental Setup: RankFromSets}
\acrshort{rfs} was run using the RMSProp optimizer~\parencite{tieleman2012lecture} with a momentum of 0.9, and a grid search was done over learning rates of $\{10^{-2}, 10^{-3}, 10^{-4}, 10^{-5}\}$, embedding sizes of $\{10, 25, 50, 100, 500, 1000\}$, and a decision of whether or not to pre-initialize the model with BERT embeddings. The basic training loop is shown here:
\begin{minted}{python}
for step, batch in enumerate(cycle(train_loader)):
# turn to training mode and calculate loss for backpropagation
    torch.set_grad_enabled(True)
model.train()
optimizer.zero_grad()
(publications, articles, word_attributes,
attribute_offsets, real_labels) = batch
publication_set = [args.target_publication] * len(real_labels)
publication_set = torch.tensor(publication_set, dtype=torch.long)
publication_set = publication_set.to(device)
articles = articles.to(device)
word_attributes = word_attributes.to(device)
attribute_offsets = attribute_offsets.to(device)
logits = model(publication_set, articles, word_attributes,
attribute_offsets)
    L = loss(logits, real_labels)
L.backward()
optimizer.step()
running_loss += L.item()
\end{minted}
\subsection{Experimental Setup: BERT}
As \acrshort{bert} already comes ``pre-trained'', I fine-tuned it on the collected data with the AdamW optimizer and a linear learning-rate scheduler with warm-up steps, following the best practices outlined by~\textcite{devlin2019bert:} and~\textcite{wolf2019huggingfaces}. The model used a batch size of 32, and articles were truncated to a maximum length of 512 tokens. A grid search was performed over learning rates of $\{2, 3, 4, 5\} \times 10^{-5}$, warm-up steps of $\{10^2, 10^3, 10^4\}$, and total training steps of $\{10^2, 10^3, 10^4, 10^5\} \times 5$. The basic training loop is shown here:
\begin{minted}{python}
for step, batch in enumerate(cycle(train_loader)):
# turn to training mode and calculate loss for backpropagation
    torch.set_grad_enabled(True)
optimizer.zero_grad()
word_attributes, attention_masks, word_subset_counts, real_labels = batch
word_attributes = word_attributes.to(device)
attention_masks = attention_masks.to(device)
logits = model(word_attributes, attention_masks)[0]
logits = torch.squeeze(logits)
    L = loss(logits, real_labels)
L.backward()
if args.clip_grad:
nn.utils.clip_grad_norm_(model.parameters(), 1.0)
optimizer.step()
scheduler.step()
    running_loss += L.item()
\end{minted}
\subsection{Quantitative Evaluation}
The best performing models from both approaches were chosen and evaluated via recall on the test set.
\input{fig/recall}
Additionally, the time taken to train the models is shown in~\Cref{fig:training-recall}.
\input{fig/training-recall}
|
|
\documentclass[11pt]{article}
\input{../common/common-defs}
\usepackage{graphicx}
\title{Manticore Implementation Note \\ Runtime Conventions}
\author{The Manticore Group}
\date{Draft of \today}
\begin{document}
\maketitle
\section{Overview}
This document describes the runtime conventions used
by the Manticore compiler and runtime system.
\section{Object headers}
\section{Register conventions}
The Manticore runtime model dedicates several registers to
support the language:
\begin{description}
\item[Allocation pointer]
holds the base address of the next object to be
allocated (\ie{}, one word beyond the previously allocated
object).
\item[Limit pointer]
\item[Standard arg]
Used to pass arguments in the standard function and continuation
calling conventions.
\item[Standard environment pointer]
Used to pass the environment in the standard function and continuation
calling conventions.
\item[Standard return continuation]
Used to pass the return continuation in the standard function
calling conventions.
\item[Standard exception continuation]
Used to pass the exception continuation in the standard function
calling conventions.
\end{description}%
\subsection{AMD64}
\begin{center}
\begin{tabular}{cl}
\texttt{\%rax} & Standard argument \\
\texttt{\%rdi} & Standard environment pointer \\
\texttt{\%rsi} & Standard return continuation \\
\texttt{\%rbx} & Standard exception continuation \\
\texttt{\%rcx} & Allocation pointer \\
\texttt{\%r11} & Limit pointer
\end{tabular}%
\end{center}%
\section{Stack layout}
\begin{figure}[tp]
\begin{center}
\includegraphics[scale=0.5]{pictures/stack-heap-alloc}
\end{center}%
\caption{Stack layout while running Manticore code (using heap allocated
continuations)}
\label{fig:stack-heap-alloc}
\end{figure}%
\subsection{Sequential mode}
The compiler supports a sequential execution mode (available via the compiler flag
{\tt -Csequential=true}) to ease debugging.
This mode causes the compiler not to include hlops for scheduling code, thus disabling parallel execution
and preemption.
\subsection{Enumerations}
The compiler uses enumerations for branching over data structures with variant types.
We encode an enumeration value $e$ as the odd number $2e+1$ in order to distinguish it from pointer
values.
This encoding takes place at code generation ({\tt code-gen-fn.sml}) and applies to literals
and the cases of switch statements.
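The encoding and its inverse can be sketched as follows (an illustrative Python fragment for exposition only; the actual encoding lives in {\tt code-gen-fn.sml}):
\begin{verbatim}
def encode_enum(e):
    # 2*e + 1 is always odd, so it can never be confused
    # with a word-aligned (even) pointer value
    return 2 * e + 1

def is_enum(word):
    return word & 1 == 1

def decode_enum(word):
    assert is_enum(word)
    return word >> 1
\end{verbatim}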
\subsection{Calling C}
\subsubsection{Attributes}
Attributes inform the Manticore compiler about special properties of external C functions.
\begin{figure}
\begin{center}
\begin{tabular}{cl}
{\tt alloc} & this function performs allocations in the VProc heap (should be {\tt noalloc})
\end{tabular}
\end{center}
\caption{Attributes for external C functions}
\label{fig:c-attributes}
\end{figure}
\end{document}
|
|
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% This is a modified ONE COLUMN version of
% the following template:
%
% Deedy - One Page Two Column Resume
% LaTeX Template
% Version 1.1 (30/4/2014)
%
% Original author:
% Debarghya Das (http://debarghyadas.com)
%
% Original repository:
% https://github.com/deedydas/Deedy-Resume
%
% IMPORTANT: THIS TEMPLATE NEEDS TO BE COMPILED WITH XeLaTeX
%
% This template uses several fonts not included with Windows/Linux by
% default. If you get compilation errors saying a font is missing, find the line
% on which the font is used and either change it to a font included with your
% operating system or comment the line out to use the default font.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% TODO:
% 1. Integrate biber/bibtex for article citation under publications.
% 2. Figure out a smoother way for the document to flow onto the next page.
% 3. Add styling information for a "Projects/Hacks" section.
% 4. Add location/address information
% 5. Merge OpenFont and MacFonts as a single sty with options.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% CHANGELOG:
% v1.1:
% 1. Fixed several compilation bugs with \renewcommand
% 2. Got Open-source fonts (Windows/Linux support)
% 3. Added Last Updated
% 4. Move Title styling into .sty
% 5. Commented .sty file.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% Known Issues:
% 1. Overflows onto second page if any column's contents are more than the
% vertical limit
% 2. Hacky space on the first bullet point on the second column.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\documentclass[]{deedy-resume-openfont}
\definecolor{links}{HTML}{006400}
\usepackage{color}
%\hypersetup{colorlinks=true,urlcolor=links}
\hypersetup{colorlinks=true, urlcolor=links,pdfborderstyle={/S/U/W 1},pdfborder=0 0 1}
\setmainfont{Lato Light}
%\hypersetup{%
% colorlinks=true,% hyperlinks will be coloured
% linkcolor=green,% hyperlink text will be green
% linkbordercolor=red,% hyperlink border will be red
%}
%\makeatletter
%\Hy@AtBeginDocument{%
% \def\@pdfborder{0 0 1}% Overrides border definition set with colorlinks=true
% \def\@pdfborderstyle{/S/U/W 1}% Overrides border style set with colorlinks=true
% Hyperlink border style will be underline of width 1pt
%}
%\makeatother
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% LAST UPDATED DATE
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
% TITLE NAME
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\hspace{-5.5mm}
\begin{minipage}[t]{.6\textwidth}
{\Huge Fenimore {\textbf{Love}}}\\
\href{mailto:exorable.ludos@gmail.com}{exorable.ludos@gmail.com} \textbullet{} 571-314-7727 \textbullet{} \href{https://timenotclocks.com}{timenotclocks.com}\\
\end{minipage}
\hfill
\begin{minipage}[t]{.3\textwidth}
Portfolio: \href{https://fenimore.github.io}{fenimore.github.io}\\
Github: \href{https://github.com/fenimore}{@fenimore}
\end{minipage}
\namesection{}{} % for some reason...
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Experience
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Experience}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\runsubsection{Dashlane}
\descript{| data engineer }
\location{May 2020 – present \textbullet{} New York, NY}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\begin{tightemize}
\item Part of a new data engineering team
\end{tightemize}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\runsubsection{Deloitte}
\descript{| data engineer }
\location{September 2018 – May 2020 \textbullet{} New York, NY}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\begin{tightemize}
\item Designed and built reporting pipeline to provide insights for internal operations:
\begin{itemize}
\item Created an ETL pipeline processing hundreds of GB a day with Spark
\item Created a Flask service backed by Postgres as the data sink for BI
\item Set up an HBase cluster for storing hundreds of thousands of keys
\end{itemize}
\item Built a model-building and scoring pipeline, predicting conversion events for our clients:
\begin{itemize}
\item Moved storage from HDFS to S3 and designed new deploy processes for complex ETL pipelines
\item Designed and implemented a framework for running ETL pipelines with on-demand EMR clusters
\item Set up modeling and scoring pipeline to process 3 Terabytes a day
\item Reviewed data science pull requests and ensured engineering standards were met
\end{itemize}
\item Analyzed and explored large datasets with Spark and SparkSQL
\item Managed and configured data streams, data dependencies, and data stores using Kafka, Luigi, HDFS
\item Helped a team of data scientists bring their experiments to production
\item Trained and onboarded new hires for our new and growing remote team
\end{tightemize}
\sectionsep
\runsubsection{Magnetic}
\descript{| software engineer }
\location{May 2017 – September 2018 \textbullet{} New York, NY}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\textit{Deloitte Digital acquired the Magnetic engineering team (including me) and IP}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\begin{tightemize}
\item Designed and built a reporting ETL pipeline with Spark, Kafka, Hadoop, and Luigi
\item Wrote Spark jobs for ETL pipelines consuming 3 to 4 terabytes of data a day
\item Manipulated, transformed, and visualized data using SparkSQL, Pandas, Impala
\item Rapidly built and maintained microservices in Python using Flask and PostgreSQL
\end{tightemize}
\sectionsep
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Languages
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Languages}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\subsection{Python, Go, SQL \textbullet{} Java, Rust, HTML/CSS/JS, PHP, Ruby}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
Spark, Luigi, Kafka, Hadoop \textbullet{} Linux, Docker \textbullet{} PostgreSQL, HBase, Impala \textbullet{} Flask, Bootstrap, Vue.js
\sectionsep
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Education
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Education}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\runsubsection{Recurse Center}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\descript{| retreat participant }
\location{September 2016 – December 2016 \textbullet{} New York, NY}
The Recurse Center is a self-directed, community-driven educational retreat for programmers in New York City.\newline
My principal projects were a Chess AI and a BitTorrent client, both written in Go.
\sectionsep
\runsubsection{McGill University}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\descript{| Master in Religious Studies}
\location{September 2013 - July 2015 \textbullet{} Montréal, QC}
The Work of Text in the Age of Digital Reproduction:
\textit{A comparison of Ancient Literary Practice and the Copy Left Movement}
\sectionsep % remove for more space
\runsubsection{McGill University}
\vspace{\topsep} % Hacky fix for awkward extra vertical space
\descript{| Bachelor in Religious Studies}
\location{September 2009 - July 2013 \textbullet{} Montréal, QC}
Focus on exegesis of classical texts (Latin and Greek) and the philosophy of religion.
\sectionsep
\end{document}
|
|
\documentclass{article}
\usepackage[utf8]{inputenc}
\usepackage{amsmath, amssymb}
% \usepackage{proof}
\usepackage{syntax}
\usepackage{listings}
\usepackage{xcolor}
\usepackage{hyperref}
\title{Pixy Semantics}
\author{Reed Mullanix, Finn Hackett}
\date{February 2018}
\begin{document}
\maketitle
\section{Introduction}
The semantics of Pixy can be divided into three portions: the term language, how that term language is evaluated, and the type system.
\section{Term Language}
The term language of Pixy is (roughly) as follows:
\begin{grammar}
<expr> ::= <literal>
\alt <var>
\alt "nil"
\alt "?" <expr>
\alt "if" <expr> "then" <expr> "else" <expr>
\alt <expr> "fby" <expr>
\alt <expr> "where" (<var> "=" <expr>)*
\alt "fun" <var> "=>" <expr>
\alt <expr> <expr>
\end{grammar}
{\color{red} NOTE:} This is incomplete! We need to standardize on the term language.
\section{Evaluation}
The evaluation rules for Pixy are quite different from those of other languages.
To begin with, each expression can be seen as taking a State and producing a value and a new State. This State is then fed back into the expression to produce a new State and value, and so on. However, some expressions pose problems. For example, when evaluating an \lstinline{if ... then ... else} expression, we should only really evaluate one of the branches, but doing so may skip important stateful evaluation inside the untaken branch. To reconcile this, we present a model of evaluation which we call ``Choked Evaluation''. Whenever we are presented with a branching construct, we still evaluate both branches, with the caveat that all variables \textit{and} literals evaluate to \lstinline{nil} on the branch that is not taken.
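As an illustrative sketch (the representation below is ours, not part of any Pixy implementation), choked evaluation of a stateless conditional fragment can be modelled as follows: both branches are always traversed, but literals and variables on the untaken branch produce \lstinline{nil}:
\begin{lstlisting}[language=Python]
NIL = None  # stand-in for Pixy's nil

def eval_expr(expr, env, choked=False):
    # When choked, literals and variables yield nil, but
    # sub-expressions are still traversed, so no evaluation
    # inside the untaken branch is skipped.
    tag = expr[0]
    if tag == "lit":
        return NIL if choked else expr[1]
    if tag == "var":
        return NIL if choked else env[expr[1]]
    if tag == "if":
        _, cond, then_e, else_e = expr
        taken = bool(eval_expr(cond, env, choked))
        t = eval_expr(then_e, env, choked or not taken)
        f = eval_expr(else_e, env, choked or taken)
        return t if taken else f
    raise ValueError("unknown expression: " + tag)
\end{lstlisting}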
Another point we need to make is that evaluation is only valid on \textbf{closed} expressions, or expressions that have no free variables.
{\color{red} NOTE:} Insert full evaluation semantics here.
{\color{red} NOTE:} We need to spec out when exactly evaluation terminates for a given step.
\section{Type Theory}
Typically, type systems follow this general form:
\begin{itemize}
\item The user declares the construction and elimination rules for a type.
\item The user then uses these construction rules to create programs.
\end{itemize}
We prefer to take a different approach, which has been strongly influenced
by systems such as NuPRL. Generally speaking, our type system works as follows:
\begin{itemize}
\item The user writes a program.
\item The user then creates a proof that the program inhabits some type.
\end{itemize}
That of course raises the question: when does a program inhabit a type? To answer that, we must first answer what exactly a type is in Pixy. We define a type as having two components:
\begin{enumerate}
\item A collection of canonical inhabitants.
\item An equivalence relation over those inhabitants.
\end{enumerate}
For example, the canonical inhabitants of the type \lstinline{Nat} are $0,1,2,3,\ldots$ and the equivalence relation is just the usual equality of natural numbers. When we say that $a \in A$, what we are really saying is that $a = a$ under the equality relation imposed by $A$. This point may seem slightly pedantic, but it has large implications. It extends to distinct elements, so we can also propose that $a = b \in A$, i.e., that two terms $a$ and $b$ are equivalent under the equality relation of $A$. Note that the canonical inhabitants aren't the only members of a type: any term that evaluates to a canonical inhabitant is also a member of the type. On top of that, if we have two terms $t, t'$ that evaluate to $a, a'$ respectively, and $a = a' \in A$, then $t = t' \in A$ as well!
Continuing in the spirit of NuPRL, what exactly is $a \in A$? Well, by the logic of Propositions-as-Types, $a = b \in A$ should really just be a type! We shall denote this type as $Eq\ a\ b\ A$; membership $a \in A$ is then the special case $Eq\ a\ a\ A$. We shall also include all of the standard portions of Martin-Löf Type Theory.
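Spelling out one natural reading (not fixed by the text above): membership and equality are both instances of $Eq$, and the relation it internalizes should behave as a partial equivalence relation:
\begin{align*}
&a \in A \;\text{ abbreviates }\; Eq\ a\ a\ A \\
&a = b \in A \;\text{ abbreviates }\; Eq\ a\ b\ A \\
&\text{symmetry: from } Eq\ a\ b\ A \text{ conclude } Eq\ b\ a\ A \\
&\text{transitivity: from } Eq\ a\ b\ A \text{ and } Eq\ b\ c\ A \text{ conclude } Eq\ a\ c\ A
\end{align*}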
Time is represented using the type $Next : \star \rightarrow \star$. This type corresponds to the $\circ$ operator in Linear Temporal Logic. With this primitive type, we can define the operators of Linear Temporal Logic using inductive and coinductive types.
\begin{align*}
&\circ A \text{ corresponds to } Next(A) \\
&\square A \text{ corresponds to } \nu \sigma . A \times Next(\sigma) \\
&\lozenge A \text{ corresponds to } \mu \sigma . A + Next(\sigma) \\
&A \triangleright B \text{ corresponds to } \mu \sigma . A \times Next(B + \sigma)
\end{align*}
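For intuition, unfolding the coinductive encoding of $\square$ one step at a time shows why it captures ``always'':
\[
\square A \;=\; \nu \sigma . A \times Next(\sigma)
\;\simeq\; A \times Next(\square A)
\;\simeq\; A \times Next(A \times Next(\square A)) \;\simeq\; \cdots
\]
An inhabitant supplies an $A$ now and, one step later, again a proof of $\square A$; that is, a stream of $A$s, which is exactly the shape needed for stream-synchrony proofs.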
There is an alternative encoding using temporally indexed types, which makes quantification over time easier. We did not choose it because the inductive/coinductive definitions make inductive and coinductive reasoning easier, which in turn eases the kinds of proofs we wish to do (for example, proving that two streams are in sync).
% {\color{red} NOTE:} This section is incomplete, as we have multiple ways of preceding. I have listed out the possible options.
% \begin{enumerate}
% \item Use a temporally indexed dependent type. This allows us to encode
% certain properties such as "$\forall$ Times t, ..." and "$\exists$ Time t, ..." easily.
% \item Use a co-inductive stream type. This would allow us to more easily
% prove relationships between 2 streams.
% \end{enumerate}
\section{Relating Programs to Types}
Note again that we do not derive the types of programs from the bottom-up, as is the norm. Rather, we prove that programs inhabit types from the top-down, using a proof refinement system.
To begin, a proof is a tree of \textbf{Judgments}, each consisting of a number of \textbf{hypotheses} of the form $x:A$ followed by a \textbf{Goal} of the form $term:T$. To proceed with the proof, we use refinement rules, which are ways of decomposing a goal into sub-goals. For example, say we had some term \lstinline{fun x => x}, and we wanted to prove that this term is a member of $Bool \rightarrow Bool$. An example proof would be as follows:
\begin{verbatim}
H >> (fun x => x) in Bool -> Bool by intro-function.
x:Bool, H >> x in Bool by hypothesis x.
H >> Bool in U by bool-intro-universe.
\end{verbatim}
Note that we use three rules here: \lstinline{intro-function}, \lstinline{hypothesis}, and \lstinline{bool-intro-universe}. These correspond to the standard type inference rules, but there is a catch: we cannot infer the types. This is because a term can inhabit many potential types. For example, we could also prove that \lstinline{fun x => x} inhabits the type $\Pi_{A:U}.A \rightarrow A$:
\begin{verbatim}
H >> (fun x => x) in (A:U) -> A -> A by intro-function-pi
A:U, x:A, H >> x in A by hypothesis x.
\end{verbatim}
{\color{red} NOTE:} The above rule needs some thinking about. As such, I have decided to not include it in the rule section yet.
{\color{red} NOTE:} Write some examples that show how to use the rules to prove nil-safety.
\section{Rules}
\subsection{Bool}
\begin{verbatim}
H >> true in Bool by intro-true.
H >> false in Bool by intro-false.
H >> Bool in U1 by bool-intro-universe.
\end{verbatim}
\subsection{Nil}
\begin{verbatim}
H >> nil in Nil by intro-nil.
-- TODO: Write the choking type rules.
-- Needs clarification
\end{verbatim}
\subsection{Functions}
\begin{verbatim}
H >> (fun x => b) in (x:A) -> B by intro-function.
x:A, H >> b in B.
H >> A in Ui.
H >> B(x) in Ui. -- We may have to be careful about universe levels?
H >> (x:A) -> B in Ui by function-intro-universe.
H >> A in Ui.
H >> B in Ui.
\end{verbatim}
\subsection{Universes}
\begin{verbatim}
H >> Ui in Uj by universe-cumulative.
-- Note, i < j.
\end{verbatim}
\section{Bibliography}
{\color{red} NOTE:} This section needs to be properly formatted; right now it is just a dumping ground for things that have influenced this work.\\
\url{http://www.nuprl.org/book/} \\
\url{http://www.nuprl.org/documents/Constable/naive.pdf} \\
\url{http://ect.bell-labs.com/who/ajeffrey/papers/plpv12.pdf} \\
\end{document}
\documentclass[t,aspectratio=169]{beamer}
\usetheme{Frankfurt}
\usecolortheme{orchid}
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\usepackage{lmodern}
\usepackage{amsmath}
\usepackage{graphicx}
\usepackage{textcomp}
\usepackage{hyperref}
\usepackage{soul}
\usepackage{color}
\usepackage{tabularx}
\usepackage{colortbl}
\newif\ifcomplete
%\completetrue % comment out for the short presentation (ICPF version)
\title{ICFP Programming Contest 2014 -- Supermassive Black Hom-set Post-mortem}
\author{P. Lepin}
\date{}
\begin{document}
\frame{\titlepage}
\ifcomplete
\section{Silly Stuff}
\begin{frame}
\frametitle{What's up with the name?}
\begin{itemize}
\item Time pressure -- wanted to submit ASAP, needed \textit{some} team name.
\item<2-> Last year I played as \textquotedblleft{By Wadler's Beard!}\textquotedblright{ }Should have kept it.
\end{itemize}
\end{frame}
\fi
\section{Overview}
\begin{frame}
\frametitle{The Plan}
Coding directly in GCC assembly possible, but painful. So...
\begin{enumerate}
\only<2>{\item Implement the VM.}
\only<3->{\item \st{Implement the VM.} -- there was a web-based implementation}
\only<2>{\item Write a parser for a stand-alone HLL or implement an eDSL.}
\only<3->{\item Write a parser for a stand-alone HLL \st{or implement an eDSL.}}
\only<2>{\item Implement static checks and/or optimizations.}
\only<3->{\item \st{Implement static checks and/or optimizations.}}
\only<2->{\item Implement codegen.}
\only<2>{\item ...?}
\only<3->{\item \st{...?}}
\only<2->{\item \textbf{(end goal)} Implement the Lambda-Man AI.\newline}
\item<4-> Implement symbolic labels on top of GHC assembly.
\item<4-> \textbf{(end goal)} Implement a Ghost AI (or several).
\end{enumerate}
\only<5>{...some of these decisions were extremely myopic.}
\end{frame}
\begin{frame}
\frametitle{Perceived Fun Factor}
\begin{center}
\begin{tabularx}{0.6\textwidth}{|X|c|c|}
\hline
& Tools & AI \\
\hline
Lambda-Man & \cellcolor{green!75}\textbf{FUN!} & \cellcolor{green!75}\textbf{FUN!} \\
\hline
Ghosts & \cellcolor{green!9}Less fun. & \cellcolor{green!9}Less fun. \\
\hline
\end{tabularx}
\end{center}
Lisp-machine CPU, compilers, fairly sophisticated AIs -- interesting. 8-bit CPUs and severely resource-constrained programs -- not so much.
\end{frame}
\begin{frame}
\frametitle{Effort}
\begin{center}
\begin{tabularx}{0.8\textwidth}{|X|c|p{0.3\textwidth}|}
\hline
& Tools & \multicolumn{1}{c|}{AI} \\
\hline
Lambda-Man & \cellcolor{green!27}\textbf{\char`\~9 hrs}, 205 sloc & \cellcolor{green!75}\textbf{\char`\~25 hrs}, 654 sloc, 3114 instructions compiled \\
\hline
Ghosts & \cellcolor{green!9}\textbf{\char`\~3 hrs}, 259 sloc & \cellcolor{green!9}\textbf{\char`\~3 hrs}, 91 instructions \\
\hline
\end{tabularx}
\end{center}
\end{frame}
\begin{frame}
\frametitle{Wild Guesstimate Of Impact}
\begin{center}
\begin{tabularx}{0.6\textwidth}{|X|c|c|}
\hline
& Tools & AI \\
\hline
Lambda-Man & \multicolumn{2}{c|}{\cellcolor{green!9}Some.} \\
\hline
Ghosts & \multicolumn{2}{c|}{\cellcolor{green!75}\textbf{HUGE!}} \\
\hline
\end{tabularx}
\end{center}
Judging by the results of home-brewed tournaments on reddit.
\end{frame}
\section{Gory Details}
\begin{frame}
\frametitle{HLL Compiler Targeting GCC}
\begin{itemize}
\item HLL is Lisp-like, mimicking Scheme and Clojure.
\only<2>{\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{scimitar}
\caption{Almost looks like the real thing.}
\end{figure}}
\only<3->{\item Almost purely functional, \texttt{set!} is accepted by the parser, but... \only<5->{is not supported.}}
\only<4>{\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{noset}
\caption{Not really supported.}
\end{figure}}
\only<5->{\item The only effectful part is \texttt{do}/\texttt{debug}.}
\only<6->{\item \textbf{Many} quirks:
\begin{itemize}
\item<7-> No general TCE, explicit (and unchecked) tail recursion optimization using \texttt{recur}.
\item<8-> Mildly insane function call convention to support \texttt{recur} -- incompatible with ABI as in spec.
\item<9-> \texttt{if}s must be in a tail position -- no support for \texttt{SEL}.
\item<10-> Built-ins such as \texttt{car} or \texttt{+} are special forms rather than first-class entities.
\item<11-> No macros.
\item<12-> Virtually no diagnostics or static checks -- a pain to debug.
\end{itemize}}
\only<13->{\item Implementing as eDSL could have given some type safety \textit{almost for free}.}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{HLL Compiler Targeting GCC -- Bugs}
\begin{itemize}
\item A couple of nasty bugs \char`\~30 hrs into the contest.
\only<2->{\begin{figure}[!h]
\centering
\includegraphics[width=0.64\textwidth]{typecheck}
\caption{The fact that it typechecks doesn't \textit{always} mean it's correct.
(But encoding the codegen in a monad \textit{could} have helped.)}
\end{figure}}
\only<3>{\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{recur}
\caption{\texttt{recur} only worked -- occasionally -- by accident.}
\end{figure}}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Lambda-Man AI}
\begin{itemize}
\item Lightning round AI retained as a fall-back -- main AI may not return a move in situations it considers hopeless:
\begin{itemize}
\item Manhattan distance to the closest pill as a value function.
\item Doesn't like being too close to ghosts or visiting recently seen locations (reduces the effect of local minima).
\end{itemize}
\only<2->{\item BFS: cuts off on dying, reaching anything edible or exceeding the depth limit.}
\only<3-5>{\begin{itemize}
\item<3-5> Being close to ghosts is penalized.
\item<4-5> If nothing edible found, simple heuristic value function kicks in.
\item<5> Likes running towards nearest power pill when ghosts are nearby.
\end{itemize}}
\only<6->{\item Data structures: binary search trees (logarithmic access map and IntSet), simple queue from \textit{PFDS}.}
\only<7>{\begin{figure}[!h]
\centering
\includegraphics[width=0.8\textwidth]{pfds}
\end{figure}}
\only<8->{\item Propagates ghosts (ignoring differences in speed) as long as they have no choice.}
\only<9>{\begin{figure}[!h]
\centering
\includegraphics[width=0.1\textwidth]{ghostprop-init}
\caption{Initial state.}
\end{figure}}\only<10>{\begin{figure}[!h]
\centering
\includegraphics[width=0.1\textwidth]{ghostprop-1}
\caption{After 1 step.}
\end{figure}}\only<11>{\begin{figure}[!h]
\centering
\includegraphics[width=0.1\textwidth]{ghostprop-2}
\caption{After 2 steps.}
\end{figure}}\only<12>{\begin{figure}[!h]
\centering
\includegraphics[width=0.1\textwidth]{ghostprop-3}
\caption{After 3 steps.}
\end{figure}}\only<13>{\begin{figure}[!h]
\centering
\includegraphics[width=0.1\textwidth]{ghostprop-4}
\caption{After 4 or more steps.}
\end{figure}}
\only<14->{\item Value of scared ghosts and fruits discounted by time to expiration.}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Lambda-Man AI -- Quirks}
\begin{itemize}
\item Termination of search on reaching anything edible leads to \textquotedblleft{interesting}\textquotedblright{ }behavior.
\item<2-> Scaredy Lambda-Man -- pessimistic estimates of ghost movement.
\item<3-> Very little global preprocessing -- computing tunnels as graph edges was tempting.
\item<4-> Unnecessary map preprocessing on each step.
\item<5-> AI state fragile -- failures screw up future behavior.\newline
\item<6-> \textbf{Not-a-quirk:} Ghost AIs known, but emulation impractical due to cycle limit.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Lambda-Man AI -- Panic}
\begin{itemize}
\item A few hours before the deadline...
\only<2>{\begin{figure}[!h]
\centering
\includegraphics[width=0.4\textwidth]{cycle-limit}
\caption{OMG!!! Cycle limit!}
\end{figure}}
\only<3->{\item Hard to estimate cycles spent sanely from inside the simulation.}
\only<4->{\item Regretted not having my own VM -- web implementation sluggish \textit{and} read-only.}
\only<5>{\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{goodluck}
\caption{Good luck editing \textit{this}.}
\end{figure}}
\only<6->{\item Regretted not having macros -- no easy inlining.}
\only<7->{\item Random BFS depth cutoffs based on the map size.}
\only<8->{\item Optimized by hand.
\begin{itemize}
\item<9-> Non-tail recursive list HOFs -- no artificial limitations on stack depth, \texttt{RTN}s are cheaper than reversing the accumulator.
\only<9>{\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{nontailrec}
\end{figure}}
\item<10-> Fused \texttt{map}s and \texttt{filter}s in a few places.
\item<11-> Eliminated some unneeded intermediate variables.
\end{itemize}}
\only<12->{\item Seems to have worked.}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Ghost AI}
\begin{itemize}
\item \textit{Very} limited resources -- even accounting for no need to make decisions on most runs.
\only<2->{\item Wanted something simple and reasonably robust.}
\only<3->{\item Tries to minimize $L_1$ distance to Lambda-Man (\textit{Blinky}/\textit{Chaser}-style).}
\only<4-5>{\begin{itemize}
\item<4-5> \textbf{Problem:} Ghosts tend to clump together.
\item<5> \textbf{Problem:} Can get stuck in dead ends easily.
\end{itemize}}
\only<6->{\item Ghost's index affects tie-breaks: different ghosts favour different directions at intersections.}
\only<7->{\item Simple counter: after a few actual decisions in a row resulting in a horizontal or vertical move, that axis is excluded.}
\only<8->{\item Pretty efficient on non-pathological maps: ghosts tend to surround the Lambda-Man.}
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Sources}
\begin{itemize}
\item The submission is available on GitHub; the link can be found on the ICFPC subreddit, along with many interesting submissions and reports from other teams:
\url{http://www.reddit.com/r/icfpcontest}
\end{itemize}
\end{frame}
\ifcomplete
\section{Lessons (Un)Learned}
\begin{frame}
\frametitle{How To Win The ICFP Programming Contest}
\framesubtitle{What I learned from past failures and how I used it this time -- or not}
\begin{itemize}
\only<1-2>{\item Try to decipher the hint.
\only<2>{\begin{figure}[!h]
\centering
\includegraphics[width=0.3\textwidth]{bkg}
\caption{The apparently black background image on the official site has slight brightness variations. This \textquotedblleft{steganographic message}\textquotedblright{ }turned out to be random noise. D'oh!}
\end{figure}}}
\only<3->{\item \st{Try to decipher the hint.}}
\only<4-7>{\item Do your research.
\begin{itemize}
\item<5-7> There are people smarter than I am on the Net.
\item<6-7> SECD -- little insight into the problem, the spec was enough.
\item<7> Classic ghost AIs -- could have misled me.
\end{itemize}}
\only<8->{\item \st{Do your research.}}
\only<9-11>{\item If you want to do well -- get on a strong team.
\begin{itemize}
\item<10-11> But can be stressful.
\item<11> And I'm in it for the sheer fun of it...
\end{itemize}}
\only<12->{\item \st{If you want to do well -- get on a strong team.}}
\only<13-14>{\item Don't forget to rest, eat and sleep. \only<14>{On the other hand...}}
\only<15->{\item \st{Don't forget to rest, eat and sleep.}}
\only<16-17>{\item Random tweaks don't work. Think instead.
\only<17>{\begin{figure}[!h]
\centering
\includegraphics[width=0.9\textwidth]{tweak}
\caption{Cutoff depth selection for BFS. Speaks for itself.}
\end{figure}}}
\only<18->{\item \st{Random tweaks don't work. Think instead.}}
\only<19-21>{\item Read the spec carefully.
\begin{itemize}
\item<20-21> Signed my submissions with SHA256 instead of SHA1.
\item<21> Only noticed minutes before the deadline.
\end{itemize}}
\only<22->{\item \st{Read the spec carefully.}}
\only<23-25>{\item Use C++.
\begin{itemize}
\item<24-25> It appears to dominate the field as far as the tools of choice for discriminating hackers are concerned.
\item<25> Used Haskell this year.
\end{itemize}}
\only<26->{\item \st{Use C++.}}
\only<27->{\item All in all, I have no idea how this works.}
\end{itemize}
\end{frame}
\section{Shout-outs}
\begin{frame}
\frametitle{Lots of teams from Eastern Europe and Russia for some reason?}
\begin{itemize}
\item<1-> In 2012 the organizers expressed puzzlement with a somewhat skewed geographical distribution of teams.
\item<2-> Dmitry Astapov's ICFPC reports (\url{http://users.livejournal.com/_adept_/tag/icfpc}) -- in Russian
\item<3-> Better reading than most fiction.
\item<4-> The 2006 one stands out in particular (and so did the contest itself).
\item<5-> Many folks who read those reports were instantly hooked and wanted to give it a shot -- myself included.
\item<6-> Now you know whom to blame.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Compilers Are Scary Stuff}
\begin{itemize}
\item<1-> Considered to be a dark art among the uninitiated.
\item<2-> Dr. Alex Aiken and his team are running an open Compilers class on Coursera -- \url{https://www.coursera.org/course/compilers}
\item<3-> It's a good one. Get through it and you'll never be scared of writing a compiler again.
\item<4-> \textbf{Correction:} writing a \textit{toy} compiler.
\item<5-> Thankfully, the GHC and GCC in this contest were not the Real Thing.
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{AI Is A Bit Scary Too}
\begin{itemize}
\item<1-> Dr. Dan Klein and others developed an Artificial Intelligence class on edX -- \url{https://www.edx.org/course/uc-berkeleyx/uc-berkeleyx-cs188-1x-artificial-579}
\item<2-> Curiously, it uses Pac-Man in examples and assignments a lot.
\begin{figure}[!h]
\centering
\includegraphics[width=0.3\textwidth]{cs188}
\end{figure}
\item<3-> I'm sure that's sheer coincidence and had \textit{nothing} to do with my performance here.
\end{itemize}
\end{frame}
\fi
\begin{frame}[c]
\begin{center}
\textbf{This was even more awesome than ICFP contests usually are.}
\textbf{Heartfelt thanks to the organizers.}
\textbf{And thank you for listening.}
\end{center}
\end{frame}
\ifcomplete
\begin{frame}[c]
\begin{center}
\textbf{And remember -- for the next year, Haskell is the programming tool of choice for discriminating hackers.}
\end{center}
\end{frame}
\fi
\end{document}
\chapter{The Lorentz metric}
\chapter{Subject-related factors in grammaticality judgments}\label{sec:4}
\epigraph{\itshape Speakers perversely disagree among themselves about what is grammatical in their language; some of the principal sources of suffering and dispute within generative linguistics have been over ways of coming to terms with such realities.\\[-2\baselineskip]}{\citep{Fillmore1979}}
\section{Introduction}\label{sec:4.1}
Despite their common genetic makeup, humans exhibit individual differences in virtually every aspect of behavior. It should not be surprising to find that linguistic intuitions are no exception. The central question I address in this chapter is the extent to which differences in linguistic intuitions are systematically attributable to differences either in properties of the organism or in its life experiences. In some cases, there are some features on which people differ that contribute rather transparently to their grammaticality judgments, and to linguistic behavior generally, whereas in other cases the connection is surprising and still poorly understood. Throughout the chapter a major theme is consistency, or the extent to which the same subject gives a sentence the same rating on different occasions, or different subjects give a sentence the same rating. In the former case, inconsistencies are liable to be the result of factors having nothing to do with subjects' linguistic representations, e.g., whether they are fresh or fatigued, uncooperative, attentive or distracted, etc. \citep{BradacEtAl1980}. In the latter case, interspeaker differences might be attributable to differences in deeper properties of the minds of the people in question, in their grammars or in some other module that affects grammaticality judgments. The implications of these various possibilities are taken up in \chapref{sec:6}.
% 98
I begin this chapter with three important studies that have looked quantitatively at individual differences in grammaticality judgments (\sectref{sec:4.2}). The amount of variation found there motivates a search for systematic factors that might account for some of it. In \sectref{sec:4.3}, I examine organismic factors in this regard. Two such factors have been studied extensively: field dependence, a concept from the personality literature (\sectref{sec:4.3.1}), and handedness, which seems to be an important indicator of linguistic structures in the brain (\sectref{sec:4.3.2}). Some other factors, such as age, sex, and general cognitive endowment, seem to be obvious candidates but have been given little or no attention in the literature, so I consider them only briefly in \sectref{sec:4.3.3}. \sectref{sec:4.4} turns to features of the person's experience. The most controversial and most discussed of these is linguistic training. Innumerable critics of the linguistic enterprise have made their case on the basis of linguists being their own speaker-consultants. I look at several studies that have tried to establish whether linguists are suitable sources of grammaticality judgment data (\sectref{sec:4.4.1}). A less-studied but very intriguing source of variation in judgment abilities might be the amount of literacy training and general schooling a person has received. Investigations with remote cultures are the major source of evidence on this topic (\sectref{sec:4.4.2}). I conclude the section with a discussion of a grab bag of miscellaneous experiential factors, such as the amount of exposure one has had to a language (for instance, as a near-native speaker versus a native speaker) and accumulated world knowledge (\sectref{sec:4.4.3}). \sectref{sec:4.5} concludes the chapter by summarizing the findings and using them to motivate the investigations of \chapref{sec:5}.
\section{Individual Differences: Three Representative Studies}\label{sec:4.2}
\epigraph{\itshape Note that, as usual, a given reader is not really expected to agree with a given writer's placement of asterisks.\\[-2\baselineskip]}{\citep{Neubauer1976}}
\noindent The term most often used for individual differences in language judgments is idiolectal variation, although \citet{Heringer1970} is on the mark when he says, ``This term is chosen for want of a better one and is not intended to imply that groups of people do not show the same patterns of variation in acceptability judgments, at least with individual sentence types. To call this dialect variation, however, seems not to be appropriate since there do not appear to be geographical or
%\originalpage{100} % Chapter Four
sociological correlates to this variation'' (p. 287). \citet{Carden1973} uses the term \textit{randomly distributed dialects} in order to emphasize his belief that these should have the same theoretical status as geographically and socially defined dialects. The first set of experiments I review concerns the single most widely studied instance of individual differences: the interpretation of quantifier-negative combinations, as exemplified in (\ref{ex:4:1}a), which might be paraphrased as (\ref{ex:4:1}b) or (\ref{ex:4:1}c):
\ea \label{ex:4:1}
\ea All the boys didn't leave.
\ex Not all the boys left.
\ex None of the boys left.
\z
\z
(Note that the spoken intonation pattern of (\ref{ex:4:1}a) likely would be very different for the two readings, although no one appears to have studied this issue systematically; see \sectref{sec:5.2.6}.) In an early study, \citet{Carden1970b} claims that speakers fall into three categories with regard to their interpretation of sentences like (\ref{ex:4:1}a): some can only get the meaning of (\ref{ex:4:1}b), some can only get the meaning of (\ref{ex:4:1}c), and some find the sentence ambiguous.\footnote{While these three categories represent the major dialects, Carden\ia{Carden, Guy} admits that he found many subdialects. He also reports anecdotally that some speakers who originally could only get the (\ref{ex:4:1}b) reading started accepting both readings after repeated exposure to sentences that forced the (\ref{ex:4:1}c) reading. A similar finding is reported by \citet{Neubauer1976} regarding individual differences in uses of the word \textit{pretend}: subjects moved toward a more liberal dialect when pushed.}
%\textsuperscript{1}
He, along with many other researchers of the day (e.g., \citealt{ElliotEtAl1969}), argued that there are important theoretical insights to be gained by examining the full range of dialects, rather than accounting for one and ignoring the others. Carden\ia{Carden, Guy} was particularly interested in finding implicational relations among dialect differences. In a follow-up study that attempted to elicit judgments on these sentences, \citet{Heringer1970} was faced with ``the problem of asking naive informants to judge the acceptability of ambiguous sentences on specific readings,'' a problem we have also encountered with regard to adjunct \textit{wh}-movement (see \sectref{sec:2.3.2}). When a sentence is uncontroversially good under one reading, one's initial impression is that it sounds fine. This undoubtedly biases ratings of other readings. Therefore, Heringer\ia{Heringer, {James T.}} constructed a situational context in which only one of the readings was possible, either in the form of a scenario of which the target sentence formed the conclusion, or a prose description of the kind of situation where the sentence might occur. These two types of context are illustrated in \REF{ex:4:2} and \REF{ex:4:3}, respectively:
\ea \label{ex:4:2}
All the students didn't pass the test, did they? [Professor Unrat believes he finally has succeeded in making up a midterm which every single one of
his students would fail miserably. However, he doesn't know the test results yet, since his poor overworked teaching assistant Stanley has just this moment finished grading them. Unrat asks Stanley this question in order to confirm his belief.]
\z
\ea\label{ex:4:3}
All the treasure seekers didn't find the chest of gold. [Used in the situation where none of them found it.] (p. 294)
\z
\noindent
Heringer's instructions stated that acceptability should only be considered in the context of the material in square brackets. Unfortunately, acceptability was not defined for the subjects (a complaint made by \citet{Carden1970a} as well) and they did not receive any training on practice sentences.
At any rate, several interesting results are found in this study. One is the ability of context to prompt subjects to see potential acceptability where there otherwise is none, a result that I discuss in \sectref{sec:5.3.1}. Another interesting finding is that, while there were very few speakers who accepted only the (\ref{ex:4:1}c) reading, there were many more who accepted neither reading. In Carden's\ia{Carden, Guy} study this pattern does not show up at all. In general, the results of the two studies differ quite substantially, leading Heringer\ia{Heringer, {James T.}} to speculate on why this should be so. First, the mode of presentation was different in the two studies. Carden\ia{Carden, Guy} presented sentences orally in interviews, whereas Heringer\ia{Heringer, {James T.}} used a written questionnaire. A second possibility, which I discuss more fully in \sectref{sec:6.3}, is that interviews of the sort Carden\ia{Carden, Guy} conducted are more susceptible to experimenter bias. A third potential problem, mentioned by \citet{Carden1970a}, is that Heringer\ia{Heringer, {James T.}} used only one stimulus sentence for each reading in most of the constructions, so it is worth asking whether peculiarities of the sentences chosen could be responsible for some of the results. Nonetheless, Heringer's\ia{Heringer, {James T.}} data apparently refute \citegen{Newmeyer1983} claim that people differ only on their bias of interpretation on these quantifier-negative sentences (i.e., which reading they think of first), but that everyone \textit{can} get both readings. Even when context forced a particular reading, many of Heringer's\ia{Heringer, {James T.}} subjects did not accept that reading, so subjects seem to differ on something deeper than processing preferences.\footnote{Newmeyer\ia{Newmeyer, {Frederick J.}} cites a paper by \citet{Baltin1977} to support his claim that everyone can get both readings, but in fact Baltin found nothing of the kind. 
He found the three dialects that Carden\ia{Carden, Guy} had reported, using question answering rather than judgments as his primary source. (He also found a significant correlation between subjects' preferences on quantifier-negative constructions and their interpretation of prenominal modifiers as restrictive versus nonrestrictive.) However, \citet{Labov1975}
does report results along the lines described by Newmeyer,\ia{Newmeyer, {Frederick J.}} where nonlinguistic tasks were used to force one reading or the other, with almost complete success across subjects.
}
%\textsuperscript{2}
(See \citet{Labov1972a} for a survey of work on quantifier-negative dialects.)
%\originalpage{102} % Chapter Four
\citet{Stokes1974} performed a follow-up to Carden's work, starting from the criticism that the interview technique is subject to experimenter bias. Stokes used a questionnaire, and determined which readings of quantifier-negative sentences were grammatical by using judgments of synonymy rath\-er than of grammaticality, with the hope that the former would be less likely to tap prescriptive feelings. He too found ``extraordinary variation'' in the results, a much wider range of response patterns than the three dialects Carden\ia{Carden, Guy} discussed. Among his 48 subjects, there were 17 different patterns of responses to the stimulus sentences.
The second study I review is by \citet{SnowEtAl1977}, who performed three experiments to substantiate their claim about the secondary nature of syntactic intuitions and language data, which corresponds in many respects to the position presented in \sectref{sec:3.5}.\footnote{They argue that syntactic intuitions are developmentally secondary, as evidenced by studies such as \citet{Hakes1980}; pragmatically secondary, because their function is not communicative; and methodologically secondary, as demonstrated by their experiments reported in this chapter.}
%\textsuperscript{3}
(The second and third experiments will be discussed in subsequent sections.) Their first experiment used as subjects native speakers of Dutch who were studying linguistics but had not taken any courses in syntactic theory. We might expect them to show somewhat more sophistication than truly naive subjects. Their materials all involved issues of word order, so multiple arrangements of each set of words were constructed. There were two conditions, absolute judgments and rank-ordering. In the former condition, each of 24 sentences appeared on a separate page and the instructions stated, ``Will you please read the sentence, then indicate whether you think it is a good Dutch sentence (by `good' we mean `acceptable in spoken language' and not `grammatically correct'). Write + if the sentence is good, \textminus~if it isn't good,
and ? if it is in-between or if you don't know.'' In the rank-ordering condition, the sentences were divided across four pages of six sentences each, and the instructions read in part, ``Will you please rank these sentences within the groups of six by rewriting them at the bottom of the page with those sentences which are good Dutch, or the best Dutch, at the top and those sentences which are the worst Dutch at the bottom. Sentences which are equally good or bad can be written on one line.'' Immediately we see a potential confound, since the rank-ordering subjects were not told to rank by spoken acceptability as opposed to grammatical correctness (of course, we do not know whether this terminology was understood in a uniform way by the first group either). Snow and Meijer decided to make this a within-subjects
factor, administering the two kinds of tests a week apart to the same subjects, and found no effects of the order of test types, but the instructions could still confound any differences between the two types of task.
The results were first analyzed for between- and within-subjects consistency in the two conditions. The between-subjects consensus on rankings\is{absolute rating (of acceptability), versus relative ranking} was significant for all sets of sentences, as measured by Kendall's coefficient of concordance, but was not extremely high (ranging from .466 to .670 on a potential range of 0\textendash{}1). The most agreed-upon sentence, which the authors claim is perfectly normal, showed disagreement by 3 of 25 subjects, and all other sentences showed at least 7 disagreements as compared to their mean rank. The absolute ratings\is{absolute rating (of acceptability), versus relative ranking} similarly showed no total unanimity, although there was one sentence type on which 24 of the 25 subjects agreed.\footnote{ In the absolute condition, subjects could indicate that they were unsure, which the 25th subject did here; therefore, this constitutes only a weak disagreement.}
%\textsuperscript{4}
On the other hand, five sentence types showed disagreements, i.e., at least one subject rated them bad both times while another rated them good both times, and two of these represented almost equal splits of the subjects. Within-subject consistency was 70.8\% for the absolute judgments, where two identical ratings for two structurally identical variants counted as consistent, even if they were both marked ``?''. The majority of inconsistencies involved one ``?'' rating rather than strictly opposed judgments. One subject out of the 25 was consistent on all 10 sentence types, while the two least consistent subjects were consistent on only 5. Snow and Meijer correctly advise caution in interpreting this as a good level of consistency, however, because many of their
subjects showed strong response biases toward a ``+'' response or toward a ``\textminus''
response. In the extreme case, someone labeling all sentences as good would be 100\% consistent. (Since we are not told the normative status of the stimulus sentences, we do not know what an unbiased distribution of responses might look like.) The authors devised a complex scoring system to assess within-subject consistency between rank-orderings and absolute ratings, which ranged on average from perfect consistency to about three out-of-sequence rankings in a set of six sentences. There was no significant correlation between this cross-conditions consistency score for a given subject and his or her consistency within absolute judgments. Even when judgments are pooled across all the subjects, the absolute ratings do not agree entirely with the rank-orderings. There was at least one reversal
of position for each set of six sentences. On the basis of these results, it is hard to
argue with the authors that ``testing even a relatively large group of subjects, all of
%\originalpage{104} % Chapter Four
them relatively intelligent and language-conscious, does not assure internally consistent judgments concerning the relative acceptability of sentences'' (p. 172).
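For readers unfamiliar with the statistic, the concordance measure Snow and Meijer used is Kendall's coefficient $W$. In its standard form, for $m$ raters ranking $n$ sentences, with $R_i$ the sum of the ranks assigned to sentence $i$,
\[
W = \frac{12\sum_{i=1}^{n}\bigl(R_i - \tfrac{m(n+1)}{2}\bigr)^{2}}{m^{2}(n^{3}-n)},
\]
which ranges from 0 (no agreement among raters) to 1 (identical rankings). On this scale, the reported values of .466 to .670 for 25 subjects ranking sets of six sentences indicate moderate, but far from perfect, consensus.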
The third of our example studies is perhaps the most widely cited study on individual differences in grammaticality judgments, that of \citet{Ross1979}. Ross\ia{Ross, {John Robert}} asked 30 subjects to rate the grammaticality of 12 sentences on a scale from 1 to 4, and elicited their perceptions about these judgments.\footnote{There were actually 13 sentences in his questionnaire, one of which was geared to the semantics of \textit{barely} and \textit{scarcely} and did not yield results comparable to those for the other sentences.}
%\textsuperscript{5}
Specifically, the subjects were asked to state how certain they were of each judgment (pretty sure, middling, or pretty unsure), and how they thought that judgment compared to the judgments of most speakers (liberal, conservative, or middle-of-the-road). Since I am particularly concerned with the design of instructions for such experiments, I present Ross's\ia{Ross, {John Robert}} description of the rating scale as it appeared on his questionnaire:
\begin{enumerate}
\item The sentence sounds perfect. You would use it without hesitation.
\item The sentence is less than perfect\schdash{}something in it just doesn't feel comfortable. Maybe lots of people could say it, but you
never feel quite comfortable with it.
\item Worse than 2, but not completely impossible. Maybe somebody might use the sentence, but certainly not you. The sentence is almost beyond hope.
\item The sentence is absolutely out. Impossible to understand, nobody would say it. Un-English. (p. 161)
\end{enumerate}
\noindent
Note the reference to comprehensibility in item 4. In general, the instructions are quite explicit regarding differentiation of the levels, but give little indication of what counts as a criterion for grammaticality.
By his own admission, Ross intended this experiment only as a pilot study. As he acknowledges, his presentation of the results shows no knowledge of statistics whatsoever. Instead, he invents his own numerical measures to assess variability, covariation, etc., and gives numerous large tables of raw data.\footnote{Another potential problem of interpretation is that 8 of his 30 subjects were nonnative speakers of English.}
%\textsuperscript{6}
While these shortcomings make the paper tedious to read and the results hard to interpret, at least his raw data could be used to do proper statistical analyses. I will report only the more obvious results, with the understanding that none of them should be taken as firm. First, I present the sentences employed in the questionnaire, with their mean ratings on the 1\textendash{}4 scale. (Ross did not calculate mean
ratings, but computed an overall score by weighting the numbers of subjects who gave each of the four responses, in effect treating the scale as centered about a zero point. Since his formula is arbitrary and unjustified, I use the standard computation instead. Thus, in his ordered list, the third and fourth sentences are transposed.)\footnote{The general problem of how to come up with a single rating for a sentence on the basis of multiple judgments on a graded scale has arisen in many other studies as well. Standard deviations should probably be reported.}\bigskip
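The ``standard computation'' referred to is simply the arithmetic mean over the four response categories: if $n_k$ is the number of subjects (out of $N = 30$) who assigned rating $k$ to a given sentence, then
\[
\bar{r} \;=\; \frac{1}{N}\sum_{k=1}^{4} k\,n_k ,
\]
whereas Ross's overall score amounts to replacing the values $1,\dots,4$ with nonuniform weights of his own choosing, which is why the two procedures can order sentences differently.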
%\textsuperscript{7}
\resizebox{\textwidth}{!}{\noindent\begin{tabular}{ll@{ }S@{\hspace{1em}}l}
\multirow{2}{*}{}& The doctor is sure that there will be no problems. & 1.07 & \rdelim]{2}{2cm}[\textit{Core}] \\
& Under no circumstances would I accept that offer. & 1.23 & \\
\multirow{8}{*}{} & We don't believe the claim that Jimson ever had any money. & 1.63 & \rdelim]{8}{2cm}[\textit{Bog}] \\
& That is a frequently talked about proposal. & 1.70 & \\
& The fact he wasn't in the store shouldn't be forgotten. & 1.80 & \\
& The idea he wasn't in the store is preposterous. & 2.03 & \\
& I urge that anything he touch be burned. & 2.03 & \\
& Nobody is here who I get along with who I want to talk to. & 2.60 & \\
& All the further we got was to Sudbury. & 2.77 & \\
& Nobody who I get along with is here who I want to talk to. & 2.83 & \\
\multirow{2}{*}{} & Such formulas should be writable down. & 3.07 & \rdelim]{2}{2cm}[\textit{Fringe}] \\
& What will the grandfather clock stand between the bed and? & 3.30 & \\
\end{tabular}}\bigskip
The designations \textit{core, bog}, and \textit{fringe} are used by Ross to refer to the range of good, marginal, and bad sentences, respectively. These divisions are made by eyeballing, not by any formulaic procedure.\footnote{Ross\ia{Ross, {John Robert}} does not commit as to exactly where the divisions should be drawn for the sentences he studied, so I have placed the boundaries arbitrarily within his suggested ranges.}
%\textsuperscript{8}
He found three variables that correlated with this distinction in the order core-fringe-bog (i.e., variables that changed monotonically such that good sentences were at one extreme and marginal ones at the other): increasing variability among subjects, decreasing confidence in their judgments, and increasing self-rating as conservative. The finding about variability jibes with Barsalou's results reported in \sectref{sec:3.3.1} for conceptual typicality judgments. The pattern of confidence agrees with the findings of \citet[52, fig. 9]{QuirkEtAl1966}, based on the number of subjects choosing the ``marginal or dubious'' rating on their 3-point scale; they dub this phenomenon the ``query bulge.'' At an intuitive level, these results are not surprising, but the only explanation Ross\ia{Ross, {John Robert}} adduces, namely that ``the mind sags in the middle,'' does not add much insight.\footnote{This quotation is attributed to George Miller.\ia{Miller, {George A.}}}
%\textsuperscript{9}
While an additional goal of the questionnaire was to assess whether people know where their judgments stand in relation to those of the rest
% 106 \textsuperscript{Chapter} \textsuperscript{Four}
of the population, the data were not interpretable due to apparent misunderstandings of the liberality scale.\footnote{Ross\ia{Ross, {John Robert}} suggests that a better way to get at this information is simply to ask subjects directly what ratings they think most other people would give.}
%\textsuperscript{10}
Interestingly, Ross found no cases of strongly polarized judgments, i.e., sentences that some people rated 1 and the rest rated 4, with no one in between. In all cases, the two most frequent ratings were adjacent on the scale, that is, there were no bimodal distributions. He suggests that this might be an artifact of the particular sentences chosen; if one deliberately chose known dialectal peculiarities, bimodality might still appear. However, as a measure of just how different people are, no 2 of the 30 subjects agreed on their ratings for more than 7 of the 12 sentences on the 4-point scale. In fact, Ross\ia{Ross, {John Robert}} did not try all combinations of sentences, so it might even require fewer than 7 sentences to differentiate all of the subjects. (By way of comparison, \citet{QuirkEtAl1966} (see \sectref{sec:3.2})
reported that with 76 subjects judging 50 sentences on a 3-point scale, only two sentences were unanimously rejected, and only two accepted.) These sorts of striking results lead Ross\ia{Ross, {John Robert}} to ask, ``Where's English?'' (his proposed answer is discussed below). One experiential factor that contributed to variability among Ross's\ia{Ross, {John Robert}} subjects was that some of his subjects were linguists while others were not. He found systematic differences between the two groups, which I discuss in \sectref{sec:4.4.1}.
Most linguists acknowledge that no two people will agree on even binary judgments of a large collection of sentences, let alone ordinal rankings.\footnote{Newmeyer\ia{Newmeyer, {Frederick J.}} appears to be the exception, claiming that ``there is good reason to think that idiosyncratic (i.e., nongeographical and nonsocial) dialects are nothing but artifacts of the now-abandoned view that grammaticality is dependent on context'' (1983, p. 57). However, he only cites one case as evidence for this very broad generalization, that of quantifier-negative sentences, and the facts there are still controversial, as discussed above.}
%\textsuperscript{11}
What, if anything, does this tell us about people's grammars? Ross's\ia{Ross, {John Robert}} data prompted him to take a very pessimistic view. He proposed in dismay that a language might be defined only as an \textit{n}-dimensional space for some \textit{n} in the thousands, where each point is a sentence and each dimension an implicationally ordered axis such that acceptance of a sentence on a given axis implies acceptance of all sentences closer to the origin along that axis. Then each person's idiolect is an \textit{n}-dimensional vector specifying that person's acceptance threshold for each axis. Most linguists find this an appallingly messy and uninteresting view of language.\footnote{This view seems to have originated in Ross's\ia{Ross, {John Robert}} earlier proposal of the concept of a squish (see \sectref{sec:3.3.1}). A squish is a two-dimensional matrix where the cells represent judgments. On one axis are forms graded by some property, e.g., increasing volitionality. On the other are environments where the forms might occur, graded by the extent to which they demand that property. One can make claims about how orderly the implicational pattern in the matrix should be across speakers. Unfortunately, after some research in this paradigm it started to look like both hierarchies could vary across speakers, or even that this pattern could be violated by a single speaker through the syntactic analog of statistical interactions: the effect of one dimension on grammaticality depended on the level of the other.
}
I discuss some alternative positions in \chapref{sec:6}. The reader is referred to \citet{FillmoreEtAl1979a} for a very wide-ranging discussion of individual differences in language behavior. Let us now consider the potential sources of these differences.
\section{Organismic Factors}\label{sec:4.3}
\subsection{Field Dependence} \label{sec:4.3.1}
Field dependence/independence is a concept that originated in the personality assessment literature in psychology. It is meant to diagnose how people perceive and think, specifically the extent to which they perform \textit{cognitive differentiation}, the process of distinguishing stimuli along different dimensions. \citet{Nagata1989b} investigated whether field (in)dependence would influence grammaticality judgments. A field dependent (FD) person fuses aspects of the world and experiences it globally, whereas a field independent (FI) person is analytical, differentiating information and experiences into components. These are seen as more or less permanent traits of individuals \citep{WeinerEtAl1977}. There are a number of diagnostic tests for field (in)dependence that have been shown by psychologists to be very well correlated. One of these is the tilting-room-tilting-chair test, which involves an apparatus consisting of a small box-shaped room containing a chair, mounted on mechanical devices such that each can be rotated independently in two dimensions. Subjects seated on the chair cannot see outside the room, and are required to judge whether they are seated upright or on a tilt relative to the outside world. FD individuals tend to believe that they are on a tilt if the orientation of the room makes it appear so, i.e., they have trouble distinguishing visual cues from kinesthetic/vestibular ones, whereas FI individuals have less trouble. A simpler test to perform, used by Nagata to divide up his subjects, is the embedded figures test. In this test subjects must rapidly pick out simple geometric figures embedded in larger, more complex ones. FDs have more difficulty with this than FIs. We might expect that these differences in cognitive style could show linguistic side effects. FI individuals show an impersonal orientation and have well-developed cognitive restructuring skills, while FD individuals show more interpersonal competencies. For
example, they recall social words better than FIs and use them more often in free association tasks. Thus, we could anticipate that FDs would use strategies involving the enrichment of stimulus sentences with context when judging them, while FIs would be more prone to employ structural differentiation. (The nature of these strategies is described in more detail in Sections \ref{sec:5.2.4} and \ref{sec:5.2.5},
in conjunction with discussion of Nagata's other experiments.) However, as reported in \sectref{sec:3.5}, \citet{MasnyEtAl1985} found field dependence had no discernible effect on L2 judgment ability. They also review numerous other studies attempting to relate it to language ability, the results of which were mixed. An additional facet of this distinction is that FDs are more prone to changing their opinions under external influence, since they pay greater heed to others, so we should look for differential reactions to knowledge of other people's judgments.
Nagata's experiment involved repeated presentation of sentences. After rating the grammaticality of a number of sentences (on a scale of 1 to 7), subjects were exposed to each sentence 10 times for 3 seconds per repetition, during which time they were told to think of the grammaticality of the sentence. After the tenth repetition, they rated each sentence a second time. Then they were told that their judgments differed from those of the average college student (which Nagata considered negative reinforcement), and were asked to think about the grammaticality of each sentence again and rate it a third time. Other experiments have shown that for a general population, the repetition treatment makes judgments significantly more stringent (i.e., sentences are rated less grammatical after repetition); see \sectref{sec:5.2.3} for details. In Nagata's experiments, the judgments of FIs did become more stringent after repetition, but those of FDs showed no significant change. After the negative reinforcement, both groups' ratings became more lenient (the FDs' nonsignificantly more so). Nagata concludes that FDs approach the task of judging grammaticality differently from FIs, since they resist the usual repetition effect. One might have expected their judgments to become more lenient with repetition, as they considered more potential contexts for the sentences, but this trend was not found either. Apparently it is much harder to make sentences get better than to make them get worse (again, see \chapref{sec:5} for more on this point). The idea that FDs would be more responsive to negative reinforcement was not substantiated. In summary, we can say that field dependence is a factor that can induce variability among subjects on grammaticality judgment tasks, just as it does in other domains. For instance, \citet{LefeverEtAl1976} found a moderately positive correlation between field independence and the abil%
% Subject-Related Factors 109
ity to detect several kinds of ambiguity in sentences. They propose that the common features among the various tasks involve restructuring a stimulus pattern, overcoming the influence of context, and shifting mental set.
\subsection{Handedness}\label{sec:4.3.2}
There is already considerable evidence that handedness correlates with differences in language processing, for instance in the review by \citet{HardyckEtAl1979}. Recently, some preliminary studies have been done on possible correlations between handedness and grammaticality judgment strategies. Work by \citet{BeverEtAl1987} was the first to suggest that such differences might be found. The purpose of their study was to show that the assumption that the basic mechanisms of sentence processing are the same for everyone is a severe oversimplification. Specifically, they demonstrated how right-handers from families with at least one closely related left-hander (``mixed background right-handers'') show different processing patterns from right-handers with no familial history of left-handedness (``pure background right-handers''). The former group tend to process in a more structure-independent way than the latter, that is, they attend less to syntactic and semantic structures of language and more to conceptual and lexico-pragmatic features. These differences were found despite the matching of subject groups on several other variables, including age, sex, native language (English), and verbal SAT score.\footnote{These studies do not use left-handed subjects because they are harder to find and to match on these dimensions.}
%\textsuperscript{13}
In one study the authors used the classic tone location paradigm, wherein a subject hears a tone while listening to a sentence and must subsequently identify at which point in the sentence it occurred. They demonstrated that mixed-background subjects did not show a superiority effect for clause boundary location of the tone, that is, they did \textit{not} locate the tone more accurately when it occurred exactly between two clauses, while pure-background subjects did. A second experiment showed that mixed-background subjects respond more quickly in a word recognition task (supposedly because they ``make more use of the reference of individual words in their processing'') and are insensitive to the position of the target word in the clause, unlike their structure-dependent counterparts, who showed serial order effects. Pure-background right-handers also performed more slowly on word-by-word reading tasks. These results support the authors' general conclusion that pure-background people depend
more on aspects of sentence \textit{structure}, mixed-backgrounders more on lexical and conceptual knowledge.\footnote{It is important to note that there were no instances in which the two groups showed reverse effects; either they showed the same trend to different degrees, or else one group showed no effect.}
%\textsuperscript{14}
There is some neurological evidence to corroborate this proposal. Familial sinistrality seems to be correlated with a less localized, more widespread language module in the brain, which \citet{BeverEtAl1987} suggest leads to more contact between language and other kinds of knowledge. Whatever the eventual explanations of these differences, it would not be surprising to find that the different processing strategies are also reflected in different judgment strategies between such groups. In fact, the two types of strategies proposed by Bever et al. are not so dissimilar from those proposed by Nagata for field dependents versus independents. A replication of his procedure with mixed-background subjects could prove fruitful. See \citet{BeverEtAl1989} and \citet{Bever1992} for more studies of language differences correlated with familial handedness.
\citet{Cowart1989} conducted the first study to look explicitly for the effects of familial sinistrality differences in a judgment task. The experiment involved a written questionnaire using a 4-point scale, the extremes of which were designated ``OK'' and ``odd'' (since the details of the procedure are not reported, we cannot assess the extent to which subjects were instructed on how to evaluate sentences in terms of these labels). The sentences in question followed the paradigm in \REF{ex:4:4}:
\ea\label{ex:4:4}
\ea What did the scientist criticize Max's proof of?
\ex What did the scientist criticize a proof of?
\ex What did the scientist criticize the proof of?
\ex Why did the scientist criticize Max's proof of the theorem?
\z
\z
\noindent
Example (\ref{ex:4:4}a) has traditionally been called a violation of the \isi{Specified Subject Condition}, while (\ref{ex:4:4}b) and (\ref{ex:4:4}c) are considered good in some theories and claimed to violate only the lesser constraint of \isi{Subjacency} by others; (\ref{ex:4:4}d) is an uncontroversial control sentence. It was hypothesized that since the violations in (\ref{ex:4:4}a\textendash{}c) are all of a purely structural nature, mixed-background subjects would be less sensitive to them and therefore rate them more grammatical than their pure-background counterparts. This prediction was borne out. For cases like (\ref{ex:4:4}a\textendash{}c) the ratings of the latter subjects were significantly lower than those of the former, but
% Subject-Related Factors 111
no difference was found for grammatical control sentences
like (\ref{ex:4:4}d).\footnote{Another result was that cases like (\ref{ex:4:4}b) and (\ref{ex:4:4}c) were rated significantly worse than (\ref{ex:4:4}d), suggesting that they might indeed constitute \isi{Subjacency} violations (but see the caveats in \sectref{sec:2.3}).}
%\textsuperscript{15}
If this insensitivity to structural violations is found throughout the syntax, it could constitute an explanation for a significant amount of intersubject variation in judgments. (See \sectref{sec:7.2} for the possibility that \isi{Subjacency} is really a parsing constraint and not a grammatical constraint.)
\subsection{Other Organismic Factors}\label{sec:4.3.3}
In this subsection I suggest some other organismic factors that might induce systematic differences in grammaticality judgments. First, let us consider two of the most obvious factors: age and sex. \citet{Ross1979} suggests that, in general, more contact with a language leads to higher grammaticality ratings for it, an idea inspired by the fact (reported in \sectref{sec:4.4.1}) that linguists rated sentences higher on average than nonlinguists in his questionnaire experiment, which obviously has other possible explanations. If Ross\ia{Ross, {John Robert}} is right, we would expect increasing age to be correlated with increasing tolerance in judgments. His own data do not bear this out, but they were not even based on accurate ages, just his guesses, so there is certainly room for more investigation here. \citegen{Greenbaum1977c} review of the literature cites age as a factor that correlates with difference in acceptability judgments, but he does not provide details. As for sex differences, \citet{Chaudron1983} states in his wide-ranging survey of metalinguistic research that sex has rarely been experimentally analyzed and ``does not appear to be a relevant factor,'' but if the former is true then how do we know the latter for certain? R. \citet{Lakoff1977}, while dealing with what she calls acceptability differences between men's and women's speech, makes it clear that such differences are conditioned by situational and social factors (i.e., \textit{when} a particular kind of utterance is appropriate), and not differences in grammars. For instance, she has found no instances of syntactic rules that only one sex possesses, at least not in English. However, \citet{Bever1992}
\textit{has} found preliminary evidence for sex differences in methods of language processing, which presumably could be reflected in judgments as well. He argues that there is a spectrum of ``abduction strategies'' or possible ways one can develop abstract representations (linguistic or otherwise), whose extremes are hypothesis refinement (using new data to refine an existing hypothesis) and hypothesis competition (using it to choose between alternative hypotheses). In one of Bever's\ia{Bever, {Thomas G.}} experiments, the tasks involved producing, comprehending, and judging sentences in an artificial language. Under learning conditions that are supposed to support hypothesis competition, men do significantly better than women on the judgment task, while the opposite is true with hypothesis refinement. While there is as yet no conclusive basis for deciding whether these differences are biologically or socially caused, it is intriguing that similar sex differences surface in spatial learning tasks, which leads Bever\ia{Bever, {Thomas G.}} to suggest that there might be a general abduction mechanism implicated in both activities. However, it does not necessarily follow that any sex differences we might find in judgments of one's native language would be attributable to the same mechanism. One can imagine that differences in conversational strategy could lead, say, to women judging fewer sentences ungrammatical than men because they are more supportive in conversation. (See \citet[ch. 13]{Wardhaugh1988} for a brief review of the literature on sex differences in language and their social correlates.) The attribution of sex differences to biological versus psychological causes is notoriously tricky, and is the subject of much ongoing research; see \citet{Halpern1992} for an excellent review, and \citet{PhilipsEtAl1987} on the possible relevance of sex differences in the brain to language. 
It appears that no one has yet looked for sex differences in the processing of individual sentences, as opposed to overall skill level on verbal tasks.
The second direction we might explore while looking for organismic factors involves general cognitive differences that we suspect are implicated in the task of judging grammaticality. For instance, we will see evidence in \sectref{sec:5.3.5} that part of this process involves imagining a situation to which a sentence could be applied. Therefore, the ability to imagine situations, i.e., some form of creativity, is a dimension on which people undoubtedly vary and one that could correlate with judgments. Various perceptual strategies have been implicated in language processing, and hence also (somewhat controversially) in the generation of judgments. Subjects might differ in their ability to use these strategies \citep{Botha1973}. Similarly, a number of extragrammatical factors often implicated in acceptability (as distinct from grammaticality) might be subject to inherent differences, such as working memory capacity, ability to reason by analogy, and so on. At a more general level, intelligence and cognitive development might be pertinent, at least up to a certain ceiling. \citet{Hakes1980} (reported in \sectref{sec:3.5}) attempts to show that qualitative changes in children's ability to make grammaticality judgments are correlated with Piagetian\ia{Piaget, Jean} stages of development, and \citet{MasnyEtAl1985}, mentioned in \sectref{sec:3.5}, looked for correlations between IQ and judgments
of second-language learners, although they failed to find any significant patterns. Finally, \citet{BialystokEtAl1985} propose a model of (meta)linguistic ability as factored into two major dimensions, analyzed knowledge and cognitive control (see \sectref{sec:3.4}). Each is the product of underlying cognitive abilities on which people might differ. Analyzed knowledge is related to intelligence and logical deduction abilities, while cognitive control depends on reflective and impulsive tendencies, as well as field dependence, discussed in \sectref{sec:4.3.1}. The authors do not provide specific evidence for these interdependencies, however. In general, demographic variables are hard to study rigorously in this context, because their effects seem to be small relative to stimulus factors and can often interact, which demands large samples of subjects in order to detect the effects reliably.
\section{Experiential Factors}\label{sec:4.4}
\subsection{Linguistic Training}\label{sec:4.4.1}
\epigraph{\textit{It is well-known among linguists that intuitions about the acceptability of utterances tend to be vague and inconsistent, depending on what you had for breakfast}\footnotemark{} \textit{and which judgment would best suit your own pet theory.\\[-2\baselineskip]}\footnotemark}{\citep{Dahl1979}}
\footnotetext[16]{Since it is not clear whether what one had for breakfast should be treated as a between- or a within-subjects factor, it will not be discussed further.}
\todo{Attention Fn id 16 was hardcoded to work with the epigraph. Make sure numbers fit before publication!}
\footnotetext{Jim McCawley (personal communication)\ia{McCawley, {James D.}} points out that the relevant factor is actually which judgment linguists \textit{believe} would suit their theories, because their beliefs about the consequences of their own theories may turn out to be erroneous.}
\epigraph{\textit{Only the most sophisticated speakers can supply the exquisite judgments required for writing a grammar.}\\[-2\baselineskip]}{\citep{GleitmanEtAl1970}}
\noindent One of the most frequent criticisms of generative grammar has been the fact that, to paraphrase Labov,\ia{Labov, William} the theories that linguists develop are based on data that they themselves create, a situation that constitutes an intolerable conflict of interest and seriously undermines the external validity of the findings. In this subsection I enumerate some of the specific reasons why it has been suggested that linguists' intuitions differ from those of naive native speakers and thus should not be used as linguistic data. I then turn to experimental attempts to establish whether such differences actually exist, of which there have been surprisingly few. It must be kept in mind throughout that finding differences in the way linguists
and nonlinguists judge sentences does not inherently count as a strike against using data from the former group. We must examine each difference to see what the potential benefits and drawbacks are for linguistic investigation.
The following passage from \citet[968]{BradacEtAl1980} is typical of the views expressed by many outside the generative enterprise: ``as a result of their special training, linguists may tend to judge strings differently from nonlinguists. Training in linguistics may produce beliefs or attitudes which are not shared by those who have not received such training. This suggests that the knowledge produced by linguists may become increasingly artifactual; it may fail increasingly to model natural language.'' While the authors' premise of differing beliefs is almost certainly true, it does not follow that linguists' judgments are artifactual in the sense that they are influenced by factors that are not relevant to the grammars of naive speakers. A priori it is equally possible that their training allows them to factor out various influences that \textit{do} affect naive judgments but reflect cognitive factors \textit{other than} the grammar that is the object of study \citep{Levelt1974}. However, there are legitimate reasons to suggest that this ability of linguists might have come at the price of a loss of objectivity. \citet{Labov1972a} argues that linguists have become removed from everyday language experience. \citet{Greenbaum1976a,Greenbaum1977c} believes that linguists are bound to be unreliable subjects, for at least three reasons. First, after long exposure to closely related sentences their judgments tend to become blurred. A famous quotation from \citet[178]{Fraser1971} exemplifies the point: ``I think this issue is fairly clear. It will be resolved by speakers whose intuitions about the sentences in question are sharper than mine, which have been blunted by frequent worrying about these cases.'' Even Chomsky\ia{Chomsky, Noam} himself has experienced this phenomenon: ``I had worried so much over whether \textit{very} could occur with \textit{surprised}, that I no longer had a firm opinion about it'' \citep[172]{Chomsky1962}.
Haj Ross\ia{Ross, {John Robert}} coined the term \textit{scanted out} in the early 1970s to describe this state.\footnote{The term apparently originates from Ross's\ia{Ross, {John Robert}} feeling that just trying to produce any judgments on sentences containing the word \textit{scant} was sufficient to induce a loss of intuitions in short order.}
Second, linguists are liable to be unconsciously prejudiced by their own theoretical positions, tending to judge in accordance with the predictions of their particular version of grammar.\footnote{Elan Dresher (personal communication)\ia{Dresher, {Elan B.}} suggests that the reputed argumentativeness of linguists and the existence of multiple competing theories would guard against such bias. However, in the first place, Wayne Cowart (personal communication)\ia{Cowart, Wayne} points out that it is almost impossible to get
an article published if all one has to offer is disagreement with some other linguist's judgments. Furthermore, even if one has a theory to go with new judgments, this will only help the field if one's theory is of interest to the linguist in question. If the source of bias is an uncontroversial assumption within GB, say, but that assumption is disputed by proponents of Lexical-Functional Grammar, the bias will be difficult to discover, because the two camps rarely interact.}
\citet{Botha1973}, \citet{Derwing1973}, \citet{Sampson1975}
% \todo{these need to be split into separate citations because the above prints them with semicolons between, but we need commas}
and \citet{Ringen1979}, among many other
critics, also express this view. Additionally, \citet{Levelt1974} suggests that hypercritical linguists might be biased \textit{away from} the judgments predicted by the theory they are working on. \citet{CardenEtAl1981} speculate on the subconscious process by which this could arise in a particular case. Greenbaum's third source of linguists' unreliability is that they look for reasons behind their acceptance or rejection of a sentence, which takes away spontaneity and makes their judgment processes different from those of naive subjects, who presumably have neither the inclination nor the knowledge necessary to perform this analysis. On the issue of whether this is actually less desirable, see the discussion in \sectref{sec:5.2.7} on the
relative merits of spontaneous versus reasoned judgments. Nonetheless, I agree with Greenbaum that this constitutes an additional difference between the two groups. Let us now see whether any of the above hypotheses have been borne out empirically.
I begin with a summary of differences found by Ross\ia{Ross, {John Robert}} in the study mentioned in \sectref{sec:4.2}. The summary is brief because the study's methodological shortcomings make its results suspect at best. On average, his linguists were more unsure than his nonlinguists (i.e., they had less confidence in their ratings), perhaps because thinking about language makes you realize how little you know about it and shatters your confidence in your own judgments\schdash{}``Doing syntax rots the brain.''\footnote{Ross\ia{Ross, {John Robert}} attributes this adage to John Lawler\ia{Lawler, John} without providing a reference.}
Nonlinguists rated themselves more conservative, were tougher graders (i.e., they rated sentences less grammatical overall), and made fewer distinctions between levels of grammaticality (i.e., they tended not to use the whole scale). We will find a counterexample to the relative stringency finding in another study.
The most widely cited work on linguist/nonlinguist differences is that of \citet{Spencer1973}. The paper is perhaps more important for the many issues it raises than for Spencer's experimental results. She starts from the position that:
\begin{quote}
it is possible that the behavior of producing linguistically relevant intuitions has developed into a specialized skill, no longer directly related to the language behavior of the speech community (Bever [\citeyear{Bever1970a}]).
%\todo{Need citation to print as (Bever [1970a]), because in the source it was just 1970}
The linguist views language in a highly specialized way, and perhaps is influenced by a perceptual set. The resulting description may not be an ideal representation of linguistic structure. It may be an artifactual system which reflects the accretion of conceptual organization by linguists. (p. 87)
\end{quote}
\noindent
Spencer's experiment used two groups of subjects: the naive subjects were students of introductory psychology, while the nonnaive subjects were graduate students who had taken at least one course in generative grammar.\footnote{Apparently the nonnaive subjects did not possess a uniform amount of linguistic background, however, since some were graduate students in linguistics, while others were psychology or speech students. The latter groups might have watered down the linguistic biases of the first group.}
She states that \citegen{Chomsky1961} definition of grammaticality and examples were used as the basis for the instructions in her experiment, but all she actually tells us about these instructions is the following: ``Each [subject] was read the same instructions\schdash{}he would
be asked to make a decision on each statement as to whether it was complete and well-formed or not. There were a series of guidelines and examples as to what the [experimenter] meant. ... After the instructions had been read, the [subject] was asked to tell the [experimenter] what he had understood his instructions to be, and any confusions or omissions were corrected'' (p. 91). Apparently Spencer (or her editors) did not consider it important to describe the details of these instructions, but they are crucial for interpreting the results. If they did not correspond to the concept of grammaticality that linguists use, then we have a confounding variable.\footnote{\citet{Newmeyer1983} makes this criticism as well.}
The stimulus sentences were drawn from six linguistic articles, and had all been labeled unequivocally good or bad by the original author. Unfortunately, none of the sentences are reported in the paper. \citet{Newmeyer1983} surmises, on the basis of the source articles, that many of them were pragmatically very odd and required an unusual context to sound acceptable. Spencer's design was intended to draw out two possible results that would undermine linguists' use of their own intuitions: intersubject variation by naive subjects on allegedly clear cases, and naive subject consensus that conflicted with a linguist's judgment. There was also a check for consistency: six randomly chosen sentences were resubmitted for judgment at the end of the experimental session, and subjects who contradicted themselves on three or more of these had all their results discarded.
The first result was that an average of 81.4\% of the 150 sentences were considered clear cases, as defined by the degree of consensus among subjects. At
least 65\% in each group gave the same rating (either good or bad; there were no other available answers). That is, the division between accepters and rejecters had to be at least 15\% from an even split. But this is not a particularly strong consensus; 35\% of the subjects could still have disagreed. If a 75\% criterion had been set, the percentage of clear cases would have been lower. Spencer does not provide figures from which we can calculate it exactly. She acknowledges that her choice of cut-off is arbitrary. (For comparison, \citet{SnowEtAl1977} report 20\% of their sentences as unclear cases among naive native speakers. Their definition of unclear is that a sentence received approximately equal numbers of acceptances and rejections.) As for whether naive and nonnaive subjects differed in their responses, it is impossible to be certain on the basis of Spencer's reported figures, for two reasons. First, while she shows that the proportion of sentences accepted by the two groups differs by 6\%, she reports no statistical test of significance for this difference. Second, this comparison would not reveal a situation where the groups differed on \textit{which} sentences were accepted, but total \textit{numbers} of acceptances happened to come out roughly the same. Spencer merely states that there were ``no noticeable differences in the distribution of exemplars found unacceptable, unclear, and acceptable.''\footnote{For Spencer, an unclear sentence is one on which the subjects did not show consensus by the measure defined above.}
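Spencer's ``clear case'' criterion reduces to a simple majority threshold: a sentence counts as clear when at least the cutoff proportion of subjects gave the same binary rating. The following sketch (in Python, using hypothetical accept/reject splits among 65 subjects, \textit{not} Spencer's actual data) illustrates how the count of clear cases depends on where the cutoff is set:

```python
def is_clear(accept, reject, cutoff=0.65):
    """Spencer-style criterion: a sentence is a 'clear case' when at
    least `cutoff` of the subjects gave the same (good/bad) rating."""
    total = accept + reject
    return max(accept, reject) / total >= cutoff

# Hypothetical splits among 65 subjects (illustrative only):
for accept, reject in [(55, 10), (45, 20), (33, 32)]:
    print(accept, reject,
          "clear at .65:", is_clear(accept, reject, 0.65),
          "clear at .75:", is_clear(accept, reject, 0.75))
```

Note that a 45--20 split (about 69\% agreement) counts as clear under the 65\% criterion but not under a 75\% one, which is why the choice of cutoff matters for the reported proportion of clear cases.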
As for comparing the subjects to the linguist authors, 73 of the 150 sentences showed disagreement, defined by the subjects' pooled rating being either unclear or opposite to that of the linguist. \tabref{tab:1} (from \citet{Spencer1973}) gives a breakdown of the results. Of the disagreements, 81\% were unanimous across the subject groups, and in the majority of the remaining cases it was the naive subjects who disagreed with the linguists while the nonnaive subjects agreed, but again this difference is not analyzed for significance. We must keep in mind, however, that this 50\% disagreement rate comes from comparing the pooled judgments of 65 subjects with that of an individual linguist, a point that many subsequent articles have emphasized. Thus, while we can certainly conclude that the published judgments did not show a good correspondence with the population as a whole, we crucially cannot conclude that linguists \textit{as a group} have systematically different judgments from nonlinguists. A comparison with any single randomly chosen naive subject could well have shown just as much disagreement. Nevertheless, Spencer concludes that linguists should not trust their intuitions: ``It is reasonable to state that the judgments of the linguists used are representative of many linguists as a group,'' since there had not been any published rebuttals in the 4\textendash{}5 years since the original articles appeared. But there are many possible alternative explanations for that state of affairs. As for the direction of the disagreements, the table shows that on 42 sentences nonlinguists were more accepting, while on 17 they were more stringent and on 14 they were mixed.
This pattern, though not overwhelming, contradicts Ross's\ia{Ross, {John Robert}} findings that linguists are more accepting on average.\footnote{If we expect that linguists should be more aware of their actual speech tendencies than untrained speakers, then this result also contradicts the general recommendation of \citet{HindleEtAl1975} to trust ``OK'' judgments more than stars.}
Thus, the only firm recommendation we can draw from this study is that a reasonable sample size be used in determining the representativeness of judgments; we cannot conclude that this sample should not consist of linguists.
%%please move \begin{table} just above \begin{tabular
\begin{table}
\caption{Comparison of Linguists' and Nonlinguists' Acceptability Judgments \citep{Spencer1973}}
\label{tab:1}
\resizebox{\textwidth}{!}{\begin{tabular}{S[table-number-alignment = center]cccS[table-format=2.0]}
\lsptoprule
& \multicolumn{4}{c}{Judgment (+ = acceptable; \textminus\xspace = unacceptable; \textpm\xspace = unclear case)} \\ \cmidrule{2-5}
\multicolumn{1}{l}{Number of sentences} & \multicolumn{1}{>{\centering}p{2.25cm}}{Linguist\\(as published)} & \multirow{3}{*}{Naive group} & \multirow{3}{*}{Nonnaive group} & \multirow{3}{*}{~~~~Total} \\% \multicolumn{1}{c}{\multirow{3}u{*}{Total}}\\
\midrule
\multicolumn{1}{l}{Consensual Agreement} \\
51 & + & + & + & \\
26 & \textminus\xspace & \textminus\xspace & \textminus\xspace & 77\\
\multicolumn{1}{l}{Consensual Disagreement}\\
17 & + & \textminus\xspace or \textpm & \textminus\xspace or \textpm & \\
42 & \textminus\xspace & + or \textpm & + or \textpm & 59\\
\multicolumn{1}{l}{Judgments Mixed} \\
3 & + & + & \textminus\xspace or \textpm & 3 \\
4 & + & \textminus\xspace or \textpm & + &\\
7& \textminus\xspace & + or \textpm & \textminus\xspace & 11 \\
\lspbottomrule
\end{tabular}}
\end{table}
%\todo{text of the commented line needs to be a spanner over columns 2-5 and The Consensual Judgments entries need to be LEFT-justified, not right. I made some changes to the table. Is this OK?}
%\todo{definitely better, but would still like to make a couple of tweaks: can the first column of numbers be moved leftward to roughly the end of the word "Mixed", and can the rightmost column of numbers be centered under "Total"}
%\todo{Please move the four headers that are floading a line above the thin horizontal rule down onto the line. Also, the numbers in the Total column do not look centered in their column, they are too far left.}
Despite the less-than-convincing nature of her findings, Spencer goes on to make the familiar point that linguists who use only their own intuitions as data are really no different from trained introspectionists, whose intuitions ended up being totally removed from the layman's experiences (see \sectref{sec:2.4} for a discussion of introspectionism in psychology). In addition to the possibility that linguists' theoretical perspectives influence their judgments, she suggests that working with many sentences revolving around a given issue might also contribute to context biases in their judgments. That is, satiation first leads to a loss of symbol meaning, then illusory changes occur in the form and meaning of the sentences, constrained by the context (e.g., one's theory).\footnote{This notion of the effects of satiation derives from experiments such as those of \citet{TaylorEtAl1963}. They used a tape loop of a short phrase or sentence repeated for 15 minutes. When the instructions suggested that the stimulus would change, subjects perceived illusory changes. Furthermore, the number of non-English forms they perceived among these changes was heavily increased when they were told to expect non-English forms. Thus, the context constrained illusory variation.
}
Thus, the linguist can reperceive and reorganize a sentence after repeated consideration, taking into account the theoretical constructs that it bears on. Finally, Spencer addresses the question of whether linguist/nonlinguist differences might not in fact be a good thing:
\begin{quote}
It might be claimed that any difference between linguists and naive speakers found in this experiment is due to the increased awareness and sophistication in language that linguists have developed through their study. Perhaps linguists are simply more sensitive to language and therefore are able to detect finer differentiations than naive speakers in intuitions concerning natural language, rather than creating differentiations which do not exist within the natural language. If linguists are dealing with artifacts, however, nonnaive speakers, who have studied modern linguistics, should perform in a manner similar to naive speakers. Thus, to anticipate this criticism, nonnaive speakers also participated in the experiment. (p. 90)
\end{quote}
\noindent
Of course there is a certain Catch-22 quality to this last point. One could always counter that, however much linguistic training these nonnaive subjects had, it did not raise them to the same level of linguistic sophistication as practicing linguists, and so the latter's judgments might still be valid. Conversely, if the nonnaive subjects behave more like linguists than like naive subjects, one could maintain that linguists' judgments were artifactual and that the nonnaive subjects had too much linguistic training, such that they were exhibiting the same biases as linguists. Thus, subjects with some knowledge of linguistics can never be used to decide this issue definitively. What is needed is truly naive subjects who nonetheless have been given a very good understanding of what is meant by grammaticality.\footnote{I am aware of only two paradigms that have systematically addressed the issue of training naive subjects. \citet{RyanEtAl1984} found that explicit training on how to perform a grammaticality judgment task did not improve the performance of their kindergarten-age subjects, while \citet{McDanielEtAl1990} found that training did help even their 4-year-old subjects.}
(One might, however, question whether this is possible even in principle.)
At least three other studies have compared linguist and nonlinguist judgments directly.\footnote{The only other empirical basis we have for comparing linguists and nonlinguists would have to come from separate studies that use the same procedure but with different kinds of subjects. For example, a study by \citet{ElliotEtAl1969} used mostly linguists, whereas \citegen{Greenbaum1973} replication, described in \sectref{sec:5.2.2}, used all nonlinguists and got different results, but Greenbaum tried to eliminate other procedural problems with the design of \citet{ElliotEtAl1969}, so the studies are no longer directly comparable. This is the only such instance I am aware of.
}
One of these was an informal experiment conducted by \citet[93, fn. 4]{Greenbaum1988} that was similar to Spencer's and found similar results. Another, reported in a very brief article, is by \citet{Rose1973}. Rose also took his stimulus items from linguistic articles, asking subjects to classify them as acceptable or unacceptable (details of the method are not given). Half of the subjects were told to play the role of an editorial assistant working for a strict editor, while the other half had to play the role of a person attempting to help a foreign friend speak properly. Rose states that, overall, subjects agreed with the linguist authors 89\% of the time. I assume this is a percentage of the total individual judgments, rather than a pooled scheme like Spencer used. This number is not nearly as informative as Spencer's, since it could represent a variety of scenarios, such as each sentence showing strong agreement, or most showing uniform agreement and some showing uniform disagreement. A chi-square analysis showed that linguist judgments and subject judgments were significantly related, but we have no indication as to which direction the disagreements took. There was no difference between the two roles played by subjects.
\citegenp{SnowEtAl1977} second experiment repeated the procedures of the first, as reported in \sectref{sec:4.2}, but used eight linguists as subjects, allowing direct comparison with the results of their nonlinguist group. The linguists showed significantly greater within-subject consistency than the nonlinguists in the first experiment: 94.3\% on the absolute judgments. In part this might be attributable to a bias towards ``\textminus'' responses, which exceeded that of nonlinguists. (The authors do not report sentence-by-sentence comparisons, so we cannot say with certainty how often linguists were more stringent than nonlinguists; there is no basis for comparison with Ross\ia{Ross, {John Robert}} or Spencer\ia{Spencer, {Nancy J.}} on this issue.) Linguists' consistency between absolute ratings and rank-orderings\is{absolute rating (of acceptability), versus relative ranking} was also significantly higher, and they showed greater between-subjects agreement, with Kendall coefficients of between .581 and .844. As for whether the linguists' judgments differed from those of the nonlinguists, the mean rankings of sentences by the two groups
showed a high correlation (Spearman {$\rho$} = .89), as did the absolute ratings ({$\rho$} = .84). While this is a higher rate of agreement than Spencer found, we must consider that Snow and Meijer use the mean ratings of a group of linguists, rather than a single linguist's judgments. Also, as they themselves point out, Spencer counted as disagreements any cases where nonlinguists showed disagreement among themselves; this was not taken account of in Snow and Meijer's study. Thus, the two ratings are not directly comparable. The authors draw a number of methodological conclusions, including the interesting suggestion that while comparing absolute judgments with rank-orderings\is{absolute rating (of acceptability), versus relative ranking} provides a useful check of judgmental consistency, the fact that a sentence is judged inconsistently might say more about the sentence than about the quality of the judges, for instance that it has some shifty properties. With regard to the implications of linguists' higher consistency of judgment, they suggest two alternative interpretations. Either linguists have learned to ignore minor irrelevant differences among sentences, such as their semantic plausibility, or they have learned to apply their theory to unclear cases. The extent to which each of these turns out to be right will obviously determine whether this improved consistency is a desirable property.
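The group comparisons Snow and Meijer report are Spearman rank correlations between the two groups' mean ratings, i.e., the Pearson correlation of the rank vectors. A self-contained sketch (pure Python, with purely illustrative rating vectors, not the study's data) makes the computation explicit:

```python
def ranks(xs):
    """1-based ranks of the values in xs, averaging tied values."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        for k in range(i, j + 1):          # tied block gets the average rank
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def spearman_rho(x, y):
    """Spearman's rho: Pearson correlation computed on the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical mean acceptability ratings for eight sentences:
linguists    = [4.8, 4.5, 3.9, 3.1, 2.6, 2.0, 1.4, 1.1]
nonlinguists = [4.6, 4.7, 3.5, 2.8, 3.0, 1.9, 1.6, 1.2]
print(round(spearman_rho(linguists, nonlinguists), 2))  # -> 0.95
```

Because only the orderings enter the statistic, two groups can agree almost perfectly in rank (rho near .9, as Snow and Meijer found) while still differing in how stringent their absolute ratings are.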
\citet{Valian1982} has explored in some detail the parallels between linguists' use of their own judgments and expert judgment in other fields. She argues that linguists giving judgments are in relevant respects just like experts judging wine, tea, or cheese, so to the extent that the latter have proven to be useful, in fact essential (e.g., in maintaining uniform taste of a product year after year), the former could also. It is instructive to enumerate these parallels. Tasters, like linguists, are fallible, but their errors are within acceptable limits, and they know their task well enough to be able to take systematic steps to reduce the likelihood of error. For example, they arrange their samples in a particular order, not tasting a heavy-bodied wine before a light-bodied one. Linguists are similarly aware that order of judgment can affect their intuitions. Tasters have a priori biases, e.g., by being Bordeaux lovers rather than burgundy lovers, which makes them differentially sensitive to certain tastes. Similarly, linguists clearly have a priori biases. Tasters also come at their task with prior information about the samples they are tasting, e.g., what region a wine is from. Linguists' theories similarly provide a classification of sentences that are judged. In both cases, this additional information can allow finer judgments to be made and can focus attention on particular aspects of a sample. Valian argues that to have a completely open mind about the material at hand is to lack any experience with it, which results in the inability to
make consistent or fine discriminations. In the case of wine, things may all taste the same. Some people excel at different kinds of judgments than others do. In general, while all kinds of judgments are in some sense subjective, this does not mean they cannot be reliable and valid, especially when we acknowledge that there are strategies we can adopt for making them so.
\subsection{Literacy and Education}\label{sec:4.4.2}
\citet[31\textendash{}44]{Birdsong1989}, \citet{BialystokEtAl1985}, and \citet{MasnyEtAl1985} provide extensive reviews of research examining the relationship between literacy, education, and metalinguistic skills, including grammaticality judgments, and comment on the debate over which one(s) might be prerequisite(s) for the other(s). \citet{Bialystok1986} suggests that schooling contributes to her dimension of linguistic control, implicated in the ability to objectify language for judging purposes, while literacy adds to one's analyzed knowledge. (See \sectref{sec:6.2.1} for more discussion of this model.) I present here a few studies from this field.
The largest and most fascinating project on this topic was conducted by \citet{ScribnerEtAl1981}, who did several years of field work among the \isi{Vai} people of \isi{Liberia}. These people have invented their own syllabic writing system, which is taught to some children in the home. Formal schooling, for those who manage to get it, is conducted in English; some \isi{Vai} also know \isi{Arabic}. Scribner and Cole were interested in teasing apart the effects of schooling and literacy, and so the fact that there were \isi{Vai} monoliterates who had no formal schooling was crucial.\footnote{I should point out that theirs was a huge anthropological and psychological study, of which the metalinguistic tasks reported here constituted a tiny part.}
It was their hypothesis that writing contributes to the objectification of language, independent of any general cognitive advantages it might entail. (In fact, they found very little evidence that literacy in either \isi{Vai} or \isi{Arabic} produces advantages for problem solving or other cognitive tasks.) More specifically, they believed that deliberate written composition in one's native language increases one's understanding of its formal properties, an idea that dates back to Vygotsky.\ia{Vygotsky, Lev}
Scribner\ia{Scribner, Sylvia} and Cole\ia{Cole, Michael} used three kinds of metalinguistic task to test this theory. The first involved orally presenting paired sentences, one good and one bad, and asking subjects to choose the good one and explain why the other one was bad.
Examples \REF{ex:4:5} and \REF{ex:4:6} below give rough English equivalents of the type of structures involved:
\ea\label{ex:4:5}
\ea He shot me at the gun.
\ex He shot the gun at me.
\z
\z
\ea\label{ex:4:6}
\ea These children, what is its name?
\ex These children, what are their names?
\z
\z
\noindent
The second task called for subjects to explicitly identify some grammatical principle of \isi{Vai}. This is illustrated in \REF{ex:4:7}, where the relevant distinction is alienable versus inalienable possession.
\ea%7
\label{ex:4:7}
People say ``my (\textit{{\ng}}) father,'' but ``my (\textit{na}) book''; they say ``my (\textit{{\ng}}) sibling,'' but ``my (\textit{na}) wife.'' Why do people sometimes say \textit{{\ng}} and sometimes say \textit{na}?
\z
\noindent
(Apparently a wife is viewed as an acquired possession rather than a relative.) Subjects' explanations on these two tasks were scored on a scale of 0\textendash{}7. Zero denoted irrelevant answers, such as ``The old people say it like that,'' ``Bad \isi{Vai},'' and ``Not a good \isi{Vai} speaker.'' A score of 1 was given to responses that claimed the sentence was semantically inappropriate, and higher scores denoted increasing degrees of grammatical relevance. While all groups were able to identify the bad sentence in the first task, their explanation abilities on the two tasks differed according to literacy and education. On one survey, the average explanation scores were 3.9 for illiterates, 4.6 for \isi{Vai} literates, and 5.6 for Vai-Arabic\is{Vai}\is{Arabic} biliterates. A replication found scores of 2.3, 2.9, and 3.2, respectively. Multiple regression analysis showed that, of all the demographic data that were available about the subjects, \isi{Vai} literacy was the only factor that predicted these differences.\footnote{Interestingly, similar differences were found by \citet{LilesEtAl1977} when comparing the judgments of normal versus language-disordered children. They found that language-disordered children not only make fewer accurate judgments on certain types of syntactic errors, but they can recognize some errors without being able to correct them, whereas normals have almost no trouble correcting detected errors. In this case, however, the authors suggest that inferior production skills might be responsible, although their reasoning is quite speculative.}
The third task involved correcting errors of various types (shown in \REF{ex:4:8}) and explaining what was wrong with an ungrammatical sentence.
\ea\label{ex:4:8}
\ea My child is crying yesterday.
\ex This house is fine very.
\ex I don't want to bother you (plural) because you (singular) are working.
\ex This is the chief's child first.
\ex These men, where is he going?
\ex They have planting the oranges.
\z
\z
\noindent
On this task explanations were scored on a scale of 0\textendash{}5. The authors provide \tabref{tab:2}, summarizing the number of errors fixed correctly and the total of the explanation scores on the six sentences. Here the regression analysis showed that schooling was the biggest contributor to explanation scores, and Vai literacy was also a factor. It is important to note that literates and illiterates performed equally well on other tasks examining their ability to explain things, so the effect seen in this experiment is specific to the linguistic content of the problem. We can conclude from this work that literacy and schooling have little effect on the ability to identify ungrammaticality, and hence to make grammaticality judgments in the narrow sense, but both factors appear to affect explicit grammatical knowledge, and hence will confound many other metalinguistic tasks.\footnote{A similar result in another domain is reported by \citet{ReedEtAl1979}, also based on work in \isi{Liberia}. These authors examined the arithmetic abilities of \isi{Vai} and \isi{Gola} tailors to assess the contribution of formal Western education as compared to traditional apprenticeship. Their findings suggest that these abilities can be very domain-specific: the same problem framed in terms of monetary units may be more difficult than when framed in terms of numbers of buttons to be sewn on pants, for instance. They also found that the types of errors made differ systematically between the two groups with different types of education, and seem to reflect the different ways in which arithmetic is taught in school versus on the job. The general result is that skill in applying knowledge to a particular domain does not always imply the ability to use that knowledge in the abstract.}
\begin{table}
\caption{Comparison of Vai Error Correction and Explanation as a Function of Literacy \citep{ScribnerEtAl1981}}
\label{tab:2}
\resizebox{\textwidth}{!}{\begin{tabular}{lSSSSS}
\lsptoprule
& \multicolumn{1}{c}{Maximum} & & \multicolumn{1}{c}{Arabic} & \multicolumn{1}{c}{Vai} & \multicolumn{1}{c}{Schooled}\\
& \multicolumn{1}{c}{possible score} & \multicolumn{1}{c}{Nonliterate} & \multicolumn{1}{c}{monoliterate} & \multicolumn{1}{c}{monoliterate} & \multicolumn{1}{c}{literate}\\
\midrule
Number correct & 6 & 5.1 & 4.5 & 5.0 & 5.6\\
Explanation score & 30 & 6.9 & 8.1 & 9.9 & 15.7\\
\lspbottomrule
\end{tabular}}
\end{table}
Other researchers of literacy effects include \citet{Scholes1987},
who studied 10 English-speaking adult illiterates and found that they seem to process sentences without making use of all the syntactic information available. For instance, they report anecdotally that a spoken sentence like \textit{The window in the room with the chair was broken} is taken to mean that the chair got broken.\footnote{One might suspect the presence of some third, pathological factor affecting both ability to acquire literacy skills and ability to comprehend sentences, but Scholes and Willis's very brief description gives no indication of such a factor.}
\citet{Birdsong1989} cites other work by these authors suggesting that illiterates are insensitive to passive morphology, and that they judge grammaticality according to pragmatic validity and moral correctness or desirability. Scholes and Willis conclude that illiterates have vastly different grammars from literates, but Birdsong\ia{Birdsong, David} counters that their judgments might be based on different criteria, without the underlying grammars necessarily differing. \citet{Heeschen1978} had similar experiences with the \isi{Eipo}, an illiterate, neolithic horticultural people of West \isi{New Guinea}. He states that they are ``uneasy and unsuccessful'' in trying to objectify language, and concludes that 90\% of their grammaticality judgments of possible but rarely occurring verbal affix combinations were simply wrong.\footnote{Heeschen does not explain how he determined what the correct forms actually were.}
However, their judgments on word order were ``absolutely correct.'' Heeschen suggests why this difference should be found: some affix combinations are rare and hence hard to see as correct out of context, whereas word order is a feature of every utterance that cannot be avoided. This hypothesis is supported by the fact that in \textit{natural} situations (e.g., when native speakers corrected him in conversation), as opposed to structured judgment tasks, ``their judgments as native speakers proved to be perfectly reliable'' (p. 177). Thus, at least for this culture, it seems that illiteracy does not imply the inability to make accurate judgments, but just makes it hard to do so in an abstract context.
\subsection{Other Experiential Factors}\label{sec:4.4.3}
As in the previous section, I conclude with a collection of remarks on other types of experience that might systematically affect judgments of grammaticality. The most obvious would be the amount of experience with the language in question. There have been numerous studies of metalinguistic skill in nonnative learners of a second language, as part of the second-language teaching literature, which is beyond the scope of this investigation (see \citet{Ellis1991} for a review). Clearly, one would expect nonnative speakers to differ from their native counterparts in judgments as well as in language use, but the results of a third experiment
in \citegen{SnowEtAl1977} study (see also Sections \ref{sec:4.2} and \ref{sec:4.4.1})
suggest that native intuitions may be acquired independently of native skill in language use.
This experiment involved the same procedure as was used in Snow and Meijer's first and second studies, this time with nonnative speakers of Dutch as subjects. Their within-subject consistency was at least as good as that of native speakers, but predictably they showed more between-subject disagreements, since their degree of familiarity with Dutch was not matched. Nonetheless, their pooled judgments agreed somewhat better with the native speaker group than those of the linguists did. And, surprisingly, the three virtually bilingual non-natives did not match the native group better than the remaining poorer Dutch speakers (as measured by correlations in rank-ordering). The authors interpret this to mean that one's skill in speaking a language can improve without one's syntactic intuitions becoming more nativelike.\footnote{\citet{Chaudron1983} points out that there were only eight nonnative subjects in this experiment altogether, so due caution is advised in interpreting the results.}
Conversely, they suggest that classics scholars, for instance, show the opposite: they develop strong intuitions without being able to speak the language. Together with the large amount of variation in judgments among native speakers found in the first two experiments, Snow and Meijer's results lead them to conclude that speaking and understanding involve a different language faculty from judging, since skill in one is not a good predictor of skill in the other. On the other hand, \citet{Coppieters1987} claims that his subjects appeared to have achieved native levels of production and comprehension, and yet their judgments were significantly different from those of native speakers. But, as discussed in \sectref{sec:3.5}, Coppieters's study had not actually shown that the two groups were identical in their \textit{use} of the crucial forms, but only on unrelated general measures of fluency, mastery of various constructions, and so forth. Thus, we have no basis for concluding that nonnative speakers display differences unique to their judgments. More likely, their grammars simply differ from those of natives on the points investigated, and this would show up in everyday use as well if these constructions occurred. It has also been proposed that experience in \textit{another} language (e.g., bilingualism) leads to differences in metalinguistic ability (\citealt{VanKleeck1982}; \citealt{Bialystok1986}; see \sectref{sec:5.3.2}).
One would expect certain types of nonlinguistic experience to influence judgments as well, for example, factual world knowledge, and cultural and social
experiences and beliefs. \citegenp{Greenbaum1977c} review
cites correlations between judgments and occupation or socioeconomic class, without elaborating. \citet{SvartvikEtAl1977} found differences on judgments by 14-to-17-year-olds concerning the use of \textit{ought} correlated with the different academic standing of three groups of English schools they attended. I am not aware of any studies showing that these variables can affect \textit{structural} judgments. A purported example of how world knowledge is relevant to grammaticality is provided by \citet{Belletti1988}. According to her, the following two sentences involving subject postposing contrast in grammaticality in Italian:
\ea%9
\label{ex:4:9}
\ea[\hspaceThis{*}]{
\gll È stato rubato il portafoglio a Maria. \\
has been stolen the wallet to Maria\\}
\ex[*]{
\gll È stata rubata la pianta a Maria. \\
has been stolen the plant to Maria \\}
\z
\z
\noindent
The crucial difference here is claimed to be that we can assume people normally own only one wallet, but the same is not true for a plant. If this is true, someone from a different culture presumably would not show this distinction. Unfortunately, according to one native speaker I consulted (Mirco Ghini, personal communication), while (\ref{ex:4:9}b) does require the presupposition of a unique plant that the speaker is referring to, it is structurally fine. In fact, it represents the unmarked word order for expressing this idea, and is clearly better than other starred examples given by Belletti that do seem to violate structural constraints. Evidently, a systematic investigation of this point is called for. G. \citet{Lakoff1971}
has argued that the well-formedness of a sentence can \textit{never} be assessed without reference to a set of presuppositions about the nature of the world, and cites numerous sentences where people differ in this regard. For example, whether \textit{My cat enjoys tormenting me} is grammatical depends on whether one believes cats to have minds. In cultures where events are believed to have this property, the equivalent of \textit{My birth enjoys tormenting me} is perfectly normal. Similarly, Lakoff has argued that grammaticality judgments of \textit{John called Mary a Republican and then \textsc{she} insulted \textsc{him}}
depend on the speaker's beliefs, and perhaps even on John's and Mary's. \citet{Chomsky1972} argues instead that such sentences should be considered grammatical regardless of anyone's beliefs, and that it should be left to the semantic component of the grammar to specify the presuppositions they require. (See also \citet{BarHillel1971}, who argues against those who feel ``obliged to force a clearly pragmatic matter into a syntactico-semantic straitjacket.'')
\section{Conclusion} \label{sec:4.5}
The studies reviewed in this chapter show that a considerable proportion of individual differences in grammaticality judgments can be attributed to specific linguistically relevant features of the person, be they inborn or the result of experience. Nonetheless, we can be fairly certain that there remains much variation that we cannot factor out in this way. In this regard grammaticality judgments are like most other forms of behavior, including other metalinguistic tasks such as ambiguity detection \citep{KessEtAl1983}. A common genetic endowment provides for a certain degree of commonality, and certain gross parameters of variation, but beyond that differences abound. This state of affairs, however immutable, presents frustrating problems once we acknowledge that the study of grammar, while in principle a study of each individual's mental structures, must appeal to the judgments of many individuals. However, before we resign ourselves completely, we should consider that not all the variation that shows up within and across experiments is attributable to real differences between subjects. Subtle differences in procedures or in the sentences themselves can add error to the actual variation. In the next chapter I turn my attention to such confounding sources.
\documentclass[./main]{subfiles}
\newboolean{isMain}
\setboolean{isMain}{false}
\begin{document}
\section{Body}
Here, I want to cite \citet{pikettyDistributionalNationalAccounts2018b}
and \citet{chettyAreMicroMacro2011} again.
\ifthenelse{\boolean{isMain}}{ %pass
}{ %else
\bibliography{bibtex-demo}
\bibliographystyle{econ-aea}
}
\end{document}
%
% Annual Cognitive Science Conference
% Sample LaTeX Paper -- Proceedings Format
%
% Original : Ashwin Ram (ashwin@cc.gatech.edu) 04/01/1994
% Modified : Johanna Moore (jmoore@cs.pitt.edu) 03/17/1995
% Modified : David Noelle (noelle@ucsd.edu) 03/15/1996
% Modified : Pat Langley (langley@cs.stanford.edu) 01/26/1997
% Latex2e corrections by Ramin Charles Nakisa 01/28/1997
% Modified : Tina Eliassi-Rad (eliassi@cs.wisc.edu) 01/31/1998
% Modified : Trisha Yannuzzi (trisha@ircs.upenn.edu) 12/28/1999 (in process)
% Modified : Mary Ellen Foster (M.E.Foster@ed.ac.uk) 12/11/2000
% Modified : Ken Forbus 01/23/2004
% Modified : Eli M. Silk (esilk@pitt.edu) 05/24/2005
% Modified : Niels Taatgen (taatgen@cmu.edu) 10/24/2006
% Modified : David Noelle (dnoelle@ucmerced.edu) 11/19/2014
% Modified : Roger Levy (rplevy@mit.edu) 12/31/2018
%% Change "letterpaper" in the following line to "a4paper" if you must.
\documentclass[10pt,letterpaper]{article}
\usepackage{cogsci}
\cogscifinalcopy % Uncomment this line for the final submission
\usepackage{graphicx}
\usepackage{pslatex}
\usepackage{apacite}
\usepackage{float} % Roger Levy added this and changed figure/table
% placement to [H] for conformity to Word template,
% though floating tables and figures to top is
% still generally recommended!
%\usepackage[none]{hyphenat} % Sometimes it can be useful to turn off
%hyphenation for purposes such as spell checking of the resulting
%PDF. Uncomment this block to turn off hyphenation.
%\setlength\titlebox{4.5cm}
% You can expand the titlebox if you need extra space
% to show all the authors. Please do not make the titlebox
% smaller than 4.5cm (the original size).
%%If you do, we reserve the right to require you to change it back in
%%the camera-ready version, which could interfere with the timely
%%appearance of your paper in the Proceedings.
\title{Controlling the retrieval of general vs specific semantic knowledge in the instance theory of semantic memory}
\author{{\large \bf Matthew J. C. Crump (mcrump@brooklyn.cuny.edu)} \\
Department of Psychology, 2900 Bedford Ave \\
Brooklyn, NY 11210 USA
\AND {\large \bf Randall K. Jamieson (randy.jamieson@umanitoba.ca)} \\
Department of Psychology, 190 Dysart Rd \\
Winnipeg, MB R3T 2N2 Canada}
\begin{document}
\maketitle
\begin{abstract}
Distributional models of semantic cognition produce word-embeddings sensitive to how words co-occur in local contexts in natural language. Word-embedding similarity space can be compromised by high frequency words, but strategies for managing base rates of word occurrence are usually not cognitively plausible. In a series of simulations, we show that an instance based distributional model \cite<ITS, >{jamiesonInstanceTheorySemantic2018a} can take advantage of learning and memory operations to control how base rate information is integrated into word-embeddings. We suggest these cognitive processing assumptions may allow people to control production of general versus specific semantic knowledge.
\textbf{Keywords:}
distributional semantics; negative information; instance theory; surprise-driven learning; retrieval
\end{abstract}
\section{Introduction}\label{introduction}
Distributional models of semantics produce word embeddings sensitive to word co-occurrence structure in natural text. Word embeddings are more similar between words appearing in similar than different local contexts (sentences, paragraphs). The quality of word embeddings depends on their use. For applied purposes, word-embeddings could be used to train a classifier (e.g., for sentiment analysis) and quality assessed by classifier accuracy. For theoretical purposes, distributional models of semantic cognition are evaluated by fits to human performance in semantic tasks. Word embedding quality also depends on base rates of word occurrence. High frequency words can dominate semantic vectors, causing embeddings to become more similar and less indicative of nuanced meaning. Standard approaches to managing base rate information have questionable cognitive plausibility. The present work merges assumptions from two memory models, the instance theory of semantics \cite<ITS,>{jamiesonInstanceTheorySemantic2018a} and the instance theory of associative learning \cite<MINERVA-AL,>{jamiesonInstanceTheoryAssociative2012}, to deliver a cognitively plausible means of managing base rate information for constructing veridical word embeddings.
Common base rate management strategies are stop word exclusion, transformation, and negative sub-sampling. Stop word exclusion is arbitrary and fails to account for included word base rates. Early models like LSA \cite{landauerSolutionPlatoProblem1997} used transformation. Word frequencies were log transformed and divided by their entropy across document contexts. The log transform compresses frequency counts, and the division by entropy weights words by the specificity of their occurrence in local contexts. Here, context-ubiquitous high-frequency words (high entropy) are weighted less strongly than context-specific words (low entropy). Neural network models like word2vec \cite{mikolovDistributedRepresentationsWords2013} use a process of sub-sampling adversarial examples during training. Here, network weights are modified by prediction error from positive and negative examples; and, negative examples are chosen randomly as a function of word frequency. Importantly, word2vec produces high-quality word embeddings that can explain more variance in human semantic judgments than other models \cite{manderaExplainingHumanPerformance2017a}, and these improvements have been attributed to the sub-sampling procedure. Relatedly, \citeauthor{johnsRoleNegativeInformation2019} \citeyear{johnsRoleNegativeInformation2019} created analytic transforms mimicking the base rate management effects of sub-sampling negative information that appear to improve the quality of word-embeddings in general.
We are optimistic that distributional models hold insights for semantic cognition, but we question the cognitive plausibility of common strategies for managing base rate information. It is unlikely that people ignore stop words, unclear how people would weight word frequency knowledge by information theoretic transforms (or apply singular value decomposition), and doubtful that people employ negative sub-sampling when they encounter words. In our view, a satisfactory model should detail a cognitively plausible process for managing base rates in constructing semantic knowledge. We propose a cognitive solution for the ITS model that merges theoretical insights from traditions in memory and learning.
ITS applied instance theory principles \cite{jacobyNonanalyticCognitionMemory1984} to distributional semantics by combining BEAGLE word representations \cite{jonesRepresentingWordMeaning2007} with MINERVA 2 encoding and retrieval operations \cite{hintzmanMINERVASimulationModel1984}. ITS is unique in assuming instance rather than prototype representations, which fail to capture polysemy. For example, ``bank'' could refer to a river or financial institution; but, a prototype representation averages the distinction with a single vector partway between the two meanings. By contrast, ITS encodes individual sentences as memory traces, and produces word embeddings at the time of retrieval. Retrieval is similarity driven and context-sensitive, allowing production of semantic vectors tailored to the local context (river vs. piggy) of a probe word (bank).
ITS used the cognitively questionable, but common, practice of stop word exclusion to manage base rate information. Here, we show that ITS 2 can manage base rates in cognitively meaningful ways by adopting assumptions from memory and learning theory. In doing so, ITS 2 consolidates processing assumptions across MINERVA 2 and two extensions, ITS and MINERVA-AL \cite{jamiesonInstanceTheoryAssociative2012}, the latter of which accounts for a variety of associative learning phenomena. First, we borrow the discrepancy encoding rule from MINERVA-AL, which constrains encoding to unexpected features of an experience. However, we adopted the term ``weighted-expectancy subtraction'' here because we show how the principle can be applied at encoding or retrieval. Second, we borrow iterative retrieval from MINERVA 2, which allows successive waves of retrieval based on internal memory responses. By simulation, we show that these encoding and retrieval operations modulate the integration of base rate information in ITS 2 semantic vectors, and subsequently determine which aspects of the semantic space ITS recovers.
As an overview we first define ITS and ITS 2. This work is in the preliminary stage of verifying the consequences of the new processing assumptions, so we conducted simulations on an artificial language with known word co-occurrence structure. This enabled a clear accounting of the aspects of the semantic space that ITS and ITS 2 recover. We present our simulations and then discuss testing the implications in natural languages as a next step.
\subsection{ITS and ITS 2}
We define ITS and then ITS 2 modifications. Importantly, we defined a version of ITS 2 that implemented weighted expectancy subtraction (discrepancy encoding) during encoding throughout training; and a version that used a combination of weighted expectancy subtraction and iterative retrieval only during retrieval, after training was complete.
\subsubsection{Word representation}
Following BEAGLE, words are arbitrary perceptual objects with no pre-existing similarity. Each word is assigned an environment vector by randomly sampling \(n\) values from a normal distribution (\(\mu = 0\), \(\sigma = 1/n\)), where \(n\) determines the dimensionality of the vector space. Thus, all words are orthonormal in expectation. ITS can accommodate other representational assumptions, and for convenience and didactic purposes we used one-hot coding, or a simple identity matrix.
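The representational assumptions above can be sketched in Python with NumPy; the function name and defaults are ours for illustration, not taken from the original code:

```python
import numpy as np

def make_environment_vectors(num_words, n, one_hot=False, seed=0):
    """Assign each word an arbitrary environment vector.

    With one_hot=True, words are rows of an identity matrix (the coding
    used in the simulations); otherwise values are drawn from a normal
    distribution with mean 0 and standard deviation 1/n, as stated in
    the text, so distinct words are roughly orthogonal in expectation.
    """
    if one_hot:
        return np.eye(num_words)
    rng = np.random.default_rng(seed)
    return rng.normal(loc=0.0, scale=1.0 / n, size=(num_words, n))
```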
\subsubsection{Memory}
In ITS, slices of experience with words in context are represented as composite traces and stored as new row entries to a memory matrix. For example, committing a sentence to memory involves summing the environmental vectors for the words in the sentence:
\begin{equation}
M_i = c_i = \sum_{j=1}^{j=h} w_{ij}
\label{eq:memory}
\end{equation}
\(M_i\) is the memory matrix, and \(c_i\) is a sentence context. \(c_i\) is stored in a new row in \(M_i\) as a composite trace by summing the \(w_{ij}\) environment vectors for each word, from \(1\), to \(h\), in the sentence. For example, the sentence ``I like cats'' is the sum of \(w_{I} + w_{like} + w_{cats}\). The number of words inside a trace is a windowing parameter that must be larger than one word, otherwise the memory will return perceptually similar traces, rather than semantically similar ones. We note that the memory matrix becomes a document-term matrix of word frequencies when the environment vectors for words are taken from an identity matrix.
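Under these assumptions, storing a sentence is a one-line summation. A minimal sketch (our own naming), which also makes the document-term observation above concrete when the environment vectors come from an identity matrix:

```python
import numpy as np

def encode_sentence(memory, word_ids, env):
    """Append one composite trace to memory: the sum of the environment
    vectors of the words in the sentence (the memory equation above)."""
    memory.append(env[word_ids].sum(axis=0))
    return memory
```

With `env = np.eye(V)`, each stored trace is simply a row of term frequencies for that sentence.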
\subsubsection{Retrieval}\label{retrieval}
Word meaning is constructed at retrieval. Memory is probed with a word and returns an echo response. The echo is the sum of similarity weighted traces to the probe, and taken as the semantic vector for the probe word. Retrieval and echo construction follow MINERVA 2. First, memory \(M\) is probed with a word environment vector \(w_i\), and the cosine similarities between \(w_i\) and all traces \(M\) are computed to produce a vector of trace activations \(a_i\):
\begin{equation}
a_i = \left(\frac{\sum_{j=1}^{j=n}p_j \times M_{ij}}{\sqrt{\sum_{j=1}^{j=n}p_j^2}\sqrt{\sum_{j=1}^{j=n}M_{ij}^2}}\right)^{\tau}
\label{eq:activation}
\end{equation}
where, \(a_i\) is the activation (cosine similarity to probe) of trace \(i\) in memory, \(p_j\) are the \(jth\) features of the probe, \(M_{ij}\) are the \(jth\) features of each trace \(i\) in memory, and \(n\) is the number of columns in memory setting the dimensionality of the vector space. The vector of activations is raised to a power, \(\tau\), controlling a retrieval gradient determining selectivity in the composition of the echo. The activation vector is a record of similarity between the traces and the probe spanning the range \(-1\) to \(1\), with \(a_i = 1\) when a trace is identical to the probe, \(a_i = 0\) when a trace is orthogonal to the probe, and \(a_i = -1\) when the trace is opposite the probe.
In the second step, the activated traces are summed to produce a composite memory response, called the echo. Specifically, all traces in memory are multiplied by their activations, and the echo is formed by summing the weighted traces:
\begin{equation}
e_j = \sum_{i=1}^{i=m}a_i \times M_{ij}
\label{eq:echo}
\end{equation}
where, \(e_j\) is the \(jth\) feature of the echo, \(m\) is the number of traces in memory, \(a_i\) is the activation of trace \(i\), and \(M_{ij}\) are the \(jth\) values of each trace \(i\) in memory. In ITS, the echo is used as the semantic representation for the probe word.
Words are compared for semantic similarity by comparing their respective echoes. Specifically, the semantic similarity, \(r\), between two words is computed as the cosine between their echoes:
\begin{equation}
r(p_1,p_2) = \frac{\sum_{j=1}^{j=n}e_{1j} \times{} e_{2j}}{\sqrt{\sum_{j=1}^{j=n}e_{1j}^2}\sqrt{\sum_{j=1}^{j=n}e_{2j}^2}}
\label{eq:semanticsim}
\end{equation}
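The retrieval and similarity steps can be sketched in Python with NumPy (names are ours; the sign-preserving power is one common convention for handling negative activations with even values of \(\tau\)):

```python
import numpy as np

def echo(probe, memory, tau=3):
    """Activation (cosine of the probe with each trace, raised to tau)
    followed by the echo: the activation-weighted sum of traces."""
    M = np.asarray(memory, dtype=float)
    denom = np.linalg.norm(M, axis=1) * np.linalg.norm(probe)
    a = (M @ probe) / np.where(denom == 0, 1.0, denom)  # cosine activations
    a = np.sign(a) * np.abs(a) ** tau                   # sign-preserving power
    return a @ M

def semantic_similarity(w1, w2, memory, tau=3):
    """Cosine between the echoes of two probe words."""
    e1, e2 = echo(w1, memory, tau), echo(w2, memory, tau)
    return float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2)))
```

With one-hot coding, two words that co-occur in stored sentences produce overlapping echoes and hence high similarity, while words from disjoint sentences do not.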
We now describe ITS 2 modifications that manage base rate information. The modifications can be implemented during encoding throughout training, or only at retrieval after training is complete. The encoding variant is more computationally expensive, as the contents of ITS 2 memory must be transformed over the course of training the model, on a trace by trace basis.
\subsection{ITS 2: weighted expectancy subtraction at encoding}
ITS 2 implements weighted expectancy subtraction during encoding in a similar manner to MINERVA-AL's discrepancy encoding rule. The difference is the subtraction between the probe and the echo is weighted by \(x\), controlling the amount of expectation to be subtracted. Weighted expectancy subtraction is applied at each step across training. For example, when a new sentence is experienced, the sentence context vector \(c_i\) is used as a probe to memory to generate an echo. The echo represents the memories' expectation for the new sentence. If the new sentence is fully expected, then the memory can reconstruct the new sentence on the basis of its existing traces. The magnitude of the echo vector contains the sum of many traces, and is generally much larger than the magnitude of the sentence context vector. As a result, before subtraction, the probe and echo vectors are normalized,
\begin{equation}
c'_j = \frac{c_j}{\max_{1 \le k \le n} | c_k |}
\label{eq:normprobe}
\end{equation}
where, \(c_j\) is the \(jth\) element of the probe vector; each element is divided by the largest absolute value among the probe's elements to produce the normalized \(c'_j\). Similarly, the echo is normalized such that,
\begin{equation}
e'_j = \frac{e_j}{\max_{1 \le k \le n} | e_k |}
\label{eq:normecho}
\end{equation}
where, \(e_j\) is the \(jth\) element of the echo; each element is divided by the largest absolute value among the echo's elements to produce the normalized \(e'_j\).
Next, the new trace encoded to memory is defined by subtraction of a weighted normalized echo from the normalized probe,
\begin{equation}
M_{ij} = c'_j - xe'_j
\label{eq:ITS2encoding}
\end{equation}
where, \(M_{ij}\) is the new row entry in the memory matrix, and \(x\) is a weighting parameter (from 0 to 1) controlling the proportion of the normalized echo subtracted from the normalized probe. When \(x\) is set to 0, ITS 2 becomes equivalent to ITS.
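A sketch of the encoding variant (Python/NumPy; names are ours, and echo construction follows the retrieval equations above):

```python
import numpy as np

def max_normalize(v):
    """Divide a vector by its largest absolute element."""
    m = np.max(np.abs(v))
    return v / m if m > 0 else v

def echo(probe, memory, tau=3):
    """Activation-weighted sum of traces, as in standard ITS retrieval."""
    M = np.asarray(memory, dtype=float)
    denom = np.linalg.norm(M, axis=1) * np.linalg.norm(probe)
    a = (M @ probe) / np.where(denom == 0, 1.0, denom)
    a = np.sign(a) * np.abs(a) ** tau
    return a @ M

def its2_encode(memory, sentence_context, x=0.5, tau=3):
    """Weighted expectancy subtraction at encoding: store the normalized
    probe minus x times the normalized echo.  With x = 0 this reduces to
    plain ITS encoding (up to scaling)."""
    if not memory:  # first trace: memory has no expectation yet
        memory.append(np.asarray(sentence_context, dtype=float))
        return memory
    e = echo(sentence_context, memory, tau)
    memory.append(max_normalize(sentence_context) - x * max_normalize(e))
    return memory
```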
\subsection{ITS 2: weighted expectancy subtraction at retrieval}
ITS 2 can also conduct a similar operation of weighted expectancy subtraction at the time of retrieval. In this case, memory is constructed identically to ITS, except weighted expectancy subtraction occurs at retrieval through a two-step iterative retrieval process. A probe word generates an echo from memory, and the echo is submitted as an ``internal'' probe to generate a second echo. The semantic representation for the word is taken as a weighted subtraction of the normalized second echo from the normalized first echo.
The first echo \(e_1\) is generated in the usual way, but then resubmitted as a probe to construct \(e_2\) by the same equations \ref{eq:activation} and \ref{eq:echo} used to construct \(e_1\). Both \(e_1\) and \(e_2\) are normalized following equation \ref{eq:normecho}. Whereas in ITS, the semantic representation for a word is defined as \(e_1\), the semantic representation for a word with weighted expectancy subtraction at retrieval in ITS 2 is:
\begin{equation}
s_i = e'_1 - xe'_2
\label{eq:ITS2retrieval}
\end{equation}
where, \(s_i\) is the semantic representation for the \(ith\) word, and \(x\) is a weighting parameter varying from 0 to 1 controlling the amount of \(e'_2\) subtracted from \(e'_1\).
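The retrieval variant can be sketched as follows (the helpers are repeated so the snippet runs on its own; names are ours):

```python
import numpy as np

def max_normalize(v):
    """Divide a vector by its largest absolute element."""
    m = np.max(np.abs(v))
    return v / m if m > 0 else v

def echo(probe, memory, tau=3):
    """Activation-weighted sum of traces, as in standard ITS retrieval."""
    M = np.asarray(memory, dtype=float)
    denom = np.linalg.norm(M, axis=1) * np.linalg.norm(probe)
    a = (M @ probe) / np.where(denom == 0, 1.0, denom)
    a = np.sign(a) * np.abs(a) ** tau
    return a @ M

def its2_retrieve(word_vec, memory, x=0.5, tau=3):
    """Two-step iterative retrieval: the first echo is resubmitted as an
    internal probe, and the word's semantic vector is e1' - x * e2'."""
    e1 = echo(word_vec, memory, tau)
    e2 = echo(e1, memory, tau)
    return max_normalize(e1) - x * max_normalize(e2)
```

Because the transformation happens only at retrieval, the memory itself is identical to ITS, which is what makes this variant cheaper than the encoding variant.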
\section{Simulations}
\begin{figure*}
\includegraphics[width=\textwidth]{ITS_cogsci_files/figure-latex/artlang-1.pdf}
\caption{Upper: The topic-word probability matrix defining the artificial language. Darker colors represent higher probability of word occurrence. Lower: Word-word similarity matrices from the first to fourth order.}\label{fig:artlang}
\end{figure*}
Our aim was to characterize how ITS and ITS 2 develop sensitivity to word co-occurrence structure. First, we created an artificial language with known co-occurrence structure. Next, we trained ITS on sentences from the artificial language and compared the semantic structure of ITS vectors to direct measures of the semantic structure of the language. Here, we were interested in determining which aspects of the language were recovered by ITS. Last, we determined whether the weighted expectancy subtraction process in ITS 2 would allow it to recover more veridical aspects of the co-occurrence structure than ITS.
\subsection{Artificial language}
The artificial language contained no grammar and only semantic structure based on word co-occurrence. The simplistic form offers a transparent window into the transformations of ITS 2. We created semantic topic generators that use unique collections of words to discuss a given topic, with some overlap across topics. The language contained 100 words and 10 topics. Each topic used 15 words, and overlapped with neighboring topics by five words on both sides. Each topic had a random word-occurrence probability distribution that summed to one. Figure \ref{fig:artlang} depicts the topic-word probability matrix defining the artificial language. A corpus was generated by randomly sampling topics (equal probability), and then constructing sentences from the topic by sampling \(n\) words as a function of their probability. Sentence-size varied randomly between 10 and 20 words per sentence. A corpus included 5,000 sentences.
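One way to realize this construction is sketched below (Python/NumPy; the wraparound makes each topic overlap both neighbors, matching the ``five words on both sides'' description, but the exact details of the generator used in the paper may differ):

```python
import numpy as np

def make_language(num_words=100, num_topics=10, words_per_topic=15,
                  overlap=5, seed=1):
    """Topic-word probability matrix: each topic draws on 15 words,
    sharing 5 with each neighboring topic; each row sums to 1."""
    rng = np.random.default_rng(seed)
    topics = np.zeros((num_topics, num_words))
    stride = words_per_topic - overlap  # topics advance by 10 words
    for t in range(num_topics):
        idx = (t * stride + np.arange(words_per_topic)) % num_words
        p = rng.random(words_per_topic)
        topics[t, idx] = p / p.sum()
    return topics

def sample_sentence(topics, rng):
    """Pick a topic uniformly, then 10-20 words by topic probability."""
    t = rng.integers(topics.shape[0])
    length = rng.integers(10, 21)
    return rng.choice(topics.shape[1], size=length, p=topics[t])
```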
The purpose of the simulations was to compare the semantic spaces generated by ITS and ITS 2 to known properties of the semantic space of the language. We defined the known semantic space at various orders of semantic similarity. At the first order, the true semantic representation for a word is its column vector in the topic-word probability matrix above. To visualize this semantic space we computed the cosine similarity between each pair of words (using their column vectors) and plotted the similarity matrix. The first word-word similarity matrix in figure \ref{fig:artlang} (lower panel) shows the structure of the artificial language that models are ostensibly attempting to recover. Words are more similar to each other within their topics than between topics, and there is some overlap because word usage overlaps across the topics. Words in topic one are not at all similar to words in topic nine because there is no overlap in word usage between those topics. The remaining panels in figure \ref{fig:artlang} show word-word similarity in higher-order space up to the fourth order. Each higher-order similarity space is derived from the one below it: for example, the second-order space uses columns from the first-order similarity matrix as word embeddings to compute a second-order word-word similarity matrix, and so on. In our language, because of word overlap between topics, words become increasingly similar to one another in higher-order space. A veridical model would recover the first-order semantic space.
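The construction of higher-order spaces can be made concrete. A short sketch (function names ours) that treats the columns of the current similarity matrix as the embeddings for the next order:

```python
import numpy as np

def cosine_sim_matrix(M):
    """Cosine similarity between the columns of M."""
    norms = np.linalg.norm(M, axis=0)
    norms = np.where(norms == 0, 1.0, norms)  # guard against zero columns
    X = M / norms
    return X.T @ X

def higher_order_spaces(topic_word, orders=4):
    """First- to nth-order word-word similarity matrices: each order
    uses the previous similarity matrix's columns as word embeddings."""
    spaces = [cosine_sim_matrix(topic_word)]   # first order: column vectors
    for _ in range(orders - 1):
        spaces.append(cosine_sim_matrix(spaces[-1]))
    return spaces
```

On a toy matrix with partially overlapping "topics", off-diagonal similarities grow with each order, matching the blurring described in the text.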
\subsection{Simulation 1: ITS}
We trained ITS on 5000 sentences, using one-hot coding (the $100 \times 100$ identity matrix) to form environment vectors for the words. Each word was coded as a 1, with 99 zeroes; the position of the 1 in the vector refers to the \(n\)th word in the vocabulary. As a result, the memory matrix is equivalent to a document-term matrix of raw term frequencies occurring in each document. We used a range of retrieval gradients (\(\tau\) = 0 to 9) and training intervals (100, 500, 1000, and 5000 sentences). At each interval we computed echoes as semantic representations for each word, and then a word-word similarity matrix from those vectors. To determine which aspects of the artificial language ITS recovered, we computed \(R^2\) between the ITS word-word similarity space and the first- to fourth-order word-word similarity spaces derived directly from the artificial language. The results are shown in figure \ref{fig:ITSsimple}.
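As a sketch of the retrieval step, we assume a MINERVA-style rule in which each sentence trace is activated by its cosine similarity to the probe raised to the power \(\tau\), with zero-similarity traces staying silent (this reading makes \(\tau = 0\) a square gradient that simply sums every sentence containing the probe word):

```python
import numpy as np

def echo(probe, memory, tau):
    """Retrieve an echo from memory (rows are sentence traces).

    Assumption: activation = cosine(probe, trace) ** tau for traces
    with positive similarity, zero otherwise; the echo is the
    activation-weighted sum of traces.
    """
    probe = np.asarray(probe, dtype=float)
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(probe)
    norms[norms == 0] = 1.0          # guard against empty traces
    cos = memory @ probe / norms
    act = np.where(cos > 0, cos ** tau, 0.0)
    return act @ memory

# a toy memory: three "sentences" over a three-word vocabulary
memory = np.array([[1.0, 1.0, 0.0],
                   [0.0, 1.0, 1.0],
                   [0.0, 0.0, 1.0]])
e = echo(np.array([1.0, 0.0, 0.0]), memory, tau=0)  # -> [1., 1., 0.]
```

With \(\tau = 0\) the echo for word 1 is exactly the sum of the sentences containing it; raising \(\tau\) sharpens the gradient toward the most similar traces.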
\begin{figure}
\centering
\includegraphics[width=\linewidth]{ITS_cogsci_files/figure-latex/ITSsimple-1.pdf}
\caption{\label{fig:ITSsimple}\(R^2\) values between the ITS word-word similarity space and the first- to fourth-order word-word similarity spaces derived from the artificial language, as a function of training and retrieval gradient (\(\tau\)).}
\end{figure}
ITS performed well in recovering the structure of the artificial language. Most important, ITS was most sensitive to the second-order similarity structure of the artificial language. More generally, ITS became more sensitive to all orders of similarity as training increased, and less sensitive as \(\tau\) increased. Raising \(\tau\) did increase relative sensitivity to the first order, but at the cost of losing sensitivity overall. The fact that ITS prioritizes the second order over the first is its flaw. The second-order space is an overgeneralized version of the first, and blurs out the finer distinctions in word usage within the topic structures that generate the words. This is an issue of base rates of word occurrence. Because ITS relies on second-order similarity (see discussion), semantic vectors for topic-unique words become similar to words from overlapping topics, whereas they are not similar to those words in first-order space. ITS glosses over these nuances.
\subsection{Simulation 2: ITS 2 encoding}
We next trained ITS 2 with weighted expectancy subtraction at encoding on the same artificial language. We show that weighted expectancy subtraction causes ITS 2 to become more sensitive to first-order word-word similarity than to higher orders. In the simulations we varied the value of \(x\) (from .01 to .5) to subtract different amounts of the echo from the probe; the value of \(x\) causes systematic differences in ITS 2's sensitivity to higher-order similarity structure. For clarity, we set \(\tau\) to 1. The results are shown in figure \ref{fig:allsims} (left panel).
\begin{figure*}
\includegraphics[width=\textwidth]{ITS_cogsci_files/figure-latex/allsims-1.pdf}
\caption{\(R^2\) values between the ITS word-word similarity space and the first- to fourth-order word-word similarity spaces derived from the artificial language, as a function of training and weighted expectancy subtraction. The left panel shows ITS 2 with weighted expectancy subtraction during encoding, and the right panel shows ITS 2 with weighted expectancy subtraction during retrieval.}\label{fig:allsims}
\end{figure*}
Weighted expectancy subtraction at encoding modulated how ITS 2 recovered different orders of semantic similarity space. For example, when \(x=.01\), ITS 2 was most sensitive to second order similarity, but as \(x\) increased ITS 2 became most sensitive to first-order similarity. Increasing \(x\) further caused overall sensitivity to decline \cite<akin to the detrimental effects of sub-sampling too much negative information, >{johnsRoleNegativeInformation2019}. It appears that ITS 2 is capable of recovering more veridical and nuanced word embeddings from the first-order similarity space.
\subsection{Simulation 3: ITS 2 retrieval}
Here, we trained ITS 2 with weighted expectancy subtraction at retrieval only. We repeated the above simulation exactly, but used the equations involving one iterative retrieval step to conduct the weighted expectancy subtraction at retrieval. We also used a \(\tau\) of 0, a square retrieval gradient, to compute both echoes. We chose this value to foreshadow a correspondence between transformations of the defined artificial language in higher-order similarity space and what ITS 2 achieves at retrieval by subtracting a portion of the second echo from the first. The results are shown in figure \ref{fig:allsims} (right panel).
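The retrieval-side variant can be sketched as follows (we restate the echo operation so the snippet stands alone; the activation rule, cosine similarity raised to \(\tau\) with zero-similarity traces silent, is our reading of the model):

```python
import numpy as np

def echo(probe, memory, tau=0):
    # cosine-based activation raised to tau; zero-similarity traces
    # stay silent (our reading of the retrieval rule)
    probe = np.asarray(probe, dtype=float)
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(probe)
    norms[norms == 0] = 1.0
    cos = memory @ probe / norms
    return np.where(cos > 0, cos ** tau, 0.0) @ memory

def semantic_vector(probe, memory, x, tau=0):
    """ITS 2 at retrieval: probe memory, re-probe with the first
    echo, then subtract x times the second echo from the first."""
    e1 = echo(probe, memory, tau)
    e2 = echo(e1, memory, tau)   # iterative step: the echo becomes the probe
    return e1 - x * e2
```

Because the second echo collapses over every sentence sharing any word with the first, subtracting a portion of it removes the overgeneralized, higher-order content from the semantic vector.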
Remarkably, ITS 2 does not need to make any assumptions about encoding to benefit from weighted expectancy subtraction. The pattern of Simulation 3 is almost identical to that of Simulation 2. Specifically, ITS 2 becomes most sensitive to first-order word-word similarity structure as \(x\) is increased. Again, increasing \(x\) has diminishing returns.
\section{General Discussion}
Distributional models of semantic cognition become sensitive to co-occurrence structure in natural text. By defining analogous sources of co-occurrence structure in an artificial language, we determined the aspects of that structure recovered by ITS and ITS 2. We showed that ITS is most sensitive to second order semantic space. We also showed that ITS 2 can modulate its sensitivity by a process of weighted expectancy subtraction and iterative retrieval. Impressively, these operations allowed ITS 2 to become most sensitive to first order semantic space, which is a more veridical representation by our definition.
It is instructive to consider how ITS and ITS 2 recover different orders of similarity space. First, consider how words become increasingly similar across orders of similarity space. In the first order, word similarity is determined by the topics words occur in. For example, word 6 is similar only to topic-one words, and word 15 is similar to topic-one and topic-two words because it occurs in both topics. In the second order, words become similar on the basis of their first-order similarity features. First-order features for word 6 now contain positive similarity for words 1 to 15 (all topic one). Some of these features are shared by words from topic two. As a result, words unique to topic one become similar to words from neighboring topics. If there is a series of partial overlaps connecting words across topics, then all words become increasingly similar to one another with increasing orders of similarity, and in the limit the \(n\)th-order similarity matrix becomes all ones.
Crucially, iterative retrieval in ITS 2 is a process for traversing higher-order similarity space, and weighted expectancy subtraction is a process for negotiating the relative contributions of higher-order similarity representations in the construction of semantic knowledge. To elaborate, we showed that ITS echoes are most sensitive to the second order. Echoes contain sentence memory, so an echo for a topic-unique word contains words that overlap in neighboring topics. Thus, semantic vectors for topic-unique words become similar to words from neighboring topics. Submitting the echo as a probe for iterative retrieval is a third-order operation: the echo contains many words, and the second echo collapses over memory for sentences that contain any of those words. This draws in sentences from additional topics, causing a given word to be more similar to words in outlying topic neighborhoods. Iterating to the extreme sweeps all sentences in memory into the echo, causing identical echoes for all words.
Simulation 3 showed that subtracting a portion of the second echo from the first allows ITS 2 to preferentially recover first-order similarity space. Our preceding discussion suggests ITS 2 performs a weighted subtraction of third- from second-order space, suggesting a similar result could be obtained analytically. We confirmed this directly from the language by subtracting proportions of the third-order similarity matrix from the second, and computing \(R^2\) between each new matrix and the first-order similarity matrix. We found an inverted-U function, with \(R^2\) approaching 1 at a subtraction weight of .4. As a side note, computing second-order similarity from a document-term matrix \cite{cribbinDiscoveringLatentTopical2011} can produce embeddings similar if not superior to those produced by singular value decomposition, as in LSA. We speculate that subtracting a portion of the third order from the second may further improve the quality of those semantic representations.
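This analytic check is straightforward to reproduce. A sketch (names ours) that scores a weighted subtraction of the third-order space from the second against the first-order space:

```python
import numpy as np

def r_squared(A, B):
    """Squared correlation between the entries of two similarity matrices."""
    return np.corrcoef(A.ravel(), B.ravel())[0, 1] ** 2

def subtracted_space(S2, S3, x):
    """Subtract a proportion x of the third-order space from the second."""
    return S2 - x * S3

# Given first- to third-order similarity matrices S1, S2, S3 derived
# from the language, the sweep reported in the text would be:
# scores = [r_squared(subtracted_space(S2, S3, x), S1)
#           for x in np.arange(0.0, 0.51, 0.01)]
```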
In the future we plan to apply ITS and ITS 2 to natural language and compare the quality of their word embeddings by fits to human performance in semantic tasks. At present we offer ITS 2 as an intriguing account of how people may transform their semantic knowledge along general versus specific lines, using iterative memory retrieval to traverse higher-order similarity space and weighted expectancy subtraction to control the specificity of retrieved semantic knowledge.
\bibliographystyle{apacite}
\setlength{\bibleftmargin}{.125in}
\setlength{\bibindent}{-\bibleftmargin}
\bibliography{ITS}
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Medium Length Graduate Curriculum Vitae
% LaTeX Template
% Version 1.1 (9/12/12)
%
% This template has been downloaded from:
% http://www.LaTeXTemplates.com
%
% Original author:
% Rensselaer Polytechnic Institute (http://www.rpi.edu/dept/arc/training/latex/resumes/)
%
% Important note:
% This template requires the res.cls file to be in the same directory as the
% .tex file. The res.cls file provides the resume style used for structuring the
% document.
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%----------------------------------------------------------------------------------------
% PACKAGES AND OTHER DOCUMENT CONFIGURATIONS
%----------------------------------------------------------------------------------------
\documentclass[margin, 10pt]{res} % Use the res.cls style, the font size can be changed to 11pt or 12pt here
\usepackage{latexsym, amssymb, bbm, amsmath}
\usepackage{helvet} % Default font is the helvetica postscript font
%\usepackage{newcent} % To change the default font to the new century schoolbook postscript font uncomment this line and comment the one above
\setlength{\textwidth}{5.1in} % Text width of the document
\begin{document}
%----------------------------------------------------------------------------------------
% NAME AND ADDRESS SECTION
%----------------------------------------------------------------------------------------
\moveleft.5\hoffset\centerline{\large\bf Allen Wu} % Your name at the top
\moveleft\hoffset\vbox{\hrule width\resumewidth height 1pt}\smallskip % Horizontal line after name; adjust line thickness by changing the '1pt'
\moveleft.5\hoffset\centerline{Mountain View, CA} % Your address
\moveleft.5\hoffset\centerline{nalkpas@gmail.com}
\moveleft.5\hoffset\centerline{(505) 920-4664}
%----------------------------------------------------------------------------------------
\begin{resume}
%----------------------------------------------------------------------------------------
% OBJECTIVE SECTION
%----------------------------------------------------------------------------------------
%
%\section{OBJECTIVE}
%
%Stuff.
%----------------------------------------------------------------------------------------
% EDUCATION SECTION
%----------------------------------------------------------------------------------------
\section{EDUCATION}
\textbf{Stanford University}, Palo Alto, CA \hfill 2016-2018 \\
{\sl MS,} Management Science \& Engineering \\
Completed Coursework: Decision Making under Uncertainty, Stochastic Modeling, Professional Decision Analysis, General Game Playing \\
GPA: 3.86/4.00 \\\\
\textbf{University of Chicago}, Chicago, IL \hfill 2011-2015 \\
{\sl Bachelor of Arts,} Mathematics, Phi Beta Kappa \\
GPA: 3.89/4.00
%Notable Coursework: Analysis in $\mathbb{R}^n$ I-III, Honors Basic Algebra I-III, Optimization, Partial Differential Equations, Cognitive Psychology
%\textbf{Los Alamos High School}, Los Alamos, NM \\
%GPA: 4.33/4.00 \\
%Relevant Coursework: AP Computer Science
%----------------------------------------------------------------------------------------
% PROFESSIONAL EXPERIENCE SECTION
%----------------------------------------------------------------------------------------
\section{EXPERIENCE}
{\sl Course Assistant} \hfill Summer 2017 \\
\textbf{Stanford}, Introduction to Decision Making
\begin{itemize} \itemsep -2pt % Reduce space between items
\item Wrote and reviewed homework assignments and exams.
\item Conducted office hours and discussion sections.
\item Advised students on a course project where they consulted with a decision maker and applied the tools learned in the class.
\end{itemize}
{\sl Semi-Professional Magic Player} \hfill 2015-2017 \\
\textbf{Self-Employed}
\begin{itemize} \itemsep -2pt % Reduce space between items
\item Traveled the world playing the Professional Tour for Magic: the Gathering, a fantasy card game.
\item Achieved silver level in the Professional Players Club in 2016 and 2017.
% \item Won Grand Prix Albuquerque 2016.
\end{itemize}
%{\sl Programmer} \hfill Summer 2015 \\
%\textbf{Elk Capital Markets}
%
%\begin{itemize} \itemsep -2pt % Reduce space between items
%\item Developed software with friends for ETF trading on ARCA and NYSE, focusing particularly on the GUI.
%\item Tested and executed said software, running a handful of simple arbitrage strategies and debugging when necessary.
%\item Provided input on potential new strategies to execute.
%\end{itemize}
{\sl Intern} \hfill Summer 2014 \\
\textbf{Sandia National Laboratory}, Resilience and Regulatory Effects
\begin{itemize} \itemsep -2pt
\item Researched new developments in economic modeling and discussed the strengths and weaknesses of different models with a mentor.
%\item Investigated, acquired, and organized publicly available data sets.
\item Wrote code that acquired, filtered, consolidated, and analyzed public data sets.
%then quickly and iteratively produced relevant graphics.
%\item Reviewed peer papers and proposals for publication.
\end{itemize}
{\sl Student} \hfill Summer 2012 \\
\textbf{Undergraduate Mathematics REU}, University of Chicago
\begin{itemize} \itemsep -2pt
\item Attended introductory lectures to higher mathematics.
\item Researched number theory under a graduate mentor.
\end{itemize}
{\sl Intern} \hfill Summer 2010 \\
\textbf{Los Alamos National Laboratory}, T-Division
\begin{itemize} \itemsep -2pt
\item Wrote a Java program to model fluid dynamics in collisions using finite element methodology in one and two dimensions.
\item Tinkered with the program to test the boundaries of the method.
%\item Wrote a Java program that read data from input arrays and interpolated finite element density graphs. The program rotated, deformed, and translated systems of particles in one and two dimensions.
%\item Adjusted program specifications and inputs to test the boundaries of the methodology and analyze its flaws regarding collision modeling.
%\item Researched the mathematical foundations of the method and discussed them with mentor.
\end{itemize}
%----------------------------------------------------------------------------------------
% GAMES SECTION
%----------------------------------------------------------------------------------------
%\section{GAMES}
%
%\textbf{Magic: the Gathering}
%
%\begin{itemize} \itemsep -2pt % Reduce space between items
%\item Silver-level Pro
%\item Grand Prix Champion
%\end{itemize}
%
%\textbf{Hearthstone}
%
%\begin{itemize} \itemsep -2pt % Reduce space between items
%\item Many times Legend, my best end-of-month finish being \#2 NA in August 2015
%\end{itemize}
%----------------------------------------------------------------------------------------
% SKILLS SECTION
%----------------------------------------------------------------------------------------
\section{SKILLS}
{\sl Programming:}
Python, Julia, C++, Java, R, Stata \\
{\sl Software:}
Excel, Google Sheets, OpenOffice \\
%{\sl Languages:}
%rudimentary Chinese and German \\
%----------------------------------------------------------------------------------------
% INTERESTS SECTION
%----------------------------------------------------------------------------------------
\section{INTERESTS}
biking, writing, board games, George Saunders, Miranda July, the Mountain Goats
%----------------------------------------------------------------------------------------
% COMMUNITY SERVICE SECTION
%----------------------------------------------------------------------------------------
%
%\section{COMMUNITY \\ SERVICE}
%
%Organized and directed the 1988 and 1989 Grand Marshall Week \\
%``Basketball Marathon.'' A 24 hour charity event to benefit the Troy Boys Club. Over 250 people participated each year.
%
%----------------------------------------------------------------------------------------
% EXTRA-CURRICULAR ACTIVITIES SECTION
%----------------------------------------------------------------------------------------
%
%\section{EXTRA-CURRICULAR \\ ACTIVITIES}
%
%Elected {\it House Manager}, Rho Phi Sorority \\
%Elected {\it Sports Chairman} \\
%Attended Krannet Leadership Conference \\
%Headed delegation to Rho Phi Congress \\
%Junior varsity basketball team \\
%Participant, seven intramural athletic teams
%----------------------------------------------------------------------------------------
\end{resume}
\end{document}
\newcommand{\fortran}[1]{\framebox{#1}}
\renewcommand{\c}[1]{\framebox{#1}}
\newcommand{\cpp}[1]{\framebox{#1}}
% Commands defining notation in programming languages.
\newcommand{\procedure}[2]{\subsection{#1}\hypertarget{#2}{}\label{#2}}
\newcommand{\stat}{\lstinline[language=fortran]+stat+\xspace}
\newcommand{\anyfield}{\protect\hyperlink{type:anyfield}{\textit{anyfield}}}
\newcommand{\meshtype}{\protect\hyperlink{type:mesh}{mesh\_type}}
\newcommand{\scalarfield}{\protect\hyperlink{type:scalarfield}{scalar\_field}}
\newcommand{\vectorfield}{\protect\hyperlink{type:vectorfield}{vector\_field}}
\newcommand{\tensorfield}{\protect\hyperlink{type:tensorfield}{tensor\_field}}
\newcommand{\elementtype}{\protect\hyperlink{type:elementtype}{element\_type}}
\newcommand{\quadraturetype}{\protect\hyperlink{type:quadraturetype}{quadrature\_type}}
\newcommand{\valshape}{\textit{valshape}}
\newcommand{\remapfield}{\protect\hyperlink{proc:remapfield}{remap\_field}}
\newcommand{\eleloc}{\protect\hyperlink{proc:eleloc}{ele\_loc}}
\newcommand{\elengi}{\protect\hyperlink{proc:eleloc}{ele\_ngi}}
\newcommand{\faceloc}{\protect\hyperlink{proc:faceloc}{face\_loc}}
\newcommand{\nodeval}{\protect\hyperlink{proc:nodeval}{node\_val}}
\newcommand{\elevalatquad}{\protect\hyperlink{proc:elevalatquad}{ele\_val\_at\_quad}}
\newcommand{\allocate}{\lstinline[language=fortran]+allocate+\xspace}
\newcommand{\deallocate}{\lstinline[language=fortran]+deallocate+\xspace}
\newcommand{\module}[2][]
{\textbf{Module:}\ #2\ifthenelse{\equal{#1}{}}
{}{\\\textbf{Internal module:}\ #1}}
% Commands defining mathematical notation.
% This is for quantities which are physically vectors.
\renewcommand{\vec}[1]{\mathbf{#1}}
% Physical rank 2 tensors
\newcommand{\tensor}[1]{\overline{\overline{#1}}}
% This is for vectors formed of the value of a quantity at each node.
\newcommand{\dvec}[1]{\underline{#1}}
% This is for matrices in the discrete system.
\newcommand{\mat}[1]{\mathrm{#1}}
\renewcommand{\u}{\ensuremath{\vec{u}}}
\renewcommand{\v}{\ensuremath{\vec{v}}}
\newcommand{\x}{\ensuremath{\vec{x}}}
\newcommand{\Ne}{N_\mathrm{e}}
\newcommand{\Ndof}{N_\mathrm{dof}}
\newcommand{\Ndim}{N_\mathrm{dim}}
\newcommand{\Nelm}{N_E}
\newcommand{\Nloc}{N_\mathrm{loc}}
\newcommand{\Nquad}{N_\mathrm{quad}}
\newcommand{\iloc}{{i_\mathrm{loc}}}
\newcommand{\jloc}{{j_\mathrm{loc}}}
\newcommand{\vecphi}{\ensuremath{\pmb{\phi}}}
\newcommand{\vecxi}{\ensuremath{\pmb{\xi}}}
\newcommand{\tensorphi}{\ensuremath{\overline{\overline{\phi}}}}
\renewcommand{\d}{\mathrm{d}}
\newcommand{\R}{\mathbb{R}}
\newcommand{\OmegaPhi}{[\Omega,\Phi]}
\newcommand{\M}{\ensuremath{\mat{M}}\xspace}
%Vector calculus.
\renewcommand{\dot}{\cdot}
\newcommand{\cross}{\times}
\newcommand{\grad}{\nabla}
\renewcommand{\div}{\nabla\dot}
\newcommand{\curl}{\nabla\cross}
% Copyright 2018 Markus J. Pflaum, licensed under GNU FDL v1.3
% main author:
% Markus J. Pflaum
%
\section{Unbounded linear operators}
\label{sec:unbounded-linear-operators}
%
\para
In this section let $\banachV,\banachW$ always denote Banach spaces over the field
$\fldK =\R$ or $\fldK=\C$. The symbols $\hilbertH$, $\hilbertH_1$, ... will always stand for Hilbert spaces over $\fldK$.
\begin{definition}
By an \emph{unbounded $\fldK$-linear operator} or shortly by an
\emph{unbounded operator} from $\banachV$ to $\banachW$ we understand a linear map
$A: \Dom (A) \to \banachW$ defined on a $\fldK$-linear subspace $\Dom (A)\subset \banachV$.
As usual, $\Dom (A)$ is called the \emph{domain} of the operator $A$.
The space of unbounded $\fldK$-linear operators from $\banachV$ to $\banachW$ will be
denoted $\linOps_\fldK (\banachV,\banachW)$ or just $\linOps (\banachV,\banachW)$.
\end{definition}
\begin{remark}
In this work, the term ``unbounded'' is meant in the sense of
``not necessarily bounded''. Sometimes we just say
\emph{linear operator} or even only \emph{operator} instead of
``unbounded linear operator''.
\end{remark}
\para Observe that besides the domain $\Dom (A)$ of an unbounded operator
$A \in \linOps (\banachV,\banachW)$ the
\emph{kernel}
\[\Ker(A)=\big\{ v \in \banachV \bigmid Av = 0 \big\}\subset\banachV \ , \]
the \emph{image}
\[\Img(A)=\big\{ w \in \banachW \bigmid \exists v\in \Dom (A): w = Av \big\}
\subset \banachW \ , \]
and the \emph{graph}
\[\Graph(A)=\big\{ (v,w) \in \Dom (A) \times \banachW \bigmid
w = Av \big\} \subset \banachV \times \banachW \]
of $A$ are all linear subspaces. We will frequently make use of this.
\begin{definition}
An unbounded operator $A\in \linOps (\banachV,\banachW)$ is called
\emph{densely defined} if $\Dom (A)$ is dense in $\banachV$,
and \emph{closed} if the graph $\Graph (A)$
is closed in $\banachV \times \banachW$.
The operator $A \in \linOps (\banachV,\banachW)$ is called \emph{closable} if the closure
$\closure{\Graph(A)}$ is the graph of an unbounded operator
from $\banachV$ to $\banachW$.
An operator $A \in \linOps (\banachV,\banachW)$ is called an \emph{extension} of
$B \in \linOps (\banachV,\banachW)$ if $\Graph (B) \subset \Graph (A)$. One writes in this
situation $B \subset A$.
\end{definition}
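A standard example (our illustration, using the macros of this section) may make closability and extension concrete:

```latex
\para A standard illustration: let $\banachV = \banachW = L^2([0,1])$ and let
$A$ be differentiation, $Af = f'$, with domain $\Dom(A) = C^1([0,1])$.
Then $A$ is densely defined, since $C^1([0,1])$ is dense in $L^2([0,1])$,
but not closed. Its graph closure $\closure{\Graph(A)}$ is the graph of the
weak derivative on the Sobolev space $H^1([0,1])$, so $A$ is closable, and
the weak derivative is a closed extension of $A$.
```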
% Complete documentation on the extended LaTeX markup used for Python
% documentation is available in ``Documenting Python'', which is part
% of the standard documentation for Python. It may be found online
% at:
%
% http://www.python.org/doc/current/doc/doc.html
\documentclass{howto}
% This is a template for short or medium-size Python-related documents,
% mostly notably the series of HOWTOs, but it can be used for any
% document you like.
% The title should be descriptive enough for people to be able to find
% the relevant document.
\title{Spammifying Sprockets in Python}
% Increment the release number whenever significant changes are made.
% The author and/or editor can define 'significant' however they like.
\release{0.00}
% At minimum, give your name and an email address. You can include a
% snail-mail address if you like.
\author{Me, 'cause I wrote it}
\authoraddress{Me, 'cause I'm self-employed.}
\begin{document}
\maketitle
% This makes the Abstract go on a separate page in the HTML version;
% if a copyright notice is used, it should go immediately after this.
%
\ifhtml
\chapter*{Front Matter\label{front}}
\fi
% Copyright statement should go here, if needed.
% ...
% The abstract should be a paragraph or two long, and describe the
% scope of the document.
\begin{abstract}
\noindent
This document describes how to spammify sprockets. It is a useful
example of a Python HOWTO document. It is not dependent on any
particular sprocket implementation, and includes a Python-based
implementation in the \module{sprunkit} module.
\end{abstract}
\tableofcontents
Spammifying sprockets from Python is both fun and entertaining.
Applying the techniques described here, you can also fill your hard
disk quite effectively.
\section{What is Sprocket Spammification?}
You have to ask? It's the only thing to do to your sprockets!
\section{Why Use Python?}
Python is an excellent language from which to spammify your sprockets
since you can do it on any platform.
\section{Software Requirements}
You need to have the following software installed:
% The {itemize} environment uses a bullet for each \item. If you want the
% \item's numbered, use the {enumerate} environment instead.
\begin{itemize}
\item Python 1.9.
\item Some sprocket definition files.
\item At least one sprocket system implementation.
\end{itemize}
Note that the \module{sprunkit} module is provided with this package and
implements ActiveSprockets in Python.
% The preceding sections will have been written in a gentler,
% introductory style. You may also wish to include a reference
% section, documenting all the functions/exceptions/constants.
% Often, these will be placed in separate files and input like this:
\input{module}
\appendix
\section{This is an Appendix}
To create an appendix in a Python HOWTO document, use markup like
this:
\begin{verbatim}
\appendix
\section{This is an Appendix}
To create an appendix in a Python HOWTO document, ....
\section{This is another}
Just add another \section{}, but don't say \appendix again.
\end{verbatim}
\end{document}
%%This is a very basic article template.
%%There is just one section and two subsections.
\documentclass{article}
\begin{document}
\section{Title}
\subsection{Subtitle}
Plain text.
\subsection{Another subtitle}
More plain text.
\end{document}
\documentclass[10pt, a4paper, twoside]{basestyle}
\usepackage[Mathematics]{semtex}
%%%% Shorthands.
%%%% Title and authors.
\title{Documentation for the symplectic methods}
\date{\printdate{2015-06-06}}
\author{Robin~Leroy (eggrobin)}
\begin{document}
\maketitle
This document expands on the comments at the beginning of\\
\texttt{integrators/symplectic\_runge\_kutta\_nyström\_integrator.hpp}.
\section{Differential equations.}
Recall that the equations solved by this class are
\begin{align}
\tuple{\vq,\vp}\der &=
\vX\of{\vq, \vp, t} = \vA\of{\vq, \vp} + \vB\of{\vq, \vp, t}
\quad\parbox{.4\linewidth}{with $\exp h\vA$ and $\exp h\vB$ known and
$\commutator{\vB}{\commutator{\vB}{\commutator{\vB}{\vA}}}=\nullvec$;}
\label{general}\\
\span\parbox{.7\linewidth}{the above equation, with $\exp h\vA = \Identity+h\vA$,
$\exp h\vB = \Identity+h\vB$,
and $\vA$ and $\vB$ known;}
\label{linear}\\
\vq\dder &= -\matM^{-1} \grad_\vq V\of{\vq, t}\text. \label{rkn}
\end{align}
\section{Relation to Hamiltonian mechanics.}
The third equation above is a reformulation of Hamilton's
equations with a Hamiltonian of the form
\begin{equation}
H\of{\vq,\vp,t} = \frac{1}{2}\Transpose{\vp}\matM^{-1}\vp + V\of{\vq, t}\text,
\end{equation}
where $\vp = \matM\vq\der$.
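Indeed (a routine check), Hamilton's equations for this $H$ read
\begin{align*}
\vq\der &= \grad_\vp H = \matM^{-1}\vp\text, &
\vp\der &= -\grad_\vq H = -\grad_\vq V\of{\vq, t}\text,
\end{align*}
so that $\vq\dder = \matM^{-1}\vp\der = -\matM^{-1}\grad_\vq V\of{\vq, t}$,
which is (\ref{rkn}).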
\section{A remark on non-autonomy.}
Most treatments of these integrators write these differential equations as well
as the corresponding Hamiltonian in an autonomous version, thus
$\vX = \vA(\vq, \vp) + \vB(\vq, \vp)$ and
$H\of{\vq,\vp,t} = \frac{1}{2}\Transpose{\vp}\matM^{-1}\vp + V\of{\vq}$.
It is however possible to incorporate time, by considering it as an
additional variable:\[
\tuple{\vq,\vp,t}\der =
\vX\of{\vq, \vp, t} =
\tuple{\vA\of{\vq, \vp}, 1} +
\tuple{\vB\of{\vq, \vp, t}, 0}\text.\]
For equations of the form (\ref{rkn}) it remains to be shown that Hamilton's
equations with quadratic kinetic energy and a time-dependent potential satisfy
$\commutator{\vB}{\commutator{\vB}{\commutator{\vB}{\vA}}}=\nullvec$.
We introduce $t$ and its conjugate momentum $\gcp$ to the phase space,
and write
\[
\tilde{\vq}=\tuple{\vq, t}\text,\quad
\tilde{\vp}=\tuple{\vp, \gcp}\text,\quad
L\of{\tilde{\vp}} = \frac{1}{2}\Transpose{\vp}\matM^{-1}\vp + \gcp\text.
\]
(\ref{rkn}) follows from Hamilton's equations with\[
H\of{\tilde{\vq},\tilde{\vp}} =
L\of{\tilde{\vp}} + V\of{\tilde{\vq}} =
\frac{1}{2}\Transpose{\vp}\matM^{-1}\vp + \gcp + V\of{\vq, t}
\]
since we then get $t\der = 1$.
The desired property follows from the following lemma:
\begin{lemma}
Let $L\of{\tilde{\vq},\tilde{\vp}}$ be a quadratic polynomial in $\tilde{\vp}$,
$V\of{\tilde{\vq}}$ a smooth function, $\vA=\Poisson\placeholder L$, and
$\vB=\Poisson\placeholder V$.
Then\[
\commutator{\vB}{\commutator{\vB}{\commutator{\vB}{\vA}}}=\nullvec\text.\]
\end{lemma}
\begin{proof}
It suffices to show that $\Poisson V{\Poisson V{\Poisson L V}} = 0$. It is
immediate that every term in that expression will contain a third order
partial derivative in the $\tilde p_i$ of $L$, and since $L$ is quadratic
in $\tilde{\vp}$ all such derivatives vanish.
\end{proof}
See \cite[p.~26]{McLachlanQuispel2006} for a detailed treatment
of non-autonomous Hamiltonians using an extended phase space.
See \cite[p.~8]{McLachlan1993} for a proof that
$\Poisson V{\Poisson V{\Poisson L V}} = 0$ for arbitrary Poisson tensors.
\section{Composition and first-same-as-last property}
Recall from the comments that each step is computed as
\begin{align*}
\tuple{\vq_{n+1}, \vp_{n+1}} &=
\exp a_{r-1}h\vA \exp b_{r-1}h\vB \dotsb \exp a_0h\vA \exp b_0h\vB
\tuple{\vq_n, \vp_n}
\text,
\intertext{thus, when $b_0$ vanishes (type $ABA$) or when $a_{r-1}$ does
(type $BAB$),}
\tuple{\vq_{n+1}, \vp_{n+1}} &=
\exp a_{r-1}h\vA \exp b_{r-1}h\vB \dotsb \exp b_1h\vB \exp a_0h\vA
\tuple{\vq_n, \vp_n}\text{, respectively}\\
\tuple{\vq_{n+1}, \vp_{n+1}} &=
\exp b_{r-1}h\vB \exp a_{r-2}h\vA \dotsb \exp a_0h\vA \exp b_0h\vB
\tuple{\vq_n, \vp_n}\text.
\end{align*}
This leads to performance savings.
Let us consider a method of type $BAB$.
Evidently, the evaluation of $\exp a_0h\vA$
is not required, thus only $r-1$ evaluations of $\exp \increment t\vA$ are
required.
Furthermore, if output is not needed at step $n$, the computation of
the $\pa{n+1}$th step requires only $r-1$ evaluations of
$\exp \increment t\vB$, since the consecutive evaluations of $\exp b_{r-1}h\vB$
and $\exp b_0h\vB$ can be merged by the group property,\[
\exp b_0h\vB\exp b_{r-1}h\vB=\exp \pa{b_0+b_{r-1}}h\vB\text.
\]
If the equation is of the form (\ref{linear}), the latter saving can be achieved
even for dense output, since only one evaluation of $\vB$ is needed to compute
the increments $b_{r-1}h\vB$ and $b_0h\vB$.
The same arguments apply to type $ABA$.
This motivates the name of the template
parameter \texttt{evaluations}, equal to $r-1$ for methods of type $ABA$ and
$BAB$, and $r$ otherwise.
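The merging argument can be checked numerically. The following sketch is illustrative only (it is not this class's implementation); the harmonic-oscillator right-hand side and the leapfrog coefficients $a = (1, 0)$, $b = (\tfrac12, \tfrac12)$ are assumptions chosen as the simplest type-$BAB$ example:

```python
# Sketch of the first-same-as-last merging for a type-BAB composition method.
# Test problem: harmonic oscillator q'' = -q (M = 1, V(q) = q^2/2), with the
# leapfrog coefficients r = 2, a = (1, 0), b = (1/2, 1/2) as an example.
import math

def kick(q, p, h):   # exp(h B): p <- p - grad V(q) * h
    return q, p - q * h

def drift(q, p, h):  # exp(h A): q <- q + M^{-1} p * h
    return q + p * h, p

a = [1.0, 0.0]       # a_{r-1} = 0: type BAB
b = [0.5, 0.5]

def integrate_naive(q, p, h, n):
    # n steps, each the full composition exp(a1 h A) exp(b1 h B) exp(a0 h A) exp(b0 h B)
    for _ in range(n):
        for ai, bi in zip(a, b):
            q, p = kick(q, p, bi * h)
            q, p = drift(q, p, ai * h)
    return q, p

def integrate_merged(q, p, h, n):
    # Merge the trailing exp(b1 h B) of one step with the leading exp(b0 h B)
    # of the next: exp(b0 h B) exp(b1 h B) = exp((b0 + b1) h B).
    q, p = kick(q, p, b[0] * h)
    for k in range(n):
        q, p = drift(q, p, a[0] * h)
        q, p = kick(q, p, (b[1] + b[0]) * h if k < n - 1 else b[1] * h)
    return q, p

q0, p0, h, n = 1.0, 0.0, 1e-2, 1000
qn, pn = integrate_naive(q0, p0, h, n)
qm, pm = integrate_merged(q0, p0, h, n)
```

With $r = 2$ the merged variant saves one $\vB$ evaluation per step whenever no dense output is requested, exactly as described above.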
\end{document}
|
|
\section{Dynamic Programming}
\lst{Knapsack}{$\mathcal{O}(n\sum_{i = 1}^n p_i)$}{dynamicProgramming/knapsack.cc}
\lst{TSP}{$\mathcal{O}(n 2^n)$}{dynamicProgramming/tsp.cc}
\lst{Subset Sum}{$\mathcal{O}(n\sum_{i = 1}^n v_i)$}{dynamicProgramming/subSetSum.cc}
\lst{Edit Distance}{$\mathcal{O}(nm)$}{dynamicProgramming/editDistance.cc}
\lst{Longest Increasing Subsequence}{$\mathcal{O}(n\log n)$}{dynamicProgramming/lis.cc}
\lst{Longest Common Subsequence}{$\mathcal{O}(nm)$}{dynamicProgramming/lcs.cc}
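As an illustration of the kind of routine indexed here, the following sketch (a Python transcription for exposition, not the notebook's \texttt{lis.cc}) computes the length of a longest strictly increasing subsequence in $\mathcal{O}(n\log n)$:

```python
# Longest increasing subsequence length in O(n log n): patience-sorting idea.
import bisect

def lis_length(seq):
    # tails[k] = smallest possible tail of an increasing subsequence of length k + 1
    tails = []
    for x in seq:
        i = bisect.bisect_left(tails, x)  # bisect_left => strictly increasing
        if i == len(tails):
            tails.append(x)               # x extends the longest subsequence
        else:
            tails[i] = x                  # x gives a smaller tail for length i + 1
    return len(tails)
```

For example, \texttt{lis\_length([3, 1, 4, 1, 5, 9, 2, 6])} is 4 (one witness is $1, 4, 5, 9$).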
|
|
\documentclass[man]{apa6}
\usepackage{lmodern}
\usepackage{amssymb,amsmath}
\usepackage{ifxetex,ifluatex}
\usepackage{fixltx2e} % provides \textsubscript
\ifnum 0\ifxetex 1\fi\ifluatex 1\fi=0 % if pdftex
\usepackage[T1]{fontenc}
\usepackage[utf8]{inputenc}
\else % if luatex or xelatex
\ifxetex
\usepackage{mathspec}
\else
\usepackage{fontspec}
\fi
\defaultfontfeatures{Ligatures=TeX,Scale=MatchLowercase}
\fi
% use upquote if available, for straight quotes in verbatim environments
\IfFileExists{upquote.sty}{\usepackage{upquote}}{}
% use microtype if available
\IfFileExists{microtype.sty}{%
\usepackage{microtype}
\UseMicrotypeSet[protrusion]{basicmath} % disable protrusion for tt fonts
}{}
\usepackage{hyperref}
\hypersetup{unicode=true,
pdftitle={Lab 8 Group Git Project},
pdfauthor={Kathryn Denning, Tamara Niella, \& Karlena Ochoa},
pdfkeywords={Lab 8, Git},
pdfborder={0 0 0},
breaklinks=true}
\urlstyle{same} % don't use monospace font for urls
\usepackage{graphicx,grffile}
\makeatletter
\def\maxwidth{\ifdim\Gin@nat@width>\linewidth\linewidth\else\Gin@nat@width\fi}
\def\maxheight{\ifdim\Gin@nat@height>\textheight\textheight\else\Gin@nat@height\fi}
\makeatother
% Scale images if necessary, so that they will not overflow the page
% margins by default, and it is still possible to overwrite the defaults
% using explicit options in \includegraphics[width, height, ...]{}
\setkeys{Gin}{width=\maxwidth,height=\maxheight,keepaspectratio}
\IfFileExists{parskip.sty}{%
\usepackage{parskip}
}{% else
\setlength{\parindent}{0pt}
\setlength{\parskip}{6pt plus 2pt minus 1pt}
}
\setlength{\emergencystretch}{3em} % prevent overfull lines
\providecommand{\tightlist}{%
\setlength{\itemsep}{0pt}\setlength{\parskip}{0pt}}
\setcounter{secnumdepth}{0}
% Redefines (sub)paragraphs to behave more like sections
\ifx\paragraph\undefined\else
\let\oldparagraph\paragraph
\renewcommand{\paragraph}[1]{\oldparagraph{#1}\mbox{}}
\fi
\ifx\subparagraph\undefined\else
\let\oldsubparagraph\subparagraph
\renewcommand{\subparagraph}[1]{\oldsubparagraph{#1}\mbox{}}
\fi
%%% Use protect on footnotes to avoid problems with footnotes in titles
\let\rmarkdownfootnote\footnote%
\def\footnote{\protect\rmarkdownfootnote}
\title{Lab 8 Group Git Project}
\author{Kathryn Denning\textsuperscript{1}, Tamara Niella\textsuperscript{1}, \&
Karlena Ochoa \textsuperscript{1}}
\date{}
\shorttitle{Lab 8}
\affiliation{
\vspace{0.5cm}
\textsuperscript{1} University of Oregon}
\keywords{Lab 8, Git}
\usepackage{csquotes}
\usepackage{upgreek}
\captionsetup{font=singlespacing,justification=justified}
\usepackage{longtable}
\usepackage{lscape}
\usepackage{multirow}
\usepackage{tabularx}
\usepackage[flushleft]{threeparttable}
\usepackage{threeparttablex}
\newenvironment{lltable}{\begin{landscape}\begin{center}\begin{ThreePartTable}}{\end{ThreePartTable}\end{center}\end{landscape}}
\makeatletter
\newcommand\LastLTentrywidth{1em}
\newlength\longtablewidth
\setlength{\longtablewidth}{1in}
\newcommand{\getlongtablewidth}{\begingroup \ifcsname LT@\roman{LT@tables}\endcsname \global\longtablewidth=0pt \renewcommand{\LT@entry}[2]{\global\advance\longtablewidth by ##2\relax\gdef\LastLTentrywidth{##2}}\@nameuse{LT@\roman{LT@tables}} \fi \endgroup}
\DeclareDelayedFloatFlavor{ThreePartTable}{table}
\DeclareDelayedFloatFlavor{lltable}{table}
\DeclareDelayedFloatFlavor*{longtable}{table}
\makeatletter
\renewcommand{\efloat@iwrite}[1]{\immediate\expandafter\protected@write\csname efloat@post#1\endcsname{}}
\makeatother
\authornote{
Correspondence concerning this article should be addressed to Kathryn
Denning, Postal address. E-mail:
\href{mailto:kdenning@uorego.edu}{\nolinkurl{kdenning@uorego.edu}}}
\abstract{
This is where we would write our abstract.
}
\begin{document}
\maketitle
\section{Methods}\label{methods}
We report how we determined our sample size, all data exclusions (if
any), all manipulations, and all measures in the study.
In Batson, Early, and Salvarani (1997) we learn that empathy is actually
perspective taking (cool!). And in Epley, Keysar, Van Boven, and Gilovich
(2004) we learn people are selfish when perspective taking (because they are
egocentric!).
\subsection{Participants}\label{participants}
\subsection{Material}\label{material}
\subsection{Procedure}\label{procedure}
\subsection{Data analysis}\label{data-analysis}
We used R (Version 3.5.1) and the R-packages \emph{base},
\emph{dplyr} (Version 0.7.6; Wickham, François, Henry,
\& Müller, 2018), \emph{forcats} (Version 0.3.0; Wickham, 2018a),
\emph{ggplot2} (Version 3.0.0; Wickham, 2016), \emph{here} (Version 0.1;
Müller, 2017), \emph{kableExtra} (Version 0.9.0; Zhu, 2018),
\emph{knitr} (Version 1.20; Xie, 2015), \emph{magrittr} (Version 1.5;
Bache \& Wickham, 2014), \emph{papaja} (Version 0.1.0.9842; Aust \&
Barth, 2018), \emph{purrr} (Version 0.2.5; Henry \& Wickham, 2018),
\emph{readr} (Version 1.1.1; Wickham, Hester, \& Francois, 2017),
\emph{rio} (Version 0.5.10; C.-h. Chan, Chan, Leeper, \& Becker, 2018),
\emph{stringr} (Version 1.3.1; Wickham, 2018b), \emph{tibble} (Version
1.4.2; Müller \& Wickham, 2018), \emph{tidyr} (Version 0.8.1; Wickham \&
Henry, 2018), and \emph{tidyverse} (Version 1.2.1; Wickham, 2017) for
all our analyses.
\section{Results}\label{results}
\begin{tabular}{llrrrr}
\toprule
sex & frl & math\_mean & math\_sd & rdg\_mean & rdg\_sd\\
\midrule
boy & no & 492.8523 & 46.33845 & 441.4553 & 32.31828\\
boy & yes & 469.8716 & 46.09285 & 425.3794 & 26.62931\\
girl & no & 501.2057 & 45.96210 & 448.5353 & 34.52403\\
girl & yes & 477.5084 & 46.30459 & 430.8029 & 27.42125\\
\bottomrule
\end{tabular}
\section{Discussion}\label{discussion}
\newpage
\section{References}\label{references}
\begingroup
\setlength{\parindent}{-0.5in} \setlength{\leftskip}{0.5in}
\hypertarget{refs}{}
\hypertarget{ref-R-papaja}{}
Aust, F., \& Barth, M. (2018). \emph{papaja: Create APA manuscripts with
R Markdown}. Retrieved from \url{https://github.com/crsh/papaja}
\hypertarget{ref-R-magrittr}{}
Bache, S. M., \& Wickham, H. (2014). \emph{Magrittr: A forward-pipe
operator for r}. Retrieved from
\url{https://CRAN.R-project.org/package=magrittr}
\hypertarget{ref-batson1997}{}
Batson, C. D., Early, S., \& Salvarani, G. (1997). Perspective taking:
Imagining how another feels versus imaging how you would feel.
\emph{Personality and Social Psychology Bulletin}, \emph{23}(7),
751--758.
\hypertarget{ref-R-rio}{}
Chan, C.-h., Chan, G. C., Leeper, T. J., \& Becker, J. (2018).
\emph{Rio: A swiss-army knife for data file i/o}.
\hypertarget{ref-epley2004}{}
Epley, N., Keysar, B., Van Boven, L., \& Gilovich, T. (2004).
Perspective taking as egocentric anchoring and adjustment. \emph{Journal
of Personality and Social Psychology}, \emph{87}(3), 327.
\hypertarget{ref-R-purrr}{}
Henry, L., \& Wickham, H. (2018). \emph{Purrr: Functional programming
tools}. Retrieved from \url{https://CRAN.R-project.org/package=purrr}
\hypertarget{ref-R-here}{}
Müller, K. (2017). \emph{Here: A simpler way to find your files}.
Retrieved from \url{https://CRAN.R-project.org/package=here}
\hypertarget{ref-R-tibble}{}
Müller, K., \& Wickham, H. (2018). \emph{Tibble: Simple data frames}.
Retrieved from \url{https://CRAN.R-project.org/package=tibble}
\hypertarget{ref-R-ggplot2}{}
Wickham, H. (2016). \emph{Ggplot2: Elegant graphics for data analysis}.
Springer-Verlag New York. Retrieved from \url{http://ggplot2.org}
\hypertarget{ref-R-tidyverse}{}
Wickham, H. (2017). \emph{Tidyverse: Easily install and load the
'tidyverse'}. Retrieved from
\url{https://CRAN.R-project.org/package=tidyverse}
\hypertarget{ref-R-forcats}{}
Wickham, H. (2018a). \emph{Forcats: Tools for working with categorical
variables (factors)}. Retrieved from
\url{https://CRAN.R-project.org/package=forcats}
\hypertarget{ref-R-stringr}{}
Wickham, H. (2018b). \emph{Stringr: Simple, consistent wrappers for
common string operations}. Retrieved from
\url{https://CRAN.R-project.org/package=stringr}
\hypertarget{ref-R-tidyr}{}
Wickham, H., \& Henry, L. (2018). \emph{Tidyr: Easily tidy data with
'spread()' and 'gather()' functions}. Retrieved from
\url{https://CRAN.R-project.org/package=tidyr}
\hypertarget{ref-R-dplyr}{}
Wickham, H., François, R., Henry, L., \& Müller, K. (2018). \emph{Dplyr:
A grammar of data manipulation}. Retrieved from
\url{https://CRAN.R-project.org/package=dplyr}
\hypertarget{ref-R-readr}{}
Wickham, H., Hester, J., \& Francois, R. (2017). \emph{Readr: Read
rectangular text data}. Retrieved from
\url{https://CRAN.R-project.org/package=readr}
\hypertarget{ref-R-knitr}{}
Xie, Y. (2015). \emph{Dynamic documents with R and knitr} (2nd ed.).
Boca Raton, Florida: Chapman; Hall/CRC. Retrieved from
\url{https://yihui.name/knitr/}
\hypertarget{ref-R-kableExtra}{}
Zhu, H. (2018). \emph{KableExtra: Construct complex table with 'kable'
and pipe syntax}. Retrieved from
\url{https://CRAN.R-project.org/package=kableExtra}
\endgroup
\end{document}
|
|
% NeuroCam manual - Overview
% Written by Christopher Thomas.
% Copyright (c) 2021 by Vanderbilt University. This work is released under
% the Creative Commons Attribution-ShareAlike 4.0 International License.
\chapter{Overview}
\label{intro}
The NeuroCam system is a computer-controlled camera network that collects
footage of a subject interacting with a game (or other apparatus).
It was commissioned by the Attention Circuits Control Laboratory
(\verb+http://accl.psy.vanderbilt.edu/+)
to facilitate their experiments.
A system diagram is shown in Figure \ref{fig-system}, below:
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.95\textwidth]{figs/system-ext.pdf}
\end{center}
\caption{System block diagram.}\label{fig-system}
\end{figure}
The NeuroCam system processes several types of data and events (described
in detail in later sections):
\begin{itemize}
\item It collects frame data (with timestamps) from several cameras.
\item It collects streamed video data from the game machine.
\item It accepts web connections from authorized computers for control and
monitoring.
\item It provides a ``monitoring'' feed to the control computer showing
all video streams.
\item It records ``marker'' events when interface buttons are clicked on
the monitoring web page.
\item It records digital (TTL) signals from external equipment.
\item It accepts TTL ``start'' and ``stop'' signals from external equipment.
\item It offers collected data for examination, download, and
post-processing via a web interface after experiments have completed.
\end{itemize}
% NOTE - We can't mbox verbatim commands, so manually add line breaks.
To get started, connect an authorized machine to the ``\verb+neurocam+''
network and point it to \linebreak
``\verb+http://192.168.1.+\textit{(value)}''
(the IP address given on the sticker on the NeuroCam machine).
%
% This is the end of the file.
|
|
\subsection{Vertex-weighted graph}
A vertex-weighted graph is a graph in which each vertex is assigned a numerical weight.
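A minimal sketch (illustrative only; the vertex names and weights are made up) of one common representation, an adjacency list plus a vertex-weight map:

```python
# A vertex-weighted graph: adjacency list plus a separate weight per vertex.
graph = {"a": ["b", "c"], "b": ["c"], "c": []}
vertex_weight = {"a": 2.0, "b": 5.0, "c": 1.0}

def path_weight(path):
    # in a vertex-weighted graph, a path's weight sums the weights of its vertices
    return sum(vertex_weight[v] for v in path)
```

For example, the path $a, b, c$ has weight $2 + 5 + 1 = 8$.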
|
|
%!TEX root = ../../thesis.tex
\section{Rich-text without HTML editing APIs in practice}
Google completely rewrote their document editor in 2010, abandoning HTML editing APIs entirely. In a blog post\footnote{\url{http://googledrive.blogspot.fr/2010/05/whats-different-about-new-google-docs.html}, last checked on 07/18/2015}, they stated some of the reasons discussed in section \refsection{sec:disadvantages_of_html_editing_apis}. They state that, when using the editing mode, if a browser has a bug in a particular function, Google is not able to fix it; in the end, they could only implement the ``least common denominator of features''. Furthermore, abandoning HTML editing APIs enables features that are otherwise impossible, for example tab stops for layout \cite{bw}. With its document editor, Google demonstrates that it is possible to implement a fully featured rich-text editor using only JavaScript, without HTML editing APIs.
% However, fetching input and modifying text will not suffice to implement a text editor or even a simple text field. There is many more things, that need to be considered which will be discussed in chapter \ref{ch:concept}. %\refchapter
%''Ace'' and ''CodeMirror'' demonstrate it is possible to mimic text-inputs with JavaScript to implement code editors. Rich-text editing is usually being implemented using HTML editing APIs. There are a few exceptions.
Google's document editor is proprietary software and its implementation has not been documented publicly. Most rich-text editors still rely on HTML editing APIs. The editor ``Firepad''\footnote{\url{http://www.firepad.io/}, last checked 07/23/2015} is another exception. It is based on ``CodeMirror''\footnote{A web-based source code editor} and extends it with rich-text formatting. The major disadvantage of Firepad is its origin as a source code editor: it generates ``messy'' (non-semantic) markup with lots of control tags, and it has a sparse API that is not designed for rich-text editing and offers no public methods to format the text. It is to be noted that Google's document editor generates lots of control tags as well, but it is only used within Google's portfolio of office apps, where it may not be necessary to create \textit{well-formatted}, semantic markup. A list of rich-text editors using and not using HTML editing APIs can be found in \reffigure{fig:editors_editing_apis_table} and \reffigure{fig:editors_not_editing_apis_table}. % https://github.com/plotnikoff/HTE
%% In October 1998 the World Wide Web Consortium (W3C) published the ''Document Object Model (DOM) Level 1 Specification''. This specification includes an API on how to alter DOM nodes and the document's tree\footnote{\url{http://www.w3.org/TR/REC-DOM-Level-1/level-one-core.html}, last checked on 07/10/2015}. It provided a standardized way for changing a website's contents. With the implementations of Netscape's JavaScript and Microsoft's JScript this API has been made accessible to web developers.
%\section{Rich-text libraries implemented without editing APIs}
|
|
\documentclass{article}
\usepackage{fullpage}
\usepackage{html}
\title{Using Soot as a Program Optimizer}
\author{Patrick Lam (\htmladdnormallink{plam@sable.mcgill.ca)}{mailto:plam@sable.mcgill.ca}}
\date{March 23, 2000}
\begin{document}
\maketitle
\section{Goals}
This tutorial describes the use of Soot as an optimization tool.
After completing this tutorial, the user will be able to use Soot
to optimize classfiles and whole applications.
\paragraph{Prerequisites} The user should have a working installation
of Soot; successful completion of the
\htmladdnormallink{introduction}{../intro}
is one way to exercise one's installation of Soot.
\section{Classfile Optimization}
Soot is able to optimize individual classfiles. Some of the transformations
which can be carried out on individual classfiles include:
common subexpression elimination, partial redundancy elimination,
copy propagation, constant propagation and folding, conditional
branch folding, dead assignment elimination, unreachable code elimination,
unconditional branch folding, and unused local
elimination.
%% Common subexpression elimination has also been developed, but the
%% current implementation is slow; it must be explicitly enabled.
%% (This is described in the \htmladdnormallink{phase-options}
%% {../phase-options/\#SECTION00040000000000000000} document.)
%% Partial-redundancy elimination is being developed.
In order to optimize the {\tt Hello} example from the previous tutorial,
we issue the command:
\begin{verbatim}
> java soot.Main -O Hello
Transforming Hello...
\end{verbatim}
Soot will then leave a new, improved {\tt Hello.class} file in the
{\tt sootOutput} directory. For this class, the improvement after
Sootification is not so obvious. Soot does, however, eliminate unused
locals. Try adding an unused local to {\tt Hello} and giving this command:
\begin{verbatim}
> java soot.Main -O -f jimple Hello
Transforming Hello...
\end{verbatim}
You should see that the unused local is no longer present.
Any number of classfiles can be specified to Soot in this mode, as
long as they are in the {\tt CLASSPATH}.
\paragraph{Hidden Trap} Note that your classfile may belong to some
package; it may be called, for instance, {\tt soot.Scene}. This
indicates that the {\tt Scene} class belongs to the {\tt soot} package.
It will be in a {\tt soot/} subdirectory. In order to Sootify this
file, you must be in the parent directory (not {\tt soot/}), and you
must specify {\tt java soot.Main -O soot.Scene}.
Unfortunately, our current optimizations with {\tt -O} tend to have
little effect on the program execution time.
\section{Program Optimization}
Soot provides the {\tt -app} switch to make it work on all the class
files in an application. When this switch is present, the user specifies
the main classfile, and Soot will load all needed classes.
Soot has a whole-program mode which allows it to carry out
whole-program transformations; for instance, method inlining requires
the whole program to correctly resolve virtual method calls.
To specify that Soot should do whole-program optimizations ({\tt -W}),
as well as single-class optimizations, use the command:
\begin{verbatim}
> java soot.Main --app -W Hello
Transforming Hello...
\end{verbatim}
Soot will write out all classes except those in the {\tt java.*},
{\tt javax.*} and {\tt sun.*} packages.
The default behaviour of {\tt -W} is to statically inline methods.
Soot is also capable of static method binding; use
\begin{verbatim}
> java soot.Main --app -p wjop.smb on -p wjop.si off -W -O
Hello
\end{verbatim}
This type of optimization has produced significant speedups on
some benchmarks.
\section{Summary}
This lesson has described how Soot can be used to optimize classfiles
and whole applications.
\section{History}
\begin{itemize}
\item March 14, 2000: Initial version.
\item March 23, 2000: Changed documentation to reflect fact that -W
includes -O.
\item May 31, 2003: Updated for Soot 2.0.
\end{itemize}
\end{document}
|
|
\section{Status of this document.}
\hspace{1.0em} 9.10.98 created by N. Amelin.
|
|
\chapter{Group theory}
|
|
%
% Chapter 4
%
\chapter{Monte Carlo event generation}
\label{event_sim}
\section{Introduction}
Accurate simulations of signal and background processes are needed for searches for new physics. The primary collision and the decay processes in an event can be described by perturbative QFT. However, perturbative QCD (pQCD) cannot describe QCD bound states; therefore phenomenological models are needed to describe hadronization.
Event generators are used for generating simulated particle physics events. They factorize the full event simulation into individual tasks, and MC methods are used for the probabilistic branching between these tasks. MC methods are a class of computational algorithms that rely on repeated random sampling, so that the simulation has the same average behavior as collision data. Signatures of particles beyond the SM can be generated and compared to those of the generated background processes.
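As a toy illustration of the repeated-random-sampling idea (not taken from any generator; the exponential decay-time distribution and the mean lifetime $\tau = 2$ are assumptions made for the example), one can estimate a mean lifetime by inverse-transform sampling:

```python
# Monte Carlo toy: sample decay times t = -tau * ln(u) with u uniform in (0, 1]
# (inverse-transform sampling of an exponential distribution) and check that
# the sample mean converges to the true mean lifetime tau.
import math
import random

random.seed(1)                 # fixed seed for reproducibility
tau = 2.0                      # assumed mean lifetime, arbitrary units
n = 200_000                    # number of sampled decays
times = [-tau * math.log(1.0 - random.random()) for _ in range(n)]
estimate = sum(times) / n      # law of large numbers: approaches tau as n grows
```

The statistical uncertainty of such an estimate scales as $1/\sqrt{n}$, which is why generator-level predictions require large simulated samples.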
General-purpose Monte Carlo (GPMC) generators, like PYTHIA~\cite{Sjostrand:2014zea}, provide fully exclusive simulations of high energy collisions. However, there are also event generators that are specialized in a certain aspect of the event simulation. Perturbative matrix elements for the scattering process are implemented in matrix element generators. Hadronic event generators simulate the initial and final state particle showers, hadronization, and soft hadron-hadron physics, including the initial state's composition and substructure. An overview of different steps in MC generation for \pp collision events can be seen in Figure~\ref{fig:simulation}.
\begin{figure}[htbp]
\centering
\includegraphics[width=0.8\textwidth]{plots/chapter4/simulation.png}
\caption{MC simulation of an event in \pp collisions.}
\label{fig:simulation}
\end{figure}
\section{Monte Carlo simulation}
The primary hard interaction process and the decay of short-lived particles happen at short distance scales. The QCD and QED radiation at time scales much below $\frac{1}{\Lambda}$, where $\Lambda$ is a typical hadronic scale of a few hundred~\MeV, also happens at short distance scales. Soft- and collinear-safe inclusive observables, such as total decay widths or inclusive cross-sections, can be computed with pQCD for momentum scales much larger than this scale. The final state collinear splittings and soft emissions give rise to large logarithmically divergent corrections, which cancel against virtual corrections in the total cross-section. Initial state collinear singularities are factorized into parton distribution functions (PDFs). Therefore, the cross-section remains accurate up to higher-order corrections if interpreted as an inclusive cross-section. Otherwise, the QCD singularities can lead to non-convergence of the fixed-order expansion.
\textbf{Matrix element generator:} Matrix element generators generate the exact matrix elements for the production of the process. They also produce a certain number of additional partons for hard, large-angle emissions. The radiation of extra partons is not included at the tree level accuracy of the hard process. The radiation of an extra parton with tree level accuracy can be included to provide next-to-leading-order (NLO) corrections along with all NLO virtual corrections. The parton shower algorithms use as input the final state partons of the hard process and their phase space.
\textbf{Parton shower algorithm:} The parton shower algorithm is used for computing the cross-section for a generic hard process. Parton level events are transferred from a hard process generator to a shower generator, containing a list of particles and the used free parameters, using the Les Houches Event File standard~\cite{Alwall:2006yp}. The kinematics of the basic process is first generated, followed by a sequence of independent shower splittings. The cross-section for the given final state is calculated by assigning a probability to each splitting vertex. Collinear emissions and soft gluon emissions at arbitrary angles are the two sources of infrared singularities in massless field theories like QCD. PYTHIA uses a $\text{p}_{\perp}$ ordered shower evolution for correctly describing both effects.
\textbf{Matching:} QCD color confinement restricts quarks and gluons from existing as isolated particles. The hadronization of a quark or a gluon gives rise to hadrons or their decay products. Jets are collimated bunches of these hadrons. The collinear/soft radiation of an appropriate (N + 1) parton final state, generated by a matrix element generator, can give rise to an (N + 1) jet event. An (N + 1) jet event can also be obtained from an N parton final state with a hard, large-angle emission during shower evolution. A matching has to be done if different generators have been used for the matrix elements and the parton showers, or if extra partons have been generated by the hard process generator.
\textbf{Hadronization models:} The hadronization scale $\text{Q}_{\text{had}}$ is by construction equal to the infrared cut-off where the parton shower ends. Colored partons are transformed into a set of colorless hadrons by GPMCs. This happens at scales with low momentum transfers and at long distances, where non-perturbative effects become important. GPMCs use models that rely on the color flow information between partons as a starting point for hadronization.
\textbf{Soft hadron-hadron physics modeling:} The underlying event is the additional activity beyond the basic process and its associated initial- and final-state radiation. The dominant part comes from additional color exchanges between the beam remnants. Multiple parton-parton interactions (MPI) can produce two or more back-to-back jet pairs, with each pair having a small transverse momentum. Most MPI are soft, and they influence the color flow and the event's total scattered energy. This increases the particle multiplicity in the final state and affects the final state activity. Compared to events with no hard jets, the hard jets appear to sit on top of a higher ``pedestal'' of underlying activity. This comes from the impact parameter dependence, since central collisions are more likely to contain at least one hard scattering due to the higher probability of interactions, and is called the ``jet pedestal'' effect.
\textbf{Parameter tuning:} The accuracy of the models used is very important for event simulation. The accuracy depends on the inclusiveness of the chosen observables and the sophistication of the simulation. The models can be improved by improving the theoretical calculations. The precision also depends on how well the free parameters are constrained; constraining them with existing collision data is referred to as generator tuning. MC generators are not tuned beyond the constraints given by theoretical and experimental precision, to avoid overfitting. The final state of the particles and their spectra are influenced by event modeling and generator tuning. Events generated with different generators or tunes can differ and might not describe the collision data in the entire phase space.
\section{Monte Carlo generators}
PYTHIA has been developed for multi-particle production in \pp collisions and the simulation of jets. PYTHIA can generate hard subprocesses, initial and final state parton showers, hadronization, decays, and the underlying event. Many hard processes have been implemented for generating the matrix elements for the final state and the phase space calculation. PYTHIA can optimally generate $2 \to 1$ and $2 \to 2$ processes. Resonance decays with resonance masses above the b quark system are implemented. Their branching fractions and partial widths can be dynamically calculated as a function of their mass. If the spin information is available for resonance decays, it leads to angular correlations of the resonance decay products; otherwise, the resonance decays isotropically. GPMC generators like PYTHIA can simulate the full process. However, there are specialized generators that deal with a certain aspect of the event simulation.
\textbf{MadGraph:} MadGraph generates the matrix element with leading-order (LO) accuracy~\cite{Alwall:2011uj}. MadGraph is a matrix element generator for processes that involve final states with a large number of jets, heavy flavor quarks, leptons, and missing energy. Events from new physics models that are renormalizable or from an effective field theory written in a Lagrangian can be generated. The full amplitude is split into gauge invariant sub-amplitudes. The matrix element contains the full spin correlation and Breit-Wigner effects but is not valid far from the mass peak.
\textbf{POWHEG:} POWHEG is a framework for implementing NLO matrix element calculations~\cite{Alioli:2010xd}. It includes NLO virtual corrections and radiation of an extra parton in the matrix element. It needs the LO matrix elements and the finite part of the virtual corrections as input from which it finds all the singular regions. The singular regions are characterized by a final state parton becoming collinear or soft to either an initial state parton or a final state parton. The singular regions can be grouped according to their underlying LO diagram by replacing this parton pair with a single parton of appropriate flavor.
\textbf{aMC@NLO:} aMC@NLO implements all aspects of NLO computation and its matching with parton showers~\cite{Frederix:2011ss, Alwall:2014hca}. NLO calculations can be achieved by combining one-loop matrix elements and tree-level matrix elements. Tree level computations are performed using MadGraph, and one-loop amplitudes are evaluated with MadLoop~\cite{Hirschi:2011pa}. The matched samples which differ by their final state multiplicity can be merged using the FxFx merging scheme.
\textbf{MLM matching:} The MLM matching scheme~\cite{Mangano:2001xp, Mangano:2002ea} matches partons from matrix element calculations to jets reconstructed after shower generation. At parton level, events are required to have a separation greater than a minimum value, $\text{R}_{\text{jj}} > \text{R}_{\text{min}}$, between partons, and each parton must have at least a minimum transverse energy $\text{E}^{\text{min}}_{\text{T}}$. The jet closest in $(\eta, \phi)$ to the hardest parton is selected, and both match if their distance is smaller than $\text{R}_{\text{min}}$. Once a match is found, the jet is removed, and matching is done with the next parton. If a match is not found, the event is rejected. This is the case for collinear or soft partons, which do not lead to an independent jet or are too soft for jet reconstruction.
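The greedy matching step just described can be sketched as follows (an illustrative sketch, not the actual implementation; real schemes differ in details such as the jet definition, the matching radius, and the thresholds, all of which are assumed values here):

```python
# Hedged sketch of an MLM-style parton-jet matching step: hardest parton
# first, match to the closest jet in (eta, phi) within r_min, remove matched
# jets, and reject the event if any hard parton stays unmatched.
import math

def delta_r(a, b):
    deta = a["eta"] - b["eta"]
    dphi = abs(a["phi"] - b["phi"])
    if dphi > math.pi:                   # wrap phi into [0, pi]
        dphi = 2 * math.pi - dphi
    return math.hypot(deta, dphi)

def mlm_match(partons, jets, r_min=0.4, et_min=20.0):
    jets = list(jets)                    # work on a copy; jets get consumed
    for parton in sorted(partons, key=lambda p: -p["et"]):
        if parton["et"] < et_min:        # too soft to require a jet
            continue
        if not jets:
            return False
        best = min(jets, key=lambda j: delta_r(parton, j))
        if delta_r(parton, best) > r_min:
            return False                 # unmatched hard parton: reject event
        jets.remove(best)                # each jet may match only one parton
    return True

partons = [{"eta": 0.1, "phi": 0.0, "et": 80.0},
           {"eta": -1.2, "phi": 2.9, "et": 45.0}]
jets = [{"eta": 0.15, "phi": 0.05}, {"eta": -1.1, "phi": 3.0}]
```

With the sample parton and jet lists above, both hard partons find a jet within the assumed $\text{R}_{\text{min}} = 0.4$, so the event is kept.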
\textbf{FxFx merging:} The FxFx merging scheme is an NLO merging procedure~\cite{Frederix:2012ps}. Exclusive events with J light jets are described with NLO accuracy by combining computations based on matrix elements with J and (J + 1) partons. NLO merging is more complicated than LO merging because the matrix elements enter twice: as the Born contribution for processes with J partons, and as the real emission contribution, infrared subtraction terms, and one-loop contributions for processes with (J - 1) partons. Events are reweighted, and a certain fraction of events may carry negative weights.
\section{Detector simulation}
In detector simulation, the interactions of particles with the detector material and the detector response are simulated; the resulting events can then be reconstructed and analyzed. Geant4~\cite{Agostinelli:2002hh} is used for detector simulation. It is a toolkit for simulating the passage of particles through matter across a very wide energy range. The user defines the detector geometry and materials; a large number of components with different shapes and materials can be included in the geometrical model. Sensitive elements can be defined, which record information in the form of hits. Hits are needed to simulate the detector response, a step called digitization. The detector's geometrical structure is divided into logical and physical volumes. Logical volumes contain the information about the material and the sensitive-detector behavior; a mixture of different elements and isotopes can be used for the material. Physical volumes carry information about the spatial positioning, or placement, of the logical volumes.
Particles can interact with the detector material or decay while they are transported through the geometry. Electromagnetic and hadronic processes are modeled in Geant4 with implementations that depend on the energy and particle type. Geant4 can handle ionization (described by energy-loss and range tables), bremsstrahlung, electron-positron pair production from photons, the photoelectric effect, pair conversion, annihilation, synchrotron and transition radiation, scintillation, refraction, reflection, absorption, the Cherenkov effect, and many other processes. Particles are defined with their basic properties, such as mass, charge, and the processes they are sensitive to. Particles are transported in steps and are tracked through materials and external electromagnetic fields. Event data is generated during simulation: before processing, an event contains primary vertices and primary particles; after processing, hits and digitizations generated by the simulation are added. Trajectories of simulated particles can optionally be stored to record the ``simulation truth.''
\section{Monte Carlo samples}
MC simulated event samples are used to model the signal and background contributions in all the analysis regions, using several event generators. In all cases, parton showering, hadronization, and underlying-event properties are modeled using PYTHIA version 8.212. The PYTHIA parameters affecting the description of the underlying event are set to the CUETP8M1 tune in 2016~\cite{Khachatryan:2015pea}, except for the \ttbar sample, where the CP5 tune is used; CP5 is also the tune used in 2017 and 2018~\cite{CMS:2018zub}. The NNPDF3.0 PDF set is used for all 2016 samples, and the NNPDF3.1 PDF set for the 2017 and 2018 samples~\cite{Ball:2017nwa}.
Simulation of the interactions between particles and the CMS detector is based on Geant4. The same reconstruction algorithms used for data are applied to the simulated samples. Higgs bosons are produced in \pp collisions predominantly by gluon-gluon fusion (ggF)~\cite{Georgi:1977gs}, but also by vector boson fusion (VBF)~\cite{Cahn:1986zv} and in association with a vector boson (W/Z)~\cite{Glashow:1978ab}. The ggF, VBF, and associated-production Higgs boson samples are generated with the POWHEG generator in the implementation described in Refs.~\cite{Heinrich:2017kxx, Buchalla:2018yce}. Only Higgs bosons produced via the ggF and VBF production mechanisms are considered as signal; the associated-production samples are not used.
Embedded samples are data samples with well-identified \Zmm events from which the muons are removed and simulated tau leptons are embedded with the same kinematics as the replaced muons. These samples are employed for the data-driven estimation of the \Ztt background and of parts of the \ttbar, diboson, and single top quark backgrounds. The MadGraph generator is used to simulate the $\Zee/\Pgm{}\Pgm + \text{jets}$ process along with the \wjets background process; both are simulated at LO with the MLM jet matching and merging scheme~\cite{Alwall:2007fs}.
Diboson production is simulated at NLO using the aMC@NLO generator with the FxFx jet matching and merging scheme. Top quark pair and single top quark production samples are generated using POWHEG. Events have multiple \pp interactions per bunch crossing (pileup) because of the high instantaneous luminosities attained during the data-taking periods. This effect is taken into account by generating concurrent minimum bias events in the simulated samples. All simulated samples are weighted to match the pileup distribution observed in the data.
\documentclass[12pt]{article}
\usepackage{lingmacros}
\usepackage{tree-dvips}
\begin{document}
\section{Method}
\subsection{Citation Diversity Statement}
Recent work in several fields of science has identified a bias in citation practices such that papers from women and other minorities are under-cited relative to the number of such papers in the field \cite{mitchell2013gendered,dion2018gendered,caplar2017quantitative, maliniak2013gender, Dworkin2020.01.03.894378, bertolero2021racial, wang2021gendered, chatterjee2021gender, fulvio2021imbalance}. Here we sought to proactively consider choosing references that reflect the diversity of the field in thought, form of contribution, gender, and other factors. We obtained predicted gender of the first and last author of each reference by using databases that store the probability of a name being carried by a woman \cite{Dworkin2020.01.03.894378,zhou_dale_2020_3672110}. By this measure (and excluding self-citations to the first and last authors of our current paper), our references contain $A\%$ woman(first)/woman(last), $B\%$ man/woman, $C\%$ woman/man, $D\%$ man/man, and $E\%$ unknown categorization. This method is limited in that a) names, pronouns, and social media profiles used to construct the databases may not, in every case, be indicative of gender identity and b) it cannot account for intersex, non-binary, or transgender people. Second, we obtained predicted racial/ethnic category of the first and last author of each reference by databases that store the probability of a first and last name being carried by an author of color \cite{ambekar2009name, sood2018predicting}. By this measure (and excluding self-citations), our references contain $F\%$ author of color (first)/author of color(last), $G\%$ white author/author of color, $H\%$ author of color/white author, and $I\%$ white author/white author. 
This method is limited in that a) names, Census entries, and Wikipedia profiles used to make the predictions may not be indicative of racial/ethnic identity, and b) it cannot account for Indigenous and mixed-race authors, or those who may face differential biases due to the ambiguous racialization or ethnicization of their names. We look forward to future work that could help us to better understand how to support equitable practices in science.
\newpage
\bibliographystyle{ieeetr}
\bibliography{./bibfile.bib}
\end{document}
\chapter{Technology Review}
\section{Libraries}
\subsection{React}
React is an open-source, frontend, JavaScript library for building user interfaces \cite{react-2021}. React is maintained by Facebook. React components are typically written in JavaScript XML (JSX), a syntax that combines HTML and JavaScript. Before component-based libraries such as React, developers typically built user interfaces from scratch by manipulating the DOM directly, which led to long development times and many bugs and glitches. React pages are usually split into separate, reusable components that together make up the whole page. For example, a search bar can be its own component, and components can be reused throughout the project. React uses the virtual Document Object Model (DOM), which allows it to update web applications faster. The React ecosystem includes many installable packages, such as Redux \cite{skillcrush_react} \cite{sufiyan_2021}.
\begin{figure}[H]
\centering
\includegraphics[width=0.35\linewidth]{images/react-icon.png}
\caption{The React Icon}
\label{fig:React Icon}
\end{figure}
\section{Packages}
\subsection{Axios}
Axios is an HTTP client for browsers and Node.js. In this project, Axios is used to create, read, update and delete (CRUD) data from APIs. Below is a GET request using Axios.
\begin{minted}[breaklines]{js}
axios.get("http://localhost:4000/api/products")
\end{minted}
\subsection{bcrypt}
bcrypt is a password-hashing function designed by Niels Provos and David Mazières and released in 1999; the bcrypt package makes it available to Node.js applications. bcrypt is based on the Blowfish cipher. In this project, bcrypt is used to hash the account passwords stored in the MongoDB database. bcrypt generates a unique salt for each account's password, so the MongoDB database stores the hashed password instead of the actual password text \cite{codahale-com_2010}. A bcrypt hash is stored in the form \mintinline{text}|$2b$[cost]$[22 character salt][31 character hash]| \cite{bcrypt-wikipedia_2021}.
Where
\begin{itemize}
\item \$2b\$ is the hash algorithm identifier (older hashes may use \$2a\$).
\item The cost is the cost factor: the base-2 logarithm of the number of key expansion iterations.
\item The 22 character salt is a 16-byte salt encoded to base64.
\item The 31 character hash is a 24-byte hash encoded to base64.
\end{itemize}
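As a sketch of this layout, the fields of a bcrypt-format string can be pulled apart with plain string operations. The string below is illustrative only, chosen to match the format, not a real stored hash from this project.

```javascript
// Splitting an example bcrypt-format string into its fields.
// The string below is illustrative only, not a real stored hash.
const stored = "$2b$10$N9qo8uLOickgx2ZMRZoMyeIjZAgcfl7p92ldGxad68LJZdL17lhWy";
const [, identifier, cost, saltAndDigest] = stored.split("$");
const salt = saltAndDigest.slice(0, 22);  // 22-character base64 salt
const digest = saltAndDigest.slice(22);   // 31-character base64 hash
console.log(identifier, cost, salt.length, digest.length); // 2b 10 22 31
```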
\subsection{date-fns}
date-fns is a toolset used to modify dates in JavaScript. In this project, it is used to modify the date format for orders.
\subsection{Express}
Express is a fast and minimalist framework for Node.js. Express is used to provide code for the server-side of web applications \cite{express}.
\subsection{Mongoose}
Mongoose is a MongoDB object modelling tool designed to work in an asynchronous environment. Mongoose is used with Node.js and makes it easier to use MongoDB \cite{npm-mongoose} \cite{mcgrath_2019}.
\subsection{React-Bootstrap}
React-Bootstrap is a ``complete re-implementation of the Bootstrap components using React''. React-Bootstrap differs from regular Bootstrap in that each component is re-implemented as a React component, with no dependency on Bootstrap's JavaScript \cite{react-bootstrap}.
\subsection{React Router DOM}
React Router DOM is used to handle the routing and paths for a React application.
\subsection{React Helmet}
React Helmet is used to set the title of each page.
\subsection{React Redux}
React Redux is the official React user interface bindings layer for Redux. React Redux allows React components to read data from a Redux store and dispatch actions to the store to update the state \cite{react-redux-blog-rss}.
\subsection{Redux}
Redux is a state container for React. Redux allows for the managing of the state of React applications \cite{redux-blog-rss}.
\subsection{Redux Toolkit}
Redux Toolkit is the recommended way to write Redux code. Redux Toolkit is a simplified version of Redux and it uses less code \cite{redux-blog-rss}.
\subsection{Stripe}
This package contains components for Stripe, which is used to accept payments.
\section{Languages}
\subsection{HTML}
HyperText Markup Language or HTML is a markup language that is used to structure web pages. HTML was invented by Tim Berners-Lee and released in 1993. HTML is not a programming language. HTML can structure paragraphs, lists, tables, images and much more on a web page. Tags are used in HTML to create elements. Most HTML tags must have an end tag. Every HTML page must have \mintinline{html}{<!DOCTYPE html>} declared at the top. Below is an example of HTML.
\begin{minted}[breaklines]{html}
<!DOCTYPE html>
<html>
<head>
<title>Page Title</title>
</head>
<body>
<h1>This is a heading</h1>
<p>This is a paragraph.</p>
</body>
</html>
\end{minted}
\subsection{CSS}
Cascading Style Sheets or CSS is a design language used to create the style, look and design of web pages. CSS was released in 1996. CSS can be used with HTML to create and style web pages. CSS can be used to control the font colour, image size, background colour, paragraph spacing, button design and much more. CSS is a rule-based language. CSS rules can be stored in their own separate file with a .css extension, or they can be declared in .html files. CSS rules can define classes and IDs. Classes are defined with their name preceded by a full stop (.), and an ID is preceded by a hashtag (\#). Elements can use one or multiple CSS classes at the same time, and classes can be reused. CSS classes are applied like so: \mintinline{html}{<h1 class="start">Heading</h1>}. Elements can have only one ID, and all IDs on a page must be unique. CSS IDs are applied like so: \mintinline{html}{<h1 id="end">Heading</h1>}. Below is an example of CSS rules in a .css file.
\begin{minted}[breaklines]{css}
/* This CSS style will be applied to all <h1> tags. */
h1 {
color: blue;
text-align: center;
background-color: red;
}
/* CSS class. */
.start {
background-color: yellow;
}
/* CSS id. */
#end {
background-color: green;
}
\end{minted}
\subsection{JavaScript}
JavaScript is a programming language that is used for web pages. JavaScript was released in 1995. JavaScript can be used with HTML and CSS to create responsive and interactive web pages. JavaScript is one of the most popular programming languages in the world, and it is used on nearly every single web page. JavaScript can access and change the Document Object Model (DOM). JavaScript code can be declared in its own .js file or in a HTML file within a \mintinline{html}{<script>} tag. Below is a sample of HTML and JavaScript code.
\begin{minted}[breaklines]{html}
<!DOCTYPE html>
<html>
<body>
<h1>Add</h1>
<p id="add"></p>
<script>
var x = 5 + 4;
document.getElementById("add").innerHTML = "5 + 4 = " + x;
</script>
</body>
</html>
\end{minted}
\section{Runtime Environment}
\subsection{Node.js}
Node.js is an open-source, asynchronously event-driven JavaScript runtime environment that is lightweight and efficient. Node.js was released in 2009 and is part of the MERN stack (MongoDB, Express, React, Node.js). Node.js can be used with the Express framework to handle GET, POST, PUT and DELETE HTTP requests. Below is an example of Node.js usage written in JavaScript \cite{node.js}.
\begin{minted}[breaklines]{js}
const http = require('http');
const hostname = '127.0.0.1';
const port = 3000;
const server = http.createServer((req, res) => {
res.statusCode = 200;
res.setHeader('Content-Type', 'text/plain');
res.end('Hello World');
});
server.listen(port, hostname, () => {
console.log(`Server running at http://${hostname}:${port}/`);
});
\end{minted}
\section{Database}
\subsection{MongoDB}
MongoDB is a document-oriented NoSQL database that is highly scalable and offers high performance. It was released in 2009 and is one of the most popular databases in the world. MongoDB uses JSON-like documents to store data. MongoDB allows users to create clusters in which databases are created and stored. MongoDB is free to use but provides paid options for large databases; the free tier allows users to create one cluster and use up to 512 MB of storage. Each MongoDB document receives a unique 12-byte identifier, rendered as a 24-character hexadecimal string, called ``\_id''. Below is an example of a MongoDB document \cite{mongodb} \cite{objectid-mongodb-manual}.
\begin{minted}[breaklines]{json}
{
"_id": "5cf0029caff5056591b0ce7d",
"firstname": "Jane",
"lastname": "Wu",
"address": {
"street": "1 Circle Rd",
"city": "Los Angeles",
"state": "CA",
"zip": "90404"
},
"hobbies": ["surfing", "coding"]
}
\end{minted}
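The \_id is not purely random: per the MongoDB documentation, the first 4 bytes of an ObjectId encode a Unix timestamp in seconds. The small sketch below decodes the creation time from the example document's id.

```javascript
// Decoding the timestamp embedded in a MongoDB ObjectId.
// The first 4 bytes (8 hex characters) are a Unix timestamp in seconds.
const id = "5cf0029caff5056591b0ce7d"; // 24 hex characters = 12 bytes
const seconds = parseInt(id.slice(0, 8), 16);
const created = new Date(seconds * 1000);
console.log(seconds, created.getUTCFullYear()); // 1559233180 2019
```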
\definecolor {processblue}{cmyk}{0.96,0,0,0}
\clearpage
\section{DFA Fixed point iteration}
(Prepared by Namrata Priyadarshini, Shivam Bansal)
\textbf{Algorithm(Forward DFA):} \\
Input: control flow graph $CFG = (N, E, Entry, Exit)$ \\
//Boundary condition \\
$OUT[Entry] = Boundary \, Condition \, Value$ \\
//Initialization for iterative algorithm \\
For each basic block B other than Entry \\
\hspace*{0.5cm} $OUT[B] = Top \, Value $ \\
//Iterate \\
While (changes to any OUT occur) \{ \\
\hspace*{0.5cm} For each basic block B other than Entry \{ \\
\hspace*{1cm} $IN[B] = $ meet over $(OUT[p])$, for all preds p of B \\
\hspace*{1cm} $OUT[B] = f_B(IN[B])$ \\
\hspace*{0.5cm} \} \\
\} \\
So let's look at the DFA fixed point iteration once again. The input to the fixed point iteration algorithm is a control flow graph $CFG = (N, E, Entry, Exit)$, where N is the set of nodes, E is the set of edges, and Entry and Exit are special nodes in the control flow graph. Let's look at the forward DFA (we could also have looked at the backward DFA, but for simplicity we arbitrarily pick forward). We initialize OUT of Entry to the boundary condition value, whatever that is for the particular DFA we are interested in. Then for each basic block other than Entry we set $OUT[B]=$ top value, where the top value is again specific to the particular DFA we are looking at. The top value is based on the meet operator: when we define the meet operator, it automatically also defines our top value, because the top value is something which is greater than or equal to everything else (i.e., the ordering operator is also implied by the meet operator).
In the pseudocode we have the fixed point iteration: while changes to any OUT occur, for each basic block B do this computation. Now there's an interesting observation. Because initially everything else is top, only the entry node has a value other than top, namely the boundary condition value. So if we pick some basic block other than Entry, we know OUT of that basic block will stay top, because IN[B] will be top, because all its predecessors' OUT values are top, and so on. The only blocks it makes sense to pick at the first step are the successors of the entry node; everything else we don't need to pick, because if we pick them their values will not change: their predecessors are top, their current value is top, and so the next value is also top, since meet over top is just top and the transfer function applied to top is also top, typically (the transfer function applied to top is not necessarily top, but typically it is; the point is that top typically represents that something has not been reached, or is the most aggressive value). So we want the data flow to actually reach a point before we compute it.
So the question here is: do we really need to consider all basic blocks at every iteration, or can we omit some basic blocks at some iterations? For example, in a forward DFA, in the first iteration we only need to pick the successors of the entry node, and from there on the data will start flowing. Picking something in the middle or at the end in the first iteration is usually not very useful. The other question is: if there are multiple basic blocks from which we could pick, should there be an order in which we pick them? We're going to see later on that the order doesn't matter for correctness or for the result we get, but it may matter from an efficiency perspective.
\section{Worklist Algorithm}
\subsection{Intuition for the Algorithm}
There's a famous algorithm called the worklist algorithm, also commonly called Kildall's worklist algorithm after the person who first devised it. The idea is that for a forward DFA the $out[B]$ value does not change
if none of the $out[p]$ values change, where p is a predecessor of B. The IN value will change only if the OUT value of some predecessor has changed; if none of the predecessors' OUT values have changed, then we are already at a fixed point there (although at other places we may not be at the fixed point). Similarly, for a
backward DFA the $out[B]$ value does not change if none of the $in[s]$ values change, where s is a successor of B.
Based on this observation, the idea is that we maintain a worklist: a list of basic blocks that still need to be processed, because (in a forward DFA) one of their predecessors has just changed. In a forward DFA, whenever we remove a basic block from the worklist we compute its OUT state, and if this state has changed, its successors are added to the worklist. This captures our previous observation and is hopefully more efficient than looking at all basic blocks in each iteration; that's the whole idea. For a backward DFA, whenever we remove a basic block from the worklist we compute its IN state, and if this state has changed, the block's predecessors are added to the worklist.
\subsection{Algorithm}
\textbf{Worklist}: List of basic blocks that still need to be processed.
\textbf{Initialization}: Add basic blocks whose information is known.
\textbf{Termination condition}: Worklist becomes empty.
At initialization time we add the basic blocks whose information is known, i.e., the set of basic blocks for which we have boundary condition values. Typically, for a forward DFA the boundary condition is known for the entry node, and for a backward DFA it is known for the exit node. So initialization involves adding either the entry node to the worklist (forward DFA) or the exit node (backward DFA).
Then the termination condition is that the worklist becomes empty. Things are added to the worklist only if something changed, so when the worklist becomes empty it indicates that we have reached a fixed point. Note that the worklist is a set, because each block may appear at most once in the worklist at any given time. It's possible that a block has two predecessors and both of them changed, so we try to add the same block twice to the worklist; but adding it twice doesn't mean we process it twice, so we simply maintain the worklist as a set.
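The worklist algorithm can be sketched concretely. The three-block CFG, the meet operator (set union), and the transfer functions (each block adds its own ``gen'' definitions, with no kill sets) are all invented for illustration; with a union meet, the top value is the empty set.

```javascript
// Minimal sketch of Kildall's worklist algorithm for a forward DFA
// (a reaching-definitions-style analysis; CFG and gen sets are invented).
const preds = { b1: [], b2: ["b1"], b3: ["b1", "b2"] };
const succs = { b1: ["b2", "b3"], b2: ["b3"], b3: [] };
const gen   = { b1: ["d1"], b2: ["d2"], b3: ["d3"] };

const out = { b1: new Set(), b2: new Set(), b3: new Set() }; // TOP = {}
const worklist = ["b1"]; // initialize with the block whose info is known

while (worklist.length > 0) {
  const b = worklist.shift();
  // IN[B] = meet (here: union) over OUT[p] for all predecessors p of B
  const inB = new Set();
  for (const p of preds[b]) for (const d of out[p]) inB.add(d);
  // OUT[B] = f_B(IN[B]): add this block's own definitions (no kill set)
  const newOut = new Set([...inB, ...gen[b]]);
  // With a union meet the sets only grow, so a size check detects change.
  if (newOut.size !== out[b].size) {
    out[b] = newOut;
    // OUT changed: the successors need to be (re)processed
    for (const s of succs[b]) if (!worklist.includes(s)) worklist.push(s);
  }
}
console.log([...out.b3].sort()); // definitions reaching the end of b3
```

The loop terminates exactly when the worklist is empty, i.e., when no OUT value changed in the last round of processing.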
\subsection{Ordering Blocks in Worklist Algorithm}
\begin{figure}[h!]
\caption{Worklist algorithm efficiency}
\begin {center}
\begin {tikzpicture}[-latex ,auto ,node distance =3.5cm and 5cm ,on grid ,
semithick ,
state/.style ={ rectangle ,top color =white , bottom color = processblue!20 ,
draw,processblue , text=blue , scale = 0.7 ,minimum width =3.5 cm, minimum height = 2.5 cm}]
\node[state] (A){} node [label = {[label distance = 0.55cm]90:}, rectangle split,rectangle split parts=1]{%
b1
};
\node[state] (B) [below left = of A]{} node [label = {[label distance = 0.55cm]90:},rectangle split,rectangle split parts=1] [below left = of A] {%
b2
};
\node[state] (D) [below right =of A]{} node [label = {[label distance = 0.55cm]90:},rectangle split,rectangle split parts=1] [below right = of A] {%
b3
};
\path[->] (A) edge node [above = 0.3 cm] {} (D);
\path[->] (A) edge node [above = 0.3 cm] {} (B);
\path[->] (B) edge (D);
\end{tikzpicture}
\end{center}
\end{figure}
\subsubsection{Intuition with an example}
This answers our first question: can we omit some basic blocks, and if so, how do we identify which ones to omit. The other question is: if there are multiple basic blocks present in the worklist at any point, which one should we pick? Is it okay to pick any one? The answer is yes: it is okay to pick any one. That is a property of the fixed point iteration algorithm itself, completely independent of the worklist algorithm; we can pick any basic block at any time and still arrive at the same solution, which is the most precise solution (for some definition of precision). But from an efficiency point of view, does it matter which one we pick? It turns out that yes, it matters, and here is an example. Suppose the worklist initially contains b1, and b1 changes; then we add both b2 and b3 to the worklist. Now we have two options: pick b3 first and then b2, or pick b2 first and then b3. It seems better to do b2 first, because once b2 is done, the information across the (b2, b3) edge is up to date. If instead we do b3 first, the information on that edge is stale; b3 may or may not change, but once we then do b2, b3 may have to be added to the worklist again, because b2 points to b3. So in this case it would have been better to pick b2 before b3.
\subsubsection{Algorithm}
In general, if some block bi has an edge to some other block bj, then it seems better to pick bi before bj. If the code has no cycles, we can order the basic blocks so that reachable nodes are considered later: if bi can reach bj, then bi is considered before bj. If there is a cycle in the control flow graph, then bi and bj can reach each other, in which case we can pick either of them; we don't really have a good heuristic to say which one. That is the idea we use to order the selection of basic blocks when there are multiple choices in the worklist. For a forward DFA, it is fastest if all predecessors of b are processed before b itself, so that when b is processed we can use the latest information on all its incoming edges. In the absence of loops it is possible to order the blocks so that the algorithm converges by processing each basic block at most once: we can use any topological sort of the graph. A topological sort is an ordering where, if a node x can reach a node y, then x appears before y. If we process the nodes in topologically sorted order, we get good efficiency: every basic block needs to be processed at most once in the entire execution of the DFA fixed point iteration. That is not true if there are cycles. In the previous example, if b3 also pointed back to b2 and we picked b2 first, it is possible that we have to do b2 again: we pick b2, then b3, b3 changes, and because of that change b2 needs to be processed again, and perhaps b3 again because of b2; this can keep going for some time until we hit bottom or some fixed point, so each basic block is potentially processed more than once. So, in the presence of loops, reverse postorder is in general a good idea for a forward DFA and postorder for a backward DFA. There are no theoretical guarantees that this gives the best efficiency, but in general it helps. Reverse postorder captures the fact that, in the absence of cycles, if a node x can reach a node y then x appears before y; even with cycles, some reverse postorder (there can be multiple) gives an arbitrary but typically useful ordering, because for blocks not involved in a cycle it still gives a topological sort.
Analogously for the backward DFA: where reverse postorder places reachable nodes later, postorder places them earlier, which is what we want in a backward DFA. Reverse postorder and postorder are inverses of each other, so one works better for forward DFAs and the
other works better for backward DFAs.
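A reverse postorder can be computed with a single depth-first search; the sketch below uses the three-block graph from the figure (b1 branching to b2 and b3, with an edge from b2 to b3).

```javascript
// Computing a reverse postorder of a CFG by depth-first search.
const succs = { b1: ["b2", "b3"], b2: ["b3"], b3: [] };
const post = [];
const seen = new Set();
function dfs(n) {
  seen.add(n);
  for (const s of succs[n]) if (!seen.has(s)) dfs(s);
  post.push(n); // postorder: a node is emitted after its successors
}
dfs("b1");
const rpo = post.slice().reverse(); // processing order for a forward DFA
console.log(rpo); // → [ 'b1', 'b2', 'b3' ]
```

The `post` array itself is a postorder, which is the analogous processing order for a backward DFA.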
\clearpage
\section{March $12^{th}$ discussion}
\begin{itemize}
\item \textbf{Arpit}: In the forward DFA, we are only initializing the OUT of every basic block. Does it matter whether we initialize both IN and OUT, or only OUT, to the TOP value?
\textbf{A:} No, it doesn't matter whether we initialize both values or just the OUT value, because in the first iteration all the IN values will be recomputed from the predecessors' OUT values. But conceptually it is better to assume that all the values have been set to TOP.
\item \textbf{Arpit}: Does the definition of a semilattice guarantee unique TOP and BOTTOM values?
\textbf{A:} No, a semilattice is just a set of values with the less-than-or-equal-to operator. We can have multiple elements which are not less than any other element, because it is a partial order. From the DFA point of view, we want a unique TOP value. For example, in constant propagation we defined a new TOP value precisely because we wanted a unique TOP value. We don't need a unique BOTTOM value.
\item \textbf{Anirudh}: Can we prove that the worklist algorithm and the earlier algorithm we had for DFA give the same result?
\textbf{A:} There are examples where the two algorithms give different answers. For example, with a disconnected CFG, the worklist algorithm will never reach the other half, but the earlier DFA may change some values in the other half as well and thus produce a different answer (both answers would be equally good).
But if we define our transfer functions such that $IN=TOP$ implies $OUT=TOP$, then the two algorithms are equivalent. We can use an inductive argument to prove the equivalence (using the fact that in an iteration, a value can potentially change only if a predecessor's value changed in the previous iteration).
\item \textbf{Anirudh} : What if in a semilattice there are multiple first common descendants? Is it possible?
\textbf{A:} ...
\item \textbf{Namrata}: If the graph is disconnected, doesn't that mean it is dead code?
\textbf{A:} Exactly; the two algorithms only differ on graphs where we don't really care about the answer.
\item \textbf{Arpit}: If the graph has only one entry and one exit, then how can the graph be disconnected?
\textbf{A:} It is possible. There can be $if(0)\{ ...\}$ or some part of the code that never gets executed. We don't want Entry to reach there, and that's the disconnected part of the program.
\item \textbf{Anirudh}: Can we say that if we have a level in the graph where all the values are set to TOP, then the worklist algorithm terminates prematurely?
\textbf{A:} This confusion arises from the fact that we have defined the data flow set of ``equations'' using the equals sign. We could instead have defined them in the form $OUT[B]\leq meet(p_i)$.
Very informally, we can say that both algorithms are equivalent. But once we go into the mathematics and proofs, we need some properties of the input, such as the graph not being disconnected, or $IN=TOP$ implying $OUT=TOP$, etc.
\item \textbf{Jai} : In reaching definitions we did not remove the definitions in which $x$ was being used but in common subexpression elimination we remove those. In reaching definitions, how is the definition still valid if $x$ has been overwritten?
\textbf{A:} In one we are interested in the values, i.e.\ the values $y$ can have, and in the other we are interested in the expressions, i.e.\ the different expressions $y$ can hold (for example, for phi nodes, where we only care which definitions can reach a point and are not interested in whether $x$ has changed or not).
\item \textbf{Jai} : Of the three analyses, reaching definitions, must-reach definitions, and common subexpressions, the first two look quite similar.
\textbf{A:} It is all about application. In reaching definitions, we are interested in all the definitions that may reach a point. In must-reach, we want the definitions that ``must'' reach that point. Reaching definitions may be used for phi nodes or specialized program paths; must-reach definitions are useful for dominator analysis, or when we want to decrease the distance between a definition and its use. So every analysis has its value in a different context.
\item \textbf{Sonu} : Given the semilattice, how can we define transfer function?
\textbf{A:} We have defined the semilattice, and we will define the transfer function. Note that these are orthogonal things: two different transfer functions may share the same semilattice. However, there are some properties that a transfer function must have with respect to the semilattice, and we will study them.
\end{itemize}
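The worklist behaviour discussed in this exchange can be sketched in a few lines. The following toy C++ implementation is illustrative only (the \code{Node} structure and the 4-bit "set of definitions" lattice are invented for this sketch, not part of the lecture): the worklist is seeded with the entry node alone, so a disconnected component of the CFG is never processed and keeps its initial value, exactly as discussed above.

```cpp
#include <bitset>
#include <cassert>
#include <deque>
#include <vector>

// Toy forward analysis on a 4-bit "set of definitions" lattice
// (meet = union, as in reaching definitions). Hypothetical names.
using Val = std::bitset<4>;

struct Node { std::vector<int> preds, succs; Val gen, kill, out; };

// Worklist algorithm: seed with the entry node only, so any node not
// reachable from it is never processed and keeps its initial (bottom) value.
inline void worklist(std::vector<Node>& cfg, int entry) {
    std::deque<int> work{entry};
    while (!work.empty()) {
        int n = work.front(); work.pop_front();
        Val in;                                       // bottom = empty set
        for (int p : cfg[n].preds) in |= cfg[p].out;  // meet over predecessors
        Val out = (in & ~cfg[n].kill) | cfg[n].gen;   // transfer function
        if (out != cfg[n].out) {                      // changed: revisit succs
            cfg[n].out = out;
            for (int s : cfg[n].succs) work.push_back(s);
        }
    }
}
```

On a CFG where nodes 2 and 3 form a component disconnected from the entry node 0, the worklist never touches them, whereas a round-robin pass over all nodes would still apply their transfer functions.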
\clearpage
\documentclass[twocolumn, 9pt]{article}
\input{packages.tex}
\input{title.tex}
\input{commands.tex}
\setlength\columnseprule{0pt}
\begin{document}
\maketitle
\begin{abstract}
Simon Peyton Jones is the primary contributor to the Glasgow Haskell Compiler (GHC) and to Haskell's design as a functional language, which re-envisions the way we write programs today. His remarkable work "The Haskell 98 Language Report" and his remarks on Philip Wadler's paper "Comprehending Monads" (June 1990) helped our community adopt monads, a mathematical concept from category theory, as a standard functional design pattern. We present a summary of his personal activity and contributions as a researcher.
\end{abstract}
\section{Introduction}
Functional programming plays a central role in forming our next generation of programmers and software engineers; a fact confirmed by an increasing number of companies that are opting for functional languages \cite{tiobe:index}.
Simon Peyton Jones is a British computer scientist born on 18 January 1958 \cite{wiki:jones}, whose research shaped Haskell as a lazy functional programming language. His work as a lead developer concentrated around the Glasgow Haskell Compiler (GHC) and its ramifications \cite{wiki:jones}.
The available public information about Simon Jones's personal life is limited, published mainly by Jones himself. He is married to Dorothy Peyton Jones, a priest in the Church of England, and has six children \cite{microsoft:jones}.
In 1980, Simon Peyton Jones graduated from Trinity College, and although he never obtained a PhD in computer science, he has been a pillar of Microsoft Research's lab in Cambridge, England \cite{jones} since 1998 \cite{wiki:jones} (Section \ref{sec::professional Activity}). During his years at Trinity College, he worked on designing and writing high-level compilers for the school's computers \cite{jones}.
He served as a lecturer at University College London and a professor at the University of Glasgow from 1990 to 1998 \cite{wiki:jones}.
His field of expertise is lazy functional programming \cite{jones}, focusing on language design and implementation (Section \ref{sec::scientific::recognition}).
\section{Professional Activity}
\label{sec::professional Activity}
Simon Peyton Jones is a significant contributor to the design of the Haskell programming language, being the editor of "The Haskell 98 Language Report" (December 2002), a document which serves as the documentation of the Haskell 98 language and libraries \cite{haskell98}.
He co-created the C{-}{-} programming language (1997), which was designed to be generated by compilers for high-level languages such as Haskell \cite{wiki:jones}.
As a vital contributor to the book Cybernauts Awake (1999), Simon Peyton Jones sought to explore the ethical and spiritual implications raised by our new technologies from within a Christian context.
Peyton Jones is currently a chairman at the Computing At School (CAS) group, an organisation that aims to promote computer science as a domain of interest in our education system \cite{wiki:jones}. He also gives educational talks, promoted by Microsoft, in which he shares his insights on topics such as "How to write a great research paper?" \cite{htwagp}.
\section{Scientific Recognition}
\label{sec::scientific::recognition}
At the time of this writing, Peyton Jones has contributed to over 370 research papers as a researcher at Microsoft's research institute in Cambridge \cite{microsoft:research:jones}; most focus on lazy functional programming in Haskell, such as "Safe zero-cost coercions for Haskell" and "Injective Type Families for Haskell", while some concentrate on Microsoft's technologies and products, including "A User-Centred Approach to Functions in Excel" \cite{microsoft:research:jones}. Furthermore, his interest in designing useful languages is noticeable in his papers, e.g. "Backpack: Retrofitting Haskell with Interfaces", where he presents Backpack, a new language for building separately type-checkable \emph{packages} on top of a weak module system like Haskell's \cite{backpack}.
In recognition of his contributions to functional programming languages, Jones was inducted as a Fellow of the Association for Computing Machinery in 2004 \cite{wiki:jones}; in 2011 he was elected to membership in the Academia Europaea \cite{wiki:jones}, the European Union's academy of humanities and sciences.
Also in 2011, Simon Peyton Jones and Simon Marlow received the SIGPLAN Programming Languages Software Award for their work on GHC. Two years later, he received an honorary doctorate from the University of Glasgow \cite{wiki:jones}. He was named a Fellow of the Royal Society (FRS) in 2016 and a Distinguished Fellow of the British Computer Society (DFBCS) in 2017 \cite{wiki:jones}.
\printbibliography
\end{document}
\section{Implementation} %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\label{sec:impl}
The traditional object-oriented approach to implementing
first-class patterns is based on run-time compositions through
interfaces. This ``\emph{patterns as objects}'' approach has been
explored in several different languages~\cite{Visser06matchingobjects,geller2010pattern,FuncCSharp,Grace2012}.
Implementations differ in where bindings are stored and what is returned as a
result, but in its most basic form it consists of the
\code{pattern} interface with a virtual function \code{match} that accepts a subject
and returns whether it was accepted or rejected.
This approach is open to new patterns and pattern combinators, but a mismatch in the type of the subject and the
type accepted by the pattern can only be detected at run-time.
Furthermore, it implies significant run-time overhead (\textsection\ref{sec:patcmp}).
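As a concrete illustration of the paragraph above, the following minimal C++ sketch shows the "patterns as objects" approach in its basic form: a \code{pattern} interface with a virtual \code{match} that accepts a subject and reports acceptance or rejection. All names here (\code{object}, \code{boxed}, \code{value_pattern}) are invented for this sketch and are not Mach7's or any cited library's API; note how a subject of the wrong type is only rejected at run time.

```cpp
#include <cassert>

// Common subject base: patterns are composed and applied at run time.
struct object { virtual ~object() = default; };

// Hypothetical wrapper that boxes a value as a subject.
template <typename T>
struct boxed : object {
    T value;
    explicit boxed(T v) : value(v) {}
};

// The "pattern" interface: a virtual predicate on subjects.
struct pattern {
    virtual ~pattern() = default;
    virtual bool match(const object& subject) const = 0;
};

// A value pattern: accepts subjects boxing an equal value.
template <typename T>
struct value_pattern : pattern {
    T expected;
    explicit value_pattern(T v) : expected(v) {}
    bool match(const object& s) const override {
        // A subject of the wrong type is only detected here, at run time.
        auto* b = dynamic_cast<const boxed<T>*>(&s);
        return b && b->value == expected;
    }
};

// The wildcard pattern accepts any subject.
struct wildcard : pattern {
    bool match(const object&) const override { return true; }
};
```

The virtual call and the \code{dynamic_cast} in every match are the sources of the run-time overhead and late type checking the text refers to.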
%% While the approach is open to new patterns and pattern combinators (the patterns
%% are composed at run-time by holding references to other pattern objects), it has
%% some design problems. For example, mismatch in the type of the subject and the
%% type accepted by the pattern can only be detected at run-time, while in
%% languages with built-in support of pattern matching it is typically detected at
%% type-checking phase. The approach may also unnecessarily clutter the code by
%% requiring lots of similar boilerplate code be written. For example, modeling n+k
%% patterns requires additional interface for evaluating the \code{expression}.
%% With it, we have a dilemma of whether \code{expression} should be derived from
%% \code{pattern}, \code{pattern} from \code{expression}, or neither of those.
%% Independently of the choice, implementation of pattern combinators will require
%% that the class of the combinator conditionally derives from \code{pattern},
%% \code{expression} or both depending on which of these interfaces its arguments
%% implement. On one hand, this will require a separate implementation of the
%% combinator for each of the cases, while on the other it makes the combinators
%% dependent on something that was only needed to implement n+k patterns.
%To quantify the overhead somewhat, we reimplemented the factorial function from
%\textsection\ref{sec:cpppat} using object patterns and timed a million
%computations of factorial on arguments ranging from 0 to 10. Depending on the
%argument, the approach based on object patterns was 12-22 times slower than
%factorial based on \emph{Mach7}. Note that for this experiment we took extra care to
%not allocate patterns or intermediary objects on the heap, made sure the bodies
%of all virtual functions were also available for inlining since we composed
%objects on the stack and thus their complete types were known. We also used a
%faster \code{typeid}-check instead of a slower \code{dynamic_cast} to ensure the
%safety of unpacking an object. Finally, we repeated the experiment while removing
%the safety check altogether (assuming the argument will be of the correct
%dynamic type) and could reduce the overhead to 6.74-18 times, which is still too
%costly to be considered a viable solution for a modern \Cpp{} use. We show in
%\textsection\ref{sec:patcmp} that \emph{Mach7} patterns produce code that is only few
%percentage points slower than manualy handcrafted code without patterns.
\subsection{Patterns as Expression Templates}
\label{sec:pat}
Patterns in \emph{Mach7} are also represented as objects; however, they are
composed at compile time, based on \Cpp{} concepts.
\term{Concept} is the \Cpp{} community's long-established term for a set of
requirements for template parameters. Concepts were not included in \Cpp{}11,
but~techniques for emulating them with
\code{enable\_if}~\cite{jarvi:03:cuj_arbitrary_overloading} have been in use for
a while. \code{enable\_if} provides the ability to \emph{include} or
\emph{exclude} certain class or function declarations from the compiler's
consideration based on conditions defined by arbitrary metafunctions.
To~avoid the verbosity of \code{enable\_if}, in this work we use the notation
for \term{template constraints} -- a simpler version of concepts~\cite{N3580}.
The \emph{Mach7} implementation emulates these constraints.
There are two main constraints on which the entire library is built:
\code{Pattern} and \code{LazyExpression}.
\begin{lstlisting}[keepspaces]
template <typename P> constexpr bool Pattern() {
return Copyable<P> // P must also be Copyable
&& is_pattern<P>::value // this is a semantic constraint
&& requires (typename S, P p, S s) {// syntactic reqs:
bool = { p(s) }; // usable as a predicate on S
AcceptedType<P,S>; // has this type function
}; }
\end{lstlisting}
%It requires that any type \code{P} modeling \code{Pattern} concept must also
%model \code{Copyable} concept, be explicitly marked as pattern via
%\code{is_pattern} trait as well as be
\noindent
The \code{Pattern} constraint is the analog of the \code{pattern} interface from the
\emph{patterns as objects} solution. Objects of any class \code{P} satisfying
this constraint are patterns and can be composed with any other patterns in the
library as well as be used in the \code{Match} statement.
Patterns can be passed as arguments of a function, so they must be
\code{Copyable}. Implementation of pattern combinators requires the
library to overload certain operators on all the types satisfying the \code{Pattern}
constraint. To avoid overloading these operators for types that satisfy the
requirements accidentally, the \code{Pattern} constraint is a \emph{semantic constraint},
which means that classes claiming to satisfy it have to state that explicitly by specializing the
\code{is_pattern<P>} trait. The constraint also introduces some \emph{syntactic
requirements}, described by the \code{requires} clause. In particular, because
patterns are predicates on their subject type, they require presence of an
application operator that checks whether a pattern matches a given subject.
Unlike the \emph{patterns as objects} approach, the \code{Pattern} constraint does not impose
any restrictions on the subject type \code{S}. Patterns like the wildcard
pattern will leave the \code{S} type completely unrestricted, while other
patterns may require it to satisfy certain constraints, model a given concept,
inherit from a certain type, etc.
The application operator will typically return a value of type \code{bool}
indicating whether the pattern is \subterm{pattern}{accepted} on a given subject
or \subterm{pattern}{rejected}. %For convenience reasons,
%application operator is allowed to return any type that is convertible to
%\code{bool} instead, e.g. a pointer to a casted subject, which is useful in
%emulating the support of \subterm{pattern}{as-patterns}.
Most of the patterns are applicable only to subjects of a given \subterm{type}{expected type}
or types convertible to it. This is the case, for example, with value and
variable patterns, where the expected type is the type of the underlying value,
as well as with the constructor pattern, where the expected type is the type
being decomposed. Some patterns, however, do not have a single
expected type and may work with subjects of many unrelated types. A wildcard
pattern, for example, can accept values of any type without involving a
conversion. To account for this, the \code{Pattern} constraint requires the presence of
a type alias \code{AcceptedType}, which given a pattern of type \code{P} and
a subject of type \code{S} returns an expected type \code{AcceptedType<P,S>}
that will accept subjects of type \code{S} with no or a minimum of conversions.
By default, the alias is defined in terms of a nested type function
\code{accepted_type_for}, as follows:
\begin{lstlisting}
template<typename P, typename S>
using AcceptedType = P::accepted_type_for<S>::type;
\end{lstlisting}
\noindent
The wildcard pattern defines \code{accepted_type_for} to be an identity
function, while variable and value patterns define it to be their underlying
type. The constructor pattern's accepted type is the type it decomposes, which
is typically different from the subject type. \emph{Mach7} employs an efficient
type switch~\cite{TS12} under the hood to convert subject type to accepted type.
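The \code{accepted_type_for} mechanism described above can be sketched in plain C++ as a compile-time type function. The classes below are simplified stand-ins (not the actual Mach7 types); they only show how the wildcard maps a subject type to itself while a variable pattern maps any subject type to its underlying type.

```cpp
#include <cassert>
#include <type_traits>

// Stand-in wildcard: accepted type is the subject type itself (identity).
struct wildcard_t {
    template <typename S>
    struct accepted_type_for { using type = S; };
};

// Stand-in variable pattern: accepted type is the underlying type T,
// regardless of the subject type S.
template <typename T>
struct var_t {
    template <typename S>
    struct accepted_type_for { using type = T; };
};

// The alias from the paper, written with the typename/template
// disambiguation that real (pre-concepts) C++ requires.
template <typename P, typename S>
using AcceptedType = typename P::template accepted_type_for<S>::type;
```

Note that the paper's one-line definition \code{P::accepted_type_for<S>::type} relies on the concepts-lite notation; in standard C++ the \code{typename}/\code{template} keywords shown here are needed.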
Guards, n+k patterns, the equivalence combinator, and potentially some
new user-defined patterns depend on capturing the structure (term) of lazily-evaluated expressions.
All such expressions are objects of some type \code{E}
that must satisfy the \code{LazyExpression} constraint:
\begin{lstlisting}[keepspaces]
template <typename E> constexpr bool LazyExpression() {
return Copyable<E> // E must also be Copyable
&& is_expression<E>::value // this is semantic constraint
&& requires (E e) { // syntactic requirements:
ResultType<E>; // associated result_type
ResultType<E> == { eval(e) };// eval(E)->result_type
ResultType<E> { e }; // conversion to result_type
}; }
@\halfline@
template<typename E> using ResultType = E::result_type;
\end{lstlisting}
\noindent
The constraint is, again, semantic, and the classes claiming to satisfy it must
assert it through the \code{is_expression<E>} trait. The template alias \code{ResultType<E>}
is defined to return the expression's associated type \code{result_type}, which
defines the type of the result of a lazily-evaluated expression. Any class
satisfying the \code{LazyExpression} constraint must also provide an implementation
of the function \code{eval} that evaluates the result of the expression. Conversion
to the \code{result_type} should call \code{eval} on the object in order to
allow the use of lazily-evaluated expressions in the contexts where their
eagerly-evaluated value is expected, e.g. a non-pattern-matching context of the
right-hand side of the \code{Case} clause.
Our implementation of the variable pattern \code{var<T>} satisfies the
\code{Pattern} and \code{LazyExpression} constraints as follows:
\begin{lstlisting}[keepspaces]
template <Regular T> struct var {
template <typename>
struct accepted_type_for { typedef T type; };
bool operator()(const T& t) const // exact match
{ m_value = t; return true; }
template <Regular S>
bool operator()(const S& s) const // with conversion
{ m_value = s; return m_value == s; }
typedef T result_type; // type when used in expression
friend const result_type& eval(const var& v) // eager eval
{ return v.m_value; }
operator result_type() const { return eval(*this); }
mutable T m_value; // value bound during matching
};
@\halfline@
template<Regular T>struct is_pattern<var<T>>:true_type{};
template<Regular T>struct is_expression<var<T>>:true_type{};
\end{lstlisting}
%Each of our six pattern kinds implements the application operator according to
%the semantics presented in Figure~\ref{exprsem}. The application operator's
%result has to be convertible to bool; \code{true} indicates a successful match.
%A class might have several overloads of the above operator that distinguish
%cases of interest. We summarize the requirements on template parameters of each
%of our pattern in Figure~\ref{xt-reqs}.
%
%\begin{figure}[h]
%\centering
%\begin{tabular}{llll}
%{\bf Pattern} & {\bf Parameters} & {\bf Argument of application operator U} \\ \hline
%\code{wildcard} & -- & -- \\
%\code{value<T>} & \code{Regular<T>} & \code{Convertible<U,T>} \\
%\code{variable<T>} & \code{Regular<T>} & \code{Convertible<U,T>} \\
%\code{expr<F,E...>} & \code{LazyExpression<E>} & \code{Convertible<U,expr<F,E...>::result_type>} \\
%\code{guard<E1,E2>} & \code{LazyExpression<Ei>} & any type accepted by \code{E1::operator()} \\
%\code{ctor<T,E...>} & \code{Polymorphic<T>} & \code{Polymorphic<U>} for open encoding \\
% & \code{Object<T>} & \code{is_base_and_derived<U,T>} for tag encoding \\
%\end{tabular}
%\caption{Requirements on parameters and argument type of an application operator}
%\label{xt-reqs}
%\end{figure}
\noindent
For semantic or efficiency reasons a pattern may have several overloads
of the application operator. In the example, the first alternative is used when no
conversion is required; thus, the variable pattern is guaranteed to be accepted.
The second may involve a (possibly-narrowing) conversion, which is why we check
that the values compare as equal after assignment. Similarly, for type checking
reasons, \code{accepted_type_for} may (and typically will) provide several partial
or full specializations to limit the set of acceptable subjects. For example, the
\subterm{pattern combinator}{address combinator} can only be applied to subjects
of pointer types, so its implementation will report a compile-time error when
applied to any non-pointer type.
%Its implementation manifests this by deriving unrestricted case of the type function
%\code{accepted_type_for} from \code{invalid_subject_type<S>}. This will trigger
%a static assertion when its associated type \code{type} gets instantiated,
%resulting in a compile-time error that states that a given subject type \code{S}
%cannot be used as an argument of the address pattern. The second case of the
%type function indicates through partial specialization of class templates that
%for any subject of a pointer type \code{S*}, the accepted type is going to be a
%pointer to the type accepted by the argument pattern \code{P1} of the address
%combinator.
%
%\begin{lstlisting}
%template <Pattern P1>
%struct address
%{ // ...
% template <typename S>
% struct accepted_type_for : invalid_subject_type<S> {};
% template <typename S> struct accepted_type_for<S*> {
% typedef typename P1::template
% accepted_type_for<S>::type* type;
% };
% template <typename S>
% bool operator()(const S* s) const
% { return s && m_p1(*s); }
% P1 m_p1;
%};
%\end{lstlisting}
%
%\noindent
%Checking whether a given subject type can be accepted is inherently late and
%happens at instantiation time of the nested \code{accepted_type_for} type
%function and possibly parameterized application operator. For this reason,
%pattern's implementation may have to provide a set of overloads of the
%application operator that will be able to accept all possible outcomes of
%\code{accepted_type_for<S>::type} on any valid subject type \code{S}.
To capture the structure of an expression, the library employs a commonly-used
technique called ``expression templates''~\cite{Veldhuizen95expressiontemplates,
vandevoorde2003c++}. %It captures the structure of expression through the type,
%which for binary addition may look as following:
%
%\begin{lstlisting}[keepspaces,columns=flexible]
%template <LazyExpression E1, LazyExpression E2>
%struct plus {
% E1 m_e1; E2 m_e2; // subexpressions
% plus(const E1& e1, const E2& e2) : m_e1(e1), m_e2(e2) {}
% typedef decltype(std::declval<E1::result_type>()
% + std::declval<E2::result_type>()
% ) result_type; // type of result
% friend result_type eval(const plus& e)
% { return eval(e.m_e1) + eval(e.m_e2); }
% friend plus operator+(const E1& e1, const E2& e2)
% { return plus(e1,e2); }
%};
%\end{lstlisting}
%
%\noindent
%The user of the library never sees this definition, instead she implicitly
%creates its objects with the help of overloaded \code{operator+} on any
%\code{LazyExpression} arguments. The type itself models the \code{LazyExpression}
%concept as well so that the lazy expressions can be composed. Notice that all
%the requirements of the concept are implemented in terms of the requirements
%on the types of the arguments. The key point to the efficiency of expression
%templates is that all the types in the final expression are known at compile
%time, while all the function calls are trivial and fully inlined. Use of new
%\Cpp{}11 features like move constructors and perfect forwarding allows us to
%ensure further that no temporary objects will ever be created at run-time and
%that the evaluation of the expression template will be as efficient as a hand
%coded function.
%
In general, an \term{expression template} is an algebraic structure $\langle \Sigma_\zeta,\{f_1,f_2,...\}\rangle$
defined over the set $\Sigma_\zeta = \{\tau~|~\tau \models \zeta\}$ of all the types $\tau$
modeling a given concept $\zeta$. The operations $f_i$ allow one to compose new types
modeling the concept $\zeta$ out of existing types. In this sense, the types of all lazy
expressions in \emph{Mach7} stem from a set of a few (possibly-parameterized) basic
types like \code{var<T>} and \code{value<T>} (which both model \code{LazyExpression})
by applying type functors like \code{plus} and \code{minus} to them. Every type
in the resulting family then has a function \code{eval} defined on it that
returns a value of the associated type \code{result_type}. Similarly, the types
of all the patterns stem from a set of a few (possibly-parameterized) patterns like
\code{wildcard}, \code{var<T>}, \code{value<T>}, \code{C<T>} etc. by applying to
them pattern combinators such as \code{conjunction}, \code{disjunction},
\code{equivalence}, \code{address} etc. The user is allowed to extend both
algebras with either basic expressions and patterns or with functors and combinators.
The sets $\Sigma_{LazyExpression}$ and $\Sigma_{Pattern}$ have a non-empty intersection, which
slightly complicates matters. The basic types \code{var<T>} and \code{value<T>}
belong to both of those sets, and so do some of the combinators, e.g.
\code{conjunction}. Since we can only have one overloaded \code{operator&&} for
a given combination of argument types, we have to state conditionally whether the
requirements of \code{Pattern}, \code{LazyExpression}, or both are satisfied in a
given instantiation of \code{conjunction<T1,T2>}, depending on what combination
of these concepts the argument types \code{T1} and \code{T2} model. Concepts,
unlike interfaces, allow modeling such behavior without multiplying
implementations or introducing dependencies.
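A runnable sketch of this expression-template algebra, reduced to one basic type and one functor, is shown below. It is simplified from the paper's description (the names \code{value_e} and \code{plus_e} are invented for this sketch, not Mach7's): \code{value_e<T>} is a basic type modeling \code{LazyExpression}, and \code{plus_e} composes two such types into a new one, with \code{eval} and \code{result_type} defined in terms of the arguments' requirements.

```cpp
#include <cassert>
#include <utility>

// Basic lazy expression: wraps a value; eval() returns it.
template <typename T>
struct value_e {
    T v;
    using result_type = T;
    friend result_type eval(const value_e& e) { return e.v; }
};

// Functor composing two lazy expressions into a new lazy expression.
// All types are known at compile time; eval() is trivially inlinable.
template <typename E1, typename E2>
struct plus_e {
    E1 e1; E2 e2;
    using result_type =
        decltype(eval(std::declval<E1>()) + eval(std::declval<E2>()));
    friend result_type eval(const plus_e& e) { return eval(e.e1) + eval(e.e2); }
};

// operator+ builds the term; SFINAE on E1::result_type keeps the
// overload from applying to unrelated types.
template <typename E1, typename E2, typename = typename E1::result_type>
plus_e<E1, E2> operator+(const E1& a, const E2& b) { return {a, b}; }
```

Composing \code{value_e} objects with \code{operator+} yields new types in the family, each again providing \code{eval} and \code{result_type}, which is exactly the closure property the algebraic description above demands.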
\subsection{Structural Decomposition}
\label{sec:bnd}
\emph{Mach7}'s constructor pattern \code{C<T>(P1,...,Pn)} requires the
library to know which member of class \code{T} should be used as the subject to
$P_1$, which should be matched against $P_2$, etc. In functional languages
supporting algebraic data types, such decomposition is unambiguous as each
variant has only one constructor, which is thus also used as a \subterm{constructor}{deconstructor}~\cite{padl08,Thorn2012} to define the
decomposition of that type through pattern matching. In \Cpp{}, a class may have
several constructors, so we must be explicit about a class' decomposition.
We specify that by specializing the library template class \code{bindings}.
Here are the definitions that are required in order to be able to decompose the
lambda terms we introduced in \textsection\ref{sec:cpppat}:
\begin{lstlisting}
template<>class bindings<Var>{Members(Var::name);};
template<>class bindings<Abs>{Members(Abs::var,Abs::body);};
template<>class bindings<App>{Members(App::func,App::arg);};
\end{lstlisting}
\noindent
The variadic macro \code{Members} simply expands each of its arguments into the
following definition, demonstrated here on \code{App::func}:
\begin{lstlisting}
static decltype(&App::func) member1(){return &App::func;}
\end{lstlisting}
\noindent
Each such function returns a pointer-to-member that should be bound in
position $i$. The library applies them to the subject in order to obtain
subjects for the sub-patterns $P_1,...,P_n$.
%Calls to these functions get inlined so that the code to access a member in a
%given position becomes equivalent to the code to access that member directly.
Note that binding definitions made this way
are \emph{non-intrusive} since the original class definition is not touched.
The binding definitions also respect \emph{encapsulation} since only the public members of the
target type will be accessible from within a specialization of \code{bindings}.
Members do not have to be data members only, which can be inaccessible, but any
of the following three categories:
\begin{compactitem}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item a data member of the target type $T$
\item a nullary member function of the target type $T$
\item a unary external function taking the target type $T$ by pointer, reference, or value.
\end{compactitem}
\noindent
Unfortunately, \Cpp{} does not yet provide sufficient compile-time
introspection capabilities to let the library generate \code{bindings}
implicitly. These \code{bindings}, however, only need to be written once for
a given class hierarchy (e.g. by its designer) and can be reused everywhere.
This is also true for parameterized classes (\textsection\ref{sec:view}).
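The pointer-to-member machinery behind \code{Members} can be illustrated with a self-contained sketch (the \code{bindings_App} and \code{subject1} names are invented here; Mach7 itself generates the equivalent via its \code{bindings} template and macro): \code{member1()} returns a pointer-to-member, and applying it to a subject with \code{.*} yields the sub-subject for $P_1$.

```cpp
#include <cassert>
#include <string>

// Target type, mirroring the App term from the paper's lambda-term example.
struct App { std::string func, arg; };

// Hand-written stand-in for the specialization the Members macro expands to.
struct bindings_App {
    static auto member1() { return &App::func; }  // bound in position 1
    static auto member2() { return &App::arg; }   // bound in position 2
};

// What the library does internally: apply the pointer-to-member to the
// subject to obtain the sub-subject for the corresponding sub-pattern.
inline const std::string& subject1(const App& s) {
    return s.*bindings_App::member1();
}
```

Because these calls are trivially inlinable, accessing a member through its position compiles down to a direct member access, which is why the scheme carries no run-time cost.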
\subsection{Algebraic Decomposition}
\label{sec:slv}
Traditional approaches to generalizing n+k patterns treat matching a pattern
$f(x,y)$ against a value $r$ as solving an equation $f(x,y)=r$~\cite{OosterhofThesis}.
This interpretation is well-defined when there are zero or one solutions,
but alternative interpretations are possible when there are multiple solutions.
Instead of discussing which interpretation is the most general or appropriate,
we look at n+k patterns as a \term{notational decomposition} of
mathematical objects. The~elements of the notation are associated with
sub-components of the matched mathematical entity, which effectively lets us
decompose it into parts. The structure of the expression tree used in the notation
is an analog of a constructor symbol in structural decomposition, while its
leaves are placeholders for parameters to be matched against or inferred from
the mathematical object in question. In~essence, \term{algebraic decomposition}
is to mathematical objects what structural decomposition is to algebraic data
types. While the analogy is somewhat ad-hoc, it resembles the situation with
operator overloading: you do not strictly need it, but it is so %syntactically
convenient it is virtually impossible not to have it. We demonstrate this
alternative interpretation of the n+k patterns with examples.
\begin{compactitem}
\setlength{\itemsep}{0pt}
\setlength{\parskip}{0pt}
\item An expression $n/m$ is often used to decompose a rational number into
numerator and denominator.
\item An expression of the form $3q + r$ can be used to obtain the quotient and
remainder of dividing by 3. When $r$ is a constant, it can also be used to
check membership in a congruence class.
\item The Euler notation $a+bi$, with $i$ being the imaginary unit, is used to
decompose a complex number into real and imaginary parts. Similarly,
expressions $r(\cos \phi + i\sin \phi)$ and $re^{i\phi}$ are used to
decompose it into polar form.
\item A 2D line can be decomposed with the slope-intercept form $mX+c$, the
linear equation form $aX+bY=c$, or the two-points form $(Y-y_0)(x_1-x_0)=(y_1-y_0)(X-x_0)$.
\item An object representing a polynomial can be decomposed for a specific degree:
$a_0$, $a_1X^1+a_0$, $a_2X^2+a_1X^1+a_0$, etc.
\item An element of a vector space can be decomposed along some sub-spaces of
interest. For example a 2D vector can be matched against $(0,0)$, $aX$,
$bY$, or $aX+bY$ to separate the general case from cases when one or both
components of the vector are $0$.
\end{compactitem}
\noindent
The expressions $i$, $X$, and $Y$ in those examples are not variables, but rather
named constants of some dedicated type that allows the expression to be
generically decomposed into orthogonal parts.
The linear equation and two-point forms for decomposing lines already include
an equality sign, so it is hard to give them semantics in an equational
approach. In our library that equality sign is not different from any other
operator, like $+$ or $*$, and is only used to capture the structure of the
expression, while the exact semantics of matching against that expression is
given by the user. This~flexibility allows us to generically encode many of the interesting cases
of the equational approach. The following example, written with use of
\emph{Mach7}, defines a function for fast computation of Fibonacci numbers by using
generalized n+k patterns:
\begin{lstlisting}[keepspaces]
int fib(int n) {
var<int> mm;
Match(n) {
Case(any({1,2})) return 1;
Case(2*mm) return sqr(fib(mm+1)) - sqr(fib(mm-1));
Case(2*mm+1) return sqr(fib(mm+1)) + sqr(fib(mm));
} EndMatch // sqr(x) = x*x
}
\end{lstlisting}
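The matching semantics of the two n+k cases above can be rendered in plain C++ without pattern matching, which makes the binding of \code{mm} explicit (this is a sketch of the same algorithm, not Mach7 code; like the paper's version it assumes $n \geq 1$ and uses the identities $F_{2m}=F_{m+1}^2-F_{m-1}^2$ and $F_{2m+1}=F_{m+1}^2+F_m^2$):

```cpp
#include <cassert>

inline int sqr(int x) { return x * x; }   // sqr(x) = x*x, as in the paper

inline int fib(int n) {                   // assumes n >= 1
    if (n == 1 || n == 2) return 1;       // Case(any({1,2}))
    int m = n / 2;                        // the value bound to mm
    if (n % 2 == 0)                       // Case(2*mm):   n == 2*m
        return sqr(fib(m + 1)) - sqr(fib(m - 1));
    else                                  // Case(2*mm+1): n == 2*m + 1
        return sqr(fib(m + 1)) + sqr(fib(m));
}
```

Matching $n$ against \code{2*mm} thus amounts to checking divisibility and binding \code{mm} to $n/2$, precisely the behaviour of the integral \code{solve} overloads discussed later in this section.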
%Applying equational approach to floating-point arithmetic creates even more
%problems. Even when the solution is unique, it may not be representable by
%a given floating-point type and thus not satisfy the equation. Once we settle
%for an approximation, we open ourselves to even more decompositions that become
%possible with our approach.
%
%\begin{compactitem}
%\setlength{\itemsep}{0pt}
%\setlength{\parskip}{0pt}
%\item Matching $n/m$ with integer variables $n$ and $m$ against a floating-point
% value can be given semantics of finding the closest fraction to the
% value.
%\item Matching an object representing sampling of some random variable against
% expressions like $Gaussian(\mu,\sigma^2)$, $Poisson(\lambda)$ or
% $Binomial(n,p)$ can be seen as distribution fitting.
%\item Any curve fitting in this sense becomes an application of pattern
% matching. Precision in this case can be a global constant or explicitly
% passed parameter of the matching expression.
%\end{compactitem}
%\noindent
%We can make several observations from these examples:
%\begin{compactitem}
%\setlength{\itemsep}{0pt}
%\setlength{\parskip}{0pt}
%\item We might need to have the entire expression available to us in order to
% decompose its parts.
%\item Matching the same expression can have different meanings depending on
% types of objects composing the expression and the expected result.
%\item An algorithm to decompose a given expression may depend on the types of
% objects in it and the type of the result.
%\end{compactitem}
%\subsubsection{Solvers}
\noindent
The \emph{Mach7} library already takes care of capturing the structure of lazy expressions
(i.e. terms). To implement the semantics of their matching, the \emph{Mach7} user (i.e. the designer
of a concrete notation) writes a new function overload to define the semantics of decomposing a value of a given
type \code{S} against a term \code{E}:
\begin{lstlisting}[keepspaces]
template <LazyExpression E, typename S>
bool solve(const E&, const S&);
\end{lstlisting}
\noindent
The first argument of the function takes an expression template representing a
term we are matching against, while the second argument represents the expected
result. Note that even though the first argument is passed with a \code{const}
qualifier, the call may still modify state inside \code{E}. For example, when \code{E} is
\code{var<T>}, the const-qualified application operator that is eventually
called updates the mutable member \code{m_value}.
%
The following example defines a generic solver for an expression of the form
$e=e_1*c$, i.e.\ multiplication of a sub-expression by a constant $c \neq 0$.
\begin{lstlisting}[keepspaces]
template <LazyExpression E, typename T>
requires Field<E::result_type>()
bool solve(const mult<E,value<T>>&e,const E::result_type&r)
{ return solve(e.m_e1,r/eval(e.m_e2)); } // e.m_e2 is @$c$@
@\halfline@
template <LazyExpression E, typename T>
requires Integral<E::result_type>()
bool solve(const mult<E,value<T>>&e,const E::result_type&r){
T c = eval(e.m_e2); // e.m_e2 is @$c$@
return r%c == 0 && solve(e.m_e1,r/c);
}
\end{lstlisting}
\noindent
Intuitively, matching $e_1*c$ against the value $r$ in the equational approach means
solving $e_1*c=r$, which means that we should try matching the sub-expression
$e_1$ against $\frac{r}{c}$.
The first overload is only applicable when the result type of the
sub-expression models the \code{Field} concept. In this case, we can rely on the
presence of a unique inverse and simply call division without any additional
checks. The second overload uses integer division, which does not guarantee the
unique inverse, and thus we have to verify that the result is divisible by the
constant first. This~last overload combined with a similar solver for addition
of integral types is everything the library needs to support the \code{fib} example.
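To make these mechanics concrete, the following self-contained sketch models the integral solvers for multiplication and addition outside of \emph{Mach7}. The simplified \code{var}, \code{mult} and \code{plus} types below are illustrative stand-ins, not the library's actual expression templates:
\begin{lstlisting}[keepspaces]
#include <cassert>

struct var { int value = 0; };                     // binding variable
template <typename E> struct mult { E e; int c; }; // e * c
template <typename E> struct plus { E e; int c; }; // e + c

// Matching a bare variable always succeeds and binds the value.
bool solve(var& v, int r) { v.value = r; return true; }

// Matching e*c against r over the integers: r must be divisible by c.
template <typename E>
bool solve(mult<E>& m, int r)
{ return m.c != 0 && r % m.c == 0 && solve(m.e, r / m.c); }

// Matching e+c against r: match e against r-c.
template <typename E>
bool solve(plus<E>& p, int r) { return solve(p.e, r - p.c); }

int main() {
  var m;
  mult<var&> twice{m, 2};                  // the pattern 2*m
  assert(solve(twice, 8) && m.value == 4); // 8 = 2*4
  assert(!solve(twice, 7));                // 7 is odd: no match
  plus<mult<var&>> odd{{m, 2}, 1};         // the pattern 2*m+1
  assert(solve(odd, 7) && m.value == 3);   // 7 = 2*3+1
}
\end{lstlisting}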
%definition of the \code{fib} function from \textsection\ref{sec:cpppat}. This
%demonstrates how an equational approach can be generically implemented for a
%number of expressions.
%A generic solver capable of decomposing a complex value using the Euler
%notation is very easy to define by fixing the structure of expression:
%
%\begin{lstlisting}[keepspaces]
%template <LazyExpression E1, LazyExpression E2>
% requires SameType<E1::result_type,E2::result_type>()
%bool solve(
% const plus<mult<E1,value<complex<E1::result_type>>>,E2>& e,
% const complex<E1::result_type>& r);
%\end{lstlisting}
%
%\noindent
%As we mentioned in \textsection\ref{sec:cpppat}, the template facilities of
%\Cpp{} resemble pattern-matching facilities of other languages. Here, we
%essentially use these compile-time patterns to describe the structure of the
%expression this solver is applicable to: $e_1*c+e_2$ with types of $e_1$ and
%$e_2$ being the same as type on which a complex value $c$ is defined. The actual
%value of the complex constant $c$ will not be known until run-time, but assuming
%its imaginary part is not $0$, we will be able to generically obtain the values
%for sub-expressions.
%% Our approach is largely possible due to the fact that the library only serves as
%% an interface between expressions and functions defining their semantics and
%% algebraic decomposition. The fact that the user explicitly defines the variables
%% she would like to use in patterns is also a key as it lets us specialize not
%% only on the structure of the expression, but also on the types involved.
%% Inference of such types in functional languages would be hard or impossible as the
%% expression may have entirely different semantics depending on the types of
%% arguments involved. Concept-based overloading simplifies significantly the case
%% analysis on the properties of types, making the solvers generic and composable.
%% The approach is also viable as expressions are decomposed at compile-time and
%% not at run-time, letting the compiler inline the entire composition of solvers.
%An obvious disadvantage of this approach is that the more complex expression
%becomes, the more overloads the user will have to provide to cover all
%expressions of interest. The set of overloads will also have to be made
%unambiguous for any given expression, which may be challenging for novices. An
%important restriction of this approach is its inability to detect multiple uses
%of the same variable in an expression at compile time. This happens because
%expression templates remember the form of an expression in a type, so use of two
%variables of the same type is indistinguishable from the use of the same
%variable twice. This can be worked around by giving different variables
%(slightly) different types or making additional checks as to the structure of
%expression at run-time, but that will make the library even more verbose or
%incur a significant run-time overhead.
\subsection{Views}
\label{sec:view}
Any type $T$ may have an arbitrary number of \term{binding}s associated with it,
which are specified by varying the second parameter of the \code{bindings}
template: \term{layout}. The layout is a non-type template parameter of
integral type; the layout parameter has a default value and is thus omitted most of the time.
Our library's support of multiple bindings (through layouts) effectively enables
a facility similar to Wadler's \subterm{pattern}{views}\cite{Wadler87}. Consider:
\begin{lstlisting}[keepspaces]
enum { cartesian = default_layout, polar }; // Layouts
@\halfline@
template <class T> struct bindings<std::complex<T>>
{ Members(std::real<T>,std::imag<T>); };
template <class T> struct bindings<std::complex<T>, polar>
{ Members(std::abs<T>,std::arg<T>); };
@\halfline@
template <class T> using Cart = view<std::complex<T>>;
template <class T> using Pole = view<std::complex<T>,polar>;
@\halfline@
std::complex<double> c; double a,b,r,f;
Match(c)
Case(Cart<double>(a,b)) ... // default layout
Case(Pole<double>(r,f)) ... // view for polar layout
EndMatch
\end{lstlisting}
\noindent
The \Cpp{} standard effectively forces the standard library to use the Cartesian
representation~\cite[\textsection26.4-4]{C++11}, which is why we chose the
\code{Cart} layout as the default. We then define bindings for each
layout and introduce template aliases (an analog of typedefs for parameterized
classes) for each view. The \emph{Mach7} class \code{view<T,l>} binds a
target type with one of that type's layouts. \code{view<T,l>} can be used everywhere the
original target type \code{T} was expected.
The important difference from Wadler's solution is that our views can only be
used in a pattern-matching context, not as constructors or as arguments to functions.
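The two layouts correspond to ordinary decompositions of the same value. The following plain \Cpp{} snippet (no \emph{Mach7} involved) shows the components each view exposes:
\begin{lstlisting}[keepspaces]
#include <cassert>
#include <cmath>
#include <complex>

int main() {
  std::complex<double> c(3.0, 4.0);
  // Cartesian layout: what bindings<complex<T>> exposes.
  double a = std::real(c), b = std::imag(c);
  assert(a == 3.0 && b == 4.0);
  // Polar layout: what bindings<complex<T>, polar> exposes.
  double r = std::abs(c), f = std::arg(c);
  assert(std::fabs(r - 5.0) < 1e-12);
  assert(std::fabs(f - std::atan2(4.0, 3.0)) < 1e-12);
}
\end{lstlisting}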
\subsection{Match Statement}
\label{sec:matchstmt}
In functional languages with built-in pattern matching, \emph{relational
matching} on multiple subjects is usually reduced to \emph{nested matching} on a
single subject by wrapping multiple arguments into a tuple. In a library setting,
we are able to provide a more efficient implementation if we keep the arguments
separated. This is why our \code{Match} statement extends the efficient type
switch for \Cpp{}~\cite{TS12} to handle multiple subjects (both polymorphic and
non-polymorphic) (\textsection\ref{sec:multiarg}) and to accept patterns in case
clauses (\textsection\ref{sec:patcases}).
%The first extension enables efficient \emph{relational matching}, while the second enables \emph{nesting of patterns}.
\subsubsection{Multi-argument Type Switching}
\label{sec:multiarg}
The core of our efficient type switch~\cite{TS12} is based on the fact that
virtual table pointers (vtbl-pointers) uniquely identify subobjects
in the object and are perfect for hashing. Open type switch maps these
vtbl-pointers to jump targets and necessary this-pointer offsets and provides an
amortized constant-time dispatch to the appropriate case clause. Its
efficiency relies on the optimal hash function $H_{kl}^V$ built for a set of
vtbl-pointers $V$ seen by a type switch. It is chosen by varying the parameters $k$
and $l$ to minimize the probability of conflict. The parameter $k$ represents the
logarithm of the size of cache, while the parameter $l$ is the number of
low bits to ignore.
%We considered two different approaches to extending that solution to $N$
%arguments. The first approach was based on maintaining an $N$-dimensional
%table indexed by independent $H_{k_il_i}^{V_i}$ maintained for each of the
%arguments $i$. The second approach was to aggregate the information from
%multiple vtbl-pointers into a single hash in a hope the hashing would still
%maintain its favorable properties. The first approach requires amount of memory
%proportional to $O(|V|^N)$ regardless of how many different combinations of
%vtbl-pointers came through the statement. The second approach requires the
%amount of memory linear in the number of vtbl-pointer combinations seen, which
%in the worst case becomes the same $O(|V|^N)$. The first approach requires
%lookup in $N$ caches, with each lookup being a subject to potential collisions;
%the second approach requires non-trivial computations to aggregate $N$
%vtbl-pointers into a single hash value and may result in potentially more
%collisions in comparison to the first approach. Our experience of dealing with
%multiple dispatch in \Cpp{} suggests that we rarely see all combinations of
%types coming through a given multi-method in real-world applications. With this
%in mind, we did not expect all combination of types come through a given
%\code{Match} statement and thus preferred the second solution, which grows
%linearly in memory with the number of combinations seen.
A \emph{Morton order} (aka \emph{Z-order}) is a function that
maps multidimensional data to one dimension while preserving the locality of the
data points~\cite{Morton66}. A Morton number of an $N$-dimensional coordinate
point is obtained by interleaving the binary representations of all coordinates.
The original one-dimensional hash function $H_{kl}^V$ applied to arguments $v \in V$
produced hash values in a tight range $[0..2^k[$ where $k \in [K,K+1]$ for
$2^{K-1} < |V| \leq 2^K$. The produced values were close to each other, which
improved the cache hit rate due to increased locality of reference. The
idea is thus to use Morton order on these hash values -- not on the original
vtbl-pointers -- in order to preserve locality of reference. To do this, we
retain a single parameter $k$ reflecting the size of the cache, but we
keep $N$ optimal offsets $l_i$ for each argument $i$.
Consider a set $V^N = \{\tpl{v_1^1,...,v_1^N},...,\tpl{v_n^1,...,v_n^N}\}$ of
$N$-dimensional tuples representing the set of vtbl-pointer combinations coming
through a given \code{Match} statement. As with the one-dimensional case, we
restrict the size $2^k$ of the cache to be not larger than twice the closest
power of two greater or equal to $n=|V^N|$: i.e. $k \in [K,K+1]$, where
$2^{K-1} < |V^N| \leq 2^K$. For a given $k$ and offsets $l_1,...,l_N$ a hash
value of a given combination $\tpl{v^1,...,v^N}$ is defined as
$H_{kl_1...l_N}(\tpl{v^1,...,v^N})=\mu(\frac{v^1}{2^{l_1}},...,\frac{v^N}{2^{l_N}}) \mod 2^k$,
where the function $\mu$ returns the Morton number (bit interleaving) of $N$ numbers.
As in the one-dimensional case, we vary the parameters $k$,$l_1$,$...$,$l_N$ in
their finite and small domains to obtain an optimal hash function
$H^{V^N}_{kl_1...l_N}$ by minimizing the probability of conflict on values from
$V^N$. Unlike the one-dimensional case, we do not try to find the optimal
parameters every time we reconfigure the cache. Instead, we only try to improve
the parameters to render fewer conflicts in comparison to the number of conflicts
rendered by the current configuration. This does not prevent us from eventually
converging to the same optimal parameters, which we do over time, but is
important for holding constant the amortized complexity of the access.
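For concreteness, the bit-interleaving function $\mu$ for $N=2$ can be sketched as follows; this is an illustrative implementation, not the library's actual code:
\begin{lstlisting}[keepspaces]
#include <cassert>
#include <cstdint>

// Interleave the low 16 bits of x and y into a 32-bit Morton
// number: bit i of x goes to bit 2i, bit i of y to bit 2i+1.
uint32_t morton2(uint16_t x, uint16_t y) {
  uint32_t z = 0;
  for (int i = 0; i < 16; ++i) {
    z |= ((uint32_t)((x >> i) & 1u)) << (2 * i);
    z |= ((uint32_t)((y >> i) & 1u)) << (2 * i + 1);
  }
  return z;
}

int main() {
  assert(morton2(1, 0) == 1u);  // x bit 0 -> result bit 0
  assert(morton2(0, 1) == 2u);  // y bit 0 -> result bit 1
  assert(morton2(3, 3) == 15u); // bits interleave to 0b1111
  // The hash truncates the interleaved value to the cache size 2^k:
  assert((morton2(5, 2) & 15u) == morton2(5, 2) % 16u);
}
\end{lstlisting}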
%Observe that the domain of each parameter of the optimal hash function
%$H^{V^N}_{kl_1...l_N}$ only grows since $V^N$ only grows, while any cache
%configuration is also a valid cache configuration in a larger cache, rendering
%the same number of conflicts.
We demonstrate in \textsection\ref{sec:morton} that -- similarly to the one-dimensional
case -- such a hash function produces few collisions on real-world class
hierarchies, and yet it is simple enough to compute that it competes well with alternatives
that can cope with relational matching.
%In practice, the library does not consider all $N$ arguments of a given
%\code{Match} statement, but only the $M$ polymorphic arguments ($M \leq N$). It
%then builds an efficient type switch based on those $M$ arguments. The type
%switch guarantees efficient dispatch to the first case clause that can possibly
%handle a given combination of arguments based on the subset of only polymorphic
%arguments. The patterns are then tried sequentially. The underlying type switch
%uses pattern's type-function \code{accepted_type_for<Si>} instantiated with the
%subject type $Si$ of a given argument $i$ in order to obtain the target type
%requested by the pattern in that position.
\subsubsection{Support for Patterns}
\label{sec:patcases}
Given a statement \code{Match(e_1,...,e_N)} applied to arbitrary expressions $e_i$, the library introduces several
names into the scope of the statement: e.g. the number of arguments $N$, the subject
types \code{subject_type_ii} (defined as \code{decltype(e_ii)} modulo type
qualifiers), and the number of polymorphic arguments $M$. When $M > 0$ it also
introduces the necessary data structures to implement efficient type
switching~\cite{TS12}. Only the $M$ arguments whose \code{subject_type_ii} are
polymorphic will be used for fast type switching.
For each case clause \code{Case(p_1,...,p_N)} the library ensures that the
number of arguments to the case clause $N$ matches the number of arguments to
the \code{Match} statement, and that the type \code{P_ii} of every expression
\code{p_ii} passed as its argument models the \code{Pattern} concept.
%Initially we allowed case clauses to accept less than $N$ patterns, assuming the
%missing patterns to be the wildcard, however, brittleness of the macro system
%made us reconsider this. The problem is that macro system is blind to \Cpp{}
%syntax and template instantiation like \code{A<B,C>} used in a pattern will be
%treated by the preprocessor as 2 macro arguments. This resulted in errors that
%were hard for the users to comprehend.
For each \code{subject_type_ii} it introduces \code{target_type_ii} --
the result of evaluating the type function \code{AcceptedType<P_ii,subject_type_ii>} --
into the scope of the case clause.
This is the type the pattern
expects as an argument on a subject of type \code{subject_type_ii} (\textsection\ref{sec:pat}),
which is used by the type switching mechanism to properly cast the subject if necessary.
The library then introduces the names \code{match_ii} of type \code{target_type_ii&}
bound to properly casted subjects and available to the user in the right-hand
side of the case clause in the event of a successful match. The qualifiers applied to
the type of \code{match_ii} reflect the qualifiers applied to the type of the subject
\code{e_ii}. Finally, the library generates code that sequentially applies
each pattern to properly-casted subjects, making the clause's body conditional:
\begin{lstlisting}
if (p_1(match_1) && ... && p_N(match_N)) { /* body */ }
\end{lstlisting}
\noindent
When type switching is not involved, the generated code implements the na\"ive
backtracking strategy, which is known to be inefficient as it can produce
redundant computations~\cite[\textsection 5]{Cardelli84}. More-efficient
algorithms for compiling pattern matching have been developed
since~\cite{Augustsson85,Maranget92,Puel93,OPM01,Maranget08}. Unfortunately, while these
algorithms cover most of the typical kinds of patterns, they are not pattern-agnostic
as they make assumptions about the semantics of concrete patterns. A library-based
approach to pattern matching is agnostic of the semantics of any given
user-defined pattern. The interesting research question in this context would
be: what language support is required to be able to optimize open patterns?
%While we do not address this question in its generality, our solution makes a
%small step in that direction.
The main advantage of using pattern matching in \emph{Mach7} comes from the fast type
switching weaved into the \code{Match} statement. It effectively skips case
clauses that will definitely be rejected because their target type is not one
of the subject's dynamic types. Of course, this is only applicable to polymorphic
arguments; for non-polymorphic arguments, the matching is done na\"ively with a
cascade of conditional statements.
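Absent the fast type switch, matching degenerates to a cascade of conditionals tried in order, conceptually similar to the hand-written fallback below; this illustrates the na\"ive strategy only, not code the library actually emits:
\begin{lstlisting}[keepspaces]
#include <cassert>

struct Shape  { virtual ~Shape() = default; };
struct Circle : Shape { double r; Circle(double r) : r(r) {} };
struct Square : Shape { double s; Square(double s) : s(s) {} };

// Conditionals tried sequentially, as a naive compilation of
// two case clauses on a polymorphic subject would produce.
double area(const Shape& sh) {
  if (auto* c = dynamic_cast<const Circle*>(&sh))
    return 3.14159265 * c->r * c->r;
  if (auto* q = dynamic_cast<const Square*>(&sh))
    return q->s * q->s;
  return 0.0; // no clause matched
}

int main() {
  Circle c(1.0); Square q(2.0);
  assert(area(q) == 4.0);
  assert(area(c) > 3.14 && area(c) < 3.15);
}
\end{lstlisting}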
\subsubsection{Installation}
\begin{enumerate}
\item Download
\opt{iriverh10}{\url{http://download.rockbox.org/bootloader/iriver/H10_20GC.mi4}}
\opt{iriverh10_5gb}{
\begin{itemize}
\item \url{http://download.rockbox.org/bootloader/iriver/H10.mi4} if your \dap{} is UMS or
\item \url{http://download.rockbox.org/bootloader/iriver/H10_5GB-MTP/H10.mi4} if it is MTP.
\end{itemize}}
\item Connect your \playertype{} to the computer using UMS mode and the UMS trick%
\opt{iriverh10_5gb}{ if necessary}.
\item Rename the \opt{iriverh10}{\fname{H10\_20GC.mi4}}\opt{iriverh10_5gb}{\fname{H10.mi4}}
file to \fname{OF.mi4} in the \fname{System} directory on your \playertype{}.
\opt{iriverh10_5gb}{\note{If you have a Pure model \playertype{} (which
does not have an FM radio) it is possible that this file will be
called \fname{H10EMP.mi4} instead. If so, rename the \fname{H10.mi4}
you downloaded in step 1 to \fname{H10EMP.mi4}.}}
\note{You should keep a safe backup of this file for use if you ever wish
to switch back to the \playerman{} firmware.}
\note{If you cannot see the \fname{System} directory, you will need to make
sure your operating system is configured to show hidden files and
directories.}
\item Copy the \opt{iriverh10}{\fname{H10\_20GC.mi4}}\opt{iriverh10_5gb}{\fname{H10.mi4}
(or \fname{H10EMP.mi4} if you have a \playertype{} Pure)} file you
downloaded to the System directory on your \dap{}.
\end{enumerate}
\section{Related Work}
\label{sec:related}
AODV\cite{perkins2003ad} and DSR\cite{johnson2007rfc} are
traditional wireless protocols that allow any-to-any communication,
but they were designed for 802.11 and either require too much routing
state or impose considerable overhead on the packet header.
%Our approach differ from traditional routing protocols by enabling any-to-any routes with low cost of states and overheads, also MATRIX provides by default IPv6 address allocation.
In the context of low-power and lossy networks, CTP\cite{Fonseca:2009} and
CodeDrip\cite{junior2014codedrip} were designed for bottom-up and
top-down data flows, respectively. They support communication in only one
direction.
State-of-the-art routing protocols for 6LoWPAN that enable
any-to-any communication are RPL\cite{rfc6550}, XCTP\cite{xctp}, and
Hydro\cite{hydro}. RPL allows two modes of operation (storing and
non-storing) for downwards data flows. The non-storing mode is based
on source routing, and the storing mode pro-actively maintains an
entry in the routing table of every node on the path from the root
to each destination, which is not scalable to even moderate-size
networks. XCTP is an extension of CTP and is based on the reactive
creation of reverse collection routes between the root and every
source node. An entry in the reverse-route table is kept for every data
flow at each node on the path between the source and the
destination, which is also not scalable in terms of memory
footprint. The Hydro protocol, like RPL, is based on a DAG
(directed acyclic graph) for bottom-up communication. Source nodes
need to periodically send reports to the border router, which builds
a global view (typically incomplete) of the network topology.
Some more recent protocols \cite{Palani2015, Moghadam:2015:MMR:2766739.2766774,
7374975} modified RPL to include new features. In~\cite{Palani2015}, a
load-balancing technique is applied across nodes to decrease power consumption. In
\cite{Moghadam:2015:MMR:2766739.2766774, 7374975}, multi-path routing
protocols are provided to improve throughput and fault tolerance.
Matrix differs from previous work by providing a reliable and scalable solution
for any-to-any routing in 6LoWPAN, both in terms of routing-table size and
control-message overhead. Moreover, it allocates global and structured IPv6
addresses to all nodes, which allows them to act as destinations integrated into
the Internet, contributing to the realization of the Internet of Things.
\documentclass{wg21}
\usepackage{xcolor}
\usepackage{soul}
\usepackage{ulem}
\usepackage{fullpage}
\usepackage{parskip}
\usepackage{csquotes}
\usepackage{listings}
\usepackage{minted}
\usepackage{enumitem}
\lstdefinestyle{base}{
language=c++,
breaklines=false,
basicstyle=\ttfamily\color{black},
moredelim=**[is][\color{green!50!black}]{@}{@},
escapeinside={(*@}{@*)}
}
\newcommand{\cc}[1]{\mintinline{c++}{#1}}
\newminted[cpp]{c++}{}
\title{Making \cc{std::deque} constexpr}
\docnumber{P1923R0}
\audience{LEWGI}
\author{Alexander Zaitsev}{zamazan4ik@tut.by, zamazan4ik@gmail.com}
\begin{document}
\maketitle
\section{Revision history}
\begin{itemize}
\item R0 -- Initial draft
\end{itemize}
\section{Abstract}
\cc{std::deque} is not currently \cc{constexpr} friendly. With the loosening
of requirements on \cc{constexpr} in \cite{P0784R1} and related papers, we
can now make \cc{std::deque} \cc{constexpr}, and we should in order to support
the \cc{constexpr} reflection effort (and other evident use cases).
\section{Motivation}
\cc{std::deque} is not as widely used as \cc{std::vector} or \cc{std::string}, but there is no reason to keep it non-\cc{constexpr}: one of the main directions of C++ evolution is compile-time programming, and we want as much of the standard library as possible to be usable at compile time. This paper makes \cc{std::deque} available in constant expressions.
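For context, the analogous pattern is already valid for \cc{std::vector} after \cite{P0784R1}; this proposal makes the same style of code valid for \cc{std::deque}. The sketch below therefore uses \cc{std::vector}; spelling it with \cc{std::deque} additionally assumes a library implementing the proposed wording:
\begin{codeblock}
#include <cassert>
#include <vector>

// Valid in C++20: a container used entirely within constant
// evaluation. Replacing std::vector with std::deque is the goal
// of this proposal.
constexpr int sum_first_n(int n) {
  std::vector<int> v;
  for (int i = 1; i <= n; ++i) v.push_back(i);
  int s = 0;
  for (int x : v) s += x;
  return s;
}
static_assert(sum_first_n(4) == 10);

int main() { assert(sum_first_n(4) == 10); }
\end{codeblock}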
\section{Proposed wording}
We basically mark all the member and non-member functions of \cc{std::deque} \cc{constexpr}.
Direction to the editor: please apply \cc{constexpr} to all of \cc{std::deque},
including any additions that might be missing from this paper.
In \textbf{[support.limits.general]}, add the new feature test macro
\cc{__cpp_lib_constexpr_deque} with the corresponding value for header
\cc{<deque>} to Table 36 \textbf{[tab:support.ft]}.
Change in \textbf{[deque.syn] 22.3.3}:
\begin{quote}
\begin{codeblock}
#include <initializer_list>
namespace std {
// 22.3.10, class template \tcode{deque}
template<class T, class Allocator = allocator<T>> class deque;
template<class T, class Allocator>
@\added{constexpr}@ bool operator==(const deque<T, Allocator>& x, const deque<T, Allocator>& y);
template<class T, class Allocator>
@\added{constexpr}@ synth-three-way-result<T> operator<=>(const deque<T, Allocator>& x, const deque<T, Allocator>& y);
template<class T, class Allocator>
@\added{constexpr}@ void swap(deque<T, Allocator>& x, deque<T, Allocator>& y)
noexcept(noexcept(x.swap(y)));
template<class T, class Allocator, class U>
@\added{constexpr}@ void erase(deque<T, Allocator>& c, const U& value);
template<class T, class Allocator, class Predicate>
@\added{constexpr}@ void erase_if(deque<T, Allocator>& c, Predicate pred);
[...]
}
\end{codeblock}
\end{quote}
Add after \textbf{[deque.overview] 22.3.8.1/2}:
\begin{quote}
\added{The types \texttt{iterator} and \texttt{const_iterator} meet the
constexpr iterator requirements ([iterator.requirements.general]).}
\end{quote}
Change in \textbf{[deque.overview] 22.3.8.1}:
\begin{quote}
\begin{codeblock}
namespace std {
template<class T, class Allocator = allocator<T>>
class deque {
public:
// types
using value_type = T;
using allocator_type = Allocator;
using pointer = typename allocator_traits<Allocator>::pointer;
using const_pointer = typename allocator_traits<Allocator>::const_pointer;
using reference = value_type&;
using const_reference = const value_type&;
using size_type = @\impdef@; // see 22.2
using difference_type = @\impdef@; // see 22.2
using iterator = @\impdef@; // see 22.2
using const_iterator = @\impdef@; // see 22.2
using reverse_iterator = std::reverse_iterator<iterator>;
using const_reverse_iterator = std::reverse_iterator<const_iterator>;
// 22.3.8.2, construct/copy/destroy
@\added{constexpr}@ deque() : deque(Allocator()) { }
@\added{constexpr}@ explicit deque(const Allocator&);
@\added{constexpr}@ explicit deque(size_type n, const Allocator& = Allocator());
@\added{constexpr}@ deque(size_type n, const T& value, const Allocator& = Allocator());
template<class InputIterator>
@\added{constexpr}@ deque(InputIterator first, InputIterator last, const Allocator& = Allocator());
@\added{constexpr}@ deque(const deque& x);
@\added{constexpr}@ deque(deque&& x);
@\added{constexpr}@ deque(const deque& x, const Allocator&);
@\added{constexpr}@ deque(deque&& x, const Allocator&);
@\added{constexpr}@ deque(initializer_list<T>, const Allocator& = Allocator());
@\added{constexpr}@ ~deque();
@\added{constexpr}@ deque& operator=(const deque& x);
@\added{constexpr}@ deque& operator=(deque&& x)
noexcept(allocator_traits<Allocator>::is_always_equal::value);
@\added{constexpr}@ deque& operator=(initializer_list<T>);
template<class InputIterator>
@\added{constexpr}@ void assign(InputIterator first, InputIterator last);
@\added{constexpr}@ void assign(size_type n, const T& u);
@\added{constexpr}@ void assign(initializer_list<T>);
@\added{constexpr}@ allocator_type get_allocator() const noexcept;
// iterators
@\added{constexpr}@ iterator begin() noexcept;
@\added{constexpr}@ const_iterator begin() const noexcept;
@\added{constexpr}@ iterator end() noexcept;
@\added{constexpr}@ const_iterator end() const noexcept;
@\added{constexpr}@ reverse_iterator rbegin() noexcept;
@\added{constexpr}@ const_reverse_iterator rbegin() const noexcept;
@\added{constexpr}@ reverse_iterator rend() noexcept;
@\added{constexpr}@ const_reverse_iterator rend() const noexcept;
@\added{constexpr}@ const_iterator cbegin() const noexcept;
@\added{constexpr}@ const_iterator cend() const noexcept;
@\added{constexpr}@ const_reverse_iterator crbegin() const noexcept;
@\added{constexpr}@ const_reverse_iterator crend() const noexcept;
// 22.3.8.3, capacity
[[nodiscard]] @\added{constexpr}@ bool empty() const noexcept;
@\added{constexpr}@ size_type size() const noexcept;
@\added{constexpr}@ size_type max_size() const noexcept;
@\added{constexpr}@ void resize(size_type sz);
@\added{constexpr}@ void resize(size_type sz, const T& c);
@\added{constexpr}@ void shrink_to_fit();
// element access
@\added{constexpr}@ reference operator[](size_type n);
@\added{constexpr}@ const_reference operator[](size_type n) const;
@\added{constexpr}@ reference at(size_type n);
@\added{constexpr}@ const_reference at(size_type n) const;
@\added{constexpr}@ reference front();
@\added{constexpr}@ const_reference front() const;
@\added{constexpr}@ reference back();
@\added{constexpr}@ const_reference back() const;
// 22.3.8.4, modifiers
template<class... Args> @\added{constexpr}@ reference emplace_front(Args&&... args);
template<class... Args> @\added{constexpr}@ reference emplace_back(Args&&... args);
template<class... Args> @\added{constexpr}@ iterator emplace(const_iterator position, Args&&... args);
@\added{constexpr}@ void push_front(const T& x);
@\added{constexpr}@ void push_front(T&& x);
@\added{constexpr}@ void push_back(const T& x);
@\added{constexpr}@ void push_back(T&& x);
@\added{constexpr}@ iterator insert(const_iterator position, const T& x);
@\added{constexpr}@ iterator insert(const_iterator position, T&& x);
@\added{constexpr}@ iterator insert(const_iterator position, size_type n, const T& x);
template<class InputIterator>
@\added{constexpr}@ iterator insert(const_iterator position, InputIterator first, InputIterator last);
@\added{constexpr}@ iterator insert(const_iterator position, initializer_list<T>);
@\added{constexpr}@ void pop_front();
@\added{constexpr}@ void pop_back();
@\added{constexpr}@ iterator erase(const_iterator position);
@\added{constexpr}@ iterator erase(const_iterator first, const_iterator last);
@\added{constexpr}@ void swap(deque&)
noexcept(allocator_traits<Allocator>::is_always_equal::value);
@\added{constexpr}@ void clear() noexcept;
};
template<class InputIterator,
class Allocator = allocator<@\textit{iter-value-type<InputIterator>}@>>
deque(InputIterator, InputIterator, Allocator = Allocator())
-> deque<@\textit{iter-value-type<InputIterator>}@, Allocator>;
// swap
template<class T, class Allocator>
@\added{constexpr}@ void swap(deque<T, Allocator>& x, deque<T, Allocator>& y)
noexcept(noexcept(x.swap(y)));
}
\end{codeblock}%
\end{quote}
Change in \textbf{[deque.cons] 22.3.8.2}:
\begin{quote}
\begin{itemdecl}
@\added{constexpr}@ explicit deque(const Allocator&);
\end{itemdecl}
[...]
\begin{itemdecl}
@\added{constexpr}@ explicit deque(size_type n, const Allocator& = Allocator());
\end{itemdecl}
[...]
\begin{itemdecl}
@\added{constexpr}@ deque(size_type n, const T& value, const Allocator& = Allocator());
\end{itemdecl}
[...]
\begin{itemdecl}
template<class InputIterator>
@\added{constexpr}@ deque(InputIterator first, InputIterator last,
const Allocator& = Allocator());
\end{itemdecl}
[...]
\end{quote}
Change in \textbf{[deque.capacity] 22.3.8.3}:
\begin{quote}
\begin{itemdecl}
@\added{constexpr}@ void resize(size_type sz);
\end{itemdecl}
[...]
\begin{itemdecl}
@\added{constexpr}@ void resize(size_type sz, const T& c);
\end{itemdecl}
[...]
\begin{itemdecl}
@\added{constexpr}@ void shrink_to_fit();
\end{itemdecl}
[...]
\end{quote}
Change in \textbf{[deque.modifiers] 22.3.8.4}:
\begin{quote}
\begin{itemdecl}
@\added{constexpr}@ iterator insert(const_iterator position, const T& x);
@\added{constexpr}@ iterator insert(const_iterator position, T&& x);
@\added{constexpr}@ iterator insert(const_iterator position, size_type n, const T& x);
template<class InputIterator>
@\added{constexpr}@ iterator insert(const_iterator position,
InputIterator first, InputIterator last);
@\added{constexpr}@ iterator insert(const_iterator position, initializer_list<T>);
template<class... Args> @\added{constexpr}@ reference emplace_front(Args&&... args);
template<class... Args> @\added{constexpr}@ reference emplace_back(Args&&... args);
template<class... Args> @\added{constexpr}@ iterator emplace(const_iterator position, Args&&... args);
@\added{constexpr}@ void push_front(const T& x);
@\added{constexpr}@ void push_front(T&& x);
@\added{constexpr}@ void push_back(const T& x);
@\added{constexpr}@ void push_back(T&& x);
\end{itemdecl}
[...]
\begin{itemdecl}
@\added{constexpr}@ iterator erase(const_iterator position);
@\added{constexpr}@ iterator erase(const_iterator first, const_iterator last);
@\added{constexpr}@ void pop_front();
@\added{constexpr}@ void pop_back();
\end{itemdecl}
[...]
\end{quote}
Change in \textbf{[deque.erasure] 22.3.8.5}:
\begin{quote}
\begin{itemdecl}
template<class T, class Allocator, class U>
@\added{constexpr}@ void erase(deque<T, Allocator>& c, const U& value);
template<class T, class Allocator, class Predicate>
@\added{constexpr}@ void erase_if(deque<T, Allocator>& c, Predicate pred);
\end{itemdecl}
\end{quote}
\section{Implementation}
A possible implementation can be found in this \href{https://github.com/ZaMaZaN4iK/llvm-project/tree/feature/deque_constexpr}{LLVM fork}. Note that at the time this proposal was written, \cc{constexpr} destructors were not yet supported in Clang. The implementation also does not use \cc{operator<=>}; the pre-C++20 comparison operators are used instead, since libc++ does not currently use \cc{operator<=>} for \cc{std::deque}.
\section{References}
\renewcommand{\section}[2]{}%
\begin{thebibliography}{9}
\bibitem[P0784R1]{P0784R1}
Multiple authors,
\emph{Standard containers and constexpr}\newline
\url{http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2018/p0784r1.html}
\end{thebibliography}
\end{document}
\chapter{Speech Recognition Features}
\section{Mel Spectral Features}
\label{sec:mel-spectral-features}
Mel spectral features are generally formed using a filterbank
whose center frequencies increase exponentially and whose
bandwidths increase along the frequency axis. The filters are
centered such that they are spaced linearly on the mel scale, given
by the equation
\begin{equation}
\mathrm{mel}(f) = 2595 \log_{10}\left(1 + \frac{f}{700}\right)
\end{equation}
which is given in \cite{wiki:mel_scale}.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%
%
% This file is included from the file Segmentation.tex
%
% Section tag and label are placed in this top file.
%
%
%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\itkpiccaption[Zero Set Concept]{Concept of zero set in a level set.\label{fig:LevelSetZeroSet}}
\parpic(9cm,6cm)[r]{\includegraphics[width=8cm]{LevelSetZeroSet.eps}}
The level set method is a numerical technique for tracking
the evolution of contours and surfaces. Instead of manipulating the contour
directly, the contour is embedded as the zero level set of a higher
dimensional function called the level-set function, $\psi(\bf{X},t)$. The
level-set function is then evolved under the control of a differential
equation. At any time, the evolving contour can be obtained by extracting
the zero level-set $\Gamma(\bf{X},t) =
\{\psi(\bf{X},t) = 0\}$ from the output. The main advantages of using level
sets are that arbitrarily complex shapes can be modeled and that topological
changes such as merging and splitting are handled implicitly.
Level sets can be used for image segmentation by using image-based features
such as mean intensity, gradient and edges in the governing differential
equation. In a typical approach, a contour is initialized by a user and is
then evolved until it fits the form of an object in the image.
Many different implementations and variants of this basic concept have been
published in the literature. An overview of the field has been made by
Sethian \cite{Sethian1996}.
The following sections introduce practical examples of some
of the level set segmentation methods available in ITK. The remainder of this
section describes features common to all of these filters except the
\doxygen{itk}{FastMarchingImageFilter}, which is derived from a different code
framework. Understanding these features will aid in using the filters
more effectively.
Each filter makes use of a generic level-set equation to compute the update to
the solution $\psi$ of the partial differential equation.
\begin{equation}
\label{eqn:LevelSetEquation}
\frac{d}{dt}\psi = -\alpha \mathbf{A}(\mathbf{x})\cdot\nabla\psi - \beta
P(\mathbf{x})\mid\nabla\psi\mid +
\gamma Z(\mathbf{x})\kappa\mid\nabla\psi\mid
\end{equation}
where $\mathbf{A}$ is an advection term, $P$ is a propagation (expansion) term,
and $Z$ is a spatial modifier term for the mean curvature $\kappa$. The scalar
constants $\alpha$, $\beta$, and $\gamma$ weight the relative influence of
each of the terms on the movement of the interface. A segmentation filter may
use all of these terms in its calculations, or it may omit one or more of them.
If a term is left out of the equation, then setting its corresponding scalar
weight has no effect.
All of the level-set based segmentation filters \emph{must} operate with
floating point precision to produce valid results. The third, optional
template parameter is the \emph{numerical type} used for calculations and as
the output image pixel type. The numerical type is \code{float} by default,
but can be changed to \code{double} for extra precision. A user-defined,
signed floating point type that defines all of the necessary arithmetic
operators and has sufficient precision is also a valid choice. You should
not use types such as \code{int} or \code{unsigned char} for the numerical
parameter. If the input image pixel types do not match the numerical type,
those inputs will be cast to an image of appropriate type when the filter is
executed.
Most filters require two images as input, an initial model $\psi(\bf{X},
t=0)$, and a \emph{feature image}, which is either the image you wish to
segment or some preprocessed version. You must specify the isovalue that
represents the surface $\Gamma$ in your initial model. The single image
output of each filter is the function $\psi$ at the final time step. It is
important to note that the contour representing the surface $\Gamma$ is the
zero level-set of the output image, and not the isovalue you specified for
the initial model. To represent $\Gamma$ using the original isovalue, simply
add that value back to the output.
The solution $\Gamma$ is calculated to subpixel precision. The best discrete
approximation of the surface is therefore the set of grid positions closest to
the zero-crossings in the image, as shown in
Figure~\ref{fig:LevelSetSegmentationFigure1}. The
\doxygen{itk}{ZeroCrossingImageFilter} operates by finding exactly those grid
positions and can be used to extract the surface.
\begin{figure}
\centering
\includegraphics[width=0.4\textwidth]{LevelSetSegmentationFigure1.eps}
\itkcaption[Grid position of the embedded level-set surface.]{The implicit level
set surface $\Gamma$ is the black line superimposed over the image grid. The location
of the surface is interpolated by the image pixel values. The grid pixels
closest to the implicit surface are shown in gray. }
\protect\label{fig:LevelSetSegmentationFigure1}
\end{figure}
There are two important considerations when analyzing the processing time for
any particular level-set segmentation task: the surface area of the evolving
interface and the total distance that the surface must travel. Because the
level-set equations are usually solved only at pixels near the surface (fast
marching methods are an exception), the time taken at each iteration depends on
the number of points on the surface. This means that as the surface grows, the
solver will slow down proportionally. Because the surface must evolve slowly
to prevent numerical instabilities in the solution, the distance the surface
must travel in the image dictates the total number of iterations required.
Some level-set techniques are relatively insensitive to initial conditions
and are therefore suitable for region-growing segmentation. Other techniques,
such as the \doxygen{itk}{LaplacianSegmentationLevelSetImageFilter}, can easily
become ``stuck'' on image features close to their initialization and should
be used only when a reasonable prior segmentation is available as the
initialization. For best efficiency, your initial model of the surface
should be the best guess possible for the solution.
\subsection{Fast Marching Segmentation}
\label{sec:FastMarchingImageFilter}
\ifitkFullVersion
\input{FastMarchingImageFilter.tex}
\fi
%% \subsection{Shape Detection Segmentation}
%% \label{sec:ShapeDetectionLevelSetFilter}
%% \ifitkFullVersion
%% \input{ShapeDetectionLevelSetFilter.tex}
%% \fi
%% \subsection{Geodesic Active Contours Segmentation}
%% \label{sec:GeodesicActiveContourImageFilter}
%% \ifitkFullVersion
%% \input{GeodesicActiveContourImageFilter.tex}
%% \fi
%% \subsection{Threshold Level Set Segmentation}
%% \label{sec:ThresholdSegmentationLevelSetImageFilter}
%% \ifitkFullVersion
%% \input{ThresholdSegmentationLevelSetImageFilter.tex}
%% \fi
%% \subsection{Canny-Edge Level Set Segmentation}
%% \label{sec:CannySegmentationLevelSetImageFilter}
%% \ifitkFullVersion
%% \input{CannySegmentationLevelSetImageFilter.tex}
%% \fi
%% \subsection{Laplacian Level Set Segmentation}
%% \label{sec:LaplacianSegmentationLevelSetImageFilter}
%% \ifitkFullVersion
%% \input{LaplacianSegmentationLevelSetImageFilter.tex}
%% \fi
%% \subsection{Geodesic Active Contours Segmentation With Shape Guidance}
%% \label{sec:GeodesicActiveContourShapePriorLevelSetImageFilter}
%% \ifitkFullVersion
%% \input{GeodesicActiveContourShapePriorLevelSetImageFilter.tex}
%% \fi
%% Technical Report for the work on the AI-DSL over the period of
%% March to May 2021.
\documentclass[]{report}
\usepackage{url}
\usepackage{minted}
\usepackage[textsize=footnotesize]{todonotes}
\newcommand{\kabir}[2][]{\todo[color=yellow,author=kabir, #1]{#2}}
\newcommand{\nil}[2][]{\todo[color=purple,author=nil, #1]{#2}}
\usepackage[hyperindex,breaklinks]{hyperref}
\usepackage{breakurl}
\usepackage{listings}
\lstset{basicstyle=\ttfamily\footnotesize,breaklines=false,frame=single}
\usepackage{float}
\restylefloat{table}
\usepackage{longtable}
\usepackage{graphicx}
\usepackage[font=small,labelfont=bf]{caption}
\usepackage[skip=0pt]{subcaption}
\usepackage{circledsteps}
\begin{document}
\title{AI-DSL Technical Report (February to May 2021)}
\author{Nil Geisweiller, Kabir Veitas, Eman Shemsu Asfaw, Samuel Roberti}
\maketitle
\begin{abstract}
This document is a technical report on work done between February
and May 2021, based on the \emph{AI-DSL
Proposal}~\cite{GoertzelGeisweillerBlog} published as a blogpost on
the SingularityNET website in December 2020. It is the first
iteration of a larger endeavor to create a system that enables
autonomous interoperability between AI services over the network, more
specifically over the SingularityNET-on-Cardano network. It presents
in detail what has been accomplished so far as well as future plans
for the continuation of that endeavor.
\end{abstract}
\tableofcontents
\chapter{Introduction}
\section{Autonomous Interoperability of AI Services}
Among the wonders that blockchain technology enables is the
possibility of programmatically exchanging value and services between
parties. In the context of AI services this is especially relevant
given the inherent complexity and decomposability of such systems.
Moreover, the abundance of AI algorithms available on the Internet
means that creating a new AI solution often consists of connecting
existing AI algorithms together. For instance, building a system to
discover new drugs may consist of
\begin{enumerate}
\item a reasoner to extract background knowledge from biological
databases,
\item a principal component analyzer to discover abstractions,
\item a feature selector to discard irrelevant information,
\item a learner to generate predictive models relating selected
features and drug efficacy.
\end{enumerate}
The task of composing such AI algorithms is, in most cases, done by
humans and, as any AI practitioner knows, is tedious and time
consuming. This applies not just to the composition of the whole but
also to the search and understanding required to find the parts.
Facilitating and ultimately automating this process is the goal of
the \emph{Artificial Intelligence Domain Specific Language}, or
\emph{AI-DSL} for short.
Another important aspect is the management of resources, both
financial and computational (CPUs, GPUs, etc). The AI-DSL is also
intended to incorporate descriptions of such computational
requirements as well as measures of expected result quality.
Finally, as described in the blogpost, the current plan is to have
the system rely on Dependent Types~\cite{Altenkirch05whydependent} to
express and validate the specifications of the AI services, including
cost, quality and the relationships between them. Dependent Types
have been chosen because they are geared toward program specification
checking and program generation, which in our case comes close to AI
service verification and combination if one sees AI services as
functions. More specifically, Idris has been chosen as our initial
Dependently Typed Language (DTL) candidate, due to its efficiency and
the fact that it has been primarily designed to verify and generate
actual running programs as opposed to
proofs\footnote{Programs are equivalent to proofs according to the
Curry--Howard correspondence, but some representations are more
amenable to running actual programs than others.}.
\section{Objectives and accomplishments}
For that first iteration the goals were to
\begin{enumerate}
\item Experiment with matching and retrieval of AI services using
Idris~\cite{Idris}, a Dependently Typed Language (DTL)~\cite{DTL},
equipped with a powerful type system to express function
specifications. That work is described in
Chapter~\ref{chap:aidsl_registry}.
\item Start building an AI ontology to ultimately provide a rich and
extendable vocabulary for the AI-DSL. That work is described in
Chapter~\ref{chap:aidsl_ontology}.
\item Start building the AI-DSL itself, from its syntax to its
semantics. Exploratory work on that is described in
Chapter~\ref{chap:soft_eng_strat}.
\item Integrate all the above into a holistic prototype, running on a
real world test case of AI service assemblage, in real conditions,
that is ideally on the SingularityNET-on-Cardano network.
Preparatory work on that is described in
Section~\ref{sec:net_ai_future_work}.
\end{enumerate}
All the objectives except the last one have been accomplished at
least to some degree. The last, and the most ambitious, objective had
to be postponed for the next iteration due to its complexity.
\section{Related work}
\label{sec:related_work}
To the best of our knowledge there is no existing work on creating
such an AI-DSL system to enable autonomous AI service interoperability
using Dependent Types, let alone one running on a blockchain. There
are however attempts using related methodologies, usually involving
ontologies with more or less explicit forms of reasoning, outside of
the context of blockchain technology.
We are collecting a body of literature in a GitHub
issue~\cite{AIDSLRelatedWork} serving as a living document that is
regularly updated with such related work. We will not describe the
entire body here due to its volume (over 80 references at the time of
writing) and the fact that we are still in the process of reviewing
it. The most relevant work we have encountered so far is, however,
summarized below.
The most recent and also most relevant work we have found is the
Function Ontology~\cite{FunctionOntology, DeMeester2020}. Its goal is
to define a standard for describing, both formally and informally,
functions, their references to implementations, as well as developing
tools for retrieving and executing them, remotely or locally. To the
best of our knowledge it does not make use of dependent types or
blockchain technology, however there is a lot of potential for reuse.
More investigation will be conducted to precisely determine to what
extent.
Other relevant work, with potential for at least conceptual reuse, is
described below. Let us start with the multi-agent system (MAS)
field. In~\cite{Brazier1995} the authors describe a use case of a
system called DESIRE to formally define the inputs and outputs of
agents and their control flows. The logic seems somewhat limited but
their goals align with ours. In~\cite{Bourahla20055} an extension of
Computation Tree Logic (CTL) called multi-modal branching-time logic
is defined to apply model checking to collections of agents, given
their descriptions in some abstract programming language. It is not
clear whether their specification language is convenient or whether
the model-checking methodology is sufficiently open-ended, but the
work is nevertheless solid and relevant. In~\cite{Desouky2007} an
architecture for a distributed multi-agent intelligent system is
described. The
description of the architecture is high level but the authors mention
Agent Communication Language (ACL) standards~\cite{Labrou99thecurrent}
such as FIPA-ACL~\cite{FIPAACL}, used to communicate agents requests
to each other. In the ontology field we have found the following
papers. In~\cite{Gruber_anontology} the authors introduce an ontology
called EngMath for representing mathematical concepts useful for
engineering, including quantity units, equations and more in
KIF~\cite{KIF} to be used in SHADE~\cite{Gruber92towarda} a system for
collaborative intelligent agents. It is rather old and it seems
development has halted, but could nevertheless be interesting to learn
more about. OpenMath~\cite{Abbot1995} is another old, but still
maintained, ontology about mathematics. More recent mathematical
ontologies worth mentioning are OntoMath~\cite{Elizarov2017} and
OntoMathPRO~\cite{nevzorova2014ontomathpro}; they seem rather high
level but could still be useful. Broadening the scope,
\cite{Roelofs2020} is focused on validating the consistency of
ontologies. Relatedly, in~\cite{Witherell2009}, a tool for analyzing
and propagating changes across overlapping ontologies according to
predefined inference rules, called FIDOE, is introduced. The authors
mention standards such as SUMO~\cite{pease_standard_2009} and
SWRL~\cite{SWRL} as well. Such work could be relevant towards the
goal of facilitating the decentralization of the AI-DSL and its
ontology.
Finally, it is important to mention a rather young field called
Verified Artificial Intelligence
(VAI)~\cite{DBLP:journals/corr/SeshiaS16, HAND2020100037,
MENZIES2005153}. The objective of VAI is to bring formal
verification to AI to ensure that algorithms and models meet certain
mathematical requirements. This includes work using dependent types
to that effect~\cite{Diehl2011, Pineyro2019}. For instance
\cite{SohamNeural} uses Idris to guarantee that the layers of a neural
network are arranged in a sound manner. This does not guarantee that
the neural networks coming out of that program are good models of
reality, but it does eliminate certain types of memory corruption errors
by forbidding, for instance, connecting an input vector of a size
unequal to that of the first layer.
\chapter{AI-DSL Registry}
\label{chap:aidsl_registry}
In this chapter we describe a prototype of an AI-DSL registry, a
service in charge of storing AI service specifications and returning
matching AI services upon request, given a specification to fulfill.
In Section \ref{sec:realized_function} we describe experiments for
implementing the \texttt{RealizedAttributes} and
\texttt{RealizedFunction} data structures described
in~\cite{GoertzelGeisweillerBlog}, used for capturing financial and
computational costs as well as measures of expected result quality. In
Section \ref{sec:network_idris_ai_services} we describe the implementation
of a network of trivially simple AI services implemented in Idris, and
use the Idris compiler to type check whether they can properly connect to each
other. Finally, in Section \ref{sec:dependently_typed_registry} we describe the
implementation of an AI-DSL Registry prototype, as a proof-of-concept
for querying AI services based on their dependently typed
specifications.
\section{Realized Function}
\label{sec:realized_function}
\subsection{Description}
The \texttt{RealizedFunction} data structure, as introduced
in~\cite{GoertzelGeisweillerBlog}, is a wrapper around a regular
function that integrates aspects of its specification pertaining to its
execution on real physical substrates, as opposed to just its
algorithmic properties. For instance it contains descriptions of
costs (financial, computational, etc.) and performance (quality, etc.)
captured in the \texttt{RealizedAttributes} data structure, also
introduced in~\cite{GoertzelGeisweillerBlog}.
For that iteration we have implemented a simple version of
\texttt{RealizedFunction} and \texttt{RealizedAttributes} in
Idris2~\cite{Idris}. The \texttt{RealizedAttributes} data structure
contains
\begin{itemize}
\item \texttt{Costs}: as a triple of constants,
\texttt{financial}, \texttt{temporal} and \texttt{computational},
\item \texttt{Quality}: as a single \texttt{quality} value.
\end{itemize}
as well as an example of a compositional law,
\texttt{add\_costs\_min\_quality}, where costs are additive and
quality is infimum-itive. Below is a small snippet of that code to
give an idea of what it looks like
\begin{minted}[mathescape]{idris}
record RealizedAttributes where
constructor MkRealizedAttributes
costs : Costs
quality : Quality
\end{minted}
\begin{minted}[mathescape]{idris}
add_costs_min_quality : RealizedAttributes ->
RealizedAttributes ->
RealizedAttributes
add_costs_min_quality f_attrs g_attrs = fg_attrs where
fg_attrs : RealizedAttributes
fg_attrs = MkRealizedAttributes (add_costs f_attrs.costs g_attrs.costs)
(min f_attrs.quality g_attrs.quality)
\end{minted}
The full implementation can be found in
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/realized-function/RealizedAttributes.idr}{\texttt{RealizedAttributes.idr}},
under the
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/realized-function/}{\texttt{experimental/realized-function/}}
folder of the \href{https://github.com/singnet/ai-dsl/}{AI-DSL
repository}~\cite{AIDSLRepo}.\\
We then implemented \texttt{RealizedFunction}, which essentially
attaches a \texttt{RealizedAttributes} instance to a function. In
addition we implemented a composition (as in function
composition) operating on \texttt{RealizedFunction} instead of
regular functions, making use of the compositional law above.
Below is likewise a snippet of that code
\begin{minted}[mathescape]{idris}
data RealizedFunction : (t : Type) -> (attrs : RealizedAttributes) -> Type where
MkRealizedFunction : (f : t) -> (attrs : RealizedAttributes) ->
RealizedFunction t attrs
\end{minted}
\begin{minted}[mathescape]{idris}
compose : {a : Type} -> {b : Type} -> {c : Type} ->
(RealizedFunction (b -> c) g_attrs) ->
(RealizedFunction (a -> b) f_attrs) ->
(RealizedFunction (a -> c) (add_costs_min_quality f_attrs g_attrs))
compose (MkRealizedFunction g g_attrs) (MkRealizedFunction f f_attrs) =
MkRealizedFunction (g . f) (add_costs_min_quality f_attrs g_attrs)
\end{minted}
The full implementation can be found in
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/realized-function/RealizedFunction.idr}{\texttt{RealizedFunction.idr}}
under the same folder.
Given these data structures, we used the Idris compiler to type check
whether the realized attributes of realized functions (i.e.\ AI
services) composed from other realized functions follow the defined
compositional law, here \texttt{add\_costs\_min\_quality}. For
instance, given the realized attributes of an incrementer function
\begin{minted}[mathescape]{idris}
incrementer_attrs = MkRealizedAttributes (MkCosts 100 10 1) 1
\end{minted}
and a twicer function
\begin{minted}[mathescape]{idris}
twicer_attrs = MkRealizedAttributes (MkCosts 200 20 2) 0.9
\end{minted}
the realized attributes of their composition must be
\begin{minted}[mathescape]{idris}
rlz_compo1_attrs = MkRealizedAttributes (MkCosts 300 30 3) 0.9
\end{minted}
otherwise Idris detects a type error.
\subsection{Objectives and achievements}
The objective of this work was to see whether Idris2 was able to type
check that the realized attributes of composed realized functions
follow the defined compositional law. We have found that Idris2 is
not only able to do so, but to our surprise does it considerably
faster than Idris1 (instantaneously instead of in seconds to minutes),
by bypassing induction on numbers and using efficient function-driven
rewriting on the realized attributes instead.
That experiment can be found in
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/realized-function/RealizedFunction-test.idr}{\texttt{RealizedFunction-test.idr}},
under the
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/realized-function/}{\texttt{experimental/realized-function/}}
folder of the \href{https://github.com/singnet/ai-dsl/}{AI-DSL
repository}~\cite{AIDSLRepo}.
An improvement of that work is also described in Section
\ref{sec:service_composition_in_idris2}.
\subsection{Future work}
Experimenting with constants as realized attributes was the first step
in our investigation. The subsequent steps will be to replace
constants by functions, probability distributions and other
sophisticated ways to represent costs and quality.
\section{Network of Idris AI services}
\label{sec:network_idris_ai_services}
\subsection{Description}
In this work we have implemented a small network of trivially simple
AI services, with the objective of testing if the Idris compiler could
be used to type check the validity of their connections. Three primary
services were implemented
\begin{enumerate}
\item \texttt{incrementer}: increment an integer by 1
\item \texttt{twicer}: multiply an integer by 2
\item \texttt{halfer}: divide an integer by 2
\end{enumerate}
as well as composite services based on these primary services, such as
\begin{itemize}
\item \texttt{incrementer . halfer . twicer}
\end{itemize}
to test that such composition, for instance, is properly typed. The
networking part was implemented based on the SingularityNET example
service~\cite{SNETExampleService} mentioned in the SingularityNET
tutorial~\cite{SNETTutorial}. The specifics of that implementation
are of little importance for that report and thus are largely ignored.
The point was to try to be as close as possible to real networking
conditions. For the part that matters to us we may mention that
communications between AI services are handled by gRPC~\cite{gRPC},
which provides some level of type checking by ensuring that the data being
exchanged fulfills certain type structures (list of integers, union type
of string and bool, etc) specified in Protocol
Buffers~\cite{Protobuf}. Thus one may see the usage of Idris in that
context as adding an enhanced refined verification layer on top of
gRPC making use of the expressive power of dependent types.
\subsection{Objectives and achievements}
As mentioned above, the objective of this experiment was to see how
the Idris compiler can be used to type check combinations of AI
services. It was initially envisioned to make use of dependent types
by specifying that the \texttt{twicer} service outputs an even
integer, as opposed to any integer, and that the \texttt{halfer}
service only accepts an even integer as well. The idea was to
prohibit certain combinations such as
\begin{itemize}
\item \texttt{halfer . incrementer . twicer}
\end{itemize}
Since the output of \texttt{incrementer . twicer} is provably odd,
\texttt{halfer} should refuse it and such a combination should be
rejected. This objective was not reached in this experiment, but was
reached in the experiments described in Sections
\ref{sec:dependently_typed_registry} and \ref{sec:dependent_pairs}.
The other objective was to type check that the compositions have
realized attributes corresponding to the compositional law implemented
in Section \ref{sec:realized_function}, which was fully achieved in
this experiment. For instance by changing either the input/output
types, costs or quality of the following composition
\begin{minted}[mathescape]{idris}
-- Realized (twicer . incrementer).
rlz_compo1_attrs : RealizedAttributes
rlz_compo1_attrs = MkRealizedAttributes (MkCosts 300 30 3) 0.9
-- The following does not work because 301 /= 200+100
-- rlz_compo1_attrs = MkRealizedAttributes (MkCosts 301 30 3) 0.9
rlz_compo1 : RealizedFunction (Int -> Int) Compo1.rlz_compo1_attrs
rlz_compo1 = compose rlz_twicer rlz_incrementer
\end{minted}
defined in
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/simple-idris-services/service/Compo1.idr}{\texttt{experimental/simple-idris-services/service/Compo1.idr}},
the corresponding service would raise a type checking error at start
up. More details on the experiment and how to run it can be found in
the
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/simple-idris-services/README.md}{\texttt{README.md}}
under the
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/simple-idris-services/}{\texttt{experimental/simple-idris-services/service/}}
folder of the \href{https://github.com/singnet/ai-dsl/}{AI-DSL
repository}~\cite{AIDSLRepo}.
Thus, aside from the fact that dependent types were ignored in that
experiment, the objectives were met. See Section \ref{sec:dependently_typed_registry}
for a follow-up experiment involving dependent types.
\subsection{Future work}
\label{sec:net_ai_future_work}
This experiment was a good way to explore how Idris can be integrated
into a network of services. What we need to do next is experiment with
actual AI algorithms, making full use of dependent types in their
specifications. Such an endeavor was actually attempted on the Fake
News Warning app described in Section
\ref{sec:domain_model_considerations}, but it was eventually judged
too ambitious for this iteration and postponed to the next.
More about that is discussed in Section \ref{sec:improve_test_cases}.
Also, we obviously want to be able to reuse existing AI services and
write their specifications on top of them, as opposed to writing both
specification and code in Idris (and ultimately the AI-DSL). To that
end it was noted that having a Protobuf to/from Idris converter would
be useful, so that a developer can start from an existing AI service,
specified in Protobuf, and enrich it with dependent types using
Idris. The other way around could be useful as well, enabling a
developer to implement AI services entirely in Idris and expose their
Protobuf specifications to the network. Relatedly, a direct
implementation of gRPC for Idris could be handy as well.
\section{Dependently Typed Registry}
\label{sec:dependently_typed_registry}
\subsection{Description}
One important goal of the AI-DSL is to have a system that can perform
autonomous matching and composition of AI services, so that, given
the specification of an AI service, the system can find it, complete
it or even build it entirely from scratch. We have implemented an
\emph{AI-DSL Registry} prototype to start experimenting with such
functionality.
So far we have two versions in the
\href{https://github.com/singnet/ai-dsl/}{AI-DSL repository}, one
without dependent types support, under
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/registry/}{\texttt{experimental/registry/}},
and a more recent one with dependent type support that can be found
under
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/registry-dtl/}{\texttt{experimental/registry-dtl/}}.
We will focus our attention on the latter which is far more
interesting.\\
The AI-DSL registry (reminiscent of the SingularityNET
registry~\cite{SNETRegistry}) is itself an AI service with the following functions
\begin{enumerate}
\item \texttt{retrieve}: find AI services on the network fulfilling a
given specification.
\item \texttt{compose}: construct composite services fulfilling that
specification. Useful when no such AI services can be found.
\end{enumerate}
The experiment contains the same \texttt{incrementer}, \texttt{twicer}
and \texttt{halfer} services described in Section
\ref{sec:network_idris_ai_services} with the important distinction that
their specifications now utilize dependent types. For instance the
type signature of \texttt{twicer} becomes
\begin{minted}[mathescape]{idris}
twicer : Integer -> EvenInteger
\end{minted}
instead of
\begin{minted}[mathescape]{idris}
twicer : Integer -> Integer
\end{minted}
where \texttt{EvenInteger} is a shorthand for the following
dependent type
\begin{minted}[mathescape]{idris}
EvenInteger : Type
EvenInteger = (n : WFInt ** Parity n 2)
\end{minted}
that is, a \emph{dependent pair} composed of a \emph{well founded
integer} of type \texttt{WFInt} and a dependent data structure,
\texttt{Parity}, containing a proof that the first element of the pair,
\texttt{n}, is even. More details on that can be found in
Section \ref{sec:dependent_pairs}.
For now our prototype of the AI-DSL registry implements the
\texttt{retreive} function, which, given an Idris type signature,
searches through a database of AI services and returns one fulfilling
that type. In that experiment the database of AI services is composed
of \texttt{incrementer}, \texttt{twicer}, \texttt{halfer}, the
\texttt{registry} itself and \texttt{compo}, a composite service using
previously listed services.
One can query each service via gRPC. For instance, querying the
\texttt{retrieve} function of the \texttt{registry} service with the
following input
\begin{minted}[mathescape]{idris}
String -> (String, String)
\end{minted}
outputs
\begin{minted}[mathescape]{idris}
Registry.retrieve
\end{minted}
which is actually itself (as the \texttt{retrieve} procedure of the
\texttt{registry} service takes a string, a type signature, and
returns two strings, the service and procedure names matching that
type signature). Likewise, one can query
\begin{minted}[mathescape]{idris}
Integer -> Integer
\end{minted}
which outputs
\begin{minted}[mathescape]{idris}
Incrementer.incrementer
\end{minted}
corresponding to the \texttt{Incrementer} service with the
\texttt{incrementer} function.
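To make the lookup concrete, the following Python sketch (Python being the language of the current registry prototype) approximates \texttt{retrieve} as a lookup over whitespace-normalized type-signature strings. This is a deliberately simplified, hypothetical rendition; the database contents and function names are illustrative only.

```python
# Hypothetical, simplified sketch of the registry's `retrieve` lookup.
# The real prototype delegates type matching to Idris; here we merely
# compare whitespace-normalized signature strings.

SERVICES = {
    "Integer -> Integer": "Incrementer.incrementer",
    "Integer -> EvenInteger": "Twicer.twicer",
    "String -> (String, String)": "Registry.retrieve",
}

def normalize(sig: str) -> str:
    """Collapse runs of whitespace so formatting differences do not matter."""
    return " ".join(sig.split())

def retrieve(sig: str):
    """Return the first service whose signature matches, or None."""
    wanted = normalize(sig)
    for known, service in SERVICES.items():
        if normalize(known) == wanted:
            return service
    return None

# retrieve("Integer -> Integer") returns "Incrementer.incrementer"
```

Note that such purely textual matching cannot see through type synonyms like \texttt{EvenInteger}; this is one reason the actual prototype delegates matching to Idris.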
Next one can provide a query involving dependent types, such as
\begin{minted}[mathescape]{idris}
Integer -> EvenInteger
\end{minted}
outputting
\begin{minted}[mathescape]{idris}
Twicer.twicer
\end{minted}
Or, equivalently, one can provide the unwrapped dependent type signature
\begin{minted}[mathescape]{idris}
Integer -> (n : WFInt ** Parity n (Nat 2))
\end{minted}
retrieving the correct service again
\begin{minted}[mathescape]{idris}
Twicer.twicer
\end{minted}
At the heart of it all is Idris. Behind the scenes, the registry
communicates the type signature to the Idris REPL and requests, via
the \texttt{:search} meta-command, all loaded functions matching the
type signature. The registry then returns the first match.
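A minimal Python sketch of this extraction step might look as follows. The output format of \texttt{:search} assumed here (result lines of the form \texttt{Name : Type}) is an assumption for illustration; the real prototype's parsing may differ.

```python
# Hypothetical sketch of extracting the first match from the output of
# Idris' `:search` meta-command. The assumed output format (lines of
# the form "Name : Type") is illustrative, not the prototype's actual
# parsing logic.

def first_search_match(search_output: str):
    """Return the qualified name from the first `Name : Type` result line."""
    for line in search_output.splitlines():
        line = line.strip()
        if " : " in line:
            name = line.split(" : ", 1)[0].strip()
            if name:
                return name
    return None

example_output = """
Twicer.twicer : Integer -> EvenInteger
"""
# first_search_match(example_output) evaluates to "Twicer.twicer"
```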
In addition, we can now write composite services with missing parts. The
\texttt{compo} service illustrates this. This service essentially
implements the following composition
\begin{minted}[mathescape]{idris}
incrementer . halfer . (Registry.retrieve ?type)
\end{minted}
Thus upon execution, the \texttt{compo} AI service queries the
registry to fill the hole with the correct service according to its
specification, here \texttt{twicer}.
More details about this, including steps to reproduce it, can be found
in the
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/registry-dtl/README.md}{\texttt{README.md}}
under the
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/registry-dtl/}{\texttt{experimental/registry-dtl/}}
folder of the \href{https://github.com/singnet/ai-dsl/}{AI-DSL
repository}~\cite{AIDSLRepo}.
\subsection{Objectives and achievements}
As shown above, we were able to implement a prototype of an AI-DSL
registry. Only the \texttt{retrieve} function was implemented; the
\texttt{compose} function still remains to be implemented, although
the \texttt{compo} service is already somewhat halfway there, with the
limitation that the missing type, \texttt{?type}, is hardwired in the
code as \texttt{Integer -> EvenInteger}. It should be noted, however,
that Idris should be capable of inferring such information, but more
work is needed to fully explore that functionality.
Of course this is a very simple example, in fact the simplest we could
come up with, but we believe it serves as a proof of concept and
demonstrates that AI service matching and retrieval, using dependent
types as a formal specification language, is possible.
\subsection{Future work}
\label{sec:ai_dsl_future_work}
There are many possible future improvements to this work, falling
into two main categories: the prototype itself, and its use cases.
\subsubsection{Improve the prototype}
Here is a list, in no particular order, of possible improvements to the
AI-DSL prototype.
\begin{itemize}
\item Implement \texttt{compose} for autonomous composition.
\item Use structured types to represent type signatures instead of raw
  \texttt{String}s.
\item Return a list of matching services instead of only the first one.
\item Allow fuzzy matching and infer \emph{sophisticated casts} to
automatically convert data in case of imperfect match between output
and input types.
\item Improve its implementation. The registry prototype is currently
  implemented in Python\footnote{because the SingularityNET example it
    is derived from is written in Python, not because Python is
    considered to be the most suitable language for this purpose.},
  querying Idris when necessary. However, the registry would likely be
  better implemented in Idris itself. This leads us to an interesting
  possibility: perhaps the registry, and in fact most (perhaps all)
  components and functions of the AI-DSL, could or should be
  implemented in the AI-DSL itself.
\end{itemize}
\subsubsection{Improve the test cases}
\label{sec:improve_test_cases}
\begin{itemize}
\item First, we want to expand our trivial AI service assemblage by
defining more complex properties, making use for instance of product
and sum types (corresponding to the logical connectors $\wedge$ and
$\vee$), and explore how to cast specialized properties into more
abstract ones. For instance, given the AI services A, B and C,
let's assume A's output satisfies a conjunction of properties, such
as an integer that is both even and within a certain interval. Then
B and C inputs may only need to partially fulfill such conjunction
of properties. For instance B may require an even integer, while C
may require that integer to be within a certain interval. In other
  words, both services B and C may take the output of A as input, but
  for different reasons.  It is not conceptually difficult to cast the
  output of A to match the input types of B and C; however, this is
  something we still need to explore in its full generality with
  Idris.
\item Second, we want to adapt the Fake News Warning app described in
  Section~\ref{sec:domain_model_considerations} as a test case for the
  AI-DSL Registry.  To briefly explain, the Fake News Warning app is
  an AI service assemblage estimating whether the headline of an article is
  consistent with its body.  Such an assemblage is composed of
\begin{enumerate}
\item a collection of classifiers, each attempting to learn to
recognize if the headline of an article is consistent with its body;
\item an aggregator combining the outputs of all classifiers into a
single answer.
\end{enumerate}
So what we would like to achieve is to
\begin{enumerate}
\item formally specify the functions of each AI service above (which
requires to enrich the leaf ontology described in
Section~\ref{sec:ontology_prototype} as well as the composition
functions described in Sections~\ref{sec:realized_function} and
\ref{sec:service_composition_in_idris2});
\item populate the AI-DSL registry described in
Section~\ref{sec:dependently_typed_registry} with these formal
specifications;
\item type check combinations of these AI services, rejecting
illegal ones, such as two classifiers serially connected, and
accepting legal ones, such as classifiers connected in parallel to
the aggregator;
  \item automatically connect the AI services into a valid assemblage
    given a high-level specification of such assemblage.  Such a
    specification should include its type signature, the overall
    financial, temporal and computational cost, as well as the overall
    expected result quality.  The assemblage should then be
    constructed in a way that simultaneously satisfies all the
    requirements.  For instance, in order to reach the expected result
    quality, the assemblage may require more classifiers, which may
    however increase the financial cost, etc.
\end{enumerate}
By now we have a good grasp of how such service assemblage works and
some ideas of how to formally specify its subcomponents. As an
intermediary step we have also started porting portions of the Fake
News Warning app to Idris, see
\href{https://github.com/singnet/ai-dsl/blob/master/ai-algorithms/NeuralNets/README.md}{\texttt{ai-algorithms/NeuralNets/README.md}}
in the \href{https://github.com/singnet/ai-dsl/}{AI-DSL
  repository}~\cite{AIDSLRepo}, with the intention of refining the
type signatures of the various parts by taking advantage of
dependent types.
\end{itemize}
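The intended shape of such an assemblage can be sketched in Python (the language of the registry prototype). The classifier bodies below are crude stand-ins, not the real models; only the wiring, parallel classifiers feeding a single aggregator, reflects the description above.

```python
# Hypothetical sketch of the Fake News Warning assemblage's shape:
# classifiers run in parallel on the same (headline, body) input and an
# aggregator combines their scores. Classifier internals are stand-ins.

def classifier_a(headline: str, body: str) -> float:
    # Stand-in: "consistent" if the headline's first word occurs in the body.
    words = headline.split()
    return 1.0 if words and words[0].lower() in body.lower() else 0.0

def classifier_b(headline: str, body: str) -> float:
    # Stand-in: crude word-overlap ratio between headline and body.
    h = set(headline.lower().split())
    b = set(body.lower().split())
    return len(h & b) / len(h) if h else 0.0

def aggregator(scores):
    """Combine parallel classifier outputs into a single consistency score."""
    return sum(scores) / len(scores)

def assemblage(headline: str, body: str) -> float:
    # Classifiers are connected in parallel to the aggregator; chaining
    # them serially would be ill-typed (a score is not a text pair),
    # which is exactly the kind of error the registry should reject.
    scores = [f(headline, body) for f in (classifier_a, classifier_b)]
    return aggregator(scores)
```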
\chapter{Software Engineering Strategies}
\label{chap:soft_eng_strat}
\section{Service Composition in Idris2}
\label{sec:service_composition_in_idris2}
A key requirement of the AI-DSL is to provide both an ergonomic syntax for
describing service properties and a robust process for using these descriptions
to verify the correctness of composed services. This work involved
investigating several different methods for meeting this requirement using
Idris2.
\subsection{\texttt{RealizedFunction} and \texttt{RealizedAttributes}}
The \texttt{RealizedFunction} and \texttt{RealizedAttributes} data types were an
early strategy for describing and composing AI services. They directly
contained values representing the relevant properties of arbitrary Idris
functions and made use of a \texttt{compose} function to compute the properties
of the function resulting from the composition of two others.
While this approach worked to verify that a small, fixed set of attributes was
correct for a composition of functions, it also presented several issues:
\begin{itemize}
\item
The \texttt{RealizedFunction} definition contains only the raw data
representing function properties, while using a separate function to
represent composition logic. Because the composition logic is not
part of the type definition, there is no way for Idris to prove that the
correct logic was used to construct any given \texttt{RealizedFunction}.
\item \texttt{RealizedAttributes} represents only a set of example properties.
The syntax tree for the AI-DSL should be able to represent any
properties specified by the user, assuming the composition laws for
those properties are known.
\end{itemize}
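The compositional idea behind \texttt{RealizedAttributes} can be sketched in Python as follows. The attribute set and composition laws here are illustrative assumptions (costs and durations add, quality composes multiplicatively), not the prototype's actual definitions.

```python
from dataclasses import dataclass

# Illustrative sketch in the spirit of RealizedAttributes: plain data
# describing a service, plus a separate compose function. The attribute
# names and composition laws are assumptions for this example.

@dataclass(frozen=True)
class Attributes:
    cost: float      # financial cost of one invocation
    seconds: float   # expected running time
    quality: float   # expected result quality in [0, 1]

def compose(f: Attributes, g: Attributes) -> Attributes:
    """Attributes of running service f followed by service g."""
    return Attributes(
        cost=f.cost + g.cost,
        seconds=f.seconds + g.seconds,
        quality=f.quality * g.quality,
    )
```

Note how this sketch exhibits the first issue listed above: nothing ties \texttt{compose} to the data type itself, so nothing prevents constructing an \texttt{Attributes} value with the wrong logic.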
\subsection{\texttt{Service}}
To address these problems, we implemented the \texttt{Service} type,
which can be found in
\href{https://github.com/singnet/ai-dsl/blob/master/experimental/realized-function/ServiceAttributes.idr}{\texttt{experimental/realized-function/ServiceAttributes.idr}}.
It differs from \texttt{RealizedFunction} in two important ways:
\begin{itemize}
\item Composition logic is represented entirely at the type level as a second
constructor for the \texttt{Service} type.
\item Idris' \texttt{Num} interface is used as a generic representation of any
attribute that can be added when two \texttt{Service}s are sequenced.
\end{itemize}
These changes were sufficient to solve the problems with our earlier approach,
but we still needed to improve the expressiveness of our representation. Many
important properties are too complex to be described using only the \texttt{Num}
interface.
\subsection{A Look into Dependent Pairs}
\label{sec:dependent_pairs}
Idris sits at the intersection of a theorem prover and a
programming language. As such, it is often useful to think of types as
logical propositions, and values as proofs of those propositions. Since our
goal is to verify that a desired property is true of some value, we can use
dependent types to describe a proposition parameterized by a specific value.
Idris provides a special syntax for this.
\texttt{(x : a ** p)} can be read as ``\texttt{x} is a value of type \texttt{a} such that
proposition \texttt{p} holds true of \texttt{x}''. This is called a dependent pair, and it can
only be constructed by providing both a value and a proof that a desired
property holds true for that specific value. In the context of service
composition, we can use dependent pairs as a direct representation of input
values that satisfy some condition.
To demonstrate the practicality of this pairing, consider the following types:
\begin{minted}[mathescape]{idris}
public export
data WFInt : Type where
Nat : (n : Nat) -> WFInt
Neg : (n : Nat) -> WFInt --Note: In the negative case, n=Z represents -1.
-- n-parity, i.e. proof that an integer a is evenly divisible by n (or not).
public export
data Parity : (a : WFInt) -> (n : WFInt) -> Type where
-- a has even n-parity if there exists an integer multiple x s.t. x*n = a.
Even : (x : WFInt ** (x * n) = a) -> Parity a n
public export
data OddParity : (a : WFInt) -> (n : WFInt) -> Type where
  -- a has odd n-parity if there exists a b with |b| < |n| such that a+b is divisible by n.
Odd : (b : WFInt ** LT = compare (mag b) (mag n))
-> (Parity (a + b) n) -> OddParity a n
\end{minted}
\texttt{WFInt} is a type describing a well-founded view of an integer. This
alternate view is necessary in order to write more flexible inductive proofs for
integer inputs.
\texttt{Parity} demonstrates the proof obligation necessary to
show that one integer is evenly divisible by another. In plain English, it can
be read as ``If there exists some integer \texttt{x} such that
\texttt{x * n = a}, then \texttt{a} can be said to have \texttt{n-parity}.''
\texttt{OddParity} is a type representing the opposite proposition, i.e. that
dividing two integers will produce a remainder.
For services such as our Halfer example, this allows us to clearly express that
inputs should be only even numbers, as shown in this function type signature:
\begin{minted}[mathescape]{idris}
halfer : (a : WFInt ** Parity a 2) -> WFInt
\end{minted}
Similarly, the type signatures of the Twicer and Incrementer example services
can express their properties with regard to the 2-parity of the integers they
operate on:
\begin{minted}[mathescape]{idris}
-- Guaranteed to produce a value divisible by 2
twicer : (b : WFInt) -> (a : WFInt ** Parity a 2)
incrementer : (a : WFInt ** Parity a n) -> (b : WFInt ** OddParity b n)
\end{minted}
Now that the relevant properties for verification are expressed entirely at the
type level, the Idris2 typechecker can statically check the validity of service
compositions.
\begin{minted}[mathescape]{idris}
-- A valid sequence of services that successfully typechecks.
compo1 : WFInt -> WFInt
compo1 = fst (incrementer . halfer . twicer)
-- An invalid sequence of services that will always fail typechecking
compo2 : WFInt -> WFInt
compo2 = fst (halfer . incrementer . twicer)
\end{minted}
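A rough runtime analogue of this pattern can be given in Python (the names here are hypothetical): a dependent pair is a value bundled with evidence, constructible only by checking the property. The crucial difference is that Idris discharges the check at compile time, whereas this analogue defers it to run time.

```python
# Hypothetical runtime analogue of the dependent pair (a : WFInt ** Parity a 2):
# a value bundled with a witness, constructible only by exhibiting the witness.
# Unlike the Idris version, the check happens at run time, not compile time.

class EvenInt:
    """An integer n paired with a witness x such that x * 2 == n."""
    def __init__(self, n: int):
        if n % 2 != 0:
            raise ValueError(f"{n} is not even: no x with x * 2 == {n}")
        self.n = n
        self.witness = n // 2  # the 'x' in the Parity proof

def halfer(a: EvenInt) -> int:
    # Safe: the EvenInt constructor guarantees a.n is even.
    return a.witness
```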
With dependent pairs, arbitrary properties of values can be encoded and formally
verified. For an AI-DSL that may need to describe AI services in many different
contexts, this ability to use custom types instead of a limited set of
primitives is crucial. However, this method is not a complete solution, as it
has major practical flaws.
An AI service developer making use of the AI-DSL should be able to adequately
describe the necessary properties of data their service will take as input, but
there should be no need for them to also encode the exact properties of their
service's output data. A service developer is not likely to have any knowledge
of how their service's outputs will be used by other services in the future, so
the AI-DSL should not force them to describe their output data in any more
detail than is possible\footnote{Of course given a full specification
of the service, which is admittedly hard but possible to provide, any
decidable correct property about its output data can be inferred,
possibly at a prohibitively high cost.}. In the examples above,
the \texttt{incrementer} service was forced to describe its inputs and
outputs in terms of properties that are only relevant to other
services.
\subsection{A Monadic DSL}
\label{sec:monadic_dsl}
At this stage, there are two key problems which must be solved:
\begin{enumerate}
\item At the point of service creation, developers should not be expected to
have knowledge of the properties that are only relevant to other
services. They should be able to encode only the properties relevant to
their own service.
\item Due to the limits of computability, some relevant properties of data
will not be formally provable. However, some of these properties might
still be safely assumed to hold in certain contexts, even if a formal
proof is impossible. The AI-DSL should be able to represent such cases
and provide the strongest possible guarantees.
\end{enumerate}
For the first issue, we borrowed a well-established design pattern from
strongly-typed functional programming and defined a new \texttt{Service} type
around the \texttt{Monad} interface~\cite{Monads1993}.
Monads are a class of types used to
describe a context for operations, along with any custom logic necessary to
combine those operations, without imposing any requirement for tight coupling.
This makes them a natural fit for the AI-DSL.
To address the issue of unprovable properties, we experimented with a
conceptual model of smart contracts as a core language feature within the DSL.
Because the actual implementation of logic to represent external smart contracts
was outside the scope of this work, we made the assumption that such contracts
could be used to represent a financially-backed assurance that some unprovable
property holds. In theory, this could allow compositions of AI services to be
analyzed for their overall financial risk.
The following type describes the Abstract Syntax Tree for a deeply-embedded DSL:
\begin{minted}[mathescape]{idris}
public export
data Service : Type -> Type where
||| A Service that is definitely of type `a`
Val : a -> Service a
||| A contract has promised a reward if `a` is not a Service b
Promise : Contract a b -> Service b
||| Application of a Service to another Service
App : Service (a -> b) -> Service a -> Service b
||| Explicitly construct a Service using monadic binding
Bind : Service a -> (a -> Service b) -> Service b
\end{minted}
This \texttt{Service} type describes a context wherein values may be either
native Idris2 values or a reference to an external smart contract.
Below are some simple definitions for the functions necessary for
\texttt{Service} to be a member of the \texttt{Monad} typeclass, as well as
superclasses \texttt{Functor} and \texttt{Applicative}.
\begin{minted}[mathescape]{idris}
public export
Functor Service where
map f (Val a) = Val $ f a
map f (Promise c) = Val $ f $ trustContract c
map f s = App (Val f) s
public export
Applicative Service where
pure = Val
(<*>) = App
public export
Monad Service where
(>>=) = Bind
join m = !m
\end{minted}
With these operations defined, sequencing services becomes much simpler. Idris2
provides a special \texttt{do}-notation for monads, as well as convenient syntax
for pattern-matching on intermediate values.
The following are several example composition scenarios, taken from
\texttt{experimental/typed-dsl/Compo.idr}:
\begin{minted}[mathescape]{idris}
-- composition of Twicer and Incrementer
compo1 : Integer -> Service (Integer)
compo1 a = do
n <- twicerService a
incrementerService n
-- composition of Twicer, Halfer, and Incrementer
compo2 : Integer -> Service (Integer)
compo2 a = do
i <- twicerService a
-- Because twicerService does not provide its own proof that its
-- output is always even, we use a Promise to provide a soft proof
-- of this property.
p <- Promise ?con
j <- halferService (cast i ** p)
incrementerService j
-- invalid composition of Twicer, Incrementer, and Halfer
compo3 : Integer -> Service (Integer)
compo3 a = do
i <- twicerService a
j <- incrementerService i
  -- The `resolve` hole shows that the programmer must apply some logic
  -- of type Integer -> EvenNumber to resolve the mismatch here.
halferService $ ?resolve j
-- A potential method to resolve the above mismatch
compo3sol : Integer -> Service (Integer)
compo3sol a = do
i <- twicerService a
j <- incrementerService i
-- Because this function contains the actual composition of the various
-- Services, this is the point where the programmer is best able to decide
-- which measures are acceptable to resolve type mismatches.
-- In this case, forceEven is used.
halferService $ forceEven j
-- In this composition, we have no way to statically prove that
-- halferService is being passed an even number.
compo4 : Integer -> Service (Integer)
compo4 a = do
-- i could be even or odd, depending on the value of a
i <- incrementerService a
-- We can pattern match on the result of a runtime test
-- to create a branch in the logic of this Service.
Just j <- pure $ maybeEven i
| Nothing => twicerService i
-- If there is a Just EvenNumber, run the halferService on it.
-- If there is no EvenNumber value to be found, run the twicerService on i.
halferService j
-- Because all of the above examples are Integer -> Service (Integer),
-- it is relatively trivial to compose them.
compo5 : Integer -> Service (Integer)
compo5 a = (compo1 a) >>= compo2 >>= compo3 >>= compo4
\end{minted}
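For readers more comfortable with the registry's Python side, the same deep-embedding idea, an AST of service operations plus a separate interpreter, can be sketched as follows. This is a hypothetical analogue of the Idris definitions above, not part of the AI-DSL codebase, and models only the \texttt{Val}, \texttt{Promise} and \texttt{Bind} constructors.

```python
# Hypothetical Python analogue of the deeply-embedded Service type:
# constructors build an AST, and a separate interpreter runs it.
# `Promise` merely records that a property is contractually assumed.

class Val:
    def __init__(self, value): self.value = value

class Promise:
    def __init__(self, label, value): self.label, self.value = label, value

class Bind:
    def __init__(self, service, cont): self.service, self.cont = service, cont

def run(service):
    """One possible interpretation: execute the AST, trusting promises."""
    if isinstance(service, Val):
        return service.value
    if isinstance(service, Promise):
        return service.value          # analogue of trustContract
    if isinstance(service, Bind):
        return run(service.cont(run(service.service)))
    raise TypeError(f"not a Service: {service!r}")

def twicer_service(a):      return Val(a * 2)
def incrementer_service(a): return Val(a + 1)

# Analogue of compo1: twicer followed by incrementer.
def compo1(a):
    return Bind(twicer_service(a), incrementer_service)

# run(compo1(3)) evaluates to 7
```

Because the program is an ordinary data structure, other interpreters (pretty-printers, cost analyzers, financial-risk estimators over the recorded promises) can traverse the same AST.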
\section{Depth of Embedding}
\label{sec:depth_embedding}
A domain-specific language requires not only a formal specification for its
semantics, but a software implementation as well. The relationship between a
DSL and its implementation can vary, but most strategies fall into one or more
of three categories:
\begin{enumerate}
\item \texttt{Independent Syntax:} A language may be designed completely
separately from its implementation. Such languages typically require
dedicated compilers or interpreters, as they are unable to borrow any
functionality due to their lack of a host language.
\item \texttt{Shallow Embedding:} An embedded domain-specific language (eDSL)
is written as a module or library for some host language. Data in the
DSL's domain is represented directly as values in the host language.
\cite{EmbedDepth2014}
Shallow embeddings tend to be easy to use and extend, but often suffer
issues with performance and expressiveness. Programs written in a
shallowly-embedded DSL can only describe operations in the domain of
their host language, and thus are limited to a single interpretation.
\item \texttt{Deep Embedding:} Similarly to a shallowly-embedded DSL,
an eDSL with a deep embedding is defined in some host language.
Deep embeddings define a custom Generalized Algebraic Data Type
(GADT) in the host language and represent
all data as values of this type. \cite{EmbedDepth2014}
Because the entire Abstract Syntax
Tree (AST) of a deeply-embedded program is a single type, it is simple to
write functions in the host language that operate directly on the
embedded program. This allows for automatic optimization of embedded
programs, as well as multiple possible interpretations. However, any
extensions to a deep eDSL require significant effort, as changes to the
language's AST type incur a requirement to update every function that
operates on that type.
\end{enumerate}
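The contrast between the two embedding styles can be made concrete with a toy arithmetic DSL in Python (an illustrative sketch, unrelated to the AI-DSL codebase): the shallow embedding fixes a single interpretation at construction time, while the deep embedding's AST admits several.

```python
# Shallow embedding: DSL terms ARE host-language values, so their
# meaning is fixed when they are built -- here, terms are just numbers
# and the only interpretation is evaluation.
def s_lit(n):    return n
def s_add(a, b): return a + b

# Deep embedding: DSL terms are nodes of an AST, and interpreters are
# ordinary host-language functions over that AST.
class Lit:
    def __init__(self, n): self.n = n

class Add:
    def __init__(self, a, b): self.a, self.b = a, b

def evaluate(t):
    """Interpretation 1: compute the value of a term."""
    return t.n if isinstance(t, Lit) else evaluate(t.a) + evaluate(t.b)

def pretty(t):
    """Interpretation 2: render the term as a string."""
    return str(t.n) if isinstance(t, Lit) else f"({pretty(t.a)} + {pretty(t.b)})"
```

Here \texttt{evaluate} and \texttt{pretty} are two interpretations of the same deeply-embedded term, something the shallow embedding cannot offer; conversely, adding a new constructor to the AST would force updating every interpreter.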
For the AI-DSL, the most promising approach appears to be a hybrid method.
The basic domain of the DSL can be defined as a deep embedding, while more
specialized features can be shallowly embedded as smaller DSLs within the main
AI-DSL instead of directly in Idris2.
\chapter{AI-DSL Ontology}
\label{chap:aidsl_ontology}
\section{Description}
\subsection{Design requirements}
\label{sec:design-requirements}
At the beginning of the current iteration of the AI-DSL project, we had a round
of discussions about the high-level functional and design requirements for the
AI-DSL and its role in the SingularityNET platform and ecosystem. The discussions
were based on
\cite{GoertzelGeisweillerBlog,singularitynet_foundation_phasetwo_2021} and are
\href{https://github.com/nunet-io/ai-dsl-ontology/wiki/AI-DSL\%20requirements}{available
online} in their original form. Here is the summary of the preliminary design
requirements informed by those discussions:
\begin{itemize}
\item AI-DSL is a language that allows AI agents/services running on the
  SingularityNET platform to declare their capabilities and needs for
  data to other AI agents in a rich and versatile machine-readable form.
  This will enable different AI agents to search for and find data sources
  and other AI services without human interaction.
\item The AI-DSL ontology defines data and service (task) types to be used
  by the AI-DSL. Requirements for the ontology are shaped by the scope and
  specification of the AI-DSL itself.
\end{itemize}
High-level requirements for the AI-DSL are:
\begin{description}
\item[Extendability] The ontology of data types and AI task
types should be extendable, in the sense that individual service providers /
users should be able to create new types and tasks and make them available to
the network. The AI-DSL should be able to ingest these new types / tasks and
immediately be able to do the type-checking job. In other words, the AI-DSL ontology
of types / tasks should be able to evolve. At the same time, extended ontologies
should relate to the existing basic AI-DSL ontology in a clear way, allowing AI
agents to perform reasoning across the whole space of available ontologies
(which, at lower levels, may be globally inconsistent). In order to ensure
interoperability of lower-level ontologies, the AI-DSL ontology will define a small
kernel / vocabulary of globally accessible grounded types, which will be
built into the platform at a deep level. Changing this kernel will most
probably require some form of voting / global consensus on a platform level.
Therefore, it seems best to define the AI-DSL Ontology and the mechanism of using
it on two levels:
\begin{itemize}
\item \textit{The globally accessible vocabulary/root ontology of grounded
  types}. This vocabulary can be seen as an immutable (in the short and
  medium term) kernel. It should be extendable in the long term, but the
  mechanisms of changing and extending it will be quite complex, most
  probably involving theoretical considerations and/or strict procedures
  for reaching global consensus within the whole platform (a sort of
  voting).
\item \textit{A decentralized ontology of types and tasks}, each of which
  is based (i.e. type-dependent) on the root ontology/vocabulary, but can be
  extended in a decentralized manner -- in the sense that each agent in the
  platform will be able to define, use and share derived types and task
  definitions at its own discretion, without the need for global consensus.
\end{itemize}
\item[Competing versions and consensus.] We want both consistency (for
enabling deterministic type checking -- as much as possible) and
flexibility (for enabling adaptation and support for innovation). This will be
achieved by enforcing different restrictions for competing versions and
consensus reaching on the two levels of ontology described above:
\begin{itemize}
\item The globally accessible vocabulary / root ontology of grounded types
  will not allow for competing versions. In a sense, this level will be the
  true ontology, representable by one unique root / upper-level ontology of
  the network, which users will not be able to modify directly.
\item All other types and task definitions within the platform will be
  required to be derived from the root ontology (if they are to be used for
  interaction with other agents). However, the platform would not restrict
  the number of competing versions or define a global consensus of types and
  task descriptions on this level.
\item Furthermore, the ontology and the AI-DSL logic should allow for some
  variant of `soft matching', which would allow finding a type / service
  that does not satisfy all requirements exactly, but comes as close as is
  available on the platform.
\item At the lowest level of describing each instance of an AI service or
  data source on the platform, the AI-DSL shall allow maximum extendability,
  so that AI service providers and data providers will be able to describe
  and declare their services in the most flexible and unconstrained manner,
  facilitating competition and cooperation between them.
\end{itemize}
\item[Code-level / service-level APIs.] It is important to ensure that the
ontology is readable / writable by different components of the SingularityNET
platform, at least between the AI-DSL engine / data structures and each AI
service separately. This is needed because some of the required descriptors of
AI services will have to be dynamically calculated at the time of calling a
service and will depend on the immediate context (e.g. price of the service, the
machine on which it is running, possibly reputation score, etc.). It is not
clear at this point how much of this functionality will be possible (and
practical) to implement in available dependently typed or ontology languages,
or even if it is possible to use a single language. Even if it is possible to
implement all of the AI-DSL purely in the current dependently typed language of
choice, Idris, it will have to interface with the world and deal with
non-deterministic input from the network and mutable state -- operations that
may fail at run time no matter how carefully type checking is done at compile
time \cite{brady_resource-dependent_2015}.
Defining and maintaining code-level and service-level APIs will first of all
enable interfacing SingularityNET agents with the AI-DSL, and therefore with
each other.
\item[Key AI Agent properties] We can distinguish two somewhat distinct (but
interacting) levels of the AI-DSL Ontology: the AI service description level
and the data description level. It seems that it may be best to start building
the ontology from the service level, because the data description language is
even more open-ended than the AI description language, which is already open
enough. Initially, we may want to include in the description of each AI
service at least these properties:
\begin{itemize}
\item Input and output data structures and types
\item Financial cost of service
\item Time of computation
\item Computational resource cost
\item Quality of results
\end{itemize}
As demonstrated in Chapter~\ref{chap:aidsl_registry}, it is
possible to express and reason about this data with Idris. It is
quite clear, however, that in order to enable interaction with and
between SingularityNET agents (and NuNet adapters), all of the above
properties have to be made accessible outside Idris, and therefore
supported by the code-level / service-level APIs and the
SingularityNET platform in general.
\end{description}
\subsection{Domain model considerations}
\label{sec:domain_model_considerations}
% TODO: perhaps move this elsewhere
In order to attend to all high-level design requirements, all levels
of the AI-DSL
Ontology should be developed simultaneously, so that we can make sure that the
work is aligned with the function and role of the AI-DSL within the SingularityNET
platform and ecosystem. We therefore take the ``AI/computer-scientific''
perspective on ontology and ontology building -- emphasizing \textit{what an
ontology is for} -- rather than the ``philosophical perspective'' dealing with
\textit{the study of what there is in terms of basic categories}
\cite{gruber_translation_1993,sep-logic-ontology}. Therefore, we first propose
the mechanism of how the different levels (upper, domain and the leaf (or
service) level) of the AI-DSL ontology will relate, in order to facilitate
interactions between AI services on the platform.
Note that the design principles of such a mechanism relate to the question of
how the abstract and consistent should relate to the concrete and possibly
inconsistent -- something that may need a deeper conceptual understanding than
is attempted during the project and presented here. We proceed in the most
practical manner in proposing the AI-DSL ontology prototype, being aware that
it may need to (and possibly should) be subjected to a more conceptual
treatment in the future.
For a concrete domain model of AI-DSL ontology prototype we use the
\texttt{Fake News
Warning}\footnote{\href{https://gitlab.com/nunet/fake-news-detection}{https://gitlab.com/nunet/fake-news-detection}}
application being developed by NuNet -- a currently incubated spinoff of
SingularityNET\footnote{\href{https://nunet.io}{https://nunet.io}}.
NuNet is a platform enabling dynamic deployment and up/down-scaling of
SingularityNET AI Services on decentralized hardware devices of potentially any
type. Importantly for the AI-DSL project, service discovery on NuNet is designed
in a way that enables dynamic construction of application-specific service
meshes from several SingularityNET AI services~\cite{nunet_nunet_2021}. In order
for a service mesh to be deployed, NuNet needs only a specification of the
program graph of the application. Note that, conceptually, the construction of
an application from several independent containers is almost equivalent to the
functionality explained in Section~\ref{sec:dependently_typed_registry} on the
AI-DSL Registry, namely the matching and composition of AI services. This is
the main reason why we chose the \texttt{Fake News Warning} application as a
domain model for early development efforts of the AI-DSL. However, we use this
domain model solely for the application-independent design of the AI-DSL, and
attend to its application-specific aspects only as much as it informs the
project.
The idea of dynamic service discovery is to enable application developers to
construct working applications (or at least their back-ends) by simply passing a
declarative definition of the program graph to a special platform component
(the ``network orchestrator'') -- which then searches for appropriate SingularityNET
AI containers and connects them into a single workflow (or workflows).
Suppose that the back-end of the \texttt{Fake News Warning} app consists of three
SingularityNET AI containers \texttt{news\_score}, \texttt{uclnlp} and
\texttt{binary-classification}:
\begin{table}[H]
\scriptsize
\centering
\begin{tabular}{p{0.15\linewidth}|p{0.2\linewidth}|p{0.2\linewidth}|p{0.2\linewidth}|p{0.1\linewidth}}
\textbf{Leaf item} & \textbf{Description} & \textbf{Input} &
\textbf{Output} &
\textbf{Source}\\
\hline
binary-classification & A pretrained binary classification model &
English text of any length & 1 -- the text is categorized
as fake; 0 -- the text is categorized as not fake & \textcopyright
\href{https://gitlab.com/nunet/fake-news-detection/binary-classification}{NuNet}
2021\\
\hline
uclnlp & Forked and adapted component of stance detection
algorithm (\href{http://www.fakenewschallenge.org/#fnc1results}{FNC} third place
winner) & Article title and text & Probabilities of the title \textit{agreeing},
\textit{disagreeing}, \textit{discussing} or being \textit{unrelated} to the
text & \textcopyright \href{https://github.com/uclnlp/fakenewschallenge}{UCL
Machine Reading} 2017; \textcopyright
\href{https://gitlab.com/nunet/fake-news-detection/uclnlp}{NuNet} 2021\\ \hline
news-score & Calls dependent services, calculates the overall result and sends it
to the front-end & URL of the content to be checked & Probability that the content
in the URL is fake & \textcopyright
\href{https://gitlab.com/nunet/fake-news-detection/fake_news_score}{NuNet} 2021 \\
\end{tabular}
\captionsetup{width=0.7\linewidth}
\caption{\label{tbl:fns_components}Description of each component of
\texttt{Fake News Warning} application.}
\end{table}
Each component of the application's back-end is a SingularityNET AI service
registered on the platform. Note that SingularityNET AI services are
defined through their specification and their
metadata\cite{SNETDocumentationServiceSetup}. The main purpose of the AI-DSL
Ontology is to describe SNet AI Services in a manner that allows
them to search for and match each other on the platform and compose into complex
workflows -- similarly to what is described in Section
\ref{sec:network_idris_ai_services}. Here is a simple representation of the program
graph of \texttt{Fake News Warning} app:
\begin{figure}[h]
\centering
\begin{minted}[linenos,tabsize=2,breaklines, fontsize=\small]{json}
"dag": {
"news-score" : ["uclnlp","binary-classification"]
}
\end{minted}
\vspace{-0.3cm}
\captionsetup{width=0.7\linewidth}
\caption{\label{lst:dag}A directed acyclic graph (DAG) of
\texttt{Fake News Warning} app
prototype\cite{NuNetFakeNewsWarningAppRepo}. It simply says that
\texttt{news-score} depends on \texttt{uclnlp} and
\texttt{binary-classification}.}
\end{figure}
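The orchestration step implied by such a DAG can be sketched as a dependency-first traversal. The following is an illustrative sketch only, not NuNet's actual orchestrator code:

```python
# Illustrative sketch (not NuNet's actual orchestrator): resolve a service
# DAG like the one above into a dependency-first deployment order, so that
# each service is deployed only after the services it depends on.
def deployment_order(dag):
    order, seen = [], set()

    def visit(service):
        if service in seen:
            return
        seen.add(service)
        for dep in dag.get(service, []):  # deploy dependencies first
            visit(dep)
        order.append(service)

    for service in dag:
        visit(service)
    return order

dag = {"news-score": ["uclnlp", "binary-classification"]}
print(deployment_order(dag))  # dependencies precede "news-score"
```

A real orchestrator would additionally handle cycles, failures and partial deployments; the sketch only captures the ordering constraint encoded by the DAG.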
The schematic representation of the \texttt{Fake News Warning} app deployed as a
result of processing the DAG is depicted below. What the NuNet platform adds
to SingularityNET service discovery is that each service may be deployed on a
different hardware environment, sourced by NuNet. When the application back-end
is deployed, it can be accessed from a GUI, which in the case of
\texttt{Fake News Warning} is a Brave browser extension.
\begin{figure}[H]
\begin{subfigure}[t]{0.50\textwidth}
\centering
\includegraphics[width=0.9\textwidth]{../../../ontology/images/fake_news_detector.png}
\captionsetup{width=0.8\linewidth}
\caption{Schema of dependencies between backend components of the application
(SingularityNET AI services potentially running on different machines).}
\label{fig:fake_news_detector_schema}
\end{subfigure}
\begin{subfigure}[t]{0.50\textwidth}
\centering
\includegraphics[width=0.5\textwidth]{../../../ontology/images/fake_news_detector_browser_extension.png}
\captionsetup{width=0.8\linewidth}
\caption{Brave browser extension which calls the back-end of the
\texttt{Fake News Warning} application whenever new content is
displayed in a browser tab.}
\end{subfigure}
\end{figure}
We will use these application design principles as the domain model for the first
design of the AI-DSL Ontology and its prototype.
\subsection{Ontology language and upper level ontology}
After discussing several choices of ontology language and the reuse of existing
ontologies for designing the AI-DSL ontology\footnote{See
\href{https://github.com/singnet/ai-dsl/discussions/18}{Reusing Existing
Ontologies} discussion on AI-DSL Github repository\cite{AIDSLRepo}}, we have opted to
use SUO-KIF as an ontology language~\cite{pease_standard_2009} and SUMO as an
upper-level ontology~\cite{NilesPease2001}. The main motivation for this choice
was the versatility of the KIF/SUO-KIF (Knowledge Interchange Format) language,
which essentially makes it possible to express First Order Logic (FOL) statements in a simple
text format with a lisp-like syntax. Owing to that, KIF can be easily
converted to other formats\cite{kalibatiene_survey_2011}. Also, a conversion to
Atomese -- the OpenCog's language also employing a lisp-like syntax -- has been
successfully attempted in the past\footnote{See
the \href{https://github.com/opencog/external-tools/tree/master/SUMO_importer}{SUMO
Importer} in the OpenCog External Tools
repository\cite{ExternalToolsRepo}}. SUMO and the related ontology design tools
\cite{pease_sigma_2001} provide a convenient way for starting to design AI-DSL
Ontology levels and their relations.
\subsection{Tools}
For the purposes of design, initial validation and displaying relations between
classes, subclasses and instances of the ontology, we have used software tools
which come together with SUMO ontology
\footnote{\href{https://ontologyportal.org}{https://ontologyportal.org}}:
\begin{itemize}
\item Sigma IDE for SUMO\footnote{\href{https://github.com/ontologyportal/sigmakee}
{https://github.com/ontologyportal/sigmakee}} and
\item jEdit plugin for SUMO
\footnote{\href{https://github.com/ontologyportal/SUMOjEdit}
{https://github.com/ontologyportal/SUMOjEdit}}
\end{itemize}
The ontology prototype presented here is fully accessible for browsing and
partial validation via a local Sigma installation\footnote{Can be temporarily
accessed at \href{http://nunetio.ddns.net:8080/sigma/KBs.jsp}
{http://nunetio.ddns.net:8080/sigma/KBs.jsp} or installed and accessed locally
by following \href{https://github.com/nunet-io/ai-dsl/blob/master/ontology/tools/README.md}{these} instructions}.
\section{Objectives and achievements}
\subsection{Decentralized ontology}
In order to satisfy the \textit{extendibility} requirement of ontology design,
we propose the notion and design of a \textit{decentralized ontology}, which
enables us to work with globally consistent and locally inconsistent components
within the same mechanism of AI-DSL. Based on our design, the full ontology of
the \texttt{Fake News Warning} application is constructed from a number of separate
components, which operate at different levels of decentralization. The table below
describes each of these components.
\begin{table}[H]
%describe each kif file / level of ontology / consistent / inconsistent;
\scriptsize
\centering
\begin{tabular}{p{0.24\linewidth}|p{0.24\linewidth}|p{0.24\linewidth}|p{0.24\linewidth}|}
\textbf{Component} &
\textbf{Description} &
\textbf{Dependencies} &
\textbf{Extendability}\\
\hline
\href{https://github.com/ontologyportal/sumo/blob/master/Merge.kif}{Merge.kif} &
SUMO structural ontology, base ontology, numerical functions, set/class theory, temporal concepts and mereotopology &
None - root ontology &
Centralized and globally enforced -- defined by \href{http://www.ontologyportal.org/}{ontologyportal.org} \\
\hline
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/SingularityNET.kif}{SingularityNET.kif} &
Defines global classes and types to be used for describing each
SingularityNET AI Service &
ComputerInput.kif, Merge.kif [,..]&
Limited: versioning mechanism controlled by SingularityNET (to be defined)\\
\hline
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/FakeNewsScore.kif}{FakeNewsScore.kif} &
SingularityNET service responsible for constructing the whole back-end of
each \texttt{Fake News Warning} application instance, i.e. the program graph
(DAG) of the application. &
SingularityNET.kif [,..] &
Fully decentralized: defined by application developers; Since \texttt{Fake News
Warning} application is open source, any developer can fork it and define it
otherwise; Technically, this would be a different application.\\
\hline
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/fnsBinaryClassifier.kif}{fnsBinaryClassifier.kif} &
A pre-trained binary classification model for fake news detection &
SingularityNET.kif [,..] &
Fully decentralized: defined by each algorithm developer independently.
Technically, from the platform perspective, these will be different
algorithms. \\
\hline
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/uclnlp.kif}{uclnlp.kif} &
Forked and adapted component of stance detection algorithm by UCL Machine Reading group &
SingularityNET.kif [,..] &
Fully decentralized: defined by each algorithm developer independently.
Technically, from the platform perspective, these will be different
algorithms \\
\hline
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/NuNetEnabledComputer.kif}{NuNetEnabledComputer.kif} &
Each NuNet-enabled hardware resource will have to be described accordingly
when on-boarded to the NuNet platform &
NuNet.kif &
Fully decentralized: independently defined by the owner of a hardware
resource \\
\hline
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/NuNet.kif}{NuNet.kif} &
Defines classes to be used for describing each hardware resource eligible
for running SingularityNET AI Services via NuNet platform; &
Merge.kif, SingularityNET.kif [,..]&
Limited: versioning mechanism controlled by NuNet (to be defined) \\
\end{tabular}
\captionsetup{width=0.7\linewidth}
\caption{\label{tbl:all_kif_files}Description of each component of the AI-DSL Ontology prototype and links to related KIF files.}
\end{table}
\subsection{Ontology prototype}
\label{sec:ontology_prototype}
Using the ontology levels described in Table \ref{tbl:all_kif_files} and the
referenced files, we prototyped the ontology of the \texttt{Fake News Warning}
application\footnote{Can be temporarily
accessed at \href{http://nunetio.ddns.net:8080/sigma/KBs.jsp}
{http://nunetio.ddns.net:8080/sigma/KBs.jsp} or installed and accessed locally
by following \href{https://github.com/nunet-io/ai-dsl/blob/master/ontology/tools/README.md}{these} instructions}.
\begin{table}[H]
\scriptsize
\centering
\begin{tabular}{p{0.5\linewidth}|p{0.4\linewidth}|}
\textbf{Architectural level} & \textbf{Class} \\
\hline
SingularityNET platform & SNetAIService, SNetAIServiceIO,
SNetAIServiceMetadata\\
\hline
NuNet platform & NuNetEnabledSNetAIService, NuNetEnabledComputer\\
\end{tabular}
\captionsetup{width=0.9\linewidth}
\caption{\label{tbl:custom_classes_prototype}Main classes defined in AI-DSL
ontology prototype per level of the \texttt{Fake News Warning} application's
stack. Classes defined in SUMO are not included.}
\end{table}
AI algorithms onboarded on the SNet platform are instances of the
\texttt{SNetAIService} class or of its subclasses.
Services of \texttt{Fake News Warning} application are defined as follows:
\begin{figure}[H]
\begin{subfigure}[b]{1\textwidth}
\centering
\inputminted[firstline=1, lastline=2, linenos,tabsize=2,breaklines, fontsize=\small]{scm}{../../../ontology/uclnlp.kif}
\vspace{-0.3cm}
\captionsetup{width=0.8\linewidth}
\caption{Service description}
\vspace{0.3cm}
\end{subfigure}
\begin{subfigure}[b]{1\textwidth}
\centering
\inputminted[firstline=4, lastline=8, linenos,tabsize=2,breaklines, fontsize=\small]{scm}{../../../ontology/uclnlp.kif}
\vspace{-0.3cm}
\captionsetup{width=0.8\linewidth}
\caption{Descriptions of service input and output types.}
\vspace{0.3cm}
\end{subfigure}
\begin{subfigure}[b]{1\textwidth}
\centering
\inputminted[firstline=10, lastline=26, linenos,tabsize=2,breaklines, fontsize=\small]{scm}{../../../ontology/uclnlp.kif}
\vspace{-0.3cm}
\captionsetup{width=0.8\linewidth}
\caption{Definition of types and their dependencies.}
\end{subfigure}
\caption{\label{fig:serviceDefinitionKif}SNet AI Service definition in KIF
(uclnlp and binary-classification services are described in this way).}
\end{figure}
Type definitions and their dependency definitions are actually the domain of the
formal type-checking part of AI-DSL and the Idris-related research. However,
irrespective of which language is eventually chosen for AI-DSL, Figure
\ref{fig:serviceDefinitionKif} expresses that we can:
\begin{enumerate}
\item define correct serviceInput and serviceOutput types (unique for each
service);
\item potentially provide proofs that if data of the correct type is
provided on a service's input, then it will output correctly typed data;
\item if the above is not possible (which may be the default option when the
actual AI services are not written in Idris):
\begin{enumerate}
\item check whether the input data is of the correct type at run-time and refuse to
start the service if it is not;
\item check whether the output data is of the correct type before sending it to the
caller and raise an error if it is not;
\end{enumerate}
\end{enumerate}
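The run-time fallback of item 3 above can be sketched as a guard around a service call. This is a hedged illustration only: the type names and the validator registry are assumptions made for this sketch, not part of the actual AI-DSL specification.

```python
# Hedged sketch of the run-time fallback for services not written in Idris:
# validate input before starting the service and validate output before
# returning it to the caller. Type names and validators are illustrative.
VALIDATORS = {
    "Text": lambda v: isinstance(v, str),
    "Probability": lambda v: isinstance(v, float) and 0.0 <= v <= 1.0,
}

def guarded_call(service, in_type, out_type, payload):
    if not VALIDATORS[in_type](payload):
        # refuse to start the service on badly typed input
        raise TypeError("refusing to start service: input is not " + in_type)
    result = service(payload)
    if not VALIDATORS[out_type](result):
        # refuse to send badly typed output to the caller
        raise TypeError("not sending to caller: output is not " + out_type)
    return result

# A stand-in for a classifier service that returns a fake-news probability:
score = guarded_call(lambda text: 0.87, "Text", "Probability", "some article")
```

In a deployed system the validators would be derived from the service's ontology description rather than hand-written.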
The \texttt{FakeNewsScore} AI service is special in that it calls
other dependent services (as described by program graph in Figure
\ref{fig:fake_news_detector_schema}) and combines their results.
We can define the program graph in
terms of dependencies between services in KIF as follows:
\begin{figure}[H]
\captionsetup{width=0.8\linewidth}
\inputminted[firstline=1, lastline=9, linenos,tabsize=2,breaklines, fontsize=\small]{scm}{../../../ontology/FakeNewsScore.kif}
\vspace{-0.3cm}
\caption{\label{fig:serviceDependencies}Defining program graph as a formal ontology. This is similar to DAG of Figure \ref{lst:dag}.}
\end{figure}
Figure \ref{fig:serviceDependencies} demonstrates how a workflow of connected
SingularityNET AI services can be statically defined and proven to work at
compile time. However, we could go further and define dependencies as
\textit{subclasses} of services with the same input/output data types. In that case,
any instantiation of the subclass would be able to dynamically compile into the
workflow. We would therefore not need to describe concrete dependencies -- they
would be dynamically resolved at run-time by matching input and output types.
\begin{figure}[H]
\captionsetup{width=0.8\linewidth}
\inputminted[firstline=1, lastline=22, linenos,tabsize=2,breaklines, fontsize=\small]{scm}{../../../ontology/FakeNewsScoreDynamic.kif}
\vspace{-0.3cm}
\caption{\label{fig:fakeNewsScoreDynamic}Defining generic input types instead
of concrete dependencies in a \texttt{FakeNewsScoreDynamic} service.}
\end{figure}
Any AI service with an output type matching the input type of
\texttt{FakeNewsScoreDynamic} could be compiled into the workflow:
\begin{figure}[H]
\captionsetup{width=0.8\linewidth}
\inputminted[firstline=1, lastline=4, linenos,tabsize=2,breaklines, fontsize=\small]{scm}{../../../ontology/uclnlpDynamic.kif}
\vspace{-0.3cm}
\caption{\label{fig:uclnlpDynamicOne}Using static, globally defined types of
input and output data structures to match services eligible for
compilation into a workflow.}
\end{figure}
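The dynamic matching described above can be illustrated with a toy registry lookup. The registry contents and type names below are assumptions made for illustration, not actual platform metadata:

```python
# Toy illustration of dynamic dependency resolution by type matching: any
# registered service whose output type equals the required input type is an
# eligible dependency of the consuming service.
registry = {
    "uclnlp": {"in": "ArticleTitleAndText", "out": "StanceProbabilities"},
    "binary-classification": {"in": "Text", "out": "FakeProbability"},
}

def candidates(required_input_type):
    # every service whose output type matches is eligible for the workflow
    return [name for name, signature in registry.items()
            if signature["out"] == required_input_type]

matches = candidates("StanceProbabilities")  # a service producing this type
```

On the platform this lookup would run against decentralized service metadata rather than an in-memory dictionary, but the matching criterion is the same.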
However, systems with dependent typing, like Idris, may make it possible to go even further
and find out whether composite types are composed of the same components and
primitive types -- and thus match them.
\begin{figure}[H]
\captionsetup{width=0.8\linewidth}
\inputminted[firstline=7, lastline=26, linenos,tabsize=2,breaklines, fontsize=\small]{scm}{../../../ontology/uclnlpDynamic.kif}
\vspace{-0.3cm}
\caption{\label{fig:uclnlpDynamicTwo}Hypothetical usage of dynamic typing
(most probably could be achieved in Idris, but not in KIF).}
\end{figure}
Primitive (or grounded) types (like \texttt{RealNumber} and \texttt{Text} in
Figure \ref{fig:uclnlpDynamicTwo}), however, should be globally accessible and
unambiguously defined for this scheme to work.
All services of the \texttt{Fake News Warning} application are instances of the
\texttt{NuNetEnabledSNetAIService} subclass, which, in turn, is a subclass of the
\texttt{SNetAIService} class:
\begin{figure}[H]
\begin{subfigure}[t]{1\textwidth}
\centering
\begin{minted}[linenos,tabsize=2,breaklines,fontsize=\small]{scm}
(instance fakeNewsScore NuNetEnabledSNetAIService)
(instance uclnlp NuNetEnabledSNetAIService)
\end{minted}
\vspace{-0.3cm}
\captionsetup{width=0.8\linewidth}
\caption{Declaration of \texttt{FakeNewsScore} service in
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/FakeNewsScore.kif}{FakeNewsScore.kif} and of the \texttt{uclnlp} service
in \href{https://github.com/singnet/ai-dsl/blob/master/ontology/uclnlp.kif}{uclnlp.kif}.}
\vspace{0.3cm}
\end{subfigure}
\begin{subfigure}[t]{1\textwidth}
\centering
\inputminted[firstline=1, lastline=2, linenos,tabsize=2,breaklines, fontsize=\small]{scm}{../../../ontology/NuNet.kif}
\vspace{-0.3cm}
\captionsetup{width=0.8\linewidth}
\caption{Definition of \texttt{NuNetEnabledSNetAIService} in \href{https://github.com/singnet/ai-dsl/blob/master/ontology/NuNet.kif}{NuNet.kif}.}
\end{subfigure}
\captionsetup{width=0.9\linewidth}
\caption{Relation between SingularityNet and NuNet domain ontologies.}
\label{fig:SNET_and_NuNet}
\end{figure}
Figure \ref{fig:SNET_and_NuNet} describes the relation between the
SingularityNET and NuNet platforms. The \texttt{SNetAIService} class, defined in
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/SingularityNET.kif}{SingularityNET.kif},
contains all requirements for the
metadata of a service to be published on the SingularityNET platform.
\texttt{NuNetEnabledSNetAIService} extends \texttt{SNetAIService} by adding
metadata that is needed for this service to be deployed via NuNet APIs:
\begin{figure}[H]
\centering
\inputminted[firstline=4, lastline=11, linenos,tabsize=2,breaklines,
fontsize=\small]{scm}{../../../ontology/NuNet.kif}
\captionsetup{width=1\linewidth}
\vspace{-0.3cm}
\caption{The definition of \texttt{NuNetEnabledSNetAIService} in
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/NuNet.kif}{NuNet.kif}
requires a service to have compute resource (and possibly other) requirements
included in service metadata. The idea is that without required metadata
fields, a service would not pass validation allowing it to be deployed via
NuNet. An arbitrary amount of requirements could be defined here.}
\label{fig:NuNetEnabledAIService_metadata_requirements}
\end{figure}
\texttt{NuNetEnabledSNetAIService}s can be deployed only on
\texttt{NuNetEnabledComputer}s, which expose their available computing resources
in a manner such that the ability to run a service can be automatically checked
\textbf{before} the service is dynamically deployed on a computer and a service
call is actually issued to it (see Figure
\ref{fig:NuNetEnabledComputer_requirements}). This formally described relation
between the SingularityNET and NuNet ontologies makes it possible to prove at
``compile time'' that a service will have enough computational resources to be
executed. Recall that the SingularityNET ontology alone makes it possible to
prove that a service or a collection of services will return correct results
when called with correct inputs.
\begin{figure}[H]
\centering
\inputminted[firstline=13, lastline=31, linenos,tabsize=2,breaklines,
fontsize=\small]{scm}{../../../ontology/NuNet.kif}
\captionsetup{width=1\linewidth}
\vspace{-0.3cm}
\caption{The definition of \texttt{NuNetEnabledComputer} in
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/NuNet.kif}{NuNet.kif}
requires available computing resources, computer type and operating system to
be listed in the metadata.}
\label{fig:NuNetEnabledComputer_requirements}
\end{figure}
An \texttt{SNetAIService} can only be deployed on a \texttt{NuNetEnabledComputer}
if the available resources on the computer are not less than the compute requirements
of the service:
\begin{figure}[H]
\centering
\inputminted[firstline=38, lastline=47, linenos,tabsize=2,breaklines,
fontsize=\small]{scm}{../../../ontology/NuNet.kif}
\captionsetup{width=1\linewidth}
\vspace{-0.3cm}
\caption{Constraints on eligible match between \texttt{SNetAIService} and
\texttt{NuNetEnabledComputer} defined in
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/NuNet.kif}{NuNet.kif}
and required for deployment of a service.}
\label{fig:service_deployment_requirements}
\end{figure}
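The same eligibility constraint can be mirrored procedurally. In this sketch the resource field names are illustrative assumptions, not the actual NuNet metadata schema:

```python
# Procedural counterpart of the eligibility constraint above: a computer is
# an eligible match for a service only if every required resource is
# available in at least the requested amount.
def eligible(service_requirements, computer_resources):
    return all(
        computer_resources.get(resource, 0) >= amount
        for resource, amount in service_requirements.items()
    )

service = {"cpu_cores": 2, "ram_gb": 4}
computer = {"cpu_cores": 8, "ram_gb": 16, "gpu_count": 1}
deployable = eligible(service, computer)  # True for this pair
```

The KIF formulation expresses the same check declaratively, so a prover can establish it at ``compile time'' rather than at the moment of deployment.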
\texttt{SNetAIService} and \texttt{NuNetEnabledSNetAIService} classes are positioned within the SUMO ontology as follows:
\begin{table}[H]
\scriptsize
\centering
\begin{tabular}{p{0.2\linewidth}|p{0.7\linewidth}|p{0.1\linewidth}|}
\textbf{Class, subclass or instance} &
\textbf{Description} &
\textbf{Where defined} \\
\hline
Entity &
The universal class of individuals. This is the root node of the ontology. &
\href{https://github.com/ontologyportal/sumo/blob/master/Merge.kif}{Merge.kif}\\
\hline
Abstract &
Properties or qualities as distinguished from any particular embodiment of
the properties/ qualities in a physical medium. Instances of Abstract can be
said to exist in the same sense as mathematical objects such as sets and
relations, but they cannot exist at a particular place and time without some
physical encoding or embodiment. &
\href{https://github.com/ontologyportal/sumo/blob/master/Merge.kif}{Merge.kif}\\
\hline
Proposition &
Propositions are Abstract entities that express a complete thought or a set of
such thoughts. Note that propositions are not restricted to the content
expressed by individual sentences of a Language. They may encompass the content
expressed by theories, books, and even whole libraries. A Proposition is a piece
of information, e.g. that the cat is on the mat, but a ContentBearingObject is
an Object that represents this information. A Proposition is an abstraction that
may have multiple representations: strings, sounds, icons, etc. For example, the
Proposition that the cat is on the mat is represented here as a string of
graphical characters displayed on a monitor and/ or printed on paper, but it can
be represented by a sequence of sounds or by some non-latin alphabet or by some
cryptographic form. &
\href{https://github.com/ontologyportal/sumo/blob/master/Merge.kif}{Merge.kif}\\
\hline
Procedure &
A sequence-dependent specification. Some examples are ComputerPrograms,
finite-state machines, cooking recipes, musical scores, conference schedules,
driving directions, and the scripts of plays and movies. &
\href{https://github.com/ontologyportal/sumo/blob/master/Merge.kif}{Merge.kif}\\
\hline
ComputerProgram &
A set of instructions in a computer programming language that can be
executed by a computer. &
\href{https://github.com/ontologyportal/sumo/blob/master/Merge.kif}{Merge.kif}\\
\hline
SoftwareContainer &
&
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/SingularityNET.kif}{SingularityNet.kif}\\
\hline
SNetAIService &
Software package exposed via the SingularityNET platform and conforming to special
packaging rules &
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/SingularityNET.kif}{SingularityNet.kif}\\
\hline
NuNetEnabled-SNetAIService &
An SNetAIService which can be deployed on NuNetEnabledComputers and
orchestrated via the NuNet platform &
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/NuNet.kif}{NuNet.kif}\\
\hline
\quad \texttt{uclnlp} &
Forked and adapted component of stance detection algorithm by UCL Machine
Reading group. &
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/uclnlp.kif}{uclnlp.kif}
\end{tabular}
\captionsetup{width=0.9\linewidth}
\caption{\label{tbl:uclnlp_hierarchy}Full hierarchy of dependencies of
\texttt{uclnlp} SNet AI service instance within SUMO ontology. The same
hierarchy applies to the \texttt{binary-classification} and \texttt{fakeNewsScore} services used
in the \texttt{Fake News Warning} app.}
\end{table}
\subsection{The mechanism of dynamic workflow construction}
An important part of the \textit{decentralized ontology} design is the mechanism
that makes it work in actual scenarios. This mechanism was designed using the
same domain model of the \texttt{Fake News Warning} application. It also clarifies
why we propose this particular concept and design of
\textit{decentralized ontology}.
AI-DSL will make it possible to search for, match, compile and
execute independently developed AI components as a single verifiable
workflow running on the SingularityNET platform. The AI components of the workflow may
be developed in different programming languages by different people,
have different licenses and may, in fact, have been developed with different initial
goals. Furthermore, these workflows will be executed on machines owned by
different entities. In a decentralized system like this, each developer will be
able to freely choose the properties, capabilities and internal structure of their
algorithms. Through the mechanism of dynamic workflow construction, AI-DSL will
be able to pull together the information about each component of the desired workflow
when its execution is required.
A very high-level view of SingularityNET
AI service calls involving AI-DSL looks as follows:
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{../../../ontology/images/high-level-workflow-construction.png}
\captionsetup{width=0.9\linewidth}
\caption{\label{fig:high-level-workflow-construction}Bird's eye view of
application-independent SingularityNET calls involving AI-DSL.}
\end{figure}
Within the domain model of \texttt{Fake News Warning} application (Figure
\ref{fig:fake_news_detector_schema}) this scheme works approximately in the following
way:
\begin{enumerate}
\item The \textit{User/Business} accesses the platform
via the browser extension by sending (a) the definition of the workflow to the
platform in the form of a DAG (Figure \ref{lst:dag}) and
(b) the web content to be checked for the probability of containing fake news.
\item The AI-DSL engine reads the DAG and identifies the dependent SNet AI Services that
need to be called.
\item If dependent services are indicated statically, as in Figure \ref{lst:dag},
then the platform immediately knows the names of the services to be called. If, however,
dependent services are described in terms of their input/output types (as in
Figure \ref{fig:serviceDependencies}), the AI-DSL engine searches for and matches services
available on the platform that satisfy the constraints defined there\footnote{In the
future, the AI-DSL engine
will aim to accommodate fuzzy service definitions and complex decision functions to
search and match them, including the ability for an AI service to choose its
dependent services.}.
\item When matching services are found, the AI-DSL engine pulls their individual type
signatures and other metadata from each service (note that a decentralized system
cannot be built on the assumption that a global registry is available; such a registry
can, however, be built as a secondary index of otherwise decentralized information
sources) and compiles them into a workflow. This operation may be done in a few stages:
\begin{itemize}
\item When the AI-DSL engine requests metadata for a dependent service and receives it,
it checks the received metadata for conformance to the AI-DSL Ontology requirements
(e.g. a well-formed description in SUO-KIF and correct type dependencies as defined
by the hierarchy in Table \ref{tbl:uclnlp_hierarchy} and displayed graphically in
Figure \ref{fig:uclnlp_hierarchy_graph}):
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{../../../ontology/images/uclnp_hierarchy_graph.png}
\captionsetup{width=0.9\linewidth}
\caption{\label{fig:uclnlp_hierarchy_graph}Graphical form of the hierarchy of
dependencies of \texttt{uclnlp} SNet AI service instance of \texttt{Fake News Warning}
application within SUMO ontology.}
\end{figure}
\pgfkeys{/csteps/inner color=white}
\pgfkeys{/csteps/outer color=black}
\pgfkeys{/csteps/fill color=black}
\item Note that the correctness of type dependencies of decentralized components of
the AI-DSL Ontology (\texttt{uclnlp}) will be checked against centralized
components versioned by the SingularityNET platform (Table \ref{tbl:all_kif_files}). Defining
the versioning mechanism of the global components of the ontology is not within the scope
of this work. However, merely acknowledging the possible existence of different
versions of root and middle-level ontologies within the hierarchy requires
thinking about a reasonable way to accommodate them in the system. One possibility
is to include information about the version of the global components of the ontology when
communicating decentralized components between each other,
as suggested in \cite{YvesHellenschmidt2002}.
In that case, stage \Circled{4} of the workflow construction
in Figure \ref{fig:high-level-workflow-construction} would look approximately like this:
\begin{figure}[H]
\centering
\includegraphics[width=1\textwidth]{../../../ontology/images/verification_for_workflow_correctness.png}
\captionsetup{width=0.9\linewidth}
\caption{\label{fig:verification-sequence}Stage \Circled{4} of workflow construction
sequence (fully depicted in Figure \ref{fig:high-level-workflow-construction}) -- compiling the workflow and
proving its correctness.}
\end{figure}
\item The service metadata returned by calls \Circled{2} and
\Circled{4} of Figure \ref{fig:verification-sequence} includes service definitions
and the version number of the global AI-DSL Ontology that was used to build them. For
example, call \Circled{4} may contain the following information:
\begin{figure}[h]
\centering
\begin{minted}[linenos,tabsize=2,breaklines, fontsize=\small]{json}
{
"sender": "uclnlp",
"receiver": "fakeNewsScore",
"upper-ai-dsl-ontology": "v0.1",
  "service-metadata": $(include service_definition.json),
"leaf-ontology": $(include uclnlp.kif)
}
\end{minted}
\vspace{-0.3cm}
\captionsetup{width=0.8\linewidth}
\caption{\label{lst:uclnlp_metadata}Metadata of the \texttt{uclnlp} service. Sample contents
of included files can be seen separately for each
\href{https://gitlab.com/nunet/fake-news-detection/uclnlp/-/blob/master/service/service_spec/service_definition_prod.json}{service\_definition.json}
and \href{https://github.com/singnet/ai-dsl/blob/master/ontology/uclnlp.kif}{uclnlp.kif}.
}
\end{figure}
\item When all service definitions are collected and their versions are checked to match,
they can be checked for conformance with the global AI-DSL Ontology of the respective version
and, if the service definitions include type signatures, type-checked.
\item The actual compilation of the workflow, checking compliance with
the AI-DSL Ontology and type-checking require a
dedicated and properly configured execution environment. In the context of this document,
that execution environment may include the Idris compiler, an ontology prover able to process
SUO-KIF definitions (e.g. Sigma), their dependencies and possibly custom code.
For that, it would be most logical to introduce a dedicated
AI service into the platform -- the \texttt{verifier} component denoted in Figure
\ref{fig:verification-sequence}. The \texttt{verifier} service will be able to run any required
verification procedures in order to provide a proof that the workflow constructed from the
services found in step \Circled{3} is valid and can be correctly executed on the
SingularityNET platform.
\item After calls \Circled{1}, \Circled{2}, \Circled{3} and \Circled{4} of
Figure \ref{fig:verification-sequence} are completed, a call \Circled{5} will be issued to
\texttt{verifier} sending all metadata of each service along
with the AI-DSL Ontology version's identifier. The \texttt{verifier} will then
request all required dependencies
(listed in Figure \ref{tbl:all_kif_files}) from the central SingularityNET repository
(or blockchain) and calculate the proof.
\item After \texttt{verifier} calculates the proof at step \Circled{8},
the proof is sent to the service that has requested it (in the case of
Figure \ref{fig:high-level-workflow-construction}, to \texttt{fakeNewsScore}).
Additionally, the existence of independent verifiers would make it possible to optimize the overall computational
costs of calculating proofs on the platform, by recording them into the blockchain and making
them searchable by other services that may require the same workflow. One way to do this would be to:
\begin{enumerate}
\item Calculate a hash from the metadata of each service of the workflow
(i.e. *.kif or *.idr files);
\item Construct a Merkle tree
\footnote{\href{https://en.wikipedia.org/wiki/Merkle_tree}{https://en.wikipedia.org/wiki/Merkle\_tree}}
from those hashes which would exactly mirror the structure of the workflow
defined in DAG (Figure \ref{lst:dag});
\item Record the root hash of the tree into the blockchain with relevant metadata;
\end{enumerate}
\item Such a setup would constitute an implicit reputation system for workflows, in the sense that
a workflow with many proofs of correctness on the blockchain could be trusted to work without
recalculating the proof each time the workflow is constructed.
\end{itemize}
\end{enumerate}
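The hashing and Merkle tree construction just described can be sketched in a few lines of Python. The fragment below is purely illustrative: the service names and metadata strings are hypothetical stand-ins for the actual \texttt{*.kif} or \texttt{*.idr} files of a workflow.
\begin{minted}[linenos,tabsize=2,breaklines, fontsize=\small]{python}
import hashlib

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def merkle_root(leaf_hashes):
    """Fold leaf hashes pairwise until a single root hash remains."""
    level = list(leaf_hashes)
    while len(level) > 1:
        if len(level) % 2 == 1:      # duplicate the last hash on odd levels
            level.append(level[-1])
        level = [sha256((level[i] + level[i + 1]).encode())
                 for i in range(0, len(level), 2)]
    return level[0]

# Hypothetical metadata strings, standing in for the *.kif or *.idr
# files of each service in the workflow.
services = {"uclnlp": "(instance ...)", "fakeNewsScore": "(instance ...)"}
leaves = [sha256(meta.encode()) for meta in services.values()]
root = merkle_root(leaves)   # the hash that would be recorded on-chain
\end{minted}
A real implementation would mirror the DAG structure of the workflow rather than a flat list, but the principle is the same: any change to a service's metadata changes its leaf hash and therefore the recorded root.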
\section{Future work}
\begin{itemize}
\item In the long term, it may be ideal to develop a converter from
OWL to KIF, since OWL may be representable in KIF
\cite{martin_translations_nodate} using the \href{https://github.com/owlcs/owlapi}{OWL
API}. For the purpose of the ontology
prototype, we are manually selecting parts of the existing ontologies in order to build
the prototype and writing them in SUO-KIF format.
\item Similarly, we want to be able to convert SUO-KIF specifications
into Idris, and possibly vice versa, to take advantage of the strengths of
each formalism. To the best of our knowledge there are no existing
tools to automatically translate SUO-KIF to/from Idris, however
there is a tool to translate SUO-KIF to
FOL~\cite{Pease_firstorder} and a paper describing
the translation from a Dependently Typed Language (DTL) to
FOL~\cite{SojakovaKristina2009}. Additionally, to start building
an understanding about such process, we have manually ported the trivial AI
services described in Section \ref{sec:dependently_typed_registry} to SUO-KIF, see
\href{https://github.com/singnet/ai-dsl/blob/master/ontology/TrivialServices.kif}{TrivialServices.kif}
under the
\href{https://github.com/singnet/ai-dsl/blob/master/ontology}{ontology}
of the \href{https://github.com/singnet/ai-dsl/}{AI-DSL
repository}~\cite{AIDSLRepo}. As it turns out, writing formal
specifications of functions in SUO-KIF is reasonably
straightforward. Here is, for instance, the SUO-KIF implementation of
the Twicer service:
\begin{minted}[mathescape]{lisp}
(instance TwicerFn UnaryFunction)
(domain TwicerFn 1 Integer)
(range TwicerFn EvenInteger)
(=>
(instance ?INTEGER Integer)
  (equal (TwicerFn ?INTEGER) (MultiplicationFn ?INTEGER 2)))
\end{minted}
where \texttt{EvenInteger} happens to be predefined in
\href{https://github.com/ontologyportal/sumo/blob/master/Merge.kif}{Merge.kif}
of SUMO, partially recalled below
\begin{minted}[mathescape]{lisp}
(=>
(instance ?NUMBER EvenInteger)
(equal (RemainderFn ?NUMBER 2) 0))
\end{minted}
Thus one can see that it is easy to specify a function's input type,
using \texttt{domain}, and its output type, using \texttt{range}, in
SUO-KIF, as well as its full or partial definition, using \texttt{=>},
\texttt{equal} and universally quantified variables such as
\texttt{?NUMBER}. It should be noted however that the reason it works
so well in that case is because the output type does not depend on the
input value, the output is an even integer no matter what. It is
expected that porting for instance the \texttt{append} function of the
dependent type \texttt{Vect}~\cite{Vectors} to SUO-KIF might not be as
trivial, since the \texttt{domain} and \texttt{range} constructs may
not be suitable to represent such dependence (i.e. that the size of
the resulting vector of \texttt{append} is the sum of the sizes of the
input vectors). However, given that dependent types are essentially
functions, it might be possible to set the \texttt{domain} and
\texttt{range} with such type functions. Alternatively such
dependence can be moved to the function definition as offered by
SUO-KIF expressiveness. Another aspect we need to explore is how
tools, such as Automatic Theorem Provers
(ATPs)~\cite{Baumgartner_automatedreasoning, Urban_anoverview,
Alvez_evaluating_atp_adimen_SUMO}, can be used to autonomously
compose as well as retrieve functions given their input and output
types. Obviously if ATP tools running over SUO-KIF turn out to be
deficient in that respect, we already know from Section
\ref{sec:dependently_typed_registry} that Idris can fulfill that
purpose.
\end{itemize}
\chapter{Conclusion}
The novel nature of such a project requires a large amount of
exploration, which is what this iteration has been all about. From
the start we agreed that we wanted to take a holistic approach,
attempting to achieve a full albeit limited prototype as rapidly as
possible in order to uncover hidden blocks as early as possible. Our
exploration has reflected this approach. Let us summarize what we
have accomplished so far and what are the next steps to bring us
closer to a complete AI-DSL.
\begin{itemize}
\item We have started gathering and reviewing literature of related
work to make sure we do not miss anything major and can take
advantage of existing technologies. Even though we have found no
such related project using dependent types and combined with the
blockchain technology, there are related projects, described in
Section \ref{sec:related_work}, with potential for re-usability of
ideas or implementations such as the Function Ontology, FIPA-ACL or
more. We intend to keep studying the literature of related work.
\item We have experimented with Idris to formalize and reason about
realized function attributes such as costs and quality, see
Section~\ref{sec:realized_function} for more details. We have done
so in a limited manner, only considering additive cost and
infimum-itive quality, but we have proven that it is possible and
tractable to do in Idris. More work is required to expand the
complexity of such realized function attributes, such as functional
or distributional costs and quality.
\item An AI-DSL Registry prototype has been built to retrieve, match
and connect AI services based on their specifications as dependent
types using Idris meta-functions for function matching and retrieval
as described in Section~\ref{sec:dependently_typed_registry}. This
prototype has limits, such as returning only the first matching AI
service and not performing fully autonomous composition of AI
services, but none of these limits seem fundamentally hard to
address and should only require more development time.
\item In Chapter~\ref{chap:soft_eng_strat}, we have experimented with
Idris to formalize function properties as dependent types. This was
done in a limited manner, using trivial properties such as evenness
of numbers, but taught us that it is possible to do in Idris, and
provided us insights on how to expand that to more complex
properties. Also, various approaches for defining the AI-DSL as an
Idris eDSL have been explored, see Section~\ref{sec:depth_embedding}.
Additionally, an important idea was approached in this chapter: the
interaction between the AI-DSL and the tokenomics of the network as
a means to provide soft guarantees when hard guarantees are
difficult to obtain, see Section~\ref{sec:monadic_dsl}.
\item In Chapter \ref{chap:aidsl_ontology} we have explored ontologies
with the goal of defining a rich and extendable vocabulary for
specifying AI services, their algorithms, and data types, as well as
their relationship to the real world. For now the decision was made to
build such AI ontology on top of SUMO due to its openness, breadth
and quality, as well as the expressiveness of its representational
language, SUO-KIF. The upper layer of the Fake News Warning app was
translated into SUO-KIF as a SUMO extension. We also explored how
to convert SUO-KIF knowledge into Idris; more work is required to
automate such conversions. Something that has been discussed but
remains to be fully explored is the use of Automatic Theorem Provers
as a complement (or possibly replacement) of Idris for matching and
composing services.
\item We have started building, though at the preparatory level, a
real world AI service assemblage test case, based on the Nunet Fake
News Warning app, see Section \ref{sec:improve_test_cases}. Such a
test case is going to be critical to put our AI-DSL prototypes to
the test and build the understanding necessary to push them to the
next level.
\end{itemize}
The work taking place during the next iterations will consist of
continuing such exploration, refining our existing prototypes and
bringing them together in a holistic system, guided by real world AI
service assemblage test cases. These will initially include the
Nunet Fake News Warning collective. Then more test cases will be
considered over various domains such as bio-informatics, finance,
embodied agent control and more.
To conclude, even though there is a long way to go, we believe a lot
of progress has been made already, and we are happy to say that no
profound difficulties have been revealed so far. We must however
remain cautious. One difficulty that is expected to eventually come
up is the tractability of the verification and automated composition
process of AI services. This is generally undecidable, and even when
restricted to subclasses of functions (as is the case in Idris due to
being based on Intuitionistic Logic) can still have explosive
complexity. However, it is also expected that a tremendous amount of
value will be created by having such system work with restricted
applicability, or by relaxing the level of guarantees demanded.
Ultimately it is expected that the AI-DSL will need to synergize with
AGI systems to reach its full potential, which, fortunately, is one of
the quintessential functions that SingularityNET aims to offer.
\appendix
\chapter{Glossary}
\begin{itemize}
\item \textbf{AI service assemblage}: collection of AI services
interacting together to fulfill a given function. An example of such
an AI service assemblage would be the Nunet Fake News Warning system.
\item \textbf{Dependent Types}: types depending on values. Instead of
being limited to constants such as \texttt{Integer} or
\texttt{String}, dependent types are essentially functions that take
values and return types. A dependent type is usually expressed as a
term containing free variables. An example of dependent type is
\texttt{Vect n a}, representing the class of vectors containing
\texttt{n} elements of type \texttt{a}.
\item \textbf{Dependently Typed Language}: functional programming
language using dependent types. Examples of such languages are
Idris, AGDA and Coq.
\end{itemize}
\bibliographystyle{splncs04}
\bibliography{local}
\end{document}
\documentclass[10pt,twoside,twocolumn]{article}
\usepackage[bg-print]{rpg} % Options: bg-a4, bg-letter, bg-full, bg-print, bg-none.
\usepackage[utf8]{inputenc}
% Start document
\begin{document}
\fontfamily{ppl}\selectfont % Set text font
% Your content goes here
\twocolumn[\section{Box Declarations with Color Choice}]
\subsection{Red}
\begin{quotebox}{lightred}{darkred}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightred}{darkred}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightred}
\lipsum[2][1-5]
\end{commentbox}
\subsection{Orange}
\begin{quotebox}{lightorange}{darkorange}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightorange}{darkorange}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightorange}
\lipsum[2][1-5]
\end{commentbox}
\subsection{Yellow}
\begin{quotebox}{lightyellow}{darkyellow}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightyellow}{darkyellow}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightyellow}
\lipsum[2][1-5]
\end{commentbox}
\subsection{Olive}
\begin{quotebox}{lightolive}{darkolive}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightolive}{darkolive}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightolive}
\lipsum[2][1-5]
\end{commentbox}
\subsection{Green}
\begin{quotebox}{lightgreen}{darkgreen}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightgreen}{darkgreen}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightgreen}
\lipsum[2][1-5]
\end{commentbox}
\subsection{Turquoise}
\begin{quotebox}{lightturquoise}{darkturquoise}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightturquoise}{darkturquoise}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightturquoise}
\lipsum[2][1-5]
\end{commentbox}
\subsection{Cyan}
\begin{quotebox}{lightcyan}{darkcyan}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightcyan}{darkcyan}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightcyan}
\lipsum[2][1-5]
\end{commentbox}
\subsection{Blue}
\begin{quotebox}{lightblue}{darkblue}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightblue}{darkblue}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightblue}
\lipsum[2][1-5]
\end{commentbox}
\subsection{Violet}
\begin{quotebox}{lightviolet}{darkviolet}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightviolet}{darkviolet}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightviolet}
\lipsum[2][1-5]
\end{commentbox}
\subsection{Purple}
\begin{quotebox}{lightpurple}{darkpurple}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightpurple}{darkpurple}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightpurple}
\lipsum[2][1-5]
\end{commentbox}
\newpage
\subsection{Brown}
\begin{quotebox}{lightbrown}{darkbrown}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightbrown}{darkbrown}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightbrown}
\lipsum[2][1-5]
\end{commentbox}
\subsection{Ivory}
\begin{quotebox}{lightivory}{darkivory}
\lipsum[1][1-4]
\end{quotebox}
\begin{paperbox}{}{lightivory}{darkivory}
\lipsum[1][5-9]
\end{paperbox}
\begin{commentbox}{}{lightivory}
\lipsum[2][1-5]
\end{commentbox}
% End document
\end{document}
\documentclass{article}
\usepackage{amsmath}
\usepackage{amsfonts}
\usepackage{graphicx}
\usepackage{biblatex}
\usepackage{authblk}
\usepackage{mathtools}
\usepackage{xurl} %see https://tex.stackexchange.com/questions/23394/url-linebreak-in-footnote for why we use xurl to get line breaks instead of regular url
\usepackage[hidelinks]{hyperref}
\usepackage{listings}
\usepackage{cancel}
\usepackage{enumitem}
\usepackage[bb=boondox]{mathalfa}
\addbibresource{sample-base.bib}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\usepackage{fancyhdr}
\usepackage{textcomp}
\setlist[description]{leftmargin=2cm,labelindent=1cm}
\lstdefinestyle{mystyle}{
% backgroundcolor=\color{backcolour},
% commentstyle=\color{codegreen},
% keywordstyle=\color{magenta},
% numberstyle=\tiny\color{codegray},
% stringstyle=\color{codepurple},
% basicstyle=\ttfamily\footnotesize,
% breakatwhitespace=false,
% breaklines=true,
captionpos=b,
keepspaces=true,
% numbers=left,
% numbersep=5pt,
% showspaces=false,
% showstringspaces=false,
% showtabs=false,
tabsize=2,
frame=single,
}
\lstset{style=mystyle}
\title{The Koyote Science, LLC, Approach to Personalization and Recommender Systems}
\author[1]{Douglas Mason}
\affil[1]{\href{http://www.koyotescience.com}{Koyote Science, LLC}}
\date{February 2022}
\begin{document}
\maketitle
\tableofcontents
\pagestyle{fancy}
% \fancyhf{}
\cfoot{\\
\\
\includegraphics[scale=0.15,valign=c]{koyote_science_logo.png}
}
\lhead{Page \thepage}
\rhead{Section \thesection}
\section{Introduction}
Personalization and recommender systems present unique challenges that can be addressed with intelligent bandit design. These algorithms require less feature-engineering, and use embeddings to represent users, queries, and items, making them the go-to for products with a history of interactions to build upon. When working with new users or when substantial historical data has not been recorded, bandits can help collect this data efficiently. Moreover, while recommender systems have evolved to enormously complex extremes to accommodate signals as quickly as possible, bandits paired with recommender systems can capture user intent in the moment with little additional effort, providing a tremendous improvement over traditional approaches. This document outlines the recommender system problem, how bandits have been used to address issues in the past, and how we would do it differently.
\section{History and background}
Any developed technology company will have a well-tuned, efficient global model covering all of their customers. Such systems derive largely from \textbf{collaborative filtering}\cite{CF_survey1, CF_survey2, CF_survey3}, which boils down to singular value decomposition on the user-item matrix whose rows identify a user, whose columns identify an item, and whose entries identify interactions between the given user and item.
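The core of collaborative filtering can be illustrated with a truncated SVD of a toy user-item matrix. The ratings below are invented purely for illustration; real systems operate on sparse matrices with millions of rows and columns.
\begin{lstlisting}[language=Python]
import numpy as np

# Toy user-item matrix: rows are users, columns are items, entries are
# ratings (0 marks an unobserved pair in this simplified sketch).
R = np.array([[5, 4, 0, 1],
              [4, 5, 1, 0],
              [0, 1, 5, 4],
              [1, 0, 4, 5]], dtype=float)

k = 2                                          # number of latent factors
U, s, Vt = np.linalg.svd(R, full_matrices=False)
R_hat = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]  # rank-k reconstruction

pred = R_hat[0, 2]  # predicted affinity of user 0 for unobserved item 2
\end{lstlisting}
The rank-$k$ reconstruction fills in the unobserved entries with values implied by the dominant latent structure, which is exactly the signal collaborative filtering exploits.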
Side information for the users, such as demographics, as well as the items, such as genre or format, can be used in a separate recommendation method called \textbf{content-based filtering}, by which a traditional supervised learning model is trained on the user and item features, and generally include the interactions between those features. A popular elaboration includes adding contextual features to the model, such as time-of-day, or day-of-week, which opens up the possibility of hierarchical models, whereby we learn global trends as well as user-specific preferences.
Content-based filtering is great for cold-start problems where these features are available but no prior interactions exist for the user; however, it generally underperforms collaborative filtering once interactions have been recorded. Moreover, performance can suffer in large part because identifying meaningful features, building the appropriate data pipeline, and quality assurance are all non-trivial compared to accumulating the user-item matrix. Content-based filtering benefits from operating on traditional supervised learning methods, and can be easily integrated with any bandit or reinforcement-learning system accordingly.
Combining collaborative filtering and content-based filtering, we arrive at \textbf{hybrid recommendation models}\cite{hybrid_recommender, hybrid_recommender2}. Popular techniques in this domain include factorization machines\cite{factorization_machines, deep_factorization_machines}, which learn additional embeddings for side information features and add these to the model. A popular implementation of this style of recommender is LightFM\cite{lightFM, xLightFM} available in Python, although custom implementations can be built for businesses with the resources to do so.
Collaborative, content-based, and hybrid filtering are all mature methods that can be used not only to recommend items like movies, songs, and books, but also how those items are displayed, such as which rows of recommended items are presented, like "Previously watched" and "Great horror classics". This can be accomplished by encoding these display elements in a separate user-item matrix or content-based filtering model and finding clever ways to use this signal. For example, the reader can see the Netflix blog entry\footnote{Source: \url{https://netflixtechblog.com/learning-a-personalized-homepage-aa8ec670359a}} which covers the topic on a very high level, mostly focusing on heuristic methods on top of the underlying signal to encourage diversity or account for causal inference.
\section{Mathematical formulation}
Using \textbf{deep learning}, that is, a neural network function approximator guided by a stochastic gradient descent style optimizer, an alternate view of factorization machines sees each user and item converted into an \textit{embedding} or \textit{representation}, with their interactions modeled by the dot product between those embeddings. This allows us to construct extremely expressive hybrid recommender systems as a variation on linear regression with quadratic feature interactions, using the dot products between the embeddings to dramatically reduce the number of parameters that need to be learned\footnote{See \url{https://towardsdatascience.com/factorization-machines-for-item-recommendation-with-implicit-feedback-data-5655a7c749db}}. While complex deep neural networks can be used, a single-layer network equivalent to a linear regression has the benefit of being a convex optimization problem that is far more stable to solve.
We can write out our function approximator as
\begin{equation}
f(\mathbf{x},\boldsymbol{\theta},\boldsymbol{\phi}) = \theta_0 + \sum_{p=1}^{P}\theta_p x_p + \sum_{p=1}^{P-1}\sum_{q=p+1}^{P}\boldsymbol{\phi}_p^\top \boldsymbol{\phi}_q x_p x_q
\end{equation}where
\begin{itemize}
\item $\mathbf{x}$ is a one-hot encoding of the users and items concatenated with (possibly float-valued) contextual features like time-of-day for a given interaction,
\item $\boldsymbol{\theta}$ is the set of (scalar) weights for each feature in $\mathbf{x}$, sometimes referred to as the user and item biases, but it also includes weights for contextual features
\item $\boldsymbol{\phi}$ is the $P\times M$ matrix of representations for each user and item as well as contextual features we would like represented this way, where $M$ is the number of embedding dimensions, although this number can depend on the user, item, or contextual feature rather than being static
\end{itemize}
Note that we don't one-hot encode the users or items explicitly but rather look them up in a dictionary defined by the model, however, the formalism is easier to write out this way.
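To make the formula concrete, the sketch below evaluates $f(\mathbf{x},\boldsymbol{\theta},\boldsymbol{\phi})$ term by term, mirroring the equation directly. The toy dimensions and randomly generated parameters are illustrative only; production code replaces the double loop with the standard $O(PM)$ reformulation.
\begin{lstlisting}[language=Python]
import numpy as np

def fm_predict(x, theta0, theta, phi):
    """Factorization-machine score: bias + linear terms + pairwise
    interactions modeled as dot products between feature embeddings."""
    P = len(x)
    score = theta0 + theta @ x
    for p in range(P - 1):
        for q in range(p + 1, P):
            score += (phi[p] @ phi[q]) * x[p] * x[q]
    return score

rng = np.random.default_rng(0)
P, M = 6, 3                         # 6 features, 3 embedding dimensions
x = np.zeros(P)
x[1], x[4] = 1.0, 1.0               # one-hot: user index 1, item index 4
theta0, theta = 0.1, rng.normal(size=P)
phi = rng.normal(size=(P, M))       # P x M matrix of embeddings
score = fm_predict(x, theta0, theta, phi)
\end{lstlisting}
For the one-hot input above the score collapses to $\theta_0 + \theta_1 + \theta_4 + \boldsymbol{\phi}_1^\top\boldsymbol{\phi}_4$, which is exactly the dictionary-lookup view described in the text.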
The model parameters $\boldsymbol{\theta}$ and $\boldsymbol{\phi}$ are optimized to minimize the standard squared-error loss\cite{CF_neural, CF_neural2},
\begin{equation}
L(\mathbf{X},\boldsymbol{\theta},\boldsymbol{\phi}) = \frac{1}{N}\sum_{i=1}^{N}(\sigma(f(\mathbf{X}_i,\boldsymbol{\theta},\boldsymbol{\phi})) - y)^2 + \lambda \left(||\boldsymbol{\theta}||_2 + ||\boldsymbol{\phi}||_2\right)
\end{equation}where
\begin{itemize}
\item $N$ is the number of samples in $\mathbf{X}$
\item $\sigma$ is a link function that is the identity for a linear regression and the sigmoid function for binary outputs
\item $y$ is the measure of interest (item rankings, engagement, etc.)
\item $\lambda$ is the L2-norm regularization constant
\end{itemize}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.75\linewidth]{two_tower.png}
\end{center}
\caption{The user (query) and item features can be fed into a two-tower deep neural network to learn embeddings and interaction features among each type of feature (user and item) separately. The dot product is then used to model interactions between the user and the item, and the learned embeddings can be used in any k-nearest neighbors retrieval system to find similar users or items. This flexible design allows for a variety of side-chain information, such as time-of-day, to be used in either the user tower, item tower, or both. Source: \url{https://research.google/pubs/pub50257/}}
\label{fig:two_tower}
\end{figure}
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.75\linewidth]{youtube_figure.png}
\end{center}
\caption{The YouTube implementation encodes an enormous amount of side-chain information. Source: \url{https://research.google/pubs/pub45530/}}
\label{fig:you_tube}
\end{figure}
Further elaborations include triplet losses for learning relative rankings rather than explicit predictions of $y$\cite{learning_to_rank_recsys} which is implemented with LightFM\cite{lightFM}, and deep neural networks to learn expressive representations in well-known implementations such as YouTube's\cite{youtube_recsys} (Figure \ref{fig:you_tube}) and the "two tower DNN" design\cite{google_recsys_two_tower} (Figure \ref{fig:two_tower}). In general, the more expressive the model becomes, the easier it is to program in an auto-differentiation package like JAX compared to using pre-built APIs like Keras to allow for fine-tuned control.
\section{Implicit feedback and negative sampling}
When our goal is to predict click-through-rates rather than scalar regression (such as movie ratings), but we only record \textbf{implicit feedback} (the clicks), this can impose new challenges when the number of items is large. For a small number of items, a multi-class model can predict over all items and compute the softmax explicitly, training one interaction at a time and assuming all other items were not interacted with. For multi-label models, we can collect all interactions and non-interactions for a user and train them simultaneously.
However, when the number of items grows, we cannot model each item in our outputs due to resource constraints and are forced to use models that accept items as input vectors rather than as the set of discrete outputs. This leaves us with a multi-class softmax model that trains one interaction at a time, or a binary classification model that only predicts for a given user and item. However, if we only train on the interactions, we must beware of the phenomenon of "folding", by which distinct clusters of data will be incorrectly assigned similar embedding values\cite{folding_without_negative_sampling_recsys}. To address this issue, \textbf{negative sampling} is employed by which we sample the non-interactions rather than use all of them.
The negative samples can be trained explicitly for binary classification models, at which point the problem is identical to managing class imbalance, i.e., when the distribution of your training data differs from the data you infer on. Unfortunately, this approach introduces new hyperparameters governing how we obtain the negative samples and in what proportion to include them, which are then tuned to optimize some classification goal. In our case, our goal is some metric of the recommender system, such as top-k ranking. Incidentally, these considerations amount to the importance sampling ratio used in off-policy evaluation in reinforcement learning, since we are attempting to bridge the differences between the training data distribution the model is trained on and the inference data distribution the model will be applied on. Notably, when we only record data for one class (the interactions), we have no way to determine the importance sampling ratios a-priori, which is why we are left with hyperparameters to tune.
For multi-class models, while negative samples can be included in the training data as in binary classification, they can also just be included in the normalization denominator, i.e., the partition function, in the per-sample likelihood
\begin{equation}\mathcal{L}_\text{softmax}(\mathbf{a},\mathbf{c},\boldsymbol{\theta})=\frac{\exp(f(\mathbf{a},\mathbf{c},\boldsymbol{\theta}))}{\sum_{\mathbf{a}'}\exp(f(\mathbf{a}',\mathbf{c},\boldsymbol{\theta}))}.
\end{equation}
The loss function
\begin{equation}L(\boldsymbol{\theta})=-\frac{1}{N}\sum_{\mathbf{a},\mathbf{c}}\log\mathcal{L}_\text{softmax}(\mathbf{a},\mathbf{c},\boldsymbol{\theta})\end{equation}
then averages over all actions (items) and contexts (users) in our dataset of size $N$. We don't need to explicitly train the negative samples to shift their embeddings because the gradients of the loss function related to the softmax denominator will do that for us.
Various methods are employed for selecting the negative samples, the most common and easiest-to-use being random negative sampling. However, similar to off-policy reinforcement learning, we can sample our negatives from any non-random distribution we like so long as we properly account for inverse probability weighting and importance sampling\cite{bengio_negative_sampling}. Alternatively, the user can be explicitly defined by the target-weighted sum of item embeddings that they have interacted with to avoid this issue, although this requires using sparse matrix libraries and limits the number of items that can be modeled.
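The sampled-softmax idea can be sketched minimally as follows, assuming simple dot-product scores between a user embedding and item embeddings; the sizes and the uniform negative-sampling scheme are illustrative choices, not a recommendation.
\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(42)

def sampled_softmax_nll(user_emb, item_embs, pos_item, n_negatives=5):
    """Negative log-likelihood of the observed item, with the softmax
    partition function approximated by uniformly sampled negatives."""
    n_items = item_embs.shape[0]
    negatives = rng.choice(
        [i for i in range(n_items) if i != pos_item],
        size=n_negatives, replace=False)
    candidates = np.concatenate(([pos_item], negatives))
    logits = item_embs[candidates] @ user_emb   # dot-product scores
    logits -= logits.max()                      # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[0])                    # positive sits at index 0

n_items, dim = 50, 8
item_embs = rng.normal(size=(n_items, dim))
user_emb = rng.normal(size=dim)
loss = sampled_softmax_nll(user_emb, item_embs, pos_item=3)
\end{lstlisting}
Setting \texttt{n\_negatives} to the full catalog size recovers the exact softmax, which is what makes this a controllable approximation.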
Negative sampling is necessary for models where we apply a softmax to our prediction probabilities, that is, we treat each interaction as a win against a pool of other candidates, forcing the model to rank the likelihood of the current interaction against the non-interactions. However, instead of treating this as a multi-class single-label classification problem, we can also train just to predict the interaction probability between a given user and item without the softmax function forcing a ranking among items in the model. This turns our problem into a single-value binary classification or regression problem. Once a substantial amount of interaction data has come in, the softmax approach will outperform the alternative formulation for the same reason that multi-task models learn better embeddings and improve model performance in general, and similarly, a multi-class \textit{multi-label} model using a final sigmoid activation layer over all items will perform in between. But the greater performance comes at a cost: handling missing information or heterogeneous data becomes substantially more difficult since all items have to be ranked concurrently against each other. For this reason, cold-start problems often ignore the softmax function and rely on a more traditional bandit formulation.
Note that when the number of items becomes large, performing predictions for ranking becomes a substantial computational overhead. To address this, nearest-neighbor algorithms\cite{nearest_neighbors} like K-D tree and ball tree can be used to reduce the search time by searching for item embeddings that are closest to the query embedding since these will have the largest dot-products. Note that this algorithm requires the two-tower design depicted in Figure \ref{fig:two_tower}, to measure the distances between the final query and item embeddings, as well as direct access to the final embedding layers rather than building off of the model predictions, which can make the engineering a bit harder.
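A brute-force version of this retrieval step, standing in for the K-D tree or ball tree used at scale, makes the mechanics clear: because scores are plain dot products, top-$k$ retrieval reduces to a partial sort over the score vector. The dimensions below are arbitrary toy values.
\begin{lstlisting}[language=Python]
import numpy as np

def top_k_items(query_emb, item_embs, k=5):
    """Indices of the k items whose embeddings have the largest dot
    product with the query embedding (brute-force retrieval)."""
    scores = item_embs @ query_emb
    top = np.argpartition(-scores, k)[:k]    # unordered top-k
    return top[np.argsort(-scores[top])]     # sorted best-first

rng = np.random.default_rng(1)
item_embs = rng.normal(size=(1000, 16))  # 1000 items, 16-dim embeddings
query_emb = rng.normal(size=16)          # final query-tower embedding
best = top_k_items(query_emb, item_embs, k=5)
\end{lstlisting}
Tree-based or approximate indexes replace the linear scan, but consume exactly the same final-layer embeddings, which is why direct access to those layers matters.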
\section{Reinforcement learning for recommender systems}
\textbf{Reinforcement learning} has been extensively explored in the literature as a way of improving recommender systems\cite{rl_recsy, rl_recsys2, rl_recsys3}. First, we consider how it would be added to either of the above methods individually. For collaborative filtering alone, the predominant approach is to use a Bayesian implementation of matrix factorization called \textit{probabilistic matrix factorization}\cite{PMF_probabilistic_matrix_factorization,BPMF_bayesian_probabilistic_matrix_factorization} to create \textit{interactive collaborative filtering}\cite{interactive_CF,interactive_CF2,interactive_CF3} implementations which balance exploration and exploitation in data collection for the collaborative filtering signal. It remains unclear if similar approaches can be used for factorization machines. Meanwhile, content-based filtering extends to reinforcement learning trivially.
With the deep learning formulation, it is possible to combine all three approaches satisfyingly into one formalism since it treats collaborative filtering like a content-based filtering problem. The question remains how to capture prediction uncertainty to drive RL policies and make the exploration-exploitation trade-off possible, which is what probabilistic matrix factorization solved for the classical model. You can either employ an epsilon-greedy or softmax policy based on static predictions (poor performance), or use bootstrapped ensembles for Thompson sampling\cite{randomized_prior_functions, bootstrap_DQN, thompson_sampling}, which incur greater training costs but no latency issues for the customer. This is our preferred method because it has been shown to outperform others in many contexts\cite{thompson_sampling1, thompson_sampling2, thompson_sampling3}. Other proposals, such as MC-dropout\cite{monte_carlo_dropout} and Bayes-by-backprop\cite{bayes_by_backprop}, have not yet been proven effective or accurate\cite{risk_versus_uncertainty}.
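A minimal sketch of bootstrapped-ensemble Thompson sampling, using ridge regressions in place of deep networks for brevity: each ensemble member is fit on a bootstrap resample of the logged interactions, and at decision time one member is sampled and acted on greedily. All sizes, the ridge constant, and the synthetic data are illustrative assumptions.
\begin{lstlisting}[language=Python]
import numpy as np

rng = np.random.default_rng(7)

class BootstrappedEnsemble:
    """Thompson sampling via an ensemble of ridge regressions, each
    fit on a bootstrap resample of the logged interactions."""
    def __init__(self, n_models, dim, ridge=0.1):
        self.ridge = ridge
        self.weights = [np.zeros(dim) for _ in range(n_models)]

    def fit(self, X, y):
        n, d = X.shape
        for i in range(len(self.weights)):
            idx = rng.integers(0, n, size=n)        # bootstrap resample
            Xb, yb = X[idx], y[idx]
            self.weights[i] = np.linalg.solve(
                Xb.T @ Xb + self.ridge * np.eye(d), Xb.T @ yb)

    def act(self, candidates):
        w = self.weights[rng.integers(len(self.weights))]  # sample a model
        return int(np.argmax(candidates @ w))              # greedy on it

X = rng.normal(size=(200, 4))
true_w = np.array([1.0, -2.0, 0.5, 0.0])
y = X @ true_w + rng.normal(scale=0.1, size=200)
policy = BootstrappedEnsemble(n_models=10, dim=4)
policy.fit(X, y)
choice = policy.act(rng.normal(size=(20, 4)))  # pick among 20 candidates
\end{lstlisting}
The ensemble disagreement shrinks as data accumulates, so exploration fades naturally, which is the behavior that makes this design preferable to epsilon-greedy policies.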
\section{How we would do it differently}
While reinforcement learning has been explored to optimize these signals as we've discussed, such efforts ignore the major benefit of the technology: enhanced and responsive interactions. For example, a feature-rich recommender system with tons of data like YouTube's can tell us your propensity to watch horror movies on a Friday night, and can even tell us when users are globally excited for the genre because Halloween is fast-approaching. By adding recent historical features, such as representations of the most-recently interacted items, advanced models can also capture sequential tendencies, such as the propensity to watch the second episode of a show that a user just started.
However, even with enormous resources and instantaneous updates, \textbf{traditional recommender systems are based on static historical signals and cannot tell us about the customer in the moment.} In other words, such models can deduce your likelihood for wanting to see horror on this Friday night, but they cannot capture that intention directly, and therefore miss out on the most-important signal. Heuristic add-ons can replicate some of the desired responsiveness, but they are prone to errors in edge cases, require expensive A/B testing, and are designed by humans rather than built on sound foundational principles. In other words: we can certainly do better, and the lessons learned with bandits and reinforcement learning are the key to getting there.
The key concept here is that \textbf{any real-time signal we obtain must be extremely simple}. Recommender systems excel with enormous amounts of data, but any model that bootstraps off of their predictions and responds to users in the moment can only capture a few more nuances. We assume for the moment that we are supplied with a recommender system that provides us predictions based on historical data of user-item engagement. In the traditional linear bandit setup, we define a linear regression function approximator as
\begin{equation}
f(\mathbf{a},\mathbf{c},\boldsymbol{\theta}) = \theta_0 + \sum_{i=1}^{N_a}\theta_i a_i+\sum_{i=1}^{N_c}\theta_{i+N_a}c_i+\sum_{i=1}^{N_a}\sum_{j=1}^{N_c}\theta_{i+(j-1)N_a+N_a+N_c} a_i c_j
\end{equation}where $\mathbf{a}$ is the $N_a$-length action feature vector, $\mathbf{c}$ is the $N_c$-length context feature vector, and $\boldsymbol{\theta}$ is the vector of model parameters. As we can see, this model has $1+N_a+N_c+N_a\times N_c$ parameters, which is substantial! How can we reduce this number?
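Before simplifying, it helps to sanity-check the shape of the full model. The following NumPy sketch (the memory layout we choose for the interaction weights is our own assumption) evaluates the model and asserts the parameter count:

```python
import numpy as np

def full_linear_model(a, c, theta):
    """Evaluate the full interaction model: bias, action terms,
    context terms, and all action-context cross terms.

    Layout of theta (our assumption): bias, then N_a action weights,
    then N_c context weights, then the N_a x N_c interaction weights.
    """
    n_a, n_c = len(a), len(c)
    assert len(theta) == 1 + n_a + n_c + n_a * n_c
    value = theta[0]
    value += theta[1:1 + n_a] @ a
    value += theta[1 + n_a:1 + n_a + n_c] @ c
    interactions = theta[1 + n_a + n_c:].reshape(n_c, n_a)
    value += c @ interactions @ a  # sum over all theta_{ij} a_i c_j pairs
    return value
```

Even for modest feature sizes the interaction block dominates the parameter count, which is exactly the problem the simplified model below avoids.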
We instead define our features as follows. We make one model for each user and session so that the context feature vector is null. For each item (the action) we encode the recommender system prediction between the user and item as well as any gestalt features we hope to capture about the item. We write this \textbf{simplified model} as
\begin{equation}
f(\mathbf{a},\boldsymbol{\theta}) = a_r+\sum_{i=1}^{N_a}\theta_i a_i
\end{equation}where we have removed the bias term $\theta_0$ since it doesn't affect rankings and replaced it with the recommender prediction $a_r$. How simple! So how would this work in practice?
Imagine that you want to capture whether a user is interested, right now, in horror. We use the simplified model in a bandit setup. We start with a prior on the parameters centered at $\boldsymbol{\theta}=\mathbf{0}$, so that we don't begin with pure exploration, and with a prior noise parameter $\sigma_p$ tuned to how much we want to explore to begin with. As the system learns, it will discover how much it should weight its contribution against the recommender system. If $\theta_1$ is a Boolean capturing whether the item falls into horror, this model weight will capture how much we should up-recommend horror items. As we add more features to the model, we may need to tighten the prior noise parameter accordingly to capture the same level of exploitation, so this method still leaves one hyperparameter to hand-tune, $\sigma_p$. Other models can be considered, for example, the multiplicative model
\begin{equation}
f(\mathbf{a},\boldsymbol{\theta}) = a_r \sum_{i=1}^{N_a}\theta_i a_i
\end{equation}where the prior recommendation is multiplied through the parameters and features to give a desired outcome.
\section{Building probabilistic logic out of bandit components for expressive and responsive recommendation products}
Any recommender system is really a bandit or reinforcement learning algorithm in disguise, with the main difference being that collaborative filtering uses embeddings and dot products in the function approximator, while the bandit and RL literature focuses on more traditional formulations. Generally, practitioners don't concern themselves with the data collection feedback loop in a recommender system because it is assumed that there is a lot of historical data that needs to be batch-processed, and therefore the uncertainties of the predictions are assumed to be very small and irrelevant to the data collection process. However, for small-data domains like products that are bootstrapping themselves from their customers, and any cold-start or low-data user or item, this is certainly not the case (pun absolutely intended)!
There are two primary methods by which bandits and reinforcement learning components can be linked. The first, as discussed in the previous section, is to use the prediction of one bandit as a feature in another. We can further elaborate on uncertainty propagation by using not the mean prediction as our feature but a sample from the bandit's prediction distribution. Moreover, we can use predictions from multiple bandits and bandits trained on multiple rewards as input to another bandit. (Note that even though only one reward drives the policy for a given bandit, we can still train against many targets, the difference only appearing in how off-policy corrections and importance sampling are executed.) This can help ensure that we do not over-exploit too early, and is a method borrowed from the reinforcement learning literature (see, for example, randomized least squares value iteration\cite{RLSVI}).
The second way to link bandits is to use one bandit to pre-select the pool of choices for another. In this case, it is important to pass to the bandit consuming the pool the likelihood of including each element in the pool. An isolated bandit will be able to compute its own likelihood for pool inclusion depending on its acquisition function, and use the inverse of this value to weight the data in the model during the training process. In the case of only one item in the pool, such as presenting a single option to the user and seeing if they engage with it, this amounts to traditional inverse probability weighting as discussed in \cite{Mason_Real-World_Reinforcement_Learning_2021} and \cite{sutton_barto_rl}, a technique that is critical to off-policy learning and causal inference, and is a simplification of more-sophisticated (and difficult-to-implement) hierarchical approaches like position-based models\cite{zappella_position_based}.
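As a minimal sketch of the weighting itself (here `p` stands for the pool-inclusion likelihood discussed above; how it is computed is the subject of the following sections):

```python
def ipw_estimate(observations):
    """Inverse-probability-weighted estimate of the mean reward.

    observations: (reward, p) pairs, where p is the policy's probability of
    having included the chosen item in the pool.  Low-probability selections
    are up-weighted, which is what corrects the feedback loop's bias.
    """
    return sum(reward / p for reward, p in observations) / len(observations)
```

The same inverse weights can equally be attached to training examples when fitting the downstream bandit, rather than used in a standalone estimator.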
What these two links between bandits achieve for us is that they enable us to build sophisticated, hierarchical systems where each component only has to excel at its prescribed task. This means that any AutoML system used to monitor and launch a bandit service can be used in a modular fashion, each module perfecting its own selection process. This is a far more tractable solution than traditional hierarchical Bayesian modeling, which requires unreliable techniques like Markov-chain Monte Carlo, variational inference, or other sampling methods developed in the field, along with experienced practitioners and experimental software such as \href{http://www.pyro.ai}{Pyro}, \href{https://docs.pymc.io/en/v3/}{PyMC3}, or \href{https://mc-stan.org}{Stan} to execute. These limitations have restricted such approaches to the domain of one-off research studies, but the system described here is straightforward to productionize, and engineering teams can be trained to monitor it and ensure quality control. In fact, in our experience motivating engineers in this field is easy because they know they are building powerful skills for the future.
\section{Pool selection, fantasies, and enforcing diversity}
The majority of reinforcement learning literature is concerned with choosing a single optimal action given a context based on what has been learned from previous training episodes. However, recommendation systems often are tasked with choosing pools of mutually-exclusive recommendations rather than single items, generally ranked by a single score, and filtered by a query criterion. In certain two-tower and other deep-learning architectures, the query is included in the ranking itself so that all items are considered. When a user selects an item from this pool, we use the inverse policy probability of choosing that item to weight the selection's data in training our model. This means that an element with a low likelihood of being chosen will have a bigger impact if it is.
However, obtaining the correct policy probabilities in these scenarios is non-trivial, and has been extensively studied in hyperparameter optimization problems\cite{parallel_hyperparameter_tuning,parallel_hyperparameter_tuning2}, where hyperparameter selections will often be used to train models in parallel, and we may not get results back until we've had to start training new models on new hyperparameter selections. The approach in this field is to first make a copy of your model before you start the pool selection, one copy for each item you will select. Then, for each item, you repeat the selections already made for the pool, obtain new samples from the performance prediction distributions for the model copy, and use those samples to train the associated model copy. While the selections will be the same as we build up the pool, the prediction scores will not, and this allows us to sample the distribution of possibilities similar to Thompson sampling. These selections with their predicted performances are called ``fantasies'' and are used to sample how uncertainty may propagate during pool selection.
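The fantasy mechanism can be sketched as follows. Both `ToyModel` and its `sample_prediction`/`observe` API are hypothetical stand-ins for a real bandit; the point is only the loop structure, in which each pick trains a model copy on its own sampled score before the next pick:

```python
import copy
import random

class ToyModel:
    """Hypothetical stand-in for a bandit: per-item Gaussian beliefs."""
    def __init__(self, beliefs):
        self.beliefs = dict(beliefs)          # item -> (mean, variance)

    def sample_prediction(self, item, rng):
        mean, var = self.beliefs[item]
        return rng.gauss(mean, var ** 0.5)

    def observe(self, item, value):
        # Crude shrinkage update: fantasy observations reduce uncertainty.
        mean, var = self.beliefs[item]
        self.beliefs[item] = ((mean + value) / 2.0, var / 2.0)

def select_pool(model, items, k, rng):
    """Greedy pool construction with fantasies: after each pick, pretend we
    observed the sampled score and retrain the copy before the next pick."""
    model = copy.deepcopy(model)              # never mutate the real model
    pool, remaining = [], list(items)
    for _ in range(k):
        samples = {item: model.sample_prediction(item, rng) for item in remaining}
        choice = max(remaining, key=samples.get)
        pool.append(choice)
        remaining.remove(choice)
        model.observe(choice, samples[choice])  # fantasy: train on sampled score
    return pool
```

Repeated runs of `select_pool` with different random states produce different pools, which is precisely the distribution over trajectories exploited below.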
In reinforcement learning problems with a fixed and small number of items or actions, each one is equally available or filtered out completely, but when the availability is limited and exhibits a continuous distribution, we must make some adjustments. In particular, we must multiply the inverse probability weight by the inverse probability of having that item available as a selection. This arises because if an item was unlikely to be included in the selection, this further increases the surprise of choosing it, and averaged over many trials, we must account for it. For this situation, it is important to compute the probability of including each item in the selected pool, rather than the probability of recommending each item individually, in order to properly employ inverse probability weighting. More-advanced approaches use a hierarchical model to model click-through rates as a function of presentation position\cite{zappella_position_based}, but this is an advanced research field that is beyond the scope of this document.
Given a sequence of items $i(s)$ selected at each step $s$, with policy probability $\pi(i(s), s)$ and availability probability $a(i(s), s)$, we compute the probability for item $i'$ at step $s'$ along this trajectory as \begin{equation}
P(i',s')=a(i',s')\,\pi(i',s')\prod_{s=1}^{s'-1} a(i(s),s)\,\pi(i(s),s)
\end{equation}However, this holds only for the given trajectory of items $i(s)\ \forall s<s'$, and we must sum our probabilities over all possible trajectories. This computation is in general intractable, although we can approximate it using sampling: create $N$ different trajectories with selection sequences that are mutually exclusive of each other, and then add them up. Note that the probabilities over all items for a given step must add up to one and are constant, even though we may have different prediction score samples for a given item over different trajectories.
For non-deterministic policies such as we assume in our construction of fantasies using Thompson sampling, for each trajectory, create $M$ different probability sequences with the same selections but with different samples of the prediction distribution used to train the model copy, and average over the probability sequences for each $n\leq N$ trajectory. This approximation will converge to the true total as $N$ and $M$ grow large, although in our experience only a handful of such trajectories are necessary. High-probability items will tend to dominate, and each successive trajectory contributes smaller overall probabilities since we are specifically excluding the popular choices made in previous trajectories.
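A much-simplified sketch of this estimate replaces the mutually-exclusive trajectory bookkeeping with a plain Monte Carlo indicator average, which converges to the same total inclusion probability. Here `run_selection` is a hypothetical callable that performs one stochastic pool selection and returns the pool:

```python
def inclusion_probability(item, run_selection, n_samples, rng):
    """Monte Carlo estimate of an item's pool-inclusion probability.

    Reruns the stochastic selection n_samples times and averages an
    inclusion indicator (a simplification of the N x M trajectory and
    probability-sequence scheme described in the text).
    """
    hits = sum(1 for _ in range(n_samples) if item in run_selection(rng))
    return hits / n_samples
```

The trajectory-based scheme in the text is preferable when selections are expensive, since it reuses each trajectory's structure across $M$ probability sequences instead of rerunning the whole selection.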
It is possible some unlikely trajectory or probability sequence could swamp the probability distribution in an unexpected way, and this is why advanced Bayesian inference techniques were invented in the first place. However, they are generally never put into production because they are unstable, require hand-holding, their contributions are expensive to compute, and they are unlikely to have much impact on the final result. The bespoke algorithm MOE, which integrates over the performance probabilities of each selection\cite{scott_clark}, is another possibility, but it is unclear whether it provides a meaningful improvement given the substantial dependency it creates.
When selecting a pool of items, a common desire is to increase the diversity within it, but diversity always comes at the cost of predicted performance for the recommender system. For this reason, there is no free lunch -- we must choose what that trade-off will be, and as a result there are countless approaches to the problem, and we refer the reader to \cite{recsys_diversity_survey, netflix_recommender_system} for a detailed review.
Our approach increases diversity in our selection pool by using Thompson sampling in our selection process and by resampling the effect of each item on successive item uncertainties in the pool. While this reduces predicted performance for any given selection, it actually improves performance long term as it navigates the exploration-exploitation trade-off.
In addition, we enable a few tunable parameters for increasing the diversity:
\begin{enumerate}
\item the fraction of selections that are truly random
\item a multiplier on the variance of our selections
\item a multiplier on the self-entropy (randomness) in our loss function
\end{enumerate}
Note that for the first tunable parameter, selections are still limited by the pool of consideration, i.e., the query filter. It is generally undesirable to introduce new tunable parameters that must be experimented on. Among the options provided here, we are least likely to recommend using the first parameter since it may have unintended consequences, while the second and third naturally converge to standard behavior when the multiplier goes to one and zero, respectively. However, tunable parameters allow us to account for another type of uncertainty that Thompson sampling does not: the data could be corrupted, the measurements may be inaccurate, etc., so enhancing exploration a bit often helps accommodate these issues.
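The first two knobs can be sketched directly on top of Thompson-sampled scores (the function and its parameters are illustrative; the third knob lives in the training loss and is not shown):

```python
import random

def diversified_scores(means, variances, rng,
                       random_fraction=0.0, variance_multiplier=1.0):
    """Thompson-style sampled scores with two diversity knobs: with
    probability `random_fraction` the slate is scored purely at random,
    and posterior variances are inflated by `variance_multiplier`."""
    if rng.random() < random_fraction:
        return [rng.random() for _ in means]      # fully random selection
    return [rng.gauss(m, (variance_multiplier * v) ** 0.5)
            for m, v in zip(means, variances)]
```

Setting `random_fraction=0` and `variance_multiplier=1` recovers ordinary Thompson sampling, which is the convergence-to-standard-behavior property noted above.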
\printbibliography
\end{document}
\chapter{Basic Facilities of a Virtio Device}\label{sec:Basic Facilities of a Virtio Device}
A virtio device is discovered and identified by a bus-specific method
(see the bus specific sections: \ref{sec:Virtio Transport Options / Virtio Over PCI Bus}~\nameref{sec:Virtio Transport Options / Virtio Over PCI Bus},
\ref{sec:Virtio Transport Options / Virtio Over MMIO}~\nameref{sec:Virtio Transport Options / Virtio Over MMIO} and \ref{sec:Virtio Transport Options / Virtio Over Channel I/O}~\nameref{sec:Virtio Transport Options / Virtio Over Channel I/O}). Each
device consists of the following parts:
\begin{itemize}
\item Device status field
\item Feature bits
\item Notifications
\item Device Configuration space
\item One or more virtqueues
\end{itemize}
\section{\field{Device Status} Field}\label{sec:Basic Facilities of a Virtio Device / Device Status Field}
During device initialization by a driver,
the driver follows the sequence of steps specified in
\ref{sec:General Initialization And Device Operation / Device
Initialization}.
The \field{device status} field provides a simple low-level
indication of the completed steps of this sequence.
It's most useful to imagine it hooked up to traffic
lights on the console indicating the status of each device. The
following bits are defined (listed below in the order in which
they would be typically set):
\begin{description}
\item[ACKNOWLEDGE (1)] Indicates that the guest OS has found the
device and recognized it as a valid virtio device.
\item[DRIVER (2)] Indicates that the guest OS knows how to drive the
device.
\begin{note}
There could be a significant (or infinite) delay before setting
this bit. For example, under Linux, drivers can be loadable modules.
\end{note}
\item[FAILED (128)] Indicates that something went wrong in the guest,
and it has given up on the device. This could be an internal
error, or the driver didn't like the device for some reason, or
even a fatal error during device operation.
\item[FEATURES_OK (8)] Indicates that the driver has acknowledged all the
features it understands, and feature negotiation is complete.
\item[DRIVER_OK (4)] Indicates that the driver is set up and ready to
drive the device.
\item[DEVICE_NEEDS_RESET (64)] Indicates that the device has experienced
an error from which it can't recover.
\end{description}
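The bit values and their typical order of setting can be summarized in the following informal sketch (Python is used purely for illustration and is not part of the specification):

```python
# Device status bits, with the values defined above.
ACKNOWLEDGE        = 1
DRIVER             = 2
DRIVER_OK          = 4
FEATURES_OK        = 8
DEVICE_NEEDS_RESET = 64
FAILED             = 128

def driver_status_progression():
    """Yield the device status field value after each step of a successful
    driver initialization; bits are only ever ORed in, never cleared."""
    status = 0
    for bit in (ACKNOWLEDGE, DRIVER, FEATURES_OK, DRIVER_OK):
        status |= bit
        yield status
```

Note that the numeric values are not sequential in the order they are set: FEATURES_OK (8) is set before DRIVER_OK (4).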
\drivernormative{\subsection}{Device Status Field}{Basic Facilities of a Virtio Device / Device Status Field}
The driver MUST update \field{device status},
setting bits to indicate the completed steps of the driver
initialization sequence specified in
\ref{sec:General Initialization And Device Operation / Device
Initialization}.
The driver MUST NOT clear a
\field{device status} bit. If the driver sets the FAILED bit,
the driver MUST later reset the device before attempting to re-initialize.
The driver SHOULD NOT rely on completion of operations of a
device if DEVICE_NEEDS_RESET is set.
\begin{note}
For example, the driver can't assume requests in flight will be
completed if DEVICE_NEEDS_RESET is set, nor can it assume that
they have not been completed. A good implementation will try to
recover by issuing a reset.
\end{note}
\devicenormative{\subsection}{Device Status Field}{Basic Facilities of a Virtio Device / Device Status Field}
The device MUST initialize \field{device status} to 0 upon reset.
The device MUST NOT consume buffers or send any used buffer
notifications to the driver before DRIVER_OK.
\label{sec:Basic Facilities of a Virtio Device / Device Status Field / DEVICENEEDSRESET}The device SHOULD set DEVICE_NEEDS_RESET when it enters an error state
that a reset is needed. If DRIVER_OK is set, after it sets DEVICE_NEEDS_RESET, the device
MUST send a device configuration change notification to the driver.
\section{Feature Bits}\label{sec:Basic Facilities of a Virtio Device / Feature Bits}
Each virtio device offers all the features it understands. During
device initialization, the driver reads this and tells the device the
subset that it accepts. The only way to renegotiate is to reset
the device.
This allows for forwards and backwards compatibility: if the device is
enhanced with a new feature bit, older drivers will not write that
feature bit back to the device. Similarly, if a driver is enhanced with a feature
that the device doesn't support, it will see that the new feature is not offered.
Feature bits are allocated as follows:
\begin{description}
\item[0 to 23] Feature bits for the specific device type
\item[24 to 37] Feature bits reserved for extensions to the queue and
feature negotiation mechanisms
\item[38 and above] Feature bits reserved for future extensions.
\end{description}
\begin{note}
For example, feature bit 0 for a network device (i.e.
Device ID 1) indicates that the device supports checksumming of
packets.
\end{note}
In particular, new fields in the device configuration space are
indicated by offering a new feature bit.
\drivernormative{\subsection}{Feature Bits}{Basic Facilities of a Virtio Device / Feature Bits}
The driver MUST NOT accept a feature which the device did not offer,
and MUST NOT accept a feature which requires another feature which was
not accepted.
The driver SHOULD go into backwards compatibility mode
if the device does not offer a feature it understands, otherwise MUST
set the FAILED \field{device status} bit and cease initialization.
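The driver-side negotiation rule above can be expressed informally over feature bit masks (again an illustration, not part of the specification):

```python
def negotiate(offered, understood):
    """Informal sketch of driver-side feature negotiation over bit masks:
    the driver accepts the intersection of what the device offers and what
    it understands, and never writes back a bit that was not offered."""
    accepted = offered & understood
    assert accepted & ~offered == 0   # MUST NOT accept an unoffered feature
    return accepted
```

Dependencies between features (a feature requiring another) are a separate check the driver must perform on the accepted set before writing it back.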
\devicenormative{\subsection}{Feature Bits}{Basic Facilities of a Virtio Device / Feature Bits}
The device MUST NOT offer a feature which requires another feature
which was not offered. The device SHOULD accept any valid subset
of features the driver accepts, otherwise it MUST fail to set the
FEATURES_OK \field{device status} bit when the driver writes it.
If a device has successfully negotiated a set of features
at least once (by accepting the FEATURES_OK \field{device
status} bit during device initialization), then it SHOULD
NOT fail re-negotiation of the same set of features after
a device or system reset. Failure to do so would interfere
with resuming from suspend and error recovery.
\subsection{Legacy Interface: A Note on Feature
Bits}\label{sec:Basic Facilities of a Virtio Device / Feature
Bits / Legacy Interface: A Note on Feature Bits}
Transitional Drivers MUST detect Legacy Devices by detecting that
the feature bit VIRTIO_F_VERSION_1 is not offered.
Transitional devices MUST detect Legacy drivers by detecting that
VIRTIO_F_VERSION_1 has not been acknowledged by the driver.
In this case the device is used through the legacy interface.
Legacy interface support is OPTIONAL.
Thus, both transitional and non-transitional devices and
drivers are compliant with this specification.
Requirements pertaining to transitional devices and drivers
are contained in sections named 'Legacy Interface' like this one.
When the device is used through the legacy interface, transitional
devices and transitional drivers MUST operate according to the
requirements documented within these legacy interface sections.
Specification text within these sections generally does not apply
to non-transitional devices.
\section{Notifications}\label{sec:Basic Facilities of a Virtio Device
/ Notifications}
The notion of sending a notification (driver to device or device
to driver) plays an important role in this specification. The
modus operandi of the notifications is transport specific.
There are three types of notifications:
\begin{itemize}
\item configuration change notification
\item available buffer notification
\item used buffer notification.
\end{itemize}
Configuration change notifications and used buffer notifications are sent
by the device, the recipient is the driver. A configuration change
notification indicates that the device configuration space has changed; a
used buffer notification indicates that a buffer may have been made used
on the virtqueue designated by the notification.
Available buffer notifications are sent by the driver, the recipient is
the device. This type of notification indicates that a buffer may have
been made available on the virtqueue designated by the notification.
The semantics, the transport-specific implementations, and other
important aspects of the different notifications are specified in detail
in the following chapters.
Most transports implement notifications sent by the device to the
driver using interrupts. Therefore, in previous versions of this
specification, these notifications were often called interrupts.
Some names defined in this specification still retain this interrupt
terminology. Occasionally, the term event is used to refer to
a notification or a receipt of a notification.
\section{Device Configuration Space}\label{sec:Basic Facilities of a Virtio Device / Device Configuration Space}
Device configuration space is generally used for rarely-changing or
initialization-time parameters. Where configuration fields are
optional, their existence is indicated by feature bits: Future
versions of this specification will likely extend the device
configuration space by adding extra fields at the tail.
\begin{note}
The device configuration space uses the little-endian format
for multi-byte fields.
\end{note}
Each transport also provides a generation count for the device configuration
space, which will change whenever there is a possibility that two
accesses to the device configuration space can see different versions of that
space.
\drivernormative{\subsection}{Device Configuration Space}{Basic Facilities of a Virtio Device / Device Configuration Space}
Drivers MUST NOT assume reads from
fields greater than 32 bits wide are atomic, nor are reads from
multiple fields: drivers SHOULD read device configuration space fields like so:
\begin{lstlisting}
u32 before, after;
do {
before = get_config_generation(device);
// read config entry/entries.
after = get_config_generation(device);
} while (after != before);
\end{lstlisting}
For optional configuration space fields, the driver MUST check that the
corresponding feature is offered before accessing that part of the configuration
space.
\begin{note}
See section \ref{sec:General Initialization And Device Operation / Device Initialization} for details on feature negotiation.
\end{note}
Drivers MUST
NOT limit structure size and device configuration space size. Instead,
drivers SHOULD only check that device configuration space is {\em large enough} to
contain the fields necessary for device operation.
\begin{note}
For example, if the specification states that device configuration
space 'includes a single 8-bit field' drivers should understand this to mean that
the device configuration space might also include an arbitrary amount of
tail padding, and accept any device configuration space size equal to or
greater than the specified 8-bit size.
\end{note}
\devicenormative{\subsection}{Device Configuration Space}{Basic Facilities of a Virtio Device / Device Configuration Space}
The device MUST allow reading of any device-specific configuration
field before FEATURES_OK is set by the driver. This includes fields which are
conditional on feature bits, as long as those feature bits are offered
by the device.
\subsection{Legacy Interface: A Note on Device Configuration Space endian-ness}\label{sec:Basic Facilities of a Virtio Device / Device Configuration Space / Legacy Interface: A Note on Configuration Space endian-ness}
Note that for legacy interfaces, device configuration space is generally the
guest's native endian, rather than PCI's little-endian.
The correct endian-ness is documented for each device.
\subsection{Legacy Interface: Device Configuration Space}\label{sec:Basic Facilities of a Virtio Device / Device Configuration Space / Legacy Interface: Device Configuration Space}
Legacy devices did not have a configuration generation field, thus are
susceptible to race conditions if configuration is updated. This
affects the block \field{capacity} (see \ref{sec:Device Types /
Block Device / Device configuration layout}) and
network \field{mac} (see \ref{sec:Device Types / Network Device /
Device configuration layout}) fields;
when using the legacy interface, drivers SHOULD
read these fields multiple times until two reads generate a consistent
result.
\section{Virtqueues}\label{sec:Basic Facilities of a Virtio Device / Virtqueues}
The mechanism for bulk data transport on virtio devices is
pretentiously called a virtqueue. Each device can have zero or more
virtqueues\footnote{For example, the simplest network device has one virtqueue for
transmit and one for receive.}.
Driver makes requests available to device by adding
an available buffer to the queue, i.e., adding a buffer
describing the request to a virtqueue, and optionally triggering
a driver event, i.e., sending an available buffer notification
to the device.
Device executes the requests and - when complete - adds
a used buffer to the queue, i.e., lets the driver
know by marking the buffer as used. Device can then trigger
a device event, i.e., send a used buffer notification to the driver.
Device reports the number of bytes it has written to memory for
each buffer it uses. This is referred to as ``used length''.
Device is not generally required to use buffers in
the same order in which they have been made available
by the driver.
Some devices always use descriptors in the same order in which
they have been made available. These devices can offer the
VIRTIO_F_IN_ORDER feature. If negotiated, this knowledge
might allow optimizations or simplify driver and/or device code.
Each virtqueue can consist of up to 3 parts:
\begin{itemize}
\item Descriptor Area - used for describing buffers
\item Driver Area - extra data supplied by driver to the device
\item Device Area - extra data supplied by device to driver
\end{itemize}
\begin{note}
Note that previous versions of this spec used different names for
these parts (following \ref{sec:Basic Facilities of a Virtio Device / Split Virtqueues}):
\begin{itemize}
\item Descriptor Table - for the Descriptor Area
\item Available Ring - for the Driver Area
\item Used Ring - for the Device Area
\end{itemize}
\end{note}
Two formats are supported: Split Virtqueues (see \ref{sec:Basic
Facilities of a Virtio Device / Split
Virtqueues}~\nameref{sec:Basic Facilities of a Virtio Device /
Split Virtqueues}) and Packed Virtqueues (see \ref{sec:Basic
Facilities of a Virtio Device / Packed
Virtqueues}~\nameref{sec:Basic Facilities of a Virtio Device /
Packed Virtqueues}).
Every driver and device supports either the Packed or the Split
Virtqueue format, or both.
\input{split-ring.tex}
\input{packed-ring.tex}
\section{Driver Notifications} \label{sec:Virtqueues / Driver notifications}
The driver is sometimes required to send an available buffer
notification to the device.
When VIRTIO_F_NOTIFICATION_DATA has not been negotiated,
this notification involves sending the
virtqueue number to the device (method depending on the transport).
However, some devices benefit from the ability to find out the
amount of available data in the queue without accessing the virtqueue in memory:
for efficiency or as a debugging aid.
To help with these optimizations, when VIRTIO_F_NOTIFICATION_DATA
has been negotiated, driver notifications to the device include
the following information:
\begin{description}
\item [vqn] VQ number to be notified.
\item [next_off] Offset
within the ring where the next available ring entry
will be written.
When VIRTIO_F_RING_PACKED has not been negotiated this refers to the
15 least significant bits of the available index.
When VIRTIO_F_RING_PACKED has been negotiated this refers to the offset
(in units of descriptor entries)
within the descriptor ring where the next available
descriptor will be written.
\item [next_wrap] Wrap Counter.
With VIRTIO_F_RING_PACKED this is the wrap counter
referring to the next available descriptor.
Without VIRTIO_F_RING_PACKED this is the most significant bit
(bit 15) of the available index.
\end{description}
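The three fields above are commonly packed into a single 32-bit value (vqn in bits 0-15, \field{next_off} in bits 16-30, \field{next_wrap} in bit 31); the authoritative layout for each transport is given in the transport-specific sections. An informal sketch of this packing:

```python
def pack_notification_data(vqn, next_off, next_wrap):
    """Pack the notification fields into one 32-bit value:
    vqn in bits 0-15, next_off in bits 16-30, next_wrap in bit 31.
    Illustrative only; see the transport sections for the exact format."""
    assert 0 <= vqn < (1 << 16)
    assert 0 <= next_off < (1 << 15)
    assert next_wrap in (0, 1)
    return vqn | (next_off << 16) | (next_wrap << 31)
```

When VIRTIO_F_RING_PACKED has not been negotiated, `next_off` and `next_wrap` together are simply the 16-bit available index split into its low 15 bits and its top bit.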
Note that the driver can send multiple notifications even without
making any more buffers available. When VIRTIO_F_NOTIFICATION_DATA
has been negotiated, these notifications would then have
identical \field{next_off} and \field{next_wrap} values.
\input{shared-mem.tex}
\chapter{General Initialization And Device Operation}\label{sec:General Initialization And Device Operation}
We start with an overview of device initialization, then expand on the
details of the device and how each step is performed. This section
is best read along with the bus-specific section which describes
how to communicate with the specific device.
\section{Device Initialization}\label{sec:General Initialization And Device Operation / Device Initialization}
\drivernormative{\subsection}{Device Initialization}{General Initialization And Device Operation / Device Initialization}
The driver MUST follow this sequence to initialize a device:
\begin{enumerate}
\item Reset the device.
\item Set the ACKNOWLEDGE status bit: the guest OS has noticed the device.
\item Set the DRIVER status bit: the guest OS knows how to drive the device.
\item\label{itm:General Initialization And Device Operation /
Device Initialization / Read feature bits} Read device feature bits, and write the subset of feature bits
understood by the OS and driver to the device. During this step the
driver MAY read (but MUST NOT write) the device-specific configuration fields to check that it can support the device before accepting it.
\item\label{itm:General Initialization And Device Operation / Device Initialization / Set FEATURES-OK} Set the FEATURES_OK status bit. The driver MUST NOT accept
new feature bits after this step.
\item\label{itm:General Initialization And Device Operation / Device Initialization / Re-read FEATURES-OK} Re-read \field{device status} to ensure the FEATURES_OK bit is still
set: otherwise, the device does not support our subset of features
and the device is unusable.
\item\label{itm:General Initialization And Device Operation / Device Initialization / Device-specific Setup} Perform device-specific setup, including discovery of virtqueues for the
device, optional per-bus setup, reading and possibly writing the
device's virtio configuration space, and population of virtqueues.
\item\label{itm:General Initialization And Device Operation / Device Initialization / Set DRIVER-OK} Set the DRIVER_OK status bit. At this point the device is
``live''.
\end{enumerate}
If any of these steps go irrecoverably wrong, the driver SHOULD
set the FAILED status bit to indicate that it has given up on the
device (it can reset the device later to restart if desired). The
driver MUST NOT continue initialization in that case.
The driver MUST NOT send any buffer available notifications to
the device before setting DRIVER_OK.
\subsection{Legacy Interface: Device Initialization}\label{sec:General Initialization And Device Operation / Device Initialization / Legacy Interface: Device Initialization}
Legacy devices did not support the FEATURES_OK status bit, and thus did
not have a graceful way for the device to indicate unsupported feature
combinations. They also did not provide a clear mechanism to end
feature negotiation, which meant that devices finalized features on
first-use, and no features could be introduced which radically changed
the initial operation of the device.
Legacy driver implementations often used the device before setting the
DRIVER_OK bit, and sometimes even before writing the feature bits
to the device.
The result was the steps \ref{itm:General Initialization And
Device Operation / Device Initialization / Set FEATURES-OK} and
\ref{itm:General Initialization And Device Operation / Device
Initialization / Re-read FEATURES-OK} were omitted, and steps
\ref{itm:General Initialization And Device Operation /
Device Initialization / Read feature bits},
\ref{itm:General Initialization And Device Operation / Device Initialization / Device-specific Setup} and \ref{itm:General Initialization And Device Operation / Device Initialization / Set DRIVER-OK}
were conflated.
Therefore, when using the legacy interface:
\begin{itemize}
\item
The transitional driver MUST execute the initialization
sequence as described in \ref{sec:General Initialization And Device
Operation / Device Initialization}
but omitting the steps \ref{itm:General Initialization And Device
Operation / Device Initialization / Set FEATURES-OK} and
\ref{itm:General Initialization And Device Operation / Device
Initialization / Re-read FEATURES-OK}.
\item
The transitional device MUST support the driver
writing device configuration fields
before the step \ref{itm:General Initialization And Device Operation /
Device Initialization / Read feature bits}.
\item
The transitional device MUST support the driver
using the device before the step \ref{itm:General Initialization
And Device Operation / Device Initialization / Set DRIVER-OK}.
\end{itemize}
\section{Device Operation}\label{sec:General Initialization And Device Operation / Device Operation}
When operating the device, each field in the device configuration
space can be changed by either the driver or the device.
Whenever such a configuration change is triggered by the device,
the driver is notified. This makes it possible for drivers to
cache device configuration, avoiding expensive configuration
reads unless notified.
\subsection{Notification of Device Configuration Changes}\label{sec:General Initialization And Device Operation / Device Operation / Notification of Device Configuration Changes}
For devices where the device-specific configuration information can be
changed, a configuration change notification is sent when a
device-specific configuration change occurs.
In addition, this notification is triggered by the device setting
DEVICE_NEEDS_RESET (see \ref{sec:Basic Facilities of a Virtio Device / Device Status Field / DEVICENEEDSRESET}).
\section{Device Cleanup}\label{sec:General Initialization And Device Operation / Device Cleanup}
Once the driver has set the DRIVER_OK status bit, all the configured
virtqueues of the device are considered live. None of the virtqueues
of a device are live once the device has been reset.
\drivernormative{\subsection}{Device Cleanup}{General Initialization And Device Operation / Device Cleanup}
A driver MUST NOT alter virtqueue entries for exposed buffers,
i.e., buffers which have been
made available to the device (and not been used by the device)
of a live virtqueue.
Thus a driver MUST ensure a virtqueue isn't live (by device reset) before removing exposed buffers.
\chapter{Virtio Transport Options}\label{sec:Virtio Transport Options}
Virtio can use various buses; the standard is therefore split
into virtio general and bus-specific sections.
\section{Virtio Over PCI Bus}\label{sec:Virtio Transport Options / Virtio Over PCI Bus}
Virtio devices are commonly implemented as PCI devices.
A Virtio device can be implemented as any kind of PCI device:
a Conventional PCI device or a PCI Express
device. To ensure designs meet the latest level
requirements, see
the PCI-SIG home page at \url{http://www.pcisig.com} for any
approved changes.
\devicenormative{\subsection}{Virtio Over PCI Bus}{Virtio Transport Options / Virtio Over PCI Bus}
A Virtio device using Virtio Over PCI Bus MUST expose to
the guest an interface that meets the specification requirements of
the appropriate PCI specification: \hyperref[intro:PCI]{[PCI]}
and \hyperref[intro:PCIe]{[PCIe]}
respectively.
\subsection{PCI Device Discovery}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Discovery}
Any PCI device with PCI Vendor ID 0x1AF4, and PCI Device ID 0x1000 through
0x107F inclusive is a virtio device. The actual value within this range
indicates which virtio device is supported by the device.
The PCI Device ID is calculated by adding 0x1040 to the Virtio Device ID,
as indicated in section \ref{sec:Device Types}.
Additionally, devices MAY utilize a Transitional PCI Device ID range,
0x1000 to 0x103F, depending on the device type.
\devicenormative{\subsubsection}{PCI Device Discovery}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Discovery}
Devices MUST have the PCI Vendor ID 0x1AF4.
Devices MUST either have the PCI Device ID calculated by adding 0x1040
to the Virtio Device ID, as indicated in section \ref{sec:Device
Types} or have the Transitional PCI Device ID depending on the device type,
as follows:
\begin{tabular}{|l|c|}
\hline
Transitional PCI Device ID & Virtio Device \\
\hline \hline
0x1000 & network card \\
\hline
0x1001 & block device \\
\hline
0x1002 & memory ballooning (traditional) \\
\hline
0x1003 & console \\
\hline
0x1004 & SCSI host \\
\hline
0x1005 & entropy source \\
\hline
0x1009 & 9P transport \\
\hline
\end{tabular}
For example, the network card device with the Virtio Device ID 1
has the PCI Device ID 0x1041 or the Transitional PCI Device ID 0x1000.
The PCI Subsystem Vendor ID and the PCI Subsystem Device ID MAY reflect
the PCI Vendor and Device ID of the environment (for informational purposes by the driver).
Non-transitional devices SHOULD have a PCI Device ID in the range
0x1040 to 0x107f.
Non-transitional devices SHOULD have a PCI Revision ID of 1 or higher.
Non-transitional devices SHOULD have a PCI Subsystem Device ID of 0x40 or higher.
This is to reduce the chance of a legacy driver attempting
to drive the device.
\drivernormative{\subsubsection}{PCI Device Discovery}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Discovery}
Drivers MUST match devices with the PCI Vendor ID 0x1AF4 and
the PCI Device ID in the range 0x1040 to 0x107f,
calculated by adding 0x1040 to the Virtio Device ID,
as indicated in section \ref{sec:Device Types}.
Drivers for device types listed in section \ref{sec:Virtio
Transport Options / Virtio Over PCI Bus / PCI Device Discovery}
MUST match devices with the PCI Vendor ID 0x1AF4 and
the Transitional PCI Device ID indicated in section
\ref{sec:Virtio
Transport Options / Virtio Over PCI Bus / PCI Device Discovery}.
Drivers MUST match any PCI Revision ID value.
Drivers MAY match any PCI Subsystem Vendor ID and any
PCI Subsystem Device ID value.
\subsubsection{Legacy Interfaces: A Note on PCI Device Discovery}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Discovery / Legacy Interfaces: A Note on PCI Device Discovery}
Transitional devices MUST have a PCI Revision ID of 0.
Transitional devices MUST have the PCI Subsystem Device ID
matching the Virtio Device ID, as indicated in section \ref{sec:Device Types}.
Transitional devices MUST have the Transitional PCI Device ID in
the range 0x1000 to 0x103f.
This is to match legacy drivers.
\subsection{PCI Device Layout}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout}
The device is configured via I/O and/or memory regions (though see
\ref{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / PCI configuration access capability}
for access via the PCI configuration space), as specified by Virtio
Structure PCI Capabilities.
Fields of different sizes are present in the device
configuration regions.
All 64-bit, 32-bit and 16-bit fields are little-endian.
64-bit fields are to be treated as two 32-bit fields,
with the low 32-bit part followed by the high 32-bit part.
\drivernormative{\subsubsection}{PCI Device Layout}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout}
For device configuration access, the driver MUST use 8-bit wide
accesses for 8-bit wide fields, 16-bit wide and aligned accesses
for 16-bit wide fields and 32-bit wide and aligned accesses for
32-bit and 64-bit wide fields. For 64-bit fields, the driver MAY
access each of the high and low 32-bit parts of the field
independently.
\devicenormative{\subsubsection}{PCI Device Layout}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout}
For 64-bit device configuration fields, the device MUST allow driver
independent access to high and low 32-bit parts of the field.
\subsection{Virtio Structure PCI Capabilities}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / Virtio Structure PCI Capabilities}
The virtio device configuration layout includes several structures:
\begin{itemize}
\item Common configuration
\item Notifications
\item ISR Status
\item Device-specific configuration (optional)
\item PCI configuration access
\end{itemize}
Each structure can be mapped by a Base Address register (BAR) belonging to
the function, or accessed via the special VIRTIO_PCI_CAP_PCI_CFG field in the PCI configuration space.
The location of each structure is specified using a vendor-specific PCI capability located
on the capability list in PCI configuration space of the device.
This virtio structure capability uses little-endian format; all fields are
read-only for the driver unless stated otherwise:
\begin{lstlisting}
struct virtio_pci_cap {
u8 cap_vndr; /* Generic PCI field: PCI_CAP_ID_VNDR */
u8 cap_next; /* Generic PCI field: next ptr. */
u8 cap_len; /* Generic PCI field: capability length */
u8 cfg_type; /* Identifies the structure. */
u8 bar; /* Where to find it. */
u8 id; /* Multiple capabilities of the same type */
u8 padding[2]; /* Pad to full dword. */
le32 offset; /* Offset within bar. */
le32 length; /* Length of the structure, in bytes. */
};
\end{lstlisting}
This structure can be followed by extra data, depending on
\field{cfg_type}, as documented below.
The fields are interpreted as follows:
\begin{description}
\item[\field{cap_vndr}]
0x09; Identifies a vendor-specific capability.
\item[\field{cap_next}]
Link to next capability in the capability list in the PCI configuration space.
\item[\field{cap_len}]
Length of this capability structure, including the whole of
struct virtio_pci_cap, and extra data if any.
This length MAY include padding, or fields unused by the driver.
\item[\field{cfg_type}]
identifies the structure, according to the following table:
\begin{lstlisting}
/* Common configuration */
#define VIRTIO_PCI_CAP_COMMON_CFG 1
/* Notifications */
#define VIRTIO_PCI_CAP_NOTIFY_CFG 2
/* ISR Status */
#define VIRTIO_PCI_CAP_ISR_CFG 3
/* Device specific configuration */
#define VIRTIO_PCI_CAP_DEVICE_CFG 4
/* PCI configuration access */
#define VIRTIO_PCI_CAP_PCI_CFG 5
/* Shared memory region */
#define VIRTIO_PCI_CAP_SHARED_MEMORY_CFG 8
\end{lstlisting}
Any other value is reserved for future use.
Each structure is detailed individually below.
The device MAY offer more than one structure of any type - this makes it
possible for the device to expose multiple interfaces to drivers. The order of
the capabilities in the capability list specifies the order of preference
suggested by the device. A device may specify that this ordering mechanism be
overridden by the use of the \field{id} field.
\begin{note}
For example, on some hypervisors, notifications using IO accesses are
faster than memory accesses. In this case, the device would expose two
capabilities with \field{cfg_type} set to VIRTIO_PCI_CAP_NOTIFY_CFG:
the first one addressing an I/O BAR, the second one addressing a memory BAR.
In this example, the driver would use the I/O BAR if I/O resources are available, and fall back on
memory BAR when I/O resources are unavailable.
\end{note}
\item[\field{bar}]
values 0x0 to 0x5 specify a Base Address register (BAR) belonging to
the function located beginning at 10h in PCI Configuration Space
and used to map the structure into Memory or I/O Space.
The BAR is permitted to be either 32-bit or 64-bit; it can map Memory Space
or I/O Space.
Any other value is reserved for future use.
\item[\field{id}]
Used by some device types to uniquely identify multiple capabilities
of a certain type. If the device type does not specify the meaning of
this field, its contents are undefined.
\item[\field{offset}]
indicates where the structure begins relative to the base address associated
with the BAR. The alignment requirements of \field{offset} are indicated
in each structure-specific section below.
\item[\field{length}]
indicates the length of the structure.
\field{length} MAY include padding, or fields unused by the driver, or
future extensions.
\begin{note}
For example, a future device might present a large structure size of several
MBytes.
As current devices never utilize structures larger than 4KBytes,
a driver MAY limit the mapped structure size to e.g.
4KBytes (thus ignoring parts of structure after the first
4KBytes) to allow forward compatibility with such devices without loss of
functionality and without wasting resources.
\end{note}
\end{description}
A variant of this type, struct virtio_pci_cap64, is defined for
those capabilities that require offsets or lengths larger than
4GiB:
\begin{lstlisting}
struct virtio_pci_cap64 {
struct virtio_pci_cap cap;
u32 offset_hi;
u32 length_hi;
};
\end{lstlisting}
Given that the \field{cap.length} and \field{cap.offset} fields
are only 32 bit, the additional \field{offset_hi} and \field{length_hi}
fields provide the most significant 32 bits of a total 64 bit offset and
length within the BAR specified by \field{cap.bar}.
\drivernormative{\subsubsection}{Virtio Structure PCI Capabilities}{Virtio Transport Options / Virtio Over PCI Bus / Virtio Structure PCI Capabilities}
The driver MUST ignore any vendor-specific capability structure which has
a reserved \field{cfg_type} value.
The driver SHOULD use the first instance of each virtio structure type it can
support.
The driver MUST accept a \field{cap_len} value which is larger than specified here.
The driver MUST ignore any vendor-specific capability structure which has
a reserved \field{bar} value.
The driver SHOULD map only the part of the configuration structure
large enough for device operation. The driver MUST handle
an unexpectedly large \field{length}, but MAY check that \field{length}
is large enough for device operation.
The driver MUST NOT write into any field of the capability structure,
with the exception of those with \field{cap_type} VIRTIO_PCI_CAP_PCI_CFG as
detailed in \ref{drivernormative:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / PCI configuration access capability}.
\devicenormative{\subsubsection}{Virtio Structure PCI Capabilities}{Virtio Transport Options / Virtio Over PCI Bus / Virtio Structure PCI Capabilities}
The device MUST include any extra data (from the beginning of the \field{cap_vndr} field
through end of the extra data fields if any) in \field{cap_len}.
The device MAY append extra data
or padding to any structure beyond that.
If the device presents multiple structures of the same type, it SHOULD order
them from optimal (first) to least-optimal (last).
\subsubsection{Common configuration structure layout}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Common configuration structure layout}
The common configuration structure is found at the \field{bar} and \field{offset} within the VIRTIO_PCI_CAP_COMMON_CFG capability; its layout is below.
\begin{lstlisting}
struct virtio_pci_common_cfg {
/* About the whole device. */
le32 device_feature_select; /* read-write */
le32 device_feature; /* read-only for driver */
le32 driver_feature_select; /* read-write */
le32 driver_feature; /* read-write */
le16 config_msix_vector; /* read-write */
le16 num_queues; /* read-only for driver */
u8 device_status; /* read-write */
u8 config_generation; /* read-only for driver */
/* About a specific virtqueue. */
le16 queue_select; /* read-write */
le16 queue_size; /* read-write */
le16 queue_msix_vector; /* read-write */
le16 queue_enable; /* read-write */
le16 queue_notify_off; /* read-only for driver */
le64 queue_desc; /* read-write */
le64 queue_driver; /* read-write */
le64 queue_device; /* read-write */
};
\end{lstlisting}
\begin{description}
\item[\field{device_feature_select}]
The driver uses this to select which feature bits \field{device_feature} shows.
Value 0x0 selects Feature Bits 0 to 31, 0x1 selects Feature Bits 32 to 63, etc.
\item[\field{device_feature}]
The device uses this to report which feature bits it is
offering to the driver: the driver writes to
\field{device_feature_select} to select which feature bits are presented.
\item[\field{driver_feature_select}]
The driver uses this to select which feature bits \field{driver_feature} shows.
Value 0x0 selects Feature Bits 0 to 31, 0x1 selects Feature Bits 32 to 63, etc.
\item[\field{driver_feature}]
The driver writes this to accept feature bits offered by the device.
The feature bits written are those selected by \field{driver_feature_select}.
\item[\field{config_msix_vector}]
The driver sets the Configuration Vector for MSI-X.
\item[\field{num_queues}]
The device specifies the maximum number of virtqueues supported here.
\item[\field{device_status}]
The driver writes the device status here (see \ref{sec:Basic Facilities of a Virtio Device / Device Status Field}). Writing 0 into this
field resets the device.
\item[\field{config_generation}]
Configuration atomicity value. The device changes this every time the
configuration noticeably changes.
\item[\field{queue_select}]
Queue Select. The driver selects which virtqueue the following
fields refer to.
\item[\field{queue_size}]
Queue Size. On reset, specifies the maximum queue size supported by
the device. This can be modified by the driver to reduce memory requirements.
A 0 means the queue is unavailable.
\item[\field{queue_msix_vector}]
The driver uses this to specify the queue vector for MSI-X.
\item[\field{queue_enable}]
The driver uses this to selectively prevent the device from executing requests from this virtqueue.
1 - enabled; 0 - disabled.
\item[\field{queue_notify_off}]
The driver reads this to calculate the offset from the start of the Notification structure at
which this virtqueue is located.
\begin{note} This is \emph{not} an offset in bytes.
See \ref{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Notification capability} below.
\end{note}
\item[\field{queue_desc}]
The driver writes the physical address of Descriptor Area here. See section \ref{sec:Basic Facilities of a Virtio Device / Virtqueues}.
\item[\field{queue_driver}]
The driver writes the physical address of Driver Area here. See section \ref{sec:Basic Facilities of a Virtio Device / Virtqueues}.
\item[\field{queue_device}]
The driver writes the physical address of Device Area here. See section \ref{sec:Basic Facilities of a Virtio Device / Virtqueues}.
\end{description}
\devicenormative{\paragraph}{Common configuration structure layout}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Common configuration structure layout}
\field{offset} MUST be 4-byte aligned.
The device MUST present at least one common configuration capability.
The device MUST present the feature bits it is offering in \field{device_feature}, starting at bit \field{device_feature_select} $*$ 32 for any \field{device_feature_select} written by the driver.
\begin{note}
This means that it will present 0 for any \field{device_feature_select} other than 0 or 1, since no feature defined here exceeds 63.
\end{note}
The device MUST present any valid feature bits the driver has written in \field{driver_feature}, starting at bit \field{driver_feature_select} $*$ 32 for any \field{driver_feature_select} written by the driver. Valid feature bits are those which are subset of the corresponding \field{device_feature} bits. The device MAY present invalid bits written by the driver.
\begin{note}
This means that a device can ignore writes for feature bits it never
offers, and simply present 0 on reads. Or it can just mirror what the driver wrote
(but it will still have to check them when the driver sets FEATURES_OK).
\end{note}
\begin{note}
A driver shouldn't write invalid bits anyway, as per \ref{drivernormative:General Initialization And Device Operation / Device Initialization}, but this attempts to handle it.
\end{note}
The device MUST present a changed \field{config_generation} after the
driver has read a device-specific configuration value which has
changed since any part of the device-specific configuration was last
read.
\begin{note}
As \field{config_generation} is an 8-bit value, simply incrementing it
on every configuration change could violate this requirement due to wrap.
Better would be to set an internal flag when it has changed,
and if that flag is set when the driver reads from the device-specific
configuration, increment \field{config_generation} and clear the flag.
\end{note}
The device MUST reset when 0 is written to \field{device_status}, and
present a 0 in \field{device_status} once that is done.
The device MUST present a 0 in \field{queue_enable} on reset.
The device MUST present a 0 in \field{queue_size} if the virtqueue
corresponding to the current \field{queue_select} is unavailable.
If VIRTIO_F_RING_PACKED has not been negotiated, the device MUST
present either a value of 0 or a power of 2 in
\field{queue_size}.
\drivernormative{\paragraph}{Common configuration structure layout}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Common configuration structure layout}
The driver MUST NOT write to \field{device_feature}, \field{num_queues}, \field{config_generation} or \field{queue_notify_off}.
If VIRTIO_F_RING_PACKED has been negotiated,
the driver MUST NOT write the value 0 to \field{queue_size}.
If VIRTIO_F_RING_PACKED has not been negotiated,
the driver MUST NOT write a value which is not a power of 2 to \field{queue_size}.
The driver MUST configure the other virtqueue fields before enabling the virtqueue
with \field{queue_enable}.
After writing 0 to \field{device_status}, the driver MUST wait for a read of
\field{device_status} to return 0 before reinitializing the device.
The driver MUST NOT write a 0 to \field{queue_enable}.
\subsubsection{Notification structure layout}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Notification capability}
The notification location is found using the VIRTIO_PCI_CAP_NOTIFY_CFG
capability. This capability is immediately followed by an additional
field, like so:
\begin{lstlisting}
struct virtio_pci_notify_cap {
struct virtio_pci_cap cap;
le32 notify_off_multiplier; /* Multiplier for queue_notify_off. */
};
\end{lstlisting}
\field{notify_off_multiplier} is combined with the \field{queue_notify_off} to
derive the Queue Notify address within a BAR for a virtqueue:
\begin{lstlisting}
cap.offset + queue_notify_off * notify_off_multiplier
\end{lstlisting}
The \field{cap.offset} and \field{notify_off_multiplier} are taken from the
notification capability structure above, and the \field{queue_notify_off} is
taken from the common configuration structure.
\begin{note}
For example, if \field{notify_off_multiplier} is 0, the device uses
the same Queue Notify address for all queues.
\end{note}
\devicenormative{\paragraph}{Notification capability}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Notification capability}
The device MUST present at least one notification capability.
For devices not offering VIRTIO_F_NOTIFICATION_DATA:
The \field{cap.offset} MUST be 2-byte aligned.
The device MUST either present \field{notify_off_multiplier} as an even power of 2,
or present \field{notify_off_multiplier} as 0.
The value \field{cap.length} presented by the device MUST be at least 2
and MUST be large enough to support queue notification offsets
for all supported queues in all possible configurations.
For all queues, the value \field{cap.length} presented by the device MUST satisfy:
\begin{lstlisting}
cap.length >= queue_notify_off * notify_off_multiplier + 2
\end{lstlisting}
For devices offering VIRTIO_F_NOTIFICATION_DATA:
The device MUST either present \field{notify_off_multiplier} as a
power of 2 that is also a multiple of 4,
or present \field{notify_off_multiplier} as 0.
The \field{cap.offset} MUST be 4-byte aligned.
The value \field{cap.length} presented by the device MUST be at least 4
and MUST be large enough to support queue notification offsets
for all supported queues in all possible configurations.
For all queues, the value \field{cap.length} presented by the device MUST satisfy:
\begin{lstlisting}
cap.length >= queue_notify_off * notify_off_multiplier + 4
\end{lstlisting}
\subsubsection{ISR status capability}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / ISR status capability}
The VIRTIO_PCI_CAP_ISR_CFG capability
refers to at least a single byte, which contains the 8-bit ISR status field
to be used for INT\#x interrupt handling.
The \field{offset} for the \field{ISR status} has no alignment requirements.
The ISR bits allow the device to distinguish between device-specific configuration
change interrupts and normal virtqueue interrupts:
\begin{tabular}{ |l||l|l|l| }
\hline
Bits & 0 & 1 & 2 to 31 \\
\hline
Purpose & Queue Interrupt & Device Configuration Interrupt & Reserved \\
\hline
\end{tabular}
To avoid an extra access, reading this register resets it to 0 and
causes the device to de-assert the interrupt; a single driver read of
the ISR status thus both acknowledges and clears the interrupt.
See sections \ref{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Used Buffer Notifications} and \ref{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Notification of Device Configuration Changes} for how this is used.
\devicenormative{\paragraph}{ISR status capability}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / ISR status capability}
The device MUST present at least one VIRTIO_PCI_CAP_ISR_CFG capability.
The device MUST set the Device Configuration Interrupt bit
in \field{ISR status} before sending a device configuration
change notification to the driver.
If MSI-X capability is disabled, the device MUST set the Queue
Interrupt bit in \field{ISR status} before sending a virtqueue
notification to the driver.
If MSI-X capability is disabled, the device MUST set the Interrupt Status
bit in the PCI Status register in the PCI Configuration Header of
the device to the logical OR of all bits in \field{ISR status} of
the device. The device then asserts/deasserts INT\#x interrupts unless masked
according to standard PCI rules \hyperref[intro:PCI]{[PCI]}.
The device MUST reset \field{ISR status} to 0 on driver read.
\drivernormative{\paragraph}{ISR status capability}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / ISR status capability}
If MSI-X capability is enabled, the driver SHOULD NOT access
\field{ISR status} upon detecting a Queue Interrupt.
\subsubsection{Device-specific configuration}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Device-specific configuration}
The device MUST present at least one VIRTIO_PCI_CAP_DEVICE_CFG capability for
any device type which has a device-specific configuration.
\devicenormative{\paragraph}{Device-specific configuration}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Device-specific configuration}
The \field{offset} for the device-specific configuration MUST be 4-byte aligned.
\subsubsection{Shared memory capability}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Shared memory capability}
Shared memory regions \ref{sec:Basic Facilities of a Virtio
Device / Shared Memory Regions} are enumerated on the PCI transport
as a sequence of VIRTIO_PCI_CAP_SHARED_MEMORY_CFG capabilities, one per region.
The capability is defined by a struct virtio_pci_cap64 and
utilises the \field{cap.id} to allow multiple shared memory
regions per device.
The identifier in \field{cap.id} does not denote a certain order of
preference; it is only used to uniquely identify a region.
\devicenormative{\paragraph}{Shared memory capability}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Shared memory capability}
The region defined by the combination of the \field{cap.offset},
\field{cap.offset_hi}, \field{cap.length} and \field{cap.length_hi}
fields MUST be contained within the declared BAR.
The \field{cap.id} MUST be unique for any one device instance.
\subsubsection{PCI configuration access capability}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / PCI configuration access capability}
The VIRTIO_PCI_CAP_PCI_CFG capability
creates an alternative (and likely suboptimal) access method to the
common configuration, notification, ISR and device-specific configuration regions.
The capability is immediately followed by an additional field like so:
\begin{lstlisting}
struct virtio_pci_cfg_cap {
struct virtio_pci_cap cap;
u8 pci_cfg_data[4]; /* Data for BAR access. */
};
\end{lstlisting}
The fields \field{cap.bar}, \field{cap.length}, \field{cap.offset} and
\field{pci_cfg_data} are read-write (RW) for the driver.
To access a device region, the driver writes into the capability
structure (ie. within the PCI configuration space) as follows:
\begin{itemize}
\item The driver sets the BAR to access by writing to \field{cap.bar}.
\item The driver sets the size of the access by writing 1, 2 or 4 to
\field{cap.length}.
\item The driver sets the offset within the BAR by writing to
\field{cap.offset}.
\end{itemize}
At that point, \field{pci_cfg_data} will provide a window of size
\field{cap.length} into the given \field{cap.bar} at offset \field{cap.offset}.
\devicenormative{\paragraph}{PCI configuration access capability}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / PCI configuration access capability}
The device MUST present at least one VIRTIO_PCI_CAP_PCI_CFG capability.
Upon detecting driver write access
to \field{pci_cfg_data}, the device MUST execute a write access
at offset \field{cap.offset} at BAR selected by \field{cap.bar} using the first \field{cap.length}
bytes from \field{pci_cfg_data}.
Upon detecting driver read access
to \field{pci_cfg_data}, the device MUST
execute a read access of length \field{cap.length} at offset \field{cap.offset}
at BAR selected by \field{cap.bar} and store the first \field{cap.length} bytes in
\field{pci_cfg_data}.
\drivernormative{\paragraph}{PCI configuration access capability}{Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / PCI configuration access capability}
The driver MUST NOT write a \field{cap.offset} which is not
a multiple of \field{cap.length} (ie. all accesses MUST be aligned).
The driver MUST NOT read or write \field{pci_cfg_data}
unless \field{cap.bar}, \field{cap.length} and \field{cap.offset}
address \field{cap.length} bytes within a BAR range
specified by some other Virtio Structure PCI Capability
of type other than \field{VIRTIO_PCI_CAP_PCI_CFG}.
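The driver-side validity checks implied by these requirements can be sketched as follows; this is informative only, and the BARs 0..5 bound is an assumption about standard PCI functions rather than something this capability defines:

```c
#include <stdint.h>

/* Check a proposed VIRTIO_PCI_CAP_PCI_CFG window access: the access
 * size written to cap.length must be 1, 2 or 4 bytes, and cap.offset
 * must be a multiple of cap.length (all accesses aligned). */
static int cfg_window_valid(uint8_t bar, uint32_t offset, uint32_t length)
{
    if (length != 1 && length != 2 && length != 4)
        return 0;           /* cap.length must be 1, 2 or 4 */
    if (offset % length != 0)
        return 0;           /* cap.offset must be length-aligned */
    return bar < 6;         /* assumption: standard PCI BARs 0..5 */
}
```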
\subsubsection{Legacy Interfaces: A Note on PCI Device Layout}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Legacy Interfaces: A Note on PCI Device Layout}
Transitional devices MUST present part of configuration
registers in a legacy configuration structure in BAR0 in the first I/O
region of the PCI device, as documented below.
When using the legacy interface, transitional drivers
MUST use the legacy configuration structure in BAR0 in the first
I/O region of the PCI device, as documented below.
When using the legacy interface, the driver MAY access
the device-specific configuration region using any width accesses, and
a transitional device MUST present driver with the same results as
when accessed using the ``natural'' access method (i.e.
32-bit accesses for 32-bit fields, etc).
Note that this is possible because while the virtio common configuration structure is PCI
(i.e. little) endian, when using the legacy interface the device-specific
configuration region is encoded in the native endian of the guest (where such distinction is
applicable).
When used through the legacy interface, the virtio common configuration structure looks as follows:
\begin{tabularx}{\textwidth}{ |X||X|X|X|X|X|X|X|X| }
\hline
Bits & 32 & 32 & 32 & 16 & 16 & 16 & 8 & 8 \\
\hline
Read / Write & R & R+W & R+W & R & R+W & R+W & R+W & R \\
\hline
Purpose & Device Features bits 0:31 & Driver Features bits 0:31 &
Queue Address & \field{queue_size} & \field{queue_select} & Queue Notify &
Device Status & ISR \newline Status \\
\hline
\end{tabularx}
If MSI-X is enabled for the device, two additional fields
immediately follow this header:
\begin{tabular}{ |l||l|l| }
\hline
Bits & 16 & 16 \\
\hline
Read/Write & R+W & R+W \\
\hline
Purpose (MSI-X) & \field{config_msix_vector} & \field{queue_msix_vector} \\
\hline
\end{tabular}
Note: When MSI-X capability is enabled, device-specific configuration starts at
byte offset 24 in the virtio common configuration structure. When MSI-X capability is not
enabled, device-specific configuration starts at byte offset 20 in the virtio
header. ie. once you enable MSI-X on the device, the other fields move.
If you turn it off again, they move back!
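The resulting offset calculation can be sketched as follows (informative only):

```c
/* Byte offset of the device-specific configuration space within the
 * legacy virtio header: the common fields occupy 20 bytes
 * (32+32+32+16+16+16+8+8 bits), and two 16-bit MSI-X vector fields
 * follow them when MSI-X is enabled. */
static unsigned legacy_devcfg_offset(int msix_enabled)
{
    return msix_enabled ? 24u : 20u;
}
```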
Any device-specific configuration space immediately follows
these general headers:
\begin{tabular}{|l||l|l|}
\hline
Bits & Device Specific & \multirow{3}{*}{\ldots} \\
\cline{1-2}
Read / Write & Device Specific & \\
\cline{1-2}
Purpose & Device Specific & \\
\hline
\end{tabular}
When accessing the device-specific configuration space
using the legacy interface, transitional
drivers MUST access the device-specific configuration space
at an offset immediately following the general headers.
When using the legacy interface, transitional
devices MUST present the device-specific configuration space,
if any, at an offset immediately following the general headers.
Note that only Feature Bits 0 to 31 are accessible through the
Legacy Interface. When used through the Legacy Interface,
Transitional Devices MUST assume that Feature Bits 32 to 63
are not acknowledged by the driver.
As legacy devices had no \field{config_generation} field,
see \ref{sec:Basic Facilities of a Virtio Device / Device
Configuration Space / Legacy Interface: Device Configuration
Space}~\nameref{sec:Basic Facilities of a Virtio Device / Device Configuration Space / Legacy Interface: Device Configuration Space} for workarounds.
\subsubsection{Non-transitional Device With Legacy Driver: A Note
on PCI Device Layout}\label{sec:Virtio Transport Options / Virtio
Over PCI Bus / PCI Device Layout / Non-transitional Device With
Legacy Driver: A Note on PCI Device Layout}
All known legacy drivers check either the PCI Revision or the
Device and Vendor IDs, and thus won't attempt to drive a
non-transitional device.
A buggy legacy driver might mistakenly attempt to drive a
non-transitional device. If support for such drivers is required
(as opposed to fixing the bug), the following would be the
recommended way to detect and handle them.
\begin{note}
Such buggy drivers are not currently known to be used in
production.
\end{note}
\subparagraph{Device Requirements: Non-transitional Device With Legacy Driver}
\label{drivernormative:Virtio Transport Options / Virtio Over PCI
Bus / PCI-specific Initialization And Device Operation /
Device Initialization / Non-transitional Device With Legacy
Driver}
\label{devicenormative:Virtio Transport Options / Virtio Over PCI
Bus / PCI-specific Initialization And Device Operation /
Device Initialization / Non-transitional Device With Legacy
Driver}
Non-transitional devices, on a platform where a legacy driver for
a legacy device with the same ID (including PCI Revision, Device
and Vendor IDs) is known to have previously existed,
SHOULD take the following steps to cause the legacy driver to
fail gracefully when it attempts to drive them:
\begin{enumerate}
\item Present an I/O BAR in BAR0, and
\item Respond to a single-byte zero write to offset 18
(corresponding to Device Status register in the legacy layout)
of BAR0 by presenting zeroes on every BAR and ignoring writes.
\end{enumerate}
\subsection{PCI-specific Initialization And Device Operation}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation}
\subsubsection{Device Initialization}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Device Initialization}
This documents PCI-specific steps executed during Device Initialization.
\paragraph{Virtio Device Configuration Layout Detection}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Device Initialization / Virtio Device Configuration Layout Detection}
As a prerequisite to device initialization, the driver scans the
PCI capability list, detecting virtio configuration layout using Virtio
Structure PCI capabilities as detailed in \ref{sec:Virtio Transport Options / Virtio Over PCI Bus / Virtio Structure PCI Capabilities}.
\subparagraph{Legacy Interface: A Note on Device Layout Detection}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Device Initialization / Virtio Device Configuration Layout Detection / Legacy Interface: A Note on Device Layout Detection}
Legacy drivers skipped the Device Layout Detection step, assuming legacy
device configuration space in BAR0 in I/O space unconditionally.
Legacy devices did not have the Virtio PCI Capability in their
capability list.
Therefore:
Transitional devices MUST expose the Legacy Interface in I/O
space in BAR0.
Transitional drivers MUST look for the Virtio PCI
Capabilities on the capability list.
If these are not present, driver MUST assume a legacy device,
and use it through the legacy interface.
Non-transitional drivers MUST look for the Virtio PCI
Capabilities on the capability list.
If these are not present, driver MUST assume a legacy device,
and fail gracefully.
\paragraph{MSI-X Vector Configuration}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Device Initialization / MSI-X Vector Configuration}
When MSI-X capability is present and enabled in the device
(through standard PCI configuration space) \field{config_msix_vector} and \field{queue_msix_vector} are used to map configuration change and queue
interrupts to MSI-X vectors. In this case, the ISR Status is unused.
Writing a valid MSI-X Table entry number, 0 to 0x7FF, to
\field{config_msix_vector}/\field{queue_msix_vector} maps interrupts triggered
by the configuration change/selected queue events respectively to
the corresponding MSI-X vector. To disable interrupts for an
event type, the driver unmaps this event by writing a special NO_VECTOR
value:
\begin{lstlisting}
/* Vector value used to disable MSI for queue */
#define VIRTIO_MSI_NO_VECTOR 0xffff
\end{lstlisting}
Note that mapping an event to a vector might require the device to
allocate internal device resources, and thus could fail.
\devicenormative{\subparagraph}{MSI-X Vector Configuration}{Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Device Initialization / MSI-X Vector Configuration}
A device that has an MSI-X capability SHOULD support at least 2
and at most 0x800 MSI-X vectors.
Device MUST report the number of vectors supported in
\field{Table Size} in the MSI-X Capability as specified in
\hyperref[intro:PCI]{[PCI]}.
The device SHOULD restrict the reported MSI-X Table Size field
to a value that might benefit system performance.
\begin{note}
For example, a device which does not expect to send
interrupts at a high rate might only specify 2 MSI-X vectors.
\end{note}
Device MUST support mapping any event type to any valid
vector 0 to MSI-X \field{Table Size}.
Device MUST support unmapping any event type.
The device MUST return vector mapped to a given event,
(NO_VECTOR if unmapped) on read of \field{config_msix_vector}/\field{queue_msix_vector}.
The device MUST have all queue and configuration change
events unmapped upon reset.
Devices SHOULD NOT cause mapping an event to vector to fail
unless it is impossible for the device to satisfy the mapping
request. Devices MUST report mapping
failures by returning the NO_VECTOR value when the relevant
\field{config_msix_vector}/\field{queue_msix_vector} field is read.
\drivernormative{\subparagraph}{MSI-X Vector Configuration}{Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Device Initialization / MSI-X Vector Configuration}
Driver MUST support device with any MSI-X Table Size 0 to 0x7FF.
Driver MAY fall back on using INT\#x interrupts for a device
which only supports one MSI-X vector (MSI-X Table Size = 0).
Driver MAY interpret the Table Size as a hint from the device
for the suggested number of MSI-X vectors to use.
Driver MUST NOT attempt to map an event to a vector
outside the MSI-X Table supported by the device,
as reported by \field{Table Size} in the MSI-X Capability.
After mapping an event to a vector, the
driver MUST verify success by reading the Vector field value: on
success, the previously written value is returned, and on
failure, NO_VECTOR is returned. If a mapping failure is detected,
the driver MAY retry mapping with fewer vectors, disable MSI-X
or report device failure.
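The mapping-and-verify sequence above can be sketched as below. The msix_reg structure and the write_vec/read_vec helpers are a hypothetical in-memory model of the \field{config_msix_vector}/\field{queue_msix_vector} registers, not a real device interface:

```c
#include <stdint.h>

#define VIRTIO_MSI_NO_VECTOR 0xffff

typedef struct {
    uint16_t vector;  /* hypothetical device-side register model */
    uint16_t limit;   /* number of vectors supported (Table Size + 1) */
} msix_reg;

/* The modeled device rejects a mapping by latching NO_VECTOR. */
static void write_vec(msix_reg *r, uint16_t v)
{
    r->vector = (v == VIRTIO_MSI_NO_VECTOR || v < r->limit)
                ? v : VIRTIO_MSI_NO_VECTOR;
}

static uint16_t read_vec(const msix_reg *r) { return r->vector; }

/* Driver-side verification: write the vector, then read it back.
 * Returns 1 on success, 0 if the device reported a mapping failure. */
static int map_event(msix_reg *r, uint16_t vector)
{
    write_vec(r, vector);
    return read_vec(r) == vector;
}
```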
\paragraph{Virtqueue Configuration}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Device Initialization / Virtqueue Configuration}
As a device can have zero or more virtqueues for bulk data
transport\footnote{For example, the simplest network device has two virtqueues.}, the driver
needs to configure them as part of the device-specific
configuration.
The driver typically does this as follows, for each virtqueue a device has:
\begin{enumerate}
\item Write the virtqueue index (first queue is 0) to \field{queue_select}.
\item Read the virtqueue size from \field{queue_size}. This controls how big the virtqueue is
(see \ref{sec:Basic Facilities of a Virtio Device / Virtqueues}~\nameref{sec:Basic Facilities of a Virtio Device / Virtqueues}). If this field is 0, the virtqueue does not exist.
\item Optionally, select a smaller virtqueue size and write it to \field{queue_size}.
\item Allocate and zero Descriptor Table, Available and Used rings for the
virtqueue in contiguous physical memory.
\item Optionally, if MSI-X capability is present and enabled on the
device, select a vector to use to request interrupts triggered
by virtqueue events. Write the MSI-X Table entry number
corresponding to this vector into \field{queue_msix_vector}. Read
\field{queue_msix_vector}: on success, previously written value is
returned; on failure, NO_VECTOR value is returned.
\end{enumerate}
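The discovery part of this loop can be sketched with a mock register model; vq_regs and its fields are illustrative stand-ins, not a real transport access:

```c
#include <stdint.h>

/* Hypothetical model of the queue_select/queue_size pair, just to
 * illustrate the discovery loop: select a queue, read its size, and
 * treat size 0 as "this virtqueue does not exist". */
typedef struct {
    uint16_t queue_select;
    const uint16_t *sizes;   /* sizes[i] = max size of queue i */
    uint16_t nqueues;
} vq_regs;

static uint16_t read_queue_size(vq_regs *r)
{
    return r->queue_select < r->nqueues ? r->sizes[r->queue_select] : 0;
}

/* Count the virtqueues a device exposes: write the index to
 * queue_select, read queue_size, stop at the first zero. */
static unsigned count_virtqueues(vq_regs *r)
{
    unsigned n = 0;
    for (uint16_t i = 0; ; i++) {
        r->queue_select = i;
        if (read_queue_size(r) == 0)
            break;
        n++;
    }
    return n;
}
```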
\subparagraph{Legacy Interface: A Note on Virtqueue Configuration}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Device Initialization / Virtqueue Configuration / Legacy Interface: A Note on Virtqueue Configuration}
When using the legacy interface, the queue layout follows \ref{sec:Basic Facilities of a Virtio Device / Virtqueues / Legacy Interfaces: A Note on Virtqueue Layout}~\nameref{sec:Basic Facilities of a Virtio Device / Virtqueues / Legacy Interfaces: A Note on Virtqueue Layout} with an alignment of 4096.
Driver writes the physical address, divided
by 4096, to the Queue Address field\footnote{The 4096 is based on the x86 page size, but it's also large
enough to ensure that the separate parts of the virtqueue are on
separate cache lines.
}. There was no mechanism to negotiate the queue size.
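A minimal sketch of this calculation, assuming a 32-bit Queue Address field (informative only):

```c
#include <stdint.h>

/* Legacy Queue Address programming: the driver writes the physical
 * address divided by 4096, so the ring must be 4096-aligned and the
 * resulting page frame number must fit the 32-bit field. */
static int legacy_queue_pfn(uint64_t phys, uint32_t *pfn)
{
    if (phys & 0xfff)
        return -1;                    /* not 4096-aligned */
    if ((phys >> 12) > UINT32_MAX)
        return -1;                    /* does not fit the field */
    *pfn = (uint32_t)(phys >> 12);
    return 0;
}
```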
\subsubsection{Available Buffer Notifications}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Available Buffer Notifications}
When VIRTIO_F_NOTIFICATION_DATA has not been negotiated,
the driver sends an available buffer notification to the device by writing
the 16-bit virtqueue index
of this virtqueue to the Queue Notify address.
When VIRTIO_F_NOTIFICATION_DATA has been negotiated,
the driver sends an available buffer notification to the device by writing
the following 32-bit value to the Queue Notify address:
\lstinputlisting{notifications-le.c}
See \ref{sec:Virtqueues / Driver notifications}~\nameref{sec:Virtqueues / Driver notifications}
for the definition of the components.
See \ref{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI Device Layout / Notification capability}
for how to calculate the Queue Notify address.
\subsubsection{Used Buffer Notifications}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Used Buffer Notifications}
If a used buffer notification is necessary for a virtqueue, the device would typically act as follows:
\begin{itemize}
\item If MSI-X capability is disabled:
\begin{enumerate}
\item Set the lower bit of the ISR Status field for the device.
\item Send the appropriate PCI interrupt for the device.
\end{enumerate}
\item If MSI-X capability is enabled:
\begin{enumerate}
\item If \field{queue_msix_vector} is not NO_VECTOR,
request the appropriate MSI-X interrupt message for the
device, \field{queue_msix_vector} sets the MSI-X Table entry
number.
\end{enumerate}
\end{itemize}
\devicenormative{\paragraph}{Used Buffer Notifications}{Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Used Buffer Notifications}
If MSI-X capability is enabled and \field{queue_msix_vector} is
NO_VECTOR for a virtqueue, the device MUST NOT deliver an interrupt
for that virtqueue.
\subsubsection{Notification of Device Configuration Changes}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Notification of Device Configuration Changes}
Some virtio PCI devices can change the device configuration
state, as reflected in the device-specific configuration region of the device. In this case:
\begin{itemize}
\item If MSI-X capability is disabled:
\begin{enumerate}
\item Set the second lower bit of the ISR Status field for the device.
\item Send the appropriate PCI interrupt for the device.
\end{enumerate}
\item If MSI-X capability is enabled:
\begin{enumerate}
\item If \field{config_msix_vector} is not NO_VECTOR,
request the appropriate MSI-X interrupt message for the
device, \field{config_msix_vector} sets the MSI-X Table entry
number.
\end{enumerate}
\end{itemize}
A single interrupt MAY indicate both that one or more virtqueues have
been used and that the configuration space has changed.
\devicenormative{\paragraph}{Notification of Device Configuration Changes}{Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Notification of Device Configuration Changes}
If MSI-X capability is enabled and \field{config_msix_vector} is
NO_VECTOR, the device MUST NOT deliver an interrupt
for device configuration space changes.
\drivernormative{\paragraph}{Notification of Device Configuration Changes}{Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Notification of Device Configuration Changes}
A driver MUST handle the case where the same interrupt is used to indicate
both device configuration space change and one or more virtqueues being used.
\subsubsection{Driver Handling Interrupts}\label{sec:Virtio Transport Options / Virtio Over PCI Bus / PCI-specific Initialization And Device Operation / Driver Handling Interrupts}
The driver interrupt handler would typically:
\begin{itemize}
\item If MSI-X capability is disabled:
\begin{itemize}
\item Read the ISR Status field, which will reset it to zero.
\item If the lower bit is set:
look through all virtqueues for the
device, to see if any progress has been made by the device
which requires servicing.
\item If the second lower bit is set:
re-examine the configuration space to see what changed.
\end{itemize}
\item If MSI-X capability is enabled:
\begin{itemize}
\item
Look through all virtqueues mapped to that MSI-X vector for the
device, to see if any progress has been made by the device
which requires servicing.
\item
If the MSI-X vector is equal to \field{config_msix_vector},
re-examine the configuration space to see what changed.
\end{itemize}
\end{itemize}
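The non-MSI-X branch of this handler can be sketched as follows; the bit masks follow the ISR Status layout described earlier, and the two callbacks are hypothetical stand-ins for virtqueue servicing and configuration re-reading:

```c
#include <stdint.h>

#define ISR_QUEUE  0x1  /* bit 0: used buffer notification */
#define ISR_CONFIG 0x2  /* bit 1: configuration change notification */

/* Dispatch on an ISR Status value that has already been read (the
 * read itself resets the field to zero on the device). */
static void virtio_pci_isr(uint8_t isr_status,
                           void (*scan_virtqueues)(void),
                           void (*reread_config)(void))
{
    if (isr_status & ISR_QUEUE)
        scan_virtqueues();      /* look for used buffers to service */
    if (isr_status & ISR_CONFIG)
        reread_config();        /* re-examine the configuration space */
}
```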
\section{Virtio Over MMIO}\label{sec:Virtio Transport Options / Virtio Over MMIO}
Virtual environments without PCI support (a common situation in
embedded device models) might use a simple memory mapped device
(``virtio-mmio'') instead of the PCI device.
The memory mapped virtio device behaviour is based on the PCI
device specification. Therefore most operations including device
initialization, queues configuration and buffer transfers are
nearly identical. Existing differences are described in the
following sections.
\subsection{MMIO Device Discovery}\label{sec:Virtio Transport Options / Virtio Over MMIO / MMIO Device Discovery}
Unlike PCI, MMIO provides no generic device discovery mechanism. For each
device, the guest OS will need to know the location of the registers
and interrupt(s) used. The suggested binding for systems using
flattened device trees is shown in this example:
\begin{lstlisting}
// EXAMPLE: virtio_block device taking 512 bytes at 0x1e000, interrupt 42.
virtio_block@1e000 {
compatible = "virtio,mmio";
reg = <0x1e000 0x200>;
interrupts = <42>;
};
\end{lstlisting}
\subsection{MMIO Device Register Layout}\label{sec:Virtio Transport Options / Virtio Over MMIO / MMIO Device Register Layout}
MMIO virtio devices provide a set of memory mapped control
registers followed by a device-specific configuration space,
described in Table~\ref{tab:Virtio Trasport Options / Virtio Over MMIO / MMIO Device Register Layout}.
All register values are organized as Little Endian.
\newcommand{\mmioreg}[5]{% Name Function Offset Direction Description
{\field{#1}} \newline #3 \newline #4 & {\bf#2} \newline #5 \\
}
\newcommand{\mmiodreg}[7]{% NameHigh NameLow Function OffsetHigh OffsetLow Direction Description
{\field{#1}} \newline #4 \newline {\field{#2}} \newline #5 \newline #6 & {\bf#3} \newline #7 \\
}
\begin{longtable}{p{0.2\textwidth}p{0.7\textwidth}}
\caption {MMIO Device Register Layout}
\label{tab:Virtio Trasport Options / Virtio Over MMIO / MMIO Device Register Layout} \\
\hline
\mmioreg{Name}{Function}{Offset from base}{Direction}{Description}
\hline
\hline
\endfirsthead
\hline
\mmioreg{Name}{Function}{Offset from base}{Direction}{Description}
\hline
\hline
\endhead
\endfoot
\endlastfoot
\mmioreg{MagicValue}{Magic value}{0x000}{R}{%
0x74726976
(a Little Endian equivalent of the ``virt'' string).
}
\hline
\mmioreg{Version}{Device version number}{0x004}{R}{%
0x2.
\begin{note}
Legacy devices (see \ref{sec:Virtio Transport Options / Virtio Over MMIO / Legacy interface}~\nameref{sec:Virtio Transport Options / Virtio Over MMIO / Legacy interface}) used 0x1.
\end{note}
}
\hline
\mmioreg{DeviceID}{Virtio Subsystem Device ID}{0x008}{R}{%
See \ref{sec:Device Types}~\nameref{sec:Device Types} for possible values.
Value zero (0x0) is used to
define a system memory map with placeholder devices at static,
well known addresses, assigning functions to them depending
on user's needs.
}
\hline
\mmioreg{VendorID}{Virtio Subsystem Vendor ID}{0x00c}{R}{}
\hline
\mmioreg{DeviceFeatures}{Flags representing features the device supports}{0x010}{R}{%
Reading from this register returns 32 consecutive flag bits,
the least significant bit depending on the last value written to
\field{DeviceFeaturesSel}. Access to this register returns
bits $\field{DeviceFeaturesSel}*32$ to $(\field{DeviceFeaturesSel}*32)+31$, eg.
feature bits 0 to 31 if \field{DeviceFeaturesSel} is set to 0 and
feature bits 32 to 63 if \field{DeviceFeaturesSel} is set to 1.
Also see \ref{sec:Basic Facilities of a Virtio Device / Feature Bits}~\nameref{sec:Basic Facilities of a Virtio Device / Feature Bits}.
}
\hline
\mmioreg{DeviceFeaturesSel}{Device (host) features word selection.}{0x014}{W}{%
Writing to this register selects a set of 32 device feature bits
accessible by reading from \field{DeviceFeatures}.
}
\hline
\mmioreg{DriverFeatures}{Flags representing device features understood and activated by the driver}{0x020}{W}{%
Writing to this register sets 32 consecutive flag bits, the least significant
bit depending on the last value written to \field{DriverFeaturesSel}.
Access to this register sets bits $\field{DriverFeaturesSel}*32$
to $(\field{DriverFeaturesSel}*32)+31$, eg. feature bits 0 to 31 if
\field{DriverFeaturesSel} is set to 0 and feature bits 32 to 63 if
\field{DriverFeaturesSel} is set to 1. Also see \ref{sec:Basic Facilities of a Virtio Device / Feature Bits}~\nameref{sec:Basic Facilities of a Virtio Device / Feature Bits}.
}
\hline
\mmioreg{DriverFeaturesSel}{Activated (guest) features word selection}{0x024}{W}{%
Writing to this register selects a set of 32 activated feature
bits accessible by writing to \field{DriverFeatures}.
}
\hline
\mmioreg{QueueSel}{Virtual queue index}{0x030}{W}{%
Writing to this register selects the virtual queue that the
following operations on \field{QueueNumMax}, \field{QueueNum}, \field{QueueReady},
\field{QueueDescLow}, \field{QueueDescHigh}, \field{QueueDriverLow}, \field{QueueDriverHigh},
\field{QueueDeviceLow} and \field{QueueDeviceHigh} apply to. The index
number of the first queue is zero (0x0).
}
\hline
\mmioreg{QueueNumMax}{Maximum virtual queue size}{0x034}{R}{%
Reading from the register returns the maximum size (number of
elements) of the queue the device is ready to process or
zero (0x0) if the queue is not available. This applies to the
queue selected by writing to \field{QueueSel}.
}
\hline
\mmioreg{QueueNum}{Virtual queue size}{0x038}{W}{%
Queue size is the number of elements in the queue.
Writing to this register notifies the device what size of the
queue the driver will use. This applies to the queue selected by
writing to \field{QueueSel}.
}
\hline
\mmioreg{QueueReady}{Virtual queue ready bit}{0x044}{RW}{%
Writing one (0x1) to this register notifies the device that it can
execute requests from this virtual queue. Reading from this register
returns the last value written to it. Both read and write
accesses apply to the queue selected by writing to \field{QueueSel}.
}
\hline
\mmioreg{QueueNotify}{Queue notifier}{0x050}{W}{%
Writing a value to this register notifies the device that
there are new buffers to process in a queue.
When VIRTIO_F_NOTIFICATION_DATA has not been negotiated,
the value written is the queue index.
When VIRTIO_F_NOTIFICATION_DATA has been negotiated,
the \field{Notification data} value has the following format:
\lstinputlisting{notifications-le.c}
See \ref{sec:Virtqueues / Driver notifications}~\nameref{sec:Virtqueues / Driver notifications}
for the definition of the components.
}
\hline
\mmioreg{InterruptStatus}{Interrupt status}{0x60}{R}{%
Reading from this register returns a bit mask of events that
caused the device interrupt to be asserted.
The following events are possible:
\begin{description}
\item[Used Buffer Notification] - bit 0 - the interrupt was asserted
because the device has used a buffer
in at least one of the active virtual queues.
\item [Configuration Change Notification] - bit 1 - the interrupt was
asserted because the configuration of the device has changed.
\end{description}
}
\hline
\mmioreg{InterruptACK}{Interrupt acknowledge}{0x064}{W}{%
Writing a value with bits set as defined in \field{InterruptStatus}
to this register notifies the device that events causing
the interrupt have been handled.
}
\hline
\mmioreg{Status}{Device status}{0x070}{RW}{%
Reading from this register returns the current device status
flags.
Writing non-zero values to this register sets the status flags,
indicating the driver progress. Writing zero (0x0) to this
register triggers a device reset.
See also \ref{sec:Virtio Transport Options / Virtio Over MMIO / MMIO-specific Initialization And Device Operation / Device Initialization}~\nameref{sec:Virtio Transport Options / Virtio Over MMIO / MMIO-specific Initialization And Device Operation / Device Initialization}.
}
\hline
\mmiodreg{QueueDescLow}{QueueDescHigh}{Virtual queue's Descriptor Area 64 bit long physical address}{0x080}{0x084}{W}{%
Writing to these two registers (lower 32 bits of the address
to \field{QueueDescLow}, higher 32 bits to \field{QueueDescHigh}) notifies
the device about location of the Descriptor Area of the queue
selected by writing to \field{QueueSel} register.
}
\hline
\mmiodreg{QueueDriverLow}{QueueDriverHigh}{Virtual queue's Driver Area 64 bit long physical address}{0x090}{0x094}{W}{%
Writing to these two registers (lower 32 bits of the address
to \field{QueueDriverLow}, higher 32 bits to \field{QueueDriverHigh}) notifies
the device about location of the Driver Area of the queue
selected by writing to \field{QueueSel}.
}
\hline
\mmiodreg{QueueDeviceLow}{QueueDeviceHigh}{Virtual queue's Device Area 64 bit long physical address}{0x0a0}{0x0a4}{W}{%
Writing to these two registers (lower 32 bits of the address
to \field{QueueDeviceLow}, higher 32 bits to \field{QueueDeviceHigh}) notifies
the device about location of the Device Area of the queue
selected by writing to \field{QueueSel}.
}
\hline
\mmioreg{SHMSel}{Shared memory id}{0x0ac}{W}{%
Writing to this register selects the shared memory region (\ref{sec:Basic Facilities of a Virtio Device / Shared Memory Regions}) that the
following operations on \field{SHMLenLow}, \field{SHMLenHigh},
\field{SHMBaseLow} and \field{SHMBaseHigh} apply to.
}
\hline
\mmiodreg{SHMLenLow}{SHMLenHigh}{Shared memory region 64 bit long length}{0x0b0}{0x0b4}{R}{%
These registers return the length of the shared memory
region in bytes, as defined by the device for the region selected by
the \field{SHMSel} register. The lower 32 bits of the length
are read from \field{SHMLenLow} and the higher 32 bits from
\field{SHMLenHigh}. Reading from a non-existent
region (i.e. where the ID written to \field{SHMSel} is unused)
results in a length of -1.
}
\hline
\mmiodreg{SHMBaseLow}{SHMBaseHigh}{Shared memory region 64 bit long physical address}{0x0b8}{0x0bc}{R}{%
The driver reads these registers to discover the base address
of the region in physical address space. This address is
chosen by the device (or other part of the VMM).
The lower 32 bits of the address are read from \field{SHMBaseLow}
with the higher 32 bits from \field{SHMBaseHigh}. Reading
from a non-existent region (i.e. where the ID written to
\field{SHMSel} is unused) results in a base address of
0xffffffffffffffff.
}
\hline
\mmioreg{ConfigGeneration}{Configuration atomicity value}{0x0fc}{R}{
Reading from this register returns a value describing a version of the device-specific configuration space (see \field{Config}).
The driver can then access the configuration space and, when finished, read \field{ConfigGeneration} again.
If no part of the configuration space has changed between these two \field{ConfigGeneration} reads, the returned values are identical.
If the values are different, the configuration space accesses were not atomic and the driver has to perform the operations again.
See also \ref {sec:Basic Facilities of a Virtio Device / Device Configuration Space}.
}
\hline
\mmioreg{Config}{Configuration space}{0x100+}{RW}{
Device-specific configuration space starts at the offset 0x100
and is accessed with byte alignment. Its meaning and size
depend on the device and the driver.
}
\hline
\end{longtable}
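As a non-normative illustration of the \field{DeviceFeaturesSel}/\field{DeviceFeatures} pairing described in the table above, the following sketch assembles the full 64-bit device feature set; mmio_regs is a hypothetical in-memory model, not an actual MMIO access:

```c
#include <stdint.h>

typedef struct {
    uint32_t features_sel;      /* models DeviceFeaturesSel */
    uint64_t device_features;   /* full feature word held by the device */
} mmio_regs;

/* Models a read of DeviceFeatures: returns the 32-bit word selected
 * by the last value written to DeviceFeaturesSel. */
static uint32_t read_device_features(mmio_regs *r)
{
    if (r->features_sel > 1)
        return 0;   /* only selectors 0 and 1 are defined here */
    return (uint32_t)(r->device_features >> (32 * r->features_sel));
}

/* Driver-side loop: select word 0, read bits 0..31, then select
 * word 1 and read bits 32..63. */
static uint64_t read_features64(mmio_regs *r)
{
    uint64_t f;
    r->features_sel = 0;
    f = read_device_features(r);
    r->features_sel = 1;
    f |= (uint64_t)read_device_features(r) << 32;
    return f;
}
```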
\devicenormative{\subsubsection}{MMIO Device Register Layout}{Virtio Transport Options / Virtio Over MMIO / MMIO Device Register Layout}
The device MUST return 0x74726976 in \field{MagicValue}.
The device MUST return value 0x2 in \field{Version}.
The device MUST present each event by setting the corresponding bit in \field{InterruptStatus} from the
moment it takes place, until the driver acknowledges the interrupt
by writing a corresponding bit mask to the \field{InterruptACK} register. Bits which
do not represent events which took place MUST be zero.
Upon reset, the device MUST clear all bits in \field{InterruptStatus} and ready bits in the
\field{QueueReady} register for all queues in the device.
The device MUST change value returned in \field{ConfigGeneration} if there is any risk of a
driver seeing an inconsistent configuration state.
The device MUST NOT access virtual queue contents when \field{QueueReady} is zero (0x0).
\drivernormative{\subsubsection}{MMIO Device Register Layout}{Virtio Transport Options / Virtio Over MMIO / MMIO Device Register Layout}
The driver MUST NOT access memory locations not described in the
table \ref{tab:Virtio Trasport Options / Virtio Over MMIO / MMIO Device Register Layout}
(or, in case of the configuration space, described in the device specification),
MUST NOT write to the read-only registers (direction R) and
MUST NOT read from the write-only registers (direction W).
The driver MUST only use 32 bit wide and aligned reads and writes to access the control registers
described in table \ref{tab:Virtio Trasport Options / Virtio Over MMIO / MMIO Device Register Layout}.
For the device-specific configuration space, the driver MUST use 8 bit wide accesses for
8 bit wide fields, 16 bit wide and aligned accesses for 16 bit wide fields and 32 bit wide and
aligned accesses for 32 and 64 bit wide fields.
The driver MUST ignore a device with \field{MagicValue} which is not 0x74726976,
although it MAY report an error.
The driver MUST ignore a device with \field{Version} which is not 0x2,
although it MAY report an error.
The driver MUST ignore a device with \field{DeviceID} 0x0,
but MUST NOT report any error.
Before reading from \field{DeviceFeatures}, the driver MUST write a value to \field{DeviceFeaturesSel}.
Before writing to the \field{DriverFeatures} register, the driver MUST write a value to the \field{DriverFeaturesSel} register.
The driver MUST write a value to \field{QueueNum} which is less than
or equal to the value presented by the device in \field{QueueNumMax}.
When \field{QueueReady} is not zero, the driver MUST NOT access
\field{QueueNum}, \field{QueueDescLow}, \field{QueueDescHigh},
\field{QueueDriverLow}, \field{QueueDriverHigh}, \field{QueueDeviceLow}, \field{QueueDeviceHigh}.
To stop using the queue the driver MUST write zero (0x0) to
\field{QueueReady} and MUST read the value back to ensure
synchronization.
The driver MUST ignore undefined bits in \field{InterruptStatus}.
The driver MUST write a value with a bit mask describing events it handled into \field{InterruptACK} when
it finishes handling an interrupt and MUST NOT set any of the undefined bits in the value.
\subsection{MMIO-specific Initialization And Device Operation}\label{sec:Virtio Transport Options / Virtio Over MMIO / MMIO-specific Initialization And Device Operation}
\subsubsection{Device Initialization}\label{sec:Virtio Transport Options / Virtio Over MMIO / MMIO-specific Initialization And Device Operation / Device Initialization}
\drivernormative{\paragraph}{Device Initialization}{Virtio Transport Options / Virtio Over MMIO / MMIO-specific Initialization And Device Operation / Device Initialization}
The driver MUST start the device initialization by reading and
checking values from \field{MagicValue} and \field{Version}.
If both values are valid, it MUST read \field{DeviceID}
and if its value is zero (0x0) MUST abort initialization and
MUST NOT access any other register.
Drivers not expecting shared memory MUST NOT use the shared
memory registers.
Further initialization MUST follow the procedure described in
\ref{sec:General Initialization And Device Operation / Device Initialization}~\nameref{sec:General Initialization And Device Operation / Device Initialization}.
\subsubsection{Virtqueue Configuration}\label{sec:Virtio Transport Options / Virtio Over MMIO / MMIO-specific Initialization And Device Operation / Virtqueue Configuration}
The driver will typically initialize the virtual queue in the following way:
\begin{enumerate}
\item Select the queue writing its index (first queue is 0) to
\field{QueueSel}.
\item Check if the queue is not already in use: read \field{QueueReady},
and expect a returned value of zero (0x0).
\item Read maximum queue size (number of elements) from
\field{QueueNumMax}. If the returned value is zero (0x0) the
queue is not available.
\item Allocate and zero the queue memory, making sure the memory
is physically contiguous.
\item Notify the device about the queue size by writing the size to
\field{QueueNum}.
\item Write physical addresses of the queue's Descriptor Area,
Driver Area and Device Area to (respectively) the
\field{QueueDescLow}/\field{QueueDescHigh},
\field{QueueDriverLow}/\field{QueueDriverHigh} and
\field{QueueDeviceLow}/\field{QueueDeviceHigh} register pairs.
\item Write 0x1 to \field{QueueReady}.
\end{enumerate}
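For illustration only, the sequence above can be sketched in C against a toy
model of the queue registers. The structure and helper below are not part of
this specification; a real driver performs 32-bit wide MMIO accesses at the
offsets given in the register layout table and allocates the queue memory in
step 4:

\begin{lstlisting}
#include <stdint.h>

/* Toy model of the queue-related registers; a real driver issues
 * 32-bit reads/writes at the documented offsets instead. */
struct mmio_queue_regs {
    uint32_t QueueSel, QueueNumMax, QueueNum, QueueReady;
    uint32_t QueueDescLow, QueueDescHigh;
    uint32_t QueueDriverLow, QueueDriverHigh;
    uint32_t QueueDeviceLow, QueueDeviceHigh;
};

/* Follows the initialization steps above; returns the configured
 * queue size, or 0 if the queue is unavailable or already in use.
 * Allocation and zeroing of the queue memory (step 4) is elided. */
static uint32_t vq_init(struct mmio_queue_regs *r, uint32_t index,
                        uint64_t desc, uint64_t driver, uint64_t device)
{
    r->QueueSel = index;            /* step 1: select the queue    */
    if (r->QueueReady != 0)         /* step 2: must not be in use  */
        return 0;
    uint32_t num = r->QueueNumMax;  /* step 3: maximum queue size  */
    if (num == 0)
        return 0;                   /* queue not available         */
    r->QueueNum = num;              /* step 5: chosen queue size   */
    r->QueueDescLow  = (uint32_t)desc;          /* step 6: areas   */
    r->QueueDescHigh = (uint32_t)(desc >> 32);
    r->QueueDriverLow  = (uint32_t)driver;
    r->QueueDriverHigh = (uint32_t)(driver >> 32);
    r->QueueDeviceLow  = (uint32_t)device;
    r->QueueDeviceHigh = (uint32_t)(device >> 32);
    r->QueueReady = 0x1;            /* step 7: mark the queue ready */
    return num;
}
\end{lstlisting}

Selecting a queue whose \field{QueueReady} is non-zero, or whose
\field{QueueNumMax} reads zero, aborts the initialization.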
\subsubsection{Available Buffer Notifications}\label{sec:Virtio Transport Options / Virtio Over MMIO / MMIO-specific Initialization And Device Operation / Available Buffer Notifications}
When VIRTIO_F_NOTIFICATION_DATA has not been negotiated,
the driver sends an available buffer notification to the device by writing
the 16-bit virtqueue index
of the queue to be notified to \field{QueueNotify}.
When VIRTIO_F_NOTIFICATION_DATA has been negotiated,
the driver sends an available buffer notification to the device by writing
the following 32-bit value to \field{QueueNotify}:
\lstinputlisting{notifications-le.c}
See \ref{sec:Virtqueues / Driver notifications}~\nameref{sec:Virtqueues / Driver notifications}
for the definition of the components.
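As a sketch, assuming the field layout from notifications-le.c (virtqueue
number in bits 0--15, element offset in bits 16--30, wrap counter in bit 31),
the notification value can be packed as follows; the helper name is
illustrative:

\begin{lstlisting}
#include <stdint.h>

/* Packs the 32-bit value written to QueueNotify when
 * VIRTIO_F_NOTIFICATION_DATA has been negotiated:
 * bits 0-15 vqn, bits 16-30 next_off, bit 31 next_wrap. */
static uint32_t notify_value(uint16_t vqn, uint16_t next_off,
                             uint8_t next_wrap)
{
    return (uint32_t)vqn
         | ((uint32_t)(next_off & 0x7fff) << 16)
         | ((uint32_t)(next_wrap & 1) << 31);
}
\end{lstlisting}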
\subsubsection{Notifications From The Device}\label{sec:Virtio Transport Options / Virtio Over MMIO / MMIO-specific Initialization And Device Operation / Notifications From The Device}
The memory mapped virtio device uses a single, dedicated
interrupt signal, which is asserted when at least one of the
bits described in the description of \field{InterruptStatus}
is set. This is how the device sends a used buffer notification
or a configuration change notification to the driver.
\drivernormative{\paragraph}{Notifications From The Device}{Virtio Transport Options / Virtio Over MMIO / MMIO-specific Initialization And Device Operation / Notifications From The Device}
After receiving an interrupt, the driver MUST read
\field{InterruptStatus} to check what caused the interrupt (see the
register description). The used buffer notification bit being set
SHOULD be interpreted as a used buffer notification for each active
virtqueue. After the interrupt is handled, the driver MUST acknowledge
it by writing a bit mask corresponding to the handled events to the
\field{InterruptACK} register.
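A minimal handler following these rules might look like the C sketch below.
The register structure and the interrupt bit definitions (bit 0 for the used
buffer notification, bit 1 for the configuration change notification, as
described for \field{InterruptStatus}) are a model of the behaviour only:

\begin{lstlisting}
#include <stdint.h>

#define INT_USED_BUFFER   (1u << 0)  /* used buffer notification   */
#define INT_CONFIG_CHANGE (1u << 1)  /* configuration change       */

struct mmio_irq_regs {
    uint32_t InterruptStatus, InterruptACK;
};

/* Reads InterruptStatus, ignores undefined bits, and acknowledges
 * exactly the handled events; the actual virtqueue and configuration
 * processing is elided. Returns the handled bit mask. */
static uint32_t handle_irq(struct mmio_irq_regs *r)
{
    uint32_t status  = r->InterruptStatus;
    uint32_t handled = status & (INT_USED_BUFFER | INT_CONFIG_CHANGE);
    /* ... process used buffers / re-read the configuration ... */
    r->InterruptACK = handled;   /* only defined, handled bits */
    return handled;
}
\end{lstlisting}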
\subsection{Legacy interface}\label{sec:Virtio Transport Options / Virtio Over MMIO / Legacy interface}
The legacy MMIO transport used page-based addressing, resulting
in a slightly different control register layout and different device
initialization and virtual queue configuration procedures.
Table \ref{tab:Virtio Trasport Options / Virtio Over MMIO / MMIO Device Legacy Register Layout}
presents the control register layout, omitting descriptions of
registers whose function and behaviour did not change:
\begin{longtable}{p{0.2\textwidth}p{0.7\textwidth}}
\caption {MMIO Device Legacy Register Layout}
\label{tab:Virtio Trasport Options / Virtio Over MMIO / MMIO Device Legacy Register Layout} \\
\hline
\mmioreg{Name}{Function}{Offset from base}{Direction}{Description}
\hline
\hline
\endfirsthead
\hline
\mmioreg{Name}{Function}{Offset from the base}{Direction}{Description}
\hline
\hline
\endhead
\endfoot
\endlastfoot
\mmioreg{MagicValue}{Magic value}{0x000}{R}{}
\hline
\mmioreg{Version}{Device version number}{0x004}{R}{Legacy device returns value 0x1.}
\hline
\mmioreg{DeviceID}{Virtio Subsystem Device ID}{0x008}{R}{}
\hline
\mmioreg{VendorID}{Virtio Subsystem Vendor ID}{0x00c}{R}{}
\hline
\mmioreg{HostFeatures}{Flags representing features the device supports}{0x010}{R}{}
\hline
\mmioreg{HostFeaturesSel}{Device (host) features word selection.}{0x014}{W}{}
\hline
\mmioreg{GuestFeatures}{Flags representing device features understood and activated by the driver}{0x020}{W}{}
\hline
\mmioreg{GuestFeaturesSel}{Activated (guest) features word selection}{0x024}{W}{}
\hline
\mmioreg{GuestPageSize}{Guest page size}{0x028}{W}{%
The driver writes the guest page size in bytes to the
register during initialization, before any queues are used.
This value should be a power of 2 and is used by the device to
calculate the Guest address of the first queue page
(see QueuePFN).
}
\hline
\mmioreg{QueueSel}{Virtual queue index}{0x030}{W}{%
Writing to this register selects the virtual queue that the
following operations on the \field{QueueNumMax}, \field{QueueNum}, \field{QueueAlign}
and \field{QueuePFN} registers apply to. The index
number of the first queue is zero (0x0).
}
\hline
\mmioreg{QueueNumMax}{Maximum virtual queue size}{0x034}{R}{%
Reading from the register returns the maximum size of the queue
the device is ready to process or zero (0x0) if the queue is not
available. This applies to the queue selected by writing to
\field{QueueSel} and is allowed only when \field{QueuePFN} is set to zero
(0x0), so when the queue is not actively used.
}
\hline
\mmioreg{QueueNum}{Virtual queue size}{0x038}{W}{%
Queue size is the number of elements in the queue.
Writing to this register notifies the device what size of the
queue the driver will use. This applies to the queue selected by
writing to \field{QueueSel}.
}
\hline
\mmioreg{QueueAlign}{Used Ring alignment in the virtual queue}{0x03c}{W}{%
Writing to this register notifies the device about alignment
boundary of the Used Ring in bytes. This value should be a power
of 2 and applies to the queue selected by writing to \field{QueueSel}.
}
\hline
\mmioreg{QueuePFN}{Guest physical page number of the virtual queue}{0x040}{RW}{%
Writing to this register notifies the device about location of the
virtual queue in the Guest's physical address space. This value
is the index number of a page starting with the queue
Descriptor Table. Value zero (0x0) means physical address zero
(0x00000000) and is illegal. When the driver stops using the
queue it writes zero (0x0) to this register.
Reading from this register returns the currently used page
number of the queue, therefore a value other than zero (0x0)
means that the queue is in use.
Both read and write accesses apply to the queue selected by
writing to \field{QueueSel}.
}
\hline
\mmioreg{QueueNotify}{Queue notifier}{0x050}{W}{}
\hline
\mmioreg{InterruptStatus}{Interrupt status}{0x060}{R}{}
\hline
\mmioreg{InterruptACK}{Interrupt acknowledge}{0x064}{W}{}
\hline
\mmioreg{Status}{Device status}{0x070}{RW}{%
Reading from this register returns the current device status
flags.
Writing non-zero values to this register sets the status flags,
indicating the OS/driver progress. Writing zero (0x0) to this
register triggers a device reset. The device
sets \field{QueuePFN} to zero (0x0) for all queues in the device.
Also see \ref{sec:General Initialization And Device Operation / Device Initialization}~\nameref{sec:General Initialization And Device Operation / Device Initialization}.
}
\hline
\mmioreg{Config}{Configuration space}{0x100+}{RW}{}
\hline
\end{longtable}
The virtual queue page size is defined by the guest writing to
\field{GuestPageSize}. The driver does this before the
virtual queues are configured.
The virtual queue layout follows
\ref{sec:Basic Facilities of a Virtio Device / Virtqueues / Legacy Interfaces: A Note on Virtqueue Layout}~\nameref{sec:Basic Facilities of a Virtio Device / Virtqueues / Legacy Interfaces: A Note on Virtqueue Layout},
with the alignment defined in \field{QueueAlign}.
The virtual queue is configured as follows:
\begin{enumerate}
\item Select the queue writing its index (first queue is 0) to
\field{QueueSel}.
\item Check if the queue is not already in use: read \field{QueuePFN},
expecting a returned value of zero (0x0).
\item Read maximum queue size (number of elements) from
\field{QueueNumMax}. If the returned value is zero (0x0) the
queue is not available.
\item Allocate and zero the queue pages in contiguous virtual
memory, aligning the Used Ring to an optimal boundary (usually
page size). The driver should choose a queue size smaller than or
equal to \field{QueueNumMax}.
\item Notify the device about the queue size by writing the size to
\field{QueueNum}.
\item Notify the device about the used alignment by writing its value
in bytes to \field{QueueAlign}.
\item Write the physical number of the first page of the queue to
the \field{QueuePFN} register.
\end{enumerate}
Notification mechanisms did not change.
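As an illustration of the legacy page-based addressing, the value written to
\field{QueuePFN} is the physical address of the first queue page divided by
the value previously written to \field{GuestPageSize}. The helper below is a
sketch; it treats an unaligned or zero address as an error:

\begin{lstlisting}
#include <stdint.h>

/* Computes the page frame number written to QueuePFN by a legacy
 * driver. Returns 0 for an unaligned or zero address; zero is an
 * illegal queue PFN anyway, so it doubles as an error value here. */
static uint32_t queue_pfn(uint64_t queue_phys, uint32_t guest_page_size)
{
    if (queue_phys == 0 || queue_phys % guest_page_size != 0)
        return 0;
    return (uint32_t)(queue_phys / guest_page_size);
}
\end{lstlisting}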
\section{Virtio Over Channel I/O}\label{sec:Virtio Transport Options / Virtio Over Channel I/O}
S/390 based virtual machines support neither PCI nor MMIO, so a
different transport is needed there.
virtio-ccw uses the standard channel I/O based mechanism used for
the majority of devices on S/390. A virtual channel device with a
special control unit type acts as proxy to the virtio device
(similar to the way virtio-pci uses a PCI device) and
configuration and operation of the virtio device is accomplished
(mostly) via channel commands. This means virtio devices are
discoverable via standard operating system algorithms, and adding
virtio support is mainly a question of supporting a new control
unit type.
As the S/390 is a big endian machine, the data structures transmitted
via channel commands are big-endian: this is made clear by use of
the types be16, be32 and be64.
\subsection{Basic Concepts}\label{sec:Virtio Transport Options / Virtio over channel I/O / Basic Concepts}
As a proxy device, virtio-ccw uses a channel-attached I/O control
unit with a special control unit type (0x3832) and a control unit
model corresponding to the attached virtio device's subsystem
device ID, accessed via a virtual I/O subchannel and a virtual
channel path of type 0x32. This proxy device is discoverable via
normal channel subsystem device discovery (usually a STORE
SUBCHANNEL loop) and answers to the basic channel commands:
\begin{itemize}
\item NO-OPERATION (0x03)
\item BASIC SENSE (0x04)
\item TRANSFER IN CHANNEL (0x08)
\item SENSE ID (0xe4)
\end{itemize}
For a virtio-ccw proxy device, SENSE ID will return the following
information:
\begin{tabular}{ |l|l|l| }
\hline
Bytes & Description & Contents \\
\hline \hline
0 & reserved & 0xff \\
\hline
1-2 & control unit type & 0x3832 \\
\hline
3 & control unit model & <virtio device id> \\
\hline
4-5 & device type & zeroes (unset) \\
\hline
6 & device model & zeroes (unset) \\
\hline
7-255 & extended SenseId data & zeroes (unset) \\
\hline
\end{tabular}
A virtio-ccw proxy device facilitates:
\begin{itemize}
\item Discovery and attachment of virtio devices (as described above).
\item Initialization of virtqueues and transport-specific facilities (using
virtio-specific channel commands).
\item Notifications (via hypercall and a combination of I/O interrupts
and indicator bits).
\end{itemize}
\subsubsection{Channel Commands for Virtio}\label{sec:Virtio Transport Options / Virtio
over channel I/O / Basic Concepts/ Channel Commands for Virtio}
In addition to the basic channel commands, virtio-ccw defines a
set of channel commands related to configuration and operation of
virtio:
\begin{lstlisting}
#define CCW_CMD_SET_VQ 0x13
#define CCW_CMD_VDEV_RESET 0x33
#define CCW_CMD_SET_IND 0x43
#define CCW_CMD_SET_CONF_IND 0x53
#define CCW_CMD_SET_IND_ADAPTER 0x73
#define CCW_CMD_READ_FEAT 0x12
#define CCW_CMD_WRITE_FEAT 0x11
#define CCW_CMD_READ_CONF 0x22
#define CCW_CMD_WRITE_CONF 0x21
#define CCW_CMD_WRITE_STATUS 0x31
#define CCW_CMD_READ_VQ_CONF 0x32
#define CCW_CMD_SET_VIRTIO_REV 0x83
#define CCW_CMD_READ_STATUS 0x72
\end{lstlisting}
\subsubsection{Notifications}\label{sec:Virtio Transport Options / Virtio
over channel I/O / Basic Concepts/ Notifications}
Available buffer notifications are realized as a hypercall. No additional
setup by the driver is needed. The operation of available buffer
notifications is described in section \ref{sec:Virtio Transport Options /
Virtio over channel I/O / Device Operation / Guest->Host Notification}.
Used buffer notifications are realized either as so-called classic or
adapter I/O interrupts depending on a transport level negotiation. The
initialization is described in sections \ref{sec:Virtio Transport Options
/ Virtio over channel I/O / Device Initialization / Setting Up Indicators
/ Setting Up Classic Queue Indicators} and \ref{sec:Virtio Transport
Options / Virtio over channel I/O / Device Initialization / Setting Up
Indicators / Setting Up Two-Stage Queue Indicators} respectively. The
operation of each flavor is described in sections \ref{sec:Virtio
Transport Options / Virtio over channel I/O / Device Operation /
Host->Guest Notification / Notification via Classic I/O Interrupts} and
\ref{sec:Virtio Transport Options / Virtio over channel I/O / Device
Operation / Host->Guest Notification / Notification via Adapter I/O
Interrupts} respectively.
Configuration change notifications are done using so-called classic I/O
interrupts. The initialization is described in section \ref{sec:Virtio
Transport Options / Virtio over channel I/O / Device Initialization /
Setting Up Indicators / Setting Up Configuration Change Indicators} and
the operation in section \ref{sec:Virtio Transport Options / Virtio over
channel I/O / Device Operation / Host->Guest Notification / Notification
via Classic I/O Interrupts}.
\devicenormative{\subsubsection}{Basic Concepts}{Virtio Transport Options / Virtio over channel I/O / Basic Concepts}
The virtio-ccw device acts like a normal channel device, as specified
in \hyperref[intro:S390 PoP]{[S390 PoP]} and \hyperref[intro:S390 Common I/O]{[S390 Common I/O]}. In particular:
\begin{itemize}
\item A device MUST post a unit check with command reject for any command
it does not support.
\item If a driver did not suppress length checks for a channel command,
the device MUST present a subchannel status as detailed in the
architecture when the actual length did not match the expected length.
\item If a driver did suppress length checks for a channel command, the
device MUST present a check condition if the transmitted data does
not contain enough data to process the command. If the driver submitted
a buffer that was too long, the device SHOULD accept the command.
\end{itemize}
\drivernormative{\subsubsection}{Basic Concepts}{Virtio Transport Options / Virtio over channel I/O / Basic Concepts}
A driver for virtio-ccw devices MUST check for a control unit
type of 0x3832 and MUST ignore the device type and model.
A driver SHOULD attempt to provide the correct length in a channel
command even if it suppresses length checks for that command.
\subsection{Device Initialization}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization}
virtio-ccw uses several channel commands to set up a device.
\subsubsection{Setting the Virtio Revision}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Setting the Virtio Revision}
CCW_CMD_SET_VIRTIO_REV is issued by the driver to set the revision of
the virtio-ccw transport it intends to drive the device with. It uses the
following communication structure:
\begin{lstlisting}
struct virtio_rev_info {
be16 revision;
be16 length;
u8 data[];
};
\end{lstlisting}
\field{revision} contains the desired revision id, \field{length} the length of the
data portion and \field{data} revision-dependent additional desired options.
The following values are supported:
\begin{tabular}{ |l|l|l|l| }
\hline
\field{revision} & \field{length} & \field{data} & remarks \\
\hline \hline
0 & 0 & <empty> & legacy interface; transitional devices only \\
\hline
1 & 0 & <empty> & Virtio 1 \\
\hline
2 & 0 & <empty> & CCW_CMD_READ_STATUS support \\
\hline
3-n & & & reserved for later revisions \\
\hline
\end{tabular}
Note that a change in the virtio standard does not necessarily
correspond to a change in the virtio-ccw revision.
\devicenormative{\paragraph}{Setting the Virtio Revision}{Virtio Transport Options / Virtio over channel I/O / Device Initialization / Setting the Virtio Revision}
A device MUST post a unit check with command reject for any \field{revision}
it does not support. For any invalid combination of \field{revision}, \field{length}
and \field{data}, it MUST post a unit check with command reject as well. A
non-transitional device MUST reject revision id 0.
A device MUST answer with command reject to any virtio-ccw specific
channel command that is not contained in the revision selected by the
driver.
A device MUST answer with command reject to any attempt to select a different revision
after a revision has been successfully selected by the driver.
A device MUST treat the revision as unset from the time the associated
subchannel has been enabled until a revision has been successfully set
by the driver. This implies that revisions are not persistent across
disabling and enabling of the associated subchannel.
\drivernormative{\paragraph}{Setting the Virtio Revision}{Virtio Transport Options / Virtio over channel I/O / Device Initialization / Setting the Virtio Revision}
A driver SHOULD start with trying to set the highest revision it
supports and continue with lower revisions if it gets a command reject.
A driver MUST NOT issue any other virtio-ccw specific channel commands
prior to setting the revision.
After a revision has been successfully selected by the driver, it
MUST NOT attempt to select a different revision.
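The recommended fallback can be sketched as follows; the callback type and
the stub device are illustrative and model CCW_CMD_SET_VIRTIO_REV returning
either success or a command reject:

\begin{lstlisting}
#include <stdint.h>

/* Models issuing CCW_CMD_SET_VIRTIO_REV: returns 0 on success, -1
 * when the device posts a unit check with command reject. */
typedef int (*set_rev_fn)(void *dev, uint16_t revision);

/* Tries revisions from the highest the driver supports downwards;
 * returns the selected revision, or -1 if even revision 0 is
 * rejected (a non-transitional device, or no common revision). */
static int negotiate_revision(void *dev, set_rev_fn set_rev,
                              uint16_t max_rev)
{
    for (int rev = max_rev; rev >= 0; rev--)
        if (set_rev(dev, (uint16_t)rev) == 0)
            return rev;
    return -1;
}

/* Stub device for demonstration: accepts any revision up to the
 * maximum stored behind the opaque pointer. */
static int stub_set_rev(void *dev, uint16_t rev)
{
    return rev <= *(const uint16_t *)dev ? 0 : -1;
}
\end{lstlisting}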
\paragraph{Legacy Interfaces: A Note on Setting the Virtio Revision}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Setting the Virtio Revision / Legacy Interfaces: A Note on Setting the Virtio Revision}
A legacy device will not support the CCW_CMD_SET_VIRTIO_REV and answer
with a command reject. A non-transitional driver MUST stop trying to
operate this device in that case. A transitional driver MUST operate
the device as if it had been able to set revision 0.
A legacy driver will not issue the CCW_CMD_SET_VIRTIO_REV prior to
issuing other virtio-ccw specific channel commands. A non-transitional
device therefore MUST answer any such attempts with a command reject.
A transitional device MUST assume in this case that the driver is a
legacy driver and continue as if the driver selected revision 0. This
implies that the device MUST reject any command not valid for revision
0, including a subsequent CCW_CMD_SET_VIRTIO_REV.
\subsubsection{Configuring a Virtqueue}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Configuring a Virtqueue}
CCW_CMD_READ_VQ_CONF is issued by the driver to obtain information
about a queue. It uses the following structure for communicating:
\begin{lstlisting}
struct vq_config_block {
be16 index;
be16 max_num;
};
\end{lstlisting}
The requested number of buffers for queue \field{index} is returned in
\field{max_num}.
Afterwards, CCW_CMD_SET_VQ is issued by the driver to inform the
device about the location used for its queue. The transmitted
structure is
\begin{lstlisting}
struct vq_info_block {
be64 desc;
be32 res0;
be16 index;
be16 num;
be64 driver;
be64 device;
};
\end{lstlisting}
\field{desc}, \field{driver} and \field{device} contain the guest
addresses for the descriptor area,
driver area and device area for queue \field{index}, respectively. The actual
virtqueue size (number of allocated buffers) is transmitted in \field{num}.
\devicenormative{\paragraph}{Configuring a Virtqueue}{Virtio Transport Options / Virtio over channel I/O / Device Initialization / Configuring a Virtqueue}
\field{res0} is reserved and MUST be ignored by the device.
\paragraph{Legacy Interface: A Note on Configuring a Virtqueue}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Configuring a Virtqueue / Legacy Interface: A Note on Configuring a Virtqueue}
For a legacy driver or for a driver that selected revision 0,
CCW_CMD_SET_VQ uses the following communication block:
\begin{lstlisting}
struct vq_info_block_legacy {
be64 queue;
be32 align;
be16 index;
be16 num;
};
\end{lstlisting}
\field{queue} contains the guest address for queue \field{index}, \field{num} the number of buffers
and \field{align} the alignment. The queue layout follows \ref{sec:Basic Facilities of a Virtio Device / Virtqueues / Legacy Interfaces: A Note on Virtqueue Layout}~\nameref{sec:Basic Facilities of a Virtio Device / Virtqueues / Legacy Interfaces: A Note on Virtqueue Layout}.
\subsubsection{Communicating Status Information}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Communicating Status Information}
The driver changes the status of a device via the
CCW_CMD_WRITE_STATUS command, which transmits an 8 bit status
value.
As described in
\ref{devicenormative:Basic Facilities of a Virtio Device / Feature Bits},
a device sometimes fails to set the \field{device status} field: For example, it
might fail to accept the FEATURES_OK status bit during device initialization.
With revision 2, CCW_CMD_READ_STATUS is defined: It reads an 8 bit status
value from the device and acts as a reverse operation to CCW_CMD_WRITE_STATUS.
\drivernormative{\paragraph}{Communicating Status Information}{Virtio Transport Options / Virtio over channel I/O / Device Initialization / Communicating Status Information}
If the device posts a unit check with command reject in response to the
CCW_CMD_WRITE_STATUS command, the driver MUST assume that the device failed
to set the status and the \field{device status} field retained
its previous value.
If at least revision 2 has been negotiated, the driver SHOULD use the
CCW_CMD_READ_STATUS command to retrieve the \field{device status} field after
a configuration change has been detected.
If not at least revision 2 has been negotiated, the driver MUST NOT attempt
to issue the CCW_CMD_READ_STATUS command.
\devicenormative{\paragraph}{Communicating Status Information}{Virtio Transport Options / Virtio over channel I/O / Device Initialization / Communicating Status Information}
If the device fails to set the \field{device status} field
to the value written by the driver, the device MUST assure
that the \field{device status} field is left unchanged and
MUST post a unit check with command reject.
If at least revision 2 has been negotiated, the device MUST return the
current \field{device status} field if the CCW_CMD_READ_STATUS
command is issued.
\subsubsection{Handling Device Features}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Handling Device Features}
Feature bits are arranged in an array of 32 bit values, making
for a total of 8192 feature bits. Feature bits are in
little-endian byte order.
The CCW commands dealing with features use the following
communication block:
\begin{lstlisting}
struct virtio_feature_desc {
le32 features;
u8 index;
};
\end{lstlisting}
\field{features} are the 32 bits of features currently accessed, while
\field{index} describes which of the feature bit values is to be
accessed. No padding is added at the end of the structure; it is
exactly 5 bytes in length.
The guest obtains the device's device feature set via the
CCW_CMD_READ_FEAT command. The device stores the features at \field{index}
to \field{features}.
For communicating its supported features to the device, the driver
uses the CCW_CMD_WRITE_FEAT command, denoting a \field{features}/\field{index}
combination.
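The communication block and the word-indexed access pattern can be rendered
in C as below; the packed attribute and the helper are illustrative, with the
attribute realizing the requirement that the structure is exactly 5 bytes
long:

\begin{lstlisting}
#include <stdint.h>

/* C rendering of the communication block above; without the packed
 * attribute a compiler would pad the structure beyond 5 bytes. */
struct virtio_feature_desc {
    uint32_t features;   /* le32 on the wire */
    uint8_t  index;
} __attribute__((packed));

/* Locates feature bit `bit` within the array of 32-bit feature
 * words: `index` selects the word, `mask` the bit inside it. */
static void feature_pos(unsigned bit, uint8_t *index, uint32_t *mask)
{
    *index = (uint8_t)(bit / 32);
    *mask  = 1u << (bit % 32);
}
\end{lstlisting}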
\subsubsection{Device Configuration}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Device Configuration}
The device's configuration space is located in host memory.
To obtain information from the configuration space, the driver
uses CCW_CMD_READ_CONF, specifying the guest memory for the device
to write to.
For changing configuration information, the driver uses
CCW_CMD_WRITE_CONF, specifying the guest memory for the device to
read from.
In both cases, the complete configuration space is transmitted. This
allows the driver to compare the new configuration space with the old
version, and to keep an internal generation count which it increments
whenever the configuration changes.
\subsubsection{Setting Up Indicators}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Setting Up Indicators}
In order to set up the indicator bits for host->guest notification,
the driver uses different channel commands depending on whether it
wishes to use traditional I/O interrupts tied to a subchannel or
adapter I/O interrupts for virtqueue notifications. For any given
device, the two mechanisms are mutually exclusive.
For the configuration change indicators, only a mechanism using
traditional I/O interrupts is provided, regardless of whether
traditional or adapter I/O interrupts are used for virtqueue
notifications.
\paragraph{Setting Up Classic Queue Indicators}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Setting Up Indicators / Setting Up Classic Queue Indicators}
Indicators for notification via classic I/O interrupts are contained
in a 64 bit value per virtio-ccw proxy device.
To communicate the location of the indicator bits for host->guest
notification, the driver uses the CCW_CMD_SET_IND command,
pointing to a location containing the guest address of the
indicators in a 64 bit value.
\devicenormative{\subparagraph}{Setting Up Classic Queue Indicators}{Virtio Transport Options / Virtio over channel I/O / Device Initialization / Setting Up Indicators / Setting Up Classic Queue Indicators}
If the driver has already set up two-stage queue indicators via the
CCW_CMD_SET_IND_ADAPTER command, the device MUST post a unit check
with command reject to any subsequent CCW_CMD_SET_IND command.
\paragraph{Setting Up Configuration Change Indicators}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Setting Up Indicators / Setting Up Configuration Change Indicators}
Indicators for configuration change host->guest notification are
contained in a 64 bit value per virtio-ccw proxy device.
To communicate the location of the indicator bits used in the
configuration change host->guest notification, the driver issues the
CCW_CMD_SET_CONF_IND command, pointing to a location containing the
guest address of the indicators in a 64 bit value.
\paragraph{Setting Up Two-Stage Queue Indicators}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Setting Up Indicators / Setting Up Two-Stage Queue Indicators}
Indicators for notification via adapter I/O interrupts consist of
two stages:
\begin{itemize}
\item a summary indicator byte covering the virtqueues for one or more
virtio-ccw proxy devices
\item a set of contiguous indicator bits for the virtqueues for a
virtio-ccw proxy device
\end{itemize}
To communicate the location of the summary and queue indicator bits,
the driver uses the CCW_CMD_SET_IND_ADAPTER command with the following
payload:
\begin{lstlisting}
struct virtio_thinint_area {
be64 summary_indicator;
be64 indicator;
be64 bit_nr;
u8 isc;
} __attribute__ ((packed));
\end{lstlisting}
\field{summary_indicator} contains the guest address of the 8 bit summary
indicator.
\field{indicator} contains the guest address of an area wherein the indicators
for the devices are contained, starting at \field{bit_nr}, one bit per
virtqueue of the device. Bit numbers start at the left, i.e. the most
significant bit in the first byte is assigned the bit number 0.
\field{isc} contains the I/O interruption subclass to be used for the adapter
I/O interrupt. It MAY be different from the isc used by the proxy
virtio-ccw device's subchannel.
No padding is added at the end of the structure; it is exactly 25 bytes
in length.
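The left-to-right bit numbering can be illustrated with a small C helper
(not part of this specification) that sets an indicator bit the way the
device would when notifying the driver:

\begin{lstlisting}
#include <stdint.h>

/* Sets indicator bit `bit_nr` in `area` using the numbering
 * described above: bit 0 is the most significant bit of the
 * first byte, bit 8 the most significant bit of the second. */
static void set_indicator(uint8_t *area, uint64_t bit_nr)
{
    area[bit_nr / 8] |= (uint8_t)(0x80 >> (bit_nr % 8));
}
\end{lstlisting}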
\devicenormative{\subparagraph}{Setting Up Two-Stage Queue Indicators}{Virtio Transport Options / Virtio over channel I/O / Device Initialization / Setting Up Indicators / Setting Up Two-Stage Queue Indicators}
If the driver has already set up classic queue indicators via the
CCW_CMD_SET_IND command, the device MUST post a unit check with
command reject to any subsequent CCW_CMD_SET_IND_ADAPTER command.
\paragraph{Legacy Interfaces: A Note on Setting Up Indicators}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Initialization / Setting Up Indicators / Legacy Interfaces: A Note on Setting Up Indicators}
In some cases, legacy devices will only support classic queue indicators;
in that case, they will reject CCW_CMD_SET_IND_ADAPTER as they don't know that
command. Some legacy devices will support two-stage queue indicators, though,
and a driver will be able to successfully use CCW_CMD_SET_IND_ADAPTER to set
them up.
\subsection{Device Operation}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Operation}
\subsubsection{Host->Guest Notification}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Operation / Host->Guest Notification}
There are two modes of operation regarding host->guest notification,
classic I/O interrupts and adapter I/O interrupts. The mode to be
used is determined by the driver, which sets up queue indicators with
either CCW_CMD_SET_IND (classic) or CCW_CMD_SET_IND_ADAPTER (adapter).
For configuration changes, the driver always uses classic I/O
interrupts.
\paragraph{Notification via Classic I/O Interrupts}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Operation / Host->Guest Notification / Notification via Classic I/O Interrupts}
If the driver used the CCW_CMD_SET_IND command to set up queue
indicators, the device will use classic I/O interrupts for
host->guest notification about virtqueue activity.
For notifying the driver of virtqueue buffers, the device sets the
corresponding bit in the guest-provided indicators. If an
interrupt is not already pending for the subchannel, the device
generates an unsolicited I/O interrupt.
If the device wants to notify the driver about configuration
changes, it sets bit 0 in the configuration indicators and
generates an unsolicited I/O interrupt, if needed. This also
applies if adapter I/O interrupts are used for queue notifications.
\paragraph{Notification via Adapter I/O Interrupts}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Operation / Host->Guest Notification / Notification via Adapter I/O Interrupts}
If the driver used the CCW_CMD_SET_IND_ADAPTER command to set up
queue indicators, the device will use adapter I/O interrupts for
host->guest notification about virtqueue activity.
For notifying the driver of virtqueue buffers, the device sets the
bit in the guest-provided indicator area at the corresponding offset.
The guest-provided summary indicator is set to 0x01. An adapter I/O
interrupt for the corresponding interruption subclass is generated.
The recommended way to process an adapter I/O interrupt by the driver
is as follows:
\begin{itemize}
\item Process all queue indicator bits associated with the summary indicator.
\item Clear the summary indicator, performing a synchronization (memory
barrier) afterwards.
\item Process all queue indicator bits associated with the summary indicator
again.
\end{itemize}
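The recommended sequence can be sketched in C as follows (an illustrative, hypothetical driver fragment; here `process_virtqueue` merely records which queues were handled, standing in for real virtqueue processing):

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for real virtqueue processing: records handled queues. */
static unsigned processed[64];
static unsigned nprocessed;
static void process_virtqueue(unsigned queue)
{
    processed[nprocessed++] = queue;
}

/* Test and clear one indicator bit; bit 0 is the most significant bit
 * of the first byte. */
static int test_and_clear_bit(volatile uint8_t *area, unsigned bit)
{
    uint8_t mask = (uint8_t)(1u << (7 - (bit % 8)));
    int was_set = (area[bit / 8] & mask) != 0;
    area[bit / 8] &= (uint8_t)~mask;
    return was_set;
}

static void scan_queue_indicators(volatile uint8_t *ind, unsigned nbits)
{
    for (unsigned i = 0; i < nbits; i++)
        if (test_and_clear_bit(ind, i))
            process_virtqueue(i);
}

/* Recommended flow: scan, clear the summary indicator, issue a memory
 * barrier, then scan again to catch bits the device may have set
 * between the first scan and the clearing of the summary. */
static void handle_adapter_interrupt(volatile uint8_t *summary,
                                     volatile uint8_t *ind, unsigned nbits)
{
    scan_queue_indicators(ind, nbits);
    *summary = 0;
    __sync_synchronize();
    scan_queue_indicators(ind, nbits);
}
```

The second scan is what makes clearing the summary indicator safe: any notification arriving during the first pass is still observed.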
\devicenormative{\subparagraph}{Notification via Adapter I/O Interrupts}{Virtio Transport Options / Virtio over channel I/O / Device Operation / Host->Guest Notification / Notification via Adapter I/O Interrupts}
The device SHOULD only generate an adapter I/O interrupt if the
summary indicator had not been set prior to notification.
\drivernormative{\subparagraph}{Notification via Adapter I/O Interrupts}{Virtio Transport Options / Virtio over channel I/O / Device Operation / Host->Guest Notification / Notification via Adapter I/O Interrupts}
The driver
MUST clear the summary indicator after receiving an adapter I/O
interrupt before it processes the queue indicators.
\paragraph{Legacy Interfaces: A Note on Host->Guest Notification}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Operation / Host->Guest Notification / Legacy Interfaces: A Note on Host->Guest Notification}
As legacy devices and drivers support only classic queue indicators,
host->guest notification will always be done via classic I/O interrupts.
\subsubsection{Guest->Host Notification}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Operation / Guest->Host Notification}
For notifying the device of virtqueue buffers, the driver
unfortunately can't use a channel command (the asynchronous
characteristics of channel I/O interact badly with the host block
I/O backend). Instead, it uses a diagnose 0x500 call with subcode
3 specifying the queue, as follows:
\begin{tabular}{ |l|l|l| }
\hline
GPR & Input Value & Output Value \\
\hline \hline
1 & 0x3 & \\
\hline
2 & Subchannel ID & Host Cookie \\
\hline
3 & Notification data & \\
\hline
4 & Host Cookie & \\
\hline
\end{tabular}
When VIRTIO_F_NOTIFICATION_DATA has not been negotiated,
the \field{Notification data} contains the Virtqueue number.
When VIRTIO_F_NOTIFICATION_DATA has been negotiated,
the value has the following format:
\lstinputlisting{notifications-be.c}
See \ref{sec:Virtqueues / Driver notifications}~\nameref{sec:Virtqueues / Driver notifications}
for the definition of the components.
\devicenormative{\paragraph}{Guest->Host Notification}{Virtio Transport Options / Virtio over channel I/O / Device Operation / Guest->Host Notification}
The device MUST ignore bits 0-31 (counting from the left) of GPR2.
This aligns passing the subchannel ID with the way it is passed
for the existing I/O instructions.
The device MAY return a 64-bit host cookie in GPR2 to speed up the
notification execution.
\drivernormative{\paragraph}{Guest->Host Notification}{Virtio Transport Options / Virtio over channel I/O / Device Operation / Guest->Host Notification}
For each notification, the driver SHOULD use GPR4 to pass the host cookie received in GPR2 from the previous notification.
\begin{note}
For example:
\begin{lstlisting}
info->cookie = do_notify(schid,
virtqueue_get_queue_index(vq),
info->cookie);
\end{lstlisting}
\end{note}
\subsubsection{Resetting Devices}\label{sec:Virtio Transport Options / Virtio over channel I/O / Device Operation / Resetting Devices}
In order to reset a device, a driver sends the
CCW_CMD_VDEV_RESET command.
\chapter{Device Types}\label{sec:Device Types}
On top of the queues, config space and feature negotiation facilities
built into virtio, several devices are defined.
The following device IDs are used to identify different types of virtio
devices. Some device IDs are reserved for devices which are not currently
defined in this standard.
Discovering what devices are available and their type is bus-dependent.
\begin{tabular} { |l|c| }
\hline
Device ID & Virtio Device \\
\hline \hline
0 & reserved (invalid) \\
\hline
1 & network card \\
\hline
2 & block device \\
\hline
3 & console \\
\hline
4 & entropy source \\
\hline
5 & memory ballooning (traditional) \\
\hline
6 & ioMemory \\
\hline
7 & rpmsg \\
\hline
8 & SCSI host \\
\hline
9 & 9P transport \\
\hline
10 & mac80211 wlan \\
\hline
11 & rproc serial \\
\hline
12 & virtio CAIF \\
\hline
13 & memory balloon \\
\hline
16 & GPU device \\
\hline
17 & Timer/Clock device \\
\hline
18 & Input device \\
\hline
19 & Socket device \\
\hline
20 & Crypto device \\
\hline
21 & Signal Distribution Module \\
\hline
22 & pstore device \\
\hline
23 & IOMMU device \\
\hline
24 & Memory device \\
\hline
25 & Audio device \\
\hline
26 & file system device \\
\hline
27 & PMEM device \\
\hline
28 & RPMB device \\
\hline
\end{tabular}
Some of the devices above are unspecified by this document,
because they are seen as immature or especially niche. Be warned
that some are only specified by the sole existing implementation;
they could become part of a future specification, be abandoned
entirely, or live on outside this standard. We shall speak of
them no further.
\section{Network Device}\label{sec:Device Types / Network Device}
The virtio network device is a virtual ethernet card, and is the
most complex of the devices supported so far by virtio. It has
been enhanced rapidly and demonstrates clearly how support for new
features is added to an existing device. Empty buffers are
placed in one virtqueue for receiving packets, and outgoing
packets are enqueued into another for transmission in that order.
A third command queue is used to control advanced filtering
features.
\subsection{Device ID}\label{sec:Device Types / Network Device / Device ID}
1
\subsection{Virtqueues}\label{sec:Device Types / Network Device / Virtqueues}
\begin{description}
\item[0] receiveq1
\item[1] transmitq1
\item[\ldots]
\item[2(N-1)] receiveqN
\item[2(N-1)+1] transmitqN
\item[2N] controlq
\end{description}
N=1 if VIRTIO_NET_F_MQ is not negotiated, otherwise N is set by
\field{max_virtqueue_pairs}.
controlq only exists if VIRTIO_NET_F_CTRL_VQ is set.
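The numbering above can be expressed as simple index helpers (illustrative names, not part of this specification; queue pairs are counted from 1):

```c
#include <assert.h>

/* Virtqueue numbering for the network device: receiveqN and transmitqN
 * interleave, and the control queue (if present) follows the last pair. */
static unsigned receiveq_index(unsigned pair)      { return 2 * (pair - 1); }
static unsigned transmitq_index(unsigned pair)     { return 2 * (pair - 1) + 1; }
static unsigned controlq_index(unsigned max_pairs) { return 2 * max_pairs; }
```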
\subsection{Feature bits}\label{sec:Device Types / Network Device / Feature bits}
\begin{description}
\item[VIRTIO_NET_F_CSUM (0)] Device handles packets with partial checksum. This
``checksum offload'' is a common feature on modern network cards.
\item[VIRTIO_NET_F_GUEST_CSUM (1)] Driver handles packets with partial checksum.
\item[VIRTIO_NET_F_CTRL_GUEST_OFFLOADS (2)] Control channel offloads
reconfiguration support.
\item[VIRTIO_NET_F_MTU (3)] Device maximum MTU reporting is supported. If
offered by the device, device advises driver about the value of
its maximum MTU. If negotiated, the driver uses \field{mtu} as
the maximum MTU value.
\item[VIRTIO_NET_F_MAC (5)] Device has given MAC address.
\item[VIRTIO_NET_F_GUEST_TSO4 (7)] Driver can receive TSOv4.
\item[VIRTIO_NET_F_GUEST_TSO6 (8)] Driver can receive TSOv6.
\item[VIRTIO_NET_F_GUEST_ECN (9)] Driver can receive TSO with ECN.
\item[VIRTIO_NET_F_GUEST_UFO (10)] Driver can receive UFO.
\item[VIRTIO_NET_F_HOST_TSO4 (11)] Device can receive TSOv4.
\item[VIRTIO_NET_F_HOST_TSO6 (12)] Device can receive TSOv6.
\item[VIRTIO_NET_F_HOST_ECN (13)] Device can receive TSO with ECN.
\item[VIRTIO_NET_F_HOST_UFO (14)] Device can receive UFO.
\item[VIRTIO_NET_F_MRG_RXBUF (15)] Driver can merge receive buffers.
\item[VIRTIO_NET_F_STATUS (16)] Configuration status field is
available.
\item[VIRTIO_NET_F_CTRL_VQ (17)] Control channel is available.
\item[VIRTIO_NET_F_CTRL_RX (18)] Control channel RX mode support.
\item[VIRTIO_NET_F_CTRL_VLAN (19)] Control channel VLAN filtering.
\item[VIRTIO_NET_F_GUEST_ANNOUNCE (21)] Driver can send gratuitous
packets.
\item[VIRTIO_NET_F_MQ (22)] Device supports multiqueue with automatic
receive steering.
\item[VIRTIO_NET_F_CTRL_MAC_ADDR (23)] Set MAC address through control
channel.
\item[VIRTIO_NET_F_GUEST_HDRLEN (59)] Driver can provide the exact \field{hdr_len}
value. Device benefits from knowing the exact header length.
\item[VIRTIO_NET_F_RSC_EXT (61)] Device can process duplicated ACKs
and report the number of coalesced segments and duplicated ACKs.
\item[VIRTIO_NET_F_STANDBY (62)] Device may act as a standby for a primary
device with the same MAC address.
\end{description}
\subsubsection{Feature bit requirements}\label{sec:Device Types / Network Device / Feature bits / Feature bit requirements}
Some networking feature bits require other networking feature bits
(see \ref{drivernormative:Basic Facilities of a Virtio Device / Feature Bits}):
\begin{description}
\item[VIRTIO_NET_F_GUEST_TSO4] Requires VIRTIO_NET_F_GUEST_CSUM.
\item[VIRTIO_NET_F_GUEST_TSO6] Requires VIRTIO_NET_F_GUEST_CSUM.
\item[VIRTIO_NET_F_GUEST_ECN] Requires VIRTIO_NET_F_GUEST_TSO4 or VIRTIO_NET_F_GUEST_TSO6.
\item[VIRTIO_NET_F_GUEST_UFO] Requires VIRTIO_NET_F_GUEST_CSUM.
\item[VIRTIO_NET_F_HOST_TSO4] Requires VIRTIO_NET_F_CSUM.
\item[VIRTIO_NET_F_HOST_TSO6] Requires VIRTIO_NET_F_CSUM.
\item[VIRTIO_NET_F_HOST_ECN] Requires VIRTIO_NET_F_HOST_TSO4 or VIRTIO_NET_F_HOST_TSO6.
\item[VIRTIO_NET_F_HOST_UFO] Requires VIRTIO_NET_F_CSUM.
\item[VIRTIO_NET_F_CTRL_RX] Requires VIRTIO_NET_F_CTRL_VQ.
\item[VIRTIO_NET_F_CTRL_VLAN] Requires VIRTIO_NET_F_CTRL_VQ.
\item[VIRTIO_NET_F_GUEST_ANNOUNCE] Requires VIRTIO_NET_F_CTRL_VQ.
\item[VIRTIO_NET_F_MQ] Requires VIRTIO_NET_F_CTRL_VQ.
\item[VIRTIO_NET_F_CTRL_MAC_ADDR] Requires VIRTIO_NET_F_CTRL_VQ.
\item[VIRTIO_NET_F_RSC_EXT] Requires VIRTIO_NET_F_HOST_TSO4 or VIRTIO_NET_F_HOST_TSO6.
\end{description}
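The dependency table above can be checked mechanically. The following sketch (helper name and structure are illustrative, not part of this specification) validates a feature set against these requirements, using the bit numbers listed in the previous section:

```c
#include <assert.h>
#include <stdint.h>

#define VIRTIO_NET_F_CSUM           (1ULL << 0)
#define VIRTIO_NET_F_GUEST_CSUM     (1ULL << 1)
#define VIRTIO_NET_F_GUEST_TSO4     (1ULL << 7)
#define VIRTIO_NET_F_GUEST_TSO6     (1ULL << 8)
#define VIRTIO_NET_F_GUEST_ECN      (1ULL << 9)
#define VIRTIO_NET_F_GUEST_UFO      (1ULL << 10)
#define VIRTIO_NET_F_HOST_TSO4      (1ULL << 11)
#define VIRTIO_NET_F_HOST_TSO6      (1ULL << 12)
#define VIRTIO_NET_F_HOST_ECN       (1ULL << 13)
#define VIRTIO_NET_F_HOST_UFO       (1ULL << 14)
#define VIRTIO_NET_F_CTRL_VQ        (1ULL << 17)
#define VIRTIO_NET_F_CTRL_RX        (1ULL << 18)
#define VIRTIO_NET_F_CTRL_VLAN      (1ULL << 19)
#define VIRTIO_NET_F_GUEST_ANNOUNCE (1ULL << 21)
#define VIRTIO_NET_F_MQ             (1ULL << 22)
#define VIRTIO_NET_F_CTRL_MAC_ADDR  (1ULL << 23)
#define VIRTIO_NET_F_RSC_EXT        (1ULL << 61)

/* Returns 1 if the feature set satisfies the dependency table, else 0. */
static int net_features_valid(uint64_t f)
{
    if ((f & (VIRTIO_NET_F_GUEST_TSO4 | VIRTIO_NET_F_GUEST_TSO6 |
              VIRTIO_NET_F_GUEST_UFO)) && !(f & VIRTIO_NET_F_GUEST_CSUM))
        return 0;
    if ((f & VIRTIO_NET_F_GUEST_ECN) &&
        !(f & (VIRTIO_NET_F_GUEST_TSO4 | VIRTIO_NET_F_GUEST_TSO6)))
        return 0;
    if ((f & (VIRTIO_NET_F_HOST_TSO4 | VIRTIO_NET_F_HOST_TSO6 |
              VIRTIO_NET_F_HOST_UFO)) && !(f & VIRTIO_NET_F_CSUM))
        return 0;
    if ((f & VIRTIO_NET_F_HOST_ECN) &&
        !(f & (VIRTIO_NET_F_HOST_TSO4 | VIRTIO_NET_F_HOST_TSO6)))
        return 0;
    if ((f & (VIRTIO_NET_F_CTRL_RX | VIRTIO_NET_F_CTRL_VLAN |
              VIRTIO_NET_F_GUEST_ANNOUNCE | VIRTIO_NET_F_MQ |
              VIRTIO_NET_F_CTRL_MAC_ADDR)) && !(f & VIRTIO_NET_F_CTRL_VQ))
        return 0;
    if ((f & VIRTIO_NET_F_RSC_EXT) &&
        !(f & (VIRTIO_NET_F_HOST_TSO4 | VIRTIO_NET_F_HOST_TSO6)))
        return 0;
    return 1;
}
```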
\subsubsection{Legacy Interface: Feature bits}\label{sec:Device Types / Network Device / Feature bits / Legacy Interface: Feature bits}
\begin{description}
\item[VIRTIO_NET_F_GSO (6)] Device handles packets with any GSO type. This was supposed to indicate segmentation offload support, but
upon further investigation it became clear that multiple bits were needed.
\item[VIRTIO_NET_F_GUEST_RSC4 (41)] Device coalesces TCPIP v4 packets. This was implemented by a hypervisor patch for certification
purposes, and the current Windows driver depends on it. It will not function if the virtio-net device reports this feature.
\item[VIRTIO_NET_F_GUEST_RSC6 (42)] Device coalesces TCPIP v6 packets. Similar to VIRTIO_NET_F_GUEST_RSC4.
\end{description}
\subsection{Device configuration layout}\label{sec:Device Types / Network Device / Device configuration layout}
\label{sec:Device Types / Block Device / Feature bits / Device configuration layout}
Four driver-read-only configuration fields are currently defined. The \field{mac} address field
always exists (though is only valid if VIRTIO_NET_F_MAC is set), and
\field{status} only exists if VIRTIO_NET_F_STATUS is set. Two
read-only bits (for the driver) are currently defined for the status field:
VIRTIO_NET_S_LINK_UP and VIRTIO_NET_S_ANNOUNCE.
\begin{lstlisting}
#define VIRTIO_NET_S_LINK_UP 1
#define VIRTIO_NET_S_ANNOUNCE 2
\end{lstlisting}
The following driver-read-only field, \field{max_virtqueue_pairs}, only exists if
VIRTIO_NET_F_MQ is set. This field specifies the maximum number
of each of transmit and receive virtqueues (receiveq1\ldots receiveqN
and transmitq1\ldots transmitqN respectively) that can be configured once VIRTIO_NET_F_MQ
is negotiated.
The following driver-read-only field, \field{mtu}, only exists if
VIRTIO_NET_F_MTU is set. This field specifies the maximum MTU for the driver to
use.
\begin{lstlisting}
struct virtio_net_config {
u8 mac[6];
le16 status;
le16 max_virtqueue_pairs;
le16 mtu;
};
\end{lstlisting}
\devicenormative{\subsubsection}{Device configuration layout}{Device Types / Network Device / Device configuration layout}
The device MUST set \field{max_virtqueue_pairs} to between 1 and 0x8000 inclusive,
if it offers VIRTIO_NET_F_MQ.
The device MUST set \field{mtu} to between 68 and 65535 inclusive,
if it offers VIRTIO_NET_F_MTU.
The device SHOULD set \field{mtu} to at least 1280, if it offers
VIRTIO_NET_F_MTU.
The device MUST NOT modify \field{mtu} once it has been set.
The device MUST NOT pass received packets that exceed \field{mtu} (plus low
level ethernet header length) size with \field{gso_type} NONE or ECN
after VIRTIO_NET_F_MTU has been successfully negotiated.
The device MUST forward transmitted packets of up to \field{mtu} (plus low
level ethernet header length) size with \field{gso_type} NONE or ECN, and do
so without fragmentation, after VIRTIO_NET_F_MTU has been successfully
negotiated.
If the driver negotiates the VIRTIO_NET_F_STANDBY feature, the device MAY act
as a standby device for a primary device with the same MAC address.
\drivernormative{\subsubsection}{Device configuration layout}{Device Types / Network Device / Device configuration layout}
A driver SHOULD negotiate VIRTIO_NET_F_MAC if the device offers it.
If the driver negotiates the VIRTIO_NET_F_MAC feature, the driver MUST set
the physical address of the NIC to \field{mac}. Otherwise, it SHOULD
use a locally-administered MAC address (see \hyperref[intro:IEEE 802]{IEEE 802},
``9.2 48-bit universal LAN MAC addresses'').
If the driver does not negotiate the VIRTIO_NET_F_STATUS feature, it SHOULD
assume the link is active, otherwise it SHOULD read the link status from
the bottom bit of \field{status}.
A driver SHOULD negotiate VIRTIO_NET_F_MTU if the device offers it.
If the driver negotiates VIRTIO_NET_F_MTU, it MUST supply enough receive
buffers to receive at least one receive packet of size \field{mtu} (plus low
level ethernet header length) with \field{gso_type} NONE or ECN.
If the driver negotiates VIRTIO_NET_F_MTU, it MUST NOT transmit packets of
size exceeding the value of \field{mtu} (plus low level ethernet header length)
with \field{gso_type} NONE or ECN.
A driver SHOULD negotiate the VIRTIO_NET_F_STANDBY feature if the device offers it.
\subsubsection{Legacy Interface: Device configuration layout}\label{sec:Device Types / Network Device / Device configuration layout / Legacy Interface: Device configuration layout}
\label{sec:Device Types / Block Device / Feature bits / Device configuration layout / Legacy Interface: Device configuration layout}
When using the legacy interface, transitional devices and drivers
MUST format \field{status} and
\field{max_virtqueue_pairs} in struct virtio_net_config
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
When using the legacy interface, \field{mac} is driver-writable
which provided a way for drivers to update the MAC without
negotiating VIRTIO_NET_F_CTRL_MAC_ADDR.
\subsection{Device Initialization}\label{sec:Device Types / Network Device / Device Initialization}
A driver would perform a typical initialization routine like so:
\begin{enumerate}
\item Identify and initialize the receive and
transmission virtqueues, up to N of each kind. If
VIRTIO_NET_F_MQ feature bit is negotiated,
N=\field{max_virtqueue_pairs}, otherwise identify N=1.
\item If the VIRTIO_NET_F_CTRL_VQ feature bit is negotiated,
identify the control virtqueue.
\item Fill the receive queues with buffers: see \ref{sec:Device Types / Network Device / Device Operation / Setting Up Receive Buffers}.
\item Even with VIRTIO_NET_F_MQ, only receiveq1, transmitq1 and
controlq are used by default. The driver would send the
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command specifying the
number of the transmit and receive queues to use.
\item If the VIRTIO_NET_F_MAC feature bit is set, the configuration
space \field{mac} entry indicates the ``physical'' address of the
network card, otherwise the driver would typically generate a random
local MAC address.
\item If the VIRTIO_NET_F_STATUS feature bit is negotiated, the link
status comes from the bottom bit of \field{status}.
Otherwise, the driver assumes it's active.
\item A performant driver would indicate that it will generate checksumless
packets by negotiating the VIRTIO_NET_F_CSUM feature.
\item If that feature is negotiated, a driver can use TCP or UDP
segmentation offload by negotiating the VIRTIO_NET_F_HOST_TSO4 (IPv4
TCP), VIRTIO_NET_F_HOST_TSO6 (IPv6 TCP) and VIRTIO_NET_F_HOST_UFO
(UDP fragmentation) features.
\item The converse features are also available: a driver can save
the virtual device some work by negotiating these features.\note{For example, a network packet transported between two guests on
the same system might not need checksumming at all, nor segmentation,
if both guests are amenable.}
The VIRTIO_NET_F_GUEST_CSUM feature indicates that partially
checksummed packets can be received, and if it can do that then
the VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6,
VIRTIO_NET_F_GUEST_UFO and VIRTIO_NET_F_GUEST_ECN are the input
equivalents of the features described above.
See \ref{sec:Device Types / Network Device / Device Operation /
Setting Up Receive Buffers}~\nameref{sec:Device Types / Network
Device / Device Operation / Setting Up Receive Buffers} and
\ref{sec:Device Types / Network Device / Device Operation /
Processing of Incoming Packets}~\nameref{sec:Device Types /
Network Device / Device Operation / Processing of Incoming Packets} below.
\end{enumerate}
A truly minimal driver would only accept VIRTIO_NET_F_MAC and ignore
everything else.
\subsection{Device Operation}\label{sec:Device Types / Network Device / Device Operation}
Packets are transmitted by placing them in the
transmitq1\ldots transmitqN, and buffers for incoming packets are
placed in the receiveq1\ldots receiveqN. In each case, the packet
itself is preceded by a header:
\begin{lstlisting}
struct virtio_net_hdr {
#define VIRTIO_NET_HDR_F_NEEDS_CSUM 1
#define VIRTIO_NET_HDR_F_DATA_VALID 2
#define VIRTIO_NET_HDR_F_RSC_INFO 4
u8 flags;
#define VIRTIO_NET_HDR_GSO_NONE 0
#define VIRTIO_NET_HDR_GSO_TCPV4 1
#define VIRTIO_NET_HDR_GSO_UDP 3
#define VIRTIO_NET_HDR_GSO_TCPV6 4
#define VIRTIO_NET_HDR_GSO_ECN 0x80
u8 gso_type;
le16 hdr_len;
le16 gso_size;
le16 csum_start;
le16 csum_offset;
le16 num_buffers;
};
\end{lstlisting}
The controlq is used to control device features such as
filtering.
\subsubsection{Legacy Interface: Device Operation}\label{sec:Device Types / Network Device / Device Operation / Legacy Interface: Device Operation}
When using the legacy interface, transitional devices and drivers
MUST format the fields in struct virtio_net_hdr
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
The legacy driver only presented \field{num_buffers} in the struct virtio_net_hdr
when VIRTIO_NET_F_MRG_RXBUF was negotiated; without that feature the
structure was 2 bytes shorter.
When using the legacy interface, the driver SHOULD ignore the
used length for the transmit queues
and the controlq queue.
\begin{note}
Historically, some devices put
the total descriptor length there, even though no data was
actually written.
\end{note}
\subsubsection{Packet Transmission}\label{sec:Device Types / Network Device / Device Operation / Packet Transmission}
Transmitting a single packet is simple, but varies depending on
the different features the driver negotiated.
\begin{enumerate}
\item The driver can send a completely checksummed packet. In this case,
\field{flags} will be zero, and \field{gso_type} will be VIRTIO_NET_HDR_GSO_NONE.
\item If the driver negotiated VIRTIO_NET_F_CSUM, it can skip
checksumming the packet:
\begin{itemize}
\item \field{flags} has the VIRTIO_NET_HDR_F_NEEDS_CSUM set,
\item \field{csum_start} is set to the offset within the packet to begin checksumming,
and
\item \field{csum_offset} indicates how many bytes after \field{csum_start} the
new (16 bit ones' complement) checksum is placed by the device.
\item The TCP checksum field in the packet is set to the sum
of the TCP pseudo header, so that replacing it by the ones'
complement checksum of the TCP header and body will give the
correct result.
\end{itemize}
\begin{note}
For example, consider a partially checksummed TCP (IPv4) packet.
It will have a 14 byte ethernet header and 20 byte IP header
followed by the TCP header (with the TCP checksum field 16 bytes
into that header). \field{csum_start} will be 14+20 = 34 (the TCP
checksum includes the header), and \field{csum_offset} will be 16.
\end{note}
\item If the driver negotiated
VIRTIO_NET_F_HOST_TSO4, TSO6 or UFO, and the packet requires
TCP segmentation or UDP fragmentation, then \field{gso_type}
is set to VIRTIO_NET_HDR_GSO_TCPV4, TCPV6 or UDP.
(Otherwise, it is set to VIRTIO_NET_HDR_GSO_NONE). In this
case, packets larger than 1514 bytes can be transmitted: the
metadata indicates how to replicate the packet header to cut it
into smaller packets. The other gso fields are set:
\begin{itemize}
\item If the VIRTIO_NET_F_GUEST_HDRLEN feature has been negotiated,
\field{hdr_len} indicates the header length that needs to be replicated
for each packet. It's the number of bytes from the beginning of the packet
to the beginning of the transport payload.
Otherwise, if the VIRTIO_NET_F_GUEST_HDRLEN feature has not been negotiated,
\field{hdr_len} is a hint to the device as to how much of the header
needs to be kept to copy into each packet, usually set to the
length of the headers, including the transport header\footnote{Due to various bugs in implementations, this field is not useful
as a guarantee of the transport header size.
}.
\begin{note}
Some devices benefit from knowledge of the exact header length.
\end{note}
\item \field{gso_size} is the maximum size of each packet beyond that
header (i.e.\ MSS).
\item If the driver negotiated the VIRTIO_NET_F_HOST_ECN feature,
the VIRTIO_NET_HDR_GSO_ECN bit in \field{gso_type}
indicates that the TCP packet has the ECN bit set\footnote{This case is not handled by some older hardware, so is called out
specifically in the protocol.}.
\end{itemize}
\item \field{num_buffers} is set to zero. This field is unused on transmitted packets.
\item The header and packet are added as one output descriptor to the
transmitq, and the device is notified of the new entry
(see \ref{sec:Device Types / Network Device / Device Initialization}~\nameref{sec:Device Types / Network Device / Device Initialization}).
\end{enumerate}
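Steps 1 and 2 above can be sketched as a header-filling routine for the worked example (a partially checksummed, non-segmented TCP/IPv4 packet). This is an illustrative fragment only: fields are modeled as native integers, ignoring the little-endian wire representation, and the helper name is hypothetical.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct virtio_net_hdr {
    uint8_t  flags;
    uint8_t  gso_type;
    uint16_t hdr_len;
    uint16_t gso_size;
    uint16_t csum_start;
    uint16_t csum_offset;
    uint16_t num_buffers;
};

#define VIRTIO_NET_HDR_F_NEEDS_CSUM 1
#define VIRTIO_NET_HDR_GSO_NONE     0

#define ETH_HLEN 14        /* ethernet header */
#define IP4_HLEN 20        /* IPv4 header without options */
#define TCP_CSUM_OFFSET 16 /* checksum field offset within the TCP header */

/* Fill the header for a partially checksummed, non-segmented TCP/IPv4
 * packet: csum_start = 14 + 20 = 34, csum_offset = 16. */
static void fill_tx_hdr_partial_csum(struct virtio_net_hdr *h)
{
    memset(h, 0, sizeof(*h));
    h->flags = VIRTIO_NET_HDR_F_NEEDS_CSUM;
    h->gso_type = VIRTIO_NET_HDR_GSO_NONE;
    h->csum_start = ETH_HLEN + IP4_HLEN;
    h->csum_offset = TCP_CSUM_OFFSET;
    h->num_buffers = 0; /* unused on transmitted packets */
}
```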
\drivernormative{\paragraph}{Packet Transmission}{Device Types / Network Device / Device Operation / Packet Transmission}
The driver MUST set \field{num_buffers} to zero.
If VIRTIO_NET_F_CSUM is not negotiated, the driver MUST set
\field{flags} to zero and SHOULD supply a fully checksummed
packet to the device.
If VIRTIO_NET_F_HOST_TSO4 is negotiated, the driver MAY set
\field{gso_type} to VIRTIO_NET_HDR_GSO_TCPV4 to request TCPv4
segmentation, otherwise the driver MUST NOT set
\field{gso_type} to VIRTIO_NET_HDR_GSO_TCPV4.
If VIRTIO_NET_F_HOST_TSO6 is negotiated, the driver MAY set
\field{gso_type} to VIRTIO_NET_HDR_GSO_TCPV6 to request TCPv6
segmentation, otherwise the driver MUST NOT set
\field{gso_type} to VIRTIO_NET_HDR_GSO_TCPV6.
If VIRTIO_NET_F_HOST_UFO is negotiated, the driver MAY set
\field{gso_type} to VIRTIO_NET_HDR_GSO_UDP to request UDP
segmentation, otherwise the driver MUST NOT set
\field{gso_type} to VIRTIO_NET_HDR_GSO_UDP.
The driver SHOULD NOT send to the device TCP packets requiring segmentation offload
which have the Explicit Congestion Notification bit set, unless the
VIRTIO_NET_F_HOST_ECN feature is negotiated, in which case the
driver MUST set the VIRTIO_NET_HDR_GSO_ECN bit in
\field{gso_type}.
If the VIRTIO_NET_F_CSUM feature has been negotiated, the
driver MAY set the VIRTIO_NET_HDR_F_NEEDS_CSUM bit in
\field{flags}, if so:
\begin{enumerate}
\item the driver MUST validate the packet checksum at
offset \field{csum_offset} from \field{csum_start} as well as all
preceding offsets;
\item the driver MUST set the packet checksum stored in the
buffer to the TCP/UDP pseudo header;
\item the driver MUST set \field{csum_start} and
\field{csum_offset} such that calculating a ones'
complement checksum from \field{csum_start} up until the end of
the packet and storing the result at offset \field{csum_offset}
from \field{csum_start} will result in a fully checksummed
packet;
\end{enumerate}
If none of the VIRTIO_NET_F_HOST_TSO4, TSO6 or UFO options have
been negotiated, the driver MUST set \field{gso_type} to
VIRTIO_NET_HDR_GSO_NONE.
If \field{gso_type} differs from VIRTIO_NET_HDR_GSO_NONE, then
the driver MUST also set the VIRTIO_NET_HDR_F_NEEDS_CSUM bit in
\field{flags} and MUST set \field{gso_size} to indicate the
desired MSS.
If one of the VIRTIO_NET_F_HOST_TSO4, TSO6 or UFO options have
been negotiated:
\begin{itemize}
\item If the VIRTIO_NET_F_GUEST_HDRLEN feature has been negotiated,
the driver MUST set \field{hdr_len} to a value equal to the length
of the headers, including the transport header.
\item If the VIRTIO_NET_F_GUEST_HDRLEN feature has not been negotiated,
the driver SHOULD set \field{hdr_len} to a value
not less than the length of the headers, including the transport
header.
\end{itemize}
The driver SHOULD accept the VIRTIO_NET_F_GUEST_HDRLEN feature if it has
been offered, and if it's able to provide the exact header length.
The driver MUST NOT set the VIRTIO_NET_HDR_F_DATA_VALID and
VIRTIO_NET_HDR_F_RSC_INFO bits in \field{flags}.
\devicenormative{\paragraph}{Packet Transmission}{Device Types / Network Device / Device Operation / Packet Transmission}
The device MUST ignore \field{flags} bits that it does not recognize.
If VIRTIO_NET_HDR_F_NEEDS_CSUM bit in \field{flags} is not set, the
device MUST NOT use the \field{csum_start} and \field{csum_offset}.
If one of the VIRTIO_NET_F_HOST_TSO4, TSO6 or UFO options have
been negotiated:
\begin{itemize}
\item If the VIRTIO_NET_F_GUEST_HDRLEN feature has been negotiated,
the device MAY use \field{hdr_len} as the transport header size.
\begin{note}
Caution should be taken by the implementation so as to prevent
a malicious driver from attacking the device by setting an incorrect hdr_len.
\end{note}
\item If the VIRTIO_NET_F_GUEST_HDRLEN feature has not been negotiated,
the device MAY use \field{hdr_len} only as a hint about the
transport header size.
The device MUST NOT rely on \field{hdr_len} to be correct.
\begin{note}
This is due to various bugs in implementations.
\end{note}
\end{itemize}
If VIRTIO_NET_HDR_F_NEEDS_CSUM is not set, the device MUST NOT
rely on the packet checksum being correct.
\paragraph{Packet Transmission Interrupt}\label{sec:Device Types / Network Device / Device Operation / Packet Transmission / Packet Transmission Interrupt}
Often a driver will suppress transmission virtqueue interrupts
and check for used packets in the transmit path of following
packets.
The normal behavior in this interrupt handler is to retrieve
used buffers from the virtqueue and free the corresponding
headers and packets.
\subsubsection{Setting Up Receive Buffers}\label{sec:Device Types / Network Device / Device Operation / Setting Up Receive Buffers}
It is generally a good idea to keep the receive virtqueue as
fully populated as possible: if it runs out, network performance
will suffer.
If the VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6 or
VIRTIO_NET_F_GUEST_UFO features are used, the maximum incoming packet
will be up to 65550 bytes long (the maximum size of a
TCP or UDP packet, plus the 14 byte ethernet header), otherwise
1514 bytes. The 12-byte struct virtio_net_hdr is prepended to this,
making for 65562 or 1526 bytes.
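The arithmetic above amounts to the following (an illustrative helper, not part of this specification):

```c
#include <assert.h>

/* The 12-byte struct virtio_net_hdr is prepended to each packet. */
#define VIRTIO_NET_HDR_SIZE 12u

/* Buffer size needed per receive buffer when VIRTIO_NET_F_MRG_RXBUF is
 * not negotiated: the largest packet the device may deliver, plus the
 * prepended header. */
static unsigned rx_buffer_size(int guest_offloads_negotiated)
{
    unsigned max_packet = guest_offloads_negotiated ? 65550u : 1514u;
    return max_packet + VIRTIO_NET_HDR_SIZE;
}
```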
\drivernormative{\paragraph}{Setting Up Receive Buffers}{Device Types / Network Device / Device Operation / Setting Up Receive Buffers}
\begin{itemize}
\item If VIRTIO_NET_F_MRG_RXBUF is not negotiated:
\begin{itemize}
\item If VIRTIO_NET_F_GUEST_TSO4, VIRTIO_NET_F_GUEST_TSO6 or
VIRTIO_NET_F_GUEST_UFO are negotiated, the driver SHOULD populate
the receive queue(s) with buffers of at least 65562 bytes.
\item Otherwise, the driver SHOULD populate the receive queue(s)
with buffers of at least 1526 bytes.
\end{itemize}
\item If VIRTIO_NET_F_MRG_RXBUF is negotiated, each buffer MUST be at
least the size of the struct virtio_net_hdr.
\end{itemize}
\begin{note}
Obviously each buffer can be split across multiple descriptor elements.
\end{note}
If VIRTIO_NET_F_MQ is negotiated, each of receiveq1\ldots receiveqN
that will be used SHOULD be populated with receive buffers.
\devicenormative{\paragraph}{Setting Up Receive Buffers}{Device Types / Network Device / Device Operation / Setting Up Receive Buffers}
The device MUST set \field{num_buffers} to the number of descriptors used to
hold the incoming packet.
The device MUST use only a single descriptor if VIRTIO_NET_F_MRG_RXBUF
was not negotiated.
\begin{note}
This means that \field{num_buffers} will always be 1
if VIRTIO_NET_F_MRG_RXBUF is not negotiated.
\end{note}
\subsubsection{Processing of Incoming Packets}\label{sec:Device Types / Network Device / Device Operation / Processing of Incoming Packets}
\label{sec:Device Types / Network Device / Device Operation / Processing of Packets}%old label for latexdiff
When a packet is copied into a buffer in the receiveq, the
optimal path is to disable further used buffer notifications for the
receiveq and process packets until no more are found, then re-enable
them.
Processing incoming packets involves:
\begin{enumerate}
\item \field{num_buffers} indicates how many descriptors
this packet is spread over (including this one): this will
always be 1 if VIRTIO_NET_F_MRG_RXBUF was not negotiated.
This allows receipt of large packets without having to allocate large
buffers: a packet that does not fit in a single buffer can flow
over to the next buffer, and so on. In this case, there will be
at least \field{num_buffers} used buffers in the virtqueue, and the device
chains them together to form a single packet in a way similar to
how it would store it in a single buffer spread over multiple
descriptors.
The other buffers will not begin with a struct virtio_net_hdr.
\item If
\field{num_buffers} is one, then the entire packet will be
contained within this buffer, immediately following the struct
virtio_net_hdr.
\item If the VIRTIO_NET_F_GUEST_CSUM feature was negotiated, the
VIRTIO_NET_HDR_F_DATA_VALID bit in \field{flags} can be
set: if so, the device has validated the packet checksum.
In case of multiple encapsulated protocols, one level of checksums
has been validated.
\end{enumerate}
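As a non-normative illustration, a driver might assemble a packet spread over \field{num_buffers} buffers as in the following C sketch. The struct layout and helper names here are illustrative only; a little-endian host is assumed so the in-memory header matches the wire format.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Illustrative, non-normative header layout: only the fields used below. */
struct net_hdr_mrg {
    uint8_t  flags;
    uint8_t  gso_type;
    uint16_t hdr_len;
    uint16_t gso_size;
    uint16_t csum_start;
    uint16_t csum_offset;
    uint16_t num_buffers;   /* little-endian on the wire; LE host assumed */
};

/* Copy a packet spread over num_buffers used buffers into one
 * contiguous output area.  bufs[i]/lens[i] describe each used buffer;
 * only the first buffer begins with the header. */
static size_t assemble_rx_packet(uint8_t *const bufs[], const size_t lens[],
                                 uint8_t *out, size_t out_cap)
{
    const struct net_hdr_mrg *hdr = (const struct net_hdr_mrg *)bufs[0];
    size_t off = 0;

    for (uint16_t i = 0; i < hdr->num_buffers; i++) {
        const uint8_t *src = bufs[i];
        size_t len = lens[i];
        if (i == 0) {                 /* skip the header once */
            src += sizeof *hdr;
            len -= sizeof *hdr;
        }
        if (off + len > out_cap)
            return 0;                 /* output area too small */
        memcpy(out + off, src, len);
        off += len;
    }
    return off;                       /* total packet length */
}
```

Note that, per the text above, only the first buffer carries a struct virtio_net_hdr; the sketch therefore skips the header exactly once.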
Additionally, VIRTIO_NET_F_GUEST_CSUM, TSO4, TSO6, UDP and ECN
features enable receive checksum, large receive offload and ECN
support which are the input equivalents of the transmit checksum,
transmit segmentation offloading and ECN features, as described
in \ref{sec:Device Types / Network Device / Device Operation /
Packet Transmission}:
\begin{enumerate}
\item If the VIRTIO_NET_F_GUEST_TSO4, TSO6 or UFO options were
negotiated, then \field{gso_type} MAY be something other than
VIRTIO_NET_HDR_GSO_NONE, and the \field{gso_size} field indicates the
desired MSS (see Packet Transmission point 2).
\item If the VIRTIO_NET_F_RSC_EXT option was negotiated (this
implies one of VIRTIO_NET_F_GUEST_TSO4, TSO6), the
device also processes duplicated ACK segments, reports the
number of coalesced TCP segments in the \field{csum_start} field and the
number of duplicated ACK segments in the \field{csum_offset} field,
and sets the VIRTIO_NET_HDR_F_RSC_INFO bit in \field{flags}.
\item If the VIRTIO_NET_F_GUEST_CSUM feature was negotiated, the
VIRTIO_NET_HDR_F_NEEDS_CSUM bit in \field{flags} can be
set: if so, the packet checksum at offset \field{csum_offset}
from \field{csum_start} and any preceding checksums
have been validated. The checksum on the packet is incomplete and
if bit VIRTIO_NET_HDR_F_RSC_INFO is not set in \field{flags},
then \field{csum_start} and \field{csum_offset} indicate how to calculate it
(see Packet Transmission point 1).
\end{enumerate}
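The checksum completion described in point 3 above can be sketched in non-normative C. The function names are illustrative; the arithmetic is the standard 16-bit ones' complement checksum computed from \field{csum_start} to the end of the packet, stored at \field{csum_offset} from \field{csum_start}.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* 16-bit ones' complement checksum over a byte range (network order). */
static uint16_t csum16(const uint8_t *data, size_t len)
{
    uint32_t sum = 0;
    while (len > 1) {
        sum += ((uint32_t)data[0] << 8) | data[1];
        data += 2;
        len -= 2;
    }
    if (len)
        sum += (uint32_t)data[0] << 8;
    while (sum >> 16)
        sum = (sum & 0xffff) + (sum >> 16);
    return (uint16_t)~sum;
}

/* Complete a partially checksummed packet: checksum from csum_start to
 * the end of the packet and store the result (big-endian, as on the
 * wire) at csum_start + csum_offset, as described for
 * VIRTIO_NET_HDR_F_NEEDS_CSUM. */
static void complete_csum(uint8_t *pkt, size_t len,
                          uint16_t csum_start, uint16_t csum_offset)
{
    uint16_t c = csum16(pkt + csum_start, len - csum_start);
    pkt[csum_start + csum_offset]     = (uint8_t)(c >> 8);
    pkt[csum_start + csum_offset + 1] = (uint8_t)c;
}
```

A useful property of this checksum is that a correctly completed region verifies to zero, which is how the sketch can be checked.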
\devicenormative{\paragraph}{Processing of Incoming Packets}{Device Types / Network Device / Device Operation / Processing of Incoming Packets}
\label{devicenormative:Device Types / Network Device / Device Operation / Processing of Packets}%old label for latexdiff
If VIRTIO_NET_F_MRG_RXBUF has not been negotiated, the device MUST set
\field{num_buffers} to 1.
If VIRTIO_NET_F_MRG_RXBUF has been negotiated, the device MUST set
\field{num_buffers} to indicate the number of buffers
the packet (including the header) is spread over.
If a receive packet is spread over multiple buffers, the device
MUST use all buffers but the last (i.e. the first $num_buffers -
1$ buffers) completely up to the full length of each buffer
supplied by the driver.
The device MUST use all buffers used by a single receive
packet together, such that at least \field{num_buffers} are
observed by driver as used.
If VIRTIO_NET_F_GUEST_CSUM is not negotiated, the device MUST set
\field{flags} to zero and SHOULD supply a fully checksummed
packet to the driver.
If VIRTIO_NET_F_GUEST_TSO4 is not negotiated, the device MUST NOT set
\field{gso_type} to VIRTIO_NET_HDR_GSO_TCPV4.
If VIRTIO_NET_F_GUEST_UDP is not negotiated, the device MUST NOT set
\field{gso_type} to VIRTIO_NET_HDR_GSO_UDP.
If VIRTIO_NET_F_GUEST_TSO6 is not negotiated, the device MUST NOT set
\field{gso_type} to VIRTIO_NET_HDR_GSO_TCPV6.
The device SHOULD NOT send to the driver TCP packets requiring segmentation offload
which have the Explicit Congestion Notification bit set, unless the
VIRTIO_NET_F_GUEST_ECN feature is negotiated, in which case the
device MUST set the VIRTIO_NET_HDR_GSO_ECN bit in
\field{gso_type}.
If the VIRTIO_NET_F_GUEST_CSUM feature has been negotiated, the
device MAY set the VIRTIO_NET_HDR_F_NEEDS_CSUM bit in
\field{flags}, if so:
\begin{enumerate}
\item the device MUST validate the packet checksum at
offset \field{csum_offset} from \field{csum_start} as well as all
preceding offsets;
\item the device MUST set the packet checksum stored in the
receive buffer to the TCP/UDP pseudo header;
\item the device MUST set \field{csum_start} and
\field{csum_offset} such that calculating a ones'
complement checksum from \field{csum_start} up until the
end of the packet and storing the result at offset
\field{csum_offset} from \field{csum_start} will result in a
fully checksummed packet;
\end{enumerate}
If none of the VIRTIO_NET_F_GUEST_TSO4, TSO6 or UFO options have
been negotiated, the device MUST set \field{gso_type} to
VIRTIO_NET_HDR_GSO_NONE.
If \field{gso_type} differs from VIRTIO_NET_HDR_GSO_NONE, then
the device MUST also set the VIRTIO_NET_HDR_F_NEEDS_CSUM bit in
\field{flags} and MUST set \field{gso_size} to indicate the desired MSS.
If VIRTIO_NET_F_RSC_EXT was negotiated, the device MUST also
set the VIRTIO_NET_HDR_F_RSC_INFO bit in \field{flags},
set \field{csum_start} to the number of coalesced TCP segments and
set \field{csum_offset} to the number of received duplicated ACK segments.
If VIRTIO_NET_F_RSC_EXT was not negotiated, the device MUST
NOT set the VIRTIO_NET_HDR_F_RSC_INFO bit in \field{flags}.
If one of the VIRTIO_NET_F_GUEST_TSO4, TSO6 or UFO options have
been negotiated, the device SHOULD set \field{hdr_len} to a value
not less than the length of the headers, including the transport
header.
If the VIRTIO_NET_F_GUEST_CSUM feature has been negotiated, the
device MAY set the VIRTIO_NET_HDR_F_DATA_VALID bit in
\field{flags}, if so, the device MUST validate the packet
checksum (in case of multiple encapsulated protocols, one level
of checksums is validated).
\drivernormative{\paragraph}{Processing of Incoming
Packets}{Device Types / Network Device / Device Operation /
Processing of Incoming Packets}
The driver MUST ignore \field{flags} bits that it does not recognize.
If the VIRTIO_NET_HDR_F_NEEDS_CSUM bit in \field{flags} is not set or
if the VIRTIO_NET_HDR_F_RSC_INFO bit in \field{flags} is set, the
driver MUST NOT use the \field{csum_start} and \field{csum_offset} fields.
If one of the VIRTIO_NET_F_GUEST_TSO4, TSO6 or UFO options have
been negotiated, the driver MAY use \field{hdr_len} only as a hint about the
transport header size.
The driver MUST NOT rely on \field{hdr_len} to be correct.
\begin{note}
This is due to various bugs in implementations.
\end{note}
If neither VIRTIO_NET_HDR_F_NEEDS_CSUM nor
VIRTIO_NET_HDR_F_DATA_VALID is set, the driver MUST NOT
rely on the packet checksum being correct.
\subsubsection{Control Virtqueue}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue}
The driver uses the control virtqueue (if VIRTIO_NET_F_CTRL_VQ is
negotiated) to send commands to manipulate various features of
the device which would not easily map into the configuration
space.
All commands are of the following form:
\begin{lstlisting}
struct virtio_net_ctrl {
u8 class;
u8 command;
u8 command-specific-data[];
u8 ack;
};
/* ack values */
#define VIRTIO_NET_OK 0
#define VIRTIO_NET_ERR 1
\end{lstlisting}
The \field{class}, \field{command} and command-specific-data are set by the
driver, and the device sets the \field{ack} byte. There is little the
driver can do except issue a diagnostic if \field{ack} is not
VIRTIO_NET_OK.
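As a non-normative sketch, a driver could lay out such a command buffer as follows. The helper name is illustrative; the layout is simply \field{class}, \field{command}, command-specific-data, then the \field{ack} byte, as in struct virtio_net_ctrl above.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Lay out class, command, command-specific-data and the trailing ack
 * byte contiguously, as the driver presents them to the device.
 * Returns the total length, or 0 if the buffer is too small. */
static size_t build_ctrl_cmd(uint8_t *buf, size_t cap,
                             uint8_t cls, uint8_t cmd,
                             const void *data, size_t data_len)
{
    if (cap < 2 + data_len + 1)
        return 0;
    buf[0] = cls;
    buf[1] = cmd;
    if (data_len)
        memcpy(buf + 2, data, data_len);
    buf[2 + data_len] = 0xff;  /* ack: placeholder until the device writes it */
    return 2 + data_len + 1;
}
```

For example, turning promiscuous mode on (see Packet Receive Filtering below) would use class VIRTIO_NET_CTRL_RX (0), command VIRTIO_NET_CTRL_RX_PROMISC (0) and a single data byte of 1.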
\paragraph{Packet Receive Filtering}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / Packet Receive Filtering}
\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / Setting Promiscuous Mode}%old label for latexdiff
If the VIRTIO_NET_F_CTRL_RX and VIRTIO_NET_F_CTRL_RX_EXTRA
features are negotiated, the driver can send control commands for
promiscuous mode, multicast, unicast and broadcast receiving.
\begin{note}
In general, these commands are best-effort: unwanted
packets could still arrive.
\end{note}
\begin{lstlisting}
#define VIRTIO_NET_CTRL_RX 0
#define VIRTIO_NET_CTRL_RX_PROMISC 0
#define VIRTIO_NET_CTRL_RX_ALLMULTI 1
#define VIRTIO_NET_CTRL_RX_ALLUNI 2
#define VIRTIO_NET_CTRL_RX_NOMULTI 3
#define VIRTIO_NET_CTRL_RX_NOUNI 4
#define VIRTIO_NET_CTRL_RX_NOBCAST 5
\end{lstlisting}
\devicenormative{\subparagraph}{Packet Receive Filtering}{Device Types / Network Device / Device Operation / Control Virtqueue / Packet Receive Filtering}
If the VIRTIO_NET_F_CTRL_RX feature has been negotiated,
the device MUST support the following VIRTIO_NET_CTRL_RX class
commands:
\begin{itemize}
\item VIRTIO_NET_CTRL_RX_PROMISC turns promiscuous mode on and
off. The command-specific-data is one byte containing 0 (off) or
1 (on). If promiscuous mode is on, the device SHOULD receive all
incoming packets.
This SHOULD take effect even if one of the other modes set by
a VIRTIO_NET_CTRL_RX class command is on.
\item VIRTIO_NET_CTRL_RX_ALLMULTI turns all-multicast receive on and
off. The command-specific-data is one byte containing 0 (off) or
1 (on). When all-multicast receive is on, the device SHOULD allow
all incoming multicast packets.
\end{itemize}
If the VIRTIO_NET_F_CTRL_RX_EXTRA feature has been negotiated,
the device MUST support the following VIRTIO_NET_CTRL_RX class
commands:
\begin{itemize}
\item VIRTIO_NET_CTRL_RX_ALLUNI turns all-unicast receive on and
off. The command-specific-data is one byte containing 0 (off) or
1 (on). When all-unicast receive is on, the device SHOULD allow
all incoming unicast packets.
\item VIRTIO_NET_CTRL_RX_NOMULTI suppresses multicast receive.
The command-specific-data is one byte containing 0 (multicast
receive allowed) or 1 (multicast receive suppressed).
When multicast receive is suppressed, the device SHOULD NOT
send multicast packets to the driver.
This SHOULD take effect even if VIRTIO_NET_CTRL_RX_ALLMULTI is on.
This filter SHOULD NOT apply to broadcast packets.
\item VIRTIO_NET_CTRL_RX_NOUNI suppresses unicast receive.
The command-specific-data is one byte containing 0 (unicast
receive allowed) or 1 (unicast receive suppressed).
When unicast receive is suppressed, the device SHOULD NOT
send unicast packets to the driver.
This SHOULD take effect even if VIRTIO_NET_CTRL_RX_ALLUNI is on.
\item VIRTIO_NET_CTRL_RX_NOBCAST suppresses broadcast receive.
The command-specific-data is one byte containing 0 (broadcast
receive allowed) or 1 (broadcast receive suppressed).
When broadcast receive is suppressed, the device SHOULD NOT
send broadcast packets to the driver.
This SHOULD take effect even if VIRTIO_NET_CTRL_RX_ALLMULTI is on.
\end{itemize}
\drivernormative{\subparagraph}{Packet Receive Filtering}{Device Types / Network Device / Device Operation / Control Virtqueue / Packet Receive Filtering}
If the VIRTIO_NET_F_CTRL_RX feature has not been negotiated,
the driver MUST NOT issue commands VIRTIO_NET_CTRL_RX_PROMISC or
VIRTIO_NET_CTRL_RX_ALLMULTI.
If the VIRTIO_NET_F_CTRL_RX_EXTRA feature has not been negotiated,
the driver MUST NOT issue commands
VIRTIO_NET_CTRL_RX_ALLUNI,
VIRTIO_NET_CTRL_RX_NOMULTI,
VIRTIO_NET_CTRL_RX_NOUNI or
VIRTIO_NET_CTRL_RX_NOBCAST.
\paragraph{Setting MAC Address Filtering}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / Setting MAC Address Filtering}
If the VIRTIO_NET_F_CTRL_RX feature is negotiated, the driver can
send control commands for MAC address filtering.
\begin{lstlisting}
struct virtio_net_ctrl_mac {
le32 entries;
u8 macs[entries][6];
};
#define VIRTIO_NET_CTRL_MAC 1
#define VIRTIO_NET_CTRL_MAC_TABLE_SET 0
#define VIRTIO_NET_CTRL_MAC_ADDR_SET 1
\end{lstlisting}
The device can filter incoming packets by any number of destination
MAC addresses\footnote{Since there are no guarantees, it can use a hash filter or
silently switch to allmulti or promiscuous mode if it is given too
many addresses.
}. This table is set using the class
VIRTIO_NET_CTRL_MAC and the command VIRTIO_NET_CTRL_MAC_TABLE_SET. The
command-specific-data is two variable length tables of 6-byte MAC
addresses (as described in struct virtio_net_ctrl_mac). The first table contains unicast addresses, and the second
contains multicast addresses.
The VIRTIO_NET_CTRL_MAC_ADDR_SET command is used to set the
default MAC address which rx filtering
accepts (and if VIRTIO_NET_F_MAC_ADDR has been negotiated,
this will be reflected in \field{mac} in config space).
The command-specific-data for VIRTIO_NET_CTRL_MAC_ADDR_SET is
the 6-byte MAC address.
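The VIRTIO_NET_CTRL_MAC_TABLE_SET payload above can be sketched in non-normative C. The helper names are illustrative; the layout is an le32 unicast count, that many 6-byte MACs, then an le32 multicast count and that many 6-byte MACs, matching struct virtio_net_ctrl_mac used twice.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

static void put_le32(uint8_t *p, uint32_t v)
{
    p[0] = (uint8_t)v;
    p[1] = (uint8_t)(v >> 8);
    p[2] = (uint8_t)(v >> 16);
    p[3] = (uint8_t)(v >> 24);
}

/* Build the command-specific-data for VIRTIO_NET_CTRL_MAC_TABLE_SET:
 * the unicast table followed by the multicast table.
 * Returns the total length, or 0 if the buffer is too small. */
static size_t build_mac_tables(uint8_t *buf, size_t cap,
                               const uint8_t (*uni)[6], uint32_t n_uni,
                               const uint8_t (*multi)[6], uint32_t n_multi)
{
    size_t need = 4 + 6 * (size_t)n_uni + 4 + 6 * (size_t)n_multi;
    size_t off = 0;

    if (cap < need)
        return 0;
    put_le32(buf + off, n_uni);
    off += 4;
    memcpy(buf + off, uni, 6 * (size_t)n_uni);
    off += 6 * (size_t)n_uni;
    put_le32(buf + off, n_multi);
    off += 4;
    memcpy(buf + off, multi, 6 * (size_t)n_multi);
    off += 6 * (size_t)n_multi;
    return off;
}
```

Either count may be zero, in which case the corresponding table is just the le32 zero.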
\devicenormative{\subparagraph}{Setting MAC Address Filtering}{Device Types / Network Device / Device Operation / Control Virtqueue / Setting MAC Address Filtering}
The device MUST have an empty MAC filtering table on reset.
The device MUST update the MAC filtering table before it consumes
the VIRTIO_NET_CTRL_MAC_TABLE_SET command.
The device MUST update \field{mac} in config space before it consumes
the VIRTIO_NET_CTRL_MAC_ADDR_SET command, if VIRTIO_NET_F_MAC_ADDR has
been negotiated.
The device SHOULD drop incoming packets which have a destination MAC which
matches neither the \field{mac} (or that set with VIRTIO_NET_CTRL_MAC_ADDR_SET)
nor the MAC filtering table.
\drivernormative{\subparagraph}{Setting MAC Address Filtering}{Device Types / Network Device / Device Operation / Control Virtqueue / Setting MAC Address Filtering}
If VIRTIO_NET_F_CTRL_RX has not been negotiated,
the driver MUST NOT issue VIRTIO_NET_CTRL_MAC class commands.
If VIRTIO_NET_F_CTRL_RX has been negotiated,
the driver SHOULD issue VIRTIO_NET_CTRL_MAC_ADDR_SET
to set the default mac if it is different from \field{mac}.
The driver MUST follow the VIRTIO_NET_CTRL_MAC_TABLE_SET command
by a le32 number, followed by that number of non-multicast
MAC addresses, followed by another le32 number, followed by
that number of multicast addresses. Either number MAY be 0.
\subparagraph{Legacy Interface: Setting MAC Address Filtering}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / Setting MAC Address Filtering / Legacy Interface: Setting MAC Address Filtering}
When using the legacy interface, transitional devices and drivers
MUST format \field{entries} in struct virtio_net_ctrl_mac
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
Legacy drivers that did not negotiate VIRTIO_NET_F_CTRL_MAC_ADDR
changed \field{mac} in config space while the NIC was accepting
incoming packets. Such drivers always wrote the mac value from
the first byte to the last; therefore, after detecting such a driver,
a transitional device MAY defer the MAC update, or MAY defer
processing incoming packets, until the driver writes the last byte
of \field{mac} in the config space.
\paragraph{VLAN Filtering}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / VLAN Filtering}
If the driver negotiates the VIRTIO_NET_F_CTRL_VLAN feature, it
can control a VLAN filter table in the device.
\begin{note}
Similar to the MAC address based filtering, the VLAN filtering
is also best-effort: unwanted packets could still arrive.
\end{note}
\begin{lstlisting}
#define VIRTIO_NET_CTRL_VLAN 2
#define VIRTIO_NET_CTRL_VLAN_ADD 0
#define VIRTIO_NET_CTRL_VLAN_DEL 1
\end{lstlisting}
Both the VIRTIO_NET_CTRL_VLAN_ADD and VIRTIO_NET_CTRL_VLAN_DEL
command take a little-endian 16-bit VLAN id as the command-specific-data.
\subparagraph{Legacy Interface: VLAN Filtering}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / VLAN Filtering / Legacy Interface: VLAN Filtering}
When using the legacy interface, transitional devices and drivers
MUST format the VLAN id
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
\paragraph{Gratuitous Packet Sending}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / Gratuitous Packet Sending}
If the driver negotiates the VIRTIO_NET_F_GUEST_ANNOUNCE feature
(which depends on VIRTIO_NET_F_CTRL_VQ), the device can ask the
driver to send gratuitous packets; this is usually done after the
guest has been physically migrated and needs to announce its
presence on the new network links. (As the hypervisor does not have
knowledge of the guest network configuration, e.g. tagged VLANs, it
is simplest to prod the guest in this way.)
\begin{lstlisting}
#define VIRTIO_NET_CTRL_ANNOUNCE 3
#define VIRTIO_NET_CTRL_ANNOUNCE_ACK 0
\end{lstlisting}
The driver checks the VIRTIO_NET_S_ANNOUNCE bit in the device
configuration \field{status} field when it notices a change of the
device configuration. The command VIRTIO_NET_CTRL_ANNOUNCE_ACK is
used to indicate that the driver has received the notification,
whereupon the device clears the VIRTIO_NET_S_ANNOUNCE bit in
\field{status}.
Processing this notification involves:
\begin{enumerate}
\item Sending the gratuitous packets (e.g. ARP) or marking that there
are pending gratuitous packets to be sent and letting a deferred
routine send them.
\item Sending VIRTIO_NET_CTRL_ANNOUNCE_ACK command through control
vq.
\end{enumerate}
\drivernormative{\subparagraph}{Gratuitous Packet Sending}{Device Types / Network Device / Device Operation / Control Virtqueue / Gratuitous Packet Sending}
If the driver negotiates VIRTIO_NET_F_GUEST_ANNOUNCE, it SHOULD notify
network peers of its new location after it sees the VIRTIO_NET_S_ANNOUNCE bit
in \field{status}. The driver MUST send a command on the command queue
with class VIRTIO_NET_CTRL_ANNOUNCE and command VIRTIO_NET_CTRL_ANNOUNCE_ACK.
\devicenormative{\subparagraph}{Gratuitous Packet Sending}{Device Types / Network Device / Device Operation / Control Virtqueue / Gratuitous Packet Sending}
If VIRTIO_NET_F_GUEST_ANNOUNCE is negotiated, the device MUST clear the
VIRTIO_NET_S_ANNOUNCE bit in \field{status} upon receipt of a command buffer
with class VIRTIO_NET_CTRL_ANNOUNCE and command VIRTIO_NET_CTRL_ANNOUNCE_ACK
before marking the buffer as used.
\paragraph{Automatic receive steering in multiqueue mode}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / Automatic receive steering in multiqueue mode}
If the driver negotiates the VIRTIO_NET_F_MQ feature bit (depends
on VIRTIO_NET_F_CTRL_VQ), it MAY transmit outgoing packets on one
of the multiple transmitq1\ldots transmitqN and ask the device to
queue incoming packets into one of the multiple receiveq1\ldots receiveqN
depending on the packet flow.
\begin{lstlisting}
struct virtio_net_ctrl_mq {
le16 virtqueue_pairs;
};
#define VIRTIO_NET_CTRL_MQ 4
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET 0
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MIN 1
#define VIRTIO_NET_CTRL_MQ_VQ_PAIRS_MAX 0x8000
\end{lstlisting}
Multiqueue is disabled by default. The driver enables multiqueue by
executing the VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command, specifying
the number of transmit and receive queues to be used, up to
\field{max_virtqueue_pairs}; subsequently,
transmitq1\ldots transmitqn and receiveq1\ldots receiveqn where
n=\field{virtqueue_pairs} MAY be used.
When multiqueue is enabled, the device MUST use automatic receive steering
based on packet flow. Programming of the receive steering
classifier is implicit. After the driver has transmitted a packet of a
flow on transmitqX, the device SHOULD cause incoming packets for that flow to
be steered to receiveqX. For uni-directional protocols, or where
no packets have been transmitted yet, the device MAY steer a packet
to a random queue out of the specified receiveq1\ldots receiveqn.
Multiqueue is disabled by setting \field{virtqueue_pairs} to 1 (this is
the default) and waiting for the device to use the command buffer.
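The implicit steering described above can be modeled, non-normatively, as a per-flow lookup table on the device side. The flow hash, table size and fallback policy here are entirely illustrative; a real device may classify flows however it likes, and unknown flows may go to any allowed queue.

```c
#include <assert.h>
#include <stdint.h>

/* Toy model of the implicit steering classifier: remember, per flow,
 * the queue pair the driver last transmitted on, and steer incoming
 * packets of that flow to the matching receive queue. */
#define FLOW_SLOTS 256
static uint16_t flow_last_txq[FLOW_SLOTS];  /* 0 = no packet seen yet */

static void note_transmit(uint32_t flow_hash, uint16_t txq)
{
    flow_last_txq[flow_hash % FLOW_SLOTS] = txq;
}

/* Returns the 1-based receive queue index to steer to. */
static uint16_t steer_receive(uint32_t flow_hash, uint16_t virtqueue_pairs)
{
    uint16_t q = flow_last_txq[flow_hash % FLOW_SLOTS];
    if (q >= 1 && q <= virtqueue_pairs)
        return q;   /* same index as the last transmit queue */
    return 1;       /* unknown flow: any allowed queue; receiveq1 here */
}
```

The range check matters: after the driver shrinks \field{virtqueue_pairs}, remembered queues beyond the new range must not be used.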
\drivernormative{\subparagraph}{Automatic receive steering in multiqueue mode}{Device Types / Network Device / Device Operation / Control Virtqueue / Automatic receive steering in multiqueue mode}
The driver MUST configure the virtqueues before enabling them with the
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command.
The driver MUST NOT request a \field{virtqueue_pairs} of 0 or
greater than \field{max_virtqueue_pairs} in the device configuration space.
The driver MUST queue packets only on transmitq1 before the
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command.
The driver MUST NOT queue packets on transmit queues greater than
\field{virtqueue_pairs} once it has placed the VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command in the available ring.
\devicenormative{\subparagraph}{Automatic receive steering in multiqueue mode}{Device Types / Network Device / Device Operation / Control Virtqueue / Automatic receive steering in multiqueue mode}
The device MUST queue packets only on receiveq1 before the
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command.
The device MUST NOT queue packets on receive queues greater than
\field{virtqueue_pairs} once it has placed the
VIRTIO_NET_CTRL_MQ_VQ_PAIRS_SET command in a used buffer.
\subparagraph{Legacy Interface: Automatic receive steering in multiqueue mode}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / Automatic receive steering in multiqueue mode / Legacy Interface: Automatic receive steering in multiqueue mode}
When using the legacy interface, transitional devices and drivers
MUST format \field{virtqueue_pairs}
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
\paragraph{Offloads State Configuration}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / Offloads State Configuration}
If the VIRTIO_NET_F_CTRL_GUEST_OFFLOADS feature is negotiated, the driver can
send control commands for dynamic offloads state configuration.
\subparagraph{Setting Offloads State}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / Offloads State Configuration / Setting Offloads State}
To configure the offloads, the following layout structure and
definitions are used:
\begin{lstlisting}
le64 offloads;
#define VIRTIO_NET_F_GUEST_CSUM 1
#define VIRTIO_NET_F_GUEST_TSO4 7
#define VIRTIO_NET_F_GUEST_TSO6 8
#define VIRTIO_NET_F_GUEST_ECN 9
#define VIRTIO_NET_F_GUEST_UFO 10
#define VIRTIO_NET_CTRL_GUEST_OFFLOADS 5
#define VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET 0
\end{lstlisting}
The class VIRTIO_NET_CTRL_GUEST_OFFLOADS has one command:
VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET applies the new offloads configuration.
The le64 value passed as command-specific-data is a bitmask: bits that
are set define offloads to be enabled, and bits that are cleared define
offloads to be disabled.
There is a corresponding device feature for each offload. Upon feature
negotiation the corresponding offload is enabled, to preserve backward
compatibility.
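As a non-normative sketch, composing and serializing the le64 offloads bitmask could look like the following; the helper name is illustrative, and the bit numbers reuse the feature bit numbers listed above.

```c
#include <assert.h>
#include <stdint.h>

/* Serialize the offloads bitmask as the le64 command-specific-data
 * for VIRTIO_NET_CTRL_GUEST_OFFLOADS_SET. */
static void put_le64(uint8_t *p, uint64_t v)
{
    for (int i = 0; i < 8; i++)
        p[i] = (uint8_t)(v >> (8 * i));
}
```

For example, enabling only checksum and TSO4 offloads would set bits 1 (VIRTIO_NET_F_GUEST_CSUM) and 7 (VIRTIO_NET_F_GUEST_TSO4), leaving the rest cleared (disabled).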
\drivernormative{\subparagraph}{Setting Offloads State}{Device Types / Network Device / Device Operation / Control Virtqueue / Offloads State Configuration / Setting Offloads State}
A driver MUST NOT enable an offload for which the appropriate feature
has not been negotiated.
\subparagraph{Legacy Interface: Setting Offloads State}\label{sec:Device Types / Network Device / Device Operation / Control Virtqueue / Offloads State Configuration / Setting Offloads State / Legacy Interface: Setting Offloads State}
When using the legacy interface, transitional devices and drivers
MUST format \field{offloads}
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
\subsubsection{Legacy Interface: Framing Requirements}\label{sec:Device
Types / Network Device / Legacy Interface: Framing Requirements}
When using legacy interfaces, transitional drivers which have not
negotiated VIRTIO_F_ANY_LAYOUT MUST use a single descriptor for the
struct virtio_net_hdr on both transmit and receive, with the
network data in the following descriptors.
Additionally, when using the control virtqueue (see \ref{sec:Device
Types / Network Device / Device Operation / Control Virtqueue})
, transitional drivers which have not
negotiated VIRTIO_F_ANY_LAYOUT MUST:
\begin{itemize}
\item for all commands, use a single 2-byte descriptor including the first two
fields: \field{class} and \field{command}
\item for all commands except VIRTIO_NET_CTRL_MAC_TABLE_SET
use a single descriptor including command-specific-data
with no padding.
\item for the VIRTIO_NET_CTRL_MAC_TABLE_SET command use exactly
two descriptors including command-specific-data with no padding:
the first of these descriptors MUST include the
virtio_net_ctrl_mac table structure for the unicast addresses with no padding,
the second of these descriptors MUST include the
virtio_net_ctrl_mac table structure for the multicast addresses
with no padding.
\item for all commands, use a single 1-byte descriptor for the
\field{ack} field
\end{itemize}
See \ref{sec:Basic
Facilities of a Virtio Device / Virtqueues / Message Framing}.
\section{Block Device}\label{sec:Device Types / Block Device}
The virtio block device is a simple virtual block device (i.e. a
disk). Read and write requests (and other exotic requests) are
placed in one of its queues, and serviced (probably out of order) by the
device except where noted.
\subsection{Device ID}\label{sec:Device Types / Block Device / Device ID}
2
\subsection{Virtqueues}\label{sec:Device Types / Block Device / Virtqueues}
\begin{description}
\item[0] requestq1
\item[\ldots]
\item[N] requestqN
\end{description}
N=1 if VIRTIO_BLK_F_MQ is not negotiated, otherwise N is set by
\field{num_queues}.
\subsection{Feature bits}\label{sec:Device Types / Block Device / Feature bits}
\begin{description}
\item[VIRTIO_BLK_F_SIZE_MAX (1)] Maximum size of any single segment is
in \field{size_max}.
\item[VIRTIO_BLK_F_SEG_MAX (2)] Maximum number of segments in a
request is in \field{seg_max}.
\item[VIRTIO_BLK_F_GEOMETRY (4)] Disk-style geometry specified in
\field{geometry}.
\item[VIRTIO_BLK_F_RO (5)] Device is read-only.
\item[VIRTIO_BLK_F_BLK_SIZE (6)] Block size of disk is in \field{blk_size}.
\item[VIRTIO_BLK_F_FLUSH (9)] Cache flush command support.
\item[VIRTIO_BLK_F_TOPOLOGY (10)] Device exports information on optimal I/O
alignment.
\item[VIRTIO_BLK_F_CONFIG_WCE (11)] Device can toggle its cache between writeback
and writethrough modes.
\item[VIRTIO_BLK_F_MQ (12)] Device supports multiqueue.
\item[VIRTIO_BLK_F_DISCARD (13)] Device can support discard command, maximum
discard sectors size in \field{max_discard_sectors} and maximum discard
segment number in \field{max_discard_seg}.
\item[VIRTIO_BLK_F_WRITE_ZEROES (14)] Device can support write zeroes command,
maximum write zeroes sectors size in \field{max_write_zeroes_sectors} and
maximum write zeroes segment number in \field{max_write_zeroes_seg}.
\end{description}
\subsubsection{Legacy Interface: Feature bits}\label{sec:Device Types / Block Device / Feature bits / Legacy Interface: Feature bits}
\begin{description}
\item[VIRTIO_BLK_F_BARRIER (0)] Device supports request barriers.
\item[VIRTIO_BLK_F_SCSI (7)] Device supports scsi packet commands.
\end{description}
\begin{note}
In the legacy interface, VIRTIO_BLK_F_FLUSH was also
called VIRTIO_BLK_F_WCE.
\end{note}
\subsection{Device configuration layout}\label{sec:Device Types / Block Device / Device configuration layout}
The \field{capacity} of the device (expressed in 512-byte sectors) is always
present. The availability of the other fields depends on various feature
bits, as indicated above.
The field \field{num_queues} only exists if VIRTIO_BLK_F_MQ is set. This field specifies
the number of queues.
The parameters \field{max_discard_sectors} and
\field{discard_sector_alignment} in the configuration space of the
device are expressed in 512-byte units if the
VIRTIO_BLK_F_DISCARD feature bit is negotiated. The \field{max_write_zeroes_sectors}
is expressed in 512-byte units if the VIRTIO_BLK_F_WRITE_ZEROES feature
bit is negotiated.
\begin{lstlisting}
struct virtio_blk_config {
le64 capacity;
le32 size_max;
le32 seg_max;
struct virtio_blk_geometry {
le16 cylinders;
u8 heads;
u8 sectors;
} geometry;
le32 blk_size;
struct virtio_blk_topology {
// # of logical blocks per physical block (log2)
u8 physical_block_exp;
// offset of first aligned logical block
u8 alignment_offset;
// suggested minimum I/O size in blocks
le16 min_io_size;
// optimal (suggested maximum) I/O size in blocks
le32 opt_io_size;
} topology;
u8 writeback;
u8 unused0;
        le16 num_queues;
le32 max_discard_sectors;
le32 max_discard_seg;
le32 discard_sector_alignment;
le32 max_write_zeroes_sectors;
le32 max_write_zeroes_seg;
u8 write_zeroes_may_unmap;
u8 unused1[3];
};
\end{lstlisting}
\subsubsection{Legacy Interface: Device configuration layout}\label{sec:Device Types / Block Device / Device configuration layout / Legacy Interface: Device configuration layout}
When using the legacy interface, transitional devices and drivers
MUST format the fields in struct virtio_blk_config
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
\subsection{Device Initialization}\label{sec:Device Types / Block Device / Device Initialization}
\begin{enumerate}
\item The device size can be read from \field{capacity}.
\item If the VIRTIO_BLK_F_BLK_SIZE feature is negotiated,
\field{blk_size} can be read to determine the optimal sector size
for the driver to use. This does not affect the units used in
the protocol (always 512 bytes), but awareness of the correct
value can affect performance.
\item If the VIRTIO_BLK_F_RO feature is set by the device, any write
requests will fail.
\item If the VIRTIO_BLK_F_TOPOLOGY feature is negotiated, the fields in the
\field{topology} struct can be read to determine the physical block size and optimal
I/O lengths for the driver to use. This also does not affect the units
in the protocol, only performance.
\item If the VIRTIO_BLK_F_CONFIG_WCE feature is negotiated, the cache
mode can be read or set through the \field{writeback} field. 0 corresponds
to a writethrough cache, 1 to a writeback cache\footnote{Consistent with
\ref{devicenormative:Device Types / Block Device / Device Operation},
a writethrough cache can be defined broadly as a cache that commits
writes to persistent device backend storage before reporting their
completion. For example, a battery-backed writeback cache actually
counts as writethrough according to this definition.}. The cache mode
after reset can be either writeback or writethrough. The actual
mode can be determined by reading \field{writeback} after feature
negotiation.
\item If the VIRTIO_BLK_F_DISCARD feature is negotiated,
\field{max_discard_sectors} and \field{max_discard_seg} can be read
to determine the maximum discard sectors and maximum number of discard
segments for the block driver to use. \field{discard_sector_alignment}
can be used by the OS when splitting a request based on alignment.
\item If the VIRTIO_BLK_F_WRITE_ZEROES feature is negotiated,
\field{max_write_zeroes_sectors} and \field{max_write_zeroes_seg} can
be read to determine the maximum write zeroes sectors and maximum
number of write zeroes segments for the block driver to use.
\item If the VIRTIO_BLK_F_MQ feature is negotiated, \field{num_queues} field
can be read to determine the number of queues.
\end{enumerate}
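As a non-normative illustration of steps 2 and 4 above, a driver might derive byte-size I/O hints from the configuration fields as follows. The struct and function names are illustrative; this assumes VIRTIO_BLK_F_BLK_SIZE and VIRTIO_BLK_F_TOPOLOGY were negotiated, and none of it changes the protocol's 512-byte units.

```c
#include <assert.h>
#include <stdint.h>

/* Byte-size hints derived from virtio_blk_config fields. */
struct blk_hints {
    uint32_t logical_block_bytes;   /* blk_size */
    uint64_t physical_block_bytes;  /* blk_size << physical_block_exp */
    uint64_t min_io_bytes;          /* min_io_size logical blocks */
    uint64_t opt_io_bytes;          /* opt_io_size logical blocks */
};

static struct blk_hints derive_hints(uint32_t blk_size,
                                     uint8_t physical_block_exp,
                                     uint16_t min_io_size,
                                     uint32_t opt_io_size)
{
    struct blk_hints h;
    h.logical_block_bytes  = blk_size;
    /* physical_block_exp is log2 of logical blocks per physical block */
    h.physical_block_bytes = (uint64_t)blk_size << physical_block_exp;
    h.min_io_bytes         = (uint64_t)min_io_size * blk_size;
    h.opt_io_bytes         = (uint64_t)opt_io_size * blk_size;
    return h;
}
```

For instance, with a 512-byte logical block and \field{physical_block_exp} of 3, the physical block is 4096 bytes, which is the alignment a driver would aim for.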
\drivernormative{\subsubsection}{Device Initialization}{Device Types / Block Device / Device Initialization}
Drivers SHOULD NOT negotiate VIRTIO_BLK_F_FLUSH if they are incapable of
sending VIRTIO_BLK_T_FLUSH commands.
If neither VIRTIO_BLK_F_CONFIG_WCE nor VIRTIO_BLK_F_FLUSH are
negotiated, the driver MAY deduce the presence of a writethrough cache.
If VIRTIO_BLK_F_CONFIG_WCE was not negotiated but VIRTIO_BLK_F_FLUSH was,
the driver SHOULD assume presence of a writeback cache.
The driver MUST NOT read \field{writeback} before setting
the FEATURES_OK \field{device status} bit.
\devicenormative{\subsubsection}{Device Initialization}{Device Types / Block Device / Device Initialization}
Devices SHOULD always offer VIRTIO_BLK_F_FLUSH, and MUST offer it
if they offer VIRTIO_BLK_F_CONFIG_WCE.
If VIRTIO_BLK_F_CONFIG_WCE is negotiated but VIRTIO_BLK_F_FLUSH
is not, the device MUST initialize \field{writeback} to 0.
The device MUST initialize padding bytes \field{unused0} and
\field{unused1} to 0.
\subsubsection{Legacy Interface: Device Initialization}\label{sec:Device Types / Block Device / Device Initialization / Legacy Interface: Device Initialization}
Because legacy devices do not have FEATURES_OK, transitional devices
MUST implement slightly different behavior around feature negotiation
when used through the legacy interface. In particular, when using the
legacy interface:
\begin{itemize}
\item the driver MAY read or write \field{writeback} before setting
the DRIVER or DRIVER_OK \field{device status} bit
\item the device MUST NOT modify the cache mode (and \field{writeback})
as a result of a driver setting a status bit, unless
the DRIVER_OK bit is being set and the driver has not set the
VIRTIO_BLK_F_CONFIG_WCE driver feature bit.
\item the device MUST NOT modify the cache mode (and \field{writeback})
as a result of a driver modifying the driver feature bits, for example
if the driver sets the VIRTIO_BLK_F_CONFIG_WCE driver feature bit but
does not set the VIRTIO_BLK_F_FLUSH bit.
\end{itemize}
\subsection{Device Operation}\label{sec:Device Types / Block Device / Device Operation}
The driver queues requests to the virtqueues, and they are used by
the device (not necessarily in order). Each request is of the form:
\begin{lstlisting}
struct virtio_blk_req {
le32 type;
le32 reserved;
le64 sector;
u8 data[];
u8 status;
};
\end{lstlisting}
The type of the request is either a read (VIRTIO_BLK_T_IN), a write
(VIRTIO_BLK_T_OUT), a discard (VIRTIO_BLK_T_DISCARD), a write zeroes
(VIRTIO_BLK_T_WRITE_ZEROES) or a flush (VIRTIO_BLK_T_FLUSH).
\begin{lstlisting}
#define VIRTIO_BLK_T_IN 0
#define VIRTIO_BLK_T_OUT 1
#define VIRTIO_BLK_T_FLUSH 4
#define VIRTIO_BLK_T_DISCARD 11
#define VIRTIO_BLK_T_WRITE_ZEROES 13
\end{lstlisting}
The \field{sector} number indicates the offset (multiplied by 512) where
the read or write is to occur. This field is unused and set to 0 for
commands other than read or write.
VIRTIO_BLK_T_IN requests populate \field{data} with the contents of sectors
read from the block device (in multiples of 512 bytes). VIRTIO_BLK_T_OUT
requests write the contents of \field{data} to the block device (in multiples
of 512 bytes).
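As an informative sketch (not part of the normative interface), the 512-byte sector arithmetic above can be expressed in C. The struct blk_req_hdr type and the blk_req_init helper are hypothetical host-endian stand-ins; a real driver lays these fields out little-endian across virtqueue descriptors.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical in-memory mirror of the request header fields; in a real
 * driver these occupy virtqueue descriptors and are little-endian. */
struct blk_req_hdr {
    uint32_t type;
    uint32_t reserved;
    uint64_t sector;    /* offset in 512-byte sectors */
};

#define VIRTIO_BLK_T_IN    0
#define VIRTIO_BLK_T_OUT   1
#define VIRTIO_BLK_T_FLUSH 4

/* Build a request header from a byte offset and length.
 * Returns 0 on success, -1 if either is not a multiple of 512. */
static int blk_req_init(struct blk_req_hdr *h, uint32_t type,
                        uint64_t byte_off, uint64_t len)
{
    memset(h, 0, sizeof(*h));
    if (type == VIRTIO_BLK_T_FLUSH) {
        h->type = type;          /* sector stays 0 for a flush */
        return 0;
    }
    if (byte_off % 512 != 0 || len % 512 != 0 || len == 0)
        return -1;
    h->type = type;
    h->sector = byte_off / 512;  /* spec: offset = sector * 512 */
    return 0;
}
```

The alignment check mirrors the driver requirement below that \field{data} be a multiple of 512 bytes for reads and writes.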
The \field{data} used for discard or write zeroes commands consists of one or
more segments. The maximum number of segments is \field{max_discard_seg} for
discard commands and \field{max_write_zeroes_seg} for write zeroes commands.
Each segment is of the form:
\begin{lstlisting}
struct virtio_blk_discard_write_zeroes {
le64 sector;
le32 num_sectors;
struct {
le32 unmap:1;
le32 reserved:31;
} flags;
};
\end{lstlisting}
\field{sector} indicates the starting offset (in 512-byte units) of the
segment, while \field{num_sectors} indicates the number of sectors in each
discarded range. \field{unmap} is only used in write zeroes commands and allows
the device to discard the specified range, provided that following reads return
zeroes.
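An informative sketch of building one such segment, with the bitfield packed as a plain 32-bit flags word; dwz_seg and dwz_make are hypothetical names, and on the wire the fields are little-endian.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flat mirror of struct virtio_blk_discard_write_zeroes:
 * the flags bitfield is packed as a plain 32-bit word, unmap at bit 0. */
struct dwz_seg {
    uint64_t sector;
    uint32_t num_sectors;
    uint32_t flags;
};

#define DWZ_FLAG_UNMAP (1u << 0)

/* Build one segment. For discard commands the unmap flag MUST be zero,
 * so callers pass a nonzero unmap only for write zeroes commands. */
static struct dwz_seg dwz_make(uint64_t sector, uint32_t num_sectors, int unmap)
{
    struct dwz_seg s = { sector, num_sectors, 0 };
    if (unmap)
        s.flags |= DWZ_FLAG_UNMAP;
    return s;
}
```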
The final \field{status} byte is written by the device: either
VIRTIO_BLK_S_OK for success, VIRTIO_BLK_S_IOERR for device or driver
error or VIRTIO_BLK_S_UNSUPP for a request unsupported by the device:
\begin{lstlisting}
#define VIRTIO_BLK_S_OK 0
#define VIRTIO_BLK_S_IOERR 1
#define VIRTIO_BLK_S_UNSUPP 2
\end{lstlisting}
The status of individual segments is indeterminate when a discard or write
zeroes command produces VIRTIO_BLK_S_IOERR. A segment may have completed
successfully, failed, or not been processed by the device.
\drivernormative{\subsubsection}{Device Operation}{Device Types / Block Device / Device Operation}
A driver MUST NOT submit a request which would cause a read or write
beyond \field{capacity}.
A driver SHOULD accept the VIRTIO_BLK_F_RO feature if offered.
A driver MUST set \field{sector} to 0 for a VIRTIO_BLK_T_FLUSH request.
A driver SHOULD NOT include any data in a VIRTIO_BLK_T_FLUSH request.
The length of \field{data} MUST be a multiple of 512 bytes for VIRTIO_BLK_T_IN
and VIRTIO_BLK_T_OUT requests.
The length of \field{data} MUST be a multiple of the size of struct
virtio_blk_discard_write_zeroes for VIRTIO_BLK_T_DISCARD and
VIRTIO_BLK_T_WRITE_ZEROES requests.
VIRTIO_BLK_T_DISCARD requests MUST NOT contain more than
\field{max_discard_seg} struct virtio_blk_discard_write_zeroes segments in
\field{data}.
VIRTIO_BLK_T_WRITE_ZEROES requests MUST NOT contain more than
\field{max_write_zeroes_seg} struct virtio_blk_discard_write_zeroes segments in
\field{data}.
If the VIRTIO_BLK_F_CONFIG_WCE feature is negotiated, the driver MAY
switch to writethrough or writeback mode by writing respectively 0 and
1 to the \field{writeback} field. After writing a 0 to \field{writeback},
the driver MUST NOT assume that any volatile writes have been committed
to persistent device backend storage.
The \field{unmap} bit MUST be zero for discard commands. The driver
MUST NOT assume anything about the data returned by read requests after
a range of sectors has been discarded.
A driver MUST NOT assume that individual segments in a multi-segment
VIRTIO_BLK_T_DISCARD or VIRTIO_BLK_T_WRITE_ZEROES request completed
successfully, failed, or were processed by the device at all if the request
failed with VIRTIO_BLK_S_IOERR.
\devicenormative{\subsubsection}{Device Operation}{Device Types / Block Device / Device Operation}
A device MUST set the \field{status} byte to VIRTIO_BLK_S_IOERR
for a write request if the VIRTIO_BLK_F_RO feature is offered, and MUST NOT
write any data.
The device MUST set the \field{status} byte to VIRTIO_BLK_S_UNSUPP for
discard and write zeroes commands if any unknown flag is set.
Furthermore, the device MUST set the \field{status} byte to
VIRTIO_BLK_S_UNSUPP for discard commands if the \field{unmap} flag is set.
For discard commands, the device MAY deallocate the specified range of
sectors in the device backend storage.
For write zeroes commands, if the \field{unmap} flag is set, the device MAY
deallocate the specified range of sectors in the device backend storage,
as if the discard command had been sent. After a write zeroes command
is completed, reads of the specified ranges of sectors MUST return
zeroes. This is true independent of whether \field{unmap} was set or clear.
The device SHOULD clear the \field{write_zeroes_may_unmap} field of the
virtio configuration space if and only if a write zeroes request cannot
result in deallocating one or more sectors. The device MAY change the
content of the field during operation of the device; when this happens,
the device SHOULD trigger a configuration change notification.
A write is considered volatile when it is submitted; the contents of
sectors covered by a volatile write are undefined in persistent device
backend storage until the write becomes stable. A write becomes stable
once it is completed and one or more of the following conditions is true:
\begin{enumerate}
\item\label{item:flush1} neither the VIRTIO_BLK_F_CONFIG_WCE nor the
VIRTIO_BLK_F_FLUSH feature was negotiated, but VIRTIO_BLK_F_FLUSH was
offered by the device;
\item\label{item:flush2} the VIRTIO_BLK_F_CONFIG_WCE feature was negotiated and the
\field{writeback} field in configuration space was 0 \textbf{all the time between
the submission of the write and its completion};
\item\label{item:flush3} a VIRTIO_BLK_T_FLUSH request is sent \textbf{after the write is
completed} and is completed itself.
\end{enumerate}
If the device is backed by persistent storage, the device MUST ensure that
stable writes are committed to it, before reporting completion of the write
(cases~\ref{item:flush1} and~\ref{item:flush2}) or the flush
(case~\ref{item:flush3}). Failure to do so can cause data loss
in case of a crash.
If the driver changes \field{writeback} between the submission of the write
and its completion, the write could be either volatile or stable when
its completion is reported; in other words, the exact behavior is undefined.
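The three stability conditions above can be summarized, informatively, as a predicate over the negotiated state; write_is_stable is a hypothetical helper, not something defined by this specification.

```c
#include <assert.h>
#include <stdbool.h>

/* Informative encoding of the three cases under which a completed
 * write is stable (hypothetical helper). */
static bool write_is_stable(bool flush_offered, bool flush_negotiated,
                            bool wce_negotiated,
                            bool writeback_zero_throughout,
                            bool flush_completed_after_write)
{
    /* case 1: neither feature negotiated, but FLUSH was offered */
    if (!wce_negotiated && !flush_negotiated && flush_offered)
        return true;
    /* case 2: CONFIG_WCE negotiated and writeback was 0 the whole time */
    if (wce_negotiated && writeback_zero_throughout)
        return true;
    /* case 3: a flush sent after the write completed has itself completed */
    if (flush_completed_after_write)
        return true;
    return false;
}
```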
% According to the device requirements for device initialization:
% Offer(CONFIG_WCE) => Offer(FLUSH).
%
% After reversing the implication:
% not Offer(FLUSH) => not Offer(CONFIG_WCE).
If VIRTIO_BLK_F_FLUSH was not offered by the
device\footnote{Note that in this case, according to
\ref{devicenormative:Device Types / Block Device / Device Initialization},
the device will not have offered VIRTIO_BLK_F_CONFIG_WCE either.}, the
device MAY also commit writes to persistent device backend storage before
reporting their completion. Unlike case~\ref{item:flush1}, however, this
is not an absolute requirement of the specification.
\begin{note}
An implementation that does not offer VIRTIO_BLK_F_FLUSH and does not commit
completed writes will not be resilient to data loss in case of crashes.
Not offering VIRTIO_BLK_F_FLUSH is an absolute requirement
for implementations that do not wish to be safe against such data losses.
\end{note}
\subsubsection{Legacy Interface: Device Operation}\label{sec:Device Types / Block Device / Device Operation / Legacy Interface: Device Operation}
When using the legacy interface, transitional devices and drivers
MUST format the fields in struct virtio_blk_req
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
When using the legacy interface, transitional drivers
SHOULD ignore the used length values.
\begin{note}
Historically, some devices put the total descriptor length,
or the total length of device-writable buffers there,
even when only the status byte was actually written.
\end{note}
The \field{reserved} field was previously called \field{ioprio}. \field{ioprio}
is a hint about the relative priorities of requests to the device:
higher numbers indicate more important requests.
\begin{lstlisting}
#define VIRTIO_BLK_T_FLUSH_OUT 5
\end{lstlisting}
The command VIRTIO_BLK_T_FLUSH_OUT was a synonym for VIRTIO_BLK_T_FLUSH;
a driver MUST treat it as a VIRTIO_BLK_T_FLUSH command.
\begin{lstlisting}
#define VIRTIO_BLK_T_BARRIER 0x80000000
\end{lstlisting}
If the device has the VIRTIO_BLK_F_BARRIER
feature, the high bit (VIRTIO_BLK_T_BARRIER) indicates that this
request acts as a barrier: all preceding requests SHOULD be
complete before this one, and all following requests SHOULD NOT be
started until this one is complete.
\begin{note} A barrier does not flush
caches in the underlying backend device in the host, and thus does not
serve as a data consistency guarantee. Only a VIRTIO_BLK_T_FLUSH request
does that.
\end{note}
Some older legacy devices did not commit completed writes to persistent
device backend storage when VIRTIO_BLK_F_FLUSH was offered but not
negotiated. To work around this, the driver MAY set the
\field{writeback} field to 0 (if available) or it MAY send an explicit flush
request after every completed write.
If the device has the VIRTIO_BLK_F_SCSI feature, it can also support
scsi packet command requests. Each of these requests is of the form:
\begin{lstlisting}
/* All fields are in guest's native endian. */
struct virtio_scsi_pc_req {
u32 type;
u32 ioprio;
u64 sector;
u8 cmd[];
u8 data[][512];
#define SCSI_SENSE_BUFFERSIZE 96
u8 sense[SCSI_SENSE_BUFFERSIZE];
u32 errors;
u32 data_len;
u32 sense_len;
u32 residual;
u8 status;
};
\end{lstlisting}
A request type can also be a scsi packet command (VIRTIO_BLK_T_SCSI_CMD or
VIRTIO_BLK_T_SCSI_CMD_OUT). The two types are equivalent; the device
does not distinguish between them:
\begin{lstlisting}
#define VIRTIO_BLK_T_SCSI_CMD 2
#define VIRTIO_BLK_T_SCSI_CMD_OUT 3
\end{lstlisting}
The \field{cmd} field is only present for scsi packet command requests,
and indicates the command to perform. This field MUST reside in a
single, separate device-readable buffer; command length can be derived
from the length of this buffer.
Note that these first three (four for scsi packet commands)
fields are always device-readable: \field{data} is either device-readable
or device-writable, depending on the request. The size of the read or
write can be derived from the total size of the request buffers.
\field{sense} is only present for scsi packet command requests,
and indicates the buffer for scsi sense data.
\field{data_len} is only present for scsi packet command
requests; this field is deprecated and SHOULD be ignored by the
driver. Historically, devices copied the data length there.
\field{sense_len} is only present for scsi packet command
requests and indicates the number of bytes actually written to
the \field{sense} buffer.
The \field{residual} field is only present for scsi packet command
requests and indicates the residual size, calculated as the data
length minus the number of bytes actually transferred.
\subsubsection{Legacy Interface: Framing Requirements}\label{sec:Device Types / Block Device / Legacy Interface: Framing Requirements}
When using legacy interfaces, transitional drivers which have not
negotiated VIRTIO_F_ANY_LAYOUT:
\begin{itemize}
\item MUST use a single 8-byte descriptor containing \field{type},
\field{reserved} and \field{sector}, followed by descriptors
for \field{data}, then finally a separate 1-byte descriptor
for \field{status}.
\item For SCSI commands there are additional constraints.
\field{sense} MUST reside in a
single separate device-writable descriptor of size 96 bytes,
and \field{errors}, \field{data_len}, \field{sense_len} and
\field{residual} MUST reside in a single separate
device-writable descriptor.
\end{itemize}
See \ref{sec:Basic Facilities of a Virtio Device / Virtqueues / Message Framing}.
\section{Console Device}\label{sec:Device Types / Console Device}
The virtio console device is a simple device for data input and
output. A device MAY have one or more ports. Each port has a pair
of input and output virtqueues. Moreover, a device has a pair of
control IO virtqueues. The control virtqueues are used to
communicate information between the device and the driver about
ports being opened and closed on either side of the connection,
indication from the device about whether a particular port is a
console port, adding new ports, port hot-plug/unplug, etc., and
indication from the driver about whether a port or a device was
successfully added, port open/close, etc. For data IO, one or
more empty buffers are placed in the receive queue for incoming
data and outgoing characters are placed in the transmit queue.
\subsection{Device ID}\label{sec:Device Types / Console Device / Device ID}
3
\subsection{Virtqueues}\label{sec:Device Types / Console Device / Virtqueues}
\begin{description}
\item[0] receiveq(port0)
\item[1] transmitq(port0)
\item[2] control receiveq
\item[3] control transmitq
\item[4] receiveq(port1)
\item[5] transmitq(port1)
\item[\ldots]
\end{description}
The port 0 receive and transmit queues always exist: other queues
only exist if VIRTIO_CONSOLE_F_MULTIPORT is set.
\subsection{Feature bits}\label{sec:Device Types / Console Device / Feature bits}
\begin{description}
\item[VIRTIO_CONSOLE_F_SIZE (0)] Configuration \field{cols} and \field{rows}
are valid.
\item[VIRTIO_CONSOLE_F_MULTIPORT (1)] Device has support for multiple
ports; \field{max_nr_ports} is valid and control virtqueues will be used.
\item[VIRTIO_CONSOLE_F_EMERG_WRITE (2)] Device has support for emergency write.
The configuration field \field{emerg_wr} is valid.
\end{description}
\subsection{Device configuration layout}\label{sec:Device Types / Console Device / Device configuration layout}
The size of the console is supplied
in the configuration space if the VIRTIO_CONSOLE_F_SIZE feature
is set. Furthermore, if the VIRTIO_CONSOLE_F_MULTIPORT feature
is set, the maximum number of ports supported by the device can
be fetched.
If VIRTIO_CONSOLE_F_EMERG_WRITE is set then the driver can use emergency write
to output a single character without initializing virtio queues, or even
acknowledging the feature.
\begin{lstlisting}
struct virtio_console_config {
le16 cols;
le16 rows;
le32 max_nr_ports;
le32 emerg_wr;
};
\end{lstlisting}
\subsubsection{Legacy Interface: Device configuration layout}\label{sec:Device Types / Console Device / Device configuration layout / Legacy Interface: Device configuration layout}
When using the legacy interface, transitional devices and drivers
MUST format the fields in struct virtio_console_config
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
\subsection{Device Initialization}\label{sec:Device Types / Console Device / Device Initialization}
\begin{enumerate}
\item If the VIRTIO_CONSOLE_F_EMERG_WRITE feature is offered, the
\field{emerg_wr} field of the configuration can be written at any time.
Thus it works for very early boot debugging output as well as
catastrophic OS failures (e.g. virtio ring corruption).
\item If the VIRTIO_CONSOLE_F_SIZE feature is negotiated, the driver
can read the console dimensions from \field{cols} and \field{rows}.
\item If the VIRTIO_CONSOLE_F_MULTIPORT feature is negotiated, the
driver can spawn multiple ports, not all of which are necessarily
attached to a console. Some could be generic ports. In this
case, the control virtqueues are enabled and according to
\field{max_nr_ports}, the appropriate number
of virtqueues are created. A control message indicating the
driver is ready is sent to the device. The device can then send
control messages for adding new ports to the device. After
creating and initializing each port, a
VIRTIO_CONSOLE_PORT_READY control message is sent to the device
for that port so the device can let the driver know of any additional
configuration options set for that port.
\item The receiveq for each port is populated with one or more
receive buffers.
\end{enumerate}
\devicenormative{\subsubsection}{Device Initialization}{Device Types / Console Device / Device Initialization}
The device MUST allow a write to \field{emerg_wr}, even on an
unconfigured device.
The device SHOULD transmit the lower byte written to \field{emerg_wr} to
an appropriate log or output method.
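Informatively, the device-side rule reduces to taking the least-significant byte of the 32-bit write; emerg_wr_char is a hypothetical helper.

```c
#include <assert.h>
#include <stdint.h>

/* Device-side sketch: only the least-significant byte of a 32-bit
 * emerg_wr write reaches the log (hypothetical helper). */
static uint8_t emerg_wr_char(uint32_t value)
{
    return (uint8_t)(value & 0xff);
}
```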
\subsection{Device Operation}\label{sec:Device Types / Console Device / Device Operation}
\begin{enumerate}
\item For output, a buffer containing the characters is placed in
the port's transmitq\footnote{Because this is high importance and low bandwidth, the current
Linux implementation polls for the buffer to become used, rather than
waiting for a used buffer notification, simplifying the implementation
significantly. However, for generic serial ports with the
O_NONBLOCK flag set, the polling limitation is relaxed and the
consumed buffers are freed upon the next write or poll call or
when a port is closed or hot-unplugged.
}.
\item When a buffer is used in the receiveq (signalled by a
used buffer notification), its contents are the input to the port
associated with the virtqueue for which the notification was received.
\item If the driver negotiated the VIRTIO_CONSOLE_F_SIZE feature, a
configuration change notification indicates that the updated size can
be read from the configuration fields. This size applies to port 0 only.
\item If the driver negotiated the VIRTIO_CONSOLE_F_MULTIPORT
feature, active ports are announced by the device using the
VIRTIO_CONSOLE_DEVICE_ADD control message. The same message is
used for port hot-plug as well.
\end{enumerate}
\drivernormative{\subsubsection}{Device Operation}{Device Types / Console Device / Device Operation}
The driver MUST NOT put a device-readable buffer in a receiveq. The driver
MUST NOT put a device-writable buffer in a transmitq.
\subsubsection{Multiport Device Operation}\label{sec:Device Types / Console Device / Device Operation / Multiport Device Operation}
If the driver negotiated the VIRTIO_CONSOLE_F_MULTIPORT feature, the two
control queues are used to manipulate the different console ports: the
control receiveq for messages from the device to the driver, and the
control transmitq for driver-to-device messages. The layout of the
control messages is:
\begin{lstlisting}
struct virtio_console_control {
le32 id; /* Port number */
le16 event; /* The kind of control event */
le16 value; /* Extra information for the event */
};
\end{lstlisting}
The values for \field{event} are:
\begin{description}
\item [VIRTIO_CONSOLE_DEVICE_READY (0)] Sent by the driver at initialization
to indicate that it is ready to receive control messages. A \field{value} of
1 indicates success, and 0 indicates failure. The port number \field{id} is unused.
\item [VIRTIO_CONSOLE_DEVICE_ADD (1)] Sent by the device to create a new
port. \field{value} is unused.
\item [VIRTIO_CONSOLE_DEVICE_REMOVE (2)] Sent by the device to remove an
existing port. \field{value} is unused.
\item [VIRTIO_CONSOLE_PORT_READY (3)] Sent by the driver in response
to the device's VIRTIO_CONSOLE_DEVICE_ADD message, to indicate that
the port is ready to be used. A \field{value} of 1 indicates success, and 0
indicates failure.
\item [VIRTIO_CONSOLE_CONSOLE_PORT (4)] Sent by the device to nominate
a port as a console port. There MAY be more than one console port.
\item [VIRTIO_CONSOLE_RESIZE (5)] Sent by the device to indicate
a console size change. \field{value} is unused. The control message is followed by the number of columns and rows:
\begin{lstlisting}
struct virtio_console_resize {
le16 cols;
le16 rows;
};
\end{lstlisting}
\item [VIRTIO_CONSOLE_PORT_OPEN (6)] This message is sent by both the
device and the driver. \field{value} indicates the state: 0 (port
closed) or 1 (port open). This allows for ports to be used directly
by guest and host processes to communicate in an application-defined
manner.
\item [VIRTIO_CONSOLE_PORT_NAME (7)] Sent by the device to give a tag
to the port. This control command is immediately
followed by the UTF-8 name of the port for identification
within the guest (without a NUL terminator).
\end{description}
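As an informative example, the driver's mandatory reply to VIRTIO_CONSOLE_CONSOLE_PORT can be built as follows; console_control is a host-endian mirror of struct virtio_console_control, and console_port_ack is a hypothetical helper (the real fields are little-endian on the wire).

```c
#include <assert.h>
#include <stdint.h>

/* Host-endian mirror of struct virtio_console_control. */
struct console_control {
    uint32_t id;     /* port number */
    uint16_t event;  /* kind of control event */
    uint16_t value;  /* extra information for the event */
};

#define VIRTIO_CONSOLE_DEVICE_READY 0
#define VIRTIO_CONSOLE_PORT_READY   3
#define VIRTIO_CONSOLE_PORT_OPEN    6

/* Driver reply required on receiving VIRTIO_CONSOLE_CONSOLE_PORT:
 * a PORT_OPEN message with value set to 1. */
static struct console_control console_port_ack(uint32_t port)
{
    struct console_control c = { port, VIRTIO_CONSOLE_PORT_OPEN, 1 };
    return c;
}
```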
\devicenormative{\paragraph}{Multiport Device Operation}{Device Types / Console Device / Device Operation / Multiport Device Operation}
The device MUST NOT specify a port which already exists in a
VIRTIO_CONSOLE_DEVICE_ADD message, nor a port which is equal to or
greater than \field{max_nr_ports}.
The device MUST NOT specify a port in VIRTIO_CONSOLE_DEVICE_REMOVE
which has not been created with a previous VIRTIO_CONSOLE_DEVICE_ADD.
\drivernormative{\paragraph}{Multiport Device Operation}{Device Types / Console Device / Device Operation / Multiport Device Operation}
The driver MUST send a VIRTIO_CONSOLE_DEVICE_READY message if
VIRTIO_CONSOLE_F_MULTIPORT is negotiated.
Upon receipt of a VIRTIO_CONSOLE_CONSOLE_PORT message, the driver
SHOULD treat the port in a manner suitable for text console access
and MUST respond with a VIRTIO_CONSOLE_PORT_OPEN message, which MUST
have \field{value} set to 1.
\subsubsection{Legacy Interface: Device Operation}\label{sec:Device Types / Console Device / Device Operation / Legacy Interface: Device Operation}
When using the legacy interface, transitional devices and drivers
MUST format the fields in struct virtio_console_control
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
When using the legacy interface, the driver SHOULD ignore the
used length values for the transmit queues
and the control transmitq.
\begin{note}
Historically, some devices put the total descriptor length there,
even though no data was actually written.
\end{note}
\subsubsection{Legacy Interface: Framing Requirements}\label{sec:Device
Types / Console Device / Legacy Interface: Framing Requirements}
When using legacy interfaces, transitional drivers which have not
negotiated VIRTIO_F_ANY_LAYOUT MUST use only a single
descriptor for all buffers in the control receiveq and control transmitq.
\section{Entropy Device}\label{sec:Device Types / Entropy Device}
The virtio entropy device supplies high-quality randomness for
guest use.
\subsection{Device ID}\label{sec:Device Types / Entropy Device / Device ID}
4
\subsection{Virtqueues}\label{sec:Device Types / Entropy Device / Virtqueues}
\begin{description}
\item[0] requestq
\end{description}
\subsection{Feature bits}\label{sec:Device Types / Entropy Device / Feature bits}
None currently defined.
\subsection{Device configuration layout}\label{sec:Device Types / Entropy Device / Device configuration layout}
None currently defined.
\subsection{Device Initialization}\label{sec:Device Types / Entropy Device / Device Initialization}
\begin{enumerate}
\item The virtqueue is initialized
\end{enumerate}
\subsection{Device Operation}\label{sec:Device Types / Entropy Device / Device Operation}
When the driver requires random bytes, it places the descriptors
of one or more buffers in the queue. The device fills these buffers
with random data.
\drivernormative{\subsubsection}{Device Operation}{Device Types / Entropy Device / Device Operation}
The driver MUST NOT place device-readable buffers into the queue.
The driver MUST examine the length written by the device to determine
how many random bytes were received.
\devicenormative{\subsubsection}{Device Operation}{Device Types / Entropy Device / Device Operation}
The device MUST place one or more random bytes into the buffer, but it
MAY use less than the entire buffer length.
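Since the device may fill less than the entire buffer, a driver has to trust only the used length and keep requesting until it has enough bytes. The following informative sketch stubs the device with a callback; gather_random, fill_fn and stub_fill are hypothetical names.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Callback standing in for one virtqueue round trip: the return value
 * plays the role of the used length. */
typedef size_t (*fill_fn)(uint8_t *buf, size_t len);

/* Accumulate `need` random bytes, counting only what the used length
 * reports for each buffer. */
static size_t gather_random(uint8_t *out, size_t need, fill_fn dev_fill)
{
    size_t got = 0;
    while (got < need) {
        size_t n = dev_fill(out + got, need - got);
        if (n == 0 || n > need - got)
            break;              /* defensive: device misbehaved */
        got += n;
    }
    return got;
}

/* Test stub standing in for the device: fills at most 3 bytes per call. */
static size_t stub_fill(uint8_t *buf, size_t len)
{
    size_t n = len < 3 ? len : 3;
    for (size_t i = 0; i < n; i++)
        buf[i] = 0xA5;          /* placeholder, not actual randomness */
    return n;
}
```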
\section{Traditional Memory Balloon Device}\label{sec:Device Types / Memory Balloon Device}
This is the traditional balloon device. The device number 13 is
reserved for a new memory balloon interface, with different
semantics, which is expected in a future version of the standard.
The traditional virtio memory balloon device is a primitive device for
managing guest memory: the device asks for a certain amount of
memory, and the driver supplies it (or withdraws it, if the device
has more than it asks for). This allows the guest to adapt to
changes in allowance of underlying physical memory. If the
feature is negotiated, the device can also be used to communicate
guest memory statistics to the host.
\subsection{Device ID}\label{sec:Device Types / Memory Balloon Device / Device ID}
5
\subsection{Virtqueues}\label{sec:Device Types / Memory Balloon Device / Virtqueues}
\begin{description}
\item[0] inflateq
\item[1] deflateq
\item[2] statsq.
\end{description}
Virtqueue 2 only exists if VIRTIO_BALLOON_F_STATS_VQ is set.
\subsection{Feature bits}\label{sec:Device Types / Memory Balloon Device / Feature bits}
\begin{description}
\item[VIRTIO_BALLOON_F_MUST_TELL_HOST (0)] Host has to be told before
pages from the balloon are used.
\item[VIRTIO_BALLOON_F_STATS_VQ (1)] A virtqueue for reporting guest
memory statistics is present.
\item[VIRTIO_BALLOON_F_DEFLATE_ON_OOM (2) ] Deflate balloon on
guest out of memory condition.
\end{description}
\drivernormative{\subsubsection}{Feature bits}{Device Types / Memory Balloon Device / Feature bits}
The driver SHOULD accept the VIRTIO_BALLOON_F_MUST_TELL_HOST
feature if offered by the device.
\devicenormative{\subsubsection}{Feature bits}{Device Types / Memory Balloon Device / Feature bits}
If the device offers the VIRTIO_BALLOON_F_MUST_TELL_HOST feature
bit, and if the driver did not accept this feature bit, the
device MAY signal failure by failing to set FEATURES_OK
\field{device status} bit when the driver writes it.
\subparagraph{Legacy Interface: Feature bits}\label{sec:Device
Types / Memory Balloon Device / Feature bits / Legacy Interface:
Feature bits}
As the legacy interface does not have a way to gracefully report feature
negotiation failure, when using the legacy interface,
transitional devices MUST support guests which do not negotiate
VIRTIO_BALLOON_F_MUST_TELL_HOST feature, and SHOULD
allow the guest to use memory before notifying the host if
VIRTIO_BALLOON_F_MUST_TELL_HOST is not negotiated.
\subsection{Device configuration layout}\label{sec:Device Types / Memory Balloon Device / Device configuration layout}
Both fields of this configuration
are always available.
\begin{lstlisting}
struct virtio_balloon_config {
le32 num_pages;
le32 actual;
};
\end{lstlisting}
\subparagraph{Legacy Interface: Device configuration layout}\label{sec:Device Types / Memory Balloon Device / Device
configuration layout / Legacy Interface: Device configuration layout}
When using the legacy interface, transitional devices and drivers
MUST format the fields in struct virtio_balloon_config
according to the little-endian format.
\begin{note}
This is unlike the usual convention that legacy device fields are guest endian.
\end{note}
\subsection{Device Initialization}\label{sec:Device Types / Memory Balloon Device / Device Initialization}
The device initialization process is outlined below:
\begin{enumerate}
\item The inflate and deflate virtqueues are identified.
\item If the VIRTIO_BALLOON_F_STATS_VQ feature bit is negotiated:
\begin{enumerate}
\item Identify the stats virtqueue.
\item Add one empty buffer to the stats virtqueue.
\item DRIVER_OK is set: device operation begins.
\item Notify the device about the stats virtqueue buffer.
\end{enumerate}
\end{enumerate}
\subsection{Device Operation}\label{sec:Device Types / Memory Balloon Device / Device Operation}
The device is driven either by the receipt of a configuration
change notification, or by changing guest memory needs, such as
performing memory compaction or responding to out of memory
conditions.
\begin{enumerate}
\item \field{num_pages} configuration field is examined. If this is
greater than the \field{actual} number of pages, the balloon wants
more memory from the guest. If it is less than \field{actual},
the balloon doesn't need it all.
\item To supply memory to the balloon (a.k.a. inflate):
\begin{enumerate}
\item The driver constructs an array of addresses of unused memory
pages. These addresses are divided by 4096\footnote{This is historical, and independent of the guest page size.
} and the descriptor
describing the resulting 32-bit array is added to the inflateq.
\end{enumerate}
\item To remove memory from the balloon (a.k.a. deflate):
\begin{enumerate}
\item The driver constructs an array of addresses of memory pages
it has previously given to the balloon, as described above.
This descriptor is added to the deflateq.
\item If the VIRTIO_BALLOON_F_MUST_TELL_HOST feature is negotiated, the
guest informs the device of pages before it uses them.
\item Otherwise, the guest is allowed to re-use pages previously
given to the balloon before the device has acknowledged their
withdrawal\footnote{In this case, deflation advice is merely a courtesy.
}.
\end{enumerate}
\item In either case, the device acknowledges inflate and deflate
requests by using the descriptor.
\item Once the device has acknowledged the inflation or
deflation, the driver updates \field{actual} to reflect the new number of pages in the balloon.
\end{enumerate}
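Informatively, the bookkeeping above amounts to comparing \field{num_pages} with \field{actual} and converting page addresses to the historical 4096-byte unit; balloon_delta and balloon_pfn are hypothetical helpers.

```c
#include <assert.h>
#include <stdint.h>

/* num_pages vs. actual decides the direction: >0 means inflate
 * (supply pages), <0 means deflate (withdraw pages). */
static int64_t balloon_delta(uint32_t num_pages, uint32_t actual)
{
    return (int64_t)num_pages - (int64_t)actual;
}

/* Page addresses are divided by 4096 to form the 32-bit array
 * (a historical unit, independent of the guest page size). */
static uint32_t balloon_pfn(uint64_t page_addr)
{
    return (uint32_t)(page_addr / 4096);
}
```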
\drivernormative{\subsubsection}{Device Operation}{Device Types / Memory Balloon Device / Device Operation}
The driver SHOULD supply pages to the balloon when \field{num_pages} is
greater than the actual number of pages in the balloon.
The driver MAY use pages from the balloon when \field{num_pages} is
less than the actual number of pages in the balloon.
The driver MAY supply pages to the balloon when \field{num_pages} is
greater than or equal to the actual number of pages in the balloon.
If VIRTIO_BALLOON_F_DEFLATE_ON_OOM has not been negotiated, the
driver MUST NOT use pages from the balloon when \field{num_pages}
is less than or equal to the actual number of pages in the
balloon.
If VIRTIO_BALLOON_F_DEFLATE_ON_OOM has been negotiated, the
driver MAY use pages from the balloon when \field{num_pages}
is less than or equal to the actual number of pages in the
balloon if this is required for system stability
(e.g. if memory is required by applications running within
the guest).
The driver MUST use the deflateq to inform the device of pages that it
wants to use from the balloon.
If the VIRTIO_BALLOON_F_MUST_TELL_HOST feature is negotiated, the
driver MUST NOT use pages from the balloon until
the device has acknowledged the deflate request.
Otherwise, if the VIRTIO_BALLOON_F_MUST_TELL_HOST feature is not
negotiated, the driver MAY begin to re-use pages previously
given to the balloon before the device has acknowledged the
deflate request.
In any case, the driver MUST NOT use pages from the balloon
after adding the pages to the balloon, but before the device has
acknowledged the inflate request.
The driver MUST NOT request deflation of pages in
the balloon before the device has acknowledged the inflate
request.
The driver MUST update \field{actual} after changing the number
of pages in the balloon.
The driver MAY update \field{actual} once after multiple
inflate and deflate operations.
\devicenormative{\subsubsection}{Device Operation}{Device Types / Memory Balloon Device / Device Operation}
The device MAY modify the contents of a page in the balloon
after detecting its physical number in an inflate request
and before acknowledging the inflate request by using the inflateq
descriptor.
If the VIRTIO_BALLOON_F_MUST_TELL_HOST feature is negotiated, the
device MAY modify the contents of a page in the balloon
after detecting its physical number in an inflate request
and before detecting its physical number in a deflate request
and acknowledging the deflate request.
\paragraph{Legacy Interface: Device Operation}\label{sec:Device
Types / Memory Balloon Device / Device Operation / Legacy
Interface: Device Operation}
When using the legacy interface, the driver SHOULD ignore the
used length values.
\begin{note}
Historically, some devices put the total descriptor length there,
even though no data was actually written.
\end{note}
When using the legacy interface, the driver MUST write out all
4 bytes each time it updates the \field{actual} value in the
configuration space, using a single atomic operation.
When using the legacy interface, the device SHOULD NOT use the
\field{actual} value written by the driver in the configuration
space, until the last, most-significant byte of the value has been
written.
\begin{note}
Historically, devices used the \field{actual} value, even though
when using Virtio Over PCI Bus the device-specific configuration
space was not guaranteed to be atomic. Using intermediate
values during update by driver is best avoided, except for
debugging.
Historically, drivers using Virtio Over PCI Bus wrote the
\field{actual} value by using multiple single-byte writes in
order, from the least-significant to the most-significant value.
\end{note}
\subsubsection{Memory Statistics}\label{sec:Device Types / Memory Balloon Device / Device Operation / Memory Statistics}
The stats virtqueue is atypical because communication is driven
by the device (not the driver). The channel becomes active at
driver initialization time when the driver adds an empty buffer
and notifies the device. A request for memory statistics proceeds
as follows:
\begin{enumerate}
\item The device uses the buffer and sends a used buffer notification.
\item The driver pops the used buffer and discards it.
\item The driver collects memory statistics and writes them into a
new buffer.
\item The driver adds the buffer to the virtqueue and notifies the
device.
\item The device pops the buffer (retaining it to initiate a
subsequent request) and consumes the statistics.
\end{enumerate}
Within the buffer, statistics are an array of 10-byte entries.
Each statistic consists of a 16 bit
tag and a 64 bit value. All statistics are optional and the
driver chooses which ones to supply. To guarantee backwards
compatibility, unsupported statistics are simply ignored by the
device.
\begin{lstlisting}
struct virtio_balloon_stat {
#define VIRTIO_BALLOON_S_SWAP_IN 0
#define VIRTIO_BALLOON_S_SWAP_OUT 1
#define VIRTIO_BALLOON_S_MAJFLT 2
#define VIRTIO_BALLOON_S_MINFLT 3
#define VIRTIO_BALLOON_S_MEMFREE 4
#define VIRTIO_BALLOON_S_MEMTOT 5
#define VIRTIO_BALLOON_S_AVAIL 6
#define VIRTIO_BALLOON_S_CACHES 7
#define VIRTIO_BALLOON_S_HTLB_PGALLOC 8
#define VIRTIO_BALLOON_S_HTLB_PGFAIL 9
le16 tag;
le64 val;
} __attribute__((packed));
\end{lstlisting}
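Each entry is therefore exactly 10 bytes, so a buffer carrying $n$
statistics occupies $10n$ bytes. A quick layout check in C (a sketch:
uint16_t and uint64_t stand in for the little-endian le16/le64 wire
types, and the byte-order conversion is omitted):

\begin{lstlisting}
#include <stdint.h>

/* Host-side mirror of the wire layout; the packed attribute
 * removes padding so the entry is 2 + 8 = 10 bytes, matching
 * the 10-byte stride described above. */
struct virtio_balloon_stat {
    uint16_t tag;  /* le16 on the wire */
    uint64_t val;  /* le64 on the wire */
} __attribute__((packed));
\end{lstlisting}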
\drivernormative{\paragraph}{Memory Statistics}{Device Types / Memory Balloon Device / Device Operation / Memory Statistics}
Normative statements in this section apply if and only if the
VIRTIO_BALLOON_F_STATS_VQ feature has been negotiated.
The driver MUST make at most one buffer available to the device
in the statsq, at all times.
After initializing the device, the driver MUST make an output
buffer available in the statsq.
Upon detecting that the device has used a buffer in the statsq, the
driver MUST make an output buffer available in the statsq.
Before making an output buffer available in the statsq, the
driver MUST initialize it, including one struct
virtio_balloon_stat entry for each statistic that it supports.
The driver MUST use an output buffer size which is a multiple of 10
bytes for all buffers submitted to the statsq.
The driver MAY supply struct virtio_balloon_stat entries in the
output buffer submitted to the statsq in any order, without
regard to \field{tag} values.
The driver MAY supply a subset of all statistics in the output buffer
submitted to the statsq.
The driver MUST supply the same subset of statistics in all buffers
submitted to the statsq.
\devicenormative{\paragraph}{Memory Statistics}{Device Types / Memory Balloon Device / Device Operation / Memory Statistics}
Normative statements in this section apply if and only if the
VIRTIO_BALLOON_F_STATS_VQ feature has been negotiated.
Within an output buffer submitted to the statsq,
the device MUST ignore entries with \field{tag} values that it does not recognize.
Within an output buffer submitted to the statsq,
the device MUST accept struct virtio_balloon_stat entries in any
order without regard to \field{tag} values.
\paragraph{Legacy Interface: Memory Statistics}\label{sec:Device Types / Memory Balloon Device / Device Operation / Memory Statistics / Legacy Interface: Memory Statistics}
When using the legacy interface, transitional devices and drivers
MUST format the fields in struct virtio_balloon_stat
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
When using the legacy interface,
the device SHOULD ignore all values in the first buffer in the
statsq supplied by the driver after device initialization.
\begin{note}
Historically, drivers supplied an uninitialized buffer in the
first buffer.
\end{note}
\subsubsection{Memory Statistics Tags}\label{sec:Device Types / Memory Balloon Device / Device Operation / Memory Statistics Tags}
\begin{description}
\item[VIRTIO_BALLOON_S_SWAP_IN (0)] The amount of memory that has been
swapped in (in bytes).
\item[VIRTIO_BALLOON_S_SWAP_OUT (1)] The amount of memory that has been
swapped out to disk (in bytes).
\item[VIRTIO_BALLOON_S_MAJFLT (2)] The number of major page faults that
have occurred.
\item[VIRTIO_BALLOON_S_MINFLT (3)] The number of minor page faults that
have occurred.
\item[VIRTIO_BALLOON_S_MEMFREE (4)] The amount of memory not being used
for any purpose (in bytes).
\item[VIRTIO_BALLOON_S_MEMTOT (5)] The total amount of memory available
(in bytes).
\item[VIRTIO_BALLOON_S_AVAIL (6)] An estimate of how much memory is available
(in bytes) for starting new applications, without pushing the system to swap.
\item[VIRTIO_BALLOON_S_CACHES (7)] The amount of memory, in bytes, that can be
quickly reclaimed without additional I/O. Typically these pages are used for
caching files from disk.
\item[VIRTIO_BALLOON_S_HTLB_PGALLOC (8)] The number of successful hugetlb page
allocations in the guest.
\item[VIRTIO_BALLOON_S_HTLB_PGFAIL (9)] The number of failed hugetlb page
allocations in the guest.
\end{description}
\section{SCSI Host Device}\label{sec:Device Types / SCSI Host Device}
The virtio SCSI host device groups together one or more virtual
logical units (such as disks), and allows communicating to them
using the SCSI protocol. An instance of the device represents a
SCSI host to which many targets and LUNs are attached.
The virtio SCSI device services two kinds of requests:
\begin{itemize}
\item command requests for a logical unit;
\item task management functions related to a logical unit, target or
command.
\end{itemize}
The device is also able to send out notifications about added and
removed logical units. Together, these capabilities provide a
SCSI transport protocol that uses virtqueues as the transfer
medium. In the transport protocol, the virtio driver acts as the
initiator, while the virtio SCSI host provides one or more
targets that receive and process the requests.
This section relies on definitions from \hyperref[intro:SAM]{SAM}.
\subsection{Device ID}\label{sec:Device Types / SCSI Host Device / Device ID}
8
\subsection{Virtqueues}\label{sec:Device Types / SCSI Host Device / Virtqueues}
\begin{description}
\item[0] controlq
\item[1] eventq
\item[2\ldots n] request queues
\end{description}
\subsection{Feature bits}\label{sec:Device Types / SCSI Host Device / Feature bits}
\begin{description}
\item[VIRTIO_SCSI_F_INOUT (0)] A single request can include both
device-readable and device-writable data buffers.
\item[VIRTIO_SCSI_F_HOTPLUG (1)] The host SHOULD enable reporting of
hot-plug and hot-unplug events for LUNs and targets on the SCSI bus.
The guest SHOULD handle hot-plug and hot-unplug events.
\item[VIRTIO_SCSI_F_CHANGE (2)] The host will report changes to LUN
parameters via a VIRTIO_SCSI_T_PARAM_CHANGE event; the guest
SHOULD handle them.
\item[VIRTIO_SCSI_F_T10_PI (3)] The extended fields for T10 protection
information (DIF/DIX) are included in the SCSI request header.
\end{description}
\subsection{Device configuration layout}\label{sec:Device Types / SCSI Host Device / Device configuration layout}
All fields of this configuration are always available.
\begin{lstlisting}
struct virtio_scsi_config {
le32 num_queues;
le32 seg_max;
le32 max_sectors;
le32 cmd_per_lun;
le32 event_info_size;
le32 sense_size;
le32 cdb_size;
le16 max_channel;
le16 max_target;
le32 max_lun;
};
\end{lstlisting}
\begin{description}
\item[\field{num_queues}] is the total number of request virtqueues exposed by
the device. The driver MAY use only one request queue,
or it can use more to achieve better performance.
\item[\field{seg_max}] is the maximum number of segments that can be in a
command. A bidirectional command can include \field{seg_max} input
segments and \field{seg_max} output segments.
\item[\field{max_sectors}] is a hint to the driver about the maximum transfer
size to use.
\item[\field{cmd_per_lun}] tells the driver the maximum number of
linked commands it can send to one LUN.
\item[\field{event_info_size}] is the maximum size that the device will fill
for buffers that the driver places in the eventq. It is
written by the device depending on the set of negotiated
features.
\item[\field{sense_size}] is the maximum size of the sense data that the
device will write. The default value is written by the device
and MUST be 96, but the driver can modify it. It is
restored to the default when the device is reset.
\item[\field{cdb_size}] is the maximum size of the CDB that the driver will
write. The default value is written by the device and MUST
be 32, but the driver can likewise modify it. It is
restored to the default when the device is reset.
\item[\field{max_channel}, \field{max_target} and \field{max_lun}] can be
used by the driver as hints to constrain scanning the logical units
on the host to channel/target/logical unit numbers that are less than
or equal to the value of the fields. \field{max_channel} SHOULD
be zero. \field{max_target} SHOULD be less than or equal to 255.
\field{max_lun} SHOULD be less than or equal to 16383.
\end{description}
\drivernormative{\subsubsection}{Device configuration layout}{Device Types / SCSI Host Device / Device configuration layout}
The driver MUST NOT write to device configuration fields other than
\field{sense_size} and \field{cdb_size}.
The driver MUST NOT send more than \field{cmd_per_lun} linked commands
to one LUN, and MUST NOT send more than the virtqueue size number of
linked commands to one LUN.
\devicenormative{\subsubsection}{Device configuration layout}{Device Types / SCSI Host Device / Device configuration layout}
On reset, the device MUST set \field{sense_size} to 96 and
\field{cdb_size} to 32.
\subsubsection{Legacy Interface: Device configuration layout}\label{sec:Device Types / SCSI Host Device / Device configuration layout / Legacy Interface: Device configuration layout}
When using the legacy interface, transitional devices and drivers
MUST format the fields in struct virtio_scsi_config
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
\devicenormative{\subsection}{Device Initialization}{Device Types / SCSI Host Device / Device Initialization}
On initialization the driver SHOULD first discover the
device's virtqueues.
If the driver uses the eventq, the driver SHOULD place at least one
buffer in the eventq.
The driver MAY immediately issue requests\footnote{For example, INQUIRY
or REPORT LUNS.} or task management functions\footnote{For example, I_T
RESET.}.
\subsection{Device Operation}\label{sec:Device Types / SCSI Host Device / Device Operation}
Device operation consists of operating request queues, the control
queue and the event queue.
\paragraph{Legacy Interface: Device Operation}\label{sec:Device
Types / SCSI Host Device / Device Operation / Legacy
Interface: Device Operation}
When using the legacy interface, the driver SHOULD ignore the
used length values.
\begin{note}
Historically, devices put the total descriptor length,
or the total length of device-writable buffers there,
even when only part of the buffers were actually written.
\end{note}
\subsubsection{Device Operation: Request Queues}\label{sec:Device Types / SCSI Host Device / Device Operation / Device Operation: Request Queues}
The driver queues requests to an arbitrary request queue, and
they are used by the device on that same queue. It is the
responsibility of the driver to ensure strict request ordering
for commands placed on different queues, because they will be
consumed with no order constraints.
Requests have the following format:
\begin{lstlisting}
struct virtio_scsi_req_cmd {
// Device-readable part
u8 lun[8];
le64 id;
u8 task_attr;
u8 prio;
u8 crn;
u8 cdb[cdb_size];
// The next three fields are only present if VIRTIO_SCSI_F_T10_PI
// is negotiated.
le32 pi_bytesout;
le32 pi_bytesin;
u8 pi_out[pi_bytesout];
u8 dataout[];
// Device-writable part
le32 sense_len;
le32 residual;
le16 status_qualifier;
u8 status;
u8 response;
u8 sense[sense_size];
// The next field is only present if VIRTIO_SCSI_F_T10_PI
// is negotiated
u8 pi_in[pi_bytesin];
u8 datain[];
};
/* command-specific response values */
#define VIRTIO_SCSI_S_OK 0
#define VIRTIO_SCSI_S_OVERRUN 1
#define VIRTIO_SCSI_S_ABORTED 2
#define VIRTIO_SCSI_S_BAD_TARGET 3
#define VIRTIO_SCSI_S_RESET 4
#define VIRTIO_SCSI_S_BUSY 5
#define VIRTIO_SCSI_S_TRANSPORT_FAILURE 6
#define VIRTIO_SCSI_S_TARGET_FAILURE 7
#define VIRTIO_SCSI_S_NEXUS_FAILURE 8
#define VIRTIO_SCSI_S_FAILURE 9
/* task_attr */
#define VIRTIO_SCSI_S_SIMPLE 0
#define VIRTIO_SCSI_S_ORDERED 1
#define VIRTIO_SCSI_S_HEAD 2
#define VIRTIO_SCSI_S_ACA 3
\end{lstlisting}
\field{lun} addresses the REPORT LUNS well-known logical unit, or
a target and logical unit in the virtio-scsi device's SCSI domain.
When used to address the REPORT LUNS logical unit, \field{lun} is 0xC1,
0x01 and six zero bytes. The virtio-scsi device SHOULD implement the
REPORT LUNS well-known logical unit.
When used to address a target and logical unit, the only supported format
for \field{lun} is: first byte set to 1, second byte set to target,
third and fourth byte representing a single level LUN structure, followed
by four zero bytes. With this representation, a virtio-scsi device can
serve up to 256 targets and 16384 LUNs per target. The device MAY also
support addressing a well-known logical unit in the third and fourth byte.
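The addressing rule above can be sketched as follows. The helper name
is hypothetical; the 0x40 flat-space encoding of the single level LUN
structure in the third byte follows \hyperref[intro:SAM]{SAM}:

\begin{lstlisting}
#include <stdint.h>

/* Fill the 8-byte lun field for the given target and logical
 * unit number (0..16383), per the format described above. */
static void virtio_scsi_fill_lun(uint8_t lun[8], uint8_t target,
                                 uint16_t lun_id)
{
    lun[0] = 1;                              /* fixed first byte */
    lun[1] = target;                         /* target number */
    lun[2] = 0x40 | ((lun_id >> 8) & 0x3f);  /* single level LUN, high bits */
    lun[3] = lun_id & 0xff;                  /* single level LUN, low bits */
    lun[4] = lun[5] = lun[6] = lun[7] = 0;   /* trailing zero bytes */
}
\end{lstlisting}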
\field{id} is the command identifier (``tag'').
\field{task_attr} defines the task attribute as in the table above, but
all task attributes MAY be mapped to SIMPLE by the device. Some commands
are defined by SCSI standards as "implicit head of queue"; for such
commands, all task attributes MAY also be mapped to HEAD OF QUEUE.
Drivers and applications SHOULD NOT send a command with the ORDERED
task attribute if the command has an implicit HEAD OF QUEUE attribute,
because whether the ORDERED task attribute is honored is vendor-specific.
\field{crn} may also be provided by clients, but is generally expected
to be 0. The maximum CRN value defined by the protocol is 255, since
CRN is stored in an 8-bit integer.
The CDB is included in \field{cdb} and its size, \field{cdb_size},
is taken from the configuration space.
All of these fields are defined in \hyperref[intro:SAM]{SAM} and are
always device-readable.
\field{pi_bytesout} determines the size of the \field{pi_out} field
in bytes. If it is nonzero, the \field{pi_out} field contains outgoing
protection information for write operations. \field{pi_bytesin} determines
the size of the \field{pi_in} field in the device-writable section, in bytes.
All three fields are only present if VIRTIO_SCSI_F_T10_PI has been negotiated.
The remainder of the device-readable part is the data output buffer,
\field{dataout}.
\field{sense} and subsequent fields are always device-writable. \field{sense_len}
indicates the number of bytes actually written to the sense
buffer.
\field{residual} indicates the residual size,
calculated as ``data_length - number_of_transferred_bytes'', for
read or write operations. For bidirectional commands, the
number_of_transferred_bytes includes both read and written bytes.
A \field{residual} that is less than the size of \field{datain} means that
\field{dataout} was processed entirely. A \field{residual} that
exceeds the size of \field{datain} means that \field{dataout} was
processed partially and \field{datain} was not processed at
all.
If \field{pi_bytesin} is nonzero, the \field{pi_in} field contains
incoming protection information for read operations. \field{pi_in} is
only present if VIRTIO_SCSI_F_T10_PI has been negotiated\footnote{There
is no separate residual size for \field{pi_bytesout} and
\field{pi_bytesin}. It can be computed from the \field{residual} field,
the size of the data integrity information per sector, and the sizes
of \field{pi_out}, \field{pi_in}, \field{dataout} and \field{datain}.}.
The remainder of the device-writable part is the data input buffer,
\field{datain}.
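Under this definition, a driver can recover the number of transferred
bytes from \field{residual}; a minimal sketch with hypothetical names:

\begin{lstlisting}
#include <stdint.h>

/* data_length is the sum of the dataout and datain buffer sizes;
 * per the definition above,
 *   residual = data_length - number_of_transferred_bytes,
 * so the transferred byte count is recovered by subtraction. */
static uint32_t transferred_bytes(uint32_t dataout_len,
                                  uint32_t datain_len,
                                  uint32_t residual)
{
    return dataout_len + datain_len - residual;
}
\end{lstlisting}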
\devicenormative{\paragraph}{Device Operation: Request Queues}{Device Types / SCSI Host Device / Device Operation / Device Operation: Request Queues}
The device MUST write the \field{status} byte as the status code as
defined in \hyperref[intro:SAM]{SAM}.
The device MUST write the \field{response} byte as one of the following:
\begin{description}
\item[VIRTIO_SCSI_S_OK] when the request was completed and the \field{status}
byte is filled with a SCSI status code (not necessarily
``GOOD'').
\item[VIRTIO_SCSI_S_OVERRUN] if the content of the CDB (such as the
allocation length, parameter length or transfer size) requires
more data than is available in the datain and dataout buffers.
\item[VIRTIO_SCSI_S_ABORTED] if the request was cancelled due to an
ABORT TASK or ABORT TASK SET task management function.
\item[VIRTIO_SCSI_S_BAD_TARGET] if the request was never processed
because the target indicated by \field{lun} does not exist.
\item[VIRTIO_SCSI_S_RESET] if the request was cancelled due to a bus
or device reset (including a task management function).
\item[VIRTIO_SCSI_S_TRANSPORT_FAILURE] if the request failed due to a
problem in the connection between the host and the target
(severed link).
\item[VIRTIO_SCSI_S_TARGET_FAILURE] if the target is suffering a
failure and to tell the driver not to retry on other paths.
\item[VIRTIO_SCSI_S_NEXUS_FAILURE] if the nexus is suffering a failure
but retrying on other paths might yield a different result.
\item[VIRTIO_SCSI_S_BUSY] if the request failed but retrying on the
same path is likely to work.
\item[VIRTIO_SCSI_S_FAILURE] for other host or driver error. In
particular, if neither \field{dataout} nor \field{datain} is empty, and the
VIRTIO_SCSI_F_INOUT feature has not been negotiated, the
request will be immediately returned with a response equal to
VIRTIO_SCSI_S_FAILURE.
\end{description}
All commands must be completed before the virtio-scsi device is
reset or unplugged. The device MAY choose to abort them, or if
it does not do so MUST pick the VIRTIO_SCSI_S_FAILURE response.
\drivernormative{\paragraph}{Device Operation: Request Queues}{Device Types / SCSI Host Device / Device Operation / Device Operation: Request Queues}
\field{task_attr}, \field{prio} and \field{crn} SHOULD be zero.
Upon receiving a VIRTIO_SCSI_S_TARGET_FAILURE response, the driver
SHOULD NOT retry the request on other paths.
\paragraph{Legacy Interface: Device Operation: Request Queues}\label{sec:Device Types / SCSI Host Device / Device Operation / Device Operation: Request Queues / Legacy Interface: Device Operation: Request Queues}
When using the legacy interface, transitional devices and drivers
MUST format the fields in struct virtio_scsi_req_cmd
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
\subsubsection{Device Operation: controlq}\label{sec:Device Types / SCSI Host Device / Device Operation / Device Operation: controlq}
The controlq is used for other SCSI transport operations.
Requests have the following format:
{
\lstset{escapechar=\$}
\begin{lstlisting}
struct virtio_scsi_ctrl {
le32 type;
$\ldots$
u8 response;
};
/* response values valid for all commands */
#define VIRTIO_SCSI_S_OK 0
#define VIRTIO_SCSI_S_BAD_TARGET 3
#define VIRTIO_SCSI_S_BUSY 5
#define VIRTIO_SCSI_S_TRANSPORT_FAILURE 6
#define VIRTIO_SCSI_S_TARGET_FAILURE 7
#define VIRTIO_SCSI_S_NEXUS_FAILURE 8
#define VIRTIO_SCSI_S_FAILURE 9
#define VIRTIO_SCSI_S_INCORRECT_LUN 12
\end{lstlisting}
}
The \field{type} identifies the remaining fields.
The following commands are defined:
\begin{itemize}
\item Task management function.
\begin{lstlisting}
#define VIRTIO_SCSI_T_TMF 0
#define VIRTIO_SCSI_T_TMF_ABORT_TASK 0
#define VIRTIO_SCSI_T_TMF_ABORT_TASK_SET 1
#define VIRTIO_SCSI_T_TMF_CLEAR_ACA 2
#define VIRTIO_SCSI_T_TMF_CLEAR_TASK_SET 3
#define VIRTIO_SCSI_T_TMF_I_T_NEXUS_RESET 4
#define VIRTIO_SCSI_T_TMF_LOGICAL_UNIT_RESET 5
#define VIRTIO_SCSI_T_TMF_QUERY_TASK 6
#define VIRTIO_SCSI_T_TMF_QUERY_TASK_SET 7
struct virtio_scsi_ctrl_tmf {
// Device-readable part
le32 type;
le32 subtype;
u8 lun[8];
le64 id;
// Device-writable part
u8 response;
};
/* command-specific response values */
#define VIRTIO_SCSI_S_FUNCTION_COMPLETE 0
#define VIRTIO_SCSI_S_FUNCTION_SUCCEEDED 10
#define VIRTIO_SCSI_S_FUNCTION_REJECTED 11
\end{lstlisting}
The \field{type} is VIRTIO_SCSI_T_TMF; \field{subtype} defines which
task management function. All
fields except \field{response} are filled by the driver.
Other fields which are irrelevant for the requested TMF
are ignored but they are still present. \field{lun}
is in the same format specified for request queues; the
single level LUN is ignored when the task management function
addresses a whole I_T nexus. When relevant, the value of \field{id}
is matched against the id values passed on the requestq.
The outcome of the task management function is written by the
device in \field{response}. The command-specific response
values map 1-to-1 with those defined in \hyperref[intro:SAM]{SAM}.
Task management functions can affect the response value for commands that
are in the request queue and have not been completed yet. For example,
the device MUST complete all active commands on a logical unit
or target (possibly with a VIRTIO_SCSI_S_RESET response code)
upon receiving a "logical unit reset" or "I_T nexus reset" TMF.
Similarly, the device MUST complete the selected commands (possibly
with a VIRTIO_SCSI_S_ABORTED response code) upon receiving an "abort
task" or "abort task set" TMF. Such effects MUST take place before
the TMF itself is successfully completed, and the device MUST use
memory barriers appropriately in order to ensure that the driver sees
these writes in the correct order.
\item Asynchronous notification query.
\begin{lstlisting}
#define VIRTIO_SCSI_T_AN_QUERY 1
struct virtio_scsi_ctrl_an {
// Device-readable part
le32 type;
u8 lun[8];
le32 event_requested;
// Device-writable part
le32 event_actual;
u8 response;
};
#define VIRTIO_SCSI_EVT_ASYNC_OPERATIONAL_CHANGE 2
#define VIRTIO_SCSI_EVT_ASYNC_POWER_MGMT 4
#define VIRTIO_SCSI_EVT_ASYNC_EXTERNAL_REQUEST 8
#define VIRTIO_SCSI_EVT_ASYNC_MEDIA_CHANGE 16
#define VIRTIO_SCSI_EVT_ASYNC_MULTI_HOST 32
#define VIRTIO_SCSI_EVT_ASYNC_DEVICE_BUSY 64
\end{lstlisting}
By sending this command, the driver asks the device which
events the given LUN can report, as described in paragraphs 6.6
and A.6 of \hyperref[intro:SCSI MMC]{SCSI MMC}. The driver writes the
events it is interested in into \field{event_requested}; the device
responds by writing the events that it supports into
\field{event_actual}.
The \field{type} is VIRTIO_SCSI_T_AN_QUERY. \field{lun} and \field{event_requested}
are written by the driver. \field{event_actual} and \field{response}
fields are written by the device.
No command-specific values are defined for the \field{response} byte.
\item Asynchronous notification subscription.
\begin{lstlisting}
#define VIRTIO_SCSI_T_AN_SUBSCRIBE 2
struct virtio_scsi_ctrl_an {
// Device-readable part
le32 type;
u8 lun[8];
le32 event_requested;
// Device-writable part
le32 event_actual;
u8 response;
};
\end{lstlisting}
By sending this command, the driver asks the specified LUN to
report events for its physical interface, again as described in
\hyperref[intro:SCSI MMC]{SCSI MMC}. The driver writes the events it is
interested in into \field{event_requested}; the device responds by
writing the events that it supports into \field{event_actual}.
Event types are the same as for the asynchronous notification
query message.
The \field{type} is VIRTIO_SCSI_T_AN_SUBSCRIBE. \field{lun} and
\field{event_requested} are written by the driver.
\field{event_actual} and \field{response} are written by the device.
No command-specific values are defined for the response byte.
\end{itemize}
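As an illustration of the task management function format described
above, a driver issuing a logical unit reset could fill the
device-readable part as follows (a sketch: the helper name is
hypothetical, and the little-endian conversion of the multi-byte
fields is omitted):

\begin{lstlisting}
#include <stdint.h>
#include <string.h>

struct virtio_scsi_ctrl_tmf {
    uint32_t type;     /* le32 on the wire */
    uint32_t subtype;  /* le32 on the wire */
    uint8_t  lun[8];
    uint64_t id;       /* le64 on the wire */
    uint8_t  response; /* device-writable */
} __attribute__((packed));

#define VIRTIO_SCSI_T_TMF                    0
#define VIRTIO_SCSI_T_TMF_LOGICAL_UNIT_RESET 5

/* Fill a "logical unit reset" request. id is irrelevant for this
 * TMF and left zero; response is written by the device. */
static void fill_lun_reset(struct virtio_scsi_ctrl_tmf *req,
                           const uint8_t lun[8])
{
    memset(req, 0, sizeof(*req));
    req->type = VIRTIO_SCSI_T_TMF;
    req->subtype = VIRTIO_SCSI_T_TMF_LOGICAL_UNIT_RESET;
    memcpy(req->lun, lun, 8);
}
\end{lstlisting}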
\paragraph{Legacy Interface: Device Operation: controlq}\label{sec:Device Types / SCSI Host Device / Device Operation / Device Operation: controlq / Legacy Interface: Device Operation: controlq}
When using the legacy interface, transitional devices and drivers
MUST format the fields in struct virtio_scsi_ctrl, struct
virtio_scsi_ctrl_tmf and struct virtio_scsi_ctrl_an
according to the native endian of the guest rather than
(necessarily when not using the legacy interface) little-endian.
\subsubsection{Device Operation: eventq}\label{sec:Device Types / SCSI Host Device / Device Operation / Device Operation: eventq}
The eventq is populated by the driver for the device to report information on logical
units that are attached to it. In general, the device will not
queue events to cope with an empty eventq, and will end up
dropping events if it finds no buffer ready. However, when
reporting events for many LUNs (e.g. when a whole target
disappears), the device can throttle events to avoid dropping
them. For this reason, placing 10-15 buffers on the event queue
is sufficient.
Buffers returned by the device on the eventq will be referred to
as ``events'' in the rest of this section. Events have the
following format:
\begin{lstlisting}
#define VIRTIO_SCSI_T_EVENTS_MISSED 0x80000000
struct virtio_scsi_event {
// Device-writable part
le32 event;
u8 lun[8];
le32 reason;
};
\end{lstlisting}
The device sets bit 31 in \field{event} to report lost events
due to missing buffers.
The meaning of \field{reason} depends on the
contents of \field{event}. The following events are defined:
\begin{itemize}
\item No event.
\begin{lstlisting}
#define VIRTIO_SCSI_T_NO_EVENT 0
\end{lstlisting}
This event is fired in the following cases:
\begin{itemize}
\item When the device detects in the eventq a buffer that is
shorter than what is indicated in the configuration field, it
MAY use it immediately and put this dummy value in \field{event}.
A well-written driver will never observe this
situation.
\item When events are dropped, the device MAY signal this event as
soon as the driver makes a buffer available, in order to
request action from the driver. In this case, of course, this
event will be reported with the VIRTIO_SCSI_T_EVENTS_MISSED
flag.
\end{itemize}
\item Transport reset
\begin{lstlisting}
#define VIRTIO_SCSI_T_TRANSPORT_RESET 1
#define VIRTIO_SCSI_EVT_RESET_HARD 0
#define VIRTIO_SCSI_EVT_RESET_RESCAN 1
#define VIRTIO_SCSI_EVT_RESET_REMOVED 2
\end{lstlisting}
By sending this event, the device signals that a logical unit
on a target has been reset, including the case of a new device
appearing or disappearing on the bus. The device fills in all
fields. \field{event} is set to
VIRTIO_SCSI_T_TRANSPORT_RESET. \field{lun} addresses a
logical unit in the SCSI host.
The \field{reason} value is one of the three \#define values appearing
above:
\begin{description}
\item[VIRTIO_SCSI_EVT_RESET_REMOVED] (``LUN/target removed'') is used
if the target or logical unit is no longer able to receive
commands.
\item[VIRTIO_SCSI_EVT_RESET_HARD] (``LUN hard reset'') is used if the
logical unit has been reset, but is still present.
\item[VIRTIO_SCSI_EVT_RESET_RESCAN] (``rescan LUN/target'') is used if
a target or logical unit has just appeared on the device.
\end{description}
The ``removed'' and ``rescan'' events can happen when the
VIRTIO_SCSI_F_HOTPLUG feature was negotiated; when sent for LUN 0,
they MAY apply to the entire target so the driver can ask the
initiator to rescan the target to detect this.
Events will also be reported via sense codes (this obviously
does not apply to newly appeared buses or targets, since the
application has never discovered them):
\begin{itemize}
\item ``LUN/target removed'' maps to sense key ILLEGAL REQUEST, asc
0x25, ascq 0x00 (LOGICAL UNIT NOT SUPPORTED)
\item ``LUN hard reset'' maps to sense key UNIT ATTENTION, asc 0x29
(POWER ON, RESET OR BUS DEVICE RESET OCCURRED)
\item ``rescan LUN/target'' maps to sense key UNIT ATTENTION, asc
0x3f, ascq 0x0e (REPORTED LUNS DATA HAS CHANGED)
\end{itemize}
The preferred way to detect transport reset is always to use
events, because sense codes are only seen by the driver when it
sends a SCSI command to the logical unit or target. However, in
case events are dropped, the initiator will still be able to
synchronize with the actual state of the controller if the
driver asks the initiator to rescan the SCSI bus. During the
rescan, the initiator will be able to observe the above sense
codes, and it will process them as if the driver had
received the equivalent event.
\item Asynchronous notification
\begin{lstlisting}
#define VIRTIO_SCSI_T_ASYNC_NOTIFY 2
\end{lstlisting}
By sending this event, the device signals that an asynchronous
event was fired from a physical interface.
All fields are written by the device. \field{event} is set to
VIRTIO_SCSI_T_ASYNC_NOTIFY. \field{lun} addresses a logical
unit in the SCSI host. \field{reason} is a subset of the
events that the driver has subscribed to via the ``Asynchronous
notification subscription'' command.
\item LUN parameter change
\begin{lstlisting}
#define VIRTIO_SCSI_T_PARAM_CHANGE 3
\end{lstlisting}
By sending this event, the device signals a change in the configuration parameters
of a logical unit, for example the capacity or cache mode.
\field{event} is set to VIRTIO_SCSI_T_PARAM_CHANGE.
\field{lun} addresses a logical unit in the SCSI host.
The same event SHOULD also be reported as a unit attention condition.
\field{reason} contains the additional sense code and additional sense code qualifier,
respectively in bits 0\ldots 7 and 8\ldots 15.
\begin{note}
For example, a change in capacity will be reported as asc 0x2a, ascq 0x09
(CAPACITY DATA HAS CHANGED).
\end{note}
For MMC devices (inquiry type 5) there would be some overlap between this
event and the asynchronous notification event, so for simplicity the host never
reports this event for MMC devices.
\end{itemize}
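On the driver side, decoding an event amounts to stripping bit 31 and
switching on the remaining value; for VIRTIO_SCSI_T_PARAM_CHANGE the
additional sense code and qualifier occupy bits 0\ldots 7 and
8\ldots 15 of \field{reason}. A sketch with hypothetical helper names:

\begin{lstlisting}
#include <stdint.h>

#define VIRTIO_SCSI_T_EVENTS_MISSED 0x80000000u
#define VIRTIO_SCSI_T_PARAM_CHANGE  3

/* Return the event type with the events-missed flag stripped,
 * reporting the flag separately. */
static uint32_t decode_event(uint32_t event, int *events_missed)
{
    *events_missed = (event & VIRTIO_SCSI_T_EVENTS_MISSED) != 0;
    return event & ~VIRTIO_SCSI_T_EVENTS_MISSED;
}

/* For VIRTIO_SCSI_T_PARAM_CHANGE: additional sense code in
 * bits 0..7, additional sense code qualifier in bits 8..15. */
static void decode_param_change(uint32_t reason,
                                uint8_t *asc, uint8_t *ascq)
{
    *asc  = reason & 0xff;
    *ascq = (reason >> 8) & 0xff;
}
\end{lstlisting}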
\drivernormative{\paragraph}{Device Operation: eventq}{Device Types / SCSI Host Device / Device Operation / Device Operation: eventq}
The driver SHOULD keep the eventq populated with buffers. These
buffers MUST be device-writable, and SHOULD be at least
\field{event_info_size} bytes long, and MUST be at least the size of
struct virtio_scsi_event.
If \field{event} has bit 31 set, the driver SHOULD
poll the logical units for unit attention conditions, and/or do
whatever form of bus scan is appropriate for the guest operating
system and SHOULD poll for asynchronous events manually using SCSI commands.
When receiving a VIRTIO_SCSI_T_TRANSPORT_RESET message with
\field{reason} set to VIRTIO_SCSI_EVT_RESET_REMOVED or
VIRTIO_SCSI_EVT_RESET_RESCAN for LUN 0, the driver SHOULD ask the
initiator to rescan the target, in order to detect the case when an
entire target has appeared or disappeared.
\devicenormative{\paragraph}{Device Operation: eventq}{Device Types / SCSI Host Device / Device Operation / Device Operation: eventq}
The device MUST set bit 31 in \field{event} if events were lost due to
missing buffers, and it MAY use a VIRTIO_SCSI_T_NO_EVENT event to report
this.
The device MUST NOT send VIRTIO_SCSI_T_TRANSPORT_RESET messages
with \field{reason} set to VIRTIO_SCSI_EVT_RESET_REMOVED or
VIRTIO_SCSI_EVT_RESET_RESCAN unless VIRTIO_SCSI_F_HOTPLUG was negotiated.
The device MUST NOT report VIRTIO_SCSI_T_PARAM_CHANGE for MMC devices.
\paragraph{Legacy Interface: Device Operation: eventq}\label{sec:Device Types / SCSI Host Device / Device Operation / Device Operation: eventq / Legacy Interface: Device Operation: eventq}
When using the legacy interface, transitional devices and drivers
MUST format the fields in struct virtio_scsi_event
according to the native endian of the guest rather than
little-endian (as would be required when not using the legacy interface).
\subsubsection{Legacy Interface: Framing Requirements}\label{sec:Device
Types / SCSI Host Device / Legacy Interface: Framing Requirements}
When using legacy interfaces, transitional drivers which have not
negotiated VIRTIO_F_ANY_LAYOUT MUST use a single descriptor for the
\field{lun}, \field{id}, \field{task_attr}, \field{prio},
\field{crn} and \field{cdb} fields, and MUST only use a single
descriptor for the \field{sense_len}, \field{residual},
\field{status_qualifier}, \field{status}, \field{response} and
\field{sense} fields.
\input{virtio-gpu.tex}
\input{virtio-input.tex}
\input{virtio-crypto.tex}
\input{virtio-vsock.tex}
\input{virtio-fs.tex}
\input{virtio-rpmb.tex}
\chapter{Reserved Feature Bits}\label{sec:Reserved Feature Bits}
Currently the following device-independent feature bits are defined:
\begin{description}
\item[VIRTIO_F_RING_INDIRECT_DESC (28)] Negotiating this feature indicates
that the driver can use descriptors with the VIRTQ_DESC_F_INDIRECT
flag set, as described in \ref{sec:Basic Facilities of a Virtio
Device / Virtqueues / The Virtqueue Descriptor Table / Indirect
Descriptors}~\nameref{sec:Basic Facilities of a Virtio Device /
Virtqueues / The Virtqueue Descriptor Table / Indirect
Descriptors} and \ref{sec:Packed Virtqueues / Indirect Flag: Scatter-Gather Support}~\nameref{sec:Packed Virtqueues / Indirect Flag: Scatter-Gather Support}.
\item[VIRTIO_F_RING_EVENT_IDX(29)] This feature enables the \field{used_event}
and the \field{avail_event} fields as described in
\ref{sec:Basic Facilities of a Virtio Device / Virtqueues / Used Buffer Notification Suppression}, \ref{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Used Ring} and \ref{sec:Packed Virtqueues / Driver and Device Event Suppression}.
\item[VIRTIO_F_VERSION_1(32)] This indicates compliance with this
specification, giving a simple way to detect legacy devices or drivers.
\item[VIRTIO_F_ACCESS_PLATFORM(33)] This feature indicates that
the device can be used on a platform where device access to data
in memory is limited and/or translated. E.g. this is the case if the device can be located
behind an IOMMU that translates bus addresses from the device into physical
addresses in memory, if the device can be limited to only access
certain memory addresses or if special commands such as
a cache flush may be needed to synchronise data in memory with
the device. Whether accesses are actually limited or translated
is described by platform-specific means.
If this feature bit is set to 0, then the device
has the same access to memory addresses supplied to it as the
driver has.
In particular, the device will always use physical addresses
matching addresses used by the driver (typically meaning
physical addresses used by the CPU)
and not translated further, and can access any address supplied to it by
the driver. When clear, this overrides any platform-specific description of
whether device access is limited or translated in any way, e.g.
whether an IOMMU may be present.
\item[VIRTIO_F_RING_PACKED(34)] This feature indicates
support for the packed virtqueue layout as described in
\ref{sec:Basic Facilities of a Virtio Device / Packed Virtqueues}~\nameref{sec:Basic Facilities of a Virtio Device / Packed Virtqueues}.
\item[VIRTIO_F_IN_ORDER(35)] This feature indicates
that all buffers are used by the device in the same
order in which they have been made available.
\item[VIRTIO_F_ORDER_PLATFORM(36)] This feature indicates
that memory accesses by the driver and the device are ordered
in a way described by the platform.
If this feature bit is negotiated, the ordering in effect for any
memory accesses by the driver that need to be ordered in a specific way
with respect to accesses by the device is the one suitable for devices
described by the platform. This implies that the driver needs to use
memory barriers suitable for devices described by the platform; e.g.
for the PCI transport in the case of hardware PCI devices.
If this feature bit is not negotiated, then the device
and driver are assumed to be implemented in software, that is
they can be assumed to run on identical CPUs
in an SMP configuration.
Thus a weaker form of memory barriers is sufficient
and can yield better performance.
\item[VIRTIO_F_SR_IOV(37)] This feature indicates that
the device supports Single Root I/O Virtualization.
Currently only PCI devices support this feature.
\item[VIRTIO_F_NOTIFICATION_DATA(38)] This feature indicates
that the driver passes extra data (besides identifying the virtqueue)
in its device notifications.
See \ref{sec:Virtqueues / Driver notifications}~\nameref{sec:Virtqueues / Driver notifications}.
\end{description}
\drivernormative{\section}{Reserved Feature Bits}{Reserved Feature Bits}
A driver MUST accept VIRTIO_F_VERSION_1 if it is offered. A driver
MAY fail to operate further if VIRTIO_F_VERSION_1 is not offered.
A driver SHOULD accept VIRTIO_F_ACCESS_PLATFORM if it is offered, and it MUST
then either disable the IOMMU or configure the IOMMU to translate bus addresses
passed to the device into physical addresses in memory. If
VIRTIO_F_ACCESS_PLATFORM is not offered, then a driver MUST pass only physical
addresses to the device.
A driver SHOULD accept VIRTIO_F_RING_PACKED if it is offered.
A driver SHOULD accept VIRTIO_F_ORDER_PLATFORM if it is offered.
If VIRTIO_F_ORDER_PLATFORM has been negotiated, a driver MUST use
the barriers suitable for hardware devices.
If VIRTIO_F_SR_IOV has been negotiated, a driver MAY enable
virtual functions through the device's PCI SR-IOV capability
structure. A driver MUST NOT negotiate VIRTIO_F_SR_IOV if
the device does not have a PCI SR-IOV capability structure
or is not a PCI device. A driver MUST negotiate
VIRTIO_F_SR_IOV and complete the feature negotiation
(including checking the FEATURES_OK \field{device status}
bit) before enabling virtual functions through the device's
PCI SR-IOV capability structure. After once successfully
negotiating VIRTIO_F_SR_IOV, the driver MAY enable virtual
functions through the device's PCI SR-IOV capability
structure even if the device or the system has been fully
or partially reset, and even without re-negotiating
VIRTIO_F_SR_IOV after the reset.
\devicenormative{\section}{Reserved Feature Bits}{Reserved Feature Bits}
A device MUST offer VIRTIO_F_VERSION_1. A device MAY fail to operate further
if VIRTIO_F_VERSION_1 is not accepted.
A device SHOULD offer VIRTIO_F_ACCESS_PLATFORM if its access to
memory is through bus addresses distinct from and translated
by the platform to physical addresses used by the driver, and/or
if it can only access certain memory addresses with said access
specified and/or granted by the platform.
A device MAY fail to operate further if VIRTIO_F_ACCESS_PLATFORM is not
accepted.
If VIRTIO_F_IN_ORDER has been negotiated, a device MUST use
buffers in the same order in which they have been made available.
A device MAY fail to operate further if
VIRTIO_F_ORDER_PLATFORM is offered but not accepted.
A device MAY operate in a slower emulation mode if
VIRTIO_F_ORDER_PLATFORM is offered but not accepted.
It is RECOMMENDED that an add-in card based PCI device
offers both VIRTIO_F_ACCESS_PLATFORM and
VIRTIO_F_ORDER_PLATFORM for maximum portability.
A device SHOULD offer VIRTIO_F_SR_IOV if it is a PCI device
and presents a PCI SR-IOV capability structure, otherwise
it MUST NOT offer VIRTIO_F_SR_IOV.
\section{Legacy Interface: Reserved Feature Bits}\label{sec:Reserved Feature Bits / Legacy Interface: Reserved Feature Bits}
Transitional devices MAY offer the following:
\begin{description}
\item[VIRTIO_F_NOTIFY_ON_EMPTY (24)] If this feature
has been negotiated by the driver, the device MUST issue
a used buffer notification if the device runs
out of available descriptors on a virtqueue, even though
notifications are suppressed using the VIRTQ_AVAIL_F_NO_INTERRUPT
flag or the \field{used_event} field.
\begin{note}
An example of a driver using this feature is the legacy
networking driver: it doesn't need to know every time a packet
is transmitted, but it does need to free the transmitted
packets a finite time after they are transmitted. It can avoid
using a timer if the device notifies it when all the packets
are transmitted.
\end{note}
\end{description}
Transitional devices MUST offer, and if offered by the device
transitional drivers MUST accept the following:
\begin{description}
\item[VIRTIO_F_ANY_LAYOUT (27)] This feature indicates that the device
accepts arbitrary descriptor layouts, as described in Section
\ref{sec:Basic Facilities of a Virtio Device / Virtqueues / Message Framing / Legacy Interface: Message Framing}~\nameref{sec:Basic Facilities of a Virtio Device / Virtqueues / Message Framing / Legacy Interface: Message Framing}.
\item[UNUSED (30)] Bit 30 is used by qemu's implementation to check
for experimental early versions of virtio which did not perform
correct feature negotiation, and SHOULD NOT be negotiated.
\end{description}
\documentclass[twoside,twocolumn]{article}
\usepackage{graphicx}
\usepackage[sc]{mathpazo}
\usepackage[T1]{fontenc}
\linespread{1.05}
\usepackage{microtype}
\usepackage[english]{babel}
\usepackage[hmarginratio=1:1,top=32mm,columnsep=20pt]{geometry}
\usepackage[hang, small,labelfont=bf,up,textfont=it,up]{caption}
\usepackage{booktabs}
\usepackage{lettrine}
\usepackage{enumitem}
\setlist[itemize]{noitemsep}
\usepackage{abstract}
\renewcommand{\abstractnamefont}{\normalfont\bfseries}
\renewcommand{\abstracttextfont}{\normalfont\small\itshape}
\usepackage{titlesec}
\renewcommand\thesection{\Roman{section}}
\renewcommand\thesubsection{\roman{subsection}}
\titleformat{\section}[block]{\large\scshape\centering}{\thesection.}{1em}{}
\titleformat{\subsection}[block]{\large}{\thesubsection.}{1em}{}
\usepackage{fancyhdr}
\pagestyle{fancy}
\fancyhead{}
\fancyfoot{}
\fancyhead[C]{Ethereum 2.0 Client Metrics $\bullet$ \today}
\fancyfoot[RO,LE]{\thepage}
\usepackage{titling}
\usepackage{hyperref}
\usepackage{caption}
\usepackage{subcaption}
\setlength{\droptitle}{-4\baselineskip}
\pretitle{\begin{center}\Huge\bfseries}
\posttitle{\end{center}}
\title{Ethereum 2.0 Client Metrics 07/2020}
\author{\textsc{Afri Schoedon, \href{https://github.com/q9f}{@q9f}}}\date{\today}
\begin{document}
\maketitle
\section{Introduction}
\lettrine[nindent=0em,lines=3]{E}thereum 2.0 will be a new blockchain protocol enabling -- among other features -- horizontal scalability through sharding and a transition of the chain to a proof-of-stake consensus algorithm.\par
\begin{figure}
\centering
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[draft,page=1,width=0.6\textwidth]{../res/plots.pdf}
\caption{Lighthouse is depicted in orange.}
\label{img:lh}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[draft,page=2,width=0.6\textwidth]{../res/plots.pdf}
\caption{Prysm is depicted in purple.}
\label{img:pr}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[draft,page=3,width=0.6\textwidth]{../res/plots.pdf}
\caption{Teku is depicted in turquoise.}
\label{img:tk}
\end{subfigure}
\begin{subfigure}[t]{0.48\textwidth}
\centering
\includegraphics[draft,page=4,width=0.6\textwidth]{../res/plots.pdf}
\caption{Nimbus is depicted in blue.}
\label{img:nb}
\end{subfigure}
\caption{All data collected is displayed in these matrices: time running, slot height, blocks per second, database size, memory usage, and peer count.}
\label{fig:cli}
\end{figure}
\subsection{Motivation}
None of the features that Ethereum 2.0 will bring are being implemented in established Ethereum 1.x clients such as Geth or Besu. Therefore, a new generation of core clients to power the beacon chain is under development. None of these clients has ever been used in production before.\par
With the launch of the beacon chain expected to happen in 2020, this report presents the second compilation of key metrics of four selected Ethereum 2.0 clients, namely Lighthouse, Prysm, Teku, and Nimbus.\par
This work shall allow insights into the performance and stability of the given beacon-chain node implementations.\par
\subsection{Previous Benchmark}
In June 2020, a similar, preliminary benchmark was conducted\footnote{\href{https://github.com/q9f/eth2-bench-2020-06}{github.com/q9f/eth2-bench-2020-06}}, gathering first insights into client metrics and collecting feedback from the Ethereum 2.0 core-developer community.\par
Before diving into the results, please note the following.
\begin{enumerate}
\item Most importantly, this work adds Nimbus to the list of profiled clients, allowing for a comparison of four clients' metrics instead of three.
\item The numbers in this report are \textit{not} comparable with numbers in the previous report. This is mainly due to the use of different, dedicated, bare-metal hardware for determining these numbers as compared to the virtual hosts used in the previous work.
\item Unfortunately, the previous report contained a methodological inaccuracy. While all clients were run under the same conditions doing a full synchronization, the Prysm client was not built with optimized compiler settings. The team is aware and the documentation will be updated accordingly for all users\footnote{\href{https://github.com/prysmaticlabs/documentation/issues/189}{prysmaticlabs/documentation\#189}}. This has been revised and all clients are provided with release binaries.
\item Last but not least, this benchmark was conducted on the Altona testnet\footnote{\href{https://github.com/goerli/altona}{github.com/goerli/altona}}. In contrast to the Witti testnet used in the last report, the Altona testnet has a different composition of validators and currently contains fewer blocks than the Witti testnet had in June 2020.
\end{enumerate}
\subsection{Commented Data}
This article seeks to document the gathered metrics of different clients adhering to scientific methodology. It does not, however, intend to replace a peer-reviewed publication. It simply represents a version of the data commented by the author.\par
The raw data is available on Github\footnote{\href{https://github.com/q9f/eth2-bench-2020-07}{github.com/q9f/eth2-bench-2020-07}} for further analysis.
\section{Clients}
\label{sec:cli}
Four clients are used for comparing key-performance metrics.\par
\textsc{Lighthouse} is an Ethereum 2.0 client developed by Sigma Prime\footnote{\href{https://github.com/sigp/lighthouse}{github.com/sigp/lighthouse}}. It is implemented in the Rust programming language. Data referring to the Lighthouse client is depicted in orange throughout this document (figure \ref{img:lh}).\par
\textsc{Prysm} is a beacon-chain implementation written in Go\footnote{\href{https://github.com/prysmaticlabs/prysm}{github.com/prysmaticlabs/prysm}}. It is being maintained by the Prysmatic Labs team. Data referring to the Prysm client is depicted in purple throughout this document (figure \ref{img:pr}).\par
\textsc{Teku} is an enterprise-grade Ethereum 2.0 client built by the PegaSys Engineering team\footnote{\href{https://github.com/PegaSysEng/teku}{github.com/PegaSysEng/teku}}. It is implemented in Java and data referring to the Teku client is depicted in turquoise throughout this document (figure \ref{img:tk}).\par
\textsc{Nimbus} is a beacon-node implementation written in Nim built by the Status team\footnote{\href{https://github.com/status-im/nim-beacon-chain}{github.com/status-im/nim-beacon-chain}}. Data referring to the Nim-Beacon-Chain client is depicted in blue throughout this document (figure \ref{img:nb}).\par
Other clients implementing the Ethereum 2.0 protocol exist, namely ChainSafe Systems' \textsc{Lodestar}\footnote{\href{https://github.com/ChainSafe/lodestar}{github.com/ChainSafe/lodestar}}, Nethermind's \textsc{Cortex}\footnote{\href{https://github.com/NethermindEth/cortex}{github.com/NethermindEth/cortex}}, and the Ethereum Foundation's \textsc{Trinity}\footnote{\href{https://github.com/ethereum/trinity}{github.com/ethereum/trinity}}. Due to the different progress of implementing the protocol specification and core components, these clients were not yet considered for comparison.\par
\section{Metadata}
The data is gathered on the Altona testnet. Altona is the third multi-client testnet, launched with the four clients introduced in section \ref{sec:cli} as genesis validators.\par
At the time of collecting the metrics, the Altona testnet is based on \texttt{v0.12.1} of the Ethereum 2.0 beacon-chain specification. It contains approximately 120,000 slots and is run by 3,792 validators.\par
\subsection{Host Systems}
Four identical host systems have been installed for the sole purpose of the performance inspection. The host systems are dedicated bare-metal servers with an Ubuntu 20.04 LTS operating system kernel version \texttt{5.4.0-40-generic}.\par
The host machines are powered by an Intel Xeon E3-1240 v6 CPU with 8 cores. The available memory is 32 GB and the SSD disks provide 250 GB of capacity.\par
\begin{figure}[t]
\centering
\includegraphics[draft,page=5,width=0.45\textwidth]{../res/plots.pdf}
\caption{Synchronization progress over time.}
\label{img:sync:prog}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[draft,page=9,width=0.45\textwidth]{../res/plots.pdf}
\caption{Synchronization speed over time.}
\label{img:sync:sped}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[draft,page=6,width=0.45\textwidth]{../res/plots.pdf}
\caption{Database size over time.}
\label{img:db}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[draft,page=7,width=0.45\textwidth]{../res/plots.pdf}
\caption{Resident memory usage over time.}
\label{img:mem}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[draft,page=8,width=0.45\textwidth]{../res/plots.pdf}
\caption{Client's peer count over time.}
\label{img:per}
\end{figure}
\subsection{Client Versions}
All clients were compiled on July 16th, 2020, from the latest available source-code targeting the version \texttt{v0.12.1} of the Ethereum 2.0 specification.
\begin{itemize}
\item \textbf{Lighthouse}: version \texttt{lighthouse/0.1.2}, compiled from \texttt{master} branch at commit \texttt{fc5e6cbb} from July 16th, 2020, with Rust version \texttt{1.44.1}-stable through Cargo.
\item \textbf{Prysm}: compiled from \texttt{master} branch at commit \texttt{df738517} from July 16th, 2020, with Go version \texttt{1.13.8} through Bazel.
\item \textbf{Teku}: version \texttt{teku/v0.12.2-dev}, compiled from \texttt{master} branch at commit \texttt{04b0a00a} from July 16th, 2020, with Java version \texttt{14.0.1} through Gradle.
\item \textbf{Nimbus}: version \texttt{beacon\_node v0.5.0}, compiled from \texttt{devel} branch at commit \texttt{3dfbc311} from July 15th, 2020, with Nim version \texttt{1.2.2} through Make.
\end{itemize}
All clients contain a built-in Altona configuration and were provided with a sufficient number of bootstrap nodes to ensure good connectivity and eliminate potential networking bottlenecks (compare section \ref{sec:perf} point \ref{sec:perf:p2p}).\par
\section{Performance}
\label{sec:perf}
This document only inspects the performance metrics of beacon-chain node implementations. Other features such as running validator clients, bootstrap nodes, or other relevant tooling are disregarded for simplicity.
\subsection{Synchronization Metrics}
Figure \ref{img:sync:prog} displays the progress of synchronizing the four aforementioned clients. Notably, the Lighthouse client manages to fully synchronize all blocks and verify all signatures in a little less than 25 minutes, with the Prysm client \textit{on par}, finishing the same task in about the same time. Teku completes the same task in 1 hour and 26 minutes, whereas Nimbus requires 6 hours and 54 minutes to fully sync and verify the Altona beacon chain.\par
Also, figure \ref{img:sync:sped} displays the same data but computing the synchronization speed in slots per second by taking the time required to fully catch up with the beacon-chain head. The plotted data points display a moving average over 60 seconds, the plotted line shows a moving average over 10 minutes. Lighthouse and Prysm lead the chart at an average of approximately 80 slots per second on the dedicated hardware.\par
The data at a glance:
\begin{itemize}
\item \textbf{Lighthouse} synchronizes 122,105 slots in 1,495 seconds at an overall average speed of 81.676 slots per second.
\item \textbf{Prysm} catches up with 122,069 slots in 1,535 seconds at 79.524 slots per second.
\item \textbf{Teku} synchronizes 122,412 slots in 5,174 seconds at an average speed of 23.659 slots per second.
\item \textbf{Nimbus} catches up with 124,051 slots in 24,844 seconds at 4.9932 slots per second.
\end{itemize}
All clients do a full verification of all signatures during synchronization by default. The teams are currently working on integrating new \texttt{bls} libraries, which could improve these metrics even further.\par
\subsection{Database Metrics}
Figure \ref{img:db} displays the database size in Bytes plotted over time of running the nodes. The patterns are left uncommented for the client developers to analyze.\par
The data at a glance:
\begin{itemize}
\item \textbf{Teku} requires 66.3 MiB after 124,342 slots.
\item \textbf{Prysm} requires 324 MiB after 124,342 slots.
\item \textbf{Lighthouse} requires 403 MiB after 124,342 slots.
\item \textbf{Nimbus} requires 3.98 GiB after 124,295 slots.
\end{itemize}
The data indicates that the Teku, Prysm, and Lighthouse clients implement database pruning by default, i.e., by removing everything that is invalidated by finalization or non-checkpointed states. The Nimbus client is running in archive mode by default.\par
\subsection{Memory Metrics}
Figure \ref{img:mem} displays resident set size reported by the four clients. Again, the patterns are left uncommented. Notably, the Nimbus and Lighthouse clients appear to be most efficient concerning memory usage, requiring around 500 MiB in default operation mode. Prysm peaks at just below 1.3 GiB.\par
Teku reports a little less than 10 GiB. The actual Java heap memory used by Teku on Altona can be assumed to be much lower; the off-heap memory that Java allocates is not easily controlled by the team. The JVM is greedy about available memory; however, it is still possible to run Teku nodes on machines with very little available memory, e.g., 2 GB.\par
\subsection{Networking Metrics}
\label{sec:perf:p2p}
Figure \ref{img:per} displays the peer count of every client during operation. There is not much to comment on; this metric simply serves as a sanity check to rule out networking issues that could impact any of the other metrics.\par
Notably, there is a drastic drop in peers of the Nimbus client which, however, does not appear to correlate with any of the other metrics collected above.\par
\section{Conclusion}
The plots allow for an overview of key performance and stability metrics of the four tested clients.\par
Notably, both Lighthouse and Prysm appear to be highly optimized in their performance and mature in the implementation of the beacon-chain specification.\par
The relatively new Teku client already shows good performance but the metrics allow the conclusion that there is still room for optimization, especially regarding its memory footprint.\par
The Nimbus client which premiered as genesis validator on the Altona testnet shows potential for implementing further features such as pruning and optimizations of the networking and verification code.\par
\vspace{\fill}
\section*{Note}
The author is not affiliated with any of the teams implementing an Ethereum 2.0 client. The author is independently funded through the Ethereum Foundation's Ecosystem Support Program\footnote{\href{https://esp.ethereum.foundation}{esp.ethereum.foundation}} and the Goerli Testnet Initiative\footnote{\href{https://goerli.net}{goerli.net}}.\par
The author is not speaking on behalf of any organization.\par
A warm note of thanks goes out to everyone who reviewed the initial June-2020 report and provided valuable feedback allowing for a more accurate data gathering in this subsequent report.\par
And finally, a big thanks to the client teams patiently answering questions and sharing insights about the protocol implementations.
\texttt{:)}
\end{document}
\documentclass{beamer}
\usepackage[utf8]{inputenc}
\usepackage{capt-of}
\usepackage[american]{babel}
\usepackage{csquotes}
\usepackage[style=apa,backend=biber]{biblatex}
\usepackage{tikz}
\usepackage{mathrsfs}
\usetheme{Madrid}
\usecolortheme{default}
\let\oldfootnoterule\footnoterule
\def\footnoterule{\only<2->\oldfootnoterule}
\DeclareLanguageMapping{american}{american-apa}
\addbibresource{biblio.bib}
\setbeamertemplate{bibliography item}{\insertbiblabel}
%------------------------------------------------------------
%This block of code defines the information to appear in the
%Title page
\title[Defining Random Variables] %optional
{Defining Random Variables}
% \subtitle{}
\author[R. Mok] % (optional)
{R.~Mok BSc(AdvMath), MTeach, GradCertDS}
\date[November 2021] % (optional)
{AISNSW Focus Day, November 2021}
%End of title page configuration block
%------------------------------------------------------------
% Random Variables are a new addition to the HSC Syllabus in the Mathematics Advanced and Mathematics Extension I courses. However, there is a problem - there are not many resources accessible to high school students or teachers, including the Syllabus, that clearly or rigorously define what Random Variables are, often glossing over the definition and moving straight on to distributions. As a response to this problem, this presentation aims to uncover what Random Variables actually are with greater detail and how they are connected to other probability concepts, developing a deeper understanding of the concept before discussing distributions. This will enable and equip teachers to approach their own study of the topic with a clearer understanding, and as a result be able to answer questions posed by their students with more confidence and correctness. So what is a Random Variable? Let's find out.
%------------------------------------------------------------
%The next block of commands puts the table of contents at the
%beginning of each section and highlights the current section:
\AtBeginSection[]
{
\begin{frame}
\frametitle{Table of Contents}
\tableofcontents[currentsection]
\end{frame}
}
%------------------------------------------------------------
\begin{document}
%The next statement creates the title page.
\frame{\titlepage}
%---------------------------------------------------------
%This block of code is for the table of contents after
%the title page
\begin{frame}
\frametitle{Table of Contents}
\tableofcontents
\end{frame}
%---------------------------------------------------------
\section{Purpose of Presentation}
%---------------------------------------------------------
\begin{frame}
\frametitle{Purpose of Presentation}
\begin{block}{The Problem}
\begin{itemize}
\item<2-> The Syllabus attempts to define a random variable in terms of what it tries to achieve/model but doesn't actually define what it is:
\begin{itemize}
\item Define and categorise random variables
\begin{itemize}
\item know that a random variable describes some aspect in a population from which samples can be drawn
\item know the difference between a discrete random variable and a continuous random variable
\end{itemize}
\end{itemize}
\parencite[p.~47]{syllabus}
\item<3-> The Syllabus glossary isn't a great definition at all:
\begin{itemize}
\item A random variable is a variable whose possible values are outcomes of a statistical experiment or random phenomenon.
\parencite[p.~73]{syllabus}
\end{itemize}
\item<5-> A well-known textbook used in schools gives an \emph{example} of a random variable that models tossing 5 coins rather than defining what it is.
\end{itemize}
\end{block}
\end{frame}
\begin{frame}
\frametitle{Purpose of Presentation}
\begin{block}{Purpose}
\begin{itemize}
\item<2-> Define probability concepts properly such as Random Variables.
\item<3-> Analogous to being able to \emph{compute} derivatives vs. \emph{understand} derivatives, a deeper understanding of Random Variables enhances one's appreciation for the computations and applications.
% \item<4-> As a disclaimer, this means some material in this presentation could be out of syllabus.
\end{itemize}
\end{block}
\end{frame}
\begin{frame}
\frametitle{Purpose of Presentation}
\begin{block}{\parencite[pp.~2-3]{analysis_tao}}
There is a certain philosophical satisfaction in knowing \emph{why} things work... you can certainly use things like the chain rule, L'H\^{o}pital's rule, or integration by parts without knowing why these rules work, or whether there are exceptions to these rules. However, one can get into trouble if one applies rules without knowing where they came from and what the limits of their applicability are.
% Page 2-3 Analysis I
\end{block}
\end{frame}
%---------------------------------------------------------
\section{Set and Function Preliminaries}
%---------------------------------------------------------
\begin{frame}
\frametitle{Preliminaries}
\begin{block}{Sets}
A set is a collection of distinct objects. e.g. $A = \{1,2,a,\mbox{Dave}\}$, or $B = \{c, 2, \pi, \mathbb{H}, \mathfrak{g}\}$.
\end{block}
Recall the set operations:
\begin{itemize}
\item Union: $A \cup B = \{1,2,a,c,\pi,\mathbb{H},\mathfrak{g},\mbox{Dave}\}$
\item Intersection: $A \cap B = \{2\}$
\end{itemize}
Recall that a set $C$ is a subset of $A$ if all elements of $C$ are also in $A$, and we write: $C \subseteq A$. Example: $\{1,\mbox{Dave}\}\subseteq A$.
\end{frame}
\begin{frame}
\frametitle{Preliminaries}
\begin{block}{Function}
% Preferred notation that can be found in the glossary of the syllabus, or at universities - but we don't expect students to learn it this way as it is not in the main content outcomes in the syllabus
A function $f$ from a set $A$ into $B$, denoted $f: A \rightarrow B$ assigns to each $a \in A$ an element $f(a) = b \in B$.
The set $A$ is called the \emph{domain} and the set $B$ is called the \emph{co-domain}.
The \emph{range} is the set $\{ f(a) | a \in A \}$.
\end{block}
\begin{example}
\begin{center}
\begin{tikzpicture}[scale = .4]
\draw (-3,-3.5) node {$A$} (3,-3.5) node {$B$};
\draw (0,3) node {$f$};
\draw (-3,0) ellipse (2 and 3);
\draw (-3,1.5) node {$1$} (-3,0.5) node {$2$} (-3, -0.5) node {$a$} (-3,-1.5) node {Dave};
\draw (3,2) node {$c$} (3,1) node {$2$} (3, 0) node {$\pi$} (3,-1) node {$\mathbb{H}$} (3,-2) node {$\mathfrak{g}$};
\draw (3,0) ellipse (2 and 3);
\draw[->] (-2.2,1.5) -- (2.5,2);
\draw[->] (-2.2,0.5) -- (2.5,-1.85);
\draw[->] (-2.2,-0.5) -- (2.5,1);
\draw[->] (-1.8,-1.5) -- (2.5,-2);
\pause
\draw (12, 1.5) node {Domain: $\{1,2,a,\mbox{Dave}\}$};
\draw (12, 0) node {Co-domain: $\{c,2,\pi,\mathbb{H},\mathfrak{g}\}$};
\draw (12, -1.5) node {Range: $\{c, 2, \mathfrak{g}\}$};
\end{tikzpicture}
\end{center}
\end{example}
\end{frame}
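For readers following along at a computer, the arrow diagram above can be sketched as a small Python mapping. This is purely illustrative; the set elements are written as strings, and the assignments mirror the arrows drawn on the slide:

```python
# A function f: A -> B modeled as a dict; keys form the domain,
# values lie in the co-domain (assignments mirror the slide's arrows).
A = {"1", "2", "a", "Dave"}                       # domain
B = {"c", "2", "pi", "H", "g"}                    # co-domain
f = {"1": "c", "2": "g", "a": "2", "Dave": "g"}

# A function assigns exactly one image in B to each element of A.
assert set(f) == A and set(f.values()) <= B

# The range is the set of values actually attained.
range_f = {f[a] for a in A}
print(range_f)  # the set {'c', '2', 'g'} (printed order may vary)
```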
\begin{frame}
\frametitle{Preliminaries}
\begin{block}{Countable and Uncountable Sets}
Two types of size (or cardinality) of sets that one encounters in mathematics are:
\begin{itemize}
\item Countable: elements in a countable set can be listed (enumerated), e.g. the set of integers $\mathbb{Z}$, the set of rational numbers $\mathbb{Q}$, any finite set.
\item Uncountable: elements in an uncountable set cannot be listed (enumerated), e.g. the set of real numbers $\mathbb{R}$, the real interval $[0,1]$.
\end{itemize}
% \begin{center}
% $n(\mathbb{Z}) = n(\mathbb{Q}) < n(\mathbb{R})$
% \end{center}
\end{block}
\end{frame}
% --------------------------------------------------------
\section{Probability Concepts as Sets}
% --------------------------------------------------------
\begin{frame}
\frametitle{Sample Space}
\begin{block}{Sample Space \parencite[p.~192]{measure_tao}}
The Sample Space is the set of all possible states that a random system could be in, denoted $\Omega$.
\end{block}
\begin{example}
Consider the coloured spinner:
\begin{center}
\begin{tikzpicture}
\draw[fill=blue] (0,0) -- (1,0) arc (0:90:1) -- cycle;
\draw[fill=red] (0,0) -- (1,0) arc (0:{-135/2}:1) -- cycle;
\draw[fill=red] (0,0) -- ({cos(-135/2)},{sin(-135/2)}) arc ({-135/2}: -135: 1) -- cycle;
\draw[fill=green] (0,0) -- (0,1) arc (90:225:1) -- cycle;
\draw[ultra thick, ->, black, opacity = 0.7] (0,0) -- ({0.9*cos(-24)}, {0.9*sin(-24)});
\draw[fill=black] (0,0) circle (0.05);
\draw ({0.5*cos(45)}, {0.5*sin(45)}) node {$B$};
\draw ({0.5*cos(-135/4)}, {0.5*sin(-135/4)}) node {$R$};
\draw ({0.5*cos(-135/4*3)}, {0.5*sin(-135/4*3)}) node {$R$};
\draw ({0.5*cos((90+225)/2)}, {0.5*sin((90+225)/2)}) node {$G$};
\end{tikzpicture}
\end{center}
What is the sample space?
\pause
$\Omega = \{B, G, R\}$
\pause
It could also be: $\Omega = [0, 2\pi)$
\end{example}
\end{frame}
\begin{frame}
\frametitle{Events and Event Spaces}
\begin{block}{Event \parencite[p.~192]{measure_tao}}
An Event, $E$, is a subset of the Sample Space $\Omega$. i.e. $E \subseteq \Omega$.
\end{block}
\begin{example}
The set $E = \{2,4,6\}$ would represent the Event of rolling even numbers on a 6-sided die with sample space $\Omega = \{1,2,3,4,5,6\}$.
\end{example}
\end{frame}
\begin{frame}
\begin{block}{Event Space \parencite[p.~192]{measure_tao}}
An Event Space (also known as a sigma-field), $\mathscr{F}$, is the set of all possible Events that one can measure the probability of.
\end{block}
\begin{example}
Continuing with the running example of the spinner on the previous slide, the Event Space would be:
\begin{itemize}
\item<2-> $\mathscr{F} = \{\{\}, \{B\}, \{G\}, \{R\}, \{B,G\}, \{B,R\}, \{G, R\}, \{B, G, R\}\}$
\item<3-> What would $\mathscr{F}$ be if the Sample Space was $[0,2\pi)$? (Not in the scope of today's presentation - open for questions at the end of the presentation)
\end{itemize}
\end{example}
\end{frame}
\begin{frame}
\frametitle{Probability Measure}
\begin{block}{Probability Measure \parencite[p.~10]{daners}}
The probability measure of an Event $E$, denoted $P(E)$, is a function $P: \mathscr{F} \rightarrow [0,1]$, and:
% input is an event
% output is a number between 0 and 1
\begin{itemize}
\item For all $n \in \mathbb{N}$, if the events $E_1, E_2, \ldots, E_n \in \mathscr{F}$ are pairwise disjoint (i.e. any two of them have empty intersection), then $P(E_1 \cup E_2 \cup \ldots \cup E_n) = P(E_1) + P(E_2) + \ldots + P(E_n)$.
\item $P(\{\}) = 0$
\item $P(\Omega) = 1$
\end{itemize}
\end{block}
\begin{example}
\pause
Continuing with the running example of the spinner with sample space $\Omega = \{B, G, R\}$, we can measure the probability of the Events with the following function:
$P(\{\}) = 0$, $P(\{B\}) = \frac{1}{4}$, $P(\{G\}) = \frac{3}{8}$, $P(\{R\}) = \frac{3}{8}$. The probability of the other events can be found using the first property listed above.
\end{example}
\end{frame}
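For a finite sample space like the spinner's, the probability measure can be sketched in a few lines of Python, with exact arithmetic via `fractions`. The singleton values are the ones given in the example; everything else follows from additivity:

```python
from fractions import Fraction

# Singleton probabilities from the spinner example.
p = {"B": Fraction(1, 4), "G": Fraction(3, 8), "R": Fraction(3, 8)}

def P(event):
    """Probability measure on events: add up the singleton probabilities."""
    return sum((p[outcome] for outcome in event), Fraction(0))

# The three properties from the definition, on this finite event space:
assert P(set()) == 0                         # P({}) = 0
assert P({"B", "G", "R"}) == 1               # P(Omega) = 1
assert P({"G", "R"}) == P({"G"}) + P({"R"})  # additivity for disjoint events
```

Because singletons are pairwise disjoint, this one rule determines the probability of all eight events in $\mathscr{F}$.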
\begin{frame}
\frametitle{Summary So Far}
\begin{block}{Summary So Far}
\begin{itemize}
\item<1-> Sets
\item<2-> Functions $f: A \rightarrow B$, domain, co-domain, range
\item<3-> Countable and Uncountable sets
\item<4-> Sample Space $\Omega$
\item<5-> Event Space $\mathscr{F}$
\item<6-> Probability Measure $P: \mathscr{F} \rightarrow [0,1]$
\item<6-> Note: Many references call the triple $(\Omega, \mathscr{F}, P)$ a Probability Space.
\end{itemize}
\end{block}
\end{frame}
% --------------------------------------------------------
\section{Random Variables}
% --------------------------------------------------------
\begin{frame}
\frametitle{Random Variables}
\begin{alertblock}{Random Variable \parencite[p.~143]{daners}}
A random variable $X$ is a \emph{function} $X: \Omega \rightarrow \mathbb{R}$.
\begin{itemize}
\item<2-> $X$ is a \emph{discrete random variable} if and only if the \emph{range} of $X$ is countable.
\item<3-> $X$ is a \emph{continuous random variable} if and only if the \emph{range} of $X$ is uncountable.
\end{itemize}
\end{alertblock}
\pause\pause\pause
\begin{block}{Useful Notation for Random Variables}
Some conventional notation:
\begin{itemize}
\item<5-> $[X = a] = \{\omega \in \Omega | X(\omega) = a\}$
\item<6-> $[X < a] = \{\omega \in \Omega | X(\omega) < a\}$
\item<7-> $[a < X < b] = \{\omega \in \Omega | a < X(\omega) < b\}$, etc
\item<8-> More generally: $[X \in S] = \{\omega \in \Omega | X(\omega) \in S\}$
\end{itemize}
\end{block}
\end{frame}
\begin{frame}
\frametitle{Example of Discrete Random Variable}
\begin{example}
\begin{center}
\begin{tikzpicture}[scale = 0.9]
% draw the spinner
\draw[fill=blue] (0,0) -- (1,0) arc (0:90:1) -- cycle;
\draw[fill=red] (0,0) -- (1,0) arc (0:{-135/2}:1) -- cycle;
\draw[fill=red] (0,0) -- ({cos(-135/2)},{sin(-135/2)}) arc ({-135/2}: -135: 1) -- cycle;
\draw[fill=green] (0,0) -- (0,1) arc (90:225:1) -- cycle;
\draw[ultra thick, ->, black, opacity = 0.7] (0,0) -- ({0.9*cos(-24)}, {0.9*sin(-24)});
\draw[fill=black] (0,0) circle (0.05);
\draw ({0.5*cos(45)}, {0.5*sin(45)}) node {$B$};
\draw ({0.5*cos(-135/4)}, {0.5*sin(-135/4)}) node {$R$};
\draw ({0.5*cos(-135/4*3)}, {0.5*sin(-135/4*3)}) node {$R$};
\draw ({0.5*cos((90+225)/2)}, {0.5*sin((90+225)/2)}) node {$G$};
% draw the random variable as internal diagrams
\draw (4.5, 2) node {$X$};
\draw (3,-2.3) node {$\Omega$};
\draw (6,-2.3) node {$\mathbb{R}$};
\draw (3,0) ellipse (1 and 2);
\draw (6,0) ellipse (1 and 2);
\pause
\draw (3,1) node {$B$};
\draw (3,0) node {$G$};
\draw (3,-1) node {$R$};
\pause
\draw (6,1.2) node {$\vdots$};
\draw (6,0.5) node {$1$};
\draw (6,-0.5) node {$4$};
\draw (6,-1) node {$\vdots$};
\draw[->] (3.2, 1) -- (5.7, 0.5);
\draw[->] (3.2, 0) -- (5.7, -0.4);
\draw[->] (3.2, -1) -- (5.7, -0.6);
\end{tikzpicture}
\end{center}
\end{example}
\begin{itemize}
\item<4-> Range: $\{1, 4\}$
\item<5-> $[X = 4] = \{G, R\}$
\item<6-> $[X > -1] = \{B, G, R\}$
\item<7-> $[X > 100] = \{\}$
\end{itemize}
\end{frame}
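The bracket notation $[X = a]$ is just a preimage, which is easy to make concrete in Python. Using the spinner's random variable from the diagram ($B \mapsto 1$, $G \mapsto 4$, $R \mapsto 4$):

```python
Omega = {"B", "G", "R"}
X = {"B": 1, "G": 4, "R": 4}   # the random variable as a function Omega -> R

def event(predicate):
    """[condition on X] = {omega in Omega : predicate(X(omega))}."""
    return {w for w in Omega if predicate(X[w])}

assert event(lambda x: x == 4) == {"G", "R"}        # [X = 4]
assert event(lambda x: x > -1) == {"B", "G", "R"}   # [X > -1]
assert event(lambda x: x > 100) == set()            # [X > 100]
```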
\begin{frame}
\frametitle{Example of Continuous Random Variable}
\begin{example}
A machine with a slider moving between the values of $-2$ and $2$ inclusive prints a value equal to the square of where the slider lands.\\
\pause
$X: [-2, 2] \rightarrow \mathbb{R}$\\
$X(\omega) = \omega^2$
\end{example}
\begin{itemize}
\item<3-> Range: $[0,4]$
\item<4-> $[X = 4] = \{2, -2\}$
\item<5-> $[1 < X \leq 3] = [-\sqrt{3}, -1)\cup(1, \sqrt{3}]$
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Why Use Random Variables?}
\begin{itemize}
\item<2-> An advantage of using a Random Variable over just working with Sample Spaces (like in Stage 4 Mathematics) is that it allows us to focus on relevant results in the experiment.
\item<3-> In the example of the spinner, let's say spinning a $B$ awards us with \$1 and spinning a $R$ or $G$ awards us with \$4.
\item<4-> We don't actually care about getting $B$, $G$ or $R$ - we care about the money!
\end{itemize}
\end{frame}
% --------------------------------------------------------
\section{Finding Probabilities of Random Variables}
% Finite sample space case -> just list out the probabilities of the singleton sets
% Infinite sample space case -> introduce concept of cumulative distribution function
% Mention nominally Radon-Nikodym Theorem that defines probability measure of the sample space via measure on the real numbers and derivative of the cumulative distribution function
% --------------------------------------------------------
\begin{frame}
\frametitle{Probability of Random Variables}
\begin{itemize}
\item<1-> Recall that a probability measure is a function $P: \mathscr{F} \rightarrow [0,1]$ where $\mathscr{F}$ is the set of all possible Events.
\item<2-> The notation we introduced earlier, e.g. $[X = 4]$, is what we ``sub in'' to this function $P$.
\end{itemize}
\pause\pause
\begin{example}
Continuing with the discrete random variable example of the spinner:
\[ P([X = 4]) = P(\{G, R\}) \]
\pause
Conventionally, writing $($ followed by $[$ and remembering to close them off is annoying, so we shorten the left side to just $P(X=4)$.
\pause
Recall that $P(\{G, R\}) = P(\{G\}) + P(\{R\})$.
\pause
Hence, $P(X=4) = P(\{G, R\}) = \frac{3}{8} + \frac{3}{8} = \frac{3}{4}$.
\end{example}
\end{frame}
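The computation on this slide — pull the event back through $X$, then add singleton probabilities — can be sketched directly:

```python
from fractions import Fraction

Omega = {"B", "G", "R"}
X = {"B": 1, "G": 4, "R": 4}                              # random variable
p = {"B": Fraction(1, 4), "G": Fraction(3, 8), "R": Fraction(3, 8)}

def prob_X_eq(k):
    """P(X = k): form the preimage [X = k], then sum singleton probabilities."""
    preimage = {w for w in Omega if X[w] == k}            # e.g. [X = 4] = {G, R}
    return sum((p[w] for w in preimage), Fraction(0))

assert prob_X_eq(4) == Fraction(3, 4)    # 3/8 + 3/8
assert prob_X_eq(1) == Fraction(1, 4)
assert prob_X_eq(7) == 0                 # empty preimage
```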
\begin{frame}
\frametitle{Probability Distributions}
As stated before, Random Variables should just allow us to focus on results in an experiment, without much care about the Sample Space.\\
\pause
To do this, we can define the Probability Measure to directly interact with the range of the Random Variable (the theory behind how and why this works is beyond the scope of this presentation):
\pause
\begin{block}{Discrete Random Variable Probabilities}
A \emph{Probability Distribution Function} that assigns a probability $P(X=k)$ to each $k$ in the range of $X$ is the simplest approach when dealing with Discrete Random Variables.
\end{block}
\pause
\begin{example}
As with the spinner, we can measure the probability of the Random Variable with:
\begin{tabular}{|c|c|c|c|}\hline
$k$ & $1$ & $4$ & Everything else (usually not written)\\ \hline
$P(X=k)$ & $\frac{1}{4}$ & $\frac{3}{4}$ & 0\\ \hline
\end{tabular}
\end{example}
\end{frame}
\begin{frame}
\frametitle{Probability Distributions}
\begin{block}{Continuous Random Variable Probabilities}
A probability measure for a Continuous Random Variable $X$ can be defined by:
\[ P(a < X < b) = \int_a^b f(x)\;dx \]
where $f(x)$ is called the \emph{probability density function}.
\end{block}
Note: $P(X = k) = 0$ for any $k \in \mathbb{R}$
\end{frame}
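To see the integral definition in action, here is a sketch that approximates $P(a < X < b)$ with a midpoint Riemann sum. The density used is a hypothetical continuous uniform distribution on $[0,4]$, chosen only because its probabilities are easy to verify by hand:

```python
def density(x):
    """Hypothetical pdf: continuous uniform on [0, 4]."""
    return 0.25 if 0 <= x <= 4 else 0.0

def prob(a, b, f=density, n=10_000):
    """P(a < X < b), approximated by a midpoint Riemann sum of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

assert abs(prob(1, 3) - 0.5) < 1e-6    # (3 - 1) * 1/4
assert abs(prob(0, 4) - 1.0) < 1e-6    # total probability is 1
assert prob(2, 2) == 0                 # echoes the note: P(X = k) = 0
```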
\begin{frame}
\frametitle{Cumulative Distribution Function}
\begin{block}{Cumulative Distribution Function}
The Cumulative Distribution Function $F(x)$ of a continuous random variable $X$ with probability density function $f(x)$ is
\[ F(x) = P(X < x) = \int_{-\infty}^x f(t)\;dt \]
It accumulates the probabilities of the distribution up to $x$.
\end{block}
\pause
This allows us to find quantiles easily. For example, the median can be found by solving for $k$ in $F(k) = 0.5$.\\
\pause
Note also that by the Fundamental Theorem of Calculus, differentiating the cumulative distribution function $F(x)$ will yield the probability density function $f(x)$.
\end{frame}
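Continuing the numerical theme, the CDF and a quantile can be sketched for a hypothetical continuous uniform density on $[0,4]$ (so the median should come out to $2$). $F$ is built from a midpoint Riemann sum, and the median is found by bisection on $F(k) = 0.5$:

```python
def density(x):
    """Hypothetical pdf: continuous uniform on [0, 4]."""
    return 0.25 if 0 <= x <= 4 else 0.0

def F(x, n=4000):
    """F(x) = P(X < x): integrate the density up to x (it is zero below 0)."""
    if x <= 0:
        return 0.0
    h = x / n
    return sum(density((i + 0.5) * h) for i in range(n)) * h

def median(lo=0.0, hi=4.0):
    """Solve F(k) = 0.5 by bisection (F is non-decreasing)."""
    for _ in range(60):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if F(mid) < 0.5 else (lo, mid)
    return (lo + hi) / 2

assert abs(F(4) - 1.0) < 1e-6
assert abs(median() - 2.0) < 1e-6
```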
\begin{frame}
\frametitle{Expectation and Variance}
Often researchers are interested in finding the average (mean) result of a random phenomenon and the spread of the results.
\begin{block}{Expectation}
For Discrete Random Variables (summing over the range of $X$):
\[ \mu = E(X) = \sum_{x} xP(X=x) \]
For Continuous Random Variables (Note: \alert{NOT IN SYLLABUS}):
\[ \mu = E(X) = \int_{-\infty}^\infty xf(x)\;dx \]
\end{block}
Note that if $a$ and $b$ are constants: \[E(a+bX) = a + bE(X)\]
\end{frame}
\begin{frame}
\frametitle{Variance}
\begin{block}{Variance}
\[ Var(X) = E((X-\mu)^2) \]
By expanding and simplifying, we can find a simpler formula:
\begin{align*}
Var(X) & = E(X^2 - 2X\mu +\mu^2)\\
& = E(X^2) - E(2X\mu) + E(\mu^2)\\
& = E(X^2) - 2\mu E(X) + \mu^2 = E(X^2) - 2\mu^2 + \mu^2\\
& = E(X^2) - \mu^2 = E(X^2) - E(X)^2
\end{align*}
\end{block}
\begin{block}{Standard Deviation}
Often to \emph{standardise} the squared units of the Variance to units (not squared), the standard deviation is used:
\[\sigma = \sqrt{Var(X)}\]
\end{block}
\end{frame}
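The discrete formulas above can be checked on the spinner's distribution ($P(X=1)=\frac14$, $P(X=4)=\frac34$) with exact arithmetic, including the fact that the expanded and defining forms of the variance agree:

```python
from fractions import Fraction
from math import sqrt

# Probability distribution of the spinner's random variable (from the earlier table).
dist = {1: Fraction(1, 4), 4: Fraction(3, 4)}

mu = sum(x * p for x, p in dist.items())            # E(X)
ex2 = sum(x * x * p for x, p in dist.items())       # E(X^2)
var = ex2 - mu ** 2                                 # Var(X) = E(X^2) - E(X)^2
sigma = sqrt(var)                                   # standard deviation

assert mu == Fraction(13, 4)
assert var == Fraction(27, 16)
# The shortcut formula agrees with the definition E((X - mu)^2):
assert var == sum((x - mu) ** 2 * p for x, p in dist.items())
```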
\begin{frame}
\frametitle{Important Considerations}
\begin{itemize}
\item<1-> If $a$ and $b$ are constants: $Var(a + bX) = b^2 Var(X)$ % left as proof for the reader
\item<2-> Why $E((X-\mu)^2)$ and not $E(X-\mu)$ to measure spread? % negatives cancelling positives
\item<3-> Why not $E(|X-\mu|)$? % mean absolute error, squared is mean squared error - squaring emphasises larger differences, and easier algebra
\item<4-> What is the link between mean and variance in Random Variables with the mean and variance used in Statistics? % uniform distribution of data in statistics, identical data tagged as repeats e.g. two data points of 5 could be 5_1 and 5_2
% \item<5-> What is the difference between \emph{sample} and \emph{population} standard deviation? Why is sample formula over $n-1$ and population formula over $n$? % Bessel's correction factor
% \item<6-> Which do we use in HSC?
\end{itemize}
\end{frame}
\begin{frame}
\frametitle{Special Distributions Studied in HSC}
Some special distributions studied in the HSC are listed below. There is not enough time to cover them in this presentation as they can each take up an entire session in themselves.
\begin{itemize}
\item<1-> Discrete Probability Distributions
\begin{itemize}
\item<2-> Discrete Uniform Distribution
\item<3-> Bernoulli Distribution $X \sim B(p)$
\item<4-> Binomial Distribution $X \sim Bin(n,p)$
\item<5-> Not in syllabus but good to look at: Geometric Distribution $X \sim Geo(p)$
\end{itemize}
\item<6-> Continuous Probability Distributions
\begin{itemize}
\item<7-> Continuous Uniform Distribution
\item<8-> Normal Distribution $X \sim N(\mu, \sigma^2)$
\end{itemize}
\item<9-> Covered in HSC Science Extension
\begin{itemize}
\item<9-> $t$ Distribution
\item<9-> $\chi^2$ Distribution
\item<9-> $F$ Distribution
\end{itemize}
\item<9-> Not covered in any HSC syllabus but really cool for exploration
\begin{itemize}
\item<9-> Poisson Distribution
\end{itemize}
\end{itemize}
\end{frame}
% --------------------------------------------------------
\section{Application to Questions}
% --------------------------------------------------------
% Make sure to provide solutions to these questions
\begin{frame}
\frametitle{Double Dice}
Two 6-sided dice are thrown and the results on the top faces are added.
\begin{enumerate}
\item What is the sample space?
\item How can we define a random variable here that is appropriate to the scenario?
\item What subset of the sample space does $[X < \pi]$ represent?
\item Find $P(X < \pi)$.
\end{enumerate}
\end{frame}
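A brute-force sketch of this question in Python, enumerating the 36 equally likely outcomes (left here as a checking aid, not a worked solution):

```python
from math import pi

# Sample space: ordered pairs of top faces, 36 equally likely outcomes.
Omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def X(w):
    """Random variable: the sum of the two top faces."""
    return w[0] + w[1]

event = {w for w in Omega if X(w) < pi}   # [X < pi]: the outcomes summing to 2 or 3
prob = len(event) / len(Omega)

assert event == {(1, 1), (1, 2), (2, 1)}
assert prob == 3 / 36
```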
\begin{frame}
\frametitle{A Geometric Distribution Example}
Let $X$ be a random variable that represents the number of rolls of a four-sided die until a 3 is rolled.
\begin{enumerate}
\item What is the sample space?
\item How can we define a random variable that represents the scenario?
\item Find $P(X = 1)$, $P(X=2)$, $P(X=x)$.
\item Find $E(X)$.
\end{enumerate}
\end{frame}
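A numerical sketch of this scenario, assuming a fair four-sided die (so each roll shows a 3 with probability $\frac14$); the distribution is geometric, and the partial sums hint at the exact answers without deriving them:

```python
from fractions import Fraction

p = Fraction(1, 4)   # assumed: fair four-sided die, success = rolling a 3

def P(x):
    """P(X = x): x - 1 failures followed by one success (geometric)."""
    return (1 - p) ** (x - 1) * p

assert P(1) == Fraction(1, 4)
assert P(2) == Fraction(3, 16)

# Partial sums over x = 1, ..., 199: the probabilities tend to 1,
# and the partial expectation tends to E(X) = 1/p = 4.
total = sum(P(x) for x in range(1, 200))
mean = sum(x * P(x) for x in range(1, 200))
assert abs(float(total) - 1.0) < 1e-9
assert abs(float(mean) - 4.0) < 1e-6
```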
\begin{frame}
\frametitle{Independent Binomial Probability}
John tosses a coin 4 times and independently Paul tosses a coin 4 times.
\begin{enumerate}
\item How can we define the random variables for John and Paul's results?
\item What distribution do these random variables have?
\item In random variable notation, how can we represent the probability that they toss the same number of heads?
\item Find the probability that they toss the same number of heads.
\end{enumerate}
Note: for numbers higher than 4, you may need to prove binomial identities such as $\sum_{k=0}^n \binom{n}{k}^2 = \binom{2n}{n}$.
\end{frame}
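The identity $\sum_{k=0}^n \binom{n}{k}^2 = \binom{2n}{n}$ and the resulting probability can be checked numerically with a short Python sketch (fair coins assumed):

```python
from math import comb

# Verify the binomial identity for small n.
for n in range(1, 10):
    assert sum(comb(n, k) ** 2 for k in range(n + 1)) == comb(2 * n, n)

# Probability that John and Paul (4 fair tosses each) toss the same
# number of heads: sum over k of [C(4, k) / 2^4]^2.
n = 4
same = sum(comb(n, k) ** 2 for k in range(n + 1)) / 4 ** n
assert same == 70 / 256    # = C(8, 4) / 2^8
```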
\begin{frame}
\frametitle{Distribution Functions}
A continuous random variable is distributed according to the probability density function $f(x)$ such that:
\[f(x) = \begin{cases} kx^2+1 & \text{if } 0 \leq x \leq 3 \\ 0 & \text{otherwise}\end{cases}\]
where $k$ is some constant.
\begin{enumerate}
\item Find the value of $k$.
\item Find the cumulative distribution function.
\end{enumerate}
\end{frame}
% --------------------------------------------------------
\section{Bibliography}
% --------------------------------------------------------
\begin{frame}
\frametitle{Bibliography}
\printbibliography
Source files and PDF for this presentation can be found on:
\url{https://github.com/moksifu/aisnsw_focus_day_2021}
under the MIT Licence.
\end{frame}
%---------------------------------------------------------
\end{document}
\chapter{Stimuli design}\label{Appendix}
In the following we explain in more detail the design of stimuli from Chapter \ref{Experimental study on constraints on clitic climbing out of infinitive complements}. The number of an example refers to the number of the experimental list. Each experimental list consisted of stimuli with only one of the seven matrix predicate types, i.e. raising, simple subject control, etc., as explained in Section \ref{Selection of matrix verbs}. Similarly, the Latin letter assigned to each example indicates the infinitive CL type. The letter \textit{a} is assigned to examples with third person pronominal CLs in the dative, while \textit{b} stands for examples with third person pronominal CLs in the accusative. Examples with the \textsc{refl\textsubscript{2nd}} CLs \textit{si} and \textit{se} are marked with \textit{c} and \textit{d}, respectively, while the letter \textit{e} stands for examples with the \textsc{refl\textsubscript{lex}} CL \textit{se}.\largerpage[-2]
Examples of noCC stimuli sentences for each matrix predicate type for infinitives with pronominal dative CLs are presented in \REF{A1a}--\REF{A7a}.\footnote{\vspace{-\baselineskip}%
\begin{enumerate}[label=(A.\arabic*a), ref=A.\arabic*a]
\item\label{A1a}‘We are entirely stopping complaining about the bad company he keeps.’
\item\label{A2a}‘You are presently deciding to complain about the bad company he keeps.’
\item\label{A3a}‘You are once again ashamed to complain about the bad company he keeps.’
\item\label{A4a}‘You are once again allowing them to complain about the bad company he keeps.’
\item\label{A5a}‘You are once again forcing him to complain about the bad company that one keeps.’
\item\label{A6a}‘You are once again allowing yourself to complain about the bad company he keeps.’
\item\label{A7a}‘They are once again preparing themselves to complain about the bad company he keeps.’
\end{enumerate}}
{\tabcolsep=3pt\vspace{\topsep}
\noindent\begin{tabular}{@{}lllllllll@{}}
\REF{A1a} & Posve && prestajemo & prigovarati & \textit{mu} & zbog & lošeg & društva.\\
\REF{A2a} & Ubrzo && odlučujete & prigovarati & \textit{mu} & zbog & lošeg & društva.\\
\REF{A3a} & Opet &\textit{se} & stidiš & prigovarati& \textit{mu} & zbog & lošeg& društva.\\
\REF{A4a} & Opet &\textit{im} & dopuštate &prigovarati &\textit{mu} & zbog & lošeg & društva.\\
\REF{A5a} & Opet & \textit{ga}& prisiljavaš& prigovarati &\textit{mu} & zbog & lošeg & društva.\\
\REF{A6a} & Opet & \textit{si} & dopuštaš& prigovarati & \textit{mu} & zbog & lošeg & društva.\\
\REF{A7a} & Opet & \textit{se} & spremaju& prigovarati & \textit{mu} & zbog & lošeg & društva.\\
\end{tabular}\vspace{\topsep}%
\pagebreak\noindent Examples of noCC stimuli for each matrix predicate type for infinitives with pronominal accusative CL are presented in \REF{A1b}--\REF{A7b}.\footnote{\vspace{-\baselineskip}%
\begin{enumerate}[label=(A.\arabic*b), ref=A.\arabic*b]
\item\label{A1b}‘Therefore, I am starting to invite him to the monthly meetings.’
\item\label{A2b}‘I am even trying to invite him to the monthly meetings.’
\item\label{A3b}‘We kind of hesitate to invite him to the monthly meetings.’
\item\label{A4b}‘They are persistently ordering me to invite him to the monthly meetings.’
\item\label{A5b}‘I publicly oblige them to invite him to the monthly meetings.’
\item\label{A6b}‘At the same time I am allowing myself to invite him to the monthly meetings.’
\item\label{A7b}‘We begrudgingly force ourselves to invite him to the monthly meetings.’
\end{enumerate}}
\vspace{\topsep}
\noindent\begin{tabular}{@{}lllllllll@{}}
\REF{A1b} &Stoga && krećem& pozivati &\textit{ga} &na &mjesečne &sastanke.\\
\REF{A2b} &Čak && nastojim &pozivati &\textit{ga} &na &mjesečne &sastanke.\\
\REF{A3b} &Nekako &\textit{se} &ustručavamo &pozivati &\textit{ga} &na &mjesečne &sastanke.\\
\REF{A4b} &Uporno &\textit{mi} &naređuju &pozivati &\textit{ga} &na &mjesečne &sastanke.\\
\REF{A5b} &Javno &\textit{ih} &obvezujem &pozivati &\textit{ga} &na &mjesečne &sastanke.\\
\REF{A6b} &Ujedno &\textit{si} &dopuštam &pozivati &\textit{ga} &na &mjesečne &sastanke.\\
\REF{A7b} &Nevoljko &\textit{se} &prisiljavamo &pozivati &\textit{ga} &na &mjesečne &sastanke.\\
\end{tabular}\vspace{\topsep}%
\noindent Examples of noCC stimuli for each matrix predicate type for infinitives with \textsc{refl\textsubscript{2nd}} CL \textit{si} are presented in \REF{A1c}--\REF{A7c}.\footnote{\vspace{-\baselineskip}%
\begin{enumerate}[label=(A.\arabic*c), ref=A.\arabic*c]
\item\label{A1c}‘He really has to please himself in every way.’
\item\label{A2c}‘We are really trying to please ourselves in every way.’
\item\label{A3c}‘I truly hesitate to please myself in every way.’
\item\label{A4c}‘We really allow you to please yourself in every way.’
\item\label{A5c}‘We are truly encouraging her to please herself in every way.’
\item\label{A6c}‘I am really ordering myself to please myself in every way.’
\item\label{A7c}‘He is really encouraging himself to please himself in every way.’
\end{enumerate} }
\vspace{\topsep}
\noindent\begin{tabular}{@{}lllllllll@{}}
\REF{A1c} &Zaista &&mora &ugađati &\textit{si} &u &svakom &pogledu.\\
\REF{A2c} &Zaista & &nastojimo &ugađati &\textit{si} &u &svakom &pogledu.\\
\REF{A3c} &Zaista &\textit{se} &ustručavam &ugađati &\textit{si} &u &svakom &pogledu.\\
\REF{A4c} &Zaista &\textit{ti} &dozvoljavamo& ugađati& \textit{si} &u& svakom &pogledu.\\
\REF{A5c} &Zaista & \textit{je} &potičemo &ugađati &\textit{si} &u &svakom &pogledu.\\
\REF{A6c} &Zbilja& \textit{si} &naređujem &ugađati &\textit{si} &u &svakom &pogledu.\\
\REF{A7c} &Zaista& \textit{se} &ohrabruje &ugađati &\textit{si} &u &svakom &pogledu.\\
\end{tabular}\vspace{\topsep}%
\noindent Examples of noCC stimuli for each matrix predicate type for infinitives with \textsc{refl\textsubscript{2nd}} CL \textit{se} are presented in \REF{A1d}--\REF{A7d}.\footnote{\vspace{-\baselineskip}%
\begin{enumerate}[label=(A.\arabic*d), ref=A.\arabic*d]
\item\label{A1d}‘You consciously stop hiding from curious glances.’
\item\label{A2d}‘You are consciously trying to hide from curious glances.’
\item\label{A3d}‘They even dare to hide from curious glances.’
\item\label{A4d}‘You have been allowing him to hide from curious glances since always.’
\item\label{A5d}‘Since always I have been letting you hide from curious glances.’
\item\label{A6d}‘You always allow yourself to hide from curious glances.’
\item\label{A7d}‘Since always they have been forcing themselves to hide from curious glances.’
\end{enumerate}}
\vspace{\topsep}
\noindent\begin{tabular}{@{}lllllllll@{}}
\REF{A1d} &Svjesno&& prestajete &skrivati &\textit{se} &od &znatiželjnih &pogleda.\\
\REF{A2d} &Svjesno && pokušavaš &skrivati &\textit{se} &od &znatiželjnih &pogleda.\\
\REF{A3d} &Čak &\textit{se} &usuđuju &skrivati &\textit{se} &od &znatiželjnih &pogleda.\\
\REF{A4d} &Oduvijek &\textit{mu} &dozvoljavaš &skrivati &\textit{se} &od &znatiželjnih &pogleda.\\
\REF{A5d} &Oduvijek &\textit{te} &puštam &skrivati &\textit{se} &od &znatiželjnih &pogleda.\\
\REF{A6d} &Uvijek &\textit{si} &dozvoljavaš &skrivati &\textit{se} &od &znatiželjnih &pogleda.\\
\REF{A7d} &Oduvijek &\textit{se} &primoravaju &skrivati &\textit{se} &od &znatiželjnih &pogleda.\\
\end{tabular}\vspace{\topsep}%
\noindent Examples of noCC stimuli for each matrix predicate type for infinitives with \textsc{refl\textsubscript{lex}} CL \textit{se} are presented in \REF{A1e}--\REF{A7e}.\footnote{\vspace{-\baselineskip}%
\begin{enumerate}[label=(A.\arabic*e), ref=A.\arabic*e]
\item\label{A1e}‘We are officially starting to voice your opinion on the presented suggestions.’
\item\label{A2e}‘You clearly know to voice your opinion on the presented suggestions.’
\item\label{A3e}‘They are immensely afraid to voice their opinion on the presented suggestions.’
\item\label{A4e}‘You even allow us to voice our opinion on the presented suggestions.’
\item\label{A5e}‘I visibly hurry her to voice her opinion on the presented suggestions.’
\item\label{A6e}‘I regularly allow myself to voice my opinion on the presented suggestions.’
\item\label{A7e}‘I regularly authorise myself to voice my opinion on the presented suggestions.’
\end{enumerate}}
\vspace{\topsep}
\noindent\begin{tabular}{@{}lllllllll@{}}
\REF{A1e} &Službeno && počinjemo &očitovati &\textit{se} &o &iznesenim &prijedlozima.\\
\REF{A2e} &Jasno &&znaš &očitovati &\textit{se} &o &iznesenim &prijedlozima.\\
\REF{A3e} &Silno& \textit{se} &boje &očitovati &\textit{se} &o &iznesenim &prijedlozima.\\
\REF{A4e} &Čak &\textit{nam} &dopuštaš &očitovati &\textit{se} &o &iznesenim &prijedlozima.\\
\REF{A5e} &Vidno &\textit{je} &požurujem &očitovati &\textit{se} &o &iznesenim &prijedlozima.\\
\REF{A6e} &Uredno& \textit{si} &dozvoljavam &očitovati &\textit{se} &o &iznesenim &prijedlozima.\\
\REF{A7e} &Redovito &\textit{se} &ovlašćujem &očitovati &\textit{se} &o &iznesenim &prijedlozima.\\
\end{tabular}\vspace{\topsep}%
}
\paragraph{Ground Wire} Multiple sources insist that a ground wire is necessary between the stimulating and recording electrodes~\cite{StahlMSEE,Olivo,KuehJellies,EllingerMSEE,Kladt2010}. In~\cite{Olivo}, it is suggested that one of the dissecting pins may be connected to ground. In~\cite{Kladt2010}, which is an experiment in which the earthworm is not dissected, a piece of aluminium foil is placed on the earthworm body and connected to ground. \cite{KuehJellies} implies that a chlorided silver wire is placed under the body of earthworm and connected to ground; this is illustrated in figures~\ref{fig:EWSetup} and~\ref{fig:EWSetupPA}, and it is the setup used to achieve the results in section~\ref{sec:app}.
In some of the first attempts at the experiment, I used a general purpose tin- or nickel-plated (I'm not sure which) solid copper core hookup wire, stripped of its insulation, placed under the earthworm body, and connected to ground. With the plated copper wire, I saw 60~Hz noise coming out of the Preamp connected to the recording electrodes. Switching to chlorided silver wire for the ground wire under the earthworm eliminated the problem. It was suggested to me by either Dr.~Miller or Mr.~Mike Ellinger that copper is toxic to cells and should certainly not be used for the recording electrodes, and it appears that plated copper wire should not be used for the ground wire either. It may also be that the tin or nickel plating is toxic to the worm (this is only my supposition). \cite{Olivo} and~\cite{Kladt2010} appear to successfully use a steel pin and aluminium foil, respectively, as the ground ``wire.''
\paragraph{Response Abnormalities} Another issue in our attempts at recreating the earthworm giant axon experiment described in~\cite{StahlMSEE,Olivo,KuehJellies,EllingerMSEE} was reproducing the expected biphasic shape and the amplitude of the responses. This issue was not specific to the DASS hardware, as much of this behavior was also observed with a commercial stimulator and a Preamp board from~\cite{BatzerCorsiCrampton}.
The expected biphasic (0V to positive to 0V to negative to 0V) shape of the combined action potentials along the nerve cord, as seen in the lateral response in figure~\ref{fig:EWLatResp}, is due to the relative polarization of the nerve cord between the recording electrodes~\cite{KuehJellies,McGillCAP}. It is a coincidence that the combined action potential resembles the membrane voltage during a single action potential response. It was somewhat disconcerting to see that the median response in figures~\ref{fig:EWMedResp} and~\ref{fig:EWLatResp} was monophasic (0V to positive to 0V). However, monophasic median and lateral responses were also observed in figure~32 of~\cite{StahlMSEE}. Also,~\cite{McGillCAP} concerns a similar experiment with the sciatic nerve of a frog, and it reports that a nerve cord crushed at one of the recording electrodes will result in a monophasic response measurement. I hypothesize that a similar phenomenon could occur in the earthworm's nerve cord.
We also experienced inconsistent amplitudes in the response waveforms. The position of the recording electrodes affects the amplitude of the response: recording electrodes placed further apart result in a lower amplitude, as reported in~\cite{KuehJellies,McGillCAP}. Consequently, it is expected that responses measured with different earthworms will have different amplitudes. But we experienced varying amplitudes with the same worm. One thing we observed was that when using the commercial Grass SD9 Stimulator set to send stimulation pulses at the same level about 1--5 times per second, the amplitude of the response waveforms was consistent. If the stimulation was turned off for a time and turned back on, the amplitude of the response would be different. This causes inconsistent response amplitudes when using the DASS because stimulation pulses do not happen at a consistent rate: the script run on the DASCC might be set to send one stimulation pulse at some amplitude, the results analyzed by the operator, the stimulation amplitude adjusted, and then another stimulation pulse sent. This means that stimulation pulses are sent at irregular intervals with minutes in between pulses. Moistening the nerve cord also changed the response amplitude.
Large stimulation artifacts that did not settle before the median response occurred (figure~\ref{fig:settle}), abnormal (neither monophasic nor biphasic) shapes (figure~\ref{fig:abnormal}), and multiple apparent responses from one stimulus (figure~\ref{fig:multi}) also appeared in the early experiments. As these issues were being investigated, I focused on improving the biological experimental technique. Keeping the nerve cord moist with Ringer's solution (as suggested by~\cite{Olivo,KuehJellies}) while also keeping the amount of solution collecting around the worm to a minimum (by wicking excess solution away with paper towel) appears to have kept the aforementioned issues from happening again (since the Aug.~10, 2012 experiments).
\begin{figure}[H]
\centering
\begin{singlespace}
\includegraphics[trim=0 0.1in 0 0.1in,clip,angle=-90,width=0.48\textwidth]{./figures/F0002TEK_settle_120810} %[trim=left bottom right top]
\caption{ALL0002 Aug. 10, 2012; long stimulus artifact settling time\label{fig:settle}}
\end{singlespace}
\end{figure}
\begin{figure}[H]
\centering
\begin{singlespace}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[trim=0 0.1in 0 0.1in,clip,angle=-90,width=\textwidth]{./figures/F0000TEK_abnorm_120627} %[trim=left bottom right top]
\caption{ALL0000 June 27, 2012}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[trim=0 0.1in 0 0.1in,clip,angle=-90,width=\textwidth]{./figures/F0001TEK_abnorm_120627} %[trim=left bottom right top]
\caption{ALL0001 June 27, 2012}
\end{subfigure}
\caption{Abnormally shaped earthworm giant axon responses\label{fig:abnormal}}
\end{singlespace}
\end{figure}
\begin{figure}[H]
\centering
\begin{singlespace}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[trim=0 0.1in 0 0.1in,clip,angle=-90,width=\textwidth]{./figures/F0003TEK_multi_120606} %[trim=left bottom right top]
\caption{ALL0003 June 06, 2012}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\centering
\resizebox{\textwidth}{!}{\input{MULTIresponse.tex}}
\caption{data\_out0-multi1 May 16, 2012}
\end{subfigure}
\caption{Multiple earthworm giant axon responses per single stimulus\label{fig:multi}}
\end{singlespace}
\end{figure}
\paragraph{Silver Wire Size} In an attempt to save money, larger diameter silver wire than is recommended in~\cite{KuehJellies} was used to perform the earthworm experiment. The following is an excerpt from an email composed by me and sent to Dr.~Damon A.~Miller and Mr.~Mike Ellinger on April 22, 2012 summarizing the cost savings:
\begin{quotation}
For the silver wire,~\cite{KuehJellies} says to use 0.25mm diameter wire from Warner Instruments. It looks like it can be bought here: \url{http://www.warneronline.com/product_info.cfm?ID=280&CFID=6793639&CFTOKEN=40541278}
Their silver wire is 99.99\% pure, and under ``pricing and ordering,'' 2 meters of 0.25mm diameter wire costs \$24 not including shipping plus there's a \$10 charge for having an order of less than \$75.
I found another website that sells 99.99\% 0.635mm (0.025in) wire at \$3.00/ft., which means we could get 6ft. for $\sim$\$24 including shipping or $\sim$\$15 for 3ft. Link: \url{http://www.ccsilver.com/silver/superfines.html}
Yet another website sells 6ft. of 99.9\% 0.025mm wire for $\sim$\$10 including shipping. Link: \url{http://www.ottofrei.com/store/product.php?productid=21270&cat=3847&page=1} (dead)
To sum up, we can get the original medical grade, 99.99\% purity wire in the same diameter for $>$\$35, we can get the same purity wire but with 2.5 times the diameter for \$15, or we can get 99.9\% wire in the same diameter as the experiment for \$10.
I'm thinking the 99.99\% 0.635mm wire for \$15 would be an acceptable solution.
\end{quotation}
Many of the anomalies in shape and amplitude mentioned in the previous section were observed while using the 0.635mm wire from C.C.~Silver for the recording electrodes. To eliminate the wire diameter as a factor in those anomalies, the 0.25mm diameter wire from Warner Instruments specified by~\cite{KuehJellies} was purchased and compared with the 0.635mm wire. Both wire sizes were chlorided and an earthworm was prepared. The commercial stimulator was used along with a Preamp from~\cite{BatzerCorsiCrampton} connected to an oscilloscope. Figure~\ref{fig:30to22} shows a 3.5V stimulation with response recorded first using the 0.25mm wire from Warner Instruments, then using the 0.635mm wire from C.C.~Silver. Figure~\ref{fig:22to30} shows a 3.75V stimulation with response recorded first using the 0.635mm wire, then using the 0.25mm wire.
\begin{figure}[H]
\centering
\begin{singlespace}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[trim=0 0.1in 0 0.1in,clip,angle=-90,width=\textwidth]{./figures/F0004TEK_30a_120627} %[trim=left bottom right top]
\caption{ALL0004 June 27, 2012; 0.25mm wire; 3.5V stimulus}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[trim=0 0.1in 0 0.1in,clip,angle=-90,width=\textwidth]{./figures/F0005TEK_22a_120627} %[trim=left bottom right top]
\caption{ALL0005 June 27, 2012; 0.635mm wire; 3.5V stimulus}
\end{subfigure}
\caption{Recording electrode silver wire comparison: 0.25mm to 0.635mm\label{fig:30to22}}
\end{singlespace}
\end{figure}
\begin{figure}[H]
\centering
\begin{singlespace}
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[trim=0 0.1in 0 0.1in,clip,angle=-90,width=\textwidth]{./figures/F0006TEK_22b_120627} %[trim=left bottom right top]
\caption{ALL0006 June 27, 2012; 0.635mm wire; 3.75V stimulus}
\end{subfigure}
~
\begin{subfigure}[b]{0.48\textwidth}
\centering
\includegraphics[trim=0 0.1in 0 0.1in,clip,angle=-90,width=\textwidth]{./figures/F0007TEK_30b_120627} %[trim=left bottom right top]
\caption{ALL0007 June 27, 2012; 0.25mm wire; 3.5V stimulus}
\end{subfigure}
\caption{Recording electrode silver wire comparison: 0.635mm to 0.25mm\label{fig:22to30}}
\end{singlespace}
\end{figure}
Anomalies in shape and amplitude were experienced with both wire diameters during the experiment. The figures show that similar shapes were observed with both diameters. This led me to conclude that wire diameter was not the cause of the difficulties examined in the previous section.
\documentclass[a4paper,10pt]{article}
\usepackage{amssymb}%Blacksquare
\usepackage[margin=0.7in]{geometry}
\usepackage{amsmath,amssymb}
\usepackage{parskip,graphicx}
\usepackage[makeroom]{cancel} %used to cross out terms in equations
\usepackage[utf8]{inputenc} %source file encoding
\usepackage[T1]{fontenc} %output font encoding
\usepackage[english]{babel} %English hyphenation patterns
\usepackage{float} %used for fixed tables with [H]
\floatstyle{plaintop} %used to place table caption on top
\restylefloat{table}
\usepackage[document]{ragged2e} %used to justify text
\usepackage{amsthm}
\usepackage{commath}
\usepackage{textcomp}
\usepackage{enumerate}
\usepackage{wrapfig}
\usepackage{epstopdf}
\usepackage{subfig}
\usepackage[font=small,labelfont=bf,
justification=justified,
format=plain]{caption}
\usepackage{array}% http://ctan.org/pkg/array
\usepackage{breqn} %used to split equations on multiple lines
\usepackage{listings}
\usepackage{pdfpages} %used to include pdfs
\usepackage{siunitx} %used for units
\usepackage{titlesec}
\usepackage[titletoc,toc,title]{appendix} %used for appendix
\newcommand\numberthis{\addtocounter{equation}{1}\tag{\theequation}} %used for numbering only one equation
\usepackage{braket} %quantum brackets
\DeclareMathAlphabet\mathbfcal{OMS}{cmsy}{b}{n}%caligraphy font
\numberwithin{equation}{section}
%used for pointing at equations
\usepackage{pst-node}
\usepackage{tikz-cd}
\usepackage{tikz}
\usetikzlibrary{tikzmark}
\usepackage[nottoc]{tocbibind}
\usepackage{comment}
\usepackage{fancyhdr}
\newcommand{\parl}{\overset{\leftarrow}{\partial}}
\newcommand{\parr}{\overset{\rightarrow}{\partial}}
%used for boxing equations
\usepackage{empheq}
\usepackage{xcolor}
\definecolor{lightgreen}{HTML}{90EE90}
\newcommand{\boxedeq}[2]{\begin{empheq}[box={\fboxsep=6pt\fbox}]{align}\label{#1}#2\end{empheq}}
%used for disjoint unions
\newcommand{\cupdot}{\mathbin{\mathaccent\cdot\cup}}
\usepackage{tcolorbox}
\tcbuselibrary{theorems}
\newtcbtheorem{futwork}{Future work}%
{colback=white,colframe=white,coltitle=black,fonttitle=\bfseries}{fw}
\usepackage{hyperref} %Used to hyper reference.Usually it has to be the last package to be imported, but there might be some exceptions to this rule.
\pagestyle{fancy}
\fancyhf{}
\fancyhead[L]{\rightmark}
\fancyhead[R]{\thepage}
\renewcommand{\headrulewidth}{0.4pt}
\begin{document}
\pagenumbering{gobble}
\begin{titlepage}
\begin{center}
\begin{figure}[H]
\raggedleft
\includegraphics[width=0.3\linewidth]{jacobslogo.jpg}
\end{figure}
\LARGE
\text{Jacobs University Bremen} \\
\text{Department of Physics and Earth Sciences}
\vspace{25mm}
\huge{Quantization Procedures and the Time-Energy Uncertainty} \\
\vspace{15mm}
\Large{Project Thesis}\\
\vspace{2mm}
\large{as part of the course}\\
\vspace{2mm}
\large{CA08-200303 Project Physics}\\
\vspace{7.5mm}
\end{center}
\begin{minipage}{0.4\textwidth}
\begin{flushleft}
\textit{Author:} \\
\large{Daniel Prelipcean}
\end{flushleft}
\end{minipage}
\hfill
\begin{minipage}{0.4\textwidth}
\begin{flushright}
\textit{Research Supervisor:} \\
\large Prof. Dr. Peter Schupp
\end{flushright}
\end{minipage}
\begin{center}
\large{\textbf{\textsc{Abstract:}}}
\justify
Quantization procedures, in particular for constrained systems like the point particle, were investigated in order to derive and interpret the Time-Energy Uncertainty Relation (TEUR). Two reviews, one of canonical quantization procedures and one of the TEUR, are presented. The canonical quantization of the free relativistic point particle is carried out in a general background, as well as deformation quantization in a static background. Future work includes obtaining uncertainty relations in deformation quantization and its generalization for some explicit metrics, as well as investigating other minor inconsistencies in the canonical approach.
\end{center}
\vfill
\begin{center}
\normalsize{\large{Bremen, \today}}
\end{center}
\end{titlepage}
\newpage
\setcounter{tocdepth}{3}
\tableofcontents
\newpage
\justify
\pagenumbering{arabic}
\setcounter{page}{1}
\section{Introduction: Motivation and Thesis Summary}
Rigorous formalism often comes only after the discovery process, not during or before it. A mystery still standing is the derivation and interpretation of the Heisenberg time-energy uncertainty relations. Compared to the position and momentum uncertainty relations, they lack a solid mathematical derivation. It may well be possible to understand a theory perfectly well physically, so that mathematical rigour becomes an unnecessary luxury. Nevertheless, I take the view that there exists a clear structure beneath the variety of heuristic notions physicists use when approaching and understanding physical phenomena. And while any physical theory is necessarily an approximation, the structure of the theory need not be ``approximate.''
Since the introduction of the uncertainty relation between position and momentum operators \cite{HeisenbergUR}, a controversial issue was the time-energy commutation relations:
\begin{equation}
Et - tE = -i\hbar \qquad \text{or} \qquad Jw - wJ = -i\hbar
\end{equation}
which would give the time-energy uncertainty relations (TEUR):
\begin{equation}
\Delta E \Delta t \geq \hbar/2 \qquad \text{or} \qquad \Delta J \Delta w \geq \hbar/2
\end{equation}
where $J$ is the action variable and $w$ is the angle variable (a time-indicating coordinate in periodic motion). This already introduced confusion, as noted in Ref. \cite{BuschTEUR}, since: i) it suggests the existence of a time operator and ii) it mixes up energy and time with action and angle variables. Different types of TEUR do indeed hold depending on the context (the interpretation of what time means), in spite of some Gedankenexperiments such as Einstein's photon box. However, a unique universal TEUR analogous to the position-momentum relation does not exist in the standard view of QM.
In relativistic QM, where time is promoted to a canonical variable with the Hamiltonian as conjugate momentum, the canonical TEUR \cite{HeisenbergUR} holds axiomatically as:
\begin{equation}
\Delta T \cdot \Delta E \geq \frac{\hbar}{2}
\end{equation}
In doing so, a new evolution parameter has to be introduced and, depending on the gauge fixing, it may be identical to the original coordinate time, possibly rendering the above relation futile. In any case, further interpretation of the time-energy relation is needed.
This thesis is organized as follows. Section \ref{sec:TimeQgrav} presents a short review of the problem of time in combining Quantum Mechanics and General Relativity. Section \ref{sec:relaprticleaction} shows how even the most simple state (a single particle in a gravitational background) implies a static universe. The new work of this thesis is presented in section \ref{sec:staticworldinterpretation}, where a novel interpretation to the mass-shell condition is given. Further discussion, results and future outlook are given in section \ref{sec:discussion}.
\newpage
\section{The Problem of Time in Quantum Gravity}
\label{sec:TimeQgrav}
Quantization is the transition from the classical analysis of physical phenomena to a newer understanding known as quantum mechanics, and thus it is the procedure for building quantum mechanics \textit{from} classical mechanics. Considering that the world is truly quantum and the everyday world represents the ``classical'' limit, one finds that the quantization process is not unique, i.e. one needs to add several concepts and can do so along different paths.
The three logically sound paths to quantization developed so far are:
\begin{enumerate}
\item
The Canonical Quantization Procedure, which attempts also to preserve the formal structure of the classical theory, to the greatest extent possible.
\item
Path Integrals, conceived by Dirac and developed by Feynman, involving a functional integral over all topologically allowed trajectories to compute a quantum amplitude.
\item
Deformation Quantization or Phase-Space Formulation, based on Wigner's quasi-distribution function and Weyl's correspondence between quantum-mechanical operators and ordinary phase-space functions.
\end{enumerate}
Different approaches to combining Quantum Mechanics (QM) with General Relativity (GR) have been made, but the ``Holy Grail'' of Theoretical Physics is yet to be found. The first immediate issue is the Problem of Time, which occurs because \textit{time} takes a different meaning in each of QM and GR. A rather philosophical review of this question is presented in Ref. \cite{AndersonTimeQG}.
\begin{figure}[H]
\centering
\[ \begin{tikzcd}
\mathcal{L} \arrow{r}{LT} & H \arrow{r}{Quant.} \arrow[swap]{d}{Red.} & \hat{H} \arrow{d}{Red.} \\%
& \tilde{H} \arrow{r}{Quant.}& \tilde{\hat{H}}
\end{tikzcd}
\]
\caption{The usual starting point is the Lagrangian $\mathcal{L}$, whence one derives the Hamiltonian $H$ via a Legendre Transformation (LT). Next, Quantization and Reduction to the physical phase-space (for constrained systems) have to be performed. The question, which now arises, is whether Quantization and Reduction commute or not. This is yet to be fully answered. Moreover, the Legendre transformation itself may impose problems.}
\label{quantizationscheme}
\end{figure}
\subsection{Canonical Quantization and Explicit Quantization Schemes}
The Canonical approach is inspired by the Hamiltonian formulation of classical mechanics, in which a system's dynamics is generated via canonical Poisson brackets. However, not all properties are preserved. The Canonical approach to Quantum Mechanics can be summarized by asserting that the classical Poisson brackets are replaced by commutators. Mathematically, this amounts to finding a quantization map $Q$ such that any function $f$ on the classical phase space gets promoted to an operator $\hat{Q}_f$, with the Poisson bracket mapped to the commutator:
\begin{equation}
\{f,g\} \rightarrow \frac{1}{i\hbar} [ \hat{Q}_f, \hat{Q}_g ]
\end{equation}
The following properties are desirable \cite{Qproperties} for the quantization map $Q$:
\begin{enumerate}
\item Position and momentum representation, e.g. $\hat{Q}_x \psi = x \psi$ and $\hat{Q}_p \psi = - i \hbar \partial_x \psi$
\item Linearity of $f \stackrel{Q}{\rightarrow} \hat{Q}_f$
\item Poisson brackets: $i\hbar Q_{\{f,g\}} = [ \hat{Q}_f, \hat{Q}_g ]$
\item von Neumann rule: $Q_{g \circ f} = g(Q_f)$
\end{enumerate}
Unfortunately, these four properties are mutually inconsistent \cite{incompatibility}. As one example of this incompatibility, Groenewold's theorem states that there is no such quantization map $Q$ on polynomials of degree less than or equal to four whenever $f$ and $g$ have degrees less than or equal to three.
The only pair of these properties that leads to self-consistent, nontrivial solutions is (2) together with (3). Hence, there is \textit{no} quantization map satisfying all of the above. In particular, accepting the first two properties and a weakened third one (required to hold only asymptotically in the limit $\hbar \to 0$, yielding Moyal brackets) leads to deformation quantization, tackled later in this paper.
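Properties (1) and (3) can be checked explicitly for the basic pair $f = x$, $g = p$: since $\{x, p\} = 1$, one expects $[\hat{Q}_x, \hat{Q}_p] = i\hbar$. The following symbolic sketch (using sympy; names are illustrative) verifies this in the position representation.

```python
import sympy as sp

# Position representation of the canonical pair (property 1), used to
# check the Poisson-bracket rule (property 3) for f = x, g = p:
# {x, p} = 1  should map to  [Q_x, Q_p] = i*hbar.
x = sp.symbols('x', real=True)
hbar = sp.symbols('hbar', positive=True)
psi = sp.Function('psi')(x)

Qx = lambda f: x * f                          # Q_x psi = x psi
Qp = lambda f: -sp.I * hbar * sp.diff(f, x)   # Q_p psi = -i hbar d/dx psi

commutator = sp.expand(Qx(Qp(psi)) - Qp(Qx(psi)))
assert sp.simplify(commutator - sp.I * hbar * psi) == 0
```

The incompatibility theorems concern higher-degree polynomials; this check only illustrates that the rule holds on the generating pair.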
\begin{enumerate}
\item
Dirac Quantization \cite{DiracQMLectures}: The classical Poisson bracket is altered to implement the system's constraints $\psi_j$ with Lagrange multipliers $u_j$. Given $k$ initial constraints $\psi_j$, $j = 1, \dots, k$, one has to iteratively find all subsequent constraints obeying the consistency condition:
\begin{equation}
\dot{\psi_j} = \{H, \psi_j \} + \sum_{i\neq j}^k u_i \{\psi_j, \psi_i \}
\end{equation}
This condition may give rise to new constraints $\psi_j$ with $j>k$. The procedure ends when no new constraints (that cannot be expressed in terms of previous constraints) are found. All constraints are then further classified into:
\begin{enumerate}
\item First class if their Poisson brackets vanish on the constraint surface defined by $C_\alpha (z) = 0$:
\begin{equation}
\{ C_\alpha, C_\beta \} = f_{\alpha \beta}^\gamma C_\gamma
\end{equation}
\item Second class, if their Poisson brackets do not vanish on their constraint surface:
\begin{equation}
\det\{\chi_a, \chi_b\}|_{\chi_a = 0} \neq 0
\end{equation}
\end{enumerate}
The improved Poisson brackets are called Dirac brackets:
\begin{equation}
\{F, G\}_{D} = \{F, G \}_{PB} - \{F, \chi_m\} \Lambda^{mn}\{\chi_n, G\}
\end{equation}
where $\Lambda^{mn}$ is the inverse of the constraint matrix $C_{mn} = \{\chi_m, \chi_n\}$, i.e. $\Lambda^{mk} C_{kn} = \delta^m_n$.
\item
BRST-BV Quantization \cite{BRSTprimer, BVformalism}: The constraints are promoted to another first-order object, accomplished by extending the target space $F$ by a Grassmann algebra of ``ghost-antighost pairs,'' while maintaining its structure as a symplectic manifold. That is, the cotangent bundle is promoted to a super-Poisson manifold, whose odd coordinates are given by the ghost-antighost pairs. This procedure is mainly used for gauge-invariant theories.
\item
Faddeev-Jackiw Quantization \cite{withouttears}: It can be considered an alternative, shorter version of Dirac quantization for certain scenarios. It reaches the desired brackets and Hamiltonian even for constrained Lagrangians, and is based on Darboux's theorem. In particular, it is used when the Lagrangian depends linearly on the velocities, and one does not need to distinguish between first- and second-class constraints.
\end{enumerate}
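The Dirac bracket of the first scheme can be checked on a toy second-class system. The following sympy sketch (the constraints $\chi_1 = q$, $\chi_2 = p$, freezing a one-dimensional particle at the origin, are illustrative and not from the text) verifies that the constrained pair commutes under the Dirac bracket.

```python
import sympy as sp

# Toy second-class system: a 1D particle frozen at the origin by the
# constraints chi_1 = q, chi_2 = p (illustrative example).
q, p = sp.symbols('q p', real=True)

def pb(F, G):
    """Canonical Poisson bracket on the (q, p) phase space."""
    return sp.diff(F, q) * sp.diff(G, p) - sp.diff(F, p) * sp.diff(G, q)

chi = [q, p]
C = sp.Matrix(2, 2, lambda m, n: pb(chi[m], chi[n]))  # C_mn = {chi_m, chi_n}
Lam = C.inv()                                         # Lambda^{mn}

def dirac_bracket(F, G):
    return sp.simplify(
        pb(F, G)
        - sum(pb(F, chi[m]) * Lam[m, n] * pb(chi[n], G)
              for m in range(2) for n in range(2)))

# The constrained variables commute under the Dirac bracket:
assert dirac_bracket(q, p) == 0
```

This is the expected behavior: on the constraint surface $q = p = 0$ there are no remaining degrees of freedom, so all Dirac brackets vanish.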
\subsection{Time in Classical Mechanics}
In the Lagrangian-Hamiltonian formalism, one would like to extend the $2n$-dimensional phase space $T^*M$, locally defined by pairs $\{(x, p)\}$, by introducing energy and time as the $(n + 1)$-th canonical conjugate pair, to finally reach a relativistically covariant description. However, this program cannot be carried through, i.e. it is not possible to find a Hamiltonian that generates the motion in the enlarged phase space \cite{JanNoTimeCoordinate}. The first intuitive problem is already the fact that $H$ is not independent of the other canonical variables.
Treating time $t = q_0$ completely symmetrically with the other coordinates, one can obtain a symmetric Lagrangian: time is promoted to a canonical coordinate and some other evolution parameter $\tau$ is introduced, an arbitrary, continuous parameter that varies monotonically and differentiably along the path of motion in the tangent bundle $TM$, locally defined by pairs $\{(q_i, \dot{q}_i)\}$, $i = 0, \dots, N$. Introducing a ``unified variational principle,'' one obtains the Lagrange equations of motion for all coordinates $q_i$, $i = 0, \dots, N$, together with a dependence relation (or constraint) of the form $f(q_i) = 0$. This implies that the Legendre transform $L(q_i, \dot{q}_i) \rightarrow H(q_i, p_i)$ cannot be performed.
However, the Hamiltonian equations can be obtained if one sets any of the $(n + 1)$ canonical pairs of phase-space variables $(q, p)$ to serve as the system parameter, replacing the evolution parameter $\tau$.
\iffalse
A general class of 'timelike' canonical variable is formed by the angle variables (e.g. defining the direction of the hands of a clock), with corresponding canonical momenta called action variables, such that the Hamiltonian takes the form $H = H(J1; :::; Jn)$, i.e. does not depend on the angle variables.
\fi
\subsection{Reparametrization Invariance and Static World}
\label{frozenintime}
It is shown in Ref. \cite{noHamiltonian} that there is no definition of a symmetric (that is, treating time on equal footing with space) non-vanishing Hamiltonian $H(q_i, p_i)$ for which the Hamiltonian equations:
\begin{equation}
\dot{q}_i = \frac{\partial H(q_i,p_i) }{\partial p_i} \qquad \text{and} \qquad \dot{p}_i = - \frac{\partial H(q_i,p_i) }{\partial q_i}
\end{equation}
with $i = 0$ to $N$ are correct for any arbitrary choice of parameter $\tau$ and equivalent to the Lagrangian equations. The main aspect of the above result is that the desired Hamiltonian $H$ derived by a Legendre transformation is identically zero. Moreover, although it is possible in Lagrangian mechanics that a function that is constant on the unvaried path will still lead to useful differential equations because its variation $\delta H \neq 0$ is non-zero, this is not the case for our Hamiltonian $H$.
Moreover, quantizing the result of Section \ref{particleHamiltonian} implies a stationary, timeless equation:
\begin{equation}
i \frac{\partial \psi}{\partial \tau} = \hat{H} \psi = 0
\end{equation}
In the case of GR, this is the Wheeler-DeWitt equation \cite{WdWeq}, which occurs for any reparametrization-invariant theory in a covariant approach. Alternatively, one may think that the vanishing Hamiltonian obliges one to work in the Heisenberg picture of quantum mechanics, where the time evolution appears in the operators. It would be strikingly beautiful if one could define a time operator (not simply $\exp(-iEt/\hbar)$) that could be inserted into all other operators.
\subsection{Time in Quantum Mechanics}
Two opposite (mathematical) views of time in QM have appeared.
\begin{enumerate}
\item
Time is just a scalar, a parametric label for dynamical evolution. This is asserted in standard textbooks, e.g. by Sakurai: ``The first important point we should keep in mind is that time is just a parameter in quantum mechanics, not an operator. In particular, time is not an observable. It is nonsensical to talk about the time operator in the same sense as we talk about the position operator.'' \cite{SakuraiQM}
\item
Time is indeed an operator and defining it is a difficult task. Von Neumann argued that the scalar view of QM is its main weakness: ``First of all we must admit that this objection [time being just a number] points at an essential weakness, which is, in fact, the chief weakness of quantum mechanics.'' \cite{NeumannQM}
\end{enumerate}
Promoting time to a canonical coordinate, one then requires an extra evolution parameter for the respective system. Henceforth, the \textit{physical time} shall refer to the coordinate time, which gets promoted to an operator, and \textit{parameter time} shall refer to the evolution parameter of the system, which indeed is just a scalar. Therefore, both views of time are reconciled.
A further classification of time in non-relativistic quantum theory can be done, according to Ref. \cite{BuschTEUR}:
\begin{enumerate}
\item \textit{external time:} Identified as the parameter entering the Schrödinger equation and measured by an external, detached laboratory clock. Such external time measurements are carried out with clocks that are not dynamically connected to the quantum system
\item \textit{intrinsic time:} As the dynamical evolution parameter $\tau_\phi(A)$ through which quantum observables $A$ change, giving a quantitative measure of the length of the time interval between two events
\item \textit{observable time:} Promoting time to a quantum operator.
\end{enumerate}
One already notices that the \textit{observable time} and the \textit{physical time} have the same philosophical meaning in the covariant quantum gravity language. Such confusion arises also due to mixing up the canonical position coordinates of a point particle and the coordinates of a point in space. In Ref. \cite{JanNoTimeCoordinate}, it is argued that there is no reason why coordinate time should be an operator in quantum mechanics.
A possible way to distinguish between intrinsic and observable time would be to define intrinsic time as the time at which the quantum phenomenon under observation starts, possibly ending at decoherence.
\subsection{Time Operators}
Considering time as an observable is motivated by the experiments in which times of events are recorded using lab clocks. The question is how one can relate this observed time with the intrinsic time of quantum phenomenon. In this paper, the parametric view of time does not suffice, as the main interest is the Time Energy Uncertainty Relation, which requires an operator interpretation of time.
Quite early in the development of Quantum Mechanics, Pauli argued that the existence of a self-adjoint time operator canonically conjugate to a Hamiltonian implies that both operators have continuous spectra spanning the entire real line, a result widely known as Pauli's theorem \cite{pauli1980general}. This has severe consequences for physical systems, whose Hamiltonians are usually semi-bounded (stable ground state) or have discrete spectra (e.g. the harmonic oscillator), so that such a time operator cannot exist.
Although Pauli's theorem states that one cannot in general find a self-adjoint operator yielding time, for special Hamiltonians one can obtain such an operator, canonically conjugate to the Hamiltonian, e.g. for a freely falling particle \cite{BuschTEUR}:
\begin{equation}
\hat{H}_g = \frac{\hat{P}^2}{2m} - mg\hat{Q} \Rightarrow \hat{T}_g = -\frac{1}{mg}\hat{P}
\end{equation}
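The conjugacy of this pair can be verified symbolically in the position representation, where $\hat{P} = -i\hbar\,\partial_q$. The following sympy sketch (illustrative, with the sign convention $[\hat{H}_g, \hat{T}_g] = i\hbar$) checks the commutator.

```python
import sympy as sp

# Check that T_g = -P/(m g) is conjugate to H_g (up to sign convention)
# in the position representation, where P = -i*hbar*d/dq.
q = sp.symbols('q', real=True)
m, g, hbar = sp.symbols('m g hbar', positive=True)
psi = sp.Function('psi')(q)

H = lambda f: -hbar**2 / (2*m) * sp.diff(f, q, 2) - m*g*q*f
T = lambda f: sp.I * hbar / (m*g) * sp.diff(f, q)   # -P/(m g)

comm = sp.expand(H(T(psi)) - T(H(psi)))
assert sp.simplify(comm - sp.I * hbar * psi) == 0   # [H_g, T_g] = i*hbar
```

The kinetic terms commute with $\partial_q$, so the whole commutator comes from the linear potential term, which is what evades Pauli's theorem here (the spectrum of $\hat{H}_g$ is the entire real line).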
Sticking with the Hilbert space formulation, it is considered that a compromise has to be made \cite{GalaponCanonicalPairs}.
\begin{enumerate}
\item Imposing self-adjointness leads to violation of the Canonical Commutation Relations (CCR) with the Hamiltonian
\item Strictly obeying the CCR leads to the proper time operator to not be self-adjoint
\end{enumerate}
Recently, the latter route has been taken. This implies extending the notion of quantum observables to maximally symmetric but not necessarily self-adjoint operators, i.e. generally positive operator valued measures (POVMs). Defining a time observable through a POVM rather than a self-adjoint or symmetric operator was investigated by Brunetti and Fredenhagen \cite{Fredenhagen}, who performed normalization at the operator level.
A further result \cite{galaponcanonicaltriple} states that a pair of Hilbert space operators $(Q, P)$ obeying the CCR can at most satisfy the CCR in a \textit{proper} subspace $\mathcal{D}_c \subset \mathcal{H}$. Therefore, a canonical pair in a Hilbert space is a triple $\mathcal{C}\left( Q,P; \mathcal{D}_c\right)$.
The relation between $\mathcal{D}_c$ and the reduced (constrained) phase space may not be trivial. Therefore, further work may include investigating whether the subspace $\mathcal{D}_c \subset \mathcal{H}$ coincides with the space of physical wavefunctions obeying the physical constraints (e.g. the mass-shell condition).
In the free non-relativistic particle case, the Hamiltonian and its conjugate time operator are:
\begin{equation}
H_{free} = \frac{P^2}{2m} \Rightarrow T = m\frac{q}{p}
\end{equation}
Upon quantization, one has to consider the ordering of operators, and thus more symmetric forms are proposed, such as the Aharonov-Bohm time operator:
\begin{equation}
T = m \dfrac{q}{p} \Rightarrow \qquad \hat{T} = \dfrac{1}{2}m \left( \hat{q}\hat{p}^{-1} + \hat{p}^{-1}\hat{q}\right) \qquad \text{ or} \qquad \hat{T} = \dfrac{m}{2} \left( \hat{p}^{-1/2} \hat{x} \hat{p}^{-1/2} \right)
\end{equation}
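The first symmetrized form can be checked in the momentum representation, where $\hat{q} = i\hbar\,\partial_p$ and $\hat{p}^{-1}$ acts by division (away from $p = 0$). The following sympy sketch (illustrative) verifies that it obeys $[\hat{T}, \hat{H}] = i\hbar$ with $\hat{H} = \hat{p}^2/2m$.

```python
import sympy as sp

# Momentum-representation check of the symmetrized Aharonov-Bohm time
# operator T = (m/2)(q p^{-1} + p^{-1} q) against H = p^2/(2m).
p = sp.symbols('p', positive=True)   # restrict to p > 0, away from p = 0
m, hbar = sp.symbols('m hbar', positive=True)
phi = sp.Function('phi')(p)

qhat = lambda f: sp.I * hbar * sp.diff(f, p)           # q = i hbar d/dp
T = lambda f: sp.Rational(1, 2) * m * (qhat(f / p) + qhat(f) / p)
H = lambda f: p**2 / (2*m) * f

comm = sp.simplify(T(H(phi)) - H(T(phi)))
assert sp.simplify(comm - sp.I * hbar * phi) == 0      # [T, H] = i*hbar
```

The restriction to $p > 0$ reflects exactly the domain issue discussed above: the canonical relation holds on a proper subspace, not on all of $\mathcal{H}$.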
One can try to obtain a self-adjoint operator by using a formal series expansion, but one runs into convergence problems (see Appendix \ref{timeoperatorconvproblems}). For the Hamiltonian of Section \ref{particleHamiltonian}, the time operator before quantization, for a constant einbein and metric, is:
\begin{equation}
T = \frac{1}{e} \frac{q}{p} \Rightarrow \hat{T} = \frac{1}{2e} \left( \hat{q} \hat{p}^{-1} + \hat{p}^{-1}\hat{q} \right)
\label{constanttimeoperator}
\end{equation}
\newpage
\section{The relativistic particle action}
\label{sec:relaprticleaction}
By introducing the einbein as an auxiliary field $e$ (see Appendix \ref{einbeinexplained}), one allows the action for massless particles to be non-trivial, namely:
\begin{equation}
S = \int d\lambda\, L = \int d\lambda\, \frac{1}{2} \left(\frac{1}{e} \dot{X}^{\mu}\dot{X}^{\nu}g_{\mu \nu} - e m^2 \right)
\label{action}
\end{equation}
The evolution parameter $\lambda$ is not a measure of proper time for the particle unless the tangent vector has unit norm, i.e. $\dot{X}^\mu \dot{X}_\mu = -1$. Any such parametrization will work, since the action $S$ is reparametrization invariant.
The einbein $e$ is taken to be an arbitrary function of the evolution parameter $\lambda$, i.e. $e = e(\lambda)$, acting as a world-line reparametrization or, equivalently, as shorthand for $\sqrt{-\dot{X}^{\mu}\dot{X}^{\nu}g_{\mu \nu}}/m$. It is a non-dynamical variable, like $p_\mu$, since $\dot{e}$ does not appear in the action. For simplicity, the metric $g_{\mu \nu} = g_{\mu \nu}(X)$ is later specialized to the constant Minkowski metric $\eta_{\mu\nu} = \mathrm{diag}(-,+,+,+)$.
\subsection{Equations of Motion}
The Lagrange equations of motion become:
\begin{subequations}
\begin{eqnarray}
& p_\alpha = \frac{\partial L}{\partial \dot{X}^\alpha} = \frac{1}{e} \dot{X}^{\mu}g_{\alpha \mu}
\label{generalizedmomentum} \qquad \dfrac{\partial L}{\partial X^\alpha} = \dfrac{1}{2e}\dot{X}^\mu \dot{X}^\nu \partial_\alpha g_{\mu\nu}
\\
& \dfrac{\partial}{\partial \lambda}\left( \dfrac{\partial L}{\partial \dot{X}^\alpha}\right) - \dfrac{\partial L}{\partial X^\alpha} = -\dfrac{\dot{e}}{e^2}\dot{X}^\nu g_{\alpha \nu} + \dfrac{1}{e}\ddot{X}^\mu g_{\alpha \mu} + \frac{1}{e}\dot{X}^\nu \dot{X}^\mu \partial_\nu g_{\alpha \mu} - \dfrac{1}{2e}\dot{X}^\mu \dot{X}^\nu \partial_\alpha g_{\mu \nu} = 0
\end{eqnarray}
\end{subequations}
One can obtain the Christoffel symbol from the last two terms (symmetrizing over $\mu$ and $\nu$, as allowed by the contraction with $\dot{X}^\mu \dot{X}^\nu$), namely:
\begin{equation}
\partial_\nu g_{\alpha \mu} - \frac{1}{2}\partial_\alpha g_{\mu \nu} = \frac{1}{2} \left( \partial_\nu g_{\alpha \mu} + \partial_\mu g_{\alpha \nu} - \partial_\alpha g_{\mu \nu} \right) = \Gamma_{\alpha \nu \mu}
\end{equation}
Raising the free index with the inverse metric finally gives:
\begin{equation}
\frac{1}{e}\left(\frac{-\dot{e}}{e}\dot{X}^\alpha + \ddot{X}^\alpha + \Gamma^\alpha_{\mu \nu} \dot{X}^\mu \dot{X}^\nu\right) = 0
\end{equation}
which is the geodesic equation for an affine parameter, apart from the extra first term. Moreover, considering the parameter $\lambda$ as a general function of the proper time $\tau$, this relation becomes:
\begin{equation}
\frac{-1}{e}\frac{de}{d\lambda}\frac{dX^\alpha}{d\lambda} + \frac{d^2X^\alpha}{d\lambda^2} + \Gamma^\alpha_{\mu \nu}\frac{dX^\mu}{d\lambda}\frac{dX^\nu}{d\lambda} = - \left(\frac{d^2 \lambda}{d\tau^2} \right)\cdot \left(\frac{d\lambda}{d\tau} \right)^{-2}\cdot\frac{dx^\alpha}{d\tau}
\end{equation}
An attempt to treat $e = e(X^{\mu},\lambda)$ and $g_{\mu \nu}(X,\lambda,e)$ as general functions was also carried out. Besides having little physical interpretation, it yielded no meaningful results.
\subsection{A word on the mass-shell constraint}
Varying this action with respect to $e$ yields its equation of motion:
\begin{equation}
\frac{1}{e^2} \dot{X}^{\mu}\dot{X}^{\nu}g_{\mu \nu} + m^2 =0
\label{alleom}
\end{equation}
Using the momentum definition of \ref{generalizedmomentum}, one obtains the mass-shell constraint $p_\mu p^\mu + m^2 = 0$, which we will indeed treat as an equation of motion. The Hamiltonian defined via the Legendre transform becomes:
\begin{equation}
H = p_\mu \dot{X}^\mu - L = \frac{1}{2}e(p_\mu p^\mu + m^2) = 0
\label{simpleHamiltonian}
\end{equation}
that is, a Hamiltonian that vanishes exactly via the einbein equation of motion. The einbein thus introduced enforces the mass-shell condition, which shall be treated as a constraint weakly equal to zero rather than an equation of motion that is exactly zero. The quantization of this result has so far been considered to yield the usual problem of time in quantum gravity, $\hat{H} \ket{\psi} = 0$.
One observes that this constraint is not linear in the velocities (and not even semi-holonomic). This brings up the subtle issue that Hamiltonian mechanics cannot deal with non-holonomic constraints (e.g. they cannot simply be added by introducing Lagrange multipliers). A further topic to be investigated is the relation between first- and second-class Dirac constraints and holonomic versus non-holonomic constraints.
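The Legendre transform and the einbein equation of motion can be verified symbolically. The following sympy sketch (illustrative, restricted to a flat $1{+}1$-dimensional metric $\eta = \mathrm{diag}(-1,+1)$) checks that $H = \frac{e}{2}(p_\mu p^\mu + m^2)$ and that varying $e$ reproduces the mass-shell constraint.

```python
import sympy as sp

# Flat-metric (1+1 dim, eta = diag(-1, +1)) check of the Legendre
# transform H = (e/2)(p.p + m^2) and of the einbein equation of motion.
xd0, xd1 = sp.symbols('xdot0 xdot1', real=True)   # \dot{X}^0, \dot{X}^1
e, m = sp.symbols('e m', positive=True)

xx = -xd0**2 + xd1**2                  # \dot{X}^mu \dot{X}^nu eta_{mu nu}
L = sp.Rational(1, 2) * (xx / e - e * m**2)

p0 = sp.diff(L, xd0)                   # p_mu = eta_{mu nu} \dot{X}^nu / e
p1 = sp.diff(L, xd1)
H = sp.simplify(p0 * xd0 + p1 * xd1 - L)
pp = -p0**2 + p1**2                    # p_mu p^mu

assert sp.simplify(H - e / 2 * (pp + m**2)) == 0
# Einbein equation of motion: dL/de = 0  <=>  (1/e^2) xx + m^2 = 0
assert sp.simplify(sp.diff(L, e) + sp.Rational(1, 2) * (xx / e**2 + m**2)) == 0
```

On the constraint surface $p_\mu p^\mu + m^2 = 0$ the computed $H$ indeed vanishes identically, as stated above.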
\iffalse
\subsubsection{General metric}
An interesting case arises if one keeps a metric that explicitly depends on the einbein, i.e. $g_{\mu \nu} = g_{\mu \nu}(e, X^\alpha, \tau)$. The einbein equation of motion REF can be rewritten as:
\begin{equation}
\frac{1}{e^2}\dot{X}^\mu \dot{X}^\nu g_{\mu \nu} + m^2 = - \frac{1}{e}\dot{X}^\mu \dot{X}^\nu \frac{\partial g_{\mu \nu}}{\partial e}
\end{equation}
Using this in the above formalism allows one to use Leibniz rule for chain derivatives, namely:
\begin{equation}
\frac{\partial e}{\partial X^\alpha} \frac{\partial g_{\mu \nu}}{\partial e} = \frac{\partial g_{\mu \nu}}{\partial X^\alpha}
\end{equation}
Then, the coordinates derivatives surprisingly vanish:
\begin{equation}
\frac{\partial L}{\partial X^\alpha } = \frac{1}{2e}\dot{X}^\mu \dot{X}^\nu \partial_\alpha g_{\mu \nu} - \frac{\partial e}{\partial X^\alpha } \frac{\partial g_{\mu \nu}}{\partial e}\frac{1}{2e}\dot{X}^\mu \dot{X}^\nu = \frac{1}{2e}\dot{X}^\mu \dot{X}^\nu \partial_\alpha g_{\mu \nu} - \frac{1}{2e}\dot{X}^\mu \dot{X}^\nu \partial_\alpha g_{\mu \nu} = 0
\end{equation}
This interestingly implies that the momenta are always constant in this scenario, regardless of the metric in questions (constant or not):
\begin{equation}
\frac{d}{d\tau}(p_\alpha) = 0
\label{alwaysconstantmomenta}
\end{equation}
\fi
\subsection{Overcoming the static universe}
In standard Quantum Mechanics, the Canonical Poisson brackets of classical mechanics are promoted to the Canonical Commutation Relations of quantum mechanics, that is:
\begin{eqnarray}
\{X^\mu, p_\nu\} = \delta^\mu_\nu \quad \Rightarrow \quad \left[ \hat{X}^{\mu}, \hat{p}_{\nu} \right] = i\hbar \delta^\mu_\nu \quad & \Rightarrow \quad \left[ \hat{X}^\mu, \hat{H} \right] = i\hbar e \eta^{\mu\nu} \hat{p}_\nu
\end{eqnarray}
and the observables are promoted to (self-adjoint) operators acting on the wavefunctions $\ket{\psi}$ living in the Hilbert space $\mathcal{H}= L^2(\mathbb{R}^4)$.
\subsubsection{Schr{\"o}dinger Picture}
In particular, for the Hamiltonian constraint:
\begin{equation}
\hat{H} \ket{\psi} = \frac{1}{2}e ( \hat{p}_\mu \hat{p}^\mu + m^2) \ket{\psi} = i\hbar\frac{\partial}{\partial \lambda} \ket{\psi}= 0
\label{standardHamiltonian}
\end{equation}
we run again into the usual problem of time in quantum gravity. There is no time evolution due to the trivial Hamiltonian, and the momentum is identically zero:
\begin{equation}
\hat{p}_\mu = 0
\end{equation}
as given by the vanishing Hamiltonian $\bra{\psi} \hat{H} = 0 = \hat{H} \ket{\psi}$:
\begin{subequations}
\begin{eqnarray}
e \eta^{\mu\nu} \hat{p}_\nu \ket{\psi} & = \frac{-1}{i \hbar} \left(\hat{H} \hat{X}^\mu - \hat{X}^\mu \hat{H}\right)\ket{\psi} =\frac{-1}{i \hbar} \hat{H} \hat{X}^\mu \ket{\psi} = \frac{-1}{i \hbar}\hat{X}^\mu \hat{H} \ket{\psi} = 0 \\
\bra{\psi} e \eta^{\mu\nu} \hat{p}_\nu & = \frac{-1}{i \hbar} \bra{\psi} \left(\hat{H} \hat{X}^\mu - \hat{X}^\mu \hat{H}\right) =\frac{1}{i \hbar}\bra{\psi} \hat{X}^\mu\hat{H} = \frac{1}{i \hbar}\bra{\psi} \hat{H} \hat{X}^\mu = 0
\end{eqnarray}
\end{subequations}
To overcome this, the less strict condition of operator averages (usually encountered in String Theory) is used, and eqn. \ref{standardHamiltonian} becomes:
\begin{equation}
\bra{\psi} \hat{H} \ket{\psi} = 0 \quad \text{ with } \quad \hat{H} \ket{\psi} = i\hbar\frac{\partial}{\partial \lambda} \ket{\psi} \stackrel{!}{\neq} 0
\end{equation}
This allows having non-vanishing momenta:
\begin{equation}
\hat{p}^\mu \neq 0
\end{equation}
while still keeping the expectation values obeying the classical observations, that is:
\begin{equation}
e \eta^{\mu\nu} \bra{\psi} \hat{p}_\nu \ket{\psi}= \frac{1}{i \hbar}\bra{\psi} \left[ \hat{X}^\mu, \hat{H} \right] \ket{\psi} = \frac{1}{i \hbar} \bra{\psi} \hat{X}^\mu \hat{H} - \hat{H} \hat{X}^\mu \ket{\psi} = \frac{1}{i \hbar} \bra{\psi}\hat{X}^\mu (\hat{H} - \hat{H} ) \ket{\psi} = 0
\end{equation}
\subsubsection{Heisenberg Picture}
The Hamiltonian constraint of eq. \ref{standardHamiltonian} may strongly favour the Heisenberg picture, where the wave functions are static by default, since the operators carry the time evolution via the Heisenberg equation of motion:
\begin{equation}
\frac{\partial}{\partial \lambda} \ket{\psi}_H =0 \qquad \text{ and } \qquad \frac{d}{d\lambda}A(\lambda) = \frac{1}{i\hbar} \left[A(\lambda), H\right] + \left(\frac{\partial A}{\partial \lambda} \right)_H
\label{heisenbergpicture}
\end{equation}
Therefore, in the Schr{\"o}dinger picture, the operators do not carry the time evolution, and the vanishing momentum may be a consequence of the fact that:
\begin{equation}
\frac{\partial \hat{X}_S^\mu}{\partial \lambda} = 0 \qquad \text{ and } \qquad \frac{\partial \hat{X}_H^\mu}{\partial \lambda} = \frac{1}{i\hbar} \left[ \hat{X}^\mu, H \right]= \ e \eta^{\mu\nu} \hat{p}_\nu
\end{equation}
\newpage
\section{Static world interpretation}
\label{sec:staticworldinterpretation}
Using the einbein formulation, where the mass-shell constraint is obtained implicitly as an equation of motion, one may also have restricted immediately to the time-independent formulation. Therefore, without any explicit gauge fixing (which shall be discussed in subsection \ref{sec:gaugefixing}), the Hamiltonian that gives the time evolution in the Schr{\"o}dinger picture of ordinary quantum mechanics is considered an operator $\hat{H}$ acting on the wavefunctions $\ket{\psi}$ living in the Hilbert space $\mathcal{H}= L^2(\mathbb{R}^4)$. Acting trivially on physical wavefunctions, which we denote using the subscript $m$ for mass-shell, the Hamiltonian:
\begin{equation}
\hat{H} \ket{\psi}_m = \frac{1}{2}e ( \hat{p}_\mu \hat{p}^\mu + m^2) \ket{\psi}_m = 0
\label{hamiltonianquantumconstraint}
\end{equation}
vanishes due to the einbein equation of motion, which enforces the mass-shell condition. Using the standard representation of the momentum operator $\hat{p}_\mu = - i \hbar \partial_\mu$, the above represents the Klein-Gordon equation.
The novel interpretation of this thesis is to view equation \ref{hamiltonianquantumconstraint} as a parameter time-independent equation, solved by \textit{physical} wavefunctions $\ket{\psi}_m$ that are \textit{stationary} with respect to parameter time $\lambda$, that is:
\begin{equation}
\hat{H}_0 \ket{\psi}_{m} = e \hat{p}_\mu \hat{p}^\mu \ket{\psi}_{m}= i\hbar \dfrac{\partial \ket{\psi}_{m}}{\partial \lambda} \stackrel{!}{=} -e m^2 \ket{\psi}_{m}
\label{parameternergyeigenvalueeqn}
\end{equation}
However, for a general wave function $\ket{\psi}$ that is not necessarily on mass-shell, the time evolution is given just by:
\boxedeq{eq:timeevolution}{\hat{H}_0 \ket{\psi} = e \hat{p}_\mu \hat{p}^\mu \ket{\psi}= i\hbar \dfrac{\partial \ket{\psi}}{\partial \lambda}}
The above can be seen as a free Schr{\"o}dinger equation in four (1+3) dimensions, and can later include potentials, which may not only depend on coordinate time, but may even act as potential barriers in time, just as for the spatial coordinates.
In doing so, the einbein $e$ and the parameter time $\lambda$ are introduced as two new variables. This section is dedicated to the study of the relationship between these new variables and the ordinary physical ones: $x^\mu$, $p_\mu$, $m$.
\subsection{A word on mathematics}
The above construction treats the temporal component as a spatial coordinate, and the system's evolution is described by the introduced parameter $\lambda$, playing the role of intrinsic time, parametrized by the einbein $e$. Therefore, the functional status, properties and results of the non-relativistic Schr{\"o}dinger equation are preserved, with the caveat that there is an extra fourth dimension represented by coordinate time.
\subsubsection{The Hilbert Space of wave functions}
Here, the Hilbert space of wave functions represents the set of all possible normalizable wave functions for a system, together with the null vector. Evidently, there are several choices of basis.
Unfortunately, not all wave functions of interest are elements of some Hilbert space, e.g. $L^2$. The plane wave solutions of the Schr{\"o}dinger equation for a free particle of the form $\exp{\left( i k_\mu x^\mu \right)}$ are not normalizable, and thus not in $L^2$. Nevertheless, they are essential, as one can express normalizable functions using wave packets. Therefore, they represent in a loose sense a basis (but neither a Hilbert space basis nor a Hamel basis) in which other wave functions of interest can be expressed.
The Hilbert space under consideration will mainly be $L^2(\mathbb{R}^4)= \mathcal{H}$, the space of square-integrable functions on four-dimensional space. Vectors $\psi\in\mathcal{H}$ are represented by wave functions $\psi (x^\mu)$ with the standard inner product extended to four dimensions, which also defines the norm of a wave function:
\begin{equation}
\langle \psi _{1}|\psi _{2}\rangle = \int_{\mathbb{R}^4} {\mathrm {d}}^4 x \, \psi _{1}^{\ast }(x^\mu) \psi_{2}(x^\mu) \qquad \text{ and } \qquad \norm{\psi} = \sqrt{\braket{\psi|\psi}} < \infty
\label{def:innerproduct}
\end{equation}
where the last part holds only for normalizable wave functions. Moreover, the inner product defined above is positive definite. The expectation value of an operator $\hat{A}$ in a state $\psi$ can be written as:
\begin{equation}
\langle \hat{A} \rangle_\psi = \int_{\mathbb{R}^4} {\mathrm {d}}^4 x \, \psi^{\ast }(x) \hat{A} \psi(x)
\end{equation}
Because in the usual treatment of the Klein-Gordon equation, the inner product is not positive definite, some important results from ordinary quantum mechanics do not hold, e.g. probability conservation. This is remedied in subsection \ref{subseq:currentdensity}.
\iffalse
Attempts were in made in Quantum Field Theory to define an extended Klein-Gordon inner product:
\begin{equation}
\langle \psi _{1}|\psi _{2}\rangle_{(KG)} = i g \int_{\mathbb{R}^4} {\mathrm {d}}^4 x \left[\psi _{1}^{\ast }(x) \dot{\psi}_{2}(x) - \dot{\psi}_{1}^{\ast }(x)\psi _{2}(x) \right]
\end{equation}
where $g$ is a positive real number. In our treatment, this inner product can be used as a measure for orthogonality of states based on their mass, as for $\psi_2$ an eigenstate of $H_0$, that is, $\dot{\psi}_2 = H_ 0 \psi_2 = c_2 \psi_2$:
\begin{equation}
\langle \psi _{1}|\psi _{2}\rangle_{(KG)} = i g \int_{\mathbb{R}^4} {\mathrm {d}}^4 x \left(\psi _{1}^{\ast } c_2 - \dot{\psi}_{1}^{\ast } \right) \psi _{2}
\end{equation}
which is equal to $0$ if:
\begin{enumerate}
\item $\psi_1$ also an eigenstate of $H_0$ with $H_0 \psi_1^* = - c_1 \psi_1$.
\begin{equation}
\langle \psi _{1}|\psi _{2}\rangle_{(KG)} = i g \int_{\mathbb{R}^4} {\mathrm {d}}^4 x \left(c_2 + c_1 \right) \psi_{1}^{\ast } \psi _{2} = 0
\end{equation}
Since eigenstates are orthogonal, $\braket{\psi_1 | \psi_2} = 0$. Note that $c_1 = - c_2$, ( particles with opposite eigenvalues) also implies vanishing inner product.
\item or $\psi_3 = \psi _{1}^{\ast } c_2 - \dot{\psi}_{1}^{\ast }$ is orthogonal to $\psi_2$, i.e. both $\braket{\psi_1|\psi_2} = 0 = \braket{\dot{\psi_1}|\psi_2}$ have to hold.
\end{enumerate}
\fi
\subsubsection{The functional status of einbein $e$ and mass $m$}
Upon quantization, a choice has to be made as to which classical functions (e.g. physical observables) are promoted to (self-adjoint) operators. The einbein is a one-dimensional parametrization of the world line and, given its scalar nature, it is not promoted to an operator. Although in the present treatment the einbein will be considered a classical parameter, its quantization as a dynamical function may yield surprising results.
Since mass can be measured, it is natural to consider it as an observable. This question has been dealt with before, and the answer is given here for completeness. One has to distinguish between elementary systems (or free particles) and compound (interacting) systems, according to Wigner's classification \cite{WignerClassification} of the non-negative ($E \geq 0$) energy irreducible (strongly continuous) unitary representations of the Poincaré group. This classification holds when the wave functions have sharp mass eigenvalues. In this treatment, this is not a strict condition on the investigated wave functions, and an operational approach can be further investigated.
Each such representation of the Poincaré group is identified by a set of numbers defining the eigenvalues of the observables, which have the form $\lambda Id$ (where $\lambda$ is a fixed real number) in the irreducible Hilbert space of the system. The mass operator is then an elementary observable, and thus elementary systems have a trivial mass operator, which can be considered as a given non-quantum parameter.
The picture changes dramatically if one focuses on compound systems: the mass is simply the energy operator evaluated in the rest frame of the system. It generally shows a mixed spectrum made of a continuous part, due to the ``relative'' kinetic energy and, under that part, a point spectrum describing the possible masses of the overall system.
\subsubsection{Complex structure of $\mathcal{H}$ due to mass }
In a recent paper \cite{MassGivingComplexStructure}, it has been argued that the positive valued mass operator $m^2$ is responsible for the complex structure of the Hilbert spaces of wave functions. Reasons for ruling out the real Hilbert space formulation are more physically intuitive than mathematically rigorous, e.g. assuming that any formulation of quantum mechanics should encompass a statement of the Heisenberg principle, which is an important aspect of the current treatment.
Focusing on this issue from another viewpoint, it has been argued that there is a general fundamental reason why elementary quantum systems are not described in real Hilbert spaces, namely, their basic symmetry group. An elementary relativistic system within Wigner's classification (defined as a locally-faithful irreducible strongly-continuous unitary representation of the Poincaré group in a real Hilbert space) admits a natural, Poincaré invariant and unique up to sign, complex structure which commutes with the whole algebra of observables generated by the representation itself, if the squared-mass operator is non-negative. This complex structure leads to a physically equivalent reformulation of the theory in a complex Hilbert space, and moreover reveals a nice interplay between Poincaré symmetry and the classification of the commutant of irreducible real von Neumann algebras.
\subsection{Wave functions}
\subsubsection{Physical wave functions (on mass-shell)}
Obeying the quantized version of the mass-shell condition, the physical wave functions $\ket{\psi(x^\mu)}_m $ are solutions of the Klein-Gordon equation, given by plane waves:
\begin{equation}
\psi(x^\mu, 0) = \psi_0 \exp{\left( i k_\mu x^\mu \right)}
\end{equation}
for some constant wave number $k_\mu = (\omega_0, \vec{k}) \in \mathbb{R}^4$ and normalization factor $\psi_0$. Their parameter time evolution is easily found by viewing the latter part of eqn. \ref{parameternergyeigenvalueeqn} as an eigenvalue problem with parameter energy $E_\lambda = -e m^2$. Note that only the worldline metric $e$ and the particle's mass $m$ define the parameter energy $E_\lambda$, both quantities locally defined for the particle. In this case, the parameter time evolution takes the simple form of a complex phase:
\begin{equation}
\psi (x^\mu, \lambda) = \psi(x^\mu, 0) \cdot S(\lambda) \qquad \text{ with } \qquad S(\lambda) = \exp{\left( +i \frac{e m^2}{\hbar} \lambda\right)}
\end{equation}
such that the on mass-shell solutions are:
\boxedeq{eq:masshellsolutions}{\psi(x^\mu, \lambda) = \psi_0 \exp{\left( i k_\mu x^\mu \right) }\exp{\left( +i \frac{e m^2}{\hbar} \lambda\right)}}
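The boxed solution can be verified symbolically in a 1+1-dimensional setting with signature $(-,+)$. The short sympy sketch below (an illustration, not part of the original text) checks that the plane wave satisfies both the evolution equation and the parameter-energy eigenvalue relation:

```python
import sympy as sp

t, x, lam, kt, kx = sp.symbols('t x lambda k_t k_x', real=True)
e, hbar = sp.symbols('e hbar', positive=True)

# Mass-shell in signature (-,+): p_mu p^mu = hbar^2 (-kt^2 + kx^2) = -m^2
m2 = hbar**2*(kt**2 - kx**2)

# 1+1-dimensional version of the boxed on mass-shell solution
psi = sp.exp(sp.I*(kt*t + kx*x)) * sp.exp(sp.I*e*m2*lam/hbar)

# H_0 psi = e p_mu p^mu psi = -hbar^2 e Box psi, with Box = -d_t^2 + d_x^2
H0_psi = -hbar**2*e*(-sp.diff(psi, t, 2) + sp.diff(psi, x, 2))
lhs = sp.I*hbar*sp.diff(psi, lam)

assert sp.simplify(lhs - H0_psi) == 0     # i hbar d_lambda psi = H_0 psi
assert sp.simplify(lhs + e*m2*psi) == 0   # eigenvalue E_lambda = -e m^2
```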
\subsubsection{Non-physical wavefunctions (off mass-shell)}
The motivation to look for off mass-shell wave functions is given by such known examples of virtual particles, plasmons, etc.
The physical solutions form an eigenbasis of $\hat{H}_0$ (where the mass $m$ is treated as a continuous parameter), and thus any non-physical state can be written as an expansion in physical states using an arbitrary function $f(m)$, that is, a Fourier transform between parameter energy and parameter time:
\begin{equation}
\psi (x^\mu, \lambda) = \int_{-\infty}^{+\infty} d(em^2) \ \exp{\left( +i \frac{e m^2}{\hbar} \lambda\right)} \psi(x^\mu) f(em^2)
\label{eq:FTmasslambda}
\end{equation}
The above represents a mass distribution (tempered distribution), and hence these states are not necessarily on mass-shell. In this view, a particle with definite mass is represented by a delta peaked function.
\iffalse
%proof that the above is not on shell
\begin{align*}
\hat{H}_0 \ket{\psi} & = \hat{e} \hat{p}_\mu \hat{p}^\mu \ket{\psi} = i\hbar \dfrac{\partial \ket{\psi}}{\partial \lambda} = i \hbar \int_0^m d m' \ \frac{\partial}{\partial \lambda} \left[\exp{\left( +i \frac{e m'^2}{\hbar} \lambda\right)} \psi(x^\mu) f(x^\mu, m', \lambda)\right] \\
& = \int_0^m d m' \ -em'^2 \ \exp{\left( +i \frac{e m'^2}{\hbar} \lambda\right)} \psi(x^\mu) f(x^\mu, m', \lambda) + \int_0^m d m' \ \exp{\left( +i \frac{e m'^2}{\hbar} \lambda\right)} \psi(x^\mu) \frac{\partial}{\partial \lambda} \left(f(x^\mu, m', \lambda)\right) \\
& \neq -e m^2 \ket{\psi} \numberthis{}
\end{align*}
\fi
Choosing a discrete set of delta-peaked functions at $m_n$ with the normalization condition $\sum_n |c_n|^2 = 1$, such that they form a complete set of orthonormal eigenfunctions of $\hat{H}_0$, yields the usual expansion in stationary (here, physical) states:
\begin{equation}
f(m) = \sum_n c_n \delta(m - m_n) \quad \Rightarrow \quad \psi (x^\mu, \lambda) = \sum_n c_n \exp{\left( +i \dfrac{e m_n^2}{\hbar} \lambda\right)} \psi(x^\mu)
\end{equation}
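A small numerical illustration (with hypothetical values, not taken from the text): for two delta-peaked mass components with orthonormal spatial profiles, the overlap of $\psi(\lambda)$ with the initial state oscillates in parameter time at the beat frequency $e(m_1^2 - m_2^2)/\hbar$, analogous to flavour oscillations:

```python
import numpy as np

hbar, e = 1.0, 1.0
m1, m2 = 1.0, 2.0
c1, c2 = np.sqrt(0.7), np.sqrt(0.3)    # normalization: c1^2 + c2^2 = 1

def survival(lam):
    """|<psi(0)|psi(lam)>|^2 for psi(lam) = c1 e^{i e m1^2 lam/hbar} phi_1
    + c2 e^{i e m2^2 lam/hbar} phi_2, with phi_1, phi_2 orthonormal."""
    overlap = (c1**2*np.exp(1j*e*m1**2*lam/hbar)
               + c2**2*np.exp(1j*e*m2**2*lam/hbar))
    return np.abs(overlap)**2

# Closed form: c1^4 + c2^4 + 2 c1^2 c2^2 cos(e (m1^2 - m2^2) lam / hbar)
lam = np.linspace(0.0, 10.0, 200)
closed = (c1**4 + c2**4
          + 2*c1**2*c2**2*np.cos(e*(m1**2 - m2**2)*lam/hbar))
assert np.allclose(survival(lam), closed)
```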
Alternatively, one can Fourier Transform eqn. \ref{eq:FTmasslambda} again such that:
\begin{equation}
\psi (x^\mu, m) = \int_{-\infty}^{+\infty} d\lambda \ \exp{\left( +i \frac{e m^2}{\hbar} \lambda\right)} \psi(x^\mu) g(\lambda)
\label{eq:FTmasslambdaInLambda}
\end{equation}
The above represents a parameter time distribution and this requires thorough physical interpretation, carried out in the context of uncertainty relation in subsection \ref{subseq:TEUR}.
Now we proceed to construct wavepackets from these states, as we want to allow uncertainties in both mass $m$ and parameter time $\lambda$.
\subsubsection{Gaussian wavepacket}
The wave function in 4-position space can be written using the Fourier transform of the 4-momentum space together with the parameter time evolution as:
\begin{equation}
\psi(x) = \int_{\mathbb{R}^4} d^4k \ \phi(k) \exp{\left(i (k_\mu x^\mu - \omega_\lambda (k_\mu) \lambda) \right)}
\label{Fouriermomentumspace4D}
\end{equation}
where the dependency of the parameter angular frequency $\omega_\lambda$ on the wave numbers $k_\mu$ is given implicitly as:
\begin{equation}
E_\lambda = - e m^2 = e p^2 = e \hbar^2 k_\mu k^\mu \stackrel{!}{=} \hbar \omega_\lambda \Rightarrow \omega_\lambda (k_\mu) = e \hbar k_\mu k^\mu
\end{equation}
Expanding $\omega(k_\mu)$ around the center of the wave packet in $k$-space gives:
\begin{equation}
\omega(k_\mu) = \omega_0 + \tikzmark{a}v_g^\mu (k_\mu - k_\mu^0) + \tikzmark{b}\beta_{\mu\nu} (k_\mu - k_\mu^0) (k_\nu - k_\nu^0)
\end{equation}
\begin{tikzpicture}[remember picture,overlay]
\draw[<-]
([shift={(2pt,-2pt)}]pic cs:a) |- ([shift={(-10pt,-20pt)}]pic cs:a)
node[anchor=east] {$\scriptstyle \text{four-vector}$};
\draw[<-]
([shift={(2pt,-2pt)}]pic cs:b) |- ([shift={(14pt,-20pt)}]pic cs:b)
node[anchor=west] {$\scriptstyle \text{tensor}$};
\end{tikzpicture}
The four-vector $v_g^\mu = \left. \partial_\mu \omega(k_\mu) \right|_{k_\mu^0}$ plays the role of the group velocity of the wave packet, and the tensor $\beta_{\mu\nu} = H(k_\mu)$ is the Hessian matrix of second derivatives, which is assumed to be expressible via the (Minkowski) metric as $\beta_{\mu\nu} = \beta \eta_{\mu\nu}$, where $\beta$ is just a scalar.
Starting with a localized Gaussian wave packet in momentum space, $\phi(k) = \exp{\left(-\alpha \eta_{\mu\nu}( k_\mu - k_\mu^0)( k_\nu - k_\nu^0)\right)}$, where one could likewise identify $\alpha_{\mu\nu} = \alpha \eta_{\mu\nu}$ with $\alpha = 1/\sigma(0)$ a positive real scalar setting the squared initial width of the wave packet, one obtains:
\boxedeq{eq:gaussianwavepacket}{
\psi ( x, \lambda) = \sqrt{\frac{\pi}{\alpha + i \beta \lambda}} \exp{\left(i (k_\mu^0 x^\mu - \omega_0 \lambda) \right)} \exp{\left(-\frac{ \alpha \eta_{\mu\nu}( x^\mu - v_g^\mu \lambda) ( x^\nu - v_g^\nu \lambda)}{2( \alpha^2 + \beta^2 \lambda^2)}\right)}
}
At parameter and coordinate times $\lambda = 0 = t$, the above reduces to the well-known Gaussian wave packet in 3D:
\begin{equation}
\psi (x_j, t =0, \lambda = 0) = \sqrt{\frac{\pi}{\alpha}} \exp{\left(i k_j^0 x^j \right)} \exp{\left(-\frac{ x^j x^j}{2 \alpha}\right)}
\end{equation}
\begin{figure}[H]
\centering
\begin{minipage}[H]{0.49\textwidth}
\justify
Equation \ref{eq:gaussianwavepacket} exhibits the classical wave packet spreading, but in parameter time $\lambda$, as given by the width of the wave packet:
\begin{equation}
\sigma(\lambda) = \sqrt{\frac{ \alpha^2 + \beta^2 \lambda^2}{ \alpha}}
\label{eq:spreadingwidth}
\end{equation}
In the usual treatment, where the dimension is spatial, the wave packet naturally spreads because it contains waves of different momenta and hence different velocities. The above implies that such spreading also happens in the time coordinate.
\end{minipage}
\hfill
\begin{minipage}[H]{0.49\textwidth}
\centering
\includegraphics[width=1.0\textwidth]{Pics/position_spread.jpg}
\caption{Spreading of wavepacket}
\label{picpositionspread}
\end{minipage}
\end{figure}
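The width formula of eqn. \ref{eq:spreadingwidth} can be cross-checked numerically by synthesizing the packet directly from its momentum-space representation in one spatial dimension. The sketch below (illustrative parameter values, not from the text) measures the mean and standard deviation of $|\psi|^2$ and compares them with $v_g \lambda$ and $\sigma(\lambda)$:

```python
import numpy as np

alpha, beta, k0 = 1.0, 0.5, 2.0     # initial width, dispersion, carrier
v_g = 2*beta*k0                     # group velocity for omega(k) = beta k^2

k = np.linspace(k0 - 6, k0 + 6, 512)
x = np.linspace(-10.0, 15.0, 2048)
dk = k[1] - k[0]

def packet_stats(lam):
    """Synthesize psi(x,lam) = int dk e^{-alpha (k-k0)^2} e^{i(kx - beta k^2 lam)}
    and return the mean and standard deviation of |psi|^2."""
    amp = np.exp(-alpha*(k - k0)**2)
    psi = np.exp(1j*(np.outer(x, k) - beta*k**2*lam)) @ amp * dk
    rho = np.abs(psi)**2
    rho /= rho.sum()
    mean = (rho*x).sum()
    width = np.sqrt((rho*(x - mean)**2).sum())
    return mean, width

for lam in (0.0, 1.0, 2.0):
    mean, width = packet_stats(lam)
    sigma = np.sqrt((alpha**2 + beta**2*lam**2)/alpha)  # eq. (spreadingwidth)
    assert abs(mean - v_g*lam) < 1e-3    # packet drifts at group velocity
    assert abs(width - sigma) < 1e-3     # packet spreads as sigma(lambda)
```

Here $\sigma(\lambda)$ is identified with the standard deviation of the probability density $|\psi|^2$, which the brute-force synthesis reproduces to grid accuracy.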
\subsubsection{Time Gaussian wavepacket}
Moving to the rest frame of the particle under observation, such that $p_\mu = (p_0, 0, 0, 0)$, the above reduces from the four-position $x^\mu$ to a one-dimensional problem in coordinate time $x^0 = t$. Then, the mass-shell constraint $- p^\mu p_\mu = p_0^2 = m^2$ identifies the wave number with the mass and gives the dispersion relation:
\begin{equation}
k_0 = p_0/\hbar = \pm m /\hbar \qquad \text{ and } \qquad \omega(m) = E_\lambda (m) /\hbar = -em^2/\hbar
\label{dispersionrelation}
\end{equation}
Henceforth, we call the wave number $k$-space the mass $m$-space, as they differ only by a factor of $\hbar$. The same computations give:
\begin{equation}
\psi(x) = \int_{-\infty}^{\infty} dk \ \phi(k_0) \exp{\left(i (k_0 x^0 - \omega(k_0) \lambda) \right)} = \int_{-\infty}^{\infty} \frac{dm}{\hbar} \ \phi(m) \exp{\left(-i m t/\hbar \right)} \exp{\left( i em^2 \lambda/\hbar \right)}
\label{Fouriermomentumspace}
\end{equation}
This is equivalent to eqn. \ref{eq:FTmasslambda} for $k_\mu x^\mu = mt/\hbar$. However, we can now better interpret the physical meaning of these equations.
Repeating the same procedure as above, that is, expanding $\omega(m)$ around the center of the wave packet in $m$-space and choosing a Gaussian distribution in mass space:
\begin{equation}
\omega(m) = \omega_0 + v_g (m - m_0) + \beta (m - m_0)^2 \qquad \text{ and } \qquad \phi(m) = \exp{\left( - \alpha (m - m_0)^2 \right)}
\end{equation}
where $v_g = -2 e m_0/\hbar$ and $\beta = - e/\hbar$, as derived from \ref{dispersionrelation}. Inserting this into \ref{Fouriermomentumspace} gives a wave function that spreads out in parameter time:
\begin{equation}
\psi ( t, \lambda) = \sqrt{\frac{\pi}{\alpha + i e \lambda/\hbar}} \exp{\left(\frac{i}{\hbar} (m_0 t + e m_0^2 \lambda) \right)} \exp{\left(-\frac{ \alpha ( t +2 e m_0\lambda/\hbar )^2}{2( \alpha^2 + e^2 \lambda^2/\hbar^2)}\right)}
\end{equation}
with the probability of finding the particle at coordinate time $t$ given by:
\begin{equation}
P(t,\lambda) = \psi^* ( t, \lambda) \cdot \psi ( t, \lambda) = \frac{ \alpha}{ \alpha^2 + e^2 \lambda^2/\hbar^2} \exp{\left(-\frac{ \alpha ( t +2 e m_0\lambda/\hbar )^2}{ \alpha^2 + e^2 \lambda^2/\hbar^2}\right)}
\end{equation}
The distribution width (from eqn. \ref{eq:spreadingwidth}) is:
\boxedeq{eq:timegaussianwavepacket}{
\sigma(\lambda) = \sqrt{\frac{1}{ \alpha} \left( \alpha^2 + \frac{e^2}{\hbar^2} \lambda^2\right)}
}
This implies that the probability to find a particle at time $t$ decreases with parameter time $\lambda$. The relationship between these two times is further investigated later on. However, it is worthwhile to mention here that the spreading of the wave packet in parameter time is given solely by the einbein $e$. This result further confirms that the einbein represents a parametrization of the world line of the particle.
In the computations of eqn. \ref{Fouriermomentumspace}, the Fourier Transform was done over the whole real axis, including negative masses. These interesting cases of negative energies and/or masses (that is, $p_0 <0$) would correspond to particles moving backwards in time. Such examples are tachyonic particles with imaginary or even negative mass.
\iffalse
Alternatively, one could work in position space with a localized Gaussian wavepacket and carry out the same procedure, which yields:
\begin{equation}
\psi ( m, \lambda) = \sqrt{\frac{\pi}{\alpha + i \beta \lambda}} \exp{\left(\frac{i}{\hbar} (x_0 m + \omega_0 \lambda) \right)} \exp{\left(-\frac{ \alpha ( m + v_g \lambda)^2}{2( \alpha^2 + \beta^2 \lambda^2)}\right)}
\end{equation}
The localized position wavepacket requires constant velocities, implying that the spread of the wavefunctions in momentum space can be seen as particles acquiring or losing mass at exactly the same rate as the wavepacket spreading.
\fi
\subsection{Current Density}
\label{subseq:currentdensity}
Compared to the standard Klein-Gordon equation, which has a conserved current $\partial_\mu j^\mu = 0$, this treatment implies a continuity equation:
\boxedeq{eq:continuityeq}{
\dot{\rho} + \partial_\mu j^\mu = 0
}
where:
\begin{equation}
\rho = \psi^* \psi \qquad \text{ and } \qquad j^\mu = - i e \hbar \left( \psi^* \partial^\mu \psi - \psi \partial^\mu \psi^*\right)
\end{equation}
\begin{proof}
\begin{align*}
\dot{\rho} & = \frac{\partial\left( \psi^* \psi\right) }{\partial \lambda} = \psi^* \frac{\partial\psi}{\partial \lambda} + \psi \frac{\partial\psi^*}{\partial \lambda} = \frac{1}{i\hbar}\left(\psi^* \hat{H}_0 \psi - \psi \hat{H}_0 \psi^* \right) = \frac{-e\hbar^2}{i\hbar} \left( \psi^* \partial_\mu \partial^\mu \psi - \psi \partial_\mu \partial^\mu \psi^* \right) = \\
& = i e \hbar \, \partial_\mu \left( \psi^* \partial^\mu \psi - \psi \partial^\mu \psi^* \right) = - \partial_\mu j^\mu
\end{align*}
\end{proof}
Therefore, via the Born rule, one can freely interpret $\rho = \psi^* \psi = |\psi|^2$ as the probability density of the wave functions.
From the differential form of the continuity equation, its representation in integral form is obtained by means of Gauss’s integral theorem for an arbitrary fixed volume $V$ with surface $\partial V = S$ and outward unit normal $n_\mu$:
\begin{eqnarray}
\frac{\partial}{\partial \lambda} \int_V d^4 x \, \rho (x^\mu, \lambda) = - \int_V d^4 x \, \partial_\mu j^\mu (x^\mu, \lambda) = - \int_{\partial V} d^3 x \ n_\mu j^\mu (x^\mu, \lambda)
\label{constantunity}
\end{eqnarray}
Assuming normalizable wave functions (that is, decaying faster than $1/|x^\mu x_\mu|$ at infinity in order for the integral over the probability density to be finite), the integrand of the last part of equation \ref{constantunity} tends to 0 as the volume $V$ tends to infinity, implying that the normalization to unity does not change over time:
\begin{equation}
\frac{\partial}{\partial \lambda} \int d^4 x \rho (x^\mu, \lambda) = \frac{\partial}{\partial \lambda} \int d^4 x |\psi (x^\mu, \lambda)|^2 = 0
\end{equation}
After normalization, the fact that $\int \rho = 1$ implies that the particle exists in space-time.
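The continuity equation can also be verified symbolically. The sketch below (a 1+1-dimensional check with signature $(-,+)$, using the current written with the factor $-ie\hbar$ so that the factors of $i\hbar$ from the evolution equation cancel explicitly) confirms $\dot{\rho} + \partial_\mu j^\mu = 0$ for a superposition of plane-wave solutions:

```python
import sympy as sp

t, x, lam = sp.symbols('t x lambda', real=True)
e, hbar = sp.symbols('e hbar', positive=True)
k1t, k1x, k2t, k2x = sp.symbols('k1t k1x k2t k2x', real=True)

def plane_wave(kt, kx):
    # solves i hbar d_lam psi = -hbar^2 e (-d_t^2 + d_x^2) psi
    return sp.exp(sp.I*(kt*t + kx*x) + sp.I*e*hbar*(kt**2 - kx**2)*lam)

psi = plane_wave(k1t, k1x) + plane_wave(k2t, k2x)/2
psic = sp.conjugate(psi)

rho = psic*psi

def j_up(coord):
    # raised-index current component; signature (-,+) gives d^t = -d_t
    sign = -1 if coord == t else 1
    return -sp.I*e*hbar*sign*(psic*sp.diff(psi, coord)
                              - psi*sp.diff(psic, coord))

continuity = sp.diff(rho, lam) + sp.diff(j_up(t), t) + sp.diff(j_up(x), x)
assert sp.simplify(sp.expand(continuity)) == 0
```

A single plane wave has constant $\rho$, so the superposition is needed to exercise the cross terms of the identity.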
\iffalse
\begin{equation}
\partial_\mu\partial^\mu = (\partial_t)^2 - \nabla^2 \qquad \stackrel{t \to it}{\rightarrow} \qquad \partial_\mu\partial^\mu = - (\partial_t)^2 - \nabla^2
\end{equation}
For the quantum harmonic oscillator (QHO), this would correspond to:
\begin{equation}
\hat{H}_0 \ket{\psi}_{stat} = i\hbar \dfrac{\partial \ket{\psi}_{stat} }{\partial \lambda} = \hbar \omega \left( n + \frac{1}{2} \right) \ket{\psi}_{stat} \iff \left[ \hat{H}_0 - \hbar \omega \left( n + \frac{1}{2} \right) \right] \ket{\psi}_{stat} = 0
\end{equation}
\fi
\iffalse
\subsubsection{Expectation Values}
In general, quantum states $ \sigma$ are described by positive normalized linear functionals on the set of observables, mathematically rigours taken to be a $C^*$ algebra. The expectation value of an observable $A$ is then given by:
\begin{equation}
\langle A \rangle_\sigma = \sigma(A)
\end{equation}
If the algebra of observables acts irreducibly on a Hilbert space $\mathcal{H}$, and if $ \sigma$ is a normal functional (continuous in the ultraweak topology), then it can be written using a positive trace-class operator called the density matrix $\rho$ with unity trace $Tr(\rho) =1$.
\begin{equation}
\sigma (\cdot) = \mathrm{Tr} (\rho \; \cdot)
\end{equation}
Pure quantum states correspond to unit vectors in a Hilbert space, which can be also seen as projections $\rho= |\psi\rangle\langle\psi|$ such that $\sigma = \langle \psi |\cdot \; \psi\rangle$.
Each observable quantity (such as the energy or momentum of a particle) is associated with a mathematical operator, assumed to be a self-adjoint operator. In the general case, its spectrum will neither be entirely discrete nor entirely continuous. Still, one can write the observable $A $ in a spectral decomposition:
\begin{equation}
A = \int_{\sigma(A)} a \, \mathrm{d}P(a)
\end{equation}
with a projector-valued measure $P$. When the self-adjoint operator in question is compact, this version of the spectral theorem reduces a finite or countably infinite linear combination of projections.
For the expectation value of $A$ in a pure state $\sigma=\langle\psi | \cdot \, \psi \rangle$, this means
\begin{equation}
\langle A \rangle_\sigma = \int a \; \mathrm{d} \langle \psi | P(a) \psi\rangle
\end{equation}
\fi
\subsection{Time evolution}
Employing the standard representation of the momentum operator in position space $\hat{p}^\mu = -i\hbar \partial_\mu$, the parameter time evolution is given by:
\begin{equation}
\hat{H}_0 \ket{\psi} = e \hat{p}_\mu \hat{p}^\mu \ket{\psi} = - \hbar^2 e \partial_\mu \partial^\mu \ket{\psi}= i\hbar \dfrac{\partial \ket{\psi}}{\partial \lambda}
\label{differentialoperatorproblems}
\end{equation}
\subsubsection{Differential operators}
The left hand side (LHS) of equation \ref{differentialoperatorproblems} is quadratic in the coordinate derivatives, whereas the right hand side is only linear in the parameter time derivative.
For physical states, the sign of the LHS is set by the sign of $e$, which can be taken without loss of generality to be positive, and thus one has a non-negative operator.
The proper time is defined as $d \tau ^2 = -ds^2 = - \eta_{\mu\nu} dX^\mu dX^\nu$, whence one can write:
\begin{equation}
e \partial_\mu \partial^\mu \ket{\psi} = e \frac{\partial^2 \ket{\psi}}{\partial \tau^2} = \frac{1}{i\hbar} \frac{\partial \ket{\psi}}{\partial \lambda} \Rightarrow \lambda = \frac{1}{ie\hbar} \tau^2 + \mathcal{C}
\label{res:diffoperatorrelation}
\end{equation}
whence one can define, without loss of generality, a starting point of the parameter time as $\lambda_0 = 0$.
\subsubsection{Wick rotation}
Performing a Wick rotation $t \to it_w$ changes the Minkowski metric to the four-dimensional Euclidean one:
\begin{equation}
ds^{2}=-(dt^{2})+dx^{2}+dy^{2}+dz^{2} \qquad \xrightarrow{t \to it_w} \qquad ds^{2}=d t_w ^{2}+dx^{2}+dy^{2}+dz^{2}
\end{equation}
and the wave operator becomes:
\begin{equation}
\partial_\mu \partial^\mu = (\partial_t)^2 - \nabla^2 \qquad \xrightarrow{t \to it_w} \qquad \partial_\mu\partial^\mu = - (\partial_{t_w})^2 - \nabla^2
\end{equation}
Then, working in the four-dimensional Euclidean space, one finds the modified Hamiltonian to be self-adjoint, with spectrum $\sigma(H) = [0, \infty)$. Given that this operator is equivalent to the (linear) parameter time evolution operator, the only allowed values for $\lambda$ have to coincide with $[0, \infty)$. This again implies that for any given particle, its parameter time has a well-defined starting point $\lambda_0 = 0$, as for the Schr{\"o}dinger equation.
\subsection{Uncertainty Relations}
Time and its conjugate momentum (energy) are thus promoted to operators, similarly to position and spatial momenta. Since the inner product we use from eqn. \ref{def:innerproduct} is positive definite, the Cauchy-Schwarz inequality holds. Therefore, the Robertson-Schr{\"o}dinger uncertainty relations are obeyed:
\begin{equation}
(\Delta{A})^{2}(\Delta{B})^{2}\geq \left|{\frac {1}{2i}}\langle [{\hat {A}},{\hat {B}}]\rangle \right|^{2} + \left|{\frac {1}{2}}\langle \{{\hat {A}},{\hat {B}}\}\rangle -\langle {\hat {A}}\rangle \langle {\hat {B}}\rangle \right|^{2}
\end{equation}
where $\Delta A$ denotes the uncertainty of the operator $A$ and $\{A, B\} = AB + BA$ is the anti-commutator.
It is important to note that in the general form of the Robertson–Schr{\"o}dinger uncertainty relation, the operators are not necessarily self-adjoint operators, but it suffices to assume that they are merely symmetric operators \cite{HallUR}. This may help solving the issue that one cannot find a self-adjoint time operator conjugated to the usual Hamiltonian.
The uncertainty relations follow axiomatically:
\begin{equation}
\Delta X^{\mu} \Delta p_{\nu} \geq \frac{1}{2} \left| \bra{\psi} \left[ X^\mu, p_\nu \right] \ket{\psi} \right| = \frac{\hbar}{2} \delta_\nu^\mu
\end{equation}
with the 0th coordinate giving:
\boxedeq{THERESULT}{
\Delta X^{0} \Delta p_{0} = \Delta t \Delta E \geq \frac{\hbar}{2}
}
The present loose interpretation of the time-energy uncertainty relation then involves the parameter time and the energy, that is:
\boxedeq{res:weakTEUR}{
\Delta E_\lambda \Delta \lambda = \Delta (em^2) \Delta \lambda \approx \hbar
}
The physical interpretation of \ref{res:weakTEUR} provides answers to, and confirmations of, several issues.
Particles whose mass is known precisely (i.e. $\Delta m = 0$) have infinite uncertainty in parameter time, meaning that they can, and in fact must, exist for all times. Moreover, massless particles ($m=0$) have no parameter-time evolution, as $S(m=0) = Id$, so the time parameter cannot be defined. This is interpreted as such particles having no proper time, e.g. photons.
When even the smallest uncertainty exists in the mass (i.e. $\Delta m \neq 0$), the uncertainty in the evolution parameter becomes finite and the particle is not bound to exist for all times.
One physical requirement is that $\lambda$ be a monotonically increasing function of the proper time $\tau$. Assuming a starting point $\lambda_0= 0$, if the particles do not exist for all parameter times $\lambda \in I_\lambda \subset \mathbb{R}^+$, then one deals with particle creation and annihilation in ordinary quantum mechanics.
A disconnected distribution of parameter time such that $I_\lambda = \cdot \hspace{-7pt}\bigcup_{k=1}^n I_k $ with $I_k$ disjoint implies a particle popping in and out of existence (in the temporal coordinate) and could moreover imply tunneling (in the spatial coordinate).
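As a quick numerical illustration (not part of the covariant construction; the grid and the width $\sigma$ are arbitrary choices, in units $\hbar = 1$), a Gaussian wavepacket saturates the position-momentum bound $\Delta x \, \Delta p \geq \hbar/2$ underlying the relations above:

```python
import numpy as np

# Gaussian wavepacket on a grid (hbar = 1); sigma is an arbitrary choice
hbar = 1.0
sigma = 1.3
x = np.linspace(-20.0, 20.0, 4001)
dx = x[1] - x[0]
psi = np.exp(-x**2 / (4 * sigma**2))
psi /= np.sqrt((np.abs(psi)**2).sum() * dx)  # normalize on the grid

# position uncertainty from the first and second moments
prob_x = np.abs(psi)**2 * dx
mean_x = (x * prob_x).sum()
delta_x = np.sqrt(((x - mean_x)**2 * prob_x).sum())

# momentum-space distribution via FFT
p = 2 * np.pi * hbar * np.fft.fftfreq(x.size, d=dx)
prob_p = np.abs(np.fft.fft(psi))**2
prob_p /= prob_p.sum()
mean_p = (p * prob_p).sum()
delta_p = np.sqrt(((p - mean_p)**2 * prob_p).sum())

product = delta_x * delta_p  # a Gaussian saturates the bound: hbar/2 = 0.5
```

For any non-Gaussian state on the same grid the product comes out strictly larger than $\hbar/2$, in line with the Robertson-Schr{\"o}dinger inequality.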
\subsection{Gauge Fixing}
\label{sec:gaugefixing}
Several gauge choices are used throughout the literature, depending on the problem. Here we recall a selection:
\begin{enumerate}
\item temporal: $t(\tau) = \alpha \tau$
\item spatial: $z(\tau) = \alpha \tau$
\item light-cone: $x^+(\tau) = t(\tau) + z(\tau) = \alpha \tau$
\item proper time: $ds = d\tau$
\item constant einbein: $\dot{e} = 0$
\end{enumerate}
In our treatment, the last one implies a strict parametrization, which would also fix $\lambda = f(\tau)$. Eqn. \ref{res:diffoperatorrelation} seems to imply a particular gauge fixing $\lambda = \alpha \tau^2$.
\newpage
\iffalse
\section{Deformation Quantization}
An easy introduction to the mathematics behind Deformation Quantization is Ref. \cite{DefQuantPhys}, with the relevant elements presented here. Although not explicitly written out, the notation of four-vectors is used in this review. The complete formulation is based on the Wigner Function (WF), which is a quasi-probability distribution function in phase-space. The advantage of this formalism is that it allows parameter time evolution and uncertainty relations. This formalism is applied later on for the relativistic free particle.
It has been shown in \cite{TillmanSparling:KGinDQ} that the Klein-Gordon equation in an arbitrary space-time can be formulated using DQ, as:
\begin{equation}
H \star f = f \star H = m^2 f
\end{equation}
where the Hamiltonian is $H = p_\mu \star p^\mu + \xi R(x)$. In our treatment in Minkowski space, the Ricci scalar vanishes, i.e. $R(x) = 0$.
\subsection{Preliminaries}
\subsubsection{Weyl Symbol}
The Weyl symbol gives a one-to-one map between quantum operators and ordinary functions on phase space. For Hermitian operators this map is real, and for an arbitrary operator $\hat{\Omega}(x,p)$ the Weyl symbol $\Omega_W(x,p)$ is formally defined as:
\begin{equation}
\Omega_W(x,p) = \int dy \bra{x - \frac{y}{2}}\hat{\Omega}(x,p) \ket{x + \frac{y}{2}}\exp\left[\frac{i}{\hbar}p\cdot y\right]
\end{equation}
If the operator $\hat{\Omega}(x,p)$ is written in the symmetrized form then the Weyl symbol $\Omega_W(x,p)$ is obtained by simple substitution $\hat{x}\rightarrow x$ and $\hat{p}\rightarrow p$. In particular, this is true for all operators of the form $\hat{\Omega}(x,p) = \hat{A}(\hat{x}) + \hat{B}(\hat{p})$.
\subsubsection{Wigner Function}
The Wigner Function (WF) is a quasi-probability distribution function on phase space, formally defined as the Weyl symbol of the density matrix $\hat{\rho}$:
\begin{equation}
f_W(x,p) = \int dy\bra{x-\frac{y}{2}}\rho\ket{x+\frac{y}{2}}\exp\left[\frac{i}{\hbar}p \cdot y\right]
\end{equation}
This reduces for a pure state $\rho = \ket{\phi}\bra{\phi}$ to:
\begin{equation}
f(x,p) = \frac{1}{2\pi}\int dy \phi^*\left(x-\frac{y}{2}\right) \exp\left[\frac{i}{\hbar}p \cdot y \right]\phi \left(x+\frac{y}{2}\right)
\end{equation}
\subsubsection{Moyal Bracket}
The Weyl correspondent of the quantum commutator, the Moyal bracket is the essentially unique one-parameter ($\hbar$) associative deformation of the Poisson bracket of classical mechanics. It is defined using the $\star$-product:
\begin{equation}
\star \equiv \exp \left[\frac{i\hbar}{2}\Lambda \right] = \exp \left[\frac{i\hbar}{2}\left(\parl_x\parr_p - \parl_p\parr_x\right) \right]
\end{equation}
where $\Lambda = \parl_x \parr_p - \parl_p\parr_x = \Sigma_j \parl_{x_j}\parr_{p_j} - \parl_{p_j}\parr_{x_j} $ is called the symplectic operator.\\
Expansion in $\hbar$ around 0 reveals the Poisson bracket corrected by higher-order terms:
\begin{equation}
\star = \exp \left[\frac{i\hbar}{2}\Lambda \right] = \sum_{k=0}^{\infty} \frac{1}{k!} \left(\frac{i\hbar}{2}\Lambda \right)^k = 1 + \frac{i\hbar}{2}\Lambda - \frac{\hbar^2}{8}\Lambda^2 + \dots
\end{equation}
In particular, the Moyal bracket becomes:
\begin{equation}
\{f, g\}_\star \stackrel{def}{=}\frac{1}{i \hbar} \left( f \star g - g \star f \right) = \{f,g\} + \mathcal{O}(\hbar^2)
\end{equation}
Since the $\star$-product involves exponentials of derivative operators, it may be evaluated in practice through translation of function arguments (so-called ``Bopp shifts''):
\begin{equation}
f(x,p)\star g(x,p) = f(x + \frac{i\hbar}{2}\parr_p,p - \frac{i\hbar}{2}\parr_x)g(x,p)
\end{equation}
\subsubsection{Expectation values}
Given an operator in the form of:
\begin{equation}
\mathcal{O} (\mathcal{R}, \mathcal{P}) = \frac{1}{(2\pi)^2} \int d \tau d\sigma dx dp g(x,p) \exp (i\tau(\mathcal{P} - p) + i \sigma (\mathcal{R} - x))
\end{equation}
the corresponding classical kernel of the operator is obtained by: $\mathcal{P} \rightarrow p$ and $\mathcal{R}\rightarrow x$. Then, its expectation value is the "phase-space average":
\begin{equation}
\langle \mathcal{O} \rangle = \int dx \, dp \, f(x,p) \, g(x,p)
\end{equation}
\subsubsection{Parameter time evolution}
The dynamical evolution is specified by Moyal's equation, the extension of Liouville's theorem of classical mechanics. In DQ language:
\begin{equation}
\frac{\partial f}{\partial \tau} = \frac{H \star f - f \star H}{i \hbar}
\label{parametertimeevolution}
\end{equation}
\subsubsection{Static WF}
A powerful $\star$-eigenvalue equation is obeyed by static WFs:
\begin{equation}
H(x,p)\star f(x,p) = H(x + \frac{i\hbar}{2}\parr_p,p - \frac{i\hbar}{2}\parr_x)f(x,p) = f(x,p) \star H(x,p) = Ef(x,p)
\end{equation}
\newpage
\subsection{The free relativistic particle}
First, the static WFs are obtained by using the $\star$-eigenvalue equation for the Hamiltonian of equation \ref{simpleHamiltonian}:
\begin{equation}
H(x^\alpha,p_\alpha) = \frac{1}{2} e (p_\mu p _\nu g^{\mu\nu} + m^2)
\end{equation}
which becomes:
\begin{equation}
H(x^\alpha,p_\alpha) \star f(x^\alpha,p_\alpha) = H \left(x^\alpha + \frac{i\hbar}{2} \parr_{p_\alpha}, p_\alpha - \frac{i\hbar}{2} \parr_{x^\alpha} \right) f \left(x^\alpha,p_\alpha \right) = E f\left(x^\alpha,p_\alpha \right)
\end{equation}
\begin{equation}
\frac{1}{2}e \left[ \left( p_\mu - \frac{i\hbar}{2} \parr_{x^\mu} \right) \left( p_\nu - \frac{i\hbar}{2} \parr_{x^\nu} \right) \eta^{\mu\nu} + m^2 \right] f \left(x^\alpha,p_\alpha \right) = E f\left(x^\alpha,p_\alpha \right)
\end{equation}
\begin{equation}
\frac{1}{2}e \left[ p_\mu p_\nu \eta^{\mu\nu} - \frac{i\hbar}{2} \left( \parr_{x^\mu} p_\nu \eta^{\mu\nu} + p_\mu \parr_{x^\nu}\eta^{\mu\nu}\right) - \frac{\hbar^2}{4}\parr_{x^\mu}\parr_{x^\nu} \eta^{\mu\nu} + m^2 \right] f \left(x^\alpha,p_\alpha \right) = E f\left(x^\alpha,p_\alpha \right)
\label{THEequationFull}
\end{equation}
Imposing the mass-shell condition $p_\mu p^\mu + m^2 = 0$ and reordering yields:
\begin{equation}
\left[ \eta^{\mu\nu}\parr_{x^\mu}\parr_{x^\nu} + \frac{4i}{\hbar} p_\nu \eta^{\mu\nu} \parr_{x^\mu} + \frac{8E}{\hbar^2 e} \right] f \left(x^\alpha,p_\alpha \right) = 0
\label{THEequation}
\end{equation}
Solving the above involves the term:
\begin{equation}
\sqrt{\left( \frac{4i}{\hbar}\right)^2 p^2 - \left( \frac{4}{\hbar} \right)^2 \frac{2E}{e}} \pm \frac{4i}{\hbar}p = \frac{4ip}{\hbar} \left( \sqrt{ 1 - \frac{2E}{m^2 e}} \pm 1\right) = 2 \frac{i}{\hbar} p \alpha_\mp
\end{equation}
where $\alpha_\mp = 2 \left( \sqrt{1 - \frac{2E}{m^2 e}} \pm 1\right)$. The 1D solution (only in $x$ and $p$) of \ref{THEequation} is then:
\begin{equation}
f(x,p) = k_1 e^{-i\frac{xp}{\hbar}\alpha_-} + k_2 e^{i\frac{xp}{\hbar}\alpha_+}
\end{equation}
To obtain a real solution, one multiplies $f$ by its complex conjugate $\bar{f}$:
\begin{align}
f \cdot \bar{f} = & \left( k_1 e^{-i\frac{xp}{\hbar}\alpha_-} + k_2 e^{i\frac{xp}{\hbar}\alpha_+}\right) \cdot \left( \bar{k}_1 e^{i\frac{xp}{\hbar}\alpha_-} + \bar{k}_2 e^{-i\frac{xp}{\hbar}\alpha_+}\right) =\\
= & |k_1|^2 + |k_2|^2 + k_1 \bar{k}_2 e^{-i\frac{xp}{\hbar}(\alpha_- + \alpha_+)} + k_2 \bar{k}_1 e^{i\frac{xp}{\hbar}(\alpha_- + \alpha_+)} =\\
= & 2 k^2 \left( 1 + \cos{\left(\frac{xp}{\hbar} \alpha\right)} \right)
\end{align}
where we have assumed real $k_1 = k_2 = k$ and introduced $\alpha = \alpha_- + \alpha_+ = 4 \sqrt{1 - \frac{2E}{m^2 e}}$.
\iffalse
Assuming real energy eigenvalues and Wigner Functions, this equation can be split into its real and imaginary parts. The real part is:
\begin{equation}
\frac{1}{2}e \left[ p_\mu p_\nu \eta^{\mu\nu} - \frac{\hbar^2}{4}\parr_{x^\mu}\parr_{x^\nu} \eta^{\mu\nu} + m^2 \right] f \left(x^\alpha,p_\alpha \right) = E f\left(x^\alpha,p_\alpha \right)
\end{equation}
Imposing now the mass-shell constraint $p_\mu p_\nu \eta^{\mu\nu} - m^2 = 0$, one obtains:
\begin{equation}
\parr_{x^\mu}\parr_{x^\nu} \eta^{\mu\nu} = \frac{4}{\hbar^2} \left( \frac{2E}{e} - m^2 \right)
\end{equation}
Expanding the above and recovering the speed of light from natural units yields a "generalized" Klein-Gordon equation:
\begin{equation}
\frac{1}{c^2}\frac{\partial^2}{\partial t^2} f - \nabla ^2 f = \frac{4 c^2}{\hbar^2} \left( \frac{2E}{e} - m^2 \right) f
\label{realpartofEQ}
\end{equation}
Making the solution Ansatz with an arbitrary momentum function $\phi(p_\alpha)$:
\begin{equation}
f\left(x^\alpha,p_\alpha \right) = \exp (i x^\mu k_\mu (p_\alpha) ) \phi(p_\alpha)
\end{equation}
and restricting only to the real part of the WF yields:
\begin{equation}
f\left(x^\alpha,p_\alpha \right) = \cos (x^\mu k_\mu ) \phi(p_\alpha)
\end{equation}
Plugging this back into \ref{realpartofEQ} gives a modified mass-shell constraint:
\begin{equation}
k_\mu k_\nu \eta^{\mu\nu} f = - \frac{4 c^2}{\hbar} \left( \frac{2E}{e} - m^2 \right) f = -\varepsilon^2 f
\label{generalmasshell}
\end{equation}
The imaginary part of Eq. \ref{THEequation} further constrains the solution to more explicit values:
\begin{equation}
\parr_{x^\mu} p_\nu \eta^{\mu\nu} f = 0 \Rightarrow k_\mu p_\nu \eta^{\mu\nu} = 0 \Rightarrow k_0 p_0 - k_i p^i = 0
\end{equation}
where $k_i p^i$ denotes summation over the spatial coordinates. Using Eq. \ref{generalmasshell} as well, one finds the explicit values of $k_\alpha$. In 2D, they simplify to:
\begin{equation}
k_0 = \frac{p_i \varepsilon}{m} \qquad \text{ and } \qquad k_i = \frac{p_0 \varepsilon}{m}
\end{equation}
\begin{futwork}{Deformation quantization}{defquant}
The next step in this example is to find the time-energy uncertainty relations using the expectation values of the Deformation Quantization formalism \cite{DefQuantPhys}. In particular, Def. Quant. allows parameter time evolution via \ref{parametertimeevolution}. Since this was for the constant einbein and metric, its generalization could be also carried out. Because explicit wavefunctions are required, some particular metrics could be chosen.
\end{futwork}
\fi
\fi
\newpage
\section{Discussion}
\label{sec:discussion}
The initial aim of this paper was to carry out the quantization of a relativistic point particle in order to derive a covariant interpretation of the Time-Energy Uncertainty Relation. In doing so, several side issues arose that invited closer investigation, although not all of them are presented here. Below is a summary of the results and of future work.
\subsection{Results}
\begin{enumerate}
\item
Time Energy Uncertainty Relations
\end{enumerate}
\subsection{Future Work}
\begin{enumerate}
\item
The relation between Dirac constraints as First and Second Class with Holonomic and Non-Holonomic constraints.
\item
Relation between $\mathcal{D}_c$ and the reduced (constrained) phase space
\item
Quantized einbein and mass operator
\item
View the action integral over $\lambda$ as a Riemann-Stieltjes integral
\end{enumerate}
\newpage
\begin{appendices}
\section{Conventions and notation}
The metric signature used is $(-,+,+,+)$, such that the mass-shell condition covariantly reads:
\begin{equation}
p_\mu p^\mu + m^2 = 0
\end{equation}
An overdot over any function (variable) $X$ shall be understood as time-parameter differentiation, i.e.:
\begin{equation}
\dot{X} = \dfrac{d X}{d \tau}
\end{equation}
The notation for coordinates is either $Q$ or $x$; although both appear, each is used consistently within its own section.
\section{The need of an einbein}\label{einbeinexplained}
The action for the relativistic point particle:
\begin{equation}
S = - m \int ds = - m \int d\tau \sqrt{-\dot{X}^\mu\dot{X}^\nu g_{\mu\nu}}
\label{basicaction}
\end{equation}
poses several issues:
\begin{enumerate}
\item it does not apply to massless particles as it becomes trivial,
\item it is not manifestly Lorentz invariant,
\item the formula implies a square root which makes further treatment problematic.
\end{enumerate}
However, it is known experimentally that massless particles travel at the speed of light along null geodesics (with $ds = 0$). Thus, one seeks a non-trivial action, i.e. one with a non-vanishing integrand. Rewriting equation \ref{basicaction} using $ds^2 = - g_{\mu\nu} dx^{\mu} dx^{\nu}$ yields:
\begin{equation}
S = - m \int ds = - m \int d\tau \sqrt{-g_{\mu\nu} \dot{x}^{\mu} \dot{x}^{\nu}} = - m^2 \int d\tau \dfrac{\sqrt{-\dot{x}^2}}{m}
\label{beforeeinbeinaction}
\end{equation}
One now introduces an extra degree of freedom on the particle worldline, the ``einbein'' $e(\tau)$, and postulates the Polyakov action:
\begin{equation}
S = \int d\tau \left( a e^{-1} \dot{x}^2 + b e m^2\right)
\label{aftereinbeinaction}
\end{equation}
This has to yield the same equations of motion as \ref{beforeeinbeinaction}, whence one obtains $a = - b = 1/2$.
The equation of motion obtained by varying $e$ fixes the einbein to:
\begin{equation}
e \equiv \sqrt{-\dot{X}^\mu\dot{X}^\nu g_{\mu\nu}}/m
\label{einbeindef}
\end{equation}
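Explicitly, since $\dot{e}$ does not appear in \ref{aftereinbeinaction}, the variation with respect to $e$ is purely algebraic:

```latex
\begin{equation}
\frac{\partial}{\partial e}\left(\frac{1}{2} e^{-1} \dot{x}^2 - \frac{1}{2} e m^2\right)
= -\frac{1}{2} e^{-2} \dot{x}^2 - \frac{1}{2} m^2 = 0
\quad\Longrightarrow\quad
e^2 = -\frac{\dot{x}^2}{m^2}
\end{equation}
```

whose positive root reproduces \ref{einbeindef}.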
Therefore, the einbein is not an independent dynamical degree of freedom: given a trajectory $x^\mu (\tau)$, the einbein is determined. Plugging \ref{einbeindef} back into \ref{aftereinbeinaction} gives back the initial action \ref{beforeeinbeinaction}.
The Polyakov action has a geometric interpretation: introducing a metric on the one-dimensional worldline of the particle, with line element $ds^2 = g_{\tau\tau} d\tau^2$ and $g_{\tau\tau} \equiv e^2$, the action becomes:
\begin{equation}
S = \int d\tau \sqrt{g_{\tau\tau}} \left( \frac{1}{2} g^{\tau\tau} \partial_\tau x^\mu \partial_\tau x^\nu \eta_{\mu\nu} - \frac{1}{2}m^2\right)
\end{equation}
\section{A word on units}
The exponent in $S(\lambda)$ has to be dimensionless. Bringing the speed of light $c$ back on stage and doing a dimensional analysis, one has:
\begin{equation}
\left[ \frac{(mc)^2}{\hbar}\right] = \frac{\mathrm{kg}}{\mathrm{s}} \Rightarrow \left[e\right] = \mathrm{kg}^{-1}
\end{equation}
meaning that the einbein has units of inverse mass, although equation \ref{einbeindef} from Appendix \ref{einbeinexplained} seems to suggest that it has units of speed over mass.
This is one of the reasons why the einbein $e$ should not be cancelled from the mass-shell constraint.
\section{Convergence for formal series of Time Operators}
\label{timeoperatorconvproblems}
Let the time operator for a free relativistic particle be given by the symmetric Aharonov-Bohm operator:
\begin{equation}
\hat{T} = \frac{1}{2}\left( \hat{q}\hat{p}^{-1} + \hat{p}^{-1}\hat{q}\right)
\end{equation}
Here, one runs into a convergence problem if one plainly takes:
\begin{equation}
\frac{1}{p} = \frac{1}{1 - (1 - p)} = \sum_{n=0}^{\infty} (1-p)^n = \sum_{n=0}^{\infty} \sum_{k=0}^n {n \choose k} p^k (-1)^{n-k} = \sum_{k=0}^{\infty} p^k \sum_{n=k}^{\infty} { n \choose k} (-1)^{n-k} = \sum_{k=0}^{\infty} c_k p^k
\end{equation}
Unfortunately, the coefficients $c_k$ diverge $\forall k$.
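As a concrete illustration (a hypothetical numeric check, not part of the argument), the partial sums of the inner series $\sum_{n \geq k} \binom{n}{k} (-1)^{n-k}$ oscillate with growing amplitude rather than settling:

```python
from math import comb

def partial_sums(k, N):
    """Partial sums S_N = sum_{n=k}^{N} C(n, k) * (-1)^(n-k)."""
    s, out = 0, []
    for n in range(k, N + 1):
        s += comb(n, k) * (-1) ** (n - k)
        out.append(s)
    return out

print(partial_sums(1, 8))  # -> [1, -1, 2, -2, 3, -3, 4, -4]
```

For every $k$ the partial sums alternate in sign with unbounded magnitude, so no $c_k$ exists.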
\end{appendices}
\newpage
\bibliographystyle{unsrt}
\bibliography{biblio}
\section{Attempts}
\begin{figure}
\begin{tikzpicture}[scale=1.4]
% O'S AXES
\draw [->,thick] (0,-1) -- (0,5) node [above] {$t$};
\draw [->,thick] (-1,0) -- (6,0) node [right] {$x$};
% OBAR'S axis
\draw [-,thick] (0,0) -- (2,4);
\draw (1.5,3) arc (60:90:3) ;
\draw [->,thick] (2,1.5) -- (0.2,1.6) node [right] {};
\node at (3.3,1.6) {tangent of this angle is $v$};
\end{tikzpicture}
\caption{The time axis of a frame whose velocity is $v$.}
\end{figure}
$\bar{t}$ axis: the locus of events at $\bar{x}=0$ (by definition) $=$ the locus of the origin of $\bar{O}$'s spatial coordinates $=$ $\bar{O}$'s world line (fig. 1.2).
$\bar{x}$ axis: we need to determine the locus of events at $\bar{t} = 0$, i.e. the events measured by $\bar{O}$ to be simultaneous with the event $\bar{t}=\bar{x}=0$.
Consider an event $\varepsilon$ at $\bar{x} = 0$ and $\bar{t} = -a$. A light ray from $\varepsilon$ reaches the $\bar{x}$ axis at $\bar{x}=a$ (photons ``travel'' at $45^\circ$) (event P in fig. 1.3).
\begin{figure}
\begin{tikzpicture}[scale=1.4]
% O'S AXES
\draw [->,thick] (4,0) -- (-4,0) node [left] {$\bar{x}$};
\draw [->,thick] (0,-3) -- (0,3) node [right] {$\bar{t}$};
\filldraw [black] (0,2.5) circle (1.2pt);
\filldraw [black] (0,-2.5) circle (1.2pt);
\filldraw [black] (2.5,0) circle (1.2pt);
\node at (-0.2,2.5) {a};
\node at (2.5,-0.2) {a};
\node at (-0.2,-2.5) {-a};
\node at (0.3,2.5) {R};
\node at (0.3,-2.5) {$\xi$};
\node at (0.25,-2) {$45^\circ$};
\draw [-,dotted,thick] (0,2.5) -- (2.5,0);
\draw [-,dotted,thick] (2.5,0) -- (0,-2.5);
\node at (-3,2) {$\bar{O}$'s Spacetime Diagram};
\end{tikzpicture}
\caption{Light reflected at a, as measured by $\bar{O}$.}
\end{figure}
Suppose the light ray is reflected at P. It returns to $\bar{x}=0$, but at the later time $\bar{t}=a$ (event R). The $\bar{x}$ axis can therefore be defined as the locus of such events (for all values of $a$).
Now consider O's spacetime diagram (fig. 1.4). We already know where the $\bar{t}$ axis falls (fig. 1.2). We also know where the events $\varepsilon$ and R take place (i.e. ``emission'' at $\bar{t}=-a$ and ``reception'' at $\bar{t}=+a$).
\begin{figure}
\begin{tikzpicture}[scale=1.4]
% O'S AXES
\draw [->,thick] (-4,0) -- (4,0) node [left] {x};
\draw [->,thick] (0,-3) -- (0,3) node [right] {t};
\draw [-,dashed,thick] (-3,-2.5) -- (3,2.5) node [right] {$\bar{x}$};
\draw [-,thick] (-1.5,-3) -- (1.5,3) node [left] {$\bar{t}$};
\draw [-,dotted,thick] (1,2) -- (1.7,1.4166667);
\draw [-,dotted,thick] (1.7,1.416667) -- (-1,-2);
\node at (-3,2) {O's Spacetime Diagram};
\node at (0.8,2) {a};
\node at (1.2,2) {R};
\node at (1.8,1.416667) {p};
\node at (-0.8,-2.2) {$\xi$};
\end{tikzpicture}
\caption{The reflection in Fig. 1.3, as measured by O.}
\end{figure}
Now we use the second fundamental postulate of SR (i.e. the universality of $c$). Even though we have changed our coordinate system from $\bar{O}$ to O, the light ray from $\varepsilon$ still travels at $45^\circ$, and the light ray arriving at R also travels at $45^\circ$.
The point of intersection of both rays gives the location of the reflection event P, and hence a point on the $\bar{x}$ axis, as measured in O's frame. The $\bar{x}$ axis is thus constructed as the line from P through the origin.
The above leads to an important conclusion concerning simultaneity. Recall: an inertial observer regards events as simultaneous if they occur at the same time in that coordinate system, at the spatial coordinates of the events. So any events falling on the $x$ axis in fig. 1.2 or 1.4 would be regarded by O as simultaneous. Similarly, any events falling on the $\bar{x}$ axis in fig. 1.3 would be regarded by $\bar{O}$ as simultaneous.
But in fig. 1.4 we see that $\bar{t}$ and $t$ are not parallel, so events which $\bar{O}$ ``thinks'' are simultaneous are not simultaneous as measured by O.
Fig. 1.5(a) represents the situation just described. Fig. 1.5(b) represents the case as viewed by $\bar{O}$, i.e. so that O moves to the left with velocity $-v$.
\end{document}
\documentclass[12pt,a4paper]{article}
\usepackage{algorithm, algpseudocode, amsmath, amssymb, amsthm, csquotes, empheq, geometry, graphicx, hyperref, listings, multirow, siunitx, subcaption, upgreek}
\usepackage[italicdiff]{physics}
\usepackage[section]{placeins}
\usepackage[justification=centering]{caption}
\title{Computational Physics\\Problem Set 7}
\author{Saleh Shamloo Ahmadi\\Student Number: 98100872}
\date{November 8, 2021}
\hypersetup{colorlinks=true, urlcolor=cyan}
\newcommand{\fig}{../fig}
\newcommand{\ddfrac}[2]{{\displaystyle\frac{\displaystyle #1}{\displaystyle #2}}}
\newcommand{\multlinecell}[1]{\begin{tabular}[c]{@{}c@{}}#1\end{tabular}}
\begin{document}
\maketitle
\section{Monte Carlo Integration}
\subsection[pi/2 * erf(2): Uniform Sampling vs. Importance Sampling]
{Uniform Sampling vs. Importance Sampling:\\$\frac{\sqrt{\pi}}{2}\erf(2)=\int_0^2 e^{-x^2} \dd{x}$}
\begin{table}[hbt!]
\centering
\caption{The exact value of the integral $\frac{\sqrt{\pi}}{2}\erf(2)=\int_0^2 e^{-x^2} \dd{x}$
up to the 6th decimal is $0.882081$. The runtimes are from an AMD Ryzen 7 5800H @ 3.2 GHz (up to 4.4 GHz).}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{\multlinecell{Number\\of\\Samples}}
& \multicolumn{2}{|c|}{Calculated Integral} & \multicolumn{2}{|c|}{Runtime} \\
\cline{2-5}
& \multlinecell{Uniform\\Sampling} & \multlinecell{Importance\\Sampling}
& \multlinecell{Uniform\\Sampling} & \multlinecell{Importance\\Sampling} \\
\hline
$10$ & $0.460596$ & $0.93583$ & $\SI{0.891513}{\micro\second}$ & $\SI{3.74894}{\micro\second}$ \\
$100$ & $0.921345$ & $0.903689$ & $\SI{1.6772}{\micro\second}$ & $\SI{21.2343}{\micro\second}$ \\
$1000$ & $0.899169$ & $0.882798$ & $\SI{8.6187}{\micro\second}$ & $\SI{183.041}{\micro\second}$ \\
$10000$ & $0.889229$ & $0.880103$ & $\SI{79.3509}{\micro\second}$ & $\SI{1.82276}{\milli\second}$ \\
$100000$ & $0.880586$ & $0.881593$ & $\SI{844.887}{\micro\second}$ & $\SI{18.7801}{\milli\second}$ \\
$1000000$ & $0.880835$ & $0.882562$ & $\SI{9.04039}{\milli\second}$ & $\SI{178.5040}{\milli\second}$ \\
$10^7$ & $0.882187$ & $0.88204$ & $\SI{93.6652}{\milli\second}$ & $\SI{2.51925}{\second}$ \\
$10^8$ & $0.882014$ & $0.882084$ & $\SI{0.919785}{\second}$ & $\SI{23.3132}{\second}$ \\
\hline
\end{tabular}
\end{table}
\begin{table}[hbt!]
\centering
\caption{Errors of each method. The \enquote{actual} error is the deviation of the calculation from
$\frac{\sqrt{\pi}}{2}\erf(2)$.}
\begin{tabular}{|c|c|c|c|c|}
\hline
\multirow{2}{*}{\multlinecell{Number\\of\\Samples}}
& \multicolumn{2}{|c|}{Calculated Standard Error} & \multicolumn{2}{|c|}{Actual Error} \\
\cline{2-5}
& \multlinecell{Uniform\\Sampling} & \multlinecell{Importance\\Sampling}
& \multlinecell{Uniform\\Sampling} & \multlinecell{Importance\\Sampling} \\
\hline
$10$ & $0.211308$ & $0.094789$ & $0.421485$ & $0.0537491$ \\
$100$ & $0.0699633$ & $0.0242373$ & $0.0392635$ & $0.021608$ \\
$1000$ & $0.0220645$ & $0.00838951$ & $0.0170874$ & $0.000716757$ \\
$10000$ & $0.00685873$ & $0.0026753$ & $0.00714762$ & $0.00197877$ \\
$100000$ & $0.00217824$ & $0.000843176$ & $0.00149538$ & $0.000488134$ \\
$1000000$ & $0.000688905$ & $0.000265692$ & $0.00124646$ & $0.000480197$ \\
$10^7$ & $0.000217995$ & $\num{8.41355e-5}$ & $0.000106063$ & $\num{4.187e-5}$ \\
$10^8$ & $\num{6.89327e-5}$ & $\num{2.66006e-5}$ & $\num{6.72089e-5}$ & $\num{2.30387e-6}$ \\
\hline
\end{tabular}
\end{table}
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{\fig/montecarlo-erf2-value}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{\fig/montecarlo-erf2-error}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{\fig/montecarlo-erf2-time}
\end{figure}
\FloatBarrier
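The two estimators compared above can be sketched in a few lines of Python (a minimal re-implementation, not the timed code used for the tables; the importance density $g(x) \propto e^{-x}$ on $[0,2]$ is an assumption):

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(-x**2)

def uniform_mc(n):
    """Uniform sampling on [0, 2]: I ~ 2 * mean(f(x))."""
    x = rng.uniform(0.0, 2.0, n)
    return 2.0 * f(x).mean()

def importance_mc(n):
    """Importance sampling with g(x) = exp(-x) / (1 - exp(-2)) on [0, 2]."""
    norm = 1.0 - np.exp(-2.0)
    u = rng.uniform(0.0, 1.0, n)
    x = -np.log(1.0 - u * norm)          # inverse-CDF sampling from g
    return (f(x) * norm / np.exp(-x)).mean()

print(uniform_mc(10**6), importance_mc(10**6))  # both approach 0.882081
```

Since $g$ tracks the integrand more closely than the uniform density, the importance-sampling estimate has a smaller variance at equal sample count, matching the error tables above.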
\subsection{Center of Mass of Sphere with Linearly Increasing Density in One Direction}
A sphere with radius $R$ has a density that linearly increases in the $z$ direction, such that the density at $z=2R$
is twice the density at $z=0$. To calculate the position of the center of mass, we must evaluate the integral
\begin{equation}
\vb{r}_{cm} = \frac{\int_{\text{sphere}}\rho(\vb{r})\vb{r}\dd{V}}{\int_{\text{sphere}}\rho(\vb{r})\dd{V}}.
\end{equation}
Since the sphere is symmetric in the $x$ and $y$ directions, the center of mass is at $x=0$ and $y=0$. It remains to
find the $z$ position of the center of mass. In polar coordinates
\begin{gather}
\rho(z=2R) = 2\rho(z=0) \implies \rho(r=R, \theta=0) = 2\rho(r=R, \theta=\pi) \\
\xRightarrow{\text{$\rho(z)$ linear}} \rho(r, \theta) = \rho_0\qty(3+\frac{r}{R}\cos{\theta}) \implies \\
z_{cm} = \ddfrac{\int_{\text{sphere}}z\rho\dd{V}}{\int_{\text{sphere}}\rho\dd{V}} =
\ddfrac{\int_0^R\int_0^\pi\int_0^{2\pi} \rho_0\qty(3+\frac{r}{R}\cos{\theta})r^3\sin{\theta}\cos{\theta}
\dd{\varphi}\dd{\theta}\dd{r}}{\int_0^R\int_0^\pi\int_0^{2\pi} \rho_0\qty(3+\frac{r}{R}\cos{\theta})r^2
\sin{\theta}\dd{\varphi}\dd{\theta}\dd{r}} \\
z_{cm} = \ddfrac{\int_0^R\int_0^\pi \qty(3+\frac{r}{R}\cos{\theta})r^3\sin{\theta}\cos{\theta}
\dd{\theta}\dd{r}}{\int_0^R\int_0^\pi \qty(3+\frac{r}{R}\cos{\theta})r^2\sin{\theta}
\dd{\theta}\dd{r}} \label{eq:multi}
\end{gather}
We can use the Monte Carlo integration method to calculate \eqref{eq:multi} and find the center of mass.
These integrals can also be calculated analytically to check the accuracy of the numerical results:
\begin{align}
z_{cm} &= \ddfrac{R^4 \int_0^\pi \qty(\frac{3}{4}+\frac{\cos{\theta}}{5})\sin{\theta}\cos{\theta}\dd{\theta}}
{R^3 \int_0^\pi \qty(1+\frac{\cos{\theta}}{4})\sin{\theta}\dd{\theta}} \\
&= \ddfrac{\frac{3}{8}\int_0^\pi\sin(2\theta)\dd{\theta} + \frac{1}{5}\int_{-1}^{+1}\cos^2{\theta}
\dd{(\cos{\theta})}} {\int_0^\pi\qty(\sin{\theta} + \frac{\sin(2\theta)}{2})\dd{\theta}}R \\
&= \frac{-\frac{3}{16}\eval{\cos(2\theta)}_0^\pi + \eval{\frac{x^3}{15}}_{-1}^{+1}}
{-\eval{\cos{\theta}}_0^\pi - \eval{\frac{\cos(2\theta)}{4}}_0^\pi}
\end{align}
\begin{empheq}[box=\fbox]{equation}
z_{cm} = \frac{R}{15}
\end{empheq}
Numerically evaluating the two integrals in \eqref{eq:multi} by the Monte Carlo Integration method,
uniformly sampling 40 million points, we get
\begin{equation}
z_{cm} = (0.06667 \pm 0.00009) R,
\end{equation}
which agrees with $R/15$ to all five decimal places shown.
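A minimal sketch of this estimate (uniform rejection sampling in the bounding cube, working in sphere-centered units $R = \rho_0 = 1$ so that $\rho = 3 + z$):

```python
import numpy as np

rng = np.random.default_rng(1)

def z_cm_mc(n):
    """Monte Carlo z_cm for a unit sphere with rho = 3 + z (R = rho_0 = 1)."""
    pts = rng.uniform(-1.0, 1.0, (n, 3))       # points in the bounding cube
    z = pts[(pts**2).sum(axis=1) <= 1.0, 2]    # keep points inside the sphere
    rho = 3.0 + z                              # density in sphere-centered coordinates
    return (rho * z).sum() / rho.sum()

print(z_cm_mc(10**6))  # close to 1/15
```

The constant $\rho_0$ cancels between numerator and denominator, so only the shape $3 + z$ of the density matters.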
\section{Metropolis Algorithm}
\begin{table}[hbt!]
\centering
\caption{For $10^8$ samples}
\begin{tabular}{|c|c|c|}
\hline
Step Size $\Delta$ & Acceptance Rate $a_r$ & Correlation Length $\xi$ \\
\hline
15.8846 & 0.1004 & $7.23 \pm 0.03$ \\
7.969 & 0.19996 & $3.53 \pm 0.03$ \\
5.3048 & 0.300005 & $2.3 \pm 0.03$ \\
3.888 & 0.4003 & $1.84 \pm 0.01$ \\
2.9486 & 0.4991 & $2.06 \pm 0.02$ \\
2.2094 & 0.5992 & $2.72 \pm 0.02$ \\
1.578 & 0.7007 & $4.11 \pm 0.02$ \\
1.0236 & 0.8002 & $7.89 \pm 0.02$ \\
0.5 & 0.9008 & $27.76 \pm 0.01$ \\
\hline
\end{tabular}
\end{table}
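A minimal sketch of the sampler (assuming the target is the standard Gaussian $e^{-x^2/2}$, which is consistent with the step-size/acceptance pairs in the table; the proposal is uniform in $[-\Delta, \Delta]$):

```python
import numpy as np

rng = np.random.default_rng(2)

def metropolis(n, step, log_p=lambda x: -0.5 * x * x):
    """Metropolis chain for the density exp(log_p); returns samples and acceptance rate."""
    x, accepted = 0.0, 0
    samples = np.empty(n)
    for i in range(n):
        prop = x + rng.uniform(-step, step)    # symmetric uniform proposal
        if rng.uniform() < np.exp(min(0.0, log_p(prop) - log_p(x))):
            x, accepted = prop, accepted + 1
        samples[i] = x                         # rejected moves repeat the old state
    return samples, accepted / n

samples, rate = metropolis(10**5, 3.0)  # step near 3 gives roughly 50% acceptance
```

As the table shows, both very small steps (high acceptance, tiny moves) and very large steps (frequent rejection) inflate the correlation length; the minimum sits near 40% acceptance here.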
\begin{figure}[htb!]
\centering
\includegraphics[width=0.65\linewidth]{\fig/metropolis-hist}
\caption{For $10^8$ samples, step size 3 and acceptance rate 0.49}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{\fig/metropolis-acceptance}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{\fig/metropolis-autocor}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{\fig/metropolis-autocor-log}
\end{figure}
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{\fig/metropolis-corlen}
\end{figure}
\end{document}