% arXiv:1409.1685
\section*{Introduction}
The concept of a \emph{face algebra} was introduced by T. Hayashi in \cite{Hay2}, motivated by the theory of solvable lattice models in statistical mechanics. It was further studied in \cite{Hay1,Hay3,Hay4,Hay5,Hay6,Hay7,Hay8}, where for example associated $^*$-structures and a canonical Tannaka duality were developed. This canonical Tannaka duality allows one to construct a canonical face algebra from any (finite) fusion category. For example, a face algebra can be associated to the fusion category of a quantum group at a root of unity, for which no genuine quantum group implementation can be found.
In \cite{Nil1,Sch1,Sch2}, it was shown that face algebras are particular kinds of $\times_R$-algebras \cite{Tak2} and of weak bialgebras \cite{Boh3,BCJ,Nik1}. More intuitively, they can be considered as quantum groupoids with a classical, finite object set. In this article, we want to extend Hayashi's theory by allowing an \emph{infinite} (but still discrete) object set. This requires passing from weak bialgebras to weak \emph{multiplier} bialgebras \cite{Boh1}. At the same time, our structures admit a piecewise description by what we call a \emph{partial bialgebra}, which is more in the spirit of Hayashi's original definition. In the presence of an antipode, an invariant integral and a compatible $^*$-structure, we call our structures \emph{partial compact quantum groups}.
The passage to the infinite object case is delicate at points, and requires imposing the proper finiteness conditions on associated structures. However, once all conditions are in place, many of the proofs are similar in spirit to the finite object case.
Our main result is a Tannaka-Kre$\breve{\textrm{\i}}$n-Woronowicz duality result which states that partial compact quantum groups are in one-to-one correspondence with \emph{concrete partial fusion C$^*$-categories}. In essence, a partial fusion C$^*$-category is a multifusion C$^*$-category \cite{ENO1}, except that (in a slight abuse of terminology) we allow an infinite number of irreducible objects as well as an infinite number of summands inside the unit object. By a \emph{concrete} multifusion C$^*$-category, we mean a multifusion C$^*$-category realized inside a category of (locally finite-dimensional) bigraded Hilbert spaces. Of course, Tannaka reconstruction is by now a standard procedure. For closely related results most relevant to our work, we mention \cite{Wor2,Sch3,Hay8,Ost1,Hai1,Szl1,Pfe1,DCY1,Nes1} as well as the surveys \cite{JoS1} and \cite[Section 2.3]{NeT1}.
As an application, we generalize Hayashi's Tannaka duality \cite{Hay8} (see also \cite{Ost1}) by showing that any module C$^*$-category over a multifusion C$^*$-category has an associated canonical partial compact quantum group. By the results of \cite{DCY1}, such data can be produced from ergodic actions of compact quantum groups. In particular, we consider the case of ergodic actions of $SU_q(2)$ for $q$ a non-zero real. This will allow us to show that the construction of \cite{Hay4} generalizes to produce partial compact quantum group versions of the dynamical quantum $SU(2)$-groups of \cite{EtV1,KoR1}, see also \cite{Sto1} and references therein. This construction will immediately provide the right setting for the operator algebraic versions of these dynamical quantum $SU(2)$-groups, which was the main motivation for writing this paper. These operator algebraic details will be studied elsewhere \cite{DCT2}.
The precise layout of the paper is as follows.
The first two sections introduce the basic theory of the structures
which we will be concerned with in this paper. In the \emph{first
section}, we introduce the notions of a \emph{partial bialgebra},
\emph{partial Hopf algebra} and \emph{partial compact quantum group},
and show how they are related to the notion of a weak multiplier
bialgebra \cite{Boh1}, weak multiplier Hopf algebra \cite{VDW1,VDW2}
and compact quantum group of face type \cite{Hay1}. In the
\emph{second section}, we introduce the corresponding notions of a \emph{partial tensor category} and \emph{partial fusion C$^*$-category}.
In the next two sections, our main result is proven, namely the Tannaka-Kre$\breve{\textrm{\i}}$n-Woronowicz duality. In the \emph{third section} we develop the corepresentation theory of partial Hopf algebras and the representation theory of partial compact quantum groups, and we show that the latter allows one to construct a concrete partial fusion C$^*$-category. In the \emph{fourth} section, we show conversely how any concrete partial fusion C$^*$-category allows one to construct a partial compact quantum group, and we briefly show how the two constructions are inverses of each other.
In the final two sections, we provide some examples of our structures
and applications of our main result. In the \emph{fifth section}, we
first consider the construction of a canonical partial compact quantum
group from any partial module C$^*$-category for a partial fusion
C$^*$-category. We then introduce the notions of \emph{Morita}, \emph{co-Morita} and \emph{weak Morita equivalence} \cite{Mug1} of partial compact quantum groups, and show that two partial compact quantum groups are weakly Morita equivalent if and only if they can be connected by a string of Morita and co-Morita equivalences. In the \emph{sixth section}, we study in more detail a concrete example of a canonical partial compact quantum group, constructed from an ergodic action of quantum $SU(2)$. In particular, we obtain a partial compact quantum group version of the dynamical quantum $SU(2)$-group.
\emph{Note}: we follow the physicist's convention that inner products on Hilbert spaces are anti-linear in their \emph{first} argument.
\section{Partial compact quantum groups}
We generalize Hayashi's definition of a compact quantum group of face type \cite{Hay1} to the case where the commutative base algebra is no longer finite-dimensional. We will present two approaches, based on \emph{partial bialgebras} and \emph{weak multiplier bialgebras} \cite{Boh1,VDW1}. The first approach is piecewise and concrete, but requires some bookkeeping. The second approach is global but more abstract. As we will see from the general theory and the concrete examples, both approaches have their intrinsic value.
\subsection{Partial algebras}
Let $I$ be a set. We consider $I^2=I\times I$ as the pair groupoid with $\cdot$ denoting composition. That is, an element $K=(k,l)\in I^2$ has source $K_l = k$ and target $K_r=l$, and if $K=(k,l)$ and $L=(l,m)$ we write $K\cdot L = (k,m)$.
\begin{Def} A \emph{partial algebra} $\mathscr{A}=(\mathscr{A},M)$ (over $\mathbb{C}$) is a set $I$ (the \emph{object} set) together with
\begin{itemize}
\item[$\bullet$] for each $K=(k,l)\in I^2$ a vector space $A(K) = \Grs{A}{k}{l}=\!\!\GrDA{A}{k}{l}$ (possibly the zero vector space),
\item[$\bullet$] for each $K,L$ with $K_r = L_l$ a multiplication map \[M(K,L):A(K) \otimes A(L)\rightarrow A(K\cdot L),\qquad a\otimes b \mapsto ab\] and
\item[$\bullet$] elements $\mathbf{1}(k) = \mathbf{1}_k \in \Grs{A}{k}{k}$ (the units),
\end{itemize}
such that the obvious associativity and unit conditions are satisfied.
By an \emph{$I$-partial algebra} we will mean a partial algebra with object set $I$.
\end{Def}
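For the reader's convenience, the conditions left implicit here can be spelled out; the following is a routine unwinding of the definition:
\begin{align*}
(ab)c &= a(bc) \in A(K\cdot L\cdot N)
  && \text{for } a\in A(K),\ b\in A(L),\ c\in A(N),\ K_r=L_l,\ L_r=N_l,\\
\mathbf{1}_k\,a &= a = a\,\mathbf{1}_l
  && \text{for } a\in A(k,l).
\end{align*}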
\begin{Rem}
\begin{enumerate}[label=(\arabic*)]\item It will be important to allow the local units $\mathbf{1}_k$ to be zero.
\item A partial algebra is by definition the same as a small
$\mathbb{C}$-linear category. However, we do not emphasize this viewpoint,
as the natural notion of a morphism for partial algebras will be \emph{contravariant} on objects, see Definition \ref{DefMor}.
\end{enumerate}
\end{Rem}
Let $\mathscr{A}$ be an $I$-partial algebra. We define $A(K\cdot L)$ to be $\{0\}$ when $K\cdot L$ is ill-defined, i.e. $K_r\neq L_l$. We then let $\Grs{M}{K}{L}$ be the zero map.
\begin{Def} The \emph{total algebra} $A$ of an $I$-partial algebra $\mathscr{A}$ is the vector space \[A = \bigoplus_{K\in I^2} A(K)\] endowed with the unique multiplication whose restriction to $A(K)\otimes A(L)$ coincides with $M(K,L)$.
\end{Def}
Clearly $A$ is an associative algebra. It will in general not possess a unit, but it is a \emph{locally unital algebra} as there exist mutually orthogonal idempotents $\mathbf{1}_k$ with $A = \osum{k,l} \mathbf{1}_kA\mathbf{1}_l$. An element $a\in A$ can be interpreted as a function assigning to each element $(k,l)\in I^2$ an element $a_{kl}\in A(k,l)$, namely the $(k,l)$-th component of $a$. This identifies $A$ with finitely supported $I$-indexed matrices whose $(k,l)$-th entry lies in $A(k,l)$, equipped with the natural matrix multiplication.
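As a basic illustration (a standard example, not taken from the development above), one can take every component to be one-dimensional:
\[
A(k,l) = \mathbb{C}e_{kl},\qquad e_{kl}\,e_{lm} = e_{km},\qquad \mathbf{1}_k = e_{kk},\qquad A = \bigoplus_{k,l}\mathbb{C}e_{kl}.
\]
The total algebra is then exactly the algebra of finitely supported $I$-indexed matrices, with the $e_{kl}$ as matrix units.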
\begin{Rem}\label{RemGrad} When $\mathscr{A}$ is an $I$-partial algebra with total algebra $A$, then $A\otimes A$ can be naturally identified with the total algebra of an $I\times I$-partial algebra $\mathscr{A}\otimes \mathscr{A}$, where \[(A\otimes A)((k,k'),(l,l')) = A(k,l)\otimes A(k',l')\] with the obvious tensor product multiplications and the $\mathbf{1}_{k,k'} = \mathbf{1}_k\otimes \mathbf{1}_{k'}$ as units.
\end{Rem}
Working with non-unital algebras necessitates the use of their \emph{multiplier algebra}. Let us first recall some general notions concerning non-unital algebras from \cite{Dau1,VDae1}.
\begin{Def} Let $A$ be an algebra over $\mathbb{C}$, not necessarily with unit. We call $A$ \emph{non-degenerate} if $A$ is faithfully represented on itself by left and right multiplication. It is called \emph{idempotent} if $A^2 = A$.
\end{Def}
\begin{Def} Let $A$ be an algebra. A \emph{multiplier} $m$ for $A$ consists of a couple of maps \begin{eqnarray*} L_m:A\rightarrow A,\quad a\mapsto ma\\ R_m:A\rightarrow A,\quad a\mapsto am\end{eqnarray*} such that $(am)b = a(mb)$ for all $a,b\in A$.
The set of all multipliers forms an algebra under composition for the
$L$-maps and anti-composition for the $R$-maps. It is called the
\emph{multiplier algebra} of $A$, and is denoted by $M(A)$.
\end{Def}
One has a natural homomorphism $A\rightarrow M(A)$. When $A$ is non-degenerate, this homomorphism is injective, and we can then identify $A$ as a subalgebra of the (unital) algebra $M(A)$. We then also have inclusions \[A\otimes A\subseteq M(A)\otimes M(A)\subseteq M(A\otimes A).\]
\begin{Exa}\label{ExaMult}
\begin{enumerate}[label=(\arabic*)]
\item Let $I$ be a set, and $\mathrm{Fun}_{\fin}(I)$ the algebra of all finitely supported functions on $I$. Then $M(\mathrm{Fun}_{\fin}(I)) = \mathrm{Fun}(I)$, the algebra of all functions on $I$.
\item Let $A$ be the total algebra of an $I$-partial algebra $\mathscr{A}$. As $A$ has local units, it is non-degenerate and idempotent. Then one can identify $M(A)$ with \[M(A) = \left(\prod_l \bigoplus_k A(k,l)\right) \bigcap \left(\prod_k\bigoplus_l A(k,l)\right) \subseteq \prod_{k,l} A(k,l),\] i.e.~ with the space of functions \[m:I^2\rightarrow A,\quad m_{kl}\in A(k,l)\] which have finite support in either one of the variables when the other variable has been fixed. The multiplication is given by the formula \[(mn)_{kl} = \sum_p m_{kp}n_{pl}.\]
\item Let $m_i$ be any collection of multipliers of $A$, and assume that for each $a\in A$, $m_ia =0$ for almost all $i$, and similarly $am_i=0$ for almost all $i$. Then one can define a multiplier $\sum_i m_i$ in the obvious way by termwise multiplication. One says that the sum $\sum_i m_i$ converges in the \emph{strict} topology.
\end{enumerate}
\end{Exa}
The condition appearing in the second example above will appear time and again, so we introduce it formally in the next definition.
\begin{Def} We will call an assignment $(k,l)\rightarrow m_{kl}$ into a set with a distinguished zero element \emph{row- and column-finite} (rcf) if the assignment has finite support in either one of the variables when the other variable has been fixed.
\end{Def}
Let us comment on the notion of a morphism for partial algebras. We first introduce the piecewise definition.
\begin{Def}\label{DefMor} Let $\mathscr{A}$ and $\mathscr{B}$ be respectively $I$ and $J$-partial algebras. Let \[\phi: I \ni k \mapsto J_k \subseteq J\] with the $J_k$ disjoint. A \emph{homomorphism} (based on $\phi$) from $\mathscr{A}$ to $\mathscr{B}$ consists of linear maps \[\GrDA{f}{r}{s}: A(k\;l)\rightarrow B(r\;s),\quad a\mapsto \GrDA{f(a)}{r}{s}\] for all $r\in J_k, s\in J_l$, satisfying
\begin{enumerate}[label = (\arabic*)]
\item (Unitality) $\GrDA{f(\mathbf{1}_{k})}{r}{s} = \delta_{rs}\mathbf{1}_r$ for all $r,s\in J_k$.
\item (Local finiteness) For each $k,l\in I$ and $a\in A(k\;l)$, the assignment $(r,s)\rightarrow \GrDA{f(a)}{r}{s}$ on $J_k\times J_l$ is rcf.
\item (Multiplicativity) For all $k,l,m\in I$, all $r\in J_k$ and all $t\in J_m$, and all $a\in A(k\;l)$ and $b\in A(l\;m)$, one has \[\GrDA{f(ab)}{r}{t} = \sum_{s\in J_l} \GrDA{f(a)}{r}{s}\GrDA{f(b)}{s}{t}.\]
\end{enumerate}
The homomorphism is called \emph{unital} if $J=\bigcup \{J_k\mid k\in I\}$.
\end{Def}
\begin{Rem}
\begin{enumerate}[label=(\arabic*)]
\item
Note that the multiplicativity condition makes sense because of the local finiteness condition.
\item
If $J = \bigcup_{k} J_k$, we can interpret $\phi$ as a map \[J\rightarrow I,\quad r\mapsto k \iff r\in J_k.\] In the more general case, we obtain a function $J\rightarrow I^*$, where $I^*$ is $I$ with an extra point `at infinity' added.
\end{enumerate}
\end{Rem}
The following lemma provides the global viewpoint concerning homomorphisms.
\begin{Lem}\label{LemPAMor} Let $\mathscr{A}$ and $\mathscr{B}$ be respectively $I$- and $J$-partial algebras, and fix an assignment $\phi: k\mapsto J_k$. Then there is a one-to-one correspondence between homomorphisms $\mathscr{A}\rightarrow \mathscr{B}$ based on $\phi$ and homomorphisms $f:A\rightarrow M(B)$ with $f(\mathbf{1}_k) = \sum_{r\in J_k} \mathbf{1}_r$.
\end{Lem}
\begin{proof}
Straightforward, using the characterisation of the multiplier algebra
provided in Example \ref{ExaMult} (2).
\end{proof}
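Explicitly, given a homomorphism $\mathscr{A}\rightarrow \mathscr{B}$ based on $\phi$ in the sense of Definition \ref{DefMor}, the associated map $f:A\rightarrow M(B)$ sends $a\in A(k\;l)$ to the multiplier with components
\[
f(a)_{rs} =
\begin{cases}
\GrDA{f(a)}{r}{s} & r\in J_k,\ s\in J_l,\\
0 & \text{otherwise}.
\end{cases}
\]
The local finiteness condition says precisely that this assignment is rcf, so that $f(a)$ is indeed a multiplier of $B$ by the description of the multiplier algebra in Example \ref{ExaMult} (2).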
\subsection{Partial coalgebras}
The notion of a partial algebra dualizes nicely; this is one of the main benefits of the local approach. For this we consider again $I^2$ as the pair groupoid, but now with elements considered as column vectors, and with $*$ denoting the (vertical) composition. So $K=\Grt{}{k}{l}$ has source $K_u = k$ and target $K_d = l$, and if $K=\Grt{}{k}{l}$ and $L=\Grt{}{l}{m}$ then $K* L = \Grt{}{k}{m}$.
\begin{Def} A \emph{partial coalgebra} $\mathscr{A}=(\mathscr{A},\Delta)$ (over $\mathbb{C}$) consists of a set $I$ (the object set) together with
\begin{itemize}
\item[$\bullet$] for each $K=\Grru{k}{l}\in I^2$ a vector space $A(K) = \Grt{A}{k}{l}=\!\!\GrRA{A}{k}{l}$,
\item[$\bullet$] for each $K,L$ with $K_d = L_u$ a comultiplication map \[\Grt{\Delta}{K}{L}:A(K*L)\rightarrow A(K)\otimes A(L),\qquad a \mapsto a_{(1)K}\otimes a_{(2)L},\] and
\item[$\bullet$] counit maps $\epsilon_k:\Grt{A}{k}{k}\rightarrow \mathbb{C}$,
\end{itemize}
satisfying the obvious coassociativity and counitality conditions.
By \emph{$I$-partial coalgebra} we will mean a partial coalgebra with object set $I$.
\end{Def}
\begin{Not}\label{NotCom} As the index of $\epsilon_k$ is determined by the element to which it is applied, there is no harm in dropping the index $k$ and simply writing $\epsilon$.
Similarly, if $K = \Grt{}{k}{l}$ and $L = \Grt{}{l}{m}$, we abbreviate $\Delta_l = \Grt{\Delta}{K}{L}$, as the other indices are determined by the element to which $\Delta_l$ is applied.
\end{Not}
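In the notation just introduced, the coassociativity and counitality conditions take the following explicit form (a routine unwinding of the definition):
\begin{align*}
(\Delta_l\otimes\id)\Delta_m &= (\id\otimes\Delta_m)\Delta_l
  && \text{as maps } \Grt{A}{k}{n}\rightarrow \Grt{A}{k}{l}\otimes\Grt{A}{l}{m}\otimes\Grt{A}{m}{n},\\
(\epsilon\otimes\id)\Delta_k(a) &= a = (\id\otimes\epsilon)\Delta_l(a)
  && \text{for } a\in \Grt{A}{k}{l}.
\end{align*}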
We also make again the convention that $A(K*L)=\{0\}$ and
$\Grt{\Delta}{K}{L}$ is the zero map when $K_d \neq L_u$. Similarly $\epsilon$ is seen as the zero functional on $A(K)$ when $K=\Grt{}{k}{l}$ with $k\neq l$.
\subsection{Partial bialgebras}
We can now superpose the notions of a partial algebra and a partial coalgebra. Let $I$ be a set, and let $M_2(I)$ be the set of 4-tuples of elements of $I$ arranged as 2$\times$2-matrices. We can endow $M_2(I)$ with two compositions, namely $\cdot$ (viewing $M_2(I)$ as a row vector of column vectors) and $*$ (viewing $M_2(I)$ as a column vector of row vectors). When $K\in M_2(I)$, we will write $K = \Grs{}{K_l}{K_r} = \Grt{}{K_u}{K_d} = \eGr{}{K_{lu}}{K_{ru}}{K_{ld}}{K_{rd}}$. One can view $M_2(I)$ as a double groupoid, and in fact as a \emph{vacant} double groupoid in the sense of \cite{AN1}.
In the following, a vector $(r,s)$ will sometimes be written simply as $r,s$ (without parentheses) or $rs$ in an index. We also follow Notation \ref{NotCom}, but the reader should be aware that the index of $\Delta$ will now be a 1$\times$2 vector in $I^2$ as we will work with partial coalgebras over $I^2$.
\begin{Def}\label{DefPartBiAlg} A \emph{partial bialgebra} $\mathscr{A}=(\mathscr{A},M,\Delta)$ consists of a set $I$ (the \emph{object set}) and a collection of vector spaces $A(K)$ for $K\in M_2(I)$ such that
\begin{itemize}
\item[$\bullet$] the $\Grs{A}{K_l}{K_r}$ form an $I^2$-partial algebra,
\item[$\bullet$] the $\Grt{A}{K_u}{K_d}$ form an $I^2$-partial coalgebra,
\end{itemize}
and for which the following compatibility relations are satisfied.
\begin{enumerate}[label=(\arabic*)]
\item\label{Propa} (Comultiplication of Units) For all $k,l,l',m\in I$, one has
\[\Delta_{l,l'}(\UnitC{k}{m}) = \delta_{l,l'} \UnitC{k}{l}\otimes \UnitC{l}{m}.\]
\item\label{Propb} (Counit of Multiplication) For all $K,L\in M_2(I)$ with $K_r = L_l$ and all $a\in A(K)$ and $b\in A(L)$, \[\epsilon(ab) = \epsilon(a)\epsilon(b).\]
\item\label{Propc} (Non-degeneracy) For all $k\in I$, $\epsilon(\UnitC{k}{k})=1$.
\item\label{Propd} (Finiteness) For each $K\in M_2(I)$ and each $a\in A(K)$, the assignment $(r,s)\rightarrow \Delta_{rs}(a)$ is rcf.
\item\label{Prope} (Comultiplication is multiplicative) For all $a\in A(K)$ and $b\in A(L)$ with $K_r= L_l$, \[\Delta_{rs}(ab) = \sum_t \Delta_{rt}(a)\Delta_{ts}(b).\]
\end{enumerate}
\end{Def}
\begin{Rem}\label{RemBA}\begin{enumerate}[label=(\arabic*)]
\item By assumption \ref{Propd}, the sum on the right hand side in condition \ref{Prope} is in fact finite and hence well-defined.
\item Note that the object set of the above $\mathscr{A}$ as a partial bialgebra is $I$, but the object set of its underlying partial algebra (or coalgebra) is $I^2$.
\item Properties \ref{Propa}, \ref{Propd} and \ref{Prope} simply say that $\Delta$ is a homomorphism $\mathscr{A}\rightarrow \mathscr{A}\otimes \mathscr{A}$ of partial algebras based on the assignment $I^2\rightarrow \mathscr{P}(I^2\times I^2)$, the power set of $I^2\times I^2$, such that \[(I^2\times I^2)_{{\tiny \begin{pmatrix} k\\m \end{pmatrix}}} = \{\left(\begin{pmatrix} k \\ l \end{pmatrix},\begin{pmatrix} l \\ m \end{pmatrix}\right)\mid l\in I\}.\]
\end{enumerate}
\end{Rem}
We relate the notion of a partial bialgebra to the recently introduced
notion of a weak multiplier bialgebra \cite{Boh1}. Let us first
introduce the following notation, using the notion introduced in
Example \ref{ExaMult} (2).
\begin{Not}
If $\mathscr{A}$ is an $I$-partial bialgebra, we write \[\lambda_k = \sum_l \UnitC{k}{l},\qquad \rho_l = \sum_k\UnitC{k}{l} \qquad \in M(A).\]
\end{Not}
\begin{Rem} From Property \ref{Propc} of Definition \ref{DefPartBiAlg}, it follows that $\lambda_k\neq 0\neq \rho_k$ for any $k\in I$.
\end{Rem}
To show that the total algebra of a partial bialgebra becomes a weak
multiplier bialgebra, we will need some easy lemmas. The first one is
an immediate consequence of Remark \ref{RemBA} (3) and Lemma \ref{LemPAMor}:
\begin{Lem} Let $\mathscr{A}$ be an $I$-partial bialgebra. Then for each $a\in A$, there exists a unique multiplier $\Delta(a) \in M(A\otimes A)$ such that \begin{align}\label{EqDel}
\begin{aligned}
\Delta_{rs}(a) &= (1\otimes \lambda_r)\Delta(a)(1\otimes
\lambda_s) \\ &= (\rho_r\otimes 1)\Delta(a)(\rho_s\otimes 1)
\end{aligned}
\end{align} for all $r,s\in I$, all $K\in M_2(I)$ and all $a\in A(K)$.
The resulting map \[\Delta:A\rightarrow M(A\otimes A),\quad a\mapsto \Delta(a)\] is a homomorphism.
\end{Lem}
We will refer to $\Delta: A\rightarrow M(A\otimes A)$ as the
\emph{total comultiplication} of $\mathscr{A}$. We will then also use
the suggestive Sweedler notation for this map, \[\Delta(a) =
a_{(1)}\otimes a_{(2)}.\] Note for example that \[\Delta(\UnitC{k}{m}) = \sum_{l}\UnitC{k}{l}\otimes \UnitC{l}{m} = \sum_l \lambda_k\rho_l\otimes \lambda_l\rho_m.\]
\begin{Lem} The element $E = \sum_{k,l,m} \UnitC{k}{l}\otimes \UnitC{l}{m}= \sum_l \rho_l\otimes \lambda_l$ is a well-defined idempotent in $M(A\otimes A)$, and satisfies \[\Delta(A)(A\otimes A)=E(A\otimes A),\quad (A\otimes A)\Delta(A)= (A\otimes A)E.\]
\end{Lem}
\begin{proof} Clearly the sum defining $E$ is strictly convergent, and makes $E$ into an idempotent. It is moreover immediate that $E\Delta(a)=\Delta(a) = \Delta(a)E$ for all $a\in A$. Since \[E(\UnitC{k}{l}\otimes \UnitC{m}{n}) = \Delta(\UnitC{k}{n})(\UnitC{k}{l}\otimes \UnitC{m}{n}) \] by property \ref{Propa} of Definition \ref{DefPartBiAlg}, and analogously for multiplication with $E$ on the right, the lemma is proven.
\end{proof}
By \cite[Proposition A.3]{VDW2}, there is a unique homomorphism $\Delta:M(A)\rightarrow M(A\otimes A)$ extending $\Delta$ and such that $\Delta(1) = E$. Similarly the maps $\id\otimes \Delta$ and $\Delta\otimes \id$ extend to maps from $M(A\otimes A)$ to $M(A\otimes A\otimes A)$.
For example, note that
\begin{align} \label{eq:delta-lambda-rho} \Delta(\lambda_{k}) &=
(\lambda_{k} \otimes 1)\Delta(1), & \Delta(\rho_{m}) &= (1 \otimes \rho_{m})\Delta(1).
\end{align}
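These identities can be checked directly: using $\lambda_k = \sum_m \UnitC{k}{m}$, the formula for $\Delta$ on the units, and the strict convergence $\sum_m \rho_m = 1$ in $M(A)$, one computes
\[
\Delta(\lambda_k) = \sum_m \Delta(\UnitC{k}{m})
 = \sum_{l,m} \lambda_k\rho_l \otimes \lambda_l\rho_m
 = (\lambda_k\otimes 1)\Big(\sum_l \rho_l\otimes\lambda_l\Big)
 = (\lambda_k\otimes 1)\Delta(1),
\]
and similarly $\Delta(\rho_m) = (1\otimes \rho_m)\Delta(1)$, using $\rho_m = \sum_k \UnitC{k}{m}$ and the fact that $A^L$ and $A^R$ commute pointwise.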
The following proposition gathers the properties of $\Delta$, $\epsilon$ and $\Delta(1)$ which guarantee that $(A,\Delta)$ forms a weak multiplier bialgebra in the sense of \cite[Definition 2.1]{Boh1}. We will call it the \emph{total weak multiplier bialgebra} associated to $\mathscr{A}$.
\begin{Prop} Let $\mathscr{A}$ be a partial bialgebra with total algebra $A$, total comultiplication $\Delta$ and counit $\epsilon$. Then the following properties are satisfied.
\begin{enumerate}[label={(\arabic*)}]
\item Coassociativity: $(\Delta\otimes \id)\Delta = (\id\otimes \Delta)\Delta$ (as maps $M(A)\rightarrow M(A^{\otimes 3})$).
\item Counitality: $(\epsilon\otimes \id)(\Delta(a)(1\otimes b)) = ab = (\id\otimes \epsilon)((a\otimes 1)\Delta(b))$ for all $a,b\in A$.
\item Weak Comultiplicativity of the Unit: \[(\Delta(1)\otimes 1)(1\otimes \Delta(1)) = (\Delta\otimes \id)\Delta(1) = (\id\otimes \Delta)\Delta(1) = (1\otimes \Delta(1))(\Delta(1)\otimes 1).\]
\item \label{WMC} Weak Multiplicativity of the Counit: For all $a,b,c\in A$, one has \[(\epsilon\otimes \id)(\Delta(a)(b\otimes c)) = (\epsilon\otimes \id)((1\otimes a)\Delta(1)(b\otimes c))\] and
\[(\epsilon\otimes \id)((a\otimes b)\Delta(c)) = (\epsilon\otimes \id)((a\otimes b)\Delta(1)(1\otimes c)).\]
\item Strong multiplier property: \[\Delta(A)(1\otimes A)\cup (A\otimes 1)\Delta(A)\subseteq A\otimes A.\]
\end{enumerate}
\end{Prop}
\begin{proof} Most of these properties follow immediately from the definition of a partial bialgebra. For illustration, let us check the first identity of property \ref{WMC}. Let us choose $a\in A(K)$, $b\in A(L)$ and $c\in A(M)$. Then \[\Delta(a)(b\otimes c) = \delta_{K_{ru},L_{lu}}\delta_{M_{lu},L_{ld}} \sum_r \Delta_{r,L_{ld}}(a)(b\otimes c).\] Applying $(\epsilon\otimes \id)$ to both sides, we obtain by property \ref{Propb} of Definition \ref{DefPartBiAlg} and counitality of $\epsilon$ that \[(\epsilon \otimes \id)(\Delta(a)(b\otimes c)) = \delta_{K_{ru},L_{lu},L_{ld},M_{lu}} \epsilon(b) ac.\] On the other hand, \begin{eqnarray*} (1\otimes a)\Delta(1)(b\otimes c) &=& \sum_{r,s,t} \UnitC{r}{s} b \otimes a\UnitC{s}{t}c \\ &=& \delta_{L_{ld},K_{ru},M_{lu}} b \otimes ac.\end{eqnarray*} Applying $(\epsilon\otimes \id)$, we find \begin{eqnarray*} (\epsilon\otimes \id)( (1\otimes a)\Delta(1)(b\otimes c) ) &=& \delta_{L_{ld},K_{ru},M_{lu}}\delta_{L_{lu},L_{ld}}\delta_{L_{ru},L_{rd}} \epsilon(b)ac \\ &=& \delta_{L_{ld},L_{lu},K_{ru},M_{lu}} \epsilon(b)ac,\end{eqnarray*} which agrees with the expression above.
\end{proof}
\begin{Rem}
Since also the expressions $\Delta(a)(b\otimes 1)$ and $(1\otimes a)\Delta(b)$ are in $A\otimes A$ for all $a,b\in A$, we see that $(A,\Delta)$ is in fact a \emph{regular} weak multiplier bialgebra \cite[Definition 2.3]{Boh1}.
\end{Rem}
Recall from \cite[Section 3]{Boh1} that a regular weak multiplier
bialgebra admits four projections $A\rightarrow M(A)$, given
by \begin{align*} \bar{\Pi}^L(a) = (\epsilon\otimes \id)((a\otimes
1)\Delta(1)),\quad & \bar{\Pi}^R(a) = (\id\otimes
\epsilon)(\Delta(1)(1\otimes a)),\\ \Pi^L(a) = (\epsilon\otimes
\id)(\Delta(1)(a\otimes 1)),\quad& \Pi^R(a) =
(\id\otimes\epsilon)((1\otimes a)\Delta(1)),\end{align*} where the
right hand side expressions are interpreted as multipliers in the
obvious way. The relation $\Delta(1)=\sum_{p} \rho_{p} \otimes \lambda_{p}$ and condition \ref{Propc} in Definition \ref{DefPartBiAlg} imply
\begin{align*}
\bar \Pi^{L}(A) &=\mathrm{span}\{\lambda_{p}:p\in I\} = \Pi^{L}(A), &
\bar \Pi^{R}(A) &= \mathrm{span}\{\rho_{p}:p\in I\} =\Pi^{R}(A).
\end{align*}
The \emph{base algebra} of $(A,\Delta)$ is therefore just the algebra
$\mathrm{Fun}_{\fin}(I)$ of finitely supported functions on $I$, and the
comultiplication of $A$ is (left and right) \emph{full} (meaning
roughly that the legs of $\Delta(A)$ span $A$) by \cite[Theorem 3.13]{Boh1}.
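For example, the first of these equalities can be verified directly. Writing out $\bar{\Pi}^L$ with $\Delta(1)=\sum_p \rho_p\otimes\lambda_p$ gives, for $a\in A$,
\[
\bar{\Pi}^L(a) = (\epsilon\otimes\id)\Big(\sum_p a\rho_p\otimes\lambda_p\Big)
 = \sum_p \epsilon(a\rho_p)\lambda_p,
\qquad
\bar{\Pi}^L(\UnitC{p}{p}) = \epsilon(\UnitC{p}{p})\lambda_p = \lambda_p,
\]
so the image of $\bar{\Pi}^L$ is contained in $\mathrm{span}\{\lambda_p : p\in I\}$, and by condition \ref{Propc} of Definition \ref{DefPartBiAlg} it is all of it.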
The maps $\Pi^{L}$ and $\Pi^{R}$ can also
be written in the form
\begin{align} \label{eq:pi}
\Pi^L(a) & = \sum_{p}\epsilon(\lambda_{p}a)\lambda_p, & \Pi^R(a) & = \sum_{p}\epsilon(a \rho_{p}) \rho_p
\end{align}
because $\epsilon(\lambda_{k}\rho_{m} a \lambda_{l}\rho_{n})=0$ if $(k,l)\neq(m,n)$. These relations and \eqref{EqDel}, \eqref{eq:delta-lambda-rho} imply
\begin{align} \label{eq:pi-l-delta}
(\Pi^{L} \otimes \id)(\Delta(a)) &= \sum_{p} \lambda_{p}\otimes \lambda_{p}a, &
(\id \otimes \Pi^{L})(\Delta(a)) &= \sum_{p} \rho_{p}a \otimes \lambda_{p}, \\ \label{eq:pi-r-delta}
(\Pi^{R} \otimes \id)(\Delta(a)) &= \sum_{p} \rho_{p} \otimes a\lambda_{p}, &
(\id \otimes \Pi^{R})(\Delta(a)) &= \sum_{p} a\rho_{p} \otimes \rho_{p}.
\end{align}
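To illustrate, the first identity in \eqref{eq:pi-l-delta} can be verified on a homogeneous element $a\in \Gr{A}{k}{l}{m}{n}$: since $\epsilon$ vanishes on the first leg of $\Delta_{rs}(a)\in \Gr{A}{k}{l}{r}{s}\otimes \Gr{A}{r}{s}{m}{n}$ unless $(r,s)=(k,l)$, one finds
\[
(\Pi^L\otimes\id)(\Delta(a)) = (\Pi^L\otimes\id)(\Delta_{kl}(a))
 = \lambda_k\otimes(\epsilon\otimes\id)(\Delta_{kl}(a))
 = \lambda_k\otimes a
 = \sum_p \lambda_p\otimes\lambda_p a,
\]
where the second equality uses \eqref{eq:pi}, the third counitality, and the last that $\lambda_p a = \delta_{pk}a$.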
Let us now show a converse. If $(A,\Delta)$ is a regular weak multiplier bialgebra, let us write $A^L = \Pi^L(A) = \bar{\Pi}^L(A)\subseteq M(A)$ and $A^R = \Pi^R(A)= \bar{\Pi}^R(A)\subseteq M(A)$ for the base algebras, where the identities follow from \cite[Theorem 3.13]{Boh1}. Then if moreover $(A,\Delta)$ is left and right full, we have that $A^L$ is (canonically) anti-isomorphic to $A^R$ by the map \[\sigma: A^L \rightarrow A^R, \quad \bar{\Pi}^L(a) \mapsto \Pi^R(a), \qquad a\in A,\] by \cite[Lemma 4.8]{Boh1}. We then simply refer to $A^L$ as \emph{the} base algebra.
\begin{Rem}\label{RemNak} We could also have used the map $\bar{\sigma}(\Pi^L(a)) = \bar{\Pi}^R(a)$ to identify $A^L$ and $A^R$. As it turns out, $\bar{\sigma}^{-1}\sigma$ is the (unique) Nakayama automorphism for some functional $\varepsilon$ on $A^L$, cf. \cite[Proposition 4.9]{Boh1}. Hence if $A^L$ is commutative, it follows that $\sigma = \bar{\sigma}$.
\end{Rem}
\begin{Prop}\label{PropCharPBA} Let $(A,\Delta)$ be a left and right full regular weak multiplier bialgebra whose base algebra is isomorphic to $\mathrm{Fun}_{\fin}(I)$ for some set $I$, and such that moreover $A^LA^R \subseteq A$. Then $(A,\Delta)$ is the total weak multiplier bialgebra of a uniquely determined partial bialgebra $\mathscr{A}$ over $I$.
\end{Prop}
\begin{Rem} The condition $A^LA^R \subseteq A$ is of course essential, as we want $A$ to behave locally as a bialgebra, not a multiplier bialgebra. Indeed, in case $A^L= \mathbb{C}$, the condition simply says that $A$ is unital. In general, it should be considered as a \emph{properness} condition.
\end{Rem}
\begin{proof} Let us write the standard generators (Dirac functions) of $A^L$ as $\lambda_k$ for $k\in I$, and write $\sigma(\lambda_k) = \rho_k\in A^R$. By assumption, $\UnitC{k}{l} = \lambda_k\rho_l\in A$. Further $A= AA^R = AA^L = A^LA=A^RA$, cf.~ the proof of \cite[Theorem 3.13]{Boh1}. Hence the $\UnitC{k}{l}$ make $A$ into the total algebra of an $I\times I$-partial algebra, as $A^L$ and $A^R$ pointwise commute by \cite[Lemma 3.5]{Boh1}.
Define \[\Delta_{rs}(a) = (\rho_r\otimes \lambda_r)\Delta(a)(\rho_s\otimes \lambda_s).\] From \cite[Lemma 3.3]{Boh1}, it follows that $\Delta_{rs}$ is a map from $\Gr{A}{k}{l}{m}{n}$ to $\Gr{A}{k}{l}{r}{s}\otimes \Gr{A}{r}{s}{m}{n}$. That same lemma, together with the coassociativity of $\Delta$, show that the $\Delta_{rs}$ form a coassociative family.
Now by \cite[Lemma 3.9]{Boh1}, we have $(\rho_k\otimes 1)\Delta(a) = (1\otimes \lambda_k)\Delta(a)$ for all $a$. By that same lemma and Remark \ref{RemNak}, we have as well $\Delta(a)(\rho_k\otimes 1) = \Delta(a)(1\otimes \lambda_k)$. Hence we may also write \begin{eqnarray*} \Delta_{rs}(a) &=& (\rho_r\otimes 1)\Delta(a)(\rho_s\otimes 1) \\ &=& (1\otimes \lambda_r)\Delta(a)(1\otimes \lambda_s).\end{eqnarray*} It is now straightforward that the counit map of $(A,\Delta)$ also provides a counit for the $\Delta_{rs}$, hence the $\Gr{A}{k}{l}{m}{n}$ also form a partial coalgebra.
As $\Delta(a)(1\otimes \lambda_s)$ and $(1\otimes \lambda_r)\Delta(a)$ are already in $A\otimes A$, it is also clear that $\Delta_{rs}(a)$ is rcf for each $a$. The multiplicativity of the $\Delta_{rs}$ is then immediate from the multiplicativity of $\Delta$.
To show that $\Delta_{ll'}(\UnitC{k}{m}) = \delta_{l,l'} \UnitC{k}{l}\otimes \UnitC{l}{m}$, it suffices to show that $\Delta(1) = \sum_k \rho_k\otimes \lambda_k$. Now as $\Delta(1)(A\otimes A) = \Delta(A)(A\otimes A)$, and as clearly $\Delta(a) = \sum_{r,s}\Delta_{rs}(a)$ in the strict topology for all $a\in A$, it follows that \[\Delta(1) = \left(\sum_k \rho_k\otimes \lambda_k\right)\Delta(1).\] Similarly, $\Delta(1) = \Delta(1)\left(\sum_k\rho_k\otimes \lambda_k\right)$. On the other hand, by \cite[Lemma 4.10]{Boh1} it follows that we can then write \[\Delta(1) = \sum_{k\in I'} \rho_k\otimes \lambda_k\] for some subset $I'\subseteq I$. As by definition $\bar{\Pi}^L(A) = \mathrm{Fun}_{\fin}(I)$, we deduce that $I=I'$.
For $a\in \Gr{A}{k}{l}{p}{q}$ and $b\in \Gr{A}{l}{m}{q}{r}$, we then have $\epsilon(ab) = \epsilon(a\UnitC{l}{q}b) = \epsilon(a)\epsilon(b)$ by \cite[Proposition 2.6.(4)]{Boh1}, which shows the partial multiplicativity of $\epsilon$.
Finally, assume that $k$ is such that $\epsilon(\UnitC{k}{k})=0$. Then by the partial multiplication law, $\epsilon$ is zero on all $\Gr{A}{k}{l}{k}{l}$. Applying $\Delta_{kl}$ to $\Gr{A}{k}{l}{m}{n}$ and using the counit property on the first leg, it follows that $\Gr{A}{k}{l}{m}{n}=0$ for all $l,m,n$. In particular, $\UnitC{k}{m}=0$ for all $m$. But this entails $\lambda_k=0$, a contradiction. Hence $\epsilon(\UnitC{k}{k})\neq 0$. From the partial multiplication law, it follows that $\epsilon(\UnitC{k}{k})^2 = \epsilon(\UnitC{k}{k})$, hence $\epsilon(\UnitC{k}{k})=1$.
This concludes the proof that $(A,\Delta)$ determines a partial bialgebra $\mathscr{A}$. It is immediate that $(A,\Delta)$ is in fact the total weak multiplier bialgebra of $\mathscr{A}$.
\end{proof}
\subsection{Partial Hopf algebras}
We now formulate the notion of a partial Hopf algebra, whose total form will correspond to a weak multiplier Hopf algebra \cite{Boh1,VDW2,VDW1}. We will mainly refer to \cite{Boh1} for uniformity.
Let us write $\circ$ for the inverse of $\cdot$, and $\bullet$ for the inverse of $*$, so \[\begin{pmatrix} k & l \\ m & n \end{pmatrix}^{\circ} = \begin{pmatrix} l & k \\ n & m \end{pmatrix},\quad \begin{pmatrix} k & l \\ m & n \end{pmatrix}^{\bullet} = \begin{pmatrix} m & n \\ k & l \end{pmatrix},\quad \begin{pmatrix} k & l \\ m & n \end{pmatrix}^{\circ \bullet} = \begin{pmatrix} n & m \\ l & k \end{pmatrix}.\] The notation $\circ$ (resp. $\bullet$) will also be used for row vectors (resp. column vectors).
\begin{Def}\label{DefPartBiAlgAnt} An \emph{antipode} for an
$I$-partial bialgebra $\mathscr{A}$ consists of linear
maps \[S:A(K)\rightarrow A(K^{\circ\bullet})\]
such that the following property holds: for all $M,P\in M_2(I)$ and
all $a\in A(M)$, \begin{align} \label{eq:antipode-pi-l}\underset{K\cdot
L^{\circ\bullet}=P}{\sum_{K* L = M}} a_{(1)K}S(a_{(2)L})&=
\delta_{P_l,P_r}\epsilon(a)\mathbf{1}(P_l),
\\ \label{eq:antipode-pi-r}
\underset{K^{\circ\bullet}\cdot L=P}{\sum_{K* L = M}}
S(a_{(1)K})a_{(2)L}&=
\delta_{P_l,P_r}\epsilon(a)\mathbf{1}(P_r).\end{align}
A partial bialgebra $\mathscr{A}$ is called a \emph{partial Hopf algebra} if it admits an antipode.
\end{Def}
\begin{Rem} Note that condition \ref{Propd} of Definition \ref{DefPartBiAlg} again guarantees that the above sums are in fact finite.
\end{Rem}
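Let us make the index bookkeeping in \eqref{eq:antipode-pi-l} explicit. For $a\in A(M)$ with $M = \begin{pmatrix} k & l \\ m & n\end{pmatrix}$, the pairs contributing to the sum are of the form \[K = \begin{pmatrix} k & l \\ r & s\end{pmatrix},\qquad L = \begin{pmatrix} r & s \\ m & n\end{pmatrix},\] and the constraint $K\cdot L^{\circ\bullet}=P$ forces $l=n$, in which case $P = \begin{pmatrix} k & m \\ r & r\end{pmatrix}$. Thus, for fixed $r\in I$, condition \eqref{eq:antipode-pi-l} amounts to \[\sum_{s} a_{(1)K}S(a_{(2)L}) = \delta_{k,m}\,\epsilon(a)\,\UnitC{k}{r},\] all other homogeneous components vanishing.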
If $S$ is an antipode for a partial bialgebra, we can extend $S$ to a
linear map \[S:A\rightarrow A\] on the total algebra $A$. Conditions
\eqref{eq:antipode-pi-l} and \eqref{eq:antipode-pi-r} then take the
following simple form:
\begin{Lem} \label{lemma:antipode}
A family of maps $S \colon A(K) \to A(K^{\circ\bullet})$ satisfies
\eqref{eq:antipode-pi-l} and \eqref{eq:antipode-pi-r} if and only if
the total map $S\colon A \to A$ satisfies
\begin{align} \label{eq:total-antipode}
a_{(1)}S(a_{(2)}) &= \Pi^{L}(a), &
S(a_{(1)})a_{(2)} &= \Pi^{R}(a)
\end{align}
for all $a\in A$.
\end{Lem}
Note that these should be considered a priori as equalities of left (resp. right) multipliers on $A$.
\begin{proof}
For $M,P\in M_{2}(I)$ and $a\in A(M)$, the left and right hand sides of \eqref{eq:antipode-pi-l} are precisely the $P$-homogeneous components of $a_{(1)}S(a_{(2)})$ and of $\Pi^{L}(a)=\sum_{p} \epsilon(\lambda_{p}a)\lambda_{p}$, respectively; the case of \eqref{eq:antipode-pi-r} is analogous.
\end{proof}
\begin{Lem}\label{LemAntiUnit} Let $\mathscr{A}$ be a partial Hopf algebra with antipode $S$. For all $k,l\in I$, $S(\UnitC{k}{l}) = \UnitC{l}{k}$.
\end{Lem}
\begin{proof} Indeed, the first identity in Equation \eqref{eq:total-antipode} of Lemma \ref{lemma:antipode} applied to $\UnitC{k}{k}$ gives \[\sum_l S(\UnitC{l}{k}) = \sum_l \UnitC{k}{l}S(\UnitC{l}{k}) = \lambda_k,\] as $S(\UnitC{l}{k}) \in \Gr{A}{k}{k}{l}{l}$ and $\Pi^{L}(\UnitC{k}{k}) = \lambda_k$. Comparing homogeneous components with $\lambda_k = \sum_l \UnitC{k}{l}$ proves the lemma.
\end{proof}
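In particular, applying $S$ termwise to $\lambda_k = \sum_l \UnitC{k}{l}$ and $\rho_k = \sum_l \UnitC{l}{k}$, we see that the antipode interchanges the local units: \[S(\lambda_k) = \sum_l \UnitC{l}{k} = \rho_k,\qquad S(\rho_k) = \lambda_k,\] as equalities of multipliers.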
\begin{Rem} \label{remark:index-equivalence}
Let $\mathscr{A}$ be an $I$-partial Hopf algebra. Then the relation
on $I$ defined by
\begin{align*}
k \sim l \Leftrightarrow \UnitC{k}{l} \neq 0
\end{align*}
is an equivalence relation. Indeed, it is reflexive and transitive by
assumptions (3) and (1) in Definition \ref{DefPartBiAlg}, and
symmetric by the preceding result. We call the set $\mathscr{I}$ of equivalence classes the \emph{hyperobject} set of $\mathscr{A}$.
\end{Rem}
The existence of an antipode is closely related to partial invertibility of
the maps $T_{1},T_{2} \colon A \otimes A \to A\otimes A$ given by
\begin{align} \label{eq:wt-12}
T_{1} (a\otimes b)&= \Delta(a)(1 \otimes b), &
T_{2} (a\otimes b)&= (a \otimes 1)\Delta(b).
\end{align}
The precise formulation involves the linear maps $E_{i},G_{i}
\colon A\otimes A\to A\otimes A$ given by
\begin{align} \label{eq:e1g1}
G_{1}(a\otimes b) &=
\sum_{p} a\rho_{p} \otimes \rho_{p}b, & E_{1}(a \otimes b) &=\Delta(1)(a\otimes b)=\sum_{p} \rho_{p}a\otimes \lambda_{p}b, \\ \label{eq:e2g2}
G_{2}(a \otimes b) &= \sum_{p} a\lambda_{p} \otimes
\lambda_{p}b, &
E_{2}(a\otimes b) &= (a\otimes b)\Delta(1)=\sum_{p} a\rho_{p} \otimes b\lambda_{p}.
\end{align}
\begin{Prop} \label{prop:riti}
Let $\mathscr{A}$ be a partial Hopf algebra with total algebra $A$,
total comultiplication $\Delta$ and antipode $S$. Then the maps
$R_{1},R_{2} \colon A \otimes A \to M(A \otimes A)$ given by
\begin{align*}
R_{1}(a \otimes b) &= a_{(1)}\otimes S(a_{(2)})b, &
R_{2}(a\otimes b) &= aS(b_{(1)})\otimes b_{(2)}
\end{align*}
take values in $A\otimes A$ and satisfy for $i=1,2$ the relations
\begin{align} \label{eq:riti}
T_{i}R_{i}&=E_{i}, & R_{i}T_{i}&= G_{i}, & T_{i}R_{i}T_{i}&= T_{i}, & R_{i}T_{i}R_{i} &= R_{i}.
\end{align}
\end{Prop}
\begin{proof}
The map $R_{1}$ takes values in $A\otimes A$ because
\begin{align*}
a_{(1)} \otimes
S(a_{(2)})\lambda_{k}\rho_{l} = a_{(1)} \otimes S(\rho_{l}\lambda_{k}a_{(2)}) \in A
\otimes A
\end{align*}
for all $a\in A$, and Lemma
\ref{lemma:antipode}, Equation \eqref{eq:pi-l-delta} and Lemma \ref{LemAntiUnit} imply
\begin{align*}
T_{1}R_{1}(a \otimes b)&= a_{(1)} \otimes a_{(2)}S(a_{(3)})b =
a_{(1)} \otimes \Pi^{L}(a_{(2)})b =
\sum_{p} \rho_{p}a \otimes \lambda_{p}b, \\
R_{1}T_{1}(a \otimes b) &= a_{(1)} \otimes S(a_{(2)})a_{(3)}b =
a_{(1)} \otimes \Pi^{R}(a_{(2)})b = \sum_{p} a\rho_{p}\otimes
\rho_{p}b.
\end{align*}
The relation $T_{1}R_{1}T_{1}=T_{1}$ follows since $E_{1}$ is left
multiplication by $\Delta(1)$ and $\Delta(1)\Delta(a)=\Delta(a)$, while
$R_{1}T_{1}R_{1}=R_{1}$ follows easily from \eqref{EqDel} and
\eqref{eq:delta-lambda-rho}, which show that $G_{1}$ acts as the
identity on the range of $R_{1}$. The assertions concerning $R_{2}$ and
$T_{2}$ follow similarly.
\end{proof}
\begin{Theorem} \label{theorem:partial-hopf-algebra}
Let $\mathscr{A}$ be a partial bialgebra with total algebra $A$,
total comultiplication $\Delta$ and counit $\epsilon$. Then the
following conditions are equivalent:
\begin{enumerate}[label={(\arabic*)}]
\item\label{tph1} $\mathscr{A}$ is a partial Hopf algebra.
\item\label{tph2} There exist linear maps $R_{1},R_{2} \colon A\otimes A\to
A\otimes A$ satisfying \eqref{eq:riti}.
\item\label{tph3} $(A,\Delta,\epsilon)$ is a weak multiplier Hopf algebra in the sense of \cite{VDW1}.
\end{enumerate}
If these conditions hold, then the total antipode of $\mathscr{A}$ coincides with the antipode of $(A,\Delta,\epsilon)$.
\end{Theorem}
\begin{proof}
\ref{tph1} implies \ref{tph2} by Proposition \ref{prop:riti}. \ref{tph2} is equivalent to \ref{tph3} by Definition
1.14 in \cite{VDW1}. Indeed, the maps $G_{1},G_{2}$ defined in \eqref{eq:e1g1} and \eqref{eq:e2g2} satisfy
\begin{align*}
G_{1}(a_{(1)} \otimes b) \otimes a_{(2)}c &= \sum_{p} a_{(1)} \otimes \rho_{p}b
\otimes a_{(2)}\lambda_{p}c, \\
ac_{(1)} \otimes G_{2}(b\otimes c_{(2)}) &=\sum_{p} a\rho_{p}c_{(1)} \otimes b\lambda_{p} \otimes c_{(2)}
\end{align*}
and therefore coincide with the maps introduced in Proposition 1.14 in
\cite{VDW1}. Finally, assume \ref{tph3}. Then
Lemma 6.14 and equation (6.14) in \cite{Boh1} imply that the antipode
$S$ of $(A,\Delta)$ satisfies $S(A(K))\subseteq A(K^{\circ\bullet})$ and relation \eqref{eq:total-antipode}. Now, Lemma \ref{lemma:antipode} implies \ref{tph1}.
\end{proof}
From \cite[Proposition 3.5 and Proposition 3.7]{VDW1} or \cite[Theorem
6.12 and Corollary 6.16]{Boh1}, we can conclude that the antipode of a
partial Hopf algebra reverses the multiplication and
comultiplication. Denote by $\Delta^{\op}$ the composition of
$\Delta$ with the flip map.
\begin{Cor} \label{corollary:antipode} Let $\mathscr{A}$ be a partial
Hopf algebra. Then the total antipode $S:A\rightarrow A$ is uniquely determined and satisfies
$S(ab) = S(b)S(a)$ and $\Delta(S(a)) = (S\otimes S)\Delta^{\op}(a)$
for all $a,b\in A$.
\end{Cor}
\begin{proof} Uniqueness of the antipode follows from the identities \eqref{eq:total-antipode}, see also \cite[Remark 2.8.(ii)]{VDW1}.
\end{proof}
We will need the following relation between $\epsilon$ and $S$ at some point.
\begin{Lem}\label{LemCoAnt} Let $(\mathscr{A},\Delta)$ be a partial Hopf algebra. Then $\epsilon\circ S = \epsilon$ on each $\Gr{A}{k}{l}{m}{n}$.
\end{Lem}
\begin{proof} Using the notation in Proposition \ref{prop:riti} and the discussion preceding it, we have that \[T_1: \sum_p(A\rho_p\otimes \rho_p A)\rightarrow \Delta(1)(A\otimes A)\] is a bijection with $R_1$ as inverse. As one easily verifies that $(\id\otimes \epsilon)T_1 = \id\otimes \epsilon$ by the partial multiplicativity and counit property of $\epsilon$, it follows that also $(\id\otimes \epsilon)R_1 = \id\otimes \epsilon$ on $\Delta(1)(A\otimes A)$. Applying both sides to $a\otimes \UnitC{k}{k}$ with $a\in \Gr{A}{k}{l}{k}{l}$, we find \[(\id\otimes (\epsilon\circ S))\Delta_{kl}(a) = a.\] Applying $\epsilon$ to this identity, we find $\epsilon\circ S = \epsilon$ on each $\Gr{A}{k}{l}{k}{l}$, and hence on all $\Gr{A}{k}{l}{m}{n}$.
\end{proof}
In practice, it is convenient to have an \emph{invertible} antipode around. Although invertibility often comes for free when extra structure is present, we will mostly just impose it to make life easier. The following definition follows the terminology of \cite{VDae1}.
\begin{Def} Let $\mathscr{A}$ be a partial Hopf algebra. We call $\mathscr{A}$ a \emph{regular} partial Hopf algebra if the antipode maps on $\mathscr{A}$ are invertible.
\end{Def}
From the uniqueness of the antipode, it follows immediately that $S^{-1}$ is then an antipode for $(\mathscr{A},\Delta^{\op})$. Conversely, if both $(\mathscr{A},\Delta)$ and $(\mathscr{A},\Delta^{\op})$ have antipodes, then $(\mathscr{A},\Delta)$ is a regular partial Hopf algebra.
\subsection{Invariant integrals}
\begin{Def}
Let $\mathscr{A}$ be an $I$-partial bialgebra. We call a family of
functionals
\begin{align} \label{eq:functionals}
\phic{k}{m} \colon A\pmat{k}{k}{m}{m} \to \mathbb{C}
\end{align}
a \emph{left invariant} \emph{integral} if
$\phic{k}{k}(\UnitC{k}{k})=1$ for all $k\in
I$ and
\begin{align}
\label{eq:integral}
(\id \otimes \phic{l}{m})(\Delta_{ll}(a))
&= \delta_{k,p} \phic{k}{m}(a)
\UnitC{k}{l}
\end{align}
for all $k,l,m,p\in I$, $a \in A\pmat{k}{p}{m}{m}$.
We call them a \emph{right invariant} \emph{integral} if instead one has \begin{align}
(\phic{k}{l} \otimes
\id)(\Delta_{ll}(a))&= \delta_{m,p} \phic{k}{m}(a) \UnitC{l}{m}\end{align}
for all $k,l,m,p\in I$, $a \in A\pmat{k}{k}{m}{p}$.
A left invariant integral which is at the same time a right invariant integral will simply be called an \emph{invariant integral}.
\end{Def}
As before, we can extend a (left or right) invariant integral to a functional $\phi$ on $A$ by linearity and by putting $\phi=0$ on $\Gr{A}{k}{l}{m}{n}$ if $k\neq l$ or $m\neq n$. The total form of the invariance conditions
\eqref{eq:integral} reads as follows.
\begin{Lem} \label{lemma:total-integral}
A family of functionals as in \eqref{eq:functionals}
is left invariant
if and only if
for all $a,b\in A$,
\begin{align*}
(\id\otimes \phi)((b\otimes 1)\Delta(a)) &= \sum_{k}\phi(\lambda_{k}a)b\lambda_k.
\end{align*}
It defines a right invariant functional if and only if
\begin{align*} (\phi\otimes \id)(\Delta(a)(1\otimes b)) &= \sum_{n}
\phi(\rho_{n} a)\rho_n b.\end{align*}
\end{Lem}
\begin{proof}
Straightforward: for homogeneous $a\in A\pmat{k}{p}{m}{m}$, the functional $\phi$ kills all components of the second leg of $(b\otimes 1)\Delta(a)$ except those coming from $\Delta_{ll}(a)$, so that \[(\id\otimes\phi)((b\otimes 1)\Delta(a)) = \sum_l b\,(\id\otimes \phic{l}{m})(\Delta_{ll}(a)) = \delta_{k,p}\,\phic{k}{m}(a)\,b\lambda_k = \sum_{k'}\phi(\lambda_{k'}a)\,b\lambda_{k'}\] by \eqref{eq:integral}, while both sides vanish on the other homogeneous components. The right invariant case is analogous.
\end{proof}
We have the following form of \emph{strong invariance}.
\begin{Lem} \label{lemma:strong-invariance}
Let $\mathscr{A}$ be a partial Hopf algebra with left invariant integral $\phi$. Then
for all $a\in A$,
\begin{align*}
S\left(( \id\otimes
\phi)(\Delta(b)(1 \otimes a))\right) &= (\id \otimes \phi)((1 \otimes b)\Delta(a)).
\end{align*}
Similarly, if $\mathscr{A}$ is a partial Hopf algebra with right invariant integral $\phi$, then
\begin{align*} S\left((\phi \otimes
\id)((a\otimes 1)\Delta(b))\right) &= (\phi \otimes \id)(\Delta(a)(b\otimes 1)).\end{align*}
\end{Lem}
\begin{proof}
The counit property, the relations \eqref{EqDel} and
\eqref{eq:total-antipode} and Lemma \ref{lemma:total-integral} imply
\begin{align*}
a_{(1)}\phi(ba_{(2)}) &= \sum_{n}
a_{(1)}\phi(\epsilon(b_{(1)}\rho_{n})b_{(2)}\lambda_{n}a_{(2)}) \\
&= \sum_{n} \epsilon(b_{(1)}\rho_{n})\rho_{n}a_{(1)}\phi(b_{(2)}a_{(2)})
\\
&= S(b_{(1)})b_{(2)}a_{(1)}\phi(b_{(3)}a_{(2)}) =
S(b_{(1)})\phi(b_{(2)}a)
\end{align*}
for all $a,b \in A$. The second equation
follows similarly.
\end{proof}
\begin{Lem} Assume that $\mathscr{A}$ is a regular $I$-partial Hopf algebra which admits a left invariant integral $\phi$. Then the following hold.
\begin{enumerate}[label = {(\arabic*)}]
\item\label{LI1} $\phi(\UnitC{k}{m})=1$ for all $k,m\in I$ with $\UnitC{k}{m}\neq 0$.
\item\label{LI2} $\phi$ is uniquely determined.
\item\label{LI3} $\phi=\phi S$.
\item\label{LI4} $\phi$ is invariant.
\end{enumerate}
\end{Lem}
\begin{proof}
To see \ref{LI1}, take $a=\UnitC{k}{k}$ in \eqref{eq:integral}.
By Corollary \ref{corollary:antipode}, $\phi S$ is a right invariant integral. Now let $\psi$ be any
right invariant integral. Then for all $k,l,m\in I$, $a\in A\pmat{k}{k}{m}{m}$,
\begin{align*}
\phic{k}{m}(a) &= (\Grt{\psi}{k}{k} \otimes
\phic{k}{m})(\Delta_{kk}(a)) = \Grt{\psi}{k}{m}(a)\Grt{\phi}{k}{m}(\UnitC{k}{m}) = \Grt{\psi}{k}{m}(a) .
\end{align*}
This proves \ref{LI2}, \ref{LI3} and \ref{LI4}.
\end{proof}
We will need the following lemma at some point, cf.~ \cite[Proposition 3.4]{VDae2}.
\begin{Lem}\label{LemFaith} Let $\mathscr{A}$ be a regular partial Hopf
algebra with an invariant integral $\phi$. Then
$\phi$ is faithful in the following sense: if $a\in A$ and
$\phi(ab) =0$ (resp. $\phi(ba)=0$) for all $b\in A$, then
$a=0$.
\end{Lem}
\begin{proof} Suppose $a\in A$ and $\phi(ba)=0$ for all $b\in A$. By the support condition of $\phi$, we may suppose $a$ is homogeneous, $a\in \Gr{A}{k}{l}{m}{n}$.
We will first show that necessarily $\epsilon(a)=0$, for which we may already assume $k=m$ and $l=n$. Indeed, the condition on $a$ implies also $(\id\otimes \phi)(\Delta_{lk}(b)(1\otimes a))=0$ for all $b\in \Gr{A}{s}{r}{l}{k}$. Applying the strong invariance identity, we deduce \begin{equation}\label{EqSwitch}(\id\otimes \phi)((1\otimes b)\Delta_{rs}(a))=0,\qquad \forall b\in \Gr{A}{s}{r}{l}{k}.\end{equation} Writing $\Delta_{rs}(a) = \sum_i p_i\otimes q_i$ with the $p_i$ linearly independent, we deduce $\phi(bq_i)=0$ for all $i$ and $b$, and so also $\sum_i \phi(S(p_i)q_i)=0$. Hence $0=\sum_r \phi(S(a_{(1){\tiny \begin{pmatrix} k & l \\ r & l \end{pmatrix}}})a_{(2){\tiny \begin{pmatrix} r & l \\ k & l\end{pmatrix}}}) = \phi(\epsilon(a) \UnitC{l}{l}) = \epsilon(a)$.
Note now that from \eqref{EqSwitch}, it follows that for any functional $\omega$ on $\Gr{A}{k}{l}{m}{n}$, also $a'=(\omega\otimes \id)\Delta_{mn}(a)$ satisfies $\phi(ba')=0$ for all $b\in A$. Hence, by what we have just shown, $\epsilon(a')=0$, i.e.~ $\omega(a)=0$. As $\omega$ was arbitrary, we deduce $a=0$.
The other case follows similarly, or by considering the opposite comultiplication.
\end{proof}
\subsection{Partial compact quantum groups}
Our main objects of interest are partial Hopf algebras with involutions and invariant integrals.
\begin{Def} A \emph{partial $*$-algebra} $\mathscr{A}$ is a partial
algebra whose total algebra $A$ is equipped with an antilinear,
antimultiplicative involution $*\colon A\rightarrow A$, $ a\mapsto
a^*$, such that the $\mathbf{1}_k$ are selfadjoint for all $k$ in
the object set.
\end{Def}
One can of course give an alternative definition directly in terms of the partial algebra structure by requiring that we are given antilinear maps $A(k,l)\rightarrow A(l,k)$ satisfying the obvious antimultiplicativity and involution properties.
\begin{Def} A \emph{partial $*$-bialgebra} $\mathscr{A}$ is a
partial bialgebra whose underlying partial algebra has been
endowed with a partial $*$-algebra structure such that
$\Delta_{rs}(a)^* = \Delta_{sr}(a^*)$ for all $a \in \Gr{A}{k}{l}{m}{n}$.
A \emph{partial Hopf $*$-algebra} is a partial bialgebra which is at the same time a partial $*$-bialgebra and a partial Hopf algebra.
\end{Def}
Thus, a partial bialgebra is a partial
$*$-bialgebra if and only if the underlying weak multiplier bialgebra
is a weak multiplier $*$-bialgebra.
From Theorem \ref{theorem:partial-hopf-algebra} and \cite{Boh1},
\cite{VDW1}, we can deduce:
\begin{Cor} \label{cor:involutive}
An $I$-partial $*$-bialgebra $\mathscr{A}$ is an $I$-partial Hopf
$*$-algebra if and only if the weak multiplier $*$-bialgebra
$(A,\Delta)$ is a weak multiplier Hopf $*$-algebra. In that case,
the counit and antipode satisfy
$\epsilon(a^{*})=\overline{\epsilon(a)}$ and $S(S(a)^{*})^{*}=a$ for
all $a\in A$. In particular, the total antipode is bijective.
\end{Cor}
\begin{proof}
The if and only if part follows immediately from Theorem
\ref{theorem:partial-hopf-algebra}, the relation for the counit from
uniqueness of the counit \cite[Theorem 2.8]{Boh1}, and the relation
for the antipode from \cite[Proposition 4.11]{VDW1}.
\end{proof}
We are finally ready to formulate our main definition.
\begin{Def} A \emph{partial compact quantum group} $\mathscr{G}$ is a
partial Hopf $*$-algebra $\mathscr{A} = P(\mathscr{G})$ with an invariant integral $\phi$ that is positive in the sense that $\phi(a^*a)\geq 0$ for all $a\in A$. We also say that $\mathscr{G}$ is the partial compact quantum group \emph{defined by} $\mathscr{A}$.
\end{Def}
\begin{Rem} It will follow from our Proposition \ref{prop:rep-cosemisimple} and \cite[Theorem 3.3 and Theorem 4.4]{Hay1} that for $I$ finite, a partial compact quantum group is precisely a compact quantum group of face type \cite[Definition 4.1]{Hay1}. However, we feel this terminology could be misleading if the object set is not finite: the name \emph{partial} compact quantum group better reflects that only the \emph{parts} of this object are to be considered compact, not the total object.
\end{Rem}
\section{Partial tensor categories}
The notion of a partial algebra has a nice categorification. Recall first that the appropriate (vertical) categorification of a unital $\mathbb{C}$-algebra is a $\mathbb{C}$-linear additive tensor category. From now on, by `category' we will by default mean a $\mathbb{C}$-linear additive category.
\begin{Def} A \emph{partial tensor category} $\mathscr{C}$ over a set $\mathscr{I}$ consists of
\begin{itemize}
\item[$\bullet$] a collection of (small) categories $\mathcal{C}_{\alpha\beta}$ with $\alpha,\beta\in \mathscr{I}$,
\item[$\bullet$] $\mathbb{C}$-bilinear functors \[\otimes: \mathcal{C}_{\alpha\beta}\times \mathcal{C}_{\beta\gamma}\rightarrow \mathcal{C}_{\alpha\gamma},\]
\item[$\bullet$] natural isomorphisms \[ a_{X,Y,Z}: (X\otimes Y)\otimes Z \rightarrow X\otimes (Y\otimes Z),\qquad X \in \mathcal{C}_{\alpha\beta},Y\in \mathcal{C}_{\beta\gamma},Z\in \mathcal{C}_{\gamma\delta},\]
\item[$\bullet$] non-zero objects $\mathbbm{1}_{\alpha} \in \mathcal{C}_{\alpha\alpha}$,
\item[$\bullet$] natural isomorphisms \[\lambda_X^{(\alpha)}:\mathbbm{1}_\alpha\otimes X \rightarrow X,\qquad \rho_X^{(\beta)}:X\otimes \mathbbm{1}_\beta\rightarrow X, \qquad X\in \mathcal{C}_{\alpha\beta},\]
\end{itemize}
satisfying the obvious associativity and unit constraints.
\end{Def}
\begin{Rem} In true analogy with the partial algebra case, we could let the $\mathbbm{1}_\alpha$ also be zero objects, but this generalisation will not be needed in the following.
\end{Rem}
The corresponding total notion is as follows.
\begin{Def} A \emph{tensor category with local units (indexed by $\mathscr{I}$)} consists of
\begin{itemize}
\item[$\bullet$] a (small) category $\mathcal{C}$,
\item[$\bullet$] a $\mathbb{C}$-bilinear functor $\otimes: \mathcal{C}\times \mathcal{C} \rightarrow \mathcal{C}$ with compatible associativity constraint $a$,
\item[$\bullet$]\label{FinSup} a collection $\{\mathbbm{1}_\alpha\}_{\alpha\in \mathscr{I}}$ of objects such that
\begin{enumerate}[label=(\arabic*)]
\item $\mathbbm{1}_\alpha\otimes \mathbbm{1}_\beta \cong 0$ for each $\alpha\neq \beta$, and
\item for each object $X$, $\mathbbm{1}_\alpha\otimes X \cong 0 \cong X\otimes \mathbbm{1}_\alpha$ for all but a finite set of $\alpha$,
\end{enumerate}
\item[$\bullet$]\label{UnCon} natural isomorphisms $\lambda_X:\oplus_\alpha (\mathbbm{1}_\alpha\otimes X) \rightarrow X$ and $\rho_X:\oplus_\alpha(X\otimes \mathbbm{1}_\alpha)\rightarrow X$ satisfying the obvious unit conditions.
\end{itemize}
\end{Def}
Note that the condition \ref{UnCon} makes sense because of the local support condition in \ref{FinSup}.
\begin{Rem} \begin{enumerate}[label=(\arabic*)]
\item There is no problem in modifying Mac Lane's coherence theorem, and we will henceforth assume that our partial tensor categories and tensor categories with local units are strict, just to lighten notation.
\item One can also see the global tensor category $\mathcal{C}$ as an inductive limit of (unital) tensor categories.
\end{enumerate}
\end{Rem}
\begin{Not} If $(\mathcal{C},\otimes,\{\mathbbm{1}_\alpha\})$ is a tensor category with local units, and $X\in \mathcal{C}$, we define \[X_{\alpha\beta} = \mathbbm{1}_\alpha\otimes X \otimes \mathbbm{1}_\beta,\] and we denote by \[\eta_{\alpha\beta}:X_{\alpha\beta} \rightarrow \oplus_{\gamma,\delta} \left(\mathbbm{1}_\gamma \otimes X \otimes \mathbbm{1}_\delta\right) \cong X\] the natural inclusion maps.
\end{Not}
\begin{Lem} Up to equivalence, there is a canonical one-to-one correspondence between partial tensor categories and tensor categories with local units.
\end{Lem}
The reader can easily cook up the definition of equivalence referred to in this lemma.
\begin{proof} Let $(\mathcal{C},\otimes,\{\mathbbm{1}_\alpha\}_{\alpha\in \mathscr{I}})$ be a tensor category with local units indexed by $\mathscr{I}$. Then the $\mathcal{C}_{\alpha\beta} = \{X \in \mathcal{C}\mid X_{\alpha\beta} \underset{\eta_{\alpha\beta}}{\cong} X\}$, seen as full subcategories of $\mathcal{C}$, form a partial tensor category upon restriction of $\otimes$.
Conversely, let $\mathscr{C}$ be a partial tensor category. Then we let $\mathcal{C}$ be the category formed by formal finite direct sums $\oplus X_{\alpha\beta}$ with $X_{\alpha\beta}\in \mathcal{C}_{\alpha\beta}$, and with \[\mathrm{Mor}(\oplus X_{\alpha\beta},\oplus Y_{\alpha\beta}) := \oplus_{\alpha\beta} \mathrm{Mor}(X_{\alpha\beta},Y_{\alpha\beta}).\] The tensor product can be extended to $\mathcal{C}$ by putting $X_{\alpha\beta} \otimes X_{\gamma\delta} = 0$ when $\beta\neq \gamma$. The associativity constraints can then be summed to an associativity constraint for $\mathcal{C}$. It is evident that the $\mathbbm{1}_\alpha$ provide local units for $\mathcal{C}$.
\end{proof}
\begin{Rem} Another global viewpoint is to see the collection of
$\mathcal{C}_{\alpha\beta}$ as a 2-category with 0-cells indexed by the
set $\mathscr{I}$, the objects of the $\mathcal{C}_{\alpha\beta}$ as 1-cells,
and the morphisms of the $\mathcal{C}_{\alpha\beta}$ as 2-cells. As for
partial algebras vs.~ linear categories, we will not emphasize this
way of looking at our structures, as this viewpoint is not
compatible with the notion of a monoidal functor between partial tensor categories.
\end{Rem}
Continuing the analogy with the algebra case, we define the enveloping \emph{multiplier tensor category} of a tensor category with local units.
\begin{Def} Let $\mathscr{C}$ be a partial tensor category over $\mathscr{I}$ with total tensor category $\mathcal{C}$. The \emph{multiplier tensor category} $M(\mathcal{C})$ of $\mathcal{C}$ is defined to be the category consisting of formal sums $\oplus_{\alpha,\beta\in \mathscr{I}} X_{\alpha\beta}$ which are rcf, and with \[\mathrm{Mor}(\oplus X_{\alpha\beta},\oplus Y_{\alpha\beta}) = \left(\prod_\beta\bigoplus_\alpha \mathrm{Mor}(X_{\alpha\beta},Y_{\alpha\beta}) \right) \cap \left(\prod_\alpha\bigoplus_\beta \mathrm{Mor}(X_{\alpha\beta},Y_{\alpha\beta})\right),\] the composition of morphisms being entry-wise (`Hadamard product').
\end{Def}
\begin{Rem} Because of the rcf condition on objects, we could in fact have written simply $\mathrm{Mor}(\oplus X_{\alpha\beta},\oplus Y_{\alpha\beta}) = \prod_{\alpha\beta} \mathrm{Mor}(X_{\alpha\beta},Y_{\alpha\beta})$.
\end{Rem}
The tensor product of $\mathcal{C}$ can be extended to $M(\mathcal{C})$ by putting \[\left(\oplus X_{\alpha\beta}\right)\otimes \left(\oplus Y_{\alpha\beta}\right) = \oplus_{\alpha,\beta,\gamma} \left(X_{\alpha\beta}\otimes Y_{\beta\gamma}\right),\] and similarly for morphism spaces. This makes sense because of the rcf condition of the objects of $M(\mathcal{C})$. The associativity constraints of the $\mathcal{C}_{\alpha\beta}$ can be summed to an associativity constraint for $M(\mathcal{C})$, while $\mathbbm{1} := \oplus_{\alpha\in \mathscr{I}} \mathbbm{1}_\alpha$ becomes a unit for $M(\mathcal{C})$, rendering $M(\mathcal{C})$ into an ordinary tensor category (with unit object).
\begin{Rem} With some effort, a more intrinsic construction of the
multiplier tensor category can be given in terms of couples of
endofunctors, in the same vein as the construction of the multiplier
algebra of a non-unital algebra.
\end{Rem}
\begin{Exa}\label{ExaVectBiGr} Let $I$ be a set. We can consider the partial tensor category $\mathscr{C} = \{\mathrm{Vect}_{\mathrm{fd}}\}_{i,j\in I}$ where each $\mathcal{C}_{ij}$ is a copy of the category of finite-dimensional vector spaces $\mathrm{Vect}_{\mathrm{fd}}$, and with each $\otimes$ the ordinary tensor product. The total category $\mathcal{C}$ can then be identified with the category $\Gr{\mathrm{Vect}}{I}{I}{}{\fin}$ of finite-dimensional bi-graded vector spaces with the `balanced' tensor product over $I$. More precisely, the tensor product of $V$ and $W$ is $V\underset{I}{\otimes} W$ with components \[\Gru{(}{k}{}V\underset{I}{\otimes} W\Gru{)}{}{m} = \oplus_l \;(\Gru{V}{k}{l}\otimes \Gru{W}{l}{m})\subseteq V\otimes W.\] The multiplier category $M(\Gr{\mathrm{Vect}}{I}{I}{}{\fin})$ equals $\Gr{\mathrm{Vect}}{I}{I}{}{\rcf}$, the category of bigraded vector spaces which are rcfd (i.e.~ finite-dimensional on each row and column).
\end{Exa}
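Note that in this example, the dimensions of the homogeneous components compose like matrix products: writing $d(V)_{kl} = \dim \Gru{V}{k}{l}$, the formula above gives \[d(V\underset{I}{\otimes} W)_{km} = \sum_l d(V)_{kl}\,d(W)_{lm},\] and the rcfd condition on the objects of $M(\Gr{\mathrm{Vect}}{I}{I}{}{\fin})$ is precisely what keeps these sums finite.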
We now formulate the appropriate notion of a functor between partial tensor categories. Let us first give an auxiliary definition.
\begin{Def} Let $\mathscr{C}$ be a partial tensor category over $\mathscr{I}$. If $\mathscr{J}\subseteq \mathscr{I}$, we call $\mathscr{D} = \{\mathcal{C}_{\alpha\beta}\}_{\alpha,\beta\in \mathscr{J}}$ a \emph{restriction} of $\mathscr{C}$.
\end{Def}
\begin{Def} Let $\mathscr{C}$ and $\mathscr{D}$ be partial tensor categories over respective sets $\mathscr{I}$ and $\mathscr{J}$, and let \[\phi:\mathscr{J}\rightarrow \mathscr{I},\quad k \mapsto k'\] determine a decomposition $\mathscr{J} = \{\mathscr{J}_\alpha\mid \alpha\in \mathscr{I}\}$ with $k\in \mathscr{J}_\alpha \iff \phi(k)=\alpha$.
A \emph{unital morphism} from $\mathscr{C}$ to $\mathscr{D}$ (based on $\phi$)
consists of $\mathbb{C}$-linear functors \[F_{kl}: \mathcal{C}_{k'l'}\rightarrow
\mathcal{D}_{kl},\quad X\mapsto F_{kl}(X) = \Gru{F(X)}{k}{l},\] natural
monomorphisms \[\iota^{(klm)}_{X,Y}: \GrDA{F(X)}{k}{l} \otimes
\GrDA{F(Y)}{l}{m} \hookrightarrow \GrDA{F(X\otimes Y)}{k}{m}, \quad
X\in \mathcal{C}_{k'l'},Y\in \mathcal{C}_{l'm'},\] and isomorphisms \[\mu_{k}:
\mathbbm{1}_k \cong \GrDA{F(\mathbbm{1}_{k'})}{k}{k}\] satisfying the following conditions. \begin{enumerate}[label=(\arabic*)]
\item (Unitality) $\GrDA{F(\mathbbm{1}_{\alpha})}{k}{l}= 0$ if $k\neq l$ in $\mathscr{J}_\alpha$.
\item (Local finiteness) For each $\alpha,\beta\in \mathscr{I}$ and $X\in \mathcal{C}_{\alpha\beta}$, the map $(k,l)\mapsto \GrDA{F(X)}{k}{l}$ is rcf on $\mathscr{J}_{\alpha}\times \mathscr{J}_{\beta}$.
\item (Multiplicativity) For all $X\in \mathcal{C}_{k'\beta}$ and $Y\in \mathcal{C}_{\beta m'}$, one has\[\bigoplus_{l\in \mathscr{J}_\beta} \iota^{(klm)}_{X,Y}: \left(\bigoplus_{l\in \mathscr{J}_\beta} \GrDA{F(X)}{k}{l} \otimes \GrDA{F(Y)}{l}{m}\right) \cong \GrDA{F(X\otimes Y)}{k}{m}.\]
\item (Coherence) The $\iota^{(klm)}$ satisfy the 2-cocycle condition making \[\xymatrix{F_{kl}(X)\otimes F_{lm}(Y)\otimes F_{mn}(Z) \ar[rr]^{\id\otimes \iota^{(lmn)}_{Y,Z}} \ar[d]_{\iota^{(klm)}_{X,Y}\otimes\id}&& F_{kl}(X)\otimes F_{ln}(Y\otimes Z)\ar[d]^{\iota^{(kln)}_{X,Y\otimes Z}}\\ F_{km}(X\otimes Y)\otimes F_{mn}(Z) \ar[rr]_{\iota^{(kmn)}_{X\otimes Y,Z}}&& F_{kn}(X\otimes Y \otimes Z)}\] commute for all $X\in \mathcal{C}_{k'l'},Y\in \mathcal{C}_{l'm'}, Z\in \mathcal{C}_{m'n'}$, and the $\mu_k$ satisfy the commutation relations \[\xymatrix{ \GrDA{F(X)}{k}{l}\otimes \mathbbm{1}_l \ar[r]^{\!\!\!\!\id\otimes \mu_l} \ar@{=}[d] & \GrDA{F(X)}{k}{l} \otimes \GrDA{F(\mathbbm{1}_{l'})}{l}{l} \ar[d]^{\iota^{(kll)}_{X, \mathbbm{1}_{l'}}} \\ \GrDA{F(X)}{k}{l} & \ar@{=}[l] \GrDA{F(X\otimes \mathbbm{1}_{l'})}{k}{l}} \qquad \xymatrix{ \mathbbm{1}_k\otimes \GrDA{F(X)}{k}{l}\ar[r]^{\!\!\!\!\mu_k\otimes \id} \ar@{=}[d] & \GrDA{F(\mathbbm{1}_{k'})}{k}{k} \otimes \GrDA{F(X)}{k}{l} \ar[d]^{\iota^{(kkl)}_{\mathbbm{1}_{k'},X}} \\ \GrDA{F(X)}{k}{l} & \ar@{=}[l] \GrDA{ F(\mathbbm{1}_{k'}\otimes X)}{k}{l}} \]
\end{enumerate}
A \emph{morphism} from $\mathscr{C}$ to $\mathscr{D}$ is a unital morphism from $\mathscr{C}$ to a restriction of $\mathscr{D}$.
\end{Def}
The corresponding global notion (of a unital morphism) is as follows.
\begin{Lem} Let $\mathscr{C}$ and $\mathscr{D}$ be partial tensor categories over respective sets $\mathscr{I}$ and $\mathscr{J}$. Fix a map \[\phi: \mathscr{J}\rightarrow \mathscr{I}\] inducing a disjoint decomposition $\{\mathscr{J}_\alpha\mid \alpha\in \mathscr{I}\}$. Then there is a one-to-one correspondence between unital morphisms $\mathscr{C}\rightarrow \mathscr{D}$ based on $\phi$ and functors $F:\mathcal{C} \rightarrow M(\mathcal{D})$ with isomorphisms \[\iota_{X,Y}:F(X)\otimes F(Y)\cong F(X\otimes Y),\qquad \mu_\alpha:\oplus_{k\in \mathscr{J}_\alpha} \mathbbm{1}_k \cong F(\mathbbm{1}_\alpha)\] satisfying the natural coherence conditions.
\end{Lem}
\begin{Rem} If $\mathscr{J}_{\alpha}=\emptyset$, the global functor $F$ sends
$\mathbbm{1}_{\alpha}$ to the zero object in $M(\mathcal{D})$.
\end{Rem}
The reader has already furnished for himself the notion of
equivalence of partial tensor categories. There is a closely related
but weaker notion of equivalence corresponding to chopping up a partial tensor category into smaller pieces (or, vice versa, gluing certain blocks of a partial tensor category together). Let us formalize this in the following definition.
\begin{Def} Let $\mathscr{C}$ and $\mathscr{D}$ be partial tensor categories. We say $\mathscr{D}$ is a \emph{partitioning} of $\mathscr{C}$ (or $\mathscr{C}$ a \emph{globalisation} of $\mathscr{D}$) if there exists a unital morphism $\mathscr{C}\rightarrow \mathscr{D}$ inducing an equivalence of categories $\mathcal{C}\rightarrow \mathcal{D}$.
\end{Def}
The partial tensor categories that we will be interested in will be required to have some further structure.
\begin{Def} A partial tensor category $\mathscr{C}$ is called \emph{semi-simple} if all $\mathcal{C}_{\alpha\beta}$ are semi-simple.
A partial tensor category is said to have \emph{indecomposable units} if all units $\mathbbm{1}_\alpha$ are indecomposable.
\end{Def}
It is easy to see that any semi-simple partial tensor category can be
partitioned into a semi-simple partial tensor category with indecomposable units. Hence we will from now on only consider semi-simple partial tensor categories with indecomposable units.
The following definition introduces the notion of duality for partial tensor categories.
\begin{Def} Let $\mathscr{C}$ be a partial tensor category.
An object $X\in \mathcal{C}_{\alpha\beta}$ is said to admit a \emph{left dual} if there exists an object $Y=\hat{X} \in \mathcal{C}_{\beta\alpha}$ and morphisms $\mathrm{ev}_{X}: Y\otimes X \rightarrow \mathbbm{1}_\beta$ and $\mathrm{coev}_X: \mathbbm{1}_\alpha\rightarrow X\otimes Y$ satisfying the obvious snake identities.
We say $\mathscr{C}$ \emph{admits left duality} if each object of each $\mathcal{C}_{\alpha\beta}$ has a left dual.
\end{Def}
Similarly, one defines right duality $X\rightarrow \check{X}$ and (two-sided) duality $X\rightarrow \bar{X}$. As for tensor categories with unit, if $X$ admits a (left or right) dual, it is unique up to isomorphism.
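For instance, if $Y$ and $Y'$ are both left duals of $X\in\mathcal{C}_{\alpha\beta}$, with respective (co)evaluation morphisms $(\mathrm{ev},\mathrm{coev})$ and $(\mathrm{ev}',\mathrm{coev}')$, the usual argument applies verbatim: \[(\mathrm{ev}\otimes \id_{Y'})\circ(\id_Y\otimes \mathrm{coev}'): Y\rightarrow Y'\] is an isomorphism with inverse $(\mathrm{ev}'\otimes \id_{Y})\circ(\id_{Y'}\otimes \mathrm{coev})$, the snake identities providing the required cancellations.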
\begin{Lem}\label{LemMorDua}
\begin{enumerate}[label=(\arabic*)]
\item Let $\mathscr{C}$ be a partial tensor category. If $X$ has left dual $\hat{X}$, then $X$ is a right dual to $\hat{X}$.
\item Let $F$ be a morphism $\mathscr{C}\rightarrow \mathscr{D}$ based on
$\phi:\mathscr{J}\rightarrow \mathscr{I}$. If $X\in \mathcal{C}_{k'l'}$
has a left dual $\hat X$, then $F_{lk}(\hat{X})$ is a left dual to $F_{kl}(X)$.
\end{enumerate}
\end{Lem}
\begin{proof}
We can consider the restriction $\mathscr{C}'$ of $\mathscr{C}$ to any two-element subset $\mathscr{I}'$ of $\mathscr{I}$, and the first property then follows from the usual argument inside the associated global (unital) tensor category $\mathcal{C}'$. For the second property, consider also the associated restriction $\mathscr{D}'$ of $\mathscr{D}$ to $\phi^{-1}(\mathscr{I}')$. We can then again apply the usual arguments to the associated global category $\mathcal{C}'$ and global unital morphism $F:\mathcal{C}'\rightarrow M(\mathcal{D}')$ to see that $F(\hat{X})\cong \widehat{F(X)}$. Using that local units are evidently self-dual and that duality behaves anti-multiplicatively w.r.t.~ tensor products, we can cut down with unit objects on both sides to obtain the statement in the lemma.
\end{proof}
A final ingredient which will be needed is an analytic structure on our partial tensor categories.
\begin{Def} A \emph{partial fusion C$^*$-category} is a partial tensor category $(\mathcal{C},\otimes)$ with duality such that all $\mathcal{C}_{\alpha\beta}$ are semi-simple C$^*$-categories, all functors $\otimes$ are $^*$-functors (in the sense that $(f\otimes g)^* = f^*\otimes g^*$ for morphisms), and the associativity and unit constraints are unitary.
\end{Def}
\begin{Rem}
\begin{enumerate}[label=(\arabic*)]
\item If $\mathscr{C}$ is a partial tensor C$^*$-category, the total category $\mathcal{C}$ only has pre-C$^*$-algebras as endomorphism spaces, as the morphism spaces need not be closed in the C$^*$-norm. On the other hand, $M(\mathcal{C})$ only has $^*$-algebras as endomorphism spaces, since we did not restrict our direct products.
\item The notion of duality for a partial tensor C$^*$-category is the
same as in the absence of a C$^*$-structure. However, because of the
presence of the $^*$-structure, any left dual is automatically a
two-sided dual, and the dual object of $X$ is then simply denoted by $\overline{X}$.
\item We slightly abuse the terminology `fusion', as strictly speaking this would require there to be only a finite set of mutually non-equivalent irreducible objects in each $\mathcal{C}_{\alpha\beta}$.
\item In the same vein, the total C$^*$-category with local units associated to a partial fusion C$^*$-category could be called a \emph{multiplier fusion C$^*$-category}.
\end{enumerate}
\end{Rem}
\begin{Exa} Let $I$ be a set. Then we can consider the partial fusion C$^*$-category $\mathscr{C} = \{\mathrm{Hilb}_{\mathrm{fd}}\}_{I\times I}$ of finite-dimensional Hilbert spaces, with all $\otimes$ the ordinary tensor product. The associated global category is the category $\Gr{\mathrm{Hilb}}{I}{I}{}{\fin}$ of finite-dimensional bi-graded Hilbert spaces. The dual of a Hilbert space $\mathcal{H} \in \mathcal{C}_{kl}$ is just the ordinary dual Hilbert space $\mathcal{H}^* \cong \overline{\mathcal{H}}$, but considered in the category $\mathcal{C}_{lk}$.
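Concretely, for $\mathcal{H}\in \mathcal{C}_{kl}$ with orthonormal basis $(e_i)_i$, one may take as duality morphisms
\[ \mathrm{ev}_{\mathcal{H}}\colon \overline{\mathcal{H}}\otimes \mathcal{H}\rightarrow \mathbb{C},\ \overline{\xi}\otimes \eta\mapsto \langle \xi,\eta\rangle,\qquad \mathrm{coev}_{\mathcal{H}}\colon \mathbb{C}\rightarrow \mathcal{H}\otimes \overline{\mathcal{H}},\ 1\mapsto \sum_i e_i\otimes \overline{e_i},\]
for which the snake identities are immediate.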
\end{Exa}
The notion of a morphism for partial semi-simple tensor C$^*$-categories has to be adapted in the following way.
\begin{Def} Let $\mathscr{C}$ and $\mathscr{D}$ be partial fusion
C$^*$-categories over respective sets $\mathscr{I}$ and
$\mathscr{J}$, and let $\phi:\mathscr{J}\rightarrow \mathscr{I}$. A
\emph{morphism} from $\mathscr{C}$ to $\mathscr{D}$ (based on $\phi$) is a
$\phi$-based morphism $(F,\iota,\mu)$ from $\mathscr{C}$ to $\mathscr{D}$ as
partial tensor categories, with the added requirement that all
$F_{kl}$ are $^*$-functors and all $\iota$- and $\mu$-maps are
isometric.
\end{Def}
\begin{Rem} If a morphism of partial fusion C$^*$-categories is based over a \emph{surjective} map $\phi: \mathscr{J}\rightarrow \mathscr{I}$, then it is automatically faithful. Indeed, by semi-simplicity a non-faithful morphism would send some irreducible object to zero. However, by the duality assumption this would mean that some irreducible unit is sent to zero, which is excluded by surjectivity of $\phi$ and the definition of morphism.
\end{Rem}
\section{Representations of partial compact quantum groups}
In this section, the representation theory of partial compact quantum
groups is investigated.
\subsection{Corepresentations of partial bialgebras}
Let $\mathscr{A}$ be an $I$-partial bialgebra. We will now write its
homogeneous components in the form $A(K) = \eGr{A}{k}{l}{m}{n} =
\Gr{A}{k}{l}{m}{n}$.
We denote by $\Hom_\mathbb{C}(V,W)$ the vector space of linear
maps between two vector spaces $V$ and $W$.
Let $I$ be a set. As in Example \ref{ExaVectBiGr}, an $I^{2}$-graded
vector space $V=\bigoplus_{k,l\in I} \Gru{V}{k}{l}$ will be called
\emph{row- and column-finite-dimensional} (rcfd) if the $\oplus_l
V_{kl}$ (resp.~ $\oplus_k V_{kl}$) are finite-dimensional for each $k$
(resp.~ $l$) fixed, and $\Gr{\mathrm{Vect}}{I}{I}{}{\rcf}$ denotes the category whose objects are rcfd
$I^{2}$-graded vector spaces. Morphisms are linear maps $T$ that
preserve the grading and therefore can be written $T=\prod_{k,l\in I}
\Gru{T}{k}{l}$.
\begin{Def} \label{definition:corep} Let $\mathscr{A}$ be an
$I$-partial bialgebra and let $V=\bigoplus_{k,l} \Gru{V}{k}{l}$
be
an rcfd $I^{2}$-graded vector space. A \emph{corepresentation}
$\mathscr{X}=(\Gr{X}{k}{l}{m}{n})_{k,l,m,n}$ of $\mathscr{A}$ on $V$
is a family of elements
\begin{align} \label{eq:rep-blocks}
\Gr{X}{k}{l}{m}{n} \in \Gr{A}{k}{l}{m}{n} \otimes
\Hom_\mathbb{C}(\Gru{V}{m}{n},\Gru{V}{k}{l})
\end{align}
satisfying
\begin{align}
\label{eq:rep-comultiplication}
(\Delta_{pq} \otimes
\id)(\Gr{X}{k}{l}{m}{n}) &=
\Big{(}\Gr{X}{k}{l}{p}{q}\Big{)}_{13}\Big{(}\Gr{X}{p}{q}{m}{n}\Big{)}_{23},
\\ \label{eq:rep-counit}
(\epsilon \otimes
\id)(\Gr{X}{k}{l}{m}{n})&=\delta_{k,m}\delta_{l,n}\id_{\Gru{V}{k}{l}}
\end{align}
for all possible indices. We also call $(V,\mathscr{X})$ a
\emph{corepresentation}.
\end{Def}
Here, we use the standard leg numbering notation, e.g.~$a_{23}=1\otimes a$.
\begin{Exa} \label{example:rep-triv} Equip the vector space
$\mathbb{C}^{(I)}=\bigoplus_{k\in I} \mathbb{C}$ with the diagonal
$I^{2}$-grading. Then the family $\mathscr{U}$ given by
\begin{align} \label{eq:rep-triv}
\Gr{U}{k}{l}{m}{n} = \delta_{k,l}\delta_{m,n} \UnitC{k}{m} \in
\Gr{A}{k}{l}{m}{n}
\end{align}
is a corepresentation of $\mathscr{A}$ on $\mathbb{C}^{(I)}$. We call it the
\emph{trivial corepresentation}.
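Indeed, using the counit and comultiplication rules for the units of a partial bialgebra, namely $\epsilon(\UnitC{k}{m}) = \delta_{k,m}$ and $\Delta_{pq}(\UnitC{k}{m}) = \delta_{p,q}\,\UnitC{k}{p}\otimes \UnitC{p}{m}$, one computes
\begin{align*}
(\Delta_{pq}\otimes\id)(\Gr{U}{k}{l}{m}{n}) &= \delta_{k,l}\delta_{m,n}\delta_{p,q}\,\UnitC{k}{p}\otimes \UnitC{p}{m} = \Big{(}\Gr{U}{k}{l}{p}{q}\Big{)}_{13}\Big{(}\Gr{U}{p}{q}{m}{n}\Big{)}_{23},
\end{align*}
while $(\epsilon\otimes\id)(\Gr{U}{k}{l}{m}{n}) = \delta_{k,l}\delta_{m,n}\delta_{k,m} = \delta_{k,m}\delta_{l,n}\id_{\mathbb{C}}$, so that \eqref{eq:rep-comultiplication} and \eqref{eq:rep-counit} hold.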
\end{Exa}
\begin{Exa} \label{example:rep-regular}
Assume given an rcfd family of subspaces
\begin{align*}
\Gru{V}{m}{n} \subseteq \bigoplus_{k,l} \Gr{A}{k}{l}{m}{n}
\end{align*}
satisfying
\begin{align} \label{eq:rep-regular-inclusion}
\Delta_{pq}(\Gru{V}{m}{n}) &\subseteq \Gru{V}{p}{q} \otimes
\Gr{A}{p}{q}{m}{n}.
\end{align}
Then the elements $\Gr{X}{k}{l}{m}{n} \in \Gr{A}{k}{l}{m}{n} \otimes
\Hom_{\mathbb{C}}(\Gru{V}{m}{n},\Gru{V}{k}{l})$ defined by
\begin{align*}
\Gr{X}{k}{l}{m}{n}(1 \otimes b) &= \Delta^{\op}_{kl}(b) \in
\Gr{A}{k}{l}{m}{n} \otimes \Gru{V}{k}{l} \quad
\text{for all } b\in \Gru{V}{m}{n}
\end{align*}
form a corepresentation $\mathscr{X}$ of $\mathscr{A}$ on
$V$. Indeed,
\begin{align*}
(\Delta_{pq} \otimes \id)(\Gr{X}{k}{l}{m}{n})(1 \otimes 1 \otimes
b) &=(\Delta_{pq}\otimes \id)(\Delta^{\op}_{kl}(b)) =
\Big{(}\Gr{X}{k}{l}{p}{q}\Big{)}_{13}\Big{(}\Gr{X}{p}{q}{m}{n}\Big{)}_{23}(1
\otimes 1 \otimes b), \\
(\epsilon \otimes \id)(\Gr{X}{k}{l}{m}{n})b &= (\epsilon \otimes
\id)(\Delta^{\op}_{kl}(b)) = \delta_{k,m}\delta_{l,n}b
\end{align*}
for all $b\in \Gru{V}{m}{n}$. We call $\mathscr{X}$ the
\emph{regular corepresentation on $V$}.
\end{Exa}
Morphisms of corepresentations are defined as follows.
\begin{Def}
Let $\mathscr{A}$ be an $I$-partial bialgebra. A \emph{morphism}
$T$ between corepresentations
$(V,\mathscr{X})$ and $(W,\mathscr{Y})$ of $\mathscr{A}$ is a family
of linear maps
\[\Gru{T}{k}{l} \in
\Hom_\mathbb{C}(\Gru{V}{k}{l},\Gru{W}{k}{l})\] satisfying \[(1 \otimes
\Gru{T}{k}{l})\Gr{X}{k}{l}{m}{n} = \Gr{Y}{k}{l}{m}{n}(1 \otimes
\Gru{T}{m}{n})\] for all $k,l,m,n$.
\end{Def}
We denote the category of all corepresentations of $\mathscr{A}$ on rcfd $I^2$-graded vector spaces by $\mathrm{Corep}_{\rcf}(\mathscr{A})$.
We next consider the total form of a corepresentation.
Let $\mathscr{A}$ be a partial bialgebra with total algebra $A$, and
let $V$ be an rcfd $I^{2}$-graded vector space.
Denote by $\lambda^{V}_{k},\rho^{V}_{l} \in \Hom_{\mathbb{C}}(V)$ the
projections onto the summands $\Gru{V}{k}{} = \bigoplus_{q}
\Gru{V}{k}{q}$ and $\Gru{V}{}{l}=\bigoplus_{p}\Gru{V}{p}{l}$
respectively, and identify $\Hom_{\mathbb{C}}(\Gru{V}{m}{n},\Gru{V}{k}{l})$ with
$\lambda^{V}_{k}\rho^{V}_{l}\Hom_{\mathbb{C}}(V)\lambda^{V}_{m}\rho^{V}_{n}$. Denote by $\Hom_{\mathbb{C}}^{0}(V) \subseteq \Hom_{\mathbb{C}}(V)$ the algebraic sum of all
these subspaces. Then we can define a homomorphism
\begin{align*}
\Delta \otimes \id \colon M(A \otimes \Hom_{\mathbb{C}}^{0}(V)) \to M(A
\otimes A \otimes \Hom_{\mathbb{C}}^{0}(V))
\end{align*}
in the same way as we defined $\Delta \colon A \to M(A\otimes A)$.
\begin{Lem} \label{lemma:rep-multiplier}
Let $\mathscr{A}$ be an $I$-partial bialgebra and $V$ an rcfd $I^{2}$-graded vector space. If $\mathscr{X}$ is a
corepresentation of $\mathscr{A}$ on $V$, then the sum
\begin{align}
\label{eq:rep-multiplier}
X:=\sum_{k,l,m,n} \Gr{X}{k}{l}{m}{n} \in M(A
\otimes \Hom_{\mathbb{C}}^{0}(V))
\end{align}
converges strictly and satisfies the following conditions:
\begin{enumerate}[label=(\arabic*)]
\item\label{repma} $(\lambda_{k}\rho_{m} \otimes \id){X}(\lambda_{l}\rho_{n}
\otimes \id) = (1 \otimes \lambda^{V}_{k}\rho^{V}_{l}){X}(1 \otimes
\lambda^{V}_{m}\rho^{V}_{n}) = \Gr{X}{k}{l}{m}{n}$,
\item\label{repmb} $(A \otimes 1){X}$, $ {X}(A \otimes 1)$ and $(1 \otimes
\Hom^{0}_{\mathbb{C}}(V))X(1 \otimes \Hom^{0}_{\mathbb{C}}(V))$ lie in $A \otimes \Hom_{\mathbb{C}}^{0}(V)$,
\item\label{repmc} $(\Delta\otimes \id)(X)=X_{13}X_{23}$,
\item\label{repmd} the sum $(\epsilon \otimes \id)({X}) :=\sum (\epsilon \otimes
\id)(\Gr{X}{k}{l}{m}{n})$ converges in $M(\Hom^{0}_{\mathbb{C}}(V))$ strictly
to $\id_{V}$.
\end{enumerate}
Conversely, if $ X \in M(A \otimes \Hom_{\mathbb{C}}^{0}(V))$ satisfies
\ref{repma}--\ref{repmd} with $\Gr{X}{k}{l}{m}{n}$ defined by \ref{repma}, then
$\mathscr{X}=(\Gr{X}{k}{l}{m}{n})_{k,l,m,n}$ is a corepresentation
of $\mathscr{A}$ on $V$.
\end{Lem}
\begin{proof}
Straightforward.
\end{proof}
\begin{Def} If $\mathscr{X}$ and $X$ are as in Lemma \ref{lemma:rep-multiplier}, we will call $X$ the \emph{corepresentation multiplier} of $\mathscr{X}$.
\end{Def}
Let us relate the notion of a corepresentation multiplier to the
notion of a full comodule for a weak multiplier bialgebra introduced in \cite[Definition 2.2 and Definition 4.2]{Boh2}. Recall first from \cite[Theorem 4.5]{Boh2} that if $(A,\Delta)$ is a weak multiplier bialgebra, then any full comodule over $A$ carries the structure of a firm bimodule over the base algebra. In particular, if $A$ arises from a partial bialgebra, any comodule is bigraded over the object set.
\begin{Prop} Let $\mathscr{A}$ be a partial bialgebra with corepresentation $X$ on $V = \oplus_{m,n} \Gru{V}{m}{n}$. Then the couple \[\lambda_X: V\otimes A \rightarrow V\otimes A,\quad v\otimes a \mapsto X_{21}(v\otimes a),\]
\[\rho_X: V\otimes A\rightarrow V\otimes A,\quad v\otimes a \mapsto (1\otimes a)X_{21}(v\otimes 1)\] is well-defined and defines a full comodule for the weak multiplier bialgebra $(A,\Delta)$. Conversely, any full comodule which is rcfd for the induced bigrading arises in this way.
\end{Prop}
\begin{proof} Well-definedness of the couple $(\lambda_X,\rho_X)$ is immediate from the local support condition, and it is then clear that $(1\otimes a)\lambda_X(v\otimes b) = \rho_X(v\otimes a)(1\otimes b)$. The conditions (2.11) and (2.12) in \cite[Definition 2.12]{Boh2} are then easily seen to follow from the identity $(\Delta\otimes \id)(X) = X_{13}X_{23}$. Finally, as for $v\in \Gru{V}{m}{n}$ one has $(\id\otimes \epsilon)(X_{21}(v\otimes \UnitC{n}{n})) = (\epsilon\otimes \id)(\Gr{X}{m}{n}{m}{n}) v = v$, it follows that $(\lambda_X,\rho_X)$ is full.
Assume now conversely that $(\lambda,\rho)$ defines a full comodule structure on $V = \oplus_{m,n} \Gru{V}{m}{n}$. From the definition of the grading, it follows that we obtain maps \[\Gru{V}{m}{n}\rightarrow \Gru{V}{k}{l}\otimes \Gr{A}{k}{l}{m}{n},\quad v \mapsto (1\otimes \UnitC{k}{m})\lambda(v\otimes \UnitC{l}{n}).\] As the $\Gru{V}{m}{n}$ are finite-dimensional, there hence exists $\Gr{X}{k}{l}{m}{n} \in \Gr{A}{k}{l}{m}{n}\otimes \Hom_{\mathbb{C}}(\Gru{V}{m}{n},\Gru{V}{k}{l})$ such that
\[(\Gr{X}{k}{l}{m}{n})_{21}(v\otimes 1)=(1\otimes \UnitC{k}{m})\lambda(v\otimes \UnitC{l}{n}).\] As $(\lambda,\rho)$ form a multiplier, it then follows that in fact \[(\Gr{X}{k}{l}{m}{n})_{21}(v\otimes a) = (1\otimes \UnitC{k}{m})\lambda(v\otimes a).\] From (2.12) in \cite[Definition 2.12]{Boh2}, it is then immediate that the $\Gr{X}{k}{l}{m}{n}$ satisfy \eqref{eq:rep-comultiplication}. Moreover, from the proof of \cite[Theorem 4.5]{Boh2} it follows that for $v\in \Gru{V}{m}{n}$, one has \[v = (\id\otimes \epsilon)(X_{21}(v\otimes \UnitC{n}{n})),\] hence \eqref{eq:rep-counit} holds, and $\mathscr{X}$ forms a corepresentation.
\end{proof}
We present some more general constructions for corepresentations of partial bialgebras. Given an rcfd $I^{2}$-graded vector space $V=\bigoplus_{k,l} \Gru{V}{k}{l}$
and a family of subspaces $\Gru{W}{k}{l} \subseteq \Gru{V}{k}{l}$, we
denote by $\iota^{W}\colon W\to V$ and $\pi^{W} \colon V \to
V/W=\bigoplus_{k,l} \Gru{V}{k}{l}/\Gru{W}{k}{l}$ the embedding and the
quotient map.
\begin{Def} Let $(V,\mathscr{X})$ be a
corepresentation of a partial bialgebra $\mathscr{A}$. We call a
family of subspaces $\Gru{W}{k}{l} \subseteq \Gru{V}{k}{l}$
\emph{invariant (w.r.t.\ $\mathscr{X}$)} if
\begin{align} \label{eq:rep-invariant} (1\otimes
\Gr{\pi}{}{W}{k}{l})\Gr{X}{k}{l}{m}{n}(1 \otimes
\Gr{\iota}{}{W}{m}{n}) =0.
\end{align}
We call $(V,\mathscr{X})$
\emph{irreducible} if the only invariant families of subspaces are
$(0)_{k,l}$ and $(\Gru{V}{k}{l})_{k,l}$.
\end{Def}
The next lemmas deal with restriction, factorisation and Schur's lemma. We skip their proofs, which are straightforward.
\begin{Lem}
Let $(V,\mathscr{X})$ be a corepresentation
of a partial bialgebra and let $\Gru{W}{k}{l}
\subseteq \Gru{V}{k}{l}$ be an invariant family of subspaces. Then
there exist unique corepresentations
$(W,(\iota^{W})^{*}\mathscr{X})$ and $(V/W,\pi^{W}_{*}\mathscr{X})$
such that $\iota^{W}$ and $\pi^{W}$ are morphisms $(W,(\iota^{W})^{*}\mathscr{X}) \to (V,\mathscr{X}) \to (V/W,\pi^{W}_{*}\mathscr{X})$.
\end{Lem}
\begin{Lem} Let $T$ be a morphism of
corepresentations $(V,\mathscr{X})$ and $(W,\mathscr{Y})$ of a
partial bialgebra. Then the families of subspaces $\ker
\Gru{T}{k}{l} \subseteq \Gru{V}{k}{l}$ and $\img\Gru{T}{k}{l}
\subseteq \Gru{W}{k}{l}$ are invariant. In particular, if
$(V,\mathscr{X})$ and $(W,\mathscr{Y})$ are irreducible, then either
all $\Gru{T}{k}{l}$ are zero or all $\Gru{T}{k}{l}$ are
isomorphisms.
\end{Lem}
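For instance, invariance of the kernels can be verified directly: by the morphism property,
\begin{align*}
(1\otimes \Gru{T}{k}{l})\Gr{X}{k}{l}{m}{n}(1 \otimes \Gr{\iota}{}{\ker T}{m}{n}) = \Gr{Y}{k}{l}{m}{n}(1\otimes \Gru{T}{m}{n}\Gr{\iota}{}{\ker T}{m}{n}) = 0,
\end{align*}
so $\Gr{X}{k}{l}{m}{n}(1 \otimes \Gr{\iota}{}{\ker T}{m}{n})$ takes values in $\Gr{A}{k}{l}{m}{n}\otimes \Hom_{\mathbb{C}}(\ker \Gru{T}{m}{n},\ker \Gru{T}{k}{l})$, which is precisely \eqref{eq:rep-invariant} for the family $(\ker \Gru{T}{k}{l})_{k,l}$.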
Given corepresentations $\mathscr{X}$ and $\mathscr{Y}$ of
a partial bialgebra $\mathscr{A}$ on respective rcfd $I^{2}$-graded vector spaces $V$ and $W$,
we obtain an $I^{2}$-graded vector space $V\oplus W$ by taking
component-wise direct sums, and use the canonical embedding
\begin{align*}
\Hom(\Gru{V}{m}{n},\Gru{V}{k}{l}) \oplus
\Hom(\Gru{W}{m}{n},\Gru{W}{k}{l}) \hookrightarrow
\Hom(\Gru{V}{m}{n} \oplus \Gru{W}{m}{n},\Gru{V}{k}{l} \oplus
\Gru{W}{k}{l})
\end{align*}
to define the \emph{direct sum} $\mathscr{X} \oplus \mathscr{Y}$,
which is a corepresentation of $\mathscr{A}$ on $V\oplus W$. Then the
natural embeddings from $V$ and $W$ into $V\oplus W$ and the
projections onto $V$ and $W$ are evidently morphisms of
corepresentations. More generally, given a family of corepresentations
$((V_{\alpha},\mathscr{X}_{\alpha}))_{\alpha}$ such that the sum
$\bigoplus_{\alpha} V_{\alpha}$ is rcfd again, one
can form the direct sum $\bigoplus_{\alpha} \mathscr{X}_{\alpha}$,
which is a corepresentation on $\bigoplus_{\alpha} V_{\alpha}$.
\begin{Prop}
Let $\mathscr{A}$ be an $I$-partial bialgebra. Then $\mathrm{Corep}_{\rcf}(\mathscr{A})$
is a $\mathbb{C}$-linear abelian category, and the forgetful functor
$\mathrm{Corep}_{\rcf}(\mathscr{A}) \to \Gr{\mathrm{Vect}}{I}{I}{}{\rcf}$ lifts kernels, cokernels and biproducts.
\end{Prop}
\begin{proof}
The preceding considerations show that the forgetful functor lifts
kernels, cokernels and biproducts. Moreover, in
$\mathrm{Corep}_{\rcf}(\mathscr{A})$, every monic is a kernel
and every epic is a cokernel because the same is true in $\Gr{\mathrm{Vect}}{I}{I}{}{\rcf}$
and because kernels and cokernels lift.
\end{proof}
\subsection{Corepresentations of partial Hopf algebras}
If $\mathscr{A}$ is a partial Hopf algebra, then every
corepresentation multiplier has a generalized inverse.
\begin{Lem} \label{lemma:rep-invertible}
Let $(V,\mathscr{X})$ be a corepresentation of a partial Hopf
algebra $\mathscr{A}$. Then with $\Gr{Z}{k}{l}{m}{n} = (S\otimes \id)(\Gr{X}{n}{m}{l}{k})$, we have $\Gr{Z}{k}{l}{m}{n}\in \Gr{A}{k}{l}{m}{n}\otimes \Hom_{\mathbb{C}}(\Gru{V}{l}{k},\Gru{V}{n}{m})$ and
\begin{align*}
\Gr{X}{k}{l}{m}{n} \cdot \Gr{Z}{l}{k'}{n}{m'} &=0 \text{ if } m'\neq m &
\sum_{n} \Gr{X}{k}{l}{m}{n} \cdot \Gr{Z}{l}{k'}{n}{m} &= \delta_{k,k'}\UnitC{k}{m} \otimes
\id_{\Gru{V}{k}{l}}, \\
\Gr{Z}{n}{m}{l}{k}\cdot \Gr{X}{m}{n'}{k}{l'} &= 0
\text{ if } n\neq n' &
\sum_{m} \Gr{Z}{n}{m}{l}{k}\cdot \Gr{X}{m}{n}{k}{l'} &=
\delta_{l,l'} \UnitC{n}{l} \otimes \id_{\Gru{V}{k}{l}}.
\end{align*}
In particular, the multiplier $Z:= (S \otimes
\id)(X) \in M(A \otimes \Hom_{\mathbb{C}}^{0}(V))$
satisfies
\begin{align} \label{eq:rep-generalized-inverse}
XZ &= \sum_{k} \lambda_{k} \otimes \lambda^{V}_{k}, &
ZX &= \sum_{l} \rho_{l} \otimes \rho^{V}_{l},
\end{align}
and is a generalized inverse of $X$ in the sense that $XZX=X$ and $ZXZ=Z$.
\end{Lem}
\begin{proof}
The grading property of $\Gr{Z}{k}{l}{m}{n}$ follows from $S(\Gr{A}{p}{q}{r}{s})\subseteq \Gr{A}{s}{r}{q}{p}$, and then the upper left hand identity is immediate. To
verify the upper right hand one, we use identities \eqref{eq:rep-comultiplication}, \eqref{eq:rep-counit} and \eqref{eq:antipode-pi-l}. Namely, with $M_{A}$ denoting the multiplication of $A$, we find
\begin{align*}
\sum_{n} \Gr{X}{k}{l}{m}{n} \cdot (S \otimes
\id)(\Gr{X}{m}{n}{k'}{l}) &= \sum_{n} (M_{A} (\id \otimes S)
\otimes \id)((\Gr{X}{k}{l}{m}{n})_{13}(\Gr{X}{m}{n}{k'}{l})_{23})
\\ &= \sum_{n} (M_{A} (\id \otimes S) \Delta_{m,n} \otimes
\id)(\Gr{X}{k}{l}{k'}{l}) \\
&= \delta_{k,k'} \UnitC{k}{m} \otimes (\epsilon \otimes
\id)(\Gr{X}{k}{l}{k'}{l})
\\ &=
\delta_{k,k'}\UnitC{k}{m} \otimes
\id_{\Gru{V}{k}{l}}. \end{align*} The other
equations follow similarly, and the assertions concerning $Z$ are
direct consequences.
\end{proof}
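Let us make the last step explicit: by \eqref{eq:rep-generalized-inverse} and condition \ref{repma} of Lemma \ref{lemma:rep-multiplier},
\begin{align*}
XZX = \Big{(}\sum_{k}\lambda_{k}\otimes \lambda^{V}_{k}\Big{)}X = X,
\end{align*}
since $(\lambda_{k}\otimes \id)\Gr{X}{k'}{l}{m}{n} = \delta_{k,k'}\Gr{X}{k'}{l}{m}{n} = (1\otimes \lambda^{V}_{k})\Gr{X}{k'}{l}{m}{n}$; the identity $ZXZ=Z$ follows in the same way from $ZX = \sum_{l}\rho_{l}\otimes \rho^{V}_{l}$.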
\begin{Def}
Let $\mathscr{X}$ be a corepresentation of a partial Hopf
algebra. We denote the generalized inverse $(S \otimes \id)(X)$
of $X$ by $X^{-1}$ and let
\begin{align*}
\Gr{(X^{-1})}{k}{l}{m}{n}=(S \otimes \id)(\Gr{X}{n}{m}{l}{k}) \in
\Gr{A}{k}{l}{m}{n} \otimes \Hom_{\mathbb{C}}(\Gru{V}{l}{k},\Gru{V}{n}{m})
\end{align*}
\end{Def}
For completeness, we mention the following converse to Lemma \ref{lemma:rep-invertible}.
\begin{Lem}
Let $\mathscr{A}$ be an $I$-partial bialgebra, $V$ an rcfd
$I^{2}$-graded vector space and $X,Z \in M(A \otimes
\Hom_{\mathbb{C}}^{0}(V))$. If conditions \ref{repma}--\ref{repmc} in Lemma
\ref{lemma:rep-multiplier} and \eqref{eq:rep-generalized-inverse}
hold, then the corresponding family
$\mathscr{X}=(\Gr{X}{k}{l}{m}{n})_{k,l,m,n}$ is a corepresentation
of $\mathscr{A}$ on $V$.
\end{Lem}
\begin{proof}
We have to verify condition \ref{repmd} in Lemma
\ref{lemma:rep-multiplier}. If $(k,l) \neq (p,q)$, then
$\epsilon(\Gr{A}{k}{l}{p}{q})=0$ and hence $(\epsilon
\otimes \id)(\Gr{X}{k}{l}{p}{q}) =0$. The counit property and condition
\ref{repmc} in Lemma \ref{lemma:rep-multiplier} imply
\begin{align*}
\Gr{X}{k}{l}{m}{n} &= ((\epsilon\otimes \id) \Delta \otimes
\id)( \Gr{X}{k}{l}{m}{n})
\\ & = \sum_{p,q} (\epsilon\otimes \id \otimes
\id)\left((\Gr{X}{k}{l}{p}{q})_{13}(\Gr{X}{p}{q}{m}{n})_{23}\right)
= (1 \otimes \Gru{T}{k}{l})\Gr{X}{k}{l}{m}{n},
\end{align*}
where $\Gru{T}{k}{l}=(\epsilon \otimes \id)(\Gr{X}{k}{l}{k}{l}) \in
\Hom_{\mathbb{C}}(\Gru{V}{k}{l})$. Therefore, $T=\prod_{k,l} \Gru{T}{k}{l}$ satisfies $(1 \otimes T)X =
X$. Multiplying on the right by $Z$, we find
$T\lambda^{V}_{k}=\lambda^{V}_{k}$ for all $k$. Thus, $T=\id_{V}$.
\end{proof}
\begin{Lem} \label{lemma:rep-total-morphism}
A bigraded map $T$ defines a morphism from
$(V,\mathscr{X})$ to $(W,\mathscr{Y})$ if and only if one of the following equivalent relations holds:
\begin{align*}
Y^{-1}(1 \otimes T)X&=\sum_{m,n} \rho_{n} \otimes \Gru{T}{m}{n},
&
Y(1\otimes T)X^{-1} &=\sum_{k,l} \lambda_{k} \otimes \Gru{T}{k}{l}.
\end{align*}
\end{Lem}
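Indeed, if $T$ is a morphism, then summing its defining relations over all indices gives $(1\otimes T)X = Y(1\otimes T)$ in total form, so that by Lemma \ref{lemma:rep-invertible}
\begin{align*}
Y^{-1}(1\otimes T)X = Y^{-1}Y(1\otimes T) = \Big{(}\sum_{n}\rho_{n}\otimes \rho^{W}_{n}\Big{)}(1\otimes T) = \sum_{m,n}\rho_{n}\otimes \Gru{T}{m}{n}.
\end{align*}
The second relation and the converse implications follow similarly by multiplying with the appropriate corepresentation multipliers.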
\subsection{Tensor product and duality}
Recall from Example \ref{ExaVectBiGr} that the category $\Gr{\mathrm{Vect}}{I}{I}{}{\rcf}$ is a tensor category. The tensor product of morphisms is the
restriction of the ordinary tensor product. We will interpret this product as being strictly associative. The unit for this product is the vector
space $\mathbb{C}^{(I)}=\bigoplus_{k\in I} \mathbb{C}$.
Given $V$ and $W$ in $\Gr{\mathrm{Vect}}{I}{I}{}{\rcf}$, we identify $\Hom_\mathbb{C}(\Gru{V}{m}{n},\Gru{V}{k}{l})\otimes
\Hom_\mathbb{C}(\Gru{W}{n}{q},\Gru{W}{l}{p})$ with a subspace of
\begin{align*}
\Hom_\mathbb{C}(\Gru{V}{m}{n}\otimes
\Gru{W}{n}{q},\Gru{V}{k}{l}\otimes \Gru{W}{l}{p})\subseteq
\Hom_\mathbb{C}(\Gru{(}{m}{}V\underset{I}{\otimes}
W\Gru{)}{}{q},\Gru{(}{k}{}V\underset{I}{\otimes} W\Gru{)}{}{p}).
\end{align*}
We can now construct a product of corepresentations as follows.
\begin{Lem} Let $\mathscr{X}$ and $\mathscr{Y}$ be corepresentations of
$\mathscr{A}$ on respective rcfd $I^{2}$-graded vector spaces $V$ and
$W$. Then the sum
\begin{align} \label{eq:rep-product-blocks}
\Gr{(X\Circt Y)}{k}{p}{m}{q} := \sum_{l,n}
\left(\Gr{X}{k}{l}{m}{n}\right)_{12}\left(\Gr{Y}{l}{p}{n}{q}\right)_{13}
\end{align}
has only finitely many non-zero terms, and the elements
\[\Gr{(X\Circt
Y)}{k}{p}{m}{q}\in \Gr{A}{k}{p}{m}{q} \otimes
\Hom_\mathbb{C}(\Gru{(}{m}{}V\underset{I}{\otimes} W\Gru{)}{}{q},\Gru{(}{k}{}V\underset{I}{\otimes} W\Gru{)}{}{p})
\]
define a corepresentation $\mathscr{X} \Circt \mathscr{Y}$ of
$\mathscr{A}$ on $V\underset{I}{\otimes} W$.
\end{Lem}
\begin{proof}
The sum \eqref{eq:rep-product-blocks} is finite because $V$ and
$W$ are rcfd. Using the identification above, we
see that
\[
\left(\Gr{X}{k}{l}{m}{n}\right)_{12}\left(\Gr{Y}{l}{p}{n}{q}\right)_{13}\in \Gr{A}{k}{p}{m}{q} \otimes \Hom_\mathbb{C}(\Gru{(}{m}{}V\underset{I}{\otimes}
W\Gru{)}{}{q},\Gru{(}{k}{}V\underset{I}{\otimes} W\Gru{)}{}{p}).\] Now, the fact that $\Gr{(X\Circt
Y)}{k}{p}{m}{q}$ is a corepresentation follows easily
from the multiplicativity of $\Delta$ and the weak multiplicativity
of $\epsilon$.
\end{proof}
\begin{Rem} \label{remark:rep-tensor-multiplier}
The corepresentation multiplier associated to $\mathscr{X}\Circt
\mathscr{Y}$ is just $X_{12}Y_{13}$.
\end{Rem}
\begin{Prop} \label{prop:rep-tensor} Let $\mathscr{A}$ be an
$I$-partial bialgebra. Then $\mathrm{Corep}_{\rcf}(\mathscr{A})$ carries the
structure of a strict tensor category such that the product of corepresentations $(V,\mathscr{X})$ and
$(W,\mathscr{Y})$ is the corepresentation $(V\underset{I}{\otimes}
W,\mathscr{X}\Circt \mathscr{Y})$, the unit is the trivial
corepresentation $(\mathbb{C}^{(I)},\mathscr{U})$, and the forgetful functor
$\mathrm{Corep}_{\rcf}(\mathscr{A}) \to \Gr{\mathrm{Vect}}{I}{I}{}{\rcf}$ is a strict tensor functor.
\end{Prop}
\begin{proof}
This is clear.
\end{proof}
Given a corepresentation of a partial Hopf algebra, one can use the
antipode to define a contragredient corepresentation on a dual space.
Denote the dual of vector spaces $V$ and linear maps $T$ by
$\dual{V}$ and $\dualop{T}$, respectively, and define the dual of an
$I^{2}$-graded vector space $V=\bigoplus_{k,l} \Gru{V}{k}{l}$ to be
the space
\begin{align*}
\dual{V}=\bigoplus_{k,l} \Gru{(\dual{V})}{k}{l}, \quad \text{where }
\Gru{(\dual{V})}{k}{l} = \dual{(\Gru{V}{l}{k})}.
\end{align*}
\begin{Prop}
Let $\mathscr{A}$ be an $I$-partial Hopf algebra with antipode $S$
and let $(V,\mathscr{X})$ be a
corepresentation of $\mathscr{A}$. Then $\dual{V}$ and the family
$\dualco{\mathscr{X}}$ given by
\begin{align} \label{eq:rep-left-dual}
\Gr{\dualco{X}}{k}{l}{m}{n} := (S \otimes \dualop{-})(\Gr{X}{n}{m}{l}{k})
\end{align}
form a corepresentation of $\mathscr{A}$ which is a left dual of $(V,\mathscr{X})$. If the antipode
$S$ of $\mathscr{A}$ is bijective, then $\dual{V}$ and the family
$\dualcor{\mathscr{X}}$ given by
\begin{align} \label{eq:rep-right-dual}
\Gr{\dualcor{X}}{k}{l}{m}{n} :=(S^{-1}
\otimes \dualop{-})(\Gr{X}{n}{m}{l}{k})
\end{align}
form a corepresentation
of $\mathscr{A}$ which is a
right dual of $(V,\mathscr{X})$.
\end{Prop}
\begin{proof}
We only prove the assertion concerning
$(\dual{V},\dualco{\mathscr{X}})$. To see that this is a corepresentation, note that the element
\eqref{eq:rep-left-dual} belongs to $\Gr{A}{k}{l}{m}{n} \otimes
\Hom_{\mathbb{C}}(\Gru{(\dual{V})}{m}{n},\Gru{(\dual{V})}{k}{l})$ and use
the relations $\Delta \circ S = (S \otimes S)\Delta^{\op}$ and
$\epsilon \circ S = \epsilon$ from Corollary
\ref{corollary:antipode} and Lemma \ref{LemCoAnt}.
Let us show that $(\dual{V},\dualco{\mathscr{X}})$ is a left dual
of $(V,\mathscr{X})$.
Given a finite-dimensional vector space $W$, denote by $\mathrm{ev}_{W}
\colon \dual{W} \otimes W \to \mathbb{C}$ the evaluation map and by $\mathrm{coev}_{W}
\colon \mathbb{C} \to W \otimes \dual{W}$ the coevaluation map, given by
$1\mapsto \sum_{i} w_{i} \otimes \dual{w_{i}}$ if $(w_{i})_{i}$
and $(\dual{w_{i}})_{i}$ are dual bases of $W$ and
$\dual{W}$. With respect to these maps, $\dual{W}$ is a left dual
of $W$. If $F\colon W_{1}\to W_{2}$ is a linear map between
finite-dimensional spaces, then
\begin{align} \label{eq:coev-vee} (\id_{W_{2}} \otimes \dualop{F}) \circ \mathrm{coev}_{W_{2}} &= (F \otimes \id_{W_{1}^{*}})\circ
\mathrm{coev}_{W_{1}}, &
\mathrm{ev}_{W_{1}}(\dualop{F}
\otimes \id_{W_{1}})&= \mathrm{ev}_{W_{2}}(\id_{W_{2}^{*}} \otimes F).
\end{align}
Now, define morphisms $\mathrm{coev} \colon \mathbb{C}^{(I)} \to V\underset{I}{\otimes} \dual{V}$ and
$\mathrm{ev} \colon \dual{V} \underset{I}{\otimes} V \to \mathbb{C}^{(I)}$ by
\begin{align*}
\Gru{\mathrm{coev}}{k}{l} &= \delta_{k,l} \sum_{p} \mathrm{coev}_{\tiny\Gru{V}{k}{p}} \colon
\mathbb{C} \to
\Gru{(}{k}{}V\underset{I}{\otimes} \dual{V}\Gru{)}{}{l}, &
\Gru{\mathrm{ev}}{k}{l} &= \delta_{k,l} \sum_{p} \mathrm{ev}_{\Gru{V}{p}{k}} \colon
\Gru{(}{k}{}\dual{V}\underset{I}{\otimes} V\Gru{)}{}{l} \to \mathbb{C}.
\end{align*}
One easily checks that with respect to these maps, $\dual{V}$ is a
left dual of $V$ in $\Gr{\mathrm{Vect}}{I}{I}{}{\rcf}$.
We therefore only need to show that $\mathrm{ev}$ is a morphism from
$\dualco{\mathscr{X}}\Circt\mathscr{X}$ to $\mathscr{U}$ and that $\mathrm{coev}$ is
a morphism from $\mathscr{U}$ to
$\mathscr{X}\Circt\dualco{\mathscr{X}}$. But \eqref{eq:coev-vee} and
Lemma \ref{lemma:rep-invertible} imply
\begin{align*}
(1\otimes \Gru{\mathrm{ev}}{k}{k})
\sum_{l,n} \big(
\Gr{\dualco{X}}{k}{l}{m}{n}\big)_{12}
\big(\Gr{X}{l}{k}{n}{q}\big)_{13} &=
(1\otimes \Gru{\mathrm{ev}}{k}{k})
\sum_{l,n}
(S \otimes \dualop{-})(\Gr{X}{n}{m}{l}{k})_{12}
(\Gr{X}{l}{k}{n}{q})_{13} \\ &=
(1\otimes \Gru{\mathrm{ev}}{m}{m}) \sum_{l,n}
(S \otimes \id)(\Gr{X}{n}{m}{l}{k})_{13}(\Gr{X}{l}{k}{n}{q})_{13} \\
&= \delta_{m,q}\UnitC{k}{q}\otimes \Gru{\mathrm{ev}}{m}{m} \\
&= \Gr{U}{k}{k}{m}{q}(1 \otimes \Gru{\mathrm{ev}}{m}{m}).
\end{align*}
A similar calculation shows that also $\mathrm{coev}$ is a morphism, whence the claim follows.
\end{proof}
\begin{Cor} \label{cor:rep-tensor-duality}
Let $\mathscr{A}$ be a partial Hopf algebra. Then
$\mathrm{Corep}_{\rcf}(\mathscr{A})$ is a tensor category with left
duals and, if the antipode of $\mathscr{A}$ is invertible, with right duals.
\end{Cor}
Let $\mathscr{A}$ be an $I$-partial Hopf algebra. Then the tensor
unit in $\mathrm{Corep}_{\rcf}(\mathscr{A})$, which is the trivial corepresentation
$\mathscr{U}$ on $\mathbb{C}^{(I)}$, need not be irreducible. Instead, it decomposes
into irreducible corepresentations indexed by the hyperobject set $\mathscr{I}$ of equivalence
classes for the relation $\sim$ on $I$ given by $k \sim l \iff
\UnitC{k}{l}\neq 0$ (see Remark
\ref{remark:index-equivalence}).
\begin{Lem}
Let $\mathscr{A}$ be an $I$-partial Hopf algebra and let
$(I_{\alpha})_{\alpha\in \mathscr{I}}$ be a labelled partition of $I$ into
equivalence classes for the relation $\sim$. Then for each $\alpha\in \mathscr{I}$, the subspace
$\mathbb{C}^{(I_{\alpha})} \subseteq \mathbb{C}^{(I)}$ is invariant, and the restriction
$\mathscr{U}_{\alpha}$ of $\mathscr{U}$ to $\mathbb{C}^{(I_{\alpha})}$ is
irreducible. In particular, $\mathscr{U}=\bigoplus_{\alpha\in\mathscr{I}}
\mathscr{U}_{\alpha}$ is a decomposition into irreducible corepresentations.
\end{Lem}
\begin{proof}
Immediate from the fact that $\Gr{U}{k}{k}{m}{m} =
\UnitC{k}{m}$ is $1$ if $k\sim m$ and $0$ if $k\not\sim m$.
\end{proof}
\begin{Def} We denote by $\mathrm{Corep}(\mathscr{A})$ the category of corepresentations $(V,\mathscr{X})$ for which there exists a finite subset of the hyperobject set $\mathscr{I}$ such that $\Gru{V}{k}{l}=0$ for the equivalence classes of $k,l$ outside this subset.
\end{Def}
It is easily seen that $\mathrm{Corep}(\mathscr{A})$ is then a tensor category with local units indexed by $\mathscr{I}$. We will use the same notation for the associated partial tensor category.
\subsection{Decomposition into irreducibles}
When there is an invariant integral around, one can average morphisms of vector spaces to obtain morphisms of corepresentations.
\begin{Lem} \label{lem:rep-average} Let $(V,\mathscr{X})$ and
$(W,\mathscr{Y})$ be corepresentations of a partial
Hopf algebra $\mathscr{A}$ with an invariant integral $\phi$, and let
$\Gru{T}{k}{l} \in \Hom_{\mathbb{C}}(\Gru{V}{k}{l},\Gru{W}{k}{l})$ for all $k,l\in I$. Then for each $m,n$ fixed, the families
\begin{align*}
\Gr{\check T}{m}{n}{k}{l} &:= (\phi \otimes
\id)(\Gr{(Y^{-1})}{n}{m}{l}{k}(1\otimes
\Gru{T}{m}{n})\Gr{X}{m}{n}{k}{l}), \\
\Gr{\hat T}{m}{n}{k}{l} &:=(\phi \otimes
\id)(\Gr{Y}{k}{l}{m}{n}(1\otimes
\Gru{T}{m}{n})\Gr{(X^{-1})}{l}{k}{n}{m})
\end{align*}
form morphisms $\Grd{\check{T}}{m}{n}$ and $\Grd{\hat{T}}{m}{n}$ from $(V,\mathscr{X})$ to $(W,\mathscr{Y})$.
\end{Lem}
\begin{proof} Clearly, we may suppose that $T$ is supported only on the component at index $(m,n)$, and we may then drop the upper indices and simply write $\Gru{\check{T}}{k}{l}$ and $\Gru{\hat{T}}{k}{l}$. Then
in total form, $\check{T}=(\phi \otimes \id)(Y^{-1}(1 \otimes T)X)$
and $\hat{T}=(\phi \otimes \id)(Y(1 \otimes T)X^{-1})$. Now, Lemma
\ref{lemma:rep-multiplier} and Lemma \ref{lemma:total-integral}
imply
\begin{align*}
Y^{-1}(1 \otimes \check{T})X &= (\phi \otimes \id \otimes
\id)((Y^{-1})_{23}(Y^{-1})_{13}(1 \otimes 1
\otimes T)X_{13}X_{23}) \\
&= ((\phi \otimes\id) \Delta \otimes \id)(Y^{-1}(1 \otimes T)X) \\
&= \sum_{l} \rho_{l} \otimes (\phi \otimes \id)((\rho_{l} \otimes
1)Y^{-1}(1 \otimes T)X) \\
&= \sum_{k,l} \rho_{l} \otimes \Gru{\check T}{k}{l},
\end{align*}
whence $\check{T}$ is a morphism from $\mathscr{X}$ to $\mathscr{Y}$
by Lemma \ref{lemma:rep-total-morphism}. The assertion for $\hat
T$ follows similarly.
\end{proof}
\begin{Lem}
Let $\mathscr{A}$ be an $I$-partial Hopf algebra with an invariant integral $\phi$.
Let $(V,\mathscr{X})$ be a corepresentation
and $\Gru{W}{k}{l} \subseteq \Gru{V}{k}{l}$ an invariant family of
subspaces. Then there exists an idempotent endomorphism $T$ of
$(V,\mathscr{X})$ such that $\Gru{W}{k}{l}=\img\Gru{T}{k}{l}$ for
all $k,l$.
\end{Lem}
\begin{proof}
By a direct sum decomposition, we may assume that $V$ is in a fixed component $\mathrm{Corep}(\mathscr{A})_{\alpha\beta}$. For all $k\in I_{\alpha},l\in I_{\beta}$, choose idempotent endomorphisms $\Gru{T}{k}{l}$ of $\Gru{V}{k}{l}$
with image $\Gru{W}{k}{l}$. Let
$\mathscr{Y}$ be the restriction of $\mathscr{X}$ to $W$. By Lemma
\ref{lem:rep-average}, we obtain morphisms $\Grd{\check{T}}{m}{n}$
from $(V,\mathscr{X})$ to $(W,\mathscr{Y})$, which we can also
interpret as endomorphisms of $(V,\mathscr{X})$. Fix $n\in
I_{\beta}$ and write $\Grd{\check{T}}{}{n} = \sum_m
\Grd{\check{T}}{m}{n}$ (using column-finiteness of $V$). We claim
that $W$ is the image of $\Grd{\check{T}}{}{n}$.
In
total form, invariance of $W$ implies \[(1 \otimes T)X(1
\otimes T)=X(1\otimes T).\] Applying
$(S \otimes \id)$, we get \[(1 \otimes T)X^{-1}(1
\otimes T)=X^{-1}(1\otimes T).\]
We combine Lemma
\ref{lemma:rep-multiplier}, Lemma \ref{lemma:rep-invertible} and
normalisation of $\phi$, and find
\begin{align*}
\Grd{\check{T}}{}{n} T &= (\phi \otimes \id)(X^{-1}(1 \otimes
\rho_{n}^{V}T)X(1 \otimes T)) \\ &=
(\phi \otimes \id)(X^{-1}(1 \otimes
\rho_{n}^{V})X(1 \otimes T)) \\
&=
\sum_l \phi(\UnitC{n}{l}) \rho^{V}_{l}T \\& =T,
\end{align*}
since we only have to sum over $l\in I_{\beta}$ as $n \in I_{\beta}$ by assumption.
Now as $W$ is invariant and $T$ sends $V$ into $W$, we have that
$\Gr{\check{T}}{}{n}{k}{l}$ sends $\Gru{V}{k}{l}$ into
$\Gru{W}{k}{l}$. Hence $\img\Grd{\check{T}}{}{n}=\img T=W$,
and $\Grd{\check{T}}{}{n}$ is the desired intertwiner.
\end{proof}
\begin{Cor} \label{cor:rep-cosemisimple}
Let $\mathscr{A}$ be a partial Hopf algebra with an invariant integral. Then
every corepresentation of $\mathscr{A}$ decomposes into a (possibly infinite) direct
sum of irreducible corepresentations.
\end{Cor}
\begin{proof}
The preceding lemma shows that every non-zero corepresentation is either
irreducible or the direct sum of two non-zero corepresentations, and we can apply Zorn's lemma.
\end{proof}
We can now prove that the category $\mathrm{Corep}(\mathscr{A})$ of a partial Hopf algebra with invariant integral is semi-simple, that is, every object is a finite direct sum of irreducible objects. If one relaxes the definition of semisimplicity to allow infinite direct sums, this also holds for the potentially bigger category $\mathrm{Corep}_{\rcf}(\mathscr{A})$.
We first state a lemma which will also be convenient on other occasions.
\begin{Lem}\label{LemInjMor} Let $\mathscr{A}$ be a partial Hopf algebra and fix $\alpha,\beta$ in the hyperobject set. If $T$ is a morphism in $\mathrm{Corep}(\mathscr{A})_{\alpha\beta}$ and $\sum_{k\in I_\alpha} \Gru{T}{k}{l}=0$ for some $l \in I_\beta$, then $T=0$.
\end{Lem}
\begin{proof} This follows from the equations in Lemma \ref{lemma:rep-total-morphism}.
\end{proof}
\begin{Prop}\label{prop:rep-cosemisimple} Let $\mathscr{A}$ be a partial Hopf algebra with an invariant integral. Then the components of the partial tensor category $\mathrm{Corep}(\mathscr{A})$ are semi-simple.
\end{Prop}
\begin{proof}
Let $V$ be any object of $\mathrm{Corep}(\mathscr{A})_{\alpha\beta}$ for $\alpha,\beta\in \mathscr{I}$. From Lemma \ref{LemInjMor}, we see that the map $T\mapsto \sum_{k\in I_\alpha} \Gru{T}{k}{l}$ is injective on morphisms $T$ in $\mathrm{Corep}(\mathscr{A})_{\alpha\beta}$ for any choice of $l\in I_\beta$. It follows by column-finiteness of $V$ that the algebra of self-intertwiners of $V$ is finite-dimensional. We then immediately conclude from Corollary \ref{cor:rep-cosemisimple} that $V$ is a finite direct sum of irreducible invariant subspaces.
\end{proof}
\subsection{Matrix coefficients of irreducible corepresentations}
Our next goal is to obtain the analogue of Schur's orthogonality
relations for matrix coefficients of corepresentations.
Given finite-dimensional vector spaces $V$ and $W$, the dual space of
$\Hom_{\mathbb{C}}(V,W)$ is linearly spanned by functionals of the form
\begin{align*}
\omega_{f,v} \colon \Hom_{\mathbb{C}}(V,W) \to \mathbb{C}, \quad T \mapsto (f|Tv),
\end{align*}
where $v\in V$, $f\in \dual{W}$, and $(-|-)$ denotes the natural
pairing of $\dual{W}$ with $W$.
\begin{Def} Let $\mathscr{A}$ be a partial bialgebra. The space of
\emph{matrix coefficients} $\mathcal{C}(\mathscr{X})$ of a
corepresentation $(V,\mathscr{X})$ is the sum of the subspaces
\begin{align*}
\Gr{\mathcal{C}(\mathscr{X})}{k}{l}{m}{n} &= \Span \left\{ (\id \otimes
\omega_{f,v})(\Gr{X}{k}{l}{m}{n}) \mid v\in \Gru{V}{m}{n}, f \in
\dual{(\Gru{V}{k}{l})} \right\} \subseteq \Gr{A}{k}{l}{m}{n}.
\end{align*}
\end{Def}
Let $(V,\mathscr{X})$ be a corepresentation of a partial bialgebra
$\mathscr{A}$. Condition \eqref{eq:rep-comultiplication} in Definition \ref{definition:corep}
implies
\begin{align} \label{eq:rep-matrix-delta}
\Delta_{pq}(\Gr{\mathcal{C}(\mathscr{X})}{k}{l}{m}{n}) \subseteq
\Gr{\mathcal{C}(\mathscr{X})}{k}{l}{p}{q} \otimes
\Gr{\mathcal{C}(\mathscr{X})}{p}{q}{m}{n}.
\end{align}
Thus, the $\Gr{\mathcal{C}(\mathscr{X})}{k}{l}{m}{n}$ form a partial
coalgebra with respect to $\Delta$ and $\epsilon$. Moreover, for each
$k,l$, the $I^{2}$-graded vector space
\begin{align*}
\Grd{\mathcal{C}(\mathscr{X})}{k}{l}:=\bigoplus_{m,n }
\Gr{\mathcal{C}(\mathscr{X})}{k}{l}{m}{n}
\end{align*}
is rcfd, and the inclusion above shows that one can
form the regular corepresentation on this space.
\begin{Lem} \label{lemma:rep-regular-embedding}
Let $(V,\mathscr{X})$ be a corepresentation
of a partial bialgebra and let $f\in
\dual{(\Gru{V}{k}{l})}$. Then the family of maps
\begin{align*}
\Gr{T}{}{(f)}{m}{n} \colon \Gru{V}{m}{n} \to
\Gr{\mathcal{C}(\mathscr{X})}{k}{l}{m}{n}, \ w \mapsto (\id
\otimes \omega_{f,w})(\Gr{X}{k}{l}{m}{n})=(\id \otimes
f)(\Gr{X}{k}{l}{m}{n}(1 \otimes w)),
\end{align*}
is a morphism from $\mathscr{X}$ to the regular corepresentation on
$\Grd{\mathcal{C}(\mathscr{X})}{k}{l}$.
\end{Lem}
\begin{proof}
Denote by $\mathscr{Y}$ the regular corepresentation on
$\bigoplus_{m,n } \Gr{\mathcal{C}(\mathscr{X})}{k}{l}{m}{n}$. Then
\begin{align*}
\Gr{Y}{p}{q}{m}{n} (1\otimes \Gr{T}{}{(f)}{m}{n}(v)) &=
(\Delta^{\op}_{pq} \otimes \omega_{f,v})( \Gr{X}{k}{l}{m}{n})
\\ & = (\id \otimes \id \otimes
f)((\Gr{X}{k}{l}{p}{q})_{23}(\Gr{X}{p}{q}{m}{n})_{13}(1 \otimes 1
\otimes v)) \\ &=(1 \otimes \Gr{T}{}{(f)}{p}{q})\Gr{X}{p}{q}{m}{n}(1 \otimes v)
\end{align*}
for all $v \in \Gru{V}{m}{n}$.
\end{proof}
As before, we denote by $\dual{V}$ the dual of a vector space $V$.
\begin{Lem} \label{lemma:regular-corep} Let $\mathscr{A}$ be a partial
Hopf algebra.
\begin{enumerate}[label=(\arabic*)]
\item Let $a \in \bigoplus_{k,l} \Gr{A}{k}{l}{m}{n}$. Then the family of
subspaces
\begin{align} \label{eq:element-reg-corep}
\Gru{V}{p}{q} = \{ (\id\otimes f)(\Delta_{pq}(a)) : f \in
\dual{(\Gr{A}{p}{q}{m}{n})}\}
\end{align}
is rcfd and satisfies $\Delta_{rs}(\Gru{V}{p}{q}) \subseteq
\Gru{V}{r}{s} \otimes \Gr{A}{r}{s}{p}{q}$ so that one can form the
restriction of the regular corepresentation
$(V,\mathscr{X})$. Moreover, $a \in \Gru{V}{m}{n}$.
\item Let $(V,\mathscr{X})$ be an irreducible restriction of the
regular corepresentation. Then \eqref{eq:element-reg-corep} holds
for any non-zero $a \in \Gru{V}{m}{n}$.
\end{enumerate}
\end{Lem}
\begin{proof}
(1) Taking $f=\epsilon$, one finds $a \in \Gru{V}{m}{n}$. Next, write
\begin{align*}
\Delta_{pq}(a)=\sum_{i} b_{pq}^{i} \otimes c^{i}_{pq}
\end{align*}
with linearly independent $(c_{pq}^{i})_{i}$. Then $ \Gru{V}{p}{q} =
\mathrm{span}\{b_{pq}^{i} : i \}$, and $\Delta_{rs}(\Gru{V}{p}{q}) \subseteq
\Gru{V}{r}{s} \otimes \Gr{A}{r}{s}{p}{q}$ because
\begin{align*}
\sum_{i}
\Delta_{rs}(b^{i}_{pq}) \otimes c^{i}_{pq} =
(\Delta_{rs} \otimes \id)\Delta_{pq}(a) = (\id \otimes
\Delta_{pq}) \Delta_{rs}(a) = \sum_{j} b^{j}_{rs} \otimes
\Delta_{pq}(c^{j}_{rs}).
\end{align*}
(2) If $a\in \Gru{V}{m}{n}$ is non-zero, then the right hand
sides of \eqref{eq:element-reg-corep} form a non-zero invariant
family of subspaces of $\Gru{V}{p}{q}$ by (1), which by irreducibility must coincide with the family $(\Gru{V}{p}{q})_{p,q}$.
\end{proof}
\begin{Prop} \label{prop:rep-weak-pw} Let $\mathscr{A}$ be a partial
Hopf algebra with an invariant integral. Then the total algebra $A$ is the sum
of the matrix coefficients of irreducible corepresentations.
\end{Prop}
\begin{proof}
Let $a \in \Gr{A}{k}{l}{m}{n}$, define $\Gru{V}{p}{q}$ as in
\eqref{eq:element-reg-corep} and form the restriction of the regular
corepresentation $(V,\mathscr{X})$. Then
\begin{align*}
a = (\id \otimes \epsilon)(\Delta^{\op}_{kl}(a)) =
(\id \otimes \epsilon)(\Gr{X}{k}{l}{m}{n}(1 \otimes a)) \in
\Gr{\mathcal{C}(\mathscr{X})}{k}{l}{m}{n}.
\end{align*}
Decomposing $(V,\mathscr{X})$, we find that
$a$ is contained in the sum of matrix coefficients of irreducible
corepresentations.
\end{proof}
The first part of the orthogonality relations concerns matrix
coefficients of inequivalent irreducible corepresentations.
\begin{Prop} \label{prop:rep-orthogonality-1} Let $\mathscr{A}$ be a
partial Hopf algebra with an invariant integral $\phi$ and inequivalent
irreducible corepresentations $(V,\mathscr{X})$ and
$(W,\mathscr{Y})$. Then for all
$a\in \mathcal{C}(X), b \in \mathcal{C}(Y)$,
\[\phi(S(b)a) = \phi(bS(a))=0.\]
\end{Prop}
\begin{proof}
Since $\phi$ vanishes on $S(\Gr{A}{k}{l}{m}{n})\Gr{A}{p}{q}{r}{s}$ and
on $\Gr{A}{p}{q}{r}{s}S(\Gr{A}{k}{l}{m}{n})$ unless
$(p,q,r,s) = (m,n,k,l)$, it suffices to prove the assertion for elements of the form
\begin{align*}
a&=(\id \otimes \omega_{f,v})(\Gr{X}{k}{l}{m}{n}) && \text{and} &
b&=(\id \otimes \omega_{g,w})(\Gr{Y}{m}{n}{k}{l})
\end{align*}
where $f\in \dual{(\Gru{V}{k}{l})}, v \in \Gru{V}{m}{n}$ and $g \in
\dual{(\Gru{W}{m}{n})}, w \in \Gru{W}{k}{l}$. Lemma
\ref{lem:rep-average}, applied to the family
\begin{align*}
\Gru{T}{p}{q} \colon \Gru{V}{p}{q} \to \Gru{W}{p}{q}, \quad u
\mapsto \delta_{p,k}\delta_{q,l} f(u)w,
\end{align*}
yields morphisms $\Grd{\check{T}}{k}{l},\Grd{\hat{T}}{k}{l}$ from $(V,\mathscr{X})$ to
$(W,\mathscr{Y})$ which necessarily are $0$. Inserting the
definition of $\Grd{\check{T}}{k}{l}$, we find
\begin{align*}
\phi(S(b)a) &= \phi\big((S \otimes
\omega_{g,w})(\Gr{Y}{m}{n}{k}{l}) \cdot (\id \otimes
\omega_{f,v})(\Gr{X}{k}{l}{m}{n})\big) \\ &= (\phi \otimes \omega_{g,v})\left(\Gr{(Y^{-1})}{l}{k}{n}{m}(1 \otimes
\Gru{T}{k}{l} ) \Gr{X}{k}{l}{m}{n}\right)
= \omega_{g,v}( \Gr{\check{T}}{k}{l}{m}{n}) = 0.
\end{align*}
A similar calculation involving $\hat{T}$ shows that
$\phi(bS(a))=0$.
\end{proof}
\begin{Theorem} \label{thm:rep-orthogonality} Let $\mathscr{A}$ be a
partial Hopf algebra with an invariant integral $\phi$. Let $\alpha,\beta\in \mathscr{I}$, and let $(V,\mathscr{X})$
be an irreducible corepresentation of $\mathscr{A}$ inside $\mathrm{Corep}(\mathscr{A})_{\alpha\beta}$. Suppose
$F=F_{\mathscr{X}}$ is an isomorphism from $(V,\mathscr{X})$ to
$(V,\hat{\hat{\mathscr{X}}})$ with inverse
$ G=F^{-1}$. Then the following hold.
\begin{enumerate}[label=(\arabic*)]
\item The numbers $d_G:=\sum_{k} \mathrm{Tr} (\Gru{G}{k}{l})$ and $d_F:=\sum_{n} \mathrm{Tr} (\Gru{F}{m}{n})$ are non-zero and do not depend on the choice of $l \in I_\beta$ or $m\in I_\alpha$.
\item For all $k,m \in I_\alpha$ and $l,n\in I_\beta$,
\begin{align*}
(\phi \otimes \id)(\Gr{(X^{-1})}{l}{k}{n}{m}\Gr{X}{k}{l}{m}{n})
&=d_G^{-1}\mathrm{Tr}(\Gru{G}{k}{l})
\id_{\Gru{V}{m}{n}}, \\
(\phi \otimes \id)(\Gr{X}{k}{l}{m}{n}\Gr{(X^{-1})}{l}{k}{n}{m})
&=d_F^{-1}\mathrm{Tr}(\Gru{F}{m}{n})
\id_{\Gru{V}{k}{l}}.
\end{align*}
\item Denote by $\Sigma_{klmn}$ the flip map $\Gru{V}{k}{l}
\otimes \Gru{V}{m}{n} \to \Gru{V}{m}{n}
\otimes \Gru{V}{k}{l}$. Then
\begin{align*}
(\phi \otimes \id \otimes
\id)((\Gr{(X^{-1})}{l}{k}{n}{m})_{12}(\Gr{X}{k}{l}{m}{n})_{13}) &=
d_G^{-1}
(\id_{\Gru{V}{m}{n}} \otimes \Gru{G}{k}{l})
\circ \Sigma_{klmn}, \\
(\phi \otimes \id \otimes
\id)((\Gr{X}{k}{l}{m}{n})_{13}(\Gr{(X^{-1})}{l}{k}{n}{m})_{12}) &= d_F^{-1} (\Gru{F}{m}{n}
\otimes \id_{\Gru{V}{k}{l}}) \circ \Sigma_{klmn}.
\end{align*}
\end{enumerate}
\end{Theorem}
\begin{proof}
We prove the assertions and equations involving $d_G$ in (1), (2)
and (3) simultaneously; the assertions involving $d_F$ follow similarly.
Consider
the following endomorphism $F_{mnkl}$ of $\Gru{V}{m}{n}\otimes \Gru{V}{k}{l}$,
\begin{align*}
F_{mnkl}
&:=(\phi \otimes \id \otimes \id)\left((\Gr{(X^{-1})}{l}{k}{n}{m})_{12}(\Gr{X}{k}{l}{m}{n})_{13}\right)
\circ \Sigma_{mnkl} \\ &= (\phi \otimes \id \otimes
\id)\left((\Gr{(X^{-1})}{l}{k}{n}{m})_{12}
\Sigma_{klkl,23}(\Gr{X}{k}{l}{m}{n})_{12}\right).
\end{align*}
By applying Lemma \ref{lem:rep-average} with respect to the flip map $\Sigma_{klkl}$, we see that the family $(F_{mnkl})_{m,n}$ is
an endomorphism of $(V \otimes \Gru{V}{k}{l}, X\otimes \id)$ and hence
\begin{align}
F_{mnkl} &= \id_{\Gru{V}{m}{n}} \otimes \Gru{R}{k}{l} \label{eq:rep-orthogonal-1}
\end{align}
with some $\Gru{R}{k}{l} \in \Hom_{\mathbb{C}}(\Gru{V}{k}{l})$ not
depending on $m,n$.
On the other hand, since $\phi = \phi S$,
\begin{align*}
F_{mnkl} &= (\phi \otimes \id \otimes \id)((S \otimes
\id)(\Gr{X}{m}{n}{k}{l})_{12}(\Gr{X}{k}{l}{m}{n})_{13})
\circ \Sigma_{mnkl} \\
&= (\phi \otimes \id \otimes \id)\left(((S \otimes
\id)(\Gr{X}{k}{l}{m}{n}))_{13}
((S^{2} \otimes \id)(\Gr{X}{m}{n}{k}{l}))_{12}\right) \circ \Sigma_{mnkl}\\
&= (\phi \otimes \id \otimes
\id)\left((\Gr{(X^{-1})}{k}{l}{m}{n})_{13} (\Sigma_{mnmn})_{23}
(\Gr{(\dual{\dual{X}{}\!})}{m}{n}{k}{l})_{13}\right).
\end{align*}
Hence we can again apply Lemma \ref{lem:rep-average} and
find that the family $(F_{mnkl})_{k,l}$ is a morphism \[(F_{mnkl})_{k,l}:
(\Gru{V}{m}{n} \otimes V, \hat{\hat{X}}_{13})\rightarrow (\Gru{V}{m}{n} \otimes V,
X_{13}).\] Therefore,
\begin{align}
F_{mnkl} &= \Gru{T}{m}{n} \otimes \Gru{G}{k}{l} \label{eq:rep-orthogonal-2}
\end{align}
with some $\Gru{T}{m}{n} \in \Hom_{\mathbb{C}}(\Gru{V}{m}{n})$
not depending on $k,l$. Combining \eqref{eq:rep-orthogonal-1} and
\eqref{eq:rep-orthogonal-2}, we conclude that, for some $\lambda\in \mathbb{C}$, \[F_{mnkl} = \lambda
(\id_{\Gru{V}{m}{n}} \otimes \Gru{G}{k}{l}).\]
Choose dual bases
$(v_{i})_{i}$ for $\Gru{V}{k}{l}$ and $(f_{i})_{i}$ for $\dual{(\Gru{V}{k}{l})}$. Then
\begin{align*}
\lambda \mathrm{Tr}( \Gru{G}{k}{l}) \id_{\Gru{V}{m}{n}}
&= \sum_{i} (\id \otimes
\omega_{f_{i},v_{i}})(F_{mnkl}) = (\phi \otimes
\id)(\Gr{(X^{-1})}{l}{k}{n}{m} \Gr{X}{k}{l}{m}{n}).
\end{align*}
Take now $n=l$. By Lemma \ref{LemInjMor}, we can choose $m\in I_{\alpha}$ with $\Gru{V}{m}{n}\neq 0$. Then summing the previous relation over $k$, the relations $\sum_{k}
\Gr{(X^{-1})}{l}{k}{n}{m} \Gr{X}{k}{l}{m}{n} = \UnitC{l}{n}
\otimes \id_{\Gru{V}{m}{n}}$ and
$\phi(\UnitC{l}{l})=1$ give
\begin{align*}
\lambda \cdot \sum_{k} \mathrm{Tr}(\Gru{G}{k}{l}) = 1.
\end{align*}
Now all assertions in (1)--(3) concerning $d_G$ follow.
\end{proof}
\begin{Rem} For semi-simple tensor categories with duals, it is known
that any object is isomorphic to its left bidual \cite[Proposition
2.1]{ENO1}, hence there always exists an isomorphism $F_{\mathscr{X}}$ as in the previous Theorem. In fact, from the faithfulness of $\phi$ and Proposition \ref{prop:rep-orthogonality-1}, it follows that not all $F_{mnkl}$ in the previous proof are zero. Hence $G=F_{\mathscr{X}}^{-1}$ is a non-zero morphism and thus an isomorphism from the left bidual of $\mathscr{X}$ to $\mathscr{X}$.
\end{Rem}
\begin{Cor}\label{CorOrth}
Let $\mathscr{A}$ be a partial Hopf algebra with an invariant integral $\phi$, let
$(V,\mathscr{X})$ be an irreducible corepresentation of
$\mathscr{A}$, let $F=F_{\mathscr{X}}$ be an isomorphism from
$(V,\mathscr{X})$ to $(V,\dualco{\dualco{\mathscr{X}}})$ and
$G=F^{-1}$, and let $a=(\id \otimes
\omega_{f,v})(\Gr{X}{k}{l}{m}{n})$ and $b=(\id \otimes
\omega_{g,w})(\Gr{X}{m}{n}{k}{l})$, where
$f \in \dual{(\Gru{V}{k}{l})}$, $v \in\Gru{V}{m}{n}$, $g \in
\dual{(\Gru{V}{m}{n})}$, $w \in \Gru{V}{k}{l}$. Then
\begin{align*}
\phi(S(b)a) &= \frac{(g|v)(f|Gw)}{\sum_{r}
\mathrm{Tr}(\Gru{G}{r}{n})}, & \phi(aS(b)) = \frac{(g|Fv)(f|w)}{\sum_{s}
\mathrm{Tr}(\Gru{F}{m}{s})}.
\end{align*}
\end{Cor}
\begin{proof}
Apply $\omega_{g,w} \otimes
\omega_{f,v}$ to the formulas in Theorem
\ref{thm:rep-orthogonality}(3).
\end{proof}
\begin{Cor} \label{cor:rep-pw}
Let $\mathscr{A}$ be a partial Hopf algebra with an invariant integral and let
$((V^{(a)},\mathscr{X}_{a}))_{a \in \mathcal{I}}$ be a maximal family of mutually non-isomorphic irreducible corepresentations of
$\mathscr{A}$. Then the map
\begin{align*}
\bigoplus_{a} \bigoplus_{k,l,m,n}
(\dual{(\Gr{V}{}{(a)}{k}{l})} \otimes
\Gr{V}{}{(a)}{m}{n}) \to A
\end{align*}
that sends $f \otimes w \in
\dual{(\Gr{V}{}{(a)}{k}{l})} \otimes
\Gr{V}{}{(a)}{m}{n}$ to $ (\id \otimes
\omega_{f,w})(\Gr{(X_{a})}{k}{l}{m}{n})$,
is a linear isomorphism.
\end{Cor}
\begin{proof} This follows from Proposition \ref{prop:rep-weak-pw}, Proposition \ref{prop:rep-orthogonality-1} and Corollary \ref{CorOrth}.
\end{proof}
\begin{Cor} \label{cor:rep-pw-morphisms}
Let $\mathscr{A}$ be a regular partial Hopf algebra with an invariant integral, let
$((V^{(a)},\mathscr{X}_{a}))_{a\in \mathcal{I}}$ be a maximal
family of mutually non-isomorphic irreducible corepresentations of $\mathscr{A}$,
fix $a \in \mathcal{I}$ and $k,l\in I$, and denote by $\Gr{\mathscr{Y}}{k}{l}{}{a}$
the regular corepresentation on
$\Grd{\mathcal{C}(\mathscr{X}_a)}{k}{l}$. Then there exists a
linear isomorphism
\begin{align*}
\dual{( \Gr{V}{}{(a)}{k}{l})} \to
\mathrm{Mor}((V^{(a)},\mathscr{X}_{a}),
(\Grd{\mathcal{C}(\mathscr{X}_a)}{k}{l},\Gr{\mathscr{Y}}{k}{l}{}{a}))
\end{align*}
assigning to each $f\in \dual{( \Gr{V}{}{(a)}{k}{l})}$ the morphism
$T^{(f)}$ of Lemma \ref{lemma:rep-regular-embedding}.
\end{Cor}
\subsection{Unitary corepresentations of partial compact quantum groups}
Let us now enhance our partial Hopf algebras to partial compact
quantum groups. We write $B(\mathcal{H},\mathcal{G})$ for the space of
bounded linear operators between Hilbert spaces $\mathcal{H}$ and $\mathcal{G}$.
\begin{Def} Let $\mathscr{A}$ define a partial compact quantum group
$\mathscr{G}$. We call a corepresentation $\mathscr{X}$ of
$\mathscr{A}$ on a collection of Hilbert spaces $\Gru{\mathcal{H}}{k}{l}$
\emph{unitary}
if \[\Gr{(X^{-1})}{k}{l}{m}{n}=(\Gr{X}{l}{k}{n}{m})^{*}\quad
\textrm{in }\Gr{A}{k}{l}{m}{n}\otimes
B(\Gru{\mathcal{H}}{l}{k},\Gru{\mathcal{H}}{n}{m}).\]
\end{Def}
\begin{Rem}
The total object $\mathcal{H}$ will then only be a pre-Hilbert space, but as the local components are finite-dimensional, this will not be an issue.
\end{Rem}
\begin{Exa}\label{example:rep-trivial-unitary}
Regard $\mathbb{C}^{(I)}$ as a direct sum of the trivial Hilbert spaces $\mathbb{C}$. Then the
trivial corepresentation $\mathscr{U}$ on $\mathbb{C}^{(I)}$ is unitary.
\end{Exa}
The tensor product of corepresentations lifts to a tensor product
of unitary corepresentations as follows. We define the tensor product
of rcfd $I^{2}$-graded Hilbert spaces in the same way as for rcfd
$I^{2}$-graded vector spaces, and again pretend that it is strict. Let
$(\mathcal{H},\mathscr{X})$ and $(\mathcal{G},\mathscr{Y})$ be unitary rcfd
corepresentations. Then the tensor product $(\mathcal{H} \underset{I}{\otimes}
\mathcal{G},\mathscr{X} \Circt \mathscr{Y})$ is unitary again. Indeed,
in total form, $(X\Circt Y)^{-1} = Y_{13}^{-1}X_{12}^{-1}
=Y_{13}^{*}X_{12}^{*} = (X \Circt Y)^{*}$ by Remark \ref{remark:rep-tensor-multiplier}.
We hence obtain a tensor C$^*$-category $\mathrm{Corep}_{u,\rcf}(\mathscr{A})$ of unitary corepresentations. We denote again by $\mathrm{Corep}_u(\mathscr{A})$ the subcategory of all corepresentations with finite support on the hyperobject set. It is the total tensor C$^*$-category with local units of a semi-simple partial tensor C$^*$-category.
Our aim now is to show that every (irreducible) corepresentation is
equivalent to a unitary one. We show this by embedding the
corepresentation into a restriction of the regular corepresentation.
\begin{Lem} \label{lemma:rep-regular-unitary}
Let $\mathscr{A}$ define a partial compact quantum group with
positive invariant integral $\phi$, and let $\Gru{V}{m}{n} \subseteq
\bigoplus_{k,l} \Gr{A}{k}{l}{m}{n}$ be subspaces such that
$\Delta_{pq}(\Gru{V}{m}{n}) \subseteq \Gru{V}{p}{q} \otimes
\Gr{A}{p}{q}{m}{n}$ and $V=\bigoplus_{k,l} \Gru{V}{k}{l}$ is rcfd. Then each $\Gru{V}{k}{l}$ is a Hilbert space with
respect to the inner product given by $\langle
a|b\rangle:=\phi(a^{*}b)$, and the regular corepresentation
$\mathscr{X}$ on $V$ is unitary.
\end{Lem}
\begin{proof}
By Lemma \ref{lemma:rep-invertible}, it suffices to show that
\begin{equation}\label{EqUnit} \sum_{k}
(\Gr{X}{k}{l}{m}{n'})^* \Gr{X}{k}{l}{m}{n} =
\delta_{n,n'}\UnitC{l}{n}\otimes
\id_{\Gru{V}{m}{n}}.
\end{equation}
Let $a\in \Gru{V}{m}{n}$, $b\in \Gru{V}{m}{n'}$ and define $\omega_{b,a} \colon
\Hom_{\mathbb{C}}(\Gru{V}{m}{n},\Gru{V}{m}{n'}) \to \mathbb{C}$ by $T
\mapsto \langle b|Ta\rangle$. Then
\begin{eqnarray*}
\sum_{k }(\id \otimes \omega_{b,a})
((\Gr{X}{k}{l}{m}{n'})^* \Gr{X}{k}{l}{m}{n}) &=& \sum_k
(\id\otimes \phi)(\Delta_{kl}^{\op}(b)^*\Delta_{kl}^{\op}(a))\\
&=& \sum_k (\phi\otimes
\id)(\Delta_{lk}(b^*)\Delta_{kl}(a)) \\ &=& (\phi\otimes
\id)(\Delta_{ll}(b^*a)) \\ &=& \phi(b^*a)\UnitC{l}{n} \\&=&
\delta_{n',n} \langle b|a\rangle \, \UnitC{l}{n}.
\end{eqnarray*}
This proves \eqref{EqUnit}.
\end{proof}
\begin{Prop} \label{prop:rep-unitarisable} Let $\mathscr{A}$ define
a partial compact quantum group. Then every
corepresentation of $\mathscr{A}$ is
isomorphic to a unitary one.
\end{Prop}
\begin{proof}
By Proposition \ref{prop:rep-cosemisimple} and Corollary
\ref{cor:rep-pw}, every corepresentation is isomorphic to a direct
sum of irreducible regular corepresentations, which are unitary by
Lemma \ref{lemma:rep-regular-unitary}.
\end{proof}
\begin{Cor} The partial tensor C$^*$-category $\mathrm{Corep}_u(\mathscr{A})$ is a partial fusion C$^{*}$-category.
\end{Cor}
\begin{Rem}
If $\mathscr{A}$ defines a partial compact quantum group $\mathscr{G}$, we will also write $\mathrm{Corep}_u(\mathscr{A})= \mathrm{Rep}_u(\mathscr{G})$, and talk of (unitary) representations of $\mathscr{G}$.
\end{Rem}
Let now $\mathscr{X}$ be a unitary corepresentation of $\mathscr{A}$. Then there exists an isomorphism from $\mathscr{X}$ to $\dualco{\dualco{\mathscr{X}}} = (S^2\otimes \id)\mathscr{X}$. The following proposition shows that, for irreducible $\mathscr{X}$, it can be implemented by positive operators.
\begin{Prop} \label{prop:rep-unitary-bidual}
Let $\mathscr{A}$ define a partial compact quantum group and let
$(\mathcal{H},\mathscr{X})$ be an irreducible unitary corepresentation of
$\mathscr{A}$. Then there exists an isomorphism $F=F_{\mathscr{X}}$
from $(\mathcal{H},\mathscr{X})$ to
$(\mathcal{H},(S^{2} \otimes \id)(\mathscr{X}))$ in $\mathrm{Corep}(\mathscr{A})$ such
that each $\Gru{F}{k}{l}$ is positive.
\end{Prop}
\begin{proof}
By Proposition \ref{prop:rep-unitarisable}, there exists an
isomorphism $T \colon \dualco{\mathscr{X}} \to \mathscr{Y}$ for some
unitary corepresentation $\mathscr{Y}$ on $\dual{\mathcal{H}}$, so that in total form,
$(1\otimes T)\dualco{X} = Y(1 \otimes T)$.
We apply $S \otimes -^{\tr}$ and $-^{*} \otimes -^{*\tr}$,
respectively, to find
\begin{align*}
\dualco{\dualco{X}}(1 \otimes \dualop{T}) &= (1 \otimes
\dualop{T})\dualco{Y}, & (1 \otimes T^{*\tr})X=\dualco{Y}(1\otimes T^{*\tr}).
\end{align*}
Combining both equations, we
find $\dualco{\dualco{X}}(1 \otimes \dualop{T}T^{*\tr})=(1 \otimes
\dualop{T}T^{*\tr})X$. Thus, we can take
$F:=\dualop{T}T^{*\tr}$.
\end{proof}
The Schur orthogonality relations in Corollary \ref{CorOrth} can be
rewritten using the involution instead of the antipode as follows.
Let $(\mathcal{H},\mathscr{X})$ be a unitary corepresentation of
$\mathscr{A}$. Since $(S\otimes \id)(X)=X^{-1}=X^{*}$, the space of
matrix coefficients $\mathcal{C}(\mathscr{X})$ satisfies
\begin{align} \label{eq:rep-unitary-matrix-coefficients}
S(\Gr{\mathcal{C}(\mathscr{X})}{k}{l}{m}{n}) &=
(\Gr{\mathcal{C}(\mathscr{X})}{m}{n}{k}{l})^{*} \subseteq \Gr{A}{n}{m}{l}{k}.
\end{align}
More precisely, let $v \in \Gru{\mathcal{H}}{k}{l}$, $v' \in \Gru{\mathcal{H}}{m}{n}$
and denote by $\omega_{v,v'}$ the functional
given by $T \mapsto \langle v|Tv'\rangle$. Then
\begin{align*}
S((\id \otimes \omega_{v,v'})(\Gr{X}{k}{l}{m}{n})) &=
(\id \otimes \omega_{v,v'}) (\Gr{(X^{-1})}{n}{m}{l}{k}) \\ & =
(\id \otimes \omega_{v,v'})( (\Gr{X}{m}{n}{k}{l})^{*}) =
(\id \otimes \omega_{v',v})(\Gr{X}{m}{n}{k}{l})^{*}.
\end{align*}
This equation, Proposition \ref{prop:rep-orthogonality-1}, Lemma
\ref{LemFaith} and
Corollary \ref{CorOrth} imply the following
corollaries:
\begin{Cor} \label{cor:rep-unitary-orthogonality-1} Let $\mathscr{A}$ define a partial compact quantum group with
positive invariant integral $\phi$ and let $(V,\mathscr{X})$ and
$(W,\mathscr{Y})$ be inequivalent irreducible unitary
corepresentations of $\mathscr{A}$. Then for all $a\in
\mathcal{C}(X), b \in \mathcal{C}(Y)$,
\[\phi(b^{*}a) = \phi(ba^{*})=0.\] In particular, $\mathcal{C}(X)
\cap \mathcal{C}(Y)=0$.
\end{Cor}
\begin{Cor}\label{cor:rep-unitary-schur-orthogonality}
Let $\mathscr{A}$ define a partial compact quantum group with
positive invariant integral $\phi$, let $(\mathcal{H},\mathscr{X})$ be an irreducible
unitary corepresentation of $\mathscr{A}$, let $F=F_{\mathscr{X}}$ be a positive
isomorphism from $(\mathcal{H},\mathscr{X})$ to
$(\mathcal{H},\dualco{\dualco{\mathscr{X}}})$ and
$G=F^{-1}$, and let $a=(\id \otimes
\omega_{v,v'})(\Gr{X}{k}{l}{m}{n})$ and $b=(\id \otimes
\omega_{w,w'})(\Gr{X}{k}{l}{m}{n})$, where $v,w \in
\Gru{\mathcal{H}}{k}{l}$ and $v',w' \in \Gru{\mathcal{H}}{m}{n}$. Then
\begin{align*}
\phi(b^{*}a) &= \frac{\langle w|v'\rangle\langle v|Gw'\rangle}{\sum_{r}
\mathrm{Tr}(\Gru{G}{r}{n})}, & \phi(ab^{*}) = \frac{\langle
w|Fv'\rangle \langle v|w'\rangle}{\sum_{s}
\mathrm{Tr}(\Gru{F}{m}{s})}.
\end{align*}
\end{Cor}
As a consequence of Proposition \ref{prop:rep-weak-pw} and Proposition
\ref{prop:rep-unitarisable} or Lemma \ref{lemma:rep-regular-unitary},
the matrix coefficients of irreducible unitary corepresentations
span $A$, and in Corollary \ref{cor:rep-pw}, we may
assume the irreducible corepresentations
$(V^{(a)},\mathscr{X}_{a})$ to be unitary if $\mathscr{A}$
defines a partial compact quantum group.
\begin{Rem}\label{RemPos} In fact, Proposition \ref{prop:rep-unitary-bidual} and Corollary \ref{cor:rep-unitary-schur-orthogonality} show the following. Let $\mathscr{A}$ be a partial Hopf $^*$-algebra admitting an invariant integral $\phi$, which a priori we do not assume to be positive. Suppose however that each irreducible corepresentation of $\mathscr{A}$ is equivalent to a unitary corepresentation. Then $\phi$ is necessarily positive.
\end{Rem}
\subsection{Analogues of Woronowicz's characters}
Let $\mathscr{A}$ be a partial bialgebra, and let $a\in \Gr{A}{k}{l}{m}{n}$. For $\omega \in A^*$, we can define
\begin{align*} \omega \aste{p,q} a := (\id \otimes \omega) (\Delta_{pq}(a)),&&a \aste{r,s}
\omega:=(\omega \otimes \id)(\Delta_{rs}(a)),\end{align*}
and this defines a bimodule structure with respect to the natural $I\times I$-partial convolution algebra structure on $\bigoplus \left(\Gr{A}{k}{l}{m}{n}\right)^*$. When $\omega$ has support on $\sum_{k,l}\Gr{A}{k}{l}{k}{l}$, it is meaningful to define \begin{align*} \omega\ast a := \sum_{p,q} \omega\aste{p,q}a, && a\ast \omega := \sum_{r,s} a\aste{r,s}\omega. \end{align*}
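In particular, the left and right actions commute, so that one indeed obtains a bimodule: writing out both sides, this reduces to the coassociativity identity $(\Delta_{rs}\otimes \id)\Delta_{pq}=(\id\otimes \Delta_{pq})\Delta_{rs}$, since for $\omega,\eta\in A^*$,
\begin{align*}
\omega \aste{p,q} (a \aste{r,s} \eta)
= (\eta \otimes \id \otimes \omega)\big((\id \otimes \Delta_{pq})\Delta_{rs}(a)\big)
= (\eta \otimes \id \otimes \omega)\big((\Delta_{rs} \otimes \id)\Delta_{pq}(a)\big)
= (\omega \aste{p,q} a) \aste{r,s} \eta.
\end{align*}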
We recall that an entire function $f$ has \emph{exponential growth
on the right half-plane} if there exist $C,d>0$ such that $|f(x+iy)|\leq
C\mathrm{e}^{dx}$ for all $x,y\in \mathbb{R}$ with $x>0$.
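To illustrate the growth condition in the case relevant below: for any $\lambda>0$, the entire function $z \mapsto \lambda^{z} = \mathrm{e}^{z\log\lambda}$ has exponential growth on the right half-plane, since
\begin{align*}
|\lambda^{x+iy}| = \lambda^{x} \leq \mathrm{e}^{dx} \quad \text{for all } x>0,\ y\in\mathbb{R}, \qquad \text{where } d=\max(\log\lambda,1).
\end{align*}
The functions $z \mapsto f_{z}(a)$ constructed in the proof of the following theorem are finite linear combinations of functions of this type, with $\lambda$ running through eigenvalues of the positive operators $\Gru{F}{k}{l}$.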
\begin{Theorem} \label{thm:rep-characters} Let $\mathscr{A}$ be a
partial Hopf algebra with an invariant integral $\phi$. Then there
exists a unique family of linear functionals $f_{z} \colon A\to \mathbb{C}$
such that
\begin{enumerate}[label={(\arabic*)}]
\item $f_z$ vanishes on $A(K)$ when $K_u\neq K_d$.
\item for each $a\in A$, the function $z\mapsto f_{z}(a)$ is entire
and of exponential growth on the right half-plane.
\item $f_{0} = \epsilon$ and $(f_{z} \otimes f_{z'}) \circ
\Delta= f_{z+z'}$ for all $z,z' \in \mathbb{C}$.
\item $\phi(ab)=\phi(b(f_{1} \ast a \ast f_{1}))$ for all $a,b\in A$.
\end{enumerate}
This family furthermore satisfies
\begin{enumerate}[label={(\arabic*)}]\setcounter{enumi}{4}
\item $f_z(ab) = f_z(a)f_z(b)$ for $a\in A(K)$ and $b\in A(L)$ with $K_r = L_l$.
\item $S^{2}(a)=f_{-1} \ast a \ast f_{1}$ for all $a\in A$.
\item $f_{z}(\UnitC{l}{n})=\delta_{l,n}$ and $f_{z} \circ S = f_{-z}$ for all $z\in \mathbb{C}$.
\item $\bar{f}_{z}=f_{-\overline{z}}$ if $\mathscr{A}$ is a partial
Hopf $^*$-algebra and $\phi$ is positive.
\end{enumerate}
\end{Theorem}
Note that conditions (3), (4) and (6) are meaningful by condition (1).
\begin{proof}
We first prove uniqueness. Assume that $(f_{z})_{z}$ is a family of
functionals satisfying (1)--(4). Since $\phi$ is faithful, the map
$\sigma\colon a \mapsto f_{1} \ast a \ast f_{1}$ is uniquely
determined by $\phi$, and one easily sees that it is a homomorphism. Using
(3), we find that $\epsilon \circ \sigma^n=f_{2n}$, which uniquely determines these functionals. Using (2) and the
fact that every entire function of exponential growth on the right
half-plane is uniquely determined by its values at $\mathbb{N} \subseteq \mathbb{C}$, we can conclude that the family $f_{z}$ is uniquely determined. Moreover, since the property (5) holds for $z = 2n$, we also conclude by the same argument as above that it holds for all $z\in \mathbb{C}$.
Let us now prove existence. By Theorem \ref{thm:rep-orthogonality}, Corollary \ref{cor:rep-pw} and Proposition \ref{prop:rep-unitary-bidual}, we can
define for each $z\in \mathbb{C}$ a functional $f_{z} \colon A \to \mathbb{C}$ such
that for every
irreducible corepresentation
$(V,\mathscr{X})$ in $\mathrm{Corep}(\mathscr{A})$,
\begin{align*}
f_{z}((\id \otimes \omega_{\xi,\eta})(\Gr{X}{k}{l}{m}{n})) &=
\delta_{k,m}\delta_{l,n}
\omega_{\xi,\eta}((\Gru{F}{k}{l})^{z}) \quad \text{for all }
\xi \in \Gru{V}{k}{l},\eta \in
\Gru{V}{m}{n},
\end{align*}
or, equivalently,
\begin{align*}
(f_{z} \otimes \id)(\Gr{X}{k}{l}{m}{n}) =
\delta_{k,m}\delta_{l,n} (\Gru{F}{k}{l})^{z},
\end{align*}
where $F=F_{\mathscr{X}}$ is a non-zero operator implementing a morphism from $(V,\mathscr{X})$ to
$(V, \dualco{\dualco{\mathscr{X}}})$, scaled such that
\begin{align*}
d_{\mathscr{X}}:= \sum_{r} \mathrm{Tr}(\Gru{(F^{-1})}{r}{l}) = \sum_{s}
\mathrm{Tr}(\Gru{F}{m}{s})
\end{align*}
for all $l$ in the right and all $m$ in the left hyperobject support of $\mathscr{X}$. By
construction, (1) and (2) hold. We show that the $(f_{z})_{z}$ satisfy the
assertions (3)--(8).
Throughout the following arguments, let $(V,\mathscr{X})$ and $F$ be as
above.
We first prove property (3). This follows from the relations
\begin{align*}
(f_{0} \otimes \id)(\Gr{X}{k}{l}{m}{n}) &=
\delta_{k,m}\delta_{l,n} \id_{\Gru{V}{k}{l}} =
(\epsilon \otimes \id)(\Gr{X}{k}{l}{m}{n})
\end{align*}
and
\begin{align*}
(((f_{z}\otimes f_{z'})\circ \Delta) \otimes
\id)(\Gr{X}{k}{l}{m}{n}) &= \delta_{k,m}\delta_{l,n}(f_{z} \otimes f_{z'} \otimes
\id)\big((\Gr{X}{k}{l}{k}{l})_{13}
(\Gr{X}{k}{l}{k}{l})_{23}\big) \\
&= \delta_{k,m}\delta_{l,n}(\Gru{F}{k}{l})^{z} \cdot (\Gru{F}{k}{l})^{z'} \\
&= (f_{z+z'} \otimes \id)(\Gr{X}{k}{l}{m}{n}).
\end{align*}
Applying slice maps of the form $\id
\otimes \omega_{\xi,\xi'}$ and invoking Theorem \ref{thm:rep-orthogonality}, we obtain (3).
To prove (4), write $ \Delta^{(2)} = (
\Delta \otimes \id)\circ \Delta = (\id \otimes
\Delta) \circ \Delta$, and put \[\theta_{z,z'}:=(f_{z'} \otimes \id
\otimes f_{z})\circ \Delta^{(2)}.\] Then
\begin{align*}
(\theta_{z,z'} \otimes \id)(\Gr{X}{k}{l}{m}{n}) &= (f_{z'} \otimes
\id \otimes f_{z} \otimes
\id)((\Gr{X}{k}{l}{k}{l})_{14}(\Gr{X}{k}{l}{m}{n})_{24}(\Gr{X}{m}{n}{m}{n})_{34})
\\
&= (1 \otimes (\Gru{F}{k}{l})^{z'}) \Gr{X}{k}{l}{m}{n} (1
\otimes (\Gru{F}{m}{n})^{z}).
\end{align*}
We take $z=z'=1$, use Theorem \ref{thm:rep-orthogonality}, where
now $d_F= d_G=d_{\mathscr{X}}$ by our scaling of $F$, and obtain
\begin{eqnarray*}
&& \hspace{-2cm} (\phi \otimes \id \otimes
\id)((\Gr{(X^{-1})}{l}{k}{n}{m})_{12}((\theta_{1,1} \otimes
\id)(\Gr{X}{k}{l}{m}{n}))_{13})\\ && =d_{\mathscr{X}}^{-1}(\id \otimes
\Gru{F}{k}{l}) (\id \otimes \Gru{(F^{-1})}{k}{l})
\Sigma_{klmn} (\id \otimes
\Gru{F}{m}{n}) \\
&&=d_{\mathscr{X}}^{-1}(\Gru{F}{m}{n} \otimes \id) \Sigma_{klmn} \\
&&= (\phi \otimes \id \otimes
\id)((\Gr{X}{k}{l}{m}{n})_{13}(\Gr{(X^{-1})}{l}{k}{n}{m})_{12}).
\end{eqnarray*}
To conclude the proof of assertion (4), apply again slice maps of the form
$\omega_{\xi,\xi'} \otimes \omega_{\eta,\eta'}$.
We have then already argued that the property (5) automatically holds. To show the property (6), note that by Proposition \ref{prop:rep-unitary-bidual} and the calculation above,
\begin{align*}
(S^{2} \otimes \id)(\Gr{X}{k}{l}{m}{n}) &= (1
\otimes\Gru{F}{k}{l})
\Gr{X}{k}{l}{m}{n}(1 \otimes \Gru{F}{m}{n})^{-1}
=(\theta_{-1,1} \otimes \id)(\Gr{X}{k}{l}{m}{n}).
\end{align*}
Assertion (6) follows again by applying slice maps.
To check (7), note that (1), (2) and (4) immediately imply
$f_{z}(\UnitC{k}{m})=\delta_{k,m}$. As both $z \mapsto
f_{-z}$ and $z\mapsto f_z\circ S$ satisfy the conditions
(1)--(4) for $\mathscr{A}$ with the opposite product and
coproduct (using the partial character property (5) and the
invariance of $\phi$ with respect to $S$), we find $f_{-z} =
f_{z} \circ S$.
Finally, we assume that $\mathscr{A}$ is a partial Hopf
$^*$-algebra with positive invariant integral $\phi$ and prove
(8). By Proposition \ref{prop:rep-unitary-bidual}, we can assume
$\Gru{F}{k}{l}$ to be positive. Write
$\bar{f}_z(a) = \overline{f_z(a^*)}$. Using the relations $
(\Gr{X}{k}{l}{k}{l})^{*}=(S \otimes \id)(\Gr{X}{k}{l}{k}{l})$,
$f_{z} \circ S=f_{-z}$ and
positivity of $\Gru{F}{k}{l}$, we conclude
\begin{align*}
(\bar{f}_z \otimes
\id)(\Gr{X}{k}{l}{k}{l})
&= \left((f_{z} \otimes
\id)((\Gr{X}{k}{l}{k}{l})^{*})\right)^{*} \\
& = \left((f_{-z} \otimes \id)(\Gr{X}{k}{l}{k}{l})\right)^{*}
=
((\Gru{F}{k}{l})^{-z})^{*}
= (\Gru{F}{k}{l})^{-\overline{z}} = (f_{-\overline{z}}
\otimes \id)(\Gr{X}{k}{l}{k}{l}),
\end{align*}
whence $\bar{f}_z(a) = f_{-\overline{z}}(a)$ for all $a\in
\Gr{\mathcal{C}(\mathscr{X})}{k}{l}{k}{l}$. Since $f_{z}$ and
$f_{-\overline{z}}$ vanish on $\Gr{A}{k}{l}{m}{n}$ if $(k,l)\neq
(m,n)$ and the matrix coefficients of unitary
corepresentations span $A$, we can conclude $\bar{f}_{z}=f_{-\overline{z}}$.
\end{proof}
Note that our formula for the Woronowicz characters is slightly different from the one in \cite{Hay1}, as we are using a different normalisation of the Haar functional.
\section{Tannaka-Kre$\breve{\textrm{\i}}$n-Woronowicz duality for partial compact quantum groups}
In the previous section, we showed how any partial compact quantum group gives rise to a partial fusion C$^*$-category with a unital morphism into a partial tensor C$^*$-category of finite-dimensional Hilbert spaces. In this section we reverse this construction, and show that the two structures are in duality with each other. The proof does not differ much from the usual Tannaka-Kre$\breve{\textrm{\i}}$n reconstruction process, but some extra care is needed concerning the well-definedness of certain constructions. Implicitly, we build our reconstruction process by passing first through the construction of the discrete dual of a partial compact quantum group, which we however refrain from formally introducing.
Let us at first fix a semi-simple partial tensor category $\mathscr{C}$
with indecomposable units over a base set $\mathscr{I}$. We will again view the tensor product of $\mathscr{C}$ as being strict, for notational convenience.
Assume that we also have another set $I$ and a partition $\{I_\alpha\mid \alpha\in \mathscr{I}\}$ of $I$ with associated \emph{surjective} function \[\varphi:I\rightarrow \mathscr{I}, \quad k\mapsto k'.\] Let $F: \mathscr{C}\rightarrow \{\mathrm{Vect}_{\mathrm{fd}}\}_{I\times I}$ be a morphism based over $\varphi$, cf.~ Example \ref{ExaVectBiGr}. We will again denote by $F_{kl}:\mathscr{C}_{k'l'}\rightarrow \mathrm{Vect}_{\mathrm{fd}}$ the components of $F$ at index $(k,l)$, and by $\iota$ and $\mu$ the product and unit constraints, respectively. For $X\in \mathcal{C}_{k'\beta}$ and $Y\in \mathcal{C}_{\beta m'}$, we write the projection maps associated to the identification $F_{km}(X\otimes Y)\cong \oplus_{l\in I_\beta} \left(F_{kl}(X)\otimes F_{lm}(Y)\right)$ as \[\pi^{(klm)}_{X,Y}=(\iota^{(klm)}_{X,Y})^{*}:F_{km}(X\otimes Y) \rightarrow F_{kl}(X)\otimes F_{lm}(Y).\]
We choose a maximal family of mutually inequivalent irreducible objects $\{u_a\}_{a\in \mathcal{I}}$ in $\mathcal{C}$. We assume that the $u_a$ include the unit objects $\mathbbm{1}_{\alpha}$ for $\alpha\in \mathscr{I}$, and we may hence identify $\mathscr{I}\subseteq \mathcal{I}$. For $a\in \mathcal{I}$, we will write $u_a \in \mathcal{C}_{\lambda_a,\rho_a}$ with $\lambda_a,\rho_a\in \mathscr{I}$. For $\alpha,\beta\in \mathscr{I}$ fixed, we write $\mathcal{I}_{\alpha\beta}$ for the set of all $a\in \mathcal{I}$ with $\lambda_a=\alpha$ and $\rho_a=\beta$. When $a,b,c\in \mathcal{I}$ with $a\in \mathcal{I}_{\alpha\beta},b\in \mathcal{I}_{\beta\gamma}$ and $c\in \mathcal{I}_{\gamma\delta}$, we write $c\leq a\cdot b$ if $\mathrm{Mor}(u_c,u_a\otimes u_b)\neq \{0\}$. Note that with $a,b$ fixed, there is only a finite set of $c$ with $c\leq a\cdot b$. We also use this notation for multiple products.
\begin{Def} For $a\in \mathcal{I}$ and $k,l,m,n\in I$, define vector spaces \[\Gr{A}{k}{l}{m}{n}(a) = \delta_{k',m',\lambda_a}\delta_{l',n',\rho_a} \Hom_{\mathbb{C}}(F_{mn}(u_a),F_{kl}(u_a))^*.\] Write \[\Gr{A}{k}{l}{m}{n} =\underset{a\in \mathcal{I}}{\bigoplus}\, \Gr{A}{k}{l}{m}{n}(a),\quad A(a) = \underset{k,l,m,n}{\bigoplus} \Gr{A}{k}{l}{m}{n}(a),\quad A = \underset{k,l,m,n}{\bigoplus} \Gr{A}{k}{l}{m}{n}.\]
\end{Def}
We first turn the $\Gr{A}{k}{l}{m}{n}$ into a partial coalgebra $\mathscr{A}$ over $I^2$.
\begin{Def} For $r,s\in I$, we define \[\Delta_{rs}: \Gr{A}{k}{l}{m}{n}\rightarrow \Gr{A}{k}{l}{r}{s}\otimes \Gr{A}{r}{s}{m}{n}\] as the direct sums of the duals of the composition maps \[\Hom_{\mathbb{C}}(F_{rs}(u_a),F_{kl}(u_a)) \otimes \Hom_{\mathbb{C}}(F_{mn}(u_a),F_{rs}(u_a))\rightarrow \Hom_{\mathbb{C}}(F_{mn}(u_a),F_{kl}(u_a)),\]\[x\otimes y \mapsto x\circ y.\]
\end{Def}
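For orientation, this comultiplication takes the familiar matrix form in coordinates. Suppose, purely for illustration, that one fixes bases of the spaces $F_{kl}(u_a)$ and lets $u^{a}_{(kl,i),(mn,j)}\in \Gr{A}{k}{l}{m}{n}(a)$ denote the functionals dual to the corresponding elementary matrices $e_{ij}\in \Hom_{\mathbb{C}}(F_{mn}(u_a),F_{kl}(u_a))$ (this notation will not be used in what follows). Dualizing the composition maps then gives, for admissible indices, \[\Delta_{rs}\big(u^{a}_{(kl,i),(mn,j)}\big) = \sum_{p} u^{a}_{(kl,i),(rs,p)}\otimes u^{a}_{(rs,p),(mn,j)},\] so that the $u^{a}_{(kl,i),(mn,j)}$ behave as matrix coefficients of a corepresentation.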
\begin{Lem} The couple $(\mathscr{A},\Delta)$ is a partial coalgebra with counit map \[\epsilon:\Gr{A}{k}{l}{k}{l}(a)\rightarrow \mathbb{C},\quad f\mapsto f(\id_{F_{kl}(u_a)}).\] Moreover, for each fixed $f\in \Gr{A}{k}{l}{m}{n}(a)$, the matrix $\left(\Delta_{rs}(f)\right)_{rs}$ is rcf.
\end{Lem}
\begin{proof} Coassociativity and counitality are immediate by
duality, as for each $a$ fixed the spaces $\Hom_{\mathbb{C}}(F_{mn}(u_a),F_{kl}(u_a))$ form a partial algebra with units $\id_{F_{kl}(u_a)}$. The rcf condition follows immediately from the rcf condition for the morphism $F$.
\end{proof}
In the next step, we define a partial algebra structure on $\mathscr{A} = \{\Gr{A}{k}{l}{m}{n}\mid k,l,m,n\}$. First note that we can identify \[\Nat(F_{mn},F_{kl}) \cong \underset{\rho_a=l'=n'}{\underset{\lambda_a=k'=m'}{\prod_a}} \Hom_{\mathbb{C}}(F_{mn}(u_a),F_{kl}(u_a)),\] where $\Nat(F_{mn},F_{kl})$ denotes the space of natural transformations from $F_{mn}$ to $F_{kl}$ when $k'=m'$ and $l'=n'$. Similarly, we can identify \[\Nat(F_{mn}\otimes F_{pq},F_{kl}\otimes F_{rs}) \cong \prod_{b,c} \Hom_{\mathbb{C}}(F_{mn}(u_b)\otimes F_{pq}(u_c) ,F_{kl}(u_b)\otimes F_{rs}(u_c)),\] with the product over the appropriate index set and where \[F_{kl}\otimes F_{rs}:\mathcal{C}_{k'l'}\times \mathcal{C}_{r's'}\rightarrow \mathrm{Vect}_{\mathrm{fd}},\quad (X,Y) \mapsto F_{kl}(X)\otimes F_{rs}(Y).\] As such, there are natural pairings of these spaces with $\Gr{A}{k}{l}{m}{n}$ and $\Gr{A}{k}{l}{m}{n}\otimes \Gr{A}{r}{s}{p}{q}$, respectively.
\begin{Def} For $k'=r', l'=s'$ and $m'=t'$, we define a product
map \[M:\Gr{A}{k}{l}{r}{s} \otimes \Gr{A}{l}{m}{s}{t}\rightarrow
\Gr{A}{k}{m}{r}{t},\quad f\otimes g \mapsto f\cdot g\] by the
formula \[(f\cdot g)(x) = (f\otimes g)( \hat{\Delta}^{l}_{s}(x)),
\qquad x \in \Nat(F_{rt},F_{km}),\] where $\hat{\Delta}^l_s(x)$ is
the natural transformation\[\hat{\Delta}^l_s(x): F_{rs}\otimes
F_{st}\rightarrow F_{kl}\otimes F_{lm},\quad
\hat{\Delta}^l_s(x)_{X,Y} = \pi^{(klm)}_{X,Y} \circ x_{X\otimes Y}
\circ \iota^{(rst)}_{X,Y},\quad X\in \mathcal{C}_{k'l'},Y\in \mathcal{C}_{l'm'}.\]
\end{Def}
\begin{Rem} It has to be argued that $f\cdot g$ has finite support (over $\mathcal{I}$) as a functional on $\Nat(F_{rt},F_{km})$. In fact, if $f$ is supported at $b\in \mathcal{I}_{r's'}$ and $g$ at $c\in \mathcal{I}_{s't'}$, then $f\cdot g$ has support in the finite set of $a\in \mathcal{I}_{r't'}$ with $a\leq b\cdot c$: if $x$ is a natural transformation with support outside this set, then $x_{u_b\otimes u_c}=0$, and hence $\left(\hat{\Delta}^l_s(x)\right)_{u_b,u_c} =0$ for all $l$.
\end{Rem}
\begin{Lem} The above product maps turn $(\mathscr{A},M)$ into an $I^2$-partial algebra.
\end{Lem}
\begin{proof} We can extend the map $(\hat{\Delta}^l_s\otimes \id)$ on
$\Nat(F_{rt},F_{km})\otimes \Nat(F_{tu},F_{mn})$ to a
map \[(\hat{\Delta}^l_s\otimes \id): \Nat(F_{rt}\otimes
F_{tu},F_{km}\otimes F_{mn}) \rightarrow \Nat(F_{rs}\otimes
F_{st}\otimes F_{tu},F_{kl}\otimes F_{lm}\otimes
F_{mn}),\] \[(\hat{\Delta}^l_s\otimes \id)(x)_{X,Y,Z} =
\left(\pi^{(klm)}_{X,Y}\otimes \id_{F_{mn}(Z)}\right) \circ
x_{X\otimes Y, Z} \circ \left(\iota^{(rst)}_{X,Y} \otimes \id_{F_{tu}(Z)}\right).\]
By finite support, we then have that \[((f\cdot g)\cdot h)(x) = (f\otimes g\otimes h)((\hat{\Delta}^l_s\otimes \id)\hat{\Delta}^m_t(x))\] for all $f\in \Gr{A}{k}{l}{r}{s},g\in \Gr{A}{l}{m}{s}{t},h\in \Gr{A}{m}{n}{t}{u}$ and $x\in \Nat(F_{ru},F_{kn})$. Similarly, \[(f\cdot (g\cdot h))(x) = (f\otimes g\otimes h)((\id\otimes \hat{\Delta}^m_t)\hat{\Delta}^l_s(x)).\] Associativity then follows from the 2-cocycle condition for the $\iota$- and $\pi$-maps.
By a similar argument, one sees that the (non-zero) units are given by
$\UnitC{k}{l}\in \Gr{A}{k}{k}{l}{l}(\mathbbm{1}_{\alpha})$ (for
$\alpha=k'=l'$) corresponding to $1$ in the canonical
identifications \[\Gr{A}{k}{k}{l}{l}(\alpha) =
\Hom_{\mathbb{C}}(F_{ll}(\mathbbm{1}_{\alpha}),F_{kk}(\mathbbm{1}_{\alpha}))^*\cong
\Hom_{\mathbb{C}}(\mathbb{C},\mathbb{C})^* \cong \mathbb{C}^* \cong \mathbb{C}.\] Indeed, to prove for
example the right unit property, we use that (essentially)
$\pi_{u_a,\mathbbm{1}_{\alpha}}^{(kll)} =(\id\otimes \mu_l)$ and
$\iota_{u_a,\mathbbm{1}_{\alpha}}^{(kll)} = (\id\otimes \mu_l^{-1})$,
while \[\UnitC{k}{l}(\mu_k \circ x_{\mathbbm{1}_{\alpha}} \circ\mu_l^{-1}) = x_{\mathbbm{1}_{\alpha}} \in \mathbb{C},\quad x\in \Nat(F_{ll},F_{kk}).\qedhere\]
\end{proof}
\begin{Prop} The partial algebra and coalgebra structures on $\mathscr{A}$ define a partial bialgebra structure on $\mathscr{A}$.
\end{Prop}
\begin{proof} Let us check the properties in Definition \ref{DefPartBiAlg}. Properties \ref{Propa} and \ref{Propc} are left to the reader. Property \ref{Propd} was proven above. Property \ref{Propb} follows from the fact that for $k'=l'=s'=m'$, \[\hat{\Delta}^{l}_s(\id_{F_{km}}) = \delta_{ls} \id_{F_{kl}}\otimes \id_{F_{lm}}.\]
It remains to show the multiplicativity property \ref{Prope}. This is equivalent to proving that, for each $x\in \Nat(F_{uw},F_{km})$ and $y\in \Nat(F_{rt},F_{uw})$ (with all first or second indices in the same class of $\mathscr{I}$), one has (pointwise) that (for $l'=s'$) \[ \hat{\Delta}^l_s(x\circ y) = \sum_{v,v'=l'} \hat{\Delta}^l_v(x)\circ \hat{\Delta}^v_s(y).\] This follows from the fact that $\sum_v \iota^{(uvw)}_{X,Y}\pi^{(uvw)}_{X,Y} = \id_{F_{uw}(X\otimes Y)}$ (where we again note that the left hand side sum is in fact finite).
\end{proof}
Let us show now that the resulting partial bialgebra $\mathscr{A}$ has an invariant integral.
\begin{Def} Define $\phi: \Gr{A}{k}{k}{m}{m} \rightarrow \mathbb{C}$ as the functional which is zero on $\Gr{A}{k}{k}{m}{m}(a)$ with $a\neq \mathbbm{1}_{k'}$, and the canonical identification $\Gr{A}{k}{k}{m}{m}(k')\cong \mathbb{C}$ on the unit component (for $k'=m'$).
\end{Def}
\begin{Lem} The functional $\phi$ is an invariant integral.
\end{Lem}
\begin{proof} The normalisation condition $\phi(\UnitC{k}{k})=1$ is immediate by construction. Let us check left invariance, as right invariance will follow similarly.
Let $\hat{\phi}^k_l$ be the natural transformation from $F_{ll}$ to $F_{kk}$ which has support on multiples of $\mathbbm{1}_{k'}$, and with $(\hat{\phi}^k_l)_{\mathbbm{1}_{k'}} = 1$. Then for $f\in \Gr{A}{k}{k}{l}{l}$, we have $\phi(f) = f(\hat{\phi}^k_l)$. The left invariance of $\phi$ then follows from the easy verification that for $x\in \Nat(F_{ll},F_{kn})$, \[x\circ \hat{\phi}^l_m =\delta_{k,n} \UnitC{k}{l}(x)\hat{\phi}^k_m.\qedhere\]
\end{proof}
So far, we have constructed from $\mathscr{C}$ and $F$ a partial bialgebra
$\mathscr{A}$ with invariant integral $\phi$. Let us further impose
for the rest of this section that $\mathscr{C}$ admits duality. We shall
use the following straightforward observation.
\begin{Lem}
For all $k,l$ and $X\in \mathcal{C}_{k',l'}$, the maps
\begin{align*}
\mathrm{coev}^{kl}_{X} &:= \pi^{(klk)}_{X,\hat X} \circ F_{kk}(\mathrm{coev}_{X})\colon \mathbb{C} \to F_{kl}(X)
\otimes F_{lk}(\hat X), \\
\mathrm{ev}^{kl}_{X} &:= F_{ll}(\mathrm{ev}_{X}) \circ \iota^{(lkl)}_{\hat X,X} \colon
F_{lk}(\hat X) \otimes F_{kl}(X) \to \mathbb{C}
\end{align*}
define a duality between $F_{kl}(X)$ and $F_{lk}(\hat X)$.
\end{Lem}
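Spelled out, the lemma asserts the two snake identities \[(\id_{F_{kl}(X)}\otimes \mathrm{ev}^{kl}_{X})\circ (\mathrm{coev}^{kl}_{X}\otimes \id_{F_{kl}(X)}) = \id_{F_{kl}(X)}, \qquad (\mathrm{ev}^{kl}_{X}\otimes \id_{F_{lk}(\hat X)})\circ (\id_{F_{lk}(\hat X)}\otimes \mathrm{coev}^{kl}_{X}) = \id_{F_{lk}(\hat X)}.\] These follow by using the 2-cocycle identities for the $\iota$- and $\pi$-maps to reduce them to the images under $F$ of the corresponding snake identities for $(\mathrm{ev}_X,\mathrm{coev}_X)$ in $\mathscr{C}$.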
\begin{Prop}\label{PropAnti} The partial bialgebra $\mathscr{A}$ is a regular partial Hopf algebra.
\end{Prop}
\begin{proof}
For any $x\in \Nat(F_{mn},F_{kl})$, let us define $\hat{S}(x) \in
\Nat(F_{lk},F_{nm})$ by
\begin{align*}
\hat{S}(x)_X &=
(\id \otimes \mathrm{ev}^{lk}_{X}) \circ (\id \otimes x_{\hat X}
\otimes \id) \circ (\mathrm{coev}^{nm}_{X} \otimes \id).
\end{align*}
Then the assignment $\hat{S}$ dualizes to maps $S:\Gr{A}{k}{l}{m}{n} \rightarrow \Gr{A}{n}{m}{l}{k}$ by $S(f)(x) = f(\hat{S}(x))$. We claim that $S$ is an antipode for $\mathscr{A}$.
Let us check for example the formula \[\sum_r f_{(1){\tiny \begin{pmatrix}k&l\\ n & r\end{pmatrix}}} S(f_{(2){\tiny \begin{pmatrix} n & r \\ m & l\end{pmatrix}}}) = \delta_{k,m}\epsilon(f)\UnitC{k}{n}\] for $f\in \Gr{A}{k}{l}{m}{l}$. The other antipode identity follows similarly.
By duality, this is equivalent to the pointwise identity of natural
transformations \[\sum_r\hat{M}^n_r(\id\otimes
\hat{S})\hat{\Delta}^l_r(x) = \delta_{k,m}\UnitC{k}{n}(x)
\id_{F_{kl}},\quad x\in \Nat(F_{nn},F_{km})\] where $\hat{M}^n_r$ and
$(\id\otimes \hat{S})$ are dual to $\Delta_{nr}$ and $\id\otimes S$, respectively.
Let us fix $X\in \mathcal{C}_{k'l'}$. Then for any $x\in
\Nat(F_{nr},F_{kl})$, $y\in \Nat(F_{rn},F_{lm})$, we have
\begin{align*}
\left(\hat{M}^n_r(\id\otimes \hat{S})(x\otimes y)\right)_X =
\big(\id \otimes \mathrm{ev}_{X}^{ml}\big) \big(x_{X} \otimes y_{\hat X} \otimes \id\big)
\big(\mathrm{coev}_{X}^{nr} \otimes \id\big).
\end{align*}
For any $x\in \Nat(F_{nn},F_{km})$, we therefore have
\begin{align*}
\left(\hat{M}^n_r(\id\otimes \hat{S})\hat\Delta^{l}_{r}(x)\right)_X &=
\big(\id \otimes \mathrm{ev}^{ml}_{X}\big) \big(\pi^{(klm)}_{X,\hat X}x_{X\otimes \hat
X}\iota^{(nrn)}_{X,\hat X} \otimes \id\big)
\big(\mathrm{coev}^{nr}_{X} \otimes \id\big).
\end{align*}
We sum over $r$, use naturality of $x$, and obtain
\begin{align*}
\sum_{r} \left(\hat{M}^n_r(\id\otimes
\hat{S})\hat\Delta^{l}_{r}(x)\right)_X &=
\big(\id \otimes \mathrm{ev}_{X}^{ml}\big) \big(\pi^{(klm)}_{X,\hat X}x_{X\otimes \hat
X}F_{nn}(\mathrm{coev}_{X}) \otimes \id\big) \\
&=\delta_{k,m} \UnitC{k}{n}(x)
\big(\id \otimes \mathrm{ev}_{X}^{ml}\big)
\big(\pi^{(mlm)}_{X,\hat X}F_{mm}(\mathrm{coev}_{X})
\otimes \id\big) \\
&=\delta_{k,m} \UnitC{k}{n}(x)
\big(\id \otimes \mathrm{ev}_{X}^{ml}\big)
\big(\mathrm{coev}^{ml}_{X}
\otimes \id\big) \\
&= \delta_{k,m} \UnitC{k}{n}(x) \id.
\end{align*}
Similarly, one shows that $\mathscr{A}$ with the opposite multiplication has an antipode, using right duality. It follows that $\mathscr{A}$ is a regular partial Hopf algebra.
\end{proof}
Assume now that $\mathscr{C}$ is a partial fusion C$^*$-category, and $F$ a $\phi$-morphism from $\mathscr{C}$ to $\{\mathrm{Hilb}_{\mathrm{fd}}\}_{I\times I}$. Let us show that $\mathscr{A}$, as constructed above, becomes a partial Hopf $^*$-algebra with positive invariant integral.
\begin{Def} We define $^*: \Gr{A}{k}{l}{m}{n}\rightarrow \Gr{A}{l}{k}{n}{m}$ by the formula \[f^*(x) = \overline{f(\hat{S}(x)^*)},\qquad x\in \Nat(F_{nm},F_{lk}).\]
\end{Def}
\begin{Lem} The operation $^*$ is an anti-linear, anti-multiplicative, comultiplicative involution.
\end{Lem}
\begin{proof} Anti-linearity is clear. Comultiplicativity follows from the fact that $(xy)^* = y^*x^*$ and $\hat{S}(xy) = \hat{S}(y)\hat{S}(x)$ for natural transformations. To see anti-multiplicativity of $^*$, note first that, since $S$ is anti-multiplicative for $\mathscr{A}$, we have $\hat{S}$ anti-comultiplicative on natural transformations. Now as $(\iota_{X,Y}^{(klm)})^* = \pi_{X,Y}^{(klm)}$ by assumption, we also have $\hat{\Delta}^l_s(x)^* = \hat{\Delta}^s_l(x^*)$, which proves anti-multiplicativity of $^*$ on $\mathscr{A}$. Finally, involutivity follows from the involutivity of $x\mapsto \hat{S}(x)^*$, which is a consequence of the fact that one can choose $\mathrm{ev}_{\bar{X}}^{kl} = (\mathrm{coev}_{X}^{lk})^*$ and $\mathrm{coev}_{\bar{X}}^{kl} = (\mathrm{ev}_X^{lk})^*$.
\end{proof}
\begin{Prop} The couple $(\mathscr{A},\Delta)$ with the above $^*$-structure defines a partial compact quantum group.
\end{Prop}
\begin{proof} The only thing which is left to prove is that our
invariant integral $\phi$ is a positive functional. Now it is easily
seen from the definition of $\phi$ that the $\Gr{A}{k}{l}{m}{n}(a)$
are all mutually orthogonal. Hence it suffices to prove that the
sesquilinear inner product \[\langle f| g\rangle = \phi(f^*g)\] on
$\Gr{A}{k}{l}{m}{n}(a)$ is positive-definite.
Let us write $\bar{f}(x) = \overline{f(x^*)}$. Let again
$\hat{\phi}^k_m$ be the natural transformation from $F_{mm}$ to
$F_{kk}$ which is the identity on $\mathbbm{1}_{k'}$ and zero on other
irreducible objects. Then by definition, \[\phi(f^*g) =
(\bar{f}\otimes g)((\hat{S}\otimes
\id)\hat{\Delta}^k_m(\hat{\phi}^l_n)).\]
Assume that $f(x) = \langle v'| x_a v\rangle$ and
$g(x) = \langle w' | x_aw\rangle$ for $v,w\in F_{mn}(u_a)$ and
$v',w'\in F_{kl}(u_a)$. Then
$\overline{f}(x) = \langle v|x_{a} v'\rangle$ and
using the expression for $\hat{S}$ as
in Proposition \ref{PropAnti}, we find that
\begin{align*}
\phi(f^*g) &= \langle v \otimes w'|
(\mathrm{ev}_{a}^{kl})_{23}
(\hat\Delta^{k}_{m}(\hat \phi^{l}_{n})_{\bar a, a})_{24}
(\mathrm{coev}^{mn}_{a})_{12} (v'\otimes
w)\rangle.
\end{align*}
However, up to a positive non-zero scalar, which we may assume to be
1 by proper rescaling, we
have
\[\hat{\Delta}^k_m(\hat{\phi}^l_n)_{\bar{a}, a} =
(\mathrm{ev}^{kl}_{a})^{*}(\mathrm{ev}^{kl}_{a}).\] Hence
\begin{align*}
\phi(f^*g) &=
\langle v \otimes w'| (\mathrm{ev}^{kl}_{a})_{23} (
(\mathrm{ev}^{kl}_{a})^{*}(\mathrm{ev}^{kl}_{a}))_{24}
(\mathrm{coev}^{mn}_{a})_{12} (v'\otimes w)\rangle \\
&= \langle v \otimes w'| (\mathrm{ev}^{kl}_{a})_{23}
(\mathrm{ev}^{kl}_{a})^{*}_{24}
(w\otimes v')\rangle \\
&= \langle v|w\rangle (\mathrm{ev}^{kl}_{a}|v'\rangle_{2})
(\mathrm{ev}_{a}^{kl}|w'\rangle_{2})^{*},
\end{align*}
where $\mathrm{ev}_{a}^{kl}|z\rangle_{2}$ denotes the map $y \mapsto
\mathrm{ev}_{a}^{kl}(y\otimes z)$.
If $v=w$ and $v'=w'$, the expression above is clearly positive.
\end{proof}
Let us say that an $I$-partial compact quantum group with hyperobject set
$\mathscr{I}$ and corresponding surjection $\varphi \colon I \twoheadrightarrow
\mathscr{I}$ is \emph{based over $\varphi$}.
\begin{Theorem} \label{TheoTKPCQG}
The assignment $\mathscr{A}\mapsto (\mathrm{Corep}_u(\mathscr{A}),F)$ is (up to isomorphism/equivalence) a one-to-one correspondence between partial compact quantum groups based over $\varphi:I\twoheadrightarrow \mathscr{I}$ and $\mathscr{I}$-partial fusion C$^*$-categories $\mathscr{C}$ with unital morphism $F$ to $\{\mathrm{Hilb}_{\mathrm{fd}}\}_{I\times I}$ based over $\varphi$.
\end{Theorem}
\begin{proof} Fix first $\mathscr{A}$, and let $\mathscr{B}$ be the
partial Hopf $^*$-algebra constructed from $\mathrm{Corep}_u(\mathscr{A})$
with its natural forgetful functor. Then we have a map $\mathscr{B}
\rightarrow \mathscr{A}$ by \[ \Gr{B}{k}{l}{m}{n}(a) =
\Hom(\Gr{V}{}{(a)}{m}{n},\Gr{V}{}{(a)}{k}{l})^* \rightarrow
\Gr{A}{k}{l}{m}{n}(a): f \mapsto (\id\otimes f)(X_a),\] where the
$(V^{(a)},\mathscr{X}_a)$ run over all irreducible unitary corepresentations
of $\mathscr{A}$. By Corollary \ref{cor:rep-pw}, this map is
bijective. From the definition of $\mathscr{B}$, it is easy to check
that this map is a morphism of partial Hopf $^*$-algebras.
Conversely, let $\mathscr{C}$ be an $\mathscr{I}$-partial fusion
C$^*$-category with unital morphism $F$ to
$\{\mathrm{Hilb}_{\mathrm{fd}}\}_{I\times I}$ based over $\varphi$. Let
$\mathscr{A}$ be the associated partial Hopf $^*$-algebra. For each
irreducible $u_a \in \mathscr{C}$, let $V^{(a)} = F(u_a)$,
and \[\Gr{(X_a)}{k}{l}{m}{n} = \sum_i e_i^*\otimes e_i,\] where
$e_i$ is a basis of $\Hom_{\mathbb{C}}(F_{mn}(u_{a}), F_{kl}(u_{a}))$ and
$e_i^*$ a dual basis. Then from the definition of $\mathscr{A}$ it
easily follows that $X_a$ is a unitary corepresentation for
$\mathscr{A}$. Clearly, $\mathscr{X}_a$ is
irreducible. As the matrix coefficients of the $\mathscr{X}_a$ span
$\mathscr{A}$, it follows that the $\mathscr{X}_a$ form a maximal class of
non-isomorphic unitary corepresentations of $\mathscr{A}$. Hence we
can make a unique equivalence \[\mathscr{C}\rightarrow
\mathrm{Corep}_u(\mathscr{A}), \quad u \mapsto (F(u),\mathscr{X}_u)\] such that
$u_a\rightarrow \mathscr{X}_a$. From the definitions of the coproduct and
product in $\mathscr{A}$, it is readily verified that the natural
morphisms $\iota^{(klm)}_{u,v}:F_{kl}(u)\otimes F_{lm}(v)\rightarrow
F_{km}(u\otimes v)$ turn it into a monoidal equivalence.
\end{proof}
\section{Examples}
\subsection{Hayashi's canonical partial compact quantum groups} \label{SubSecCan}
The following generalizes Hayashi's original construction.
\begin{Exa}
Let $\mathscr{C}$ be an $\mathscr{I}$-partial fusion C$^*$-category. Let
$\mathcal{I}$ label a distinguished maximal set $\{u_k\}$ of mutually
non-isomorphic irreducible objects of $\mathcal{C}$, with associated
bigrading $\Gru{\mathcal{I}}{\alpha}{\beta}$ over
$\mathscr{I}$. Define \[F_{kl}(X) = \Hom(u_k, X\otimes u_l),\qquad
X\in \mathcal{C}_{\alpha\beta}, k\in \Gru{\mathcal{I}}{\alpha}{\gamma},l\in \Gru{\mathcal{I}}{\beta}{\gamma}.\] Then each $F_{kl}(X)$ is a Hilbert space by the inner product $\langle f,g\rangle = f^*g$. Put $F_{kl}(X) = 0$ for $k,l$ outside their proper domains. Then clearly the assignment $(k,l)\mapsto F_{kl}(X)$ is rcf. Moreover, we have isometric compatibility morphisms \[F_{kl}(X)\otimes F_{lm}(Y)\rightarrow F_{km}(X\otimes Y),\quad f\otimes g \mapsto (\id\otimes g)f,\] while $F_{kl}(\mathbbm{1}_{\alpha}) \cong \delta_{kl} \mathbb{C}$ for $k,l\in \Gru{\mathcal{I}}{\alpha}{\alpha}$.
It is readily verified that $F$ defines a unital morphism from $\mathscr{C}$ to $\{\mathrm{Hilb}_{\mathrm{fd}}\}_{\mathcal{I}\times \mathcal{I}}$ based over the partition \[\mathcal{I}_{\alpha} = \bigcup_{\beta} \Gru{\mathcal{I}}{\alpha}{\beta},\quad\alpha\in \mathscr{I}.\] From the Tannaka-Kre$\breve{\textrm{\i}}$n-Woronowicz reconstruction result, we obtain a partial compact quantum group $\mathscr{A}_{\mathscr{C}}$ with object set $\mathcal{I}$, which we call the \emph{canonical partial compact quantum group} associated with $\mathscr{C}$.
\end{Exa}
\begin{Exa} More generally, let $\mathscr{C}$ be an $\mathscr{I}$-partial fusion C$^*$-category, and let $\mathscr{D}$ be a \emph{semi-simple partial $\mathscr{C}$-module C$^*$-category} based over a set $\mathscr{J}$ and function $\phi:\mathscr{J}\rightarrow \mathscr{I},k\mapsto k'$. That is, $\mathscr{D}$ consists of a collection of semi-simple C$^*$-categories $\mathcal{D}_{k}$ with $k\in \mathscr{J}$, together with tensor products $\otimes: \mathcal{C}_{k'l'}\times \mathcal{D}_{l}\rightarrow \mathcal{D}_{k}$ satisfying the appropriate associativity and unit constraints. Then if $\mathcal{I}$ labels a distinguished maximal set $\{u_a\}$ of mutually non-isomorphic irreducible objects of $\mathcal{D}$, with associated grading $\mathcal{I}_{k}$ over $\mathscr{J}$, we can again define \[F_{ab}(X) = \Hom(u_a, X\otimes u_b),\qquad X\in \mathcal{C}_{k'l'}, a\in \mathcal{I}_{k},b\in\mathcal{I}_{l},\] and we obtain a unital morphism from $\mathscr{C}$ to $\{\mathrm{Hilb}_{\mathrm{fd}}\}_{\mathcal{I}\times \mathcal{I}}$. The associated partial compact quantum group $\mathscr{A}_{\mathscr{C}}$ will be called the \emph{canonical partial compact quantum group} associated with $(\mathscr{C},\mathscr{D})$. The previous construction coincides with the special case $\mathscr{C}= \mathscr{D}$ with $\mathscr{J} = \mathscr{I}\times \mathscr{I}$ and $\phi$ projection to the first factor.
\end{Exa}
\begin{Exa}\label{ExaErgo} As a particular instance, let $\mathbb{G}$ be a compact quantum group, and consider an ergodic action of $\mathbb{G}$ on a unital C$^*$-algebra $C(\mathbb{X})$. Then the collection of finitely generated $\mathbb{G}$-equivariant $C(\mathbb{X})$-Hilbert modules forms a module C$^*$-category over $\mathrm{Rep}_u(\mathbb{G})$, cf.~ \cite{DCY1}.
\end{Exa}
\subsection{Morita equivalence}
\begin{Def} Two partial compact quantum groups $\mathscr{G}$ and $\mathscr{H}$ are said to be \emph{Morita equivalent} if there exists an equivalence $\mathrm{Rep}_u(\mathscr{G}) \rightarrow \mathrm{Rep}_u(\mathscr{H})$ of partial fusion C$^*$-categories.
\end{Def}
In particular, if $\mathscr{G}$ and $\mathscr{H}$ are Morita equivalent they have the same hyperobject set, but they need not share the same object set.
Our goal is to give a concrete implementation of Morita equivalence, as has been done for compact quantum groups \cite{BDV1}. Note that we slightly changed their terminology of monoidal equivalence into Morita equivalence, as we feel the monoidality is intrinsic to the context. We introduce the following definition, in which indices are considered modulo 2.
\begin{Def} A \emph{linking partial compact quantum group} consists of a partial compact quantum group $\mathscr{G}$ defined by a partial Hopf $^*$-algebra $\mathscr{A}$ over a set $I$ with a distinguished partition $I = I_1\sqcup I_2$ such that the units $\UnitC{i}{j} = \sum_{k\in I_i,l\in I_j} \UnitC{k}{l} \in M(A)$ are central, and such that for each $r\in I_i$, there exists $s\in I_{i+1}$ such that $\UnitC{r}{s}\neq 0$.
\end{Def}
If $\mathscr{A}$ defines a linking partial compact quantum group, we can split $A$ into four components $A^i_j = A\UnitC{i}{j}$. It is readily verified that the $A^i_i$ together with all $\Delta_{rs}$ with $r,s \in I_i$ themselves define partial compact quantum groups, which we call the \emph{corner} partial compact quantum groups of $\mathscr{A}$.
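One checks directly that centrality of the $\UnitC{i}{j}$ amounts to the vanishing of $\Gr{A}{k}{l}{m}{n}$ unless $k,l$ lie in a common $I_i$ and $m,n$ in a common $I_j$, so that \[A^i_j = \underset{m,n\in I_j}{\underset{k,l\in I_i}{\bigoplus}}\, \Gr{A}{k}{l}{m}{n}, \qquad \Delta_{rs}(A^i_j)\subseteq A^i_p\otimes A^p_j \quad \textrm{for } r,s\in I_p,\] in analogy with the matrix comultiplication of a co-groupoid.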
\begin{Prop} Two partial compact quantum groups are Morita equivalent iff they arise as the corners of a linking partial compact quantum group.
\end{Prop}
\begin{proof} Suppose first that $\mathscr{G}_1$ and $\mathscr{G}_2$ are Morita equivalent partial compact quantum groups with associated partial Hopf $^*$-algebras $\mathscr{A}_1$ and $\mathscr{A}_2$ over respective sets $I_1$ and $I_2$. We may then identify their corepresentation categories with the same abstract partial tensor C$^*$-category $\mathscr{C}$ based over their common hyperobject set $\mathscr{I}$. The category $\mathscr{C}$ then comes endowed with two forgetful functors $F^{(i)}$ to $\{\mathrm{Hilb}_{\mathrm{fd}}\}_{I_i\times I_i}$ corresponding to the respective $\mathscr{A}_i$.
With $I = I_1\sqcup I_2$, we may then as well combine the $F^{(i)}$ into a global unital morphism $F:\mathscr{C} \rightarrow \{\mathrm{Hilb}_{\mathrm{fd}}\}_{I\times I}$, with $F_{kl}(X)=F_{kl}^{(i)}(X)$ if $k,l\in I_i$ and $F_{kl}(X)=0$ otherwise. Let $\mathscr{A}$ be the associated partial Hopf $^*$-algebra constructed from the Tannaka-Kre$\breve{\textrm{\i}}$n-Woronowicz reconstruction procedure.
From the precise form of this reconstruction, it follows immediately that $\Gr{A}{k}{l}{m}{n} =0$ if either $k,l$ or $m,n$ do not lie in the same $I_i$. Hence the $\UnitC{i}{j} = \sum_{k\in I_i,l\in I_j} \UnitC{k}{l}$ are central.
Moreover, fix $k\in I_i$ and any $l\in I_{i+1}$ with $k'=l'$. Then $\Nat(F_{ll},F_{kk})\neq \{0\}$. It follows that $\UnitC{k}{l}\neq 0$. Hence $\mathscr{A}$ is a linking partial compact quantum group. It is clear that $\mathscr{A}_1$ and $\mathscr{A}_2$ are the corners of $\mathscr{A}$.
Conversely, suppose that $\mathscr{A}_1$ and $\mathscr{A}_2$ arise from the corners of a linking partial compact quantum group defined by $\mathscr{A}$ with invariant integral $\phi$. We will show that the associated partial compact quantum groups $\mathscr{G}$ and $\mathscr{G}_1$ are Morita equivalent. Then by symmetry $\mathscr{G}$ and $\mathscr{G}_2$ are Morita equivalent, and hence also $\mathscr{G}_1$ and $\mathscr{G}_2$.
For $(V,\mathscr{X}) \in \mathrm{Corep}_u(\mathscr{A})$, let $F(V,\mathscr{X})
= (W,\mathscr{Y})$ be the pair obtained from $(V,\mathscr{X})$ by
restricting all indices to those contained in $I_1$. It is immediate that $(W,\mathscr{Y})$ is a unitary corepresentation of $\mathscr{A}_1$, and that the functor $F$ becomes a unital morphism in a trivial way. What remains to show is that $F$ is an equivalence of categories, i.e.~ that $F$ is faithful and essentially surjective.
Let us first show that $F$ is faithful. Lemma
\ref{lemma:rep-invertible} implies that for every $(V,\mathscr{X}) \in
\mathrm{Corep}_u(\mathscr{A})$, we have $\Gru{V}{k}{l}=0$ whenever $k\in
I_{i}$ and $l\in I_{i+1}$. If $T$ is a morphism in
$\mathrm{Corep}_u(\mathscr{A})_{\alpha\beta}$ and $\Gru{T}{k}{l}=0$ for all
$k,l \in I_{1}$, we therefore get $\Gru{T}{k}{l}=0$ for all $k\in I$
and $l\in I_{1}$. Since $I_{\beta}\cap I_{1}$ is non-empty by
assumption, we can apply Lemma \ref{LemInjMor} and conclude that
$T=0$.
To complete the proof, we only need to show that $F$ induces a
bijection between isomorphism classes of irreducible unitary
corepresentations of $\mathscr{A}$ and of $\mathscr{A}_{1}$. Note that
by Proposition \ref{prop:rep-cosemisimple} and Lemma
\ref{lemma:rep-regular-embedding}, each such class can be represented
by a restriction of the regular corepresentation of $\mathscr{A}$ or
$\mathscr{A}_{1}$, respectively.
So, let $(W,\mathscr{Y})$ be an irreducible restriction of the regular
corepresentation of $\mathscr{A}_{1}$. Pick a non-zero $a \in
\Gru{W}{m}{n}$, define $\Gru{V}{p}{q} \subseteq \bigoplus_{k,l}
\Gr{A}{k}{l}{p}{q}$ as in \eqref{eq:element-reg-corep} and form the
regular corepresentation $(V,\mathscr{X})$ of $\mathscr{A}$. Then
$\Gru{V}{p}{q} = \Gru{W}{p}{q}$ for all $p,q\in I_{1}$ by Lemma
\ref{lemma:regular-corep} (2) and hence $F(V,\mathscr{X}) =
(W,\mathscr{Y})$. Since $F$ is faithful, $(V,\mathscr{X})$ must be
irreducible.
Conversely, let $(V,\mathscr{X})$ be an irreducible restriction of the
regular corepresentation of $\mathscr{A}$. Since $F$ is faithful,
there exist $k,l\in I_{1}$ such that $\Gru{V}{k}{l}\neq 0$. Applying
Corollary \ref{cor:rep-pw-morphisms}, we may assume that
$\Gru{V}{p}{q} \subseteq \Gr{A}{k}{l}{p}{q}$ for some $k,l\in I_{1}$
and all $p,q\in I$. But then $F(V,\mathscr{X})$ is a restriction of
the regular corepresentation of $\mathscr{A}_{1}$. If
$F(V,\mathscr{X})$ decomposed into a direct sum of several
irreducible corepresentations, then so would
$(V,\mathscr{X})$ by the argument above. Thus, $F(V,\mathscr{X})$ is irreducible.
Finally, assume that
$(V,\mathscr{X})$ and $(W,\mathscr{Y})$ are
inequivalent irreducible unitary corepresentations of $\mathscr{A}$. Then $\mathcal{C}(V,\mathscr{X}) \cap
\mathcal{C}(W,\mathscr{Y})=0$ by Corollary \ref{cor:rep-unitary-orthogonality-1} and hence $\mathcal{C}(F(V,\mathscr{X}))
\cap\mathcal{C}(F(W,\mathscr{Y})) =0$, whence $F(V,\mathscr{X})$ and
$F(W,\mathscr{Y})$ are inequivalent.
\end{proof}
\begin{Exa} If $\mathscr{G}_1$ and $\mathscr{G}_2$ are Morita equivalent compact quantum groups, the total partial compact quantum group is the co-groupoid constructed in \cite{Bic1}.
\end{Exa}
\begin{Exa} Let $\mathbb{G}$ be a compact quantum group with ergodic action on a unital C$^*$-algebra $C(\mathbb{X})$. Consider the module C$^*$-category $\mathcal{D}$ of finitely generated $\mathbb{G}$-equivariant Hilbert $C(\mathbb{X})$-modules as in Example \ref{ExaErgo}. Then $\mathbb{G}$ is Morita equivalent with the canonical partial compact quantum group constructed from $(\mathrm{Rep}_u(\mathbb{G}),\mathcal{D})$. The off-diagonal part of the associated linking partial compact quantum group was studied in \cite{DCY1}. We will make a detailed study of the case $\mathbb{G} = SU_q(2)$ in \cite{DCT2}, in particular for $\mathbb{X}$ a Podle\'{s} sphere. This will lead us to partial compact quantum group versions of the dynamical quantum $SU(2)$-group.
\end{Exa}
\subsection{Weak Morita equivalence}
\begin{Def} A \emph{linking} partial fusion C$^*$-category consists of a partial fusion C$^*$-category $\mathscr{C}$ with a distinguished partition $\mathscr{I} =\mathscr{I}_1 \cup \mathscr{I}_2$ such that for each $\alpha\in \mathscr{I}_i$, there exists $\beta \in \mathscr{I}_{i+1}$ (indices taken modulo 2) with $\mathcal{C}_{\alpha\beta}\neq \{0\}$.
The \emph{corners} of $\mathscr{C}$ are the restrictions of $\mathscr{C}$ to $\mathscr{I}_1$ and $\mathscr{I}_2$.
\end{Def}
The following notion is essentially the same as the one by M. M\"{u}ger \cite{Mug1}.
\begin{Def} Two partial semi-simple tensor C$^*$-categories $\mathscr{C}_1$ and $\mathscr{C}_2$ with duality over respective sets $\mathscr{I}_1$ and $\mathscr{I}_2$ are called \emph{Morita equivalent} if there exists a linking partial fusion C$^*$-category $\mathscr{C}$ over the set $\mathscr{I}=\mathscr{I}_1\sqcup \mathscr{I}_2$ whose corners are isomorphic to $\mathscr{C}_1$ and $\mathscr{C}_2$.
We say two partial compact quantum groups $\mathscr{G}_1$ and $\mathscr{G}_2$ are \emph{weakly Morita equivalent} if their representation categories $\mathrm{Rep}_u(\mathscr{G}_i)$ are Morita equivalent.
\end{Def}
One can prove that this is indeed an equivalence relation.
\begin{Def}\label{DefCoLink} A \emph{co-linking partial compact quantum group} consists of a partial compact quantum group $\mathscr{G}$ defined by a Hopf $^*$-algebra $\mathscr{A}$ over an index set $I$, together with a distinguished partition $I = I_1\cup I_2$ such that, with indices taken modulo 2, $\UnitC{k}{l}=0$ whenever $k\in I_i$ and $l\in I_{i+1}$, and such that for each $k\in I_i$, there exists $l\in I_{i+1}$ with $\Gr{A}{k}{l}{k}{l}\neq 0$.
\end{Def}
It is again easy to see that if we restrict all indices of a co-linking partial compact quantum group to one of the distinguished sets, we obtain a partial compact quantum group which we will call a corner. In fact, write $e_i = \sum_{k,l\in I_i} \UnitC{k}{l}$. Then we can decompose the total algebra $A$ into components $A_{ij} = e_{i}Ae_{j}$, and correspondingly write $A$ in matrix notation \[ A = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22}\end{pmatrix},\] where multiplication is matrixwise and where comultiplication is entrywise. Note that we have $A_{12}A_{21} = A_{11}$, and similarly $A_{21}A_{12} = A_{22}$. Indeed, take $k\in I_1$, and pick $l\in I_2$ with $\Gr{A}{k}{l}{k}{l}\neq \{0\}$. Then in particular, upon rescaling we can find an $a\in \Gr{A}{k}{l}{k}{l}$ with $\epsilon(a) = 1$. Hence for any $m\in I_1$, we have $\UnitC{k}{m} = \UnitC{k}{m} a_{(1)}S(a_{(2)}) \in A_{12}A_{21}$. As this latter space contains all local units of $A_{11}$ and is a right $A_{11}$-module, it follows that it is in fact equal to $A_{11}$. We hence deduce that $A_{11}$ and $A_{22}$ are Morita equivalent algebras, with the Morita equivalence implemented by $A$.
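The mechanism behind $A_{12}A_{21}=A_{11}$ is visible already in the simplest linear-algebra toy model: a full matrix algebra cut by two complementary diagonal projections. The following sketch checks this numerically; the sizes are our own arbitrary choices, and no quantum group structure is involved.

```python
import numpy as np

# Toy corner decomposition: A = M_5(C), e1 projects onto the first two
# coordinates, e2 onto the last three; A_ij = e_i A e_j.
n, k = 5, 2
e1 = np.diag([1.0] * k + [0.0] * (n - k))
e2 = np.eye(n) - e1

# Bases of A_12 and A_21, selected from the matrix units E_rs.
units = [np.outer(np.eye(n)[r], np.eye(n)[s]) for r in range(n) for s in range(n)]
A12 = [u for u in units if np.allclose(e1 @ u @ e2, u)]
A21 = [u for u in units if np.allclose(e2 @ u @ e1, u)]

# The span of all products X Y with X in A_12, Y in A_21 ...
products = np.array([(x @ y).ravel() for x in A12 for y in A21])
# ... is all of A_11 = e1 A e1, which has dimension k*k = 4.
assert np.linalg.matrix_rank(products) == k * k
print("A12 * A21 spans A11")
```

The same rank computation with $e_1$, $e_2$ of other ranks illustrates that the two corners are Morita equivalent full matrix algebras of different sizes.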
\begin{Rem} For finite partial compact quantum groups, one can then
easily show that the notion of a co-linking partial compact quantum
group is dual to the notion of a linking partial compact quantum group.\end{Rem}
\begin{Def} We call two partial compact quantum groups \emph{co-Morita equivalent} if there exists a \emph{co-linking partial compact quantum group} having these partial compact quantum groups as its corners.
\end{Def}
\begin{Lem} Co-Morita equivalence is an equivalence relation.
\end{Lem}
\begin{proof} Symmetry is clear. Co-Morita equivalence of
$\mathscr{A}$ with itself follows by considering as co-linking
quantum groupoid the product of $\mathscr{A}$ with the partial
compact quantum group $M_2(\mathbb{C})$, where $\Delta(e_{ij}) =
e_{ij}\otimes e_{ij}$, arising from a groupoid as in Example \ref{ExaGrpd}.
Let us sketch the proof of transitivity. Assume that $\mathscr{G}_1$ and $\mathscr{G}_2$, as well as $\mathscr{G}_2$ and $\mathscr{G}_3$, are co-Morita equivalent. Write the global $^*$-algebras of the associated co-linking quantum groupoids as \[A_{\{1,2\}} = \begin{pmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{pmatrix}, \quad A_{\{2,3\}} = \begin{pmatrix} A_{22} & A_{23} \\ A_{32} & A_{33}\end{pmatrix}.\] Then we can define a new $^*$-algebra $A_{\{1,2,3\}}$ as \[ A_{\{1,2,3\}} = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix},\] where $A_{13} = A_{12}\underset{A_{22}}{\otimes } A_{23}$ and $A_{31} = A_{32}\underset{A_{22}}{\otimes} A_{21}$, and with multiplication and $^*$-structure defined in the obvious way.
It is straightforward to verify that there exists a unique $^*$-homomorphism $\Delta: A_{\{1,2,3\}} \rightarrow M(A_{\{1,2,3\}}\otimes A_{\{1,2,3\}})$ whose restrictions to the $A_{ij}$ with $|i-j|\leq 1$ coincide with the already defined coproducts. We leave it to the reader to verify that $(A_{\{1,2,3\}},\Delta)$ defines a regular weak multiplier Hopf $^*$-algebra satisfying the conditions of Proposition \ref{PropCharPBA}, and hence arises from a regular partial weak Hopf $^*$-algebra.
Let now $\phi$ be the functional which is zero on the off-diagonal entries $A_{ij}$ and coincides with the invariant positive integrals on the $A_{ii}$. Then it is also easily checked that $\phi$ is invariant. To show that $\phi$ is positive, we invoke Remark \ref{RemPos}. Indeed, any irreducible corepresentation of $A_{\{1,2,3\}}$ has coefficients in a single $A_{ij}$. For those $i,j$ with $|i-j|\leq 1$, we know that the corepresentation is unitarizable by restricting to a corner $2\times 2$-block. If however the corepresentation $\mathscr{X}$ has coefficients living in (say) $A_{13}$, it follows from the identity $A_{12}A_{23}=A_{13}$ that the corepresentation is a direct summand of a product $\mathscr{Y}\Circt \mathscr{Z}$ of corepresentations with coefficients in respectively $A_{12}$ and $A_{23}$. This proves unitarizability of $\mathscr{X}$. It follows from Remark \ref{RemPos} that $\phi$ is positive, and hence $\mathscr{A}_{\{1,2,3\}}$ defines a partial compact quantum group.
We claim that the subspace $\mathscr{A}_{\{1,3\}}$ (in the obvious notation) defines a co-linking partial compact quantum group between $\mathscr{G}_1$ and $\mathscr{G}_3$. Indeed, it is clear that $\mathscr{A}_{11}$ and $\mathscr{A}_{33}$ are the corners of $\mathscr{A}_{\{1,3\}}$, and that $\UnitC{k}{l}=0$ unless $k$ and $l$ both lie in $I_1$ or both lie in $I_{3}$. To finish the proof, it is sufficient to show that for each $k\in I_1$, there exists $l\in I_{3}$ with $\Gr{A}{k}{l}{k}{l}\neq 0$, as the other case follows by symmetry using the antipode. But there exist $m\in I_2$ with $\Gr{A}{k}{m}{k}{m} \neq \{0\}$ and $l\in I_3$ with $\Gr{A}{m}{l}{m}{l}\neq\{0\}$. As in the discussion following Definition \ref{DefCoLink}, this implies that there exist $a\in \Gr{A}{k}{m}{k}{m}$ and $b\in \Gr{A}{m}{l}{m}{l}$ with $\epsilon(a)=\epsilon(b)=1$. Hence $\epsilon(ab)=1$, showing $\Gr{A}{k}{l}{k}{l}\neq \{0\}$.
\end{proof}
\begin{Prop}\label{PropCoWeak} Assume that two partial compact quantum groups $\mathscr{G}_1$ and $\mathscr{G}_2$ are co-Morita equivalent. Then they are weakly Morita equivalent.
\end{Prop}
\begin{proof}
Consider the corepresentation category $\mathscr{C}$ of a co-linking partial compact quantum group $\mathscr{A}$ over $I = I_1\cup I_2$. Let $\varphi:I\rightarrow \mathscr{I}$ define the corresponding partition along the hyperobject set. Then by the defining property of a co-linking partial compact quantum group, also $\mathscr{I} = \mathscr{I}_1\cup \mathscr{I}_2$ with $\mathscr{I}_i=\varphi(I_i)$ is a partition. Hence $\mathscr{C}$ decomposes into parts $\mathscr{C}_{ij}$ with $i,j\in \{1,2\}$ and $\mathscr{C}_{ii}\cong \mathrm{Rep}_u(\mathscr{G}_i)$.
To show that $\mathscr{G}_1$ and $\mathscr{G}_2$ are weakly Morita equivalent, it thus suffices to show that $\{\mathscr{C}_{ij}\}$ forms a linking partial fusion C$^*$-category. Fix $\alpha\in \mathscr{I}_1$ and $k\in I_{\alpha}$. Then, as $\mathscr{A}$ is co-linking, there exists $l \in I_2$ with $\Gr{A}{k}{l}{k}{l}\neq \{0\}$. Hence there exists a non-zero regular unitary corepresentation inside $\oplus_{m,n}\Gr{A}{k}{l}{m}{n}$. If then $l\in I_{\beta}$ with $\beta\in \mathscr{I}_2$, it follows that $\mathcal{C}_{\alpha\beta}\neq 0$. By symmetry, we also have that for each $\alpha \in \mathscr{I}_2$ there exists $\beta \in \mathscr{I}_1$ with $\mathcal{C}_{\alpha\beta}\neq \{0\}$. This proves that $\{\mathscr{C}_{ij}\}$ forms a linking partial fusion C$^*$-category.
\end{proof}
\begin{Prop}\label{PropCoLink} Let $\mathscr{C}$ be a linking $\mathscr{I}$-partial fusion C$^*$-category. Then the associated canonical partial compact quantum group is a co-linking partial compact quantum group.
\end{Prop}
\begin{proof} Let $\mathscr{I}= \mathscr{I}_1\cup \mathscr{I}_2$ be the associated partition of $\mathscr{I}$. Let $\mathscr{A} = \mathscr{A}_{\mathscr{C}}$ define the canonical partial compact quantum group with object set $I$ and hyperobject partition $\varphi:I\rightarrow \mathscr{I}$. Let $I=I_1\cup I_2$ with $I_i = \varphi^{-1}(\mathscr{I}_i)$ be the corresponding decomposition of $I$. By construction, $\UnitC{k}{l}=0$ if $k$ and $l$ are not both in $I_1$ or $I_2$.
Fix now $k\in I_{\alpha}$ for some $\alpha \in \mathscr{I}_i$. Pick $\beta\in \mathscr{I}_{i+1}$ with $\mathcal{C}_{\alpha\beta}\neq\{0\}$, and let $(V,\mathscr{X})$ be a non-zero irreducible corepresentation inside $\mathcal{C}_{\alpha\beta}$. Then by irreducibility, we know that $\oplus_l \Gru{V}{k}{l} \neq \{0\}$, hence there exists $l\in I_{\beta}$ with $\Gru{V}{k}{l}\neq \{0\}$. As $(\epsilon\otimes \id)\Gr{X}{k}{l}{k}{l} = \id_{\Gru{V}{k}{l}}$, it follows that $\Gr{A}{k}{l}{k}{l} \neq 0$. This proves that $\mathscr{A}$ defines a co-linking partial compact quantum group.
\end{proof}
\begin{Rem} Note however that the corners of the canonical partial compact quantum group associated to a linking $\mathscr{I}$-partial fusion C$^*$-category \emph{are not} the canonical partial compact quantum groups associated to the corners of the linking $\mathscr{I}$-partial fusion C$^*$-category. Rather, they are Morita equivalent copies of these.
\end{Rem}
\begin{Theorem} Two partial compact quantum groups $\mathscr{G}_1$ and $\mathscr{G}_2$ are weakly Morita equivalent if and only if they are connected by a string of Morita and co-Morita equivalences.
\end{Theorem}
\begin{proof} Clearly if two partial compact quantum groups are Morita
equivalent, they are weakly Morita equivalent. By Proposition
\ref{PropCoWeak}, the same is true for co-Morita equivalence. This proves one direction of the theorem.
Conversely, assume $\mathscr{G}_1$ and $\mathscr{G}_2$ are weakly Morita equivalent. Let $\mathscr{C}$ be a linking partial fusion C$^*$-category between $\mathrm{Rep}_u(\mathscr{G}_1)$ and $\mathrm{Rep}_u(\mathscr{G}_2)$. Then $\mathscr{G}_i$ are Morita equivalent with the corners of the canonical partial compact quantum group associated to $\mathscr{C}$. But Proposition \ref{PropCoLink} shows that these corners are co-Morita equivalent.
\end{proof}
\begin{Rem}
\begin{enumerate}
\item Note that it is essential that we allow the string of equivalences to pass through partial compact quantum groups, even if we start out with (genuine) compact quantum groups.
\item One can show that if $\mathscr{G}$ is a finite partial compact quantum group, then $\mathscr{G}$ is weakly Morita equivalent with its dual $\widehat{\mathscr{G}}$ (defined by the dual weak Hopf $^*$-algebra). In fact, if $\mathscr{G}$ is the canonical partial compact quantum group associated to a finite partial fusion C$^*$-category, then $\mathscr{G}$ is isomorphic to the co-opposite of its dual, as is e.g.~ the case for dynamical quantum $SU(2)$ at roots of unity. In any case, it follows that two finite quantum groups $H$ and $G$ are weakly Morita equivalent if and only if they can be connected by a string of 2-cocycle elements and 2-cocycle functionals.
\end{enumerate}
\end{Rem}
\section{Partial compact quantum groups from reciprocal random walks}
In this section, we study in more detail the construction from Section \ref{SubSecCan} in case the category $\mathcal{C}$ is the Temperley-Lieb C$^*$-category.
\subsection{Reciprocal random walks}
We recall some notions introduced in \cite{DCY1}. We slightly change the terminology for the sake of convenience.
\begin{Def} Let $t\in \mathbb{R}_0$. A \emph{$t$-reciprocal random walk} consists of a quadruple $(\Gamma,w,\sgn,i)$ with \begin{itemize}
\item[$\bullet$] $\Gamma=(\Gamma^{(0)},\Gamma^{(1)},s,t)$ a graph with \emph{source} and \emph{target} maps \[s,t:\Gamma^{(1)}\rightarrow \Gamma^{(0)},\]
\item[$\bullet$] $w$ a function (the \emph{weight} function) $w:\Gamma^{(1)}\rightarrow \mathbb{R}_0^+$,
\item[$\bullet$] $\sgn$ a function (the \emph{sign} function) $\sgn:\Gamma^{(1)}\rightarrow \{\pm 1\}$,
\item[$\bullet$] $i$ an involution \[i:\Gamma^{(1)} \rightarrow \Gamma^{(1)},\quad e\mapsto \overline{e}\] with $s(\bar{e}) = t(e)$ for all edges $e$,
\end{itemize}
such that the following conditions are satisfied:
\begin{enumerate}[label=(\arabic*)]
\item (weight reciprocality) $w(e)w(\bar{e}) = 1$ for all edges $e$,
\item (sign reciprocality) $\sgn(e)\sgn(\bar{e}) = \sgn(t)$ for all edges $e$,
\item (random walk property) $p(e) = \frac{1}{|t|}w(e)$ satisfies $\sum_{s(e)=v} p(e) = 1$ for all $v\in \Gamma^{(0)}$.
\end{enumerate}
\end{Def}
Note that, by \cite[Proposition 3.1]{DCY1}, there is a uniform bound on the number of edges leaving any given vertex, i.e.~ $\Gamma$ has finite degree.
For examples of $t$-reciprocal random walks, we refer to \cite{DCY1}. One particular example (which will be needed for our construction of dynamical quantum $SU(2)$) is the following.
\begin{Exa}\label{ExaGraphPod} Take $0<|q|<1$ and $x\in \mathbb{R}$. Write $2_q = q+q^{-1}$. Then we have the $-2_q$-reciprocal random walk \[\Gamma_x =(\Gamma_x,w,\sgn,i)\] with \[ \Gamma^{(0)} = \mathbb{Z},\quad \Gamma^{(1)} = \{(k,l)\mid |k-l|= 1\}\subseteq \mathbb{Z}\times \mathbb{Z}\] with projection on the first (resp. second) leg as source (resp. target) map, with weight function \[w(k,k\pm 1) = \frac{|q|^{x+k\pm 1}+|q|^{-(x+k\pm 1)}}{|q|^{x+k}+|q|^{-(x+k)}},\] sign function \[\sgn(k,k+1) = 1,\quad \sgn(k,k-1) = -\sgn(q),\] and involution $\overline{(k,k+1)} = (k+1,k)$.
By translation we can shift the value of $x$ by an integer. By a point reflection and changing the direction of the arrows, we can change $x$ into $-x$. It follows that by some (unoriented) graph isomorphism, we can always arrange to have $x\in \lbrack 0,\frac{1}{2}\rbrack$.
\end{Exa}
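The three axioms are easy to verify numerically for $\Gamma_x$. The following sketch checks weight reciprocality, sign reciprocality and the random walk property; the sampled parameter values are arbitrary assumptions (any $0<|q|<1$ and real $x$ will do).

```python
import math

def check(q, x, kmax=5):
    """Verify the reciprocal-random-walk axioms for the walk Gamma_x."""
    r, t = abs(q), -(q + 1/q)            # t = -2_q

    def w(k, l):                         # weight of the edge (k, l), |k - l| = 1
        return (r**(x+l) + r**(-(x+l))) / (r**(x+k) + r**(-(x+k)))

    def sgn(k, l):                       # sign function of the example
        return 1.0 if l == k + 1 else -math.copysign(1.0, q)

    for k in range(-kmax, kmax):
        for l in (k - 1, k + 1):
            assert abs(w(k, l) * w(l, k) - 1) < 1e-12              # weight reciprocality
            assert sgn(k, l) * sgn(l, k) == math.copysign(1.0, t)  # sign reciprocality
        # random walk property: p(e) = w(e)/|t| sums to 1 at each vertex
        assert abs((w(k, k+1) + w(k, k-1)) / abs(t) - 1) < 1e-12

for q in (0.6, -0.6):
    for x in (0.0, 0.3, 0.5):
        check(q, x)
print("reciprocal random walk axioms verified")
```

The random walk property here reduces to the identity $r^{y+1}+r^{-y-1}+r^{y-1}+r^{-y+1} = (r+r^{-1})(r^{y}+r^{-y})$ with $r=|q|$, $y=x+k$.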
\subsection{Temperley-Lieb categories}
Let now $0<|q|\leq 1$, and let $SU_q(2)$ be Woronowicz's twisted $SU(2)$ group \cite{Wor1}. Then $SU_q(2)$ is a compact quantum group whose category of finite-dimensional unitary representations $\mathrm{Rep}(SU_q(2))$ is generated by the spin $1/2$-representation $\pi_{1/2}$ on $\mathbb{C}^2$. It has the same fusion rules as $SU(2)$, and conversely any compact quantum group with the fusion rules of $SU(2)$ has its representation category equivalent to $\mathrm{Rep}(SU_q(2))$ as a tensor C$^*$-category. Abstractly, these tensor C$^*$-categories are referred to as the \emph{Temperley-Lieb C$^*$-categories}.
Let now $\Gamma = (\Gamma,w,\sgn,i)$ be a $-2_q$-reciprocal random walk. Define $\mathcal{H}^{\Gamma}$ as the $\Gamma^{(0)}$-bigraded Hilbert space $l^2(\Gamma^{(1)})$, where the $\Gamma^{(0)}$-bigrading is given by \[\delta_e \in \Gru{\mathcal{H}^{\Gamma}}{s(e)}{t(e)}\] for the obvious Dirac functions. Note that, because $\Gamma$ has finite degree, $\mathcal{H}^{\Gamma}$ is \emph{row- and column finite-dimensional} (rcfd), i.e.~ $\oplus_{v\in \Gamma^{(0)}} \Gru{\mathcal{H}^{\Gamma}}{v}{w}$ (resp.~ $\oplus_{w\in \Gamma^{(0)}} \Gru{\mathcal{H}^{\Gamma}}{v}{w}$) is finite-dimensional for all $w$ (resp.~ all $v$).
Consider now $R_{\Gamma}$ as the (bounded) map \[R_{\Gamma}:l^2(\Gamma^{(0)})\rightarrow \mathcal{H}^{\Gamma}\underset{\Gamma^{(0)}}{\otimes} \mathcal{H}^{\Gamma}\] given by \begin{eqnarray*} R_{\Gamma} \delta_v &=& \sum_{e,s(e) = v} \sgn(e)\sqrt{w(e)}\delta_e \otimes \delta_{\bar{e}}.\end{eqnarray*} Then $R_{\Gamma}^*R_{\Gamma} = (|q|+|q|^{-1})\id$ and \[(R_{\Gamma}^*\underset{\Gamma^{(0)}}{\otimes} \id_{\mathcal{H}^{\Gamma}})(\id_{\mathcal{H}^{\Gamma}}\underset{\Gamma^{(0)}}{\otimes} R_{\Gamma}) = -\sgn(q)\id.\]
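Both identities can be verified directly on basis vectors; the following short computation is a sketch of this standard check. For $v,v'\in \Gamma^{(0)}$, the random walk property gives \[\langle R_{\Gamma}\delta_v,R_{\Gamma}\delta_{v'}\rangle = \delta_{v,v'}\sum_{s(e)=v}w(e) = \delta_{v,v'}\,|2_q| = \delta_{v,v'}(|q|+|q|^{-1}),\] while for an edge $e$ only the summand with $f=\bar{e}$ survives in \[(R_{\Gamma}^*\underset{\Gamma^{(0)}}{\otimes} \id)(\id\underset{\Gamma^{(0)}}{\otimes} R_{\Gamma})\delta_e = \sgn(e)\sgn(\bar{e})\sqrt{w(e)w(\bar{e})}\,\delta_e = \sgn(-2_q)\,\delta_e = -\sgn(q)\,\delta_e,\] using sign and weight reciprocality in the last two steps.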
Hence, by the universal property of $\mathrm{Rep}(SU_q(2))$ (\cite[Theorem 1.4]{DCY1}, based on \cite{Tur1,EtO1,Yam1,Pin2,Pin3}), we have a strongly monoidal $^*$-functor
\begin{equation}\label{EqForget} F_{\Gamma}: \mathrm{Rep}(SU_q(2)) \rightarrow {}^{\Gamma^{(0)}}\mathrm{Hilb}_{\rcf}^{\Gamma^{(0)}}\end{equation} into the tensor C$^*$-category of rcfd $\Gamma^{(0)}$-bigraded Hilbert spaces such that $F_{\Gamma}(\pi_{1/2}) = \mathcal{H}_{\Gamma}$ and $F_{\Gamma}(\mathscr{R}) = R_{\Gamma}$, with \[(\pi_{1/2},\mathscr{R},-\sgn(q)\mathscr{R})\] a solution for the conjugate equations for $\pi_{1/2}$. Up to equivalence, $F_{\Gamma}$ only depends upon the isomorphism class of $(\Gamma,w)$, and is independent of the chosen involution or sign structure. Conversely, any strong monoidal $^*$-functor from $\mathrm{Rep}(SU_q(2))$ into $\Gr{\mathrm{Hilb}}{I}{I}{}{\rcf}$ for some set $I$ arises in this way \cite{DCY2}.
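As a minimal executable sanity check of the two displayed identities for $R_{\Gamma}$, consider the degenerate toy walk with $q=-1$ (so $-2_q = 2$): one vertex and two loops exchanged by the involution, with all weights and signs equal to $1$. This toy example is our own assumption, chosen so that the bigrading is trivial and plain matrices suffice.

```python
import numpy as np

# Toy -2_q-reciprocal random walk with q = -1: one vertex, two loops
# e, ebar exchanged by the involution, w = 1 and sgn = +1 on both loops.
q = -1.0
H = 2                                     # H^Gamma = l^2({e, ebar}) = C^2

# R delta_v = sum_{s(f)=v} sgn(f) sqrt(w(f)) delta_f (x) delta_fbar
R = np.zeros((H * H, 1))                  # basis order: ee, e.ebar, ebar.e, ebar.ebar
R[1, 0] = 1.0                             # coefficient of e (x) ebar
R[2, 0] = 1.0                             # coefficient of ebar (x) e

I = np.eye(H)
assert np.allclose(R.T @ R, (abs(q) + 1/abs(q)) * np.eye(1))  # R* R = |q| + |q|^{-1}
snake = np.kron(R.T, I) @ np.kron(I, R)   # (R* (x) id)(id (x) R)
assert np.allclose(snake, -np.sign(q) * I)                    # = -sgn(q) id
print("conjugate equations verified")
```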
\subsection{Universal orthogonal partial compact quantum groups}
It follows from the previous subsection and the
Tannaka-Kre$\breve{\textrm{\i}}$n-Woronowicz duality of Theorem \ref{TheoTKPCQG} that for each reciprocal random walk on a graph $\Gamma$, one obtains a $\Gamma^{(0)}$-partial compact quantum group $\mathscr{G}$, and conversely every partial compact quantum group $\mathscr{G}$ with the fusion rules of $SU(2)$ arises in this way. Our first aim is to give a direct presentation of the associated algebras $A(\Gamma) = P(\mathscr{G})$ by generators and relations. We will write $\Gamma_{vw}\subseteq \Gamma^{(1)}$ for the set of edges with source $v$ and target $w$.
\begin{Theorem}\label{TheoGenRel} Let $0<|q|\leq 1$, and let $\Gamma = (\Gamma,w,\sgn,i)$ be a $-2_q$-reciprocal random walk. Let $A(\Gamma)$ be the total $^*$-algebra associated to the $\Gamma^{(0)}$-partial compact quantum group constructed from the fiber functor $F_{\Gamma}$ as in \eqref{EqForget}. Then $A(\Gamma)$ is the universal $^*$-algebra generated by a copy of the $^*$-algebra of finitely supported functions on $\Gamma^{(0)}\times \Gamma^{(0)}$ (with the Dirac functions written as $\UnitC{v}{w}$) and elements $(u_{e,f})_{e,f\in \Gamma^{(1)}}$ where $u_{e,f}\in \Gr{A(\Gamma)}{s(e)}{t(e)}{s(f)}{t(f)}$ and
\begin{eqnarray}
\label{EqUni1}\sum_{v\in \Gamma^{(0)}}\sum_{g\in \Gamma_{vw}} u_{g,e}^*u_{g,f} = \delta_{e,f}\mathbf{1}\Grru{w}{t(e)}, \qquad \forall w\in \Gamma^{(0)}, e,f\in \Gamma^{(1)},\\
\label{EqUni2}\sum_{w\in \Gamma^{(0)}} \sum_{g\in \Gamma_{vw}} u_{e,g}u_{f,g}^* = \delta_{e,f} \mathbf{1}\Grru{s(e)}{v},\qquad \forall v\in \Gamma^{(0)}, e,f\in \Gamma^{(1)},\\
\label{EqInt}u_{e,f}^* \;=\; \sgn(e)\sgn(f)\sqrt{\frac{w(f)}{w(e)}} u_{\bar{e},\bar{f}},\qquad \forall e,f\in \Gamma^{(1)}.
\end{eqnarray}
If moreover $v,w\in \Gamma^{(0)}$ and $e,f\in \Gamma^{(1)}$, we have \[\Delta_{vw}(u_{e,f}) = \underset{t(g) = w}{\sum_{s(g) = v}} u_{e,g}\otimes u_{g,f},\]
\[\varepsilon(u_{e,f}) = \delta_{e,f}\] and \[S(u_{e,f}) = u_{f,e}^*.\]
\end{Theorem}
Note that the sums in \eqref{EqUni1} and \eqref{EqUni2} are in fact finite, as $\Gamma$ has finite degree.
\begin{proof} Let $(\mathcal{H},V)$ be the generating unitary corepresentation of $A(\Gamma)$ on $\mathcal{H} = l^2(\Gamma^{(1)})$. Then $V$ decomposes into parts \[ \Gr{V}{k}{l}{m}{n} = \sum_{e,f} v_{e,f} \otimes e_{e,f} \in \Gr{A}{k}{l}{m}{n}\otimes B(\Gru{\mathcal{H}}{m}{n},\Gru{\mathcal{H}}{k}{l}),\] where the $e_{e,f}$ are elementary matrix coefficients and with the sum over all $e$ with $s(e)=k,t(e)=l$ and all $f$ with $s(f) = m, t(f)=n$. By construction $V$ defines a unitary corepresentation of $A(\Gamma)$, hence the relations \eqref{EqUni1} and \eqref{EqUni2} are satisfied for the $v_{e,f}$. Now as $R_{\Gamma}$ is an intertwiner between the trivial representation on $\mathbb{C}^{(\Gamma^{(0)})} = \oplus_{v\in \Gamma^{(0)}} \mathbb{C}$ and $V\Circtv{\Gamma^{(0)}} V$, we have for all $v\in \Gamma^{(0)}$ that \begin{equation}\label{EqMorR}\underset{t(f)=s(h),t(e)=s(g)}{\sum_{e,f,g,h\in \Gamma^{(1)}}} v_{e,f}v_{g,h}\otimes \left((e_{e,f}\otimes e_{g,h})\circ R_{\Gamma} \delta_v\right) = \sum_w \UnitC{w}{v}\otimes R_{\Gamma}\delta_v,\end{equation} hence
\[\underset{t(e)=s(g),s(k)=v}{\sum_{e,g,k}} \sgn(k)\sqrt{w(k)}\left( v_{e,k}v_{g,\bar{k}} \otimes \delta_e\otimes \delta_{g}\right) =\underset{s(k)=w}{\sum_{w,k}}\sgn(k)\sqrt{w(k)} \left(\UnitC{w}{v} \otimes \delta_k\otimes \delta_{\bar{k}}\right).\] Hence if $t(e) = s(g)=z$, we have \[\sum_{k,s(k)=v} \sgn(k)\sqrt{w(k)} v_{e,k}v_{g,\bar{k}} = \delta_{e,\bar{g}} \sgn(e)\sqrt{w(e)}\UnitC{s(e)}{v}.\] Multiplying to the left with $v_{e,l}^*$ and summing over all $e$ with $t(e) = z$, we see from \eqref{EqUni1} that also relation \eqref{EqInt} is satisfied. Hence the $v_{e,f}$ satisfy the universal relations in the statement of the theorem. The formulas for comultiplication, counit and antipode then follow immediately from the fact that $V$ is a unitary corepresentation.
Let us now a priori denote by $B(\Gamma)$ the $^*$-algebra determined by the relations \eqref{EqUni1},\eqref{EqUni2} and \eqref{EqInt} above, and write $\mathscr{B}(\Gamma)$ for the associated $\Gamma^{(0)}\times \Gamma^{(0)}$-partial $^*$-algebra induced by the local units $\UnitC{v}{w}$. Write $\Delta(\UnitC{v}{w}) = \sum_{z\in \Gamma^{(0)}} \UnitC{v}{z}\otimes \UnitC{z}{w}$ and \[\Delta(u_{e,f}) = \sum_{g\in \Gamma^{(1)}} u_{e,g}\otimes u_{g,f},\] which makes sense in $M(B(\Gamma)\otimes B(\Gamma))$ as the degree of $\Gamma$ is finite. Then we compute for $w\in \Gamma^{(0)}$ and $e,f\in \Gamma^{(1)}$ that \begin{eqnarray*} \sum_{v\in \Gamma^{(0)}}\sum_{g\in \Gamma_{vw}}\Delta(u_{g,e})^*\Delta(u_{g,f}) &=& \sum_{v\in \Gamma^{(0)}}\sum_{g\in \Gamma_{vw}} \sum_{h,k\in \Gamma^{(1)}} u_{g,h}^*u_{g,k}\otimes u_{h,e}^*u_{k,f}\\ &=& \sum_{h,k\in \Gamma^{(1)}} \delta_{h,k} \UnitC{w}{t(h)}\otimes u_{h,e}^*u_{k,f}\\ &=& \sum_{z\in \Gamma^{(0)}}\underset{t(h)=z}{\sum_{h\in \Gamma^{(1)}}} \UnitC{w}{z}\otimes u_{h,e}^*u_{h,f} \\ &=& \delta_{e,f} \sum_{z\in \Gamma^{(0)}} \UnitC{w}{z}\otimes \UnitC{z}{t(e)}\\ &=& \delta_{e,f} \Delta(\UnitC{w}{t(e)}).\end{eqnarray*} Similarly, the analogue of \eqref{EqUni2} holds for $\Delta(u_{e,f})$. As also \eqref{EqInt} holds trivially for $\Delta(u_{e,f})$, it follows that we can define a $^*$-algebra homomorphism \[\Delta:B(\Gamma)\rightarrow M(B(\Gamma)\otimes B(\Gamma))\] sending $u_{e,f}$ to $\Delta(u_{e,f})$ and $\UnitC{v}{w}$ to $\Delta(\UnitC{v}{w})$. Cutting down, we obtain maps \[\Delta_{vw}:\Gr{B(\Gamma)}{r}{s}{t}{z}\rightarrow \Gr{B(\Gamma)}{r}{s}{v}{w}\otimes \Gr{B(\Gamma)}{v}{w}{t}{z}\] which then satisfy the properties \ref{Propa}, \ref{Propd} and \ref{Prope} of Definition \ref{DefPartBiAlg}. Moreover, the $\Delta_{vw}$ are coassociative as they are coassociative on generators.
Let now $e_{v,w}$ be the matrix units for $l^2(\Gamma^{(0)})$. Then one verifies again directly from the defining relations of $B(\Gamma)$ that one can define a $^*$-homomorphism \[\widetilde{\varepsilon}: B(\Gamma)\rightarrow B(l^2(\Gamma^{(0)})),\quad \left\{\begin{array}{lll} \UnitC{v}{w}&\mapsto &\delta_{v,w}\, e_{v,v}\\ u_{e,f}&\mapsto& \delta_{e,f}\, e_{s(e),t(e)}\end{array}\right.\] We can hence define a map $\varepsilon: B(\Gamma)\rightarrow \mathbb{C}$ such that \[\widetilde{\varepsilon}(x) = \varepsilon(x) e_{v,w},\qquad \forall x\in \Gr{B(\Gamma)}{v}{w}{v}{w},\] and which is zero elsewhere. Clearly it satisfies the conditions \ref{Propb} and \ref{Propc} of Definition \ref{DefPartBiAlg}. As $\varepsilon$ satisfies the counit condition on generators, it follows by partial multiplicativity that it satisfies the counit condition on the whole of $B(\Gamma)$, i.e.~ $B(\Gamma)$ is a partial $^*$-bialgebra.
It is clear now that the $u_{e,f}$ define a unitary corepresentation $U$ of $B(\Gamma)$ on $\mathcal{H}^{\Gamma}$. Moreover, from \eqref{EqUni1} and \eqref{EqInt} we can deduce that $R_{\Gamma}: \mathbb{C}^{(\Gamma^{(0)})}\rightarrow \mathcal{H}^{\Gamma}\underset{\Gamma^{(0)}}{\otimes}\mathcal{H}^{\Gamma}$ is a morphism from $\mathbb{C}^{(\Gamma^{(0)})}$ to $U\Circtv{\Gamma^{(0)}} U$ in $\mathrm{Corep}_{\rcf,u}(\mathscr{B}(\Gamma))$, cf.~ \eqref{EqMorR}. From the universal property of $\mathrm{Rep}(SU_q(2))$, it then follows that we have a (unique and faithful) strongly monoidal $^*$-functor \[G^{\Gamma}: \mathrm{Rep}(SU_q(2)) \rightarrow \mathrm{Corep}_{\rcf,u}(\mathscr{B}(\Gamma))\] such that $G^{\Gamma}(\pi_{1/2}) = U$. On the other hand, as we have a $\Delta$-preserving $^*$-homomorphism $B(\Gamma)\rightarrow A(\Gamma)$ by the universal property of $\mathscr{B}(\Gamma)$, we have a strongly monoidal $^*$-functor $H^{\Gamma}: \mathrm{Corep}_{\rcf,u}(\mathscr{B}(\Gamma))\rightarrow \mathrm{Corep}_u(\mathscr{A}(\Gamma)) = \mathrm{Rep}(SU_q(2))$ which is inverse to $G^{\Gamma}$. Then since the commutation relations of $\mathscr{A}(\Gamma)$ are completely determined by the morphism spaces of $\mathrm{Rep}(SU_q(2))$, it follows that we have a $^*$-homomorphism $\mathscr{A}(\Gamma)\rightarrow \mathscr{B}(\Gamma)$ sending $v_{e,f}$ to $u_{e,f}$. This proves the theorem.
\end{proof}
\subsection{Dynamical quantum $SU(2)$ from the Podle\'{s} graph}
Let us now fix a $-2_q$-reciprocal random walk, and assume further
that there exists a finite set $T$ partitioning $\Gamma^{(1)} = \cup_a
\Gamma^{(1)}_a$ such that for each $a\in T$ and $v\in \Gamma^{(0)}$,
there exists a unique $e_a(v)\in \Gamma^{(1)}_a$ with source $v$. Write $av$ for the range of $e_a(v)$. Assume moreover that $T$ has an involution $a\mapsto \bar{a}$ such that $\overline{e_a(v)} = e_{\bar{a}}(av)$. Then for each $a$, the map $v\mapsto av$ is a bijection on $\Gamma^{(0)}$ with inverse $v\mapsto \bar{a}v$. In particular, also for each $w\in \Gamma^{(0)}$ there exists a unique $f_w(a) \in \Gamma^{(1)}_a$ with target $w$.
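For the Podle\'{s} graph of Example \ref{ExaGraphPod} this structure is explicit: one may take $T=\{+,-\}$ with $e_\pm(v)=(v,v\pm 1)$, $\pm v = v\pm 1$ and $\bar{\pm}=\mp$. The following quick sketch checks the compatibility conditions; the encoding of $T$ as $\{+1,-1\}$ is our own convention.

```python
# Podles graph: vertices Z, T = {+1, -1}, edge e_a(v) = (v, v + a),
# action a.v = v + a, involution abar = -a.
def e(a, v):                 # the unique edge in Gamma^(1)_a with source v
    return (v, v + a)

def bar(edge):               # edge involution (k, l) -> (l, k)
    return (edge[1], edge[0])

for a in (1, -1):
    for v in range(-5, 6):
        assert bar(e(a, v)) == e(-a, v + a)   # bar(e_a(v)) = e_{abar}(a v)
        assert (v + a) + (-a) == v            # v -> a v is inverted by v -> abar v
print("partition structure verified")
```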
Let us further write $w_a(v) = w(e_a(v))$ and $\sgn_a(v) =
\sgn(e_a(v))$. Let $A(\Gamma)$ be the total $^*$-algebra of the
associated partial compact quantum group. Using Theorem
\ref{TheoGenRel}, we have the following presentation of
$A(\Gamma)$. Let $B$ be the $^*$-algebra of finitely supported
functions on $\Gamma^{(0)}\times \Gamma^{(0)}$, whose Dirac functions
we write as $\UnitC{v}{w}$. Then $A(\Gamma)$ is generated by a copy of
$B$ and elements \[(u_{a,b})_{v,w} := u_{e_a(v),e_b(v)} \in
\Gr{A(\Gamma)}{v}{av}{w}{bw}\] for $a,b\in T$ and $v,w\in
\Gamma^{(0)}$ with defining relations \begin{eqnarray*} \sum_{a\in T}
(u_{a,b})_{\bar{a}v,w}^* (u_{a,c})_{\bar{a}v,z}&=& \delta_{w,z}
\delta_{b,c} \UnitC{v}{bw},\\ \sum_{a\in T} (u_{b,a})_{w,v}
(u_{c,a})_{z,v}^* &=& \delta_{b,c}\delta_{w,z} \UnitC{w}{v},\\
(u_{a,b})_{v,w}^* &=&
\frac{\sgn_b(w)\sqrt{w_b(w)}}{\sgn_a(v)\sqrt{w_a(v)}}(u_{\bar{a},\bar{b}})_{av,bw}.\end{eqnarray*}
Let us now consider $M(A(\Gamma))$, the multiplier algebra of $A(\Gamma)$. For a function $f$ on $\Gamma^{(0)}\times \Gamma^{(0)}$, write $f(\lambda,\rho) = \sum_{v,w} f(v,w)\UnitC{v}{w} \in M(A(\Gamma))$. Similarly, for a function $f$ on $\Gamma^{(0)}$ we write $f(\lambda) = \sum_{v,w} f(v)\UnitC{v}{w}$ and $f(\rho) = \sum_{v,w}f(w)\UnitC{v}{w}$. We then write for example $f(a\lambda,\rho)$ for the element corresponding to the function $(v,w)\mapsto f(av,w)$.
We can further form in $M(A(\Gamma))$ the elements $u_{a,b} =
\sum_{v,w} (u_{a,b})_{v,w}$. Then $u=(u_{a,b})$ is a unitary
$m\times m$ matrix for
$m=\#T$. Moreover, \begin{equation}\label{EqAdju}u_{a,b}^* =
u_{\bar{a},\bar{b}}\frac{\gamma_b(\rho)}{\gamma_a(\lambda)},\end{equation}
where $\gamma_a(v) = \sgn_a(v)\sqrt{w_a(v)}$. We then have the
following commutation relations between functions on
$\Gamma^{(0)}\times \Gamma^{(0)}$ and the entries of
$u$: \begin{equation}\label{EqGradu} f(\lambda,\rho)u_{a,b} =
u_{a,b}f(\bar{a}\lambda,\bar{b}\rho),\end{equation} where
$f(\bar{a}\lambda,\bar{b}\rho)$ is given by $(v,w) \mapsto f(\bar{a}v,\bar{b}w)$.
The coproduct is given by
$\Delta(u_{a,b}) = \Delta(1) \sum_c(u_{a,c}\otimes u_{c,b})$. Note
that the $^*$-algebra generated by the $u_{a,b}$ is no longer a weak
Hopf $^*$-algebra when $\Gamma^{(0)}$ is infinite, but it can be
turned into a Hopf algebroid.
\begin{Rem}
The weak multiplier Hopf algebra $A(\Gamma)$ is related to the free
orthogonal dynamical quantum groups introduced in
\cite{timmermann:free} as follows. Denote by $G$ the free group
generated by the elements of $T$ subject to the relation
$\bar{a}=a^{-1}$ for all $a\in T$. By assumption on $\Gamma$, the
formula $(af)(v):=f(\bar{a}v)$ defines a left action of $G$ on
$\mathrm{Fun}(\Gamma^{(0)})$. Denote by $C\subseteq \mathrm{Fun}(\Gamma^{(0)})$ the
unital subalgebra generated by all $\gamma_{a}$ and their inverses
and translates under $G$, write the
elements of $T \subseteq G$ as a tuple in the form
$\nabla=(a_{1},\bar{a_{1}},\ldots,a_{n},\bar{a_{n}})$, and define a
$\nabla\times\nabla$ matrix $F$ with values in $C$ by $F_{a,b} :=
\delta_{b,\bar{a}} \gamma_{a}$. Then the free orthogonal dynamical
quantum group $A_{\mathrm{o}}^{C}(\nabla,F,F)$ introduced in
\cite{timmermann:free} is the universal unital $*$-algebra generated
by a copy of $C\otimes C$ and the entries of a unitary $\nabla\times\nabla$-matrix
$v=(v_{a,b})$ satisfying
\begin{align*}
v_{a,b}(f \otimes g) &= (af\otimes bg) v_{a,b}, &
(aF_{a,\bar{a}}\otimes 1)v_{\bar{a},\bar{b}}^{*} &=
v_{a,b}(1\otimes F_{b,\bar{b}})
\end{align*}
for all $f,g\in C$ and $a,b\in \nabla$. The second equation
can be rewritten in the form
$v_{\bar{a},\bar{b}}^{*}=v_{a,b}(\gamma_{a}^{-1} \otimes
\gamma_{b})$. Comparing with
\eqref{EqAdju} and \eqref{EqGradu}, we see that there exists a
$*$-homomorphism
\begin{align*}
A^{C}_{\mathrm{o}}(\nabla,F,F) \to
M(A(\Gamma)), \quad
\begin{cases}
f\otimes g&
\mapsto f(\lambda)g(\rho), \\
v_{a,b} &\mapsto u_{\bar{a},\bar{b}}.
\end{cases}
\end{align*}
The two quantum groupoids are related by an
analogue of the unital base changes considered for dynamical quantum
groups in \cite[Proposition 2.1.12]{timmermann:free}. Indeed, Theorem
\ref{TheoGenRel} shows that $A(\Gamma)$ is the image of
$A^{C}_{\mathrm{o}}(\nabla,F,F)$ under a non-unital base change from
$C$ to $\mathrm{Fun}_{f}(\Gamma^{(0)})$ along the natural map $C \to
M(\mathrm{Fun}_{f}(\Gamma^{(0)}))$.
\end{Rem}
\begin{Exa}
As a particular example, consider the Podle\'{s} graph of Example
\ref{ExaGraphPod} at parameter $x\in \lbrack
0,\frac{1}{2}\rbrack$. Then one can take $T = \{+,-\}$ with the
non-trivial involution, and label the edges $(k,k+1)$ with $+$ and
the edges $(k+1,k)$ with $-$. Let us write \[F(k) = |q|^{-1}w_+(k) =
|q|^{-1}\frac{|q|^{x+k+1}+|q|^{-x-k-1}}{|q|^{x+k}+|q|^{-x-k}},\] and
further put\[\alpha =
\frac{F^{1/2}(\rho-1)}{F^{1/2}(\lambda-1)}u_{--},\qquad \beta =
\frac{1}{F^{1/2}(\lambda-1)}u_{-+}.\] Then the unitarity of
$(u_{\epsilon,\nu})_{\epsilon,\nu}$ together with \eqref{EqAdju} and
\eqref{EqGradu} are equivalent to the commutation
relations \begin{equation}\label{EqqCom} \alpha \beta =
qF(\rho-1)\beta\alpha \qquad \alpha\beta^* =
qF(\lambda)\beta^*\alpha\end{equation} \begin{equation}\label{EqDet}
\alpha\alpha^* +F(\lambda)\beta^*\beta = 1,\qquad
\alpha^*\alpha+q^{-2}F(\rho-1)^{-1}\beta^*\beta =
1,\end{equation}\begin{equation*} F(\rho-1)^{-1}\alpha\alpha^*
+\beta\beta^* = F(\lambda-1)^{-1},\qquad F(\lambda)\alpha^*\alpha
+q^{-2}\beta\beta^* =
F(\rho),\end{equation*} \begin{equation}\label{EqGrad}
f(\lambda)g(\rho)\alpha = \alpha f(\lambda+1)g(\rho+1),\qquad
f(\lambda)g(\rho)\beta = \beta
f(\lambda+1)g(\rho-1).\end{equation}
These are precisely the commutation relations for the dynamical
quantum $SU(2)$-group as in \cite[Definition 2.6]{KoR1}, except that
the precise value of $F$ has been changed by a shift in the
parameter domain by a complex constant. The (total) coproduct on
$A_x$ also agrees with the one on the dynamical quantum
$SU(2)$-group, namely \begin{eqnarray*} \Delta(\alpha) &=& \Delta(1)
(\alpha\otimes \alpha - q^{-1}\beta\otimes \beta^*),\\
\Delta(\beta) &=& \Delta(1)(\beta\otimes \alpha^* +\alpha\otimes
\beta)\end{eqnarray*} where $\Delta(1) = \sum_{k\in \mathbb{Z}}
\rho_k\otimes \lambda_k$.
\end{Exa}
\bibliographystyle{habbrv}
\section{Introduction}
A semigroup $P$ is left cancellative if $pq = ps$ implies that $q=s$, and C*-algebras associated to such semigroups are an active topic of research in operator algebras. Li's construction \cite{Li12} of a C*-algebra $C^*(P)$ from a left cancellative semigroup $P$ generalizes Nica's construction for quasi-lattice ordered semigroups \cite{Ni92} and encompasses a great deal of interesting C*-algebras, including the Cuntz algebras and the C*-algebra of the $ax+b$ semigroup, see \cite{Cu08}. Many semigroups of interest can be embedded into groups, and \cite{Li13} represents a comprehensive study of the C*-algebras of such semigroups. Another interesting class of semigroups (which has some overlap with the previous) is that of the {\em right LCM semigroups}, which are semigroups in which two principal right ideals are either disjoint or intersect in another principal right ideal. The paper \cite{BL14} considers the C*-algebras of such semigroups, and obtains many results about how the properties of $P$ influence the properties of $C^*(P)$ in this case.
We are concerned with boundary quotients of such algebras. Specifically, in \cite{BRRW14} the authors define a boundary quotient $\mathcal{Q}(P)$ of $C^*(P)$ \cite[Definition 5.1]{BRRW14} when $P$ is a right LCM semigroup, and this is the principal object of study in this paper. Quotients of this type are worth singling out because they frequently give examples of simple C*-algebras where the original would not (unless of course they are equal), and examples of simple C*-algebras are of interest to C*-algebra classification.
It turns out that this boundary quotient can be studied by using work of Exel \cite{Ex08} on inverse semigroup C*-algebras. An {\em inverse semigroup} is a semigroup $S$ such that for each $s\in S$ there is a unique $s^*\in S$ such that $ss^*s = s$ and $s^*ss^* = s^*$, and one can define a universal C*-algebra for representations of $S$, as in \cite{Pa99}. Further work of Norling \cite{No14} determined that for a left cancellative semigroup $P$, $C^*(P)$ is isomorphic to the universal C*-algebra of a certain inverse semigroup (denoted $\mathcal{S}$ in the sequel) obtained from $P$. In the paper \cite{Ex08} Exel discovered a natural quotient for Paterson's inverse semigroup algebra called the {\em tight C*-algebra} of an inverse semigroup. This C*-algebra is universal for so-called {\em tight} representations of the inverse semigroup; representations of this kind enforce a kind of nondegeneracy condition. This construction has been studied by many authors, see \cite{EGS12}, \cite{SM11}, \cite{EPSep14}, \cite{Ste14}. Our first main result, Theorem \ref{maintheorem}, states that the tight C*-algebra of $\mathcal{S}$ is isomorphic to the boundary quotient $\mathcal{Q}(P)$. This generalizes a combination of \cite[Corollary 6.4]{EP13} and \cite[Theorem 6.7]{BRRW14} from the case of self-similar groups, and in fact it was the desire to generalize this result to other types of semigroups which was the motivation for this work.
Both Paterson's and Exel's C*-algebras can be presented as the C*-algebras of certain \'etale groupoids, and so can be analyzed by using the many results concerning \'etale groupoids in the literature. There is a small difficulty in doing so however, because the groupoids which arise in this way can be non-Hausdorff, and a majority of the results in the literature about the structure of \'etale groupoid C*-algebras assumes the Hausdorff property. One condition which guarantees the Hausdorff property is right cancellativity (in addition to the left cancellativity already assumed), but one can weaken this a bit to obtain a condition on $P$ which is equivalent to the groupoid being Hausdorff, see Proposition \ref{hausdorff}. Here, we employ the results in \cite{EP14}, \cite{BCFS14}, and \cite{AD97} to find conditions on $P$ which guarantee that $\mathcal{Q}(P)$ is simple and purely infinite. Of specific use is \cite{EP14}, because that paper is concerned with \'etale groupoids arising from inverse semigroup actions. We note that in the recent work \cite{Ste14}, Steinberg independently comes to many of the same conclusions as \cite{EP14}, and many of the results we use from \cite{EP14} also appear in \cite{Ste14}, but throughout this article we will reference their appearance in \cite{EP14}.
This paper is organized as follows. Section \ref{background} recalls definitions and sets notation. In Section \ref{boundary}, we establish an isomorphism between $\mathcal{Q}(P)$ and the tight C*-algebra of an inverse semigroup, and in Section \ref{properties} we deduce properties of $\mathcal{Q}(P)$ by using this realization of it as a C*-algebra of an \'etale groupoid. In Section \ref{examplessection} we give some examples of right LCM semigroups from the literature, including some arising from Zappa-Sz\'ep products of semigroups and from self-similar groups. Finally, in Appendix \ref{coreappendix} we prove a small result about inverse semigroup actions which generalizes Proposition \ref{EPcore} and which may be of independent interest.
{\bf Acknowledgements}: We would like to thank Ruy Exel for many helpful and enlightening conversations about this work.
\section{Background}\label{background}
Let $P$ be a semigroup. We say that $P$ is {\em left cancellative} if $pq = ps$ implies that $q = s$ for all $p,q,$ and $s\in P$. A {\em right ideal} of $P$ is a set $X\subset P$ such that $XP = \{xp\mid x\in X, p\in P\}$ is a subset of $X$. If $p\in P$, then the set $pP = \{pq\mid q\in P\}$ is a right ideal, and any right ideal of this form is called a {\em principal right ideal}. An element of $pP$ is called a {\em right multiple} of $p$. All semigroups are assumed to be countable and discrete.
If $X$ is a right ideal of $P$, then for all $p\in P$ the sets
\[
pX = \{px\mid x\in X\}, \hspace{1cm}p^{-1}X = \{ y\in P\mid py\in X\}
\]
are also right ideals. We let $\mathcal{J}(P)$ denote the smallest set of right ideals which contains $P$ and $\varnothing$, is closed under intersections, and such that $X\in \mathcal{J}(P)$ and $p\in P$ implies that both $pX$ and $p^{-1}X$ are in $\mathcal{J}(P)$. Then $\mathcal{J}(P)$ is a semilattice under intersection, and is called the semilattice of {\em constructible ideals}. For a left cancellative semigroup $P$, Li constructs a C*-algebra $C^*(P)$.
\begin{defn}\label{LiDef}
Let $P$ be a left cancellative semigroup, and let $\mathcal{J}(P)$ denote the set of constructible ideals of $P$. Then $C^*(P)$ is defined to be the universal C*-algebra generated by a set of isometries $\{ v_p\mid p\in P\}$ and a set of projections $\{e_X\mid X\in \mathcal{J}(P)\}$
subject to the following:
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\item[(L1)]$v_pv_q = v_{pq}$ for all $p, q\in P$,
\item[(L2)]$v_pe_Xv^*_{p} = e_{pX}$ for all $p\in P$ and $X\in \mathcal{J}(P)$,
\item[(L3)]$e_P = 1$ and $e_\varnothing = 0$, and
\item[(L4)]$e_Xe_Y = e_{X\cap Y}$ for all $X, Y\in \mathcal{J}(P)$.
\end{enumerate}
\end{defn}
It is clear that $\mathcal{J}(P)$ contains every principal right ideal. In this paper we consider the following class of semigroups for which $\mathcal{J}(P)$ is equal to the set of principal right ideals.
\begin{defn}
A semigroup $P$ is called a {\em right LCM semigroup} if it is left cancellative and the intersection of any two principal right ideals is either empty or another principal right ideal.
\end{defn}
Semigroups of this type have gone by other names in the literature. In \cite{La99}, Lawson considers the dual definition (i.e., right cancellative semigroups such that two principal right ideals are either disjoint or intersect in another principal right ideal) and calls these {\em CRM monoids} (named for Clifford, Reilly and McAlister). In other works such as \cite{No14}, such semigroups are said to satisfy {\em Clifford's condition}.
Let $P$ be a right LCM semigroup with identity, and let $U(P)$ denote the invertible elements of $P$ (invertible elements of $P$ are also sometimes called the {\em units} of $P$). Then if we have $p, q\in P$ such that $pP\cap qP = rP$, we see that every element of $P$ which is a right multiple of both $p$ and $q$ is also a right multiple of $r$, and we say that $r$ is a {\em right least common multiple} (or {\em right LCM}) of $p$ and $q$. If $rP = sP$, then a short calculation shows that there must exist $u\in U(P)$ such that $ru = s$. Hence, if $r$ is a right LCM of $p$ and $q$ then so is $ru$ for all $u\in U(P)$. Also, if $pP \cap qP = rP$, then there exist $p', q'\in P$ such that $pp' = qq' = r$. This right least common multiple property is the source of the terminology ``right LCM''.
Let $P$ be a right LCM semigroup and suppose that we have $p, q\in P$ such that $pP\cap qP = rP$ with $pp' = qq' = r$. Then it is straightforward to show that $p^{-1}qP = p'P$, and so the set of principal right ideals is in fact equal to the set of constructible ideals.
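As a concrete sanity check (an illustration of ours, not an example singled out in the text), consider the multiplicative semigroup $P = (\mathbb{Z}_{>0},\cdot)$. Here $pP$ is the set of multiples of $p$, so $pP\cap qP = \mathrm{lcm}(p,q)P$ and $P$ is a right LCM semigroup with $U(P) = \{1\}$. The following sketch checks the identities $pp' = qq' = r$ and $p^{-1}(qP) = p'P$ numerically:

```python
from math import gcd

def lcm(p, q):
    """A right LCM of p and q in P = (Z_{>0}, *): here pP ∩ qP = lcm(p, q)P."""
    return p * q // gcd(p, q)

p, q = 6, 10
r = lcm(p, q)
p_, q_ = r // p, r // q        # the elements p', q' with pp' = qq' = r
assert p * p_ == r and q * q_ == r

# Every common right multiple of p and q is a right multiple of r, and
# conversely, on an initial segment of P:
bound = 300
common = [n for n in range(1, bound) if n % p == 0 and n % q == 0]
assert common == [n for n in range(1, bound) if n % r == 0]

# p^{-1}(qP) = p'P, so constructible ideals stay principal:
lhs = {n for n in range(1, bound) if (p * n) % q == 0}     # p^{-1}(qP)
rhs = {p_ * k for k in range(1, bound) if p_ * k < bound}  # p'P
assert lhs == rhs
```

Of course this $P$ is commutative, so it only illustrates the bookkeeping, not the general noncommutative phenomena.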
We shall be concerned with groupoids constructed from semigroups. Recall that a {\em groupoid} consists of a set $\mathcal{G}$ together with a subset $\mathcal{G}^{(2)} \subset \mathcal{G} \times \mathcal{G}$, called the set of composable pairs, a product map $\mathcal{G}^{(2)} \to \mathcal{G}$ with $(\gamma, \eta)\mapsto \gamma\eta$, and an inverse map from $\mathcal{G}$ to $\mathcal{G}$ with $\gamma \mapsto \gamma^{-1}$ such that
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\item $(\gamma^{-1})^{-1} = \gamma$ for all $\gamma\in \mathcal{G}$,
\item If $(\gamma, \eta), (\eta, \nu)\in \mathcal{G}^{(2)}$, then $(\gamma\eta,\nu), (\gamma, \eta\nu)\in \mathcal{G}^{(2)}$ and $(\gamma\eta)\nu = \gamma(\eta\nu)$,
\item $(\gamma, \gamma^{-1}), (\gamma^{-1},\gamma)\in \mathcal{G}^{(2)}$, and $\gamma^{-1}\gamma\eta = \eta$, $\xi\gamma\gamma^{-1} = \xi$ for all $\eta, \xi$ with $(\gamma, \eta), (\xi,\gamma) \in \mathcal{G}^{(2)}$.
\end{enumerate}
The set of {\em units} of $\mathcal{G}$ is the subset $\mathcal{G}^{(0)}$ of elements of the form $\gamma\gamma^{-1}$. The maps $r: \mathcal{G}\to \mathcal{G}^{(0)}$ and $d:\mathcal{G}\to \mathcal{G}^{(0)}$ given by
\[
r(\gamma) = \gamma\gamma^{-1}, \hspace{1cm} d(\gamma) = \gamma^{-1}\gamma
\]
are called the {\em range} and {\em source} maps respectively. It is straightforward to check that $(\gamma, \eta)\in \mathcal{G}^{(2)}$ is equivalent to
$r(\eta) = d(\gamma)$.
One thinks of a groupoid $\mathcal{G}$ as a set of ``arrows'' between elements of $\mathcal{G}^{(0)}$. Given $x\in \mathcal{G}^{(0)}$, let
\[
\mathcal{G}^x := r^{-1}(x), \hspace{1cm} \mathcal{G}_x := d^{-1}(x), \hspace{1cm} \mathcal{G}^{x}_x := d^{-1}(x)\cap r^{-1}(x),
\]
which are thought of, respectively, as the arrows ending at $x$, the arrows beginning at $x$, and all the arrows both ending and beginning at $x$. For all $x\in \mathcal{G}^{(0)}$, $\mathcal{G}^{x}_x$ is a group with identity $x$ when given the operations inherited from $\mathcal{G}$, and is called the {\em isotropy group} of $x$. The set Iso$(\mathcal{G}) = \cup_{x\in \mathcal{G}^{(0)}}\mathcal{G}_x^x$ is called the {\em isotropy group bundle} of $\mathcal{G}$. The {\em orbit} of $x\in \mathcal{G}^{(0)}$ is the set $\mathcal{G}(x):= r(\mathcal{G}_x) = d(\mathcal{G}^x)$.
A {\em topological groupoid} is a groupoid which is a topological space where the inverse and
product maps are continuous, where we are considering $\mathcal{G}^{(2)}$ with the product topology inherited from $\mathcal{G}\times\mathcal{G}$. Two topological groupoids are said to be {\em isomorphic} if there is a homeomorphism between them which preserves the inverse and product operations. A topological groupoid $\mathcal{G}$ is called {\em \'etale} if it is locally compact, second countable, and the maps $r$ and $d$ are local homeomorphisms. These properties imply that $\mathcal{G}^{(0)}$ is open in $\mathcal{G}$ and that for all $x\in \mathcal{G}^{(0)}$ the spaces $\mathcal{G}^x$ and $\mathcal{G}_x$ are discrete.
For subsets $S, T\subset \mathcal{G}$, let $ST = \{\gamma\eta\mid \gamma\in S, \eta\in T, d(\gamma) = r(\eta)\}$. A subset $S\subset \mathcal{G}$ of a topological groupoid is called a {\em bisection} if the restrictions of $r$ and $d$ to $S$ are both injective. In an \'etale groupoid $\mathcal{G}$, the collection of open bisections forms a basis for the topology of $\mathcal{G}$, cf. \cite[Proposition 3.5]{Ex08}. If $S$ and $T$ are bisections in an \'etale groupoid, then so is $ST$.
A subset $U\subset \mathcal{G}^{(0)}$ is called {\em invariant} if for all $\gamma\in \mathcal{G}$, $r(\gamma)\in U$ implies that $d(\gamma)\in U$. A topological groupoid is called {\em minimal} if the only nonempty open invariant subset of $\mathcal{G}^{(0)}$ is $\mathcal{G}^{(0)}$. We say that $\mathcal{G}$ is {\em topologically principal} if the set of $x\in \mathcal{G}^{(0)}$ for which $\mathcal{G}^x_x = \{x\}$ is dense. We will say that $\mathcal{G}$ is {\em essentially principal} if the interior of Iso$(\mathcal{G})$ is equal to $\mathcal{G}^{(0)}$, and we will say that $\mathcal{G}$ is {\em effective} if the interior of Iso$(\mathcal{G})\setminus \mathcal{G}^{(0)}$ is empty. When $\mathcal{G}$ is a locally compact, second countable, Hausdorff, \'etale groupoid, then
\[
\mathcal{G} \text{ topologically principal } \Leftrightarrow \mathcal{G} \text{ essentially principal } \Leftrightarrow \mathcal{G} \text{ effective},
\]
see \cite[Proposition 3.1]{R80} and \cite[Lemma 3.1]{BCFS14}.
In a construction from \cite{R80}, to an \'etale groupoid $\mathcal{G}$ one can associate C*-algebras $C^*(\mathcal{G})$ and $C^*_r(\mathcal{G})$, called the {\em C*-algebra of $\mathcal{G}$} and the {\em reduced C*-algebra of $\mathcal{G}$} respectively. To build these C*-algebras one starts with $C_c(\mathcal{G})$, the continuous compactly supported functions on $\mathcal{G}$, which becomes a complex $*$-algebra when given the convolution product and involution given by
\[
f\star g(\gamma) = \sum_{\gamma_1\gamma_2 = \gamma}f(\gamma_1)g(\gamma_2)\hspace{1cm} f^*(\gamma) = \overline{f(\gamma^{-1})}.
\]
We note that the fact that $\mathcal{G}$ is \'etale implies that the sum above is finite. In this work we are not concerned with the specifics, but both $C^*(\mathcal{G})$ and $C^*_r(\mathcal{G})$ are completions of $C_c(\mathcal{G})$ in suitable norms, and $C^*_r(\mathcal{G})$ is always a quotient of $C^*(\mathcal{G})$. For more details, the interested reader is directed to \cite{R80}. In this paper we construct \'etale groupoids from certain semigroups, and most of our results concern the case when $C^*(\mathcal{G}) = C^*_r(\mathcal{G})$. This happens for instance when $\mathcal{G}$ is {\em amenable}, see \cite{AR00}.
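For a finite discrete groupoid every function is compactly supported, and the convolution formula can be computed directly. For the pair groupoid on $n$ points (arrows $(x,y)$ with $(x,y)(y,z) = (x,z)$ and $(x,y)^{-1} = (y,x)$), convolution is exactly matrix multiplication, so $C^*(\mathcal{G}) = C^*_r(\mathcal{G}) \cong M_n(\mathbb{C})$. The sketch below (an illustration of ours, not from the text) verifies this identification and the anti-multiplicativity of the involution:

```python
# The pair groupoid G on n points: a function f in C_c(G) is an
# n x n table f[x][y], one complex value per arrow (x, y).
n = 3

def convolve(f, g):
    """(f * g)(gamma) = sum over gamma1 gamma2 = gamma of f(gamma1) g(gamma2).
    The arrow (x, z) factors exactly as (x, y)(y, z) for y = 0, ..., n-1."""
    return [[sum(f[x][y] * g[y][z] for y in range(n)) for z in range(n)]
            for x in range(n)]

def star(f):
    """f^*(gamma) = conjugate of f(gamma^{-1}), i.e. conjugate-transpose."""
    return [[f[y][x].conjugate() for y in range(n)] for x in range(n)]

def matmul(a, b):
    return [[sum(a[x][y] * b[y][z] for y in range(n)) for z in range(n)]
            for x in range(n)]

f = [[complex(x + 1, y) for y in range(n)] for x in range(n)]
g = [[complex(x, 2 * y + 1) for y in range(n)] for x in range(n)]

# Convolution on the pair groupoid is matrix multiplication,
# and (f * g)^* = g^* * f^* as in any *-algebra.
assert convolve(f, g) == matmul(f, g)
assert star(convolve(f, g)) == convolve(star(g), star(f))
```

This toy case has trivial topology; the analytic content of $C^*(\mathcal{G})$ only appears for infinite groupoids.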
We will use the following result to deduce properties of the C*-algebras we construct.
\begin{theo}\cite[Theorem 5.1]{BCFS14}\label{groupoidsimple}
Let $\mathcal{G}$ be a second countable locally compact Hausdorff \'etale groupoid. Then $C^*(\mathcal{G})$ is simple if and only if the following conditions are satisfied:
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\item $C^*(\mathcal{G}) = C^*_r(\mathcal{G})$,
\item $\mathcal{G}$ is topologically principal, and
\item $\mathcal{G}$ is minimal.
\end{enumerate}
\end{theo}
An \'etale groupoid is called {\em locally contracting} if for every nonempty open subset $U\subset \mathcal{G}^{(0)}$, there exists an open subset $V\subset U$ and an open bisection $S\subset \mathcal{G}$ such that $\overline{V}\subset S^{-1}S$ and $SVS^{-1}\subsetneq V$. By \cite[Proposition 2.4]{AD97}, if $C^*_r(\mathcal{G})$ is simple and $\mathcal{G}$ is locally contracting, then $C^*_r(\mathcal{G})$ is purely infinite. We assume knowledge of C*-algebras, but for the unfamiliar an excellent reference for the undefined terms above is \cite{Dav}.
\section{The boundary $\mathcal{Q}(P)$ as the tight C*-algebra of an inverse semigroup}\label{boundary}
Recall that a semigroup $S$ is called {\em regular} if for all $s\in S$ there exists an element $t\in S$ such that $tst = t$ and $sts = s$. Such an element $t$ is often called an {\em inverse} of $s$, though even if $S$ has an identity we need not have $ts = 1$. However, we always have $(ts)^2 =tsts = ts$, that is to say that $ts$ is idempotent. We let $E(S) = \{ e\in S\mid e^2=e\}$ denote the set of all idempotent elements of $S$. A regular semigroup is called an {\em inverse semigroup} if each element $s$ has a unique inverse, denoted $s^*$. It is a fact that a regular semigroup is an inverse semigroup if and only if elements of $E(S)$ commute, and we note that in this case $E(S)$ is closed under multiplication.
\begin{ex}\label{PCM}
We give an important and fundamental example of an inverse semigroup. Let $X$ be a set. Consider
\[
\mathcal{I}(X) = \{f:U\to V\mid U, V\subset X, f\text{ is bijective}\}.
\]
Then $\mathcal{I}(X)$ is an inverse semigroup when given the operation of function composition on the largest domain possible. The inverse of an element $f: U\to V$ is the inverse function $f^* = f^{-1}:V\to U$. One sees that the identity function is the identity for this inverse semigroup, and more generally every idempotent is the identity on some subset. If we have $f, g\in \mathcal{I}(X)$ such that the range of $f$ does not intersect the domain of $g$, then the composition $g\circ f$ on the largest domain possible is equal to the empty function, which acts as a zero element in $\mathcal{I}(X)$. It is an important fact in semigroup theory that every inverse semigroup can be embedded into $\mathcal{I}(X)$ for some set $X$ -- this is known as the Wagner-Preston theorem.
This example demonstrates that many inverse semigroups naturally contain a zero element. Because of this, the two algebraic objects we consider in this paper, namely ``right LCM semigroups'' and ``inverse semigroups'', should be thought of as quite different types of objects, as left cancellativity in a right LCM semigroup eliminates the possibility of a zero element in nontrivial cases.
\end{ex}
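The operations of Example \ref{PCM} are easy to model directly, encoding a partial bijection by a dictionary from its domain to its range. A minimal sketch (an illustration of ours, not part of the text):

```python
def compose(g, f):
    """g o f on the largest possible domain: those x in dom(f) with f(x) in dom(g)."""
    return {x: g[f[x]] for x in f if f[x] in g}

def inverse(f):
    """The inverse partial bijection f^* = f^{-1}."""
    return {v: k for k, v in f.items()}

f = {0: 3, 1: 4}        # a bijection {0, 1} -> {3, 4}
g = {4: 0, 5: 1}        # a bijection {4, 5} -> {0, 1}

# Composition is only defined where ran(f) meets dom(g):
assert compose(g, f) == {1: 0}

# f^* f is the identity on dom(f): an idempotent of I(X).
assert compose(inverse(f), f) == {0: 0, 1: 1}

# If ran(f) misses dom(g) entirely, the composition is the empty
# function -- the zero element of I(X).
h = {7: 8}
assert compose(h, f) == {}
```

Here injectivity of the dictionaries is assumed, so every composite is again a partial bijection.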
Now, given a right LCM semigroup $P$ we will construct an inverse semigroup $\mathcal{S}$. We define an equivalence relation $\sim$ on $P\times P$ by saying that $(p, q)\sim (a, b)$ if and only if there exists $u\in U(P)$ such that $au = p$ and $bu = q$. In other words, the equivalence class of $(p, q)$ consists of all elements of the form $(pu, qu)$ with $u\in U(P)$. Denote by $[p, q]$ the equivalence class of $(p, q)$. The following proposition is essentially \cite[Theorem 3]{La99}, though Lawson presents the dual construction for right cancellative left LCM semigroups.
\begin{prop}
Let $P$ be a right LCM semigroup with identity $1_P$, and let
\begin{equation}\label{Sdef}
\mathcal{S} := \{ [p, q]\mid p, q\in P\}\cup \{0\}.
\end{equation} Then $\mathcal{S}$ becomes an inverse semigroup with identity $1_\mathcal{S} = [1_P, 1_P]$ when given the operation
\[
[a,b][c,d] = \begin{cases}[ab', dc'] & \text{if } cP\cap bP = rP \text{ and } cc' = bb' = r \\ 0 & \text{if }cP\cap bP = \varnothing\end{cases}
\]
and $s0 = 0s = 0$ for all $s\in \mathcal{S}$. In this case, we have that $[a,b]^* = [b,a]$ and
\[
E(\mathcal{S}) = \{[a, a]\mid a\in P\}\cup \{0\}.
\]
\end{prop}
\begin{proof}
Before we start, we note that although this proof is straightforward it is long and tedious. However, it may be valuable if one wishes to get a feel for right LCM semigroups.
We first show that the multiplication above is well-defined. Suppose that $[a,b],[c,d]\in \mathcal{S}$. Then if $u, v\in U(P)$, we know that $buP = bP$ and $cvP = cP$, and so $[a,b][c,d] \neq 0$ if and only if $[au, bu][cv, dv]\neq 0$. So, suppose that $[a,b][c,d] = [ab',dc']$, where $bP\cap cP = rP$ with $bb' = cc' = r$. Then $buP\cap cvP = bP\cap cP = rP$, and so there exist $b'', c''\in P$ such that $bub'' = cvc'' = r$. Because $bb' = cc' = r$ and $P$ is left cancellative, we have that $ub'' = b'$ and $vc'' = c'$. Hence
\[
[au,bu][cv,dv] = [aub'', dvc''] = [ab', dc'] = [a,b][c,d]
\]
and so the multiplication is well-defined.
We now show that the given multiplication is associative. Take $[a,b], [c,d], [e,f]\in \mathcal{S}$ and first suppose that $[a,b]\Big([c,d][e,f]\Big)\neq 0$. Then there must be $r_1, d_1, e_1\in P$ such that $dP\cap eP = r_1P$, $dd_1 = ee_1 = r_1$, and $[c,d][e,f] = [cd_1, fe_1]$. Since we assumed that $[a,b][cd_1, fe_1] \neq 0$, we now must have that there exist $r_2, b_1, c_1\in P$ such that $bP\cap cd_1P = r_2P$, $bb_1 = cd_1c_1 = r_2$, and
\[
[a,b]\Big([c,d][e,f]\Big) = [ab_1, fe_1c_1] \neq 0.
\]
Since $bP\cap cd_1P\neq \varnothing$, we must have that $bP\cap cP\neq \varnothing$, so there exists $r_3, b_2, c_2\in P$ such that $bP \cap cP = r_3P$, $bb_2 = cc_2 = r_3$, and $[a, b][c,d] = [ab_2, dc_2]$. In addition, we have that $r_2P\subset r_3P$, and so there exists $q\in P$ such that $r_2 = r_3q$. Now we have that
\begin{equation}\label{d1c1c2q}
cd_1c_1 = r_2 = r_3q = cc_2q \hspace{0.5cm} \Rightarrow \hspace{0.5cm} d_1c_1 = c_2q
\end{equation}
and so we have
\[
ee_1c_1 = dd_1c_1 = dc_2q \hspace{0.5cm} \Rightarrow \hspace{0.5cm} dc_2P\cap eP \neq \varnothing.
\]
Hence $\Big([a,b][c,d]\Big)[e,f]=[ab_2, dc_2][e,f]\neq 0$. Furthermore, there exist $r_4, d_2, e_2\in P$ such that $dc_2P\cap eP = r_4P$, $dc_2d_2 = ee_2 = r_4$, and
\[
\Big([a,b][c,d]\Big)[e,f] = [ab_2, dc_2][e,f] = [ab_2d_2, fe_2].
\]
In this case, we have that $r_4P\subset r_1P$, and so there exists $p\in P$ such that $r_4 = r_1p$. Also, similar to \eqref{d1c1c2q}, we have that $c_2d_2 = d_1p$. If instead we started by insisting that $\Big([a,b][c,d]\Big)[e,f]\neq 0$, then a similar argument gives that $[a,b]\Big([c,d][e,f]\Big) \neq 0$. Thus to show associativity we can assume both products are nonzero and that we have elements $b_1, b_2, c_1, c_2, d_1, d_2, e_1, e_2, r_1, r_2, r_3, r_4, q, p\in P$ such that
\[
\begin{array}{l}
dP\cap eP = r_1P\\
bP \cap cd_1P = r_2P\\
bP \cap cP = r_3P\\
dc_2P\cap eP = r_4P
\end{array}
\hspace{1cm}
\begin{array}{l}
dd_1 = ee_1 = r_1\\
bb_1 = cd_1c_1 = r_2\\
bb_2 = cc_2 = r_3\\
dc_2d_2 = ee_2 = r_4
\end{array}
\hspace{1cm}
\begin{array}{l}
r_2 = r_3q\\
r_4 = r_1p\\
d_1c_1 = c_2q\\
c_2d_2 = d_1p
\end{array}
\]
Now, we have that $r_1c_1 = ee_1c_1 = dd_1c_1 = dc_2q$, and so $r_1c_1\in dc_2P\cap eP = r_4P$, meaning that there exists $k_1\in P$ such that $r_1c_1 = r_4k_1 = r_1pk_1$, and because $P$ is left cancellative we have that $pk_1 = c_1$.
Similarly, $r_3d_2 = bb_2d_2 = cc_2d_2 = cd_1p$, and so $r_3d_2\in bP\cap cd_1P = r_2P$. This means that there exists $k_2\in P$ such that $r_3d_2 = r_2k_2 = r_3qk_2$, and since $P$ is left cancellative we have that $d_2 = qk_2$.
We claim that $k_1k_2 = k_2k_1 = 1_P$, and hence $k_1, k_2\in U(P)$. We calculate
\[
d_1pk_1k_2 = d_1c_1k_2 = c_2qk_2 = c_2d_2 = d_1p \hspace{0.5cm} \Rightarrow \hspace{0.5cm} k_1k_2 = 1_P,
\]
\[
c_2qk_2k_1 = c_2d_2k_1 = d_1pk_1 = d_1c_1 = c_2q \hspace{0.5cm} \Rightarrow \hspace{0.5cm} k_2k_1 = 1_P.
\]
Now, $bb_1 = r_2 = r_3q = bb_2q$, and so $b_1 = b_2q$. Similarly, $e_2 = e_1p$. Thus,
\[
[a,b]\Big([c,d][e,f]\Big) = [ab_1, fe_1c_1] = [ab_2q, fe_1c_1],
\]
\[
\Big([a,b][c,d]\Big)[e,f] = [ab_2d_2, fe_2] =[ab_2d_2, fe_1p],
\]
and
\[
ab_2d_2k_1 = ab_2qk_2k_1 = ab_2q, \hspace{0.5cm}fe_1pk_1 = fe_1c_1.
\]
Hence $[a,b]\Big([c,d][e,f]\Big) = \Big([a,b][c,d]\Big)[e,f]$ as required.
Suppose now that $[p,q][p,q] = [p,q]$. Then $pP\cap qP = rP$ and there exist $p', q'$ such that $pp'= qq' = r$, and the multiplication rule gives $[p,q][p,q] = [pq',qp']$. Idempotency thus yields $u\in U(P)$ with $pu = pq'$ and $qu = qp'$, so by left cancellativity $q' = u = p'$, and hence $[p,q] = [pq',qp'] = [pp',qq'] = [r,r]$. Hence there exists $u\in U(P)$ such that $p= ru = q$. Hence the only idempotent elements of $\mathcal{S}$ are elements of the form $[p,p]$, together with the 0 element. Now suppose that we have $p, q\in P$ such that $[p, p][q,q] \neq 0$. Then $pP\cap qP = rP$ for some $r\in P$ and there exist $p', q'$ such that $pp'=qq' = r$, and $[p,p][q,q] = [pp',qq'] = [r,r]$. It is clear that this is equal to $[q,q][p,p]$, and that $[p, p][q,q]=0$ if and only if $[q,q][p,p] = 0$. Hence the idempotents of $\mathcal{S}$ commute.
It is obvious that $[p,q][q,p][p,q] = [p,q]$ and $[q,p][p,q][q,p] = [q,p]$. Hence each element of $\mathcal{S}$ has an inverse (0 is the inverse of 0), and so $\mathcal{S}$ is regular. As above, the idempotents of $\mathcal{S}$ commute, hence $\mathcal{S}$ is an inverse semigroup.
\end{proof}
There is another formulation of the semigroup $\mathcal{S}$ above, considered for example in \cite{No14}. Consider $\mathcal{I}(P)$ as in Example \ref{PCM}. Since $P$ is assumed to be left cancellative, the map $\lambda_p:P\to pP$ defined by $\lambda_p(q) = pq$ is a bijection, and hence an element of $\mathcal{I}(P)$. Let $\mathcal{I}_l(P)$ denote the inverse semigroup generated by the elements $\{\lambda_p\}_{p\in P}$ inside $\mathcal{I}(P)$. This is sometimes called the {\em left inverse hull} of $P$. Then the map from $\mathcal{S}$ to $\mathcal{I}_l(P)$ given by $[p,q]\mapsto \lambda_p\lambda_q^{-1}$ is an isomorphism.
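The multiplication of $\mathcal{S}$ can also be tested numerically. The sketch below (our illustration, for the concrete choice $P = (\mathbb{Z}_{>0},\cdot)$, where $U(P)=\{1\}$ so a class $[p,q]$ is just the pair $(p,q)$, and two principal ideals always intersect) implements the product from the proposition above and spot-checks associativity and regularity:

```python
from math import gcd
from itertools import product

def mult(s, t):
    """The product [a,b][c,d] in S for P = (Z_{>0}, *).  Since U(P) = {1},
    classes [p, q] are plain pairs; the zero of S is represented by None
    (it never arises for this P, since bP ∩ cP is never empty)."""
    if s is None or t is None:
        return None
    a, b = s
    c, d = t
    r = b * c // gcd(b, c)     # bP ∩ cP = rP with r = lcm(b, c)
    b_, c_ = r // b, r // c    # bb' = cc' = r
    return (a * b_, d * c_)

# Spot-check associativity on a small range of classes.
elems = list(product(range(1, 5), repeat=2))
for s, t, u in product(elems, repeat=3):
    assert mult(mult(s, t), u) == mult(s, mult(t, u))

# [a,b]^* = [b,a] and [a,b][b,a][a,b] = [a,b]:
s = (6, 10)
assert mult(mult(s, (10, 6)), s) == s
```

This only exercises the commutative case, but the bookkeeping with $b'$ and $c'$ is the same as in the general proof.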
The main result of this section is an isomorphism between two C*-algebras, denoted in the sequel $\mathcal{Q}(P)$ and $C^*_{\text{tight}}(\mathcal{S})$. We begin by defining $\mathcal{Q}(P)$. A finite set $F\subset P$ is called a {\em foundation set} if for all $p\in P$ there exists $f\in F$ such that $fP\cap pP \neq \varnothing$. The following definition is \cite[Definition 5.1]{BRRW14}.
\begin{defn}Let $P$ be a right LCM semigroup. The {\em boundary quotient} of $C^*(P)$, denoted $\mathcal{Q}(P)$, is the universal C*-algebra generated by sets $\{ v_p\mid p\in P\}$ and $\{e_X\mid X\in \mathcal{J}(P)\}$ subject to the relations (L1)--(L4) in Definition \ref{LiDef} and
\[
\prod_{p\in F} (1-e_{pP}) = 0\text{ for all foundation sets }F\subset P.
\]
\end{defn}
Now that we have defined $\mathcal{Q}(P)$, we define the second algebra which concerns us. Let $A$ be a C*-algebra and let $S$ be an inverse semigroup with zero. Then a {\em representation} of $S$ is a map $\pi: S\to A$ such that for all $s, t\in S$ we have $\pi(st) = \pi(s)\pi(t)$, $\pi(s^*) = \pi(s)^*$, and $\pi(0) = 0$. The {\em universal C*-algebra of $S$}, considered in \cite{Pa99} and denoted $C^*_u(S)$, is the universal C*-algebra generated by one element for each element of $S$ such that the standard map $\pi_u: S\to C^*_u(S)$ is a representation. Note that this implies that $\pi_u(s)$ is a partial isometry for each $s\in S$.
Let $S$ be an inverse semigroup, let $\pi: S\to A$ be a representation, and let $D_\pi$ denote the C*-subalgebra of $A$ generated by $\pi(E(S))$. Since $E(S)$ is commutative, $D_\pi$ must be a commutative C*-algebra. The set
\[
\mathscr{B}_\pi = \{e\in D_\pi\mid e^2 = e\}
\]
is a Boolean algebra when given the operations
\[
e \wedge f = ef, \hspace{0.5cm} e\vee f = e+f - ef,\hspace{0.5cm} \neg e = 1-e.
\]
We will recall a subclass of representations defined in \cite{Ex08}. Let $S$ be an inverse semigroup and let $F\subset Z\subset E(S)$. We say that $F$ is a {\em cover} of $Z$ if for every nonzero $z\in Z$ there is $f\in F$ such that $fz \neq 0$. If $x\in E(S)$ and $F$ is a cover for $\{y\in E(S)\mid yx = y\}$, then we say that $F$ is a cover of $x$. For finite sets $X, Y\subset E(S)$, let
\[
E(S)^{X, Y} = \{e\in E(S)\mid ex = e \text{ for all } x\in X\text{ and }ey = 0\text{ for all } y\in Y\}
\]
A representation $\pi: S\to A$ is called {\em tight} if for every pair of finite sets $X, Y\subset E(S)$ and every finite cover $Z$ of $E(S)^{X,Y}$, we have
\[
\bigvee_{z\in Z}\pi(z) = \prod_{x\in X}\pi(x)\prod_{y\in Y}(1-\pi(y)).
\]
The {\em tight C*-algebra of $S$}, denoted $C^*_{\text{tight}}(S)$, is the universal C*-algebra generated by one element for each element of $S$ subject to the relations saying that the standard map $\pi_t: S\to C^*_{\text{tight}}(S)$ is a tight representation.
\begin{lem}\label{starnotcalc}Let $P$ be a right LCM semigroup, and let $\{ v_p\mid p\in P\}$ and $\{e_X\mid X\in \mathcal{J}(P)\}$ be as in Definition \ref{LiDef}. Then for all $p,q\in P$ we have
\[
v_p^*v_q = \begin{cases}v_{p'}v_{q'}^*&\text{if }pP\cap qP = rP\text{ and }r = pp' = qq'\\0&\text{if }pP\cap qP = \varnothing.\end{cases}
\]
\end{lem}
\begin{proof}
Suppose that $pP\cap qP = rP\text{ and }r = pp' = qq'$. Then
\begin{eqnarray*}
v_p^*v_q & =& (v_p^*e_{pP})(e_{qP}v_q) = v_p^*e_{rP}v_q\\
&=& v_p^*v_rv_r^*v_q = v_p^*(v_pv_{p'})(v_qv_{q'})^*v_q = (v_p^*v_p)v_{p'}v_{q'}^*(v_q^*v_q) = v_{p'}v_{q'}^*.
\end{eqnarray*}
The second equality above shows that $v_p^*v_q = 0$ if $pP\cap qP = \varnothing$.
\end{proof}
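Lemma \ref{starnotcalc} can be spot-checked in a concrete representation. For $P = (\mathbb{Z}_{>0},\cdot)$ one may represent $v_p$ on $\ell^2(P)$ by $v_pe_k = e_{pk}$, so that $v_p^*e_m = e_{m/p}$ when $p\mid m$ and $0$ otherwise (this particular representation is our illustration and is not discussed in the text). Tracking basis indices, the identity $v_p^*v_q = v_{p'}v_{q'}^*$ becomes an elementary divisibility statement:

```python
from math import gcd

# Track the action of words in the v's on a basis vector e_k by its
# index k; None means the vector has been mapped to 0.

def v(p, k):
    """v_p e_k = e_{pk}."""
    return None if k is None else p * k

def v_star(p, k):
    """v_p^* e_m = e_{m/p} if p | m, else 0."""
    if k is None or k % p != 0:
        return None
    return k // p

p, q = 6, 10
r = p * q // gcd(p, q)       # pP ∩ qP = rP
p_, q_ = r // p, r // q      # pp' = qq' = r

# v_p^* v_q = v_{p'} v_{q'}^* on every basis vector e_k:
for k in range(1, 200):
    assert v_star(p, v(q, k)) == v(p_, v_star(q_, k))
```

Unwinding the assertion: $p \mid qk$ iff $q' = p/\gcd(p,q)$ divides $k$, and in that case $qk/p = p'k/q'$, which is exactly the content of the lemma in this model.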
\begin{lem}\label{L1}
Let $P$ be a right LCM semigroup with identity, and let $\mathcal{S}$ be as in \eqref{Sdef}. Then the map $\pi:\mathcal{S} \to \mathcal{Q}(P)$ defined by
\[
\pi([p,q]) = v_pv_q^*, \hspace{1cm} \pi(0) = 0
\]
is a tight representation of $\mathcal{S}$.
\end{lem}
\begin{proof}
It is easy to see from Lemma \ref{starnotcalc} and \eqref{Sdef} that the map $\pi$ above is a representation of $\mathcal{S}$. Now, suppose that $F$ is a foundation set. Then, by de Morgan's laws in a Boolean algebra we have
\[
0 = \prod_{f\in F}(1-e_{fP}) = \bigwedge_{f\in F}(\neg \pi([f,f])) = \neg\left(\bigvee_{f\in F} \pi([f,f])\right) = 1-\bigvee_{f\in F} \pi([f,f])
\]
and so $\bigvee_{f\in F} \pi([f,f]) = 1$. Hence by \cite[Proposition 11.8]{Ex08}, to show that $\pi$ is tight we need only check that for every $[p,p]\in E(\mathcal{S})$ and every finite cover $Z$ of $[p,p]$, we have the equality
\[
\bigvee_{z\in Z}\pi(z) = e_{pP}.
\]
So, take $p\in P$ and suppose that $Z$ is a finite cover of $[p,p]$; below we abuse notation and write $z\in P$ for the element with $z = [z,z]\in Z$. For $Z$ to be a finite cover of $[p,p]$, we must have that for all $z\in Z$, $zP \subset pP$, and whenever we have $q\in P$ such that $qP\subset pP$, there exists $z\in Z$ such that $qP\cap zP \neq \varnothing$. The first condition implies that for all $z\in Z$, there exists $a_z\in P$ such that $z = pa_z$. We claim that $\{a_z\mid z\in Z\}$ is a foundation set. Indeed, for every $q\in P$, we have that $pqP\subset pP$, and so there exists $z\in Z$ such that $zP\cap pqP = pa_zP \cap pqP\neq \varnothing$, and so $a_zP\cap qP \neq \varnothing$. Hence $1 = \bigvee_{z\in Z}e_{a_zP}$ by the defining relations of $\mathcal{Q}(P)$. Hence we have
\begin{eqnarray*}
e_{pP} &=& v_p1v_p^*\\
&=& v_p\left(\bigvee_{z\in Z}e_{a_zP}\right)v_p^*\\
&=& \bigvee_{z\in Z}v_pe_{a_zP}v_p^*\\
&=& \bigvee_{z\in Z}e_{pa_zP}\\
&=& \bigvee_{z\in Z}\pi(z).
\end{eqnarray*}
\end{proof}
\begin{lem}\label{L2}
Let $P$ be a right LCM semigroup with identity, let $\mathcal{S}$ be as in \eqref{Sdef}, and let $\pi$ be any tight representation of $\mathcal{S}$. Then for every foundation set $F\subset P$,
\[
\prod_{f\in F}(1- \pi([f,f])) = 0.
\]
\end{lem}
\begin{proof}
Let $F\subset P$ be a foundation set. Again, by de Morgan's laws in a Boolean algebra, we have
\[
\prod_{f\in F}(1- \pi([f,f])) = \bigwedge_{f\in F}(\neg \pi([f,f])) = \neg(\bigvee_{f\in F} \pi([f,f])) = 1 - \bigvee_{f\in F} \pi([f,f]).
\]
Hence we will be done if we can show that $\bigvee_{f\in F} \pi([f,f]) = 1$. Let $X = \{1_\mathcal{S}\}$ and $Y= \varnothing$. Then
\[
E(\mathcal{S})^{X, Y} = \{ e\in E(\mathcal{S}) \mid e1_\mathcal{S} = e\} = E(\mathcal{S})
\]
and since $F$ is a foundation set, $Z = \{ [f, f]\mid f\in F\}$ is a finite cover for $E(\mathcal{S})$. Thus we have
\begin{eqnarray*}
\bigvee_{z\in Z}\pi(z) &=& \prod_{x\in X}\pi(x)\prod_{y\in Y}(1-\pi(y))\\
\Rightarrow \hspace{0.5cm} \bigvee_{f\in F}\pi([f,f]) & = & \pi(1_\mathcal{S})\\
&=& 1.
\end{eqnarray*}
\end{proof}
The above two lemmas combine to give the main result of this section.
\begin{theo}\label{maintheorem}
Let $P$ be a right LCM semigroup with identity, and let $\mathcal{S}$ be as in \eqref{Sdef}. Then there is an isomorphism $\Phi: \mathcal{Q}(P)\to C^*_{\text{tight}}(\mathcal{S})$ such that $\Phi(v_pv_q^*) = \pi_t([p,q])$ for all $p,q\in P$.
\end{theo}
\begin{proof}
By Lemma \ref{L1} and the fact that $C^*_{\text{tight}}(\mathcal{S})$ is universal for tight representations of $\mathcal{S}$, there exists a $*$-homomorphism $\Phi_\pi: C^*_{\text{tight}}(\mathcal{S}) \to \mathcal{Q}(P)$ such that $\Phi_\pi(\pi_t([p, q])) = v_pv_q^*$ for all $p, q\in P$. Conversely, by Lemma \ref{L2} and the universal property of $\mathcal{Q}(P)$, there exists a $*$-homomorphism $\Phi: \mathcal{Q}(P)\to C^*_{\text{tight}}(\mathcal{S})$ such that $\Phi(v_pv_q^*) = \pi_t([p,q])$. Since both composites fix the respective generating sets, $\Phi_\pi\circ\Phi$ is the identity on $\mathcal{Q}(P)$ and $\Phi\circ\Phi_\pi$ is the identity on $C^*_{\text{tight}}(\mathcal{S})$, and so $\Phi$ is an isomorphism.
\end{proof}
\section{Properties of $\mathcal{Q}(P)$}\label{properties}
One of the consequences of Theorem \ref{maintheorem} is that $\mathcal{Q}(P)$ is isomorphic to the C*-algebra of an \'etale groupoid, and we may therefore study $\mathcal{Q}(P)$ by studying the groupoid.
\subsection{$\mathcal{Q}(P)$ as a groupoid C*-algebra}\label{groupoidsubsection}
We now review the construction of the tight groupoid of an inverse semigroup. For more, the interested reader is directed to \cite{Ex08}.
Let $S$ be an inverse semigroup. There is a natural partial order on $S$ given by $s \leqslant t$ if and only if $s = ts^*s$. If $e, f\in E(S)$, then $e \leqslant f$ if and only if $ef = e$. This partial order is best understood in the context of the inverse semigroup $\mathcal{I}(X)$ -- here we have $\varphi \leqslant \psi$ if and only if $\psi$ extends $\varphi$ as a function.
In this order, each pair $e, f\in E(S)$ has a unique greatest lower bound, namely their product $ef$. Hence, with the order above $E(S)$ is a semilattice. If $S$ has an identity $1_S$, then it is the unique maximal element of $E(S)$, and if $S$ has a zero element it is the unique minimal element of $E(S)$.
A {\em filter} in $E(S)$ is a {\em proper} subset $\xi\subset E(S)$ which is {\em downwards directed} in the sense that $e, f\in \xi$ implies that $ef\in \xi$, and {\em upwards closed} in the sense that $e\in \xi$, $f\in E(S)$ and $e\leqslant f$ together imply that $f\in \xi$. If a subset $\xi\subset E(S)$ is proper and downwards directed, it is called a {\em filter base}, and the set
\[
\overline{\xi} = \{e\in E(S)\mid f\leqslant e\text{ for some }f \in \xi\},
\]
called the {\em upwards closure} of $\xi$, is a filter. A filter is called an {\em ultrafilter} if it is not properly contained in another filter. Ultrafilters always exist by Zorn's Lemma.
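To make the filter axioms concrete, the following Python sketch (our own toy illustration, not taken from the paper) enumerates by brute force the filters and ultrafilters of a small finite meet-semilattice: the divisors of $12$ ordered by divisibility, where the meet of two elements is their greatest common divisor.

```python
from itertools import chain, combinations
from math import gcd

# Toy finite meet-semilattice (illustration only): divisors of 12 with
# e <= f iff e divides f, so that meet(e, f) = gcd(e, f).
E = [1, 2, 3, 4, 6, 12]

def is_filter(xi):
    s = set(xi)
    if not s or s == set(E):          # filters are nonempty and proper
        return False
    meet_closed = all(gcd(e, f) in s for e in s for f in s)
    up_closed = all(f in s for e in s for f in E if f % e == 0)
    return meet_closed and up_closed

subsets = chain.from_iterable(combinations(E, k) for k in range(1, len(E)))
filters = [frozenset(x) for x in subsets if is_filter(x)]
# ultrafilters: filters not properly contained in another filter
ultra = [F for F in filters if not any(F < G for G in filters)]

assert len(filters) == 5              # the principal filters over 2, 3, 4, 6, 12
assert sorted(sorted(F) for F in ultra) == [[2, 4, 6, 12], [3, 6, 12]]
```

In a finite semilattice every filter is principal (take the meet of all its elements), so the enumeration recovers exactly the principal filters not generated by the minimum, with the two maximal ones as ultrafilters.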
We let $\widehat E_0(S)$ denote the set of filters in $E(S)$. This set has a natural topology given by seeing it as a subspace of $\{0,1\}^{E(S)}$ with the product topology. There is a convenient basis for this topology: for finite sets $X, Y\subset E(S)$, let
\[
U(X,Y) = \{ \xi \in \widehat E_0(S)\mid x\in \xi\text{ for all }x\in X, y\notin \xi \text{ for all }y\in Y\}.
\]
These sets are open and closed and generate the subspace topology on $\widehat E_0(S)$ as $X$ and $Y$ range over all finite subsets of $E(S)$. Let $\widehat E_{\infty}(S)$ denote the subspace of ultrafilters. We shall denote by $\widehat E_{\text{tight}}(S)$ the closure of $\widehat E_{\infty}(S)$ in $\widehat E_0(S)$ and call this the space of {\em tight} filters.
If $X$ is a topological space and $S$ is an inverse semigroup, recall that an {\em action} of $S$ on $X$ is a pair $(\{D_e\}_{e\in E(S)},\{\theta_s\}_{s\in S})$ such that each $D_e\subset X$ is open, the union of the $D_e$ coincides with $X$, each map $\theta_s: D_{s^*s}\to D_{ss^*}$ is continuous and bijective, and for all $s, t\in S$ we have $\theta_s\circ \theta_t = \theta_{st}$, where composition is on the largest domain possible. These properties imply that $\theta_{s^*} = \theta_s^{-1}$ and so each $\theta_s$ is actually a homeomorphism.
There is a canonical way to construct an \'etale groupoid from an inverse semigroup action. Let $\theta = (\{D_e\}_{e\in E(S)},\{\theta_s\}_{s\in S})$ be an action of an inverse semigroup $S$ on a space $X$, and let
\[
S\times_\theta X : = \{ (s, x)\in S\times X\mid x\in D_{s^*s}\}.
\]
For $(s, x), (t, y)\in S\times_\theta X$ we write $(s,x)\sim (t, y)$ if $x = y$ and there exists $e\in E(S)$ such that $x\in D_e$ and $se = te$. It is straightforward to check that $\sim$ is an equivalence relation. We write $[s, x]$ for the equivalence class of $(s, x)$ and we let $\mathcal{G}(S, X, \theta)$ denote the set of all such equivalence classes. This set becomes a groupoid when given the operations
\[
[s,x]^{-1} = [s^*, \theta_s(x)], \hspace{0.5cm} r([s,x]) = \theta_s(x), \hspace{0.5cm} d([s,x]) = x, \hspace{0.5cm} [t, \theta_s(x)][s,x] = [ts, x].
\]
For $s\in S$ and $U$ an open subset of $D_{s^*s}$, let
\[
\Theta(s, U) =\{[s, x]\mid x\in U\}.
\]
As $s$ and $U$ vary, these sets form a basis for an \'etale topology on $\mathcal{G}(S, X, \theta)$; with this topology $\mathcal{G}(S, X, \theta)$ is called the {\em groupoid of germs} of the action $\theta$. In this topology, $\Theta(s, U)$ is an open bisection, and if $U$ is in addition closed (resp. compact), $\Theta(s, U)$ is a clopen (resp. compact open) bisection. It is easy to see that the orbit of a point $x\in X$ under the groupoid of germs is the set $\{\theta_s(x)\mid s\in S\}$.
An inverse semigroup acts naturally on $\widehat E_{\text{tight}}(S)$. Let $D_e = \{\xi\in \widehat E_{\text{tight}}(S)\mid e\in \xi\}$, and define $\theta_s: D_{s^*s}\to D_{ss^*}$ by
\[
\theta_s(\xi) = \overline{s\xi s^*} = \overline{\{ses^*\mid e\in \xi\}}.
\]
The groupoid of germs of this action is denoted
\[
\mathcal{G}_{\text{tight}}(S) := \mathcal{G}(S, \widehat E_{\text{tight}}(S), \theta)
\]
and is called the {\em tight groupoid} of $S$. The C*-algebra of $\mathcal{G}_{\text{tight}}(S)$ is naturally isomorphic to $C^*_{\text{tight}}(S)$; in particular the map $\pi:S \to C^*(\mathcal{G}_{\text{tight}}(S))$ given by $\pi(s) = \chi_{\Theta(s, D_{s^*s})}$ is a tight representation.
If $P$ is a right LCM semigroup and $\mathcal{S}$ is as in \eqref{Sdef}, then it is easy to see that $\mathcal{J}(P)$ and $E(\mathcal{S})$ are isomorphic as semilattices, with the isomorphism being the map $pP\mapsto [p,p]$. The spaces of filters and ultrafilters in $\mathcal{J}(P)$ were considered in previous studies of C*-algebras associated to $P$, though filters were termed {\em directed} and {\em hereditary} subsets of $\mathcal{J}(P)$ while the ultrafilters were called {\em maximal} directed hereditary subsets. In what follows, we will consider the elements of $\widehat E_{\text{tight}}(\mathcal{S})$ as tight filters in $\mathcal{J}(P)$, and will shorten $D_{[p,p]}$ to $D_p$ (noting that $D_p = D_{pu}$ for all $u\in U(P)$). We then have
\[
\theta_{[p,q]}: D_q\to D_p, \qquad \theta_{[p,q]}(\xi) = \{p(q^{-1}rP)\mid rP\in \xi\},
\]
for any $p, q\in P$ and $\xi\in \widehat E_{\text{tight}}(\mathcal{S})$ with $qP\in \xi$.
\subsection{Simplicity of $\mathcal{Q}(P)$}\label{simplicitysubsection}
We now characterize when $\mathcal{Q}(P)$ is simple using the fact that it is isomorphic to the C*-algebra of the \'etale groupoid $\mathcal{G}_{\text{tight}}(\mathcal{S})$. To use the characterization of Theorem \ref{groupoidsimple} (which is \cite[Theorem 5.1]{BCFS14}), we need conditions which guarantee that $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is Hausdorff, minimal, and topologically principal. We begin with the Hausdorff property.
As noted in \cite[\S 6]{Ex08}, $\mathcal{G}_{\text{tight}}(S)$ is Hausdorff if $S$ is {\em E*-unitary}, that is, for all $s\in S$ and $e\in E(S)\setminus \{0\}$, $e \leqslant s$ implies that $s\in E(S)$. Norling notes in \cite[Corollary 3.24]{No14} that if $P$ is a right LCM semigroup with identity and $\mathcal{S}$ is as in \eqref{Sdef}, then $P$ is cancellative if and only if $\mathcal{S}$ is E*-unitary. Thus if $P$ is cancellative, $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is Hausdorff. However, by \cite[Theorem 3.16]{EP14} we can do a little bit better. We are more precise below, but what we prove is essentially that $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is Hausdorff if and only if the counterexamples to right cancellativity in $P$ have a ``finite cover'' in some sense.
For $p, q\in P$, let $P_{p, q} = \{b\in P\mid pb = qb\}$. If $P_{p, q}$ is nonempty then it is a right ideal of $P$, and in this case we say that $p$ and $q$ {\em meet}. We introduce the following condition that $P$ may satisfy.
\begin{enumerate}
\item[(H)]For every $p, q\in P$ which meet, there is a finite set $F\subset P$ such that $pf = qf$ for all $f\in F$ and whenever we have $b\in P$ such that $pb = qb$, there is an $f\in F$ such that $fP\cap bP \neq \varnothing$.
\end{enumerate}
One sees that (H) is weaker than right cancellativity, since if $P$ is right cancellative $p$ only meets $q$ when $p=q$, and in this case $P_{p, q} = P$ and the finite set $F = \{1_P\}$ verifies (H).
\begin{prop}\label{hausdorff}
Let $P$ be a right LCM semigroup with identity, and let $\mathcal{S}$ be as in \eqref{Sdef}. Then $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is Hausdorff if and only if $P$ satisfies condition (H).
\end{prop}
\begin{proof}
We shall show that condition (H) is equivalent to the set
\[
J_{[p,q]} = \{rP\mid[r,r]\leqslant [p,q]\}
\]
either being empty or having a finite cover for all $p,q\in P$. If we do this, then the conclusion will follow from \cite[Theorem 3.16]{EP14}. Notice that if we have $p, q, r\in P$ such that $[r,r]\leqslant [p,q]$, then $[p,q][r,r] = [r,r]$. If $rP \cap qP = kP$ and $rr'=qq' = k$, then we obtain that $[pq', rr'] = [r,r]$ implying that $r'\in U(P)$. This means that $rP = kP$, and so we may assume (perhaps by rechoosing $q'$) that $[pq', r] = [r,r]$, and so $pq' = r= qq'$. Thus, for each element $rP\in J_{[p,q]}$, there is an element $p_r:= q'$ such that $pp_r = r = qp_r$.
First, assume that for all $p, q\in P$ the set $J_{[p,q]}$ is empty or has a finite cover. Suppose that $p, q\in P$ meet, that is, there exists $b\in P$ such that $pb = qb$. Then
\[
[p,q][qb,qb] = [pb,qb] = [qb, qb]
\]
and so $qbP= pbP\in J_{[p,q]}$, meaning that $ J_{[p,q]}$ is not empty. Hence there is a finite set $F\subset P$ such that $fP\in J_{[p,q]}$ for all $f\in F$ and for all $rP\in J_{[p,q]}$ there exists $f\in F$ such that $rP\cap fP \neq \varnothing$. By the above, there exists $p_f\in P$ such that $pp_f = f = qp_f$. We now see that the finite set $\{p_f\}_{f\in F}$ verifies (H), because if we have $d$ such that $pd = qd$, there is $f\in F$ such that $fP\cap pdP \neq \varnothing$, which implies that $p_fP\cap dP \neq \varnothing$.
Conversely, suppose $P$ satisfies condition (H). If $p, q$ do not meet, then the above discussion shows that $J_{[p,q]}$ is empty. If $p,q$ do meet, let $F$ be the finite set guaranteed by (H), and consider the finite set
\[
pF = qF = \{pf\mid f\in F\}.
\]
If $rP\in J_{[p,q]}$, then again by the above there exists $r'$ such that $pr' = qr' = r$. So, there exists $f\in F$ such that $fP\cap r'P \neq \varnothing$, which implies that $pfP\cap pr'P = pfP\cap rP\neq \varnothing$ and we are done.
\end{proof}
Now that we have addressed when $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is Hausdorff, we turn to minimality.
\begin{lem}\label{minimallem}Let $P$ be a right LCM semigroup with identity, and let $\mathcal{S}$ be as in \eqref{Sdef}. Then $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is minimal.
\end{lem}
\begin{proof}
We will show that for every ultrafilter $\xi$ and open set $U(X,Y) \subset \widehat E_{\text{tight}}(\mathcal{S})$, there is a $[p, q]\in\mathcal{S}$ such that $\theta_{[p,q]}(\xi)\in U(X,Y)$. The set $U(X,Y)$ is open, so it contains an ultrafilter $\eta$ which must have the property that $xP\in \eta$ for all $x\in X$ and, for all $y\in Y$, there exists $p_y\in P$ such that $p_yP\in \eta$ and $p_yP\cap yP = \varnothing$. Because $\eta$ is closed under intersection, there must be $r\in P$ such that
\[
\left(\bigcap_{x\in X}xP \right)\bigcap \left(\bigcap_{y\in Y}p_yP\right) = rP.
\]
Now, $\zeta: = \theta_{[r, 1_P]}(\xi)$ is an ultrafilter which contains $rP$. Since $\zeta$ is upwards closed, $xP, p_yP\in \zeta$ for all $x\in X, y\in Y$, and since $p_yP\cap yP = \varnothing$ and $\zeta$ is closed under intersection, no $yP$ can lie in $\zeta$; hence $\zeta\in U(X,Y)$. Thus, the orbit of every ultrafilter is dense. Each nonempty open set contains an ultrafilter, so the only nonempty open invariant subset of $\widehat E_{\text{tight}}$ is $\widehat E_{\text{tight}}$, and so $\mathcal{G}(\mathcal{S}, \widehat E_{\text{tight}}, \theta)$ is minimal.
\end{proof}
Lastly, we discuss conditions which guarantee that $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is topologically principal. We use the following concepts from \cite{EP14}. For an action $(\{D_e\}_{e\in E(S)}, \{\alpha_s\}_{s\in S})$ of an inverse semigroup $S$ on a locally compact Hausdorff space $X$, and $s\in S$, let
\[
F_s = \{ x\in X\mid \alpha_s(x) = x\}
\]
and call this the set of {\em fixed points} for $s$. Also let
\begin{eqnarray}
TF_s &=& \{ x\in X \mid \text{there exists } e\in E(S)\text{ such that } 0\neq e \leqslant s\text{ and }x\in D_e\}\nonumber\\
&=& \bigcup_{e\leqslant s}D_e \label{TFsopen}
\end{eqnarray}
and call this the set of {\em trivially fixed points} for $s$. From \cite[Theorem 3.15]{EP14}, the groupoid of germs $\mathcal{G}(S, X, \alpha)$ is Hausdorff if and only if $TF_s$ is closed in $D_{s^*s}$ for all $s\in S$.
\begin{defn}
An action $(\{D_e\}_{e\in E(S)}, \{\alpha_s\}_{s\in S})$ of an inverse semigroup $S$ on a locally compact Hausdorff space $X$ is said to be {\em topologically free} if the interior of $F_s$ is contained in $TF_s$ for all $s\in S$.
\end{defn}
We note that if $x$ is trivially fixed by some $s$ with $e\leqslant s$ and $x\in D_e$, we have $\alpha_s(x) = \alpha_s(\alpha_e(x)) = \alpha_{se}(x) = \alpha_e(x) = x$, so $x$ is fixed, that is to say that
\[
TF_s\subset F_s \hspace{1cm} \text{for all } s\in S.
\]
Also, by \eqref{TFsopen}, $TF_s$ is open and so is contained in the interior of $F_s$. Hence stating that $\alpha$ is topologically free is equivalent to saying that $TF_s = \mathring{F}_s$ for all $s\in S$.
\begin{theo}\cite[Theorem 4.7]{EP14}
An action $(\{D_e\}_{e\in E(S)}, \{\alpha_s\}_{s\in S})$ of an inverse semigroup $S$ on a locally compact Hausdorff space $X$ is topologically free if and only if $\mathcal{G}(S, X, \alpha)$ is essentially principal.
\end{theo}
We now show that we can characterize when the canonical action of $\mathcal{S}$ on $\widehat E_{\text{tight}}(\mathcal{S})$ is topologically free by considering the behaviour of a subsemigroup of $P$ which generalizes one originally considered in \cite{CL07}.
\begin{prop}\label{coreprop}
Let $P$ be a right LCM semigroup with identity. Then the set
\begin{equation}\label{core}
P_0 := \{p\in P\mid pP\cap qP\neq \varnothing\text{ for all }q\in P\}
\end{equation}
is a subsemigroup of $P$ which contains the identity. Furthermore,
\begin{enumerate}
\item $pq\in P_0$ implies that $p, q\in P_0$, and
\item$p, q\in P_0$ and $pP\cap qP = rP$ implies that $r\in P_0$.
\end{enumerate}
\end{prop}
\begin{proof}
The details of this proof are almost identical to that of \cite[Lemma 5.3]{CL07}. For instance, take $p, q\in P_0$, and $r\in P$. A short calculation shows that $pqP\cap rP = p(qP\cap p^{-1}(pP\cap rP))$, and since $p, q\in P_0$ this must be nonempty, whence $pq\in P_0$.
\end{proof}
\begin{defn}\label{coredef}\cite[Definition 5.4]{CL07} Let $P$ be a right LCM semigroup with identity. Then the subsemigroup $P_0\subset P$ from \eqref{core} is called the {\em core} of $P$.
\end{defn}
The subsemigroup $P_0$ was defined in \cite{CL07} when $P$ is quasi-lattice ordered, though it still makes sense in our context. We note that for all $p\in P_0$, the singleton $\{p\}$ is a foundation set, and so $v_p$ is a unitary in $\mathcal{Q}(P)$. We also note that $U(P)\subset P_0$, though this inclusion may be proper.\footnote{We mention in passing that a cancellative semigroup $P$ for which $P_0 = P$ is called an {\em Ore semigroup}. We do not know if this a reason for the terminology of \cite{CL07}, but in light of this perhaps we should call $P_0$ the cOre of $P$.}
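For a concrete example, the core can be computed directly from \eqref{core} on a truncation. The Python sketch below (our own illustration, not from the paper) checks that in the free monoid on $\{0,1\}$, where $pP\cap qP\neq\varnothing$ exactly when one of $p, q$ is a prefix of the other, only the empty word lies in the core.

```python
from itertools import product

# Truncated core computation for the free monoid on {0, 1}
# (illustration only): p is in the core iff pX* meets qX* for every q,
# which here means p is prefix-comparable with every word.
words = [''.join(w) for n in range(4) for w in product('01', repeat=n)]
comparable = lambda p, q: p.startswith(q) or q.startswith(p)
core = [p for p in words if all(comparable(p, q) for q in words)]

assert core == ['']   # only the empty word survives
```

This matches the description of $X^*_0$ in the free semigroup example below; for a semigroup such as $(\mathbb{N},+)$, where any two principal ideals intersect, the same test would return every element.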
Now let
\[
\mathcal{S}_0 := \{[p,q]\in \mathcal{S}\mid p,q\in P_0\}.
\]
We will also call this the {\em core} of $\mathcal{S}$. For $a, b, c, d\in P_0$, we have $[a,b][c,d] = [ab', dc']$, where $bP\cap cP = rP$ and $bb' = cc' = r$; by Proposition \ref{coreprop}, $r\in P_0$, and hence $b', c'\in P_0$, so that $ab', dc'\in P_0$. Thus, $\mathcal{S}_0$ is an inverse subsemigroup of $\mathcal{S}$. We note that $\mathcal{S}_0$ does not contain the zero element.
\begin{prop}\label{EPcore}\footnote{In preparation of this work we noticed that this proposition may be proved for more general inverse semigroup actions, see Appendix \ref{coreappendix}.}
Let $P$ be a right LCM semigroup with identity which satisfies condition (H), and let $\mathcal{S}$ be as in \eqref{Sdef}. Then $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is essentially principal if and only if for all $s\in \mathcal{S}_0$, each interior fixed point of $s$ is trivially fixed.
\end{prop}
\begin{proof}
The ``only if'' direction is obvious. So, assume that for each $s\in \mathcal{S}_0$, each interior fixed point of $s$ is trivially fixed, and suppose that $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is not essentially principal. Then there must exist $[p,q]\in \mathcal{S}$ such that $TF_{[p,q]}\subsetneq \mathring{F}_{[p,q]}$. Since $P$ satisfies condition (H), $TF_{[p,q]}$ is closed in $D_{q}$ and so $\mathring{F}_{[p,q]}\setminus TF_{[p,q]}$ is open, so we can find a nonempty open set $U \subset \mathring{F}_{[p,q]}\setminus TF_{[p,q]}$. Since $U$ is open and nonempty, it must contain an ultrafilter $\xi$. If we are able to find a $b\in P$ such that $bP\in \xi$ and $[1_P, b][p,q][b,1_P]\in \mathcal{S}_0$, then $D_b$ would contain $\xi$, and so $\theta_{[1_P,b]}(\xi)$ is fixed by $[1_P, b][p,q][b,1_P]$. By assumption, it must be in $TF_{[1_P, b][p,q][b,1_P]}$, so we can find a nonzero idempotent $[r,r]$ such that $[r,r]\leqslant [1_P, b][p,q][b,1_P]$ with $rP\in \theta_{[1_P,b]}(\xi)$. A short calculation shows that this implies that $[b,1_P][r,r][1_P, b]= [br, br]$ is a nonzero idempotent less than $[p,q]$. Furthermore, $rP\in \theta_{[1_P,b]}(\xi)$ implies that $brP\in \xi$, and so $\xi\in D_{br}$ which would imply that $\xi$ is trivially fixed by $[p,q]$, a contradiction. So finding such a $b\in P$ would prove the proposition.
So, suppose that $[1_P, b][p,q][b,1_P]\notin \mathcal{S}_0$ for all $b\in P$ such that $bP\in \xi$, and fix an element $bP\in \xi$. Because $\xi$ is fixed by $[p,q]$, we have that $p(q^{-1}(bP))\in \xi$. Hence, there exist $b_1, q_1, r_1\in P$ such that $bP\cap qP = r_1P$ and $bb_1 = qq_1 = r_1$, and so we have $p(q^{-1}(bP)) = pq_1P\in \xi$. Since $bP, pq_1P\in \xi$, there exist $p_1, b_2, r_2\in P$ such that $pq_1P\cap bP = r_2P$ and $pq_1p_1 = bb_2 = r_2$. Upon redefining $a:= b_2$ and $c:= b_1p_1$, a short calculation shows that
\[
[1_P,b][p,q][b, 1_P] = [a,c].
\]
Since we are assuming that this is not an element of $\mathcal{S}_0$, we must have that one of $a$ or $c$ is not an element of $P_0$. Suppose for the moment that $a\notin P_0$, which means there exists $z\in P$ such that $zP\cap aP = \varnothing$. Letting $\xi_{bP} = \theta_{[bz, 1_P]}(\xi)$, we see that $bP\in \xi_{bP}$ (because $bzP\in \xi_{bP}$ and $\xi_{bP}$ is upwards closed). However, $\xi_{bP}$ is not fixed by $[p,q]$ (whether $\theta_{[p,q]}(\xi_{bP})$ is defined or not) because any filter containing $bP$ and fixed by $[p,q]$ must contain $r_2P = baP$ by the same reasoning as above, and since $\xi_{bP}$ is closed under intersections this would mean that it contains $baP\cap bzP$, which is empty by assumption. In a similar fashion, we can construct an ultrafilter $\xi_{bP}$ containing $bP$ not fixed by $[p, q]$ if we instead assume that $c\notin P_0$. Hence we have constructed a net $\{\xi_{bP}\}_{bP\in \xi}$ of ultrafilters each of which contains $bP$ but none of which is fixed by $[p,q]$. This net converges to $\xi$, and so $U$ contains a point in this net, which is impossible since $U$ is fixed by $[p,q]$. Hence, we are forced to conclude that we can find $b\in P$ such that $bP\in \xi$ and $[1_P, b][p,q][b,1_P]\in \mathcal{S}_0$, and so we are done.
\end{proof}
We would like an algebraic condition on $P$ which guarantees that $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is essentially principal. To do this we recall some terminology from \cite{EP14}.
\begin{defn}
Let $S$ be an inverse semigroup, let $s\in S$. If $e\in E(S)$ is an idempotent with $e\leqslant s^*s$ it is said that
\begin{enumerate}
\item $e$ is {\em fixed} by $s$ if $e\leqslant s$ (i.e., $se = e$), and
\item $e$ is {\em weakly fixed} by $s$ if $sfs^*f \neq 0$ for every nonzero idempotent $f\leqslant e$.
\end{enumerate}
\end{defn}
We translate this terminology to our situation in the following lemma.
\begin{lem}
Let $P$ be a right LCM semigroup with identity, and take $p, q, r\in P$ such that $r = qk$ for some $k\in P$. Then
\begin{enumerate}
\item $[r,r] = [qk, qk]$ is fixed by $[p,q]$ if and only if $pk=r=qk$, and
\item $[r,r] = [qk,qk]$ is weakly fixed by $[p,q]$ if and only if for all $a\in P$,
\[
qkaP\cap pkaP\neq \varnothing.
\]
\end{enumerate}
Also, if $[r,r]$ is fixed by $[p,q]$, it is weakly fixed by $[p,q]$.
\end{lem}
\begin{proof}
Note that in the statement of the lemma, we only consider $r$'s which are right multiples of $q$ because this is exactly what it means to have $[r,r]\leqslant [p,q]^*[p,q]$.
\begin{enumerate}
\item If $[r,r]$ is fixed by $[p,q]$, we have $[p,q][r,r] = [r,r]$ and so $[r,r] = [pk, r]$. Thus $pk = r = qk$. On the other hand, if $pk = qk = r$, then $[p,q][r,r] = [p,q][qk, qk]= [pk,qk] = [r,r]$, and so $[r,r]$ is fixed by $[p,q]$.
\item Suppose that $[r,r]$ is weakly fixed by $[p,q]$. An idempotent is below $[r,r]$ if and only if it is of the form $[ra, ra]$. Thus $[r,r]$ being weakly fixed by $[p,q]$ implies that for every $a\in P$,
\[
0\neq [p,q][ra,ra][q,p][ra,ra] = [p,q][qka,qka][q,p][ra,ra] = [pka, pka][ra,ra].
\]
Hence $raP\cap pkaP\neq \varnothing$. Conversely, if $raP\cap pkaP\neq \varnothing$ for all $a\in P$, then with the same calculation above we see that for all $a\in P$ the product $[p,q][ra,ra][q,p][ra,ra] \neq 0$, and so $[r,r]$ is weakly fixed by $[p,q]$.
\end{enumerate}
As in the general situation, it is clear that each fixed idempotent is weakly fixed.
\end{proof}
The following statement is implicit in the proof of \cite[Theorem 4.10]{EP14}, though we spell it out here for emphasis.
\begin{lem}
Let $S$ be an inverse semigroup, and suppose that either
\begin{enumerate}
\item[(i)]every tight filter in $E(S)$ is an ultrafilter, or
\item[(ii)]for every $s\in S$, the set $J_s = \{e\in E(S)\mid e\leqslant s\}$ has a finite cover.
\end{enumerate}
Then for each $s\in S$, $\mathring{F}_s \subset TF_s$ if and only if for all $e$ weakly fixed by $s$, there is a finite cover for $J_e$ consisting of fixed idempotents.
\end{lem}
The following is a rephrasing of the above result for our situation.
\begin{lem}\label{EPalgebraic}
Let $P$ be a right LCM semigroup with identity which satisfies condition (H), or such that the only tight filters in $\mathcal{J}(P)$ are ultrafilters. Then $\mathring{F}_{[p,q]}\subset TF_{[p,q]}$ if and only if $[p,q]$ satisfies the following condition:
\begin{enumerate}
\item[(EP)] for all $[qk, qk]$ weakly fixed by $[p,q]$, there exists a foundation set $F\subset P$ such that $qkf = pkf$ for all $f\in F$.
\end{enumerate}
\end{lem}
We note that any idempotent $[p,p]$ satisfies (EP) using the foundation set $\{1_P\}$.
One notices that we still fall slightly short of being able to apply Theorem \ref{groupoidsimple}, because we have only given conditions under which $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is essentially principal, not topologically principal. As discussed above Theorem \ref{groupoidsimple}, these two notions are equivalent when $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is Hausdorff and second countable. We are considering only countable semigroups $P$, and one easily sees that this guarantees that $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is second countable.
We now come to the main result of this section.
\begin{theo}\label{Qsimple}
Let $P$ be a right LCM semigroup with identity which satisfies condition (H), let $P_0$ be the core of $P$, and let $\mathcal{S}$ be as in \eqref{Sdef}. Then $\mathcal{Q}(P)$ is simple if and only if
\begin{enumerate}
\item $\mathcal{Q}(P)\cong C^*_r(\mathcal{G}_{\text{tight}}(\mathcal{S}))$, and
\item for all $p, q\in P_0$, the element $[p,q]$ satisfies condition (EP).
\end{enumerate}
\end{theo}
\begin{proof}
By Proposition \ref{hausdorff}, $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is Hausdorff, so we can apply Theorem \ref{groupoidsimple}. By Lemma \ref{minimallem}, $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is always minimal. By \cite[Theorem 4.7]{EP14}, Proposition \ref{EPcore}, and Lemma \ref{EPalgebraic}, $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is topologically principal if and only if we have (2) above. The result follows.
\end{proof}
\subsection{Pure infiniteness of $\mathcal{Q}(P)$}\label{purelyinfinitesubsection}
We recall a definition from \cite{EP14}.
\begin{defn}\label{locallycontractingsemigroup}
An inverse semigroup $S$ is called {\em locally contracting} if for every nonzero $e\in E(S)$ there exist $s\in S$ and a finite set $F = \{f_0, f_1, \dots, f_n\}\subset E(S)\setminus \{0\}$ with $n\geq 0$ such that for all $0\leq i \leq n$ we have
\begin{enumerate}
\item $f_i\leq es^*s$,
\item there exists $f\in F$ such that $sf_is^*f \neq 0$, and
\item $f_0sf_i = 0$.
\end{enumerate}
\end{defn}
As one might guess from the name, if $S$ is locally contracting then $\mathcal{G}_{\text{tight}}(S)$ is locally contracting by \cite[Corollary 6.6]{EP14}.
\begin{lem}\label{lemlocallycontracting}
Let $P$ be a right LCM semigroup with identity and let $\mathcal{S}$ be as in \eqref{Sdef}. Then $\mathcal{S}$ is locally contracting if and only if $P\neq P_0$.
\end{lem}
\begin{proof}
The ``only if'' direction is trivial, because if $P = P_0$ we could not satisfy part 3 of Definition \ref{locallycontractingsemigroup}, as the product of the two idempotents $f_0$ and $sf_is^*$ could not be zero.
For the ``if'' direction, we suppose that $P\neq P_0$, and hence we can find $p, q\in P$ such that $pP\cap qP = \varnothing$. By \cite[Proposition 6.7]{EP14}, we will be done if for every $r\in P$ we can find $a\in P$ and $f_0, f_1\in P$ such that $f_0P\subset f_1P\subset rP$, $af_1P\subset f_1P$ and $[f_0,f_0][a,1_P][f_1, f_1] = 0$. To this end, let
\[
a = f_1 = rp \hspace{1cm} f_0 = rprq.
\]
Then clearly $f_0P\subset f_1P\subset rP$, and $af_1P = rprpP\subset f_1P$. We also have that
\[
[f_0,f_0][a,1_P][f_1, f_1] = [rprq, rprq][rp, 1_P][rp, rp] = [rprq, rprq][rprp, rp]
\]
and since $rprqP\cap rprpP = \varnothing$ by assumption, this product is zero.
\end{proof}
\begin{theo}\label{purelyinfinite}
Let $P$ be a right LCM semigroup with identity which satisfies condition (H), and suppose that $\mathcal{Q}(P)$ is simple. Then $\mathcal{Q}(P)$ is purely infinite if and only if $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is not the trivial (one-point) groupoid.
\end{theo}
\begin{proof}
The ``only if'' direction is clear, because if $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is one point, its C*-algebra is isomorphic to $\mathbb{C}$, which is not purely infinite.
On the other hand, if $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is not the one-point groupoid, we have two cases. If $\widehat E_{\text{tight}}(\mathcal{S})$ is one point then there are no points with trivial isotropy, and so $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is not topologically principal, contradicting Theorem \ref{groupoidsimple}. If $\widehat E_{\text{tight}}(\mathcal{S})$ has more than one point, then there are at least two distinct ultrafilters in $\mathcal{J}(P)$. Hence we can find a $p\in P$ and an ultrafilter $\xi$ such that $pP\notin \xi$, and since $\xi$ is an ultrafilter there must be $qP\in \xi$ such that $pP\cap qP = \varnothing$. Thus neither $p$ nor $q$ is in $P_0$, and so $P\neq P_0$ implying that $\mathcal{S}$ is locally contracting by Lemma \ref{lemlocallycontracting}. Hence, by \cite[Corollary 6.6]{EP14} and \cite[Proposition 2.4]{AD97} $C^*_r(\mathcal{G}_{\text{tight}}(\mathcal{S}))\cong \mathcal{Q}(P)$ is purely infinite.
\end{proof}
Hence, in the presence of simplicity, pure infiniteness of $\mathcal{Q}(P)$ follows automatically in all but the most trivial cases.
In \cite[Theorem 5.3]{BL14}, the authors give conditions under which $C^*(P)$ is simple and purely infinite for a right LCM semigroup $P$. Since $\mathcal{Q}(P)$ is in some sense the smallest quotient of $C^*(P)$, it is not surprising that $\mathcal{Q}(P)$ is simple under these milder conditions.
\section{Examples}\label{examplessection}
\subsection{Free semigroups}\label{freesemigroupssection}
Let $X$ be a finite set, and let $X^n$ denote the set of words of length $n$ in $X$, with $X^0$ consisting of a single empty word, $\emptyset$. Let
\[
X^* = \bigcup_{n\geq 0}X^n.
\]
Then $X^*$ becomes a semigroup with the operation of concatenation: if $\alpha = \alpha_1\alpha_2\cdots\alpha_{k}$ and $\beta = \beta_1\beta_2\cdots\beta_l$ then their product is $\alpha\beta = \alpha_1\alpha_2\cdots\alpha_{k}\beta_1\beta_2\cdots\beta_l$, while the empty word is the identity. If $\alpha\in X^n$, we write $|\alpha| = n$ and say that the {\em length} of $\alpha$ is $n$. The core of this semigroup is $X^*_0 = U(X^*) = \{\emptyset\}$. If we have $\alpha, \beta\in X^*$, then $\alpha X^* = \beta X^*$ if and only if $\alpha = \beta$. Furthermore, $X^*$ is left cancellative, and either $\alpha X^* \cap \beta X^* = \varnothing$ or one is included in the other, so $X^*$ is right LCM.
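As a sanity check of the right LCM property, one can verify on a finite truncation that two principal right ideals of $X^*$ intersect exactly when one word is a prefix of the other, in which case the intersection is again principal. The following Python sketch (our own illustration, with $X = \{0,1\}$ and words of length at most $3$) does this by brute force.

```python
from itertools import product

# Prefix model of X* for X = {0, 1} (illustration only): the ideal aX*
# is the set of words extending a, truncated at a fixed length.
X = ['0', '1']
words = [''.join(w) for n in range(4) for w in product(X, repeat=n)]

def ideal(a, bound=6):
    """All words in aX* of length at most `bound`."""
    return {a + ''.join(w) for n in range(bound - len(a) + 1)
            for w in product(X, repeat=n)}

for a, b in product(words, repeat=2):
    meet = ideal(a) & ideal(b)
    nested = a.startswith(b) or b.startswith(a)
    assert bool(meet) == nested        # ideals meet iff prefix-comparable
    if nested:                         # right LCM: the meet is principal
        longer = a if len(a) >= len(b) else b
        assert meet == ideal(longer)
```

On this truncation, the least common multiple of two comparable words is simply the longer word, which is the combinatorial content of the claim that $X^*$ is right LCM.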
From the relations (L1)-(L4) it follows easily that $C^*(X^*)$ is the universal unital C*-algebra generated by isometries $v_1, \dots, v_{|X|}$ such that
\[
v_i^*v_j = \begin{cases}1 &\text{if }i = j\\0&\text{ otherwise,}\end{cases}
\]
that is, $C^*(X^*)$ is isomorphic to the Toeplitz algebra $\mathcal{TO}_{|X|}$. Furthermore, the set $X = X^1\subset X^*$ is a foundation set, and so in $\mathcal{Q}(X^*)$ we have
\[
0 = \prod_{x\in X}(1 - e_{xX^*}) = 1 - \bigvee_{x\in X}v_xv_x^* = 1-\sum_{x\in X}v_xv_x^*,
\]
so that $\sum_{x\in X}v_xv_x^* = 1$,
and since the Cuntz algebra $\mathcal{O}_{|X|}$ is the universal C*-algebra generated by such elements, there is a surjective $*$-homomorphism from $\mathcal{O}_{|X|}$ to $\mathcal{Q}(X^*)$ which must be an isomorphism because $\mathcal{O}_{|X|}$ is simple.
Principal right ideals of $X^*$ are either disjoint or comparable by inclusion, and hence ultrafilters are maximal well-ordered subsets of $\mathcal{J}(X^*)$. The space of ultrafilters can be identified with the compact space $\Sigma_X$ of right-infinite words in $X$ via the homeomorphism
\[
\alpha\in \Sigma_X \mapsto \{X^*, \alpha_1X^*, \alpha_1\alpha_2X^*, \alpha_1\alpha_2\alpha_3X^*, \dots\} \in \widehat E_{\text{tight}}(\mathcal{S}).
\]
Here every tight filter is an ultrafilter. Because $X^*$ is right cancellative (in fact, it can be embedded in the free group on $|X|$ elements) it satisfies condition (H). The inverse semigroup $\mathcal{S}$ from \eqref{Sdef} is known in the literature as the {\em polycyclic monoid} on $|X|$ generators. For all $\alpha\in X^*$, the idempotent $[\alpha, \alpha]$ is weakly fixed by $[\emptyset, \emptyset]$, and $[\emptyset, \emptyset]$ trivially satisfies condition (EP).
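In this picture the multiplication of $\mathcal{S}$ becomes a prefix computation: $[p,q][r,s] = [pr',s]$ if $r = qr'$, $[p,q][r,s] = [p,sq']$ if $q = rq'$, and the product is $0$ otherwise. A minimal Python model of the polycyclic monoid on two generators (our own sketch, with words as strings and the empty word as `''`):

```python
# Toy model of the polycyclic monoid on {0, 1} (illustration only):
# a nonzero element [p, q] is a pair of binary words, and 0 is the zero.
def mult(x, y):
    if x == 0 or y == 0:
        return 0
    (p, q), (r, s) = x, y            # x = [p, q], y = [r, s]
    if r.startswith(q):              # qX* contains rX*: r = q r'
        return (p + r[len(q):], s)   # [p, q][r, s] = [p r', s]
    if q.startswith(r):              # rX* contains qX*: q = r q'
        return (p, s + q[len(r):])   # [p, q][r, s] = [p, s q']
    return 0                         # disjoint ideals: the product is 0

assert mult(('0', '1'), ('1', '')) == ('0', '')    # [0,1][1,e] = [0,e]
assert mult(('0', '1'), ('0', '')) == 0            # 1X* and 0X* are disjoint
assert mult(('', '0'), ('01', '1')) == ('1', '1')  # [e,0][01,1] = [1,1]
```

One can further check the inverse semigroup identity $ss^*s = s$ with $s = [p,q]$ and $s^* = [q,p]$ directly in this model.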
\subsection{Right LCM semigroups embedded in groups}
In \cite{Li13} Li builds upon his earlier work and makes a comprehensive study of the C*-algebras of semigroups which may be embedded into groups. There, the semilattice of constructible ideals $\mathcal{J}(P)$ is considered, though it is not always equal to the set of all principal right ideals; the set of filters is denoted $\Sigma$, the set of ultrafilters is denoted $\Sigma_{\text{max}}$, and the closure of the latter is denoted $\partial\Sigma$. In \cite[\S 5.2]{Li13} an inverse semigroup analogous to our $\mathcal{S}$ from \eqref{Sdef} is defined; this inverse semigroup acts on $\Sigma$. The groupoid of germs of this action is the universal groupoid for $\mathcal{S}$, and the C*-algebra of its reduction to $\partial\Sigma$ (that is to say, $\mathcal{G}_{\text{tight}}(\mathcal{S})$) is identified as a suitable boundary quotient. Our only contribution to the literature for this situation would seem to be the isomorphism between this boundary quotient and the one defined in \cite{BRRW14}. We do note that our result Proposition \ref{EPcore} is analogous to \cite[Proposition 7.20]{Li13}, and indeed it seems both were inspired by \cite[Proposition 5.5]{CL07}.
\subsection{Zappa-Sz\'ep products of semigroups}\label{zappaszepsection}
The following is a construction considered in \cite{BRRW14}. Let $U$ and $A$ be semigroups with identities $1_U$ and $1_A$ respectively and suppose there exist maps $A\times U \to U$ given by $(a, u)\mapsto a\cdot u$, and $A\times U \to A$ given by $(a,u)\mapsto \left.a\right|_u$ which satisfy
\vspace{-0.3cm}\begin{tabular}{p{7cm} p{7cm}}
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\item[(ZS1)] $1_A\cdot u = u$
\item[(ZS2)] $(ab)\cdot u = a\cdot(b\cdot u)$
\item[(ZS3)] $a\cdot 1_U = 1_U$
\item[(ZS4)] $a\cdot(uv) = (a\cdot u)(\left.a\right|_u\cdot v)$
\end{enumerate}
&
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\item[(ZS5)] $\left.a\right|_{1_U} = a$
\item[(ZS6)] $\left.a\right|_{uv} = \left.a\right|_u\left.\hspace{-0.1cm}\right|_v$
\item[(ZS7)] $\left.1_A\right|_u = 1_A$
\item[(ZS8)] $\left.ab\right|_u = \left.a\right|_{b\cdot u}\left.b\right|_u$
\end{enumerate}
\end{tabular}
\noindent for all $u, v\in U$ and $a, b\in A$. Then $U\times A$ becomes a semigroup with identity $(1_U, 1_A)$ when given the operation
\[
(u,a)(v,b) = (u(a\cdot v), \left.a\right|_vb).
\]
This is called the {\em Zappa-Sz\'ep product} of $U$ and $A$, and is denoted $U\bowtie A$. If in addition to the above, we have that
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\item[(i)]$U$ and $A$ are both left cancellative,
\item[(ii)]$U$ is right LCM,
\item[(iii)]$\mathcal{J}(A)$ is totally ordered by inclusion, and
\item[(iv)]the map $u\mapsto a\cdot u$ is a bijection on $U$ for each $a\in A$,
\end{enumerate}
then $U\bowtie A$ is a right LCM semigroup as well, see \cite[Lemma 3.3]{BRRW14}.
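To make the axioms concrete, here is a small Python sketch (an illustration of ours, not from \cite{BRRW14}; the helper names `act`, `res` and `mult` are hypothetical) checking (ZS1)--(ZS8) and associativity of the product for a toy instance: $U$ the free monoid on $\{0,1\}$ with words as strings, and $A=\mathbb{Z}/2\mathbb{Z}$ acting by flipping every letter, with every restriction trivial (so this degenerates to a semidirect product).

```python
from itertools import product

# Toy instance (ours, not from the cited paper): U = free monoid on
# {0,1} (words as strings), A = Z/2Z written additively, where 1 in A
# flips every letter and every restriction is trivial, a|_u = a.

def act(a, u):   # a . u : flip each letter iff a == 1
    return u if a == 0 else ''.join('1' if x == '0' else '0' for x in u)

def res(a, u):   # a|_u : trivial restriction
    return a

def mult(p, q):  # (u, a)(v, b) = (u (a . v), a|_v b) in U bowtie A
    (u, a), (v, b) = p, q
    return (u + act(a, v), (res(a, v) + b) % 2)

words = ['', '0', '1', '01', '10', '110']
for a, b in product([0, 1], repeat=2):
    for u, v in product(words, repeat=2):
        assert act(0, u) == u                                   # (ZS1)
        assert act((a + b) % 2, u) == act(a, act(b, u))         # (ZS2)
        assert act(a, '') == ''                                 # (ZS3)
        assert act(a, u + v) == act(a, u) + act(res(a, u), v)   # (ZS4)
        assert res(a, '') == a                                  # (ZS5)
        assert res(a, u + v) == res(res(a, u), v)               # (ZS6)
        assert res(0, u) == 0                                   # (ZS7)
        assert res((a + b) % 2, u) == (res(a, act(b, u)) + res(b, u)) % 2  # (ZS8)

# Associativity of the Zappa-Szep multiplication on a sample of triples.
elems = [(u, a) for u in words for a in [0, 1]]
for p in elems:
    for q in elems:
        for r in elems:
            assert mult(mult(p, q), r) == mult(p, mult(q, r))
```

The same skeleton accommodates any genuine (non-semidirect) example by supplying nontrivial `act` and `res` maps satisfying the eight axioms.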
By Theorem \ref{maintheorem}, the boundary quotient $\mathcal{Q}(U\bowtie A)$ defined in \cite[Definition 5.1]{BRRW14} is isomorphic to the C*-algebra of an \'etale groupoid $\mathcal{G}_{\text{tight}}(\mathcal{S})$ (where $\mathcal{S}$ is as in \eqref{Sdef}) whose unit space is homeomorphic to the space of tight filters in $\mathcal{J}(U\bowtie A)$. To use Theorems \ref{Qsimple} and \ref{purelyinfinite} requires that we know the nature of the core of our semigroup, and in this case the core has an easily describable form. Firstly, by \cite[Lemma 5.3(a)]{BRRW14}, each element $(1_U,a)$ is in the core of $U\bowtie A$. Furthermore, by \cite[Remark 3.4]{BRRW14}, for $u, v\in U$ and $a, b\in A$, we have
\[
(u, a)U\bowtie A \cap (v, b)U\bowtie A = \varnothing \hspace{0.5cm}\Leftrightarrow\hspace{0.5cm} uU\cap vU = \varnothing.
\]
Therefore, $\{(u,a)\}$ is a one-point foundation set in $U\bowtie A$ if and only if $\{u\}$ is a foundation set in $U$. Hence the core of $U\bowtie A$ is
\[
(U\bowtie A)_0 = \{(u, a)\in U\bowtie A\mid u\in U_0\}.
\]
By Proposition \ref{coreprop} this is a subsemigroup of $U\bowtie A$; in particular, for all $u\in U_0$ and $a\in A$ we have $a\cdot u \in U_0$. Thus we are justified in writing $(U\bowtie A)_0 = U_0\bowtie A$. Without having more information about $U$ and $A$ we cannot say much more, though in the sequel we consider a specific example for which we can.
\subsection{Self-similar groups}
We close with an example which is a specific case of the situation from \S \ref{zappaszepsection}. The conclusions we come to in this section are known, and combine the results of \cite{EPSep14} and \cite{BRRW14}. Indeed, generalizing the results implicit in combining \cite{EP13} (which was a preliminary version of \cite{EPSep14}) and \cite{BRRW14} was a major inspiration for this work. We present what follows to illustrate our results in the context of this interesting example.
Let $X$ be a finite set, let $G$ be a group, and let $X^*$ be as in \S \ref{freesemigroupssection}. Suppose that we have a length-preserving action of $G$ on $X^*$, with $(g, \alpha)\mapsto g\cdot \alpha$, such that for all $g\in G$, $x\in X$ there exists a unique element of $G$, denoted $\left.g\right|_x$, such that for all $\alpha\in X^*$
\[
g\cdot(x\alpha) = (g\cdot x)(\left.g\right|_x\cdot \alpha).
\]
In this case, the pair $(G,X)$ is called a {\em self-similar group}. In \cite{Nek09}, Nekrashevych associates a C*-algebra to $(G, X)$, denoted $\mathcal{O}_{G, X}$, which is the universal C*-algebra generated by a set of isometries $\{s_x\}_{x\in X}$ and a unitary representation $\{u_g\}_{g\in G}$ satisfying
\begin{enumerate}\addtolength{\itemsep}{-0.5\baselineskip}
\item[(i)]$s_x^*s_y = 0$ if $x\neq y$,
\item[(ii)]$\sum_{x\in X}s_xs_x^* = 1$,
\item[(iii)]$u_gs_x = s_{g\cdot x}u_{\left.g\right|_x}$.
\end{enumerate}
If one defines, for $\alpha\in X^*$ and $g\in G$,
\[
\left.g\right|_\alpha := \left.g\right|_{\alpha_1}\hspace{-0.1cm}\left.\right|_{\alpha_2}\cdots \hspace{-0.1cm}\left.\right|_{\alpha_{|\alpha|}}
\]
then the free semigroup $X^*$ and the group $G$ (viewed as a semigroup) together with the maps $(g, \alpha)\mapsto g\cdot \alpha$ and $(g, \alpha)\mapsto \left.g\right|_\alpha$ satisfy the conditions (ZS1)--(ZS8), and so we may form the Zappa-Sz\'ep product $X^*\bowtie G$. Furthermore, conditions (i)--(iv) from \S \ref{zappaszepsection} are easily seen to hold, so $X^*\bowtie G$ is a right LCM semigroup. The semilattice of principal right ideals of $X^*\bowtie G$ is isomorphic to that of $X^*$ via the map $(\alpha, g)X^*\bowtie G \mapsto \alpha X^*$, and so one may identify $\mathcal{J}(X^*\bowtie G)$ with $\mathcal{J}(X^*)$, with inclusion order given by
\[
\alpha X^*\subset \beta X^* \Leftrightarrow \beta\text{ is a prefix of }\alpha.
\]
As before, principal right ideals are either disjoint or comparable by inclusion, and hence the unit space of the tight groupoid is again homeomorphic to $\Sigma_X$, which is homeomorphic to the Cantor set. By \cite[Theorem 6.7]{BRRW14}, $\mathcal{Q}(X^*\bowtie G)\cong \mathcal{O}_{G, X}$.
In general, $X^*\bowtie G$ is not cancellative, although it is embeddable into a group if and only if it is cancellative \cite[Theorem 5.5]{LW13}. We recall the following concepts from \cite{EPSep14}. Let $\alpha \in X^*$, and $g\in G$. Then $\alpha$ is said to be {\em strongly fixed by} $g$ if $g\cdot \alpha = \alpha$ and $\left.g\right|_\alpha = 1_G$, and we let
\[
SF_g = \{\alpha\in X^*\mid \alpha\text{ strongly fixed by }g\}.
\]
Of course, if $\alpha\in SF_g$, then $\alpha\gamma\in SF_g$ for every word $\gamma\in X^*$. We will say that a strongly fixed word $\alpha$ is {\em minimal} for $g$ if $\alpha\in SF_g$ and no proper prefix of $\alpha$ is strongly fixed by $g$, and we denote the set of such words by
\[
MSF_g = \{\alpha\in X^*\mid \alpha\text{ minimal strongly fixed by }g\}\subset SF_g.
\]
The self-similar group $(G, X)$ is said to be {\em pseudo-free} if $SF_g$ is empty for all $g\neq 1_G$. A short calculation shows that $X^*\bowtie G$ is cancellative if and only if $(G,X)$ is pseudo-free (see \cite[Proposition 3.11]{LW13} or \cite[Lemma 3.2]{ES14}).
As mentioned earlier, our condition (H) is slightly weaker than right cancellativity, so one hopes that we can give conditions on $(G,X)$ which are equivalent to (H). If $(\alpha, g), (\beta, h)\in X^*\bowtie G$ meet, then there exists $(\gamma, k)\in X^*\bowtie G$ such that
\begin{eqnarray*}
(\alpha, g)(\gamma, k) &=& (\beta, h)(\gamma, k)\\
(\alpha(g\cdot \gamma), \left.g\right|_\gamma k) &=& (\beta(h\cdot \gamma), \left.h\right|_\gamma k),
\end{eqnarray*}
and since the action of $G$ on $X^*$ fixes lengths, we must have that $\alpha = \beta$, $g\cdot \gamma = h\cdot\gamma$ and $\left.g\right|_\gamma=\left.h\right|_\gamma$. After noticing that the definition of a self-similar group implies that $\left.k\right|^{-1}_v = \left.k^{-1}\right|_{k\cdot v}$ for all $k\in G, v\in X^*$, this is easily seen to imply that $\gamma$ is strongly fixed by $g^{-1}h$. Hence, $(X^*\bowtie G)_{(\alpha, g), (\alpha, h)}$ is only nonempty when $g^{-1}h$ has a strongly fixed word, and
\[
(X^*\bowtie G)_{(\alpha, g), (\alpha, h)} = \{ (\gamma, k)\mid k\in G, \gamma\in SF_{g^{-1}h}\} = (X^*\bowtie G)_{(\emptyset, g), (\emptyset, h)}.
\]
Thus, $X^*\bowtie G$ will satisfy condition (H) if we can show that for all $g\in G\setminus \{1_G\}$, there exists a finite set $F\subset SF_g$ such that for all $\alpha\in SF_g$ there exists $\beta \in F$ such that $\beta X^*\cap \alpha X^*\neq \varnothing$. The following result, which appears in \cite{EPSep14}, gives conditions for when this occurs.
\begin{lem}\cite[Theorem 12.2]{EPSep14}\label{MSFH}
Let $(G, X)$ be a self-similar group. Then $X^*\bowtie G$ satisfies condition (H) if and only if, for all $g\in G\setminus\{1_G\}$, the set $MSF_g$ is finite.
\end{lem}
\begin{proof}
One easily sees that if $MSF_g$ is finite, then taking $F = MSF_g$ satisfies the above condition, as each strongly fixed word must have a prefix which is minimal. Conversely, if such a finite $F$ exists and $MSF_g$ is infinite, find a $\gamma\in MSF_g$ such that $|\gamma| > \max_{\alpha\in F}|\alpha|$. Then there must exist $\alpha\in F$ such that $\alpha X^*\cap \gamma X^*\neq \varnothing$, and since $|\gamma|> |\alpha|$, $\alpha$ must be a prefix of $\gamma$. But $\alpha$ is strongly fixed, and $\gamma$ is supposed to be minimal, so we have a contradiction. Hence $MSF_g$ is finite.
\end{proof}
We now address condition (EP). In this example, the core of $X^*\bowtie G$ coincides with the group of units of $X^*\bowtie G$, which is
\[
(X^*\bowtie G)_0 = U(X^*\bowtie G) = \{ (\emptyset, g)\mid g\in G\}
\]
and can be identified with the group $G$. The inverse semigroup \eqref{Sdef} has been previously constructed in \cite{EPSep14}, and generalizing their results there was an inspiration for this work. Let
\[
\mathcal{S}_{G, X} = \{ (\alpha, g, \beta)\mid \alpha,\beta\in X^*, g\in G\}.
\]
This set becomes an inverse semigroup when given the operation
\[
(\alpha, g, \beta)(\gamma, h, \nu) = \begin{cases}(\alpha (g\cdot\gamma'), \left.g\right|_{\gamma'}h, \nu), &\text{if }\gamma = \beta\gamma',\\ (\alpha, g(\left.h^{-1}\right|_{\beta'})^{-1}, \nu (h^{-1}\cdot\beta')), & \text{if } \beta = \gamma\beta',\\ 0 &\text{otherwise}\end{cases}
\]
with
\[
(\alpha, g, \beta)^* = (\beta, g^{-1}, \alpha).
\]
Then the map from our $\mathcal{S}$ to $\mathcal{S}_{G, X}$ given by
\[
[(\alpha, g), (\beta, h)]\mapsto (\alpha, gh^{-1}, \beta)
\]
is an isomorphism of inverse semigroups, so from now on we will use this identification to discuss condition (EP). We note at this point that it follows from \cite[Corollary 6.4]{EPSep14} that $C^*_{\text{tight}}(\mathcal{S}_{G, X})\cong \mathcal{O}_{G, X}$, and so in this case our Theorem \ref{maintheorem} is already known.
An element of $\mathcal{S}_{G, X}$ is an idempotent if and only if it is of the form $(\alpha, 1_G, \alpha)$. Identifying the core with $G$, we see that an idempotent $(\alpha, 1_G, \alpha)$ is weakly fixed by $g\in G$ if and only if, for all $\gamma\in X^*$, $(g\cdot \alpha)\gamma X^*\cap \alpha\gamma X^* \neq \varnothing$. By length considerations, this is equivalent to saying that $g\cdot \alpha = \alpha$ and for all $\gamma\in X^*$, $\left.g\right|_\alpha \cdot \gamma = \gamma$. If the action of $G$ on $X^*$ is {\em faithful} (which is to say that for all $g\in G\setminus \{1_G\}$ there exists $\alpha\in X^*$ such that $g\cdot \alpha \neq \alpha$), then this is equivalent to saying that $\alpha$ is strongly fixed by $g$. So, in the presence of faithfulness,
\[
(\alpha, 1_G, \alpha)\text{ weakly fixed by }(\emptyset, g, \emptyset) \Leftrightarrow \alpha\text{ strongly fixed by }g \Leftrightarrow (\alpha, 1_G,\alpha)\text{ fixed by }(\emptyset, g, \emptyset).
\]
Hence we have the following.
\begin{lem}\label{ssgEP}
Let $(G, X)$ be a faithful self-similar group, and let $g\in G$. Then $(\emptyset, g,\emptyset)$ satisfies condition (EP).
\end{lem}
\indent We now come to the following result on self-similar groups. We note that it is not original to this work; it also follows from \cite[Proposition 5.5]{Nek09} and, alternatively, from \cite[Proposition 17.1]{EPSep14}.
\begin{theo}\label{ssgclassification}
Let $(G, X)$ be a faithful self-similar group, suppose $G$ is amenable, and suppose that for all $g\in G\setminus\{1_G\}$, $MSF_g$ is finite. Then $\mathcal{O}_{G, X} \cong \mathcal{Q}(X^*\bowtie G)$ is nuclear, simple, and purely infinite.
\end{theo}
\begin{proof}
Let $\mathcal{S}$ be as in \eqref{Sdef} for the semigroup $X^*\bowtie G$. Because $MSF_g$ is finite for all $g\in G\setminus \{1_G\}$, $X^*\bowtie G$ satisfies condition (H) by Lemma \ref{MSFH}. Because $G$ is amenable, we may apply \cite[Corollary 10.16]{EPSep14} to get that $\mathcal{Q}(X^*\bowtie G)$ is nuclear. Since nuclearity passes to quotients, this implies that $C^*_r(\mathcal{G}_{\text{tight}}(\mathcal{S}))$ is nuclear. Thus, by \cite[Theorem 5.6.18]{BO08}, $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is amenable and so $C^*_r(\mathcal{G}_{\text{tight}}(\mathcal{S}))\cong C^*(\mathcal{G}_{\text{tight}}(\mathcal{S})) \cong \mathcal{Q}(X^*\bowtie G)$. By Lemma \ref{ssgEP}, every element of $\mathcal{S}_0$ satisfies (EP). Hence we may use Theorem \ref{Qsimple} to conclude that $\mathcal{Q}(X^*\bowtie G)$ is simple. Since we are assuming that $|X|>1$, $X^*\bowtie G\neq (X^*\bowtie G)_0$, implying that $\mathcal{G}_{\text{tight}}(\mathcal{S})$ is not the trivial groupoid, and so by Theorem \ref{purelyinfinite} we have that $\mathcal{Q}(X^*\bowtie G)$ is purely infinite.
\end{proof}
\begin{ex}{\bf The Odometer and Modified Odometer}
We will give two examples of faithful self-similar groups. For the first, let $X = \{0, 1\}$, let $\mathbb{Z} = \left\langle z\right\rangle$ be the group of integers with identity $e$ written multiplicatively. The {\em 2-odometer} is the self-similar group $(\mathbb{Z}, X)$ determined by
\[
z\cdot 0 = 1\hspace{1cm} \left.z\right|_0 = e
\]
\[
z\cdot 1 = 0\hspace{1cm} \left.z\right|_1 = z.
\]
If one views a word $\alpha\in X^*$ as a binary number (written backwards), then $z\cdot \alpha$ is the same as 1 added to the binary number for $\alpha$, truncated to the length of $\alpha$ if needed. If such truncation is not needed, $\left.z\right|_\alpha = e$, but if truncation is needed, $\left.z\right|_\alpha = z$. This self-similar group is faithful and pseudo-free \cite[Example 3.4]{ES14}. Hence $(\mathbb{Z}, X)$ satisfies the hypotheses of Theorem \ref{ssgclassification}, and so $\mathcal{Q}(X^*\bowtie \mathbb{Z})$ is nuclear, simple, and purely infinite. In fact, this C*-algebra was shown in \cite[Example 6.5]{BRRW14} to be isomorphic to the C*-algebra $\mathcal{Q}_2$ defined in \cite{LL12}, and there the authors prove directly that it is nuclear, simple, and purely infinite. In \cite[Example 4.5]{ES14} we showed that this C*-algebra is isomorphic to a partial crossed product of the continuous functions on the Cantor set by the Baumslag-Solitar group $BS(1,2)$.
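The description of the action as binary addition with carry can be sketched in a few lines of Python (an illustration of ours, not from the cited papers; `odometer` and `restriction` are hypothetical helper names):

```python
# The 2-odometer action.  Encode z^n by the integer n; words are
# strings over {0,1} read least-significant-digit first.  The defining
# relations z.0 = 1, z|_0 = e and z.1 = 0, z|_1 = z, extended by
# z.(x alpha) = (z.x)(z|_x . alpha), amount to binary addition with
# carry, truncated to the length of the word.

def odometer(n, word):
    """Return z^n . word."""
    out, carry = [], n
    for x in word:
        total = int(x) + carry
        out.append(str(total % 2))
        carry = total // 2          # Python's floor division also
    return ''.join(out)             # handles negative powers of z

def restriction(n, word):
    """Return m with z^n|_word = z^m (the leftover carry)."""
    carry = n
    for x in word:
        carry = (int(x) + carry) // 2
    return carry

# Self-similarity: z^n.(uv) = (z^n.u)(z^n|_u . v).
for n in [1, -1, 3]:
    for u in ['', '0', '11']:
        for v in ['', '10', '111']:
            assert odometer(n, u + v) == odometer(n, u) + odometer(restriction(n, u), v)

# Pseudo-freeness in action: '00' is fixed by z^4 but not strongly
# fixed, since the restriction z^4|_00 = z is nontrivial.
assert odometer(4, '00') == '00' and restriction(4, '00') == 1
```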
Since pseudo-freeness is stronger than what is needed to imply condition (H), we give a modified version of this example which is not pseudo-free but whose Zappa-Sz\'ep product does satisfy condition (H). To this end, let $X_B = \{0,1,B\}$, and let $\mathbb{Z}$ be written multiplicatively as before. Define
\[
z\cdot 0 = 1\hspace{1cm} \left.z\right|_0 = e
\]
\[
z\cdot 1 = 0\hspace{1cm} \left.z\right|_1 = z.
\]
\[
z\cdot B = B\hspace{1cm} \left.z\right|_B = e.
\]
One notices that the first two lines above are the same as in the previous example, but we have added a new symbol $B$ which is fixed by every group element; in fact, the word $B$ is strongly fixed by each group element. If one is wondering why we are calling this symbol ``$B$'', one could think of it as a {\bf B}rick wall past which no group element can travel or, if one pictures the odometer as acting like a car odometer, one could think of it as a {\bf B}roken digit. In any case, our new self-similar group $(\mathbb{Z}, X_B)$ is not pseudo-free.
If $\alpha\in MSF_{z^m}$ with $m>0$, then one quickly sees that $\alpha = \beta B$ for some $\beta\in \{0,1\}^*$ such that $z^m\cdot \beta = \beta$. Due to the description of the action as adding in binary, one sees that for a given $\beta\in \{0,1\}^*$, this equation is satisfied if and only if $k2^{|\beta|} = m$ for some $k>0$. Hence for a fixed $m$, only words $\beta$ of length at most $\log_2(m)$ could possibly satisfy $z^m\cdot \beta = \beta$. There are only finitely many such words, so for all $m>0$ the set $MSF_{z^m}$ is finite. The case $m<0$ is similar. Hence $X^*_B\bowtie \mathbb{Z}$ satisfies condition (H). Direct application of Theorem \ref{ssgclassification} gives that $\mathcal{Q}(X^*_B\bowtie \mathbb{Z})$ is nuclear, simple, and purely infinite.
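The finiteness argument above can be checked by brute force. The following Python sketch (ours; the helper names are hypothetical) enumerates words over $X_B$ up to a fixed length, computes the action and restriction of $z^m$, and recovers $MSF_{z^m}$:

```python
from itertools import product

# Brute-force check of the finiteness of MSF_{z^m} for the modified
# odometer on X_B = {0, 1, B}.  Since z^c . B = B and z^c|_B = e for
# every power c, the symbol B absorbs the carry.

def mod_odometer(m, word):
    """Return (z^m . word, k) where z^m|_word = z^k."""
    out, carry = [], m
    for x in word:
        if x == 'B':
            out.append('B')
            carry = 0               # B kills the carry
        else:
            total = int(x) + carry
            out.append(str(total % 2))
            carry = total // 2
    return ''.join(out), carry

def all_words(max_len):
    for L in range(max_len + 1):
        for w in product('01B', repeat=L):
            yield ''.join(w)

def msf(m, max_len=6):
    """Minimal strongly fixed words for z^m, up to length max_len."""
    sf = {w for w in all_words(max_len) if w and mod_odometer(m, w) == (w, 0)}
    return {w for w in sf if not any(w[:i] in sf for i in range(1, len(w)))}

# beta B is strongly fixed by z^m exactly when 2^{|beta|} divides m:
assert msf(4) == {'B', '0B', '1B', '00B', '01B', '10B', '11B'}
assert msf(1) == {'B'} and msf(3) == {'B'}
```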
\end{ex}
\section*{Acknowledgement}
The author would like to thank Kangning Wang and Zhaohua Chen for reading an earlier draft, discussion about the content, and their helpful suggestions on the presentation of the paper.
\section{Introduction}
Two-sided markets, with strategic players on both the sell-side and the buy-side, have been an important research topic in economics. This paper considers the simplest model of such a market, the two-agent single-item bilateral trade.
\subsection{Model}
Suppose there is a single seller and a single buyer on the market. There is an item held by the seller and to be sold to the buyer. The buyer's private value for the item is a random variable $v$ with cumulative distribution function (CDF) $F$, and the seller's private cost of selling it is a random variable $c$ with CDF $G$. We assume that $v$ and $c$ are independent, and that $F$ and $G$ have finite first moments. We need to design a mechanism which, on input $v$ and $c$, decides whether the transaction should happen. Let $x(\cdot,\cdot)$ be a (Borel-measurable) function from $\mathbb{R}^{2}$ to $[0,1]$, which denotes the trading probability decided by the mechanism, or the ``allocation rule''. The \textit{gains-from-trade} ($\mathsf{GFT}$), or the expected social utility gain from trading, is defined as
$$\mathsf{GFT}:=\underset{\substack{v\sim F\\ c\sim G}}{\mathbb{E}}\left[(v-c)\cdot x(v,c)\right].$$
Thinking of $F$ and $G$ as given, what choice of the function $x$ maximizes $\mathsf{GFT}$? It is clear that whatever $F$ and $G$ are, the quantity $\mathsf{GFT}$ is always maximized at $x^{*}(v,c):=\mathbbm{1}\{v\geq c\}$. However, in reality, we have to take into consideration the strategic behavior of both sides of the market. As in many mechanism design problems, in order to make an allocation rule Bayesian incentive compatible (BIC) and individually rational (IR), we need to include a ``money transfer'' rule. But unlike in the case of one-sided auctions, the ``payment'' from the sell-side can be negative in two-sided markets. Thus, in addition to the commonly studied BIC and IR conditions, the ``balance of budget'' is another important consideration in the design of bilateral trade mechanisms. In particular, we are concerned with the following requirement:
\begin{definition}
If the payment from the buyer is always at least the revenue of the seller, we say the mechanism is weakly budget balanced (WBB).
\end{definition}
\cite{MS:1983} show that for very general $F$ and $G$, the ideal allocation rule $x^{*}(v,c)=\mathbbm{1}\{v\geq c\}$ cannot be made into a BIC, IR and WBB mechanism. But on the other hand, this idealism does offer a benchmark which we can try to approximate using mechanisms with these properties. We call the gains-from-trade achieved by $x^{*}(v,c)$ the \textit{first-best} $\mathsf{GFT}$. Namely, define
$$\mathsf{FB}:=\E_{\substack{v\sim F\\ c\sim G}}\left[(v-c)\cdot \mathbbm{1}\{v\geq c\}\right].$$
The maximum $\mathsf{GFT}$ achievable by Bayesian incentive compatible, individually rational and weakly budget balanced mechanisms is denoted $\mathsf{SB}$, or the \textit{second-best}. \cite{MS:1983} also describe a BIC, IR and WBB mechanism that actually achieves the second-best gains-from-trade. However, this second-best mechanism is complicated and difficult to implement in practice. As a consequence, the understanding of the second-best mechanism and the search for simple and practical alternatives have become major research problems. The following are some of the most studied simple mechanisms:
\begin{itemize}
\item The fixed-price mechanism. A fixed price $p$ is set and the transaction happens iff $v\geq p\geq c$. Formally, the allocation rule is $x(v,c)=\mathbbm{1}\{v\geq p\geq c\}$ and the buyer pays the seller the price $p$. The resulting gains-from-trade is
$$\mathsf{FixedP}=\sup_{p\in\mathbb{R}}\int_{-\infty}^{p}\int_{p}^{+\infty}(v-c)\d F(v)\d G(c).$$
The fixed-price mechanism has the additional advantage that it is dominant strategy incentive compatible (DSIC).
\item
The seller-pricing mechanism. The seller gets the right to set the price and the buyer can only decide whether or not to buy the item and pay this price. The buyer chooses to buy it iff $v\geq p$. Knowing his private cost $c$, the seller sets the price $p$ that maximizes $(p-c)(1-F(p^{-}))$, where $F(p^{-}):=\lim_{x\rightarrow p^{-}}F(x)$. The resulting gains-from-trade is
$$\mathsf{SellerP}=\int_{-\infty}^{+\infty}\int_{p_{c}}^{\infty}(v-c)\d F(v)\d G(c),\quad\text{where }p_{c}\in\argmax_{p}(p-c)(1-F(p^{-})).$$
\item The buyer-pricing mechanism. The buyer gets the right to set the price and the seller can only decide whether or not to sell the item and take this price. The seller chooses to sell it iff $c\leq p$. Knowing his private value $v$, the buyer sets the price $p$ that maximizes $(v-p)G(p)$. The resulting gains-from-trade is
$$\mathsf{BuyerP}=\int_{-\infty}^{+\infty}\int_{-\infty}^{p_{v}}(v-c)\d G(c)\d F(v),\quad\text{where }p_{v}\in\argmax_{p}(v-p)G(p).$$
\item The random-offerer mechanism. Flip a fair coin to decide who gets to set the price. This is a 50:50 mixture of the seller-pricing mechanism and the buyer-pricing mechanism, which extracts a gains-from-trade of
$$\mathsf{RandOff}=\frac{1}{2}\mathsf{SellerP}+\frac{1}{2}\mathsf{BuyerP}.$$
\end{itemize}
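As a concrete illustration (ours, not from the works cited), one can evaluate these mechanisms numerically for $F = G = \mathrm{Uniform}[0,1]$, where the optimal posted prices are $p_c=(1+c)/2$ for the seller and $p_v=v/2$ for the buyer; the standard closed forms are $\mathsf{FB}=1/6$ and $\mathsf{FixedP}=\mathsf{SellerP}=\mathsf{BuyerP}=1/8$, so both $\mathsf{FixedP}/\mathsf{FB}$ and $\mathsf{RandOff}/\mathsf{FB}$ come out to $3/4$:

```python
import numpy as np

# Evaluate the mechanisms for F = G = Uniform[0,1] by integrating
# (v - c) x(v, c) on a midpoint grid.  Exact values for uniforms:
# FB = 1/6, FixedP = SellerP = BuyerP = 1/8, so both ratios are 3/4,
# comfortably above the worst-case bounds discussed in the text.

n = 300
v = (np.arange(n) + 0.5) / n                  # grid for v ~ F
c = (np.arange(n) + 0.5) / n                  # grid for c ~ G
V, C = np.meshgrid(v, c)
gain = (V - C) / n**2                         # (v - c) dF(v) dG(c)

FB = gain[V >= C].sum()                                    # x* = 1{v >= c}
FixedP = max(gain[(V >= p) & (C <= p)].sum() for p in v)   # best fixed price
SellerP = gain[V >= (1 + C) / 2].sum()        # seller posts p_c = (1+c)/2
BuyerP = gain[C <= V / 2].sum()               # buyer posts p_v = v/2
RandOff = (SellerP + BuyerP) / 2

for val, exact in [(FB, 1/6), (FixedP, 1/8), (SellerP, 1/8), (BuyerP, 1/8)]:
    assert abs(val - exact) < 1e-2
```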
\begin{remark}\label{rem:symmetry}
It is well known that the quantities $\mathsf{BuyerP}$ and $\mathsf{SellerP}$ are symmetric to each other: if the buyer's private value were replaced by the random variable $-c$ and the seller's private cost by the random variable $-v$, the two quantities would swap. As a consequence, in terms of the gains-from-trade, the seller-pricing mechanism and the buyer-pricing mechanism are symmetric.
\end{remark}
\subsection{Our Results}\label{subsec:results}
In this paper, we focus on the following problems:
\begin{problem}\label{prob:basic}
A basic open problem surrounding the bilateral trade setting is to determine the worst-case approximation ratio of $\mathsf{SB}$ to $\mathsf{FB}$, i.e.
$$\inf_{F,G}\frac{\mathsf{SB}}{\mathsf{FB}}.$$
\end{problem}
\begin{problem}\label{prob:simple}
Seeing that the second-best mechanism is usually too complex to be practical, it is natural to ask the same question for simple mechanisms: are there simple BIC, IR and WBB mechanisms that achieve good approximations to the first-best $\mathsf{GFT}$?
\end{problem}
It was not until the recent groundbreaking work of \cite*{DMSW:2022} that the second-best gains-from-trade was shown to provide a constant approximation to the first-best, i.e. $\inf_{F,G}(\mathsf{SB}/\mathsf{FB})>0$. In fact, they attack \Cref{prob:basic} by attacking \Cref{prob:simple} instead: they show that $\mathsf{FB}\leq 8.23\cdot\mathsf{RandOff}$. Since $\mathsf{SB}\geq \mathsf{RandOff}$, this implies that $\mathsf{FB}\leq 8.23\cdot\mathsf{SB}$. In other words, we can write $$\inf_{F,G}\frac{\mathsf{SB}}{\mathsf{FB}}\geq\inf_{F,G}\frac{\mathsf{RandOff}}{\mathsf{FB}}\geq \frac{1}{8.23}\approx 0.121.$$
In the hardness direction, \cite*{LLR:1989} and \cite{BM:2016} exhibit distributions for which the ratio $\mathsf{SB}/\mathsf{FB}$ is $2/e\approx 0.736$. \cite*{BDK:2021} show that $\inf_{F,G}(\mathsf{RandOff}/\mathsf{FB})\leq 0.495$. \cite*{CGMZ:2021} (in their arxiv version) independently give a proof that $\inf_{F,G}(\mathsf{RandOff}/\mathsf{FB})\leq \frac{1}{2}-\varepsilon$ for some constant $\varepsilon>0$.
This leaves a relatively large gap between the lower bound of $0.121$ and the upper bound of $0.736$ for \Cref{prob:basic}. For the approximation ratio of $\mathsf{RandOff}$, the gap is left at $[0.121, 0.495]$. In \Cref{sec:proof1}, we improve the lower bounds of both problems from $1/8.23\approx 0.121$ to $1/3.15\approx 0.317$, as another step towards the ultimate goal of closing the gaps.
\begin{theorem}
For any pair of distributions $F$ and $G$, the inequality $\mathsf{FB}\leq 3.15\cdot\mathsf{RandOff}$ holds.
\end{theorem}
In contrast to the constant approximation provided by the random-offerer mechanism, most of the other simple mechanisms, including the buyer-pricing mechanism, the seller-pricing mechanism and the fixed-price mechanism, cannot approximate $\mathsf{FB}$ in general. For the seller-pricing mechanism, it is well known that taking $G$ to be a single point mass at 0 and $F$ an equal-revenue distribution makes the ratio $\mathsf{SellerP}/\mathsf{FB}$ tend to 0. Since $\mathsf{BuyerP}$ is symmetric to $\mathsf{SellerP}$, the buyer-pricing mechanism also has an approximation ratio of 0. The case of the fixed-price mechanism follows from taking $F$ and $G$ to be exponential distributions, as is shown by \cite{BD:2016}. Partly compensating for this hardness of approximation, there is another line of results concerned with the setting where $F$ and $G$ are subject to some restrictions. For example, by \citet{McAfee:2008} and \cite*{KV:2019},
$$\inf_{F=G}\frac{\mathsf{FixedP}}{\mathsf{FB}}=\frac{1}{2}.$$
As another example, \citet{BM:2016} consider a ``monotone hazard rate'' condition (see \Cref{def:MHR}) on $F$, and prove that
$$0.368\approx\frac{1}{e}\leq\inf_{\substack{F\in\mathcal{MHR}\\G}}\frac{\mathsf{SellerP}}{\mathsf{FB}}.$$
In \Cref{sec:proof2}, we show that the approximation ratio of $\mathsf{SellerP}$ to $\mathsf{FB}$ assuming monotone hazard rate of the buyer's distribution can be determined exactly (to be $1/(e-1)\approx 0.582$):
\begin{theorem}
If $F\in\mathcal{MHR}$, then $\mathsf{FB}\leq(e-1)\cdot\mathsf{SellerP}$, and the constant $(e-1)$ is optimal.
\end{theorem}
\subsection{More Related Work}
One might also ask about how much gains-from-trade would be lost if we use simple mechanisms instead of the known second-best mechanism. That is, the worst-case approximation ratios of simple mechanisms to $\mathsf{SB}$ are also of interest. \cite*{BCWZ:2017} show that
$\inf_{F,G}(\mathsf{RandOff}/\mathsf{SB})=\frac{1}{2}$.
There is also a line of research concerning the approximation of welfare, instead of gains-from-trade. Since we have
$$\mathsf{Welfare}=\E_{\substack{v\sim F\\ c\sim G}}\left[v\cdot x(v,c)+c\cdot\Big(1-x(v,c)\Big)\right]=\E_{c\sim G}[c]+\mathsf{GFT},$$
maximizing welfare is equivalent to maximizing gains from trade. In addition, providing a constant approximation to the first-best $\mathsf{GFT}$ also provides a constant approximation to the first-best welfare (but not vice versa). \citet{BD:2016} show that the fixed-price mechanism provides a $\left(1-\frac{1}{e}\right)$-approximation to the first-best welfare (improved to $1-\frac{1}{e}+0.0001$ by \cite*{KPV:2021}). \citet{BM:2016} show that the first-best welfare is inapproximable to above a fraction of 0.934 by BIC, IR and WBB mechanisms.
\section{Bounding the Approximation Ratio}
\label{sec:proof1}
In this section, we will prove the main result
$\mathsf{FB}\leq 3.15\cdot\mathsf{RandOff}$.
As pointed out by \cite*{DMSW:2022}, as far as $\mathsf{SellerP},\mathsf{BuyerP}$ and $\mathsf{FB}$ are concerned, it is without loss of generality to assume that the distributions of $v$ and $c$ are supported on $[0,1]$ and have continuous and positive densities.
\subsection{Notational Preparations}
\begin{enumerate}
\item
The major advantage of assuming the existence of positive densities is that we can define the following ``quantile function'', a powerful tool for analysis introduced by \cite*{DMSW:2022}: for each $x\in[0,1]$, define $\mu(x)$ to be the $(1-\lambda)$-quantile of $F|_{\geq x}$, i.e.
$$\mu(x)=F^{-1}\Big(1-\lambda+\lambda F(x)\Big),$$
where $\lambda\in(0,1)$ is a parameter to be chosen. Since $F$ is strictly increasing and continuous on $[0,1]$, so are $F^{-1}$ and $\mu$; hence $\mu$ admits an inverse $\mu^{-1}$ defined on $[\mu(0),\mu(1)]=[\mu(0),1]$.
\item
Let $\mathsf{SProfit}$ denote the maximum profit that the seller can gain in a seller-pricing mechanism, and let $\mathsf{BProfit}$ denote the maximum utility that the buyer can gain in a buyer-pricing mechanism. Obviously,
$\mathsf{SProfit}\leq\mathsf{SellerP}$ and $\mathsf{BProfit}\leq\mathsf{BuyerP}$. Since the quantities $\mathsf{SProfit}$ and $\mathsf{BProfit}$ are much easier to handle than $\mathsf{SellerP}$ and $\mathsf{BuyerP}$, we will prove the result $\mathsf{FB}\leq 3.15\cdot\left(\frac{1}{2}\mathsf{SellerP}+\frac{1}{2}\mathsf{BuyerP}\right)$ by showing $\mathsf{FB}\leq 3.15\cdot\left(\frac{1}{2}\mathsf{SProfit}+\frac{1}{2}\mathsf{BProfit}\right)$ instead.
\end{enumerate}
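As a quick sanity check (an illustration of ours), for $F$ uniform on $[0,1]$ the quantile function is simply $\mu(x) = 1-\lambda+\lambda x$, and one can verify numerically the tail identity $1-F(\mu(t)) = \lambda(1-F(t))$, i.e. $\Pr[v\geq\mu(x)\mid v\geq x]=\lambda$, which drives the computations in the proofs below:

```python
import numpy as np

# For F uniform on [0,1], F(t) = t, so
# mu(x) = F^{-1}(1 - lam + lam F(x)) = 1 - lam + lam x.  We check that
# mu is strictly increasing with mu(1) = 1, and the tail identity
# 1 - F(mu(t)) = lam (1 - F(t)).

lam = 0.4
F = lambda t: t
Finv = lambda q: q
mu = lambda x: Finv(1 - lam + lam * F(x))

xs = np.linspace(0.0, 1.0, 101)
assert np.all(np.diff(mu(xs)) > 0)            # strictly increasing
assert np.isclose(mu(1.0), 1.0)
assert np.allclose(1 - F(mu(xs)), lam * (1 - F(xs)))
assert np.isclose((1 - F(mu(0.3))) / (1 - F(0.3)), lam)
```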
\subsection{Proof of Main Theorem}
The main theme in the proof is to express everything as an integral over the seller's cost $c$. Note that the definition of $\mathsf{SProfit}$ is already of this form, and $\mathsf{FB}$ can also easily be expressed in this form. \cite*{DMSW:2022} notice that $\mathsf{BProfit}$ can also be put into this form (at the expense of possibly shrinking it a little). We state it as follows:
\begin{lemma}\label{lemma:integration-over-c}
Let $\mu$ be the quantile function defined by any $\lambda\in(0,1)$. Then
$$\int_{0}^{1}\int_{\mu(c)}^{1}\Big(s-\mu^{-1}(s)\Big) \d F(s) \d G(c)\leq \mathsf{BProfit}.$$
\begin{proof}
By Fubini-Tonelli theorem, we can change the order of integration as follows:
\begin{align*}
&\int_{0}^{1}\int_{\mu(c)}^{1}\Big(s-\mu^{-1}(s)\Big)\d F(s)\d G(c)\\
=& \int_{\mu(0)}^{1}\int_{0}^{\mu^{-1}(v)}\Big(v-\mu^{-1}(v)\Big)\d G(c)\d F(v)\\
=& \int_{\mu(0)}^{1}\Big(v-\mu^{-1}(v)\Big)G(\mu^{-1}(v))\d F(v)\\
\leq& \int_{\mu(0)}^{1}\max_{p}(v-p)G(p)\d F(v)\\
\leq& \mathsf{BProfit}.\qedhere
\end{align*}
\end{proof}
\end{lemma}
In light of \Cref{lemma:integration-over-c}, we can now express all three quantities $\mathsf{FB},\mathsf{SProfit}$ and $\mathsf{BProfit}$ as integrals against $\d G(c)$. The integrands are:
\begin{align*}
\mathsf{FB}(c)&:=\int_{c}^{1}(v-c)\d F(v),\\
\mathsf{SProfit}(c)&:=\max_{p}(p-c)(1-F(p)),\\
\mathsf{BProfit}(c)&:=\int_{\mu(c)}^{1}\Big(s-\mu^{-1}(s)\Big) \d F(s).
\end{align*}
We have (the first two directly follow from the definitions of $\mathsf{FB}$ and $\mathsf{SProfit}$, while the third follows from \Cref{lemma:integration-over-c})
\begin{align*}
\mathsf{FB}=&\int_{0}^{1}\mathsf{FB}(c)\d G(c),\\
\mathsf{SProfit}=&\int_{0}^{1}\mathsf{SProfit}(c)\d G(c),\\
\mathsf{BProfit}\geq&\int_{0}^{1}\mathsf{BProfit}(c)\d G(c).
\end{align*}
However, the expression of $\mathsf{BProfit}(c)$ looks nothing like $\mathsf{FB}(c)$ or $\mathsf{SProfit}(c)$. Our next step is to transform it into a more familiar form:
\begin{lemma}\label{lemma:transforming}
Let $\mu$ be the quantile function defined by any $\lambda\in(0,1)$. Then for any $c\in[0,1]$,
$$\mathsf{BProfit}(c)=(1-\lambda)\cdot\mathsf{FB}(c)-\int_{c}^{\mu(c)}(v-c)\d F(v).$$
\end{lemma}
\begin{proof}
This follows from successive uses of Fubini-Tonelli theorem:
\begin{align*}
&\mathsf{BProfit}(c)\\=&\int_{\mu(c)}^{1}(s-\mu^{-1}(s))\d F(s)\\
=&\int\int_{\substack{\mu(c)\leq s\leq 1 \\ \mu^{-1}(s)\leq t\leq s}}1\d t\d F(s)\\
=&\int\int_{\substack{c\leq t\leq \mu(c)\\ \mu(c)\leq s\leq \mu(t)}}1\d t\d F(s)+\int\int_{\substack{\mu(c)<t<1\\ t\leq s\leq \mu(t)}}1\d t\d F(s)\\
=&\int_{c}^{\mu(c)}\Big(F(\mu(t))-F(\mu(c))\Big)\d t+
\int_{\mu(c)}^{1}\Big(F(\mu(t))-F(t)\Big)\d t\\
=&\int_{c}^{\mu(c)}\Big(F(\mu(t))-F(t)\Big)\d t-
\int_{c}^{\mu(c)}\Big(F(\mu(c))-F(t)\Big)\d t+
\int_{\mu(c)}^{1}\Big(F(\mu(t))-F(t)\Big)\d t\\
=&\int_{c}^{1}\Big(F(\mu(t))-F(t)\Big)\d t-\int_{c}^{\mu(c)}\Big(F(\mu(c))-F(t)\Big)\d t\\
=&\int_{c}^{1}(1-\lambda)\Big(1-F(t)\Big)\d t-\int_{c}^{\mu(c)}\Big(F(\mu(c))-F(t)\Big)\d t\\
=&(1-\lambda)\int_{c}^{1}\int_{t}^{1}1\d F(v)\d t-\int_{c}^{\mu(c)}\int_{t}^{\mu(c)}1\d F(v)\d t\\
=&(1-\lambda)\int_{c}^{1}\int_{c}^{v}1\d t\d F(v)-\int_{c}^{\mu(c)}\int_{c}^{v}1\d t\d F(v)\\
=&(1-\lambda)\int_{c}^{1}(v-c)\d F(v)-\int_{c}^{\mu(c)}(v-c)\d F(v)\\
=&(1-\lambda)\cdot\mathsf{FB}(c)-\int_{c}^{\mu(c)}(v-c)\d F(v).\qedhere
\end{align*}
\end{proof}
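As a quick numerical sanity check (not part of the proof), one can verify the identity of the preceding lemma for the uniform distribution $F(v)=v$ on $[0,1]$, for which the quantile map is $\mu(t)=1-\lambda(1-t)$; the parameter values below are illustrative:

```python
lam, c = 0.4, 0.2  # illustrative choices of lambda and the cost c

def mu(t):      # quantile map for uniform F: 1 - F(mu(t)) = lam * (1 - F(t))
    return 1.0 - lam * (1.0 - t)

def mu_inv(s):  # inverse quantile map
    return 1.0 - (1.0 - s) / lam

def integrate(g, a, b, n=20000):
    # midpoint rule; for the uniform distribution, dF(v) = dv
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

fb_c = integrate(lambda v: v - c, c, 1.0)                   # FB(c)
bprofit_c = integrate(lambda s: s - mu_inv(s), mu(c), 1.0)  # BProfit(c)
rhs = (1.0 - lam) * fb_c - integrate(lambda v: v - c, c, mu(c))
```

Both sides agree up to discretization error, matching the lossless transformation claimed by the lemma.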
\begin{remark}
The work of \cite*{DMSW:2022} also contains a major part devoted to transforming $\mathsf{BProfit}(c)$ into a nicer form. A key step in their transformation uses summations to bound integrations, which incurs a loss in the constant. In comparison, our \Cref{lemma:transforming} involves only integrations and is completely lossless.
\end{remark}
Note that the first term on the right-hand side of the preceding lemma is already a constant fraction of $\mathsf{FB}(c)$. The next lemma shows that the loss caused by the subtracted term is under control:
\begin{lemma}\label{lemma:controlling}
Let $\mu$ be the quantile function defined by any $\lambda\in(0,1)$. Then for any $c\in[0,1]$,
$$\int_{c}^{\mu(c)}(v-c)\d F(v)\leq \ln\frac{1}{\lambda}\cdot\mathsf{SProfit}(c).$$
\end{lemma}
\begin{proof}
We resort to the definition of $\mathsf{SProfit}(c)$:
\begin{align*}
\int_{c}^{\mu(c)}(v-c)\d F(v)
&\leq\int_{c}^{\mu(c)}\frac{1}{1-F(v)}\cdot\max_{p}(p-c)(1-F(p))\d F(v)\\
&=\mathsf{SProfit}(c)\cdot\int_{c}^{\mu(c)}\frac{\d F(v)}{1-F(v)}\\
&=\mathsf{SProfit}(c)\cdot\ln\left(\frac{1-F(c)}{1-F(\mu(c))}\right)\\
&=\ln\frac{1}{\lambda}\cdot\mathsf{SProfit}(c)\qedhere
\end{align*}
\end{proof}
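For instance (an illustration with the uniform distribution, not part of the proof), with $F(v)=v$ we have $\mu(c)=1-\lambda(1-c)$ and $\mathsf{SProfit}(c)=((1-c)/2)^2$, and the claimed bound can be checked directly:

```python
import math

lam, c = 0.3, 0.1                      # illustrative parameters
mu_c = 1.0 - lam * (1.0 - c)           # mu(c) for the uniform distribution
lhs = (mu_c - c) ** 2 / 2.0            # integral of (v - c) dv over [c, mu(c)]
sprofit_c = ((1.0 - c) / 2.0) ** 2     # max_p (p - c)(1 - p), attained at p = (1 + c)/2
bound = math.log(1.0 / lam) * sprofit_c
```

Here the left-hand side is $(1-\lambda)^2(1-c)^2/2$ and the bound is $\ln(1/\lambda)\cdot(1-c)^2/4$, so the inequality reduces to $2(1-\lambda)^2\leq\ln(1/\lambda)$, which holds for all $\lambda\in(0,1)$.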
Now we can prove the theorem by combining the preceding three lemmas.
\begin{theorem}
$\mathsf{FB}\leq 3.15\cdot\mathsf{RandOff}$.
\end{theorem}
\begin{proof}
Let $\mu$ be the quantile function defined by any $\lambda\in(0,1)$. We have
\begin{align*}
\mathsf{FB}(c)=&\frac{1}{1-\lambda}\left((1-\lambda)\cdot\mathsf{FB}(c)-\int_{c}^{\mu(c)}(v-c)\d F(v)\right)+\frac{1}{1-\lambda}\int_{c}^{\mu(c)}(v-c)\d F(v)\\
\leq & \frac{1}{1-\lambda}\mathsf{BProfit}(c)+ \frac{1}{1-\lambda}\ln\frac{1}{\lambda}\cdot\mathsf{SProfit}(c)\quad(\text{By \Cref{lemma:transforming} and \Cref{lemma:controlling}}).
\end{align*}
Therefore,
\begin{align*}
\mathsf{FB}&=\int_{0}^{1}\mathsf{FB}(c)\d G(c)\\
&\leq \frac{1}{1-\lambda}\int_{0}^{1}\mathsf{BProfit}(c)\d G(c)+\frac{1}{1-\lambda}\ln\frac{1}{\lambda}\int_{0}^{1}\mathsf{SProfit}(c)\d G(c)\\
&\leq \frac{1}{1-\lambda}\mathsf{BProfit}+\frac{1}{1-\lambda}\ln\frac{1}{\lambda}\cdot\mathsf{SProfit}\\
&\leq \frac{1}{1-\lambda}\mathsf{BuyerP}+\frac{1}{1-\lambda}\ln\frac{1}{\lambda}\cdot\mathsf{SellerP}.
\end{align*}
By the symmetry described in \Cref{rem:symmetry}, we also have
$$\mathsf{FB}\leq\frac{1}{1-\lambda}\mathsf{SellerP}+\frac{1}{1-\lambda}\ln\frac{1}{\lambda}\cdot\mathsf{BuyerP}.$$
Adding up the preceding two inequalities, we get
\begin{align*}
\mathsf{FB}&\leq\left(\frac{1}{1-\lambda}+\frac{1}{1-\lambda}\ln\frac{1}{\lambda}\right)\left(\frac{1}{2}\mathsf{SellerP}+\frac{1}{2}\mathsf{BuyerP}\right)\\
&=\left(\frac{1}{1-\lambda}+\frac{1}{1-\lambda}\ln\frac{1}{\lambda}\right)\mathsf{RandOff}.
\end{align*}
Note that the above holds for all $\lambda\in(0,1)$. Thus the conclusion follows by calculating
\[\min_{0<\lambda<1}\left(\frac{1}{1-\lambda}+\frac{1}{1-\lambda}\ln\frac{1}{\lambda}\right)\approx 3.1462.\qedhere\]
\end{proof}
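The minimization at the end of the proof can be reproduced numerically; the sketch below (a simple grid search, with names of our own choosing) recovers the constant $\approx 3.1462$:

```python
import math

def coeff(lam):
    # The factor (1/(1-lam)) + (1/(1-lam)) * ln(1/lam) from the proof.
    return (1.0 + math.log(1.0 / lam)) / (1.0 - lam)

# Grid search over lambda in (0, 1); the minimum sits near lambda ~ 0.318.
grid = [i / 100000.0 for i in range(1, 100000)]
best_lam = min(grid, key=coeff)
best_val = coeff(best_lam)
```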
\section{Approximation Ratio under the MHR Condition}
\label{sec:proof2}
In \cref{sec:proof1}, we used the quantities $\mathsf{SProfit}$ and $\mathsf{BProfit}$ to lower-bound $\mathsf{SellerP}$ and $\mathsf{BuyerP}$, a (painful) compromise made in the face of the difficulty of analyzing $\mathsf{SellerP}$ and $\mathsf{BuyerP}$ directly. In this section, we will see that imposing a restriction on the distribution of the buyer's value significantly reduces the difficulty of the analysis.
\subsection{Preliminaries}
We state the definitions of the \textit{hazard rate} and the \textit{virtual value function}, which have been commonly studied in auction theory since the work of \cite{Mye:1981}.
\begin{definition}\label{def:MHR}
A distribution on $[0,1]$ with CDF $F$ and continuous and positive density function $f$ is said to have the \textit{monotone hazard rate} (MHR) property if the \textit{hazard rate}
$$h(x)=\frac{f(x)}{1-F(x)}$$
is a monotone non-decreasing function of $x$.
\end{definition}
\begin{definition}
Define the \textit{virtual value function} of the buyer to be
$$\varphi(x)=x-\frac{1-F(x)}{f(x)}.$$
\end{definition}
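As a concrete example (our own illustration, not from the text), the uniform distribution on $[0,1]$ has $F(x)=x$ and $f(x)=1$, so the hazard rate is $h(x)=1/(1-x)$, which is non-decreasing (hence MHR holds), and the virtual value is $\varphi(x)=2x-1$:

```python
def F(x): return x                  # uniform CDF on [0, 1]
def f(x): return 1.0                # uniform density

def hazard(x):
    return f(x) / (1.0 - F(x))      # h(x) = 1 / (1 - x), non-decreasing => MHR

def virtual_value(x):
    return x - (1.0 - F(x)) / f(x)  # phi(x) = 2x - 1, strictly increasing

xs = [i / 100.0 for i in range(100)]
mhr_holds = all(hazard(a) <= hazard(b) for a, b in zip(xs, xs[1:]))
```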
It can be seen from this definition that the MHR property of $F$ implies that $\varphi$ is strictly increasing, and hence its inverse function $\varphi^{-1}$ exists on $[\varphi(0),1]\supset [0,1]$. What greatly reduces the difficulty of analysis is the following fact:
\begin{proposition}\label{prop:observe}
If the distribution of the buyer's value satisfies the MHR property, then the price $p_{c}$ that the seller would set given his cost $c$ is exactly $\varphi^{-1}(c)$.
\end{proposition}
\begin{proof}
This is immediate from Myerson's auction theory (\cite{Mye:1981}). Here we give a short explanation, for the sake of completeness. Given $c\in [0,1]$, we have for each $p\in[c,1]$
$$
\frac{\d}{\d p}(p-c)(1-F(p))
=(1-F(p))-(p-c)f(p)
= f(p)(c-\varphi(p)).
$$
Since $\varphi$ is strictly increasing, the function
$p\mapsto (p-c)(1-F(p)) $
attains its unique maximum at $p^{*}=\varphi^{-1}(c)$.
\end{proof}
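Continuing the uniform example ($F(x)=x$, so $\varphi^{-1}(c)=(1+c)/2$), a grid search confirms that the profit-maximizing price matches $\varphi^{-1}(c)$; the setup below is purely illustrative:

```python
c = 0.4                                      # illustrative seller cost

def seller_profit(p):
    return (p - c) * (1.0 - p)               # (p - c)(1 - F(p)) for uniform F

phi_inv_c = (1.0 + c) / 2.0                  # phi^{-1}(c) when phi(x) = 2x - 1
grid = [c + (1.0 - c) * i / 10000.0 for i in range(10001)]
p_star = max(grid, key=seller_profit)
```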
Let $\mathcal{MHR}$ be the collection of all MHR distributions on $[0,1]$. Our goal in this section is to show that
$$\inf_{\substack{F\in\mathcal{MHR}\\G}}\frac{\mathsf{SellerP}}{\mathsf{FB}}=\frac{1}{e-1}.$$
In \Cref{subsec:lower-bound}, we will prove that when $F\in\mathcal{MHR}$, the inequality $\mathsf{FB}\leq(e-1)\cdot\mathsf{SellerP}$ holds. Then, in \Cref{subsec:upper-bound} we will prove that the constant $(e-1)$ is optimal in the above inequality.
\begin{remark}
We can assume that the distribution defined by $G$ is supported on $[0,1]$ as well. Indeed, if we ``truncate'' $G$ into $G_{\text{truncate}}(x)=\begin{cases}
0 &\text{if } x<0\\
G(x) &\text{if } 0\leq x<1\\
1 &\text{if } x\geq 1
\end{cases}$, the ratio $\mathsf{SellerP}/\mathsf{FB}$ will not increase. Since we are concerned with the infimum of this ratio, this truncation is without loss of generality.
\end{remark}
\subsection{Proof of Lower Bound}
\label{subsec:lower-bound}
In light of \Cref{prop:observe}, we can define
$$\mathsf{SellerP}(c)=\int_{\varphi^{-1}(c)}^{1}(v-c)\d F(v),$$
and from the definition of $\mathsf{SellerP}$ we have
$$\mathsf{SellerP}=\int_{0}^{1}\mathsf{SellerP}(c)\d G(c).$$
Since we also have
$$\mathsf{FB}=\int_{0}^{1}\mathsf{FB}(c)\d G(c),$$
it suffices to show that $\mathsf{FB}(c)\leq (e-1)\cdot\mathsf{SellerP}(c)$ for each $c\in[0,1]$. Integrating by parts, we have
\begin{equation}\label{eq:1}
\begin{split}
\mathsf{FB}(c)&=\int_{c}^{1}(v-c)\d F(v)=(1-c)-\int_{c}^{1}F(v)\d (v-c)=\int_{c}^{1}(1-F(v))\d v\\
&=\int_{c}^{\varphi^{-1}(c)}(1-F(v))\d v+\int_{\varphi^{-1}(c)}^{1}(1-F(v))\d v
\end{split}
\end{equation}
and
\begin{equation}\label{eq:2}
\begin{split}
\mathsf{SellerP}(c)&=\int_{\varphi^{-1}(c)}^{1}(v-c)\d F(v)\\
&=(1-c)-\Big(\varphi^{-1}(c)-c\Big)F\left(\varphi^{-1}(c)\right)-\int_{\varphi^{-1}(c)}^{1}F(v)\d (v-c)\\
&=\int_{c}^{1}1\d v-\int_{c}^{\varphi^{-1}(c)}F\left(\varphi^{-1}(c)\right)\d v-\int_{\varphi^{-1}(c)}^{1}F(v)\d v\\
&=\int_{c}^{\varphi^{-1}(c)}\Big(1-F\left(\varphi^{-1}(c)\right)\Big)\d v+\int_{\varphi^{-1}(c)}^{1}\left(1-F(v)\right)\d v.
\end{split}
\end{equation}
Note that the right-hand sides of \cref{eq:1} and \cref{eq:2} already have their second terms in common. The next lemma relates their first terms to each other as well.
\begin{lemma}\label{lemma:relates}
When $0\leq c\leq v\leq \varphi^{-1}(c),$ we have
$$\frac{1-F(v)}{1-F\left(\varphi^{-1}(c)\right)}\leq\exp\left(\frac{\varphi^{-1}(c)-v}{\varphi^{-1}(c)-c}\right).$$
\end{lemma}
\begin{proof}
By definition of the function $\varphi$,
$$c=\varphi\left(\varphi^{-1}(c)\right)=\varphi^{-1}(c)-\frac{1}{h\left(\varphi^{-1}(c)\right)}.$$
Hence we get a nice expression for the hazard rate at $\varphi^{-1}(c)$:
$$h\left(\varphi^{-1}(c)\right)=\frac{1}{\varphi^{-1}(c)-c}.$$
Define the function $H(x)=-\ln (1-F(x))$ on $[0,1)$ (known as the \textit{cumulative hazard function}). Then for each $x\in[c,\varphi^{-1}(c)]$,
$$H'(x)=\frac{f(x)}{1-F(x)}=h(x)\leq h\left(\varphi^{-1}(c)\right)=\frac{1}{\varphi^{-1}(c)-c}.$$
Integrating both sides of the above inequality with respect to $x$ from $v$ to $\varphi^{-1}(c)$, we get
$$H\left(\varphi^{-1}(c)\right)-H(v)\leq \frac{\varphi^{-1}(c)-v}{\varphi^{-1}(c)-c},$$
or
\[\frac{1-F(v)}{1-F\left(\varphi^{-1}(c)\right)}\leq\exp\left(\frac{\varphi^{-1}(c)-v}{\varphi^{-1}(c)-c}\right).\qedhere\]
\end{proof}
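Again for the uniform distribution (where $\varphi^{-1}(c)=(1+c)/2$), the inequality of the preceding lemma can be checked on a grid; note that at $v=\varphi^{-1}(c)$ both sides equal $1$. This is only an illustration of the lemma, not part of its proof:

```python
import math

c = 0.25                             # illustrative cost
q = (1.0 + c) / 2.0                  # phi^{-1}(c) for the uniform distribution

def lhs(v):
    return (1.0 - v) / (1.0 - q)     # (1 - F(v)) / (1 - F(phi^{-1}(c)))

def rhs(v):
    return math.exp((q - v) / (q - c))

vs = [c + (q - c) * i / 1000.0 for i in range(1001)]
holds = all(lhs(v) <= rhs(v) + 1e-12 for v in vs)
```

In this case the inequality reduces to $1+u\leq e^{u}$ with $u=(\varphi^{-1}(c)-v)/(\varphi^{-1}(c)-c)$, which holds for all $u$.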
\begin{remark}
The proof in \cite{BM:2016} also contains a step that is equivalent to the special case $v=c$ of the preceding lemma, but they apply it in a different way to a different framework of analysis.
\end{remark}
\begin{theorem}\label{thm:e-1}
When $F\in\mathcal{MHR}$, the inequality $\mathsf{FB}\leq(e-1)\cdot\mathsf{SellerP}$ holds.
\end{theorem}
\begin{proof}
By \Cref{lemma:relates}, for each $0\leq c<1$ and $c\leq v\leq\varphi^{-1}(c)$,
$$(1-F(v))\leq \exp\left(\frac{\varphi^{-1}(c)-v}{\varphi^{-1}(c)-c}\right)\cdot \Big(1-F\left(\varphi^{-1}(c)\right)\Big).$$
Integrating the above inequality with respect to $v$ from $c$ to $\varphi^{-1}(c)$, we get
\begin{align*}
&\int_{c}^{\varphi^{-1}(c)}(1-F(v))\d v\\\leq& \Big(1-F\left(\varphi^{-1}(c)\right)\Big)\cdot \int_{c}^{\varphi^{-1}(c)}\exp\left(\frac{\varphi^{-1}(c)-v}{\varphi^{-1}(c)-c}\right) \d v\\
=&\Big(1-F\left(\varphi^{-1}(c)\right)\Big)\cdot\Big(\varphi^{-1}(c)-c\Big)\cdot\int_{0}^{1}\exp(x)\d x \quad\left(\text{substituting }x=\frac{\varphi^{-1}(c)-v}{\varphi^{-1}(c)-c}\right)\\
=&(e-1)\cdot \int_{c}^{\varphi^{-1}(c)}\Big(1-F\left(\varphi^{-1}(c)\right)\Big)\d v.
\end{align*}
Therefore, combining the above with \cref{eq:1} and \cref{eq:2}, we have for each $c\in[0,1]$
\begin{align*}
\mathsf{FB}(c)&=\int_{c}^{\varphi^{-1}(c)}(1-F(v))\d v+\int_{\varphi^{-1}(c)}^{1}(1-F(v))\d v\\
&\leq (e-1)\int_{c}^{\varphi^{-1}(c)}\Big(1-F\left(\varphi^{-1}(c)\right)\Big)\d v+\int_{\varphi^{-1}(c)}^{1}\left(1-F(v)\right)\d v\\
&\leq (e-1)\int_{c}^{\varphi^{-1}(c)}\Big(1-F\left(\varphi^{-1}(c)\right)\Big)\d v+(e-1)\int_{\varphi^{-1}(c)}^{1}\left(1-F(v)\right)\d v\\
&=(e-1)\cdot\mathsf{SellerP}(c).
\end{align*}
It follows that $\mathsf{FB}\leq(e-1)\cdot\mathsf{SellerP}$.
\end{proof}
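For the uniform distribution both sides have closed forms, $\mathsf{FB}(c)=(1-c)^2/2$ and $\mathsf{SellerP}(c)=3(1-c)^2/8$, so the ratio is $4/3\leq e-1$ for every $c$; a quick check (our own illustration):

```python
import math

def fb(c):
    return (1.0 - c) ** 2 / 2.0          # FB(c) for uniform F

def sellerp(c):
    return 3.0 * (1.0 - c) ** 2 / 8.0    # SellerP(c): integral over [(1+c)/2, 1]

cs = [i / 100.0 for i in range(100)]
worst_ratio = max(fb(c) / sellerp(c) for c in cs)
```

The uniform case is thus far from tight; the next subsection constructs a distribution pushing the ratio all the way to $e-1$.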
\subsection{Proof of Upper Bound}\label{subsec:upper-bound}
\begin{theorem}
The constant $(e-1)$ in \Cref{thm:e-1} is optimal.
\end{theorem}
\begin{proof}
We need only construct examples of $F$ and $G$ for which the ratio $\mathsf{FB}/\mathsf{SellerP}$ is arbitrarily close to $(e-1)$. Let $G$ be a single point mass at $0$, and let $F$ be the piecewise function:
$$
F(x)=\begin{cases}
1-e^{-x} & \text{ if } x\leq 1-\delta \\
1-e^{-1+\delta}(1-x)\left(\frac{1-\delta}{\delta^{2}}x-\frac{1-3\delta+\delta^{2}}{\delta^{2}}\right) & \text{ if } 1-\delta\leq x\leq 1 \\
\end{cases},
$$
where $\delta>0$ is very close to 0. The function $F$ agrees with the CDF of an exponential distribution on $[0,1-\delta]$ and with a quadratic function on $[1-\delta,1]$ that smoothly connects the exponential part to the point $F(1)=1$:
\begin{center}
\begin{tikzpicture}[
declare function={
func(\x)= (\x < 0.9)*(1-exp(-\x)) +
(\x >= 0.9) * ((90*exp(-0.9)*\x-71*exp(-0.9))*(\x-1)+1)
;
}
]
\begin{axis}[
xmin = 0, xmax = 1,
ymin = 0, ymax = 1,
minor tick num = 1,
width = 0.35\textwidth,
height = 0.35\textwidth,
legend pos=north west]
\addplot[
domain = 0:1,
samples = 400,
smooth,
thick,
blue,
] {func(x)};
\addlegendentry{\(F(x)\)}
\addplot[
domain = 0:1,
samples = 400,
smooth,
dashed,
blue,
] {1-exp(-x)};
\addlegendentry{\(1-e^{-x}\)}
\end{axis}
\end{tikzpicture}
\end{center}
For $x\leq 1-\delta$, the CDF $F$ has constant hazard rate $f(x)/(1-F(x))=e^{-x}/e^{-x}=1$, while on $[1-\delta,1]$ the density $f=F'$ is monotone increasing, and hence the hazard rate of $F$ is monotone increasing on $[1-\delta,1]$ as well. Thus $F$, as defined above, satisfies the MHR property. For $x\leq 1-\delta$,
$$\varphi(x)=x-\frac{1-F(x)}{f(x)}=x-1<0,$$
and hence we must have $\varphi^{-1}(0)>1-\delta$. This implies that
$$\mathsf{SellerP}=\mathsf{SellerP}(0)=\int_{\varphi^{-1}(0)}^{1}(v-0)\d F(v)\leq \int_{1-\delta}^{1}v\d F(v)\leq \int_{1-\delta}^{1}1\d F(v)=F(1)-F(1-\delta)=e^{-1+\delta},$$
which tends to $e^{-1}$ as $\delta\rightarrow 0$. But we also have according to \cref{eq:1}
$$\mathsf{FB}=\mathsf{FB}(0)=\int_{0}^{1}(1-F(v))\d v,$$
which clearly tends to $\int_{0}^{1}e^{-x}\d x=1-e^{-1}$ when $\delta\rightarrow 0$.
This shows that
\[\inf_{\substack{F\in\mathcal{MHR}\\ G}}\frac{\mathsf{SellerP}}{\mathsf{FB}}\leq\frac{e^{-1}}{1-e^{-1}}=\frac{1}{e-1}.\qedhere\]
\end{proof}
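The construction can also be checked numerically. The sketch below (with an illustrative $\delta$) verifies that $F$ is continuous at $1-\delta$, that $F(1)=1$, and that the upper bound $e^{-1+\delta}/\mathsf{FB}$ on the ratio is already close to $1/(e-1)\approx 0.582$:

```python
import math

delta = 1e-3   # illustrative choice of the parameter delta

def F(x):
    # Piecewise CDF from the proof: exponential part, then quadratic part.
    if x <= 1.0 - delta:
        return 1.0 - math.exp(-x)
    a = (1.0 - delta) / delta ** 2
    b = (1.0 - 3.0 * delta + delta ** 2) / delta ** 2
    return 1.0 - math.exp(-1.0 + delta) * (1.0 - x) * (a * x - b)

# FB = FB(0) = integral of (1 - F(v)) dv over [0, 1], midpoint rule.
n = 100000
h = 1.0 / n
fb = sum(1.0 - F((i + 0.5) * h) for i in range(n)) * h

seller_upper = math.exp(-1.0 + delta)    # F(1) - F(1 - delta), bound on SellerP
ratio_bound = seller_upper / fb
```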
\begin{remark}
Note that the hard case given above is actually when the seller has no cost. In this case, the gains-from-trade is equal to the welfare. Since any lower bound of the gains-from-trade approximation ratio always applies to the welfare approximation ratio as well, we can combine this observation with the lower bound proved in \Cref{subsec:lower-bound} to conclude that, assuming MHR of the buyer's distribution, the welfare approximation ratio of the seller-pricing mechanism to the first-best mechanism is also equal to $(e-1)$.
\end{remark}
\begin{remark}
If we take the buyer's CDF $F$ to be the one in the preceding proof, and the seller's CDF to be $G(x)=1-F(1-x)$, it is easy to check that $\mathsf{RandOff}/\mathsf{FB}\rightarrow 1-1/e$ as the parameter $\delta\rightarrow 0$. Combining this with the lower bound in \Cref{subsec:lower-bound}, we have
$$0.582\approx\frac{1}{e-1}\leq\inf_{F,G^{\text{R}}\in\mathcal{MHR}}\frac{\mathsf{RandOff}}{\mathsf{FB}}\leq 1-\frac{1}{e}\approx 0.632,$$
where $G^{\text{R}}(x):=1-G(1-x)$. This shows that $\inf_{F,G^{\text{R}}\in\mathcal{MHR}}(\mathsf{RandOff}/\mathsf{FB})$ is strictly larger than $\inf_{F,G}(\mathsf{RandOff}/\mathsf{FB})$, which lies in $[0.317,0.495]$ (see \Cref{subsec:results}).
\end{remark}
\section*{Introduction}
There are various different notions of Chern classes for singular varieties, each
having its own interest and characteristics. Perhaps the most
important of these are the total Schwartz-MacPherson class
$c^{SM}(X)$ and the total Fulton-Johnson class $c^{FJ}(X)$. In the complex analytic context these are elements in the
homology ring $H_{2*}(X, \mathbb Z)$ and in the algebraic context these
are elements in the Chow group $A_*(X)$.
Both of these classes
$c^{SM}(X)$ and $c^{FJ}(X)$ are defined by means of an embedding of $X$ in some complex manifold $M$, but they turn out to be independent of the choice of embedding; when $X$ is non-singular these are the Poincar\'e duals of the usual Chern classes. By definition the total Milnor class of $X$ is:
\begin{equationth}\label{def Milnor class}
{\mathcal M}(X):=(-1)^{{\rm dim} X}\left(c^{FJ}(X)-c^{SM}(X) \right).
\end{equationth}
Milnor classes are a generalization of the classical Milnor number to varieties $X$ with arbitrary singular set. These have
support in the singular set ${\rm Sing}(X)$.
There is a
Milnor class in each dimension from 0 to that of ${\rm Sing}(X)$.
In particular, when the singularities of $X$ are all isolated, then there is only a $0$-degree Milnor
class which is an integer, and if $X$ further is a local complete intersection, then this integer is the sum of the local Milnor
numbers of $X$ at its singular points (by \cite{SS, Suwa}).
Milnor classes are important invariants that encode much
information about the varieties in question, see for instance \cite{Aluffi, Aluffi2, Alu-Mar, BSS, BMS, PP, Par-Pr, Sch5}.
Yet, most of the work on Milnor classes in the literature is for hypersurfaces, the
complete intersection case being much harder ({\it cf.} \cite {BLSS2, BMS-2, MSS}): that is the setting we envisage here. Our work is somehow inspired by the product formulas for the Milnor class of Ohmoto and Yokura in \cite{O-Y}.
We prove:
\begin{Thm}\label{theorem-1}
Let $M$ be an $n$-dimensional compact complex
analytic manifold and let $\{E_{1},\ldots,E_r\}$, $r\ge 1$, be
holomorphic vector bundles over $M$ of ranks $d_{i}\ge1$. For each $i=1,\ldots,r$, let $X_i$ be the $(n-d_{i})$-dimensional local
complete intersection in $M$ defined by the zeroes of a regular section
$s_{i}$ of $E_{i}$. Assume further that the $X_i$ are equipped with
Whitney stratifications ${\mathcal S}_i$ such that all the intersections amongst strata in the various $X_i$ are transversal. Set $X= X_1 \cap \ldots \cap X_r$, a local complete intersection of dimension $n -d_1 -\ldots -d_r$. Then:
\begin{itemize}
\item [(i)] $\; \; \;c^{SM}(X)= c\left( \left(
TM|_{X}\right)^{\oplus r-1} \right)^{-1} \, \cap \,\;\Big(c^{SM}(X_{1}) \cdot \ldots \cdot c^{SM}(X_{r})\Big);$
\item [(ii)] $\; \;c^{FJ}(X)= c\left( \left(
TM|_{X}\right)^{\oplus r-1} \right)^{-1} \, \cap \,\;\Big(c^{FJ}(X_{1}) \cdot \ldots \cdot c^{FJ}(X_{r})\Big);$ and therefore
\item [(iii)] ${\mathcal M}(X)=(-1)^{{\rm dim} X} \, c\left( \left(
TM|_{X}\right)^{\oplus r-1} \right)^{-1} \, \cap \,\;\Big(
c^{FJ}(X_{1}) \cdot \ldots \cdot c^{FJ}(X_{r}) - c^{SM}(X_{1}) \cdot \ldots \cdot c^{SM}(X_{r})\Big).$
\end{itemize}
\end{Thm}
The transversality condition in this Theorem can be relaxed (see section \ref{sec. main lemma}).
Similar transversality conditions were used in \cite{Sch5} to prove a refined intersection formula for the Chern-Schwartz-MacPherson classes.
The proof of Theorem \ref{theorem-1} takes most of this article.
The first step is proving Verdier-Riemann-Roch type formulae for the Schwartz-MacPherson,
the Fulton-Johnson and, therefore, for the Milnor classes of local complete intersections. In the last section of this article
we give various applications. The first is Theorem \ref{main-lemma} that
describes the Milnor class of $X$
in terms of the Milnor and the Schwartz-MacPherson classes of the $X_i$ and the Chern classes of $M$ restricted to $X$.
For instance, for $r=2$ we get the beautiful formula:
$${\mathcal M}(X)=c\left( \left( TM|_{X}\right) \right)^{-1}\cap
\Big((-1)^{n}{\mathcal
M}(X_1)\cdot {\mathcal M}(X_2)+ (-1)^{d_{1}} c^{SM}(X_1)\cdot {\mathcal
M}(X_2)+(-1)^{d_{2}}{\mathcal M}(X_1)\cdot c^{SM}(X_2)\Big).$$
For $r= 3$ we get: {\small
$${\mathcal M}(X)= c\left( \left( TM|_{X}\right)^{\oplus 2}
\right)^{-1}\cap \;\; \Big({\mathcal M}(X_1)\cdot {\mathcal M}(X_2)\cdot
{\mathcal M}(X_3)+ (-1)^{(d_{1}+d_{2})} c^{SM}(X_1)\cdot
c^{SM}(X_2)\cdot {\mathcal M}(X_3) +$$
$$+(-1)^{(d_{1}+d_{3})} c^{SM}(X_1)\cdot {\mathcal
M}(X_2)\cdot c^{SM}(X_3)+(-1)^{(d_{2}+d_{3})} {\mathcal M}(X_1)\cdot
c^{SM}(X_2)\cdot c^{SM}(X_3)+$$
$$+(-1)^{(n-d_{1})} c^{SM}(X_1)\cdot {\mathcal M}(X_2)\cdot {\mathcal
M}(X_3)+(-1)^{(n-d_{2})} {\mathcal M}(X_1)\cdot c^{SM}(X_2)\cdot {\mathcal
M}(X_3)+$$
$$+(-1)^{(n-d_{3})} {\mathcal M}(X_1)\cdot {\mathcal M}(X_2)\cdot c^{SM}(X_3)
\Big),$$} \hskip-3pt and so on.
This highlights why understanding the Milnor classes of complete intersections is {\it a priori} far more difficult than in the hypersurface case, though the formula in Theorem \ref{theorem-1} is surprisingly simple.
We then restrict the
discussion to the case where the bundles in question are all line bundles $L_i$. We get two interesting applications of Theorem \ref{main-lemma}:
\vspace{0.2cm} \noindent
{\bf i)}
A Parusi\'nski-Pragacz type formula for local
complete intersections as above (Corollary \ref{P-P}).
This expresses the
Milnor classes using only Schwartz-MacPherson classes, and it answers positively the expected description given
by Ohmoto and Yokura in \cite{O-Y} for the total Milnor class of a local
complete intersection. We notice that a different generalization of the Parusi\'nski-Pragacz formula for complete intersections has been given recently in \cite{MSS}.
\vspace{0.2cm} \noindent
{\bf ii)} A description of the total Milnor class of the local complete intersection $X$ in the vein of Aluffi's formula in \cite{Aluffi} for hypersurfaces, using Aluffi's $\mu$-classes (Corollary \ref{Aluffi}).
\vskip.2cm
This work is a refinement of our unpublished article \cite{BMS-1} (cf. also \cite {BMS-2}). We are indebted to the referee and to J\"{o}rg Sch\"{u}rmann for valuable suggestions.
We are also grateful to Nivaldo Medeiros and Marcelo Saia for fruitful
conversations.
\section{Chern classes and the diagonal embedding}
\subsection{Derived categories}
We assume some basic knowledge of derived categories, as described for instance in
\cite{Dimca}.
If $X$ is a complex analytic space then ${\mathcal D}^{b}_{c}(X)$
denotes the derived category of bounded constructible complexes of
sheaves of $\mathbb C$-vector spaces on $X$. We denote the objects of
${\mathcal D}^{b}_{c}(X)$ by something of the form $F^{\bullet}$. The
shifted complex $F^{\bullet}[l]$ is defined by
$(F^{\bullet}[l])^{k}=F^{l+k}$ and its differential is
$d^{k}_{[l]}=(-1)^{l}d^{k+l}$. The constant sheaf $\mathbb C_{X}$ on $X$
induces an object $\mathbb C_{X}^{\bullet} \in {\mathcal D}^{b}_{c}(X)$ by
letting $\mathbb C_{X}^{0}=\mathbb C_{X}$ and $\mathbb C_{X}^{k}=0$ for $k\neq 0$.
If $h:X\rightarrow \mathbb C$ is an analytic map and $F^{\bullet}\in
{\mathcal D}^{b}_{c}(X)$, then we denote the sheaf of vanishing cycles
of $F^{\bullet}$ with respect to $h$ by $\phi_{h}F^{\bullet}$.
For $F^{\bullet}\in {\mathcal D}^{b}_{c}(X)$ and $p \in X$, we denote
by ${\mathcal H}^{*}(F^{\bullet})_{p}$ the stalk cohomology of
$F^{\bullet}$ at $p$, and by $\chi(F^{\bullet})_{p}$ its Euler
characteristic. That is,
$$\chi(F^{\bullet})_{p}=\sum_{k}(-1)^{k}{\rm dim}_{\mathbb C}{\mathcal
H}^{k}(F^{\bullet})_{p}.$$
We also denote by $\chi(X,F^{\bullet})$ the Euler characteristic of
$X$ with coefficients in $F^{\bullet}$, {\it i.e.},
$$\chi(X,F^{\bullet})=\sum_{k}(-1)^{k}{\rm dim}_{\mathbb C}\;\mathbb{H}^{k}(X,F^{\bullet}),$$
where $\mathbb{H}^{*}(X,F^{\bullet})$ denotes the hypercohomology
groups of $X$ with coefficients in $F^{\bullet}$.
When $F^{\bullet}\in {\mathcal D}^{b}_{c}(X)$ is ${\mathcal
S}$-constructible, where ${\mathcal S}$ is a Whitney stratification of
$X$, we denote it by $F^{\bullet}\in {\mathcal D}^{b}_{{\mathcal S}}(X)$. Setting
$\chi(F^{\bullet}_{S})=\chi(F^{\bullet})_{p}$ for an arbitrary point
$p \in S$,
we have
\cite[Theorem 4.1.22]{Dimca}:
\begin{equation}\label{EulerCharact}\chi(X,F^{\bullet})=\sum_{S\in {\mathcal
S}}\chi(F^{\bullet}_{S})\chi(S)\,.\end{equation}
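For instance (a standard toy example, not taken from the references above), let $X=\mathbb{P}^1$ with the Whitney stratification ${\mathcal S}=\{\{p\},\,X\setminus\{p\}\}$ for a point $p$, and let $F^{\bullet}=\mathbb C_X^{\bullet}$. Then $\chi(F^{\bullet}_{S})=1$ for both strata, and formula (\ref{EulerCharact}) gives
$$\chi(X,\mathbb C_X^{\bullet})=1\cdot \chi(\{p\})+1\cdot \chi(X\setminus\{p\})=1\cdot 1+1\cdot 1=2,$$
which is indeed $\chi(\mathbb{P}^1)$.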
For a subvariety $X$ in a complex manifold $M$ we
denote its conormal variety by
$T^{*}_{X}M$. That is, $$T^{*}_{X}M:={\rm closure}\;\{ (x, \theta) \in T^{*}M\;|\; x
\in X_{{\rm reg}}\;{\rm and}\; \theta_{|_{T_{x} X_{{\rm
reg}}}}\equiv 0\}\,,$$ where $T^{*}M$ is the cotangent bundle and
$X_{{\rm reg}}$ is the regular part.
One has (see
\cite{GM}):
\begin{definition}
Let $ X $ be an analytic subvariety of a complex manifold $ M $, $\{
S_{\alpha} \}$ a Whitney stratification of $M$ adapted to $ X $ and
$x\in S_\alpha$ a point in $X$. Consider $g:(M,x)\rightarrow (\mathbb C,0)$
a germ of holomorphic function such that $d_{x}g $ is a {\it
non-degenerate covector} at $x$ with respect to the fixed
stratification. That is, $d_{x}g \in T^{*}_{S_\alpha}M$ and $d_{x}g
\not\in T^{*}_{S^{'}}M$ for all strata $S^{'} \neq S_\alpha$.
Let $N$ be a germ of a closed complex submanifold of $M$ which is
transversal to $S_\alpha$, with $N \cap S_\alpha=\{ x\}$. Define the
{\it complex link } $l_{S_\alpha}$ of $S_\alpha$ by:
\begin{center}$l_{S_\alpha}:= X\cap N \cap
B_{\delta}(x)\cap \{g=w\}\quad{\rm for}\;0<|w|<\!\!< \delta<\!\!<
1.$\end{center}
The {\it normal Morse datum} and the {\it normal Morse index} of the stratum $S_\alpha$
are, respectively:
$$NMD(S_\alpha):=(X\cap N \cap B_{\delta}(x),l_{S_\alpha}) \quad \hbox{and} \quad
\eta(S_\alpha,F^{\bullet}):=\chi(NMD(S_\alpha),F^{\bullet})\,,$$
where the right-hand-side means the Euler characteristic of the
relative hypercohomology. \end{definition}
In fact, the slice $N$ normal to the stratum $S_\alpha$ at $x$ is transversal to all other strata that contain $x$ in their closure, by Whitney regularity. Therefore the Whitney stratification on $X$ induces a Whitney stratification on $NMD(S_\alpha)$. Hence the sheaf $F^{\bullet}$ restricted to $NMD(S_\alpha)$ is constructible and therefore the relative hypercohomology is well-defined.
By \cite[Theorem 2.3]{GM} we
get that $\eta(S_\alpha,F^{\bullet})$ does not depend on
the choices of $x\in S_\alpha,\; g$ and $N$. By \cite[p. 283]{Sch4}, the normal Morse index $\eta(S_\alpha,F^{\bullet})$ can be computed in terms of sheaves of vanishing cycles as
\begin{equationth}\label{Dimca}
\eta(S_\alpha,F^{\bullet})=-\chi(\phi_{g|_{N}}(F^{\bullet}|_{N})).
\end{equationth}
By \cite[Remark 2.4.5(ii)]{Dimca} this can also be expressed as:
\begin{equation}\label{RelativeEuler}\eta(S_\alpha,F^{\bullet})=\chi(X\cap N \cap
B_{\delta}(x),F^{\bullet})-\chi(l_{S_\alpha},F^{\bullet})\,.\end{equation}
\subsection{Chern classes for singular varieties}
From now on, let $M$ be an $n$-dimensional compact complex
analytic manifold and let $E$ be a holomorphic vector bundle over
$M$ of rank $d$. Let $X$ be the zero scheme of a regular holomorphic
section of $E$, which is an $(n-d)$-dimensional local complete
intersection. Consider {\it {the virtual bundle}} $\tau(X;M):= T
M|_{_{X}}-E|_{_{X}}$, where $T M$ denotes the tangent bundle of $
M$ and the difference is in the K-theory of $X$. The element $\tau(X;M)$ is actually independent of $M$ (see \cite[Appendix B.7.6.]{Ful}) and is called {\it {the virtual tangent bundle of}} $X$. {\it The Fulton-Johnson
homology class of} $X$ is defined by the Chern class of $\tau(X;M)$
via the Poincar\'e morphism, that is (cf. \cite{Ful}):
$$c^{FJ}(X;M)=c(\tau(X;M))\cap
[X]:=c^{}(T M|_X) c^{}(E|_X)^{-1}\cap [X].$$
For simplicity we denote the virtual bundle and the Fulton-Johnson classes simply by $\tau(X)$ and $c^{FJ}(X)$.
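As a simple illustration (a standard computation, not carried out in the text), take $M=\mathbb{P}^2$, $E={\mathcal O}(d)$ and $X$ a (possibly singular) curve of degree $d$, and write $h$ for the hyperplane class. Then
$$c^{FJ}(X)=\frac{(1+h)^{3}}{1+dh}\cap [X]=[X]+(3-d)\,h\cap[X]=[X]+d(3-d)\,[{\rm pt}].$$
When $X$ is smooth of genus $g=(d-1)(d-2)/2$, the degree-zero term $d(3-d)=2-2g$ recovers $\chi(X)$, as expected since $c^{FJ}(X)$ then agrees with the Poincar\'e dual of the usual Chern class.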
Consider now the Nash blow up $\tilde{X} \stackrel{\nu}{\rightarrow}
X$ of $X$, its Nash bundle ${\tilde T} \stackrel{\pi}{\rightarrow}
\tilde{ X} $ and the Chern classes of $\tilde{T}$, $c^{j}(\tilde{T})
\in H^{2j}(\tilde{ X})$, $j=1,\ldots,n-d$. {\it The Mather classes} of $X$ are:
$$c^{Ma}_{k}( X):=\nu_{*}(c^{n-d-k}(\tilde{T})\cap [\tilde{ X}])\in H_{2k}( X),\;\;k=0,\ldots,n-d \,.$$
We equip $X$ with a Whitney stratification $X_{\alpha}$.
The MacPherson classes are obtained from the Mather classes by considering
appropriate ``weights" for each stratum, determined by the local
Euler obstruction ${\rm Eu}_{{ X_{\alpha}}}(x)$. This is an integer associated in \cite{MacP}
to each point $x \in X_{\alpha}$. It is proved in \cite{MacP} that
there exists a unique set of integers $b_{\alpha}$,
for which the
equation $\sum b_{\alpha} {\rm Eu}_{\bar{ X}_{\alpha}}(x)=1$ is
satisfied for all points $x \in X$. Here, $\bar{ X}_{\alpha}$ denotes the closure of the stratum, which is itself analytic; the sum runs over all
strata $ X_{\alpha}$ containing $x$ in their closure.
Then {\it the MacPherson class of degree} $k
$ is defined by
$$c^{M}_{k}( X):=\sum
b_{\alpha}\;i_{*}(c^{Ma}_{k}(\bar{ X}_{\alpha})),$$
where
$i:\bar{ X}_{\alpha}\hookrightarrow X$ is the inclusion map.
We remark that by \cite{BS}, the MacPherson classes coincide,
up to Alexander duality, with the classes defined by
M.-H. Schwartz in \cite{Sch}. Thus, following the modern
literature (see for instance \cite{Par-Pr, BLSS2, BSS}), these are called
Schwartz-MacPherson classes of $X$ and denoted
by $c^{SM}_{k}(X)$.
\begin{definition}
The total Milnor class of $X$ is (see \cite{BSS,Par-Pr}):
$${\mathcal M}(X):=(-1)^{n-d}\left(c^{FJ}(X)-c^{SM}(X) \right).$$
\end{definition}
\subsection{Milnor classes and the diagonal embedding}
Given a manifold $M$ as before, set $M^{(r)}:=M\times \ldots \times M$, $r$ times. We let $E$ be a holomorphic vector
bundle over $M^{(r)}$ of rank $d$. Consider $\Delta: M\rightarrow
M^{(r)}$ the diagonal morphism, which is a regular embedding of
codimension $nr-n$. Let $t$ be a regular holomorphic section of $E$.
The set of the zeros of $t$ is a closed subvariety $Z(t)$ of
$M^{(r)}$ of dimension $nr-d$. Let $Z(\Delta^{*}(t))$ denote the zero
set of the pull-back section $\Delta^{*}(t)$.
Following \cite[Chapter 6]{Ful} we have that
$\Delta$ induces the refined Gysin homomorphism
$$\Delta^{!}:H_{2k}(Z(t))\rightarrow H_{2(k-nr+n)}(Z(\Delta^{*}(t))).$$
The refined intersection product is defined by
${\alpha}_1 \cdot \ldots \cdot {\alpha}_r:=\Delta^{!}({\alpha}_1\times\ldots \times
{\alpha}_r)$.
For the usual homology this is defined by duality between
homology and cohomology:
$$\Delta^{!}\! = \Delta^{*}\! : H_{2k}(Z(t);\mathbb Z) \simeq H^{2(nr-k)} ({Z(t)};\mathbb Z)\rightarrow H^{2(nr-k)}({Z(\Delta^{*}(t))};\mathbb Z) \simeq H_{2(k-nr+n)}(Z(\Delta^{*}(t));\mathbb Z).$$
\begin{remark}\label{Fu}
\begin{enumerate}
\item In \cite[Proposition 14.1, (c) and (d)(ii)]{Ful} it is proved that if $f:X'\rightarrow X$ is a local complete intersection morphism between purely dimensional schemes, $E$ is a vector bundle on $X,$ $s$ is a regular section of $E$ and $s'=f^*s$ is the induced section on $f^*E,$ then $f^![Z(s)]=[Z(s')],$ where $f^!$ is the refined Gysin homomorphism induced by $f.$
\vspace{0.3cm}\item In \cite[Proposition 6.3]{Ful} it is proved that if $\iota:X\rightarrow Y$ is a regular embedding and $F$ is a vector bundle on $Y,$ then $\iota^!(c_m(F)\cap \alpha)=c_m(\iota^*F)\cap \iota^! \alpha,$ for all $\alpha\in H_{2k}(Y,\mathbb Z).$ Applying this result to the diagonal morphism $\Delta: M\rightarrow M^{(r)},$ which is a regular embedding, we have that for any vector bundle $F$ on $M^{(r)}$ holds that $\Delta^!(c_m(F) \cap \alpha)=c_m(\Delta^*F) \cap\Delta^!(\alpha)$ for all $\alpha \in H_{2k}(M^{(r)};\mathbb Z)$
and $m\geq 0.$
\end{enumerate}
\end{remark}
These two remarks are used in the following Verdier-Riemann-Roch type theorem for the Fulton-Johnson classes:
\begin{proposition}\label{t:1} The refined Gysin morphism satisfies: $$\Delta^{!}\left( \;c^{FJ}(Z(t))\;\right)=
c\left( \left( TM|_{Z(\Delta^{*}t)}\right)^{\oplus r-1} \right)\cap c^{FJ}(Z(\Delta^{*}t))\,.$$\end{proposition}
\begin{proof}
By definition of the Fulton-Johnson class we have $$\Delta^{!}\;c^{FJ}(Z(t))=\Delta^{!}\left(
c\left( TM^{(r)}|_{Z(t)}\right) c\left(
{E}|_{Z(t)}\right)^{-1} \cap [Z(t)] \right).$$
Applying Remark \ref{Fu} (2) to the diagonal morphism $\Delta: M\rightarrow M^{(r)},$ which is a regular embedding, and using the virtual bundle we have that
$$\Delta^{!}\;c^{FJ}(Z(t))=c\left(\Delta^{*}\left(
TM^{(r)}|_{Z(t)}\right)\right) c\left(
\Delta^{*}\left({E}|_{Z(t)}\right)\right)^{-1} \cap
\Delta^{!}\;[Z(t)].$$ Note that
$\Delta^{*}\left({E}|_{Z(t)}\right)=\Delta^{*}{E}|_{Z(\Delta^{*}t)}$
and, applying Remark \ref{Fu} (1) to the diagonal morphism $\Delta: M\rightarrow M^{(r)}$, which is a local complete intersection morphism, and to the regular section $t$ of $E$ we obtain that $\Delta^![Z(t)]=[Z(\Delta^*(t))].$
Moreover, since $\Delta^{*} TM^{(r)}= TM\oplus
\ldots \oplus TM$, we have
$$c\left(\Delta^{*}\left(
TM^{(r)}|_{Z(t)}\right)\right)=c\left(\left(
TM|_{Z(\Delta^{*}t)}\right)^{\oplus\;r}\right) \,$$ and the result
follows.
\end{proof}
Let $F(M)$ be the free abelian group of constructible functions on
$M$ with respect to a Whitney stratification $\{ S_{\alpha} \}$. It is proved in \cite{MacP} that every element $\xi$ in $F(M)$ can be written uniquely in the form:
$$\xi = \sum n_W {\rm Eu}_W \,,$$
for some appropriate subvarieties $W$ and integers $n_W$.
Let $L( M)$ be the free abelian group of all cycles generated by
the conormal spaces $T^{*}_{W} M$, where $W$ varies over all
subvarieties of $ M$. Given $\xi \in F(M)$
define
an element $Ch(\xi)$ in $L(M)$ by:
\begin{equationth}\label{ch}
Ch(\xi):=\sum_{W} (-1)^{\dim W} n_W\cdot T^{*}_{W} M \,.
\end{equationth}
This
induces an isomorphism $Ch : F(M) \rightarrow L(M)$.
Define the map $cn:Z( M)\rightarrow L( M)$ by $cn(X):=T^{*}_{X}
M$. Clearly, this is also an isomorphism.
We know from \cite[Section 3]{BMS} that we have a commutative
diagram:
\begin{equation}\label{diagrama}
\xymatrix { &Z(M) \ar[r]^{\check{E}u} \ar[d]^{cn} & F(M)\ar[d]^{\mbox{id}}\\
& L(M) \ar[r]^{Ch} & F(M) }\end{equation}
The commutativity of this diagram amounts to
saying:
$$\beta=\sum_{\alpha}\eta(S_{\alpha},\beta) \cdot
{E}u_{S_{\alpha}},$$ for any function $\beta:M\rightarrow
\mathbb Z$ which is constructible for the given Whitney stratification, where
$\eta(S_\alpha,\beta)=\eta(S_\alpha,F^{\bullet})$, with $F^{\bullet}$ being the
complex of sheaves such that $\chi( F^{\bullet})_{p}=\beta(p)$.
Substituting in equation (\ref{ch}) we get:
\begin{equationth}\label{nueva}
Ch(\xi):=\sum_{\alpha} (-1)^{\dim S_\alpha}\eta(S_\alpha,\xi)\cdot T^{*}_{\overline{S}_\alpha} M \,.
\end{equationth}
\vspace{0.2cm}
Now consider the projectivized cotangent bundles $\mathbb{P}(T^{*}M)$ and $\mathbb{P}(T^{*}(M^{(r)}))$; we denote by $\mathbb{P}((T^{*}M)^{\oplus r})\,$
the bundle $\mathbb{P}(T^{*}M\oplus \ldots \oplus T^{*}M)$.
Notice that one has a fibre square diagram (see \cite[p.~428]{Ful}):
\begin{equation}\label{fiber square}
\xymatrix { \mathbb{P}((T^{*}M)^{\oplus r})
\ar[r]^{\delta} \ar[d]_{p} & \mathbb{P}(T^{*}(M^{(r)})) \ar[d]^{\pi^{(r)}} \\ M \ar[r]^{\Delta} & M^{(r)} }
\end{equation}
where $\pi^{(r)}$ is the natural proper map. Let $i: \mathbb{P}(T^{*}M) \to \mathbb{P}((T^{*}M)^{\oplus r}) $ be the morphism
induced by the diagonal embedding
$T^{*}M \to T^{*}M \oplus \ldots \oplus T^{*}M$.
\begin{proposition} \label{transversal} Let $\beta$ be a constructible function on $M^{(r)}$ with respect to a Whitney stratification $\{{\mathcal T}_\gamma\}$, which we assume transversal to
$\Delta(M)$. Then:
$$\delta^{!}\;[\mathbb{P}(Ch(\beta))]=(-1)^{nr-n}\; i_{*}\;[\mathbb{P}(Ch(\Delta^{*}(\beta)))].$$
\end{proposition}
\begin{proof} Since the stratification $\{{\mathcal T}_\gamma\}$ is transversal to $\Delta(M)$, we have that
$\{\Delta^{-1}({\mathcal T}_\gamma)\}$ is a Whitney stratification of
$M$ with respect to which $\Delta^{*}(\beta)$ is a constructible function. Moreover, if $T$ is a normal slice of $\Delta^{-1}({\mathcal T}_\gamma)$ at $x$ then $\Delta(T)$ is a normal slice of
${\mathcal T}_\gamma$ at $(x,\ldots,x).$ Set $N=\Delta(T).$
By equations (\ref{Dimca}) and (\ref{nueva}) we have $\,\mathbb{P}(Ch(\beta))=\sum_{\gamma}
m_{\gamma}\mathbb{P}\left(\overline{T_{{\mathcal T}_{\gamma}}^{*}M^{(r)}}\right)\,,$
where
$m_{\gamma}:= (-1)^{nr-d-1}\chi\left(\phi_{f| N}F^{\bullet}|_{N}
\right)_{(x,\ldots,x)}$, and $F^{\bullet}$ is the bounded complex of sheaves such that $\chi( F^{\bullet})_{p}=\beta(p)$ and $f:
(M^{(r)},(x,\ldots,x))\rightarrow (\mathbb C,0)$ is a germ such that $d_{(x,\ldots,x)}f$ is a non-degenerate covector at $(x,\ldots,x)$ with respect to $\{{\mathcal T}_{\gamma}\}$.
Analogously,
$$\mathbb{P}(Ch(\Delta^{*}(\beta)))=\sum_{\gamma}
n_{\gamma}\mathbb{P}\left(\overline{T_{\Delta^{-1}({\mathcal T}_{\gamma})}^{*}M}\right),$$
where $n_{\gamma}:=
(-1)^{n-d-1}\chi\left(\phi_{g|T}G^{\bullet}|_{T}
\right)_{x}$, with
$G^{\bullet}=\Delta^{*}F^{\bullet},$ which is the
bounded complex of sheaves such that $\chi(
G^{\bullet})_{q}=\Delta^{*}(\beta)(q),$ and $g:
(M,x) \rightarrow (\mathbb C,0)$ is a germ such that $d_xg$ is a non-degenerate covector at $x$ with respect to $\{\Delta^{-1}({\mathcal T}_{\gamma})\}$. Notice that we can take $g=\Delta^{*}f$ since these definitions do not depend on the choices of $g.$
Notice that $\Delta|_T:T\rightarrow N$ is an isomorphism. Hence $$\phi_{\Delta^{*}(f| N)}\Delta^{*}(F^{\bullet}|_{ N})\simeq \Delta^{*}\left(\phi_{f|N}\left(F^{\bullet}|_{ N}\right)\right)\,.$$ \noindent But clearly
$\phi_{\Delta^{*}(f| N)}\Delta^{*}(F^{\bullet}|_{ N})=\phi_{g| T}G^{\bullet}|_{T},$ thus $$\chi\left(\phi_{g| T}G^{\bullet}|_{T}
\right)_{x}=\chi \left(\Delta^{*}\left(\phi_{f|N}\left(F^{\bullet}|_{ N}\right)\right)\right)_{x}=\chi\left(\phi_{f|N}F^{\bullet}|_{N}
\right)_{(x,\ldots,x)}\,.$$ Therefore
\begin{equation}\label{6}m_{\gamma}=(-1)^{nr-n} n_{\gamma}. \end{equation}
Proposition \ref{transversal} is now an immediate consequence of the next lemma:
\end{proof}
\begin{lemma} \label{7} One has:
$$\delta^{!}\;\left[\mathbb{P}\left(\overline{T_{{\mathcal T}_{\gamma}}^{*}M^{(r)}}\right)\right]=
i_{*}\;\left[\mathbb{P}\left(\overline{T_{\Delta^{-1}({\mathcal T}_{\gamma})}^{*}M}\right)\right].$$
\end{lemma}
\begin{proof}
Consider the projectivized cotangent bundles $\mathbb{P}(T^{*}M)$ and $\mathbb{P}(T^{*}(M^{(r)}))$; we denote by $\mathbb{P}((T^{*}M)^{\oplus r})\,$
the bundle $\mathbb{P}(T^{*}M\oplus \ldots \oplus T^{*}M)$.
Notice that one has a fibre square
diagram :
$$
\xymatrix { \mathbb{P}\left(\overline{\Delta^{*}(T^{*}_{{\mathcal T}_\gamma}M^{(r)})}\right)\ar[r]^{{\delta}'}\ar[d]_{j'} & \mathbb{P}\left(\overline{T^{*}_{{\mathcal T}_\gamma}M^{(r)}}\right) \ar[d]^{j} \\
\mathbb{P}(\Delta^{*}T^{*}(M^{(r)}))
\ar[r]^{\delta} \ar[d]_{p} & \mathbb{P}(T^{*}(M^{(r)})) \ar[d]^{\pi^{(r)}} \\ M \ar[r]^{\Delta} & M^{(r)} }
$$
where $\pi^{(r)}$ is the natural proper map.
Notice that $$\mathbb{P}(\Delta^{*}T^{*}(M^{(r)}))=\mathbb{P}((T^{*}M)^{\oplus r})$$ and
$$\mathbb{P}\left(\overline{\Delta^{*}(T^{*}_{{\mathcal T}_\gamma}M^{(r)})}\right)= \mathbb{P}\left(\overline{T^*_{\Delta^{-1}({\mathcal T}_\gamma)}M} \right).$$
Thus $j'$ is induced by the diagonal embedding $i.$
Notice that $\Delta,\, \delta$ and $\delta'$ are regular embeddings of codimension $nr-n.$ Hence
$$N_{\mathbb{P}\left(\overline{\Delta^{*}(T^{*}_{{\mathcal T}_\gamma}M^{(r)})}\right)}\mathbb{P}\left(\overline{T^{*}_{{\mathcal T}_\gamma}M^{(r)}}\right)={j'}^*N_{\mathbb{P}(\Delta^{*}T^{*}(M^{(r)}))}\mathbb{P}(T^{*}(M^{(r)})).$$
Therefore $$\delta^{!}\;\left[\mathbb{P}\left(\overline{T_{{\mathcal T}_{\gamma}}^{*}M^{(r)}}\right)\right]={j'}_*\left[\mathbb{P}\left(\overline{\Delta^{*}(T^{*}_{{\mathcal T}_\gamma}M^{(r)})}\right)\right]=
i_{*}\;\left[\mathbb{P}\left(\overline{T_{\Delta^{-1}({\mathcal T}_{\gamma})}^{*}M}\right)\right].$$
Hence the result follows.
\end{proof}
\begin{corollary}\label{L:4} Let $Z(t)$ be as in Proposition \ref{t:1}. Assume that $Z(t)$ admits a Whitney stratification $\{{\mathcal T}_\gamma\}$ transversal to $\Delta(M)$. Then:
$$\delta^{!}\;[\mathbb{P}(Ch({\mathbbm 1}_{Z(t)}))]=(-1)^{nr-n}\; i_{*}\;[\mathbb{P}(Ch({\mathbbm 1}_{Z(\Delta^{*}t)}))],$$
where ${\mathbbm 1}_{(\;)}$ denotes the characteristic function.
\end{corollary}
\begin{remark}\label{Fulton}
\begin{enumerate}
\item In \cite[Theorem 6.2 (a)]{Ful} the following is proved: Consider a fiber square diagram
$$\xymatrix { X'
\ar[r]^{\iota'} \ar[d]_{q} & Y' \ar[d]^{p} \\ X \ar[r]^{\iota} & Y },$$
\noindent where $\iota$ is a regular embedding of codimension $d$ and $p$ is a proper morphism. Then $\iota^!p_*(\alpha)=q_*(\iota^!(\alpha))$ for all $\alpha \in H_{2k}(Y',\mathbb Z).$ Moreover, in \cite[Theorem 6.2 (c)]{Ful} it is proved that if $\iota'$ is also a regular embedding of codimension $d,$ then $\iota^!(\alpha)=\iota'^!(\alpha)$ for all $\alpha \in H_{2k}(Y',\mathbb Z).$
\vspace{0.3cm}\item In \cite[Equation (14)]{Par-Pr} Parusi\'{n}ski and Pragacz gave the following description of the Schwartz-MacPherson classes: Let $Z$ be a smooth complex manifold, let $V$ be a closed subvariety of $Z$ and let $\pi:\mathbb{P}(T^*Z)\rightarrow Z$ be the projectivized cotangent bundle of $Z$. Then the Schwartz-MacPherson class of $V$ is given by
$$c^{SM}(V)=(-1)^{\dim Z-1}c(TZ|_V)\cap \pi_*\left(c({\mathcal{O}(1)})^{-1}\cap [\mathbb{P}(Ch({\mathbbm 1}_{V}))]\right),$$
\noindent where ${\mathcal{O}(1)}$ is the tautological line bundle of $\mathbb{P}(T^*Z).$
\end{enumerate}
\end{remark}
\begin{theorem}\label{T:2} With the assumptions of Corollary \ref{L:4} we have: $$\Delta^{!}\left( \;c^{SM}(Z(t))\;\right)=
c\left( \left( TM|_{Z(\Delta^{*}t)}\right)^{\oplus r-1} \right)\cap c^{SM}(Z(\Delta^{*}t))\,.$$\end{theorem}
\begin{proof}
Applying Remark \ref{Fulton} (2) to the projectivized cotangent
bundle $\pi^{(r)}:\mathbb{P}(T^{*}M^{(r)})\rightarrow M^{(r)}$ we
obtain
$$c^{SM}(Z(t))=(-1)^{nr-1}c\left( TM^{(r)}|_{Z(t)} \right) \cap
\pi_{*}^{(r)} \left( c({\mathcal O}_{r}(1))^{-1}\cap
[\mathbb{P}(Ch({\mathbbm 1}_{Z(t)}))]\right),$$
where ${\mathcal O}_{r}(1)$ denotes the tautological line
bundle of $\mathbb{P}(T^{*}M^{(r)}).$
Applying Remark \ref{Fu} (2) we have that
\begin{equation}\label{8}{\small \Delta^{!}\;c^{SM}(Z(t))=(-1)^{nr-1}c\left(\Delta^{*}\left(
TM^{(r)}|_{Z(t)} \right)\right) \cap \Delta^{!}\pi_{*}^{(r)} \left(
c({\mathcal O}_{r}(1))^{-1}\cap
[\mathbb{P}(Ch({\mathbbm 1}_{Z(t)}))]\right).}\end{equation}
Applying Remark \ref{Fulton} (1) to the fiber square diagram (\ref{fiber square}) we get that
\begin{equation}\label{9}\Delta^{!}\pi_{*}^{(r)} \left( c({\mathcal
O}_{r}(1))^{-1}\cap [\mathbb{P}(Ch({\mathbbm 1}_{Z(t)}))]\right)= p_{*} \left(
\delta^{!}(c({\mathcal O}_{r}(1))^{-1}\cap
\;[\mathbb{P}(Ch({\mathbbm 1}_{Z(t)}))])\right).\end{equation}
Applying Remark \ref{Fu} (2) again, we have
\begin{equation}\label{100}\Delta^{!}\pi_{*}^{(r)} \left( c({\mathcal
O}_{r}(1))^{-1}\cap [\mathbb{P}(Ch({\mathbbm 1}_{Z(t)}))]\right)= p_{*} \left(
c(\delta^{*}{\mathcal O}_{r}(1))^{-1}\cap
\delta^{!}\;[\mathbb{P}(Ch({\mathbbm 1}_{Z(t)}))]\right).\end{equation}
Since $\delta^{*}{\mathcal O}_{r}(1)= {\mathcal
O}_{\mathbb{P}\left((T^{*}M)^{\oplus r} \right)}(1)$ is the
tautological line bundle on the projectivization
$\mathbb{P}((T^{*}M)^{\oplus r})\rightarrow M$, by Corollary \ref{L:4} and
the equations (\ref{8}), (\ref{9}) and (\ref{100}), we get:
$$\begin{array}{l}
\Delta^{!}\left( \;c^{SM}(Z(t))\;\right)= (-1)^{n-1}c\left( \left(
TM|_{Z(\Delta^{*}t)}\right)^{\oplus r} \right)\cap\\\quad \quad\quad\quad\quad\quad\quad \,\;\cap\; p_{*} \left(
c({\mathcal O}_{\mathbb{P}\left((T^{*}M)^{\oplus r}
\right)}(1))^{-1}\cap
i_{*}\;[\mathbb{P}(Ch({\mathbbm 1}_{Z(\Delta^{*}t)}))]\right).\end{array}$$
Hence, by the projection formula for proper morphism (see \cite[Theorem 3.2 (c)]{Ful}) we have that $$\begin{array}{l}
\Delta^{!}\left( \;c^{SM}(Z(t))\;\right)= (-1)^{n-1}c\left( \left(
TM|_{Z(\Delta^{*}t)}\right)^{\oplus r-1} \right)c\left( \left(
TM|_{Z(\Delta^{*}t)}\right) \right)\cap\\\quad \quad\quad\quad\quad\quad\quad \,\;\cap\; (p\circ i)_{*} \left(
c(i^*{\mathcal O}_{\mathbb{P}\left((T^{*}M)^{\oplus r}
\right)}(1))^{-1}\cap
\;[\mathbb{P}(Ch({\mathbbm 1}_{Z(\Delta^{*}t)}))]\right).\end{array}$$
Now, using the fact that $i^*{\mathcal O}_{\mathbb{P}\left((T^{*}M)^{\oplus r}
\right)}(1)={\mathcal O}_{\mathbb{P}\left((T^{*}M)
\right)}(1)$ and that $p\circ i =q:\mathbb{P}(T^{*}M)\rightarrow M$ is the projectivized cotangent morphism we have that
$$\begin{array}{l}
\Delta^{!}\left( \;c^{SM}(Z(t))\;\right)= (-1)^{n-1}c\left( \left(
TM|_{Z(\Delta^{*}t)}\right)^{\oplus r-1} \right)c\left( \left(
TM|_{Z(\Delta^{*}t)}\right) \right)\cap\\\quad \quad\quad\quad\quad\quad\quad \,\;\cap\; q_{*} \left(
c({\mathcal O}_{\mathbb{P}\left((T^{*}M)
\right)}(1))^{-1}\cap
\;[\mathbb{P}(Ch({\mathbbm 1}_{Z(\Delta^{*}t)}))]\right).\end{array}$$
Applying Remark \ref{Fulton} (2) to the
projectivized cotangent bundle $q:\mathbb{P}(T^{*}M)\rightarrow M$ we obtain that
$$c^{SM}(Z(\Delta^{*}t))=(-1)^{n-1}c\left( \left(
TM|_{Z(\Delta^{*}t)}\right) \right)\cap q_{*} \left(
c({\mathcal O}_{\mathbb{P}\left((T^{*}M)
\right)}(1))^{-1}\cap
\;[\mathbb{P}(Ch({\mathbbm 1}_{Z(\Delta^{*}t)}))]\right)$$
Hence we have that
$$\Delta^{!}\left( \;c^{SM}(Z(t))\;\right)=c\left( \left( TM|_{Z(\Delta^{*}t)}\right)^{\oplus r-1}
\right)\cap c^{SM}(Z(\Delta^{*}t)).$$
\end{proof}
Theorem \ref{T:2} is a Verdier-Riemann-Roch type formula for the Schwartz-MacPherson classes (cf. \cite{Sch2}).
Analogously, the next result is a Verdier-Riemann-Roch type theorem for the Milnor classes. The proof is a straightforward application of Proposition \ref{t:1} and Theorem \ref{T:2}.
\begin{corollary} \label{10}With the assumptions of Corollary \ref{L:4} we have: $$\Delta^{!}{\mathcal M}(Z(t))=(-1)^{nr-n}c\left( \left( TM|_{Z(\Delta^{*}t)}\right)^{\oplus
r-1} \right)\cap{\mathcal M}(Z(\Delta^{*}t))\,.$$\end{corollary}
\section{Intersection product formulas}\label{sec. main lemma}
As before, let $M$ be an $n$-dimensional compact complex
analytic manifold. Let $\{E_{i}\}$ be a finite collection of
holomorphic vector bundles over $M$ of rank $d_{i}$, $1 \leq i\leq
r$. For each of these bundles, let $s_{i}$ be a regular
holomorphic section and let $X_{i}$ be the $(n-d_{i})$-dimensional local
complete intersection defined by the zeroes of $s_{i}$.
In this section we assume that we can equip the product $X_1 \times \ldots \times X_r$ with a Whitney stratification such that
the diagonal embedding $\Delta$ is transversal to all
strata. This transversality condition is needed in order to apply Proposition \ref{transversal}, and it is precisely the transversality condition required in Theorem \ref{theorem-1}.
Let $p_i:M^{(r)}\rightarrow M$ be the $i^{th}$-projection,
then we have the holomorphic exterior product section
$$s=s_1\oplus\ldots\oplus s_r:M^{(r)}\rightarrow
p_1^*E_1\oplus\ldots \oplus p_r^*E_r,$$ given by
$s(x_1,\dots,x_r)=(s_1(x_1),\dots, s_r(x_r)).$ Then
$Z(s)=X_1\times\ldots \times X_r$ and $Z(\Delta^*(s))=X_{1}\cap
\ldots \cap X_{r}.$ Set $X=Z(\Delta^*(s)).$
The next result describes the total Schwartz-MacPherson class of $X$ in terms
of the total Schwartz-MacPherson classes of the $X_i.$
\begin{proposition}\label{SM-inter}
$$c^{SM}(X)= c\left( \left(
TM|_{X}\right)^{\oplus r-1} \right)^{-1} \, \cap \,\;\Big(c^{SM}(X_{1}) \cdot \ldots \cdot c^{SM}(X_{r})\Big).$$
\end{proposition}
\begin{proof} By Theorem \ref{T:2} we have that
$$\Delta^{!}\left( \;c^{SM}(Z(s))\;\right)=c\left( \left( TM|_{Z(\Delta^{*}s)}\right)^{\oplus r-1}
\right)\cap c^{SM}(Z(\Delta^{*}s)).$$
Hence
$$c^{SM}(X)=c\left( \left( TM|_{X}\right)^{\oplus r-1}
\right)^{-1}\cap \Delta^{!}\left( c^{SM}(X_1\times\ldots \times X_r)\right).$$
Now, M. Kwieci\'{n}ski proved in \cite{Kw} that Schwartz-MacPherson classes behave well with
respect to the exterior products, that is $$c^{SM}(X_1\times\ldots \times X_r)=c^{SM}(X_1)\times\ldots \times c^{SM}(X_r).$$
Hence $$\Delta^{!}\left(c^{SM}(X_1\times\ldots \times X_r)\right )=c^{SM}(X_1)\cdot\ldots \cdot c^{SM}(X_r)$$
\noindent and the result follows.
\end{proof}
\begin{remark}\label{Fulton 3.2.8}
In \cite[Example 3.2.8]{Ful} the following is proved: Let $Y$ and $Z$ be schemes, $p$ and $q$ the projections from $Y\times Z$ to $Y$ and $Z,$ $E$ and $F$ vector bundles on $Y$ and $Z,$
$\alpha \in H_*(Y,\mathbb Z)$ and $\beta\in H_*(Z,\mathbb Z).$ Then $$\left(c(E)\cap \alpha\right)\times \beta=c(p^*E)\cap(\alpha\times\beta)$$
\noindent and $$\left(c(E)\cap \alpha\right)\times\left(c(F)\cap\beta\right)=c(p^*E\oplus q^*F)\cap(\alpha\times\beta).$$
Since $c(p^*E)\cap((c(E)^{-1}\cap\alpha )\times\beta)=\alpha\times\beta$ we have that $$(c(E)^{-1}\cap\alpha)\times\beta=c(p^*E)^{-1}\cap(\alpha\times\beta).$$
Analogously, $$\left(c(E)^{-1}\cap \alpha\right)\times\left(c(F)^{-1}\cap\beta\right)=c(p^*E\oplus q^*F)^{-1}\cap(\alpha\times\beta).$$
\end{remark}
In \cite{O-Y} it was stated without proof that Fulton-Johnson classes behave well with
respect to exterior products. For completeness we include its proof here.
\begin{lemma}\label{FJ-product}
$$c^{FJ}(X_1\times \ldots \times X_r)= c^{FJ}(X_{1}) \times \ldots \times c^{FJ}(X_{r}).$$
\end{lemma}
\begin{proof}
$$\begin{array}{lcl}
c^{FJ}(X_1\times \ldots \times X_r) & = & c\left(TM^{(r)}|_{X_1\times \ldots \times X_r}\right)c\left(p_1^*E_1\oplus\ldots \oplus p_r^*E_r\right)^{-1}\cap \left[X_1\times \ldots \times X_r\right] \\
\\
& = & c\left(p_1^*TM|_{X_1}\oplus \ldots \oplus p_r^*TM|_{X_r}\right)c\left(p_1^*E_1\oplus\ldots \oplus p_r^*E_r\right)^{-1}\cap \left(\left[X_1\right]\times \ldots \times \left[X_r\right] \right)\\
\\
& = & \left(c(TM|_{X_1})c(E_1)^{-1}\cap\left[X_1\right]\right)\times\ldots\times \left(c(TM|_{X_r})c(E_r)^{-1}\cap\left[X_r\right]\right) \\
\\
& = & c^{FJ}(X_{1}) \times \ldots \times c^{FJ}(X_{r}).
\end{array}$$
\noindent where the third equality follows from Remark \ref{Fulton 3.2.8}.
\end{proof}
\begin{proposition}\label{FJ-inter}
$$c^{FJ}(X)= c\left( \left(
TM|_{X}\right)^{\oplus r-1} \right)^{-1} \, \cap \,\;\Big(c^{FJ}(X_{1}) \cdot \ldots \cdot c^{FJ}(X_{r})\Big).$$
\end{proposition}
\begin{proof}
By Proposition \ref{t:1} we have that
$$\Delta^{!}\left( \;c^{FJ}(Z(s))\;\right)=c\left( \left( TM|_{Z(\Delta^{*}s)}\right)^{\oplus r-1}
\right)\cap c^{FJ}(Z(\Delta^{*}s)).$$
Hence
$$c^{FJ}(X)=c\left( \left( TM|_{X}\right)^{\oplus r-1}
\right)^{-1}\cap \Delta^{!}\left( c^{FJ}(X_1\times\ldots \times X_r)\right).$$
By Lemma \ref{FJ-product} we have that
$$c^{FJ}(X)=c\left( \left( TM|_{X}\right)^{\oplus r-1}
\right)^{-1}\cap \Delta^{!}\left( c^{FJ}(X_1)\times\ldots \times c^{FJ}(X_r)\right).$$
Since $$\Delta^{!}\left( c^{FJ}(X_1)\times\ldots \times c^{FJ}(X_r)\right)=c^{FJ}(X_{1}) \cdot \ldots \cdot c^{FJ}(X_{r})$$
\noindent the result follows.
\end{proof}
\begin{proaf}{\sc of Theorem \ref{theorem-1}:}
This follows immediately from Proposition \ref{SM-inter} and Proposition \ref{FJ-inter}.
\end{proaf}
\begin{example} {\rm Let $Z_1$ and $Z_2$ be the hypersurfaces of $\mathbb{P}^4$ defined by $$H(x_0,\dots, x_4)=x_0 x_1 \ \quad \hbox{and} \quad
G(x_0,\dots, x_4)=x_3 \;.$$
The line bundle of $Z_1$ is ${\mathcal O}(2H)$, where $H= c_1({\mathcal O}(1))$, so the class of the virtual tangent bundle of $Z_1$ is:
$$(1+H)^5 2H / (1+ 2H) = 2H + 6H^2 + 8H^3 + 4H^4,$$
while the Schwartz-MacPherson class is, by the inclusion-exclusion formula in \cite{Aluffi2}:
$$2 c(T\mathbb{P}^3) - c(T\mathbb{P}^2) = 2( (1+H)^4 H ) - (1+H)^3 H^2 = 2H + 7H^2 + 9H^3 + 5H^4$$
Therefore the Milnor class of $Z_1$ is $H^2 + H^3 + H^4$.
On the other hand, since $Z_2$ is smooth, the Schwartz-MacPherson class and the Fulton-Johnson class of $Z_2$ are $(1+H)^4 H= H+4H^2+6H^3+4H^4$.
Therefore, by Theorem \ref{theorem-1}, the Milnor class of $Z_1 \cap Z_2$ is given by
$${\mathcal M}(Z_1 \cap Z_2)=-c(T\mathbb{P}^4)^{-1}\cap c^{SM}(Z_2){\mathcal M}(Z_1)= -H^3.$$
}
\end{example}
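The Chern class arithmetic in this example is easy to check by machine. The following sketch (ours, not part of the original text) redoes the computation in the ring $H^*(\mathbb{P}^4;\mathbb Z)=\mathbb Z[H]/(H^5)$, encoding a class as its list of coefficients:

```python
# Re-check the example's Chern class arithmetic in H^*(P^4; Z) = Z[H]/(H^5).
# A class is a coefficient list [a0, a1, a2, a3, a4] standing for
# a0 + a1*H + ... + a4*H^4.
N = 5  # H^5 = 0 in P^4

def mul(a, b):
    """Cup product, truncated above degree N - 1."""
    c = [0] * N
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j < N:
                c[i + j] += ai * bj
    return c

def inv(a):
    """Inverse of a class with constant term 1 (formal geometric series)."""
    assert a[0] == 1
    b = [1] + [0] * (N - 1)
    for k in range(1, N):
        b[k] = -sum(a[i] * b[k - i] for i in range(1, k + 1))
    return b

def power(a, n):
    c = [1] + [0] * (N - 1)
    for _ in range(n):
        c = mul(c, a)
    return c

one_plus_H = [1, 1, 0, 0, 0]

# Fulton-Johnson class of Z_1: c(TP^4) c(O(2H))^{-1} cap [Z_1], with [Z_1] = 2H.
c_FJ = mul(mul(power(one_plus_H, 5), inv([1, 2, 0, 0, 0])), [0, 2, 0, 0, 0])
# equals 2H + 6H^2 + 8H^3 + 4H^4, as in the text

# Schwartz-MacPherson class by inclusion-exclusion: 2 c(TP^3) H - c(TP^2) H^2.
c_SM = [2 * x - y for x, y in zip(mul(power(one_plus_H, 4), [0, 1, 0, 0, 0]),
                                  mul(power(one_plus_H, 3), [0, 0, 1, 0, 0]))]
# equals 2H + 7H^2 + 9H^3 + 5H^4

milnor_Z1 = [x - y for x, y in zip(c_SM, c_FJ)]  # H^2 + H^3 + H^4

# Milnor class of Z_1 cap Z_2: -c(TP^4)^{-1} cap c^SM(Z_2) M(Z_1).
c_SM_Z2 = mul(power(one_plus_H, 4), [0, 1, 0, 0, 0])
milnor_X = [-x for x in mul(inv(power(one_plus_H, 5)), mul(c_SM_Z2, milnor_Z1))]
```

Running it reproduces the classes computed above; in particular `milnor_X` comes out as $-H^3$.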
\begin{remark} Consider the
complete intersection $X=X_1 \cap X_2$, where $X_1$ is a smooth quadric surface in $\mathbb P^3$ and $X_2$ is a plane tangent to it, so that $X$ consists of two distinct lines meeting at a point. The Milnor class of $X$ is simply the class of a point, but the Milnor classes of $X_1$ and $X_2$ are both zero because both are smooth. This shows that a
transversality condition is necessary for our formula in Theorem \ref{main-lemma}.\end{remark}
\section{Applications to line bundles}
\begin{theorem} \label{main-lemma} With the conditions of Theorem \ref{theorem-1}
we have:
$${\mathcal M}(X)=(-1)^{nr-n}c\left( \left(
TM|_{X}\right)^{\oplus r-1} \right)^{-1}\cap
\sum\;(-1)^{(n-d_{1})\epsilon_{1}+\ldots+(n-d_{r})\epsilon_{r}}
P_{1}\;\cdot\ldots\cdot\; P_{r}\in H_{*}(X),$$
where the sum runs over all choices of $P_{i}\in \left\{{\mathcal
M}(X_i),c^{SM}(X_i)\right\},\,i=1,\dots,r,$ except
$(P_{1},\ldots,P_{r})=(c^{SM}(X_1),\ldots,c^{SM}(X_r))$ and where
$$\epsilon_{i}=\left\{\begin{array}{rcl} 1&,& \mbox{if} \;P_{i}=c^{SM}(X_i)\\
0&,& \mbox{if} \;P_{i}={\mathcal M}(X_i)\\
\end{array}\right. .$$
\end{theorem}
\begin{proof}
By Corollary \ref{10},
$$\Delta^{!}{\mathcal M}(Z(s))=(-1)^{nr-n}c\left( \left( TM|_{Z(\Delta^{*}s)}\right)^{\oplus
r-1} \right)\cap{\mathcal M}(Z(\Delta^{*}s)).$$ Thus,
$${\mathcal M}(X)=(-1)^{nr-n}c\left( \left(
TM|_{Z(\Delta^{*}s)}\right)^{\oplus r-1}
\right)^{-1}\cap\Delta^{!}{\mathcal M}(X_{1}\times \ldots \times
X_{r}),$$
and using the description of the Milnor classes of a product due to \cite[Corollary 3.1]{O-Y}, we have:
$${\mathcal M}(X)=(-1)^{nr-n}c\left( \left(
TM|_{X}\right)^{\oplus r-1} \right)^{-1}\cap
\;\sum(-1)^{(n-d_{1})\epsilon_{1}+\ldots+(n-d_{r})\epsilon_{r}}\Delta^{!}\left(
P_{1}\times \ldots\times P_{r}\right),$$
where the sum runs over all choices of $P_{i}\in \left\{{\mathcal
M}(X_i),c^{SM}(X_i)\right\},\,i=1,\dots,r,$ except
$(P_{1},\ldots,P_{r})=(c^{SM}(X_1),\ldots,c^{SM}(X_r))$ and
where $$\epsilon_{i}=\left\{\begin{array}{rcl} 1&,& \mbox{if} \; P_{i}=c^{SM}(X_i)\\
0&,& \mbox{if} \; P_{i}={\mathcal M}(X_i)\\
\end{array}\right. .$$
The result follows because $\Delta^{!}\left(
P_{1}\times \ldots\times P_{r}\right)= P_{1}\;\cdot\ldots\cdot\;
P_{r}\in H_{*}(X).$
\end{proof}
\begin{corollary}\label{11} $${\mathcal M}(X)= c\left( \left(
TM|_{X}\right)^{\oplus r-1} \right)^{-1}\cap
\displaystyle\sum_{i=1}^{r} (-1)^{D-d_i} a_{1,i}\cdot\ldots\cdot
a_{r-1,i} \cdot {\mathcal M}(X_{i}),$$ where
$D=\displaystyle\sum_{j=1}^{r}d_j$ and
$a_{j,i}=\left\{\begin{array}{lcl}
c^{SM}(X_{j+1}) &\;{\rm if}\;& i \leq j\\
c^{FJ}(X_{j}) &\;{\rm if}\;& i> j\\
\end{array}\right. .$
\end{corollary}
From now on we replace the bundles $E_i$ by line
bundles $L_i$.
\subsection{Aluffi type formula} Let $X=X_{1} \cap \ldots \cap X_{r}$ be as above.
The $\mu$-classes were introduced by P. Aluffi in \cite{Aluffi}.
For each $X_i$, Aluffi's $\mu$-class of the singular locus is defined by the formula
$$\mu_{L_i}({\rm Sing}(X_i))=c(T^*M\otimes {L_i})\cap s({\rm Sing}(X_i),M),$$ where
$s({\rm Sing}(X_i),M)$ is the Segre class of ${\rm Sing}(X_i)$ in $M$ (see
\cite[Chapter 4]{Ful}).
Given a class $\alpha \in H_{2*}(X_i,\mathbb Z)$, write $\alpha=\sum_{j\geq
0}{\alpha}^j,$ where ${\alpha}^j$ is the codimension $j$
component of $\alpha.$ Aluffi introduced the following cycles
$$\alpha^{\vee}:=\sum_{j\geq 0}(-1)^j{\alpha}^j \quad \hbox{and} \quad
\alpha \otimes L_i:=\sum_{j\geq 0}\frac{{\alpha}^j}{c(L_i)^j} \;.$$
Then Aluffi proved in \cite{Aluffi} that the total Milnor class
${\mathcal M}(X_i)$ can be described as follows:
\begin{equation}\label{Aluffi-formula}{\mathcal M}(X_i)=(-1)^{n-1}c(L_i)^{n-1}\cap (\mu_{L_i}({\rm Sing}(X_i))^{\vee
}\otimes L_i).\end{equation}
Again using Corollary \ref{11}, the above
equation yields:
\begin{corollary} \label{Aluffi} The total Milnor class of $X := X_1 \cap \ldots \cap X_r$ is:
$${\mathcal M}(X)=(-1)^{n-1}c\left( \left(
TM|_{X}\right)^{\oplus r-1} \right)^{-1}\cap \left(\displaystyle\sum_{i=1}^{r} (-1)^{r-1} a_{1,i}\cdot\ldots\cdot a_{r-1,i}\cdot c(L_{i})^{n-1}\cap (\mu_{L_{i}}({\rm Sing}{(X_{i})\;})^{\vee
}\otimes L_{i})\right),$$
where $a_{j,i}=\left\{\begin{array}{lcl}
c^{SM}(X_{j+1}) &\;{\rm if}\;& i \leq j\\
c^{FJ}(X_{j}) &\;{\rm if}\;& i> j\\
\end{array}\right. .$
\end{corollary}
\subsection{Parusi\'nski-Pragacz-type formula}
\label{section-P-P-formula}
We now assume each $X_i$ has a Whitney stratification ${\mathcal S}_{i}$. The following characterization of the Milnor classes
of hypersurfaces in compact manifolds is given in \cite{Par-Pr}:
\begin{equation}\label{P}{\mathcal M}(X_i):=\sum_{S \in {\mathcal S}_{i}}\;\gamma_{S}\left(c(L_{i|_{X_i}})^{-1} \cap
c^{SM}(\overline{S})\right)\in H_*(X_i),\end{equation}
where
$\gamma_{S}$ is the function defined on each stratum $S$
as follows: for each $x \in S \subset X_i$, let $F_x$ be a
{\it local Milnor fibre}, and let $\chi(F_{x})$ be its Euler
characteristic. We set:
$$\mu(x;X_i):= (-1)^{n}\;(\chi(F_{x})-1) \,,$$ and
call it the {\it local Milnor number}. This number
is constant on each Whitney stratum, so we denote it $\mu_{S}$. Then
$\gamma_{S}$ is defined inductively by:
$$\gamma_{S}=\mu_{S} - \sum_{S' \neq S,\;\overline{S'} \supset S}
\gamma_{S'}.$$
\begin{lemma}\label{3.2.8.-modificado}
Let $Y$ and $Z$ be subschemes of $M$, $W=Y\cap Z,$ $E$ a vector bundle on $Y,$
$\alpha \in H_*(Y,\mathbb Z)$ and $\beta\in H_*(Z,\mathbb Z).$ Then $$\left(c(E)\cap \alpha\right)\cdot \beta=c(E|_W)\cap(\alpha\cdot\beta).$$
\end{lemma}
\begin{proof}
Let $p$ be the projection from $Y\times Z$ onto $Y$ and let $d:W\rightarrow Y\times Z$ be the diagonal embedding. Then
$$\begin{array}{lcl}
\left(c(E)\cap \alpha\right)\cdot \beta & = & d^!\left(\left(c(E)\cap \alpha\right)\times \beta \right) \\
\\
& = & d^!\left(c(p^*E)\cap \left(\alpha\times \beta \right)\right) \\
\\
& = & c(d^*p^*E)\cap d^!\left(\alpha\times \beta \right) \\
\\
& = & c(E|_W)\cap(\alpha\cdot\beta).
\end{array}$$
\noindent where the second and third equalities follow from Remark \ref{Fulton 3.2.8} and Remark \ref{Fu} (2), respectively.
\end{proof}
\begin{corollary}[Parusi\'{n}ski-Pragacz formula for local
complete intersections]\label{P-P} We have:
$${\mathcal M}(X)=(-1)^{nr-n}c\left( \left(
TM|_{X}\right)^{\oplus r-1} \right)^{-1} \cap \, \bigg(\sum{\alpha}_{S_{1},\dots,S_{r}}^{\epsilon_{1}, \dots, \epsilon_{r}}
\frac{c(L_1)^{\epsilon_{1}}\cdot \ldots \cdot c(L_r)^{\epsilon_{r}}}{c(L_1\oplus\ldots\oplus L_r)}\cap c^{SM}(\overline{S_1})\cdot \ldots \cdot c^{SM}(\overline{S_r})\bigg)\,,$$
\noindent where the sum runs over all possible choices of the strata provided $(S_{1},\dots,S_{r})\neq ((X_1)_{reg},\dots,(X_r)_{reg})$,
$${\alpha}_{S_{1},\dots,S_{r}}^{\epsilon_{1}, \dots,
\epsilon_{r}}=(-1)^{(n-{1})(\epsilon_{1}+\ldots+\epsilon_{r})}
\gamma_{S_1}^{1-\epsilon_{1}}\cdot \ldots \cdot
\gamma_{S_r}^{1-\epsilon_{r}}\quad , \quad \hbox{and} \quad
\epsilon_{i}=\left\{\begin{array}{rcl} 1&,& \mbox{if} \;S_{i}\subseteq (X_i)_{reg}\\
0&,& \mbox{if} \;\dim(S_i)<n-1\\
\end{array}\right. .$$
\end{corollary}
\begin{proof}
The proof is by induction on $r.$ For $r=1$ this is the Parusi\'{n}ski-Pragacz formula given in equation (\ref{P}) for $X_1.$ Let $Y=X_1\cap\ldots\cap X_{r-1}.$ Then $\dim Y=n-(r-1)$ and $X=Y\cap X_r.$
Hence $ {\mathcal M}(X)= {\mathcal M}(Y\cap X_r).$
By Theorem \ref{main-lemma} we have that
$$ {\mathcal M}(Y\cap X_r)=(-1)^{n}c\left( \left( TM|_{X}\right)
\right)^{-1}\cap \Big({\mathcal M}(Y)\cdot {\mathcal
M}(X_r)+ (-1)^{\dim Y} c^{SM}(Y)\cdot {\mathcal
M}(X_r)+(-1)^{n-1}{\mathcal M}(Y)\cdot
c^{SM}(X_r)\Big).$$
By the induction hypothesis, equation (\ref{P}) for $X_r,$ Proposition \ref{SM-inter} and Lemma \ref{3.2.8.-modificado} we have that
$${\tiny \begin{array}{l}
{\mathcal M}(X) = (-1)^n c\left( \left( TM|_{X}\right)^{\oplus r-1}\right)^{-1}\cap \Big[(-1)^{nr} \bigg(\displaystyle\sum_{\underline{S}\neq \underline{X}}{\alpha}_{S_{1},\dots,S_{r-1}}^{\epsilon_{1}, \dots, \epsilon_{r-1}}
\frac{c(L_1)^{\epsilon_{1}}\cdot \ldots \cdot c(L_{r-1})^{\epsilon_{r-1}}}{c(L_1\oplus\ldots\oplus L_{r-1})}\cap \prod_{i=1}^{r-1}c^{SM}(\overline{S_i})\bigg)\cdot\bigg(\sum_{S_r}\;\gamma_{S_r}\left(c(L_{r|_{X_r}})^{-1} \cap
c^{SM}(\overline{S_r})\right)\bigg)
\\
\\\quad\quad\quad\quad\quad\quad\quad\quad\quad + (-1)^{n-r+1}\left(\prod_{i=1}^{r-1}c^{SM}(\overline{S_i})\right)\cdot \bigg(\sum_{S_r}\;\gamma_{S_r}\left(c(L_{r|_{X_r}})^{-1} \cap
c^{SM}(\overline{S_r})\right)\bigg)
\\
\\\quad\quad\quad\quad\quad\quad\quad\quad\quad + (-1)^{n-1+nr}\bigg(\displaystyle\sum_{\underline{S}\neq \underline{X}}{\alpha}_{S_{1},\dots,S_{r-1}}^{\epsilon_{1}, \dots, \epsilon_{r-1}}
\frac{c(L_1)^{\epsilon_{1}}\cdot \ldots \cdot c(L_{r-1})^{\epsilon_{r-1}}}{c(L_1\oplus\ldots\oplus L_{r-1})}\cap \prod_{i=1}^{r-1}c^{SM}(\overline{S_i})\bigg)\cdot c^{SM}(X_r)\Big]
\end{array}}$$
\noindent where $\underline{S}\neq \underline{X}$ means that $(S_{1},\dots,S_{r-1})\neq ((X_1)_{reg},\dots,(X_{r-1})_{reg}).$
Notice that $\gamma_{S_r}=0$ if $S_{r}\subseteq (X_r)_{reg}$ and that
$${\alpha}_{S_{1},\dots,S_{r}}^{\epsilon_{1}, \dots,
\epsilon_{r}}=\left\{\begin{array}{rcl} (-1)^{n-1}{\alpha}_{S_{1},\dots,S_{r-1}}^{\epsilon_{1}, \dots,
\epsilon_{r-1}}&,& \mbox{if} \;S_{r}\subseteq (X_r)_{reg}\\
\\
{\alpha}_{S_{1},\dots,S_{r-1}}^{\epsilon_{1}, \dots,
\epsilon_{r-1}}\cdot \gamma_{S_r}&,& \mbox{if} \;\dim(S_r)<n-1\\
\end{array}\right. .$$
Hence
$${\tiny \begin{array}{l}
{\mathcal M}(X) = (-1)^{nr-n} c\left( \left( TM|_{X}\right)^{\oplus r-1}\right)^{-1}\cap \Big[\displaystyle\sum_{\underline{S}\neq \underline{X}\;\mbox{and}\; S_r\neq X_r}{\alpha}_{S_{1},\dots,S_{r}}^{\epsilon_{1}, \dots, \epsilon_{r}}
\frac{c(L_1)^{\epsilon_{1}}\cdot \ldots \cdot c(L_{r})^{\epsilon_{r}}}{c(L_1\oplus\ldots\oplus L_{r})}\cap \prod_{i=1}^{r}c^{SM}(\overline{S_i})
\\
\\\quad\quad\quad\quad\quad\quad\quad\quad\quad +\displaystyle\sum_{\underline{S}= \underline{X}\;\mbox{and}\; S_r\neq X_r}{\alpha}_{S_{1},\dots,S_{r}}^{\epsilon_{1}, \dots, \epsilon_{r}}
\frac{c(L_1)^{\epsilon_{1}}\cdot \ldots \cdot c(L_{r})^{\epsilon_{r}}}{c(L_1\oplus\ldots\oplus L_{r})}\cap \prod_{i=1}^{r}c^{SM}(\overline{S_i})
\\
\\\quad\quad\quad\quad\quad\quad\quad\quad\quad + \displaystyle\sum_{\underline{S}\neq \underline{X}\;\mbox{and}\; S_r= X_r}{\alpha}_{S_{1},\dots,S_{r}}^{\epsilon_{1}, \dots, \epsilon_{r}}
\frac{c(L_1)^{\epsilon_{1}}\cdot \ldots \cdot c(L_{r})^{\epsilon_{r}}}{c(L_1\oplus\ldots\oplus L_{r})}\cap \prod_{i=1}^{r}c^{SM}(\overline{S_i})\Big]
\end{array}}.$$
Now the result follows straightforwardly.
\end{proof}
\begin{remark}
[{\bf Milnor classes and global L\^e classes}]
In \cite {BMS} there is a concept of global
L\^e classes of a singular hypersurface $Z$ in a smooth complex
submanifold $M$ of $\mathbb{P}^{N}$, and a formula relating
these with the Milnor classes of $Z$.
The L\^e classes extend the notion of the local L\^e cycles introduced
in \cite{Massey}. Using Corollary \ref{11} one also obtains a
description of the Milnor classes of the local complete intersection
$X=X_{1} \cap \ldots \cap X_{r}$ via the L\^e classes of each
hypersurface $X_{i}$.
\end{remark}
\section{Introduction}
A solar scaling relation is a formula for estimating some unknown property of a star from observations by scaling from the known properties of the Sun.
These relations have the form
\begin{equation} \label{eq:scaling}
\frac{Y}{\text{Y}_\odot}
\simeq
\prod_i \left(\frac{X_i}{\text{X}_{\odot,i}}\right)^{P_i}
\end{equation}
where $Y$ is some property we wish to estimate, such as the radius of the star.
The quantity $\text{Y}_\odot$ is the corresponding property of the Sun (e.g., the solar radius), the vector $\mathbf X$ contains measurable properties of the star (e.g., its effective temperature and luminosity), the vector $\mathbf X_{\odot}$ contains the corresponding solar properties, and $\mathbf P$ is some vector of exponents.
An example scaling relation for estimating stellar radii can be derived from the Stefan--Boltzmann law as:
\begin{equation}
\frac{R}{\text{R}_\odot}
\simeq
\left(
\frac{T_{\text{eff}}}{\text{T}_{\text{eff},\odot}}
\right)^{-2}
\left(
\frac{L}{\text{L}_\odot}
\right)^{\frac{1}{2}}
\end{equation}
where $R$ is the radius, $T_{\text{eff}}$ the effective temperature, and $L$ the luminosity.
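For illustration, this relation translates directly into code (a minimal sketch; the function name is ours):

```python
# Radius from the Stefan-Boltzmann law, L = 4 pi sigma R^2 T_eff^4, in solar
# units: R/R_sun = (T_eff / T_eff_sun)^(-2) * (L / L_sun)^(1/2).
def stefan_boltzmann_radius(lum, teff, teff_sun=5772.0):
    """R/R_sun given L/L_sun and T_eff in kelvin."""
    return (teff / teff_sun) ** -2 * lum ** 0.5

print(stefan_boltzmann_radius(1.0, 5772.0))  # 1.0 (the Sun)
print(stefan_boltzmann_radius(4.0, 5772.0))  # 2.0 (4x solar luminosity at solar T_eff)
```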
In the era of space asteroseismology, relations known as the \emph{seismic scaling relations} have enjoyed wide usage.
They are used to estimate the unknown masses and radii of stars by scaling asteroseismic observations with their helioseismic counterparts.
These observations include the average frequency spacing between radial mode oscillations of consecutive radial order---the \emph{large frequency separation}, $\Delta\nu$---and the \emph{frequency at maximum oscillation power}, $\nu_{\max}$.
From these and the observed effective temperature, one can estimate the stellar mass $M$ and radius $R$ of a star via:
\begin{align}
\frac{M}{\text{M}_\odot}
&\simeq
\bigg(
\frac{\nu_{\max}}{\nu_{\max,\odot}}
\bigg)^3
\bigg(
\frac{\Delta\nu}{\Delta\nu_\odot}
\bigg)^{-4}
\bigg(
\frac{T_{\text{eff}}}{\text{T}_{\text{eff},\odot}}
\bigg)^\frac{3}{2} \label{eq:scalingM}
\end{align}
\begin{align}
\frac{R}{\text{R}_\odot}
&\simeq
\bigg(
\frac{\nu_{\max}}{\nu_{\max,\odot}}
\bigg)
\bigg(
\frac{\Delta\nu}{\Delta\nu_\odot}
\bigg)^{-2}
\bigg(
\frac{T_{\text{eff}}}{\text{T}_{\text{eff},\odot}}
\bigg)^\frac{1}{2} \label{eq:scalingR}
\end{align}
where ${\nu_{\max,\odot} = 3090 \pm 30~\mu\text{Hz}}$, ${\Delta\nu_\odot = 135.1 \pm 0.1~\mu\text{Hz}}$ \mb{\citep{2011ApJ...743..143H}}, and ${\text{T}_{\text{eff},\odot} = 5772.0 \pm 0.8~\text{K}}$ \citep{2016AJ....152...41P}.
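A minimal transcription of these two relations into code, using the solar reference values quoted above (function names are ours):

```python
# Direct transcription of the seismic scaling relations for mass and radius,
# using the solar reference values quoted in the text.
NU_MAX_SUN = 3090.0   # muHz
DELTA_NU_SUN = 135.1  # muHz
TEFF_SUN = 5772.0     # K

def scaling_mass(nu_max, delta_nu, teff):
    """M/M_sun from nu_max (muHz), Delta nu (muHz), and T_eff (K)."""
    return ((nu_max / NU_MAX_SUN) ** 3
            * (delta_nu / DELTA_NU_SUN) ** -4
            * (teff / TEFF_SUN) ** 1.5)

def scaling_radius(nu_max, delta_nu, teff):
    """R/R_sun from the same observables."""
    return ((nu_max / NU_MAX_SUN)
            * (delta_nu / DELTA_NU_SUN) ** -2
            * (teff / TEFF_SUN) ** 0.5)

# Solar inputs recover solar values, as they must.
print(scaling_mass(3090.0, 135.1, 5772.0))    # 1.0
print(scaling_radius(3090.0, 135.1, 5772.0))  # 1.0
```

Note that, because the exponents are sizeable, observational errors are amplified: the roughly 1\% uncertainty on $\nu_{\max}$ alone contributes about 3\% to the mass estimate.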
These relations are useful thanks to the exquisite precision with which asteroseismic data can be obtained.
For a typical well-observed solar-like star, $\Delta\nu$ and $\nu_{\max}$ can be measured with an estimated relative error of only 0.1\% and 1\%, respectively \citep[see, e.g., Figure~5 of][]{Bellinger2019}.
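As a concrete sketch (not the paper's published code, which is provided in its Appendix), Equations~\ref{eq:scalingM} and \ref{eq:scalingR} can be written in Python using the solar reference values quoted above:

```python
NU_MAX_SUN = 3090.0   # muHz
DELTA_NU_SUN = 135.1  # muHz
TEFF_SUN = 5772.0     # K

def seismic_mass(nu_max, delta_nu, teff):
    """M/Msun from the classical seismic mass scaling relation."""
    return ((nu_max / NU_MAX_SUN) ** 3
            * (delta_nu / DELTA_NU_SUN) ** -4
            * (teff / TEFF_SUN) ** 1.5)

def seismic_radius(nu_max, delta_nu, teff):
    """R/Rsun from the classical seismic radius scaling relation."""
    return ((nu_max / NU_MAX_SUN)
            * (delta_nu / DELTA_NU_SUN) ** -2
            * (teff / TEFF_SUN) ** 0.5)

# solar inputs recover M = R = 1 by construction; note also that the
# implied mean density M/R**3 depends only on the Delta_nu ratio squared
```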
These relations
can be analytically derived from the fact that $\Delta\nu$ scales with the mean density of the star and $\nu_{\max}$ scales with the acoustic cut-off frequency \mb{\citep{1986ApJ...306L..37U, 1991ApJ...368..599B, 1995A&A...293...87K, Stello_2008, 2010A&A...509A..77K}}.
\mbb{These relations are not perfectly accurate, however \citep[e.g.,][]{2009MNRAS.400L..80S, 2011A&A...530A.142B, 2016ApJ...832..121G, 2018MNRAS.478.4669T, 2019ApJ...870...41O}, especially when it comes to evolved stars, which has resulted in several suggested corrections \citep{2011ApJ...743..161W, 2016ApJ...822...15S, 2016MNRAS.460.4277G, 2017MNRAS.470.2069G}}.
In recent years, there has been a great push for improved determination of stellar properties---and particularly stellar ages, for which asteroseismology is uniquely capable.
Besides their intrinsic interest, knowledge of the ages of stars is useful for a broad spectrum of activities in astrophysics, ranging from charting the history of the Galaxy \mb{\citep[e.g.,][]{2013MNRAS.429..423M, doi:10.1146/annurev-astro-081915-023441, 2016MNRAS.455..987C, 2018MNRAS.475.5487S, 2018arXiv180900914S}} to understanding the processes of stellar and exoplanetary formation and evolution \mb{\citep[e.g.,][]{1981A&A....93..136M, 1996Natur.380..606L, doi:10.1146/annurev.earth.30.091201.140357, doi:10.1146/annurev-astro-081309-130806, 2015ARA&A..53..409W, 2017A&A...608A.112N, 2017ApJ...851...80B}}.
Unlike with mass and radius, there is no scaling relation for stellar age.
Instead, a multitude of methods have been developed for matching asteroseismic observations of stars to theoretical models of stellar evolution \mb{\citep[e.g.,][]{1994ApJ...427.1013B, 2004MNRAS.351..487P, 2009ApJ...700.1589S, Gai_2011, 2012MNRAS.427.1847B, 2014ApJS..214...27M, 2014A&A...569A..21L, 2014A&A...566A..82L, 2015MNRAS.452.2127S, 2017ApJ...835..173S, 2016ApJ...830...31B, 2017ApJ...839..116A, 2019MNRAS.484..771R}}, which then yields their ages.
Scaling relations are attractive resources because they can easily and immediately be applied to observations without requiring access to theoretical models.
In this paper, I seek to develop such a relation to estimate the ages of \mb{main sequence} stars, as well as to improve the scaling relations for estimating their mass and radius.
The strategy is as follows.
It is by now well-known that the core-hydrogen abundance (and, by proxy, the age) of a main-sequence star is correlated with the average frequency spacing between radial and quadrupole oscillation modes \citep[e.g.,][]{1984srps.conf...11C, 2010aste.book.....A, basuchaplin}.
This spacing is known as the small frequency separation and is denoted by $\delta\nu$.
The diagnostic power of $\delta\nu$ is owed to its sensitivity to the sound-speed gradient of the stellar core, which in turn is affected by the mean molecular weight, which increases monotonically over the main-sequence lifetime as a byproduct of hydrogen--helium fusion.
Here I formulate new scaling relations that make use of this spacing $\delta\nu$, and I also add a term for metallicity.
Rather than by analytic derivation, I calibrate the exponents of this relation using 80 solar-type stars whose ages and other parameters have been previously determined through detailed fits to stellar evolution simulations.
Finally, I perform cross-validation to estimate the accuracy of the new relations.
\section{Data}
I obtained spectroscopic and asteroseismic measurements of solar-like stars from the \emph{Kepler} Ages \citep{2015MNRAS.452.2127S, 2016MNRAS.456.2183D} and LEGACY \citep{2017ApJ...835..172L, 2017ApJ...835..173S} samples.
These stars were observed by the \emph{Kepler} spacecraft during its nominal mission
\citep{2010Sci...327..977B}.
\mb{These stars are all main-sequence or early sub-giant stars with no indications of mixed modes. Their positions in the $\nu_{\max}$--$T_{\text{eff}}$ plane are shown in Figure~\ref{fig:teff-numax}.
The large and small frequency separations of these stars range from 38 to 180~$\mu$Hz and from 2.8 to 13~$\mu$Hz, respectively.}
The ages, masses, and radii of these stars are taken from \citet{Bellinger2019} as derived using the \emph{Stellar Parameters in an Instant} \citep[SPI,][]{2016ApJ...830...31B} pipeline.
The SPI method uses machine learning to rapidly compute stellar ages, masses, and radii of stars by connecting their observations to theoretical models.
For the present study, I selected 80 of these stars having $\delta\nu$ measurements with uncertainties smaller than 10\% and $\nu_{\max}$ measurements with uncertainties smaller than 5\%.
\begin{figure}%
\centering%
\includegraphics[width=\linewidth]{figs/teff-numax.pdf}%
\caption{\mb{Locations of the stars used in this study in the $\nu_{\max}$--$T_{\text{eff}}$ diagram.
The lines in the background are theoretical stellar evolution tracks of the indicated masses computed with \textsc{MESA} \citep{2011ApJS..192....3P,2013ApJS..208....4P,2015ApJS..220...15P,2018ApJS..234...34P}.
The zero-age main sequence is indicated with a dashed line; stars evolve upward and to the right. The background color indicates spectral type. The position of the Sun is indicated with the solar symbol ($\odot$).} \label{fig:teff-numax}}%
\end{figure}%
\section{Methods}
Here I will detail the construction of the new scaling relations and the procedure for their calibration.
The scaling relations shown in Equations~\ref{eq:scalingM} and \ref{eq:scalingR} can be written more generically as follows.
For a given quantity $Y$ (and corresponding solar quantity $\text{Y}_\odot$),
\begin{align}
\frac{Y}{\text{Y}_\odot}
&\simeq
\bigg(
\frac{\nu_{\max}}{\nu_{\max,\odot}}
\bigg)^\alpha
\bigg(
\frac{\Delta\nu}{\Delta\nu_\odot}
\bigg)^\beta
\bigg(
\frac{\delta\nu}{\delta\nu_\odot}
\bigg)^\gamma
\bigg(
\frac{T_{\text{eff}}}{T_{\text{eff},\odot}}
\bigg)^\delta
\exp\bigg(
\text{[Fe/H]}
\bigg)^\epsilon \label{eq:scalingX}
\end{align}
for suitable choices of the powers $\mathbf P = [\alpha, \beta, \gamma, \delta, \epsilon]$.
Note that the metallicity is exponentiated before the power $\epsilon$ is applied.
The uncertainties on all solar quantities are propagated, except in the case of metallicity, for which there is no agreed-upon solar uncertainty \citep[e.g.,][]{2014dapb.book..245B}.
From an analysis of solar data \citep{2014MNRAS.439.2025D} one can find ${\delta\nu_\odot = 8.957 \pm 0.059~\mu\text{Hz}}$.
For the solar age I use ${\tau_\odot = 4.569 \pm 0.006~\text{Gyr}}$ \citep{2015A&A...580A.130B}.
To give a concrete example, in order to recover the classical radius scaling relation (Equation~\ref{eq:scalingR}), i.e., ${Y=R}$, we have ${\mathbf{P}=[1,-2,0,1/2,0]}$.
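The generic relation and this classical special case can be sketched as follows (an illustration only; the argument names and the dictionary of solar reference values are choices made here):

```python
import numpy as np

SOLAR = {'nu_max': 3090.0, 'delta_nu': 135.1,
         'delta_nu_small': 8.957, 'teff': 5772.0}

def scaling_relation(P, nu_max, delta_nu, delta_nu_small, teff, feh):
    """Y/Ysun from the generic scaling relation with exponents
    P = [alpha, beta, gamma, delta, epsilon]; the metallicity [Fe/H]
    is exponentiated before the power epsilon is applied."""
    alpha, beta, gamma, delta, epsilon = P
    return ((nu_max / SOLAR['nu_max']) ** alpha
            * (delta_nu / SOLAR['delta_nu']) ** beta
            * (delta_nu_small / SOLAR['delta_nu_small']) ** gamma
            * (teff / SOLAR['teff']) ** delta
            * np.exp(feh) ** epsilon)

# P = [1, -2, 0, 1/2, 0] recovers the classical radius relation;
# solar inputs then give exactly R/Rsun = 1
```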
I now seek to find the exponents for scaling relations that best match to the literature values of radius, mass, and age.
I define the goodness-of-fit $\chi^2$
for a given vector $\mathbf P$ as
\begin{align}
\chi^2 &= \sum_i \left(
\frac{\hat Y_i - Y_i}{\sigma_i}
\right)^2
\end{align}
where $\hat Y_i$ is the literature value of the desired quantity (e.g., age) for the $i$th star, $Y_i$ is the result of applying the scaling relation in Equation~\ref{eq:scalingX} with the given powers $\mathbf P$, and the uncertainties $\boldsymbol\sigma$ are the measurement uncertainty of $\boldsymbol{\hat{Y}}$.
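In code, this goodness-of-fit is simply the following (a sketch; here the scaling-relation predictions are passed in as a pre-computed array rather than evaluated internally):

```python
import numpy as np

def chi_squared(y_scaling, y_literature, sigma):
    """Sum of squared, uncertainty-weighted residuals between
    scaling-relation predictions and literature values."""
    y_scaling = np.asarray(y_scaling, float)
    y_literature = np.asarray(y_literature, float)
    sigma = np.asarray(sigma, float)
    return float(np.sum(((y_literature - y_scaling) / sigma) ** 2))

# a perfect fit gives 0; each one-sigma miss contributes 1
print(chi_squared([3.0, 5.0], [3.0, 5.5], [0.5, 0.5]))  # -> 1.0
```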
Given this function, I use Markov chain Monte Carlo \citep{2013PASP..125..306F, corner} to determine the posterior distribution of $\mathbf P$ for each of $Y=R, M, \tau$.
Finally, I use the mean-shift algorithm \citep{fukunaga1975estimation, scikit-learn} to find the mode of the posterior distribution (see Figure~\ref{fig:corner}).
This yields the desired exponents for each scaling law.
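The fitting machinery can be sketched end-to-end in a few lines. The paper uses the emcee ensemble sampler and scikit-learn's mean-shift algorithm; the stand-ins below (a plain random-walk Metropolis sampler and a posterior-mean estimate, applied to a toy Gaussian posterior) only illustrate the idea:

```python
import numpy as np

def metropolis(log_post, p0, n_steps=20000, step=0.1, seed=0):
    """Minimal random-walk Metropolis sampler (stand-in for emcee)."""
    rng = np.random.default_rng(seed)
    p = np.asarray(p0, float)
    lp = log_post(p)
    chain = np.empty((n_steps, p.size))
    for i in range(n_steps):
        q = p + step * rng.standard_normal(p.size)
        lq = log_post(q)
        if np.log(rng.uniform()) < lq - lp:  # Metropolis accept/reject
            p, lp = q, lq
        chain[i] = p
    return chain

# toy posterior peaked at 'true' exponents [1.0, -2.0]
true = np.array([1.0, -2.0])
log_post = lambda p: -0.5 * np.sum((p - true) ** 2) / 0.05 ** 2
chain = metropolis(log_post, p0=[0.0, 0.0])
estimate = chain[len(chain) // 2:].mean(axis=0)  # discard burn-in
```

The posterior mean here plays the role that the mean-shift mode estimate plays in the paper; for the symmetric toy posterior the two coincide.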
\begin{figure*}
\centering
\begin{overpic}[width=0.97\linewidth, tics=10, trim={0 0.4cm 0 1cm}, clip]{figs/age_corner.pdf}
\put (9.5,93.5) {$\displaystyle\nu_{\max}$}
\put (27,75.5) {$\displaystyle\Delta\nu$}
\put (45,57.5) {$\displaystyle\delta\nu$}
\put (63,40) {$\displaystyle{T}_{\text{eff}}$}
\put (80.5,22) {$\displaystyle{\text{[Fe/H]}}$}
\end{overpic}
\caption{A standard corner plot showing the posterior distributions of each of the exponents in the MCMC-fitting of the age scaling relation (\emph{cf.}~Equation~\ref{eq:scalingX}).
Since $\nu_{\max}$ and $\Delta\nu$ are correlated quantities \citep[e.g.,][]{2009MNRAS.400L..80S}, the respective exponents for these quantities ($\alpha$ and $\beta$) are correlated as well.
The blue lines and points indicate the mode of the posterior distribution, which are listed in Table~\ref{tab:scaling}.
\label{fig:corner}}
\end{figure*}
\section{Results}
The best exponents for each of the computed scaling relations are shown in Table~\ref{tab:scaling}.
I fit the mass and radius scaling relations both with and without a dependence on $\delta\nu$. I found that its inclusion made little difference, however, so I have omitted it for those two relations.
\begin{table}
\centering
\caption{Classical and \mb{new} MCMC-fitted exponents for scaling relations (see Equation~\ref{eq:scalingX}).
\label{tab:scaling}}
\begin{tabular}{ccccccc}
\hline
& & $\nu_{\max}$ & $\Delta\nu$ & $\delta\nu$ & $T_{\text{eff}}$ & [Fe/H]\\\hline
& $Y$ & $\alpha$ & $\beta$ & $\gamma$ & $\delta$ & $\epsilon$\\\hline\hline
Classic & $M$ & 3 & -4 & -- & 1.5 & -- \\
New & $M$ & 0.975 & -1.435 & -- & 1.216 & 0.270 \\\hline
Classic & $R$ & 1 & -2 & -- & 0.5 & -- \\
New & $R$ & 0.305 & -1.129 & -- & 0.312 & 0.100 \\
Seismic & $R$ & 0.883 & -1.859 & -- & -- & -- \\\hline
New & Age & -6.556 & 9.059 & -1.292 & -4.245 & -0.426 \\
\hline\hline
\end{tabular}
\end{table}
A striking feature of the new radius scaling relation is its small dependence on spectroscopic variables.
To explore this further, I have additionally fit a radius scaling relation using only $\Delta\nu$ and $\nu_{\max}$, \mb{referred to as `Seismic' in Table~\ref{tab:scaling}}.
In the next section we shall see that although it is not quite as accurate as the full new radius relation, it nevertheless outperforms the classical radius scaling relation, despite requiring no spectroscopic information.
A notable aspect of the new mass and radius scaling relations is that their exponents are smaller in magnitude than those of the classical relations.
As a consequence, the resulting uncertainties are also smaller.
We may estimate the typical uncertainties of applying these relations by examining the typical uncertainties of their inputs.
The median relative errors on $\nu_{\max}$, $\Delta\nu$, $\delta\nu$, $T_\text{eff}$, and $\exp\text{[Fe/H]}$ of the \emph{Kepler} Ages and LEGACY stars are approximately 1\%, 0.1\%, 4\%, 1\%, and 10\%, respectively \citep[e.g., Figure~5 of][]{Bellinger2019}.
Thus, for a solar twin observed with such uncertainties, using Equation~\ref{eq:scalingX} with the exponents given in Table~\ref{tab:scaling} yields uncertainties of 0.032~$\text{M}_\odot$ (3.3\%), 0.011~$\text{R}_\odot$ (1.1\%), and 0.56~Gyr (12\%).
These values are similar to those from fits to models \citep[e.g., Figure~6 of][]{Bellinger2019}.
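These quoted uncertainties follow from first-order error propagation on Equation~\ref{eq:scalingX}: for a power law, the relative variances of the inputs add in quadrature, weighted by the squared exponents. A sketch for the new mass relation, using the exponents from Table~\ref{tab:scaling} and the median input precisions above (uncertainties on the solar reference values are neglected here, which is why the result comes out slightly below the quoted 3.3\%):

```python
import numpy as np

# new-mass exponents for nu_max, Delta_nu, Teff, and exp[Fe/H]
# (the small separation does not enter the mass relation)
exponents = np.array([0.975, -1.435, 1.216, 0.270])
rel_errors = np.array([0.01, 0.001, 0.01, 0.10])  # 1%, 0.1%, 1%, 10%

# first-order propagation: (sigma_Y/Y)^2 = sum_i (p_i * sigma_i/x_i)^2
rel_sigma_mass = np.sqrt(np.sum((exponents * rel_errors) ** 2))
print(f"{100 * rel_sigma_mass:.1f}%")  # prints "3.1%"
```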
\section{Benchmarking} \label{sec:cv}
I now seek to determine the accuracy of the relations: i.e., how well do these relations actually work?
The fitted values cannot simply be compared to the literature values: it would be unsurprising if they matched, given that they were numerically optimized to do so.
Instead, we may use cross-validation to answer this question.
The procedure is as follows.
We take the same data set as before but, instead of training on all of it, remove one of the stars.
We then fit the relations to the remaining 79 stars using the procedure described in the previous section.
Finally, the newly fitted relations are tested on the held-out star.
This test is then repeated for every star.
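Schematically, given any fitting routine and prediction routine, leave-one-out cross-validation looks as follows (a generic sketch, verified here on a toy linear model rather than on the scaling relations themselves):

```python
import numpy as np

def leave_one_out(fit, predict, X, y):
    """For each star i: refit on all other stars, then predict star i."""
    n = len(y)
    preds = np.empty(n)
    for i in range(n):
        mask = np.arange(n) != i          # hold out star i
        params = fit(X[mask], y[mask])    # fit on the remaining n-1 stars
        preds[i] = predict(params, X[i])  # test on the held-out star
    return preds

# toy check: a noise-free linear model is recovered exactly
rng = np.random.default_rng(1)
X = rng.uniform(0.5, 2.0, size=(80, 1))
y = 3.0 * X[:, 0]
fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
predict = lambda params, x: float(x @ params)
preds = leave_one_out(fit, predict, X, y)
```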
Comparisons of these cross-validated relations to literature values are shown in Figures~\ref{fig:scalingM}, \ref{fig:scalingR}, and \ref{fig:scaling-age}.
The classical mass and radius scaling relations are also shown, and there it can be seen that the new relations are better at reproducing the literature values.
The age scaling relation is also compared to the BASTA ages \citep{2015MNRAS.452.2127S, 2017ApJ...835..173S}, which were fit in a different way and using a different grid of theoretical models, \mb{and have some systematic differences with the SPI ages}.
The age scaling relation shows a larger dispersion at older ages, which is most likely due to the input data having larger uncertainties there (see Figure~\ref{fig:unc-age}).
\mb{\citet{2011ApJ...743..161W} sought to improve the $\Delta\nu$ scaling relation by developing an analytical correction function from models.
However, \citet{2018MNRAS.481L.125S} found that this correction actually degrades the agreement for main-sequence stars due to surface effects in the models.
For comparison purposes, the \citet{2011ApJ...743..161W} radius scaling relation applied to these stars is shown in Figure~\ref{fig:white}.}
The purely seismic radius scaling relation is shown in Figure~\ref{fig:scalingR-seismic}.
\mb{There is a systematic trend at low radius; this is likely due to not including all relevant physics.
Still, it similarly outperforms the classical scaling relation, despite having no temperature dependence.
It is thus a potentially useful tool for measuring stellar radii without spectroscopy.}
\begin{figure*}
\vspace*{1.5\baselineskip}
\centering%
\begin{minipage}[t]{0.477\linewidth}%
\includegraphics[width=\linewidth]{figs/comp/M_classical.pdf}%
\end{minipage}\hfill%
\begin{minipage}[t]{0.477\linewidth}%
\includegraphics[width=\linewidth]{figs/comp/M_MCMC_cv_no_dnu.pdf}%
\end{minipage}%
\caption{Classical (left, squares) and new (right, circles) scaling relations for estimating stellar mass.
Each point is a star observed by \emph{Kepler}.
The masses on the x-axis \mb{are} the literature values; they were estimated using the SPI method with reference to a grid of theoretical stellar models.
The masses on the y-axis have been estimated using the scaling relation (whose fitting did not involve the star being plotted, see Section~\ref{sec:cv} for details).
The bottom panel shows the relative difference between the scaling mass and the literature mass.
The weighted mean and standard deviation of the ratios are given.
The Sun is denoted by the solar symbol ($\odot$). \label{fig:scalingM}}
\vspace*{1.5\baselineskip}
\centering
\begin{minipage}[t]{0.477\linewidth}%
\includegraphics[width=\linewidth]{figs/comp/R_classical.pdf}%
\end{minipage}\hfill%
\begin{minipage}[t]{0.477\linewidth}%
\includegraphics[width=\linewidth]{figs/comp/R_MCMC_cv_no_dnu.pdf}%
\end{minipage}%
\caption{Classical (left, squares) and new (right, circles) scaling relations for estimating stellar radius. \label{fig:scalingR}}
\end{figure*}
\begin{figure*}%
\vspace*{1.5\baselineskip}
\centering%
\includegraphics[width=0.477\linewidth]{figs/comp/age_MCMC_cv.pdf}\hfill%
\includegraphics[width=0.477\linewidth]{figs/comp/age_BASTA.pdf}%
\caption{A comparison of the scaling relation for stellar age to literature age values. Left panel: cross-validated scaling ages compared to literature values from SPI. Right panel: scaling ages using the values from Table~\ref{tab:scaling} in comparison with BASTA ages.
\label{fig:scaling-age}}%
\end{figure*}%
\begin{figure}%
\centering%
\includegraphics[width=\linewidth]{figs/age_unc.pdf}%
\caption{The uncertainties of stellar ages from the literature as a function of age ($\tau$). Top panel: absolute uncertainties; bottom panel: relative uncertainties, in the sense of $\sigma_\tau / \tau$. Trend lines are shown to guide the eye. Older stars have more uncertain ages, in an absolute but not relative sense. \label{fig:unc-age}}%
\end{figure}%
\begin{figure*}%
\vspace*{1.5\baselineskip}
\centering
\begin{minipage}[t]{0.477\linewidth}
\centering%
\includegraphics[width=\linewidth]{figs/comp/R_White.pdf}%
\caption{Application of the \citet{2011ApJ...743..161W} radius scaling relation to the 80 stars. \label{fig:white}}%
\end{minipage}\hfill%
\begin{minipage}[t]{0.477\linewidth}
\centering%
\includegraphics[width=\linewidth]{figs/comp/R_MCMC_cv_seismic.pdf}%
\caption{Cross-validation of the purely seismic scaling relation for stellar radius (compare with the classical radius scaling relation in Figure~\ref{fig:scalingR}). \label{fig:scalingR-seismic}}%
\end{minipage}\hfill
\end{figure*}
It is interesting to try to pinpoint the underlying cause of the discrepancies in the classical scaling relations seen in Figures~\ref{fig:scalingM} and \ref{fig:scalingR}.
As mentioned, these relations come from
\begin{align}
\nu_{\max} &\propto \nu_{\text{ac}} \propto g / \sqrt{T_{\text{eff}}} \label{eq:nu_max} \\
\Delta\nu &\propto \sqrt{\bar\rho} \label{eq:Delta_nu}
\end{align}
where $\nu_{\text{ac}}$ is the acoustic cut-off frequency of the star, $g$ is its surface gravity and $\bar\rho$ is its mean density.
Figure~\ref{fig:scaling} compares the left and right sides for both of these relations.
While the $\Delta\nu$ scaling relation holds well (generally within about 1\%), the $\nu_{\max}$ scaling relation has larger scatter.
This coincides with other recent findings that the classical $\nu_{\max}$ scaling relation should have additional dependencies \citep{2015A&A...583A..74J, 2017ApJ...843...11V}.
A more accurate $\nu_{\max}$ relation can easily be inferred from the values given in Table~\ref{tab:scaling}.
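In solar units, these two proportionalities reduce to simple dimensionless checks (a sketch; the solar effective temperature is the value quoted earlier, and the function names are choices made here):

```python
TEFF_SUN = 5772.0  # K

def predicted_nu_max(mass, radius, teff):
    """nu_max / nu_max_sun from nu_max ~ g / sqrt(Teff),
    with g/g_sun = (M/Msun) / (R/Rsun)**2."""
    return (mass / radius ** 2) * (teff / TEFF_SUN) ** -0.5

def predicted_delta_nu(mass, radius):
    """Delta_nu / Delta_nu_sun from Delta_nu ~ sqrt(mean density),
    with rho/rho_sun = (M/Msun) / (R/Rsun)**3."""
    return (mass / radius ** 3) ** 0.5

# solar inputs recover 1; a 1 Msun star at 2 Rsun and solar Teff
# has a nu_max ratio of 0.25 and a Delta_nu ratio of sqrt(1/8)
```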
\begin{figure*}
\vspace*{1.5\baselineskip}
\centering
\includegraphics[width=0.477\linewidth]{figs/nu_max.pdf}\hfill%
\includegraphics[width=0.477\linewidth]{figs/Delta_nu.pdf}
\caption{The $\nu_{\max}$ (left, Equation~\ref{eq:nu_max}) and $\Delta\nu$ (right, Equation~\ref{eq:Delta_nu}) scaling relations. %
All quantities have been computed in solar units. %
The surface gravities and mean densities are derived from the literature values of mass and radius; all other quantities are observed. %
\label{fig:scaling}}
\end{figure*}
\mb{The cross-validation} procedure may also be used to estimate the stability of the derived exponents.
Figure~\ref{fig:stability} compares the exponents for each of the 80 fits when holding out one star each time.
The exponents do not change much between the different fits, which indicates convergence.
\mb{As a final test, I have applied the new mass and radius scaling relations to the APOKASC+SDSS sample of dwarfs and sub-giant stars \citep{2017ApJS..233...23S}.
This sample includes more evolved stars than those in the training set, with $\nu_{\max}$ and $\Delta\nu$ values down to 250 and 17 $\mu$Hz, respectively.
\mbb{Figures~\ref{fig:APOKASC-M} and \ref{fig:APOKASC-R} show that} the new radius, purely seismic radius, and new mass scaling solutions for these stars match the grid-based modeling results on average within 1.8\%, 3.7\%, and 5\%, respectively.
Given that the average modelling uncertainties for these stars are 2.4\% in radius and 4.2\% in mass, the new relations give solutions whose deviations are comparable to or below the typical uncertainties of modelling.
\mbb{Comparisons with the \citet{2016ApJ...822...15S} scaling relations, which are calculated by interpolating in a grid of models, are shown in those figures as well.
It can be seen that the new scaling relations have less scatter and smaller uncertainties.}}
\begin{figure}%
\centering%
\includegraphics[width=\linewidth]{figs/stability.pdf}%
\caption{Boxplots comparing the estimated exponents across the 80 cross-validation fits to the values given in Table~\ref{tab:scaling}, with zero being no difference.
The middle line shows the median of the residuals, the box shows the interquartile range, and the whiskers extend to the farthest points.
Note the differences in scale.
\label{fig:stability}}%
\end{figure}%
\begin{figure*}
\vspace*{1.5\baselineskip}
\centering%
\begin{minipage}[t]{0.477\linewidth}%
\includegraphics[width=\linewidth]{figs/M_Sharma.pdf}%
\end{minipage}\hfill%
\begin{minipage}[t]{0.477\linewidth}%
\includegraphics[width=\linewidth]{figs/M_MCMC_APOKASC.pdf}%
\end{minipage}%
\caption{\mbb{Comparison of \citealt{2016ApJ...822...15S} (left, squares) and new (right, circles) scaling relations as applied to the APOKASC sample of 408 main-sequence and sub-giant stars. The x-axis shows the masses of these stars as given in the APOKASC catalogue, which were determined through grid-based modelling.} \label{fig:APOKASC-M}}
\vspace*{1.5\baselineskip}
\centering
\begin{minipage}[t]{0.477\linewidth}%
\includegraphics[width=\linewidth]{figs/R_Sharma.pdf}%
\end{minipage}\hfill%
\begin{minipage}[t]{0.477\linewidth}%
\includegraphics[width=\linewidth]{figs/R_MCMC_APOKASC.pdf}%
\end{minipage}%
\caption{\mbb{\citealt{2016ApJ...822...15S} (left, squares) and new (right, circles) scaling relations for estimating stellar radius applied to the same APOKASC sample of stars as in Figure~\ref{fig:APOKASC-M}.} \label{fig:APOKASC-R}}
\end{figure*}
\section{Discussion \& Conclusions}
In this paper, I formulated new scaling relations for estimating the mass, radius, and age of solar-like stars.
I calibrated the free parameters of these relations using 80 well-studied stars whose ages have been previously determined from fits to theoretical stellar models.
The values for the calibrated relations are listed in Table~\ref{tab:scaling}.
I used cross-validation to gauge the stability of the exponents of the relations, and also to assess their accuracy.
The relations were found to have a typical precision that is similar to methods which make reference to a grid of theoretical models.
Finally, when compared with the classical scaling relations and other proposed corrections, the new relations were found to be in better agreement with the literature values of mass and radius.
For easy use, source code for these relations can be found in the \hyperref[sec:code]{Appendix}.
A few points of discussion are in order.
These relations have been fit to literature values of age, mass, and radius, which themselves were determined via fits to a grid of theoretical stellar models.
Therefore, these relations are model-dependent.
It follows that errors in the literature values may affect the accuracy of these relations.
Several sources of systematic errors may exist in the theoretical models used to estimate stellar ages, such as unpropagated uncertainties in nuclear reaction rates or atmospheric abundances.
When stellar models are inevitably improved, the stars should be fit again, and these relations re-calibrated.
That being said, the literature values used to calibrate these relations have been found to be in good agreement with external constraints, such as \emph{Gaia} radii and luminosities, interferometry, and also with other modeling efforts which are based on different theoretical models \citep{2016ApJ...830...31B, Bellinger2019}.
\mb{A large effort was made in those works to propagate sources of uncertainty stemming from the unknown model physics inputs of diffusion, convective mixing length, and convective over- and under-shooting.}
One might be tempted to calibrate these relations with stellar models directly, circumventing the need to use real stars whose ages have been fit to said models.
However, it must be kept in mind that theoretical values of $\nu_{\max}$ are generally computed using the scaling relation.
Furthermore, theoretical calculations of the large and small frequency separation are affected by the asteroseismic surface term, giving rise to systematic discrepancies between theory and observation.
The ages used here were instead determined using asteroseismic frequency ratios, which are insensitive to surface effects \citep{2003A&A...411..215R, 2005MNRAS.356..671O}, but more difficult for observers to measure.
This approach allows us the convenience of using the observed large and small frequency separations and $\nu_{\max}$.
\mb{While this paper constitutes the first effort, as far as I am aware, to develop a scaling relation for stellar age, several previous studies have sought to improve the mass and radius scaling relations.
Generally the focus has been on red-giant stars, where the discrepancies are most apparent.
A few approaches have been tried.
\citet{2011ApJ...743..161W} and \citet{2016MNRAS.460.4277G, 2017MNRAS.470.2069G} developed analytic functions to correct for deviations in the $\Delta\nu$ scaling based on stellar models.
In another approach, \citet{2016ApJ...822...15S} interpolated corrections based on a grid of models.
\citet{2018A&A...616A.104K} developed empirical functions fitting to six red giants with orbital measurements of mass and radius.}
\mb{One distinction, apart from accuracy, is that the relations developed here are not arbitrary in their form; instead, they are explicit solar homology relations following Equation~\ref{eq:scalingX}.
The tests presented in this manuscript demonstrate that the relations work well within the tested ranges (i.e., on main-sequence and early sub-giant stars).
Extrapolation of these relations (i.e., to late sub-giant and red-giant stars) is not recommended; the development of new scaling relations for red giants is to be explored in a future work, at which point it will be interesting to compare with the red-giant correction functions.
It will also be interesting to calibrate a new age relation to such stars, as there the small frequency separation ceases to be a useful diagnostic.}
\iffalse
\mb{The TESS mission is currently monitoring stars in our Galaxy for oscillations and transits \citep{2015JATIS...1a4003R, 2016ApJ...830..138C}.
The PLATO mission, set to launch in 2026, will additionally resolve individual mode frequencies for $\sim$80\,000 main-sequence stars \citep{2017AN....338..644M}.
Application of these relations will facilitate rapid characterization of these stars, thereby providing the opportunity to improve our understanding of the evolution of the Milky Way.}
\fi
\section*{Acknowledgements}
The author thanks Hans Kjeldsen, J{\o}rgen Christensen-Dalsgaard, and the anonymous referee for their suggestions which have improved the manuscript.
Funding for the Stellar Astrophysics Centre is provided by The Danish National Research Foundation (Grant agreement no.: DNRF106).
The numerical results presented in this work were obtained at the Centre for Scientific Computing, Aarhus\footnote{\url{http://phys.au.dk/forskning/cscaa/}}.
\bibliographystyle{mnras}
\section{Introduction} \label{sec:introdcution}
Among the nonlinear excitations that arise in Bose-Einstein condensates
(BECs)~\cite{Anderson1995, Davis1995},
matter-wave dark~\cite{Frantzeskakis_2010} and bright~\cite{tomio}
solitons constitute the fundamental signatures.
These structures stem from the balance between dispersion and nonlinearity and exist in single component BECs with
repulsive and attractive interparticle interactions respectively~\cite{Zakharov1972,Zakharov1973}.
More complex structures, consisting of dark solitons in one component and bright solitons hosted in the second component of
a binary BEC, have also been experimentally realized~\cite{Becker2008,Middelkamp2011,Hamner2011,Yan2011,Hoefer2011,Yan2012}.
The existence and robustness of a single dark-bright (DB) soliton as well as
interactions between multiple DB states both with each other as well as with
impurities have been exhaustively studied in such
settings~\cite{brazhnyi2011stable,yin2011coherent,Yan2011,Achilleos2011,Alvarez2013,yan2015dark,karamatskos2015stability,
katsimiga2017dark,katsimiga2017stability,katsimiga2018dark}.
In contrast to single component setups, DB solitons are the building blocks that emerge in repulsively interacting
two-component BECs~\cite{Kevrekidis2016}. In such a repulsive environment
(where bright solitonic states cannot exist on their own)
DB states owe their existence to the effective potential
created by each of the participating dark solitons
into which each of the bright solitons is trapped and
consequently waveguided~\cite{Trillo1988,Christodoulides1988,Ostrovskaya1998}.
This waveguiding notion was first introduced in the context of nonlinear
optics~\cite{Afanasyev1989,Kivshar1993,Christodoulides1996,Buryak1996,Sheppard1997,Chen1997,Kivshar1998,Ostrovskaya1999,Park2000}.
Besides the aforementioned two-component BECs, the experimental realization of
spinor BECs~\cite{Stamper-Kurn2001,Chang2004,Chang2005,Kawaguchi2012,Stamper-Kurn2013}
offers
new possibilities of investigating the different soliton entities that arise in
them~\cite{Ohmi1998,Tsuchida1998,Ieda2004,Ieda2004a,Ieda2005,Li2005,Wadati2005,Ieda2006,Uchiyama2006,Ieda2007,Zhang2007,
Kurosaki2007,Dabrowska-Wuster2007,Kawaguchi2012,Stamper-Kurn2013}.
In this context, more complex compounds in the form of dark-dark-bright (DDB) and dark-bright-bright (DBB)
solitons have been theoretically predicted~\cite{Nistazakis2008,Xiong2010}
and very recently experimentally observed~\cite{Bersano2018}.
There are multiple ways of generating single and multiple dark solitons in single-component BECs~\cite{katsimiga2018many}
(with the latter sometimes referred to as a dark soliton train~\cite{Brazhnyi2003}).
Common techniques consist of density engineering~\cite{Dutton2001,Engels2007,Shomroni2009},
phase engineering~\cite{Burger1999,Denschlag2000,Anderson2001,Becker2008},
and collision of two spatially separated condensates~\cite{Weller2008,hoefer2009matter}
(see also~\cite{bpa} for an interesting geometric higher dimensional
implementation of the latter process so as to produce vortices).
This latter generation process can be thought of as a consequence of matter
wave interference
of the two condensates~\cite{Reinhardt1997,Scott1998,Weller2008,Theocharis2010}.
The conditions under which dark soliton trains can be controllably formed are also
known~\cite{Reinhardt1997,Scott1998,Brazhnyi2003,Weller2008,Theocharis2010}.
In particular, it has been demonstrated that the number of generated dark solitons depends on the phase and momentum of
the colliding condensates~\cite{Weller2008,Theocharis2010}.
In contrast, in multi-component settings such as two-component and spinor BECs, the dynamics is much more involved.
In this context, large scale counterflow experiments exist according to
which also DB soliton trains can be created~\cite{Hamner2011}.
However, to the best of our knowledge a systematic study regarding the controllable formation of these more complex solitonic
structures and their relevant extensions in spinorial BECs is absent in the current literature.
This controlled formation process represents the core of the present investigation.
Motivated by recent experimental advances in one-dimensional (1D)
two-component~\cite{Hamner2011,Middelkamp2011,Yan2011,Hoefer2011,Yan2012} and more importantly spinor
BECs~\cite{Bersano2018}, here we report on the controllable generation of multiple soliton complexes.
These include DB solitons in two-component BECs,
and variants of these structures, i.e. DDB and DBB soliton arrays,
in three-component and spinor BECs.
For all models under consideration,
the creation process of the relevant states is based on the so-called matter wave interference
of separated condensates being generalized to multi-component systems.
In all cases, the homogeneous setting is initially discussed
and subsequently we generalize our findings to the case where an experimentally
motivated parabolic confinement, i.e. trap, is present.
Specifically, for the homogeneous settings investigated herein
the creation process is as follows.
To set up the interference dynamics,
an initial inverted rectangular pulse (IRP)
is considered~\cite{Zakharov1973} for the component(s) that will
later host the dark solitons.
The counterflow process relies on the collision of the two sides of this
pulse.
For the remaining component(s), which will
later host the bright solitons, a Gaussian pulse initial condition
is introduced.
It is shown that such a process ensures the formation of dark soliton arrays
whose soliton number can be manipulated by properly adjusting the width
of the initial IRP.
Additionally, the dispersive behavior of the Gaussian used,
due to the defocusing nature of each system,
allows its confinement in the effective potential created by each of the
formed dark solitons
and thus leads to the formation of the desired localized humps.
The latter are trapped and subsequently waveguided by the corresponding dark solitons.
In this way, arrays of robustly propagating DB, DDB and DBB solitons
in two-component, three-component and spinor systems are showcased.
Indeed, as far as the two-component system is concerned,
we verify, among other findings, that during the evolution the trajectory of each of the nucleated pairs
follows the analytical predictions stemming from the exact single-DB state.
For the three-component scenario, generalized expressions for the soliton characteristics are
extracted, and deviations from them under different initializations are
discussed in detail.
In the spinor setting the controlled nucleation
of arrays consisting of multiple DBB and DDB solitons is demonstrated,
a result that can be tested in current state-of-the-art experiments~\cite{Bersano2018}.
Remarkably, in the DDB nucleation process,
the originally formed DDB arrays transition, soon after their formation,
into beating dark solitons that gradually arise in all three hyperfine
components~\cite{Park2000,Ohberg2001,Hoefer2011,Yan2012}.
This transition stems, as we will explain in more detail below,
from the spin-mixing dynamics that allows for particle
exchange among the hyperfine components.
After the proof-of-principle in the spatially homogeneous case,
we turn to the harmonically trapped models, where once
again in order to induce the dynamics,
counterflow
techniques are utilized~\cite{Weller2008,Theocharis2010,Hamner2011}.
Now the background on top of which the spatially separated BECs are initially set
up asymptotes to a
Thomas-Fermi (TF) profile for all the participating components.
The counterflowing components are initially relaxed
in a double-well potential, while the other
component encounters a tight harmonic trap.
The system is then released and evolves in a common
parabolic potential.
It is found that properly adjusting the initial separation of the
condensates
or the chemical potential of each of the participating components leads to
the controlled nucleation of a desired number of soliton structures in this
case too, with similar functional dependences of the soliton number
on the system characteristics as above.
For the two- and three-component systems it is found that
the generated soliton arrays travel within the parabolic trap
oscillating and interacting with one another for large evolution times.
Finally, in the genuine spinor case and for a DDB formation process,
arrays of oscillating and interacting beating dark solitons
again emerge in all hyperfine components.
We find that these states
occur earlier in time when compared to the homogeneous scenario.
The spin-mixing dynamics is explained
by monitoring the populations of the three hyperfine states.
Damped oscillations of the latter are observed, in line
with the predictions for spinor $F=1$ BECs~\cite{Pu1999,Chang2005}.
The work-flow of this presentation proceeds as follows.
In Sec.~\ref{sec:model_setup} we present the different models under consideration.
In particular, the spinor $F=1$ BEC system is initially introduced
and the complexity of the model is reduced
all the way down to the single-component setting.
Subsequently, a brief discussion summarizing prior
results regarding the controllable generation of dark soliton trains
emerging in single-component systems is provided.
Finally, we comment on the initial state preparation utilized in order
to controllably generate multiple soliton complexes of the DB type in multi-component BECs.
Sec.~\ref{sec:results} contains our numerical findings ranging from two-component to spinor BEC systems.
In all the cases presented, the homogeneous setting is initially investigated,
and we next elaborate on the relevant findings in the presence of traps.
To conclude this work, in Sec.~\ref{sec:summ_concl} we summarize our findings
and we also discuss future directions.
\section{Models and setups} \label{sec:model_setup}
\subsection{Equations of motion} \label{subsec:models}
We consider a 1D harmonically confined spinor $F=1$ BEC.
Such a system can be described by three coupled Gross-Pitaevskii equations (CGPEs),
one for each of the three hyperfine states $m_F=-1,0,+1$ of, e.g., a $^{87}$Rb gas.
In the mean-field framework the wavefunctions, ${\bf \Psi}(x,t)=\left[\Psi_{+1}(x,t),\Psi_{0}(x,t),\Psi_{-1}(x,t)\right]^T$,
of the aforementioned hyperfine components are known to obey the following
GPEs (see e.g.~\cite{Stamper-Kurn2013,Bersano2018}):
\begin{subequations}\label{eq:spinor_hamiltonian}
\begin{eqnarray}
i\partial_t\Psi_{\pm 1}&= {\cal{H}}_0\Psi_{\pm 1}
+ g_n \left( |\Psi_{+1}|^2+ |\Psi_0|^2+|\Psi_{-1}|^2 \right) \Psi_{\pm1} \nonumber \\
&+g_s\left(|\Psi_{\pm 1}|^2+|\Psi_{0}|^2-|\Psi_{\mp 1}|^2\right)\Psi_{\pm 1}
+g_s\Psi_0^2\Psi^*_{\mp 1}, \nonumber \\
\label{eq:spinor_hamiltonian_a}
\end{eqnarray}
\begin{eqnarray}
i\partial_t\Psi_{0}&= {\cal{H}}_0\Psi_{0}
+g_n\left(|\Psi_{+1}|^2+|\Psi_0|^2+|\Psi_{-1}|^2\right)\Psi_{0} \nonumber \\
&+g_s\left(|\Psi_{+1}|^2+|\Psi_{-1}|^2\right)\Psi_0
+2g_s\Psi_{+1}\Psi^*_{0}\Psi_{-1}. \nonumber \\
\label{eq:spinor_hamiltonian_b}
\end{eqnarray}
\end{subequations}
In the above expressions the asterisk denotes the complex conjugate and
${\mathcal{H}}_0=-\frac{1}{2}\partial^2_x+V(x)$ is the single-particle Hamiltonian.
Here, $V(x)=\left(1/2\right)\Omega^2 x^2$ denotes (unless indicated otherwise)
the external harmonic potential with frequency $\Omega=\omega_x/\omega_{\perp}$ and $\omega_{\perp}$
is the trapping frequency in the transverse direction.
Eqs.~(\ref{eq:spinor_hamiltonian_a})-(\ref{eq:spinor_hamiltonian_b}) were made dimensionless
by measuring length, time, and energy in units of $a_{\perp}=\sqrt{\hbar/(M\omega_{\perp})}$,
$\omega_{\perp}^{-1}$, and $\hbar \omega_{\perp}$, respectively.
Here, $a_{\perp}$ is the transverse oscillator length.
In this work we consider condensates consisting of $^{87}$Rb atoms of mass $M$,
and we assume strongly anisotropic clouds having a transverse trapping frequency
$\omega_{\perp}=2\pi \times 175$~Hz $\gg \omega_x$ that is typically used in
experiments with spinor $F=1$ BECs of $^{87}$Rb atoms~\citep{Bersano2018}.
In general, spinor BECs exhibit both symmetric or spin-independent and
asymmetric or spin-dependent interatomic interactions.
In particular, $g_n$ is the so-called spin-independent interaction strength
being positive (negative) for repulsive (attractive) interatomic interactions.
$g_s$ denotes the so-called spin-dependent interaction strength
being in turn positive (negative) for antiferromagnetic (ferromagnetic) interactions~\cite{Ho1998}.
Specifically, for a 1D spin-1 BEC $g_n=\frac{2(a_0+2a_2)}{3a_{\perp}}$ and $g_s=\frac{2(a_2-a_0)}{3a_{\perp}}$.
Here, $a_0$ and $a_2$ are the corresponding $s$-wave scattering lengths of two atoms in the scattering channels
with total spin $F=0$ and $F=2$, respectively.
The measured values of the aforementioned scattering lengths for $^{87}$Rb are
$a_0=101.8a_B$ and $a_2=100.4a_B$ where $a_B$ is the Bohr radius,
resulting in a ferromagnetic spinor BEC~\cite{Klausen2001,vanKempen2002}.
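As a quick numerical check of these values, the ratio $g_s/g_n=(a_2-a_0)/(a_0+2a_2)$, which is independent of $a_\perp$, can be evaluated directly; the short sketch below is purely illustrative:

```python
# Ratio g_s/g_n = (a2 - a0) / (a0 + 2*a2); the common factor 2/(3*a_perp)
# in g_n and g_s cancels, so a_perp is not needed here.
a0 = 101.8  # s-wave scattering length (units of a_B), total-spin F=0 channel
a2 = 100.4  # total-spin F=2 channel
ratio = (a2 - a0) / (a0 + 2 * a2)
print(f"g_s/g_n = {ratio:.4e}")  # negative -> ferromagnetic
```

The result, approximately $-4.6\times10^{-3}$, is the value of $g_s$ adopted later in the text (for $g_n=1$).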
Finally, the total number of particles and the total magnetization for the
system of Eqs.~(\ref{eq:spinor_hamiltonian_a})-(\ref{eq:spinor_hamiltonian_b}) are defined as
$N=\sum_{m_F} \int |\Psi_{m_F}|^2 \text{d}x$, and $M_z=\int \left(|\Psi_{+1}|^2-|\Psi_{-1}|^2\right) \text{d}x$,
respectively.
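These conserved quantities are readily monitored on a numerical grid; a minimal sketch follows (the trapezoidal quadrature and the Gaussian test profiles in the usage note are illustrative assumptions, not states used in this work):

```python
import numpy as np

def conserved_quantities(psi_p, psi_0, psi_m, dx):
    """Total atom number N and magnetization M_z on a uniform grid
    of spacing dx, via the trapezoidal rule."""
    def trap(f):
        return dx * (f.sum() - 0.5 * (f[0] + f[-1]))
    dens = [np.abs(p)**2 for p in (psi_p, psi_0, psi_m)]
    N = sum(trap(d) for d in dens)
    Mz = trap(dens[0] - dens[2])
    return N, Mz
```

For example, for Gaussian test profiles $e^{-x^2/2}$ and $\tfrac12 e^{-x^2/2}$ in the $m_F=\pm1$ components, the routine reproduces the analytical values $N=\tfrac54\sqrt{\pi}$ and $M_z=\tfrac34\sqrt{\pi}$.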
Simplified BEC models can be easily obtained from Eqs.~(\ref{eq:spinor_hamiltonian_a})-(\ref{eq:spinor_hamiltonian_b}).
In particular, when the spin degrees of freedom are frozen, namely for $g_s=0$,
the aforementioned system reduces to the following three-component one
\begin{equation}
i\partial_t\Psi_j={\mathcal{H}}_0\Psi_j+g_n\left(|\Psi_j|^2+|\Psi_k|^2+|\Psi_l|^2\right)\Psi_j.
\label{eq:3-CGPE}
\end{equation}
The indices $j,k,l$ here refer to each of the three $m_F=+1,0,-1$ components,
with $j \neq k \neq l$.
This three-component system, in the absence of an external confinement (i.e.,
for $V(x)=0$) and for constant $g_n$ which, without loss of
generality, can be set to $g_n=1$,
is integrable and reduces to the so-called Manakov model~\cite{Tsuchida1998,Ieda2007,Manakov1973}.
As such it admits exact soliton solutions of the DDB and DBB type~\cite{biondini2016three}.
Accounting for repulsive inter- and intra-species interactions (up
to a rescaling), we will set $g_n=1$ in our subsequent results discussion.
Additionally, the two-component BEC can be retrieved by setting e.g. $\Psi_l=0$ in
Eq.~(\ref{eq:3-CGPE}).
Note that such a binary mixture consists of two different spin states,
e.g. one with $\ket{F=1}$ and one with
$\ket{F=2}$, of the same atomic species and is
theoretically described by the following
GPEs~\cite{kevrekidis2015defocusing}
\begin{equation}
i\partial_t\Psi_j=\mathcal{H}_0 \Psi_j + g_n \left(|\Psi_j|^2 + |\Psi_k|^2 \right) \Psi_j.
\label{eq:CGPE}
\end{equation}
Here, the indices $j,k$ refer to each of the two participating species.
Finally, the single-component case is retrieved by setting $\Psi_k=0$ in Eq.~(\ref{eq:CGPE}).
The corresponding GPE reads~\cite{Gross1961,Pitaevskii1961}
\begin{equation}
i\partial_t\Psi=\mathcal{H}_0 \Psi + g_n |\Psi|^2\Psi.
\label{eq:GPE}
\end{equation}
In the forthcoming section we will first focus on the integrable version of Eq.~(\ref{eq:GPE})
and the exact arrays of dark soliton solutions that it admits.
\subsection{Prior analytical considerations and initial state preparation} \label{subsec:setup_homo}
It is well-known and experimentally confirmed that multiple dark solitons
can be systematically generated in single-component BECs,
via the so-called matter wave interference of two initially separated
condensates~\cite{Reinhardt1997,Scott1998,Weller2008,Theocharis2010}.
Aiming to generalize this mechanism to multi-component systems, below we briefly discuss
previous studies on this topic.
In particular, the problem of determining the parameters of a dark soliton formed by
an initial excitation on a uniform background has been analytically
solved by the inverse scattering method~\cite{Zakharov1973}.
In this framework, Eq.~(\ref{eq:GPE}) (with $V(x)=0$) is associated
with the Zakharov-Shabat (ZS)~\cite{Zakharov1973, Espinola-Rocha2009} linear spectral problem.
The corresponding soliton parameters are related to the eigenvalues of this spectral problem,
calculated for a given initial condensate wavefunction $\Psi(x,0)$.
Specifically, let us assume that $\Psi(x,0)$ has the form corresponding to an IRP
\begin{equation}
\Psi(x,0)=
\begin{cases}
u_0, & x<-a, \\
0, & -a<x<a, \\
u_1 e^{i\Delta\phi}, & x>a,
\end{cases}
\label{eq:square_well}
\end{equation}
with $a$, $u_0$, $u_1$, $\Delta\phi$ denoting respectively the half-width, the
two amplitudes
and the phase difference between the two sides of the IRP.
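On a numerical grid, an initial condition of this form can be sketched as follows (the grid extent and the helper name `irp` are illustrative assumptions):

```python
import numpy as np

def irp(x, u0=1.0, u1=1.0, a=5.0, dphi=0.0):
    """Inverted rectangular pulse of Eq. (square_well): amplitude u0 for
    x < -a, an empty notch for -a < x < a, and u1*exp(i*dphi) for x > a."""
    psi = np.zeros_like(x, dtype=complex)
    psi[x < -a] = u0
    psi[x > a] = u1 * np.exp(1j * dphi)
    return psi
```

Setting `dphi=0` gives the in-phase (IP) pulse and `dphi=np.pi` the out-of-phase (OP) one used below.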
Subsequently, for the case of $|u_0|=|u_1|=|u|$, it has been shown~\cite{Zakharov1973}
that the number of dark soliton pairs depends on the amplitude, $|u|$,
and the phase difference $\Delta\phi$ of the initial IRP.
Namely, for $\Delta\phi=0$, which corresponds to a symmetric or in-phase (IP) IRP
[see Eq.~(\ref{eq:square_well})],
there exist $n$ symmetric pairs of dark soliton solutions that are given by the solutions
of the (transcendental) eigenvalue equations
\begin{equation}
\abs{u}\cos(2a_n\lambda_n)=\pm\lambda_n.
\label{eq:cos}
\end{equation}
Here, $\lambda_n$ are the corresponding eigenvalues, bounded within the interval $[0,|u|]$.
Importantly, solutions of Eq.~(\ref{eq:cos}) exist only within the intervals
$2a_n\lambda_n \in \left[(n-1)\pi,\left(n-\frac{1}{2}\right)\pi\right]$ with $n=1,2,3,\dots$.
Notice also that for $n=1$ Eq.~(\ref{eq:cos}) has at least one root within the interval
$0< 2a_n\lambda_n <\frac{\pi}{2}$; thus, at least one pair of coherent structures always exists.
Multiple roots of Eq.~(\ref{eq:cos}) can be found, but only for appropriately large
values of the half-width $a_n$, such that the corresponding intervals become accessible.
Therefore, there exists a threshold for the width $a_n$ above which additional solitons can be created.
It has been demonstrated that the
lower bound for the width of the IP-IRP required to obtain $n$ symmetric pairs of soliton solutions
has the form
\begin{equation}
W_{IP}=2a_n=\frac{(n-1)\pi}{|u|}.
\label{eq:w_n_IP}
\end{equation}
In the above expression we have defined $W_{IP}\equiv 2a_n$.
Moreover, as dictated by Eq.~(\ref{eq:cos}) the total number of solitons is always even.
Additionally, in order to obtain at least one pair of soliton solutions, i.e. for $n=1$, it suffices that $W_{IP}> 0$
according to Eq.~(\ref{eq:w_n_IP}).
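The roots of Eq.~(\ref{eq:cos}) are easily located numerically, one per admissible interval, e.g. by bisection. The sketch below is a minimal illustration (not the procedure used in this work); note that roots with $\lambda_n$ close to $|u|$ correspond to very shallow, fast soliton pairs, since $\nu_n=\sqrt{|u|^2-\lambda_n^2}$.

```python
import math

def ip_eigenvalues(u, a, tol=1e-12):
    """Roots of |u| cos(2 a lam) = +/- lam, at most one per interval
    2 a lam in [(n-1) pi, (n-1/2) pi], with 0 <= lam <= |u|."""
    g = lambda lam: abs(u) * abs(math.cos(2 * a * lam)) - lam
    roots, n = [], 1
    while True:
        lo = (n - 1) * math.pi / (2 * a)
        hi = min((n - 0.5) * math.pi / (2 * a), abs(u))
        if lo >= abs(u):
            break
        if g(lo) * g(hi) <= 0:           # sign change -> bisection
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
            roots.append(0.5 * (lo + hi))
        n += 1
    return roots
```

Consistent with Eq.~(\ref{eq:w_n_IP}), a narrow pulse ($2a|u|<\pi$) yields a single pair, while wider pulses admit additional eigenvalues.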
On the other hand, for $\Delta\phi=\pi$ [see Eq.~(\ref{eq:square_well})],
i.e. for an asymmetric or out-of-phase (OP) IRP initial condition, the $n$ pairs of soliton solutions are given by
the following eigenvalue equations
\begin{equation}
\abs{u}\sin(2a_n\lambda_n)=\pm\lambda_n.
\label{eq:sin}
\end{equation}
Here, $2a_n\lambda_n \in \left[\left(n-\frac{1}{2}\right)\pi, n\pi\right]$ with $n=1,2,3,\dots$.
In the OP case the corresponding threshold for the width $2a_n$ reads [see Eq.~(\ref{eq:sin})]
\begin{equation}
W_{OP}=2a_n=\frac{(n-\frac{1}{2})\pi}{|u|},
\label{eq:w_n_OP}
\end{equation}
where $W_{OP}\equiv 2a_n$ is introduced.
For both IP- and OP-IRPs the amplitude, $\nu_n$,
of each dark soliton pair is defined by the eigenvalues $0\leq |\lambda_n| \leq |u|$
through the relation $\nu_n=\sqrt{|u|^2-\lambda_n^2}$.
Also each soliton's velocity is given by $v_n=\pm\lambda_n$.
From Eq.~(\ref{eq:w_n_OP}) and for $n=1$,
we can again obtain the minimum width that ensures the existence of at least one pair of solitons,
namely $W_{OP}=\pi/(2|u|)$.
Although Eq.~(\ref{eq:sin}) gives the solutions for $n$-pairs of solitons, there exists also
an isolated wave for $\lambda=0$ corresponding to
a black soliton with $\nu=|u|$ and $v=0$.
Summarizing, an odd number of dark solitons
is expected to be generated for OP initial conditions.
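The OP case, Eq.~(\ref{eq:sin}), can be treated with the same bisection strategy. The minimal sketch below also returns the amplitudes $\nu_n=\sqrt{|u|^2-\lambda_n^2}$ and velocities $v_n=\pm\lambda_n$, together with the isolated $\lambda=0$ black soliton, so that the total soliton count is odd:

```python
import math

def op_soliton_parameters(u, a, tol=1e-12):
    """Roots of |u| sin(2 a lam) = +/- lam in 2 a lam in [(n-1/2) pi, n pi],
    plus the isolated lam = 0 black soliton; returns (nu, v) pairs."""
    g = lambda lam: abs(u) * abs(math.sin(2 * a * lam)) - lam
    sol = [(abs(u), 0.0)]            # black soliton: nu = |u|, v = 0
    n = 1
    while True:
        lo = (n - 0.5) * math.pi / (2 * a)
        hi = min(n * math.pi / (2 * a), abs(u))
        if lo >= abs(u):
            break
        if g(lo) * g(hi) <= 0:       # sign change -> bisection
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if g(lo) * g(mid) > 0 else (lo, mid)
            lam = 0.5 * (lo + hi)
            nu = math.sqrt(u * u - lam * lam)
            sol += [(nu, lam), (nu, -lam)]   # symmetric +/- pair
        n += 1
    return sol
```

For $|u|=1$ and $a=5$, this yields three symmetric pairs plus the central black soliton, i.e. seven dark states, matching the OP scenario of Fig.~\ref{fig:Ds}(b).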
We should remark at this point that Eq.~(\ref{eq:w_n_IP}) and Eq.~(\ref{eq:w_n_OP})
dictate the dependence of the generated number of dark solitons not only on the phase,
but also on the momenta (through the relation $v_n=\pm\lambda_n$)
of the colliding condensates~\cite{Weller2008,Theocharis2010}.
In particular, for larger initial widths
the number of dark solitons generated increases
since the two sides of the IRP acquire, during the counterflow, larger momenta
(see also here the works of Refs.~\cite{Ostrovskaya1999,Nikolov2004} and references
therein for relevant studies in nonlinear optics).
It is also relevant to qualitatively bear in mind the effective intuition
that the space between the two sides of the IRP is filled with dark solitons
in ``units'' of the healing length.
Finally, we must also note that in the BEC context an initial state preparation
having the form of Eq.~(\ref{eq:square_well}) can, in principle, be achieved
by standard phase imprinting methods and the
use of phase masks~\cite{Denschlag2000,scherer2007vortex,Becker2008}.
Figures~\ref{fig:Ds}(a) and \ref{fig:Ds}(b) illustrate profile snapshots (at $t=150$)
of the density, $|\Psi|^2$, for IP- and OP-IRPs, respectively.
As per our discussion above, an even number of dark solitons is expected and indeed observed for an IP-IRP
[see Fig.~\ref{fig:Ds}(a)]. In particular,
for an initial amplitude $|u|=1$ and half-width $a=5$,
three pairs of dark solitons symmetrically placed around
the origin ($x=0$) are clearly generated.
On the other hand,
for an OP-IRP an odd number of solitons occurs, consisting of three
pairs of dark states
formed symmetrically around $x=0$ and an isolated black soliton residing at $x=0$ [see Fig.~\ref{fig:Ds}(b)].
In both cases, by inspecting the relevant phase, $\phi$, the characteristic phase jump, $\Delta\phi$, expected
for each of the nucleated dark states can be clearly inferred, located right at the corresponding density minima
[see dashed lines in Figs.~\ref{fig:Ds}(a) and \ref{fig:Ds}(b)].
Notice that all the solitons formed for both IP- and OP-IRPs are gray (moving) ones since $0<\Delta\phi<\pi$,
except for the black one shown in Fig.~\ref{fig:Ds}(b)
which has a phase shift $\Delta\phi=\pi$.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig1-eps-converted-to.pdf}
\caption{Left axes: Profile snapshots of the density, $|\Psi|^2$, at $t=150$ showcasing the generated dark solitons
for (a) IP-IRP and (b) OP-IRP initial conditions. In both cases $|u|=1$ and $a=5$, resulting in three pairs of dark solitons
being formed in (a) and three pairs and a central black soliton in (b).
Right axes: Snapshots of the corresponding phase, $\phi$,
(see legend) for (a) an IP-IRP and (b) an OP-IRP
illustrating the characteristic phase-jump occurring at each of the dark soliton minima.
Phase-shifts $0 < \Delta\phi < \pi$ correspond to moving (gray) solitons, and the maximum phase-shift $\Delta\phi=\pi$
belongs to the black soliton centered at $x=0$ in the OP case (b).}
\label{fig:Ds}
\end{figure}
Up to this point we have briefly reviewed the well-known results regarding the controllable generation of multiple dark solitons in
homogeneous single-component settings.
Below, we focus on the controllable formation of more complex solitonic entities that appear in multi-component BECs.
In this latter context analytical expressions like the ones provided by Eq.~(\ref{eq:w_n_IP}) and Eq.~(\ref{eq:w_n_OP}) are,
to the best of our knowledge, currently unavailable in the literature
for the initial waveforms considered herein.
Thus, in the following we resort to a systematic numerical investigation aiming at controlling the emergence of more complex solitonic
structures consisting of multiple solitons of the DB type.
In particular, we initially focus on the simplest case scenario, i.e. a two-component BEC [see Eq.~\eqref{eq:CGPE}].
Next in our systematic progression, we consider a three-component mixture [see Eq.~(\ref{eq:3-CGPE})]; finally, we turn our attention
to the true spinorial BEC system [see Eqs.~(\ref{eq:spinor_hamiltonian_a})-(\ref{eq:spinor_hamiltonian_b})].
Additionally, in all cases that will be examined herein,
in order to initialize the dynamics we use as the initial condition for the component(s) that
will host multiple dark solitons during the evolution the IRP wavefunction given by Eq.~(\ref{eq:square_well}).
Furthermore, for the component(s) that will host multiple bright states during the evolution, a Gaussian pulse is used.
The latter ansatz is given by
\begin{equation}
\Psi(x,0)=\sqrt{A}\exp\left[-\frac{1}{2}\kappa^2(x-X_0)^2\right],
\label{eq:gaussian}
\end{equation}
with $A$, $\kappa$ and $X_0$ denoting respectively the amplitude, the inverse width and the center of the Gaussian pulse.
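For concreteness, the default initial data used below ($|u|=1$, $\Delta\phi=0$, $A=\kappa=1$, $X_0=0$) can be assembled as follows; the grid extent is an illustrative assumption:

```python
import numpy as np

x = np.linspace(-60.0, 60.0, 4001)   # illustrative grid
a, u = 5.0, 1.0                      # IRP half-width and amplitude
A, kappa, X0 = 1.0, 1.0, 0.0         # Gaussian parameters, Eq. (gaussian)

# Dark-soliton-hosting component: in-phase IRP, Eq. (square_well)
psi_d = np.where(np.abs(x) > a, u + 0j, 0.0 + 0j)
# Bright-soliton-hosting component: Gaussian pulse
psi_b = np.sqrt(A) * np.exp(-0.5 * kappa**2 * (x - X0)**2)
```

The norm of the Gaussian component is $\int A e^{-\kappa^2(x-X_0)^2}\,dx = A\sqrt{\pi}/\kappa$, which fixes the particle number initially available to form bright solitons.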
To minimize the emitted radiation during the counterflow process, in the trapped scenarios the following procedure is used.
The multi-component system is initially relaxed to its ground state configuration. For the relaxation process
we use as an initial guess TF profiles for
all the participating components, i.e. $\Psi (x)=\sqrt{\mu-V_i(x)}$ wherever $\mu>V_i(x)$ (and zero elsewhere).
Here, $\mu$ denotes the common chemical potential assumed throughout this work
for all models under consideration.
It is relevant to mention in passing here that the selection of a common
$\mu$ is a necessity (due to the spin-dependent interaction) in the spinor
system, but not in the Manakov case (where it constitutes a simplification
in order to reduce the large number of parameters in the problem).
Additionally, $i=d,b$ indicates the different traps used
for the participating species.
In particular, the component(s) that will host dark solitons during the evolution is (are) confined in a double-well potential
that reads~\cite{Reinhardt1997,Theocharis2010}
\begin{equation}
V_d(x)=V(x)+G\exp\left(-x^2/w^2\right).
\label{eq:V_m}
\end{equation}
In Eq.~(\ref{eq:V_m}), $V(x)$ is the standard harmonic potential,
while $G$ and $w$ are the amplitude and width of the Gaussian barrier used.
Tuning $G$ and $w$ allows us to control the spatial separation of the two condensates.
We also note in passing that the choice of Eq.~(\ref{eq:V_m}) is based on the standard way
to induce the counterflow dynamics in single-component BEC experiments~\cite{Weller2008,hoefer2009matter}.
The remaining component(s), which will host bright solitons during the evolution,
are trapped solely in a harmonic potential $V_b(x)=\frac{1}{2}\Omega_b^2x^2$, with $\Omega_b>\Omega$.
The latter choice is made in order to reduce the initial spatial overlap between the components which, in turn, facilitates soliton
generation during the dynamics.
After the above-discussed relaxation process the system is left to dynamically evolve in the common harmonic potential $V(x)$
by switching off the barrier in Eq.~(\ref{eq:V_m}), i.e.
setting $G=0$, and also removing $V_b$ by setting $\Omega_b=0$.
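The trapping protocol just described can be summarized in code; the sketch below uses the default parameter values quoted further below ($\mu=1$, $G=5\mu$, $w^2=5$, $\Omega=0.05$, $\Omega_b=30\Omega$) and is only illustrative:

```python
import numpy as np

Omega, mu = 0.05, 1.0
G, w2 = 5.0 * mu, 5.0            # barrier height and squared width
Omega_b = 30.0 * Omega           # tight trap for the bright component(s)

V   = lambda x: 0.5 * Omega**2 * x**2            # common harmonic trap
V_d = lambda x: V(x) + G * np.exp(-x**2 / w2)    # double well, Eq. (V_m)
V_b = lambda x: 0.5 * Omega_b**2 * x**2          # tight harmonic trap

# Release: set G = 0 and remove V_b (Omega_b = 0); all components then
# evolve in the common parabolic potential V(x).
```

The Gaussian barrier raises $V_d(0)$ by $G$ while leaving the potential essentially harmonic away from the origin, which is what spatially separates the two counterflowing condensates before the release.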
In all cases under investigation, in order to simulate the counterflow dynamics
of the relevant mixture a fourth-order Runge-Kutta integrator is employed,
and a second-order finite differences method is used for the spatial derivatives.
The spatial and time discretization are $\diff x=0.1$ and $\diff t=0.001$ respectively.
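A minimal sketch of such a scheme for the single-component Eq.~(\ref{eq:GPE}) is given below; periodic boundaries are an illustrative assumption (the boundary treatment of the actual simulations is not specified here):

```python
import numpy as np

def gpe_rhs(psi, x, g=1.0, V=None):
    """Right-hand side -i (H0 + g|psi|^2) psi, with a second-order
    central-difference Laplacian on a periodic grid."""
    dx = x[1] - x[0]
    lap = (np.roll(psi, -1) - 2.0 * psi + np.roll(psi, 1)) / dx**2
    pot = V(x) if V is not None else 0.0
    return -1j * (-0.5 * lap + (pot + g * np.abs(psi)**2) * psi)

def rk4_step(psi, x, dt, **kw):
    """One classical fourth-order Runge-Kutta step."""
    k1 = gpe_rhs(psi, x, **kw)
    k2 = gpe_rhs(psi + 0.5 * dt * k1, x, **kw)
    k3 = gpe_rhs(psi + 0.5 * dt * k2, x, **kw)
    k4 = gpe_rhs(psi + dt * k3, x, **kw)
    return psi + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
```

As a basic check, a uniform background $\Psi=1$ with $g=1$ and $V=0$ evolves as $e^{-igt}$, which the scheme reproduces to high accuracy at the quoted step size.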
Moreover, unless stated otherwise, throughout this work we
fix $|u|=1$, $\Delta\phi=0$ [see Eq.~(\ref{eq:square_well})]
and $A=1$, $\kappa=1$, $X_0=0$ [see Eq.~(\ref{eq:gaussian})].
The default parameters for the trapped scenarios are
$\mu_j=\mu=1$, (with $j$ denoting the participating components)
$G=5\mu$, $w^2=5$, $\Omega=0.05$ and $\Omega_b=30\Omega$.
We have checked that slight deviations from these parametric
selections do not significantly affect our qualitative observations
reported below.
Additionally, for the spinor BEC system we also fix
$g_s=-4.6{\times}10^{-3}$. Notice that (given $g_n=1$) the chosen value is exactly the ratio
$\frac{a_2-a_0}{a_0+2a_2}$ that is typically used for ferromagnetic spinor $F=1$ BECs of $^{87}$Rb
atoms~\cite{Klausen2001,vanKempen2002}.
However, we note that the numerical findings to be presented below
are not altered significantly
even upon considering a spinor $F=1$ BEC of $^{23}$Na atoms.
Finally, focusing on $^{87}$Rb BEC systems,
our dimensionless parameters can be expressed in dimensional form by
assuming a transversal trapping frequency $\omega_{\perp}=2\pi \times 175$~Hz.
Then all time scales must be rescaled by $8.1$~s and all length scales
by $100~\rm{\mu m}$.
This yields an axial trapping frequency $\omega_{x} \approx 2\pi \times 1.1$~Hz
which is accessible by current state-of-the-art experiments~\cite{Bersano2018}.
The corresponding aspect ratio is $\omega_{x}/\omega_{\perp}= 5 \times 10^{-3}$
and as such lies within the range of applicability of the 1D GP theory according to the
criterion $N a^4_{\perp}/a^2 a^2_z \gg 1$~\cite{pitaevskii2016bose}.
Here $a_{\perp}$, and $a_z$ denote respectively the oscillator
length in the transversal and axial direction, while $a$ is the three-dimensional $s$-wave scattering length.
\section{Numerical results and discussion} \label{sec:results}
\subsection{Two-component BEC} \label{subsec:psuedo-spinor}
In this section we present our findings regarding the controlled generation of arrays
of DB solitons and their robust evolution in
two-component BECs~\cite{Becker2008,Middelkamp2011,Hamner2011,Yan2011,Hoefer2011,Yan2012}.
To induce the counterflow dynamics we utilize the methods introduced in Sec.~\ref{sec:model_setup} B.
Before delving into the associated dynamics we should first recall that in the integrable limit, i.e. $g_n=1$ and $V(x)=0$,
the system of Eqs.~(\ref{eq:CGPE}) admits an exact DB soliton solution. The corresponding DB waveforms
read~\cite{Yan2011,biondini2016three,katsimiga2017dark,katsimiga2017stability,katsimiga2018dark}
\begin{eqnarray}
\Psi_d (x,t) &=& \Big[\nu \tanh\left[ \mathcal{D} \left (x-x_0(t)\right) \right]+i\lambda \Big]e^{-it},
\label{eq:DB_d} \\
\Psi_b (x,t) &=& \eta \sech\left[\mathcal{D} \left(x-x_0(t)\right) \right] e^{\left[ ikx+i\varphi(t) \right]},
\label{eq:DB_b}
\end{eqnarray}
and are subject to the boundary conditions $|\Psi_d|^2\rightarrow 1$ and
$|\Psi_b|^2\rightarrow 0$ as $|x|\rightarrow \infty$, in the dimensionless units adopted herein.
In Eqs.~(\ref{eq:DB_d})-(\ref{eq:DB_b}) $\Psi_d$ ($\Psi_b$) is the wavefunction of the dark (bright) soliton component.
In the aforementioned solutions, $\nu$ and $\eta$ are the amplitudes of the dark and the bright soliton respectively,
while $\lambda$ sets the velocity of the dark soliton.
Furthermore, $\mathcal{D}$ denotes the common --across components--
inverse width
parameter and $x_0(t)$, which will be traced numerically later on,
refers to the center position of the DB soliton (see also our discussion below).
Additionally, in the above expressions $k=\mathcal{D}\left(\lambda/\nu\right)$
is the constant wavenumber of the bright soliton associated with the DB soliton's velocity,
and $\varphi(t)$ is its phase.
Inserting the solutions of Eqs.~(\ref{eq:DB_d})-(\ref{eq:DB_b}) in the system of
Eqs.~(\ref{eq:CGPE}) leads to the following conditions that the DB soliton parameters must satisfy for the above solution
to exist
\begin{eqnarray}
\mathcal{D}^2 &=& \nu^2-\eta^2 ,
\label{eq:DB_D}
\\
\dot{x}_0 &=& \mathcal{D}\frac{\lambda}{\nu},
\label{eq:DB_v}
\end{eqnarray}
where $\dot{x}_0$ is the DB soliton velocity.
Through the normalization of $\Psi_b$ we can connect the number of particles of the bright component, $N_b$, with $\eta$ and $\mathcal{D}$
\begin{equation}
N_b=\int |\Psi_b(x,t)|^2 \text{d}x=\frac{2\eta^2}{\mathcal{D}}.
\label{eq:DB_N}
\end{equation}
In the following we will use the aforementioned conditions, namely Eqs.~(\ref{eq:DB_D})-(\ref{eq:DB_v}),
not only to verify the nature of the emergent states but also
to compare the trajectories of the evolved DBs to the analytical prediction provided by Eq.~(\ref{eq:DB_v}).
Moreover, by making use of Eq.~(\ref{eq:DB_N}) we will further estimate
the number of particles hosted in the bright soliton component of the mixture.
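These relations are easily verified numerically; in the sketch below the parameter values are hypothetical, chosen only to satisfy the unit-background condition $\nu^2+\lambda^2=1$ implied by the boundary conditions:

```python
import numpy as np

# Hypothetical single-DB parameters (unit background: nu^2 + lam^2 = 1)
lam = 0.2                            # sets the dark-soliton velocity
nu = np.sqrt(1.0 - lam**2)           # dark-soliton amplitude
eta = 0.6                            # bright-soliton amplitude (eta < nu)
D = np.sqrt(nu**2 - eta**2)          # common inverse width, Eq. (DB_D)
v = D * lam / nu                     # DB velocity, Eq. (DB_v)
N_b = 2.0 * eta**2 / D               # bright-component norm, Eq. (DB_N)

# Numerical check of N_b against the sech profile of Eq. (DB_b)
x = np.linspace(-40.0, 40.0, 8001)
dens = (eta / np.cosh(D * x))**2
dx = x[1] - x[0]
N_num = dx * (dens.sum() - 0.5 * (dens[0] + dens[-1]))   # trapezoid
```

Integrating the $\mathrm{sech}^2$ density on the grid reproduces the analytical value $N_b=2\eta^2/\mathcal{D}$ essentially to machine precision, mirroring the comparison between $N_b$ and $N_b^{num}$ carried out below.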
The outcome of the counterflow process for different variations of the half-width $a$ of the initial IP-IRP
is illustrated in Figs.~\ref{fig:DB}(a)-\ref{fig:DB}(f).
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig2-eps-converted-to.pdf}
\caption{Spatio-temporal evolution of the density $\abs{\Psi_1}^2$ ($\abs{\Psi_2}^2$)
of the first (second) component upon varying the
half-width $a$ of the initial IRP.
From left to right $a=3$, $a=5$ and $a=7$, allowing the generation of four [(a)-(d)],
six [(b)-(e)], and ten [(c)-(f)] DB solitons, respectively.
In all cases, top (bottom) panels illustrate the formation of dark (bright) solitons in the first (second)
component of the two-component system.
Labels $(1)$-$(3)$ introduced in panels $(b)$, $(e)$ number the DB solitons discussed in Table \ref{tab:DB_char}.}
\label{fig:DB}
\end{figure}
In particular, in all cases depicted in this figure, the spatio-temporal evolution of the densities, $|\Psi_j|^2$ (with $j=1,2$),
of both components for propagation times up to $t=150$ is presented.
It is found that from the very early stages of the dynamics the interference fringes in the first component evolve into
several dark soliton states. For example, four dark solitons can be readily seen
in Fig.~\ref{fig:DB}(a) for $a=3$.
The nucleation of these dark states leads in turn,
via the confinement of the spreading Gaussian pulse,
to the emergence of four bright solitons in the second component of the binary mixture [Fig.~\ref{fig:DB}(d)].
The latter bright waveforms are created in each of the corresponding dark minima, and are subsequently waveguided
by their dark counterparts.
The robust propagation of the newly formed array of DB solitons is illustrated for times up to $t=150$.
Importantly, we were able to show that by tuning the half-width of the initial IRP a controllable formation
of arrays of DB solitons can be achieved.
In particular, it is found that increasing the initial half-width of the IP-IRP leads to a larger number of
DB solitons being generated.
Indeed, as shown in Figs.~\ref{fig:DB}(b) and \ref{fig:DB}(e) six
DB states are formed for $a=5$, while for $a=7$ the resulting array consists of
ten DB solitons as illustrated in Figs.~\ref{fig:DB}(c) and \ref{fig:DB}(f).
We should remark also here that since an IP-IRP is utilized only an even number of DB solitons
is expected and indeed observed in all of the aforementioned cases.
This result is in line with the analytical predictions discussed in the single-component scenario [see also Eq.~(\ref{eq:w_n_IP})].
Moreover, to verify that indeed the entities formed are DB solitons we proceed as follows.
Firstly, upon fitting it is confirmed that the evolved dark and bright states have the standard
$\tanh$- and $\rm{sech}$-shaped waveforms, respectively [see Eqs.~(\ref{eq:DB_d})-(\ref{eq:DB_b})].
Then, by monitoring during evolution a selected DB pair we measure the amplitudes
$\nu$ and $\eta$ of the dark and the bright constituents, respectively.
Having at hand the numerically obtained amplitudes we then use the analytical expressions stemming from the single DB
soliton solution, namely Eqs.~(\ref{eq:DB_D})-(\ref{eq:DB_N}).
In this way estimates of the corresponding DB trajectory
as well as the number of particles, $N_b$, hosted in the selected bright soliton are extracted.
Via the aforementioned procedure, for the right-moving DB solitary wave closest to the origin ($x=0$),
labeled (1) and shown in Figs.~\ref{fig:DB}(b)-\ref{fig:DB}(e),
it is found that $N_b=0.3611$, while the numerically obtained value is $N^{num}_b=0.3607$.
Notice that the deviation between the semi-analytical calculation and the numerical one is less than $1\%$.
To have access to $N^{num}_b$ we simply integrated $|\Psi_2|^2$ within a small region around the center of the bright part
of the selected DB pair. Additionally, for the same DB pair $\dot{x}_0=0.1467$ while $\dot{x}^{num}_0=0.1495$.
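The fitting and particle-count diagnostics described above can be sketched as follows. This is an illustrative sketch only: the grid and the soliton parameters ($\eta$, $\mathcal{D}$, $x_0$) are hypothetical values, not those of the simulations.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical sketch of the diagnostics: fit a sech profile to the
# bright component and integrate |Psi_2|^2 around the soliton center
# to estimate N_b.  All parameter values are illustrative.
x = np.linspace(-20.0, 20.0, 4001)
eta, D, x0 = 0.6, 0.8, 2.0                 # amplitude, inverse width, center
psi2 = eta / np.cosh(D * (x - x0))         # sech-shaped bright waveform

def sech_profile(x, eta, D, x0):
    return eta / np.cosh(D * (x - x0))

# The fit recovers (eta, D, x0) from the evolved profile.
(eta_fit, D_fit, x0_fit), _ = curve_fit(sech_profile, x, psi2, p0=[0.5, 1.0, 1.5])

# N_b^{num}: integrate the density in a small window around the center;
# for a sech soliton the exact result is N_b = 2 * eta**2 / D.
dx = x[1] - x[0]
window = np.abs(x - x0_fit) < 10.0
N_b_num = np.sum(psi2[window] ** 2) * dx
N_b_ana = 2.0 * eta_fit ** 2 / D_fit
```

The sub-percent agreement between the integrated and closed-form particle numbers mirrors the comparison quoted in the text.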
After confirming that all entities illustrated in Figs.~\ref{fig:DB}(a)-\ref{fig:DB}(f)
are indeed DB solitons, with each of the resulting DBs following the analytical predictions of
Eqs.~(\ref{eq:DB_D})-(\ref{eq:DB_N}), we next consider different parametric variations.
In particular, we will investigate modifications in the DB soliton
characteristics when the number of the nucleated DB states is held fixed.
To this end, below we fix $a=5$ and we then vary within the interval $[0.5, 2]$ one of the
following parameters at a time: $|u|$, $A$, $\kappa$.
Before proceeding, two important remarks are in order at this point.
(i) Fixing $a=5$ is not by itself sufficient to {\it a priori} ensure that a fixed number of
DB solitons will be generated via the interference process.
This is due to the fact that the number of solitons formed is proportional to $a$ and
$|u|$, as dictated by Eq.~(\ref{eq:w_n_IP}).
This is the reason for restricting ourselves to the aforementioned interval in terms of $|u|$ ($|u| \in [0.5, 2]$).
This selection
leads to the formation of an array consisting of only six DB solitons
like the ones shown in Figs.~\ref{fig:DB}(b) and \ref{fig:DB}(e) for $A=\kappa=1$.
(ii) Additionally, variations of either $A$ or $\kappa$ could in principle affect the bright soliton
formation; however, we have not found this to be the case within the intervals
considered.
Taking advantage of the symmetric formation of these six DB structures,
in the analysis that follows we
focus our attention on the three right-moving (with respect to $x=0$)
DB solitons, labeled (1), (2), and (3), shown in Figs.~\ref{fig:DB}(b) and \ref{fig:DB}(e).
\begin{table}[t]
\renewcommand{\arraystretch}{1.5}
\centering
\begin{tabular}{c | ccccc | ccccc | ccccc }
\toprule
[0.5, 2]
& \multicolumn{5}{c}{$\abs{u} \uparrow$}
& \multicolumn{5}{ | c | }{$A \uparrow$}
& \multicolumn{5}{c}{$\kappa \uparrow$} \\
\toprule
DB
& $\nu$ & $\eta$ & $\mathcal{D}$ & $n_b$ & $\dot{x}_0$
& $\nu$ & $\eta$ & $\mathcal{D}$ & $n_b$ & $\dot{x}_0$
& $\nu$ & $\eta$ & $\mathcal{D}$ & $n_b$ & $\dot{x}_0$ \\
\hline
(1) & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$
& $\downarrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ & $\downarrow$
& $\uparrow$ & $\downarrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ \\
(2) & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$
& $\downarrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ & $\downarrow$
& $\uparrow$ & $\downarrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ \\
(3) & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\downarrow$ & $\uparrow$
& $\downarrow$ & $\uparrow$ & $\downarrow$ & $\uparrow$ & $\downarrow$
& $\uparrow$ & $\downarrow$ & $\uparrow$ & $\uparrow$ & $\downarrow$ \\
\toprule
\end{tabular}
\caption{Changes in the DB soliton characteristics upon considering different variations of the system's parameters
for fixed $a=5$.
Here, $(1)$ [$(3)$] refers to the innermost [outermost] DB soliton [see Figs.~\ref{fig:DB}(b) and \ref{fig:DB}(e)].
The top row indicates the distinct variations, namely of
$|u|$, $A$, and $\kappa$, performed separately within the interval $[0.5, 2]$.
The second row contains the soliton characteristics such as the dark, $\nu$, and bright, $\eta$, amplitudes.
Also shown are the inverse width, $\mathcal{D}$, the normalized number of particles, $n_b$, hosted in each
of the bright solitons formed in the second component
of the mixture, and the velocity, $\dot{x}_0$, of the DB pair.
$\uparrow$ ($\downarrow$) arrows indicate an increase (decrease) of the corresponding quantity.}
\label{tab:DB_char}
\end{table}
The effect that different parametric variations have on the characteristics
of these three DB solitons is summarized in Table~\ref{tab:DB_char}.
In particular in this table, the arrow $\uparrow$ ($\downarrow$) indicates an
increase (decrease) of the corresponding soliton characteristic as one of the parameters
$|u|$, $A$ and $\kappa$ is increased within the chosen interval.
In general, it is found that as the amplitude, $|u|$, of the initial IP-IRP
increases, the amplitudes, $\nu$, $\eta$, of all three DB structures increase
as well [see the second column in Table~\ref{tab:DB_char}].
Also, the resulting DB states are found to be narrower
(larger inverse width $\mathcal{D}$) and faster (larger $\dot{x}_0$).
However, the normalized number of particles, $n_b$,
hosted in each of the bright soliton constituents is found to increase for the two
innermost DB states [i.e. (1) and (2)] while it decreases for the outer one [i.e. (3)].
For instance, for the inner DB wave labeled (1) shown in the first column of Table~\ref{tab:DB_char},
$n_b$ is found to be $n_b = 0.196$ for $|u|= 0.5$, while $n_b = 0.204$ for $|u|= 1$.
Thus, the symbol $\uparrow$ is used to describe the increasing tendency of $n_b$
[see the second column of Table~\ref{tab:DB_char}].
We defined $n_b$ according to $n_b=N^{num}_b/N_2$ with $N_2=\int|\Psi_{2}|^2\text{d}x$ being the total number of
particles in the second component of the binary mixture.
For comparison here, for the outer DB soliton labeled (3)
$n_b = 0.092$ for $|u|= 0.5$ while $n_b = 0.075$ for $|u|= 1$ and thus a
symbol $\downarrow$
is introduced [see again the second column in Table~\ref{tab:DB_char}].
On the contrary, upon increasing the amplitude, $A$, of the initial Gaussian pulse [see Eq.~(\ref{eq:gaussian})]
the amplitudes of all dark (bright) solitons for all three DB pairs decrease (increase); the accompanying decrease of the
inverse width results in wider and slower soliton pairs [see the third column in Table~\ref{tab:DB_char}].
Moreover, $n_b$ is found to decrease for the two inner DB pairs while it increases for the outer one.
Variations of the inverse width, $\kappa$, of the Gaussian pulse have more or less the opposite effect to that described above.
As $\kappa$ increases, the resulting dark (bright) states have larger (smaller)
amplitudes for all three DB pairs but the solitons are narrower and slower
[see the fourth column in Table~\ref{tab:DB_char}].
Recall that narrower does not directly imply faster states since
the amplitude of the generated dark solitons is also involved [see Eqs.~(\ref{eq:DB_D}), (\ref{eq:DB_v})].
Also in this case $n_b$ increases for the outer DB pair [see the fourth column in Table~\ref{tab:DB_char}].
Finally, we also considered different displacements, $X_0$, of the initial Gaussian pulse
within the interval $[0, 7.5]$.
A behavior similar to the aforementioned $\kappa$ variation
is observed. However, the produced solitons are found to be asymmetric for $X_0 \neq 0$
due to the asymmetric positioning of the two components.
On the other hand, for $X_0 \geq a$ ($a=5$) we never observe DB soliton generation.
Having discussed in detail the homogeneous system,
we next turn our attention to the harmonically confined one [see Eq.~(\ref{eq:CGPE})].
Recall that in this case the initial guesses used for both components of the binary mixture are TF profiles.
The first component is initially confined in the double-well potential $V_d(x)$
with the width $w$ of the barrier controlling the spatial separation of the two parts of the condensate [see
Eq.~(\ref{eq:V_m})].
The corresponding second component
is in turn trapped in the harmonic potential $V_b(x)$ (see Sec.~\ref{sec:model_setup} B).
After relaxation the two-component system is left to dynamically evolve in the common parabolic trap $V(x)$.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig3-eps-converted-to.pdf}
\caption{Spatio-temporal evolution of the density $\abs{\Psi_1}^2$ ($\abs{\Psi_2}^2$) of the first (second) component
in the trapped scenario upon varying the width of the double-well barrier $w$ used for the preparation of the initial state.
From left to right $w^2=1$, $w^2=5$ and $w^2=10$,
allowing the generation of two [(a)-(d)], four [(b)-(e)], and six [(c)-(f)] DB solitons respectively.
In all cases, top (bottom) panels illustrate the formation of dark (bright) solitons in the first (second)
component of the two-component system.}
\label{fig:DB_trap}
\end{figure}
In line with our findings for the homogeneous setting, also here a desired number of DB solitons
can be achieved by properly adjusting either $w$ or the chemical potentials $\mu_i$ (with $i=1,2$) of the binary mixture.
Note that in this latter case, the amplitude of the system is directly related to $\mu$ [see Eq.~(\ref{eq:w_n_IP})].
In both cases, it is found that an increase of $w$ or $\mu$ results in more DB solitons being generated.
In particular, Figs.~\ref{fig:DB_trap}(a)-\ref{fig:DB_trap}(c) [Figs.~\ref{fig:DB_trap}(d)-\ref{fig:DB_trap}(f)]
illustrate the dynamical evolution of the density, $|\Psi_1|^2$ ($|\Psi_2|^2$), of the first (second) component
of the mixture upon increasing $w$. An array consisting of two, four and six DB soliton pairs can be observed for $w^2=1$,
$w^2=5$ and $w^2=10$ respectively.
In all cases depicted in this figure the DB states are formed from the very early stages of the
dynamics. After their formation the states begin to oscillate within the parabolic trap.
Monitoring their propagation for evolution times up to $t=450$, it is found that while coherent oscillations are
observed for the two DB case [see Figs.~\ref{fig:DB_trap}(a), \ref{fig:DB_trap}(d)], this picture is altered for larger DB
soliton arrays.
In the former case measurements of the oscillation frequency, $\omega_{osc}$,
verify that it closely follows the analytical
predictions for the single DB soliton.
Namely, $\omega^2_{osc}=\Omega^2\left(\frac{1}{2}-\frac{\chi}{\chi_0}\right)$, with
$\chi=N_2/\sqrt{\mu}$ and $\chi_0=8\sqrt{1+\left(\frac{\chi}{4}\right)^2}$~\cite{Busch2001,Yan2011}.
For instance, our semi-analytical calculation stemming from the aforementioned theoretical prediction
gives $\omega_{osc}=34.3 \times 10^{-3}$, while direct measurements from our numerical
simulations provide $\omega_{osc}^{num}=35.3 \times 10^{-3}$.
This represents a $3\%$ discrepancy, which can be attributed to the interaction
of the solitons both with one another
and with the background
excitations, with the latter having the form of sound waves.
Additionally, it should be noted that the theoretical prediction is
valid in the large $\mu$ limit (which may be partially responsible
for the relevant discrepancy).
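For concreteness, the semi-analytical estimate above can be sketched as follows, reading the quoted relation as giving $\omega^2_{osc}$ so that the $\chi\to 0$ limit recovers the single dark-soliton value $\Omega/\sqrt{2}$. The trap frequency $\Omega$ and the values of $N_2$ and $\mu$ below are illustrative, not those used in the simulations.

```python
import math

def omega_osc(Omega, N2, mu):
    # chi = N_2 / sqrt(mu), chi_0 = 8 * sqrt(1 + (chi/4)^2);
    # omega_osc^2 = Omega^2 * (1/2 - chi/chi_0)  [Busch-Anglin-type result]
    chi = N2 / math.sqrt(mu)
    chi0 = 8.0 * math.sqrt(1.0 + (chi / 4.0) ** 2)
    return Omega * math.sqrt(0.5 - chi / chi0)

# Illustrative values: a heavier bright filling (larger N_2) slows the
# oscillation below the dark-soliton limit Omega/sqrt(2).
w_dark = omega_osc(0.05, 0.0, 1.0)   # reduces to Omega/sqrt(2)
w_db = omega_osc(0.05, 1.0, 1.0)
```

Since $\chi/\chi_0$ grows monotonically with $\chi$, increasing the bright-component norm $N_2$ (or decreasing $\mu$) always lowers $\omega_{osc}$ in this approximation.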
However, for larger DB soliton arrays the number of collisions is higher and the background density is more excited,
as can be deduced by comparing Figs.~\ref{fig:DB_trap}(a), \ref{fig:DB_trap}(d)
to Figs.~\ref{fig:DB_trap}(b), \ref{fig:DB_trap}(e) and Figs.~\ref{fig:DB_trap}(c), \ref{fig:DB_trap}(f).
Importantly here the generated DB states are of different mass and thus each DB soliton oscillates
with its own $\omega_{osc}$. It is this mass difference that results in
the progressive ``dephasing'' observed during evolution.
Notice also that in all cases illustrated in the aforementioned figures
the outer (faster) DB solitons are the ones that are affected the most.
The above effect is enhanced
for larger initial separations $w$ [compare Figs.~\ref{fig:DB_trap}(b), \ref{fig:DB_trap}(e)
to Figs.~\ref{fig:DB_trap}(c), \ref{fig:DB_trap}(f)],
leading to discrepancies up to $11.6\%$ between $\omega_{osc}$ and $\omega^{num}_{osc}$ observed for the
outermost DB pair shown in Figs.~\ref{fig:DB_trap}(c), \ref{fig:DB_trap}(f).
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig4-eps-converted-to.pdf}
\caption{Spatio-temporal evolution of the density $\abs{\Psi_1}^2$ ($\abs{\Psi_2}^2$)
of the first (second) component in the trapped scenario
upon varying the chemical potential $\mu$ while fixing $w^2=5$.
From top to bottom $\mu=1$, $\mu=3$ and $\mu=5$,
leading to the emergence of four [(a)-(b)], six [(c)-(d)], and eight [(e)-(f)] DB solitons
respectively.
In all cases, left (right) panels illustrate the formation of dark (bright) solitons
in the first (second) component of the two-component system.}
\label{fig:DB_trap_mu}
\end{figure}
As mentioned above, besides $w$ also the chemical potential $\mu$ serves as a controlling parameter.
Indeed, by inspecting the spatio-temporal evolution of the densities, $|\Psi_j|^2$ (with $j=1,2$), shown in
Figs.~\ref{fig:DB_trap_mu}(a)-\ref{fig:DB_trap_mu}(f) for fixed $w^2=5$ it becomes apparent that
increasing $\mu$ leads to an increased number of DB solitons
being generated.
Four, six and eight DB solitons
are seen to be nucleated for
$\mu=1$, $\mu=3$ and $\mu=5$ respectively, and to propagate
within the BEC medium for long evolution times.
Notice that Figs.~\ref{fig:DB_trap_mu}(a), \ref{fig:DB_trap_mu}(b)
are the same as Figs.~\ref{fig:DB_trap}(b), \ref{fig:DB_trap}(e).
Increasing the system size reduces the impact that the radiation expelled
(when matter-wave interference takes place) has on the resulting DB states, as can be deduced
by comparing Figs.~\ref{fig:DB_trap_mu}(c), \ref{fig:DB_trap_mu}(d) to Figs.~\ref{fig:DB_trap}(c), \ref{fig:DB_trap}(f).
Indeed, further measurements of $\omega_{osc}$ reveal that the maximum
discrepancy observed for the outermost DB solitons when $\mu=1$
[see Figs.~\ref{fig:DB_trap_mu}(a),\ref{fig:DB_trap_mu}(b)] is of about $8.5\%$,
while upon increasing $\mu$ the discrepancy is significantly reduced.
The latter reduction is attributed to the fact that for larger $\mu$
the asymptotic prediction of $\omega_{osc}$ is progressively more accurate.
More specifically, for $\mu=5$ we obtain a discrepancy of
only $0.3\%$ for the third (counting from $x=0$) DB soliton pair
shown in Figs.~\ref{fig:DB_trap_mu}(e), \ref{fig:DB_trap_mu}(f).
Yet still, the emergent DB states have different periods of oscillation,
leading in turn to several collision events taking place during
evolution. Nevertheless, in all cases presented above a common feature
of the solitary waves is that they survive throughout our computational
horizon.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig5-eps-converted-to.pdf}
\caption{Spatio-temporal evolution of the density $|\Psi_{+1}|^2$ ($|\Psi_{-1}|^2$) of the $m_F={+1}$ ($m_F={-1}$)
component upon varying the half-width $a$ of the initial IRP.
From left to right $a_{+1}=a_{0}=3$, $a_{+1}=a_{0}=5$ and $a_{+1}=a_{0}=7$, resulting in the nucleation
of six [(a), (d)], eight [(b), (e)], and twelve [(c), (f)] DDB solitons respectively.
In all cases, top (bottom) panels illustrate the formation of dark (bright) solitons in the $m_F={+1}$ ($m_F={-1}$) component
of the three-component system.
Since the evolution of the $m_F={0}$ component is the same as the one depicted for the $m_F={+1}$,
only the two components that differ from one another are illustrated.
The labels (1)-(4) introduced in [(b), (e)] number the DDB solitons that are discussed in Table~\ref{tab:DDB_char}. }
\label{fig:DDB}
\end{figure}
\subsection{Three-component BEC mixtures} \label{subsec:three-component}
Now, we increase the complexity of the system
by adding yet another component to the previously discussed two-component mixture.
Namely we consider a three-component mixture consisting of three different hyperfine states
of the same alkali isotope such as $^{87}$Rb.
We aim at revealing the DB soliton complexes
that arise in such a system and their controllable formation via the interference processes introduced in
Sec.~\ref{subsec:setup_homo}.
From a theoretical point of view, such a three-component BEC mixture is described by a system of three coupled GPEs
(see Eqs.~(\ref{eq:3-CGPE}) in Sec.~\ref{subsec:models}), i.e., one for each of the participating $m_F=+1,0,-1$ components.
To begin our analysis, we start with the integrable version
of the problem at hand.
Namely, we fix $g_n=1$ and we set $V(x)=0$ in the corresponding Eqs.~(\ref{eq:3-CGPE}).
This homogeneous mixture admits exact solutions in the form of DDB and DBB solitons
as it was rigorously proven via the inverse scattering method~\cite{biondini2016three}.
In the following, we will attempt to produce in a controlled fashion arrays consisting of these types of
soliton compounds.
We further note that in the numerical findings to be presented below
the abbreviations in the form XYZ (with X,Y,Z=D or B) reflect the $m_F=+1,0,-1$ order.
E.g. a DDB abbreviation indicates that dark solitons are generated in the $m_F=+1,0$ components
while bright solitons are generated in the $m_F=-1$ component of the mixture.
As in the two-component setting, in order to generate a DDB configuration
the counterflow dynamics is performed by two of the participating hyperfine components.
Recall that dark solitons in each hyperfine state emerge via the destructive interference
that takes place at the origin where the two spatially separated sides of the initial IP-IRP collide.
Specifically, the initial ansatz used for the
$m_F=+1,0$ states is provided by Eq.~(\ref{eq:square_well}) and the corresponding ansatz for the $m_F=-1$ component
is the Gaussian of Eq.~(\ref{eq:gaussian}).
It turns out that we can again tailor the number of nucleated DDB solitons
by manipulating the half-width, $a_{m_F}$ (with ${m_F}=+1, 0$), of the initial IP-IRP.
To showcase the latter in Figs.~\ref{fig:DDB}(a)-\ref{fig:DDB}(f)
we present the outcome of the distinct variations of $a_{m_F}$.
Notice that as $a_{m_F}$ increases arrays consisting of a progressively
larger number of DDB solitons are formed.
Namely, $a_{+1}=a_{0}=3$ results in an array of six DDB solitons~[Figs.~\ref{fig:DDB}(a), \ref{fig:DDB}(d)].
Accordingly, when $a_{+1}=a_{0}=5$ the nucleation of eight DDB waveforms is observed
[see Figs.~\ref{fig:DDB}(b), \ref{fig:DDB}(e)],
while twelve such states occur for $a_{+1}=a_{0}=7$ [Figs.~\ref{fig:DDB}(c), \ref{fig:DDB}(f)].
In all of the above cases the spatio-temporal evolution of the densities
$|\Psi_{+1}|^2$ and $|\Psi_{-1}|^2$ is shown in the top and bottom panels of Fig.~\ref{fig:DDB}, respectively.
The resulting propagation of the ensuing DDB states is monitored for evolution times up to $t=150$.
Moreover, only the $m_F=\pm 1$ components are depicted in the aforementioned figure.
This is due to the fact that the evolution of the $m_F=0$ component is
essentially identical to the one shown for the $m_F=+1$ component.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig6-eps-converted-to.pdf}
\caption{Same as in Fig.~\ref{fig:DDB} but showcasing the generation of DBB solitons.
In this case, from left to right $a_{+1}=3$, $a_{+1}=5$ and $a_{+1}=7$,
allowing the generation of four [(a)-(d)], six [(b)-(e)],
and eight [(c)-(f)] DBB solitons respectively.
The labels (1)-(3) introduced in [(b), (e)] number the DBB solitons that are discussed in Table~\ref{tab:DBB_char}.}
\label{fig:DBB}
\end{figure}
\begin{table*}
\renewcommand{\arraystretch}{1.5}
\centering
\begin{tabular}{c | cccccc | cccccc | cccccc}
\toprule
[0.5 , 2]
& \multicolumn{6}{c}{$\abs{u_0} \uparrow$}
& \multicolumn{6}{ | c | }{$A_{-1} \uparrow$}
& \multicolumn{6}{c}{$\kappa_{-1} \uparrow$} \\
\toprule
DDB
& $\nu_{+1}$ & $\nu_0$ & $\eta_{-1}$ & $\mathcal{D}$ & $n_b$ & $\dot{x}_0$
& $\nu_{+1}$ & $\nu_0$ & $\eta_{-1}$ & $\mathcal{D}$ & $n_b$ & $\dot{x}_0$
& $\nu_{+1}$ & $\nu_0$ & $\eta_{-1}$ & $\mathcal{D}$ & $n_b$ & $\dot{x}_0$ \\
\hline
(1) & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$
& $\downarrow$ & $\downarrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ & $\updownarrow$
& $\uparrow$ & $\uparrow$ & $\downarrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ \\
(2) & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$
& $\downarrow$ & $\downarrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ & $\updownarrow$
& $\uparrow$ & $\uparrow$ & $\downarrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ \\
(3) & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\downarrow$ & $\uparrow$
& $\downarrow$ & $\downarrow$ & $\uparrow$ & $\downarrow$ & $\uparrow$ & $\updownarrow$
& $\uparrow$ & $\uparrow$ & $\downarrow$ & $\uparrow$ & $\uparrow$ & $\downarrow$ \\
(4) & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\downarrow$ & $\uparrow$
& $\downarrow$ & $\downarrow$ & $\uparrow$ & $\downarrow$ & $\uparrow$ & $\updownarrow$
& $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\downarrow$ \\
\toprule
\end{tabular}
\caption{Changes in the DDB soliton characteristics upon considering different variations of the system's parameters
and monitoring the four right-moving DDB solitons generated for fixed $|u_{+1}|=1$ and $a_{+1}=a_0=5$.
Here, $(1)$ [$(4)$] refers to the innermost [outermost] DDB state [see Figs.~\ref{fig:DDB}(b) and \ref{fig:DDB}(e)].
The top row indicates the distinct variations, namely of $|u_0|$, $A_{-1}$, and $\kappa_{-1}$,
performed within the interval $[0.5, 2]$.
The second row contains the soliton characteristics, i.e. the dark, $\nu_{+1,0}$, and bright, $\eta_{-1}$, amplitudes,
the common inverse width, $\mathcal{D}$, the normalized number of particles, $n_b$,
hosted in the bright soliton component and the velocity, $\dot{x}_0$, of the DDB pair.
$\uparrow$ ($\downarrow$) arrows indicate an increase (decrease) of the corresponding quantity.
$\updownarrow$ arrows indicate that within the above interval a non-monotonic tendency of the respective quantity is
observed.}
\label{tab:DDB_char}
\end{table*}
The same overall picture is qualitatively valid for the corresponding DBB soliton formation.
Note that in contrast to the DDB nucleation,
to generate DBB soliton arrays the counterflow dynamics is featured solely by one of the hyperfine components
which as per our choice is the $m_{F}={+1}$ one.
The remaining two hyperfine components, namely $m_{F}={0, -1}$
share the same Gaussian-shaped initial profile.
In Figs.~\ref{fig:DBB}(a)-\ref{fig:DBB}(f) the formation of four, six and eight DBB soliton
complexes is shown for $a_{+1}=3$, $a_{+1}=5$ and $a_{+1}=7$ respectively.
Notice that the number of the generated DBB states appears
to be lower when compared to the DDB solitons formed for the same value of $a_{+1}$.
For instance, four DBB solitons are formed for $a_{+1}=3$ [Figs.~\ref{fig:DBB}(a), \ref{fig:DBB}(d)] while the corresponding
DDB soliton count is six [Figs.~\ref{fig:DDB}(a), \ref{fig:DDB}(d)].
The observed difference between the number of nucleated DBB and DDB states can be intuitively
understood as follows.
For a DDB production the total number of particles is $N=2990$, while for a DBB one it is $N=1498$,
e.g. for the case examples presented in
Figs.~\ref{fig:DDB}(a), \ref{fig:DDB}(d) and Figs.~\ref{fig:DBB}(a), \ref{fig:DBB}(d) respectively.
Recall that in our simulations we fix the chemical potential and thus $N$
is a free parameter.
The significantly lower number of particles in a DBB nucleation process stems from the fact
that two of the participating components have a Gaussian initial profile and as such host fewer particles.
This decrease of the system size for a DBB realization when compared to a DDB one
may be partially responsible for the observed
decreased DBB soliton count.
Moreover, in the DDB case the presence of two components (namely the $m_F=+1, 0$ components, each characterized by an
amplitude $|u|$) with a finite background leads to a total amplitude $|u_{eff}|\approx 2|u|$.
Thus, as dictated by Eq.~(\ref{eq:w_n_IP}), the number of solitons is expected to be higher as well.
Further adding to the above, for a DBB formation only one component develops, via interference, dark solitons.
These dark solitons are, in turn, responsible for the trapping
of bright solitons in the other two hyperfine components.
However, since two components develop bright solitons effectively the number of particles
that have to be sustained by each effective dark well increases.
As such, in the DBB case the system prefers to develop fewer, but also wider and deeper, dark solitons than in the DDB process;
this is also inter-related with the smaller counterflow-induced momentum in the DBB case.
These deeper dark solitons can in turn efficiently trap and waveguide
the correspondingly fewer bright solitons.
The above intuitive explanation is fairly well supported by our findings.
Indeed, both the dark and the bright solitons illustrated in
Figs.~\ref{fig:DBB}(a), \ref{fig:DBB}(d) appear to be wider and to have larger amplitudes
when compared to the ones formed in the DDB interference process shown in Figs.~\ref{fig:DDB}(a), \ref{fig:DDB}(d).
In all cases presented in Figs.~\ref{fig:DDB}(a)-\ref{fig:DDB}(f)
and Figs.~\ref{fig:DBB}(a)-\ref{fig:DBB}(f), we were able to showcase upon fitting that
the evolved dark and bright states have the standard $\tanh$- and $\sech$-shaped waveform
respectively [see Eqs.~(\ref{eq:DB_d})-(\ref{eq:DB_b})].
Moreover, following the procedure described in the two-component setting (see Sec.~\ref{subsec:psuedo-spinor}),
we verified that the number of particles hosted in each of the bright solitons formed follows Eq.~(\ref{eq:DB_N}),
with the common inverse width, $\mathcal{D}$, satisfying the generalized conditions
\begin{eqnarray}
\mathcal{D}^2&=&\nu_j^2+\nu_k^2-\eta_l^2, \label{eq:DDB_D}
\\
\mathcal{D}^2&=&\nu_j^2-\eta_k^2-\eta_l^2, \label{eq:DBB_D}
\end{eqnarray}
for the DDB and the DBB cases respectively.
The indices $j,k,l$ in the above expressions denote the three (distinct) hyperfine
components.
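As a minimal numeric illustration of the generalized width conditions above (with hypothetical amplitudes, not fitted values from the simulations):

```python
import math

# Common inverse width D for the DDB and DBB compounds, following the
# generalized conditions above; amplitude values below are hypothetical.
def D_ddb(nu_j, nu_k, eta_l):
    # DDB: D^2 = nu_j^2 + nu_k^2 - eta_l^2
    return math.sqrt(nu_j ** 2 + nu_k ** 2 - eta_l ** 2)

def D_dbb(nu_j, eta_k, eta_l):
    # DBB: D^2 = nu_j^2 - eta_k^2 - eta_l^2
    return math.sqrt(nu_j ** 2 - eta_k ** 2 - eta_l ** 2)

D1 = D_ddb(0.6, 0.8, 0.5)
# Switching off both bright amplitudes reduces the DBB relation to the
# scalar dark-soliton result D = nu_j.
D2 = D_dbb(1.0, 0.0, 0.0)
```

Note that the DBB relation requires the dark amplitude to dominate the two bright ones, consistent with the deeper dark solitons observed in the DBB process.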
As a case example, for one of the DDB states
shown in Figs.~\ref{fig:DDB}(b), \ref{fig:DDB}(e) $N^{num}_b=0.3715$ while the semi-analytical
prediction gives $N_b=0.3721$. Notice that the deviation is again smaller than $1\%$.
As a next step, we assess the effect that
different initial configurations have on the characteristics of the resulting DDB and DBB soliton compounds.
Our findings are summarized in Tables~\ref{tab:DDB_char} and \ref{tab:DBB_char} respectively.
Specifically, for a DDB nucleation process $|u_{+1}|=1$ and $a_{+1}=a_0=5$ are held fixed.
The remaining parameters are varied (one of them at a time) within the interval $[0.5, 2]$.
The above selection leads to the appearance of eight DDB solitons symmetrically formed around the
origin, as already illustrated in Figs.~\ref{fig:DDB}(b), \ref{fig:DDB}(e).
Exploiting this symmetric formation, only the four, i.e. (1)-(4), right-moving DDB solitons
indicated in Figs.~\ref{fig:DDB}(b), \ref{fig:DDB}(e) are monitored and shown in Table~\ref{tab:DDB_char}.
\begin{table*}
\renewcommand{\arraystretch}{1.5}
\centering
\begin{tabular}{c | ccccccc | ccccccc | ccccccc}
\toprule
[0.5 , 2]
& \multicolumn{7}{c}{$\abs{u_{+1}} \uparrow$}
& \multicolumn{7}{ | c | }{$A_{0} \uparrow$}
& \multicolumn{7}{c}{$\kappa_{0} \uparrow$} \\
\toprule
DBB
& $\nu_{+1}$ & $\eta_0$ & $\eta_{-1}$ & $\mathcal{D}$ & $n_0$ & $n_{-1}$ & $\dot{x}_0$
& $\nu_{+1}$ & $\eta_0$ & $\eta_{-1}$ & $\mathcal{D}$ & $n_0$ & $n_{-1}$ & $\dot{x}_0$
& $\nu_{+1}$ & $\eta_0$ & $\eta_{-1}$ & $\mathcal{D}$ & $n_0$ & $n_{-1}$ & $\dot{x}_0$ \\
\hline
(1) & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$
& $\downarrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ & $\downarrow$ & $\downarrow$ & $\downarrow$
& $\uparrow$ & $\downarrow$ & $\uparrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ & $\updownarrow$\\
(2) & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$
& $\downarrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ & $\downarrow$ & $\downarrow$ & $\downarrow$
& $\uparrow$ & $\downarrow$ & $\uparrow$ & $\uparrow$ & $\updownarrow$& $\uparrow$ & $\updownarrow$\\
(3) & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ & $\uparrow$
& $\downarrow$ & $\uparrow$ & $\downarrow$ & $\downarrow$ & $\uparrow$ & $\uparrow$ & $\downarrow$
& $\uparrow$ & $\downarrow$ & $\uparrow$ & $\uparrow$ & $\updownarrow$& $\updownarrow$& $\updownarrow$ \\
\toprule
\end{tabular}
\caption{Same as Table~\ref{tab:DDB_char} but for the three right-moving
DBB solitons generated for fixed $a_{+1}=5$. $(1)$ [$(3)$] denotes the
innermost [outermost] DBB structure shown in Figs.~\ref{fig:DBB}(b) and \ref{fig:DBB}(e).
Other parameters used are $A_{-1}=\kappa_{-1}=1$.}
\label{tab:DBB_char}
\end{table*}
It becomes apparent, by comparing the DDB results of Table~\ref{tab:DDB_char}
with the ones found in the two-component scenario (see Table~\ref{tab:DB_char}),
that the inclusion of an extra $m_F=+1$ component prone to host dark solitons
leads to the following differences in the resulting
soliton characteristics.
As $|u_0|$ is increased within the interval $[0.5, 2]$,
all the generated DDB states appear to be faster and narrower
(see second column in Table~\ref{tab:DDB_char}).
Additionally, larger amplitudes are observed
for all the solitons in all the three hyperfine components.
The same qualitative results were also found in the relevant variation but for the two-component system
(see second column in Table~\ref{tab:DB_char}).
It is important to stress at this point that further increase of $|u_0|$
and/or different initial values of $|u_{+1}|$,
can lead to a change in the number of states generated as suggested
also by Eq.~(\ref{eq:w_n_IP}).
This is the reason for limiting our variations to the aforementioned interval.
However, our findings for choices of $|u_{+1}| \neq |u_{0}|$ suggest that,
given the same half-width $a_{+1}=a_{0}$, the number of
nucleated DDB solitons will be determined by the larger $|u_{m_F}|$ value.
On the contrary, upon varying the amplitude, $A_{-1}$,
of the initial Gaussian pulse the impact of the additional $m_F=+1$ component
is imprinted on the velocity of the resulting DDB solitons
(see third column in Table~\ref{tab:DDB_char}).
Indeed, as $A_{-1}$ increases a uniquely defined tendency of the velocity of the resulting
states cannot be inferred at least within the interval of interest here.
This result differs from the systematic
overall decrease of the DB soliton velocity
observed in the two-component scenario (see third column in Table~\ref{tab:DB_char}).
Notice also that all the remaining soliton characteristics here are similar to the ones found in the two-component scenario
(compare the third column in Tables~\ref{tab:DDB_char} and~\ref{tab:DB_char}, respectively).
Additionally, the presence of the extra $m_F=+1$
component leads to no modification
of the observed DDB soliton characteristics
when considering variations of the
inverse width, $\kappa_{-1}$, of the Gaussian pulse (see fourth column in Table~\ref{tab:DDB_char}).
Namely, for increasing $\kappa_{-1}$ all the resulting DDB states are narrower and slower,
an outcome that was also found for this
type of variation in the two-component setting (see fourth column in Table~\ref{tab:DB_char}).
Next we will check the same diagnostics but for the DBB nucleation process.
Along the same lines, the initial parameters used for a DBB realization are
$A_0=\kappa_0=1$ and $a_{+1}=5$ [as per Eq.~(\ref{eq:gaussian}) and Eq.~(\ref{eq:square_well}) respectively].
This choice results in the six DBB solitons illustrated in Figs.~\ref{fig:DBB}(b), \ref{fig:DBB}(e).
Again, due to symmetry, only the three, (1)-(3), right-moving states indicated in Figs.~\ref{fig:DBB}(b), \ref{fig:DBB}(e)
are monitored in Table~\ref{tab:DBB_char}.
When comparing the relevant findings presented in Table~\ref{tab:DBB_char} to those shown in
Table~\ref{tab:DB_char} the following conclusions can be drawn.
For increasing $|u_{+1}|$ the generated DBB solitons are found to be narrower and faster
similarly to the evolved DB states observed in the two-component scenario
(see second column in Table~\ref{tab:DB_char}).
Alterations occur only upon varying the characteristics of the initial Gaussian pulse.
In particular, the effect of adding an extra bright component upon increasing the amplitude, $A_{0}$,
of the initial Gaussian pulse is the observed increased amplitude of all bright solitons formed in this component
(see third column in Table~\ref{tab:DBB_char}).
Yet, all the resulting DBB states are found to be wider and slower for increasing $A_{0}$,
an outcome which is similar to that found in the
two-component setting (see third column in Table~\ref{tab:DB_char}).
Lastly, upon increasing $\kappa_{0}$ the impact that the extra
$m_F=0$ component has on the resulting DBB solitons is the following (see fourth column in Table~\ref{tab:DBB_char}).
Besides the observed decreased amplitude of all bright solitons formed
in this component, the outermost DBB states, i.e. (2), (3), are the ones that are affected the most.
Notice that a non-monotonic response of the normalized number of particles, $n_{0}$,
hosted in this $m_F=0$ component is found as $\kappa_{0}$ increases within the interval $[0.5, 2]$.
Additionally, the velocity of all three DBB solitons shows a non-monotonic tendency as $\kappa_{0}$ is increased.
It is relevant to note that this result is in contrast to the decrease observed in the two-component scenario
(see fourth column in Table~\ref{tab:DB_char}).
It is worth commenting at this point that in both of the above-discussed processes
we also considered variations of the relevant $a_{m_F}$ in each case.
Recall that $a_{+1,0}$ and $a_{+1}$ are the associated half-widths of the initial IRP
for a DDB and a DBB nucleation process, respectively.
In particular, by fixing all parameters to their default values (see here Sec.~\ref{subsec:setup_homo})
we varied the relevant $a_{m_F}$ within the interval $[1, 10]$.
The general conclusion for such a variation, in both processes, is that increasing $a_{m_F}$ results in
more states, which become narrower and slower as their number is increased.
Differences here mostly refer to the relevant amplitudes
of the resulting solitons and the normalized number of particles hosted in each bright soliton constituent.
Importantly, and referring solely to the DDB process, it is found that
given the same initial amplitude, $|u_{m_F}|$, the number of solitons generated depends on the
smallest initial $a_{m_F}$. In this latter case ($a_{+1} \neq a_{0}$)
a spatially modulated density background occurs for the components hosting the dark states.
Finally, in both processes
we were able to verify that for displacements $X_0\geq a$ of the initial Gaussian pulse
the generation of DDB and DBB solitons is absent.
This is in line with our findings in the two-component system.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig7-eps-converted-to.pdf}
\caption{Evolution of the densities $|\Psi_{+1}|^2$, $|\Psi_{-1}|^2$ showcasing the generated
DDB solitons in a harmonic trap with $\Omega=0.05$.
Increasing the width $w$ of the double-well barrier
allows the generation of DDB soliton arrays consisting of two [(a), (b)], four [(c), (d)] and six [(e), (f)]
DDB solitons respectively for $w^2=1$, $w^2=5$ and $w^2=10$.
In all cases top (bottom) panels illustrate the formation of dark (bright) solitons in the $m_F=+1$ ($m_F=-1$) component.
Since the evolution of the $m_F=0$ component is identical to the one shown for the $m_F=+1$ component it is omitted.
}
\label{fig:DDB_trap}
\end{figure}
We now turn to the trapped three-component DDB and DBB case, in analogy
with the corresponding two-component one.
As in the latter, for the systematic production of multiple DDB and DBB solitons
we use as a control parameter the width, $w$, of the double-well potential [see Eq.~(\ref{eq:V_m})].
Figures~\ref{fig:DDB_trap}(a)-\ref{fig:DDB_trap}(f) illustrate the formation of two, four and six DDB
solitons for $w^2=1$, $w^2=5$ and $w^2=10$ respectively.
In all cases presented in this figure top (bottom) panels depict the evolution of the density, $|\Psi_{+1}|^2$,
($|\Psi_{-1}|^2$) of the $m_F=+1$ ($m_F=-1$) component.
Notice the close resemblance of the dynamical evolution of the DDB states when compared to the relevant evolution
of the DB soliton arrays shown in Figs.~\ref{fig:DB_trap}(a)-\ref{fig:DB_trap}(f).
We remark here that the above-observed evolution holds equally for the corresponding DBB states (results not shown here for brevity).
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig8-eps-converted-to.pdf}
\caption{Profile snapshots of the density $\abs{\Psi_{m_{F}}}^2$ with $m_F=\pm 1$
at $t=45$, illustrating the generated DDB (left) and DBB (right) solitons in the trapped scenario.
In all cases $w^2=5$ and we vary the corresponding chemical potential.
From top to bottom $\mu=1$, $\mu=3$ and $\mu=5$, resulting in
four [(a)-(b)], six [(c)-(d)], and eight [(e)-(f)] DDB-DBB solitons respectively.
The $m_F=0$ component is omitted since it shows the same profile as the $m_F=+1$ ($m_F=-1$) for the DDB (DBB)
nucleation process.}
\label{fig:DDB_vs_DBB_trap}
\end{figure}
Furthermore, below we briefly report on the systematic production of the desired number of DDB and
DBB solitons upon varying the common chemical potential $\mu$ of the confined three-component system.
In Figs.~\ref{fig:DDB_vs_DBB_trap}(a)-\ref{fig:DDB_vs_DBB_trap}(f)
a direct comparison of the resulting DDB and DBB soliton compounds is provided for three
different values of $\mu$.
Evidently, the number of DDB and DBB soliton complexes generated in each different initialization
is exactly the same and as expected, it increases for increasing $\mu$.
E.g., for $\mu=3$, illustrated in Figs.~\ref{fig:DDB_vs_DBB_trap}(c) and \ref{fig:DDB_vs_DBB_trap}(d)
for the DDB and DBB processes respectively, six states are nucleated at $t=45$.
Note also that in all cases the $m_F=0$ component overlaps either with the $m_F=+1$ component (DDB nucleation process)
or with the $m_F=-1$ one (DBB nucleation process) and as such it is not shown in the relevant profiles.
Concluding, the dynamical evolution of both types of soliton arrays is qualitatively the same and closely resembles the
one observed in the two-component setting. Also, in all the different parametric variations and for both nucleation
processes studied above, the resulting arrays of DDB and DBB solitons remain robust, while oscillating and
colliding with one another, for evolution times up to $t=450$ that we have checked.
\subsection{Spinor BEC}\label{subsec:spinor}
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig9-eps-converted-to.pdf}
\caption{Spatio-temporal evolution of the density $\abs{\Psi_{m_F}}^2$
of the (a)-(c) $m_F={+1}$, (d)-(f) $m_F={0}$, and (g)-(i) $m_F={-1}$
component respectively for varying $a_{+1}$.
From left to right $a_{+1}=3$, $a_{+1}=5$ and $a_{+1}=7$,
allowing the generation of four [(a), (d), (g)], six [(b), (e), (h)] and
eight [(c), (f), (i)] DBB solitons respectively for the homogeneous spinor setting.
In all cases, (a)-(c) illustrate the formation of dark solitons in the $m_F={+1}$ component and (d)-(f) [(g)-(i)]
depict the formation of bright solitons in the $m_F={0}$ [$m_F={-1}$] component of the spinor system.
(j) Evolution of the population, $n_{m_F}$, of each hyperfine component,
and of the total magnetization, $M_z(t)$, for $a_{+1}=5$.
Notice that $n_0$ and $n_{-1}$ are three orders of magnitude smaller than $n_{+1}$ which is why
a second axis is introduced.}
\label{fig:SDBB}
\end{figure}
Up to now, the controlled formation of multiple soliton complexes of the DB type in two- and three-component BECs
has been established. In what follows we turn our attention to the spinor $F=1$ BEC~\cite{Bersano2018}.
In this way, we will be able to address the fate of the generated DDB and DBB soliton arrays
when spin degrees of freedom are taken into account.
Recall, that the evolution of this system is dictated by
Eqs.~(\ref{eq:spinor_hamiltonian_a})-(\ref{eq:spinor_hamiltonian_b}).
In order to induce the dynamics we will utilize once more the counterflow processes introduced in
Sec.~\ref{subsec:setup_homo}.
As usual, we start our analysis by considering the homogeneous system.
As in the previous section, for a DDB generation process the initial ansatz used
for the $m_F=+1,0$ [$m_F=-1$] components is given by Eq.~(\ref{eq:square_well}) [Eq.~(\ref{eq:gaussian})].
Accordingly, to dynamically produce DBB soliton arrays the corresponding initial conditions
are provided by Eq.~(\ref{eq:gaussian}) [Eq.~(\ref{eq:square_well})] for the $m_F=0,-1$ [$m_F=+1$] hyperfine components.
Figures~\ref{fig:SDBB}(a)-\ref{fig:SDBB}(j) and Figs.~\ref{fig:SDDB}(a)-\ref{fig:SDDB}(j)
summarize our numerical findings.
In particular, Figs.~\ref{fig:SDBB}(a)-(i) illustrate the evolution of the density, $\abs{\Psi_{m_F}}^2$,
of all three $m_F=+1,0,-1$ components.
The controlled generation
of four, six and eight DBB soliton arrays can be readily seen as $a_{+1}$ is increased from $a_{+1}=3$ to $a_{+1}=7$
[see Figs.~\ref{fig:SDBB}(a), (d), (g), Figs.~\ref{fig:SDBB}(b), (e), (h) and Figs.~\ref{fig:SDBB}(c), (f), (i),
respectively].
Comparing the dynamical evolution of the spinor system to the one observed in the corresponding three-component setting
[see Figs.~\ref{fig:DBB}(a)-\ref{fig:DBB}(f)]
it becomes apparent that the inclusion of the spin interaction
has a minuscule effect on both the nucleation and the long time evolution of the DBB states.
To appreciate the latter, in Fig.~\ref{fig:SDBB}(j) we monitor the temporal evolution of the
population, i.e. $n_{m_{F}}(t)=\frac{1}{N}\int |\Psi_{m_{F}}\left(x,t\right)|^2\text{d}x$,
of each hyperfine component, as well as the total magnetization
$M_z(t)=\int\left( |\Psi_{+1}\left(x,t\right)|^2-|\Psi_{-1}\left(x,t\right)|^2\right)\text{d}x$
of the spinor system for $a_{+1}=5$ (see also Sec.~\ref{subsec:models}).
Note that $n_{0}(t)$, $n_{-1}(t)$ are multiplied by a factor of $10^3$ in order to be visible,
and that the same picture holds equally for all the distinct variations of $a_{+1}$ presented in Fig.~\ref{fig:SDBB}.
As it can be deduced, oscillations of $n_{+1}(t)$, $n_{0}(t)$ and $n_{-1}(t)$ occur during the evolution.
Recall now, that a spinor condensate is subject to the so-called spin relaxation process.
The latter, allows for collisions of two $m_F=0$ atoms that can in turn produce a pair of particles in the
$m_F=+1$ and $m_F=-1$ component and vice versa~\cite{Chang2005}.
It is this continuous exchange of particles
that leads to the oscillatory trajectories observed for the bright soliton constituents of the resulting DBB arrays.
Notice that $n_{+1}(t)$ is significantly larger when compared to $n_{0,-1}(t)\sim 10^{-3}$
and, due to the rescaling used, appears almost constant ($n_{+1}(t)\approx1$) during evolution.
However, we must stress that oscillations of the population of this hyperfine component,
although not discernible here, are also present and are similar to the ones observed for
$n_{-1}(t)$. Therefore $M_z(t)$ remains constant during
the evolution while being of order unity.
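The population and magnetization diagnostics used above, $n_{m_F}(t)=\frac{1}{N}\int|\Psi_{m_F}(x,t)|^2\text{d}x$ and $M_z(t)=\int\left(|\Psi_{+1}|^2-|\Psi_{-1}|^2\right)\text{d}x$, can be evaluated on a uniform grid as in the following minimal sketch (the Riemann-sum quadrature and the placeholder wavefunctions are our own choices):

```python
import numpy as np

def populations_and_magnetization(x, psi_p1, psi_0, psi_m1):
    """Return (n_{+1}, n_0, n_{-1}, M_z) for the three hyperfine components.

    n_{m_F} = (1/N) * integral |Psi_{m_F}|^2 dx,  N = total atom number,
    M_z     = integral (|Psi_{+1}|^2 - |Psi_{-1}|^2) dx.
    """
    dx = x[1] - x[0]
    N_p1 = np.sum(np.abs(psi_p1)**2) * dx
    N_0 = np.sum(np.abs(psi_0)**2) * dx
    N_m1 = np.sum(np.abs(psi_m1)**2) * dx
    N = N_p1 + N_0 + N_m1
    return N_p1 / N, N_0 / N, N_m1 / N, N_p1 - N_m1

# Spin relaxation, 2 x (m_F=0) <-> (m_F=+1) + (m_F=-1), redistributes
# particles among the components but leaves both N and M_z invariant:
# adding delta particles to each of m_F=+1 and m_F=-1 removes 2*delta
# from m_F=0, so the difference N_{+1} - N_{-1} is unchanged.
```

By construction the three populations sum to unity, and $M_z$ is conserved under the spin-relaxation channel described in the text.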
Contrary to the DBB nucleation process investigated above, for a DDB realization
the spin-mixing dynamics plays a crucial role.
As in the previous scenario, Figs.~\ref{fig:SDDB}(a)-\ref{fig:SDDB}(i) show
the spatio-temporal evolution of the densities, $\abs{\Psi_{m_F}}^2$, of all three $m_F$ components.
Also here, by manipulating $a_{+1}=a_{0}=a$ we were able to controllably generate
arrays of DDB solitons in this homogeneous spinor setting.
From left to right in this figure, six, eight and twelve solitons are formed,
corresponding to $a=3$, $a=5$ and $a=7$ respectively.
In particular, Figs.~\ref{fig:SDDB}(a)-(c) [Figs.~\ref{fig:SDDB}(d)-(f)] depict the dark solitons formed
in the $m_F=+1$ [$m_F=0$] component.
Additionally, Figs.~\ref{fig:SDDB}(g)-(i) illustrate the bright states formed in the respective $m_F=-1$ component.
Strikingly enough, as observed in all of the aforementioned contour plots,
the background density gradually changes as time evolves (notice the change in the color gradient).
This result, as we will show later on, is attributed to the spin-mixing dynamics
that significantly alters the evolution of the DDB
soliton arrays formed.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig10-eps-converted-to.pdf}
\caption{ Same as in Fig.~\ref{fig:SDBB} but showcasing the generation of DDB solitons.
In all cases, (a)-(c) [(d)-(f)] illustrate the formation of dark solitons in the $m_F={+1,0}$ components and
(g)-(i) illustrate the formation of bright solitons in the $m_F={-1}$ component of the spinor system.
($a_1$), ($d_1$), ($g_1$) Trajectory of the originally formed right-moving DDB soliton closest to the origin shown in (a), (d), (g),
transitioning during evolution into beating dark states.
(j) Evolution of the population, $n_{m_F}$, of each hyperfine component,
as well as of the total magnetization, $M_z(t)$, for $a_{+1}=a_{0}=5$.}
\label{fig:SDDB}
\end{figure}
To shed light on the observed dynamics,
let us focus our attention on Figs.~\ref{fig:SDDB}(a), \ref{fig:SDDB}(d) and \ref{fig:SDDB}(g) for $a=3$.
A zoom is also provided in Figs.~\ref{fig:SDDB}($a.1$), \ref{fig:SDDB}($d.1$) and \ref{fig:SDDB}($g.1$)
to elucidate our analysis. In the latter figures
the DDB pair closest to the origin is monitored.
As time evolves the background density of the $m_F=+1$ component increases which suggests that transfer
of particles from the lower hyperfine components takes place.
The latter can indeed be confirmed by inspecting the evolution of the $m_F=0$ component.
Evidently, the background density of this component gradually decreases.
The corresponding density of the $m_F=-1$ component is also seen to increase.
Monitoring the evolution of the respective populations, $n_{m_{F}}(t)$, shown in Fig.~\ref{fig:SDDB}(j),
delineates the above trend.
Indeed, at $t=0$, $n_{+1}(0)=n_{0}(0)=0.5$ while $n_{-1}(0)\sim 10^{-3}$.
However, during evolution $n_{+1}(t)$ increases reaching the value of $n_{+1}(t=200)\approx 0.66$.
Accordingly, $n_{0}(t)$ decreases drastically during propagation acquiring a similar value with
$n_{-1}(t)$ at later evolution times, i.e. $n_{0}(t=200) \approx n_{-1}(t=200)\approx 0.16$.
Note also, that the total magnetization of the system is preserved with $M_z(t)=0.5$
throughout the evolution.
Returning now to the relevant densities, since the background density of the $m_F=0$ component
decreases, the dark states formed in this component begin to deform.
At later times ($t>150$) the solitonic states developed in this hyperfine
component have both a dark and a bright component [see Fig.~\ref{fig:SDDB}($d.1$)].
Similarly, at early times the $m_F=-1$ component hosts bright solitons.
Since the number of particles in this case increases, a finite background slowly appears~\cite{Li2005,Kurosaki2007}.
As such, also the bright solitons of this component begin to deform.
This deformation leads in turn to the formation of solitonic structures that
again have both a bright and a dark part, involving a breathing
between the two, and which form faster in this $m_F=-1$ component
when compared to the $m_F=0$ one [see Fig.~\ref{fig:SDDB}($g.1$)].
The same deformation occurs also in the $m_F=+1$ component but at propagation times even larger than the
ones depicted in Fig.~\ref{fig:SDDB}(a). Indeed, by inspecting the evolution
of the closest to the origin dark soliton of the originally formed DDB state shown in Fig.~\ref{fig:SDDB}($a.1$),
the dark soliton is also deformed in this case, yet the beating pattern
of panel ($g.1$) [and even that of ($d.1$)] is not as straightforwardly
discernible.
Nevertheless, close inspection indicates
that the evolved states in all three $m_F$ components bear similar characteristics
to the so-called beating dark solitons that were experimentally observed in
two-component systems~\cite{Yan2012}.
As such, these states can be thought of as the generalization of the beating
dark solitons in spinor BECs.
Before proceeding to the harmonically confined spinor BEC system, a final
comment is of relevance here.
Investigating the current setting, we also considered different initializations
in which the hyperfine states symmetric with respect to $m_F=0$ have the same initial
conditions. In this way we were able to generate symmetric variants of the DDB and
DBB states discussed above, namely DBD and BDB soliton arrays.
In these cases, our simulations indicate that the resulting states show all features found in the three-component setting.
Although the spin interaction is present, the conversion of particles from one component to another is six orders of
magnitude smaller than the total number of particles.
As such the spin interaction is negligible. For these systems, also the total magnetization is
zero, in contrast to the finite one observed for the asymmetric, in the above sense, DDB and DBB
soliton arrays addressed herein. In that light, it appears as if the
drastic effect of the spin-interaction contribution in the previous
realization is able to excite the beating dark soliton generalizations.
On the other hand, following the approach of~\cite{Yan2012} in the
three-component Manakov case, it is also possible to excite beating solitons
in the latter (spin-independent) case. However, a more systematic theoretical
analysis of the beating states is deferred to a separate work.
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig11-eps-converted-to.pdf}
\caption{
Spatio-temporal evolution of the densities $\abs{\Psi_{m_F}}^2$ of the (a)-(c) $m_F={+1}$, (d)-(f) $m_F={0}$ and (g)-(i)
$m_F={-1}$ components upon varying the width, $w$, of the double-well barrier.
From left to right $w^2=1$, $w^2=5$ and $w^2=10$, allowing the generation of two [(a), (d), (g)], four [(b), (e), (h)]
and six [(c), (f), (i)] DDB solitons respectively in the spinor system.
In all cases, (a)-(c) [(d)-(f)] illustrate the formation of dark solitons
in the $m_F={+1}$ [$m_F={0}$] component and (g)-(i)
the generated bright solitons, transitioning into beating dark states, in the $m_F={-1}$ component.
(j)-(l) Vertical cuts of $\abs{\Psi_{-1}}^2$ for the three distinct values of $w$ (see legend).
In (j) the solid rectangle indicates a beating dark soliton.}
\label{fig:SDDB_trap}
\end{figure}
In the trapped scenario we exclusively present our findings for a DDB generation process.
This is due to the fact that only this nucleation process entails new features stemming from the spin-mixing dynamics.
The initial state preparation used herein is the one described in the confined three-component setting
(see Sec.~\ref{subsec:three-component}).
Once more, by properly adjusting the initial width, $w$, of the double-well potential [see Eq.~(\ref{eq:V_m})]
the controlled formation of multiple DDB soliton complexes is achieved in this harmonically trapped spinor system.
From top to bottom Figs.~\ref{fig:SDDB_trap}(a)-(i)
show the evolution of $\abs{\Psi_{m_F}}^2$ (with $m_F=+1,0,-1$).
As $w^2$ is increased from $w^2=1$ to $w^2=10$ two, four and six such
solitons are formed
[e.g. see Figs.~\ref{fig:SDDB_trap}(a)-(c)].
Dark solitons emerge in the $m_F=+1,0$ components [see Figs.~\ref{fig:SDDB_trap}(a)-(c), \ref{fig:SDDB_trap}(d)-(f)]
while bright states are generated in
the $m_F=-1$ component [see Figs.~\ref{fig:SDDB_trap}(g)-(i)].
However, as in the homogeneous scenario, soon after their formation
all the formed states, in all hyperfine components, begin to deform.
This deformation occurs faster in the less populated $m_F=-1$ component
and later on in the other two hyperfine states.
This phenomenon is yet again attributed to the spin-mixing dynamics
that allows for particle exchange between the components.
Focusing on Figs.~\ref{fig:SDDB_trap}(a), (d), (g),
the background densities of both the $m_F=\pm1$ components increase
while the density of the $m_F=0$ one decreases.
During evolution, this exchange in population leads in turn to a transition
of the soliton states in each component into states that
bear both a dark and a bright part.
Thus, in line with our findings in the homogeneous case,
beating dark solitons are progressively formed in all three hyperfine components.
Since these beating structures are more pronounced in the $m_F=-1$ component,
in Fig.~\ref{fig:SDDB_trap}(j) profile snapshots of the density of this component
are illustrated. In particular, $|\Psi_{-1}|^2$ is depicted for
two different time instants, namely $t=100$ and $t=500$, during
the evolution and for $w^2=1$.
At early times the two bright solitons originally formed in this component
already sit on top of a small, yet finite background.
Namely, they are deformed into states that are reminiscent of the
so-called antidark
solitons~\cite{danaila2016vector,kevrekidis2004families,Katsimiga2017}.
At larger evolution times, instead of the aforementioned antidark solitons,
two beating dark states are seen to propagate. One of them is indicated
in Fig.~\ref{fig:SDDB_trap}(j) by a black rectangle.
Notice that this beating state has a density dip followed by a density hump.
The above-discussed dynamical evolution of the spinor system holds equally
for all the different variations illustrated in Figs.~\ref{fig:SDDB_trap}(a)-(i).
However, the deformation of the DDB states is found to be delayed as $w^2$ is increased.
The latter result can be deduced by comparing at earlier evolution times
the density profile shown in Fig.~\ref{fig:SDDB_trap}(l) to the relevant ones illustrated in
Figs.~\ref{fig:SDDB_trap}(j), (k).
Additionally, and also in all cases depicted in
Figs.~\ref{fig:SDDB_trap}(a)-(i),
the initially formed DDB structures that evolve later on into beating dark solitons
are seen to oscillate and interact within the parabolic trap.
However, while coherent oscillations are observed in Figs.~\ref{fig:SDDB_trap}(a), (d), (g),
incoherent ones occur when the number of states is increased (i.e. for increasing $w^2$).
In these latter cases, as shown in Figs.~\ref{fig:SDDB_trap}(b), (e), (h),
several collision events between the outer and the inner beating states take place.
Despite the much more involved dynamical evolution of the spinor system in such cases,
these beating states remain robust for all the evolution examples
that we have checked.
Furthermore, we also explored the dynamical evolution of the spinorial BEC system
for different values of the chemical potential, $\mu$.
Similarly to the aforementioned $w$ variation,
a controlled formation of larger DDB arrays as $\mu$ increases can be once more verified.
The resulting states in increasing order, in terms of $\mu$, are presented in
Figs.~\ref{fig:SDDB_trap_mu}(a)-(i) for fixed $w^2=5$.
Notice that since $w^2=5$ Figs.~\ref{fig:SDDB_trap_mu}(a)-(c) are respectively identical to
Figs.~\ref{fig:SDDB_trap}(b), \ref{fig:SDDB_trap}(e) and \ref{fig:SDDB_trap}(h).
However, increasing $\mu$ increases the system's size.
As such, arrays consisting of a larger number of DDB solitons are formed.
Indeed, six and eight DDB states are generated for $\mu=3$ and $\mu=5$ respectively.
Importantly here, it is found that the presence of the spin interaction
has a more dramatic effect on the resulting states when compared to the previous variation.
Namely, the originally formed DDB structures transition into beating dark ones
much faster when compared to the $w$ variation.
A case example can be seen in Fig.~\ref{fig:SDDB_trap_mu}(g), corresponding to $\mu=5$,
where the dark solitons of the $m_F=+1$ component evolve into beating ones already at $t=150$.
Even for the largest $w^2=10$ value considered above, such a
transition occurs for this hyperfine
component at evolution times $t \approx 300$ [see Fig.~\ref{fig:SDDB_trap}(c)].
To appreciate the effect of the spin interaction, we monitor
during evolution the population, $n_{m_{F}}(t)$ ($m_F=+1,0,-1$),
of each hyperfine component and for all the different values of $\mu$
considered herein.
In particular, from left to right Figs.~\ref{fig:SDDB_trap_mu}(j)-(l) illustrate
$n_{+1}(t)$, $n_{0}(t)$ and $n_{-1}(t)$ respectively.
Notice that the population of each hyperfine component is affected more, the larger
the value of $\mu$ is.
Evidently, the monotonic increase [$n_{\pm1}(t)$]
or decrease [$n_0(t)$] for $\mu=1$ turns into damped oscillations as $\mu$
increases.
Such a coherent spin-mixing dynamics is in line with earlier predictions in
spinor $F=1$ BECs~\cite{Pu1999,Chang2005}.
Finally, we verified that the total magnetization $M_z(t)$ remains constant during evolution
acquiring a slightly smaller value as $\mu$ is increased [see the inset in Fig.~\ref{fig:SDDB_trap_mu}(l)].
\begin{figure}[t]
\centering
\includegraphics[width=0.48\textwidth]{Fig12-eps-converted-to.pdf}
\caption{Same as Fig.~\ref{fig:SDDB_trap} but upon varying the chemical potential $\mu$.
From top to bottom $\mu=1$, $\mu=3$ and $\mu=5$, allowing the generation of four [(a)-(c)], six [(d)-(f)]
and eight [(g)-(i)] DDB solitons respectively.
In all cases, (a), (d), (g) [(b), (e), (h)] illustrate the formation of dark solitons in the $m_F={+1}$ [$m_F={0}$] component
and (c), (f), (i) the generated bright solitons in the $m_F={-1}$ component of the spinor system.
Notice that the colormap has a $2.5:2:1$ ratio between the columns.
(j)-(l) Evolution of the normalized number of particles, $n_{m_F}(t)$, for each value of $\mu$.
The inset in (l) shows the total magnetization, $M_z(t)$, for each value of $\mu$. }
\label{fig:SDDB_trap_mu}
\end{figure}
\section{Conclusions and Future perspectives} \label{sec:summ_concl}
In this work the controlled creation of multiple soliton complexes
of the DB type that appear in one-dimensional two-component,
three-component and spinor BECs has been investigated.
Direct numerical simulations of each system's dynamical evolution have been performed
both in the absence and in the presence of a parabolic trap.
In all models considered herein, the nucleation process is based
on the so-called matter-wave interference of separated condensates,
here utilized to study multi-component systems.
In this sense, this work offers a generalization of
earlier findings in single-component setups
to the much more involved multi-component ones, enabling the identification
of dark-bright solitons in two-component and dark-dark-bright and
dark-bright-bright solitons in three-component (and spinorial) gases.
To achieve control over each system's dynamical evolution
different parametric variations have been considered.
In particular, for the homogeneous systems addressed in this effort,
inverse rectangular pulses were employed for the components
featuring interference,
and Gaussian ones for the remaining participating components.
Destructive interference of the two sides of the former pulse
leads to the nucleation of an array of dark solitons.
Additionally, the dispersion of the Gaussian pulse
and its subsequent confinement in the effective potential
created by each of the nucleated dark solitons
results in the formation of bright solitons that are
subsequently trapped and waveguided by their corresponding
dark counterparts.
It is found that manipulating the width of the IRP
is sufficient to ensure the desired nucleation
of multiple soliton compounds of the DB type.
This way, arrays of DB solitons, and of DDB and DBB solitons,
are dynamically produced in the two-component and in the three-component and spinor cases, respectively.
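The interference mechanism described above can be illustrated with a minimal single-component toy simulation: a defocusing nonlinear Schr\"odinger equation evolved from an inverse-rectangular-pulse initial condition by a second-order split-step Fourier scheme. Scheme, parameters and domain below are our own illustrative choices and not necessarily the solver employed in this work:

```python
import numpy as np

# Toy split-step Fourier integrator for the defocusing 1D NLS
#   i psi_t = -(1/2) psi_xx + |psi|^2 psi,
# started from an inverse rectangular pulse: interference of the two
# halves of the background nucleates an array of dark solitons.
L, n, dt, steps = 80.0, 1024, 0.002, 5000
x = np.linspace(-L / 2, L / 2, n, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
psi = np.where(np.abs(x) < 5.0, 0.0, 1.0).astype(complex)

linear = np.exp(-0.5j * k**2 * dt)  # exact kinetic propagator for one step
for _ in range(steps):
    psi *= np.exp(-0.5j * np.abs(psi)**2 * dt)   # half nonlinear step
    psi = np.fft.ifft(linear * np.fft.fft(psi))  # full linear step
    psi *= np.exp(-0.5j * np.abs(psi)**2 * dt)   # half nonlinear step

# Every substep multiplies psi by a unit-modulus phase or applies a
# unitary FFT, so the norm is conserved to round-off accuracy, while
# density dips (dark solitons) emerge symmetrically from the pulse edges.
```

In the multi-component runs of this work, the analogous role of the nonlinear term is played by the coupled (Manakov-type or spinor) interactions, with Gaussian pulses in the remaining components being trapped by the nucleated dark solitons.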
Moreover, for the two-component system it is showcased that each of the
generated DB solitons follows the analytical expressions
stemming from the integrable theory of the Manakov system.
The same holds true also for the DBB and DDB states nucleated in the three-component system.
In the latter, generalized expressions that connect the soliton parameters are extracted and used
to appreciate modifications of the soliton characteristics under different
parametric variations.
While the same overall dynamical evolution is observed for the two- and three-component systems,
a significantly different picture can be drawn for the spinorial case.
Strikingly, and for a DDB nucleation process, it is found
that during evolution the originally formed DDB soliton arrays
begin to deform due to the spin-mixing dynamics.
The latter allows for exchange of particles between the hyperfine components.
The aforementioned deformation leads in turn to the gradual formation
of arrays of beating dark states.
The latter, once formed, are seen to robustly propagate for large evolution
times. The existence of beating dark states in spinor systems
has not, to the best of our knowledge, been reported previously
and is an interesting topic for further exploration.
For the harmonically trapped scenarios our numerical findings suggest similar
characteristics as in the homogeneous cases in terms of the nucleation
process, although naturally the dynamics is rendered more complex
due to the confinement and the induced interactions between the
produced solitary waves.
In all cases it is found that by adjusting the width of the rectangular
pulse or the chemical potential of the participating
components, the desirable number of DB, DDB and DBB soliton complexes can be generated.
This provides a sense of dynamical control and design of
desired configurations in our system.
The number of the resulting coherent structures is found to increase upon increasing each of the above parameters.
In the trapped case, the resulting multi-soliton arrays, irrespective
of their type, are found to oscillate and interact
within the parabolic trap, remaining robust for large evolution times.
Contrary to the above findings, for the spinorial BEC system
a transition of the initially formed DDB states into beating dark ones is
showcased.
Here, coherent spin-mixing dynamics is observed when monitoring
the population of each hyperfine component.
Damped oscillations of the latter occur, which are found to be enhanced upon
increasing, for example, the chemical potential of each component.
Additionally, and also in comparison to the homogeneous case,
the beating dark states are formed faster in the trapped setting.
This formation is further enhanced the larger the chemical
potential is.
It is found that the beating dark solitons persist
while oscillating and interacting with one another.
The existence of these spinorial beating states can
be tested in current state-of-the-art experiments~\cite{Bersano2018},
and it is clearly a direction of interest in its own right for
future studies.
More specifically, it would be particularly interesting to generalize the
findings associated with the two-component beating dark solitons~\cite{Yan2012}
to the spinor case and study in a detailed manner the formation
and interactions of the spinor beating dark states identified herein.
Yet, another interesting perspective would be to compare and contrast the
numerically identified DDB and DBB states of the three-component system
to the analytical expressions that are available, at least for the integrable
version of this model~\cite{biondini2016three}. More specifically,
one could generalize the criteria of the single component
IRP scenario obtained in the earlier works of~\cite{Zakharov1973}
to the formation of
both DB and also DDB or DBB solitons from similar initial
data in the multi-component case and compare
these predictions against the corresponding numerical computations.
Then, one could depart from the above Manakov limit and also study the fate of these structures
in non-integrable systems~\cite{katsimiga2018dark}, including the spinor one.
The breaking of integrability would allow in turn for effects such as the miscibility/immiscibility of the involved
components to come into play~\cite{kiehn2019spontaneous}.
The interplay of the resulting density variations with the potential persistence of
the solitary wave structures is an interesting topic for future study.
Also in the same context it would be interesting to systematically examine interactions
between multiple DDB and DBB states.
The role of other effects such as the potential Rabi coupling between the components could also
be of interest in its own right~\cite{pitaevskii2016bose}.
Lastly, as has been discussed in relevant reviews such
as~\cite{Kevrekidis2016}, many of these ideas, such
as the DB solitons (generalizing to vortex-bright ones),
the beating dark solitons, etc., naturally generalize to corresponding
higher-dimensional states. Examining the potential of producing such
states via interference, or via mechanisms more specific to higher
dimensions such as the transverse instability, would be of particular
interest.
\vspace{0.5cm}
\section*{Acknowledgements}
A.R.-R. thanks M. Pyzh and K. Keiler for helpful discussions.
This material is based upon work supported by the U.S.\ National Science Foundation under Grant No.\ PHY-1602994 and under Grant No.\ DMS-1809074 (P.G.K.).
P.G.K. is also grateful to the Alexander von Humboldt Foundation for
support and to Mr. Kevin Geier
(University of Heidelberg) for preliminary iterations on the subject
of the present work.
\bibliographystyle{apsrev4-1}
\section{Introduction}
Gradually typed languages are designed to support a mix of dynamically
typed and statically typed programming styles and preserve the
benefits of each.
Dynamically typed code can be written without conforming to a
syntactic type discipline, so the programmer can always run their
program interactively with minimal work.
On the other hand, statically typed code provides mathematically
sound reasoning principles that justify type-based refactorings,
enable compiler optimizations, and underlie formal software verification.
The difficulty is accommodating both of these styles and their benefits simultaneously:
allowing the dynamic and static code to interact without forcing the
dynamic code to be statically checked or violating the correctness of
type-based reasoning.
The linchpin to the design of a gradually typed language is the
semantics of \emph{runtime type casts}. These are runtime checks that ensure
that typed reasoning principles are valid by checking types of dynamically typed
code at the boundary between static and dynamic typing.
For instance, when a statically typed function $f : \texttt{Num} \to
\texttt{Num}$ is applied to a dynamically typed argument $x : {?}$,
the language runtime must check if $x$ is a number, and otherwise
raise a dynamic type error.
A programmer familiar with dynamically typed programming might object
that this is overly strong: for instance if $f$ is just a constant
function $f = \lambda x:\texttt{Num}. 0$ then why bother checking if
$x$ is a number since the body of the program does not seem to depend
on it?
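For concreteness, the boundary check described above can be sketched in Python (purely an illustration of the idea; \texttt{cast\_num} and \texttt{CastError} are names we invent here, not part of any particular gradual language):

```python
class CastError(Exception):
    """Raised when a value fails a runtime type cast."""

def cast_num(x):
    # The check inserted where a dynamically typed value flows into a
    # position annotated Num.  Booleans are excluded explicitly, since
    # bool is a subclass of int in Python.
    if not isinstance(x, (int, float)) or isinstance(x, bool):
        raise CastError(f"expected a number, got {type(x).__name__}")
    return x

def f(x):
    # f : Num -> Num is a constant function, but the cast still guards
    # its input so that typed reasoning about x remains valid.
    x = cast_num(x)
    return 0

f(3)                    # passes the check and returns 0
try:
    f(lambda y: y)      # a closure is rejected even though f ignores x
except CastError:
    pass
```

Rejecting the closure even though \texttt{f} never inspects it is exactly what keeps the refactoring between $0$ and $x - x$ sound.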
The reason the value is rejected is because the annotation $x :
\texttt{Num}$ should introduce an assumption that the programmer,
compiler and automated tools can rely on for behavioral reasoning in the
body of the function.
For instance, if the variable $x$ is guaranteed to only be
instantiated with numbers, then the programmer is free to replace $0$
with $x - x$ or vice-versa.
However, if $x$ can be instantiated with a closure, then $x - x$ will
raise a runtime type error while $0$ will succeed, violating the
programmer's intuition about the correctness of refactorings.
We can formalize such relationships by \emph{observational equivalence} of
programs: the two closures $\lambda x:\texttt{Num}. 0$ and $\lambda
x:\texttt{Num}. x - x$ are indistinguishable to any other program in
the language.
This is precisely the difference between gradual typing and so-called
\emph{optional} typing: in an optionally typed language (Hack,
TypeScript, Flow), annotations are checked for consistency but are unreliable
to the user, so provide no leverage for reasoning.
In a gradually typed language, type annotations should relieve the
programmer of the
burden of reasoning about incorrect inputs, as long as we are willing to accept that the program
as a whole may crash, which is already a possibility in many \emph{effectful}
statically typed languages.
However, the dichotomy between gradual and optional typing is not as
firm as one might like.
There have been many different proposed semantics of run-time type
checking: ``transient'' cast semantics~\citep{vitousekswordssiek:2017}
only checks the head connective of a type (number, function, list,
\ldots), ``eager'' cast semantics~\citep{herman2010spaceefficient} checks
run-time type information on closures, whereas ``lazy'' cast
semantics~\citep{findler-felleisen02} will always delay a type-check on
a function until it is called (and there are other possibilities, see
e.g. \cite{siek+09designspace,greenberg15spaceefficient}).
The extent to which these different semantics have been shown to
validate type-based reasoning has been limited to syntactic type
soundness and blame soundness theorems.
In their strongest form, these theorems say ``If $t$ is a closed
program of type $A$ then it diverges, or reduces to a runtime error
blaming dynamically typed code, or reduces to a value that satisfies $A$ to a
certain extent.''
However, the theorem at this level of generality is quite weak, and
justifies almost no program equivalences without more information.
Saying that a resulting value satisfies type $A$ might be a strong
statement, but in transient semantics constrains only the head
connective.
The blame soundness theorem might also be quite strong, but depends on
the definition of blame, which is part of the operational semantics of
the language being defined.
We argue that these type soundness theorems are only indirectly
expressing the actual desired properties of the gradual language,
which are \emph{program equivalences in the typed portion of the code} that are
not valid in the dynamically typed portion.
Such program equivalences typically include $\beta$-like principles,
which arise from computation steps, as well as \emph{$\eta$ equalities},
which express the uniqueness or universality of certain constructions.
The $\eta$ law of the untyped $\lambda$-calculus, which
states that any $\lambda$-term $M \equiv \lambda x. M x$, is
restricted in a typed language to only hold for terms of function type $M
: A \to B$ ($\lambda$ is the unique/universal way of making an element
of the function type).
This famously ``fails'' to hold in call-by-value languages in the
presence of effects: if $M$ is a program that prints \texttt{"hello"}
before returning a function, then $M$ will print \emph{now}, whereas
$\lambda x. M x$ will only print when given an argument. But this can be
accommodated with one further modification: the $\eta$ law is valid in
simple call-by-value languages\footnote{This does not hold in languages
with some intensional feature of functions such as reference
equality. We discuss the applicability of our main results more generally in Section \ref{sec:related}.} (e.g. SML) if we have a ``value
restriction'' $V \equiv \lambda x. V x$.
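The failure of $\eta$ for effectful call-by-value terms can be replayed in a small Python sketch (we model the term $M$ as a thunk \texttt{M()} so that ``evaluating $M$'' is an explicit step; the names are ours):

```python
log = []

def M():
    # A term that performs an effect before returning a function.
    log.append("hello")
    return lambda x: x + 1

g = M()                       # evaluating M performs the effect now
log_after_direct = list(log)  # ["hello"]

log.clear()
h = lambda x: M()(x)          # the eta-expansion delays the effect
log_after_eta = list(log)     # [] -- nothing has run yet
h(1)                          # the effect happens only on application
```

The value restriction recovers $\eta$ precisely because a value $V$ performs no effect when evaluated, so wrapping it in a $\lambda$ changes nothing observable.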
This illustrates that $\eta$/extensionality rules must be stated for
each type connective, and be sensitive to the effects/evaluation order
of the terms involved.
For instance, the $\eta$ principle for the boolean type $\texttt{Bool}$
\emph{in call-by-value} is that for any term $M$ with a free variable $x :
\texttt{Bool}$, $M$ is equivalent to a term that performs an if
statement on $x$: $M \equiv \kw{if} x (M[\texttt{true}/x])
(M[\texttt{false}/x])$.
If we have an \texttt{if} form that is strongly typed (i.e., errors on
non-booleans) then this tells us that it is \emph{safe} to run an if
statement on any input of boolean type (in CBN, by contrast an if
statement forces a thunk and so is not necessarily safe).
In addition, even if our \texttt{if} statement does some kind of
coercion, this tells us that the term $M$ only cares about whether $x$
is ``truthy'' or ``falsy'' and so a client is free to change e.g. one
truthy value to a different one without changing behavior.
This $\eta$ principle justifies a number of program optimizations,
such as dead-code and common subexpression elimination, and
hoisting an if
statement outside of the body of a function if it is well-scoped
($\lambda x. \kw{if} y \, M \, N \equiv \kw {if} y \, (\lambda x.M) \, (\lambda x.N)$).
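As a quick sanity check, the Bool $\eta$ instance can be tested directly in Python (with a hypothetical term $M$ of our own choosing, not an example from the paper's formal syntax):

```python
def M(x):
    # An arbitrary term with a free boolean variable x.
    return 1 + (2 if x else 3)

def M_eta(x):
    # if x then M[true/x] else M[false/x]
    return M(True) if x else M(False)

# The two agree on every boolean input, as the eta principle demands.
assert all(M(b) == M_eta(b) for b in (True, False))
```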
Any eager datatype, one whose elimination form is given by pattern
matching such as $0, +, 1, \times, \mathtt{list}$, has a similar $\eta$
principle which enables similar reasoning, such as proofs by induction.
The $\eta$ principles for lazy types \emph{in call-by-name} support dual
behavioral reasoning about lazy functions, records, and streams.
\textbf{An Axiomatic Approach to Gradual Typing.}
In this paper, we systematically study questions of program equivalence
for a class of gradually typed languages by working in an
\emph{axiomatic theory} of gradual program equivalence, a language and
logic we call \emph{gradual type theory} (GTT).
Gradual type theory is the combination of a language of terms and
gradual types with a simple logic for proving program equivalence and
\emph{error approximation} (equivalence up to one program erroring when
the other does not) results.
The logic axiomatizes the equational properties gradual
programs should satisfy, and offers a high-level syntax for proving
theorems about many languages at once:
if a language models gradual type theory, then it satisfies all
provable equivalences/approximations.
Due to its type-theoretic design, different axioms of program
equivalence are easily added or removed.
Gradual type theory can be used both to explore language design questions and
to verify behavioral properties of specific programs, such as correctness of
optimizations and refactorings.
To get off the ground, we take two properties of the gradual language
for granted.
First, we assume a compositionality property: that any cast from $A$
to $B$ can be factored through the dynamic type ${?}$, i.e., the cast
$\obcast{B}{A}{t}$ is equivalent to first casting up from $A$ to
${?}$ and then down to $B$: $\obcast{B}{{?}}\obcast{{?}}{A} t$.
These casts often have quite different performance characteristics,
but should have the same extensional behavior: of the cast semantics
presented in \citet{siek+09designspace}, only the partially eager
detection strategy violates this principle, and this strategy is not
common.
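The factorization through the dynamic type can be sketched as follows (a toy model only: tagged tuples stand in for ${?}$, and \texttt{cast}, \texttt{upcast}, \texttt{downcast} are illustrative names):

```python
class CastError(Exception):
    pass

def upcast(v):
    # <? <= A>: upcasts into the dynamic type always succeed.
    return ("dyn", v)

def downcast(type_check, tagged):
    # <B <= ?>: downcasts out of the dynamic type may fail.
    _, v = tagged
    if not type_check(v):
        raise CastError("downcast failed")
    return v

def cast(type_check, v):
    # <B <= A> v is implemented as <B <= ?>(<? <= A> v).
    return downcast(type_check, upcast(v))

is_int = lambda v: isinstance(v, int) and not isinstance(v, bool)
cast(is_int, 5)          # succeeds with 5
```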
The second property we take for granted is that the language satisfies
the \emph{dynamic gradual guarantee}~\cite{refined} (``graduality'')---a
strong correctness theorem of gradual typing---which constrains how
changing type annotations changes behavior. Graduality says that if we
change the types in a program to be ``more precise''---e.g., by changing
from the dynamic type to a more precise type such as integers or
functions---the program will either produce the same behavior as the
original or raise a dynamic type error. Conversely, if a program does
not error and some types are made ``less precise'' then behavior does
not change.
We then study what program equivalences are provable in GTT under
various assumptions.
Our central application is to study when the $\beta, \eta$ equalities
are satisfied in a gradually typed language.
We approach this problem by a surprising tack: rather than defining the
behavior of dynamic type casts and then verifying or invalidating the
$\beta$ and $\eta$ equalities, we \emph{assume} the language satisfies
$\beta$ and $\eta$ equality and then show that certain reductions of
casts are in fact program equivalence \emph{theorems} deducible from the
axioms of GTT.
The cast reductions that we show satisfy all three constraints are
those given by the ``lazy cast semantics''~\cite{findler-felleisen02,siek+09designspace}.
As a contrapositive, any gradually typed language for which these
reductions are not program equivalences is \emph{not} a model of the
axioms of gradual type theory.
This means the language violates either compositionality, the gradual
guarantee, or one of the $\beta, \eta$ axioms---and in practice, it is
usually $\eta$.
For instance, a transient semantics, where only the top-level
connectives are checked, violates $\eta$ for strict pairs
\begin{small}
\[ {x : A_1 \times A_2} \vdash (\letXbeYinZ x {(x_1,x_2)} 0) \neq 0 \]
\end{small}%
because the top-level connectives of $A_1$ and $A_2$ are only checked
when the pattern match is introduced. As a concrete counterexample to
contextual equivalence, let $A_1, A_2$ both be \texttt{String}. Because
only the top-level connective is checked, $(0,1)$ is a valid value of
type $\texttt{String} \times \texttt{String}$, but pattern matching on
the pair ensures that the two components are checked to be strings, so
the left-hand side $\letXbeYinZ {(0,1)} {(x_1,x_2)} 0 \mapsto \mho$
(raises a type error). On the right-hand side, with no pattern match,
the value $0$ is returned. This means simple program changes that are valid
in a typed language, such as changing a function of two arguments to
take a single pair of those arguments, are invalidated by the transient
semantics.
In summary, transient semantics is ``lazier'' than the types dictate,
catching errors only when the term is inspected.
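The transient counterexample can be replayed concretely in Python (illustrative names of our own; \texttt{transient\_pair\_cast} checks only the head connective, as transient semantics does):

```python
class CastError(Exception):
    pass

def transient_pair_cast(v):
    # Transient: only the top-level connective is checked here.
    if not isinstance(v, tuple) or len(v) != 2:
        raise CastError("not a pair")
    return v

def check_str(v):
    if not isinstance(v, str):
        raise CastError("not a string")
    return v

# (0, 1) is accepted at type String x String!
p = transient_pair_cast((0, 1))

# Right-hand side of the eta instance: no pattern match, just 0.
rhs = 0

# Left-hand side: pattern matching triggers the component checks.
def lhs(pair):
    x1, x2 = pair
    check_str(x1)
    check_str(x2)
    return 0

lhs_errors = False
try:
    lhs(p)
except CastError:
    lhs_errors = True      # the two sides of eta disagree
```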
As a subtler example, in call-by-value ``eager cast semantics'' the
$\beta\eta$ principles for all of the eager datatypes ($0, +, 1,
\times$, lists, etc.) will be satisfied, but the $\eta$ principle for
the function type $\to$ is violated: there are values $V : A \to A'$ for
which $V \neq \lambda x:A. V x $.
For instance, take an arbitrary function value $V : A \to
\texttt{String}$ for some type $A$, and let $V' = \obcast{A \to
{?}}{A \to \texttt{String}}{V}$ be the result of casting it to have a
dynamically typed output.
Then in eager semantics, the following programs are not equivalent:
\begin{small}
\[ \lambda x:A. V' x \neq V' : A \to {?}\]
\end{small}
We cannot observe any difference between these two programs by
applying them to arguments; however, they are distinguished from each
other by their behavior when \emph{cast}.
Specifically, if we cast both sides to $A \to \texttt{Number}$, then
$\obcast{A\to \texttt{Number}}{A\to{?}}(\lambda x:A.V' x)$ is a
value, but $\obcast{A \to \texttt{Number}}{A\to {?}}V'$ reduces to an
error because $\texttt{Number}$ is incompatible with
$\texttt{String}$.
However this type error might not correspond to any actual typing
violation of the program involved.
For one thing, the resulting function might never be executed.
Furthermore, in the presence of effects, it may be that the original
function $V : A \to \texttt{String}$ never returns a string (because
it diverges, raises an exception or invokes a continuation), and so
that same value casted to $A \to \texttt{Number}$ might be a perfectly
valid inhabitant of that type.
In summary, the ``eager'' cast semantics is in fact overly eager: in
its effort to find bugs faster than ``lazy'' semantics it disables the
very type-based reasoning that gradual typing should provide.
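The ``lazy'' alternative can be sketched as a function proxy in Python (again an illustration: \texttt{lazy\_fun\_cast} is our name for the standard wrapper construction, not an API of any system):

```python
class CastError(Exception):
    pass

def is_num(v):
    if not isinstance(v, (int, float)) or isinstance(v, bool):
        raise CastError("expected a number")
    return v

def lazy_fun_cast(dom_check, cod_check, f):
    # Nothing about f is inspected now; each application is checked.
    def wrapped(x):
        return cod_check(f(dom_check(x)))
    return wrapped

ident = lambda v: v

# V : A -> String, cast lazily to A -> Number: the cast itself succeeds.
V = lambda x: "hi"
Vc = lazy_fun_cast(ident, is_num, V)

# An error can surface only if Vc is actually applied, which is what
# preserves the eta principle for functions.
```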
While criticisms of transient semantics on the basis of type soundness
have been made before \citep{greenmanfelleisen:2018}, our development
shows that the $\eta$ principles of types are enough to uniquely
determine a cast semantics, and helps clarify the trade-off between
eager and lazy semantics of function casts.
\textbf{Technical Overview of GTT.} The gradual type theory developed
in this paper unifies our previous work on
operational (logical relations) reasoning for gradual typing in a
call-by-value setting~\citep{newahmed18} (which did not consider a proof theory), and on an
axiomatic proof theory for gradual typing~\citep{newlicata2018-fscd} in
a call-by-name setting (which considered only function and product
types, and denotational but not operational models).
In this paper, we develop an axiomatic gradual type theory GTT for a unified
language that includes \emph{both} call-by-value/eager types and
call-by-name/lazy types (Sections~\ref{sec:gtt}, \ref{sec:theorems-in-gtt}), and
show that it is sound for contextual equivalence via a logical relations model
(Sections~\ref{sec:contract}, \ref{sec:complex}, \ref{sec:operational}).
Because the $\eta$ principles for types play a key role in our approach, it is
necessary to work in a setting where we can have $\eta$ principles for both
eager and lazy types. We use Levy's
Call-by-Push-Value~\citep{levy03cbpvbook} (CBPV), which fully and faithfully
embeds both call-by-value and call-by-name evaluation with both eager and lazy
datatypes,\footnote{The distinction between ``lazy'' vs ``eager'' casts above is
different than lazy vs. eager datatypes.} and underlies much recent work on
reasoning about effectful programs~\cite{bauerpretnar13eff,lindley+17frank}.
GTT can prove results in and about existing call-by-value gradually typed
languages, and also suggests a design for call-by-name and full
call-by-push-value gradually typed languages.
In the prior work \cite{newlicata2018-fscd,newahmed18}, gradual type
casts are decomposed into upcasts and downcasts, as suggested above.
A \emph{type dynamism}
relation (corresponding to type precision~\cite{refined} and na\"ive
subtyping~\cite{wadler-findler09}) controls which casts exist: a type
dynamism $A \sqsubseteq A'$ induces an upcast from $A$ to $A'$ and a downcast
from $A'$ to $A$. Then, a \emph{term dynamism} judgement is used for
equational/approximational reasoning about programs. Term dynamism
relates two terms whose types are related by type dynamism, and the
upcasts and downcasts are each \emph{specified} by certain term
dynamism judgements holding.
This specification axiomatizes only the properties of casts needed to
ensure the graduality theorem, and not their precise behavior, so cast
reductions can be \emph{proved from it}, rather than stipulated in
advance. The specification defines the casts ``uniquely up to
equivalence'', which means that any two implementations satisfying it
are behaviorally equivalent.
We generalize this axiomatic approach to call-by-push-value
(Section~\ref{sec:gtt}), where there are both eager/value types and
lazy/computation types. This is both a subtler question than it might at
first seem, and has a surprisingly nice answer: we find that upcasts are
naturally associated with eager/value types and downcasts with
lazy/computation types, and that the modalities relating values and
computations induce the downcasts for eager/value types and upcasts for
lazy/computation types. Moreover, this analysis articulates an
important behavioral property of casts that was proved operationally for
call-by-value in \citep{newahmed18} but missed for call-by-name in
\citep{newlicata2018-fscd}: upcasts for eager types and downcasts for
lazy types are both ``pure'' in a suitable sense, which enables more
refactorings and program optimizations. In particular, we show that
these casts can be taken to be (and are essentially forced to be)
``complex values'' and ``complex stacks'' (respectively) in
call-by-push-value, which corresponds to a behavioral property of
\emph{thunkability} and
\emph{linearity}~\cite{munchmaccagnoni14nonassociative}. We argue in
Section~\ref{sec:related} that this property is related to blame
soundness. Our gradual type theory naturally has two dynamic types, a
dynamic eager/value type and a dynamic lazy/computation type, where the
former can be thought of as a sum of all possible values, and the latter
as a product of all possible behaviors. At the language design level,
gradual type theory can be used to prove that, for a variety of
eager/value and lazy/computation types, the ``lazy'' semantics of casts
is the unique implementation satisfying $\beta,\eta$ and graduality
(Section~\ref{sec:theorems-in-gtt}). These behavioral equivalences can
then be used in reasoning about optimizations, refactorings, and
correctness of specific programs.
\textbf{Contract-Based Models.}
To show the consistency of GTT as a theory, and to give a concrete
operational interpretation of its axioms and rules, we provide a
concrete model based on an operational semantics.
The model is a \emph{contract} interpretation of GTT in that the
``built-in'' casts of GTT are translated to ordinary functions in a
CBPV language that perform the necessary checks.
To keep the proofs high-level, we break the proof into two steps.
First (Sections~\ref{sec:contract}, \ref{sec:complex}), we translate the
axiomatic theory of GTT into an axiomatic theory of CBPV extended with
recursive types and an uncatchable error, implementing casts by CBPV
code that does contract checking.
Then
(Section~\ref{sec:operational}) we give an operational semantics
for the extended CBPV and define a step-indexed biorthogonal logical
relation that interprets the ordering relation on terms as contextual
error approximation, which underlies the definition of graduality as
presented in \citep{newahmed18}.
Combining these theorems gives an implementation of the term
language of GTT in which $\beta, \eta$ are observational equivalences
and the dynamic gradual guarantee is satisfied.
Due to the uniqueness theorems of GTT, the only part of this translation that is not
predetermined is the definition of the dynamic types themselves and the
casts between ``ground'' types and the dynamic types.
We use CBPV to explore the design space of possible implementations of
the dynamic types, and give one that faithfully distinguishes all types
of GTT, and another more Scheme-like implementation that implements sums
and lazy pairs by tag bits.
Both can be restricted to the CBV or CBN subsets of CBPV, but the
unrestricted variant is actually more faithful to Scheme-like
dynamically typed programming, because it accounts for variable-argument
functions.
Our modular proof architecture allows us to easily prove correctness
of $\beta, \eta$ and graduality for all of these interpretations.
\textbf{Contributions.}
The main contributions of the paper are as follows.
\begin{enumerate}
\item We present Gradual Type Theory in Section \ref{sec:gtt}, a simple
axiomatic theory of gradual typing. The theory axiomatizes three
simple assumptions about a gradual language: compositionality,
graduality, and type-based reasoning in the form of $\eta$
equivalences.
\item We prove many theorems in the formal logic of Gradual Type
Theory in Section \ref{sec:theorems-in-gtt}. These include the
unique implementation theorems for casts, which show that for
each type connective of GTT, the $\eta$ principle for the type
ensures that the casts must implement the lazy contract
semantics. Furthermore, we show that upcasts are always pure
functions and dually that downcasts are always strict functions, as
long as the base type casts are pure/strict.
\item To substantiate that GTT is a reasonable axiomatic theory for
gradual typing, we construct \emph{models} of GTT in Sections
\ref{sec:contract}, \ref{sec:complex} and \ref{sec:lr}. This proceeds
in two stages. First (Section \ref{sec:contract}), we use
call-by-push-value as a typed metalanguage to construct several models
of GTT using different recursive types to implement the dynamic types
of GTT and interpret the casts as embedding-projection pairs. This
extends standard translations of dynamic typing into static typing
using type tags: the dynamic value type is constructed as a recursive
sum of basic value types, but dually the dynamic computation type is
constructed as a recursive \emph{product} of basic computation
types. This dynamic computation type naturally models stack-based
implementations of variable-arity functions as used in the Scheme
language.
\item We then give an operational model of the term dynamism ordering
as contextual error approximation in Sections \ref{sec:complex} and
\ref{sec:lr}. To construct this model, we extend previous work on
logical relations for error approximation from call-by-value to
call-by-push-value \cite{newahmed18}, simplifying the presentation
in the process.
\end{enumerate}
\begin{shortonly}
\textbf{Extended version:} An extended version of the paper, which
includes the omitted cases of definitions, lemmas, and proofs is
available in \citet{newlicataahmed19:extended}.
\end{shortonly}
\section{Axiomatic Gradual Type Theory}
\label{sec:gtt}
In this section we introduce the syntax of Gradual Type Theory, an
extension of Call-by-push-value~\citep{levy03cbpvbook} to support the constructions of
gradual typing.
First we introduce call-by-push-value and then describe in turn the
gradual typing features: dynamic types, casts, and the dynamism
orderings on types and terms.
\begin{figure}
\begin{small}
\[
\begin{array}{l}
\begin{array}{rl|rl}
A ::= & \colorbox{lightgray}{${?}$} \mid U \u B \mid 0 \mid A_1 + A_2 \mid 1 \mid A_1 \times A_2 &
\u B ::= & \colorbox{lightgray}{$\u {\text{?`}}$} \mid \u F A \mid \top \mid \u B_1 \mathbin{\&} \u B_2 \mid A \to \u B\\
V ::= & \begin{array}{l}
\colorbox{lightgray}{$\upcast A {A'} V$} \mid x \mid \kw {abort}{V} \\
\mid \kw{inl}{V} \mid \kw{inr}{V} \\
\mid \caseofXthenYelseZ V {x_1. V_1}{x_2.V_2} \\
\mid () \mid \pmpairWtoinZ V V' \\
\mid (V_1,V_2) \mid \pmpairWtoXYinZ V x y V' \\
\mid \kw{thunk}{M}
\end{array} &
M,S ::= & \begin{array}{l}
\colorbox{lightgray}{$\dncast{\u B} {\u B'} M$} \mid \bullet \mid \mho_{\u B} \\
\mid \kw {abort}{V} \mid \caseofXthenYelseZ V {x_1. M_1}{x_2.M_2}\\
\mid \pmpairWtoinZ V M \mid\pmpairWtoXYinZ V x y M \\
\mid \kw{force}{V} \mid \kw{ret}{V} \mid \bindXtoYinZ{M}{x}{N}\\
\mid \lambda x:A.M \mid M\,V\\
\mid \emptypair \mid \pair{M_1}{M_2} \\
\mid \pi M \mid \pi' M
\end{array}\\
\Gamma ::= & \cdot \mid \Gamma, x : A &
\Delta ::= & \cdot \mid \bullet : \u B \\
\colorbox{lightgray}{$\Phi$} ::= & \colorbox{lightgray}{$\cdot \mid \Phi, x \sqsubseteq x': A \sqsubseteq A'$} &
\colorbox{lightgray}{$\Psi$} ::= & \colorbox{lightgray}{$\cdot \mid \bullet \sqsubseteq \bullet : \u B \sqsubseteq \u B'$} \\
\end{array}\\\\
\iflong
\begin{array}{c}
\hspace{2.5in} T ::= A \mid \u B \\
\hspace{2.5in} E ::= V \mid M \\
\end{array}\\\\
\fi
%
\begin{array}{c}
\framebox{$\Gamma \vdash V : A$ and $\Gamma \mid \Delta \vdash M : \u B$} \qquad
\colorbox{lightgray}{
$\inferrule*[lab=UpCast]
{\Gamma \vdash V : A \and A \sqsubseteq A'}
{\Gamma \vdash \upcast A {A'} V : A'}$
\qquad
$\inferrule*[lab=DnCast]
{\Gamma\,\,|\,\, \Delta \vdash M : \u B' \and \u B \sqsubseteq \u B'}
{\Gamma\,\,|\,\, \Delta \vdash \dncast{\u B}{\u B'} M : \u B}$
}
\\\\
\inferrule*[lab=Var]
{ }
{\Gamma,x : A,\Gamma' \vdash x : A}
\qquad
\inferrule*[lab=Hole]
{ }
{\Gamma\,\,|\,\, \bullet : \u B \vdash \bullet : \u B}
\qquad
\inferrule*[lab=Err]
{ }
{\Gamma \mid \cdot \vdash \mho_{\u B} : \u B}
\\
\iflong
\\
\inferrule*[lab=$0$E]
{\Gamma \vdash V : 0}
{\Gamma \mid \Delta \vdash \kw {abort} V : T}
\qquad
\inferrule*[lab=$+$Il]
{\Gamma \vdash V : A_1}
{\Gamma \vdash \kw{inl} V : A_1 + A_2}
\qquad
\inferrule*[lab=$+$Ir]
{\Gamma \vdash V : A_2}
{\Gamma \vdash \kw{inr} V : A_1 + A_2}
\qquad
\inferrule*[lab=$+$E]
{
\Gamma \vdash V : A_1 + A_2 \\\\
\Gamma, x_1 : A_1 \mid \Delta \vdash E_1 : T \\\\
\Gamma, x_2 : A_2 \mid \Delta \vdash E_2 : T
}
{\Gamma \mid \Delta \vdash \caseofXthenYelseZ V {x_1. E_1}{x_2.E_2} : T}
\\\\
\fi
\inferrule*[lab=$1$I]
{ }
{\Gamma \vdash (): 1}
\,\,\,
\inferrule*[lab=$1$E]
{\Gamma \vdash V : 1 \and
\Gamma \mid \Delta \vdash E : T
}
{\Gamma \mid \Delta \vdash \pmpairWtoinZ V E : T}
\,\,\,
\inferrule*[lab=$\times$I]
{\Gamma \vdash V_1 : A_1\and
\Gamma\vdash V_2 : A_2}
{\Gamma \vdash (V_1,V_2) : A_1 \times A_2}
\,\,\,
\inferrule*[lab=$\times$E]
{\Gamma \vdash V : A_1 \times A_2 \\\\
\Gamma, x : A_1,y : A_2 \mid \Delta \vdash E : T
}
{\Gamma \mid \Delta \vdash \pmpairWtoXYinZ V x y E : T}
\\\\
\inferrule*[lab=$U$I]
{\Gamma \mid \cdot \vdash M : \u B}
{\Gamma \vdash \kw{thunk} M : U \u B}
\,\,\,
\inferrule*[lab=$U$E]
{\Gamma \vdash V : U \u B}
{\Gamma \,\,|\,\, \cdot \vdash \kw{force} V : \u B}
\,\,\,
\inferrule*[lab=$F$I]
{\Gamma \vdash V : A}
{\Gamma \,\,|\,\, \cdot \vdash \kw{ret} V : \u F A}
\,\,\,
\inferrule*[lab=$F$E]
{\Gamma \,\,|\,\, \Delta \vdash M : \u F A \\
\Gamma, x: A \,\,|\,\, \cdot \vdash N : \u B}
{\Gamma \,\,|\,\, \Delta \vdash \bindXtoYinZ M x N : \u B}
\\\\
\inferrule*[lab=$\to$I]
{\Gamma, x: A \,\,|\,\, \Delta \vdash M : \u B}
{\Gamma \,\,|\,\, \Delta \vdash \lambda x : A . M : A \to \u B}
\quad
\inferrule*[lab=$\to$E]
{\Gamma \,\,|\,\, \Delta \vdash M : A \to \u B\and
\Gamma \vdash V : A}
{\Gamma \,\,|\,\, \Delta \vdash M\,V : \u B }
\iflong
\\\\
\inferrule*[lab=$\top$I]{ }{\Gamma \mid \Delta \vdash \emptypair : \top}
\quad
\inferrule*[lab=$\mathbin{\&}$I]
{\Gamma \mid \Delta \vdash M_1 : \u B_1\and
\Gamma \mid \Delta \vdash M_2 : \u B_2}
{\Gamma \mid \Delta \vdash \pair {M_1} {M_2} : \u B_1 \mathbin{\&} \u B_2}
\quad
\inferrule*[lab=$\mathbin{\&}$E]
{\Gamma \mid \Delta \vdash M : \u B_1 \mathbin{\&} \u B_2}
{\Gamma \mid \Delta \vdash \pi M : \u B_1}
\quad
\inferrule*[lab=$\mathbin{\&}$E']
{\Gamma \mid \Delta \vdash M : \u B_1 \mathbin{\&} \u B_2}
{\Gamma \mid \Delta \vdash \pi' M : \u B_2}
\fi
\end{array}
\end{array}
\]
\end{small}
\vspace{-0.1in}
\caption{GTT Syntax and Term Typing \ifshort{($+$ and $\mathbin{\&}$ typing rules in extended version)}\fi}
\label{fig:gtt-syntax-and-terms}
\end{figure}
\subsection{Background: Call-by-Push-Value}
GTT is an extension of CBPV, so we first present CBPV as the unshaded rules in
Figure~\ref{fig:gtt-syntax-and-terms}. CBPV makes a distinction between
\emph{value types} $A$ and \emph{computation types} $\u B$, where value
types classify \emph{values} $\Gamma \vdash V : A$ and computation types
classify \emph{computations} $\Gamma \vdash M : \u B$. Effects are
computations: for example, we might have an error computation $\mho_{\u
B} : \u B$ of every computation type, or printing $\kw{print} V;M : \u B$
if $V : \kw{string}$ and $M : \u B$, which prints $V$ and then behaves as
$M$.
\emph{Value types and complex values.}
The value types include \emph{eager} products $1$ and $A_1 \times A_2$
and sums $0$ and $A_1 + A_2$, which behave as in a call-by-value/eager
language (e.g. a pair is only a value when its components are). The
notion of value $V$ is more permissive than one might expect, and
expressions $\Gamma \vdash V : A$ are sometimes called \emph{complex
values} to emphasize this point: complex values include not only
closed runtime values, but also open values that have free value
variables (e.g. $x_1 : A_1 , x_2 : A_2 \vdash (x_1,x_2) : A_1 \times
A_2$), and expressions that pattern-match on values (e.g. $p : A_1
\times A_2 \vdash \pmpairWtoXYinZ{p}{x_1}{x_2}{(x_2,x_1)} : A_2 \times
A_1$). Thus, the complex values $x : A \vdash V : A'$ are a syntactic
class of ``pure functions'' from $A$ to $A'$ (though there is no pure
function \emph{type} internalizing this judgement), which can be treated
like values by a compiler because they have no effects (e.g. they can be
dead-code-eliminated, common-subexpression-eliminated, and so on).
\begin{longonly}
In focusing~\cite{andreoli92focus} terminology, complex
values consist of left inversion and right focus rules.
\end{longonly}
For each pattern-matching construct (e.g. case analysis on a sum,
splitting a pair), we have both an elimination rule whose branches are
values (e.g. $\pmpairWtoXYinZ{p}{x_1}{x_2}{V}$) and one whose branches
are computations (e.g. $\pmpairWtoXYinZ{p}{x_1}{x_2}{M}$). To
abbreviate the typing rules for both in
Figure~\ref{fig:gtt-syntax-and-terms}, we use the following convention:
we write $E ::= V \mid M$ for either a complex value or a computation,
and $T ::= A \mid \u B$ for either a value type $A$ or a computation
type $\u B$, and a judgement $\Gamma \mid \Delta \vdash E : T$ for
either $\Gamma \vdash V : A$ or $\Gamma \mid \Delta \vdash M : \u B$
(this is a bit of an abuse of notation because $\Delta$ is not present
in the former). Complex values can be translated away without loss of
expressiveness by moving all pattern-matching into computations (see
Section~\ref{sec:complex}), at the expense of using a behavioral
condition of \emph{thunkability}~\cite{munchmaccagnoni14nonassociative} to capture the properties
complex values have (for example, an analogue of
$\letXbeYinZ{V}{x}{\letXbeYinZ{V'}{x'}{M}} \equiv
\letXbeYinZ{V'}{x'}{\letXbeYinZ{V}{x}{M}}$ --- complex values can be
reordered, while arbitrary computations cannot).
\emph{Shifts.}
A key notion in CBPV is the \emph{shift} types $\u F A$ and $U \u B$,
which mediate between value and computation types: $\u F A$ is the
computation type of potentially effectful programs that return a value
of type $A$, while $U \u B$ is the value type of thunked computations of
type $\u B$. The introduction rule for $\u F A$ is returning a value of
type $A$ (\kw{ret}{V}), while the elimination rule is sequencing a
computation $M : \u F A$ with a computation $x : A \vdash N : \u B$ to
produce a computation of type $\u B$ ($\bindXtoYinZ{M}{x}{N}$). While any
closed complex value $V$ is equivalent to an actual value, a computation
of type $\u F A$ might perform effects (e.g. printing) before returning
a value, or might error or non-terminate and not return a value at all.
The introduction and elimination rules for $U$ are written $\kw{thunk}{M}$
and $\kw{force}{V}$, and say that computations of type $\u B$ are bijective
with values of type $U \u B$. As an example of the action of the
shifts,
\begin{longonly}
$0$ is the empty value type, so $\u F 0$ classifies effectful
computations that never return, but may perform effects (and then, must
e.g. non-terminate or error), while $U \u F 0$ is the value type where
such computations are thunked/delayed and considered as values.
\end{longonly}
$1$ is the trivial value type, so $\u F 1$ is the type of computations
that can perform effects with the possibility of terminating
successfully by returning $()$, and $U \u F 1$ is the value type where
such computations are delayed values.
\begin{longonly}
$U \u F$ is a monad on value
types~\citep{moggi91}, while $\u F U$ is a comonad on computation types.
\end{longonly}
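As a concrete, if informal, illustration of the shifts, the following Python sketch (the encoding and all names are ours, not CBPV syntax) models a computation of type $\u F A$ as a zero-argument function that may perform effects before returning a value, and $U$ as suspension of such a computation into a value:

```python
# A sketch (our own encoding, not CBPV syntax): a computation of type
# F A is a zero-argument function that may perform effects and then
# return a value; U B suspends a computation into a value.

def ret(v):
    """Introduction for F A: return a value, performing no effects."""
    return lambda: v

def bind(m, k):
    """Elimination for F A: run m, then pass its result to k."""
    return lambda: k(m())()

def thunk(m):
    """U introduction: a computation becomes a (delayed) value."""
    return m  # a thunk is just the suspended computation

def force(v):
    """U elimination: resume the suspended computation."""
    return v

# U F acts as a monad on values, as in Moggi's setting.
double = lambda x: ret(2 * x)
program = bind(ret(21), double)
assert force(thunk(program))() == 42
```

In this encoding `thunk`/`force` are (trivially) mutually inverse, mirroring the bijection between computations of type $\u B$ and values of type $U \u B$.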
\emph{Computation types.}
The computation type constructors in CBPV include lazy unit/products
$\top$ and $\u B_1 \mathbin{\&} \u B_2$, which behave as in a call-by-name/lazy
language (e.g. a component of a lazy pair is evaluated only when it is
projected). Functions $A \to \u B$ have a value type as input and a
computation type as a result. The equational theory of effects in CBPV
computations may be surprising to those familiar only with
call-by-value, because at higher computation types effects have a
call-by-name-like equational theory. For example, at computation type
$A \to \u B$, we have an equality $\kw{print} c; \lambda x. M = \lambda
x.\kw{print} c; M$. Intuitively, the reason is that $A \to \u B$ is not
treated as an \emph{observable} type (one where computations are run):
the states of the operational semantics are only those computations of
type $\u F A$ for some value type $A$. Thus, ``running'' a function
computation means supplying it with an argument, and applying both of
the above to an argument $V$ is defined to result in $\kw{print} c;M[V/x]$.
This does \emph{not} imply that the corresponding equation holds for
the call-by-value function type, which we discuss below.
\begin{longonly}
As another
example, \emph{all} computations are considered equal at type $\top$,
even computations that perform different effects ($\kw{print} c$ vs. $\{\}$
vs. $\mho$), because there is by definition \emph{no} way to extract an
observable of type $\u F A$ from a computation of type $\top$.
Consequently, $U \top$ is isomorphic to $1$.
\end{longonly}
\emph{Complex stacks.} Just as the complex values $V$ are a syntactic
class of terms that have no effects, CBPV includes a judgement for
``stacks'' $S$, a syntactic class of terms that reflect \emph{all}
effects of their input. A \emph{stack} $\Gamma \mid \bullet : \u B
\vdash S : \u B'$ can be thought of as a linear/strict function from $\u
B$ to $\u B'$, which \emph{must} use its input hole $\bullet$
\emph{exactly} once at the head redex position. Consequently, effects
can be hoisted out of stacks, because we know the stack will run them
exactly once and first. For example, there will be contextual
equivalences $S[\mho/\bullet] = \mho$ and $S[\kw{print} V;M] = \kw{print}
V;S[M/\bullet]$. Just as complex values include pattern-matching,
\emph{complex stacks} include pattern-matching on values and
introduction forms for the stack's output type. For example, $\bullet :
\u B_1 \mathbin{\&} \u B_2 \vdash \pair{\pi' \bullet}{\pi \bullet} : \u B_2
\mathbin{\&} \u B_1$ is a complex stack, even though it mentions $\bullet$ more
than once, because running it requires choosing a projection to get to
an observable of type $\u F A$, so \emph{each time it is run} it uses
$\bullet$ exactly once.
\begin{longonly}
In
focusing terms, complex stacks include both left and right inversion,
and left focus rules.
\end{longonly}
In the equational theory of CBPV, $\u F$ and $U$
are \emph{adjoint}, in the sense that stacks $\bullet : \u F A \vdash S
: \u B$ are bijective with values $x : A \vdash V : U \u B$, as both are
bijective with computations $x : A \vdash M : \u B$.
To compress the presentation in Figure~\ref{fig:gtt-syntax-and-terms},
we use a typing judgement $\Gamma \mid \Delta \vdash M : \u B$ with a
``stoup'', a typing context $\Delta$ that is either
empty or contains exactly one assumption $\bullet : \u B$, so $\Gamma
\mid \cdot \vdash M : \u B$ is a computation, while $\Gamma \mid \bullet
: \u B \vdash M : \u B'$ is a stack. The \ifshort{(omitted) }\fi typing
rules for $\top$ and $\mathbin{\&}$ treat the stoup additively
(it is arbitrary in the conclusion and the same in all premises); for a
function application to be a stack, the stack input must occur in the
function position. The elimination form for $U \u B$, $\kw{force}{V}$, is
the prototypical non-stack computation ($\Delta$ is required to be
empty), because forcing a thunk does not use the stack's input.
\emph{Embedding call-by-value and call-by-name.} To translate
call-by-value (CBV) into CBPV, a judgement $x_1 : A_1, \ldots, x_n : A_n
\vdash e : A$ is interpreted as a computation $x_1 : A_1^v, \ldots, x_n
: A_n^v \vdash e^v : \u F A^v$, where call-by-value products and sums
are interpreted as $\times$ and $+$, and the call-by-value function type
$A \to A'$ as $U(A^v \to \u F A'^v)$. Thus, a call-by-value term $e : A
\to A'$, which should mean an effectful computation of a function value,
is translated to a computation $e^v : \u F U (A^v \to \u F A'^v)$. Here,
the comonad $\u F U$ offers an opportunity to perform effects
\emph{before} returning a function value---so under translation the CBV
terms $\kw{print} c; \lambda x. e$ and $\lambda x.\kw{print} c; e$ will not be
contextually equivalent. To translate call-by-name (CBN) to CBPV, a
judgement $x_1 : \u B_1, \ldots, x_m : \u B_m \vdash e : \u B$ is
translated to $x_1 : U \u {B_1}^n, \ldots, x_m : U \u {B_m}^n \vdash e^n
: \u B^n$, representing the fact that call-by-name terms are passed
thunked arguments. Product types are translated to $\top$ and $\times$,
while a CBN function $B \to B'$ is translated to $U \u B^n \to \u B'^n$
with a thunked argument. Sums $B_1 + B_2$ are translated to $\u F (U \u
{B_1}^n + U \u {B_2}^n)$, making the ``lifting'' in lazy sums explicit.
Call-by-push-value \emph{subsumes} call-by-value and call-by-name in
that these embeddings are \emph{full and faithful}: two CBV or CBN programs are
equivalent if and only if their embeddings into CBPV are equivalent, and
every CBPV program with a CBV or CBN type can be back-translated.
\emph{Extensionality/$\eta$ Principles.} The main advantage of CBPV for
our purposes is that it accounts for the $\eta$/extensionality
principles of both eager/value and lazy/computation types, because
value types have $\eta$ principles relating them to the value
assumptions in the context $\Gamma$, while computation types have $\eta$
principles relating them to the result type of a computation $\u B$. For
example, the $\eta$ principle for sums says that any complex
value or computation $x : A_1 + A_2 \vdash E : T$ is equivalent to
$\caseofXthenYelseZ{x}{x_1.E[\kw{inl}{x_1}/x]}{x_2.E[\kw{inr}{x_2}/x]}$, i.e. a
case on a value can be moved to any point in a program (where all
variables are in scope) in an optimization. Given this, the above
translations of CBV and CBN into CBPV explain why $\eta$ for
sums holds in CBV but not CBN: in CBV, $x : A_1 + A_2 \vdash E : T$ is
translated to a term with $x : A_1 + A_2$ free, but in CBN, $x : B_1 +
B_2 \vdash E : T$ is translated to a term with $x : U \u F(U \u B_1 + U
\u B_2)$ free, and the type $U \u F(U \u B_1 + U \u B_2)$ of monadic
computations that return a sum does not satisfy the $\eta$ principle for
sums in CBPV. Dually, the $\eta$ principle for functions in CBPV is
that any computation $M : A \to \u B$ is equal to $\lambda x.M \, x$. A
CBN term $e : B \to B'$ is translated to a CBPV computation of type $U
\u B \to \u B'$, to which CBPV function extensionality applies, while a
CBV term $e : A \to A'$ is translated to a computation of type $\u F U(A
\to \u F A')$, which does not satisfy the $\eta$ rule for functions. We
discuss a formal statement of these $\eta$ principles with term
dynamism below.
\ifshort \vspace{-0.1in} \fi
\subsection{The Dynamic Type(s)}
Next, we discuss the additions that make CBPV into our gradual type
theory GTT. A dynamic type plays a key role in gradual typing, and
since GTT has two different kinds of types, we have a new question of
whether the dynamic type should be a value type, or a computation type,
or whether we should have \emph{both} a dynamic value type and a dynamic
computation type.
Our modular, type-theoretic presentation of gradual typing allows us to
easily explore these options, though we find that having
both a dynamic value ${?}$ and a dynamic computation type $\u {\text{?`}}$
gives the most natural implementation (see
Section~\ref{sec:dynamic-type-interp}). Thus, we add both ${?}$ and
$\u {\text{?`}}$ to the grammar of types in
Figure~\ref{fig:gtt-syntax-and-terms}. We do \emph{not} give
introduction and elimination rules for the dynamic types, because we
would like constructions in GTT to imply results for many different
possible implementations of them. Instead, the terms for the dynamic
types will arise from type dynamism and casts.
\ifshort \vspace{-0.12in} \fi
\subsection{Type Dynamism}
The \emph{type dynamism} relation of gradual type theory is written $A
\sqsubseteq A'$ and read as ``$A$ is less dynamic than $A'$''; intuitively,
this means that $A'$ supports more behaviors than $A$.
Our previous work~\citep{newahmed18,newlicata2018-fscd} analyzes this as the existence of an \emph{upcast}
from $A$ to $A'$ and a downcast from $A'$ to $A$ which form an
embedding-projection pair (\emph{ep pair}) for term error approximation
(an ordering where runtime errors are minimal): the upcast followed by the
downcast is a no-op, while the downcast followed by the upcast might
error more than the original term, because it imposes a run-time type
check. Syntactically, type dynamism is defined (1) to be reflexive and
transitive (a preorder), (2) where every type constructor is monotone in
all positions, and (3) where the dynamic type is greatest in the type
dynamism ordering. This last condition, \emph{the
dynamic type is the most dynamic type}, implies the existence of an
upcast $\upcast{A}{{?}}$ and a downcast $\dncast{A}{{?}}$ for every
type $A$: any type can be embedded
into it and projected from it. However, this by design does not
characterize ${?}$ uniquely---instead, it is open-ended exactly
which types exist (so that we can always add more), and some properties
of the casts are undetermined; we exploit this freedom in
Section~\ref{sec:dynamic-type-interp}.
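The embedding-projection pair reading of $A \sqsubseteq {?}$ can be sketched concretely (the tagged-value representation of the dynamic type and all names are ours; Section~\ref{sec:dynamic-type-interp} discusses actual implementations): the upcast is a pure injection, while the downcast performs the run-time type check and may error.

```python
# An embedding-projection pair for int <= ?, with the dynamic type
# modeled as tagged values (a sketch of ours, not the paper's
# implementation).

class Mho(Exception):
    """Runtime type error: downcasts may fail."""

def upcast_int(n):
    """Upcast int => ?: a pure value constructor (an injection)."""
    return ("int", n)

def downcast_int(d):
    """Downcast ? => int: the run-time type check."""
    tag, payload = d
    if tag != "int":
        raise Mho
    return payload

# Retraction: upcast then downcast is the identity...
assert downcast_int(upcast_int(7)) == 7

# ...while downcast then upcast may error more than the original term:
try:
    upcast_int(downcast_int(("str", "hi")))
    assert False
except Mho:
    pass
```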
This extends in a straightforward way to CBPV's distinction between
value and computation types in Figure~\ref{fig:gtt-type-dynamism}: there
is a type dynamism relation for value types $A \sqsubseteq A'$ and for
computation types $\u B \sqsubseteq \u B'$, which (1) each are preorders
(\textsc{VTyRefl}, \textsc{VTyTrans}, \textsc{CTyRefl}, \textsc{CTyTrans}),
(2) every type constructor is monotone
(\textsc{$+$Mon}, \textsc{$\times$Mon}, \textsc{$\mathbin{\&}$Mon}, \textsc{$\to$Mon})
where the shifts $\u F$ and $U$ switch which relation is being
considered (\textsc{$U$Mon}, \textsc{$F$Mon}), and (3) the dynamic types
${?}$ and $\u {\text{?`}}$ are the most dynamic value and computation types
respectively (\textsc{VTyTop}, \textsc{CTyTop}). For example, we have
$U(A \to \u F A') \sqsubseteq U({?} \to \u F {?})$, which is the analogue
of $A \to A' \sqsubseteq {?} \to {?}$ in call-by-value: because $\to$
preserves embedding-retraction pairs, it is monotone, not contravariant,
in the domain~\citep{newahmed18,newlicata2018-fscd}.
\begin{figure}
\begin{small}
\begin{mathpar}
\framebox{$A \sqsubseteq A'$ and $\u B \sqsubseteq \u B'$}
\inferrule*[lab=VTyRefl]{ }{A \sqsubseteq A}
\inferrule*[lab=VTyTrans]{A \sqsubseteq A' \and A' \sqsubseteq A''}
{A \sqsubseteq A''}
\inferrule*[lab=CTyRefl]{ }{\u B \sqsubseteq \u B}
\inferrule*[lab=CTyTrans]{\u B \sqsubseteq \u B' \and \u B' \sqsubseteq \u B''}
{\u B \sqsubseteq \u B''}
\inferrule*[lab=VTyTop]{ }{A \sqsubseteq {?}}
\inferrule*[lab=$U$Mon]{\u B \sqsubseteq \u B'}
{U \u B \sqsubseteq U \u B'}
\inferrule*[lab=$+$Mon]{A_1 \sqsubseteq A_1' \and A_2 \sqsubseteq A_2' }
{A_1 + A_2 \sqsubseteq A_1' + A_2'}
\inferrule*[lab=$\times$Mon]{A_1 \sqsubseteq A_1' \and A_2 \sqsubseteq A_2' }
{A_1 \times A_2 \sqsubseteq A_1' \times A_2'}
\\
\inferrule*[lab=CTyTop]{ }{\u B \sqsubseteq \u {\text{?`}}}
\inferrule*[lab=$F$Mon]{A \sqsubseteq A' }{ \u F A \sqsubseteq \u F A'}
\inferrule*[lab=$\mathbin{\&}$Mon]{\u B_1 \sqsubseteq \u B_1' \and \u B_2 \sqsubseteq \u B_2'}
{\u B_1 \mathbin{\&} \u B_2 \sqsubseteq \u B_1' \mathbin{\&} \u B_2'}
\inferrule*[lab=$\to$Mon]{A \sqsubseteq A' \and \u B \sqsubseteq \u B'}
{A \to \u B \sqsubseteq A' \to \u B'}
\begin{longonly}
\\
\framebox{Dynamism contexts}
\quad
\inferrule{ }{\cdot \, \dynvctx}
\quad
\inferrule{\Phi \, \dynvctx \and
A \sqsubseteq A'}
{\Phi, x \sqsubseteq x' : A \sqsubseteq A' \, \dynvctx}
\quad
\inferrule{ }{\cdot \, \dyncctx}
\quad
\inferrule{\u B \sqsubseteq \u B'}
{(\bullet \sqsubseteq \bullet : \u B \sqsubseteq \u B') \, \dyncctx}
\end{longonly}
\end{mathpar}
\vspace{-0.2in}
\caption{GTT Type Dynamism \iflong and Dynamism Contexts \fi}
\label{fig:gtt-type-dynamism}
\end{small}
\end{figure}
\subsection{Casts}
\label{sec:gtt-casts}
It is not immediately obvious how to add type casts to CBPV, because
CBPV exposes finer judgemental distinctions than previous work
considered. However, we can arrive at a first proposal by considering
how previous work would be embedded into CBPV.
In the previous work on both CBV and
CBN~\citep{newahmed18,newlicata2018-fscd} every type dynamism judgement
$A \sqsubseteq A'$ induces both an upcast from $A$ to $A'$ and a downcast
from $A'$ to $A$.
Because CBV types are associated to CBPV value types and CBN types are
associated to CBPV computation types, this suggests that each value type
dynamism $A \sqsubseteq A'$ should induce an upcast and a downcast, and each
computation type dynamism $\u B \sqsubseteq \u B'$ should also induce an
upcast and a downcast.
In CBV, a cast from $A$ to $A'$ typically can be represented by a CBV
function $A \to A'$, whose analogue in CBPV is $U(A \to \u F A')$, and
values of this type are bijective with computations $x : A \vdash M : \u
F A'$, and further with stacks $\bullet : \u F A \vdash
S : \u F A'$. This suggests that a
\emph{value} type dynamism $A \sqsubseteq A'$ should induce an
embedding-projection pair of \emph{stacks} $\bullet : \u F A \vdash S_u
: \u F A'$ and $\bullet : \u F A' \vdash S_d : \u F A$, which allow both
the upcast and downcast to a priori be effectful computations.
Dually, a CBN cast typically can be represented by a CBN function of
type $B \to B'$, whose CBPV analogue is a computation of type $U \u B
\to \u B'$, which is bijective with a computation $x : U \u B \vdash M : \u B'$,
and with a value $x : U \u B \vdash V : U \u B'$. This suggests that a
\emph{computation} type dynamism $\u B \sqsubseteq \u B'$ should induce an
embedding-projection pair of \emph{values} $x : U \u B \vdash V_u : U \u
B'$ and $x : U \u B' \vdash V_d : U \u B$, where both the upcast and the
downcast again may a priori be (co)effectful, in the sense that they may
not reflect all effects of their input.
However, this analysis ignores an important property of CBV casts in practice:
\emph{upcasts} always terminate without performing any effects, and in
some systems upcasts are even defined to be values, while only the
\emph{downcasts} are effectful (introduce errors). For example, for many types $A$, the
upcast from $A$ to ${?}$ is an injection into a sum/recursive type,
which is a value constructor. Our previous work on a logical
relation for call-by-value gradual typing~\cite{newahmed18} proved that all
upcasts were pure in this sense as a consequence of the embedding-projection pair properties (but that proof depended on the only effects being
divergence and type error).
In GTT, we can make this property explicit
in the syntax of the casts, by making the upcast $\upcast{A}{A'}$
induced by a value type dynamism $A \sqsubseteq A'$ itself a complex value,
rather than a computation. On the other hand, many downcasts between value
types are implemented as a case-analysis looking for a specific
tag and erroring otherwise, and so are not complex values.
We can also make a dual observation about CBN casts. The
\emph{downcast} arising from $\u B \sqsubseteq \u B'$ has a stronger property
than being a computation $x : U \u B' \vdash M : \u B$ as suggested
above: it can be taken to be a stack $\bullet : \u B' \vdash \dncast{\u
B}{\u B'}{\bullet} : \u B$, because a downcasted computation
evaluates the computation it is ``wrapping'' exactly once. One
intuitive justification for this point of view, which we make precise
in Section \ref{sec:contract}, is to think of the dynamic computation type $\u {\text{?`}}$ as a
recursive \emph{product} of all possible behaviors that a computation
might have, and the downcast as a recursive type unrolling and product
projection, which is a stack. From this point of view, an \emph{upcast}
can introduce errors, because the upcast of an object supporting some
``methods'' to one with all possible methods will error dynamically on
the unimplemented ones.
These observations are expressed in the (shaded) \textsc{UpCast} and
\textsc{DnCasts} rules for casts in
Figure~\ref{fig:gtt-syntax-and-terms}: the upcast for a value type
dynamism is a complex value, while the downcast for a computation type
dynamism is a stack (if its argument is). Indeed, this description of
casts is simpler than the intuition we began the section with: rather
than putting in both upcasts and downcasts for all value and computation
type dynamisms, it suffices to put in only \emph{upcasts} for
\emph{value} type dynamisms and \emph{downcasts} for \emph{computation}
type dynamisms, because of monotonicity of type dynamism for $U$/$\u F$
types. The \emph{downcast} for a \emph{value} type dynamism $A \sqsubseteq
A'$, as a stack $\bullet : \u F A' \vdash \dncast{\u F A}{\u F
A'}{\bullet} : \u F A$ as described above, is obtained from $\u F A
\sqsubseteq \u F A'$ as computation types. The upcast for a computation type
dynamism $\u B \sqsubseteq \u B'$ as a value $x : U \u B \vdash \upcast{U \u
B}{U \u B'}{x} : U \u B'$ is obtained from $U \u B \sqsubseteq U \u B'$ as
value types. Moreover, we will show below that the value upcast
$\upcast{A}{A'}$ induces a stack $\bullet : \u F A \vdash \ldots : \u F
A'$ that behaves like an upcast, and dually for the downcast, so this
formulation implies the original formulation above.
We justify this design in two ways in the remainder of the paper. In
Section~\ref{sec:contract}, we show how to implement casts by a
contract translation to CBPV where upcasts are complex values and
downcasts are complex stacks.
However, one goal of
GTT is to be able to prove things about many gradually typed languages
at once, by giving different models, so one might wonder whether this
design rules out useful models of gradual typing where casts can have more general effects. In
Theorem~\ref{thm:upcasts-values-downcasts-stacks}, we show instead that
our design choice is forced for all casts, as long as the casts between ground types and the dynamic types are values/stacks.
\subsection{Term Dynamism: Judgements and Structural Rules}
\begin{figure}
\begin{small}
\[
\begin{array}{c}
\framebox{$\Phi \vdash V \sqsubseteq V' : A \sqsubseteq A'$ and $\Phi \mid \Psi \vdash M \sqsubseteq M' : \u B \sqsubseteq \u B'$}
\\\\
\inferrule*[lab=TmDynRefl]{ }{\Gamma \sqsubseteq \Gamma \mid \Delta \sqsubseteq \Delta \vdash E \sqsubseteq E : T \sqsubseteq T}
\qquad
\inferrule*[lab=TmDynVar]
{ }
{\Phi,x \sqsubseteq x' : A \sqsubseteq A',\Phi' \vdash x \sqsubseteq x' : A \sqsubseteq A'}
\\\\
\inferrule*[lab=TmDynTrans]{\Gamma \sqsubseteq \Gamma' \mid \Delta \sqsubseteq \Delta' \vdash E \sqsubseteq E' : T \sqsubseteq T' \\\\
\Gamma' \sqsubseteq \Gamma'' \mid \Delta' \sqsubseteq \Delta'' \vdash E' \sqsubseteq E'' : T' \sqsubseteq T''
}
{\Gamma \sqsubseteq \Gamma'' \mid \Delta \sqsubseteq \Delta'' \vdash E \sqsubseteq E'' : T \sqsubseteq T''}
\qquad
\inferrule*[lab=TmDynValSubst]
{\Phi \vdash V \sqsubseteq V' : A \sqsubseteq A' \\\\
\Phi, x \sqsubseteq x' : A \sqsubseteq A',\Phi' \,\,|\,\, \Psi \vdash E \sqsubseteq E' : T \sqsubseteq T'
}
{\Phi \mid \Psi \vdash E[V/x] \sqsubseteq E'[V'/x'] : T \sqsubseteq T'}
\\\\
\inferrule*[lab=TmDynHole]
{ }
{\Phi \,\,|\,\, \bullet \sqsubseteq \bullet : \u B \sqsubseteq \u B' \vdash \bullet \sqsubseteq \bullet : \u B \sqsubseteq \u B'}
\qquad
\inferrule*[lab=TmDynStkSubst]
{\Phi \,\,|\,\, \Psi \vdash M_1 \sqsubseteq M_1' : \u B_1 \sqsubseteq \u B_1' \\\\
\Phi \,\,|\,\, \bullet \sqsubseteq \bullet : \u B_1 \sqsubseteq \u B_1' \vdash M_2 \sqsubseteq M_2' : \u B_2 \sqsubseteq \u B_2'
}
{\Phi \mid \Psi \vdash M_2[M_1/\bullet] \sqsubseteq M_2'[M_1'/\bullet] : \u B_2 \sqsubseteq \u B_2'}
\\\\
\ifshort
\inferrule*[lab=$\times$ICong]
{\Phi \vdash V_1 \sqsubseteq V_1' : A_1 \sqsubseteq A_1'\\\\
\Phi\vdash V_2 \sqsubseteq V_2' : A_2 \sqsubseteq A_2'}
{\Phi \vdash (V_1,V_2) \sqsubseteq (V_1',V_2') : A_1 \times A_2 \sqsubseteq A_1' \times A_2'}
\quad
\inferrule*[lab=$\to$ICong]
{\Phi, x \sqsubseteq x' : A \sqsubseteq A' \,\,|\,\, \Psi \vdash M \sqsubseteq M' : \u B \sqsubseteq \u B'}
{\Phi \,\,|\,\, \Psi \vdash \lambda x : A . M \sqsubseteq \lambda x' : A' . M' : A \to \u B \sqsubseteq A' \to \u B'}
\\\\
\inferrule*[lab=$\times$ECong]
{\Phi \vdash V \sqsubseteq V' : A_1 \times A_2 \sqsubseteq A_1' \times A_2' \\\\
\Phi, x \sqsubseteq x' : A_1 \sqsubseteq A_1', y \sqsubseteq y' : A_2 \sqsubseteq A_2' \mid \Psi \vdash E \sqsubseteq E' : T \sqsubseteq T'
}
{\Phi \mid \Psi \vdash \pmpairWtoXYinZ V x y E \sqsubseteq \pmpairWtoXYinZ {V'} {x'} {y'} {E'} : T \sqsubseteq T'}
\,\,
\inferrule*[lab=$\to$ECong]
{\Phi \,\,|\,\, \Psi \vdash M \sqsubseteq M' : A \to \u B \sqsubseteq A' \to \u B' \\\\
\Phi \vdash V \sqsubseteq V' : A \sqsubseteq A'}
{\Phi \,\,|\,\, \Psi \vdash M\,V \sqsubseteq M'\,V' : \u B \sqsubseteq \u B' }
\\\\
\inferrule*[lab=$F$ICong]
{\Phi \vdash V \sqsubseteq V' : A \sqsubseteq A'}
{\Phi \,\,|\,\, \cdot \vdash \kw{ret} V \sqsubseteq \kw{ret} V' : \u F A \sqsubseteq \u F A'}
\qquad
\inferrule*[lab=$F$ECong]
{\Phi \,\,|\,\, \Psi \vdash M \sqsubseteq M' : \u F A \sqsubseteq \u F A' \\\\
\Phi, x \sqsubseteq x' : A \sqsubseteq A' \,\,|\,\, \cdot \vdash N \sqsubseteq N' : \u B \sqsubseteq \u B'}
{\Phi \,\,|\,\, \Psi \vdash \bindXtoYinZ M x N \sqsubseteq \bindXtoYinZ {M'} {x'} {N'} : \u B \sqsubseteq \u B'}
\\\\
\fi
\end{array}
\]
\vspace{-0.25in}
\caption{GTT Term Dynamism (Structural \ifshort and Congruence\fi Rules) \ifshort
(Rules for $U,1,+,0,\mathbin{\&},\top$ in extended version)
\fi}
\label{fig:gtt-term-dynamism-structural}
\end{small}
\end{figure}
\iflong
\begin{figure}
\begin{small}
\[
\begin{array}{c}
\inferrule*[lab=$+$IlCong]
{\Phi \vdash V \sqsubseteq V' : A_1 \sqsubseteq A_1'}
{\Phi \vdash \kw{inl} V \sqsubseteq \kw{inl} V' : A_1 + A_2 \sqsubseteq A_1' + A_2'}
\qquad
\inferrule*[lab=$+$IrCong]
{\Phi \vdash V \sqsubseteq V' : A_2 \sqsubseteq A_2'}
{\Phi \vdash \kw{inr} V \sqsubseteq \kw{inr} V' : A_1 + A_2 \sqsubseteq A_1' + A_2'}
\\\\
\inferrule*[lab=$+$ECong]
{
\Phi \vdash V \sqsubseteq V' : A_1 + A_2 \sqsubseteq A_1' + A_2' \\\\
\Phi, x_1 \sqsubseteq x_1' : A_1 \sqsubseteq A_1' \mid \Psi \vdash E_1 \sqsubseteq E_1' : T \sqsubseteq T' \\\\
\Phi, x_2 \sqsubseteq x_2' : A_2 \sqsubseteq A_2' \mid \Psi \vdash E_2 \sqsubseteq E_2' : T \sqsubseteq T'
}
{\Phi \mid \Psi \vdash \caseofXthenYelseZ V {x_1. E_1}{x_2.E_2} \sqsubseteq \caseofXthenYelseZ {V'} {x_1'. E_1'}{x_2'.E_2'} : T \sqsubseteq T'}
\qquad
\inferrule*[lab=$0$ECong]
{\Phi \vdash V \sqsubseteq V' : 0 \sqsubseteq 0}
{\Phi \mid \Psi \vdash \kw {abort} V \sqsubseteq \kw {abort} V' : T \sqsubseteq T'}
\\\\
\inferrule*[lab=$1$ICong]{ }{\Phi \vdash () \sqsubseteq () : 1 \sqsubseteq 1}
\qquad
\inferrule*[lab=$1$ECong]
{\Phi \vdash V \sqsubseteq V' : 1 \sqsubseteq 1 \\\\
\Phi \mid \Psi \vdash E \sqsubseteq E' : T \sqsubseteq T'
}
{\Phi \mid \Psi \vdash \pmpairWtoinZ V E \sqsubseteq \pmpairWtoinZ {V'} {E'} : T \sqsubseteq T'}
\\\\
\inferrule*[lab=$\times$ICong]
{\Phi \vdash V_1 \sqsubseteq V_1' : A_1 \sqsubseteq A_1'\\\\
\Phi\vdash V_2 \sqsubseteq V_2' : A_2 \sqsubseteq A_2'}
{\Phi \vdash (V_1,V_2) \sqsubseteq (V_1',V_2') : A_1 \times A_2 \sqsubseteq A_1' \times A_2'}
\quad
\inferrule*[lab=$\to$ICong]
{\Phi, x \sqsubseteq x' : A \sqsubseteq A' \,\,|\,\, \Psi \vdash M \sqsubseteq M' : \u B \sqsubseteq \u B'}
{\Phi \,\,|\,\, \Psi \vdash \lambda x : A . M \sqsubseteq \lambda x' : A' . M' : A \to \u B \sqsubseteq A' \to \u B'}
\\\\
\inferrule*[lab=$\times$ECong]
{\Phi \vdash V \sqsubseteq V' : A_1 \times A_2 \sqsubseteq A_1' \times A_2' \\\\
\Phi, x \sqsubseteq x' : A_1 \sqsubseteq A_1', y \sqsubseteq y' : A_2 \sqsubseteq A_2' \mid \Psi \vdash E \sqsubseteq E' : T \sqsubseteq T'
}
{\Phi \mid \Psi \vdash \pmpairWtoXYinZ V x y E \sqsubseteq \pmpairWtoXYinZ {V'} {x'} {y'} {E'} : T \sqsubseteq T'}
\,\,
\inferrule*[lab=$\to$ECong]
{\Phi \,\,|\,\, \Psi \vdash M \sqsubseteq M' : A \to \u B \sqsubseteq A' \to \u B' \\\\
\Phi \vdash V \sqsubseteq V' : A \sqsubseteq A'}
{\Phi \,\,|\,\, \Psi \vdash M\,V \sqsubseteq M'\,V' : \u B \sqsubseteq \u B' }
\\\\
\inferrule*[lab=$U$ICong]
{\Phi \mid \cdot \vdash M \sqsubseteq M' : \u B \sqsubseteq \u B'}
{\Phi \vdash \kw{thunk} M \sqsubseteq \kw{thunk} M' : U \u B \sqsubseteq U \u B'}
\qquad
\inferrule*[lab=$U$ECong]
{\Phi \vdash V \sqsubseteq V' : U \u B \sqsubseteq U \u B'}
{\Phi \,\,|\,\, \cdot \vdash \kw{force} V \sqsubseteq \kw{force} V' : \u B \sqsubseteq \u B'}
\\\\
\inferrule*[lab=$F$ICong]
{\Phi \vdash V \sqsubseteq V' : A \sqsubseteq A'}
{\Phi \,\,|\,\, \cdot \vdash \kw{ret} V \sqsubseteq \kw{ret} V' : \u F A \sqsubseteq \u F A'}
\qquad
\inferrule*[lab=$F$ECong]
{\Phi \,\,|\,\, \Psi \vdash M \sqsubseteq M' : \u F A \sqsubseteq \u F A' \\\\
\Phi, x \sqsubseteq x' : A \sqsubseteq A' \,\,|\,\, \cdot \vdash N \sqsubseteq N' : \u B \sqsubseteq \u B'}
{\Phi \,\,|\,\, \Psi \vdash \bindXtoYinZ M x N \sqsubseteq \bindXtoYinZ {M'} {x'} {N'} : \u B \sqsubseteq \u B'}
\\\\
\inferrule*[lab=$\top$ICong]{ }{\Phi \mid \Psi \vdash \{\} \sqsubseteq \{\} : \top \sqsubseteq \top}
\qquad
\inferrule*[lab=$\mathbin{\&}$ICong]
{\Phi \mid \Psi \vdash M_1 \sqsubseteq M_1' : \u B_1 \sqsubseteq \u B_1'\and
\Phi \mid \Psi \vdash M_2 \sqsubseteq M_2' : \u B_2 \sqsubseteq \u B_2'}
{\Phi \mid \Psi \vdash \pair {M_1} {M_2} \sqsubseteq \pair {M_1'} {M_2'} : \u B_1 \mathbin{\&} \u B_2 \sqsubseteq \u B_1' \mathbin{\&} \u B_2'}
\\\\
\inferrule*[lab=$\mathbin{\&}$ECong]
{\Phi \mid \Psi \vdash M \sqsubseteq M' : \u B_1 \mathbin{\&} \u B_2 \sqsubseteq \u B_1' \mathbin{\&} \u B_2'}
{\Phi \mid \Psi \vdash \pi M \sqsubseteq \pi M' : \u B_1 \sqsubseteq \u B_1'}
\qquad
\inferrule*[lab=$\mathbin{\&}$E'Cong]
{\Phi \mid \Psi \vdash M \sqsubseteq M' : \u B_1 \mathbin{\&} \u B_2 \sqsubseteq \u B_1' \mathbin{\&} \u B_2'}
{\Phi \mid \Psi \vdash \pi' M \sqsubseteq \pi' M' : \u B_2 \sqsubseteq \u B_2'}
\end{array}
\]
\caption{GTT Term Dynamism (Congruence Rules)}
\label{fig:gtt-term-dynamism-ext-congruence}
\end{small}
\end{figure}
\fi
The final piece of GTT is the \emph{term dynamism} relation, a syntactic
judgement that is used for reasoning about the behavioral properties of
terms in GTT. To a first approximation, term dynamism can be thought of
as syntactic rules for reasoning about \emph{contextual approximation}
relative to errors (not divergence), where $E \sqsubseteq E'$ means that
either $E$ errors or $E$ and $E'$ have the same result. However, a key
idea in GTT is to consider a \emph{heterogeneous} term dynamism
judgement $E \sqsubseteq E' : T \sqsubseteq T'$ between terms $E : T$ and $E' :
T'$ where $T \sqsubseteq T'$---i.e. relating two terms at two different
types, where the type on the right is more dynamic than the type on the
left. This judgement structure allows simple axioms characterizing the
behavior of casts~\cite{newlicata2018-fscd} and axiomatizes the
graduality property~\cite{refined}.
Here, we break this judgement up into
value dynamism $V \sqsubseteq V' : A \sqsubseteq A'$ and computation dynamism $M
\sqsubseteq M' : \u B \sqsubseteq \u B'$. To support reasoning about open terms,
the full forms of the judgements are
\begin{itemize}
\item $\Gamma \sqsubseteq \Gamma' \vdash V \sqsubseteq V' : A \sqsubseteq A'$ where
$\Gamma \vdash V : A$ and $\Gamma' \vdash V' : A'$ and $\Gamma \sqsubseteq
\Gamma'$ and $A \sqsubseteq A'$.
\item
$\Gamma \sqsubseteq \Gamma' \mid \Delta \sqsubseteq \Delta' \vdash M \sqsubseteq M' :
\u B \sqsubseteq \u B'$ where $\Gamma \mid \Delta \vdash M : \u B$ and
$\Gamma' \mid \Delta' \vdash M' : \u B'$.
\end{itemize}
where $\Gamma \sqsubseteq \Gamma'$ is the pointwise lifting of value type
dynamism, and $\Delta \sqsubseteq \Delta'$ is the optional lifting of
computation type dynamism. We write $\Phi : \Gamma \sqsubseteq \Gamma'$ and
$\Psi : \Delta \sqsubseteq \Delta'$ as syntax for ``zipped'' pairs of
contexts that are pointwise related by type dynamism, $x_1 \sqsubseteq x_1' : A_1 \sqsubseteq A_1', \ldots, x_n \sqsubseteq x_n' :
A_n \sqsubseteq A_n'$, which correctly suggests that one can substitute related
terms for related variables. We will implicitly zip/unzip pairs of
contexts, and sometimes write e.g. $\Gamma \sqsubseteq \Gamma$ to mean $x
\sqsubseteq x : A \sqsubseteq A$ for all $x : A$ in $\Gamma$.
The main point of our rules for term dynamism is that \emph{there are no
type-specific axioms in the definition} beyond the $\beta\eta$-axioms
that the type satisfies in a non-gradual language. Thus, adding a new
type to gradual type theory does not require any a priori consideration
of its gradual behavior in the language definition; instead, this is
deduced as a theorem in the type theory. The basic structural rules of
term dynamism in Figure~\ref{fig:gtt-term-dynamism-structural}\iflong\ and Figure~\ref{fig:gtt-term-dynamism-ext-congruence}\fi\ say that
it is reflexive and transitive (\textsc{TmDynRefl},
\textsc{TmDynTrans}), that assumptions can be used and substituted for
(\textsc{TmDynVar}, \textsc{TmDynValSubst}, \textsc{TmDynHole},
\textsc{TmDynStkSubst}), and that every term constructor is monotone
(the \textsc{Cong} rules).
\begin{longonly}
While we could add congruence rules for errors and casts,
these follow from the axioms characterizing their behavior below.
\end{longonly}
We will often abbreviate a ``homogeneous'' term dynamism (where the type
or context dynamism is given by reflexivity) by writing e.g. $\Gamma
\vdash V \sqsubseteq V' : A \sqsubseteq A'$ for $\Gamma \sqsubseteq \Gamma \vdash V
\sqsubseteq V' : A \sqsubseteq A'$, or $\Phi \vdash V \sqsubseteq V' : A$ for $\Phi
\vdash V \sqsubseteq V' : A \sqsubseteq A$, and similarly for computations. The
entirely homogeneous judgements $\Gamma \vdash V \sqsubseteq V' : A$ and
$\Gamma \mid \Delta \vdash M \sqsubseteq M' : \u B$ can be thought of as a
syntax for contextual error approximation (as we prove below). We write
$V \mathrel{\gtdyn\ltdyn} V'$ (``equidynamism'') to mean term dynamism relations in
both directions (which requires that the types are also equidynamic
$\Gamma \mathrel{\gtdyn\ltdyn} \Gamma'$ and $A \mathrel{\gtdyn\ltdyn} A'$), which is a syntactic
judgement for contextual equivalence.
\ifshort \vspace{-0.1in} \fi
\subsection{Term Dynamism: Axioms}
Finally, we assert some term dynamism axioms that describe the behavior
of programs. The cast universal properties at the top of
Figure~\ref{fig:gtt-term-dyn-axioms}, following~\citet{newlicata2018-fscd}, say that
the defining property of an upcast from $A$ to $A'$ is that it is the
least dynamic term of type $A'$ that is more dynamic than $x$, a ``least
upper bound''. That is, $\upcast{A}{A'}{x}$ is a term of type $A'$ that is
more dynamic than $x$ (the ``bound'' rule), and for any other term
$x'$ of type $A'$ that is more dynamic than $x$, $\upcast{A}{A'}{x}$ is
less dynamic than $x'$ (the ``best'' rule). Dually, the downcast
$\dncast{\u B}{\u B'}{\bullet}$ is the most dynamic term of type $\u B$
that is less dynamic than $\bullet$, a ``greatest lower bound''. These
defining properties are entirely independent of the types involved in
the casts, and do not change as we add or remove types from the system.
We will show that these defining properties already imply that the shift
of the upcast $\upcast{A}{A'}$ forms a Galois connection/adjunction with
the downcast $\dncast{\u F A}{\u F A'}$, and dually for computation
types (see Theorem~\ref{thm:cast-adjunction}). They do not
automatically form a Galois insertion/coreflection/embedding-projection
pair, but we can add this by the retract axioms in
Figure~\ref{fig:gtt-term-dyn-axioms}. Together with other theorems of
GTT, these axioms imply that any upcast followed by its corresponding
downcast is the identity (see Theorem~\ref{thm:retract-general}). This
specification of casts leaves some behavior undefined: for example, we
cannot prove in the theory that $\dncast{\u F (1+1)}{\u F {?}}{\upcast{1}{{?}}}$ reduces to an error. We choose this design
because there are valid models in which it is not an error, for instance
if the unique value of $1$ is represented as the boolean \texttt{true}. In
Section~\ref{sec:dynamic-type-interp}, we show additional axioms that
fully characterize the behavior of the dynamic type.
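The ``bound''/``best'' reading of casts can be illustrated with a toy model (a sketch only, not the semantics of GTT: the dynamic type is modeled as a tagged union, upcasts tag, downcasts check the tag, and all names here are hypothetical):

```python
# Toy model of casts (illustrative sketch, not GTT's semantics):
# dynamic values are (tag, payload) pairs; ERROR stands in for the
# run-time error term mho.
ERROR = "mho"

def upcast(tag, v):
    # upcast A => ?: inject a value of type `tag` into the dynamic type
    return (tag, v)

def downcast(tag, d):
    # downcast ? => A: project out of the dynamic type; strict in errors
    if d == ERROR:
        return ERROR
    t, v = d
    return v if t == tag else ERROR

# Upcast-then-downcast is the identity (the retract property) ...
assert downcast("int", upcast("int", 5)) == 5
# ... while a mismatched downcast errors in this particular model.
assert downcast("bool", upcast("int", 5)) == ERROR
```

In this model every mismatched downcast errors, but as the preceding paragraph notes, the axioms deliberately leave such behavior unspecified: other models may let some mismatches succeed.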
The type universal properties in the middle of the figure, which are
taken directly from CBPV, assert the $\beta\eta$ rules for each type as
(homogeneous) term equidynamisms---these should be understood as having,
as implicit premises, the typing conditions that make both sides type
check, in equidynamic contexts.
The final axioms assert properties of the run-time error term $\mho$: it
is the least dynamic term (has the fewest behaviors) of every
computation type, and all complex stacks are strict in errors, because
stacks force their evaluation position. We state the first axiom in a
heterogeneous way, which includes congruence $\Gamma \sqsubseteq \Gamma'
\vdash \mho_{\u B} \sqsubseteq \mho_{\u B'} : \u B \sqsubseteq \u B'$.
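The two error axioms can be read operationally in a small sketch (a hypothetical encoding, not GTT itself: a computation either produces a value or is the distinguished ERROR):

```python
# Sketch of the error axioms (hypothetical encoding, not GTT itself).
ERROR = "mho"

def run_stack(stack, comp):
    # StkStrict: a stack applied to an erroring computation errors
    return ERROR if comp == ERROR else stack(comp)

def approximates(m, m_prime):
    # ErrBot reading: m error-approximates m' if m errors or agrees with m'
    return m == ERROR or m == m_prime

def double(n):
    return 2 * n

assert run_stack(double, ERROR) == ERROR   # stacks are strict in errors
assert run_stack(double, 21) == 42
assert approximates(ERROR, 42)             # ERROR approximates everything
```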
\begin{figure}
\begin{small}
\framebox{Cast Universal Properties}
\medskip
\begin{tabular}{c|c|c}
& Bound & Best \\
\hline
Up &
${x : A \vdash x \sqsubseteq \upcast A {A'} x : A \sqsubseteq A'}$ &
${x \sqsubseteq x' : A \sqsubseteq A' \vdash \upcast A {A'} x \sqsubseteq x' : A' }$\\
\hline
Down &
${\bullet : \u B' \vdash \dncast{\u B}{\u B'} \bullet \sqsubseteq \bullet : \u B \sqsubseteq \u B'}$
&
${\bullet \sqsubseteq \bullet' : \u B \sqsubseteq \u B' \vdash \bullet \sqsubseteq \dncast{\u B}{\u B'} {\bullet'} : \u B}$\\
\end{tabular}
\[
\framebox{Retract Axiom}
\quad
\begin{array}{c}
x : A \vdash \dncast{\u F A}{\u F \, {?}}{(\kw{ret}{(\upcast{A}{{?}}{x})})} \sqsubseteq \kw{ret}{x} : \u F A \\
x : U \u B \vdash \dncast{\u B}{\u {\text{?`}}}{(\kw{force}{(\upcast{U \u B}{U \u {\text{?`}}}{x})})} \sqsubseteq \kw{force}{x} : \u B \\
\end{array}
\]
\bigskip
\framebox{Type Universal Properties}
\medskip
\begin{tabular}{c|c|c}
Type & $\beta$ & $\eta$\\
\hline
+ &
$\begin{array}{l}
{\caseofXthenYelseZ{\kw{inl} V}{x_1. E_1}{\ldots} \mathrel{\gtdyn\ltdyn} E_1[V/x_1]}\\
{\caseofXthenYelseZ{\kw{inr} V}{\ldots}{x_2. E_2} \mathrel{\gtdyn\ltdyn}
E_2[V/x_2]}
\end{array}$
&
$\begin{array}{l}
E \mathrel{\gtdyn\ltdyn} \caseofXthenYelseZ x {x_1. E[\kw{inl} x_1/x]}{x_2. E[\kw{inr} x_2/x]}\\
\text{where } x:A_1+A_2 \vdash E : T
\end{array}$
\\
\iflong
\hline
$0$
& $-$
& $\begin{array}{l}
E \mathrel{\gtdyn\ltdyn} \kw {abort} x\\
\text{where } x:0 \vdash E : T
\end{array}$ \\
\hline
$\times$ &
${\pmpairWtoXYinZ{(V_1,V_2)}{x_1}{x_2}{E} \mathrel{\gtdyn\ltdyn} E[V_1/x_1,V_2/x_2]}$
&
$\begin{array}{l}
E \mathrel{\gtdyn\ltdyn} \pmpairWtoXYinZ x {x_1}{x_2} {E[(x_1,x_2)/x]} \\
\text{where } {x : A_1 \times A_2 \vdash E : T}
\end{array}$\\
\hline
$1$
& $\pmpairWtoinZ{()}{E} \mathrel{\gtdyn\ltdyn} E$
&
$\begin{array}{l}
{x : 1 \vdash E \mathrel{\gtdyn\ltdyn} \pmpairWtoinZ{x}{E[()/x]} : T}\\
\text{where } {x : 1 \vdash E : T}
\end{array}$\\
\fi
\hline
$U$
& ${\kw{force}\kw{thunk} M \mathrel{\gtdyn\ltdyn} M}$
& ${x : U \u B \vdash x \mathrel{\gtdyn\ltdyn} \kw{thunk}\kw{force} x : U \u B}$\\
\hline
$F$
&
${\bindXtoYinZ {\kw{ret} V} x M \mathrel{\gtdyn\ltdyn} M[V/x]}$
&
${\bullet : \u F A \vdash M \mathrel{\gtdyn\ltdyn} \bindXtoYinZ \bullet x {M[\kw{ret} x/\bullet]} : \u B}$\\
\hline
$\to$
&
${(\lambda x:A. M)\,V \mathrel{\gtdyn\ltdyn} M[V/x]}$
&
${\bullet : A \to \u B \vdash \bullet \mathrel{\gtdyn\ltdyn} \lambda x:A. \bullet\,x : A \to \u B}$\\
\hline
$\mathbin{\&}$
&
$\begin{array}{l}
{\pi \pair{M}{M'} \mathrel{\gtdyn\ltdyn} M}\\
{\pi' \pair{M}{M'} \mathrel{\gtdyn\ltdyn} M'}
\end{array}$
& ${\bullet : \u B_1 \mathbin{\&} \u B_2 \vdash \bullet \mathrel{\gtdyn\ltdyn}\pair{\pi \bullet}{\pi' \bullet} : \u B_1 \mathbin{\&} \u B_2}$ \\
\iflong
\hline
$\top$
& -
&
${\bullet : \top \vdash \bullet \mathrel{\gtdyn\ltdyn} \{\} : \top}$\\
\fi
\end{tabular}
\smallskip
\begin{mathpar}
\framebox{Error Properties}
\qquad
\inferrule*[lab=ErrBot]{ \Gamma' \mid \cdot \vdash M' : \u B' }
{ \Gamma \sqsubseteq \Gamma' \mid \cdot \vdash \mho \sqsubseteq M' : \u B \sqsubseteq \u B'}
\qquad
\inferrule*[lab=StkStrict] { \Gamma \mid x : \u B \vdash S : \u B'}
{\Gamma \mid \cdot \vdash S[\mho_{\u B}] \sqsubseteq \mho_{\u{B'}} : \u B'}
\end{mathpar}
\end{small}
\caption{GTT Term Dynamism Axioms \ifshort($0$,$\times$,$1$,$\top$ in extended version)\fi}
\label{fig:gtt-term-dyn-axioms}
\end{figure}
\section{Theorems in Gradual Type Theory}
\label{sec:theorems-in-gtt}
In this section, we show that the axiomatics of gradual type theory
determine most properties of casts, which shows that these behaviors of
casts are forced in any implementation of gradual typing satisfying
graduality and $\beta,\eta$.
\begin{shortonly}
For proofs, see the extended version of the paper.
\end{shortonly}
\begin{longonly}
\subsection{Properties inherited from CBPV}
Because the GTT term equidynamism relation $\mathrel{\gtdyn\ltdyn}$ includes the
congruence and $\beta\eta$ axioms of the CBPV equational theory, types
inherit the universal properties they have there~\cite{levy03cbpvbook}. We recall
some relevant definitions and facts.
\begin{definition}[Isomorphism] ~
\begin{enumerate}
\item We write $A \cong_v A'$ for a \emph{value isomorphism between
$A$ and $A'$}, which consists of two complex values $x : A \vdash V'
: A'$ and $x' : A' \vdash V : A$ such that $x : A \vdash V[V'/x']
\mathrel{\gtdyn\ltdyn} x : A$ and $x' : A' \vdash V'[V/x] \mathrel{\gtdyn\ltdyn} x' : A'$.
\item We write $\u B \cong_c \u B'$ for a \emph{computation
isomorphism between $\u B$ and $\u B'$}, which consists of two
complex stacks $\bullet : \u B \vdash S' : \u B'$ and $\bullet' : \u
B' \vdash S : \u B$ such that $\bullet : \u B \vdash S[S'/\bullet']
\mathrel{\gtdyn\ltdyn} \bullet : \u B$ and $\bullet' : \u B' \vdash S'[S/\bullet]
\mathrel{\gtdyn\ltdyn} \bullet' : \u B'$.
\end{enumerate}
\end{definition}
Note that a value isomorphism is a strong condition, and an isomorphism
in call-by-value between types $A$ and $A'$ corresponds to a computation
isomorphism $\u F A \cong \u F A'$, and dually~\cite{levy17popl}.
\smallskip
\begin{lemma}[Initial objects] ~ \label{lem:initial}
\begin{enumerate}
\item For all (value or computation) types $T$, there exists a unique
expression $x : 0 \vdash E : T$.
\item For all $\u B$, there exists a unique stack $\bullet : \u F 0
\vdash S : \u B$.
\item
0 is strictly initial: Suppose there is a type $A$ with a complex
value $x : A \vdash V : 0$. Then $V$ is an isomorphism $A \cong_v
0$.
\item $\u F 0$ is not provably \emph{strictly} initial among computation types.
\end{enumerate}
\end{lemma}
\begin{proof}~
\begin{enumerate}
\item Take $E$ to be $x : 0 \vdash \kw {abort}{x} : T$. Given any $E'$,
we have $E \mathrel{\gtdyn\ltdyn} E'$ by the $\eta$ principle for $0$.
\item Take $S$ to be $\bullet : \u F 0 \vdash
\bindXtoYinZ{\bullet}{x}{\kw {abort}{x}} : \u B$. Given another $S'$,
by the $\eta$ principle for $F$ types, $S' \mathrel{\gtdyn\ltdyn}
\bindXtoYinZ{\bullet}{x}{S'[\kw{ret} x]}$. By congruence, to show $S
\mathrel{\gtdyn\ltdyn} S'$, it suffices to show $x : 0 \vdash \kw {abort}{x} \mathrel{\gtdyn\ltdyn}
S[\kw{ret}{x}] : \u B$, which is an instance of the previous part.
\item
We have $y : 0 \vdash \kw {abort}{y} : A$. The composite $y : 0 \vdash
V[\kw {abort}{y}/x] : 0$ is equidynamic with $y$ by the $\eta$
principle for $0$, which says that any two complex values with
domain $0$ are equal.
The composite $x : A \vdash \kw {abort}{V} : A$ is equidynamic
with $x$, because
\[
x : A, y : A, z : 0 \vdash x \mathrel{\gtdyn\ltdyn} \kw {abort}{z} \mathrel{\gtdyn\ltdyn} y : A
\]
where the first is by $\eta$ with $x : A, y : A, z : 0 \vdash E[z] :=
x : A$ and the second with $x : A, y : A, z : 0 \vdash E[z] := y : A$ (this
depends on the fact that $0$ is ``distributive'', i.e. $\Gamma,x:0$
has the universal property of $0$). Substituting $\kw {abort}{V}$ for $y$
and $V$ for $z$, we have $\kw {abort}{V} \mathrel{\gtdyn\ltdyn} x$.
\item $\u F 0$ is not \emph{strictly} initial among computation types,
though. Proof sketch: a domain model along the lines of
\citep{newlicata2018-fscd} with only non-termination and type errors shows this,
because there $\u F 0$ and $\top$ are isomorphic (the same object is
both initial and terminal), so if $\u F 0$ were strictly initial (any
type $\u B$ with a stack $\bullet : \u B \vdash S : \u F 0$ is isomorphic
to $\u F 0$), then because every type $\u B$ has a stack to $\top$
(terminal) and therefore $\u F 0$, every type would be isomorphic to
$\top$/$\u F 0$---i.e. the stack category would be trivial. But there
are non-trivial computation types in this model.
\end{enumerate}
\end{proof}
\begin{lemma}[Terminal objects] ~ \label{lem:terminal}
\begin{enumerate}
\item For any computation type $\u B$, there exists a unique stack
$\bullet : \u B \vdash S : \top$.
\item (In any context $\Gamma$,) there exists a unique complex value
$V : U \top$.
\item (In any context $\Gamma$,) there exists a unique complex value
$V : 1$.
\item $U \top \cong_v 1$
\item $\top$ is not a strict terminal object.
\end{enumerate}
\end{lemma}
\begin{proof} ~
\begin{enumerate}
\item Take $S = \{\}$. The $\eta$ rule for $\top$, $\bullet : \top
\vdash \bullet \mathrel{\gtdyn\ltdyn} \{\} : \top$, under the substitution of
$\bullet : \u B \vdash S : \top$, gives $S \mathrel{\gtdyn\ltdyn}
\{\}[S/\bullet] = \{\}$.
\item Take $V = \kw{thunk}{\{\}}$. We have $x : U \top \vdash x
\mathrel{\gtdyn\ltdyn} \kw{thunk}{\kw{force}{x}} \mathrel{\gtdyn\ltdyn} \kw{thunk}{\{\}} : U \top$ by the
$\eta$ rules for $U$ and $\top$.
\item Take $V = ()$. By $\eta$ for $1$ with $x : 1 \vdash E[x] :=
() : 1$, we have $x : 1 \vdash () \mathrel{\gtdyn\ltdyn} \pmmuXtoYinZ{x}{()} :
1$. By $\eta$ for $1$ with $x : 1 \vdash E[x] := x : 1$, we have
$x : 1 \vdash x \mathrel{\gtdyn\ltdyn} \pmmuXtoYinZ{x}{()}$. Therefore $x : 1
\vdash x \mathrel{\gtdyn\ltdyn} () : 1$.
\item We have maps $x : U \top \vdash () : 1$ and $x : 1 \vdash
\kw{thunk}{\{\}} : U \top$. The composite on $1$ is the identity by
the previous part. The composite on $U \top$ is the identity by
part (2).
\item Proof sketch: As above, there is a domain model with
$\top \cong \u F 0$, so if $\top$ were a strict terminal object,
then $\u F 0$ would be too. But $\u F 0$ is also initial, so it
has a map to every type, and therefore every type would be
isomorphic to $\u F 0$ and $\top$. But there are non-trivial
computation types in the model.
\end{enumerate}
\end{proof}
\end{longonly}
\begin{longonly}
\subsection{Derived Cast Rules}
As noted above, monotonicity of type dynamism for $U$ and $\u F$ means
that we have the following as instances of the general cast rules:
\begin{lemma}[Shifted Casts]
The following are derivable:
\begin{small}
\begin{mathpar}
\inferrule
{\Gamma \,\,|\,\, \Delta \vdash M : \u F A' \and A \sqsubseteq A'}
{\Gamma \,\,|\,\, \Delta \vdash \dncast {\u F A} {\u F A'} M : \u F A}
\inferrule
{\Gamma \vdash V : U \u B \and \u B \sqsubseteq \u B'}
{\Gamma \vdash \upcast {U \u B} {U \u B'} V : U \u B'}
\end{mathpar}
\end{small}
\end{lemma}
\begin{longproof}
They are instances of the general upcast and downcast rules, using the
fact that $U$ and $\u F$ are congruences for type dynamism, so in the
first rule $\u F A \sqsubseteq \u F A'$, and in the second, $U \u B \sqsubseteq
U \u B'$.
\end{longproof}
The cast universal properties in Figure~\ref{fig:gtt-term-dyn-axioms}
imply the following seemingly more general rules for reasoning about
casts:
\begin{lemma}[Upcast and downcast left and right rules] \label{lem:cast-left-right}
The following are derivable:
\begin{small}
\begin{mathpar}
\inferrule*[Right=UpR]
{A \sqsubseteq A' \and \Phi \vdash V \sqsubseteq V' : A \sqsubseteq A'}
{\Phi \vdash V \sqsubseteq \upcast {A'} {A''} {V'} : A \sqsubseteq A''}
\inferrule*[Right=UpL]
{\Phi \vdash V \sqsubseteq V'' : A \sqsubseteq A''}
{\Phi \vdash \upcast A {A'} V \sqsubseteq V'' : A' \sqsubseteq A'' }
\inferrule*[Right=DnL]
{ \u B' \sqsubseteq \u B'' \and \Phi \mid \Psi \vdash M' \sqsubseteq M'' : \u B' \sqsubseteq \u B''}
{ \Phi \mid \Psi \vdash \dncast{\u B}{\u B'} M' \sqsubseteq M'' : \u B \sqsubseteq \u B''}
\inferrule*[Right=DnR]
{ \Phi \mid \Psi \vdash M \sqsubseteq M'' : \u B \sqsubseteq \u B'' }
{ \Phi \mid \Psi \vdash M \sqsubseteq \dncast{\u B'}{\u B''} M'' : \u B \sqsubseteq \u B''}
\end{mathpar}
\end{small}
\end{lemma}
In sequent calculus terminology, an upcast is left-invertible, while a
downcast is right-invertible, in the sense that any time we have a
conclusion with a upcast on the left/downcast on the right, we can
without loss of generality apply these rules (this comes from upcasts
and downcasts forming a Galois connection). We write the $A \sqsubseteq A'$
and $\u B' \sqsubseteq \u B''$ premises on the non-invertible rules to
emphasize that the premise is not necessarily well-formed given that the
conclusion is.
\begin{longproof}
For upcast left, substitute $V'$ into the axiom $x \sqsubseteq
\upcast{A'}{A''}{x} : A' \sqsubseteq A''$ to get $V' \sqsubseteq
\upcast{A'}{A''}{V'}$, and then use transitivity with the premise.
For upcast right, by transitivity of
\[
x \sqsubseteq x' : A \sqsubseteq A' \vdash \upcast{A}{A'}{x} \sqsubseteq x' : A' \sqsubseteq A' \qquad
x' \sqsubseteq x'' : A' \sqsubseteq A'' \vdash x' \sqsubseteq x'' : A' \sqsubseteq A''
\]
we have
\[
x \sqsubseteq x'' : A \sqsubseteq A'' \vdash \upcast{A}{A'}{x} \sqsubseteq x'' : A' \sqsubseteq A''
\]
Substituting the premise into this gives the conclusion.
For downcast left, substituting $M'$ into the axiom $\dncast{\u B}{\u
B'}{\bullet} \sqsubseteq \bullet : \u B \sqsubseteq \u B'$ gives $\dncast{\u
B}{\u B'}{M'} \sqsubseteq M'$, and then transitivity with the premise gives
the result.
For downcast right, transitivity of
\[
\bullet \sqsubseteq \bullet' : \u B \sqsubseteq \u B' \vdash \bullet \sqsubseteq \bullet' : \u B \sqsubseteq \u B' \quad
\bullet' \sqsubseteq \bullet'' : \u B' \sqsubseteq \u B'' \vdash \bullet' \sqsubseteq \dncast{\u B'}{\u B''}{\bullet''}
\]
gives $\bullet \sqsubseteq \bullet'' : \u B \sqsubseteq \u B'' \vdash \bullet \sqsubseteq \dncast{\u B'}{\u B''}{\bullet''}$,
and then substitution of the premise into this gives the conclusion.
\end{longproof}
Though we did not include congruence rules for casts in
Figure~\ifshort\ref{fig:gtt-term-dynamism-structural}\else\ref{fig:gtt-term-dynamism-ext-congruence}\fi, it is derivable:
\begin{lemma}[Cast congruence rules] \label{lem:cast-congruence}
The following congruence rules for casts are derivable:
\begin{small}
\begin{mathpar}
\inferrule
{ A \sqsubseteq A' \and A' \sqsubseteq A''}
{ x \sqsubseteq x' : A \sqsubseteq A' \vdash \upcast{A}{A''}{x} \sqsubseteq \upcast{A'}{A''}{x'} : A''}
\and
\inferrule
{ A \sqsubseteq A' \and A' \sqsubseteq A''}
{ x : A \vdash \upcast{A}{A'}{x} \sqsubseteq \upcast{A}{A''}{x} : A' \sqsubseteq A''}
\inferrule
{ \u B \sqsubseteq \u B' \and \u B' \sqsubseteq \u B''}
{ \bullet' \sqsubseteq \bullet'' : \u B' \sqsubseteq \u B'' \vdash \dncast{\u B}{\u B'}{\bullet'} \sqsubseteq \dncast{\u B}{\u B''}{\bullet''} : \u B}
\and
\inferrule
{ \u B \sqsubseteq \u B' \and \u B' \sqsubseteq \u B''}
{ \bullet'' : \u B'' \vdash \dncast{\u B}{\u B''}{\bullet''}\sqsubseteq \dncast{\u B'}{\u B''}{\bullet''} : \u B \sqsubseteq \u B'}
\end{mathpar}
\end{small}
\end{lemma}
\begin{longproof}
In all cases, we use the invertible and then the non-invertible rule for the
cast. For the first rule, by upcast left, it suffices to show $x \sqsubseteq
x' : A \sqsubseteq A' \vdash {x} \sqsubseteq \upcast{A'}{A''}{x'} : A \sqsubseteq A''$
which is true by upcast right, using $x \sqsubseteq x'$ in the premise.
For the second, by upcast left, it suffices to show
$x : A \vdash {x} \sqsubseteq \upcast{A}{A''}{x} : A \sqsubseteq A''$,
which is true by upcast right.
For the third, by downcast right, it suffices to show
$\bullet' \sqsubseteq \bullet'' : \u B' \sqsubseteq \u B'' \vdash \dncast{\u B}{\u B'}{\bullet'} \sqsubseteq {\bullet''} : \u B \sqsubseteq \u B''$,
which is true by downcast left, using $\bullet' \sqsubseteq \bullet''$ in the premise.
For the fourth, by downcast right, it suffices to show
$\dncast{\u B}{\u B''}{\bullet''}\sqsubseteq {\bullet''} : \u B \sqsubseteq \u B''$,
which is true by downcast left.
\end{longproof}
\end{longonly}
\subsection{Type-generic Properties of Casts}
The universal property axioms for upcasts and downcasts in
Figure~\ref{fig:gtt-term-dyn-axioms} define them \emph{uniquely} up to
equidynamism ($\mathrel{\gtdyn\ltdyn}$): anything with the same property
is behaviorally equivalent to a cast.
\begin{theorem}[Specification for Casts is a Universal Property]
~ \label{thm:casts-unique}
\begin{enumerate}
\item
If $A \sqsubseteq A'$ and $x : A \vdash V : A'$ is a complex value such that
${x : A \vdash x \sqsubseteq V : A \sqsubseteq A'}$
and
${x \sqsubseteq x' : A \sqsubseteq A' \vdash V \sqsubseteq x' : A'}$
then $x : A \vdash V \mathrel{\gtdyn\ltdyn} \upcast{A}{A'}{x} : A'$.
\item
If $\u B \sqsubseteq \u B'$ and $\bullet' : \u B' \vdash S :
\u B$ is a complex stack such that
${\bullet' : \u B' \vdash S \sqsubseteq \bullet' : \u B \sqsubseteq \u B'}$ and
${\bullet \sqsubseteq \bullet' : \u B \sqsubseteq \u B' \vdash \bullet \sqsubseteq S : \u B}$
then $\bullet' : \u B' \vdash S \mathrel{\gtdyn\ltdyn} \dncast{\u B}{\u B'}\bullet' : \u B$
\end{enumerate}
\end{theorem}
\begin{longproof}
For the first part, to show $\upcast{A}{A'}{x} \sqsubseteq V$, by upcast
left, it suffices to show $x \sqsubseteq V : A \sqsubseteq A'$, which is one
assumption. To show $V \sqsubseteq \upcast{A}{A'}{x}$, we substitute into
the second assumption with $x \sqsubseteq \upcast{A}{A'}{x} : A \sqsubseteq A'$,
which is true by upcast right.
For the second part, to show $S \sqsubseteq \dncast{\u B}{\u
B'}{\bullet'}$, by downcast right, it suffices to show $S \sqsubseteq
\bullet' : \u B \sqsubseteq \u B'$, which is one of the assumptions. To
show $\dncast{\u B}{\u B'}{\bullet'} \sqsubseteq S$, we substitute into the
second assumption with $\dncast{\u B}{\u B'}{\bullet'} \sqsubseteq
\bullet'$, which is true by downcast left.
\end{longproof}
Casts satisfy an identity and composition law:
\begin{theorem}[Casts (de)composition] \label{thm:decomposition}
For any $A \sqsubseteq A' \sqsubseteq A''$ and $\u B \sqsubseteq \u B' \sqsubseteq \u B''$:
\begin{enumerate}
\item $x : A \vdash \upcast A A x \mathrel{\gtdyn\ltdyn} x : A$
\item $x : A \vdash \upcast A {A''}x \mathrel{\gtdyn\ltdyn} \upcast{A'}{A''}\upcast A{A'} x : A''$
\item $\bullet : \u B \vdash \dncast {\u B}{\u B} \bullet \mathrel{\gtdyn\ltdyn} \bullet : \u B$
\item $\bullet : \u B'' \vdash \dncast {\u B}{\u B''} \bullet \mathrel{\gtdyn\ltdyn}
\dncast{\u B}{\u B'}{(\dncast{\u B'}{\u B''} \bullet)} : \u B$
\end{enumerate}
\end{theorem}
\begin{longproof} ~
We use Theorem~\ref{thm:casts-unique} in all cases, and show that the
right-hand side has the universal property of the left.
\begin{enumerate}
\item Both parts expand to showing
$x \sqsubseteq x : A \sqsubseteq A \vdash x \sqsubseteq x : A \sqsubseteq A$,
which is true by assumption.
\item
First, we need to show $x \sqsubseteq \upcast{A'}{A''}{(\upcast A{A'} x)} :
A \sqsubseteq A''$. By upcast right, it suffices to show $x \sqsubseteq
\upcast{A}{A'}{x} : A \sqsubseteq A'$, which is also true by upcast right.
For $x \sqsubseteq x'' : A \sqsubseteq A'' \vdash \upcast{A'}{A''}{(\upcast
A{A'} x)} \sqsubseteq x''$, by upcast left twice, it suffices to show $x
\sqsubseteq x'' : A \sqsubseteq A''$, which is true by assumption.
\item Both parts expand to showing $\bullet : \u B \vdash \bullet \sqsubseteq
\bullet : \u B$, which is true by assumption.
\item
To show $\bullet \sqsubseteq \bullet'' : \u B \sqsubseteq \u B'' \vdash \bullet
\sqsubseteq \dncast{\u B}{\u B'}{(\dncast{\u B'}{\u B''} \bullet)}$, by
downcast right (twice), it suffices to show $\bullet \sqsubseteq
\bullet'' : \u B \sqsubseteq \u B'' \vdash {\bullet} \sqsubseteq \bullet'' : \u B \sqsubseteq \u
B''$, which is true by assumption. Next, we have to show $\dncast{\u
B}{\u B'}{(\dncast{\u B'}{\u B''} \bullet)} \sqsubseteq \bullet : \u B
\sqsubseteq \u B''$, and by downcast left, it suffices to show $\dncast{\u
B'}{\u B''}{\bullet} \sqsubseteq \bullet : \u B' \sqsubseteq \u B''$, which is
also true by downcast left.
\end{enumerate}
\end{longproof}
\noindent In particular, this composition property implies that the casts into and
out of the dynamic type are coherent, for example if $A \sqsubseteq A'$
then
$\upcast{A}{{?}}{x} \mathrel{\gtdyn\ltdyn} \upcast{A'}{{?}}{\upcast{A}{A'}{x}}$.
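A quick sanity check of this coherence in a toy model (a hypothetical chain int $\sqsubseteq$ float $\sqsubseteq {?}$, with upcasts as representation widening; all names are ours, not GTT's):

```python
# Coherence of casts into the dynamic type (toy model; all names
# hypothetical): the direct upcast int => ? agrees with going via float.
def up_int_float(n):
    return float(n)

def up_float_dyn(x):
    return ("float", x)

def up_int_dyn(n):
    # direct cast, defined independently of the composite
    return ("float", float(n))

# <? <= int> x  ==  <? <= float>(<float <= int> x)
assert all(up_int_dyn(n) == up_float_dyn(up_int_float(n)) for n in range(10))
```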
The following theorem says essentially that $x \sqsubseteq
\dncast{T}{T'}{\upcast{T}{T'}{x}}$ (upcast then downcast might error
less but otherwise does not change the behavior) and
$\upcast{T}{T'}{\dncast{T}{T'}{x}} \sqsubseteq x$ (downcast then upcast
might error more but otherwise does not change the behavior). However,
since a value type dynamism $A \sqsubseteq A'$ induces a value upcast $x :
A \vdash \upcast{A}{A'}{x} : A'$ but a stack downcast $\bullet : \u F
A' \vdash \dncast{\u F A}{\u F A'}{\bullet} : \u F A$ (and dually for
computations), the statement of the theorem wraps one cast with
the constructors for $U$ and $\u F$ types (functoriality of $\u F/U$).
\begin{theorem}[Casts are a Galois Connection] \label{thm:cast-adjunction} ~~~
\begin{enumerate}
\item $\bullet' : \u F A' \vdash \bindXtoYinZ{\dncast{\u F A}{\u F A'}{\bullet'}}{x}{\kw{ret}{(\upcast{A}{A'}{x})}} \sqsubseteq \bullet' : \u F A'$
\item $\bullet : \u F A \vdash \bullet \sqsubseteq \bindXtoYinZ{\bullet}{x}{\dncast{\u F A}{\u F A'}{(\kw{ret}{(\upcast{A}{A'}{x})})}} : \u F A$
\item $x : U \u B' \vdash {\upcast{U \u B}{U \u B'}{(\kw{thunk}{({\dncast{\u B}{\u B'}{\kw{force} x}})})}} \sqsubseteq x : U \u B'$
\item $x : U \u B \vdash x \sqsubseteq \kw{thunk}{(\dncast{\u B}{\u B'}{(\kw{force}{(\upcast{U \u B}{U \u B'}{x})})})} : U \u B$
\end{enumerate}
\end{theorem}
\begin{longproof} ~
\begin{enumerate}
\item By $\eta$ for $F$ types, $\bullet' : \u F A' \vdash \bullet'
\mathrel{\gtdyn\ltdyn} \bindXtoYinZ{\bullet'}{x'}{\kw{ret}{x'}} : \u F A'$, so it
suffices to show
\[
\bindXtoYinZ{\dncast{\u F A}{\u F A'}{\bullet'}}{x}{\kw{ret}{(\upcast{A}{A'}{x})}} \sqsubseteq \bindXtoYinZ{\bullet'}{x':A'}{\kw{ret}{x'}}
\]
By congruence, it suffices to show ${\dncast{\u F A}{\u F
A'}{\bullet'}} \sqsubseteq \bullet' : \u F A \sqsubseteq \u F A'$, which
is true by downcast left, and
$x \sqsubseteq x' : A \sqsubseteq A' \vdash {\kw{ret}{(\upcast{A}{A'}{x})}} \sqsubseteq
{\kw{ret}{x'}} : A'$,
which is true by congruence for $\mathsf{ret}$, upcast left, and the assumption.
\item By $\eta$ for $F$ types, it suffices to show
\[
\bullet : \u F A \vdash \bindXtoYinZ{\bullet}{x}{\kw{ret}{x}} \sqsubseteq \bindXtoYinZ{\bullet}{x}{\dncast{\u F A}{\u F A'}{(\kw{ret}{(\upcast{A}{A'}{x})})}} : \u F A
\]
so by congruence,
\[
x : A \vdash \kw{ret}{x} \sqsubseteq {\dncast{\u F A}{\u F A'}{(\kw{ret}{(\upcast{A}{A'}{x})})}}
\]
By downcast right, it suffices to show
\[
x : A \vdash \kw{ret}{x} \sqsubseteq (\kw{ret}{(\upcast{A}{A'}{x})}) : \u F A \sqsubseteq \u F A'
\]
and by congruence
\[
x : A \vdash x \sqsubseteq ({(\upcast{A}{A'}{x})}) : A \sqsubseteq A'
\]
which is true by upcast right.
\item By $\eta$ for $U$ types, it suffices to show
\[
x : U \u B' \vdash {\upcast{U \u B}{U \u B'}{(\kw{thunk}{({\dncast{\u B}{\u B'}{\kw{force} x}})})}} \sqsubseteq \kw{thunk}{(\kw{force}{x})} : U \u B'
\]
By upcast left, it suffices to show
\[
x : U \u B' \vdash {(\kw{thunk}{({\dncast{\u B}{\u B'}{\kw{force} x}})})} \sqsubseteq \kw{thunk}{(\kw{force}{x})} : U \u B \sqsubseteq U \u B'
\]
and by congruence
\[
x : U \u B' \vdash {\dncast{\u B}{\u B'}{\kw{force} x}} \sqsubseteq \kw{force}{x} : \u B \sqsubseteq \u B'
\]
which is true by downcast left.
\item By $\eta$ for $U$ types, it suffices to show
\[
x : U \u B \vdash \kw{thunk}{(\kw{force} x)} \sqsubseteq \kw{thunk}{(\dncast{\u B}{\u B'}{(\kw{force}{(\upcast{U \u B}{U \u B'}{x})})})} : U \u B
\]
and by congruence
\[
x : U \u B \vdash {(\kw{force} x)} \sqsubseteq {(\dncast{\u B}{\u B'}{(\kw{force}{(\upcast{U \u B}{U \u B'}{x})})})} : \u B
\]
By downcast right, it suffices to show
\[
x : U \u B \vdash {(\kw{force} x)} \sqsubseteq {(\kw{force}{(\upcast{U \u B}{U \u B'}{x})})} : \u B \sqsubseteq \u B'
\]
and by congruence
\[
x : U \u B \vdash {x} \sqsubseteq {(\upcast{U \u B}{U \u B'}{x})} : U \u B \sqsubseteq U \u B'
\]
which is true by upcast right.
\end{enumerate}
\end{longproof}
The retract property says roughly that $x \mathrel{\gtdyn\ltdyn}
\dncast{T}{T'}{\upcast{T}{T'}{x}}$ (upcast then downcast does not change
the behavior), strengthening the $\sqsubseteq$ of
Theorem~\ref{thm:cast-adjunction}. In
Figure~\ref{fig:gtt-term-dyn-axioms}, we asserted the retract axiom for
casts with the dynamic type. This and the composition property implies
the retraction property for general casts:
\begin{theorem}[Retract Property for General Casts] ~~~ \label{thm:retract-general}
\begin{enumerate}
\item
$\bullet : \u F A \vdash \bindXtoYinZ{\bullet}{x}{\dncast{\u F A}{\u F A'}{(\kw{ret}{(\upcast{A}{A'}{x})})}} \mathrel{\gtdyn\ltdyn} \bullet : \u F A$
\item
$x : U \u B \vdash \kw{thunk}{(\dncast{\u B}{\u B'}{(\kw{force}{(\upcast{U \u B}{U \u B'}{x})})})} \mathrel{\gtdyn\ltdyn} x : U \u B$
\end{enumerate}
\end{theorem}
\begin{longproof}
We need only to show the $\sqsubseteq$ direction, because the converse is
Theorem~\ref{thm:cast-adjunction}.
\begin{enumerate}
\item
Substituting $\kw{ret}{(\upcast{A}{A'}{x})}$ into
Theorem~\ref{thm:cast-adjunction}'s
\[\bullet : \u F A \vdash \bullet \sqsubseteq \bindXtoYinZ{\bullet}{x}{\dncast{\u F A}{\u F A'}{(\kw{ret}{(\upcast{A}{A'}{x})})}} : \u F A
\]
and $\beta$-reducing gives
\[
x : A \vdash {\kw{ret}{(\upcast{A}{A'}{x})}} \sqsubseteq {\dncast{\u F A'}{\u F {?}}{(\kw{ret}{(\upcast{A'}{{?}}{\upcast{A}{A'}{x}})})}}
\]
Using this, after $\eta$-expanding $\bullet : \u F A$ on the right and using congruence for
$\mathsf{bind}$, it suffices to derive as follows:
\[
\begin{array}{lll}
\dncast{\u F A}{\u F A'}{(\kw{ret}{(\upcast{A}{A'}{x})})} & \sqsubseteq & \text{ congruence }\\
\dncast{\u F A}{\u F A'}{\dncast{\u F A'}{\u F {?}}{(\kw{ret}{(\upcast{A'}{{?}}{\upcast{A}{A'}{x}})})}} & \sqsubseteq & \text{ composition }\\
\dncast{\u F A}{\u F {?}}{(\kw{ret}{{(\upcast{A}{{?}}{x})}})} & \sqsubseteq & \text{ retract axiom for $\upcast{A}{{?}}$ }\\
\kw{ret}{x}\\
\end{array}
\]
\item After using $\eta$ for $U$ and congruence, it suffices to show
\[
x : U \u B \vdash \dncast{\u B}{\u B'}{(\kw{force}{(\upcast{U \u B}{U \u B'}{x})})} \sqsubseteq \kw{force}{x} : \u B
\]
Substituting $x : U \u B \vdash {\upcast{U \u B}{U \u B'}{x}} : U \u B'$
into Theorem~\ref{thm:cast-adjunction}'s
\[
x : U \u B' \vdash x \sqsubseteq \kw{thunk}{(\dncast{\u B'}{\u {\text{?`}}}{(\kw{force}{(\upcast{U \u B'}{U \u {\text{?`}}}{x})})})} : U \u B'
\]
gives
\[
x : U \u B \vdash {\upcast{U \u B}{U \u B'}{x}} \sqsubseteq \kw{thunk}{(\dncast{\u B'}{\u {\text{?`}}}{(\kw{force}{(\upcast{U \u B'}{U \u {\text{?`}}}{{\upcast{U \u B}{U \u B'}{x}}})})})} : U \u B'
\]
So we have
\[
\begin{array}{lll}
\dncast{\u B}{\u B'}{(\kw{force}{{\upcast{U \u B}{U \u B'}{x}}})} & \sqsubseteq \\
\dncast{\u B}{\u B'}{\kw{force}{(\kw{thunk}{(\dncast{\u B'}{\u {\text{?`}}}{(\kw{force}{(\upcast{U \u B'}{U \u {\text{?`}}}{{\upcast{U \u B}{U \u B'}{x}}})})})})}} & \sqsubseteq & \beta\\
\dncast{\u B}{\u B'}{(\dncast{\u B'}{\u {\text{?`}}}{(\kw{force}{(\upcast{U \u B'}{U \u {\text{?`}}}{{\upcast{U \u B}{U \u B'}{x}}})})})} & \sqsubseteq & \text{composition}\\
\dncast{\u B}{\u {\text{?`}}}{(\kw{force}{(\upcast{U \u B}{U \u {\text{?`}}}{x})})} & \sqsubseteq & \text{retract axiom for $\dncast{\u B}{\u {\text{?`}}}$}\\
\kw{force}{x}\\
\end{array}
\]
\end{enumerate}
\end{longproof}
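Semantically, Theorems~\ref{thm:cast-adjunction} and~\ref{thm:retract-general} say each cast pair behaves like an embedding-projection pair. A toy numeric instance (hypothetical, with int embedded into float; not a model of GTT, just the order-theoretic shape):

```python
# Embedding-projection sketch (hypothetical model, not GTT's semantics).
ERROR = "mho"

def upcast(n):
    # embedding: total, identity on its domain up to representation
    return float(n)

def downcast(x):
    # projection: partial, errors on values outside the image
    return int(x) if x == int(x) else ERROR

# Retract: downcast after upcast is the identity.
assert all(downcast(upcast(n)) == n for n in range(10))
# Projection: upcast after downcast agrees on successes ...
assert upcast(downcast(2.0)) == 2.0
# ... but errors more (the inequality direction of the adjunction).
assert downcast(2.5) == ERROR
```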
\subsection{Unique Implementations of Casts}
\begin{longonly}
\begin{definition}
Let a \emph{type constructor} $C$ be a (value or computation) type that is
well-formed according to the grammar in Figure~\ref{fig:gtt-syntax-and-terms} with
additional hypotheses $X \,\,\text{val type}$ and $\u Y \,\,\text{comp type}$ standing for value
or computation types, respectively. We write $C[A/X]$ and $C[\u B/\u
Y]$ for the substitution of a type for a variable.
\end{definition}
For example,
\[
\begin{array}{l}
X_1 \,\,\text{val type}, X_2 \,\,\text{val type} \vdash X_1 + X_2 \,\,\text{val type} \\
\u Y \,\,\text{comp type} \vdash U \u Y \,\,\text{val type} \\
X_1 \,\,\text{val type}, X_2 \,\,\text{val type} \vdash \u F(X_1 + X_2) \,\,\text{comp type}
\end{array}
\]
are type constructors.
It is admissible that all type constructors are monotone in type
dynamism, because we included a congruence rule for every type
constructor in Figure~\ref{fig:gtt-type-dynamism}:
\begin{lemma}[Monotonicity of Type Constructors]
For any type constructor $X \,\,\text{val type} \vdash C$, if $A \sqsubseteq A'$ then
$C[A/X] \sqsubseteq C[A'/X]$. For any type constructor $\u Y \,\,\text{comp type} \vdash
C$, if $\u B \sqsubseteq \u B'$ then $C[\u B/\u Y] \sqsubseteq C[\u B'/\u Y]$.
\end{lemma}
\begin{proof}
Induction on $C$. In the case for a variable $X$ or $\u Y$, $A \sqsubseteq
A'$ or $\u B \sqsubseteq \u B'$ by assumption. In all other cases, the
result follows from the inductive hypotheses and the congruence rule for
type dynamism for the type constructor
(Figure~\ref{fig:gtt-type-dynamism}). For example, in the case for $+$,
$A_1[A/X] \sqsubseteq A_1[A'/X]$ and $A_2[A/X] \sqsubseteq A_2[A'/X]$, so
$A_1[A/X] + A_2[A/X] \sqsubseteq A_1[A'/X] + A_2[A'/X]$.
\end{proof}
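A toy check of this monotonicity (in a hypothetical model where types are finite sets of value tags, dynamism is set inclusion, and the constructor is $C(X) = X + \texttt{bool}$):

```python
# Monotonicity of a type constructor in a toy model where types are
# sets of value tags and dynamism A <= A' is set inclusion (hypothetical).
def dyn_leq(a, a_prime):
    return a <= a_prime               # subset inclusion as type dynamism

def C(a):
    # C(X) = X + bool, as a disjoint union of tagged values
    return {("inl", t) for t in a} | {("inr", b) for b in (True, False)}

int_t = {"0", "1", "2"}
num_t = {"0", "1", "2", "2.5"}

assert dyn_leq(int_t, num_t)          # A dyn-below A'
assert dyn_leq(C(int_t), C(num_t))    # hence C(A) dyn-below C(A')
```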
The following lemma helps show that a complex value
$\defupcast{C[A_i/X_i,\u B_i/\u Y_i]}{C[A_i'/X_i,\u B_i'/\u Y_i]}$ is an
upcast from $C[A_i/X_i,\u B_i/\u Y_i]$ to $C[A_i'/X_i,\u B_i'/\u Y_i]$.
\begin{lemma}[Upcast Lemma] \label{lem:upcast}
Let $X_1 \,\,\text{val type}, \ldots X_n \,\,\text{val type}, \u Y_1 \,\,\text{comp type}, \ldots \u Y_m
\,\,\text{comp type} \vdash C \,\,\text{val type}$ be a value type constructor. We abbreviate
the instantiation \\ $C[A_1/X_1,\ldots,A_n/X_n,\u B_1/\u Y_1,\ldots,\u
B_m/\u Y_m]$ by $C[A_i,\u B_i]$.
Suppose $\defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{-}$ is a complex
value (depending on $C$ and each $A_i,A_i',\u B_i,\u B_i'$) such that
\begin{enumerate}
\item
For all value types $A_1,\ldots,A_n$ and $A_1',\ldots,A_n'$ with
$A_i \sqsubseteq A_i'$, and all computation types $\u B_1,\ldots,\u B_m$
and $\u B_1',\ldots,\u B_m'$ with $\u B_i \sqsubseteq \u B_i'$,
\[
x : C[A_i,\u B_i] \vdash \defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{x} : C[A_i',\u B_i']
\]
\item
For all value types $A_i \sqsubseteq A_i'$ and computation types $\u B_i
\sqsubseteq \u B_i'$,
\begin{small}
\[
\begin{array}{c}
x : C[A_i,\u B_i] \vdash \defupcast{C[A_i,\u B_i]}{C[A_i,\u B_i]}{x} \sqsubseteq \defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{x} : C[A_i,\u B_i] \sqsubseteq C[A_i',\u B_i']\\
x \sqsubseteq x' : C[A_i,\u B_i] \sqsubseteq C[A_i',\u B_i'] \vdash
\defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{x} \sqsubseteq \defupcast{C[A_i',\u B_i']}{C[A_i',\u B_i']}{x'} : C[A_i',\u B_i']
\end{array}
\]
\end{small}
\item For all value types $A_1,\ldots,A_n$ and all computation types
$\u B_1,\ldots,\u B_m$,
\[
x : C[A_i,\u B_i] \vdash \defupcast{C[A_i,\u B_i]}{C[A_i,\u B_i]}{x} \mathrel{\gtdyn\ltdyn} x : C[A_i,\u B_i]
\]
\end{enumerate}
Then $\defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}$ satisfies the
universal property of an upcast, so by Theorem~\ref{thm:casts-unique}
\[
x : C[A_i,\u B_i] \vdash \defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{x} \mathrel{\gtdyn\ltdyn} \upcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{x} : C[A_i',\u B_i']
\]
Moreover, the left-to-right direction uses only the left-to-right
direction of assumption (3), and the right-to-left uses only the
right-to-left direction of assumption (3).
\end{lemma}
\begin{proof}
First, we show that $\defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}$
satisfies the universal property of an upcast.
To show
\[
x \sqsubseteq x' : {C[A_i,\u B_i]} \sqsubseteq {C[A_i',\u B_i']} \vdash \defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{x} \sqsubseteq x' : {C[A_i',\u B_i']}
\]
assumption (2) part 2 gives
\[
\defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{x} \sqsubseteq \defupcast{C[A_i',\u B_i']}{C[A_i',\u B_i']}{x'} : C[A_i',\u B_i']
\]
Then transitivity with the left-to-right direction of assumption (3)
\[
\defupcast{C[A_i',\u B_i']}{C[A_i',\u B_i']}{x'}
\sqsubseteq x' : C[A_i',\u B_i']
\]
gives the result.
To show
\[
{x} \sqsubseteq \defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{x} : {C[A_i,\u B_i]} \sqsubseteq {C[A_i',\u B_i']}
\]
by assumption (2) part 1, we have
\[
\defupcast{C[A_i,\u B_i]}{C[A_i,\u B_i]}{x} \sqsubseteq \defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{x} : C[A_i,\u B_i] \sqsubseteq C[A_i',\u B_i']
\]
so transitivity with the right-to-left direction of assumption (3)
gives the result:
\[
x \sqsubseteq \defupcast{C[A_i,\u B_i]}{C[A_i,\u B_i]}{x}
\]
Then Theorem~\ref{thm:casts-unique} implies that
$\defupcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}$ is equivalent to
$\upcast{C[A_i,\u B_i]}{C[A_i',\u B_i']}$.
\end{proof}
Dually, we have
\begin{lemma}[Downcast Lemma] \label{lem:downcast}
Let $X_1 \,\,\text{val type}, \ldots, X_n \,\,\text{val type}, \u Y_1 \,\,\text{comp type}, \ldots, \u Y_m
\,\,\text{comp type} \vdash C \,\,\text{comp type}$ be a computation type constructor. We
abbreviate the instantiation \\
$C[A_1/X_1,\ldots,A_n/X_n,\u B_1/\u Y_1,\ldots,\u B_m/\u Y_m]$ by $C[A_i,\u B_i]$.
Suppose $\defdncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{-}$ is a complex
stack (depending on $C$ and each $A_i,A_i',\u B_i,\u B_i'$) such that
\begin{enumerate}
\item
For all value types $A_1,\ldots,A_n$ and $A_1',\ldots,A_n'$ with
$A_i \sqsubseteq A_i'$, and all computation types $\u B_1,\ldots,\u B_m$
and $\u B_1',\ldots,\u B_m'$ with $\u B_i \sqsubseteq \u B_i'$,
\[
\bullet : C[A_i',\u B_i'] \vdash \defdncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{\bullet} : C[A_i,\u B_i]
\]
\item
For all value types $A_i \sqsubseteq A_i'$ and computation types $\u B_i
\sqsubseteq \u B_i'$,
\begin{small}
\[
\begin{array}{c}
\bullet : C[A_i',\u B_i'] \vdash \defdncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{\bullet} \sqsubseteq \defdncast{C[A_i',\u B_i']}{C[A_i',\u B_i']}{\bullet} : C[A_i,\u B_i] \sqsubseteq C[A_i',\u B_i']\\
\bullet \sqsubseteq \bullet' : C[A_i,\u B_i] \sqsubseteq C[A_i',\u B_i'] \vdash
\defdncast{C[A_i,\u B_i]}{C[A_i,\u B_i]}{\bullet} \sqsubseteq \defdncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{\bullet'} : C[A_i,\u B_i]
\end{array}
\]
\end{small}
\item For all value types $A_1,\ldots,A_n$ and all computation types
$\u B_1,\ldots,\u B_m$,
\[
\bullet : C[A_i,\u B_i] \vdash \defdncast{C[A_i,\u B_i]}{C[A_i,\u B_i]}{\bullet} \mathrel{\gtdyn\ltdyn} \bullet : C[A_i,\u B_i]
\]
\end{enumerate}
Then $\defdncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}$ satisfies the
universal property of a downcast, so by Theorem~\ref{thm:casts-unique}
\[
\bullet : C[A_i',\u B_i'] \vdash \defdncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{\bullet} \mathrel{\gtdyn\ltdyn} \dncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{\bullet} : C[A_i,\u B_i]
\]
Moreover, the left-to-right direction uses only the left-to-right
direction of assumption (3), and the right-to-left uses only the
right-to-left direction of assumption (3).
\end{lemma}
\begin{proof}
First, we show that $\defdncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}$
satisfies the universal property of a downcast, and then apply
Theorem~\ref{thm:casts-unique}.
To show
\[
\bullet \sqsubseteq \bullet' : C[A_i,\u B_i] \sqsubseteq C[A_i',\u B_i'] \vdash \bullet \sqsubseteq \defdncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{\bullet'} : C[A_i,\u B_i]
\]
assumption (2) part 2 gives
\[
\defdncast{C[A_i,\u B_i]}{C[A_i,\u B_i]}{\bullet} \sqsubseteq \defdncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{\bullet'}
\]
Then transitivity with the right-to-left direction of assumption (3)
\[
\bullet
\sqsubseteq
\defdncast{C[A_i,\u B_i]}{C[A_i,\u B_i]}{\bullet}
\]
gives the result.
To show
\[
\defdncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{\bullet} \sqsubseteq \bullet : {C[A_i,\u B_i]} \sqsubseteq {C[A_i',\u B_i']}
\]
by assumption (2) part 1, we have
\[
\defdncast{C[A_i,\u B_i]}{C[A_i',\u B_i']}{\bullet} \sqsubseteq \defdncast{C[A_i',\u B_i']}{C[A_i',\u B_i']}{\bullet} : C[A_i,\u B_i] \sqsubseteq C[A_i',\u B_i']
\]
so transitivity with the left-to-right direction of assumption (3)
\[
\defdncast{C[A_i',\u B_i']}{C[A_i',\u B_i']}{\bullet} \sqsubseteq \bullet
\]
gives the result.
\end{proof}
\end{longonly}
\begin{longonly}
\subsubsection{Functions, Products, and Sums}
\end{longonly}
Together, the universal property for casts and the $\eta$ principles for
each type imply that the casts must behave as in lazy cast semantics:
\begin{theorem}[Cast Unique Implementation Theorem for $+,\times,\to,\mathbin{\&}$] \label{thm:functorial-casts}
The casts' behavior is uniquely determined as follows: \ifshort (See the extended version for $+$, $\mathbin{\&}$.) \fi
\begin{small}
\[
\begin{array}{c}
\iflong
\upcast{A_1 + A_2}{A_1' + A_2'}{s} \mathrel{\gtdyn\ltdyn} \caseofXthenYelseZ{s}{x_1.\kw{inl}{(\upcast{A_1}{A_1'}{x_1})}}{x_2.\kw{inr}{(\upcast{A_2}{A_2'}{x_2})}}\\\\
\begin{array}{rcl}
\dncast{\u F (A_1' + A_2')}{\u F (A_1 + A_2)}{\bullet} & \mathrel{\gtdyn\ltdyn}
& \bindXtoYinZ{\bullet}{(s : (A_1' + A_2'))}\caseofX{s}\\
& & \{{x_1'.\bindXtoYinZ{(\dncast{\u F A_1}{\u F A_1'}{(\kw{ret}{x_1'})})}{x_1}{\kw{ret}{(\kw{inl} {x_1})}}} \\
& & \mid {x_2'.\bindXtoYinZ{(\dncast{\u F A_2}{\u F A_2'}{(\kw{ret}{x_2'})})}{x_2}{\kw{ret}{(\kw{inr} {x_2})}}} \}\\
\end{array}
\\\\
\fi
\upcast{A_1 \times A_2}{A_1' \times A_2'}{p} \mathrel{\gtdyn\ltdyn} \pmpairWtoXYinZ{p}{x_1}{x_2}{(\upcast{A_1}{A_1'}{x_1},\upcast{A_2}{A_2'}{x_2})} \\\\
\begin{array}{rcl}
\dncast{\u F (A_1' \times A_2')}{\u F (A_1 \times A_2)}{\bullet} &
\mathrel{\gtdyn\ltdyn} &
\bindXtoYinZ{\bullet}{p'} {\pmpairWtoXYinZ{p'}{x_1'}{x_2'}{}}\\
& & \bindXtoYinZ{\dncast{\u F A_1}{\u F A_1'}{\kw{ret} x_1'}}{x_1}\\
& & \bindXtoYinZ{\dncast{\u F A_2}{\u F A_2'}{\kw{ret} x_2'}}{x_2} {\kw{ret} (x_1,x_2) }\\
\iflong
& \mathrel{\gtdyn\ltdyn} &
\bindXtoYinZ{\bullet}{p'} \pmpairWtoXYinZ{p'}{x_1'}{x_2'}{} \\
& & \bindXtoYinZ{\dncast{\u F A_2}{\u F A_2'}{\kw{ret} x_2'}}{x_2} \\
& & \bindXtoYinZ{\dncast{\u F A_1}{\u F A_1'}{\kw{ret} x_1'}}{x_1} {\kw{ret} (x_1,x_2) }
\fi
\end{array}\\\\
\iflong
\dncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}{\bullet} \mathrel{\gtdyn\ltdyn}
\pair{\dncast{\u B_1}{\u B_1'}{\pi \bullet}}{\dncast{\u B_2}{\u B_2'}{\pi' \bullet}}\\\\
\begin{array}{rcll}
\upcast{U (\u B_1 \mathbin{\&} \u B_2)}{U (\u B_1' \mathbin{\&} \u B_2')}{p} & \mathrel{\gtdyn\ltdyn} &
\kw{thunk}{} & {\{\pi \mapsto {\kw{force}{(\upcast{U \u B_1}{U \u B_1'}{(\kw{thunk}{\pi (\kw{force}{p})})})}}}\\
&&& \pi' \mapsto {\kw{force}{(\upcast{U \u B_2}{U \u B_2'}{(\kw{thunk}{\pi' (\kw{force}{p})})})}} \}
\end{array}\\\\
\fi
\dncast{A \to \u B}{A' \to \u B'}{\bullet} \mathrel{\gtdyn\ltdyn}
\lambda{x}.{\dncast{\u B}{\u B'}{(\bullet \, (\upcast{A}{A'}{x}))}} \\\\
\begin{array}{rcll}
\upcast{U (A \to \u B)}{U (A' \to \u B')}{f} & \mathrel{\gtdyn\ltdyn} &
\kw{thunk} (\lambda x'. & \bindXtoYinZ{\dncast{\u F A}{\u F A'}{(\kw{ret} x')}}{x}{} \\
& & & { \kw{force}{(\upcast{U \u B}{U \u B'}{(\kw{thunk}{(\kw{force}{(f)}\,x)})})}} )
\end{array}
\\
\end{array}
\]
\end{small}
\end{theorem}
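To make the force of this theorem concrete, here is a small operational sketch in Python of the wrapper implementations that the theorem characterizes (purely illustrative; the names \texttt{CastError}, \texttt{upcast\_sum}, etc.\ are ours, and the tagged-value encoding is an assumption, not GTT syntax). Sum and product upcasts proceed structurally, while a function downcast upcasts the argument and downcasts the result:

```python
# Illustrative sketch (not GTT itself): casts as wrappers, with
# CastError standing in for the run-time type error of a failed downcast.

class CastError(Exception):
    """Run-time error signaled by a failed downcast."""

def upcast_sum(up1, up2):
    # upcast at A1 + A2: case s of inl x -> inl (up1 x) | inr x -> inr (up2 x)
    def cast(s):
        tag, v = s
        return (tag, up1(v) if tag == "inl" else up2(v))
    return cast

def upcast_pair(up1, up2):
    # upcast at A1 x A2: split the pair, upcast each component
    def cast(p):
        return (up1(p[0]), up2(p[1]))
    return cast

def dncast_fun(up_dom, dn_cod):
    # downcast at A -> B: upcast the argument, downcast the result
    # (the usual contravariant function wrapper).
    def cast(f):
        return lambda x: dn_cod(f(up_dom(x)))
    return cast

# Example instances: Int below a dynamic type of tagged values.
up_int = lambda n: ("int", n)            # upcast Int into Dyn
def dn_int(d):                           # downcast Dyn to Int, may error
    tag, v = d
    if tag != "int":
        raise CastError("expected int")
    return v

# Wrap a Dyn -> Dyn function as an Int -> Int function.
f_dyn = lambda d: ("int", dn_int(d) + 1)
f_int = dncast_fun(up_int, dn_int)(f_dyn)
```

The theorem says these wrappers are not merely one possible semantics: any implementation satisfying the cast universal properties and the $\eta$ laws must be equidynamic with them.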
In the case for an eager product $\times$, we can also show that the
reversed order, running ${\dncast{\u F A_2}{\u F A_2'}{\kw{ret}
x_2'}}$ and then ${\dncast{\u F A_1}{\u F A_1'}{\kw{ret} x_1'}}$, is
an implementation of this cast, and therefore equidynamic with the above.
Intuitively, this is sensible because the only effect a downcast
introduces is a run-time error, and if either downcast errors, both
possible implementations will.
\begin{longproof}~\\
\begin{enumerate}
\item Sums upcast. We use Lemma~\ref{lem:upcast} with the type
constructor $X_1 \,\,\text{val type}, X_2 \,\,\text{val type} \vdash X_1 + X_2 \,\,\text{val type}$.
Suppose $A_1 \sqsubseteq A_1'$ and $A_2 \sqsubseteq A_2'$ and let
\[s : A_1 + A_2 \vdash \defupcast{A_1 + A_2}{A_1' + A_2'}{s} : A_1' + A_2'
\]
stand for
\[
\caseofXthenYelseZ{s}{x_1.\kw{inl}{(\upcast{A_1}{A_1'}{x_1})}}{x_2.\kw{inr}{(\upcast{A_2}{A_2'}{x_2})}}
\]
which has the type required for the lemma's assumption (1).
Assumption (2) requires two conditions, both of which are proved by
the congruence rules for $\mathsf{case}$, $\mathsf{inl}$,
$\mathsf{inr}$, and upcasts. The first,
\[
s : A_1 + A_2 \vdash \defupcast{A_1 + A_2}{A_1 + A_2}{s} \sqsubseteq \defupcast{A_1 + A_2}{A_1' + A_2'}{s} : A_1 + A_2 \sqsubseteq A_1' + A_2'\\
\]
expands to
\[
\begin{array}{c}
\caseofXthenYelseZ{s}{x_1.\kw{inl}{(\upcast{A_1}{A_1}{x_1})}}{x_2.\kw{inr}{(\upcast{A_2}{A_2}{x_2})}} \\
\sqsubseteq \\
\caseofXthenYelseZ{s}{x_1.\kw{inl}{(\upcast{A_1}{A_1'}{x_1})}}{x_2.\kw{inr}{(\upcast{A_2}{A_2'}{x_2})}}
\end{array}
\]
The second,
\[
s \sqsubseteq s' : A_1 + A_2 \sqsubseteq A_1' + A_2' \vdash
\defupcast{A_1 + A_2}{A_1' + A_2'}{s} \sqsubseteq \defupcast{A_1' + A_2'}{A_1' + A_2'}{s'} : A_1' + A_2'
\]
expands to
\[
\begin{array}{c}
\caseofXthenYelseZ{s}{x_1.\kw{inl}{(\upcast{A_1}{A_1'}{x_1})}}{x_2.\kw{inr}{(\upcast{A_2}{A_2'}{x_2})}} \\
\sqsubseteq \\
\caseofXthenYelseZ{s'}{x_1.\kw{inl}{(\upcast{A_1'}{A_1'}{x_1'})}}{x_2.\kw{inr}{(\upcast{A_2'}{A_2'}{x_2'})}}
\end{array}
\]
Finally, for assumption (3), we need to show
\[
\caseofXthenYelseZ{s}{x_1.\kw{inl}{(\upcast{A_1}{A_1}{x_1})}}{x_2.\kw{inr}{(\upcast{A_2}{A_2}{x_2})}}
\mathrel{\gtdyn\ltdyn} s
\]
which is true because $\upcast{A_1}{A_1}$ and $\upcast{A_2}{A_2}$
are the identity, and using ``weak $\eta$'' for sums,
$\caseofXthenYelseZ{s}{x_1.\kw{inl}{x_1}}{x_2.\kw{inr}{x_2}} \mathrel{\gtdyn\ltdyn} s$,
which is the special case of the $\eta$ rule in
Figure~\ref{fig:gtt-term-dyn-axioms} for the identity complex
value:
\[
\begin{array}{rcl}
\caseofXthenYelseZ{s}{x_1.\kw{inl}{(\upcast{A_1}{A_1}{x_1})}}{x_2.\kw{inr}{(\upcast{A_2}{A_2}{x_2})}} & \mathrel{\gtdyn\ltdyn} &\\
\caseofXthenYelseZ{s}{x_1.\kw{inl}{({x_1})}}{x_2.\kw{inr}{({x_2})}} & \mathrel{\gtdyn\ltdyn} &\\
s
\end{array}
\]
\item Sums downcast. We use the downcast lemma with $X_1 \,\,\text{val type}, X_2
\,\,\text{val type} \vdash \u F(X_1 + X_2) \,\,\text{comp type}$. Let
\[
\bullet' : \u F (A_1' + A_2') \vdash \defdncast{\u F (A_1 + A_2)}{\u F (A_1' + A_2')}{\bullet'} : \u F (A_1 + A_2)
\]
stand for
\[
\bindXtoYinZ{\bullet'}{(s : (A_1' +
A_2'))}{}
{\caseofXthenYelseZ{s}
{x_1'.\bindXtoYinZ{(\dncast{\u F A_1}{\u F A_1'}{(\kw{ret}{x_1'})})}{x_1}{\kw{ret}{(\kw{inl} {x_1})}}}
{\ldots}}\\
\]
(where, as in the theorem statement, the $\mathsf{inr}$ branch is
analogous), which has the correct type for the lemma's assumption
(1).
For assumption (2), we first need to show
\begin{small}
\[
\bullet : {\u F (A_1' + A_2')} \vdash
\defdncast{\u F (A_1 + A_2)}{\u F (A_1' + A_2')}{\bullet}
\sqsubseteq
\defdncast{\u F (A_1' + A_2')}{\u F (A_1' + A_2')}{\bullet}
: {\u F (A_1 + A_2)} \sqsubseteq {\u F (A_1' + A_2')}
\]
\end{small}
i.e.
\begin{small}
\[
\begin{array}{c}
\bindXtoYinZ{\bullet}{(s' : (A_1' + A_2'))}{}
{\caseofXthenYelseZ{s'}
{x_1'.\bindXtoYinZ{(\dncast{\u F A_1}{\u F A_1'}{(\kw{ret}{x_1'})})}{x_1}{\kw{ret}{(\kw{inl} {x_1})}}}
{\ldots}}\\
\sqsubseteq\\
\bindXtoYinZ{\bullet}{(s' : (A_1' + A_2'))}{}
{\caseofXthenYelseZ{s'}
{x_1'.\bindXtoYinZ{(\dncast{\u F A_1'}{\u F A_1'}{(\kw{ret}{x_1'})})}{x_1'}{\kw{ret}{(\kw{inl} {x_1'})}}}
{\ldots}}
\end{array}
\]
\end{small}
which is true by the congruence rules for $\mathsf{bind}$,
$\mathsf{case}$, downcasts, $\mathsf{ret}$, and $\mathsf{inl}/\mathsf{inr}$.
Next, we need to show
\begin{small}
\[
\bullet \sqsubseteq \bullet' : {\u F (A_1 + A_2)} \sqsubseteq {\u F (A_1' + A_2')} \vdash
\defdncast{\u F (A_1 + A_2)}{\u F (A_1 + A_2)}{\bullet}
\sqsubseteq
\defdncast{\u F (A_1 + A_2)}{\u F (A_1' + A_2')}{\bullet'}
: {\u F (A_1 + A_2)}
\]
\end{small}
i.e.
\begin{small}
\[
\begin{array}{c}
\bindXtoYinZ{\bullet}{(s : (A_1 + A_2))}{}
{\caseofXthenYelseZ{s}
{x_1.\bindXtoYinZ{(\dncast{\u F A_1}{\u F A_1}{(\kw{ret}{x_1})})}{x_1}{\kw{ret}{(\kw{inl} {x_1})}}}
{\ldots}}\\
\sqsubseteq\\
\bindXtoYinZ{\bullet}{(s' : (A_1' + A_2'))}{}
{\caseofXthenYelseZ{s'}
{x_1'.\bindXtoYinZ{(\dncast{\u F A_1}{\u F A_1'}{(\kw{ret}{x_1'})})}{x_1}{\kw{ret}{(\kw{inl} {x_1})}}}
{\ldots}}\\
\end{array}
\]
\end{small}
which is also true by congruence.
Finally, for assumption (3), we show
\begin{small}
\[
\begin{array}{lll}
\bindXtoYinZ{\bullet}{(s : (A_1 + A_2))}{}
{\caseofXthenYelseZ{s}
{x_1.\bindXtoYinZ{(\dncast{\u F A_1}{\u F A_1}{(\kw{ret}{x_1})})}{x_1}{\kw{ret}{(\kw{inl} {x_1})}}}
{\ldots}} & \mathrel{\gtdyn\ltdyn} & \\
\bindXtoYinZ{\bullet}{(s : (A_1 + A_2))}{}
{\caseofXthenYelseZ{s}
{x_1.\bindXtoYinZ{({(\kw{ret}{x_1})})}{x_1}{\kw{ret}{(\kw{inl} {x_1})}}}
{\ldots}} & \mathrel{\gtdyn\ltdyn} & \\
\bindXtoYinZ{\bullet}{(s : (A_1 + A_2))}{}
{\caseofXthenYelseZ{s}
{x_1.{\kw{ret}{(\kw{inl} {x_1})}}}
{x_2.{\kw{ret}{(\kw{inr} {x_2})}}}} & \mathrel{\gtdyn\ltdyn} & \\
\bindXtoYinZ{\bullet}{(s : (A_1 + A_2))}{}
{\kw{ret}{s}} & \mathrel{\gtdyn\ltdyn} &\\
\bullet
\end{array}
\]
\end{small}
using the downcast identity, $\beta$ for $\u F$ types, $\eta$ for
sums, and $\eta$ for $\u F$ types.
\item Eager product upcast. We use Lemma~\ref{lem:upcast} with the type
constructor $X_1 \,\,\text{val type}, X_2 \,\,\text{val type} \vdash X_1 \times X_2 \,\,\text{val type}$.
Let
\[p : A_1 \times A_2 \vdash \defupcast{A_1 \times A_2}{A_1' \times A_2'}{p} : A_1' \times A_2'
\]
stand for
\[
\pmpairWtoXYinZ{p}{x_1}{x_2}{(\upcast{A_1}{A_1'}{x_1},\upcast{A_2}{A_2'}{x_2})}
\]
which has the type required for the lemma's assumption (1).
Assumption (2) requires two conditions, both of which are proved by
the congruence rules for $\mathsf{split}$, pairing, and upcasts.
The first,
\[
p : A_1 \times A_2 \vdash \defupcast{A_1 \times A_2}{A_1 \times A_2}{p} \sqsubseteq \defupcast{A_1 \times A_2}{A_1' \times A_2'}{p} : A_1 \times A_2 \sqsubseteq A_1' \times A_2'\\
\]
expands to
\[
\begin{array}{c}
\pmpairWtoXYinZ{p}{x_1}{x_2}{(\upcast{A_1}{A_1}{x_1},\upcast{A_2}{A_2}{x_2})}\\
\sqsubseteq \\
\pmpairWtoXYinZ{p}{x_1}{x_2}{(\upcast{A_1}{A_1'}{x_1},\upcast{A_2}{A_2'}{x_2})}\\
\end{array}
\]
The second,
\[
p \sqsubseteq p' : A_1 \times A_2 \sqsubseteq A_1' \times A_2' \vdash
\defupcast{A_1 \times A_2}{A_1' \times A_2'}{p} \sqsubseteq \defupcast{A_1' \times A_2'}{A_1' \times A_2'}{p'} : A_1' \times A_2'
\]
expands to
\[
\begin{array}{c}
\pmpairWtoXYinZ{p}{x_1}{x_2}{(\upcast{A_1}{A_1'}{x_1},\upcast{A_2}{A_2'}{x_2})}\\
\sqsubseteq \\
\pmpairWtoXYinZ{p'}{x_1'}{x_2'}{(\upcast{A_1'}{A_1'}{x_1'},\upcast{A_2'}{A_2'}{x_2'})}\\
\end{array}
\]
Finally, for assumption (3), using $\eta$ for products and
the fact that $\upcast{A}{A}{}$ is the identity, we have
\[
\pmpairWtoXYinZ{p}{x_1}{x_2}{(\upcast{A_1}{A_1}{x_1},\upcast{A_2}{A_2}{x_2})} \mathrel{\gtdyn\ltdyn}
\pmpairWtoXYinZ{p}{x_1}{x_2}{({x_1},{x_2})} \mathrel{\gtdyn\ltdyn}
p
\]
\item Eager product downcast.
We use the downcast lemma with $X_1 \,\,\text{val type}, X_2 \,\,\text{val type} \vdash \u
F(X_1 \times X_2) \,\,\text{comp type}$. Let
\[
\bullet' : \u F (A_1' \times A_2') \vdash \defdncast{\u F (A_1 \times A_2)}{\u F (A_1' \times A_2')}{\bullet'} : \u F (A_1 \times A_2)
\]
stand for
\begin{small}
\[
\bindXtoYinZ{\bullet'}{p'}{\pmpairWtoXYinZ{p'}{x_1'}{x_2'}{
\bindXtoYinZ{\dncast{\u F A_1}{\u F A_1'}{\kw{ret} x_1'}}{x_1}{
\bindXtoYinZ{\dncast{\u F A_2}{\u F A_2'}{\kw{ret} x_2'}}{x_2} {\kw{ret} (x_1,x_2) }}}}
\]
\end{small}
which has the correct type for the lemma's assumption (1).
For assumption (2), we first need to show
\begin{small}
\[
\bullet : {\u F (A_1' \times A_2')} \vdash
\defdncast{\u F (A_1 \times A_2)}{\u F (A_1' \times A_2')}{\bullet}
\sqsubseteq
\defdncast{\u F (A_1' \times A_2')}{\u F (A_1' \times A_2')}{\bullet}
: {\u F (A_1 \times A_2)} \sqsubseteq {\u F (A_1' \times A_2')}
\]
\end{small}
i.e.
\begin{small}
\[
\begin{array}{c}
\bindXtoYinZ{\bullet}{p'}{\pmpairWtoXYinZ{p'}{x_1'}{x_2'}{
\bindXtoYinZ{\dncast{\u F A_1}{\u F A_1'}{\kw{ret} x_1'}}{x_1}{
\bindXtoYinZ{\dncast{\u F A_2}{\u F A_2'}{\kw{ret} x_2'}}{x_2} {\kw{ret} (x_1,x_2) }}}}\\
\sqsubseteq\\
\bindXtoYinZ{\bullet}{p'}{\pmpairWtoXYinZ{p'}{x_1'}{x_2'}{
\bindXtoYinZ{\dncast{\u F A_1'}{\u F A_1'}{\kw{ret} x_1'}}{x_1'}{
\bindXtoYinZ{\dncast{\u F A_2'}{\u F A_2'}{\kw{ret} x_2'}}{x_2'} {\kw{ret} (x_1',x_2') }}}}
\end{array}
\]
\end{small}
which is true by the congruence rules for $\mathsf{bind}$,
$\mathsf{split}$, downcasts, $\mathsf{ret}$, and pairing.
Next, we need to show
\begin{small}
\[
\bullet \sqsubseteq \bullet' : {\u F (A_1 \times A_2)} \sqsubseteq {\u F (A_1' \times A_2')} \vdash
\defdncast{\u F (A_1 \times A_2)}{\u F (A_1 \times A_2)}{\bullet}
\sqsubseteq
\defdncast{\u F (A_1 \times A_2)}{\u F (A_1' \times A_2')}{\bullet'}
: {\u F (A_1 \times A_2)}
\]
\end{small}
i.e.
\begin{small}
\[
\begin{array}{c}
\bindXtoYinZ{\bullet}{p}{\pmpairWtoXYinZ{p}{x_1}{x_2}{
\bindXtoYinZ{\dncast{\u F A_1}{\u F A_1}{\kw{ret} x_1}}{x_1}{
\bindXtoYinZ{\dncast{\u F A_2}{\u F A_2}{\kw{ret} x_2}}{x_2} {\kw{ret} (x_1,x_2) }}}}\\
\sqsubseteq\\
\bindXtoYinZ{\bullet}{p'}{\pmpairWtoXYinZ{p'}{x_1'}{x_2'}{
\bindXtoYinZ{\dncast{\u F A_1}{\u F A_1'}{\kw{ret} x_1'}}{x_1}{
\bindXtoYinZ{\dncast{\u F A_2}{\u F A_2'}{\kw{ret} x_2'}}{x_2} {\kw{ret} (x_1,x_2) }}}}\\
\end{array}
\]
\end{small}
which is also true by congruence.
Finally, for assumption (3), we show
\begin{small}
\[
\begin{array}{lll}
\bindXtoYinZ{\bullet}{p}{\pmpairWtoXYinZ{p}{x_1}{x_2}{
\bindXtoYinZ{\dncast{\u F A_1}{\u F A_1}{\kw{ret} x_1}}{x_1}{
\bindXtoYinZ{\dncast{\u F A_2}{\u F A_2}{\kw{ret} x_2}}{x_2} {\kw{ret} (x_1,x_2) }}}} & \mathrel{\gtdyn\ltdyn} & \\
\bindXtoYinZ{\bullet}{p}{\pmpairWtoXYinZ{p}{x_1}{x_2}{
\bindXtoYinZ{{\kw{ret} x_1}}{x_1}{
\bindXtoYinZ{{\kw{ret} x_2}}{x_2} {\kw{ret} (x_1,x_2) }}}} & \mathrel{\gtdyn\ltdyn} & \\
\bindXtoYinZ{\bullet}{p}{\pmpairWtoXYinZ{p}{x_1}{x_2}{{\kw{ret} (x_1,x_2) }}} & \mathrel{\gtdyn\ltdyn} & \\
\bindXtoYinZ{\bullet}{p}{\kw{ret} p} & \mathrel{\gtdyn\ltdyn} & \\
\bullet \\
\end{array}
\]
\end{small}
using the downcast identity, $\beta$ for $\u F$ types, $\eta$ for
eager products, and $\eta$ for $\u F$ types.
An analogous argument works if we sequence the downcasts of the
components in the opposite order:
\begin{small}
\[
\bindXtoYinZ{\bullet}{p'}{\pmpairWtoXYinZ{p'}{x_1'}{x_2'}{\bindXtoYinZ{\dncast{\u F A_2}{\u F A_2'}{\kw{ret} x_2'}}{x_2} {\bindXtoYinZ{\dncast{\u F A_1}{\u F A_1'}{\kw{ret} x_1'}}{x_1} {\kw{ret} (x_1,x_2) }}}}
\]
\end{small}
(the only facts about downcasts used above are congruence and the
downcast identity), which shows that these two implementations of
the downcast are themselves equidynamic.
\item Lazy product downcast.
We use Lemma~\ref{lem:downcast} with
the type constructor $\u Y_1 \,\,\text{comp type}, \u Y_2 \,\,\text{comp type} \vdash \u Y_1 \mathbin{\&} \u Y_2 \,\,\text{comp type}$.
Let
\[\bullet' : \u B_1' \mathbin{\&} \u B_2' \vdash \defdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}{\bullet'} : \u B_1 \mathbin{\&} \u B_2
\]
stand for
\begin{small}
\[
\pair{\dncast{\u B_1}{\u B_1'}{\pi \bullet'}}{\dncast{\u B_2}{\u B_2'}{\pi' \bullet'}}\\
\]
\end{small}
which has the type required for the lemma's assumption (1).
Assumption (2) requires two conditions, both of which are proved by
the congruence rules for pairing, projection, and downcasts. The first,
\begin{small}
\[\bullet' : \u B_1' \mathbin{\&} \u B_2' \vdash \defdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}{\bullet'} \sqsubseteq
\defdncast{\u B_1' \mathbin{\&} \u B_2'}{\u B_1' \mathbin{\&} \u B_2'}{\bullet'} : \u B_1 \mathbin{\&} \u B_2 \sqsubseteq \u B_1' \mathbin{\&} \u B_2'
\]
\end{small}
expands to
\begin{small}
\[
\begin{array}{c}
\pair{\dncast{\u B_1}{\u B_1'}{\pi \bullet'}}{\dncast{\u B_2}{\u B_2'}{\pi' \bullet'}} \\
\sqsubseteq \\
\pair{\dncast{\u B_1'}{\u B_1'}{\pi \bullet'}}{\dncast{\u B_2'}{\u B_2'}{\pi' \bullet'}} \\
\end{array}
\]
\end{small}
The second,
\begin{small}
\[
\bullet \sqsubseteq \bullet' : \u B_1 \mathbin{\&} \u B_2 \sqsubseteq \u B_1' \mathbin{\&} \u B_2' \vdash
\defdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1 \mathbin{\&} \u B_2}{\bullet} \sqsubseteq
\defdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}{\bullet'} : \u B_1 \mathbin{\&} \u B_2
\]
\end{small}
expands to
\[
\begin{array}{c}
\pair{\dncast{\u B_1}{\u B_1}{\pi \bullet}}{\dncast{\u B_2}{\u B_2}{\pi' \bullet}} \\
\sqsubseteq \\
\pair{\dncast{\u B_1}{\u B_1'}{\pi \bullet'}}{\dncast{\u B_2}{\u B_2'}{\pi' \bullet'}} \\
\end{array}
\]
For assumption (3), we have, using the fact that $\dncast{\u B}{\u B}$ is the
identity and $\eta$ for $\mathbin{\&}$,
\[
\pair{\dncast{\u B_1}{\u B_1}{\pi \bullet}}{\dncast{\u B_2}{\u B_2}{\pi' \bullet}}
\mathrel{\gtdyn\ltdyn}
\pair{{\pi \bullet}}{{\pi' \bullet}}
\mathrel{\gtdyn\ltdyn}
\bullet
\]
\item Lazy product upcast.
We use Lemma~\ref{lem:upcast} with the type
constructor $\u Y_1 \,\,\text{comp type}, \u Y_2 \,\,\text{comp type} \vdash U (\u Y_1 \mathbin{\&} \u Y_2) \,\,\text{val type}$.
Let
\[p : U (\u B_1 \mathbin{\&} \u B_2) \vdash \defupcast{U (\u B_1 \mathbin{\&} \u B_2)}{U (\u B_1' \mathbin{\&} \u B_2')}{p} : U (\u B_1' \mathbin{\&} \u B_2')
\]
stand for
\begin{small}
\[
\kw{thunk}{\pair{\kw{force}{(\upcast{U \u B_1}{U \u B_1'}{(\kw{thunk}{\pi (\kw{force}{p})})})}}{\kw{force}{(\upcast{U \u B_2}{U \u B_2'}{(\kw{thunk}{\pi' (\kw{force}{p})})})}}}
\]
\end{small}
which has the type required for the lemma's assumption (1).
Assumption (2) requires two conditions, both of which are proved by
the congruence rules for $\mathsf{thunk}$, $\mathsf{force}$,
pairing, projections, and upcasts. The first,
\begin{small}
\[p : U (\u B_1 \mathbin{\&} \u B_2) \vdash \defupcast{U (\u B_1 \mathbin{\&} \u B_2)}{U (\u B_1 \mathbin{\&} \u B_2)}{p} \sqsubseteq \defupcast{U (\u B_1 \mathbin{\&} \u B_2)}{U (\u B_1' \mathbin{\&} \u B_2')}{p} : U (\u B_1 \mathbin{\&} \u B_2) \sqsubseteq U (\u B_1' \mathbin{\&} \u B_2')
\]
\end{small}
expands to
\begin{small}
\[
\begin{array}{c}
\kw{thunk}{\pair{\kw{force}{(\upcast{U \u B_1}{U \u B_1}{(\kw{thunk}{\pi (\kw{force}{p})})})}}{\kw{force}{(\upcast{U \u B_2}{U \u B_2}{(\kw{thunk}{\pi' (\kw{force}{p})})})}}}\\
\sqsubseteq \\
\kw{thunk}{\pair{\kw{force}{(\upcast{U \u B_1}{U \u B_1'}{(\kw{thunk}{\pi (\kw{force}{p})})})}}{\kw{force}{(\upcast{U \u B_2}{U \u B_2'}{(\kw{thunk}{\pi' (\kw{force}{p})})})}}}
\end{array}
\]
\end{small}
The second,
\begin{small}
\[
p \sqsubseteq p' : U (\u B_1 \mathbin{\&} \u B_2) \sqsubseteq U (\u B_1' \mathbin{\&} \u B_2') \vdash
\defupcast{U (\u B_1 \mathbin{\&} \u B_2)}{U (\u B_1' \mathbin{\&} \u B_2')}{p} \sqsubseteq \defupcast{U (\u B_1' \mathbin{\&} \u B_2')}{U (\u B_1' \mathbin{\&} \u B_2')}{p'} : U (\u B_1' \mathbin{\&} \u B_2')
\]
\end{small}
expands to
\begin{small}
\[
\begin{array}{c}
\kw{thunk}{\pair{\kw{force}{(\upcast{U \u B_1}{U \u B_1'}{(\kw{thunk}{\pi (\kw{force}{p})})})}}{\kw{force}{(\upcast{U \u B_2}{U \u B_2'}{(\kw{thunk}{\pi' (\kw{force}{p})})})}}}\\
\sqsubseteq \\
\kw{thunk}{\pair{\kw{force}{(\upcast{U \u B_1'}{U \u B_1'}{(\kw{thunk}{\pi (\kw{force}{p'})})})}}{\kw{force}{(\upcast{U \u B_2'}{U \u B_2'}{(\kw{thunk}{\pi' (\kw{force}{p'})})})}}}
\end{array}
\]
\end{small}
Finally, for assumption (3), using $\eta$ for $\mathbin{\&}$, $\beta$ and
$\eta$ for $U$ types, and the fact that $\upcast{U \u B}{U \u B}{}$ is the
identity, we have
\begin{small}
\[
\begin{array}{rl}
\kw{thunk}{\pair{\kw{force}{(\upcast{U \u B_1}{U \u B_1}{(\kw{thunk}{\pi (\kw{force}{p})})})}}{\kw{force}{(\upcast{U \u B_2}{U \u B_2}{(\kw{thunk}{\pi' (\kw{force}{p})})})}}} & \mathrel{\gtdyn\ltdyn} \\
\kw{thunk}{\pair{\kw{force}{(\kw{thunk}{\pi (\kw{force}{p})})}}{\kw{force}{(\kw{thunk}{\pi' (\kw{force}{p})})}}} & \mathrel{\gtdyn\ltdyn} \\
\kw{thunk}{\pair{\pi (\kw{force}{p})}{\pi' (\kw{force}{p})}} & \mathrel{\gtdyn\ltdyn} \\
\kw{thunk}{(\kw{force}{p})} & \mathrel{\gtdyn\ltdyn} \\
p
\end{array}
\]
\end{small}
\item Function downcast.
We use Lemma~\ref{lem:downcast} with
the type constructor $X \,\,\text{val type}, \u Y \,\,\text{comp type} \vdash X \to \u Y \,\,\text{comp type}$.
Let
\[\bullet' : A' \to \u B' \vdash \defdncast{A \to \u B}{A' \to \u B'}{\bullet'} : A \to \u B
\]
stand for
\[
\lambda{x}.{\dncast{\u B}{\u B'}{(\bullet' \, (\upcast{A}{A'}{x}))}} \\
\]
which has the type required for the lemma's assumption (1).
Assumption (2) requires two conditions, both of which are proved by
the congruence rules for $\lambda$, application, upcasts, and
downcasts. The first,
\begin{small}
\[\bullet' : A' \to \u B' \vdash \defdncast{A \to \u B}{A' \to \u B'}{\bullet'} \sqsubseteq
\defdncast{A' \to \u B'}{A' \to \u B'}{\bullet'} : A \to \u B \sqsubseteq A' \to \u B'
\]
\end{small}
expands to
\[
\begin{array}{c}
\lambda{x}.{\dncast{\u B}{\u B'}{(\bullet' \, (\upcast{A}{A'}{x}))}} \\
\sqsubseteq \\
\lambda{x'}.{\dncast{\u B'}{\u B'}{(\bullet' \, (\upcast{A'}{A'}{x'}))}} \\
\end{array}
\]
The second,
\begin{small}
\[
\bullet \sqsubseteq \bullet' : A \to \u B \sqsubseteq A' \to \u B' \vdash
\defdncast{A \to \u B}{A \to \u B}{\bullet} \sqsubseteq
\defdncast{A \to \u B}{A' \to \u B'}{\bullet'} : A \to \u B
\]
\end{small}
expands to
\[
\begin{array}{c}
\lambda{x}.{\dncast{\u B}{\u B}{(\bullet \, (\upcast{A}{A}{x}))}} \\
\sqsubseteq \\
\lambda{x}.{\dncast{\u B}{\u B'}{(\bullet' \, (\upcast{A}{A'}{x}))}} \\
\end{array}
\]
For assumption (3), we have, using the facts that $\upcast{A}{A}$ and $\dncast{\u
B}{\u B}$ are the identity and $\eta$ for $\to$,
\[
\lambda{x}.{\dncast{\u B}{\u B}{(\bullet \, (\upcast{A}{A}{x}))}} \\
\mathrel{\gtdyn\ltdyn}
\lambda{x}.{{(\bullet \, ({x}))}} \\
\mathrel{\gtdyn\ltdyn}
\bullet
\]
\item Function upcast.
We use Lemma~\ref{lem:upcast} with the type
constructor $X \,\,\text{val type}, \u Y \,\,\text{comp type} \vdash U (X \to \u Y) \,\,\text{val type}$.
Suppose $A \sqsubseteq A'$ as value types and $\u B \sqsubseteq \u B'$ as
computation types and let
\[f : U (A \to \u B) \vdash \defupcast{U (A \to \u B)}{U (A' \to \u B')}{f} : U (A' \to \u B')
\]
stand for
\begin{small}
\[
\kw{thunk}{(\lambda x'.\bindXtoYinZ{\dncast{\u F A}{\u F A'}{(\kw{ret}
x')}}{x}{ \kw{force}{(\upcast{U \u B}{U \u B'}{(\kw{thunk}{(\kw{force}{(f)}\,x)})})}})}
\]
\end{small}
which has the type required for the lemma's assumption (1).
Assumption (2) requires two conditions, both of which are proved by
the congruence rules for $\mathsf{thunk}$, $\mathsf{force}$,
functions, application, upcasts, and downcasts. The first,
\begin{small}
\[
f : U (A \to \u B) \vdash \defupcast{U (A \to \u B)}{U (A \to \u B)}{f} \sqsubseteq \defupcast{U (A \to \u B)}{U (A' \to \u B')}{f} : U (A \to \u B) \sqsubseteq U (A' \to \u B')
\]
\end{small}
expands to
\begin{small}
\[
\begin{array}{c}
\kw{thunk}{(\lambda x.\bindXtoYinZ{\dncast{\u F A}{\u F A}{(\kw{ret} x)}}{x}{ \kw{force}{(\upcast{U \u B}{U \u B}{(\kw{thunk}{(\kw{force}{(f)}\,x)})})}})}\\
\sqsubseteq \\
\kw{thunk}{(\lambda x'.\bindXtoYinZ{\dncast{\u F A}{\u F A'}{(\kw{ret} x')}}{x}{ \kw{force}{(\upcast{U \u B}{U \u B'}{(\kw{thunk}{(\kw{force}{(f)}\,x)})})}})}
\end{array}
\]
\end{small}
The second,
\begin{small}
\[
f \sqsubseteq f' : U (A \to \u B) \sqsubseteq U (A' \to \u B') \vdash
\defupcast{U (A \to \u B)}{U (A' \to \u B')}{f} \sqsubseteq
\defupcast{U (A' \to \u B')}{U (A' \to \u B')}{f'} : U (A' \to \u B')
\]
\end{small}
expands to
\begin{small}
\[
\begin{array}{c}
\kw{thunk}{(\lambda x'.\bindXtoYinZ{\dncast{\u F A}{\u F A'}{(\kw{ret} x')}}{x}{ \kw{force}{(\upcast{U \u B}{U \u B'}{(\kw{thunk}{(\kw{force}{(f)}\,x)})})}})}\\
\sqsubseteq \\
\kw{thunk}{(\lambda x'.\bindXtoYinZ{\dncast{\u F A'}{\u F A'}{(\kw{ret} x')}}{x'}{ \kw{force}{(\upcast{U \u B'}{U \u B'}{(\kw{thunk}{(\kw{force}{(f')}\,x')})})}})}
\end{array}
\]
\end{small}
Finally, for assumption (3), using $\eta$ for $\to$, $\beta$ for $\u F$
types, $\beta/\eta$ for $U$ types, and the facts that $\upcast{U \u B}{U \u B}{}$ and
$\dncast{\u F A}{\u F A}$ are the identity, we have
\begin{small}
\[
\begin{array}{rl}
\kw{thunk}{(\lambda x.\bindXtoYinZ{\dncast{\u F A}{\u F A}{(\kw{ret} x)}}{x}{ \kw{force}{(\upcast{U \u B}{U \u B}{(\kw{thunk}{(\kw{force}{(f)}\,x)})})}})} & \mathrel{\gtdyn\ltdyn} \\
\kw{thunk}{(\lambda x.\bindXtoYinZ{{(\kw{ret} x)}}{x}{\kw{force}{{(\kw{thunk}{(\kw{force}{(f)}\,x)})}}})} & \mathrel{\gtdyn\ltdyn} \\
\kw{thunk}{(\lambda x.\kw{force}{{(\kw{thunk}{(\kw{force}{(f)}\,x)})}})} & \mathrel{\gtdyn\ltdyn} \\
\kw{thunk}{(\lambda x.(\kw{force}{(f)}\,x))} & \mathrel{\gtdyn\ltdyn} \\
\kw{thunk}{(\kw{force}{(f)})} & \mathrel{\gtdyn\ltdyn} \\
f
\end{array}
\]
\end{small}
\item $z : 0 \vdash \upcast{0}{A}z \mathrel{\gtdyn\ltdyn} \kw{absurd} z : A$ is
immediate by $\eta$ for 0 on the map $z : 0 \vdash \upcast{0}{A}z :
A$.
\end{enumerate}
\end{longproof}
\begin{longonly}
\subsubsection{Shifts}
\end{longonly}
In GTT, we assert the existence of value upcasts and computation
downcasts for derivable type dynamism relations. While we do not assert
the existence of all \emph{value} downcasts and \emph{computation}
upcasts, we can define the universal property that identifies a term as
such:
\begin{definition}[Stack upcasts/value downcasts] \label{def:value-down-computation-up} ~
\begin{enumerate}
\item
If $\u B \sqsubseteq \u B'$, a \emph{stack upcast from $\u B$ to $\u B'$}
is a stack $\bullet : \u B \vdash \defupcast{\u B}{\u B'} \bullet : \u
B'$ that satisfies the computation dynamism rules of an upcast
${\bullet : \u B \vdash \bullet \sqsubseteq \defupcast{\u B}{\u B'} \bullet
: \u B \sqsubseteq \u B'}$ and
${\bullet \sqsubseteq \bullet' : \u B \sqsubseteq \u B' \vdash \defupcast{\u B}{\u B'} \bullet \sqsubseteq \bullet' : \u B'}$.
\item If $A \sqsubseteq A'$, a \emph{value downcast from $A'$ to $A$} is a
complex value $x : A' \vdash \defdncast{A}{A'} x : A$ that satisfies
the value dynamism rules of a downcast
${x : A' \vdash \defdncast{A}{A'}{x} \sqsubseteq x : A \sqsubseteq A'}$
and
${x \sqsubseteq x' : A \sqsubseteq A' \vdash x \sqsubseteq \defdncast{A}{A'} x' : A}$.
\end{enumerate}
\end{definition}
\begin{longonly}
Because the proofs of Lemma~\ref{lem:cast-left-right},
Lemma~\ref{lem:cast-congruence}, Theorem~\ref{thm:decomposition},
Theorem~\ref{thm:casts-unique} rely only on the axioms for
upcasts/downcasts, the analogues of these theorems hold for stack
upcasts and value downcasts as well.
\end{longonly}
Some value downcasts and computation upcasts do exist, leading to a
characterization of the casts for the monad $U \u F$ and comonad $\u F
U$ of the adjunction $\u F \dashv U$:
\begin{theorem}[Cast Unique Implementation Theorem for $U \u F, \u F U$] \label{thm:monadic-comonadic-casts}
Let $A \sqsubseteq A'$ and $\u B \sqsubseteq \u B'$.
\begin{enumerate}
\item $\bullet : \u F A \vdash
\bindXtoYinZ{\bullet}{x:A}{\kw{ret}{(\upcast{A}{A'}{x})}} : \u F A'$
is a stack upcast.
\item If $\defupcast{\u B}{\u B'}$ is a stack upcast, then\\
$x : U \u B \vdash \upcast{U \u B}{U \u B'}{x} \mathrel{\gtdyn\ltdyn} \kw{thunk}{(\defupcast{\u B}{\u B'}{(\kw{force} x)})} : U \u B'$
\item $x : \u U B' \vdash \kw{thunk}{(\dncast{\u B}{\u B'}{(\kw{force} x)})} : U
\u B$ is a value downcast.
\item If $\defdncast{A}{A'}$ is a value downcast, then\\
$\bullet : \u F A' \vdash \dncast{\u F A}{\u F A'}{\bullet} \mathrel{\gtdyn\ltdyn} \bindXtoYinZ{\bullet}{x':A'}{\kw{ret}{(\defdncast{A}{A'}{x'})}} : \u F A$
\item
$\begin{array}{c}
x : U \u F A \vdash \upcast{U \u F A}{U \u F A'}{x} \mathrel{\gtdyn\ltdyn} \kw{thunk}{ (\bindXtoYinZ{{\kw{force} x}}{x:A}{\kw{ret}{(\upcast{A}{A'}{x})}})}\\
\bullet : \u F U \u B' \vdash \dncast{\u F U \u B}{\u F U \u B'}{\bullet} \mathrel{\gtdyn\ltdyn}
\bindXtoYinZ{\bullet}{x':U \u B'}{\kw{ret}{(\kw{thunk}{(\dncast{\u B}{\u B'}{(\kw{force} x')})})}}
\end{array}$
\end{enumerate}
\end{theorem}
\begin{longproof}
\begin{enumerate}
\item
To show
\[
\bullet : \u F A \vdash \bullet \sqsubseteq
\bindXtoYinZ{\bullet}{x:A}{\kw{ret}{(\upcast{A}{A'}{x})}} : \u F A
\sqsubseteq \u F A'
\]
we can $\eta$-expand $\bullet \mathrel{\gtdyn\ltdyn}
\bindXtoYinZ{\bullet}{x}{\kw{ret}{x}}$ on the left, at which point by
congruence it suffices to show $x \sqsubseteq \upcast{A}{A'}{x}$, which
is true by upcast right. To show
\[
\bullet \sqsubseteq \bullet' : \u F A \sqsubseteq \u F A' \vdash
\bindXtoYinZ{\bullet}{x:A}{\kw{ret}{(\upcast{A}{A'}{x})}}
\sqsubseteq
\bullet'
: \u F A'
\]
we can $\eta$-expand $\bullet' \mathrel{\gtdyn\ltdyn}
\bindXtoYinZ{\bullet'}{x'}{\kw{ret}{x'}}$ on the right,
and then apply congruence, the assumption that $\bullet \sqsubseteq
\bullet'$, and upcast left.
\item We apply the upcast lemma with the type constructor $\u Y \,\,\text{comp type}
\vdash U \u Y \,\,\text{val type}$. The term $\kw{thunk}{(\defupcast{\u B}{\u
B'}{(\kw{force} x)})}$ has the correct type for assumption (1). For
assumption (2), we show
\[
x : U \u B \vdash \kw{thunk}{(\defupcast{\u B}{\u B}{(\kw{force} x)})} \sqsubseteq
\kw{thunk}{(\defupcast{\u B}{\u B'}{(\kw{force} x)})} : U \u B \sqsubseteq U \u B'
\]
by congruence for $\mathsf{thunk}$, $\defupcast{\u B}{\u B}$ (proved
analogously to Lemma~\ref{lem:cast-congruence}), and $\mathsf{force}$.
We show
\[
x \sqsubseteq x' : U \u B \sqsubseteq U \u B' \vdash
\kw{thunk}{(\defupcast{\u B}{\u B'}{(\kw{force} x)})}
\sqsubseteq
\kw{thunk}{(\defupcast{\u B'}{\u B'}{(\kw{force} x')})}
: U \u B'
\]
by congruence as well.
Finally, for assumption (3), we have
\[
\begin{array}{cc}
\kw{thunk}{(\defupcast{\u B}{\u B}{(\kw{force} x)})} & \mathrel{\gtdyn\ltdyn} \\
\kw{thunk}{({(\kw{force} x)})} & \mathrel{\gtdyn\ltdyn} \\
x
\end{array}
\]
using $\eta$ for $U$ types and the identity principle for
$\defupcast{\u B}{\u B}$ (proved analogously to
Theorem~\ref{thm:decomposition}).
\item To show
\[
x' : U \u B' \vdash \kw{thunk}{(\dncast{\u B}{\u B'}{(\kw{force} x')})} \sqsubseteq x' : U \u B \sqsubseteq U \u B'
\]
we can $\eta$-expand $x'$ to $\kw{thunk}{\kw{force}{x'}}$, and then by
congruence it suffices to show $\dncast{\u B}{\u B'}{(\kw{force} x')}
\sqsubseteq \kw{force}{x'} : \u B \sqsubseteq \u B'$, which is downcast left.
Conversely, for
\[
x \sqsubseteq x' : U \u B \sqsubseteq U \u B' \vdash x \sqsubseteq \kw{thunk}{(\dncast{\u B}{\u B'}{(\kw{force} x')})} : U \u B
\]
we $\eta$-expand $x$ to $\kw{thunk}{(\kw{force}{x})}$, and then it suffices
to show $\dncast{\u B}{\u B'}{(\kw{force}{x})} \sqsubseteq \kw{force}{x'}$, which
is true by downcast right and congruence of $\mathsf{force}$ on the
assumption $x \sqsubseteq x'$.
\item We use the downcast lemma with $X \,\,\text{val type} \vdash \u F X \,\,\text{comp type}$,
where $\bindXtoYinZ{\bullet}{x':A'}{\kw{ret}{(\defdncast{A}{A'}{x'})}}$
has the correct type for assumption (1). For assumption (2), we
show
\[
\bullet : \u F A' \vdash
\bindXtoYinZ{\bullet}{x':A'}{\kw{ret}{(\defdncast{A}{A'}{x'})}}
\sqsubseteq
\bindXtoYinZ{\bullet}{x':A'}{\kw{ret}{(\defdncast{A'}{A'}{x'})}}
: \u F A \sqsubseteq \u F A'
\]
by congruence for $\mathsf{bind}$, $\mathsf{ret}$, and
$\defdncast{A'}{A'}$ (which is proved analogously to
Lemma~\ref{lem:cast-congruence}).
We also show
\[
\bullet \sqsubseteq \bullet' : \u F A \sqsubseteq \u F A' \vdash
\bindXtoYinZ{\bullet}{x:A}{\kw{ret}{(\defdncast{A}{A}{x})}}
\sqsubseteq
\bindXtoYinZ{\bullet'}{x':A'}{\kw{ret}{(\defdncast{A}{A'}{x'})}}
: \u F A
\]
by congruence.
Finally, for assumption (3), we have
\[
\begin{array}{rc}
\bindXtoYinZ{\bullet}{x:A}{\kw{ret}{(\defdncast{A}{A}{x})}} & \mathrel{\gtdyn\ltdyn} \\
\bindXtoYinZ{\bullet}{x:A}{\kw{ret}{({x})}} & \mathrel{\gtdyn\ltdyn} \\
\bullet
\end{array}
\]
using the identity principle for $\defdncast{A}{A}$ (proved
analogously to Theorem~\ref{thm:decomposition}) and $\eta$ for $F$
types.
\item Combining parts (1) and (2) gives the first equation, while
combining parts (3) and (4) gives the second equation.
\end{enumerate}
\end{longproof}
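Part (5) of the theorem has a familiar functional-programming reading. As a minimal sketch (our rendering, not the paper's formalism), model a computation of type $\u F A$ as a Python thunk (a zero-argument function) returning its result; the cast on $U \u F A$ then just maps the upcast over the result of the forced computation:

```python
# Sketch (our rendering, names ours): the monadic upcast on U F A is
#   thunk (bind (force x) (lambda x: ret (up x)))
# i.e., force the thunk, run the computation, and upcast the returned
# value. Computations F A are modeled as zero-argument functions.
def up_UF(up, thunk):
    return lambda: up(thunk())
```

The dual comonadic downcast on $\u F U \u B$ similarly rewraps the returned thunk with the stack downcast.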
\begin{longonly}
\subsubsection{Derived Rules for Call-by-value Function Types}
\end{longonly}
Recall that for value types $A_1$ and $A_2$, the CBV function type is
$U(A_1 \to \u F A_2)$. As a corollary of
Theorems~\ref{thm:functorial-casts} and
\ref{thm:monadic-comonadic-casts}, we have
\begin{corollary}[Cast Unique Implementation for CBV Functions]
\[
\begin{small}
\begin{array}{l}
\begin{array}{rcll}
\upcast{U(A_1 \to \u F A_2)}{U(A_1' \to \u F A_2')}{f} & \mathrel{\gtdyn\ltdyn} &
\kw{thunk} (\lambda x'. & \bindXtoYinZ{\dncast{\u F A_1}{\u F A_1'}{(\kw{ret} x')}}{x}{} \\
& & & \bindXtoYinZ{(\kw{force}{(f)}\,x)}{y}\\ & & & {\kw{ret}{(\upcast{A_2}{A_2'}{y})}})\\
\end{array}
\\
\begin{array}{rcl}
\dncast{\u F U(A_1 \to \u F A_2)}{\u F U(A_1' \to \u F A_2')}{\bullet} & \mathrel{\gtdyn\ltdyn} &
\bindXtoYinZ{\bullet}{f}\\
& & {\kw{ret}{(\kw{thunk}{(\lambda{x}.{\dncast{\u F A_2}{\u F A_2'}{(\kw{force}{(f)} \, (\upcast{A_1}{A_1'}{x}))}})})}}
\end{array}
\end{array}
\end{small}
\]
\end{corollary}
\begin{longproof}
For the upcast, by Theorem~\ref{thm:functorial-casts}, it's equal to
\[
\kw{thunk} (\lambda x'. \bindXtoYinZ{\dncast{\u F A_1}{\u F A_1'}{(\kw{ret} x')}}{x}{}
{ \kw{force}{(\upcast{U \u F A_2}{U \u F A_2'}{(\kw{thunk}{(\kw{force}{(f)}\,x)})})}} )
\]
By Theorem~\ref{thm:monadic-comonadic-casts}, $\upcast{U \u F A_2}{U \u F A_2'}$ is equal to
\[
\kw{thunk}{ (\bindXtoYinZ{{\kw{force} -}}{x}{\kw{ret}{(\upcast{A_2}{A_2'}{x})}})}
\]
so $\beta$-reducing $\kw{force}$ and $\kw{thunk}$ twice gives the result.
For the downcast, by Theorem~\ref{thm:monadic-comonadic-casts}, it's
equal to
\[
\bindXtoYinZ{\bullet}{x}{\kw{ret}{(\kw{thunk}{(\dncast{(A_1 \to \u F A_2)}{(A_1' \to \u F A_2')}{(\kw{force} x)})})}}
\]
and by Theorem~\ref{thm:functorial-casts} $\dncast{(A_1 \to \u F A_2)}{(A_1' \to \u F A_2')}{-}$ is equal to
\[
\lambda{x}.{\dncast{\u F A_2}{\u F A_2'}{(- \, (\upcast{A_1}{A_1'}{x}))}}
\]
\end{longproof}
These are equivalent to the CBPV translations of the standard CBV wrapping
implementations; for example, the CBV upcast term
$\lambda x'.\letXbeYinZ{\dncast{A_1}{A_1'}{x'}}{x}{\upcast{A_2}{A_2'}{(f x)}}$
has its evaluation order made explicit, and the fact that its upcast is
a (complex) value exposed. In the downcast, the GTT term is free to
let-bind $(\upcast{A_1}{A_1'}{x})$ to avoid duplicating it, but because
it is a (complex) value, it can also be substituted directly, which
might expose reductions that can be optimized.
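To make the wrapping reading concrete, here is a small Python sketch (the names `Cast`, `fun_cast`, and `int_cast` are ours, not GTT's), with the runtime type error modeled by a `CastError` exception:

```python
# Sketch only: casts modeled as Python functions; the dynamic type
# error is modeled by raising CastError. Names are ours, not GTT's.
class CastError(Exception):
    pass

class Cast:
    """An upcast/downcast pair between a type and a more dynamic one."""
    def __init__(self, up, dn):
        self.up = up   # total: embeds the less dynamic type
        self.dn = dn   # partial: raises CastError on a tag mismatch

def up_int(n):
    return ("int", n)          # tag an int as a "dynamic" value

def dn_int(d):
    tag, payload = d
    if tag != "int":           # tag check = runtime type error
        raise CastError("expected an int tag")
    return payload

int_cast = Cast(up_int, dn_int)

def fun_cast(c1, c2):
    """Wrapping implementation of the CBV function cast: the upcast
    downcasts the argument and upcasts the result; the downcast wraps
    the other way around and is itself a wrapper (a value), deferring
    any errors to the call site."""
    def up(f):
        return lambda x_dyn: c2.up(f(c1.dn(x_dyn)))
    def dn(f_dyn):
        return lambda x: c2.dn(f_dyn(c1.up(x)))
    return Cast(up, dn)
```

Note that `fun_cast(...).dn` returns its wrapper immediately, mirroring the fact that the function downcast is a (complex) value whose errors are deferred to the call site.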
\begin{longonly}
\subsection{Least Dynamic Types}
\begin{longonly}
\begin{theorem}[Least Dynamic Value Type]
If $\leastdynv$ is a type such that $\leastdynv \sqsubseteq A$ for all $A$,
then in GTT with a strict initial object $0$, $\leastdynv \cong_{v}
0$.
\end{theorem}
\begin{proof}
We have the upcast $x : \leastdynv \vdash \upcast{\leastdynv}{0}{x} :
0$, so Lemma~\ref{lem:initial} gives the result.
\end{proof}
The fact that $\leastdynv$ is strictly initial seems to depend on the
fact that we have a strictly initial object: In GTT without a $0$ type,
it seems that we cannot prove that $x : \leastdynv \vdash
\upcast{\leastdynv}{A}{x} : A$ is the unique such map.
\begin{theorem}[Least Dynamic Computation Type]
If $\leastdync$ is a type such that $\leastdync \sqsubseteq \u B$ for all $\u
B$, and we have a terminal computation type $\top$, then $U \leastdync
\cong_{v} U \top$.
\end{theorem}
\begin{proof}
We have stacks $\bullet : \top \vdash \dncast{\leastdync}{\top}{\bullet} :
\leastdync$ and $\bullet : \leastdync \vdash \{\} : \top$. The
composite at $\top$ is the identity by Lemma~\ref{lem:terminal}. However,
because $\top$ is not a strict terminal object, the dual of the above
argument does not give a stack isomorphism $\leastdync \cong_c \top$.
However, using the retract axiom, we have
\[
\begin{array}{c}
x : U \leastdync \vdash \upcast{U \leastdync}{U \top}{x} : U \top\\
y : U \top \vdash \kw{thunk}{(\dncast{\leastdync}{\top}{(\kw{force}{y})})} : U \leastdync\\
x : U \leastdync \vdash \kw{thunk}{(\dncast{\leastdync}{\top}{(\kw{force}{(\upcast{U \leastdync}{U \top}{x})})})} \mathrel{\gtdyn\ltdyn} x : U \leastdync
\end{array}
\]
and the composite
\[
y : U \top \vdash \upcast{U \leastdync}{U \top}{(\kw{thunk}{(\dncast{\leastdync}{\top}{(\kw{force}{y})})})} : U \top
\]
is the identity by uniqueness for $U \top$ (Lemma~\ref{lem:terminal}).
\end{proof}
This suggests taking $\bot_v := 0$ and $\bot_c := \top$.
\end{longonly}
\begin{theorem}
The casts determined by $0 \sqsubseteq A$ are
\[
\upcast{0}{A}z \mathrel{\gtdyn\ltdyn} \kw{absurd} z \qquad \dncast{\u F 0}{\u F A}{\bullet} \mathrel{\gtdyn\ltdyn} \bindXtoYinZ{\bullet}{\_}{\mho}
\]
Dually, the casts determined by $\top \sqsubseteq \u B$ are
\[
\dncast{\top}{\u B}{\bullet} \mathrel{\gtdyn\ltdyn} \{\} \qquad \upcast{U \top}{U \u B}{u} \mathrel{\gtdyn\ltdyn} \kw{thunk} \mho
\]
\end{theorem}
\begin{longproof}
\begin{enumerate}
\item $z : 0 \vdash \upcast{0}{A}{z} \mathrel{\gtdyn\ltdyn} \kw{absurd}\, z : A$ is
immediate by $\eta$ for $0$.
\item First, to show
$\bullet : \u F A \vdash \bindXtoYinZ{\bullet}{\_}{\mho} \sqsubseteq \dncast{\u F 0}{\u F A}{\bullet}$,
we can $\eta$-expand the right-hand side into
$\bindXtoYinZ{\bullet}{x:A}{\dncast{\u F 0}{\u F A}{\kw{ret}{x}}}$,
at which point the result follows by congruence
and the fact that type error is minimal, so
$\mho \sqsubseteq {\dncast{\u F 0}{\u F A}{\kw{ret}{x}}}$.
Second, to show
$\bullet : \u F A \vdash \dncast{\u F 0}{\u F A}{\bullet} \sqsubseteq \bindXtoYinZ{\bullet}{\_}{\mho}$,
we can $\eta$-expand the left-hand side to
$\bullet : \u F A \vdash \bindXtoYinZ{\dncast{\u F 0}{\u F A}{\bullet}}{y}{\kw{ret} y}$,
so we need to show
\[
\bullet: \u F A \vdash \bindXtoYinZ{\dncast{\u F 0}{\u F A}{\bullet}}{y:0}{\kw{ret} y} \sqsubseteq \bindXtoYinZ{\bullet}{y':A}{\mho} : \u F 0
\]
We apply congruence, with $\bullet : \u F A \vdash {\dncast{\u F
0}{\u F A}{\bullet}} \sqsubseteq \bullet : \u F 0 \sqsubseteq \u F A$ by the
universal property of downcasts in the first premise, so it suffices
to show
\[
y \sqsubseteq y' : 0 \sqsubseteq A \vdash \kw{ret}{y} \sqsubseteq \mho_{\u F 0} : \u F 0
\]
By transitivity with $y \sqsubseteq y' : 0 \sqsubseteq A \vdash \mho_{\u F 0}
\sqsubseteq \mho_{\u F 0} : \u F 0 \sqsubseteq \u F 0$, it suffices to show
\[
y \sqsubseteq y : 0 \sqsubseteq 0 \vdash \kw{ret}{y} \sqsubseteq \mho_{\u F 0} : \u F 0
\]
But now both sides are maps out of $0$, and therefore equal by
Lemma~\ref{lem:initial}.
\item The downcast is immediate by $\eta$ for $\top$,
Lemma~\ref{lem:terminal}.
\item First,
\[
u : U \top \vdash \kw{thunk} \mho \sqsubseteq \kw{thunk}{(\kw{force}{(\upcast{U \top}{U \u B}{u})})} \mathrel{\gtdyn\ltdyn} {\upcast{U \top}{U \u B}{u}} : U \u B
\]
by congruence, $\eta$ for $U$, and the fact that error is minimal.
Conversely, to show
\[
u : U \top \vdash {\upcast{U \top}{U \u B}{u}} \sqsubseteq \kw{thunk} \mho : U \u B
\]
it suffices to show
\[
u : U \top \vdash u \sqsubseteq \kw{thunk} \mho_{\u B} : U \top \sqsubseteq U \u B
\]
by the universal property of an upcast. By Lemma~\ref{lem:terminal},
any two elements of $U \top$ are equidynamic, so in particular $u
\mathrel{\gtdyn\ltdyn} \kw{thunk}{\mho_{\top}}$, at which point congruence for
$\mathsf{thunk}$ and $\mho_\top \sqsubseteq \mho_{\u B } : \top \sqsubseteq \u
B$ gives the result.
\end{enumerate}
\end{longproof}
\end{longonly}
\subsection{Upcasts are Values, Downcasts are Stacks}
\label{sec:upcasts-necessarily-values}
Since GTT is an axiomatic theory, we can consider different fragments
than the one presented in Section~\ref{sec:gtt}. Here, we use this
flexibility to show that taking upcasts to be complex values and
downcasts to be complex stacks is forced if this property holds for
casts between \emph{ground} types and ${?}$/$\u {\text{?`}}$. For this section, we define a \emph{ground
type}\footnote{In gradual
typing, ``ground'' is used to mean a one-level unrolling of a dynamic type, not first-order data.} to be generated by the following grammar:
\[
G ::= 1 \mid {?} \times {?} \mid 0 \mid {?} + {?} \mid U \u {\text{?`}}
\qquad
\u G ::= {?} \to \u {\text{?`}} \mid \top \mid \u {\text{?`}} \mathbin{\&} \u {\text{?`}} \mid \u F {?}
\]
\begin{longonly}
\begin{definition}[Ground type dynamism]
Let $A \sqsubseteq' A'$ and $\u B \sqsubseteq' \u B'$ be the relations defined
by the rules in Figure~\ref{fig:gtt-type-dynamism}
with the axioms $A \sqsubseteq {?}$ and $\u B \sqsubseteq \u {\text{?`}}$ restricted to
ground types---i.e., replaced by $G \sqsubseteq {?}$ and $\u G \sqsubseteq \u {\text{?`}}$.
\end{definition}
\begin{lemma} \label{lem:find-ground-type}
For any type $A$, $A \sqsubseteq' {?}$.
For any type $\u B$, $\u B \sqsubseteq' \u {\text{?`}}$.
\end{lemma}
\begin{proof}
By induction on the type. For example, in the case for $A_1 + A_2$, we
have by the inductive hypothesis $A_1 \sqsubseteq' {?}$ and $A_2 \sqsubseteq'
{?}$, so $A_1 + A_2 \sqsubseteq' {?} + {?} \sqsubseteq' {?}$ by congruence
and transitivity, because ${?} + {?}$ is ground. In the case for
$\u F A$, we have $A \sqsubseteq' {?}$ by the inductive hypothesis, so $\u F
A \sqsubseteq' \u F {?} \sqsubseteq' \u {\text{?`}}$.
\begin{lemma}[$\sqsubseteq$ and $\sqsubseteq'$ agree]
$A \sqsubseteq A'$ iff $A \sqsubseteq' A'$, and $\u B \sqsubseteq \u B'$ iff $\u B
\sqsubseteq' \u B'$.
\end{lemma}
\begin{proof}
The ``if'' direction is immediate by induction because every rule of
$\sqsubseteq'$ is a rule of $\sqsubseteq$. To show $\sqsubseteq$ is contained in
$\sqsubseteq'$, we do induction on the derivation of $\sqsubseteq$, where every
rule is true for $\sqsubseteq'$, except $A \sqsubseteq {?}$ and $\u B \sqsubseteq
\u {\text{?`}}$, and for these, we use Lemma~\ref{lem:find-ground-type}.
\end{proof}
\end{longonly}
Let GTT$_G$ be the fragment of GTT where the only primitive casts are
those between ground types and the dynamic types, i.e. the cast terms
are restricted to the substitution closures of
\[
\begin{small}
\begin{array}{llll}
x : G \vdash \upcast{G}{{?}}{x} : {?} &
\bullet : \u F {?} \vdash \dncast{\u F G}{\u F {?}}{\bullet} : \u F G &
\bullet : \u {\text{?`}} \vdash \dncast{\u G}{\u {\text{?`}}}{\bullet} : \u G &
x : U \u G \vdash \upcast{U \u G}{U \u {\text{?`}}}{x} : U \u {\text{?`}}
\end{array}
\end{small}
\]
\begin{lemma}[Casts are Admissible] \label{lem:casts-admissible}
In GTT$_G$ it is admissible that
\begin{enumerate}
\item for all $A \sqsubseteq A'$
there is a complex value $\defupcast{A}{A'}$
satisfying the universal property of an upcast
and a complex stack $\defdncast{\u F A}{\u F A'}$
satisfying the universal property of a downcast
\item for all $\u B \sqsubseteq \u B'$ there is a complex
stack $\defdncast{\u B}{\u B'}$ satisfying the universal property of a
downcast and a complex value $\defupcast{U \u B}{U \u B'}$ satisfying
the universal property of an upcast.
\end{enumerate}
\end{lemma}
\begin{proof}
To streamline the exposition above, we stated
Theorems~\ref{thm:decomposition}, \ref{thm:functorial-casts}, and
\ref{thm:monadic-comonadic-casts} as showing that the
``definitions'' of each cast are equidynamic with the cast that is a
priori postulated to exist (e.g. $\upcast{A}{A''} \mathrel{\gtdyn\ltdyn}
\upcast{A'}{A''}{\upcast{A}{A'}}$). However, the proofs
\begin{longonly}
factor
through Theorem~\ref{thm:casts-unique} and Lemma~\ref{lem:upcast} and
Lemma~\ref{lem:downcast}, which
\end{longonly}
show directly that the right-hand sides have the desired universal
property---i.e. the stipulation that some cast with the correct
universal property exists is not used in the proof that the
implementation has the desired universal property. Moreover, the
proofs given do not rely on any axioms of GTT besides the universal
properties of the ``smaller'' casts used in the definition and the
$\beta\eta$ rules for the relevant types. So these proofs can be used
as the inductive steps here, in GTT$_G$.
\begin{shortonly}
In the extended version we define an alternative type dynamism
relation where casts into dynamic types are factored through ground
types, and use that to drive the induction here.
\end{shortonly}
\begin{longonly}
By induction on type dynamism $A \sqsubseteq' A'$ and $\u B \sqsubseteq' \u B'$.
(We chose not to make this more explicit above, because we believe the
equational description in a language with all casts is a clearer
description of the results, because it avoids needing to hypothesize
terms that behave as the smaller casts in each case.)
We show a few representative cases:
In the cases for $G \sqsubseteq {?}$ or $\u G \sqsubseteq \u {\text{?`}}$, we have
assumed appropriate casts $\upcast{G}{{?}}$ and
$\dncast{\u F G}{\u F {?}}$ and
$\dncast{\u G}{\u {\text{?`}}}$ and
$\upcast{U \u G}{U \u {\text{?`}}}$.
In the case for identity $A \sqsubseteq A$, we need to show that there is
an upcast $\defupcast{A}{A}$ and a downcast $\defdncast{\u F A}{\u F A}$.
The proof of Theorem~\ref{thm:decomposition} shows that the identity
value and stack have the correct universal property.
In the case where type dynamism was concluded by
transitivity between $A \sqsubseteq A'$ and $A' \sqsubseteq A''$, by the
inductive hypotheses we get upcasts $\defupcast{A}{A'}$ and
$\defupcast{A'}{A''}$, and the proof of
Theorem~\ref{thm:decomposition} shows that defining
$\defupcast{A}{A''}$ to be $\defupcast{A'}{A''}{\defupcast{A}{A'}}$
has the correct universal property. For the downcast, we get
$\defdncast{\u F A}{\u F A'}$ and
$\defdncast{\u F A'}{\u F A''}$ by the inductive hypotheses, and the
proof of Theorem~\ref{thm:decomposition} shows that their composition
has the correct universal property.
In the case where type dynamism was concluded by the congruence rule
for $A_1 + A_2 \sqsubseteq A_1' + A_2'$ from $A_i \sqsubseteq A_i'$, we have
upcasts $\defupcast{A_i}{A_i'}$ and downcasts $\defdncast{\u F A_i}{\u
F A_i'}$ by the inductive hypothesis, and the proof of
Theorem~\ref{thm:decomposition} shows that the definitions given there
have the desired universal property.
In the case where type dynamism was concluded by the congruence rule
for $\u F A \sqsubseteq \u F A'$ from $A \sqsubseteq A'$, we obtain by induction
an upcast $\defupcast{A}{A'}$ and a downcast $\defdncast{\u F A}{\u F A'}$.
We need a
\emph{downcast} $\defdncast{\u F A}{\u F A'}$, which we have,
and an \emph{upcast} $\defupcast{U \u F A}{U \u F A'}$, which is
constructed as in Theorem~\ref{thm:monadic-comonadic-casts}.
\end{longonly}
\end{proof}
As discussed in Section~\ref{sec:gtt-casts}, rather than an upcast being
a complex value $x : A \vdash \upcast{A}{A'}{x} : A'$, an a priori more
general type would be a stack $\bullet : \u F A \vdash \upcast{\u F
A}{\u F A'}{\bullet} : \u F A'$, which allows the upcast to perform
effects; dually, an a priori more general type for a downcast $\bullet :
\u B' \vdash \dncast{\u B}{\u B'}{\bullet} : \u B$ would be a value $x :
U \u B' \vdash \dncast{U \u B}{U \u B'}{x} : U \u B$, which allows the
downcast to ignore its argument. The following shows that in GTT$_G$,
if we postulate such stack upcasts/value downcasts as originally
suggested in Section~\ref{sec:gtt-casts}, then in fact these casts
\emph{must} be equal to the action of $U$/$\u F$ on some
value upcasts/stack downcasts, so the potential
for (co)effectfulness affords no additional flexibility.
\begin{theorem}[Upcasts are Necessarily Values, Downcasts are Necessarily Stacks]
\label{thm:upcasts-values-downcasts-stacks}
Suppose we extend GTT$_G$ with the following postulated stack upcasts
and value downcasts (in the sense of
Definition~\ref{def:value-down-computation-up}): For every type
dynamism $A \sqsubseteq A'$, there is a stack upcast $\bullet : \u F A
\vdash \upcast{\u F A}{\u F A'}{\bullet} : \u F A'$, and for every $\u
B \sqsubseteq \u B'$, there is a complex value downcast $x : U \u B' \vdash
\dncast{U \u B}{U \u B'}{x} : U \u B$.
Then there exists a value upcast $\defupcast{A}{A'}$ and a stack
downcast $\defdncast{\u B}{\u B'}$ such that
\[
\begin{array}{c}
\bullet : \u F A \vdash \upcast{\u F A}{\u F A'}{\bullet} \mathrel{\gtdyn\ltdyn} { (\bindXtoYinZ{{\bullet}}{x:A}{\kw{ret}{(\defupcast{A}{A'}{x})}})}\\
x : U \u B' \vdash \dncast{U \u B}{U \u B'}{x} \mathrel{\gtdyn\ltdyn} {(\kw{thunk}{(\defdncast{\u B}{\u B'}{(\kw{force} x)})})}
\end{array}
\]
\end{theorem}
\begin{proof}
Lemma~\ref{lem:casts-admissible} constructs $\defupcast{A}{A'}$ and
$\defdncast{\u B}{\u B'}$, so the proof of
Theorem~\ref{thm:monadic-comonadic-casts} (which really works for any
$\defupcast{A}{A'}$ and $\defdncast{\u B}{\u B'}$ with the correct
universal properties, not only the postulated casts) implies that the
right-hand sides of the above equations are stack upcasts and value
downcasts of the appropriate type. Since stack upcasts/value downcasts
are unique by an argument analogous to Theorem~\ref{thm:casts-unique},
the postulated casts must be equal to these.
\end{proof}
\begin{longonly}
Indeed, the following a priori even more general assumption provides no
more flexibility:
\begin{theorem}[Upcasts are Necessarily Values, Downcasts are Necessarily Stacks II]
Suppose we extend GTT$_G$ only with postulated monadic upcasts $x : U
\u F A \vdash \upcast{U \u F A}{U \u F A'}{x} : U \u F A'$ for every
$A \sqsubseteq A'$ and comonadic downcasts $\bullet : \u F U \u B' \vdash
\dncast{\u F U \u B}{\u F U \u B'}{\bullet} : \u F U \u B$ for every
$\u B \sqsubseteq \u B'$.
Then there exists a value upcast $\defupcast{A}{A'}$ and a stack
downcast $\defdncast{\u B}{\u B'}$ such that
\[
\begin{array}{c}
x : U \u F A \vdash \upcast{U \u F A}{U \u F A'}{x} \mathrel{\gtdyn\ltdyn} \kw{thunk}{ (\bindXtoYinZ{{\kw{force} x}}{x:A}{\kw{ret}{(\defupcast{A}{A'}{x})}})}\\
\bullet : \u F U \u B' \vdash \dncast{\u F U \u B}{\u F U \u B'}{\bullet} \mathrel{\gtdyn\ltdyn}
\bindXtoYinZ{\bullet}{x':U \u B'}{\kw{ret}{(\kw{thunk}{(\defdncast{\u B}{\u B'}{(\kw{force} x')})})}}
\end{array}
\]
\end{theorem}
In CBV terms, the monadic upcast is like an upcast from $A$ to $A'$
having type $(1 \to A) \to A'$, i.e., it takes a thunked
effectful computation of an $A$ as input and produces an effectful
computation of an $A'$.
\begin{proof}
Again, Lemma~\ref{lem:casts-admissible} constructs $\defupcast{A}{A'}$
and $\defdncast{\u B}{\u B'}$, so the proof of part (5) of
Theorem~\ref{thm:monadic-comonadic-casts} gives the result.
\end{proof}
\end{longonly}
\begin{longonly}
\subsection{Equidynamic Types are Isomorphic}
\begin{theorem}[Equidynamism implies Isomorphism]
\begin{enumerate}
\item
If $A \sqsubseteq A'$ and $A' \sqsubseteq A$ then $A \cong_v A'$.
\item
If $\u B \sqsubseteq \u B'$ and $\u B' \sqsubseteq \u B$ then $\u B \cong_c \u B'$.
\end{enumerate}
\end{theorem}
\begin{proof}
\begin{enumerate}
\item We have upcasts $x : A \vdash \upcast{A}{A'}{x} : A'$ and $x' : A' \vdash \upcast{A'}{A}{x'} : A$.
For the composites, to show
$x : A \vdash \upcast{A'}{A}{\upcast{A}{A'}{x}} \sqsubseteq x$
we apply upcast left twice, and conclude $x \sqsubseteq x$ by reflexivity.
To show,
$x : A \vdash x \sqsubseteq \upcast{A'}{A}{\upcast{A}{A'}{x}}$,
we have $x : A \vdash x \sqsubseteq {\upcast{A}{A'}{x}} : A \sqsubseteq A'$
by upcast right, and therefore
$x : A \vdash x \sqsubseteq \upcast{A'}{A}{\upcast{A}{A'}{x}} : A \sqsubseteq A$
again by upcast right.
The other composite is the same proof with $A$ and $A'$ swapped.
\item We have downcasts $\bullet : \u B' \vdash \dncast{\u B}{\u B'}{\bullet} : \u B$ and
$\bullet : \u B \vdash \dncast{\u B'}{\u B}{\bullet} : \u B'$.
For the composites, to show $\bullet : \u B' \vdash \bullet \sqsubseteq
\dncast{\u B'}{\u B}{\dncast{\u B}{\u B'}{\bullet}}$, we apply
downcast right twice, and conclude $\bullet \sqsubseteq \bullet$. For
$\dncast{\u B'}{\u B}{\dncast{\u B}{\u B'}{\bullet}} \sqsubseteq
\bullet$, we first have $\dncast{\u B}{\u B'}{\bullet} \sqsubseteq
\bullet : \u B \sqsubseteq \u B'$ by downcast left, and then the result
by another application of downcast left.
The other composite is the same proof with $\u B$ and $\u B'$ swapped.
\end{enumerate}
\end{proof}
\end{longonly}
\section{Contract Models of GTT}
\label{sec:contract}
To show the soundness of our theory, and demonstrate its relationship to
operational definitions of observational equivalence and the gradual
guarantee, we develop \emph{models} of GTT using observational error
approximation of a \emph{non-gradual} CBPV.
We call this the \emph{contract translation} because it translates the
built-in casts of the gradual language into ordinary terms implemented
in a non-gradual language.
While contracts are typically implemented in a dynamically typed
language, our target is typed, retaining type information similarly to
manifest contracts \cite{greenberg-manifest}.
We give implementations of the dynamic value type in the usual way as
a recursive sum of basic value types, i.e., using type tags, and we
give implementations of the dynamic computation type as the dual: a
recursive product of basic computation types.
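For instance, the type-tag implementation of the dynamic value type can be sketched as follows (a hypothetical Python model, with names of our choosing, not one of the paper's two translations):

```python
# Hypothetical sketch (names ours): the dynamic value type as a
# recursive sum of basic value types, i.e., type tags. A dynamic
# value is a pair of a tag and a payload whose parts are again dynamic.
UNIT, PAIR, SUM, THUNK = "unit", "pair", "sum", "thunk"

def up_pair(d1, d2):
    """Upcast from ? x ? into ?: just attach the tag."""
    return (PAIR, (d1, d2))

def dn_pair(d):
    """Downcast from ? to ? x ?: check the tag; a mismatch is the
    runtime manifestation of the downcast's type error."""
    tag, payload = d
    if tag != PAIR:
        raise TypeError("dynamic type error: expected a pair tag")
    return payload
```

The dynamic computation type is dual: instead of a sum of tagged values, it offers a product of behaviors, one per basic computation type.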
Writing $\sem{M}$ for any of the contract translations, the remaining
sections of the paper establish:
\begin{theorem}[Equidynamism implies Observational Equivalence]
If $\Gamma \vdash M_1 \mathrel{\gtdyn\ltdyn} M_2 : \u B$, then for any closing
GTT context $C : (\Gamma \vdash \u B) \Rightarrow (\cdot \vdash \u F
(1+1))$, $\sem{C[M_1]}$ and $\sem{C[M_2]}$ have the same behavior: both diverge,
both run to an error, both run to $\texttt{true}$, or both run to $\texttt{false}$.
\end{theorem}
\begin{theorem}[Graduality]
If $\Gamma_1 \sqsubseteq \Gamma_2 \vdash M_1 \sqsubseteq M_2 : \u B_1 \sqsubseteq \u B_2$,
then for any GTT context $C : (\Gamma_1 \vdash \u B_1) \Rightarrow (\cdot
\vdash \u F (1+1))$, and any valid interpretation of the dynamic
types, either
\begin{enumerate}
\item $\sem{C[M_1]} \Downarrow \mho$, or
\item $\sem{C[M_1]} \Uparrow$ and $\sem{C[\dncast{\u B_1}{\u B_2}M_2[\upcast{\Gamma_1}{\Gamma_2}{\Gamma_1}]]} \Uparrow$, or
\item $\sem{C[M_1]} \Downarrow \kw{ret} V$,~~
$\sem{C[\dncast{\u B_1}{\u B_2}M_2[\upcast{\Gamma_1}{\Gamma_2}{\Gamma_1}]]}
\Downarrow \kw{ret} V$, and $V = \texttt{true}$ or $V = \texttt{false}$.
\end{enumerate}
\end{theorem}
As a consequence we will also get consistency of our logic of
dynamism:
\begin{corollary}[Consistency \iflong of GTT \fi]
$\cdot \vdash \kw{ret} \kw{true} \sqsubseteq \kw{ret} \kw{false} : \u F(1+1)$ is not
provable in GTT.
\end{corollary}
\begin{longproof}
They are distinguished by the identity context.
\end{longproof}
We break down this proof into 3 major steps.
\begin{enumerate}
\item (This section) We translate GTT into a statically typed CBPV*\
language where the casts of GTT are translated to ``contracts'':
i.e., CBPV*\ terms that implement the runtime type checking. We
translate the term dynamism of GTT to an inequational theory for CBPV*.
Our translation is parameterized by the implementation of the dynamic
types, and we demonstrate two valid implementations, one more direct
and one more Scheme-like.
\item (Section \ref{sec:complex}) Next, we eliminate all uses of complex
values and stacks from the CBPV language. We translate the complex
values and stacks to terms with a proof that they are ``pure''
(thunkable or linear~\cite{munchmaccagnoni14nonassociative}). This part has little to do with GTT
specifically, except that it shows the behavioral property that
corresponds to upcasts being complex values and downcasts being
complex stacks.
\item (Section \ref{sec:lr}) Finally, with complex values and stacks
eliminated, we give a standard operational semantics for CBPV and
define a \emph{logical relation} that is sound and complete with
respect to observational error approximation. Using the logical
relation, we show that the inequational theory of CBPV is sound for
observational error approximation.
\end{enumerate}
By composing these, we get a model of GTT where equidynamism is sound
for observational equivalence and an operational semantics that
satisfies the graduality theorem.
\subsection{Call-by-push-value}
\label{sec:cbpvstar}
Next, we define the call-by-push-value language CBPV*\ that will be
the target for our contract translations of GTT.
CBPV*\ is the axiomatic version of call-by-push-value \emph{with}
complex values and stacks, while CBPV\ (Section~\ref{sec:complex}) will
designate the operational version of call-by-push-value with only
operational values and stacks.
CBPV*\ is almost a subset of GTT obtained as follows: We remove the
casts and the dynamic types ${?}, \u {\text{?`}}$ (the shaded pieces) from the
syntax and typing rules in Figure~\ref{fig:gtt-syntax-and-terms}. There
is no type dynamism, and the inequational theory of CBPV* is the
homogeneous fragment of term dynamism in
Figure~\ref{fig:gtt-term-dynamism-structural}\iflong\ and Figure~\ref{fig:gtt-term-dynamism-ext-congruence}\fi\ (judgements $\Gamma \vdash
E \sqsubseteq E' : T$ where $\Gamma \vdash E,E' : T$, with all the same rules
in that figure thus restricted). The inequational axioms are the
Type Universal Properties ($\beta\eta$ rules)
and Error Properties (with \textsc{ErrBot} made homogeneous) from
Figure~\ref{fig:gtt-term-dyn-axioms}.
To implement the casts and dynamic types, we \emph{add} general
\emph{recursive} value types ($\mu X.A$, the fixed point of $X \,\,\text{val type}
\vdash A \,\,\text{val type}$) and \emph{corecursive} computation types ($\nu \u Y.\u
B$, the fixed point of $\u Y \,\,\text{comp type} \vdash \u B \,\,\text{comp type}$).
The recursive type $\mu X.A$ is a value type with constructor
$\texttt{roll}$, whose eliminator is pattern matching, whereas the
corecursive type $\nu \u Y.\u B$ is a computation type defined by its
eliminator (\texttt{unroll}), with an introduction form that we also write
as \texttt{roll}.
We extend the inequational theory with monotonicity of each term constructor of
the recursive types, and with their $\beta\eta$ rules.
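As an informal illustration of \texttt{roll} and its eliminator (ours; Python is untyped, so this models only the term-level behavior), consider the recursive type $\mu X.\, 1 + X$ of natural numbers:

```python
# Sketch (ours): roll wraps a value of type A[mu X. A / X]; the
# eliminator pattern-matches on the rolled payload. Here mu X. 1 + X
# gives natural numbers, with None standing for the left injection.
class Roll:
    def __init__(self, unrolled):
        self.unrolled = unrolled

zero = Roll(None)          # roll (inl ())

def suc(n):
    return Roll(n)         # roll (inr n)

def to_int(n):
    # eliminate by case analysis on the rolled value (the mu-E rule)
    return 0 if n.unrolled is None else 1 + to_int(n.unrolled)
```

Dually, a corecursive type $\nu \u Y. \u B$ is defined by its \texttt{unroll}; in a lazy setting the same wrapper plays both roles.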
\begin{shortonly}
The rules for recursive types are in the extended version.
\end{shortonly}
\begin{longonly}
In the following figure, we write $\mathrel{\bf +::=}$ and $\mathrel{\bf -::=}$ to indicate
the diff from the grammar in Figure~\ref{fig:gtt-syntax-and-terms}.
\begin{figure}[h]
\begin{small}
\[
\begin{array}{lrcl}
\text{Value Types} & A & \mathrel{\bf +::=} & \mu X. A \bnfalt X\\
& & \mathrel{\bf -::=} & {?} \\
\text{Computation Types} & \u B & \mathrel{\bf +::=} & \nu \u Y. \u B \bnfalt \u Y\\
& & \mathrel{\bf -::=} & \u {\text{?`}}\\
\text{Values} & V & \mathrel{\bf +::=} & \rollty{\mu X.A} V\\
& & \mathrel{\bf -::=} & \upcast{A}{A} V\\
\text{Terms} & M & \mathrel{\bf +::=} & \rollty{\nu \u Y. \u B} M \bnfalt \kw{unroll} M\\
& M & \mathrel{\bf -::=} & \dncast{\u B}{\u B}M\\
\text{Both} & E & \mathrel{\bf +::=} & \pmmuXtoYinZ V x E
\end{array}
\]
\begin{mathpar}
\inferrule*[right=$\mu$I]
{\Gamma \vdash V : A[\mu X. A/X]}
{\Gamma \vdash \rollty{\mu X. A} V : \mu X.A}
\qquad
\inferrule*[right=$\mu$E]
{ \Gamma \vdash V : \mu X. A \\\\
\Gamma, x : A[\mu X.A/X] \,\,|\,\, \Delta \vdash E : T
}
{\Gamma\,\,|\,\,\Delta \vdash \pmmuXtoYinZ V x E : T}
\inferrule*[right=$\nu$I]
{\Gamma \mid \Delta \vdash M : \u B[\nu \u Y. \u B/\u Y]}
{\Gamma \mid \Delta \vdash \rollty{\nu \u Y. \u B} M : \nu \u Y. \u B}\\
\qquad
\inferrule*[right=$\nu$E]
{\Gamma \mid \Delta \vdash M : \nu \u Y. \u B}
{\Gamma \mid \Delta \vdash \kw{unroll} M : \u B[\nu \u Y. \u B/\u Y]}
\inferrule*[right=$\mu$ICong]
{\Gamma \vdash V \sqsubseteq V' : A[\mu X. A/X]}
{\Gamma \vdash \kw{roll} V \sqsubseteq \kw{roll} V' : \mu X. A}
\inferrule*[right=$\mu$ECong]
{\Gamma \vdash V \sqsubseteq V' : \mu X.A\and
\Gamma, x : A[\mu X. A/X] \,\,|\,\, \Delta \vdash E \sqsubseteq E' : T}
{\Gamma \,\,|\,\, \Delta \vdash \pmmuXtoYinZ V x E \sqsubseteq\pmmuXtoYinZ {V'} x {E'} : T}
\inferrule*[right=$\nu$ICong]
{\Gamma\,\,|\,\, \Delta \vdash M \sqsubseteq M' : \u B[\nu \u Y. \u B/\u Y]}
{\Gamma\,\,|\,\, \Delta \vdash \kw{roll} M \sqsubseteq \kw{roll} M' : \nu \u Y. \u B}
\inferrule*[right=$\nu$ECong]
{\Gamma\,\,|\,\, \Delta \vdash M \sqsubseteq M' : \nu \u Y. \u B}
{\Gamma\,\,|\,\, \Delta \vdash \kw{unroll} M \sqsubseteq \kw{unroll} M' : \u B[\nu \u Y. \u B/\u Y]}\\
\framebox{Recursive Type Axioms}
\medskip
\end{mathpar}
\begin{tabular}{c|c|c}
Type & $\beta$ & $\eta$\\
\hline
$\mu$
&
${\pmmuXtoYinZ{\kw{roll} V}{x}{E} \mathrel{\gtdyn\ltdyn} E[V/x]}$
&
$\begin{array}{l}
E \mathrel{\gtdyn\ltdyn} \pmmuXtoYinZ x {y} E[\kw{roll} y/x] \\
\text{where } {x : \mu X. A \vdash E : T}
\end{array}$\\
\hline
$\nu$
&
${\kw{unroll}\kw{roll} M \mathrel{\gtdyn\ltdyn} M}$
&
${\bullet : \nu \u Y. \u B \vdash \bullet \mathrel{\gtdyn\ltdyn} \kw{roll}\kw{unroll} \bullet : \nu \u Y. \u B}$\\
\end{tabular}
\end{small}
\caption{CBPV*\ types, terms, recursive types (diff from GTT),
full rules in the extended version}
\label{fig:cbpv-star}
\end{figure}
\end{longonly}
\subsection{Interpreting the Dynamic Types}
\label{sec:dynamic-type-interp}
As shown in Theorems~\ref{thm:decomposition},
\ref{thm:functorial-casts}, \ref{thm:monadic-comonadic-casts}, almost
all of the contract translation is uniquely determined already.
However, the interpretation of the dynamic types and the casts between
the dynamic types and ground types $G$ and $\u G$ are not determined
(they were still postulated in Lemma~\ref{lem:casts-admissible}).
For this reason, our translation is \emph{parameterized} by an
interpretation of the dynamic types and the ground casts.
By Theorems~\ref{thm:cast-adjunction}, \ref{thm:retract-general}, we know
that these must be \emph{embedding-projection pairs} (ep pairs), which
we now define in CBPV*.
\begin{longonly}
There are two kinds of ep pairs we consider: those between value types
(where the embedding models an upcast) and those between computation
types (where the projection models a downcast).
\end{longonly}
\begin{definition}[Value and Computation Embedding-Projection Pairs] ~~ \label{def:cbpvstar-eppairs}
\begin{enumerate}
\item
A \emph{value ep pair} from $A$ to $A'$ consists of
an \emph{embedding} value $x:A\vdash V_e : A'$
and \emph{projection} stack $\bullet : \u F A' \vdash S_p : \u F A$,
satisfying the \emph{retraction} and \emph{projection} properties:
\[
x : A \vdash \kw{ret} x \mathrel{\gtdyn\ltdyn} S_p[\kw{ret} V_e] : \u F A
\qquad
\bullet : \u F A' \vdash \bindXtoYinZ {S_p} x \kw{ret} V_e \sqsubseteq \bullet : \u F A'
\]
\item
A \emph{computation ep pair} from $\u B$ to $\u B'$ consists of
an \emph{embedding} value $z : U \u B \vdash V_e : U \u B'$
and a \emph{projection} stack $\bullet : \u B' \vdash S_p : \u B$
satisfying \emph{retraction} and \emph{projection} properties:
\[
z : U \u B \vdash \kw{force} z \mathrel{\gtdyn\ltdyn} S_p[\kw{force} V_e] : \u B
\qquad
w : U \u B' \vdash V_e[\kw{thunk} {S_p[\kw{force} w]}] \sqsubseteq w : U \u B'
\]
\end{enumerate}
\end{definition}
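To make the ep pair conditions concrete, here is a minimal Python sketch (an informal model of ours, not the paper's calculus: the effect type $\u F A$ is encoded as either a tagged return or an error token, and all names are illustrative):

```python
# Illustrative encoding (ours, not GTT syntax): F A is ("ret", a) or ("err",).
ERROR = ("err",)

def ep_pair(embed, project):
    """Package an embedding A -> A' with a projection A' -> F A."""
    return {"e": embed, "p": project}

# a value ep pair from int into a tag-based "dynamic" type
int_dyn = ep_pair(
    lambda n: ("int", n),                                  # upcast: total
    lambda d: ("ret", d[1]) if d[0] == "int" else ERROR,   # downcast: partial
)

# retraction: projecting an embedded value returns it exactly
assert int_dyn["p"](int_dyn["e"](7)) == ("ret", 7)
# projection: a value carrying a different tag is sent to the error
assert int_dyn["p"](("bool", True)) == ERROR
```

The retraction property says no information is lost going up and back down, while the projection property says the downcast claims no more than the embedding provided.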
\begin{longonly}
While this formulation is convenient in that both kinds of ep
pairs consist of a value and a stack, the projection properties
often occur more naturally in the following forms:
\begin{lemma}[Alternative Projection]
If $(V_e,S_p)$ is a value ep pair from $A$ to $A'$ and $\Gamma,
y:A'\,\,|\,\,\Delta \vdash M : \u B$, then
\[ \Gamma , x' : A' \vdash \bindXtoYinZ {S_p[\kw{ret} x']} x M[V_e/y] \sqsubseteq M[x'/y] \]
Similarly, if $(V_e,S_p)$ is a computation ep pair from $\u B$ to
$\u B'$, and $\Gamma \vdash M : \u B'$, then
\[ \Gamma \vdash V_e[\kw{thunk} S_p[M]] \sqsubseteq \kw{thunk} M : U \u B' \]
\end{lemma}
\begin{longproof}
For the first,
\begin{align*}
\bindXtoYinZ {S_p[\kw{ret} x']} x M[V_e/y]
& \mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {(\bindXtoYinZ {S_p[\kw{ret} x']} x \kw{ret} V_e)} y M\tag{comm conv, $\u F \beta$}\\
&\sqsubseteq \bindXtoYinZ {\kw{ret} x'} y M\tag{projection}\\
&\mathrel{\gtdyn\ltdyn} M[x'/y]\tag{$\u F\beta$}
\end{align*}
For the second,
\begin{align*}
V_e[\kw{thunk} S_p[M]]
&\mathrel{\gtdyn\ltdyn} V_e[\kw{thunk} S_p[\kw{force}\kw{thunk} M]] \tag{$U\beta$}\\
&\sqsubseteq \kw{thunk} M\tag{projection}
\end{align*}
\end{longproof}
\end{longonly}
Using this, and using the notion of ground type from
Section~\ref{sec:upcasts-necessarily-values} \emph{with $0$ and $\top$ removed}, we define
\begin{definition}[Dynamic Type Interpretation]
A ${?},\u {\text{?`}}$ interpretation $\rho$ consists of (1) a
\cbpv\ value type $\rho({?})$, (2) a \cbpv\ computation
type $\rho(\u {\text{?`}})$,
(3)
for each value ground type $G$,
a value ep pair $(x.\rho_{e}(G), \rho_{p}(G))$ from $\srho G$ to
$\rho({?})$, and (4) for each computation ground type $\u G$, a
computation ep pair $(z.\rho_{e}(\u G), \rho_{p}(\u G))$ from
$\srho{\u G}$ to $\rho(\u {\text{?`}})$. We write
$\srho G$ and $\srho {\u G}$ for the interpretation of a ground type,
replacing ${?}$ with $\rho({?})$, $\u {\text{?`}}$ with $\rho(\u {\text{?`}})$, and
compositionally otherwise.
\end{definition}
Next, we show several possible interpretations of the dynamic type
that will all give, by construction, implementations that satisfy the
gradual guarantee.
Our interpretations of the value dynamic type are not surprising.
They are the usual construction of the dynamic type using type tags:
i.e., a recursive sum of basic value types.
On the other hand, our interpretations of the computation dynamic type
are less familiar.
In duality with the interpretation of ${?}$, we interpret $\u {\text{?`}}$ as
a recursive \emph{product} of basic computation types.
This interpretation has some analogues in previous work on the duality
of computation \citep{girard01locussolum,zeilberger09thesis}, but the
most direct interpretation (definition \ref{def:natural-type-interp})
does not correspond to any known work on dynamic/gradual typing.
Then we show that a particular choice of which computation types are
basic and which are derived produces an interpretation of the dynamic
computation type as a type of variable-arity functions whose arguments
are passed on the stack, producing a model similar to Scheme without
accounting for control effects (definition
\ref{def:scheme-like-type-interp}).
\subsubsection{Natural Dynamic Type Interpretation}
Our first dynamic type interpretation is to make the value and
computation dynamic types sums and products of the ground value and
computation types, respectively.
This forms a model of GTT for the following reasons.
For the value dynamic type ${?}$, we need a value embedding (the
upcast) from each ground value type $G$ with a corresponding projection.
The easiest way to do this would be if, for each $G$, we could rewrite
${?}$ as a sum of the values that fit $G$ and those that don't,
i.e., ${?} \cong G + {?}_{-G}$; then the following lemma applies.
\begin{lemma}[Sum Injections are Value Embeddings]\label{lem:injections-are-embeddings}
For any $A, A'$, there are value ep pairs from $A$ and $A'$ to
$A+A'$ where the embeddings are $\kw{inl}$ and $\kw{inr}$.
\end{lemma}
\begin{proof}
Define the embedding of $A$ to just be $x. \kw{inl} x$ and the
projection to be $\bindXtoYinZ \bullet y \caseofXthenYelseZ y {\kw{inl} x. \kw{ret}
x}{\kw{inr} _. \mho}$.
\begin{longonly}
This satisfies retraction (using $\u F(+)$ induction (lemma \ref{lem:f-induction}), $\kw{inr}$ case is the same):
\begin{align*}
\bindXtoYinZ {\kw{ret} \kw{inl} x} y \caseofXthenYelseZ y {\kw{inl} x. \kw{ret} x}{\kw{inr} _. \mho}
&\mathrel{\gtdyn\ltdyn} \caseofXthenYelseZ {\kw{inl} x} {\kw{inl} x. \kw{ret} x}{\kw{inr} _. \mho}\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn} \kw{ret} x\tag{$+\beta$}
\end{align*}
and projection (similarly using $\u F(+)$ induction):
\begin{align*}
x': A+A'
&\vdash \bindXtoYinZ {(\bindXtoYinZ {\kw{ret} x'} y \caseofXthenYelseZ y {\kw{inl} x. \kw{ret} x}{\kw{inr} _. \mho})} x {\kw{ret} \kw{inl} x}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {(\caseofXthenYelseZ {x'} {\kw{inl} x. \kw{ret} x}{\kw{inr} _. \mho})} x {\kw{ret} \kw{inl} x}\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn} {(\caseofXthenYelseZ {x'} {\kw{inl} x. \bindXtoYinZ {\kw{ret} x} x \kw{ret}\kw{inl} x}{\kw{inr} _. \bindXtoYinZ \mho x \kw{ret}\kw{inl} x})}\tag{commuting conversion}\\
&\mathrel{\gtdyn\ltdyn} {(\caseofXthenYelseZ {x'} {\kw{inl} x. \kw{ret}\kw{inl} x}{\kw{inr} _. \mho})}\tag{$\u F\beta,\mho$ strictness}\\
&\sqsubseteq {(\caseofXthenYelseZ {x'} {\kw{inl} x. \kw{ret}\kw{inl} x}{\kw{inr} y. \kw{ret}\kw{inl} y})}\tag{$\mho$ bottom}\\
&\mathrel{\gtdyn\ltdyn} \kw{ret} x' \tag{$+\eta$}
\end{align*}
\end{longonly}
\end{proof}
\begin{longonly}
The proof above relies on the following induction principle for the
returner type:
\begin{lemma}[$\u F(+)$ Induction Principle]
\label{lem:f-induction}
$\Gamma\,\,|\,\, \cdot : \u F (A_1 + A_2) \vdash M_1 \sqsubseteq M_2 : \u B$
holds if and only if
$\Gamma, V_1: A_1 \vdash M_1[\kw{ret} \kw{inl} V_1] \sqsubseteq M_2[\kw{ret} \kw{inl} V_1] : \u B$ and
$\Gamma, V_2: A_2 \vdash M_1[\kw{ret} \kw{inr} V_2] \sqsubseteq M_2[\kw{ret} \kw{inr} V_2] : \u B$.
\end{lemma}
\end{longonly}
This shows why the type tag interpretation works: it makes the dynamic
type in some sense the minimal type with injections from each $G$:
the sum of all value ground types $? \cong \Sigma_{G} G$.
The dynamic computation type $\u {\text{?`}}$ can be defined naturally by a
dual construction, justified by the following dual argument.
First, we want a computation ep pair from $\u G$ to $\u {\text{?`}}$ for each
ground computation type $\u G$.
Specifically, this means we want a stack from $\u {\text{?`}}$ to $\u G$ (the
downcast) with an embedding.
The easiest way to get this is if, for each ground computation type
$\u G$, $\u {\text{?`}}$ is equivalent to a lazy product of $\u G$ and ``the
other behaviors'', i.e., $\u {\text{?`}} \cong \u G \mathbin{\&} \u {\text{?`}}_{-\u G}$.
Then the embedding on $\pi$ performs the embedded computation, but on
$\pi'$ raises a type error.
The following lemma, dual to Lemma~\ref{lem:injections-are-embeddings},
shows that this forms a computation ep pair:
\begin{lemma}[Lazy Product Projections are Computation Projections]\label{lem:projections-are-projections}
For any $\u B, \u B'$, there are computation ep pairs from $\u B$
and $\u B'$ to $\u B \mathbin{\&} \u B'$ where the projections are $\pi$
and $\pi'$.
\end{lemma}
\begin{proof}
Define the projection for $\u B$ to be $\pi$ and the embedding to be
$z. \kw{thunk}\pair{\kw{force} z}{\mho}$. The ep pair for $\u B'$ is
defined similarly using $\pi'$.
\begin{longonly}
This satisfies retraction:
\begin{align*}
\pi\kw{force}\kw{thunk}\pair{\kw{force} z}{\mho}
&\mathrel{\gtdyn\ltdyn} \pi\pair{\kw{force} z}{\mho}\tag{$U\beta$}\\
&\mathrel{\gtdyn\ltdyn} \kw{force} z\tag{$\mathbin{\&}\beta$}
\end{align*}
and projection:
\begin{align*}
&\kw{thunk}\pair{\kw{force}\kw{thunk}\pi\kw{force} w}{\mho}\\
&\mathrel{\gtdyn\ltdyn} \kw{thunk}\pair{\pi\kw{force} w}{\mho} \tag{$U\beta$}\\
&\sqsubseteq \kw{thunk}\pair{\pi\kw{force} w}{\pi'\kw{force} w}\tag{$\mho$ bottom}\\
&\mathrel{\gtdyn\ltdyn} \kw{thunk}\kw{force} w\tag{$\mathbin{\&}\eta$}\\
&\mathrel{\gtdyn\ltdyn} w \tag{$U\eta$}
\end{align*}
\end{longonly}
\end{proof}
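Dually, here is a Python sketch (our informal encoding: computations are zero-argument functions, the lazy product is a pair of unforced thunks, and a custom exception stands in for $\mho$) of why the embedding into a lazy product satisfies retraction while the error component never runs unless the wrong projection is taken:

```python
class DynTypeError(Exception):
    """Stands in for the error computation (the mho term)."""
    pass

def err():
    raise DynTypeError("wrong projection of an embedded computation")

# computation ep pair from B into the lazy product B & B':
def embed_left(comp):          # z . thunk <force z, error>
    return (comp, err)         # components are thunks, so err never runs here

def project_left(pair):        # the stack pi: take the first component
    return pair[0]

# retraction: projecting the embedding gives back the original computation
assert project_left(embed_left(lambda: 42))() == 42
# the error component is present but only raised if pi' is actually forced
try:
    embed_left(lambda: 42)[1]()
    assert False, "pi' of the embedding should error"
except DynTypeError:
    pass
```

The laziness of the product is essential: embedding a computation must not trigger the error branch, only eliminating it at the wrong projection does.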
From this, we see that the easiest way to construct an interpretation
of the dynamic computation type is to make it a lazy product of all
the ground types $\u G$: $\u {\text{?`}} \cong \With_{\u G} \u G$.
Using recursive types, we can easily make this a definition of the
interpretations:
\begin{definition}[Natural Dynamic Type Interpretation]
\label{def:natural-type-interp}
The following defines a dynamic type interpretation.
%
We define the types to satisfy the isomorphisms
\[
{?} \cong 1 + ({?} \times {?}) + ({?} + {?}) + U\u {\text{?`}} \qquad
\u {\text{?`}} \cong (\u {\text{?`}} \mathbin{\&} \u {\text{?`}}) \mathbin{\&} ({?} \to \u {\text{?`}}) \mathbin{\&} \u F {?}
\]
with the ep pairs defined as in
Lemma~\ref{lem:injections-are-embeddings} and
\ref{lem:projections-are-projections}.
\end{definition}
\begin{longproof}
We can construct ${?}, \u {\text{?`}}$ explicitly using recursive and
corecursive types. Specifically, we make the recursion explicit by
defining open versions of the types:
\begin{align*}
X,\u Y \vdash {?}_o &= 1 + (X \times X) + (X + X) + U\u Y\,\,\text{val type} \\
X,\u Y \vdash \u {\text{?`}}_o &= (\u Y \mathbin{\&} \u Y) \mathbin{\&} (X \to \u Y) \mathbin{\&} \u F X \,\,\text{comp type}
\end{align*}
Then we define the types ${?}, \u {\text{?`}}$ using a standard encoding:
\begin{align*}
{?} &= \mu X. {?}_o[\nu \u Y. \u {\text{?`}}_o/\u Y]\\
\u {\text{?`}} &= \nu \u Y. \u {\text{?`}}_o[\mu X. {?}_o/X]
\end{align*}
Then clearly by the roll/unroll isomorphism we get the desired
isomorphisms:
\begin{align*}
{?} &\cong {?}_o[\u {\text{?`}}/\u Y,{?}/X] = 1 + ({?} \times {?}) + ({?} + {?}) + U\u {\text{?`}} \\
\u {\text{?`}} &\cong\u {\text{?`}}_o[{?}/X,\u {\text{?`}}/\u Y] = (\u {\text{?`}} \mathbin{\&} \u {\text{?`}}) \mathbin{\&} ({?} \to \u {\text{?`}}) \mathbin{\&} \u F {?}
\end{align*}
\end{longproof}
This dynamic type interpretation is a natural fit for CBPV because the
introduction forms for ${?}$ are exactly the introduction forms for
all of the value types (unit, pairing, $\texttt{inl}$, $\texttt{inr}$, $\texttt{thunk}$), while
elimination forms are all of the elimination forms for computation types
($\pi$, $\pi'$, application and binding); such ``bityped'' languages
are related to \citet{girard01locussolum,zeilberger09thesis}.
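To make this bityped reading concrete, the following Python sketch (an informal model of ours; the tags, field names, and error function are assumptions, not GTT syntax) represents a dynamic value by its introduction form and a dynamic computation as a record offering every elimination form at once:

```python
def err():
    raise RuntimeError("dynamic type error")

# Dynamic values: one tag per introduction form of the value types.
def d_unit():      return ("unit",)
def d_pair(v, w):  return ("pair", v, w)
def d_inl(v):      return ("inl", v)
def d_inr(v):      return ("inr", v)
def d_thunk(c):    return ("thunk", c)

# A dynamic computation offers every elimination form; unused components
# are functions that are simply never called, mimicking the lazy product.
def codyn(fst, snd, app, ret):
    return {"pi": fst, "pi'": snd, "app": app, "ret": ret}

# The embedding of a returner into the dynamic computation type:
# it answers `ret` and errors on every other elimination.
def embed_returner(value):
    return codyn(err, err, lambda _arg: err(), lambda: ("ret", value))

m = embed_returner(d_pair(d_unit(), d_inl(d_unit())))
assert m["ret"]() == ("ret", ("pair", ("unit",), ("inl", ("unit",))))
```

Under this reading, the upcast from a ground type adds the appropriate tag or behavior, and the downcast checks it, erroring on mismatch.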
\begin{shortonly}
In the extended version, we give an extension of GTT axiomatizing this
implementation of the dynamic types.
\end{shortonly}
\begin{longonly}
Based on this dynamic type interpretation, we can extend GTT to support
a truly dynamically typed style of programming, where one can perform
case-analysis on the dynamic types at runtime, in addition to the type
assertions provided by upcasts and downcasts.
\begin{figure}
\begin{small}
\begin{mathpar}
\inferrule*[right=${?}$E]
{\Gamma\,\,|\,\, \Delta \vdash V : {?}\\
\Gamma,x_1 : 1\,\,|\,\, \Delta \vdash E_1 : T\\
\Gamma,x_\times : {?}\times{?}\,\,|\,\, \Delta \vdash E_\times : T\\
\Gamma,x_+ : {?}+{?}\,\,|\,\, \Delta \vdash E_+ : T\\
\Gamma,x_U : U\u {\text{?`}} \,\,|\,\, \Delta \vdash E_U : T\\
}
{\Gamma\,\,|\,\, \Delta \vdash \dyncaseofXthenOnePairSumU {V} {x_{1}. E_1}{x_{\times}. E_{\times}}{x_{+}. E_{+}}{x_{U}. E_U} : T}\and
\dyncaseofXthenOnePairSumU {(\upcast{G}{{?}}V)} {x_{1}. E_1}{x_{\times}. E_{\times}}{x_{+}. E_{+}}{x_{U}. E_U} \mathrel{\gtdyn\ltdyn} E_{G}[V/x_G]\qquad({?}\beta)\and
\inferrule*[right=${?}\eta$]
{\Gamma , x : {?} \,\,|\,\, \Delta \vdash E : \u B}
{E \mathrel{\gtdyn\ltdyn} \dyncaseofXthenOnePairSumU x
{x_1. E[\upcast{1}{{?}}x_1/x]}
{x_{\times}. E[\upcast{{\times}}{{?}}x_{\times}/x]}
{x_+. E[\upcast{+}{{?}}x_+/x]}
{x_U. E[\upcast{U}{{?}}x_U/x]}}\and
\inferrule*[right=$\u {\text{?`}}$I]
{\Gamma \,\,|\,\, \Delta \vdash M_{\to} : {?} \to \u {\text{?`}}\\
\Gamma \,\,|\,\, \Delta \vdash M_{\mathbin{\&}} : \u {\text{?`}} \mathbin{\&} \u {\text{?`}}\\
\Gamma \,\,|\,\, \Delta \vdash M_{\u F} : \u F {?}}
{\Gamma \,\,|\,\, \Delta \vdash \dyncocaseWithFunF{M_{\mathbin{\&}}}{M_{\to}}{M_{\u F}} : \u {\text{?`}}}\and
\dncast{\u G}{\u {\text{?`}}}\dyncocaseWithFunF{M_{\mathbin{\&}}}{M_{\to}}{M_{\u F}} \mathrel{\gtdyn\ltdyn} M_{\u G}\quad(\u {\text{?`}}\beta)\and
{\bullet : \u {\text{?`}} \vdash \bullet
\mathrel{\gtdyn\ltdyn}
\dyncocaseWithFunF
{\dncast{\u {\text{?`}}\mathbin{\&}\u {\text{?`}}}{\u {\text{?`}}}\bullet}
{\dncast{{?}\to\u {\text{?`}}}{\u {\text{?`}}}\bullet}
{\dncast{\u F{?}}{\u {\text{?`}}}\bullet}}\quad(\u {\text{?`}}\eta)
\end{mathpar}
\end{small}
\caption{Natural Dynamic Type Extension of GTT}
\end{figure}
The axioms we choose might seem to under-specify the dynamic type, but
because of the uniqueness of adjoints, the following are derivable.
\begin{lemma}[Natural Dynamic Type Extension Theorems]
The following are derivable in GTT with the natural dynamic type extension
\begin{mathpar}
{\dncast{\u F 1}{\u F {?}}\kw{ret} V \mathrel{\gtdyn\ltdyn} \dyncaseofXthenYelseZ V {x_1. \kw{ret} x_1}{\kw {else} \mho}}\\
{\dncast{\u F({?}\times{?})}{\u F {?}}\kw{ret} V \mathrel{\gtdyn\ltdyn} \dyncaseofXthenYelseZ V {x_\times. \kw{ret} x_\times}{\kw {else} \mho}}\\
{\dncast{\u F({?} + {?})}{\u F {?}}\kw{ret} V \mathrel{\gtdyn\ltdyn} \dyncaseofXthenYelseZ V {x_+. \kw{ret} x_+}{\kw {else} \mho}}\\
{\dncast{\u F U\u {\text{?`}}}{\u F{?}}\kw{ret} V \mathrel{\gtdyn\ltdyn} \dyncaseofXthenYelseZ V {x_U. \kw{ret} x_U}{\kw {else} \mho}}\\
\kw{force}\upcast{U(\u {\text{?`}}\mathbin{\&}\u {\text{?`}})}{U\u {\text{?`}}}V \mathrel{\gtdyn\ltdyn} \dyncocaseWithFunF{\kw{force} V}{\mho}{\mho}\\
\kw{force}\upcast{U({?} \to \u {\text{?`}})}{U\u {\text{?`}}}V \mathrel{\gtdyn\ltdyn} \dyncocaseWithFunF{\mho}{\kw{force} V}{\mho}\\
\kw{force}\upcast{U\u F{?}}{U\u {\text{?`}}}V \mathrel{\gtdyn\ltdyn} \dyncocaseWithFunF{\mho}{\mho}{\kw{force} V}\\
\end{mathpar}
\end{lemma}
We explore this in more detail with the next dynamic type
interpretation.
\end{longonly}
\begin{longonly}
Next, we easily see that if we want to limit GTT to just the CBV types
(i.e. the only computation types are $A \to \u F A'$), then we can
restrict the dynamic types as follows:
\begin{definition}[CBV Dynamic Type Interpretation]
The following is a dynamic type interpretation for the ground types of
GTT with only function computation types:
\[
{?} \cong 1 + ({?} + {?}) + ({?} \times {?}) + U(\u {\text{?`}}) \qquad
\u {\text{?`}} \cong {?} \to \u F {?}
\]
\end{definition}
And finally if we restrict GTT to only CBN types (i.e., the only value
type is booleans $1+1$), we can restrict the dynamic types as follows:
\begin{definition}[CBN Dynamic Type Interpretation]
The following is a dynamic type interpretation for the ground types of
GTT with only boolean value types:
\[
{?} = (1 + 1) \qquad
\u {\text{?`}} \cong (\u {\text{?`}} \mathbin{\&} \u {\text{?`}}) \mathbin{\&} (U\u {\text{?`}} \to \u {\text{?`}})
\mathbin{\&} \u F {?}
\]
\end{definition}
\end{longonly}
\subsubsection{Scheme-like Dynamic Type Interpretation}
The above dynamic type interpretation does not correspond to any
dynamically typed language used in practice, in part because it
includes explicit cases for the ``additives'', the sum type $+$ and
lazy product type $\mathbin{\&}$.
Normally, these are not included in this way, but rather sums are
encoded by making each case use a fresh constructor (using nominal
techniques like opaque structs in Racket) and then making the sum the
union of the constructors, as argued in \citet{siekth16recursiveunion}.
We leave modeling this nominal structure to future work, but in
minimalist languages, such as simple dialects of Scheme and Lisp, sum
types are often encoded \emph{structurally} rather than nominally by
using some fixed sum type of \emph{symbols}, also called \emph{atoms}.
Then a value of a sum type is modeled by a pair of a symbol (to indicate
the case) and a payload with the actual value.
We can model this by using the canonical isomorphisms
\[ {?} + {?} \cong ((1+1) \times {?}) \qquad \u {\text{?`}} \mathbin{\&} \u {\text{?`}} \cong (1+1) \to \u {\text{?`}} \]
and representing sums as pairs, and lazy products as functions.
\begin{longonly}
The fact that isomorphisms are ep pairs is useful for constructing the
ep pairs needed in the dynamic type interpretation.
\begin{lemma}[Isomorphisms are EP Pairs]
\label{lem:isos-are-ep}
If $x:A \vdash V' : A'$ and $x':A' \vdash V : A$ are an isomorphism in
that $V[V'/x'] \mathrel{\gtdyn\ltdyn} x$ and $V'[V/x]\mathrel{\gtdyn\ltdyn} x'$, then $(x.V',
\bindXtoYinZ \bullet {x'} \kw{ret} V)$ is a value ep pair from $A$ to
$A'$. Similarly if $\bullet : \u B \vdash S' : \u B'$ and $\bullet :
\u B' \vdash S : \u B$ are an isomorphism in that $S[S']\equiv
\bullet$ and $S'[S] \equiv \bullet$ then $(z. S'[\kw{force} z], S)$ is an
ep pair from $\u B$ to $\u B'$.
\end{lemma}
\end{longonly}
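A Python sketch (our encoding of sums as tagged tuples, an assumption rather than GTT syntax) witnesses the canonical isomorphism ${?} + {?} \cong (1+1) \times {?}$ used to represent sums as boolean-tagged pairs; because it is an isomorphism, the induced ep pair never errors:

```python
# Illustrative encoding (ours): sums are ("inl", v) / ("inr", v),
# and the boolean-tagged pair records which injection was used.
def sum_to_pair(s):
    tag, v = s
    return (tag == "inl", v)        # the boolean plays the role of 1 + 1

def pair_to_sum(p):
    b, v = p
    return ("inl", v) if b else ("inr", v)

# both composites are the identity, so retraction and projection both
# hold with equality and there is no error case
assert pair_to_sum(sum_to_pair(("inr", 5))) == ("inr", 5)
assert sum_to_pair(pair_to_sum((True, 9))) == (True, 9)
```

The dual isomorphism $\u {\text{?`}} \mathbin{\&} \u {\text{?`}} \cong (1+1) \to \u {\text{?`}}$ would analogously represent a lazy pair as a function on a boolean selector.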
With this in mind, we remove the cases for sums and lazy pairs from the
natural dynamic types, and include some atomic type as a case of
${?}$---for simplicity we will just use booleans.
We also do not need a case for $1$, because we can identify it with one
of the booleans, say $\texttt{true}$.
This leads to the following definition:
\begin{definition}[Scheme-like Dynamic Type Interpretation] \label{def:scheme-like-type-interp}
We can define a dynamic type interpretation with the following type
isomorphisms:
\begin{mathpar}
{?} \cong (1+1) + U\u {\text{?`}} + ({?} \times {?})\and
\u {\text{?`}} \cong ({?} \to \u {\text{?`}}) \mathbin{\&} \u F {?}
\end{mathpar}
\end{definition}
\begin{proof}
\begin{shortonly}
The details of constructing the two mutually recursive types from
our recursive type mechanism are in the extended version.
\end{shortonly}
\begin{longonly}
We construct ${?}, \u {\text{?`}}$ explicitly as follows.
First define $X \,\,\text{val type} \vdash \texttt{Tree}[X] \,\,\text{val type}$ to be the
type of binary trees:
\[ \texttt{Tree} = \mu X'. X + (X' \times X') \]
Next, define $X\,\,\text{val type}, \u Y \,\,\text{comp type} \vdash \texttt{VarArg}[X,\u Y]
\,\,\text{comp type}$ to be the type of variable-arity functions from $X$ to $\u
Y$:
\[ \texttt{VarArg} = \nu \u Y'. \u Y \mathbin{\&} (X \to \u Y') \]
Then we define an open version of ${?}, \u {\text{?`}}$ with respect to a
variable representing the occurrences of ${?}$ in $\u {\text{?`}}$:
\begin{align*}
X \,\,\text{val type} \vdash {?}_o &= \texttt{Tree}[(1+1) + U \u {\text{?`}}_o] \,\,\text{val type}\\
X \,\,\text{val type} \vdash \u {\text{?`}}_o &= \texttt{VarArg}[X][\u F X] \,\,\text{comp type}
\end{align*}
Then we can define the closed versions using a recursive type:
\begin{mathpar}
{?} = \mu X. {?}_o\and \u {\text{?`}} = \u {\text{?`}}_o[{?}]
\end{mathpar}
\end{longonly}
\ The ep pairs for $\times, U,\u F, \to$ are clear. To define the
rest, first note that there is an ep pair from $1+1$ to ${?}$ by
Lemma~\ref{lem:injections-are-embeddings}. Next, the ep pair for $1$ is
the left-injection ep pair into $1+1$ given by
Lemma~\ref{lem:injections-are-embeddings}, composed with this one. The ep
pair for ${?} + {?}$ is defined by composing the isomorphism
(which is always an ep pair)
$({?} + {?}) \cong ((1+1) \times {?})$ with the ep pair for
$1+1$ using the action of product types on ep pairs (proven as part of
Theorem \ref{thm:axiomatic-graduality}): $({?} + {?}) \cong
((1+1)\times {?}) \,\triangleleft\, ({?} \times {?}) \,\triangleleft\,
{?}$ (where we write $A \triangleleft A'$ to mean there is an ep
pair from $A$ to $A'$). Similarly, for $\u {\text{?`}} \mathbin{\&} \u {\text{?`}}$, we use
action of the function type on ep pairs (also proven as part of
Theorem \ref{thm:axiomatic-graduality}): $\u {\text{?`}} \mathbin{\&} \u {\text{?`}} \cong
((1+1) \to \u {\text{?`}}) \,\triangleleft\, ({?} \to \u {\text{?`}}) \,\triangleleft\, \u {\text{?`}}$
\end{proof}
\begin{shortonly}
Intuitively, the above definition of ${?}$ says that it is a binary
tree whose leaves are either booleans or closures---a simple type of
S-expressions. On the other hand, the above definition of $\u {\text{?`}}$
models a \emph{variable-arity function} (as in Scheme), which is
called with any number of dynamically typed value arguments ${?}$
and returns a dynamically typed result $\u F {?}$. To see why a
$\u {\text{?`}}$ can be called with any number of arguments, observe that its
infinite unrolling is $\u F {?} \mathbin{\&} ({?} \to \u F {?}) \mathbin{\&}
({?} \to {?} \to \u F {?}) \mathbin{\&} \ldots$. This type is
isomorphic to a function that takes a list of ${?}$ as input ($(\mu
X. 1 + ({?} \times X)) \to \u F {?}$), but operationally $\u {\text{?`}}$
is a more faithful model of Scheme implementations, because all of the
arguments are passed individually on the stack, not as a
heap-allocated single list argument. These two are distinguished in
Scheme and the ``dot args'' notation witnesses the isomorphism.
\end{shortonly}
\begin{longonly}
If we factor out some of the recursion to use inductive and
coinductive types, we get the following isomorphisms:
\begin{mathpar}
{?} \cong \texttt{Tree}[(1+1) + U\u {\text{?`}}]\and
\u {\text{?`}} \cong \texttt{VarArg}[{?}][\u F {?}]
\end{mathpar}
That is, a dynamically typed value is a binary tree whose leaves are
either booleans or closures.
We think of this as a simple type of S-expressions.
A dynamically typed computation is a variable-arity function that is
called with some number of dynamically typed value arguments ${?}$
and returns a dynamically typed result $\u F {?}$.
This captures precisely the function type of Scheme, which allows for
variable arity functions!
What is less clear is \emph{why} the type
\[
\texttt{VarArg}[X][\u Y] = \nu \u Y'. (X \to \u Y') \mathbin{\&} \u Y
\]
should be thought of as a type of variable-arity functions.
First consider the infinite unrolling of this type:
\[
\texttt{VarArg}[X][\u Y] \simeq \u Y \mathbin{\&} (X \to \u Y) \mathbin{\&} (X \to X \to \u Y) \mathbin{\&} \cdots
\]
This says that a term of type $\texttt{VarArg}[X][\u Y]$ offers an
infinite number of possible behaviors: it can act as a function from
$X^n \to \u Y$ for any $n$.
Similarly in Scheme, a function can be called with any number of
arguments.
Finally note that this type is isomorphic to a function that takes a
\emph{cons-list} of arguments:
\begin{align*}
&\u Y \mathbin{\&} (X \to \u Y) \mathbin{\&} (X \to X \to \u Y) \mathbin{\&} \cdots\\
&\cong(1 \to \u Y) \mathbin{\&} ((X \times 1) \to \u Y) \mathbin{\&} ((X \times X \times 1) \to \u Y) \mathbin{\&} \cdots\\
&\cong(1 + (X \times 1) + (X \times X \times 1) + \cdots) \to \u Y\\
&\cong(\mu X'. 1 + (X\times X')) \to \u Y
\end{align*}
But operationally the type $\texttt{VarArg}[{?}][\u F{?}]$ is a
more faithful model of Scheme implementations because all of the
arguments are passed individually on the stack, whereas the type $(\mu
X. 1 + ({?} \times X)) \to \u F {?}$ is a function that takes a single
argument that is a list.
These two are distinguished in Scheme and the ``dot args'' notation
witnesses the isomorphism.
\end{longonly}
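The variable-arity reading can be seen operationally in a short Python sketch (our encoding, with assumed field names: the corecursive type is unrolled on demand as a record offering both a "take one more argument" component and a "return now" component):

```python
# Illustrative encoding (ours) of VarArg: at every stage the computation
# offers both components of (X -> VarArg) & F X, unrolled lazily.
def var_arg(fn):
    """View a function on argument lists as a variable-arity computation:
    'arg' feeds one more argument, 'ret' runs fn on all collected ones."""
    def node(collected):
        return {
            "arg": lambda v: node(collected + [v]),   # the X -> ... component
            "ret": lambda: fn(collected),             # the F X component
        }
    return node([])

add_all = var_arg(sum)                      # callable at any arity
assert add_all["arg"](1)["arg"](2)["arg"](3)["ret"]() == 6
assert add_all["ret"]() == 0                # zero arguments also allowed
```

Note that `var_arg` itself is the isomorphism with a function on cons-lists described above, but the arguments are fed one at a time, matching the stack-passing implementation rather than a single heap-allocated list.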
Based on this dynamic type interpretation we can make a ``Scheme-like''
extension to GTT in Figure~\ref{fig:scheme}.
First, we add a boolean type $\mathbb{B}$ with $\texttt{true}$, $\texttt{false}$ and
if-then-else.
Next, we add in the elimination form for ${?}$ and the introduction
form for $\u {\text{?`}}$.
The elimination form for ${?}$ is a typed version of Scheme's
\emph{match} macro.
The introduction form for $\u {\text{?`}}$ is a typed, CBPV version of Scheme's
\emph{case-lambda} construct.
Finally, we add type dynamism rules expressing the representations of
$1$, $A + A$, and $A \times A$ in terms of booleans that were explicit
in the ep pairs used in Definition~\ref{def:scheme-like-type-interp}.
\begin{shortonly}
In the extended version of the paper, we include the appropriate term
dynamism axioms, which are straightforward syntactifications of the
properties of the dynamic type interpretation, and prove a unique
implementation theorem for the new casts.
\end{shortonly}
\begin{figure}
\begin{small}
\begin{mathpar}
1 \sqsubseteq \mathbb{B}\and
A + A \mathrel{\gtdyn\ltdyn} \mathbb{B} \times A\and
\u B \mathbin{\&} \u B \mathrel{\gtdyn\ltdyn} \mathbb{B} \to \u B
\begin{longonly}
\\
\inferrule*[right=$\mathbb{B}$I]
{ }
{\Gamma \vdash \texttt{true}, \texttt{false} : \mathbb{B}}
\inferrule*[right=$\mathbb{B}$E]
{\Gamma \vdash V : \mathbb{B}\\
\Gamma \,\,|\,\, \Delta \vdash E_t : T\\
\Gamma \,\,|\,\, \Delta \vdash E_f : T}
{\Gamma \,\,|\,\, \Delta \vdash \ifXthenYelseZ V {E_t} {E_f} : T}
\\
\ifXthenYelseZ \texttt{true} {E_t} {E_f} \mathrel{\gtdyn\ltdyn} E_t\and
\ifXthenYelseZ \texttt{false} {E_t} {E_f} \mathrel{\gtdyn\ltdyn} E_f\\
x : \mathbb{B} \vdash E \mathrel{\gtdyn\ltdyn} \ifXthenYelseZ x {E[\texttt{true}/x]} {E[\texttt{false}/x]}\\
\upcast{1}{\mathbb{B}}V \mathrel{\gtdyn\ltdyn} \texttt{true}\and
\upcast{A+A}{\mathbb{B} \times A}\kw{inl} V \mathrel{\gtdyn\ltdyn} (\texttt{true}, V)\and
\upcast{A+A}{\mathbb{B} \times A}\kw{inr} V \mathrel{\gtdyn\ltdyn} (\texttt{false}, V)\\
\pi\dncast{\u B\mathbin{\&}\u B}{\mathbb{B} \to \u B}M \mathrel{\gtdyn\ltdyn} M\,\texttt{true}\and
\pi'\dncast{\u B\mathbin{\&}\u B}{\mathbb{B} \to \u B}M \mathrel{\gtdyn\ltdyn} M\,\texttt{false}\\
\end{longonly}
\inferrule*[right=$\u {\text{?`}}$I]
{\Gamma \,\,|\,\, \Delta \vdash M_{\to} : {?} \to \u {\text{?`}}\\
\Gamma \,\,|\,\, \Delta \vdash M_{\u F} : \u F {?}}
{\Gamma \,\,|\,\, \Delta \vdash \dyncocaseFunF{M_{\to}}{M_{\u F}} : \u {\text{?`}}}\\
\begin{longonly}
\dncast{\u G}{\u {\text{?`}}}\dyncocaseFunF{M_{\to}}{M_{\u F}} \mathrel{\gtdyn\ltdyn} M_{\u G}\quad(\u {\text{?`}}\beta)\and
{\bullet : \u {\text{?`}} \vdash \bullet
\mathrel{\gtdyn\ltdyn}
\dyncocaseFunF
{\dncast{{?}\to\u {\text{?`}}}{\u {\text{?`}}}\bullet}
{\dncast{\u F{?}}{\u {\text{?`}}}\bullet}}\quad(\u {\text{?`}}\eta)\\
\end{longonly}
\inferrule*[right=${?}$E]
{\Gamma\,\,|\,\, \Delta \vdash V : {?} \and
\Gamma, x_{\mathbb{B}}:\mathbb{B}\,\,|\,\, \Delta \vdash E_\mathbb{B} : T\and
\Gamma,x_U : U\u {\text{?`}} \,\,|\,\, \Delta \vdash E_U : T\and
\Gamma,x_\times : {?}\times{?}\,\,|\,\, \Delta \vdash E_\times : T\and
}
{\Gamma\,\,|\,\, \Delta \vdash \dyncaseofXthenBoolUPair {V} {x_{\mathbb{B}}. E_{\mathbb{B}}}{x_{U}. E_U}{x_{\times}. E_{\times}} : T}\\
\begin{longonly}
\inferrule
{G \in \{ \mathbb{B}, \times, U\}}
{\dyncaseofXthenBoolUPair {(\upcast{G}{{?}}V)} {x_{\mathbb{B}}. E_{\mathbb{B}}}{x_{U}. E_U}{x_{\times}. E_{\times}} \mathrel{\gtdyn\ltdyn} E_{G}[V/x_G]} \qquad({?}\beta)\\
\inferrule*[right=${?}\eta$]
{\Gamma , x : {?} \,\,|\,\, \Delta \vdash E : \u B}
{E \mathrel{\gtdyn\ltdyn} \dyncaseofXthenBoolUPair x
{x_\mathbb{B}. E[\upcast{\mathbb{B}}{{?}}x_\mathbb{B}/x]}
{x_{\times}. E[\upcast{{\times}}{{?}}x_{\times}/x]}
{x_U. E[\upcast{U}{{?}}x_U/x]}}\\
\end{longonly}
\end{mathpar}
\end{small}
\vspace{-0.4in}
\caption{Scheme-like Extension to GTT}
\label{fig:scheme}
\end{figure}
\begin{longonly}
The reader may be surprised by how \emph{few} axioms we need to add
to GTT for this extension: for instance we only define the upcast
from $1$ to $\mathbb{B}$ and not vice-versa, and similarly the sum/lazy
pair type isomorphisms only have one cast defined when a priori
there are $4$ to be defined.
%
Finally for the dynamic types we define $\beta$ and $\eta$ laws
that use the ground casts as injections and projections
respectively, but we don't define the corresponding dual casts (the
ones that possibly error).
In fact all of these expected axioms can be \emph{proven} from those
we have shown.
%
Again we see the surprising rigidity of GTT: because an $\u F$
downcast is determined by its dual value upcast (and vice-versa for
$U$ upcasts), we only need to define the upcast as long as the
downcast \emph{could} be implemented already.
%
Because we give the dynamic types the universal property of a
sum/lazy product type respectively, we can derive the
implementations of the ``checking'' casts.
%
All of the proofs are direct from the uniqueness of adjoints
lemma.
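As a rough operational intuition (and not part of the formal development), the uniqueness argument can be modeled concretely: if the upcast into the dynamic type is a tag injection, the only candidate downcast satisfying the ep-pair laws is the tag check that errors on mismatch. The following Python sketch is our own illustration under that assumption; `upcast`, `downcast`, and `CastError` are hypothetical names, not part of GTT or CBPV*.

```python
# Hypothetical model, not the paper's semantics: dynamic-type values are
# (tag, payload) pairs, and CastError stands in for the error term.
class CastError(Exception):
    pass

def upcast(tag):
    # Upcast from ground type `tag`: a pure injection into the dynamic type.
    return lambda v: (tag, v)

def downcast(tag):
    # The derived "checking" cast: project on a matching tag, error otherwise.
    def dn(d):
        t, v = d
        if t != tag:
            raise CastError(f"expected {tag}, got {t}")
        return v
    return dn

up_bool, dn_bool = upcast("bool"), downcast("bool")
assert dn_bool(up_bool(True)) is True       # retraction: dn . up = id
try:
    dn_bool(("pair", (1, 2)))               # a ground mismatch errors
except CastError:
    pass
```

The retraction assertion corresponds to the ep-pair retraction law, and the erroring branch to the Ground Mismatches are Errors theorem below, in spirit only.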
\begin{theorem}[Boolean to Unit Downcast]
In Scheme-like GTT, we can prove
\[
\dncast{\u F1}{\u F\mathbb{B}}\bullet
\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ \bullet x \ifXthenYelseZ x {\kw{ret}()}{\mho}
\]
\end{theorem}
\begin{theorem}[Tagged Value to Sum]
In Scheme-like GTT, we can prove
\[
\upcast{\mathbb{B} \times A}{A+A}V \mathrel{\gtdyn\ltdyn} \pmpairWtoXYinZ V {x}{y} \ifXthenYelseZ x {\kw{inl} y}{\kw{inr} y}
\]
and the downcasts are given by Lemma~\ref{lem:isos-are-ep}.
\end{theorem}
\begin{theorem}[Lazy Product to Tag Checking Function]
In Scheme-like GTT, we can prove
\[
\dncast{\mathbb{B}\to \u B}{\u B\mathbin{\&}\u B}\bullet
\mathrel{\gtdyn\ltdyn}
\lambda x:\mathbb{B}. \ifXthenYelseZ x {\pi \bullet}{\pi' \bullet}
\]
and the upcasts are given by Lemma~\ref{lem:isos-are-ep}.
\end{theorem}
\begin{theorem}[Ground Mismatches are Errors]
In Scheme-like GTT, we can prove
\begin{mathpar}
{\dncast{\u F \mathbb{B}}{\u F {?}}\kw{ret} V \mathrel{\gtdyn\ltdyn} \dyncaseofXthenYelseZ V {x_\mathbb{B}. \kw{ret} x_\mathbb{B}}{\kw {else} \mho}}\\
{\dncast{\u F({?}\times{?})}{\u F {?}}\kw{ret} V \mathrel{\gtdyn\ltdyn} \dyncaseofXthenYelseZ V {x_\times. \kw{ret} x_\times}{\kw {else} \mho}}\\
{\dncast{\u F U\u {\text{?`}}}{\u F{?}}\kw{ret} V \mathrel{\gtdyn\ltdyn} \dyncaseofXthenYelseZ V {x_U. \kw{ret} x_U}{\kw {else} \mho}}\\
\kw{force}\upcast{U({?} \to \u {\text{?`}})}{U\u {\text{?`}}}V \mathrel{\gtdyn\ltdyn} \dyncocaseFunF{\kw{force} V}{\mho}\\
\kw{force}\upcast{U\u F{?}}{U\u {\text{?`}}}V \mathrel{\gtdyn\ltdyn} \dyncocaseFunF{\mho}{\kw{force} V}\\
\end{mathpar}
\end{theorem}
Finally, we note that all of these axioms are satisfied when
using the Scheme-like dynamic type interpretation and extending the
translation of GTT into CBPV*\ with the following, tediously
explicit definition:
\begin{align*}
&\sem{\mathbb{B}} = 1+1\\
&\sem{\texttt{true}} =\kw{inl}()\\
&\sem{\texttt{false}} =\kw{inr}()\\
&\sem{\ifXthenYelseZ V {E_t} {E_f}} = \caseofXthenYelseZ {\sem V}{x. \sem{E_t}}{x.\sem{E_f}}\\
&\sem{\dyncaseofXthenBoolUPair x {x_\mathbb{B}. E_\mathbb{B}}{x_U. E_U}{x_\times. E_\times}}
=\\
&\quad\pmmuXtoYinZ {(x:{?})} {x'} \pmmuXtoYinZ {x' : \texttt{Tree}[(1+1)+U\u {\text{?`}}]}t \caseofX t\\
&\qquad\{{l. \caseofXthenYelseZ l {x_\mathbb{B}. \sem{E_\mathbb{B}}}{x_U. \sem{E_U}}}\\
&\qquad\elseZ{x_\times. \sem{E_\times}}\\
&\sem{\dyncocaseFunF{M_\to}{M_{\u F}}}
= \rollty{\nu \u Y. ({?} \to \u Y)\mathbin{\&} \u F{?}}\pair{\sem{M_\to}}{\sem{M_{\u F}}}
\end{align*}
\end{longonly}
\subsection{Contract Translation}
Having defined the data parameterizing the translation, we now consider
the translation of GTT into CBPV*\ itself.
For the remainder of the paper, we assume that we have a fixed dynamic
type interpretation $\rho$, and all proofs and definitions work for any
interpretation.
\begin{figure}
\begin{small}
\begin{mathpar}
x:\sem{A} \vdash \sem{\upcast{A}{A'}} : \sem{A'}\and
\bullet:\sem{\u B'} \vdash \sem{\dncast{\u B}{\u B'}} : \sem{\u B}\\
\begin{array}{rcl}
\iflong
x : 0 \vdash \supcast{0}{A} & = & \kw{absurd} x\\
\bullet : A \vdash \sdncast{\u F0}{\u F A} &=& \bindXtoYinZ \bullet x \mho\\
\fi
x : \sem{{?}} \vdash \sem{\upcast{{?}}{{?}}} & = & x\\
\bullet : \u F {?} \vdash \sdncast{\u F {?}}{\u F{?}} &=& \bullet\\
x : \sem{G} \vdash \sem{\upcast{G}{{?}}} & = & \rho_{up}(G)\\
\bullet : \u F {?} \vdash \sdncast{\u F G}{\u F{?}} &=& \rho_{dn}(G)\\
x : \sem{A} \vdash \sem{\upcast{A}{{?}}} & = & \sem{\upcast{\lfloor A \rfloor}{{?}}}[{\sem{\upcast{A}{\lfloor A \rfloor}}}/x]\\
\bullet: \u F{?} \vdash \sdncast{\u F A}{\u F{?}} &=& \sdncast{\u F A}{\u F\floor A}[{\sdncast{\u F\floor A}{\u F{?}}}]\\
\iflong
x : \sem{A_1} + \sem{A_2} \vdash \sem{\upcast{A_1 + A_2}{A_1' + A_2'}}
& = & \caseofX x \\
&& \{{x_1. \sem{\upcast{A_1}{A_1'}}[x_1/x]}\\
&& \elseZ{x_2. \sem{\upcast{A_2}{A_2'}}[x_2/x]}\\
\bullet : \u F(\sem{A_1'} + \sem{A_2'}) \vdash
\sem{\dncast{\u F(A_1 + A_2)}{\u F(A_1' + A_2')}}
&=&
\bindXtoYinZ \bullet {x'} \caseofX {x'}\\
&&\{{x_1'. \bindXtoYinZ {(\sdncast{\u FA_1}{\u F A_1'}\kw{ret} x_1')} {x_1} \kw{ret} x_1}\\
&&\elseZ{x_2'. \bindXtoYinZ {(\sdncast{\u FA_2}{\u F A_2'}\kw{ret} x_2')} {x_2} \kw{ret} x_2}\\
x : 1 \vdash \supcast{1}{1} &=& x\\
\bullet : \u F 1 \vdash \sdncast{\u F1}{\u F1} &=& \bullet\\
\fi
x : \sem{A_1}\times\sem{A_2} \vdash \sem{\upcast{A_1\times A_2}{A_1'\times A_2'}} &=& \pmpairWtoXYinZ x {x_1}{x_2}\\
&&(\supcast{A_1}{A_1'}[x_1], \supcast{A_2}{A_2'}[x_2])\\
\bullet : \u F(\sem{A_1'} \times \sem{A_2'}) \vdash \sdncast{\u F(A_1 \times A_2)}{\u F(A_1' \times A_2')}
&=&
\bindXtoYinZ \bullet {x'} \pmpairWtoXYinZ {x'} {x_1'}{x_2'}\\
&&\bindXtoYinZ {\sdncast{\u FA_1}{\u FA_1'}\kw{ret} x_1'} {x_1}\\
&& \bindXtoYinZ {\sdncast{\u FA_2}{\u FA_2'}\kw{ret} x_2'} {x_2} \kw{ret}(x_1,x_2)\\
x : U\u F \sem{A} \vdash \sem{\upcast{U\u F A}{U \u F A'}} &=&
\kw{thunk} (\bindXtoYinZ {\kw{force} x} y \kw{ret} \sem{\upcast{A}{A'}}[y/x])\\\\
\iflong
\bullet : \u B \vdash \sdncast{\top}{\u B} &=& \{ \}\\
x:U\top \vdash \supcast{U\top}{U\u B} &=& \kw{thunk} \mho\\
\bullet : \u {\text{?`}} \vdash \sdncast{\u {\text{?`}}}{\u {\text{?`}}} &=& \bullet\\
x:U\u {\text{?`}} \vdash \supcast{U\u {\text{?`}}}{U\u {\text{?`}}} &=& x\\
\bullet : \u {\text{?`}} \vdash \sdncast{\u G}{\u {\text{?`}}} &=& \rho_{dn}(\u G)\\
x:U\u G \vdash \supcast{U\u G}{U\u {\text{?`}}} &=& \rho_{up}(\u G)\\
\bullet : \u {\text{?`}} \vdash \sdncast{\u B}{\u {\text{?`}}} &=& \sdncast{\u B}{\floor {\u B}}[\sdncast{\floor{\u B}}{\u {\text{?`}}}]\\
x:U\u {\text{?`}} \vdash \supcast{U\u B}{U\u {\text{?`}}}&=& \supcast{U\floor{\u B}}{U\u {\text{?`}}}[\supcast{U\u B}{U\floor{\u B}}]\\
\bullet : \sem{\u B_1'}\mathbin{\&} \sem{\u B_2'}\vdash \sdncast{\u B_1\mathbin{\&}\u B_2}{\u B_1'\mathbin{\&}\u B_2'} &=& \pairone{\sdncast{\u B_1}{\u B_1'}\pi\bullet}\\
&&\pairtwo{\sdncast{\u B_2}{\u B_2'}\pi'\bullet}\\
x : U(\sem{\u B_1}\mathbin{\&} \sem{\u B_2}) \vdash
{\supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1'\mathbin{\&}\u B_2')}}
&=&
\kw{thunk}\\
&&\pairone{\kw{force} \supcast{U\u B_1}{U\u B_1'}{(\kw{thunk} \pi\kw{force} x)}}\\
&&\pairtwo{\kw{force} \supcast{U\u B_2}{U\u B_2'}{(\kw{thunk} \pi'\kw{force} x)}}\\
\bullet : \sem{A'} \to \sem{\u B'} \vdash \sdncast{A \to \u B}{A' \to \u B'} &=& \lambda x:A. \sdncast{\u B}{\u B'}{(\bullet\, (\supcast{A}{A'}{x}))}\\
f : U(\sem{A} \to \sem{\u B}) \vdash
\supcast{U(A \to \u B)}{U(A' \to \u B')}
&=&
\kw{thunk}\lambda x':A'. \\
&&\bindXtoYinZ {\sdncast{\u F A}{\u FA'}\kw{ret} x'} x\\
&&\kw{force}\supcast{U\u B}{U\u B'}\kw{thunk} {(\kw{force} f)\, x}\\
\bullet : \u FU\u B' \vdash \sdncast{\u FU\u B}{\u FU\u B'}
&=&
\bindXtoYinZ \bullet {x'} \kw{ret}\kw{thunk}(\sdncast{\u B}{\u B'}[\kw{force} x'])
\fi
\end{array}
\end{mathpar}
\vspace{-0.15in}
\caption{Cast to Contract Translation \ifshort(selected cases)\fi}
\label{fig:cast-to-contract}
\end{small}
\end{figure}
\begin{longonly}
\subsubsection{Interpreting Casts as Contracts} ~
\end{longonly}
The main idea of the translation is an extension of the dynamic type
interpretation to an interpretation of \emph{all} casts in GTT
(Figure~\ref{fig:cast-to-contract}) as contracts in CBPV*, following
the definitions in Lemma~\ref{lem:casts-admissible}.
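To give one concrete, informal reading of ``casts as contracts'': at function type, the translated cast is a wrapper that downcasts the argument and upcasts the result, in the style of higher-order contracts. The Python sketch below is our own illustration, with hypothetical names (`fun_cast`, `up_int`, `dn_int`); it mirrors the $A \to \u B$ cases of Figure~\ref{fig:cast-to-contract} in spirit only.

```python
# Illustrative sketch (not the formal translation): a cast between
# function types wraps the function, casting arguments down and
# results up, as in higher-order contracts.
def fun_cast(dn_arg, up_res):
    # Given a downcast for arguments and an upcast for results,
    # produce the cast on functions A -> B  =>  A' -> B'.
    def wrap(f):
        return lambda x_prime: up_res(f(dn_arg(x_prime)))
    return wrap

# Tag-based casts for a toy "int" ground type, inlined here:
up_int = lambda v: ("int", v)
def dn_int(d):
    tag, v = d
    assert tag == "int", "ground mismatch"
    return v

succ = lambda n: n + 1                       # int -> int
succ_dyn = fun_cast(dn_int, up_int)(succ)    # wrapped to act on tagged values
assert succ_dyn(("int", 41)) == ("int", 42)
```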
\begin{shortonly}
To verify the totality and coherence of this definition, we define (in
the extended version) a normalized version of the type dynamism rules
from Figure~\ref{fig:gtt-type-dynamism}, which is interderivable but
has at most one derivation of $T \sqsubseteq T'$ for a given $T$ and $T'$.
The main idea is to restrict reflexivity to base types, and restrict
transitivity to $A \sqsubseteq \floor{A} \sqsubseteq {?}$, where $\floor{A}$
is the ground type with the same outer connective as $A$.
\end{shortonly}
\begin{longonly}
Some clauses of the translation are overlapping, which we resolve by
considering them as ordered (though we will ultimately show they are
equivalent).
The definition is also not obviously total: we need to verify that it
covers every possible case where $A \sqsubseteq A'$ and $\u B \sqsubseteq \u
B'$.
To prove totality and coherence, we could try induction on the type
dynamism relation of Figure~\ref{fig:gtt-type-dynamism}, but it is
convenient to first give an alternative, normalized set of rules for
type dynamism that proves the same relations, which we do in
Figure~\ref{fig:normalized}.
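The normalized rules are syntax-directed, which an executable reading makes apparent: at most one rule applies to any pair of types, so a recursive check never backtracks. The Python sketch below is our own illustration, covering only the value types ${?}, 0, 1, +, \times$; it implements the normalized relation, including the $A \sqsubseteq \floor{A} \sqsubseteq {?}$ decomposition.

```python
# Minimal sketch of the normalized value-type dynamism rules.
# Types are tuples: ("dyn",), ("zero",), ("unit",), ("sum", a, b), ("times", a, b).
DYN, ZERO, UNIT = ("dyn",), ("zero",), ("unit",)

def floor(a):
    # The ground type with the same outer connective as a.
    return (a[0], DYN, DYN) if a[0] in ("sum", "times") else a

def le(a, b):
    # A <= B in the normalized system: each pair matches at most one rule.
    if b == DYN and a in (DYN, ZERO):
        return True                                 # ? <= ? and 0 <= ?
    if a == ZERO:
        return b == ZERO                            # 0 <= 0
    if a == b and a in (DYN, UNIT):
        return True                                 # reflexivity at base types only
    if b == DYN:
        return le(a, floor(a))                      # A <= floor(A) <= ?
    if a[0] == b[0] and a[0] in ("sum", "times"):
        return le(a[1], b[1]) and le(a[2], b[2])    # monotonicity
    return False

assert le(("sum", UNIT, UNIT), DYN)     # 1+1 <= ?
assert not le(DYN, UNIT)                # ? is not below 1
```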
\begin{figure}
\begin{small}
\begin{mathpar}
\inferrule
{A \in \{{?}, 1\}}
{A \sqsubseteq A}
\inferrule
{A \in \{{?}, 0\}}
{0 \sqsubseteq A}
\inferrule
{A \sqsubseteq \floor A\and
A \not\in\{0,{?} \}}
{A \sqsubseteq {?}}\\
\inferrule
{\u B \sqsubseteq \u B'}
{U \u B \sqsubseteq U \u B'}
\inferrule
{A_1 \sqsubseteq A_1' \and A_2 \sqsubseteq A_2' }
{A_1 + A_2 \sqsubseteq A_1' + A_2'}
\inferrule
{A_1 \sqsubseteq A_1' \and A_2 \sqsubseteq A_2' }
{A_1 \times A_2 \sqsubseteq A_1' \times A_2'}\\
\inferrule
{}
{\u {\text{?`}} \sqsubseteq \u {\text{?`}}}
\inferrule
{\u B \in \{ \u {\text{?`}}, \top \}}
{\top \sqsubseteq \u B}
\inferrule
{\u B \sqsubseteq \floor {\u B} \and \u B \not\in \{ \top, \u {\text{?`}} \}}
{\u B \sqsubseteq \u {\text{?`}}}\\
\inferrule
{A \sqsubseteq A'}
{\u F A \sqsubseteq \u F A'}
\inferrule
{\u B_1 \sqsubseteq \u B_1' \and \u B_2 \sqsubseteq \u B_2'}
{\u B_1 \mathbin{\&} \u B_2 \sqsubseteq \u B_1' \mathbin{\&} \u B_2'}
\inferrule
{A \sqsubseteq A' \and \u B \sqsubseteq \u B'}
{A \to \u B \sqsubseteq A' \to \u B'}
\end{mathpar}
\end{small}
\caption{Normalized Type Dynamism Relation}
\label{fig:normalized}
\end{figure}
\begin{lemma}[Normalized Type Dynamism is Equivalent to Original]
\label{lem:norm-type-dyn}
$T \sqsubseteq T'$ is provable in the normalized type dynamism
definition iff it is provable in the original type
dynamism definition.
\end{lemma}
\begin{longproof}
It is clear that the normalized system is a subset of the original:
every normalized rule corresponds directly to a rule of the original
system, except the normalized $A \sqsubseteq {?}$ and $\u B \sqsubseteq \u {\text{?`}}$
rules have a subderivation that was not present originally.
For the converse, first we show by induction that reflexivity is
admissible:
\begin{enumerate}
\item If $A \in \{{?}, 1, 0\}$, we use a normalized rule.
\item If $A \not\in\{{?}, 1, 0\}$, we use the inductive hypothesis
and the monotonicity rule.
\item If $\u B\in \{\u {\text{?`}}, \top\}$ use the normalized rule.
\item If $\u B \not\in\{\u {\text{?`}}, \top\}$ use the inductive hypothesis
and monotonicity rule.
\end{enumerate}
Next, we show that transitivity is admissible:
\begin{enumerate}
\item Assume we have $A \sqsubseteq A' \sqsubseteq A''$
\begin{enumerate}
\item If the left rule is $0 \sqsubseteq A'$, then either $A' = {?}$
or $A' = 0$. If $A' = 0$, the right rule is $0 \sqsubseteq A''$ and we
can use that proof. Otherwise $A' = {?}$, so the right rule
is ${?} \sqsubseteq {?}$ and we can use $0 \sqsubseteq {?}$.
\item If the left rule is $A \sqsubseteq A$ where $A \in \{ {?}, 1\}$
then either $A = {?}$, in which case $A'' = {?}$ and we're
done. Otherwise the right rule is either $1 \sqsubseteq 1$ (done) or
$1 \sqsubseteq {?}$ (also done).
\item If the left rule is $A \sqsubseteq {?}$ with
$A\not\in\{0,{?}\}$ then the right rule must be ${?} \sqsubseteq
{?}$ and we're done.
\item Otherwise the left rule is a monotonicity rule for one of
$U, +, \times$ and the right rule is either monotonicity (use
the inductive hypothesis) or the right rule is $A' \sqsubseteq {?}$
with a sub-proof of $A' \sqsubseteq \floor{A'}$. Since the left rule
is monotonicity, $\floor{A} = \floor{A'}$, so we inductively use
transitivity of the proof of $A \sqsubseteq A'$ with the proof of $A'
\sqsubseteq \floor{A'}$ to get a proof $A \sqsubseteq \floor{A}$ and thus
$A \sqsubseteq {?}$.
\end{enumerate}
\item Assume we have $\u B \sqsubseteq \u B' \sqsubseteq \u B''$.
\begin{enumerate}
\item If the left rule is $\top \sqsubseteq \u B'$ then $\u B'' \in
\{\u {\text{?`}}, \top\}$ so we apply that rule.
\item If the left rule is $\u {\text{?`}}\sqsubseteq \u {\text{?`}}$, the right rule must
be as well.
\item If the left rule is $\u B \sqsubseteq \u {\text{?`}}$ the right rule must
be reflexivity.
\item If the left rule is a monotonicity rule for $\mathbin{\&}, \to, \u
F$ then the right rule is either also monotonicity (use the
inductive hypothesis) or it is a $\u B \sqsubseteq \u {\text{?`}}$ rule, and we
proceed as in the ${?}$ case above.
\end{enumerate}
\end{enumerate}
Finally we show $A \sqsubseteq {?}$, $\u B \sqsubseteq \u {\text{?`}}$ are admissible
by induction on $A$, $\u B$.
\begin{enumerate}
\item If $A \in \{ {?}, 0\}$ we use the primitive rule.
\item If $A \not\in \{ {?}, 0 \}$ we use the $A \sqsubseteq {?}$ rule
and we need to show $A \sqsubseteq \floor A$. If $A = 1$, we use the
$1\sqsubseteq 1$ rule, otherwise we use the inductive hypothesis and
monotonicity.
\item If $\u B \in \{ \u {\text{?`}}, \top\}$ we use the primitive rule.
\item If $\u B \not\in \{ \u {\text{?`}}, \top \}$ we use the $\u B \sqsubseteq
\u {\text{?`}}$ rule and we need to show $\u B \sqsubseteq \floor {\u B}$, which
follows by inductive hypothesis and monotonicity.
\end{enumerate}
Every other rule in Figure~\ref{fig:gtt-type-dynamism} is a rule of
the normalized system in Figure~\ref{fig:normalized}.
\end{longproof}
Based on normalized type dynamism, we show
\begin{theorem}
If $A \sqsubseteq A'$ according to Figure~\ref{fig:normalized}, then there is
a unique complex value $x : \sem A \vdash \supcast{A}{A'}{x} : \sem{A'}$,
and
if $\u B \sqsubseteq \u B'$ according to Figure~\ref{fig:normalized}, then there is
a unique complex stack $\bullet : \sem{\u B'} \vdash \sdncast{\u B}{\u B'} : \sem{\u B}$.
\end{theorem}
\smallskip
\subsubsection{Interpretation of Terms}~
\end{longonly}
Next, we extend the translation of casts to a translation of all terms
by congruence, since all term constructors of GTT besides casts are
also in CBPV*. This satisfies:
\begin{lemma}[Contract Translation Type Preservation]
If $\Gamma\,\,|\,\,\Delta \vdash E : T$ in GTT, then $\sem{\Gamma}
\,\,|\,\,\sem\Delta\vdash \sem E : \sem T$ in CBPV*.
\end{lemma}
\iflong
\subsubsection{Interpretation of Term Dynamism}
\fi
We have now given an interpretation of the types, terms, and
type dynamism proofs of GTT in CBPV*.
To complete this to form a \emph{model} of GTT, we need to give an
interpretation of the \emph{term dynamism} proofs, which is
established by the
following ``axiomatic graduality'' theorem.
GTT has \emph{heterogeneous} term dynamism
rules indexed by type dynamism, but CBPV*\ has only \emph{homogeneous}
inequalities between terms, i.e., if $E \sqsubseteq E'$, then $E,E'$ have
the \emph{same} context and types.
Since every type dynamism judgement has an associated contract, we can
translate a heterogeneous term dynamism to a homogeneous inequality
\emph{up to contract}. Our next overall goal is to prove
\begin{theorem}[Axiomatic Graduality] \label{thm:axiomatic-graduality}
For any dynamic type interpretation,
\begin{small}
\[
\inferrule
{\Phi : \Gamma \sqsubseteq \Gamma'\\
\Psi : \Delta \sqsubseteq \Delta'\\
\Phi \,\,|\,\, \Psi \vdash M \sqsubseteq M' : \u B \sqsubseteq \u B'}
{\sem\Gamma \,\,|\,\, \sem{\Delta'} \vdash \sem M[\sem{\Psi}] \sqsubseteq \sdncast{\u B}{\u B'}[\sem{M'}[\sem{\Phi}]] : \sem{\u B}}
\quad
\inferrule
{\Phi : \Gamma \sqsubseteq \Gamma' \\
\Phi \vdash V \sqsubseteq V' : A \sqsubseteq A'}
{\sem{\Gamma} \vdash \supcast{A}{A'}[\sem{V}] \sqsubseteq\sem{V'}[\sem\Phi] : \sem {A'}}
\]
\end{small}
where we define $\sem{\Phi}$ to upcast each variable, and
$\sem{\Psi}$ to downcast $\bullet$ if $\Delta$ is nonempty; if
$\Delta = \cdot$, then $M[\sem{\Psi}] = M$.
\begin{longonly}
More explicitly,
\begin{enumerate}
\item If $\Phi : \Gamma \sqsubseteq \Gamma'$, then there exists $n$
such that $\Gamma = x_1:A_1,\ldots,x_n:A_n$ and $\Gamma' =
x_1':A_1',\ldots,x_n':A_n'$ where $A_i \sqsubseteq A_i'$ for each
$i\leq n$.
Then $\sem{\Phi}$ is a substitution from $\sem{\Gamma}$ to $\sem{\Gamma'}$
defined as
\[ \sem{\Phi} = \supcast{A_1}{A_1'}x_1/x_1',\ldots\supcast{A_n}{A_n'}x_n/x_n' \]
\item If $\Psi : \Delta \sqsubseteq \Delta'$, then we similarly define
$\sem{\Psi}$ as a ``linear substitution''. That is, if $\Delta =
\Delta' = \cdot$, then $\sem{\Psi}$ is an empty substitution and
$M[\sem{\Psi}] = M$, otherwise $\sem{\Psi}$ is a linear
substitution from $\Delta' = \bullet : \u B'$ to $\Delta =
\bullet : \u B$ where $\u B \sqsubseteq \u B'$ defined as
\[ \sem\Psi = \sdncast{\u B}{\u B'}\bullet/\bullet \]
\end{enumerate}
\end{longonly}
\end{theorem}
\begin{longonly}
Relative to previous work on graduality \citep{newahmed18},
the distinction between complex value upcasts and complex stack
downcasts guides the formulation of the theorem; e.g. using upcasts in
the left-hand theorem would require more thunks/forces.
\end{longonly}
\begin{shortonly}
The full proof can be found in the extended version, and uses a
sequence of lemmas. The first lemma shows that the translations of
casts in Figure~\ref{fig:cast-to-contract} do form ep pairs in the
sense of Definition~\ref{def:cbpvstar-eppairs}. One of the biggest
advantages of using an explicit syntax for complex values and complex
stacks is that the ``shifted'' casts (the downcast between $\u F$
types for $A \sqsubseteq A'$ and the upcast between $U$ types for $\u B
\sqsubseteq \u B'$) are the only effectful terms, and this lemma is the
only place where we need to reason about their definitions
explicitly---afterwards, we can simply use the fact that they are ep
pairs with the ``pure'' value upcasts and stack downcasts, which
compose much more nicely than effectful terms. This is justified by two
additional lemmas, which show that a projection is determined
by its embedding and vice versa, and that embedding-projections
satisfy an adjunction/Galois connection property. The final lemmas
show that, according to Figure~\ref{fig:cast-to-contract},
$\supcast{A}{A'}$ is equivalent to the identity and
$\supcast{A'}{A''}\supcast{A}{A'}$ is $\supcast{A}{A''}$, and
similarly for downcasts. All of these properties are theorems in GTT
(Section~\ref{sec:theorems-in-gtt}), and in the extended version it
takes quite a bit of work to prove them true under translation, which
illustrates that the axiomatic theory of GTT encodes a lot of
information with relatively few rules.
\end{shortonly}
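As an informal sanity check on the adjunction property mentioned above, one can verify $e(x) \sqsubseteq y$ iff $x \sqsubseteq p(y)$ in a toy model where $\sqsubseteq$ is error approximation. The Python sketch below is our own illustration; the names `below`, `Err`, `e`, and `p` are hypothetical.

```python
# Toy check of the Galois connection e(x) <= y iff x <= p(y) in a tag
# model, where <= is the error-approximation order: Err is below
# everything, and otherwise values must be equal.
Err = ("err",)

def below(a, b):
    return a == Err or a == b

e = lambda v: ("int", v)                  # embedding: tag injection
def p(d):
    # projection: partial inverse, erring (Err) outside the image of e
    return d[1] if d[0] == "int" else Err

for x in [1, 2]:
    for y in [("int", 1), ("int", 2), ("str", "s")]:
        assert below(e(x), y) == below(x, p(y))
```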
\begin{longonly}
We now develop some lemmas on the way towards proving this result.
First, to keep proofs high-level, we establish the following cast
reductions that follow easily from $\beta,\eta$ principles.
\begin{lemma}[Cast Reductions]
The following are all provable
\begin{align*}
&\sem{\upcast{A_1+A_2}{A_1'+A_2'}}[\kw{inl} V] \mathrel{\gtdyn\ltdyn} \kw{inl} \sem{\upcast{A_1}{A_1'}}[V]\\
&\sem{\upcast{A_1+A_2}{A_1'+A_2'}}[\kw{inr} V] \mathrel{\gtdyn\ltdyn} \kw{inr} \sem{\upcast{A_2}{A_2'}}[V]\\
&\sem{\dncast{\u F(A_1+A_2)}{\u F(A_1'+A_2')}}[\kw{ret} \kw{inl} V] \mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\sem{\dncast{\u F A_1}{\u F A_1'}}[\kw{ret} V]} {x_1} \kw{ret} \kw{inl} x_1\\
&\sem{\dncast{\u F(A_1+A_2)}{\u F(A_1'+A_2')}}[\kw{ret} \kw{inr} V] \mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\sem{\dncast{\u F A_2}{\u F A_2'}}[\kw{ret} V]} {x_2} \kw{ret} \kw{inr} x_2\\
&\sem{\dncast{\u F 1}{\u F1}} \mathrel{\gtdyn\ltdyn} \bullet\\
&\sem{\upcast{ 1}{1}}[x] \mathrel{\gtdyn\ltdyn} x\\
&\sem{\dncast{\u F(A_1\times A_2)}{\u F(A_1'\times A_2')}}[\kw{ret} (V_1,V_2)]\\
&\quad\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\sdncast{\u FA_1}{\u F A_1'}[\kw{ret} V_1]} {x_1} \bindXtoYinZ {\sdncast{\u FA_2}{\u F A_2'}[\kw{ret} V_2]} {x_2} \kw{ret} (x_1,x_2)\\
&\supcast{A_1\times A_2}{A_1'\times A_2'}[(V_1,V_2)] \mathrel{\gtdyn\ltdyn} (\supcast{A_1}{A_1'}[V_1], \supcast{A_2}{A_2'}[V_2])\\
&(\sdncast{A \to \u B}{A' \to \u B'} M)\, V \mathrel{\gtdyn\ltdyn}
(\sdncast{\u B}{\u B'} M)\, (\supcast{A}{A'}{V})\\
&(\kw{force} (\supcast{U(A\to\u B)}{U(A'\to\u B')} V))\,V'\\
&\quad\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\sdncast{\u FA}{\u FA'}[\kw{ret} V']} x {\kw{force} (\supcast{U\u B}{U\u B'}{(\kw{thunk} (\kw{force} V\, x))})}\\
&\pi \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'} M \mathrel{\gtdyn\ltdyn}
\sdncast{\u B_1}{\u B_1'} \pi M\\
&\pi' \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'} M \mathrel{\gtdyn\ltdyn}
\sdncast{\u B_2}{\u B_2'} \pi' M\\
&\pi \kw{force} (\supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1' \mathbin{\&} \u B_2')} V)
\mathrel{\gtdyn\ltdyn}
\kw{force} \supcast{U\u B_1}{U\u B_1'}{\kw{thunk} (\pi \kw{force} V)}\\
&\pi' \kw{force} (\supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1' \mathbin{\&} \u B_2')} V)
\mathrel{\gtdyn\ltdyn}
\kw{force} \supcast{U\u B_2}{U\u B_2'}{\kw{thunk} (\pi' \kw{force} V)}\\
&\sdncast{\u F U \u B}{\u F U \u B'}[\kw{ret} V] \mathrel{\gtdyn\ltdyn} \kw{ret}\kw{thunk} \sdncast{\u B}{\u B'}\kw{force} V\\
&\kw{force} \supcast{U\u FA}{U \u F A'}[V]
\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\kw{force} V} x \kw{ret}\supcast{A}{A'}[x]
\end{align*}
\end{lemma}
Our next goal is to show that from the basic casts being ep pairs, we
can prove that all casts as defined in Figure~\ref{fig:cast-to-contract}
are ep pairs. Before doing so, we prove the following lemma, which is
used for transitivity (e.g. in the $A \sqsubseteq {?}$ rule, which uses a
composition $A \sqsubseteq \floor{A} \sqsubseteq {?}$):
\begin{lemma}[EP Pairs Compose]\hfill
\label{lem:ep-pairs-compose}
\begin{enumerate}
\item If $(V_1, S_1)$ is a value ep pair from $A_1$ to $A_2$ and
$(V_2,S_2)$ is a value ep pair from $A_2$ to $A_3$, then
$(V_2[V_1], S_1[S_2])$ is a value ep pair from $A_1$ to $A_3$.
\item If $(V_1, S_1)$ is a computation ep pair from $\u B_1$ to $\u B_2$ and
$(V_2,S_2)$ is a computation ep pair from $\u B_2$ to $\u B_3$, then
$(V_2[V_1], S_1[S_2])$ is a computation ep pair from $\u B_1$ to $\u B_3$.
\end{enumerate}
\end{lemma}
\begin{longproof}
\begin{enumerate}
\item First, retraction follows from retraction twice:
\[ S_1[S_2[\kw{ret} V_2[V_1[x]]]] \mathrel{\gtdyn\ltdyn} S_1[\kw{ret} V_1[x]] \mathrel{\gtdyn\ltdyn} \kw{ret} x \]
and projection follows from projection twice:
\begin{align*}
\bindXtoYinZ {S_1[S_2[\bullet]]} x \kw{ret} V_2[V_1[x]]
&\mathrel{\gtdyn\ltdyn}
{\bindXtoYinZ {S_1[S_2[\bullet]]} x \bindXtoYinZ {\kw{ret} [V_1[x]]} y \kw{ret} V_2[y]}\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {(\bindXtoYinZ {S_1[S_2[\bullet]]} x {\kw{ret} [V_1[x]]})} y \kw{ret} V_2[y]\tag{Commuting conversion}\\
&\sqsubseteq
\bindXtoYinZ {S_2[\bullet]} y \kw{ret} V_2[y]\tag{Projection}\\
&\sqsubseteq \bullet \tag{Projection}
\end{align*}
\item Again retraction follows from retraction twice:
\[ S_1[S_2[\kw{force} V_2[V_1[z]]]] \mathrel{\gtdyn\ltdyn} S_1[\kw{force} V_1[z]] \mathrel{\gtdyn\ltdyn} \kw{force} z \]
and projection from projection twice:
\begin{align*}
V_2[V_1[\kw{thunk} S_1[S_2[\kw{force} w]]]]
&\mathrel{\gtdyn\ltdyn} V_2[V_1[\kw{thunk} S_1[\kw{force} \kw{thunk} S_2[\kw{force} w]]]]\tag{$U\beta$}\\
&\sqsubseteq V_2[\kw{thunk} S_2[\kw{force} w]] \tag{Projection}\\
&\sqsubseteq w \tag{Projection}
\end{align*}
\end{enumerate}
\end{longproof}
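The compositionality in Lemma~\ref{lem:ep-pairs-compose} can also be observed in a simple untyped model, where embeddings nest one way and projections the other. The sketch below is our own illustration (the names `compose_ep` and `CastError` are hypothetical); it checks the retraction law for a composite of two toy ep pairs.

```python
class CastError(Exception):
    # Stands in for the error computation.
    pass

def compose_ep(ep1, ep2):
    # Compose ep pairs (e1,p1): A1 -> A2 and (e2,p2): A2 -> A3
    # into (e2 . e1, p1 . p2): A1 -> A3, as in the lemma.
    (e1, p1), (e2, p2) = ep1, ep2
    return (lambda x: e2(e1(x)), lambda z: p1(p2(z)))

# Toy ep pairs: bool -> int (as 0/1), and int -> tagged dynamic value.
def p_bool(n):
    if n not in (0, 1):
        raise CastError("not a boolean")
    return bool(n)
ep_bool_int = (int, p_bool)

def p_int(d):
    tag, v = d
    if tag != "int":
        raise CastError("not an int")
    return v
ep_int_dyn = (lambda n: ("int", n), p_int)

e, p = compose_ep(ep_bool_int, ep_int_dyn)

# Retraction: projecting an embedded value is the identity.
assert p(e(True)) is True and p(e(False)) is False

# Outside the image of e, the composite projection errors,
# so it lies below the identity in the error ordering.
for bad in [("int", 7), ("str", "x")]:
    try:
        p(bad)
    except CastError:
        pass
```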
\begin{longonly}
\begin{lemma}[Identity EP Pair]
\label{ep-pair-id}
$(x. x, \bullet)$ is an ep pair (value or computation).
\end{lemma}
\end{longonly}
Now, we show that all casts are ep pairs.
The proof is a somewhat tedious but straightforward calculation.
\begin{lemma}[Casts are EP Pairs]\hfill
\label{lem:casts-are-ep-pairs}
\begin{enumerate}
\item For any $A \sqsubseteq A'$, the casts $(x.\sem{\upcast{A}{A'}x},
\sem{\dncast{\u F A}{\u F A'}})$ are a value ep pair from
$\sem{A}$ to $\sem{A'}$
\item For any $\u B \sqsubseteq \u B'$, the casts $(z. \sem{\upcast{U \u
B}{U \u B'}z}, \sem{\dncast{\u B}{\u B'}})$ are a computation ep
pair from $\sem{\u B}$ to $\sem{\u B'}$.
\end{enumerate}
\end{lemma}
\begin{longproof}
By induction on normalized type dynamism derivations.
\begin{enumerate}
\item $A \sqsubseteq A$ ($A \in \{{?}, 1\}$), because identity is an ep pair.
\item $0 \sqsubseteq A$ (that $A \in \{ {?}, 0 \}$ is not important):
\begin{enumerate}
\item Retraction is
\[ x : 0 \vdash \kw{ret} x \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{ret}\kw{absurd} x} y \mho : \u F A \]
which holds by $0\eta$
\item Projection is
\[ \bullet : \u F A \vdash \bindXtoYinZ {(\bindXtoYinZ \bullet y \mho)} x {\kw{ret}\kw{absurd} x} \sqsubseteq \bullet : \u F A \]
Which we calculate:
\begin{align*}
&\bindXtoYinZ {(\bindXtoYinZ \bullet y \mho)} x {\kw{ret}\kw{absurd} x}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ \bullet y \bindXtoYinZ \mho x {\kw{ret}\kw{absurd} x}\tag{comm conv}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ \bullet y \mho \tag{Strictness of Stacks}\\
&\sqsubseteq \bindXtoYinZ \bullet y \kw{ret} y \tag{$\mho$ is $\bot$}\\
&\mathrel{\gtdyn\ltdyn} \bullet \tag{$\u F\eta$}
\end{align*}
\end{enumerate}
\item $+$:
\begin{enumerate}
\item Retraction is
\begin{align*}
&x : A_1 + A_2 \vdash\\
&\sem{\dncast{\u F(A_1+A_2)}{\u F(A_1'+A_2')}}[\kw{ret} \sem{\upcast{A_1+A_2}{A_1'+A_2'}}[x]]\\
&=\sdncast{\u F(A_1+A_2)}{\u F(A_1'+A_2')}[\kw{ret}\caseofXthenYelseZ x {x_1. \kw{inl}\supcast{A_1}{A_1'}[x_1]}{x_2. \kw{inr}\supcast{A_2}{A_2'}[x_2]}]\\
&\mathrel{\gtdyn\ltdyn}
\caseofX x\tag{commuting conversion}\\
&\quad\{ {x_1. \sdncast{\u F(A_1+A_2)}{\u F(A_1'+A_2')}[\kw{ret}\kw{inl}\supcast{A_1}{A_1'}[x_1]]}\\
&\quad\elseZ {x_2. \sdncast{\u F(A_1+A_2)}{\u F(A_1'+A_2')}[\kw{ret}\kw{inr}\supcast{A_2}{A_2'}[x_2]]}\\
&\mathrel{\gtdyn\ltdyn}
\caseofX x\tag{cast computation}\\
&\quad\{{x_1. \bindXtoYinZ {\sdncast{\u F A_1}{\u F A_1'}[\kw{ret} \supcast{A_1}{A_1'}x_1]} {x_1} \kw{ret} \kw{inl} x_1}\\
&\quad\elseZ{x_2. \bindXtoYinZ {\sdncast{\u F A_2}{\u F A_2'}[\kw{ret} \supcast{A_2}{A_2'}x_2]} {x_2} \kw{ret} \kw{inr} x_2}\\
&\mathrel{\gtdyn\ltdyn} \caseofXthenYelseZ x {x_1. \kw{ret} \kw{inl} x_1} {x_2. \kw{ret} \kw{inr} x_2}\tag{IH retraction}\\
&\mathrel{\gtdyn\ltdyn} \kw{ret} x\tag{$+\eta$}
\end{align*}
\item For Projection:
\begin{align*}
&\bullet : \u F(A_1' + A_2') \vdash\\
&\bindXtoYinZ {\sdncast{\u F(A_1+A_2)}{\u F(A_1'+A_2')}[\bullet]} x \kw{ret}\supcast{A_1+A_2}{A_1'+A_2'}[x]\\
&=
\bindXtoYinZ {(\bindXtoYinZ \bullet {x'} \caseofXthenYelseZ {x'} {x_1'. \bindXtoYinZ {\sem{\dncast{\u FA_1}{\u FA_1'}}[\kw{ret} x_1']} {x_1} \kw{ret}\kw{inl} x_1}{x_2'. \cdots})} x\\
&\quad\kw{ret}\supcast{A_1+A_2}{A_1'+A_2'}[x]\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ \bullet {x'} \caseofX {x'} \tag{Commuting Conversion}\\
&\qquad \{ {x_1'. \bindXtoYinZ {\sem{\dncast{\u FA_1}{\u FA_1'}}[\kw{ret} x_1']} {x_1} \kw{ret}\supcast{A_1+A_2}{A_1'+A_2'}[\kw{inl} x_1]}\\
&\qquad \elseZ {x_2'. \bindXtoYinZ {\sem{\dncast{\u FA_2}{\u FA_2'}}[\kw{ret} x_2']} {x_2} \kw{ret}\supcast{A_1+A_2}{A_1'+A_2'}[\kw{inr} x_2]}\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ \bullet x' \caseofX {x'}\tag{Cast Computation}\\
&\qquad \{ {x_1'. \bindXtoYinZ {\sem{\dncast{\u FA_1}{\u FA_1'}}[\kw{ret} x_1']} {x_1} {\kw{ret}\kw{inl} \supcast{A_1}{A_1'}x_1}}\\
&\qquad \elseZ {x_2'. \bindXtoYinZ {\sem{\dncast{\u FA_2}{\u FA_2'}}[\kw{ret} x_2']} {x_2} {\kw{ret}\kw{inr} \supcast{A_2}{A_2'}x_2}}\\
&\sqsubseteq
\bindXtoYinZ \bullet x' \caseofXthenYelseZ {x'} {x_1'. \kw{ret}\kw{inl} x_1'} {x_2'. \kw{ret} \kw{inr} x_2'}\tag{IH projection}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ \bullet x' \kw{ret} x'\tag{$+\eta$}\\
&\mathrel{\gtdyn\ltdyn} \bullet \tag{$\u F\eta$}\\
\end{align*}\
\end{enumerate}
\item $\times$:
\begin{enumerate}
\item First, Retraction:
\begin{align*}
&x : A_1\times A_2 \vdash\\
&\sdncast{\u F(A_1\times A_2)}{\u F(A_1'\times A_2')}[\kw{ret} \supcast{A_1\times A_2}{A_1' \times A_2'}[x]]\\
&=\sdncast{\u F(A_1\times A_2)}{\u F(A_1'\times A_2')}[\kw{ret}\pmpairWtoXYinZ x {x_1}{x_2} (\supcast{A_1}{A_1'}[x_1], \supcast{A_2}{A_2'}[x_2])]\\
&\mathrel{\gtdyn\ltdyn}
\pmpairWtoXYinZ x {x_1} {x_2} \sdncast{\u F(A_1\times A_2)}{\u F(A_1'\times A_2')}[\kw{ret}(\supcast{A_1}{A_1'}[x_1], \supcast{A_2}{A_2'}[x_2])]\tag{commuting conversion}\\
&\mathrel{\gtdyn\ltdyn}
\pmpairWtoXYinZ x {x_1} {x_2} \tag{cast reduction}\\
&\quad\bindXtoYinZ {\sdncast{\u F A_1}{\u F A_1'}[\kw{ret}\supcast{A_1}{A_1'}[x_1]]} {y_1}\\
&\quad\bindXtoYinZ {\sdncast{\u F A_2}{\u F A_2'}[\kw{ret}\supcast{A_2}{A_2'}[x_2]]} {y_2}\\
&\quad\kw{ret}(y_1, y_2)\\
&\mathrel{\gtdyn\ltdyn}
\pmpairWtoXYinZ x {x_1} {x_2} \bindXtoYinZ {\kw{ret} x_1} {y_1} \bindXtoYinZ {\kw{ret} x_2} {y_2} \kw{ret}(y_1, y_2)\tag{IH retraction}\\
&\mathrel{\gtdyn\ltdyn}
\pmpairWtoXYinZ x {x_1} {x_2} \kw{ret}(x_1,x_2) \tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn} \kw{ret} x\tag{$\times\eta$}
\end{align*}
\item Next, Projection:
\begin{align*}
&\bullet : \u F A'\vdash\\
&\bindXtoYinZ {\sdncast{\u F(A_1\times A_2)}{\u F(A_1'\times A_2')}[\bullet]} x \kw{ret}\supcast{A_1\times A_2}{A_1' \times A_2'}[x]\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ \bullet {x'} \pmpairWtoXYinZ {{x'}} {x_1'}{x_2'} \tag{$\u F\eta, \times\eta$}\\
&\quad \bindXtoYinZ {\sdncast{\u F(A_1\times A_2)}{\u F(A_1'\times A_2')}[\kw{ret} (x_1',x_2')]} x\\
&\quad \kw{ret}\supcast{A_1\times A_2}{A_1' \times A_2'}[x]\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ \bullet {x'} \pmpairWtoXYinZ {{x'}} {x_1'}{x_2'}\tag{cast reduction}\\
&\quad \bindXtoYinZ {\sdncast{\u F A_1}{\u F A_1'}[\kw{ret} x_1']} {x_1}\\
&\quad \bindXtoYinZ {\sdncast{\u F A_2}{\u F A_2'}[\kw{ret} x_2']} {x_2}\\
&\quad \kw{ret}\supcast{A_1\times A_2}{A_1' \times A_2'}[(x_1,x_2)]\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ \bullet {x'} \pmpairWtoXYinZ {{x'}} {x_1'}{x_2'}\tag{cast reduction}\\
&\quad \bindXtoYinZ {\sdncast{\u F A_1}{\u F A_1'}[\kw{ret} x_1']} {x_1}\\
&\quad \bindXtoYinZ {\sdncast{\u F A_2}{\u F A_2'}[\kw{ret} x_2']} {x_2}\\
&\quad \kw{ret}(\supcast{A_1}{A_1'}[x_1], \supcast{A_2}{A_2'}[x_2])\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ \bullet {x'} \pmpairWtoXYinZ {{x'}} {x_1'}{x_2'}\tag{$\u F \beta$, twice}\\
&\quad \bindXtoYinZ {\sdncast{\u F A_1}{\u F A_1'}[\kw{ret} x_1']} {x_1}\\
&\quad \bindXtoYinZ {\sdncast{\u F A_2}{\u F A_2'}[\kw{ret} x_2']} {x_2}\\
&\quad \bindXtoYinZ {\kw{ret} \supcast{A_2}{A_2'}[x_2]}{y_2'}\\
&\quad \bindXtoYinZ {\kw{ret} \supcast{A_1}{A_1'}[x_1]}{y_1'}\\
&\quad \kw{ret}(y_1',y_2')\\
&\sqsubseteq\bindXtoYinZ \bullet {x'} \pmpairWtoXYinZ {{x'}} {x_1'}{x_2'}\tag{IH Projection}\\
&\quad \bindXtoYinZ {\sdncast{\u F A_1}{\u F A_1'}[\kw{ret} x_1']} {x_1}\\
&\quad \bindXtoYinZ {\kw{ret} x_2'} {y_2'}\\
&\quad \bindXtoYinZ {\kw{ret} \supcast{A_1}{A_1'}[x_1]} {y_1'}\\
&\quad \kw{ret}(y_1',y_2')\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ \bullet {x'} \pmpairWtoXYinZ {{x'}} {x_1'}{x_2'}\tag{$\u F\beta$}\\
&\quad \bindXtoYinZ {\sdncast{\u F A_1}{\u F A_1'}[\kw{ret} x_1']} {x_1}\\
&\quad \bindXtoYinZ {\kw{ret} \supcast{A_1}{A_1'}[x_1]} {y_1'}\\
&\quad \kw{ret}(x_1',y_2')\\
&\sqsubseteq\bindXtoYinZ \bullet {x'} \pmpairWtoXYinZ {{x'}} {x_1'}{x_2'}\tag{IH Projection}\\
&\quad \bindXtoYinZ {\kw{ret} x_1'} {y_1'}\\
&\quad \kw{ret}(x_1',y_2')\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ \bullet {x'} \pmpairWtoXYinZ {{x'}} {x_1'}{x_2'}
\kw{ret}(x_1',x_2')\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ \bullet {x'} \kw{ret} {x'}\tag{$\times\eta$}\\
&\mathrel{\gtdyn\ltdyn}\bullet \tag{$\u F \eta$}
\end{align*}
\end{enumerate}
\item $U$: By inductive hypothesis, $(x.\sem{\upcast{U\u B}{U\u
B'}}, \dncast{\u B}{\u B'})$ is a computation ep pair
\begin{enumerate}
\item To show retraction we need to prove:
\[
x : U \u B \vdash \kw{ret} x \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {(\kw{ret} {\sem{\upcast{U\u B}{U \u B'}}[x]})} y {\kw{ret} \kw{thunk} \sem{\dncast{\u B}{\u B'}}[\kw{force} y]} : \u F U \u B'
\]
Which we calculate as follows:
\begin{align*}
&x : U\u B \vdash \\
&\sdncast{\u FU\u B}{\u FU\u B'}[{(\kw{ret} {\sem{\upcast{U\u B}{U \u B'}}[x]})}]\\
&\mathrel{\gtdyn\ltdyn}
\kw{ret}\kw{thunk}(\sdncast{\u B}{\u B'}[\kw{force} {\sem{\upcast{U\u B}{U \u B'}}}[x]])\tag{Cast Reduction}\\
&\mathrel{\gtdyn\ltdyn} \kw{ret}\kw{thunk} \kw{force} x \tag{IH Retraction}\\
&\mathrel{\gtdyn\ltdyn} \kw{ret} x \tag{$U\eta$}
\end{align*}
\item To show projection we calculate:
\begin{align*}
&\bindXtoYinZ {\sdncast{\u FU\u B}{\u FU\u B'}[\bullet]} x \supcast{U\u B}{U\u B'}[x]\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ \bullet {x'} \bindXtoYinZ {\sdncast{\u FU\u B}{\u FU\u B'}[\kw{ret} x']} x \supcast{U\u B}{U\u B'}[x]\tag{$\u F\eta$}\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ \bullet {x'} \bindXtoYinZ {\kw{ret}\kw{thunk}(\sdncast{\u B}{\u B'}[\kw{force} x'])} x \supcast{U\u B}{U\u B'}[x]\tag{Cast Reduction}\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ \bullet {x'} \supcast{U\u B}{U\u B'}[\kw{thunk}(\sdncast{\u B}{\u B'}[\kw{force} x'])] \tag{$\u F\beta$}\\
&\sqsubseteq \bindXtoYinZ \bullet {x'} x'\tag{IH Projection}\\
&\mathrel{\gtdyn\ltdyn} \bullet \tag{$\u F\eta$}
\end{align*}
\end{enumerate}
\end{enumerate}
\begin{enumerate}
\item There are a few base cases involving the dynamic computation type; the remaining cases are as follows.
\item $\top$:
\begin{enumerate}
\item Retraction is by $\top\eta$:
\begin{align*}
z : U \top \vdash \kw{force} z \mathrel{\gtdyn\ltdyn} \{ \} : \top
\end{align*}
\item Projection is
\begin{align*}
\kw{thunk} \mho
&\sqsubseteq \kw{thunk} \kw{force} w \tag{$\mho$ is $\bot$}\\
&\mathrel{\gtdyn\ltdyn} w \tag{$U\eta$}
\end{align*}
\end{enumerate}
\item $\mathbin{\&}$:
\begin{enumerate}
\item Retraction
\begin{align*}
&z : U (\u B_1 \mathbin{\&} \u B_2)\vdash \\
&\sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\kw{force} \supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1' \mathbin{\&} \u B_2')}[z]]\\
&\mathrel{\gtdyn\ltdyn}
\pairone{\pi\sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\kw{force} \supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1' \mathbin{\&} \u B_2')}[z]]} \tag{$\mathbin{\&}\eta$}\\
&\qquad\pairtwo{\pi' \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\kw{force} \supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1' \mathbin{\&} \u B_2')}[z]]}\\
&\mathrel{\gtdyn\ltdyn}
\pairone{\sdncast{\u B_1}{\u B_1'}[\pi\kw{force} \supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1' \mathbin{\&} \u B_2')}[z]]} \tag{Cast reduction}\\
&\qquad\pairtwo{\sdncast{\u B_2}{\u B_2'}[\pi'\kw{force} \supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1' \mathbin{\&} \u B_2')}[z]]}\\
&\mathrel{\gtdyn\ltdyn}
\pairone{\sdncast{\u B_1}{\u B_1'}[\kw{force}\supcast{U\u B_1}{U\u B_1'}[\kw{thunk} \pi \kw{force} z]]} \tag{Cast reduction}\\
& \qquad\pairtwo{\sdncast{\u B_2}{\u B_2'}[\kw{force}\supcast{U\u B_2}{U\u B_2'}[\kw{thunk} \pi' \kw{force} z]]}\\
&\mathrel{\gtdyn\ltdyn}
\pair{\kw{force} \kw{thunk} \pi \kw{force} z}{\kw{force} \kw{thunk} \pi' \kw{force} z} \tag{IH retraction}\\
&\mathrel{\gtdyn\ltdyn} \pair{\pi \kw{force} z}{\pi' \kw{force} z}\tag{$U\beta$}\\
&\mathrel{\gtdyn\ltdyn} \kw{force} z \tag{$\mathbin{\&}\eta$}
\end{align*}
\item Projection
\begin{align*}
&w : U {\u B_1' \mathbin{\&} \u B_2'} \vdash\\
& \supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1' \mathbin{\&} \u B_2')}[\kw{thunk} \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\kw{force} w]]\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\kw{force}\supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1' \mathbin{\&} \u B_2')}[\kw{thunk} \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\kw{force} w]]\tag{$U\eta$}\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\pairone{\pi\kw{force}\supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1' \mathbin{\&} \u B_2')}[\kw{thunk} \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\kw{force} w]]}\\
&\qquad\qquad\pairtwo{\pi'\kw{force}\supcast{U(\u B_1 \mathbin{\&} \u B_2)}{U(\u B_1' \mathbin{\&} \u B_2')}[\kw{thunk} \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\kw{force} w]]}\tag{$\mathbin{\&}\eta$}\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\pairone{\kw{force}\supcast{U\u B_1}{U\u B_1'}[\kw{thunk}\pi\kw{force}\kw{thunk} \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\kw{force} w]]}\\
&\qquad\qquad\pairtwo{\kw{force}\supcast{U\u B_2}{U\u B_2'}[\kw{thunk}\pi'\kw{force}\kw{thunk} \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\kw{force} w]]}\tag{cast reduction}\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\pairone{\kw{force}\supcast{U\u B_1}{U\u B_1'}[\kw{thunk}\pi \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\kw{force} w]]} \tag{$U\beta$}\\
&\qquad\qquad\pairtwo{\kw{force}\supcast{U\u B_2}{U\u B_2'}[\kw{thunk}\pi' \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\kw{force} w]]}\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\pairone{\kw{force}\supcast{U\u B_1}{U\u B_1'}[\kw{thunk}\sdncast{\u B_1}{\u B_1'}[\pi\kw{force} w]]} \tag{cast reduction}\\
&\qquad\qquad\pairtwo{\kw{force}\supcast{U\u B_2}{U\u B_2'}[\kw{thunk}\sdncast{\u B_2}{\u B_2'}[\pi'\kw{force} w]]}\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\pairone{\kw{force}\supcast{U\u B_1}{U\u B_1'}[\kw{thunk}\sdncast{\u B_1}{\u B_1'}[\kw{force}\kw{thunk}\pi\kw{force} w]]} \tag{$U\beta$}\\
&\qquad\qquad\pairtwo{\kw{force}\supcast{U\u B_2}{U\u B_2'}[\kw{thunk}\sdncast{\u B_2}{\u B_2'}[\kw{force}\kw{thunk}\pi'\kw{force} w]]}\\
&\sqsubseteq
\kw{thunk}\pair{\kw{force}\kw{thunk}\pi\kw{force} w}{\kw{force}\kw{thunk}\pi'\kw{force} w} \tag{IH projection}\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\pair{\pi\kw{force} w}{\pi'\kw{force} w} \tag{$U\beta$}\\
&\mathrel{\gtdyn\ltdyn} \kw{thunk}\kw{force} w \tag{$\mathbin{\&}\eta$}\\
&\mathrel{\gtdyn\ltdyn} w \tag{$U\eta$}\\
\end{align*}
\end{enumerate}
\item $\to$:
\begin{enumerate}
\item Retraction
\begin{align*}
&z : U (A \to \u B)\vdash \\
&\sdncast{A \to \u B}{A' \to \u B'}[\kw{force} \supcast{U(A \to \u B)}{U(A' \to \u B')}[z]]\\
&\mathrel{\gtdyn\ltdyn}
\lambda x:A. (\sdncast{A \to \u B}{A' \to \u B'}[\kw{force} \supcast{U(A \to \u B)}{U(A' \to \u B')}[z]])\,x\tag{$\to\eta$}\\
&\mathrel{\gtdyn\ltdyn}
\lambda x:A.
\sdncast{\u B}{\u B'}[(\kw{force} \supcast{U(A \to \u B)}{U(A' \to \u B')}[z])(\supcast{A}{A'}[x])] \tag{cast reduction}\\
&\mathrel{\gtdyn\ltdyn}
\lambda x:A.\tag{cast reduction}\\
&\quad
\sdncast{\u B}{\u B'}[\bindXtoYinZ{\sdncast{\u F A}{\u F A'}[\kw{ret} \upcast{A}{A'}[x]]} {y} \kw{force} \upcast{U \u B}{U\u B'}[\kw{thunk}((\kw{force} z)\, y)]]\\
&\mathrel{\gtdyn\ltdyn}
\lambda x:A. \sdncast{\u B}{\u B'}[\bindXtoYinZ{\kw{ret} x} {y} \kw{force} \upcast{U \u B}{U\u B'}[\kw{thunk}((\kw{force} z)\, y)]]\tag{IH Retraction}\\
&\mathrel{\gtdyn\ltdyn}
\lambda x:A. \sdncast{\u B}{\u B'}[\kw{force} \upcast{U \u B}{U\u B'}[\kw{thunk}((\kw{force} z)\, x)]]\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn} \lambda x:A. \kw{force} \kw{thunk}((\kw{force} z)\, x) \tag{IH retraction}\\
&\mathrel{\gtdyn\ltdyn} \lambda x:A. (\kw{force} z)\, x \tag{$U\beta$}\\
&\mathrel{\gtdyn\ltdyn} \kw{force} z \tag{$\to\eta$}\\
\end{align*}
\item Projection
\begin{align*}
&w : U (A' \to \u B') \vdash\\
& \supcast{U(A \to \u B)}{U(A' \to \u B')}[\kw{thunk} \sdncast{A \to \u B}{A' \to \u B'}[\kw{force} w]]\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\kw{force}\supcast{U(A \to \u B)}{U(A' \to \u B')}[\kw{thunk} \sdncast{A \to \u B}{A' \to \u B'}[\kw{force} w]]\tag{$U\eta$}\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\lambda x':A'.\\
&\quad(\kw{force}\supcast{U(A \to \u B)}{U(A' \to \u B')}[\kw{thunk} \sdncast{A \to \u B}{A' \to \u B'}[\kw{force} w]])\,x'\tag{$\to\eta$}\\
&\mathrel{\gtdyn\ltdyn} \kw{thunk}\lambda x':A'.\\
&\qquad \bindXtoYinZ {\sdncast{\u F A}{\u F A'}[\kw{ret} x']} x\tag{cast reduction}\\
&\qquad
\kw{force}\supcast{U\u B}{U \u B'}[\kw{thunk}((\kw{force} \kw{thunk} \sdncast{A \to \u B}{A' \to \u B'}[\kw{force} w])\, x)]\\
&\mathrel{\gtdyn\ltdyn} \kw{thunk}\lambda x':A'.\\
&\qquad \bindXtoYinZ {\sdncast{\u F A}{\u F A'}[\kw{ret} x']} x\tag{$U\beta$}\\
&\qquad
\kw{force}\supcast{U\u B}{U \u B'}[\kw{thunk}((\sdncast{A \to \u B}{A' \to \u B'}[\kw{force} w])\, x)]\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\lambda x':A'.\\
&\qquad \bindXtoYinZ {\sdncast{\u F A}{\u F A'}[\kw{ret} x']} x\tag{cast reduction}\\
&\qquad
\kw{force}\supcast{U\u B}{U \u B'}[\kw{thunk}\sdncast{\u B}{\u B'}[(\kw{force} w)\,(\upcast{A}{A'}[x])]]\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\lambda x':A'.\\
&\qquad\bindXtoYinZ {\sdncast{\u F A}{\u F A'}[\kw{ret} x']} x\tag{$\u F\beta$}\\
&\qquad\bindXtoYinZ {\kw{ret}{\upcast{A}{A'}[x]}} {x'}\\
&\qquad\kw{force}\supcast{U\u B}{U \u B'}[\kw{thunk}\sdncast{\u B}{\u B'}[(\kw{force} w)\,x']]\\
&\sqsubseteq
\kw{thunk}\lambda x':A'.\tag{IH projection}\\
&\qquad\bindXtoYinZ {\kw{ret} x'} {x'}\\
&\qquad\kw{force}\supcast{U\u B}{U \u B'}[\kw{thunk}\sdncast{\u B}{\u B'}[(\kw{force} w)\,x']]\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\lambda x':A'.
\kw{force}\supcast{U\u B}{U \u B'}[\kw{thunk}\sdncast{\u B}{\u B'}[(\kw{force} w)\,x']]\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn}
\kw{thunk}\lambda x':A'.
\kw{force}\supcast{U\u B}{U \u B'}[\kw{thunk}\sdncast{\u B}{\u B'}[\kw{force}\kw{thunk}((\kw{force} w)\,x')]]\tag{$\u F\beta$}\\
&\sqsubseteq
\kw{thunk}\lambda x':A'. \kw{force}\kw{thunk}((\kw{force} w)\,x')\tag{IH projection}\\
&\mathrel{\gtdyn\ltdyn} \kw{thunk}\lambda x':A'. ((\kw{force} w)\,x')\tag{$U\beta$}\\
&\mathrel{\gtdyn\ltdyn} \kw{thunk}\kw{force} w\tag{$\to\eta$}\\
&\mathrel{\gtdyn\ltdyn} w \tag{$U\eta$}\\
\end{align*}
\end{enumerate}
\item $\u F$:
\begin{enumerate}
\item To show retraction we need to prove:
\[
z : U \u F A \vdash
\kw{force} z \mathrel{\gtdyn\ltdyn}
\sem{\dncast{\u F A}{\u F A'}}[\kw{force} \kw{thunk} (\bindXtoYinZ {\kw{force} z} x \kw{ret} \sem{\upcast{A}{A'}})]
\]
We calculate:
\begin{align*}
&\sem{\dncast{\u F A}{\u F A'}}[\kw{force} \kw{thunk} (\bindXtoYinZ {\kw{force} z} x \kw{ret} \sem{\upcast{A}{A'}})]\\
&\mathrel{\gtdyn\ltdyn}
\sem{\dncast{\u F A}{\u F A'}}[(\bindXtoYinZ {\kw{force} z} x \kw{ret} \sem{\upcast{A}{A'}})]\tag{$U\beta$}\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\kw{force} z} x \sem{\dncast{\u F A}{\u F A'}}[\kw{ret} \sem{\upcast{A}{A'}}] \tag{comm conv}\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\kw{force} z} x \kw{ret} x \tag{IH value retraction}\\
&\mathrel{\gtdyn\ltdyn} \kw{force} z \tag{$\u F\eta$}
\end{align*}
\item To show projection we need to prove:
\[
w : U \u F A' \vdash
\kw{thunk} {(\bindXtoYinZ {\kw{force} {\kw{thunk} \sem{\dncast{\u F A}{\u F A'}}[\kw{force} w]}} x \kw{ret} \sem{\upcast{A}{A'}})}
\sqsubseteq w : U \u F A'
\]
We calculate as follows
\begin{align*}
&\kw{thunk} {(\bindXtoYinZ {\kw{force} {\kw{thunk} \sem{\dncast{\u F A}{\u F A'}}[\kw{force} w]}} x \kw{ret} \sem{\upcast{A}{A'}})}\\
&\mathrel{\gtdyn\ltdyn}\kw{thunk} {(\bindXtoYinZ {{\sem{\dncast{\u F A}{\u F A'}}[\kw{force} w]}} x \kw{ret} \sem{\upcast{A}{A'}})} \tag{$U\beta$}\\
&\sqsubseteq \kw{thunk} {\kw{force} w} \tag{IH value projection}\\
& \mathrel{\gtdyn\ltdyn} w \tag{$U\eta$}
\end{align*}
\end{enumerate}
\end{enumerate}
\end{longproof}
While the above proof was tedious, it pays off greatly in later
proofs: it is the \emph{only} proof in the entire development that needs to inspect
the definition of a ``shifted'' cast (a downcast between $\u F$ types or
an upcast between $U$ types).
All later lemmas
have cases for these shifted casts, but \emph{only} use the property
that they are part of an ep pair.
This is one of the biggest advantages of using an explicit syntax for
complex values and complex stacks: the shifted casts are the only ones
that non-trivially use effectful terms, so after this lemma is
established we only have to manipulate values and stacks, which
compose much more nicely than effectful terms.
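To build intuition for the retraction and projection properties used throughout, the following is a minimal sketch, not part of the formal development: a hypothetical embedding-projection pair between a ``static'' type of integers and a ``dynamic'' type modeled by arbitrary Python values, with an error standing in for the least element of the error ordering.

```python
# Sketch (hypothetical types, not from the formal development): an
# embedding-projection pair between int and a "dynamic" type modeled by
# arbitrary Python values. The upcast is total; the downcast is partial,
# and a raised error stands for the least element of the error ordering.

class CastError(Exception):
    pass

def embed(n):
    # upcast int -> dyn: simply inject the value
    return n

def project(v):
    # downcast dyn -> int: succeed only on ints, otherwise error
    if isinstance(v, int) and not isinstance(v, bool):
        return v
    raise CastError(v)

# Retraction: projecting an embedded value returns it unchanged.
assert project(embed(7)) == 7

# Projection: embedding a projected value either errors or returns the
# original value, so it lies below the identity in the error ordering.
def embed_project(v):
    try:
        return embed(project(v))
    except CastError:
        return "error"

assert embed_project(3) == 3
assert embed_project("hi") == "error"
```

The two asserted laws are exactly the retraction and projection halves of the ep-pair definition, specialized to this toy model.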
Conceptually, the main reason we can avoid reasoning about the
definitions of the shifted casts directly is that any two shifted casts
that form an ep pair with the same value embedding/stack projection are
equal:
\begin{lemma}[Value Embedding determines Projection, Computation Projection determines Embedding]
\label{lem:adjoints-unique-cbpvstar}
For any value $x : A \vdash V_e : A'$ and stacks $\bullet : \u F A'
\vdash S_1 : \u F A$ and $\bullet : \u F A' \vdash S_2 : \u F A$, if
$(V_e, S_1)$ and $(V_e, S_2)$ are both value ep pairs, then
\[ S_1 \mathrel{\gtdyn\ltdyn} S_2 \]
Similarly for any values $x : U\u B \vdash V_1 : U \u B'$ and $x :
U\u B \vdash V_2 : U \u B'$ and stack $\bullet : \u B' \vdash S_p :
\u B$, if $(V_1, S_p)$ and $(V_2, S_p)$ are both computation ep pairs then
\[ V_1 \mathrel{\gtdyn\ltdyn} V_2 \]
\end{lemma}
\begin{longproof}
By symmetry it is sufficient to show $S_1 \sqsubseteq S_2$.
\begin{mathpar}
\inferrule
{\inferrule
{\inferrule
{\inferrule
{S_1 \sqsubseteq S_1}
{\bindXtoYinZ {S_1} x \kw{ret} x \sqsubseteq \bindXtoYinZ \bullet x S_1[\kw{ret} x]}}
{\bindXtoYinZ {S_1} x \kw{ret} V_e \sqsubseteq \bindXtoYinZ \bullet x \kw{ret} x}}
{\bindXtoYinZ {S_1} x \kw{ret} x \sqsubseteq \bindXtoYinZ \bullet x S_2[\kw{ret} x]}}
{\bullet : \u F A' \vdash S_1 \sqsubseteq S_2 : \u F A}
\end{mathpar}
Similarly, to show $V_1 \sqsubseteq V_2$:
\begin{mathpar}
\inferrule%
{\inferrule
{\inferrule
{x : U \u B \vdash \kw{thunk}\kw{force} V_2 \sqsubseteq \kw{thunk} \kw{force} V_2 : U \u B'}
{x : U \u B \vdash \kw{thunk} \kw{force} x \sqsubseteq \kw{thunk} S_p[\kw{force} V_2]}}
{x : U \u B \vdash \kw{thunk}\kw{force} V_1 \sqsubseteq \kw{thunk} \kw{force} V_2 : U \u B'}}
{x : U \u B \vdash V_1 \sqsubseteq V_2 : U \u B'}
\end{mathpar}
\end{longproof}
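Order-theoretically, this lemma says that the upper adjoint of a Galois connection is uniquely determined by the lower adjoint. The following sketch (not part of the formal development; finite numeric chains stand in for term dynamism) checks this exhaustively for one hypothetical embedding.

```python
# Sketch: for a fixed embedding e of the chain {0,1} into {0,1,2},
# exactly one candidate projection p satisfies the adjunction law
#   e(x) <= y  iff  x <= p(y),
# mirroring the uniqueness lemma above.
from itertools import product

A = [0, 1]       # small chain, standing in for type A
A2 = [0, 1, 2]   # larger chain, standing in for A'

def e(x):
    # a fixed embedding of A into A2
    return {0: 0, 1: 2}[x]

def valid_projections():
    # enumerate all candidate maps p : A2 -> A satisfying the adjunction
    found = []
    for vals in product(A, repeat=len(A2)):
        p = dict(zip(A2, vals))
        if all((e(x) <= y) == (x <= p[y]) for x in A for y in A2):
            found.append(vals)
    return found

# The adjunction law pins down a unique projection: p(0)=0, p(1)=0, p(2)=1.
assert valid_projections() == [(0, 0, 1)]
```

The enumeration finds exactly one map, just as the lemma forces any two projections for the same embedding to coincide.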
The next two lemmas on the way to axiomatic graduality show that
Figure~\ref{fig:cast-to-contract} translates $\upcast{A}{A}$ to the
identity and $\upcast{A'}{A''}[\upcast{A}{A'}]$ to the same contract as
$\upcast{A}{A''}$, and similarly for downcasts.
Intuitively, for all connectives except $\u F, U$, this is because of
functoriality of the type constructors on values and stacks.
For the $\u F, U$ cases, we will use the corresponding fact about the
dual cast, i.e., to prove the $\u F A$ to $\u F A$ downcast is the
identity stack, we know by inductive hypothesis that the $A$ to $A$
upcast is the identity, and that the identity stack is a projection
for the identity.
Therefore Lemma~\ref{lem:adjoints-unique-cbpvstar} implies that the $\u
FA$ downcast must be equivalent to the identity.
We now discuss these two lemmas and their proofs in detail.
First, we show that the casts from a type to itself are equivalent to
the identity.
Below, we will use this lemma to prove the reflexivity case of the
axiomatic graduality theorem, and to prove a conservativity result,
which says that a homogeneous term dynamism in GTT holds if and only if
the corresponding CBPV*\/ inequality holds between the translations of its terms.
\begin{lemma}[Identity Expansion]
\label{lem:ident-expansion}
For any $A$ and $\u B$,
\begin{mathpar}
x:A \vdash \sem{\upcast{A}{A}} \mathrel{\gtdyn\ltdyn} x : A\and
\bullet : \u B \vdash \sem{\dncast{\u B}{\u B}} \mathrel{\gtdyn\ltdyn} \bullet : \u B
\end{mathpar}
\end{lemma}
\begin{proof}
We proceed by induction on $A, \u B$, following the proof that
reflexivity is admissible given in Lemma \ref{lem:norm-type-dyn}.
\begin{enumerate}
\item If $A \in \{1, {?} \}$, then $\supcast{A}{A}[x] = x$.
\item If $A = 0$, then $\kw{absurd} x \mathrel{\gtdyn\ltdyn} x$ by $0\eta$.
\item If $A = U \u B$, then by inductive hypothesis $\sdncast{\u
B}{\u B} \mathrel{\gtdyn\ltdyn} \bullet$. By Lemma \ref{ep-pair-id},
$(x. x, \bullet)$ is a computation ep pair from $\u B$ to
itself. But by Lemma \ref{lem:casts-are-ep-pairs}, $(\supcast{U\u
B}{U\u B}[x], \bullet)$ is also a computation ep pair so the
result follows by uniqueness of embeddings from computation
projections Lemma \ref{lem:adjoints-unique-cbpvstar}.
\item If $A = A_1\times A_2$ or $A = A_1+A_2$, the result follows by
the $\eta$ principle and inductive hypothesis.
\item If $\u B = \u {\text{?`}}$, $\sdncast{\u {\text{?`}}}{\u {\text{?`}}} = \bullet$.
\item For $\u B = \top$, the result follows by $\top\eta$.
\item For $\u B = \u B_1 \mathbin{\&} \u B_2$ or $\u B = A \to \u B'$, the
result follows by inductive hypothesis and $\eta$.
\item For $\u B = \u FA$, by inductive hypothesis, the downcast is a
projection for the value embedding $x.x$, so the result follows by
identity ep pair and uniqueness of projections from value
embeddings.
\end{enumerate}
\end{proof}
Second, we show that a composition of upcasts is translated to the same
thing as a direct upcast, and similarly for downcasts. Below, we will
use this lemma to translate \emph{transitivity} of term dynamism in GTT.
\begin{lemma}[Cast Decomposition]
For any dynamic type interpretation $\rho$,
\begin{small}
\begin{mathpar}
\inferrule
{A \sqsubseteq A' \sqsubseteq A''}
{x : A \vdash \srho{\upcast A {A''}} \mathrel{\gtdyn\ltdyn} \srho{\upcast {A'} {A''}}[\srho{\upcast {A} {A'}}] : A''}
\inferrule
{\u B \sqsubseteq \u B' \sqsubseteq \u B''}
{\bullet : \u B'' \vdash \srho{\dncast{\u B}{\u B''}} \mathrel{\gtdyn\ltdyn}
\srho{\dncast{\u B}{\u B'}}[\srho{\dncast{\u B'}{\u B''}}]}
\end{mathpar}
\end{small}
\end{lemma}
\begin{longproof}
By mutual induction on $A, \u B$.
\begin{enumerate}
\item $A \sqsubseteq A' \sqsubseteq A''$
\begin{enumerate}
\item If $A = 0$, we need to show $x : 0 \vdash
\supcast{0}{A''}[x] \mathrel{\gtdyn\ltdyn}
\supcast{A'}{A''}[\supcast{0}{A'}[x]] : A''$ which follows by
$0\eta$.
\item If $A = {?}$, then $A' = A'' = {?}$, and both casts are
the identity.
\item If $A \not\in \{{?}, 0 \}$ and $A' = {?}$, then $A'' =
{?}$ and $\supcast{{?}}{{?}}[\supcast{A}{{?}}] =
\supcast{A}{{?}}$ by definition.
\item If $A, A' \not\in \{{?}, 0 \}$ and $A'' = {?}$, then
$\floor A = \floor {A'}$, which we call $G$ and
\[ \supcast{A}{{?}} = \supcast{G}{{?}}[\supcast{A}{G}] \]
and
\[ \supcast{A'}{{?}}[\supcast{A}{A'}] = \supcast{G}{{?}}[\supcast{A'}{G}[\supcast{A}{A'}]] \]
so this reduces to the case for $A \sqsubseteq A' \sqsubseteq G$, below.
\item If $A,A',A'' \not\in \{{?}, 0 \}$, then they all have the same
top-level constructor:
\begin{enumerate}
\item $+$: We need to show for $A_1 \sqsubseteq A_1' \sqsubseteq A_1''$
and $A_2 \sqsubseteq A_2' \sqsubseteq A_2''$:
\[
x : \sem{A_1} + \sem{A_2} \vdash
\supcast{A_1'+A_2'}{A_1''+A_2''}[\supcast{A_1+A_2}{A_1'+A_2'}[x]]\mathrel{\gtdyn\ltdyn}
\supcast{A_1+A_2}{A_1''+A_2''}[x]
: \sem{A_1''}+\sem{A_2''}.
\]
We proceed as follows:
\begin{align*}
&\supcast{A_1'+A_2'}{A_1''+A_2''}[\supcast{A_1+A_2}{A_1'+A_2'}[x]]\\
&\mathrel{\gtdyn\ltdyn} \caseofX {x}\tag{$+\eta$}\\
&\qquad\{ {x_1. \supcast{A_1'+A_2'}{A_1''+A_2''}[\supcast{A_1+A_2}{A_1'+A_2'}[\kw{inl} x_1]]}\\
&\qquad\elseZ {x_2. \supcast{A_1'+A_2'}{A_1''+A_2''}[\supcast{A_1+A_2}{A_1'+A_2'}[\kw{inr} x_2]]}\\
&\mathrel{\gtdyn\ltdyn} \caseofX {x}\tag{cast reduction}\\
&\qquad\{ {x_1. \supcast{A_1'+A_2'}{A_1''+A_2''}[\kw{inl}\supcast{A_1}{A_1'}[x_1]]}\\
&\qquad\elseZ {x_2. \supcast{A_1'+A_2'}{A_1''+A_2''}[\kw{inr}\supcast{A_2}{A_2'}[x_2]]}\\
&\mathrel{\gtdyn\ltdyn} \caseofX {x}\tag{cast reduction}\\
&\qquad\{ {x_1. \kw{inl}\supcast{A_1'}{A_1''}[\supcast{A_1}{A_1'}[x_1]]}\\
&\qquad\elseZ {x_2. \kw{inr}\supcast{A_2'}{A_2''}[\supcast{A_2}{A_2'}[x_2]]}\\
&\mathrel{\gtdyn\ltdyn} \caseofX {x}\tag{IH}\\
&\qquad\{ {x_1. \kw{inl}\supcast{A_1}{A_1''}[x_1]}\\
&\qquad\elseZ {x_2. \kw{inr}\supcast{A_2}{A_2''}[x_2]}\\
&= \supcast{A_1+A_2}{A_1''+A_2''}[x] \tag{definition}
\end{align*}
\item $1$: By definition both sides are the identity.
\item $\times$: We need to show for $A_1 \sqsubseteq A_1' \sqsubseteq A_1''$
and $A_2 \sqsubseteq A_2' \sqsubseteq A_2''$:
\[
x : \sem{A_1} \times \sem{A_2} \vdash
\supcast{A_1'\times A_2'}{A_1''\times A_2''}[\supcast{A_1\times A_2}{A_1'\times A_2'}[x]]\mathrel{\gtdyn\ltdyn}
\supcast{A_1\times A_2}{A_1''\times A_2''}[x]
: \sem{A_1''}\times \sem{A_2''}.
\]
We proceed as follows:
\begin{align*}
&\supcast{A_1'\times A_2'}{A_1''\times A_2''}[\supcast{A_1\times A_2}{A_1'\times A_2'}[x]]\\
&\mathrel{\gtdyn\ltdyn}\pmpairWtoXYinZ x y z \supcast{A_1'\times A_2'}{A_1''\times A_2''}[\supcast{A_1\times A_2}{A_1'\times A_2'}[(y,z)]]\tag{$\times\eta$}\\
&\mathrel{\gtdyn\ltdyn}\pmpairWtoXYinZ x y z \supcast{A_1'\times A_2'}{A_1''\times A_2''}[(\supcast{A_1}{A_1'}[y], \supcast{A_2}{A_2'}[z])]\tag{cast reduction}\\
&\mathrel{\gtdyn\ltdyn}\pmpairWtoXYinZ x y z (\supcast{A_1'}{A_1''}[\supcast{A_1}{A_1'}[y]], \supcast{A_2'}{A_2''}[\supcast{A_2}{A_2'}[z]])\tag{cast reduction}\\
&\mathrel{\gtdyn\ltdyn}\pmpairWtoXYinZ x y z (\supcast{A_1}{A_1''}[y], \supcast{A_2}{A_2''}[z])\tag{IH}\\
&=\supcast{A_1\times A_2}{A_1'' \times A_2''}[x]\tag{definition}
\end{align*}
\item $U \u B \sqsubseteq U \u B' \sqsubseteq U \u B''$.
We need to show
\[
x : U \u B \vdash \supcast{U\u B'}{U\u B''}[\supcast{U\u B}{U\u B'}[x]] \mathrel{\gtdyn\ltdyn}
\supcast{U\u B}{U\u B''}[x] : U\u B''
\]
By composition of ep pairs, we know $(x.\supcast{U\u B'}{U\u
B''}[\supcast{U\u B}{U\u B'}[x]], \sdncast{\u B}{\u
B'}[\sdncast{\u B'}{\u B''}])$ is a computation ep pair.
Furthermore, by inductive hypothesis, we know
\[ \sdncast{\u B}{\u B'}[\sdncast{\u B'}{\u B''}] \mathrel{\gtdyn\ltdyn} \sdncast{\u B}{\u B''}\]
so both sides form computation ep pairs with $\sdncast{\u
B}{\u B''}$, and the result follows because computation projections
determine embeddings (Lemma~\ref{lem:adjoints-unique-cbpvstar}).
\end{enumerate}
\end{enumerate}
\item $\u B \sqsubseteq \u B' \sqsubseteq \u B''$
\begin{enumerate}
\item If $\u B = \top$, then the result is immediate by $\top\eta$.
\item If $\u B = \u {\text{?`}}$, then $\u B' = \u B'' = \u {\text{?`}}$ then both
sides are just $\bullet$.
\item If $\u B \not\in \{\u {\text{?`}}, \top\}$, and $\u B' = \u {\text{?`}}$, then
$\u B'' = \u {\text{?`}}$
\[ \sdncast{\u B}{\u {\text{?`}}}[\sdncast{\u {\text{?`}}}{\u {\text{?`}}}] = \sdncast{\u B}{\u {\text{?`}}} \]
\item If $\u B,\u B' \not\in \{\u {\text{?`}},\top\}$ and $\u B'' = \u {\text{?`}}$, then $\floor {\u B} = \floor {\u B'}$, which we
call $\u G$, and we need to show
\[ \sdncast{\u B}{\u B'}[\sdncast{\u B'}{\u G}[\sdncast{\u G}{\u {\text{?`}}}]]
\mathrel{\gtdyn\ltdyn}
\sdncast{\u B}{\u G}[\sdncast{\u G}{\u {\text{?`}}}]
\]
so the result follows from the case $\u B \sqsubseteq \u B' \sqsubseteq \u
G$, which is handled below.
\item If $\u B,\u B',\u B'' \not\in \{\u {\text{?`}}, \top\}$, then they all have the
same top-level constructor:
\begin{enumerate}
\item $\mathbin{\&}$ We are given $\u B_1 \sqsubseteq \u B_1' \sqsubseteq \u
B_1''$ and $\u B_2 \sqsubseteq \u B_2' \sqsubseteq \u B_2''$ and we need to show
\[
\bullet : \u B_1'' \mathbin{\&} \u B_2''
\vdash
\sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\sdncast{\u B_1' \mathbin{\&} \u B_2'}{\u B_1'' \mathbin{\&} \u B_2''}]
\mathrel{\gtdyn\ltdyn}
\sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1'' \mathbin{\&} \u B_2''}
: \u B_1 \mathbin{\&} \u B_2
\]
We proceed as follows:
\begin{align*}
&\sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\sdncast{\u B_1' \mathbin{\&} \u B_2'}{\u B_1'' \mathbin{\&} \u B_2''}]\\
&\mathrel{\gtdyn\ltdyn}\pairone{\pi\sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\sdncast{\u B_1' \mathbin{\&} \u B_2'}{\u B_1'' \mathbin{\&} \u B_2''}]}\tag{$\mathbin{\&}\eta$}\\
&\quad\pairtwo{\pi'\sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\sdncast{\u B_1' \mathbin{\&} \u B_2'}{\u B_1'' \mathbin{\&} \u B_2''}]}\\
&\mathrel{\gtdyn\ltdyn}\pairone{\sdncast{\u B_1}{\u B_1'}[\pi\sdncast{\u B_1' \mathbin{\&} \u B_2'}{\u B_1'' \mathbin{\&} \u B_2''}]}\tag{cast reduction}\\
&\quad\pairtwo{\sdncast{\u B_2}{\u B_2'}[\pi'\sdncast{\u B_1' \mathbin{\&} \u B_2'}{\u B_1'' \mathbin{\&} \u B_2''}]}\\
&\mathrel{\gtdyn\ltdyn}\pairone{\sdncast{\u B_1}{\u B_1'}[\sdncast{\u B_1'}{\u B_1''}[\pi\bullet]]}\tag{cast reduction}\\
&\quad\pairtwo{\sdncast{\u B_2}{\u B_2'}[\sdncast{\u B_2'}{\u B_2''}[\pi'\bullet]]}\\
&\mathrel{\gtdyn\ltdyn}\pair{\sdncast{\u B_1}{\u B_1''}[\pi\bullet]}{\sdncast{\u B_2}{\u B_2''}[\pi'\bullet]}\tag{IH}\\
&= \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1'' \mathbin{\&} \u B_2''} \tag{definition}
\end{align*}
\item $\to$, assume we are given $A \sqsubseteq A' \sqsubseteq A''$ and
$\u B \sqsubseteq \u B' \sqsubseteq \u B''$, then we proceed:
\begin{align*}
&\sdncast{A \to \u B}{A' \to \u B'}[\sdncast{A' \to \u B'}{A'' \to \u B''}]\\
&\mathrel{\gtdyn\ltdyn} \lambda x:A. (\sdncast{A \to \u B}{A' \to \u B'}[\sdncast{A' \to \u B'}{A'' \to \u B''}][\bullet])\,x\tag{$\to\eta$}\\
&\mathrel{\gtdyn\ltdyn} \lambda x:A. \sdncast{\u B}{\u B'}[(\sdncast{A' \to \u B'}{A'' \to \u B''}[\bullet])\, \supcast{A}{A'}[x]] \tag{cast reduction}\\
&\mathrel{\gtdyn\ltdyn} \lambda x:A. \sdncast{\u B}{\u B'}[\sdncast{\u B'}{\u B''}[\bullet\, \supcast{A'}{A''}[\supcast{A}{A'}[x]]]]\tag{cast reduction}\\
&\mathrel{\gtdyn\ltdyn} \lambda x:A. \sdncast{\u B}{\u B''}[\bullet\,\supcast{A}{A''}[x]]\tag{IH}\\
&= \sdncast{A \to \u B}{A'' \to \u B''}[\bullet]\tag{definition}
\end{align*}
\item $\u F A \sqsubseteq \u F A' \sqsubseteq \u F A''$. First, by
composition of ep pairs, we know
\[ (x. \supcast{A'}{A''}[\supcast{A}{A'}[x]], \sdncast{\u F
A}{\u F A'}[\sdncast{\u F A'}{\u F A''}])\]
forms a value ep pair.
Furthermore, by inductive hypothesis, we know
\[ x : A \vdash \supcast{A'}{A''}[\supcast{A}{A'}[x]] \mathrel{\gtdyn\ltdyn} \supcast{A}{A''}[x] \]
so the two sides of our equation are both projections with the
same value embedding, so the equation follows from uniqueness
of projections from value embeddings.
\end{enumerate}
\end{enumerate}
\end{enumerate}
\end{longproof}
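Concretely, cast decomposition says that casting through an intermediate type agrees with the direct cast. The following sketch (hypothetical types, not from the formal development) models the chain int $\sqsubseteq$ number $\sqsubseteq$ dyn by Python functions and checks both the upcast and downcast directions on sample values.

```python
# Sketch (hypothetical types): upcasts along int ⊑ number ⊑ dyn, modeled
# as Python functions. Cast decomposition says the composite cast agrees
# with the direct one, in both directions.

def up_int_num(n):
    # upcast int -> number
    return float(n)

def up_num_dyn(x):
    # upcast number -> dyn: tag the value
    return ("num", x)

def up_int_dyn(n):
    # the direct upcast int -> dyn
    return ("num", float(n))

for n in [0, 1, -5]:
    assert up_num_dyn(up_int_num(n)) == up_int_dyn(n)

# Dually for downcasts (partial; assertion failure models a cast error):
def dn_num_int(x):
    assert x == int(x)
    return int(x)

def dn_dyn_num(v):
    tag, x = v
    assert tag == "num"
    return x

def dn_dyn_int(v):
    tag, x = v
    assert tag == "num" and x == int(x)
    return int(x)

for n in [0, 1, -5]:
    assert dn_num_int(dn_dyn_num(up_int_dyn(n))) == dn_dyn_int(up_int_dyn(n))
```

On values where the downcasts succeed, the composite and direct casts are literally equal, matching the $\mathrel{\gtdyn\ltdyn}$ conclusions of the lemma.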
The final lemma before the graduality theorem lets us ``move a cast''
from left to right or vice-versa, via the adjunction property for ep
pairs.
These arise in the proof cases for $\kw{return}$ and $\kw{thunk}$, because in those
cases the inductive hypothesis is in terms of an upcast (downcast) and
the conclusion is in terms of a downcast (upcast).
\begin{lemma}[Hom-set formulation of Adjunction]
\label{lem:hom-set-adj}
For any value embedding-projection pair $V_e,S_p$ from $A$ to $A'$,
the following are equivalent:
\begin{small}
\begin{mathpar}
\mprset{fraction={===}}
\inferrule
{\Gamma \vdash \kw{ret} V_e[V] \sqsubseteq M : \u F A'}
{\Gamma \vdash \kw{ret} V \sqsubseteq S_p[M] : \u F A}
\end{mathpar}
\end{small}
For any computation ep pair $(V_e,S_p)$ from $\u B$ to $\u B'$, the
following are equivalent:
\begin{small}
\begin{mathpar}
\mprset{fraction={===}}
\inferrule
{\Gamma, z' : U \u B' \vdash M \sqsubseteq S[S_p[\kw{force} z']] : \u C}
{\Gamma, z : U \u B \vdash M[V_e/z'] \sqsubseteq S[\kw{force} z] : \u C}
\end{mathpar}
\end{small}
\end{lemma}
\begin{longproof}
\begin{enumerate}
\item Assume $\kw{ret} V_e[V] \sqsubseteq M : \u F A'$. Then by retraction,
$\kw{ret} V \sqsubseteq S_p[\kw{ret} V_e[V]]$ so by transitivity, the result
follows by substitution:
\begin{mathpar}
\inferrule
{S_p \sqsubseteq S_p \and \kw{ret} V_e[V] \sqsubseteq M}
{S_p[\kw{ret} V_e[V]] \sqsubseteq M}
\end{mathpar}
\item Assume $\kw{ret} V \sqsubseteq S_p[M] : \u F A$. Then by projection,
$\bindXtoYinZ {S_p[M]} x \kw{ret} V_e[x] \sqsubseteq M$, so it is sufficient to show
\[ \kw{ret} V_e[V] \sqsubseteq \bindXtoYinZ {S_p[M]} x \kw{ret} V_e[x] \]
but again by substitution we have
\[ \bindXtoYinZ {\kw{ret} V} x \kw{ret} V_e[x] \sqsubseteq \bindXtoYinZ {S_p[M]} x \kw{ret} V_e[x]\]
and by $\u F\beta$, the LHS is equivalent to $\kw{ret} V_e[V]$.
\item Assume $z' : U\u {B'} \vdash M \sqsubseteq S[S_p[\kw{force} z']]$, then
by projection, $S[S_p[\kw{force} V_e]] \sqsubseteq S[\kw{force} z]$
and by substitution:
\begin{mathpar}
\inferrule
{M \sqsubseteq S[S_p[\kw{force} z']]\and V_e \sqsubseteq V_e \and S[S_p[\kw{force} V_e]] = (S[S_p[\kw{force} z']])[V_e/z']}
{M[V_e/z'] \sqsubseteq S[S_p[\kw{force} V_e]]}
\end{mathpar}
\item Assume $z : U \u B \vdash M[V_e/z'] \sqsubseteq S[\kw{force} z]$. Then
by retraction, $M \sqsubseteq M[V_e[\kw{thunk}{S_p[\kw{force} z]}]]$ and by
substitution:
\[ M[V_e[\kw{thunk}{S_p[\kw{force} z]}]] \sqsubseteq S[\kw{force} \kw{thunk}{S_p[\kw{force} z]}] \]
and the right is equivalent to $S[S_p[\kw{force} z]]$ by $U\beta$.
\end{enumerate}
\end{longproof}
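The hom-set formulation above is the familiar Galois-connection property $e(x) \sqsubseteq y$ iff $x \sqsubseteq p(y)$. The following sketch (hypothetical, not part of the formal development) checks this equivalence exhaustively on finite chains, with numeric order standing in for term dynamism.

```python
# Sketch: the adjunction law e(x) <= y  iff  x <= p(y), checked
# exhaustively for a toy ep pair between the chains {0,1} and {0,1,2}.

def e(x):
    # embedding of {0,1} into {0,1,2}
    return {0: 0, 1: 2}[x]

def p(y):
    # projection back down: the largest x with e(x) <= y
    return max(x for x in [0, 1] if e(x) <= y)

# Both directions of the hom-set bijection hold at every point.
for x in [0, 1]:
    for y in [0, 1, 2]:
        assert (e(x) <= y) == (x <= p(y))
```

Reading $\le$ as term dynamism, each direction of the loop body corresponds to one direction of the double-line rule in the lemma.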
Finally, we prove the axiomatic graduality theorem.
In addition to the lemmas above, the main task is to prove the
``compatibility'' cases which are the congruence cases for introduction
and elimination rules.
These come down to proving that the casts ``commute'' with
introduction/elimination forms, and are all simple calculations.
\begin{nonnum-theorem}[Axiomatic Graduality]
For any dynamic type interpretation, the following are true:
\begin{small}
\begin{mathpar}
\inferrule
{\Phi : \Gamma \sqsubseteq \Gamma'\\
\Psi : \Delta \sqsubseteq \Delta'\\
\Phi \,\,|\,\, \Psi \vdash M \sqsubseteq M' : \u B \sqsubseteq \u B'}
{\sem\Gamma \,\,|\,\, \sem{\Delta'} \vdash \sem M[\sem{\Psi}] \sqsubseteq \sdncast{\u B}{\u B'}[\sem{M'}[\sem{\Phi}]] : \sem{\u B}}
\inferrule
{\Phi : \Gamma \sqsubseteq \Gamma' \\
\Phi \vdash V \sqsubseteq V' : A \sqsubseteq A'}
{\sem{\Gamma} \vdash \supcast{A}{A'}[\sem{V}] \sqsubseteq\sem{V'}[\sem\Phi] : \sem {A'}}
\end{mathpar}
\end{small}
\end{nonnum-theorem}
\begin{longproof}
By mutual induction over term dynamism derivations. For the $\beta,
\eta$ and reflexivity rules, we use the identity expansion lemma
(Lemma~\ref{lem:ident-expansion}) and the corresponding $\beta, \eta$
rule of CBPV*.
For the compatibility rules, a pattern emerges. Universal rules
(positive intro, negative elim) are easy: we do not need to reason about
casts at all. For ``(co-)pattern matching'' rules (positive elim,
negative intro), we need to invoke the $\eta$ principle (or a
commuting conversion, which is derived from the $\eta$ principle).
In all compatibility cases, the cast reduction lemma keeps the
proof straightforward.
Fortunately, all reasoning about ``shifted'' casts is handled in
lemmas, and here we only deal with the ``nice'' value upcasts/stack
downcasts.
\begin{enumerate}
\item Transitivity for values: The GTT rule is
\[
\inferrule{
\Phi : \Gamma \sqsubseteq \Gamma' \and \Phi' : \Gamma' \sqsubseteq \Gamma'' \and
\Phi'' : \Gamma \sqsubseteq \Gamma''
\\
\Phi \vdash V \sqsubseteq V' : A \sqsubseteq A'\\
\Phi' \vdash V' \sqsubseteq V'' : A' \sqsubseteq A''\\
}
{ \Phi'' \vdash V \sqsubseteq V'' : A \sqsubseteq A''}
\]
Under translation (with the same assumptions about the contexts), this becomes
\[
\inferrule
{\sem{\Gamma} \vdash \supcast{A}{A'}[\sem{V}] \sqsubseteq \sem{V'}[\sem{\Phi}] : \sem{A'}\\
\sem{\Gamma'} \vdash \supcast{A'}{A'}[\sem{V'}] \sqsubseteq \sem{V''}[\sem{\Phi'}] : \sem{A''}
}
{\sem{\Gamma} \vdash \supcast{A}{A''}[\sem{V}] \sqsubseteq \sem{V''}[\sem{\Phi''}] : \sem{A''}}
\]
We proceed as follows; the key lemma here is the cast decomposition lemma:
\begin{align*}
\supcast{A}{A''}[\sem{V}]
&\mathrel{\gtdyn\ltdyn}
\supcast{A'}{A''}[\supcast{A}{A'}[\sem{V}]] \tag{cast decomposition}\\
&\sqsubseteq \supcast{A'}{A''}[\sem{V'}[\sem{\Phi}]] \tag{IH}\\
&\sqsubseteq \sem{V''}[\sem{\Phi'}][\sem{\Phi}] \tag{IH}\\
&\mathrel{\gtdyn\ltdyn} \sem{V''}[\sem{\Phi''}] \tag{cast decomposition}
\end{align*}
\item Transitivity for terms:
The GTT rule is
\[
\inferrule{
\Phi : \Gamma \sqsubseteq \Gamma' \and \Phi' : \Gamma' \sqsubseteq \Gamma'' \and
\Phi'' : \Gamma \sqsubseteq \Gamma''
\and \Psi : \Delta \sqsubseteq \Delta' \and \Psi' : \Delta' \sqsubseteq \Delta''
\and \Psi'' : \Delta\sqsubseteq \Delta''
\\
\Phi \,\,|\,\, \Psi \vdash M \sqsubseteq M' : \u B \sqsubseteq \u B'\\
\Phi' \,\,|\,\, \Psi' \vdash M' \sqsubseteq M'' : \u B' \sqsubseteq \u B''\\
}
{ \Phi'' \,\,|\,\, \Psi'' \vdash M \sqsubseteq M'' : \u B \sqsubseteq \u B''}
\]
Under translation (with the same assumptions about the contexts), this becomes
\[
\inferrule
{\sem{\Gamma} \,\,|\,\, \sem{\Delta'} \vdash \sem{M}[\sem{\Psi}] \sqsubseteq \sdncast{\u B}{\u B'}[\sem{M'}[\sem{\Phi}]] : \sem{\u B}\\
\sem{\Gamma'} \,\,|\,\, \sem{\Delta''} \vdash \sem{M'}[\sem{\Psi'}] \sqsubseteq \sdncast{\u B'}{\u B''}[\sem{M''}[\sem{\Phi'}]] : \sem{\u B'}}
{\sem{\Gamma} \,\,|\,\, \sem{\Delta''} \vdash \sem{M}[\sem{\Psi''}] \sqsubseteq \sdncast{\u B}{\u B''}[\sem{M''}[\sem{\Phi''}]] : \sem{\u B}}
\]
We proceed as follows; the key lemma here is the cast decomposition lemma:
\begin{align*}
\sem{M}[\sem{\Psi''}]
&\mathrel{\gtdyn\ltdyn}
\sem{M}[\sem{\Psi}][\sem{\Psi'}] \tag{Cast decomposition}\\
&\sqsubseteq \sdncast{\u B}{\u B'}[\sem{M'}[\sem{\Psi'}][\sem{\Phi}]]\tag{IH}\\
&\sqsubseteq \sdncast{\u B}{\u B'}[\sdncast{\u B'}{\u B''}[\sem{M''}[\sem{\Phi'}][\sem{\Phi}]]]\tag{IH}\\
&\mathrel{\gtdyn\ltdyn} \sdncast{\u B}{\u B''}[\sem{M''}[\sem{\Phi''}]] \tag{Cast decomposition}
\end{align*}
\item Substitution of a value in a value:
The GTT rule is
\[
\inferrule
{\Phi, x \sqsubseteq x' : A_1 \sqsubseteq A_1' \vdash V_2 \sqsubseteq V_2' : A_2 \sqsubseteq A_2'\\
\Phi \vdash V_1 \sqsubseteq V_1' : A_1 \sqsubseteq A_1'}
{\Phi \vdash V_2[V_1/x]\sqsubseteq V_2'[V_1'/x'] : A_2 \sqsubseteq A_2'}
\]
Where $\Phi : \Gamma \sqsubseteq \Gamma'$. Under translation, we need to show
\[
\inferrule
{\sem\Gamma, x : \sem{A_1} \vdash \supcast{A_2}{A_2'}[\sem{V_2}] \sqsubseteq \sem{V_2'}[\sem\Phi][\supcast{A_1}{A_1'}[x]/x'] : \sem{A_2'}\\
\sem\Gamma \vdash \supcast{A_1}{A_1'}[\sem{V_1}] \sqsubseteq \sem{V_1'}[\sem\Phi] : \sem{A_1'}}
{\sem\Gamma \vdash \supcast{A_2}{A_2'}[\sem{V_2[V_1/x]}] \sqsubseteq \sem{V_2'[V_1'/x']}[\sem\Phi] : \sem{A_2'}}
\]
Which follows by compositionality:
\begin{align*}
\supcast{A_2}{A_2'}[\sem{V_2[V_1/x]}]
&= (\supcast{A_2}{A_2'}[\sem{V_2}])[\sem{V_1}/x] \tag{Compositionality}\\
&\sqsubseteq \sem{V_2'}[\sem\Phi][\supcast{A_1}{A_1'}[x]/x'][\sem{V_1}/x]\tag{IH}\\
&= \sem{V_2'}[\sem\Phi][\supcast{A_1}{A_1'}[\sem{V_1}]/x']\\
&\sqsubseteq \sem{V_2'}[\sem\Phi][\sem{V_1'}[\sem\Phi]/x']\tag{IH}\\
&= \sem{V_2'[V_1'/x']}[\sem\Phi]
\end{align*}
\item Substitution of a value in a term:
The GTT rule is
\[
\inferrule
{\Phi, x \sqsubseteq x' : A \sqsubseteq A' \,\,|\,\, \Psi \vdash M \sqsubseteq M' : \u B \sqsubseteq \u B'\\
\Phi \vdash V \sqsubseteq V' : A \sqsubseteq A'
}
{\Phi \vdash M[V/x] \sqsubseteq M'[V'/x'] : \u B \sqsubseteq \u B'}
\]
Where $\Phi : \Gamma \sqsubseteq \Gamma'$ and $\Psi : \Delta \sqsubseteq \Delta'$.
Under translation this is:
\[
\inferrule
{\sem\Gamma, x : \sem{A} \,\,|\,\, \sem\Delta \vdash \sem M \sqsubseteq \sdncast{\u B}{\u B'}[\sem {M'}[\sem\Phi][\supcast{A}{A'}[x]/x']] : \sem{\u B}\\
\sem\Gamma \vdash \supcast{A}{A'}[{\sem V}] \sqsubseteq \sem{V'}[\sem\Phi] : \sem{A'}}
{\sem\Gamma \,\,|\,\, \sem\Delta \vdash \sem {M[V/x]} \sqsubseteq \sdncast{\u B}{\u B'}[\sem{M'[V'/x']}[\sem\Phi]] : \sem{\u B}}
\]
Which follows from compositionality of the translation:
\begin{align*}
\sem {M[V/x]}
&= \sem{M}[\sem{V}/x] \tag{Compositionality}\\
&\sqsubseteq \sdncast{\u B}{\u B'}[\sem {M'}[\sem\Phi][\supcast{A}{A'}[x]/x']][\sem{V}/x] \tag{IH}\\
&= \sdncast{\u B}{\u B'}[\sem {M'}[\sem\Phi][\supcast{A}{A'}[\sem{V}]/x']]\\
&\sqsubseteq \sdncast{\u B}{\u B'}[\sem {M'}[\sem\Phi][\sem{V'}[\sem\Phi]/x']]\tag{IH}\\
&= \sdncast{\u B}{\u B'}[\sem{M'[V'/x']}[\sem\Phi]] \tag{Compositionality}
\end{align*}
\item Substitution of a term in a stack:
The GTT rule is
\[
\inferrule
{\Phi \,\,|\,\, \bullet \sqsubseteq \bullet : \u B \sqsubseteq \u B' \vdash S \sqsubseteq S' : \u C \sqsubseteq \u C'\\
\Phi \,\,|\,\, \cdot \vdash M \sqsubseteq M' : \u B \sqsubseteq \u B'}
{\Phi \,\,|\,\, \cdot \vdash S[M]\sqsubseteq S'[M'] : \u C \sqsubseteq \u C'}
\]
Where $\Phi : \Gamma \sqsubseteq \Gamma'$.
Under translation this is
\[
\inferrule
{\sem\Gamma \,\,|\,\, \bullet : \sem{\u B'} \vdash \sem{S}[\sdncast{\u B}{\u B'}[\bullet]] \sqsubseteq \sdncast{\u C}{\u C'}[\sem{S'}[\sem\Phi]] : \sem{\u C}\\
\sem\Gamma \,\,|\,\, \cdot \vdash \sem{M} \sqsubseteq \sdncast{\u B}{\u B'}[\sem{M'}[\sem\Phi]] : \sem{\u B}}
{\sem\Gamma \,\,|\,\, \cdot \vdash \sem{S[M]} \sqsubseteq \sdncast{\u C}{\u C'}[\sem{S'[M']}[\sem\Phi]] : \sem{\u C}}
\]
This follows easily using compositionality of the translation:
\begin{align*}
\sem{S[M]}
&= \sem{S}[\sem{M}] \tag{Compositionality}\\
&\sqsubseteq \sem{S}[\sdncast{\u B}{\u B'}[\sem{M'}[\sem\Phi]]] \tag{IH}\\
&\sqsubseteq \sdncast{\u C}{\u C'}[\sem{S'}[\sem\Phi][\sem{M'}[\sem\Phi]]]\tag{IH}\\
&= \sdncast{\u C}{\u C'}[\sem{S'[M']}[\sem\Phi]] \tag{Compositionality}
\end{align*}
\item Variables: The GTT rule is
\[ \Gamma_1 \sqsubseteq \Gamma_1' ,x \sqsubseteq x' : A \sqsubseteq A', \Gamma_2 \sqsubseteq \Gamma_2' \vdash x \sqsubseteq x' : A \sqsubseteq A' \]
which under translation is
\[ \sem{\Gamma_1}, x : \sem A, \sem{\Gamma_2} \vdash \supcast{A}{A'}[x] \sqsubseteq \supcast{A}{A'}[x] : \sem{A'} \]
which is an instance of reflexivity.
\item Hole: The GTT rule is
\[ \Phi \,\,|\,\, \bullet \sqsubseteq \bullet : \u B \sqsubseteq \u B' \vdash \bullet \sqsubseteq \bullet : \u B \sqsubseteq \u B' \]
which under translation is
\[ \sem\Gamma \,\,|\,\, \bullet : \sem{\u B'} \vdash \sdncast{\u B}{\u B'}[\bullet] \sqsubseteq \sdncast{\u B}{\u B'}[\bullet] : \sem{\u B} \]
which is an instance of reflexivity.
\item Error is bottom: The GTT axiom is
\[ \Phi \vdash \mho \sqsubseteq M : \u B \]
where $\Phi : \Gamma \sqsubseteq \Gamma'$, so we need to show
\[ \sem\Gamma \vdash \mho \sqsubseteq \sdncast{\u B}{\u B}[\sem{M}[\sem{\Phi}]] : \sem{\u B} \]
which is an instance of the error is bottom axiom of CBPV.
\item Error strictness: The GTT axiom is
\[
\Phi \vdash S[\mho] \sqsubseteq \mho : \u B
\]
where $\Phi : \Gamma \sqsubseteq \Gamma'$, which under translation is
\[
\sem\Gamma \vdash \sem{S}[\mho] \sqsubseteq \sdncast{\u B}{\u B}[\mho] : \sem{\u B}
\]
By strictness of stacks in CBPV, both sides are equivalent to
$\mho$, so it follows by reflexivity.
\item UpCast-L: The GTT axiom is
\[
x \sqsubseteq x' : A \sqsubseteq A' \vdash \upcast{A}{A'}x \sqsubseteq x' : A'
\]
which under translation is
\[
x : \sem{A} \vdash \supcast{A'}{A'}[\supcast{A}{A'}[x]] \sqsubseteq \supcast{A}{A'}[x] : \sem{A'}
\]
Which follows by identity expansion and reflexivity.
\item UpCast-R: The GTT axiom is
\[
x : A \vdash x \sqsubseteq \upcast{A}{A'}x : A \sqsubseteq A'
\]
which under translation is
\[
x : \sem{A} \vdash \supcast{A}{A'}[x] \sqsubseteq \supcast{A}{A'}[\supcast{A}{A}[x]] : \sem{A'}
\]
which follows by identity expansion and reflexivity.
\item DnCast-R: The GTT axiom is
\[
\bullet \sqsubseteq \bullet : \u B \sqsubseteq \u B' \vdash \bullet \sqsubseteq \dncast{\u B}{\u B'} \bullet : \u B
\]
Which under translation is
\[
\bullet : \sem{\u B'} \vdash
\sdncast{\u B}{\u B'}[\bullet]
\sqsubseteq
\sdncast{\u B}{\u B}[\sdncast{\u B}{\u B'}[\bullet]]
: \sem{\u B}
\]
Which follows by identity expansion and reflexivity.
\item DnCast-L: The GTT axiom is
\[
\bullet : \u B' \vdash \dncast{\u B}{\u B'} \bullet \sqsubseteq \bullet : \u B \sqsubseteq \u B'
\]
So under translation we need to show
\[
\bullet : \sem{\u B'} \vdash
\sdncast{\u B}{\u B'}[\sdncast{\u B'}{\u B'}[\bullet]]
\sqsubseteq
\sdncast{\u B}{\u B'}\bullet : \sem{\u B}
\]
Which follows immediately by reflexivity and the lemma that
identity casts are identities.
\item $0$ elim: we do the term case; the value case is similar.
\[
\inferrule
{\supcast{0}{0}[\sem{V}] \sqsubseteq \sem{V'}[\sem\Phi]}
{\kw{absurd} \sem{V} \sqsubseteq \sdncast{\u B}{\u B'}\kw{absurd}\sem{V'}[\sem\Phi]}
\]
Immediate by $0\eta$.
\item $+$ intro: we do the $\kw{inl}$ case; the $\kw{inr}$ case is the same:
\[
\inferrule
{\supcast{A_1}{A_1'}[\sem{V}]\sqsubseteq \sem{V'}[\sem\Phi]}
{\supcast{A_1+A_2}{A_1'+A_2'}[\kw{inl}\sem{V}]\sqsubseteq \kw{inl}\sem{V'}[\sem\Phi]}
\]
Which follows easily:
\begin{align*}
\supcast{A_1+A_2}{A_1'+A_2'}[\kw{inl}\sem{V}]
&\mathrel{\gtdyn\ltdyn} \kw{inl} \supcast{A_1}{A_1'}\sem{V}\tag{cast reduction}\\
&\sqsubseteq \kw{inl} \sem{V'}[\sem\Phi]\tag{IH}
\end{align*}
\item $+$ elim: we do just the cases where the continuations are terms:
\[
\inferrule
{\supcast{A_1 + A_2}{A_1' + A_2'}[\sem{V}] \sqsubseteq \sem{V'}[\sem\Phi]\\
\sem{M_1}[\sem\Psi] \sqsubseteq \sem{M_1'}[\sem\Phi][\supcast{A_1}{A_1'}[x_1]/x_1']\\
\sem{M_2}[\sem\Psi] \sqsubseteq \sem{M_2'}[\sem\Phi][\supcast{A_2}{A_2'}[x_2]/x_2']}
{\caseofXthenYelseZ {\sem V} {x_1. \sem{M_1}[\sem\Psi]}{x_2. \sem{M_2}[\sem\Psi]} \sqsubseteq \sdncast{\u B}{\u B'}[\caseofXthenYelseZ {\sem V'[\sem\Phi]} {x_1'. \sem{M_1'}[\sem\Phi]}{x_2'. \sem{M_2'}[\sem\Phi]}]}
\]
\begin{align*}
& \caseofXthenYelseZ {\sem V} {x_1. \sem{M_1}[\sem\Psi]}{x_2. \sem{M_2}[\sem\Psi]}\\
&\sqsubseteq
\sdncast{\u B}{\u B'}[\caseofXthenYelseZ {\sem V} {x_1. \sem{M_1'}[\sem\Phi][\supcast{A_1}{A_1'}[x_1]/x_1']}{x_2. \sem{M_2'}[\sem\Phi][\supcast{A_2}{A_2'}[x_2]/x_2']}]\tag{IH}\\
&\mathrel{\gtdyn\ltdyn}
\caseofX {\sem V}\tag{comm conv}\\
&\qquad\{{x_1. \sdncast{\u B}{\u B'}[\sem{M_1'}[\sem\Phi][\supcast{A_1}{A_1'}[x_1]/x_1']]}\\
&\qquad\elseZ{x_2. \sdncast{\u B}{\u B'}[\sem{M_2'}[\sem\Phi][\supcast{A_2}{A_2'}[x_2]/x_2']]}\\
&\mathrel{\gtdyn\ltdyn}
\caseofX {\sem V}\tag{$+\beta$}\\
&\qquad\{{x_1. \sdncast{\u B}{\u B'}[\caseofXthenYelseZ {\kw{inl} \supcast{A_1}{A_1'}x_1} {x_1'. \sem{M_1'}[\sem\Phi]}{x_2'. \sem{M_2'}[\sem\Phi]}]}\\
&\qquad\elseZ{x_2. \sdncast{\u B}{\u B'}[\caseofXthenYelseZ {\kw{inr} \supcast{A_2}{A_2'}x_2} {x_1'. \sem{M_1'}[\sem\Phi]}{x_2'. \sem{M_2'}[\sem\Phi]}]}\\
&\mathrel{\gtdyn\ltdyn}
\caseofX {\sem V}\tag{cast reduction}\\
&\qquad\{{x_1. \sdncast{\u B}{\u B'}[\caseofXthenYelseZ {\supcast{A_1+A_2}{A_1'+A_2'}\kw{inl} x_1} {x_1'. \sem{M_1'}[\sem\Phi]}{x_2'. \sem{M_2'}[\sem\Phi]}]}\\
&\qquad\elseZ{x_2. \sdncast{\u B}{\u B'}[\caseofXthenYelseZ {\supcast{A_1+A_2}{A_1'+A_2'}\kw{inr} x_2} {x_1'. \sem{M_1'}[\sem\Phi]}{x_2'. \sem{M_2'}[\sem\Phi]}]}\\
&\mathrel{\gtdyn\ltdyn}
\sdncast{\u B}{\u B'}[\caseofXthenYelseZ {\supcast{A_1+A_2}{A_1'+A_2'}[\sem V]} {x_1'. \sem{M_1'}[\sem\Phi]}{x_2'. \sem{M_2'}[\sem\Phi]}]\tag{$+\eta$}\\
&\sqsubseteq
\sdncast{\u B}{\u B'}[\caseofXthenYelseZ {\sem{V'}[\sem\Phi]} {x_1'. \sem{M_1'}[\sem\Phi]}{x_2'. \sem{M_2'}[\sem\Phi]}]\tag{IH}
\end{align*}
\item $1$ intro:
\[
\inferrule
{}
{\supcast{1}{1}[()]\sqsubseteq ()}
\]
Immediate by cast reduction.
\item $1$ elim (continuations are terms case):
\[
\inferrule
{\supcast{1}{1}[\sem{V}] \sqsubseteq \sem{V'}[\sem\Phi]\\
\sem{M}[\sem\Psi] \sqsubseteq \sdncast{\u B}{\u B'}[\sem{M'}[\sem\Phi]]
}
{\pmpairWtoinZ {\sem V} {\sem{M}[\sem{\Psi}]}
\sqsubseteq
\sdncast{\u B}{\u B'}[\pmpairWtoinZ {\sem {V'}[\sem\Phi]} {\sem{M'}[\sem{\Phi}]}]}
\]
which follows by identity expansion \ref{lem:ident-expansion}.
\item $\times$ intro:
\[
\inferrule
{\supcast{A_1}{A_1'}[\sem{V_1}] \sqsubseteq \sem{V_1'}[\sem\Phi]\\
\supcast{A_2}{A_2'}[\sem{V_2}] \sqsubseteq \sem{V_2'}[\sem\Phi]}
{\supcast{A_1 \times A_2}{A_1' \times A_2'}[(\sem{V_1},\sem{V_2})]
\sqsubseteq
(\sem{V_1'}[\sem\Phi], \sem{V_2'}[\sem\Phi])}
\]
We proceed:
\begin{align*}
\supcast{A_1 \times A_2}{A_1' \times A_2'}[(\sem{V_1},\sem{V_2})]
&\mathrel{\gtdyn\ltdyn}
(\supcast{A_1}{A_1'}[\sem{V_1}],\supcast{A_2}{A_2'}[\sem{V_2}])\tag{cast reduction}\\
&\sqsubseteq (\sem{V_1'}[\sem\Phi], \sem{V_2'}[\sem\Phi]) \tag{IH}
\end{align*}
\item $\times$ elim: we show the case where the continuations are
terms; the value continuations are no different:
\[
\inferrule
{\supcast{A_1\times A_2}{A_1' \times A_2'}[\sem{V}] \sqsubseteq \sem{V'}[\sem\Phi]\\
\sem{M}[\sem\Psi] \sqsubseteq \sdncast{\u B}{\u B'}[\sem{M'}[\sem\Phi][\supcast{A_1}{A_1'}[x]/x'][\supcast{A_2}{A_2'}[y]/y']]
}
{\pmpairWtoXYinZ {\sem V} x y {\sem{M}[\sem{\Psi}]}
\sqsubseteq
\sdncast{\u B}{\u B'}[\pmpairWtoXYinZ {\sem {V'}[\sem\Phi]} {x'} {y'} {\sem{M'}[\sem{\Phi}]}]
}
\]
We proceed as follows:
\begin{align*}
&\pmpairWtoXYinZ {\sem V} x y {\sem{M}[\sem{\Psi}]}\\
&\sqsubseteq\pmpairWtoXYinZ {\sem V} x y \sdncast{\u B}{\u B'}[\sem{M'}[\sem\Phi][\supcast{A_1}{A_1'}[x]/x'][\supcast{A_2}{A_2'}[y]/y']]\tag{IH}\\
&\mathrel{\gtdyn\ltdyn}
\pmpairWtoXYinZ {\sem V} x y\tag{$\times\beta$}\\
&\qquad \pmpairWtoXYinZ {(\supcast{A_1}{A_1'}[x],\supcast{A_2}{A_2'}[y])} {x'} {y'} \sdncast{\u B}{\u B'}[\sem{M'}[\sem\Phi]]\\
&\mathrel{\gtdyn\ltdyn}
\pmpairWtoXYinZ {\sem V} x y\tag{cast reduction}\\
&\qquad \pmpairWtoXYinZ {\supcast{A_1\times A_2}{A_1'\times A_2'}[(x,y)]} {x'} {y'} \sdncast{\u B}{\u B'}[\sem{M'}[\sem\Phi]]\\
&\mathrel{\gtdyn\ltdyn}
\pmpairWtoXYinZ {\supcast{A_1\times A_2}{A_1'\times A_2'}[{\sem V}]} {x'} {y'} \sdncast{\u B}{\u B'}[\sem{M'}[\sem\Phi]]\tag{$\times\eta$}\\
&\sqsubseteq \pmpairWtoXYinZ {\sem{V'}[\sem\Phi]} {x'}{y'} \sdncast{\u B}{\u B'}[\sem{M'}[\sem\Phi]]\tag{IH}\\
&\mathrel{\gtdyn\ltdyn} \sdncast{\u B}{\u B'}[\pmpairWtoXYinZ {\sem{V'}[\sem\Phi]} {x'}{y'}\sem{M'}[\sem\Phi]]\tag{commuting conversion}
\end{align*}
\item $U$ intro:
\[
\inferrule
{\sem{M} \sqsubseteq \sdncast{\u B}{\u B'}[\sem{M'}[\sem\Phi]]}
{\supcast{U\u B}{U \u B'}[\kw{thunk}\sem{M}] \sqsubseteq \kw{thunk}\sem{M'}[\sem\Phi]}
\]
We proceed as follows:
\begin{align*}
\supcast{U\u B}{U \u B'}[\kw{thunk}\sem{M}]
&\sqsubseteq \supcast{U\u B}{U \u B'}[\kw{thunk}\sdncast{\u B}{\u B'}[\sem{M'}[\sem\Phi]]]\tag{IH}\\
&\sqsubseteq \kw{thunk} \sem{M'}[\sem\Phi]\tag{alt projection}
\end{align*}
\item $U$ elim:
\[
\inferrule
{\supcast{U \u B}{U \u B'}[\sem{V}] \sqsubseteq \sem{V'}[\sem\Phi]}
{\kw{force} \sem V \sqsubseteq \sdncast{\u B}{\u B'}\kw{force} \sem {V'}[\sem\Phi]}
\]
By the hom-set formulation of the adjunction (Lemma~\ref{lem:hom-set-adj}).
\item $\top$ intro:
\[
\inferrule{}{\{\} \sqsubseteq \sdncast{\top}{\top}[\{\}]}
\]
Immediate by $\top\eta$.
\item $\mathbin{\&}$ intro:
\[
\inferrule
{\sem{M_1}[\sem{\Psi}]\sqsubseteq \sdncast{\u B_1}{\u B_1'}[\sem{M_1'}[\sem{\Phi}]]\\
\sem{M_2}[\sem{\Psi}]\sqsubseteq \sdncast{\u B_2}{\u B_2'}[\sem{M_2'}[\sem{\Phi}]]}
{\pair{\sem{M_1}[\sem{\Psi}]}{\sem{M_2}[\sem{\Psi}]}
\sqsubseteq
\sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\pair{\sem{M_1'}[\sem{\Phi}]}{\sem{M_2'}[\sem{\Phi}]}]}
\]
We proceed as follows:
\begin{align*}
&\pair{\sem{M_1}[\sem{\Psi}]}{\sem{M_2}[\sem{\Psi}]}\\
&\sqsubseteq
\pair{\sdncast{\u B_1}{\u B_1'}[\sem{M_1'}[\sem{\Phi}]]}{\sdncast{\u B_2}{\u B_2'}[\sem{M_2'}[\sem{\Phi}]]}\tag{IH}\\
&\mathrel{\gtdyn\ltdyn}
\pairone{\pi\sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\pair{\sem{M_1'}[\sem{\Phi}]}{\sem{M_2'}[\sem{\Phi}]}]}\tag{cast reduction}\\
&\quad \pairtwo{\pi'\sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\pair{\sem{M_1'}[\sem{\Phi}]}{\sem{M_2'}[\sem{\Phi}]}]}\\
&\mathrel{\gtdyn\ltdyn}
\sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\pair{\sem{M_1'}[\sem{\Phi}]}{\sem{M_2'}[\sem{\Phi}]}]\tag{$\mathbin{\&}\eta$}
\end{align*}
\item $\mathbin{\&}$ elim: we show the $\pi$ case; $\pi'$ is symmetric:
\[
\inferrule
{\sem{M}[\sem{\Psi}] \sqsubseteq \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\sem{M'}[\sem{\Phi}]]}
{\pi\sem{M}[\sem{\Psi}] \sqsubseteq \sdncast{\u B_1}{\u B_1'}[\pi\sem{M'}[\sem{\Phi}]]}
\]
We proceed as follows:
\begin{align*}
\pi\sem{M}[\sem{\Psi}]
&\sqsubseteq \pi \sdncast{\u B_1 \mathbin{\&} \u B_2}{\u B_1' \mathbin{\&} \u B_2'}[\sem{M'}[\sem{\Phi}]]\tag{IH}\\
&\mathrel{\gtdyn\ltdyn}
\sdncast{\u B_1}{\u B_1'}[\pi\sem{M'}[\sem{\Phi}]]\tag{cast reduction}
\end{align*}
\item $\to$ intro:
\[
\inferrule
{\sem{M}[\sem{\Psi}] \sqsubseteq \sdncast{\u B}{\u B'}[\sem{M'}[\sem{\Phi}][\supcast{A}{A'}{x}/x']]}
{\lambda x:A. \sem{M}[\sem{\Psi}] \sqsubseteq \sdncast{A \to \u B}{A'\to\u B'}[\lambda x':A'. \sem{M'}[\sem{\Phi}]]}
\]
We proceed as follows:
\begin{align*}
&\lambda x:A. \sem{M}[\sem{\Psi}]\\
&\sqsubseteq
\lambda x:A. \sdncast{\u B}{\u B'}[\sem{M'}[\sem{\Phi}][\supcast{A}{A'}{x}/x']]\tag{IH}\\
&\mathrel{\gtdyn\ltdyn}
\lambda x:A. (\sdncast{A \to \u B}{A' \to \u B'}[\lambda x'. \sem{M'}[\sem{\Phi}]])\, x\tag{cast reduction}\\
&\mathrel{\gtdyn\ltdyn}
\sdncast{A \to \u B}{A' \to \u B'}[\lambda x'. \sem{M'}[\sem{\Phi}]]\tag{$\to\eta$}
\end{align*}
\item $\to$ elim: we need to show
\[
\inferrule
{\sem{M}[\sem{\Psi}] \sqsubseteq \sdncast{A \to \u B}{A' \to \u B'}[\sem{M'}[\sem{\Phi}]]\\
\supcast{A}{A'}[\sem{V}] \sqsubseteq \sem{V'}[\sem{\Phi}]}
{\sem{M}[\sem{\Psi}]\,\sem{V} \sqsubseteq \sdncast{\u B}{\u B'}[\sem{M'}[\sem{\Phi}]\, \sem{V'}[\sem{\Phi}]]}
\]
We proceed:
\begin{align*}
&\sem{M}[\sem{\Psi}]\,\sem{V}\\
&\sqsubseteq
(\sdncast{A \to \u B}{A' \to \u B'}[\sem{M'}[\sem{\Phi}]])\,\sem{V}\tag{IH}\\
&\mathrel{\gtdyn\ltdyn}
\sdncast{\u B}{\u B'}[\sem{M'}[\sem{\Phi}]\,(\supcast{A}{A'}{\sem{V}})]\tag{cast reduction}\\
&\sqsubseteq
\sdncast{\u B}{\u B'}[\sem{M'}[\sem{\Phi}]\,\sem{V'}[\sem{\Phi}]] \tag{IH}
\end{align*}
\item $\u F$ intro: we need to show
\[
\inferrule
{\supcast{A}{A'}[\sem{V}] \sqsubseteq \sem{V'}[\sem{\Phi}]}
{\kw{ret}\sem{V}\sqsubseteq \sdncast{\u F A}{\u FA'}[\kw{ret}\sem{V'}[\sem{\Phi}]]}
\]
By the hom-set definition of the adjunction (Lemma~\ref{lem:hom-set-adj}).
\item $\u F$ elim: we need to show
\[
\inferrule
{\sem{M}[\sem{\Psi}] \sqsubseteq \sdncast{\u F A}{\u F A'}[\sem{M'}[\sem\Phi]]\\
\sem{N} \sqsubseteq \sdncast{\u B}{\u B'}[\sem{N'}[\sem\Phi][\supcast{A}{A'} x/x']]}
{\bindXtoYinZ {\sem{M}[\sem{\Psi}]} x {\sem{N}}
\sqsubseteq
\sdncast{\u B}{\u B'}[{\bindXtoYinZ {\sem{M'}[\sem{\Phi}]} {x'} {\sem{N'}[\sem{\Phi}]}}]}
\]
We proceed:
\begin{align*}
&\bindXtoYinZ {\sem{M}[\sem{\Psi}]} x {\sem{N}}\\
&\sqsubseteq \bindXtoYinZ {\sdncast{\u F A}{\u F A'}[\sem{M'}[\sem\Phi]]} x \sdncast{\u B}{\u B'}[\sem{N'}[\sem\Phi][\supcast{A}{A'} x/x']] \tag{IH, congruence}\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\sdncast{\u F A}{\u F A'}[\sem{M'}[\sem\Phi]]} x\\
&\qquad \bindXtoYinZ {\kw{ret}\supcast{A}{A'}[x]} {x'}
\sdncast{\u B}{\u B'}[\sem{N'}[\sem\Phi]] \tag{$\u F\beta$}\\
& \sqsubseteq \bindXtoYinZ {\sem{M'}[\sem\Phi]} {x'} \sdncast{\u B}{\u B'}[\sem{N'}[\sem\Phi]] \tag{Projection}\\
& \mathrel{\gtdyn\ltdyn} \sdncast{\u B}{\u B'}[\bindXtoYinZ {\sem{M'}[\sem\Phi]} {x'} \sem{N'}[\sem\Phi]] \tag{commuting conversion}
\end{align*}
\end{enumerate}
\end{longproof}
\end{longonly}
As a corollary, we have the following conservativity result, which says
that the homogeneous term dynamisms in GTT are sound and complete for
inequalities in CBPV*.
\begin{corollary}[Conservativity] \label{thm:gtt-cbpvstar-conservativity}
If $\Gamma \mid \Delta \vdash E, E' : T$ are two terms of the same
type in the intersection of GTT and CBPV*, then $\Gamma \mid
\Delta \vdash E \sqsubseteq E' : T$ is provable in GTT iff it is
provable in CBPV*.
\end{corollary}
\begin{proof}
The reverse direction holds because CBPV*\ is a syntactic subset of
GTT. The forward direction holds by axiomatic graduality and the
fact that identity casts are identities.
\end{proof}
\section{Complex Value/Stack Elimination}
\label{sec:complex}
Next, to bridge the gap between the semantic notions of complex value
and stack and the more rigid operational notions, we perform a
complexity-elimination pass.
This translates a computation containing complex values to an equivalent
computation without them (i.e., all pattern matches take place
in computations, rather than in values), and translates a term dynamism
derivation that uses complex stacks to one that uses only ``simple''
stacks without pattern-matching and computation introduction forms.
\begin{longonly}
Stacks do not appear anywhere in the grammar of terms, but they are
used in the equational theory (computation $\eta$ rules and error
strictness).
\end{longonly}
\ This translation clarifies the behavioral meaning of complex values and
stacks, following \citet{munchmaccagnoni14nonassociative,
fuhrmann1999direct}, and therefore of upcasts and downcasts.
\begin{longonly}
This is related to completeness of focusing: it moves inversion rules
outside of focus phases.
\end{longonly}
\begin{longonly}
The syntax of operational CBPV is as in
Figure~\ref{fig:gtt-syntax-and-terms} (unshaded), but with recursive
types added as in Section~\ref{sec:cbpvstar}, and with values and stacks
restricted
as in Figure~\ref{fig:operation-cbpv-syntax}.
\begin{figure}
\begin{small}
\begin{mathpar}
\begin{array}{lcl}
A & \mathrel{\bf ::=} & X \mid \mu X.A \mid U \u B \mid 0 \mid A_1 + A_2 \mid 1 \mid A_1 \times A_2 \\
\u B & ::= & \u Y\mid \nu \u Y. \u B \mid \u F A \mid \top \mid \u B_1 \mathbin{\&} \u B_2 \mid A \to \u B\\
\Gamma & ::= & \cdot \mid \Gamma, x : A \\
\Delta & ::= & \cdot \mid \bullet : \u B \\
V & ::= & x \mid \rollty{\mu X.A}V \mid \kw{inl}{V} \mid \kw{inr}{V} \mid () \mid (V_1,V_2)\mid \kw{thunk}{M}
\\
M & ::= & \mho_{\u B} \mid \letXbeYinZ V x M \mid \pmmuXtoYinZ V x M \mid \rollty{\nu \u Y.\u B} M \mid \kw{unroll} M \mid \kw {abort}{V} \mid \\
& & \caseofXthenYelseZ V {x_1. M_1}{x_2.M_2} \mid \pmpairWtoinZ V M \mid \pmpairWtoXYinZ V x y M
\mid \kw{force}{V} \mid \\
& & \kw{ret}{V} \mid \bindXtoYinZ{M}{x}{N} \mid \lambda x:A.M \mid M\,V \mid \emptypair \mid \pair{M_1}{M_2} \mid \pi M \mid \pi' M
\\
S & ::= & \bullet \mid \bindXtoYinZ S x M \mid S\, V \mid \pi S \mid \pi' S \mid \unrollty{\nu \u Y.\u B}{S}
\end{array}
\end{mathpar}
\end{small}
\caption{Operational CBPV Syntax}
\label{fig:operation-cbpv-syntax}
\end{figure}
In CBPV, values include only introduction forms, as usual for values in
operational semantics, and CBPV\/ stacks consist only of elimination
forms for computation types
(the syntax of CBPV\/ enforces an A-normal
form, where only values can be pattern-matched on, so $\kw{case}$ and
$\kw{split}$ are not evaluation contexts in the operational semantics).
\begin{figure}
\begin{small}
\begin{mathpar}
\inferrule
{}
{\Gamma,x : A,\Gamma' \vdash x \sqsubseteq x : A}
\inferrule
{}
{\Gamma\,\,|\,\, \bullet : \u B \vdash \bullet \sqsubseteq \bullet : \u B}
\inferrule
{}
{\Gamma \vdash \mho \sqsubseteq \mho : \u B}
\inferrule
{\Gamma \vdash V \sqsubseteq V' : A \and
\Gamma, x : A \vdash M \sqsubseteq M' : \u B
}
{\Gamma \vdash \letXbeYinZ V x M \sqsubseteq \letXbeYinZ {V'} {x} {M'} : \u B}
\inferrule
{\Gamma \vdash V \sqsubseteq V' : 0}
{\Gamma \vdash \kw {abort} V \sqsubseteq \kw {abort} V' : \u B}
\inferrule
{\Gamma \vdash V \sqsubseteq V' : A_1}
{\Gamma \vdash \kw{inl} V \sqsubseteq \kw{inl} V' : A_1 + A_2}
\inferrule
{\Gamma \vdash V \sqsubseteq V' : A_2}
{\Gamma \vdash \kw{inr} V \sqsubseteq \kw{inr} V' : A_1 + A_2}
\inferrule
{\Gamma \vdash V \sqsubseteq V' : A_1 + A_2\and
\Gamma, x_1 : A_1 \vdash M_1 \sqsubseteq M_1' : \u B\and
\Gamma, x_2 : A_2 \vdash M_2 \sqsubseteq M_2' : \u B
}
{\Gamma \vdash \caseofXthenYelseZ V {x_1. M_1}{x_2.M_2} \sqsubseteq \caseofXthenYelseZ {V'} {x_1. M_1'}{x_2.M_2'} : \u B}
\inferrule
{}
{\Gamma \vdash () \sqsubseteq () : 1}
\inferrule
{\Gamma \vdash V_1 \sqsubseteq V_1' : A_1\and
\Gamma\vdash V_2 \sqsubseteq V_2' : A_2}
{\Gamma \vdash (V_1,V_2) \sqsubseteq (V_1',V_2') : A_1 \times A_2}
\inferrule
{\Gamma \vdash V \sqsubseteq V' : A_1 \times A_2\and
\Gamma, x : A_1,y : A_2 \vdash M \sqsubseteq M' : \u B
}
{\Gamma \vdash \pmpairWtoXYinZ V x y M \sqsubseteq \pmpairWtoXYinZ {V'} {x} {y} {M'} : \u B}
\inferrule
{\Gamma \vdash V \sqsubseteq V' : A[\mu X.A/X]}
{\Gamma \vdash \rollty{\mu X.A} V \sqsubseteq \rollty{\mu X.A} V' : \mu X.A }
\inferrule
{\Gamma \vdash V \sqsubseteq V' : \mu X. A\and
\Gamma, x : A[\mu X. A/X] \vdash M \sqsubseteq M' : \u B}
{\Gamma \vdash \pmmuXtoYinZ V x M \sqsubseteq \pmmuXtoYinZ {V'} {x} {M'} : \u B}
\inferrule
{\Gamma \vdash M \sqsubseteq M' : \u B}
{\Gamma \vdash \kw{thunk} M \sqsubseteq \kw{thunk} M' : U \u B}
\inferrule
{\Gamma \vdash V \sqsubseteq V' : U \u B}
{\Gamma \vdash \kw{force} V \sqsubseteq \kw{force} V' : \u B}
\inferrule
{\Gamma \vdash V \sqsubseteq V' : A}
{\Gamma \vdash \kw{ret} V \sqsubseteq \kw{ret} V' : \u F A}
\inferrule
{\Gamma \vdash M \sqsubseteq M' : \u F A\and
\Gamma, x: A \vdash N \sqsubseteq N' : \u B}
{\Gamma \vdash \bindXtoYinZ M x N \sqsubseteq \bindXtoYinZ {M'} {x} {N'} : \u B}
\inferrule
{\Gamma, x: A \vdash M \sqsubseteq M' : \u B}
{\Gamma \vdash \lambda x : A . M \sqsubseteq \lambda x:A. M' : A \to \u B}
\inferrule
{\Gamma \vdash M \sqsubseteq M' : A \to \u B\and
\Gamma \vdash V \sqsubseteq V' : A}
{\Gamma \vdash M\,V \sqsubseteq M'\,V' : \u B }
\inferrule
{\Gamma \vdash M_1 \sqsubseteq M_1' : \u B_1\and
\Gamma \vdash M_2 \sqsubseteq M_2' : \u B_2}
{\Gamma \vdash \pair {M_1} {M_2} \sqsubseteq \pair {M_1'} {M_2'} : \u B_1 \mathbin{\&} \u B_2}
\inferrule
{\Gamma \vdash M \sqsubseteq M' : \u B_1 \mathbin{\&} \u B_2}
{\Gamma \vdash \pi M \sqsubseteq \pi M' : \u B_1}
\inferrule
{\Gamma \vdash M \sqsubseteq M' : \u B_1 \mathbin{\&} \u B_2}
{\Gamma \vdash \pi' M \sqsubseteq \pi' M' : \u B_2}
\inferrule
{\Gamma \vdash M \sqsubseteq M' : \u B[{\nu \u Y. \u B}/\u Y]}
{\Gamma \vdash \rollty{\nu \u Y. \u B} M \sqsubseteq \rollty{\nu \u Y. \u B} M' : {\nu \u Y. \u B}}
\inferrule
{\Gamma \vdash M \sqsubseteq M' : {\nu \u Y. \u B}}
{\Gamma \vdash \kw{unroll} M \sqsubseteq \kw{unroll} M' : \u B[{\nu \u Y. \u B}/\u Y]}
\end{mathpar}
\end{small}
\caption{CBPV Inequational Theory (Congruence Rules)}
\end{figure}
\begin{figure}
\begin{small}
\begin{mathpar}
\inferrule
{}
{\caseofXthenYelseZ{\kw{inl} V}{x_1. M_1}{x_2. M_2} \mathrel{\gtdyn\ltdyn} M_1[V/x_1]}
\inferrule
{}
{\caseofXthenYelseZ{\kw{inr} V}{x_1. M_1}{x_2. M_2} \mathrel{\gtdyn\ltdyn} M_2[V/x_2]}
\inferrule
{\Gamma, x : A_1 + A_2 \vdash M : \u B}
{\Gamma, x : A_1 + A_2 \vdash M \mathrel{\gtdyn\ltdyn} \caseofXthenYelseZ x {x_1. M[\kw{inl} x_1/x]}{x_2. M[\kw{inr} x_2/x]} : \u B}
\inferrule
{}
{\pmpairWtoXYinZ{(V_1,V_2)}{x_1}{x_2}{M} \mathrel{\gtdyn\ltdyn} M[V_1/x_1,V_2/x_2]}
\inferrule
{\Gamma, x : A_1 \times A_2 \vdash M : \u B}
{\Gamma, x : A_1 \times A_2 \vdash M \mathrel{\gtdyn\ltdyn} \pmpairWtoXYinZ x {x_1}{x_2} M[(x_1,x_2)/x] : \u B}
\inferrule
{\Gamma, x : 1 \vdash M : \u B}
{\Gamma, x : 1 \vdash M \mathrel{\gtdyn\ltdyn} M[()/x] : \u B}
\inferrule
{}
{\pmmuXtoYinZ{\rollty {\mu X. A} V}{x}{M} \mathrel{\gtdyn\ltdyn} M[V/x]}
\inferrule
{\Gamma, x : \mu X. A \vdash M :\u B}
{\Gamma, x : \mu X. A \vdash M \mathrel{\gtdyn\ltdyn} \pmmuXtoYinZ{x}{y}{M[\rollty{\mu X.A} y/x]} : \u B}
\inferrule
{}
{\kw{force}\kw{thunk} M \mathrel{\gtdyn\ltdyn} M}
\inferrule
{\Gamma \vdash V : U \u B}
{\Gamma \vdash V \mathrel{\gtdyn\ltdyn} \kw{thunk}\kw{force} V : U \u B}
\inferrule
{}
{\letXbeYinZ V x M \mathrel{\gtdyn\ltdyn} M[V/x]}
\inferrule
{}
{\bindXtoYinZ {\kw{ret} V} x M \mathrel{\gtdyn\ltdyn} M[V/x]}
\inferrule
{}
{\Gamma \,\,|\,\, \bullet : \u F A \vdash \bullet \mathrel{\gtdyn\ltdyn} \bindXtoYinZ \bullet x \kw{ret} x : \u F A}
\inferrule
{}
{(\lambda x:A. M)\,V \mathrel{\gtdyn\ltdyn} M[V/x]}
\inferrule
{\Gamma \vdash M : A \to \u B}
{\Gamma \vdash M \mathrel{\gtdyn\ltdyn} \lambda x:A. M\,x : A \to \u B}
\inferrule
{}
{\pi \pair{M}{M'} \mathrel{\gtdyn\ltdyn} M}
\inferrule
{}
{\pi' \pair{M}{M'} \mathrel{\gtdyn\ltdyn} M'}
\inferrule
{\Gamma \vdash M : \u B_1 \mathbin{\&} \u B_2}
{\Gamma \vdash M \mathrel{\gtdyn\ltdyn}\pair{\pi M}{\pi' M} : \u B_1 \mathbin{\&} \u B_2}
\inferrule
{\Gamma \vdash M : \top}
{\Gamma \vdash M \mathrel{\gtdyn\ltdyn} \{\} : \top}
\inferrule
{}
{\kw{unroll} \rollty{\nu \u Y. \u B} M \mathrel{\gtdyn\ltdyn} M}
\inferrule
{\Gamma \vdash M : \nu \u Y. \u B}
{\Gamma \vdash M \mathrel{\gtdyn\ltdyn} \rollty{\nu \u Y.\u B}\kw{unroll} M : \nu \u Y. \u B}
\end{mathpar}
\end{small}
\caption{CBPV $\beta, \eta$ rules}
\end{figure}
\begin{figure}
\begin{small}
\begin{mathpar}
\inferrule
{}
{\Gamma \vdash \mho \sqsubseteq M : \u B}
\inferrule
{}
{\Gamma \vdash S[\mho] \mathrel{\gtdyn\ltdyn} \mho : \u B}
\inferrule
{}
{\Gamma \vdash M \sqsubseteq M : \u B}
\inferrule
{}
{\Gamma \vdash V \sqsubseteq V : A}
\inferrule
{}
{\Gamma \,\,|\,\, \u B \vdash S \sqsubseteq S : \u B'}
\inferrule
{\Gamma \vdash M_1 \sqsubseteq M_2 : \u B \and \Gamma \vdash M_2 \sqsubseteq M_3 : \u B}
{\Gamma \vdash M_1 \sqsubseteq M_3 : \u B}
\inferrule
{\Gamma \vdash V_1 \sqsubseteq V_2 : A \and \Gamma \vdash V_2 \sqsubseteq V_3 : A}
{\Gamma \vdash V_1 \sqsubseteq V_3 : A}
\inferrule
{\Gamma \,\,|\,\, \u B \vdash S_1 \sqsubseteq S_2 : \u B' \and \Gamma \,\,|\,\, \u B \vdash S_2 \sqsubseteq S_3 : \u B'}
{\Gamma \,\,|\,\, \u B \vdash S_1 \sqsubseteq S_3 : \u B'}
\inferrule
{\Gamma, x : A \vdash M_1 \sqsubseteq M_2 : \u B \and
\Gamma \vdash V_1 \sqsubseteq V_2 : A}
{\Gamma \vdash M_1[V_1/x] \sqsubseteq M_2[V_2/x] : \u B}
\inferrule
{\Gamma, x : A \vdash V_1' \sqsubseteq V_2' : A' \and
\Gamma \vdash V_1 \sqsubseteq V_2 : A}
{\Gamma \vdash V_1'[V_1/x] \sqsubseteq V_2'[V_2/x] : A'}
\inferrule
{\Gamma, x : A \,\,|\,\, \u B \vdash S_1 \sqsubseteq S_2 : \u B' \and
\Gamma \vdash V_1 \sqsubseteq V_2 : A}
{\Gamma \,\,|\,\, \u B \vdash S_1[V_1/x] \sqsubseteq S_2[V_2/x] : \u B'}
\inferrule
{\Gamma \,\,|\,\, \u B \vdash S_1 \sqsubseteq S_2 : \u B' \and
\Gamma \vdash M_1 \sqsubseteq M_2 : \u B}
{\Gamma \vdash S_1[M_1] \sqsubseteq S_2[M_2] : \u B'}
\inferrule
{\Gamma \,\,|\,\, \u B' \vdash S_1' \sqsubseteq S_2' : \u B'' \and
\Gamma \,\,|\,\, \u B \vdash S_1 \sqsubseteq S_2 : \u B'}
{\Gamma \,\,|\,\, \u B \vdash S_1'[S_1] \sqsubseteq S_2'[S_2] : \u B''}
\end{mathpar}
\end{small}
\caption{CBPV logical and error rules}
\end{figure}
\end{longonly}
\citet{levy03cbpvbook} translates CBPV*\/ to CBPV, but does not prove
the inequality preservation that we require here, so we give
an
alternative translation for which this property is easy to
verify \ifshort (see the extended version for full details)\fi.
We translate both complex values and complex
stacks to fully general computations, so that computation
pattern-matching can replace the pattern-matching in complex values/stacks.
\begin{longonly}
For example, for a closed value, we could ``evaluate away''
the complexity and get a closed simple value (if we don't use $U$), but
for open terms, evaluation will get ``stuck'' if we pattern match on
a variable---so not every complex value can be translated to a value in
CBPV.
\end{longonly}
More formally, we translate a CBPV*\/ complex value $V : A$ to a
CBPV\/ computation $\simp{V} : \u F A$ that in CBPV*\ is equivalent
to $\kw{ret} V$.
Similarly, we translate a CBPV*\/ complex stack $S$ with hole
$\bullet : \u B$ to a CBPV\ computation $\simp{S}$ with a free
variable $z : U \u B$ such that in CBPV*, $\simp S \mathrel{\gtdyn\ltdyn}
S[\kw{force} z]$.
Computations $M : \u B$ are translated to computations $\simp{M}$ with
the same type.
\begin{longonly}
The \emph{de-complexification} procedure is defined as follows.
We note that this translation is not the one presented in
\citet{levy03cbpvbook}, but rather a more inefficient version that, in CPS
terminology, introduces many administrative redexes.
Since we are only proving results up to observational equivalence
anyway, the difference doesn't change any of our theorems, and makes
some of the proofs simpler.
\begin{definition}[De-complexification]
We define
\begin{small}
\begin{mathpar}
\begin{array}{rcl}
\simp \bullet &=& \kw{force} z\\
\simp x &=& \kw{ret} x\\\\
\simpp {\kw{ret} V} &= & \bindXtoYinZ {\simp V} x \kw{ret} x\\
\simpp {M\, V} &=& \bindXtoYinZ {\simp V} x \simp M\, x\\\\
\simpp{\kw{force} V} &=& \bindXtoYinZ {\simp V} x \kw{force} x\\
\simpp{\kw{absurd} V} &=& \bindXtoYinZ {\simp V} x \kw{absurd} x\\
\simpp{\caseofXthenYelseZ V {x_1. E_1}{x_2. E_2}} &=&
\bindXtoYinZ {\simp V} x \caseofXthenYelseZ x {x_1. \simp {E_1}}{x_2. \simp {E_2}}\\
\simpp{\pmpairWtoinZ V {E}} &=&
\bindXtoYinZ {\simp V} w {\pmpairWtoinZ w \simp {E}}\\
\simpp{\pmpairWtoXYinZ V x y {E}} &=&
\bindXtoYinZ {\simp V} w {\pmpairWtoXYinZ w x y \simp {E}}\\
\simpp{\pmmuXtoYinZ V x E} &=& \bindXtoYinZ {\simp V} y \pmmuXtoYinZ y x \simp{E}\\\\
\simpp{\kw{inl} V} &=& \bindXtoYinZ {\simp V} x \kw{ret}\kw{inl} x\\
\simpp{\kw{inr} V} &=& \bindXtoYinZ {\simp V} x \kw{ret}\kw{inr} x\\
\simp{()} &=& \kw{ret} ()\\
\simp{(V_1,V_2)} &=& \bindXtoYinZ {\simp {V_1}}{x_1} \bindXtoYinZ {\simp {V_2}} {x_2} \kw{ret} (x_1,x_2)\\
\simpp{\kw{thunk} M} &=& \kw{ret} \kw{thunk} \simp M\\
\simpp{\kw{roll} V} &=& \bindXtoYinZ {\simp V} x \kw{roll} x\\
\end{array}
\end{mathpar}
\end{small}
\end{definition}
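To see the administrative redexes this translation introduces, consider unfolding it on the small complex value $(x,y)$, where $x$ and $y$ are variables; this is a direct calculation from the clauses above:
\begin{align*}
\simp{(x,y)} &= \bindXtoYinZ {\simp x}{x_1} \bindXtoYinZ {\simp y} {x_2} \kw{ret} (x_1,x_2)\\
&= \bindXtoYinZ {\kw{ret} x}{x_1} \bindXtoYinZ {\kw{ret} y} {x_2} \kw{ret} (x_1,x_2)\\
&\mathrel{\gtdyn\ltdyn} \kw{ret} (x,y) \tag{$\u F\beta$, twice}
\end{align*}
so in CBPV*\ the result is equivalent to $\kw{ret} (x,y)$, as expected.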
The translation is type-preserving, and is the identity from the point of view of CBPV*:
\begin{lemma}[De-complexification De-complexifies]
For any CBPV*\/ term $\Gamma \,\,|\,\, \Delta \vdash E : T$, $\simp E$
is a term of CBPV\/ satisfying $\Gamma, \simp\Delta \vdash \simp E :
\simp T$ where
$\simp{\cdot} = \cdot$, $\simpp{\bullet:\u B} = z:U\u B$,
$\simp{\u B} = \u B$, and $\simp A = \u F A$.
\end{lemma}
\begin{lemma}[De-complexification is Identity in CBPV*]
Considering CBPV as a subset of CBPV*\, we have
\begin{enumerate}
\item If $\Gamma \,\,|\,\, \cdot \vdash M : \u B$ then $M \mathrel{\gtdyn\ltdyn} \simp M$.
\item If $\Gamma \,\,|\,\, \Delta \vdash S : \u B$ then $S[\kw{force} z] \mathrel{\gtdyn\ltdyn} \simp S$.
\item If $\Gamma \vdash V : A$ then $\kw{ret} V \mathrel{\gtdyn\ltdyn} \simp V$.
\end{enumerate}
Furthermore, if $M, V, S$ are in CBPV, the proof holds in CBPV.
\end{lemma}
\end{longonly}
Finally, we need to show that the translation preserves inequalities
($\simp{E} \sqsubseteq \simp{E'}$ if $E \sqsubseteq E'$), but because complex
values and stacks satisfy more equations than arbitrary computations in
the types of their translations do, we need to isolate the special
``purity'' property that their translations have.
We show that complex values are translated to computations that satisfy
\emph{thunkability}~\cite{munchmaccagnoni14nonassociative}, which
intuitively means $M$ should have no observable effects, and so
can be freely duplicated or discarded like a value.
In the inequational theory of CBPV\/, this is defined by saying that
running $M$ to a value and then duplicating its value is the same as
running $M$ every time we need its value:
\iflong{
\begin{definition}[Thunkable Computation]
A computation $\Gamma \vdash M : \u FA$ is \emph{thunkable} if \\
\fi
\[\Gamma \vdash \kw{ret}{( \kw{thunk} M)} \mathrel{\gtdyn\ltdyn} \bindXtoYinZ M x \kw{ret}{(\kw{thunk} (\kw{ret} x))} : \u FU\u F A\]
\iflong
\end{definition}
\fi
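For instance, the error term $\mho : \u F A$ is not thunkable (in the operational model of Section \ref{sec:operational}): by stack strictness, the right-hand side of the equation satisfies
\[ \bindXtoYinZ \mho x \kw{ret}{(\kw{thunk} (\kw{ret} x))} \mathrel{\gtdyn\ltdyn} \mho \]
while the left-hand side is the terminating value $\kw{ret}{(\kw{thunk} \mho)}$, so a context that discards the returned thunk distinguishes the two.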
Dually, we show that complex stacks are translated to computations that
satisfy (semantic) \emph{linearity}~\cite{munchmaccagnoni14nonassociative}, where intuitively a computation $M$
with a free variable $x : U \u B$ is linear in $x$ if $M$ behaves as if
when it is forced, the first thing it does is forces $x$, and that is the only time
it uses $x$. This is described in the CBPV inequational theory as
follows:
\iflong
if we have a thunk $z : U\u F U \u B$, then either we can force
it now and pass the result to $M$ as $x$, or we can just run $M$ with a
thunk that will force $z$ each time $M$ is forced---but if $M$ forces
$x$ exactly once, first, these two are the same.
\begin{definition}[Linear Term]
A term $\Gamma, x : U\u B \vdash M : \u C$ is \emph{linear in $x$}
if\\
\fi
\[ \Gamma, z : U\u FU\u B \vdash
\bindXtoYinZ {\kw{force} z} x M
\mathrel{\gtdyn\ltdyn} M[\kw{thunk}{(\bindXtoYinZ {(\kw{force} z)} x \kw{force} x)}]
\]
\iflong
\end{definition}
\fi
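For instance, a computation that discards its thunk variable, such as $\Gamma, x : U\u B \vdash \kw{ret} () : \u F 1$, is not linear in $x$: since $x$ does not occur, instantiating the equation gives
\[ \bindXtoYinZ {\kw{force} z} x \kw{ret} () \mathrel{\gtdyn\ltdyn} \kw{ret} () \]
which fails in the operational model when $\kw{force} z$ errors, since then the left-hand side errors while the right-hand side returns.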
\begin{longonly}
Thunkability/linearity of the translations of complex values/stacks are
used to prove the preservation of the $\eta$ principles for positive
types and the strictness of complex stacks with respect to errors under
decomplexification.
\end{longonly}
\begin{shortonly}
\noindent Composing this with the translation from GTT to CBPV*\/
shows that \emph{GTT value upcasts are thunkable and computation
downcasts are linear}, which justifies a number of program transformations.
\end{shortonly}
\begin{longonly}
We need a few lemmas about thunkables and linears to prove that complex
values become thunkable and complex stacks become linear.
First, the following lemma is useful for optimizing programs with
thunkable subterms. Intuitively, since a thunkable has ``no effects''
it can be reordered past any other effectful binding. F\"uhrmann
\citep{fuhrmann1999direct} calls a morphism with this property
\emph{central} (by analogy with the center of a group: the set of
elements that commute with every element of the group).
\begin{lemma}[Thunkables are Central]
If $\Gamma \vdash M : \u F A$ is thunkable and $\Gamma \vdash N : \u
F A'$ and $\Gamma , x:A, y:A' \vdash N' : \u B$, then
\[
\bindXtoYinZ M x \bindXtoYinZ N y N'
\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ N y \bindXtoYinZ M x N'
\]
\end{lemma}
\begin{proof}
\begin{align*}
&\bindXtoYinZ M x \bindXtoYinZ N y N'\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ M x \bindXtoYinZ N y \bindXtoYinZ {\kw{force} \kw{thunk} \kw{ret} x} x N' \tag{$U\beta,\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ M x \bindXtoYinZ {\kw{ret}\kw{thunk}\kw{ret} x} w \bindXtoYinZ N y \bindXtoYinZ {\kw{force} w} x N' \tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ {(\bindXtoYinZ M x {\kw{ret}\kw{thunk}\kw{ret} x})} w \bindXtoYinZ N y \bindXtoYinZ {\kw{force} w} x N' \tag{$\u F\eta$}\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ {\kw{ret} \kw{thunk} M} w \bindXtoYinZ N y \bindXtoYinZ {\kw{force} w} x N' \tag{$M$ thunkable}\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ N y \bindXtoYinZ {\kw{force} \kw{thunk} M} x N' \tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn}\bindXtoYinZ N y \bindXtoYinZ M x N' \tag{$U\beta$}\\
\end{align*}
\end{proof}
Next, we show thunkables are closed under composition and that return
of a value is always thunkable. This allows us to easily build up
bigger thunkables from smaller ones.
\begin{lemma}[Thunkables compose]
If $\Gamma \vdash M : \u F A$ and $\Gamma, x : A \vdash N : \u F A'$
are thunkable, then
\[ \bindXtoYinZ M x N \]
is thunkable.
\end{lemma}
\begin{proof}
\begin{align*}
&\bindXtoYinZ {(\bindXtoYinZ M x N)} y \kw{ret}\kw{thunk}\kw{ret} y\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ M x \bindXtoYinZ N y \kw{ret}\kw{thunk}\kw{ret} y\tag{$\u F\eta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ M x \kw{ret} \kw{thunk} N \tag{$N$ thunkable}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ M x \kw{ret} \kw{thunk} (\bindXtoYinZ {\kw{ret} x} x N)\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ M x \bindXtoYinZ {\kw{ret}\kw{thunk}\kw{ret} x} w \kw{ret} \kw{thunk} (\bindXtoYinZ {\kw{force} w} x N)\tag{$\u F\beta,U\beta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {(\bindXtoYinZ M x \kw{ret}\kw{thunk}\kw{ret} x)} w \kw{ret} \kw{thunk} (\bindXtoYinZ {\kw{force} w} x N)\tag{$\u F\eta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{ret}\kw{thunk} M} w \kw{ret} \kw{thunk} (\bindXtoYinZ {\kw{force} w} x N)\tag{$M$ thunkable}\\
&\mathrel{\gtdyn\ltdyn} \kw{ret} \kw{thunk} (\bindXtoYinZ {\kw{force} \kw{thunk} M} x N)\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn} \kw{ret} \kw{thunk} (\bindXtoYinZ {M} x N)\tag{$U\beta$}\\
\end{align*}
\end{proof}
\begin{lemma}[Return is Thunkable]
If $\Gamma \vdash V : A$ then $\kw{ret} V$ is thunkable.
\end{lemma}
\begin{proof}
By $\u F\beta$:
\[ \bindXtoYinZ {\kw{ret} V} x \kw{ret}\kw{thunk}\kw{ret} x \mathrel{\gtdyn\ltdyn} \kw{ret}\kw{thunk}\kw{ret} V \]
\end{proof}
\begin{lemma}[Complex Values Simplify to Thunkable Terms]
If $\Gamma \vdash V : A$ is a (possibly) complex value, then $\Gamma
\vdash \simp V : \u F A$ is thunkable.
\end{lemma}
\begin{longproof}
Introduction forms follow from return is thunkable and thunkables
compose. For elimination forms it is sufficient to show that when
the branches of pattern matching are thunkable, the pattern match
is thunkable.
\begin{enumerate}
\item $x$: We need to show $\simp x = \kw{ret} x$ is thunkable, which we
proved as a lemma above.
\item{} $0$ elim, we need to show
\[ \bindXtoYinZ {\kw{absurd} V} y \kw{ret}\kw{thunk}\kw{ret} y\mathrel{\gtdyn\ltdyn} \kw{ret}\kw{thunk} {\kw{absurd} V}\]
but by $0\eta$ both sides are equivalent to $\kw{absurd} V$.
\item{} $+$ elim, we need to show
\[
\kw{ret}\kw{thunk} (\caseofXthenYelseZ V {x_1. M_1} {x_2. M_2})
\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {(\caseofXthenYelseZ V {x_1. M_1} {x_2. M_2})} y \kw{ret}\kw{thunk} \kw{ret} y
\]
\begin{align*}
&\kw{ret}\kw{thunk} (\caseofXthenYelseZ V {x_1. M_1} {x_2. M_2})\\
&\mathrel{\gtdyn\ltdyn}
\caseofX V \tag{$+\eta$}\\
&\qquad\{ {x_1. \kw{ret}\kw{thunk} (\caseofXthenYelseZ {\kw{inl} x_1} {x_1. M_1} {x_2. M_2})}\\
&\qquad\elseZ {x_2. \kw{ret}\kw{thunk} (\caseofXthenYelseZ {\kw{inr} x_2} {x_1. M_1} {x_2. M_2})}\\
&\mathrel{\gtdyn\ltdyn}\caseofX V \tag{$+\beta$}\\
&\qquad\{ {x_1. \kw{ret}\kw{thunk} M_1}\\
&\qquad\elseZ {x_2. \kw{ret}\kw{thunk} M_2}\\
&\mathrel{\gtdyn\ltdyn}\caseofX V \tag{$M_1,M_2$ thunkable}\\
&\qquad\{ {x_1. \bindXtoYinZ {M_1} y \kw{ret}\kw{thunk}\kw{ret} y}\\
&\qquad\elseZ {x_2. \bindXtoYinZ {M_2} y \kw{ret}\kw{thunk}\kw{ret} y}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {(\caseofXthenYelseZ V {x_1. M_1}{x_2. M_2})} y \kw{ret}\kw{thunk}\kw{ret} y\tag{commuting conversion}\\
\end{align*}
\item{} $\times$ elim
\begin{align*}
&\kw{ret}\kw{thunk} (\pmpairWtoXYinZ V x y M)\\
&\mathrel{\gtdyn\ltdyn} \pmpairWtoXYinZ V x y \kw{ret}\kw{thunk} \pmpairWtoXYinZ {(x,y)} x y M\tag{$\times\eta$}\\
&\mathrel{\gtdyn\ltdyn} \pmpairWtoXYinZ V x y \kw{ret}\kw{thunk} M\tag{$\times\beta$}\\
&\mathrel{\gtdyn\ltdyn} \pmpairWtoXYinZ V x y \bindXtoYinZ M z \kw{ret}\kw{thunk}\kw{ret} z\tag{$M$ thunkable}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {(\pmpairWtoXYinZ V x y M)} z \kw{ret}\kw{thunk}\kw{ret} z\tag{commuting conversion}
\end{align*}
\item $1$ elim
\begin{align*}
&\kw{ret}\kw{thunk} (\pmpairWtoinZ V x y M)\\
&\mathrel{\gtdyn\ltdyn} \pmpairWtoinZ V \kw{ret}\kw{thunk} \pmpairWtoinZ {()} M\tag{$1\eta$}\\
&\mathrel{\gtdyn\ltdyn} \pmpairWtoinZ V \kw{ret}\kw{thunk} M\tag{$1\beta$}\\
&\mathrel{\gtdyn\ltdyn} \pmpairWtoinZ V \bindXtoYinZ M z \kw{ret}\kw{thunk}\kw{ret} z\tag{$M$ thunkable}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {(\pmpairWtoinZ V M)} z \kw{ret}\kw{thunk}\kw{ret} z\tag{commuting conversion}
\end{align*} \item $\mu$ elim
\begin{align*}
&\kw{ret}\kw{thunk} (\pmmuXtoYinZ V x M)\\
&\mathrel{\gtdyn\ltdyn} \pmmuXtoYinZ V x \kw{ret}\kw{thunk} \pmmuXtoYinZ {\kw{roll} x} x M\tag{$\mu\eta$}\\
&\mathrel{\gtdyn\ltdyn} \pmmuXtoYinZ V x \kw{ret}\kw{thunk} M\tag{$\mu\beta$}\\
&\mathrel{\gtdyn\ltdyn} \pmmuXtoYinZ V x \bindXtoYinZ M y \kw{ret}\kw{thunk}\kw{ret} y\tag{$M$ thunkable}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {(\pmmuXtoYinZ V x M)} y \kw{ret}\kw{thunk}\kw{ret} y\tag{commuting conversion}
\end{align*}
\end{enumerate}
\end{longproof}
Dually, we have that a stack out of a force is linear and that linears
are closed under composition, so we can easily build up bigger linear
morphisms from smaller ones.
\begin{lemma}[Force to a stack is Linear]
If $\Gamma \,\,|\,\, \bullet : \u B \vdash S : \u C$, then
$\Gamma , x : U\u B\vdash S[\kw{force} x] : \u C$ is linear in $x$.
\end{lemma}
\begin{proof}
\begin{align*}
S[\kw{force} \kw{thunk} {(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}]
&\mathrel{\gtdyn\ltdyn}
S[{(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}]\tag{$U\beta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{force} z} x S[\kw{force} x] \tag{$\u F\eta$}
\end{align*}
\end{proof}
\begin{lemma}[Linear Terms Compose]
If $\Gamma , x : U \u B \vdash M : \u B'$ is linear in $x$ and
$\Gamma , y : U\u B' \vdash N : \u B''$ is linear in $y$, then
$\Gamma , x : U \u B \vdash N[\kw{thunk} M/y] : \u B''$ is linear in $x$.
\end{lemma}
\begin{proof}
\begin{align*}
&N[\kw{thunk} M/y][\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}/x]\\
&= N[\kw{thunk} {(M[\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}])}/y]\\
&\mathrel{\gtdyn\ltdyn} N[\kw{thunk} {(\bindXtoYinZ {\kw{force} z} x M)}/y]\tag{$M$ linear}\\
&\mathrel{\gtdyn\ltdyn} N[\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \kw{force} \kw{thunk} M)}/y] \tag{$U\beta$}\\
&\mathrel{\gtdyn\ltdyn}
N[\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \bindXtoYinZ {\kw{ret}\kw{thunk} M} y \kw{force} y)}/y] \tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn}
N[\kw{thunk}{(\bindXtoYinZ {(\bindXtoYinZ {\kw{force} z} x \kw{ret}\kw{thunk} M)} y \kw{force} y)}/y] \tag{$\u F\eta$}\\
&\mathrel{\gtdyn\ltdyn}
N[\kw{thunk}{(\bindXtoYinZ {\kw{force} w} y \kw{force} y)}/y][\kw{thunk}(\bindXtoYinZ {\kw{force} z} x \kw{ret}\kw{thunk} M)/w] \tag{$U\beta$}\\
&\mathrel{\gtdyn\ltdyn} (\bindXtoYinZ {\kw{force} w} y N)[\kw{thunk} (\bindXtoYinZ {\kw{force} z} x \kw{ret} \kw{thunk} M)/w] \tag{$N$ linear}\\
&\mathrel{\gtdyn\ltdyn} (\bindXtoYinZ {(\bindXtoYinZ {\kw{force} z} x \kw{ret} \kw{thunk} M)} y N) \tag{$U\beta$}\\
&\mathrel{\gtdyn\ltdyn} (\bindXtoYinZ {\kw{force} z} x \bindXtoYinZ {\kw{ret}\kw{thunk} M} y N) \tag{$\u F\eta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{force} z} x N[\kw{thunk} M/y] \tag{$\u F\beta$}
\end{align*}
\end{proof}
\begin{lemma}[Complex Stacks Simplify to Linear Terms]
If $\Gamma\,\,|\,\, \bullet : \u B \vdash S : \u C$ is a (possibly)
complex stack, then $\Gamma, z : U\u B \vdash \simpp{S} : \u C$ is linear in $z$.
\end{lemma}
\begin{longproof}
There are $4$ classes of rules for complex stacks: those that are
rules for simple stacks ($\bullet$, computation type elimination
forms), introduction rules for negative computation types where the
subterms are complex stacks, elimination of positive value types
where the continuations are complex stacks and finally application
to a complex value.
The rules for simple stacks are easy: they follow immediately from
the fact that forcing to a stack is linear and that complex stacks
compose. For the negative introduction forms, we have to show that
binding commutes with introduction forms. For pattern matching
forms, we just need commuting conversions. For function application,
we use the lemma that binding a thunkable in a linear term is
linear.
\begin{enumerate}
\item $\bullet$: This is just saying that $\kw{force} z$ is linear,
which we showed above.
\item $\to$ elim: We need to show, assuming that $\Gamma, x : U\u B
\vdash M : A \to \u C$ is linear in $x$ and $\Gamma \vdash N : \u F A$
is thunkable, that
\[
\bindXtoYinZ N y M\,y
\]
is linear in $x$.
\begin{align*}
&\bindXtoYinZ N y (M[\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}/x])\,y\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ N y (\bindXtoYinZ {\kw{force} z} x M)\,y \tag{$M$ linear in $x$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ N y \bindXtoYinZ {\kw{force} z} x M\,y \tag{$\u F\eta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{force} z} x \bindXtoYinZ N y M\,y\tag{thunkables are central}
\end{align*}
\item $\to$ intro
\begin{align*}
& \lambda y:A. M[\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}/x]\\
&\mathrel{\gtdyn\ltdyn} \lambda y:A. \bindXtoYinZ {\kw{force} z} x M \tag{$M$ is linear}\\
&\mathrel{\gtdyn\ltdyn} \lambda y:A. \bindXtoYinZ {\kw{force} z} x (\lambda y:A. M)\, y \tag{$\to\beta$}\\
&\mathrel{\gtdyn\ltdyn} \lambda y:A. (\bindXtoYinZ {\kw{force} z} x (\lambda y:A. M))\, y \tag{$\u F\eta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{force} z} x (\lambda y:A. M) \tag{$\to\eta$}
\end{align*}
\item $\top$ intro
We need to show
\[ \bindXtoYinZ {\kw{force} z} w \{\} \mathrel{\gtdyn\ltdyn} \{\} \]
which is immediate by $\top\eta$.
\item $\mathbin{\&}$ intro
\begin{align*}
& \pairone{M[\kw{thunk} {(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}/x]}\\
&\pairtwo{N[\kw{thunk} {(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}/x]}\\
&\mathrel{\gtdyn\ltdyn} \pairone{\bindXtoYinZ {\kw{force} z} x M}\tag{$M, N$ linear}\\
&\qquad \pairtwo{\bindXtoYinZ {\kw{force} z} x N}\\
&\mathrel{\gtdyn\ltdyn} \pairone{\bindXtoYinZ {\kw{force} z} x {\pi \pair M N}}\tag{$\mathbin{\&}\beta$}\\
&\qquad \pairtwo{\bindXtoYinZ {\kw{force} z} x {\pi' \pair M N}}\\
&\mathrel{\gtdyn\ltdyn} \pairone{\pi({\bindXtoYinZ {\kw{force} z} x \pair M N})}\tag{$\u F\eta$}\\
&\qquad \pairtwo{\pi'({\bindXtoYinZ {\kw{force} z} x \pair M N})}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{force} z} x \pair M N\tag{$\mathbin{\&}\eta$}
\end{align*}
\item $\nu$ intro
\begin{align*}
& \kw{roll} M[\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}/x]\\
&\mathrel{\gtdyn\ltdyn} \kw{roll} (\bindXtoYinZ {\kw{force} z} x M) \tag{$M$ is linear} \\
&\mathrel{\gtdyn\ltdyn} \kw{roll} (\bindXtoYinZ {\kw{force} z} x \kw{unroll} \kw{roll} M) \tag{$\nu\beta$}\\
&\mathrel{\gtdyn\ltdyn} \kw{roll} \kw{unroll} (\bindXtoYinZ {\kw{force} z} x \kw{roll} M) \tag{$\u F\eta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{force} z} x (\kw{roll} M) \tag{$\nu\eta$}
\end{align*}
\item $\u F$ elim: Assume $\Gamma, x : U\u B \vdash M : \u F A'$ is
linear in $x$ and $\Gamma, y : A' \vdash N : \u B$; then we need to show
\[ \bindXtoYinZ M y N \]
is linear in $x$.
\begin{align*}
& \bindXtoYinZ {M[\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}/x]} y N\\
& \mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {(\bindXtoYinZ {\kw{force} z} x M)} y N\tag{$M$ is linear}\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\kw{force} z} x \bindXtoYinZ M y N\tag{$\u F\eta$}
\end{align*}
\item $0$ elim: We want to show $\Gamma, x:U\u B \vdash \kw{absurd} V :
\u C$ is linear in $x$, which means showing:
\[ \kw{absurd} V \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{force} z} x \kw{absurd} V
\]
which follows from $0\eta$
\item $+$ elim: Assuming $\Gamma, x : U\u B, y_1 : A_1 \vdash M_1 :
\u C$ and $\Gamma, x : U\u B, y_2: A_2\vdash M_2 : \u C$ are
linear in $x$, and $\Gamma \vdash V : A_1 + A_2$, we need to show
\[ \caseofXthenYelseZ V {y_1. M_1} {y_2. M_2} \]
is linear in $x$.
\begin{align*}
& \caseofX V\\
& \,\,\{ {y_1. M_1[\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}/x]}\\
& \,\,\elseZ {y_2. M_2[\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}/x]}\\
&\mathrel{\gtdyn\ltdyn} \caseofXthenYelseZ V {y_1. \bindXtoYinZ {\kw{force} z} x M_1}{y_2. \bindXtoYinZ {\kw{force} z} x M_2}\tag{$M_1,M_2$ linear}\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\kw{force} z} x \caseofXthenYelseZ V {y_1. M_1}{y_2. M_2}\tag{commuting conversion}
\end{align*}
\item $\times$ elim: Assuming $\Gamma, x:U\u B, y_1 : A_1, y_2 : A_2
\vdash M : \u C$ is linear in $x$ and $\Gamma \vdash V : A_1
\times A_2$, we need to show
\[ \pmpairWtoXYinZ V {y_1}{y_2} M \]
is linear in $x$.
\begin{align*}
&\pmpairWtoXYinZ V {y_1}{y_2} M[\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}/x]\\
&\mathrel{\gtdyn\ltdyn} \pmpairWtoXYinZ V {y_1}{y_2} \bindXtoYinZ {\kw{force} z} x M\tag{$M$ linear}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{force} z} x\pmpairWtoXYinZ V {y_1}{y_2} M\tag{comm. conv}\\
\end{align*}
\item $\mu$ elim: Assuming $\Gamma , x : U \u B, y : A[\mu X.A/X]
\vdash M : \u C$ is linear in $x$ and $\Gamma \vdash V : \mu X.A$,
we need to show
\[ \pmmuXtoYinZ V y M \]
is linear in $x$.
\begin{align*}
& \pmmuXtoYinZ V y M[\kw{thunk}{(\bindXtoYinZ {\kw{force} z} x \kw{force} x)}/x]\\
& \mathrel{\gtdyn\ltdyn} \pmmuXtoYinZ V y \bindXtoYinZ {\kw{force} z} x M\tag{$M$ linear}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{force} z} x\pmmuXtoYinZ V y M \tag{commuting conversion}
\end{align*}
\end{enumerate}
\end{longproof}
Composing this with the previous translation from GTT to CBPV*\/
shows that \emph{GTT value type upcasts are thunkable and computation
type downcasts are linear}.
Since the translation takes values and stacks to terms, it cannot
preserve substitution up to equality.
Rather, we get the following, weaker notion that says that the
translation of a syntactic substitution is equivalent to an effectful
composition.
\begin{lemma}[Compositionality of De-complexification]
\begin{enumerate}
\item If $\Gamma, x : A\,\,|\,\, \Delta\vdash E : T$ and $\Gamma \vdash V : A$
are complex terms, then
\[
\simpp{E[V/x]} \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp V} x {\simp E}
\]
\item If $\Gamma \,\,|\,\, \bullet : \u B \vdash S : \u C$ and $\Gamma
\,\,|\,\, \Delta \vdash M : \u B$, then
\[
\simpp{S[M]} \mathrel{\gtdyn\ltdyn} \simp{S}[\kw{thunk}\simp{M}/z]
\]
\end{enumerate}
\end{lemma}
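As a simple sanity check of part (1), take $E$ to be the variable $x$ itself: then $\simpp{E[V/x]} = \simp V$, and since $\simp x = \kw{ret} x$, the claim instantiates to
\[ \simp V \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp V} x \kw{ret} x \]
which is an instance of $\u F\eta$.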
\begin{longproof}
\begin{enumerate}
\item First, note that every occurrence of a variable in $\simp E$ is of
the form $\kw{ret} x$ for some variable $x$. This means we can define
substitution of a \emph{term} for a variable in a simplified term by
defining $\simp{E}[N/\kw{ret} x]$ to replace every $\kw{ret} x : \u F A$
with $N : \u F A$. Then it is an easy observation that
simplification is compositional on the nose with respect to this
notion of substitution:
\[ \simpp{E[V/x]} = \simp{E}[\simp V / \kw{ret} x] \]
Next by repeated invocation of $U\beta$,
\[ \simp{E}[\simp V/\kw{ret} x] \mathrel{\gtdyn\ltdyn} \simp{E}[\kw{force}\kw{thunk}\simp V/\kw{ret} x] \]
Then we can lift the definition of the thunk to the top-level by $\u F\beta$:
\[ \simp{E}[\kw{force}\kw{thunk}\simp V/\kw{ret} x] \mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\kw{ret}\kw{thunk} \simp V} w \simp{E}[\kw{force} w/\kw{ret} x]
\]
Then because $\simp V$ is thunkable, we can bind it at the top-level
and reduce an administrative redex away to get our desired result:
\begin{align*}
&\bindXtoYinZ {\kw{ret}\kw{thunk} \simp V} w \simp{E}[\kw{force} w/\kw{ret} x]\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp V} x \bindXtoYinZ {\kw{ret}\kw{thunk}\kw{ret} x} w \simp{E}[\kw{force} w/\kw{ret} x]\tag{$V$ thunkable}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp V} x \simp{E}[\kw{force} \kw{thunk}\kw{ret} x/\kw{ret} x]\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp V} x \simp{E}[\kw{ret} x/\kw{ret} x]\tag{$U\beta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp V} x \simp{E}\\
\end{align*}
\item Note that every occurrence of $z$ in $\simp S$ is of the form
$\kw{force} z$. This means we can define substitution of a \emph{term}
$M : \u B$ for $\kw{force} z$ in $\simp S$ by replacing $\kw{force} z$
with $M$. It is an easy observation that simplification is
compositional on the nose with respect to this notion of
substitution:
\[ \simpp{S[M/\bullet]} = \simp S[\simp M/\kw{force} z] \]
Then by repeated $U\beta$, we can replace $\simp M$ with a forced thunk:
\[ \simp S[\simp M/\kw{force} z] \mathrel{\gtdyn\ltdyn} \simp S[\kw{force}\kw{thunk} \simp M/\kw{force} z] \]
which since we are now substituting a force for a force is the
same as substituting the thunk for the variable:
\[ \simp S[\kw{force}\kw{thunk} \simp M/\kw{force} z]
\mathrel{\gtdyn\ltdyn}
\simp S[\kw{thunk} \simp M / z]
\]
\end{enumerate}
\end{longproof}
\begin{theorem}[De-complexification preserves Dynamism]
If $\Gamma \,\,|\,\, \Delta \vdash E \sqsubseteq E' : T$ then ${\Gamma, \simp
\Delta \vdash \simp E \sqsubseteq \simp{E'} : \simp T}$
\end{theorem}
\begin{longproof}
\begin{enumerate}
\item Reflexivity is translated to reflexivity.
\item Transitivity is translated to transitivity.
\item Compatibility rules are translated to compatibility rules.
\item Substitution of a Value
\[
\inferrule
{\Gamma, x : A, \simp\Delta \vdash \simp E \sqsubseteq \simp {E'} : \simp T \and \Gamma \vdash \simp V \sqsubseteq \simp {V'} : \u F A}
{\Gamma, \simp\Delta \vdash \simp{E[V/x]} \sqsubseteq \simp{E'[V'/x]} : \simp T}
\]
By the compositionality lemma, it is sufficient to show:
\[ \bindXtoYinZ {\simp V} x {\simp E} \sqsubseteq \bindXtoYinZ {\simp {V'}} {x} {\simp{E'}} \]
which follows by bind compatibility.
\item Plugging a term into a hole:
\[
\inferrule
{\Gamma, z : U{\u C} \vdash \simp {S} \sqsubseteq \simp{S'} : \u B\and
\Gamma,\simp\Delta \vdash \simp{M} \sqsubseteq \simp{M'} : \u C}
{\Gamma, \simp\Delta \vdash \simp{S[M]} \sqsubseteq \simp{S'[M']} : \u B}
\]
By compositionality, it is sufficient to show
\[ \simp{S}[\kw{thunk}{\simp M}/z] \sqsubseteq \simp{S'}[\kw{thunk}{\simp{M'}}/z] \]
which follows by thunk compatibility and the simple substitution rule.
\item Stack strictness
We need to show for $S$ a complex stack,
that
\[ \simpp{S[\mho]} \mathrel{\gtdyn\ltdyn} \mho \]
By stack compositionality we know
\[ \simpp{S[\mho]} \mathrel{\gtdyn\ltdyn} \simp{S}[{\kw{thunk} \mho/z}] \]
\begin{align*}
\simp{S}[{\kw{thunk} \mho/z}]
&\mathrel{\gtdyn\ltdyn} \simp{S}[\kw{thunk} {(\bindXtoYinZ \mho y \mho)}/z]\tag{Stacks preserve $\mho$}\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ \mho y \simp{S}[{\kw{thunk} \mho/z}] \tag{$\simp S$ is linear in $z$}\\
&\mathrel{\gtdyn\ltdyn} \mho \tag{Stacks preserve $\mho$}
\end{align*}
\item $1\beta$ By compositionality it is sufficient to show
\[\bindXtoYinZ {\kw{ret} ()} x \pmpairWtoinZ x {\simp E} \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{ret} ()} x \simp E \]
which follows by $\u F\beta, 1\beta$.
\item $1\eta$ We need to show for $\Gamma, x : 1 \,\,|\,\, \Delta \vdash E : T$
\[ \simp{E} \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{ret} x} x \pmpairWtoinZ x {\simpp{E[()/x]}}\]
after a $\u F\beta$, it is sufficient using $1\eta$ to prove:
\[ {\simpp{E[()/x]}} \mathrel{\gtdyn\ltdyn} \simp E[()/x] \]
which follows by compositionality and $\u F\beta$:
\[ {\simpp{E[()/x]}} \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{ret} ()} x {\simp E} \mathrel{\gtdyn\ltdyn} \simp{E}[()/x] \]
\item $\times\beta$ By compositionality it is sufficient to show
\begin{align*}
&\bindXtoYinZ {(\bindXtoYinZ {\simp{V_1}} {x_1} \bindXtoYinZ {\simp{V_2}} {x_2} {\kw{ret} (x_1,x_2)})} x \pmpairWtoXYinZ {x} {x_1}{x_2} {\simp E}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp{V_1}} {x_1} \bindXtoYinZ {\simp{V_2}} {x_2} \simp E
\end{align*}
which follows by $\u F\eta, \u F\beta, \times\beta$.
\item $\times\eta$ We need to show for $\Gamma, x : A_1\times A_2
\,\,|\,\,\Delta \vdash E : T$ that
\[ \simp{E} \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{ret} x} x \pmpairWtoXYinZ x {x_1}{x_2} \simpp{E[(x_1,x_2)/x]} \]
by $\u F\beta,\times\eta$ it is sufficient to show
\[ \simp{E[(x_1,x_2)/x]} \mathrel{\gtdyn\ltdyn} \simp{E}[(x_1,x_2)/x] \]
Which follows by compositionality:
\begin{align*}
&\simp{E[(x_1,x_2)/x]}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{ret} x_1}{x_1}\bindXtoYinZ {\kw{ret} x_2}{x_2} \bindXtoYinZ {\kw{ret} (x_1,x_2)} x \simp E\tag{compositionality}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{ret} (x_1,x_2)} x \simp E\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn} \simp E[(x_1,x_2)/x]
\end{align*}
\item $0\eta$
We need to show for any $\Gamma, x : 0 \,\,|\,\, \Delta \vdash E : T$
that
\[ \simp E \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{ret} x} x \kw{absurd} x \]
which follows by $0\eta$
\item $+\beta$ Without loss of generality, we do the $\kw{inl}$ case
By compositionality it is sufficient to show
\[
\bindXtoYinZ {(\bindXtoYinZ {\simp V} x {\kw{inl} x})} x \caseofXthenYelseZ x {x_1. \simp E_1}{x_2. \simp E_2}
\mathrel{\gtdyn\ltdyn} \simp{E_1[V/x_1]}
\]
which holds by $\u F\eta,\u F\beta, +\beta$
\item $+\eta$ We need to show for any $\Gamma, x:A_1+A_2
\,\,|\,\,\Delta\vdash E :T$ that
\[ \simp E \mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\kw{ret} x} x \caseofXthenYelseZ x {x_1. \simpp{E[\kw{inl} x_1/x]}}{x_2. \simpp{E[\kw{inr} x_2/x]}} \]
\begin{align*}
&\simp E\\
&\mathrel{\gtdyn\ltdyn} \caseofXthenYelseZ x {x_1. \simp{E}[\kw{inl} x_1/x]}{x_2. \simp{E}[\kw{inr} x_2/x]} \tag{$+\eta$}\\
&\mathrel{\gtdyn\ltdyn}
\caseofXthenYelseZ x {x_1. \bindXtoYinZ {\kw{ret} \kw{inl} x_1} x \simp{E}}{x_2. \bindXtoYinZ {\kw{ret} \kw{inr} x_2} x \simp{E}}\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn}
\caseofXthenYelseZ x {x_1. \simpp{E[\kw{inl} x_1/x]}}{x_2. \simpp{E[\kw{inr} x_2/x]}}\tag{compositionality}\\
&\mathrel{\gtdyn\ltdyn}
\bindXtoYinZ {\kw{ret} x} x \caseofXthenYelseZ x {x_1. \simpp{E[\kw{inl} x_1/x]}}{x_2. \simpp{E[\kw{inr} x_2/x]}}\tag{$\u F\beta$}
\end{align*}
\item $\mu\beta$ By compositionality it is sufficient to show
\begin{align*}
&\bindXtoYinZ {(\bindXtoYinZ {\simp V} y {\kw{ret} \kw{roll} y})} x \pmmuXtoYinZ {x} y {\simp E}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp V} y \simp E
\end{align*}
which follows by $\u F\eta, \u F\beta, \mu\beta$.
\item $\mu\eta$ We need to show for $\Gamma, x : \mu X. A \,\,|\,\,\Delta \vdash E : T$ that
\[ \simp{E} \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{ret} x} x \pmmuXtoYinZ x y \simpp{E[\kw{roll} y/x]} \]
by $\u F\beta,\mu\eta$ it is sufficient to show
\[ \simp{E[\kw{roll} y/x]} \mathrel{\gtdyn\ltdyn} \simp{E}[\kw{roll} y/x] \]
Which follows by compositionality:
\begin{align*}
&\simp{E[\kw{roll} y/x]}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{ret} y} y \bindXtoYinZ {\kw{ret} \kw{roll} y} x \simp E\tag{compositionality}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {{\kw{ret}\kw{roll} y}} x \simp E\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn} \simp E[\kw{roll} y/x] \tag{$\u F\beta$}
\end{align*}
\item $U\beta$ We need to show
\[ \bindXtoYinZ {\kw{ret} \kw{thunk} \simp M} x {\kw{force} x} \mathrel{\gtdyn\ltdyn} \simp M \]
which follows by $\u F\beta, U\beta$
\item $U\eta$ We need to show for any $\Gamma \vdash V : U\u B$ that
\[ \simp V \mathrel{\gtdyn\ltdyn} \kw{ret} \kw{thunk}{(\bindXtoYinZ {\simp V} x \kw{force} x)}\]
By compositionality it is sufficient to show
\[ \simp V \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp V} x \kw{ret}\kw{thunk}{(\bindXtoYinZ {\kw{ret} x} x \kw{force} x)}\]
which follows by $U\eta$ and some simple reductions:
\begin{align*}
&\bindXtoYinZ {\simp V} x \kw{ret}\kw{thunk}{(\bindXtoYinZ {\kw{ret} x} x \kw{force} x)}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp V} x\kw{ret}\kw{thunk}{\kw{force} x}\tag{$\u F\beta$}\\
&\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp V} x\kw{ret} x\tag{$U\eta$}\\
&\mathrel{\gtdyn\ltdyn} \simp V\tag{$\u F\eta$}
\end{align*}
\item $\to\beta$
By compositionality it is sufficient to show
\[ \bindXtoYinZ {\simp V} x (\lambda x:A. \simp M)\,x
\mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\simp V} x \simp M \]
which follows by $\to\beta$
\item $\to\eta$ We need to show
\[ z : U(A \to \u B) \vdash
\kw{force} z \mathrel{\gtdyn\ltdyn}
\lambda x:A. \bindXtoYinZ {\kw{ret} x} x (\kw{force} z)\,x
\]
which follows by $\u F\beta, \to\eta$
\item $\top\eta$ We need to show
\[ z : U\top \vdash \kw{force} z \mathrel{\gtdyn\ltdyn} \{\} \]
which is exactly $\top\eta$.
\item $\mathbin{\&}\beta$ Immediate by simple $\mathbin{\&}\beta$.
\item $\mathbin{\&}\eta$ We need to show
\[ z : U(\u B_1\mathbin{\&}\u B_2) \vdash \kw{force} z \mathrel{\gtdyn\ltdyn} \pair{\pi\kw{force} z}{\pi'\kw{force} z}\]
which is exactly $\mathbin{\&}\eta$
\item $\nu\beta$ Immediate by simple $\nu\beta$
\item $\nu\eta$ We need to show
\[ z : U(\nu \u Y. \u B) \vdash \kw{force} z \mathrel{\gtdyn\ltdyn} \kw{roll}\kw{unroll} z \]
which is exactly $\nu\eta$
\item $\u F\beta$
We need to show
\[ \bindXtoYinZ {\simp V} x \simp M \mathrel{\gtdyn\ltdyn} \simp{M[V/x]}\]
which is exactly the compositionality lemma.
\item $\u F\eta$ We need to show
\[ z : U(\u F A) \vdash \kw{force} z \mathrel{\gtdyn\ltdyn} \bindXtoYinZ {\kw{force} z} x \bindXtoYinZ {\kw{ret} x} x \kw{ret} x \]
which follows by $\u F\beta,\u F\eta$
\end{enumerate}
\end{longproof}
\begin{theorem}[Complex CBPV is Conservative over CBPV]
If $M, M'$ are terms in CBPV and $M \sqsubseteq M'$ is provable in CBPV*\/,
then $M \sqsubseteq M'$ is provable in CBPV.
\end{theorem}
\begin{longproof}
Because de-complexification preserves dynamism, $\simp M \sqsubseteq
\simp{M'}$ in simple CBPV. Then it follows because
de-complexification is equivalent to identity (in CBPV):
\[ M \mathrel{\gtdyn\ltdyn} \simp M \sqsubseteq \simp {M'} \mathrel{\gtdyn\ltdyn} M' \]
\end{longproof}
\end{longonly}
\section{Operational Model of GTT}
\label{sec:operational}
In this section, we establish a model of our CBPV inequational theory
using a notion of observational approximation based on the CBPV
operational semantics.
By composition with the axiomatic graduality theorem, this establishes
the \emph{operational graduality} theorem, i.e., a theorem analogous
to the \emph{dynamic gradual guarantee}~\cite{refined}.
\subsection{Call-by-push-value operational semantics}
We use a small-step operational semantics for CBPV
\ifshort
with the following rules (excerpt):
\fi
\iflong
in figure
\ref{fig:cbpv-operational-semantics}.
\begin{figure}
\fi
\begin{small}
\begin{minipage}[t]{0.65\textwidth}
\[
\begin{array}{rcl}
S[\mho] &\stepsin 0& \mho\\
\iflong
S[\caseofXthenYelseZ{\kw{inl} V}{x_1. M_1}{x_2. M_2}] &\stepsin 0 & S[M_1[V/x_1]]\\
S[\caseofXthenYelseZ{\kw{inr} V}{x_1. M_1}{x_2. M_2}] &\stepsin 0 & S[M_2[V/x_2]]\\
\fi
S[\pmpairWtoXYinZ{(V_1,V_2)}{x_1}{x_2}{M}] &\stepsin 0 & S[M[V_1/x_1,V_2/x_2]]\\
S[\pmmuXtoYinZ{\rollty A V}{x}{M}] &\stepsin 1 & S[M[V/x]]\\
S[\kw{force}\kw{thunk} M] &\stepsin 0 & S[M]\\
\iflong
S[\letXbeYinZ V x M] &\stepsin 0 & S[M[V/x]]\\
\fi
S[\bindXtoYinZ {\kw{ret} V} x M] &\stepsin 0 & S[M[V/x]]\\
S[(\lambda x:A. M)\,V] &\stepsin 0 & S[M[V/x]]\\
\iflong
S[\pi \pair{M}{M'}] &\stepsin 0 & S[M]\\
S[\pi' \pair{M}{M'}] &\stepsin 0 & S[M']\\
\fi
S[\kw{unroll} \rollty{\u B} M] &\stepsin 1 & S[M]\\
\end{array}
\]
\end{minipage}%
\begin{minipage}[t]{0.27\textwidth}
\begin{mathpar}
\inferrule
{ }
{M \bigstepsin 0 M}
\vspace{1.3em}
\inferrule
{M_1 \stepsin{i} M_2 \and M_2 \bigstepsin j M_3}
{M_1 \bigstepsin {i+j} M_3}
\end{mathpar}
\end{minipage}
\end{small}
\iflong
\caption{CBPV Operational Semantics}
\label{fig:cbpv-operational-semantics}
\end{figure}
\fi
This is morally the same as in \citet{levy03cbpvbook}, but we present
stacks in a manner similar to Hieb-Felleisen style evaluation
contexts\iflong(rather than as an explicit stack machine with stack frames)\fi.
We also make the step relation count unrollings of a recursive or
corecursive type, for the step-indexed logical relation later.
The operational semantics is only defined for terms of type
$\cdot \vdash M : \u F (1+1)$, which we take as the type of whole
programs.
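For example, taking $S = \bullet$ in the sequencing rule, the whole program $\bindXtoYinZ {\kw{ret} V} x \kw{ret} x$ steps to $\kw{ret} V$ with cost $0$; and taking $S = \bindXtoYinZ \bullet x \kw{ret} x$ in the first rule, $\bindXtoYinZ \mho x \kw{ret} x \stepsin 0 \mho$: an error discards its surrounding stack.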
\iflong
We can then observe the following standard operational properties. (We
write $M \mapsto N$ with no index when the index is irrelevant.)
\begin{lemma}[Reduction is Deterministic]
If $M \mapsto M_1$ and $M \mapsto M_2$, then $M_1 = M_2$.
\end{lemma}
\begin{lemma}[Subject Reduction]
If $\cdot \vdash M : \u F A$ and $M \mapsto M'$ then
$\cdot \vdash M' : \u F A$.
\end{lemma}
\begin{lemma}[Progress]
If $\cdot \vdash M : \u F A$ then one of the following holds:
\begin{mathpar}
M = \mho \and M = \kw{ret} V \text{ with } V:A \and \exists M'.~ M \mapsto M'
\end{mathpar}
\end{lemma}
\fi
\begin{shortonly}
It is easy to see that the operational semantics is deterministic
and that progress and type preservation hold, which allows us to
define the ``final result'' of a computation as follows:
\end{shortonly}
\begin{longonly}
The standard progress-and-preservation properties allow us to define
the ``final result'' of a computation as follows:
\end{longonly}
\begin{corollary}[Possible Results of Computation]
For any $\cdot \vdash M : \u F 2$,
\begin{longonly}
one of the following is true:
\begin{mathpar}
M \Uparrow \and M \Downarrow \mho\and M \Downarrow \kw{ret} \texttt{true} \and
M \Downarrow \kw{ret} \texttt{false}
\end{mathpar}
\end{longonly}
\begin{shortonly}
either $M \Uparrow$ or $M \Downarrow \mho$ or $M \Downarrow \kw{ret}
\texttt{true}$ or $M \Downarrow \kw{ret} \texttt{false}$.
\end{shortonly}
\end{corollary}
\begin{longproof}
We define $M \Uparrow$ to hold when, whenever $M \bigstepsin{i} N$,
there exists $N'$ with $N \mapsto N'$. For the terminating results, we
define $M \Downarrow R$ to hold if there exists some $i$ with $M
\bigstepsin{i} R$. Then we prove the result by coinduction on
execution traces. If $M \in \{ \mho, \kw{ret}\texttt{true}, \kw{ret}\texttt{false} \}$ then we
are done, otherwise by progress, $M \mapsto M'$, so we need only
observe that each of the cases above is preserved by $\mapsto$.
\end{longproof}
\begin{definition}[Results]
The possible results of a computation are $ \Omega, \mho,
\kw{ret} \texttt{true}$ and $\kw{ret} \texttt{false}$. We denote a result by $R$, and define a
function $\text{result}$ which takes a program $\cdot \vdash M : \u F 2$,
and returns its end-behavior, i.e., $\text{result}(M)= \Omega$ if $M
\Uparrow$ and otherwise $M \Downarrow \text{result}(M)$.
\end{definition}
\subsection{Observational Equivalence and Approximation}
\label{sec:obs-equiv-approx}
Next, we define observational equivalence and approximation in CBPV.
\begin{longonly}
The (standard) definition of observational equivalence is that we
consider two terms (or values) to be equivalent when replacing one
with the other in any program text produces the same overall resulting
computation.
\end{longonly}
\ Define a context $C$ to be a term/value/stack with a single $[\cdot]$ as
some subterm/value/stack, and define a typing $C : (\Gamma \vdash \u B)
\Rightarrow (\Gamma' \vdash \u B')$ to hold when for any $\Gamma \vdash
M : \u B$, $\Gamma' \vdash C[M] : \u B'$ (and similarly for
values/stacks). Using contexts, we can lift any relation on
\emph{results} to relations on open terms, values and stacks.
\begin{definition}[Contextual Lifting]
Given any relation ${\sim} \subseteq \text{Result}^2$, we can define
its \emph{observational lift} $\ctxize\sim$ to be the typed relation
defined by
\[ \Gamma \,\,|\,\, \Delta \vDash E \ctxize\sim E' \in T = \forall C : (\Gamma\,\,|\,\,\Delta \vdash T) \Rightarrow (\cdot \vdash \u F2).~ \text{result}(C[E]) \sim \text{result}(C[E'])\]
\end{definition}
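In particular, taking $C = [\cdot]$ with the typing $(\cdot \vdash \u F 2)
\Rightarrow (\cdot \vdash \u F 2)$ shows that the observational lift refines
the underlying relation on closed programs:
\[
\cdot \vDash M \ctxize\sim M' \in \u F 2 \implies \text{result}(M) \sim \text{result}(M').
\]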
\begin{longfigure}
\begin{small}
\begin{mathpar}
\begin{array}{rcl}
C_V & ::= & [\cdot] \mid \rollty{\mu X.A}C_V \mid \kw{inl}{C_V} \mid \kw{inr}{C_V} \mid (C_V,V)\mid(V,C_V)\mid \kw{thunk}{C_M}\\
\\
C_M & ::= & [\cdot] \mid \letXbeYinZ {C_V} x M \mid \letXbeYinZ V x
C_M \mid \pmmuXtoYinZ {C_V} x M \mid\pmmuXtoYinZ V x C_M \\
& & \mid \rollty{\nu \u Y.\u B} C_M \mid \kw{unroll} C_M \mid \kw {abort}{C_V} \mid \caseofXthenYelseZ {C_V} {x_1. M_1}{x_2.M_2} \\
& &
\mid\caseofXthenYelseZ V {x_1. C_M}{x_2.M_2} \mid\caseofXthenYelseZ
V {x_1. M_1}{x_2.C_M} \mid \pmpairWtoinZ {C_V} M\\
& & \mid \pmpairWtoinZ V C_M \mid \pmpairWtoXYinZ {C_V} x y M\mid \pmpairWtoXYinZ V x y C_M
\mid \kw{force}{C_V} \\
& & \mid \kw{ret}{C_V} \mid \bindXtoYinZ{C_M}{x}{N}
\mid\bindXtoYinZ{M}{x}{C_M} \mid \lambda x:A.C_M \mid C_M\,V \mid M\,C_V \\
& & \mid \pair{C_M}{M_2}\mid \pair{M_1}{C_M} \mid \pi C_M \mid \pi' C_M
\\
C_S & ::= & \pi C_S \mid \pi' C_S \mid S\,C_V\mid C_S\,V\mid \bindXtoYinZ {C_S} x M \mid \bindXtoYinZ S x C_M
\end{array}
\end{mathpar}
\end{small}
\caption{CBPV Contexts}
\end{longfigure}
\begin{shortonly}
The contextual lifting $\ctxize\sim$ is a preorder or equivalence
relation whenever the original relation $\sim$ is, and all $\sim$'s we
use will be at least preorders, so we write $\trianglelefteq$ instead of
$\sim$ for a relation on results.
\end{shortonly}
\begin{longonly}
The contextual lifting $\ctxize\sim$ inherits much structure of the
original relation $\sim$ as the following lemma shows.
This justifies calling $\ctxize\sim$ a contextual preorder when $\sim$
is a preorder (reflexive and transitive) and similarly a contextual
equivalence when $\sim$ is an equivalence (preorder and symmetric).
\begin{lemma}[Contextual Preorder, Equivalence]
  If $\sim$ is reflexive, symmetric or transitive, then for each
  typing, $\ctxize\sim$ is reflexive, symmetric or transitive as well,
  respectively.
\end{lemma}
In the remainder of the paper we work only with relations that are at
least preorders so we write $\trianglelefteq$ rather than $\sim$.
\end{longonly}
\begin{shortonly}
\noindent Three important relations arise as liftings: Equality of results lifts to observational equivalence
($\ctxize=$). The preorder generated by $\mho \sqsubseteq R$ (i.e. the other
three results are unrelated maximal elements) lifts to the notion of
\emph{error approximation} used in \citet{newahmed18} to prove the
graduality property ($\ctxize\sqsubseteq$). The preorder generated by
$\Omega \preceq R$ lifts to the standard notion of \emph{divergence
approximation} ($\ctxize\preceq$).
\end{shortonly}
\begin{longonly}
The most famous use of lifting is for observational equivalence,
which is the lifting of equality of results ($\ctxize=$), and we will show that
$\mathrel{\gtdyn\ltdyn}$ proofs in GTT imply observational equivalences.
However, as shown in \citet{newahmed18}, the graduality property is
defined in terms of an observational \emph{approximation} relation
$\sqsubseteq$ that places $\mho$ as the least element, and every other
element as a maximal element.
Note that this is \emph{not} the standard notion of observational
approximation, written $\preceq$, which makes $\Omega$ a least
element and every other element a maximal element.
To distinguish these, we call $\sqsubseteq$ \emph{error} approximation and
$\preceq$ \emph{divergence} approximation.
We present these graphically (with two more) in Figure
\ref{fig:result-orders}.
\end{longonly}
\iflong
\begin{figure}
\begin{small}
\begin{minipage}{0.45\textwidth}
\begin{center}
\textbf{Divergence Approx. $\preceq$}\\
\end{center}
\begin{tikzcd}
\kw{ret}\texttt{false} \arrow[rd, no head] & \kw{ret} \texttt{true} \arrow[d, no head] & \mho \arrow[ld, no head] \\
& \Omega &
\end{tikzcd}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\begin{center}
\textbf{
Error Approx. $\sqsubseteq$}
\end{center}
\begin{tikzcd}
\kw{ret}\texttt{false} \arrow[rd, no head] & \kw{ret} \texttt{true} \arrow[d, no head] & \Omega \arrow[ld, no head] \\
& \mho &
\end{tikzcd}
\end{minipage}
\\\vspace{1em}
\begin{minipage}{0.45\textwidth}
\begin{center}
\textbf{Error Approx. up to left-divergence
$\errordivergeleft$}\\
\end{center}
\begin{tikzcd}
\kw{ret}\texttt{false} \arrow[rd, no head] & & \kw{ret} \texttt{true} \arrow[ld, no head] \\
& \mho , \Omega &
\end{tikzcd}
\end{minipage}
\begin{minipage}{0.45\textwidth}
\vspace{1em}
\begin{center}
\textbf{Error Approx. up to right-divergence}
$\errordivergeright$\\
\end{center}
\begin{tikzcd}
& \Omega \arrow[ld, no head] \arrow[rd, no head] & \\
\kw{ret}\texttt{false} \arrow[rd, no head] & & \kw{ret} \texttt{true} \arrow[ld, no head] \\
& \mho &
\end{tikzcd}
\end{minipage}
\\\vspace{1em}
\begin{minipage}{0.45\textwidth}
\vspace{1em}
\begin{center}
\textbf{Error Approx. up to right-divergence Op}
$\errordivergerightop$\\
\end{center}
\begin{tikzcd}
& \mho \arrow[ld, no head] \arrow[rd, no head] & \\
\kw{ret}\texttt{false} \arrow[rd, no head] & & \kw{ret} \texttt{true} \arrow[ld, no head] \\
& \Omega &
\end{tikzcd}
\end{minipage}
\end{small}
\caption{Result Orderings}
\label{fig:result-orders}
\end{figure}
\fi
The goal of this section is to prove that a symmetric equality $E \mathrel{\gtdyn\ltdyn}
E'$ in CBPV (i.e., $E \sqsubseteq E'$ and $E' \sqsubseteq E$) implies contextual
equivalence $E \ctxize= E'$ and that an inequality $E \sqsubseteq E'$ in CBPV
implies error approximation $E \ctxize\sqsubseteq E'$, establishing graduality of the operational model\ifshort .\else :\fi
\begin{longonly}
\begin{small}
\begin{mathpar}
\inferrule{\Gamma \,\,|\,\, \Delta \vdash E \mathrel{\gtdyn\ltdyn} E' : T}{\Gamma \,\,|\,\, \Delta \vDash E \ctxize= E' \in T}\and
\inferrule{\Gamma \,\,|\,\, \Delta \vdash E \sqsubseteq E' : T}{\Gamma \,\,|\,\, \Delta \vDash E \ctxize\sqsubseteq E' \in T}
\end{mathpar}
\end{small}
\end{longonly}
Because we have non-well-founded $\mu/\nu$ types, we use a
\emph{step-indexed logical relation} to prove properties about the
contextual lifting of certain preorders $\trianglelefteq$ on results.
In step-indexing, the \emph{infinitary} relation given by
$\ctxize\trianglelefteq$ is related to the set of all of its \emph{finitary
approximations} $\ix\trianglelefteq i$, which ``time out'' after observing
$i$ steps of evaluation and declare that the
terms \emph{are} related.
\begin{shortonly}
A preorder $\trianglelefteq$ is only recoverable from its finite
approximations if $\Omega$ is a \emph{least} element, $\Omega
\trianglelefteq R$, because a diverging term will cause a time out for
any finite index. We call a preorder with $\Omega \trianglelefteq R$ a
\emph{divergence preorder}.~
\end{shortonly}
\begin{longonly}
This means that the original relation is only recoverable from the
finite approximations if $\Omega$ is always related to another
element: if the relation is a preorder, we require that $\Omega$ is
a \emph{least} element.
We call such a preorder a \emph{divergence preorder}.
\begin{definition}[Divergence Preorder]
A preorder on results $\trianglelefteq$ is a divergence preorder if
$\Omega \trianglelefteq R$ for all results $R$.
\end{definition}
\end{longonly}
But this presents a problem, because \emph{neither} of our intended
relations ($=$ and $\sqsubseteq$) is a divergence preorder; rather both have
$\Omega$ as a \emph{maximal} element.
\begin{shortonly}
For observational equivalence, because contextual equivalence is
symmetric divergence approximation ($M \ctxize= N$ iff $M
\ctxize\preceq N$ and $N \ctxize\preceq M$), we can use a step-indexed
logical relation to characterize $\preceq$, and then obtain results
about observational equivalence from that~\cite{ahmed06:lr}.
A similar move works for error
approximation~\cite{newahmed18}, but since $R \sqsubseteq R'$ is \emph{not} symmetric, it is decomposed as the conjunction of two
orderings: error approximation up to divergence on the left
$\errordivergeleft$ (the preorder where $\mho$ and $\Omega$ are both
minimal: $\mho \preceq\sqsubseteq R$ and $\Omega \preceq\sqsubseteq R$) and
error approximation up to divergence on the right $\errordivergeright$
(the diamond preorder where $\mho$ is minimal and $\Omega$ is
maximal, with $\texttt{true}$/$\texttt{false}$ in between). Then $\preceq\sqsubseteq$ and the
\emph{opposite} of $\sqsubseteq\succeq$ (written $\errordivergerightop$)
are divergence preorders, so we can use a step-indexed logical
relation to characterize them. Overall, because $=$ is the symmetrization
of $\sqsubseteq$, and $\sqsubseteq$ is the conjunction of $\errordivergeleft$ and
$\errordivergeright$, and contextual lifting commutes with conjunction
and opposites, it will suffice to develop logical relations for
divergence preorders.
\end{shortonly}
\begin{longonly}
However, there is a standard ``trick'' for subverting this obstacle in
the case of contextual equivalence~\cite{ahmed06:lr}: we notice
that we can define equivalence as the symmetrization of divergence
approximation, i.e., $M \ctxize= N$ if and only if $M \ctxize\preceq
N$ and $N \ctxize\preceq M$, and since $\preceq$ has $\Omega$ as
a least element, we can use a step-indexed relation to prove it.
As shown in \citet{newahmed18}, a similar trick works for error
approximation, but since $\sqsubseteq$ is \emph{not} an equivalence
relation, we decompose it rather into two \emph{different} orderings:
error approximation up to divergence on the left $\errordivergeleft$ and
error approximation up to divergence on the right $\errordivergeright$,
also shown in figure \ref{fig:result-orders}.
Note that $\errordivergeleft$ is a preorder, but not a poset because
$\mho, \Omega$ are order-equivalent but not equal.
Then clearly $\errordivergeleft$ is a divergence preorder and the
\emph{opposite} of $\errordivergeright$, written $\errordivergerightop$
is a divergence preorder.
Then we can completely reduce the problem of proving $\ctxize=$ and
$\ctxize\sqsubseteq$ results to proving results about divergence preorders
by the following observations.
\newcommand{\ctxsimi}[1]{\mathrel{\sim_{#1}^{\text{ctx}}}}
\begin{lemma}[Decomposing Result Preorders] \label{lem:decomposing-result}
Let $R, S$ be results.
\begin{enumerate}
\item $R = S$ if and only if $R \sqsubseteq S$ and $S \sqsubseteq R$.
\item $R = S$ if and only if $R \preceq S$ and $S \preceq R$.
\item $R \errordivergeleft S$ iff $R \sqsubseteq S$ or $R \preceq S$.
\item $R \errordivergeright S$ iff $R \sqsubseteq S$ or $R \succeq S$.
\end{enumerate}
\end{lemma}
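For instance, for part 3: $\mho \errordivergeleft \kw{ret}\,\texttt{true}$
holds because $\mho \sqsubseteq \kw{ret}\,\texttt{true}$, whereas $\Omega
\errordivergeleft \kw{ret}\,\texttt{true}$ holds because $\Omega \preceq
\kw{ret}\,\texttt{true}$; neither $\sqsubseteq$ nor $\preceq$ alone relates
both pairs.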
In the following, we write $\sim^\circ$ for the opposite of a relation
($x \sim^\circ y$ iff $y \sim x$), $\Rightarrow$ for
containment/implication ($\sim \Rightarrow \sim'$ iff $x \sim y$ implies
$x \sim' y$), $\Leftrightarrow$ for bicontainment/equality, $\vee$ for
union ($x (\sim \vee \sim') y$ iff $x \sim y$ or $x \sim' y$), and
$\wedge$ for intersection ($x (\sim \wedge \sim') y$ iff $x \sim y$ and $x \sim' y$).
\begin{lemma}[Contextual Lift commutes with Conjunction] \label{lem:ctx-commutes-conjunction}
\[
\ctxize{(\simsub 1 \wedge \simsub 2)} \Leftrightarrow \ctxize{\simsub 1} \wedge \ctxize{\simsub 2}
\]
\end{lemma}
\begin{lemma}[Contextual Lift commutes with Dualization] \label{lem:ctx-commutes-dual}
\[
\ctxize{\sim^\circ} \Leftrightarrow \ctxize{\sim}^\circ
\]
\end{lemma}
\begin{lemma}[Contextual Decomposition Lemma] \label{lem:contextual-decomposition}
Let $\sim$ be a reflexive relation $(= \Rightarrow \sim)$, and $\leqslant$
be a reflexive, antisymmetric relation (${=} \Rightarrow {\leqslant}$ and
$(\leqslant \wedge {\leqslant^\circ}) \Leftrightarrow {=}$). Then
\[
\ctxize\sim \Leftrightarrow \ctxize{(\sim \vee \leqslant)} \wedge (\ctxize{(\sim^\circ \vee \leqslant)})^\circ
\]
\end{lemma}
\begin{proof}
Note that despite the notation, $\leqslant$ need not be assumed to be
transitive.
Reflexive relations form a lattice with $\wedge$ and $\vee$ with $=$ as
$\bot$ and the total relation as $\top$ (e.g. $(= \vee \sim)
\Leftrightarrow \sim$ because $\sim$ is reflexive, and $(= \wedge \sim)
\Leftrightarrow =$). So we have
\[
\sim \Leftrightarrow (\sim \vee \leqslant) \wedge (\sim \vee \leqslant^\circ)
\]
because FOILing the right-hand side gives
\[
(\sim \wedge \sim) \vee (\leqslant \wedge \sim) \vee (\sim \wedge \leqslant^\circ) \vee (\leqslant \wedge \leqslant^\circ)
\]
By antisymmetry, $(\leqslant \wedge \leqslant^\circ)$ is $=$, which is the
unit of $\vee$, so it cancels. By idempotence, $(\sim \wedge \sim)$ is $\sim$.
Then by absorption, the whole thing is $\sim$.
Unlike negation, opposite does not satisfy de Morgan duality; it commutes
with both connectives: $(P \vee Q)^\circ \Leftrightarrow P^\circ \vee
Q^\circ$, and similarly for $\wedge$. But it is involutive:
$(P^\circ)^\circ \Leftrightarrow P$.
So using Lemmas~\ref{lem:ctx-commutes-conjunction}, \ref{lem:ctx-commutes-dual} we can calculate as follows:
\[
\begin{array}{rcl}
\ctxize\sim & \Leftrightarrow &\ctxize{((\sim \vee \leqslant) \wedge (\sim \vee \leqslant^\circ))} \\
& \Leftrightarrow &\ctxize{(\sim \vee \leqslant)} \wedge \ctxize{(\sim \vee \leqslant^\circ)}\\
& \Leftrightarrow &\ctxize{(\sim \vee \leqslant)} \wedge \ctxize{((\sim \vee \leqslant^\circ)^\circ)^\circ}\\
& \Leftrightarrow &\ctxize{(\sim \vee \leqslant)} \wedge \ctxize{((\sim^\circ \vee (\leqslant^\circ)^\circ)^\circ)}\\
& \Leftrightarrow &\ctxize{(\sim \vee \leqslant)} \wedge \ctxize{(\sim^\circ \vee \leqslant)^\circ}\\
& \Leftrightarrow &\ctxize{(\sim \vee \leqslant)} \wedge \ctxize{(\sim^\circ \vee \leqslant)}^\circ
\end{array}
\]
\end{proof}
As a corollary, the decomposition of contextual equivalence into diverge
approximation in \citet{ahmed06:lr} and the decomposition of dynamism in
\citet{newahmed18} are really the same trick:
\begin{corollary}[Contextual Decomposition] ~~~ \label{cor:contextual-decomposition}
\begin{enumerate}
\item $\ctxize= \mathbin{\Leftrightarrow} \ctxize{\preceq} \wedge
(\ctxize{(\preceq)})^\circ$
\item $\ctxize= \mathbin{\Leftrightarrow} \ctxize{\sqsubseteq} \wedge (\ctxize{(\sqsubseteq)})^\circ$
\item $\ctxize\sqsubseteq \mathbin{\Leftrightarrow} \ctxize{\errordivergeleft} \wedge (\ctxize{(\errordivergerightop)})^\circ$
\end{enumerate}
\end{corollary}
\begin{proof}
For part 1 (though we will not use this below), applying
Lemma~\ref{lem:contextual-decomposition} with $\sim$ taken to be $=$
(which is reflexive) and $\leqslant$ taken to be $\preceq$ (which is
reflexive and antisymmetric) gives that contextual equivalence is
symmetric contextual divergence approximation:
\[
\ctxize= \Leftrightarrow \ctxize{(= \vee \preceq)} \wedge (\ctxize{(=^\circ \vee \preceq)})^\circ
\Leftrightarrow \ctxize{\preceq} \wedge (\ctxize{(\preceq)})^\circ
\]
For part (2), the same argument with $\sim$ taken to be $=$ and
$\leqslant$ taken to be $\sqsubseteq$ (which is also antisymmetric) gives that
contextual equivalence is symmetric contextual dynamism:
\[
\ctxize= \Leftrightarrow \ctxize{\sqsubseteq} \wedge (\ctxize{(\sqsubseteq)})^\circ
\]
For part (3), applying Lemma~\ref{lem:contextual-decomposition} with $\sim$
taken to be $\sqsubseteq$ and $\leqslant$ taken to be $\preceq$ gives that
dynamism decomposes as
\[
\ctxize\sqsubseteq \Leftrightarrow \ctxize{(\sqsubseteq \vee \preceq)} \wedge (\ctxize{(\sqsubseteq^\circ \vee \preceq)})^\circ
\Leftrightarrow \ctxize{\errordivergeleft} \wedge (\ctxize{(\errordivergerightop)})^\circ
\]
Since both ${\errordivergeleft}$ and $\errordivergerightop$ are of the
form $- \vee \preceq$, both are divergence preorders. Thus, it suffices
to develop logical relations for divergence preorders below.
\end{proof}
\end{longonly}
\subsection{CBPV Step Indexed Logical Relation}
\label{sec:lr}
\begin{shortonly}
We use a logical relation to prove results about $E \ctxize\trianglelefteq
E'$ where $\trianglelefteq$ is a divergence preorder. The
``finitization'' of a divergence preorder is a relation between
\emph{programs} and \emph{results}: a program approximates a result $R$
at index $i$ if it reduces to $R$ in $< i$ steps or it ``times out'' by reducing at least $i$ times.
\end{shortonly}
\begin{longonly}
Next, we turn to the problem of proving results about $E
\ctxize\trianglelefteq E'$ where $\trianglelefteq$ is a divergence preorder.
Dealing directly with a contextual preorder is practically impossible,
so instead we develop an alternative formulation as a logical relation
that is much easier to use.
Fortunately, we can apply standard logical relations techniques to
provide an alternate definition \emph{inductively} on types.
However, since we have non-well-founded type definitions using
$\mu$ and $\nu$, our logical relation will also be defined inductively on a
\emph{step index} that times out when we've exhausted our step budget.
To bridge the gap between the indexed logical relation and the
divergence preorder we care about, we define the ``finitization'' of a
divergence preorder to be a relation between \emph{programs} and
\emph{results}: the idea is that a program approximates a result $R$
at index $i$ if it reduces to $R$ in less than $i$ steps or it reduces
at least $i$ times.
\end{longonly}
\begin{definition}[Finitized Preorder]
Given a divergence preorder $\trianglelefteq$, we define the
\emph{finitization} of $\trianglelefteq$ to be, for each natural number
$i$, a relation between programs and results
\iflong
\[ {\ix\trianglelefteq i} \subseteq \{ M \,\,|\,\, \cdot\vdash M : \u F 2\} \times \text{Results} \]
\fi
defined by
\[
M \ix \trianglelefteq i R = (\exists M'.~ M \bigstepsin{i} M') \vee (\exists (j< i). \exists R_M.~ M \bigstepsin{j} R_M \wedge R_M \trianglelefteq R)
\]
\end{definition}
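For example, suppose $M \bigstepsin{1} \kw{ret}\,\texttt{true}$, i.e., the
evaluation of $M$ unrolls exactly one recursive or corecursive type. Then for
any divergence preorder $\trianglelefteq$ and any result $R$, both $M
\ix\trianglelefteq 0 R$ and $M \ix\trianglelefteq 1 R$ hold by the timeout
disjunct, while for $i \geq 2$ we have $M \ix\trianglelefteq i R$ if and only
if $\kw{ret}\,\texttt{true} \trianglelefteq R$. A diverging term, whose trace
contains infinitely many unrollings, is related to every result at every
index, which is why we require $\Omega$ to be a least element.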
\begin{longonly}
Note that in this definition, unlike in the definition of divergence,
we only count non-well-founded steps.
This makes it slightly harder to establish the intended equivalence $M
\ix \trianglelefteq \omega R$ if and only if $\text{result}(M) \trianglelefteq R$, but
makes the logical relation theorem stronger: it proves that diverging
terms must use recursive types of some sort and so any term that does
not use them terminates.
This issue would be alleviated if we had proved type safety by a
logical relation rather than by progress and preservation.
However, the following properties of the indexed relation can easily
be established.
First, a kind of ``transitivity'' of the indexed relation with respect
to the original preorder, which is key to proving transitivity of the
logical relation.
\begin{lemma}[Indexed Relation is a Module of the Preorder]
\label{lem:module}
If $M \ix\trianglelefteq i R$ and $R \trianglelefteq R'$ then $M \ix\trianglelefteq i R'$
\end{lemma}
\begin{longproof}
If $M \bigstepsin{i} M'$ then there's nothing to show, otherwise
$M \bigstepsin{j< i} \text{result}(M)$ so it follows by transitivity of the
preorder: $\text{result}(M) \trianglelefteq R \trianglelefteq R'$.
\end{longproof}
Then we establish a few basic properties of the finitized preorder.
\begin{lemma}[Downward Closure of Finitized Preorder]
If $M \ix\trianglelefteq i R$ and $j\leq i$ then $M \ix \trianglelefteq j R$.
\end{lemma}
\begin{longproof} \hfill
  \begin{enumerate}
  \item If $M \bigstepsin{i} M_i$ then $M \bigstepsin{j} M_j$ for some $M_j$, by truncating the reduction sequence; otherwise
  \item if $M \bigstepsin{k} \text{result}(M)$ with $j \leq k < i$, then again $M \bigstepsin{j} M_j$ for some $M_j$; and
  \item if $M \bigstepsin{k} \text{result}(M)$ with $k < j \leq i$, then $\text{result}(M) \trianglelefteq R$ by assumption.
  \end{enumerate}
\end{longproof}
\begin{lemma}[Triviality at $0$]
For any $\cdot \vdash M : \u F 2$, $M \ix\trianglelefteq 0 R$
\end{lemma}
\begin{longproof}
Because $M \bigstepsin{0} M$
\end{longproof}
\begin{lemma}[Result (Anti-)reduction]
If $M \bigstepsin{i} N$ then $\text{result}(M) = \text{result}(N)$.
\end{lemma}
\begin{lemma}[Anti-reduction]
If $M \ix\trianglelefteq i R$ and $N \bigstepsin{j} M$, then $N \ix\trianglelefteq {{i+j}} R$
\end{lemma}
\begin{longproof}
\begin{enumerate}
\item If $M \bigstepsin{i} M'$ then $N \bigstepsin{i+j} M'$
\item If $M \bigstepsin{k < i} \text{result}(M)$ then $N \bigstepsin{k+j}
\text{result}(M)$ and $\text{result}(M) = \text{result}(N)$ and $k+j < i+j$.
\end{enumerate}
\end{longproof}
\end{longonly}
\begin{figure}
\begin{small}
\begin{mathpar}
\iflong
{\itylrof\trianglelefteq{i}{A}} \subseteq \{ \cdot \vdash V : A \}^2
\qquad\qquad\qquad{\itylrof\trianglelefteq{i}{\u B}}\subseteq \{ \cdot \,\,|\,\, \u B \vdash S
: \u F (1 + 1) \}^2\\
\fi
\begin{array}{rcl}
\iflong
\cdot \itylrof\trianglelefteq i {\cdot} \cdot &=& \top\\
\gamma_1,V_1/x \itylrof\trianglelefteq i {\Gamma,x:A} \gamma_2,V_2/x &=& \gamma_1 \itylrof\trianglelefteq i \Gamma \gamma_2 \wedge V_1 \itylrof\trianglelefteq i A V_2\\
\fi
V_1 \itylr i 0 V_2 &=& \bot\\
\iflong
\kw{inl} V_1 \itylr i {A + A'} \kw{inl} V_2 &= & V_1 \itylr i A V_2\\
\kw{inr} V_1 \itylr i {A + A'} \kw{inr} V_2 &= & V_1 \itylr i {A'} V_2 \\
() \itylr i 1 () &=& \top\\
\fi
(V_1,V_1') \itylr i {A \times A'} (V_2, V_2') &=& V_1 \itylr i A V_2 \wedge V_1' \itylr i {A'} V_2'\\
\rollty {\mu X. A} V_1 \itylr i {\mu X. A} \rollty {\mu X. A} V_2 &=& i = 0 \vee V_1 \itylr {i-1} {A[\mu X.A/X]} V_2\\
V_1 \itylr i {U \u B} V_2 &=& \forall j \leq i, S_1 \itylr j {\u B} S_2.~ S_1[\kw{force} V_1] \ix\trianglelefteq j \text{result}(S_2[\kw{force} V_2]) \\\\
S_1[\bullet V_1] \itylr i {A \to \u B} S_2[\bullet V_2] & = & V_1 \itylr i A V_2 \wedge S_1 \itylr {i}{\u B} S_2\\
\iflong
S_1[\pi_1 \bullet] \itylr i {\u B \mathbin{\&} \u B'} S_2[\pi_1 \bullet] &=& S_1 \itylr i {\u B} S_2\\
S_1[\pi_2 \bullet] \itylr i {\u B \mathbin{\&} \u B'} S_2[\pi_2 \bullet] &=& S_1 \itylr i {\u B'} S_2\\
S_1 \itylr i {\top} S_2 &=& \bot\\
\fi
S_1[\kw{unroll} \bullet] \itylr i {\nu \u Y. \u B} S_2[\kw{unroll} \bullet] &=& i = 0 \vee S_1 \itylr {i-1} {\u B[\nu \u Y. \u B/\u Y]} S_2\\
S_1 \itylr i {\u F A} S_2 & = & \forall j\leq i, V_1 \itylr j A V_2.~ S_1[\kw{ret} V_1] \ix\trianglelefteq j \text{result}(S_2[\kw{ret} V_2])
\end{array}
\end{mathpar}
\end{small}
\vspace{-0.1in}
\caption{Logical Relation from a Preorder $\trianglelefteq$ \ifshort (selected cases) \fi}
\label{fig:lr}
\end{figure}
\begin{shortonly}
The \emph{logical} preorder for closed values and stacks is defined in Figure
\ref{fig:lr}. For every $i$ and value type $A$, we define a relation
$\itylrof \trianglelefteq i A$ between two closed values of type $A$, and for
every $i$ and $\u B$, we define a relation between two ``closed'' stacks $\u
B \vdash S : \u F 2$ outputting the observation type $\u F 2$---the
definition is by mutual lexicographic induction on $i$ and $A/\u B$.
Two values or stacks are related if they have the same structure, where
for $\mu,\nu$ we decrement $i$ and succeed if $i = 0$. The shifts $\u
F/U$ take the \emph{orthogonal} of the relation: the set of all
stacks/values that when composed with those values/stacks are related by
$\trianglelefteq^{j \le i}$; the quantifier over $j \leq i$ is needed to make the
relation downward closed.
\end{shortonly}
\begin{longonly}
Next, we define the (closed) \emph{logical} preorder (for closed values/stacks) by induction on types and
the index $i$ in figure \ref{fig:lr}.
Specifically, for every $i$ and value type $A$ we define a relation
$\itylrof \trianglelefteq i A$ between closed values of type $A$ because
these are the only ones that will be pattern-matched against at
runtime.
The relation is defined in a type-directed fashion, the intuition being
that we relate two positive values when they are built up in the same
way: i.e., they have the same introduction form and their subterms are
related.
For $\mu$, this definition would not be well-founded, so we decrement
the step index, giving up and relating the terms if $i = 0$.
Finally $U$ is the only negative value type, and so it is treated
differently.
A thunk $V : U\u B$ cannot be inspected by pattern matching, rather
the only way to interact with it is to force its evaluation.
By the definition of the operational semantics, this only ever occurs
in the step $S[\kw{force} V]$, so (ignoring indices for a moment), we
should define $V_1 \trianglelefteq V_2$ to hold in this case when, given
$S_1 \trianglelefteq S_2$, the result of $S_2[\kw{force} V_2]$ is approximated
by $S_1[\kw{force} V_1]$.
To incorporate the indices, we have to quantify over $j \leq i$ in
this definition because we need to know that the values are related in
all futures, including ones where some other part of the term has been
reduced (consuming some steps).
Technically, this is crucial for making sure the relation is
downward-closed.
This is known as the \emph{orthogonal} of the relation, and one
advantage of the CBPV language is that it makes the use of
orthogonality \emph{explicit} in the type structure, analogous to the
benefits of using Nakano's \emph{later} modality \cite{nakano} for step indexing
(which we ironically do not do).
Next, we define when two \emph{stacks} are related.
First, we define the relation only for two ``closed'' stacks, which
both have the same type of their hole $\u B$ and both have
``output'' the observation type $\u F 2$.
The reason is that in evaluating a program $M$, steps always occur as
$S[N] \bigstepsin{} S[N']$ where $S$ is a stack of this form.
An intuition is that for negative types, two stacks are related when
they start with the same elimination form and the remainder of the
stacks are related.
For $\nu$, we handle the step indices in the same way as for $\mu$.
For $\u F A$, a stack $S[\bullet : \u F A]$ is strict in its input and
waits for its input to evaluate down to a value $\kw{ret} V$, so two
stacks with $\u F A$ holes are related when in any future world, they
produce related behavior when given related values.
We note that in the CBV restriction of CBPV, the function type is
given by $U(A \to \u F A')$ and the logical relation we have presented
reconstructs the usual definition that involves a double orthogonal.
Note that the definition is well-founded using the lexicographic
ordering on $(i, A)$ and $(i, \u B)$: either the type reduces and the
index stays the same or the index reduces.
We extend the definition to contexts via \emph{closing substitutions},
pointwise: two closing substitutions for $\Gamma$ are related at $i$
if they are related at $i$ for each $x:A \in \Gamma$.
\end{longonly}
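As a concrete example, write $\kw{nat}$ for $\mu X.\, 1 + X$ and (locally to
this example) $\overline 0 = \rollty{\kw{nat}}{(\kw{inl}\, ())}$ and
$\overline{n+1} = \rollty{\kw{nat}}{(\kw{inr}\, \overline{n})}$. Unfolding the
clauses for $\mu$ and sums, equal numerals are related at every index, whereas
distinct numerals are related only until the index suffices to expose their
first difference: for instance
\[
\overline 1 \itylr i {\kw{nat}} \overline 2 \text{ holds for } i \leq 1
\text{ but fails for } i \geq 2,
\]
since at $i \geq 2$ the relation reaches the mismatched constructors
$\kw{inl}$ and $\kw{inr}$.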
The logical preorder for open terms is defined as usual by quantifying
over all related closing substitutions, but also over all stacks to the
observation type $\u F (1+1)$:
\begin{definition}[Logical Preorder]
For a divergence preorder $\trianglelefteq$, its step-indexed logical
preorder is
\begin{shortonly}
for terms (open stack, value cases are defined in the extended version):
$\Gamma \vDash M_1 \ilrof\trianglelefteq{i} M_2 \in \u B$ iff for every $\gamma_1 \itylrof\trianglelefteq i {\Gamma} \gamma_2$ and $S_1
\itylrof\trianglelefteq i {\u B} S_2$, $S_1[M_1[\gamma_1]] \ix\trianglelefteq
i \text{result}(S_2[M_2[\gamma_2]])$.
\end{shortonly}
\begin{longonly}
\begin{enumerate}
\item $\Gamma \vDash M_1 \ilrof\trianglelefteq{i} M_2 \in \u B$ iff for every $\gamma_1 \itylrof\trianglelefteq i {\Gamma} \gamma_2$ and $S_1
\itylrof\trianglelefteq i {\u B} S_2$, $S_1[M_1[\gamma_1]] \ix\trianglelefteq
i \text{result}(S_2[M_2[\gamma_2]])$.
\item $\Gamma \vDash V_1 \ilrof\trianglelefteq{i} V_2 \in A$ iff
for every $\gamma_1 \itylrof\trianglelefteq i {\Gamma} \gamma_2$, $V_1[\gamma_1] \itylrof\trianglelefteq i A V_2[\gamma_2]$
\item $\Gamma \,\,|\,\, \u B \vDash S_1 \ilrof\trianglelefteq{i} S_2 \in \u B'$
iff for every $\gamma_1 \itylrof\trianglelefteq i {\Gamma} \gamma_2$ and
$S_1' \itylrof\trianglelefteq i {\u B'} S_2'$, $S_1'[S_1[\gamma_1]] \itylrof \trianglelefteq
    i {\u B} S_2'[S_2[\gamma_2]]$.
\end{enumerate}
\end{longonly}
\end{definition}
\begin{longonly}
We next want to prove that the logical preorder is a congruence
relation, i.e., the fundamental lemma of the logical relation.
This requires the easy lemma, that the relation on closed terms and
stacks is downward closed.
\begin{lemma}[Logical Relation Downward Closure]
For any type $T$, if $j \leq i$ then $\itylrof\trianglelefteq i T
\subseteq \itylrof\trianglelefteq j T$
\end{lemma}
\end{longonly}
Next, we show the fundamental theorem:
\begin{theorem}[Logical Preorder is a Congruence]
For any divergence preorder, the logical preorder $E \ilrof\trianglelefteq
i E'$ is \iflong a congruence relation, i.e., it is \fi closed under
applying any value/term/stack constructors to both sides.
\end{theorem}
\begin{longproof}
For each congruence rule
\[
\inferrule
{\Gamma \,\,|\,\, \Delta \vdash E_1 \sqsubseteq E_1' : T_1 \cdots}
{\Gamma' \,\,|\,\, \Delta' \vdash E_c \sqsubseteq E_c' : T_c}
\]
we prove for every $i \in \mathbb{N}$ the validity of the rule
\[
\inferrule
{\Gamma \,\,|\,\, \Delta \vDash E_1 \ilr i E_1' \in T_1\cdots }
{\Gamma \,\,|\,\, \Delta \vDash E_c \ilr i E_c' \in T_c}
\]
\begin{enumerate}
\item $\inferrule {} {\Gamma,x : A,\Gamma' \vDash x \ilr i x \in
A}$. Given $\gamma_1 \itylr i {\Gamma,x:A,\Gamma'} \gamma_2$,
then by definition $\gamma_1(x) \itylr i A \gamma_2(x)$.
\item $\inferrule{}{\Gamma \vDash \mho \ilr i \mho \in \u B}$ We
need to show $S_1[\mho] \ix\trianglelefteq i \text{result}(S_2[\mho])$. By
anti-reduction and strictness of stacks, it is sufficient to show
$\mho \ix\trianglelefteq i \text{result}(\mho)$. If $i = 0$ there is nothing to show;
otherwise, it follows by reflexivity of $\trianglelefteq$.
\item $\inferrule
{\Gamma \vDash V \ilr i V' \in A \and
\Gamma, x : A \vDash M \ilr i M' \in \u B
}
{\Gamma \vDash \letXbeYinZ V x M \ilr i \letXbeYinZ {V'} {x} {M'} \in \u B}$
Each side takes a $0$-cost step, so by anti-reduction, this reduces to
\[ S_1[M[\gamma_1,V/x]] \ix\trianglelefteq i \text{result}(S_2[M'[\gamma_2,V'/x]]) \] which follows by the assumption $\Gamma, x : A \vDash M \ilr i M' \in \u B$
\item $\inferrule
{\Gamma \vDash V \ilr i V' \in 0}
{\Gamma \vDash \kw {abort} V \ilr i \kw {abort} V' \in \u B}$.
By assumption, we get $V[\gamma_1] \itylr i {0} V'[\gamma_2]$, but this is a contradiction.
\item $\inferrule
{\Gamma \vDash V \ilr i V' \in A_1}
{\Gamma \vDash \kw{inl} V \ilr i \kw{inl} V' \in A_1 + A_2}$.
Direct from assumption, rule for sums.
\item $\inferrule
{\Gamma \vDash V \ilr i V' \in A_2}
{\Gamma \vDash \kw{inr} V \ilr i \kw{inr} V' \in A_1 + A_2}$
Direct from assumption, rule for sums.
\item $\inferrule
{\Gamma \vDash V \ilr i V' \in A_1 + A_2\and
\Gamma, x_1 : A_1 \vDash M_1 \ilr i M_1' \in \u B\and
\Gamma, x_2 : A_2 \vDash M_2 \ilr i M_2' \in \u B
}
{\Gamma \vDash \caseofXthenYelseZ V {x_1. M_1}{x_2.M_2} \ilr i \caseofXthenYelseZ {V'} {x_1. M_1'}{x_2.M_2'} \in \u B}$\\
By case analysis of $V[\gamma_1] \ilr i V'[\gamma_2]$.
\begin{enumerate}
\item If $V[\gamma_1]=\kw{inl} V_1, V'[\gamma_2] = \kw{inl} V_1'$ with
$V_1 \itylr i {A_1} V_1'$, then taking $0$ steps, by anti-reduction
the problem reduces to
\[ S_1[M_1[\gamma_1,V_1/x_1]] \ix\trianglelefteq i \text{result}(S_2[M_1'[\gamma_2,V_1'/x_1]]) \]
which follows by assumption.
\item For $\kw{inr}{}$, the same argument.
\end{enumerate}
\item $\inferrule
{}
{\Gamma \vDash () \ilr i () \in 1}$ Immediate by unit rule.
\item $\inferrule
{\Gamma \vDash V_1 \ilr i V_1' \in A_1\and
\Gamma\vDash V_2 \ilr i V_2' \in A_2}
{\Gamma \vDash (V_1,V_2) \ilr i (V_1',V_2') \in A_1 \times A_2}$
Immediate by pair rule.
\item $\inferrule
{\Gamma \vDash V \ilr i V' \in A_1 \times A_2\and
\Gamma, x : A_1,y : A_2 \vDash M \ilr i M' \in \u B
}
{\Gamma \vDash \pmpairWtoXYinZ V x y M \ilr i \pmpairWtoXYinZ {V'} {x} {y} {M'} \in \u B}$
By $V \itylr i {A_1 \times A_2} V'$, we know $V[\gamma_1] =
(V_1,V_2)$ and $V'[\gamma_2] = (V_1', V_2')$ with $V_1 \itylr i
{A_1} V_1'$ and $V_2 \itylr i {A_2} V_2'$.
Then by anti-reduction, the problem reduces to
\[ S_1[M[\gamma_1,V_1/x,V_2/y]] \ix\trianglelefteq i \text{result}(S_2[M'[\gamma_2,V_1'/x,V_2'/y]]) \]
which follows by assumption.
\item $\inferrule
{\Gamma \vDash V \ilr i V' \in A[\mu X.A/X]}
{\Gamma \vDash \rollty{\mu X.A} V \ilr i \rollty{\mu X.A} V' \in \mu X.A }$
If $i = 0$, we're done. Otherwise $i=j+1$, and our assumption is
that $V[\gamma_1] \itylr {j+1} {A[\mu X.A/X]} V'[\gamma_2]$ and we need to show
that $\kw{roll} V[\gamma_1] \itylr {j+1} {\mu X. A}\kw{roll}
V'[\gamma_2]$. By definition, we need to show $V[\gamma_1] \itylr
j {A[\mu X.A/X]} V'[\gamma_2]$, which follows by downward-closure.
\item $\inferrule
{\Gamma \vDash V \ilr i V' \in \mu X. A\and
\Gamma, x : A[\mu X. A/X] \vDash M \ilr i M' \in \u B}
{\Gamma \vDash \pmmuXtoYinZ V x M \ilr i \pmmuXtoYinZ {V'} {x} {M'} \in \u B}$
If $i = 0$, then by triviality at $0$, we're done.
Otherwise, $V[\gamma_1] \itylr {j+1} {\mu X. A} V'[\gamma_2]$ so
$V[\gamma_1] = \kw{roll} V_\mu, V'[\gamma_2] = \kw{roll} V_\mu'$ with
$V_\mu \itylr j {A[\mu X.A/X]} V_\mu'$. Then each side takes $1$ step, so by anti-reduction it is sufficient to show
\[ S_1[M[\gamma_1,V_\mu/x]] \ix\trianglelefteq j \text{result}(S_2[M'[\gamma_2,V_\mu'/x]]) \] which follows by assumption and downward closure of the stack, value relations.
\item $\inferrule {\Gamma \vDash M \ilr i M' \in \u B} {\Gamma
\vDash \kw{thunk} M \ilr i \kw{thunk} M' \in U \u B}$. We need to show
$\kw{thunk} M[\gamma_1] \itylr i {U \u B} \kw{thunk} M'[\gamma_2]$, so let
$S_1 \itylr j {\u B} S_2$ for some $j \leq i$, and we need to show
\[ S_1[\kw{force} \kw{thunk} M[\gamma_1]] \ix\trianglelefteq j \text{result}(S_2[\kw{force} \kw{thunk} M'[\gamma_2]]) \]
Then each side reduces in a $0$-cost step and it is sufficient to show
\[ S_1[M[\gamma_1]] \ix\trianglelefteq j \text{result}(S_2[M'[\gamma_2]]) \]
which follows by downward-closure for terms and substitutions.
\item $\inferrule {\Gamma \vDash V \ilr i V' \in U \u B} {\Gamma
\vDash \kw{force} V \ilr i \kw{force} V' \in \u B}$. \\ We need to show
$S_1[\kw{force} V[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[\kw{force}
V'[\gamma_2]])$, which follows by the definition of $V[\gamma_1]
\itylr i {U \u B} V'[\gamma_2]$.
\item $\inferrule
{\Gamma \vDash V \ilr i V' \in A}
{\Gamma \vDash \kw{ret} V \ilr i \kw{ret} V' \in \u F A}$\\
We need to show $S_1[\kw{ret} V[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[\kw{ret}
V'[\gamma_2]])$, which follows by the orthogonality definition of
$S_1 \itylr i {\u F A} S_2$.
\item $\inferrule
{\Gamma \vDash M \ilr i M' \in \u F A\and
\Gamma, x: A \vDash N \ilr i N' \in \u B}
{\Gamma \vDash \bindXtoYinZ M x N \ilr i \bindXtoYinZ {M'} {x} {N'} \in \u B}$.
We need to show $S_1[\bindXtoYinZ {M[\gamma_1]} x {N[\gamma_1]}] \ix\trianglelefteq i \text{result}(S_2[\bindXtoYinZ {M'[\gamma_2]} {x} {N'[\gamma_2]}])$.
By $M \ilr i M' \in \u F A$, it is sufficient to show that
\[ S_1[\bindXtoYinZ \bullet x {N[\gamma_1]}] \itylr i {\u F A} S_2[\bindXtoYinZ \bullet {x} {N'[\gamma_2]}]\]
So let $j \leq i$ and $V \itylr j A V'$; then we need to show
\[ S_1[\bindXtoYinZ {\kw{ret} V} x {N[\gamma_1]}] \ix\trianglelefteq j \text{result}(S_2[\bindXtoYinZ {\kw{ret} V'} {x} {N'[\gamma_2]}]) \]
By anti-reduction, it is sufficient to show
\[ S_1[N[\gamma_1,V/x]] \ix\trianglelefteq j \text{result}(S_2[N'[\gamma_2,V'/x]]) \]
which follows by downward closure for $\gamma_1 \itylr i {\Gamma} \gamma_2$ and $N \ilr i N'$.
\item $\inferrule
{\Gamma, x: A \vDash M \ilr i M' \in \u B}
{\Gamma \vDash \lambda x : A . M \ilr i \lambda x:A. M' \in A \to \u B}$
We need to show
\[S_1[\lambda x:A. M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[\lambda x:A.M'[\gamma_2]]).\]
By $S_1 \itylr i {A \to \u B} S_2$, we know $S_1 = S_1'[\bullet V_1]$, $S_2 = S_2'[\bullet V_2]$ with $S_1' \itylr i {\u B} S_2'$ and $V_1 \itylr i {A} V_2$.
Then by anti-reduction it is sufficient to show
\[
S_1'[M[\gamma_1,V_1/x]] \ix\trianglelefteq i \text{result}(S_2'[M'[\gamma_2,V_2/x]])
\]
which follows by $M \ilr i M'$.
\item $\inferrule
{\Gamma \vDash M \ilr i M' \in A \to \u B\and
\Gamma \vDash V \ilr i V' \in A}
{\Gamma \vDash M\,V \ilr i M'\,V' \in \u B }$
We need to show
\[S_1[M[\gamma_1]\,V[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M'[\gamma_2]\,V'[\gamma_2]])\] so by $M \ilr i M'$ it is sufficient to show $S_1[\bullet V[\gamma_1]] \itylr i {A \to \u B} S_2[\bullet V'[\gamma_2]]$ which follows by definition and assumption that $V \ilr i V'$.
\item $\inferrule{}{\Gamma \vDash \{\} \ilr i \{\} \in \top}$ We assume we are
given $S_1 \itylr i {\top} S_2$, but this is a contradiction.
\item $\inferrule
{\Gamma \vDash M_1 \ilr i M_1' \in \u B_1\and
\Gamma \vDash M_2 \ilr i M_2' \in \u B_2}
{\Gamma \vDash \pair {M_1} {M_2} \ilr i \pair {M_1'} {M_2'} \in \u B_1 \mathbin{\&} \u B_2}$
We need to show
\[S_1[\pair{M_1[\gamma_1]}{M_2[\gamma_1]}] \ix\trianglelefteq i \text{result}(S_2[\pair{M_1'[\gamma_2]}{M_2'[\gamma_2]}]).\]
We proceed by case analysis of $S_1 \itylr i {\u B_1 \mathbin{\&} \u B_2} S_2$
\begin{enumerate}
\item In the first possibility $S_1 = S_{1}'[\pi \bullet], S_2 =
S_2'[\pi \bullet]$ and $S_1' \itylr i {\u B_1} S_2'$.
Then by anti-reduction, it is sufficient to show
\[ S_1'[M_1[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2'[M_1'[\gamma_2]]) \]
which follows by $M_1 \ilr i M_1'$.
\item Same as previous case.
\end{enumerate}
\item $\inferrule
{\Gamma \vDash M \ilr i M' \in \u B_1 \mathbin{\&} \u B_2}
{\Gamma \vDash \pi M \ilr i \pi M' \in \u B_1}$
We need to show $S_1[\pi M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[\pi
M'[\gamma_2]])$, which follows by $S_1[\pi \bullet] \itylr i {\u
B_1 \mathbin{\&} \u B_2} S_2[\pi \bullet]$ and $M \ilr i M'$.
\item $\inferrule {\Gamma \vDash M \ilr i M' \in \u B_1 \mathbin{\&} \u
B_2} {\Gamma \vDash \pi' M \ilr i \pi' M' \in \u B_2}$ Similar
to previous case.
\item $\inferrule
{\Gamma \vDash M \ilr i M' \in \u B[{\nu \u Y. \u B}/\u Y]}
{\Gamma \vDash \rollty{\nu \u Y. \u B} M \ilr i \rollty{\nu \u Y. \u B} M' \in {\nu \u Y. \u B}}$
We need to show that
\[ S_1[ \rollty{\nu \u Y. \u B} M[\gamma_1]]
\ix\trianglelefteq i \text{result}(S_2[ \rollty{\nu \u Y. \u B} M'[\gamma_2]]) \]
If $i = 0$, we invoke triviality at $0$.
Otherwise, $i = j + 1$ and we know by $S_1 \itylr {j+1} {\nu \u Y. \u B} S_2$ that
$S_1 = S_1'[\kw{unroll} \bullet]$ and $S_2 = S_2'[\kw{unroll} \bullet]$ with $S_1' \itylr j {\u B[{\nu \u Y. \u B}/\u Y]} S_2'$, so by anti-reduction it is sufficient to show
\[ S_1'[ M[\gamma_1]] \ix\trianglelefteq j \text{result}(S_2'[ M'[\gamma_2]]) \]
which follows by $M \ilr i M'$ and downward-closure.
\item $\inferrule
{\Gamma \vDash M \ilr i M' \in {\nu \u Y. \u B}}
{\Gamma \vDash \kw{unroll} M \ilr i \kw{unroll} M' \in \u B[{\nu \u Y. \u B}/\u Y]}$
We need to show
\[S_1[\kw{unroll} M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[\kw{unroll} M'[\gamma_2]]),\] which
follows because $S_1[\kw{unroll} \bullet] \itylr i {\nu \u Y. \u B}
S_2[\kw{unroll} \bullet]$ and $M \ilr i M'$.
\end{enumerate}
\end{longproof}
\begin{longonly}
As a direct consequence, we obtain reflexivity of the relation.
\begin{corollary}[Reflexivity]
For any $\Gamma \vdash M : \u B$, and $i \in \mathbb{N}$,
\(\Gamma \vDash M \ilrof\trianglelefteq i M \in \u B.\)
\end{corollary}
\end{longonly}
\begin{shortonly}
This in particular implies that the relation is reflexive ($\Gamma
\vDash M \ilrof\trianglelefteq i M \in \u B$ for all well-typed $M$),
\end{shortonly}
so we
have the following \emph{strengthening} of the progress-and-preservation
type soundness theorem: because $\ix\trianglelefteq i$ only counts unrolling
steps, terms that never use $\mu$ or $\nu$ types (for example) are
guaranteed to terminate.
\begin{corollary}[Unary LR]
For every program $\cdot \vdash M : \u F 2$ and $i \in \mathbb{N}$,
$M \ix\trianglelefteq i \text{result}(M)$.
\end{corollary}
\begin{longproof}
By reflexivity, $\cdot \vDash M \ilrof\trianglelefteq i M \in \u F 2$ and by
definition $\bullet \itylrof\trianglelefteq i {\u F 2} \bullet$, so
unrolling definitions we get $M \ix\trianglelefteq i \text{result}(M)$.
\end{longproof}
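The unary corollary can be sketched in an informal executable model (a hypothetical encoding, not the paper's semantics): a program is a function from a step budget to `'running'` or `('done', r)`, `result(M)` is computed with a large budget ($\Omega$ if still running), and every program is related to its own result at every index.

```python
# Informal sketch of the unary corollary: M ⊴_i result(M) for every i
# (hypothetical encoding of programs as budget-indexed runs).

def returns(r, after):
    return lambda i: ('done', r) if i >= after else 'running'

def diverge(i):
    return 'running'

def approx(M, R, i):
    """Still running after i steps, or terminated with R."""
    st = M(i)
    return st == 'running' or st[1] == R

def result(M, fuel=1000):
    """Approximate result(M): Omega if M is still running after a large budget."""
    st = M(fuel)
    return 'Omega' if st == 'running' else st[1]

def unary_lr(M, bound=50):
    """Check M ⊴_i result(M) on a finite prefix of indices."""
    R = result(M)
    return all(approx(M, R, i) for i in range(bound))
```

A terminating program is related to its result because, at each index, it is either still running or has produced exactly that result; a diverging program is related to $\Omega$ vacuously.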
\noindent Using reflexivity, we prove that the indexed relation between terms and
results recovers the original preorder in the limit as $i \to \omega$.
We write $\ix\trianglelefteq \omega$ to mean the relation holds for every
$i$, i.e., $\ix\trianglelefteq\omega =
\bigcap_{i\in\mathbb{N}} \ix\trianglelefteq i$.
\begin{corollary}[Limit Lemma]
\label{lem:limit}
For any divergence preorder $\trianglelefteq$, \( \text{result}(M) \trianglelefteq
R\) iff \( M \ix\trianglelefteq \omega R \).
\end{corollary}
\begin{longproof}
Two cases
\begin{enumerate}
\item If $\text{result}(M) \trianglelefteq R$ then we need to show for every $i
\in \mathbb{N}$, $M \ix \trianglelefteq i R$. By the unary model lemma,
$M \ix\trianglelefteq i \text{result}(M)$, so the result follows by the
module lemma \ref{lem:module}.
\item If $M \ix\trianglelefteq i R$ for every $i$, then there are two
possibilities: either $M$ is still running after $i$ steps for every
$i$, or at some point $M$ terminates.
\begin{enumerate}
\item If $M \bigstepsin{i} M_i$ for every $i \in \mathbb{N}$, then
$\text{result}(M) = \Omega$, so $\text{result}(M) \trianglelefteq R$ because
$\trianglelefteq$ is a divergence preorder.
\item Otherwise there exists some $i \in \mathbb{N}$ such that $M
\bigstepsin{i} \text{result}(M)$, so it follows by the module lemma
\ref{lem:module}.
\end{enumerate}
\end{enumerate}
\end{longproof}
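Both directions of the limit lemma can be sketched in a small executable model (a hypothetical encoding of programs as functions from a step budget to `'running'` or a final result, with $\Omega$ below every result in the divergence preorder):

```python
# Informal sketch of the limit lemma: result(M) ⊴ R iff M ⊴_i R for all i.
# (Hypothetical encoding; ⊴_ω is checked on a finite prefix of indices.)

def returns(r, after):
    return lambda i: ('done', r) if i >= after else 'running'

def diverge(i):
    return 'running'

def below(r1, r2):
    """A divergence preorder on results: Omega below everything, else equality."""
    return r1 == 'Omega' or r1 == r2

def approx(M, R, i):
    st = M(i)
    return st == 'running' or below(st[1], R)

def approx_omega(M, R, bound=100):
    """⊴_ω as the intersection of the ⊴_i."""
    return all(approx(M, R, i) for i in range(bound))

def result(M, fuel=1000):
    st = M(fuel)
    return 'Omega' if st == 'running' else st[1]
```

In this model `approx_omega(M, R)` agrees with `below(result(M), R)` for programs that either terminate within the budget or never terminate, which is exactly the case split in the proof: a diverging program has result $\Omega$ and is related to everything, while a terminating program is eventually compared against its actual result.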
\begin{corollary}[Logical implies Contextual] \label{lem:logical-implies-contextual}
If $\Gamma \vDash E \ilrof\trianglelefteq \omega E' \in \u B$
then
$\Gamma \vDash E \ctxize\trianglelefteq E' \in \u B$.
\end{corollary}
\begin{proof}
Let $C$ be a closing context. By congruence, $C[E] \ilrof\trianglelefteq
\omega C[E']$, so using the empty environment and stack, $C[E]
\ix\trianglelefteq\omega \text{result}(C[E'])$, and by the limit lemma, we have
$\text{result}(C[E]) \trianglelefteq \text{result}(C[E'])$.
\end{proof}
\begin{longonly}
In fact, we can prove the converse: at least for the term case,
the logical preorder is \emph{complete} with respect to the contextual
preorder, although we do not use this fact.
\begin{lemma}[Contextual implies Logical]
For any $\trianglelefteq$, if $\Gamma \vDash M \ctxize \trianglelefteq N \in
\u B$, then $\Gamma \vDash M \ilrof\trianglelefteq \omega N \in \u B$.
\end{lemma}
\begin{longproof}
Let $S_1 \itylr i {\u B} S_2$ and $\gamma_1 \itylr i \Gamma \gamma_2$. We need to show that
\[
S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[N[\gamma_2]])
\]
So we need to construct a \emph{context} that when $M$ or $N$ is
plugged into the hole will reduce to the above.
To do this, first, we deconstruct the context
$x_1:A_1,\ldots,x_n:A_n = \Gamma$. Then we define $\cdot \vdash M'
: A_1\to \cdots \to A_n \to \u B$ as
\[ \lambda x_1:A_1.\ldots\lambda x_n:A_n. M \]
And similarly define $N'$. Then clearly
\[ S[M' \,V_1\, \cdots V_n] \bigstepsin{0} S[M[V_1/x_1,\ldots,V_n/x_n]] \]
so in particular
\[ S[M'\,\gamma(x_1)\cdots\gamma(x_n)] \bigstepsin{0} S[M[\gamma]]\]
and similarly for $N'$ if $x_1,\ldots,x_n$ are all of the variables
in $\gamma$.
Then the proof proceeds by the following transitivity chain:
\begin{align*}
S_1[M[\gamma_1]] &\ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]])\tag{$M \ilr i M$}\\
&=\text{result}(S_2[M'\,\gamma_2(x_1)\,\cdots\,\gamma_2(x_n)])\tag{reduction}\\
&\trianglelefteq \text{result}(S_2[N'\,\gamma_2(x_1)\,\cdots\,\gamma_2(x_n)])\tag{$M \ctxize\trianglelefteq N$}\\
&= \text{result}(S_2[N[\gamma_2]])\tag{reduction}
\end{align*}
So $S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[N[\gamma_2]])$ by
the module lemma \ref{lem:module}.
\end{longproof}
\end{longonly}
This establishes that our logical relation can prove graduality, so it
only remains to show that our \emph{inequational theory} implies our
logical relation.
Having already validated the congruence rules and reflexivity, we
validate the remaining rules of transitivity, error, substitution, and
$\beta\eta$ for each type constructor.
Other than the $\mho \sqsubseteq M$ rule, all of these hold for any
divergence preorder.
For transitivity, with the unary model and limiting lemmas in hand, we
can prove that all of our logical relations (open and closed) are
transitive in the limit. To do this, we first prove the following kind
of ``quantitative'' transitivity lemma, and then transitivity in the
limit is a consequence.
\begin{lemma}[Logical Relation is Quantitatively Transitive] \hfill
\iflong
\begin{enumerate}
\item
\fi
If $V_1 \itylr i A V_2$ and $V_2 \itylr
\omega A V_3$, then $V_1 \itylr i A V_3$\ifshort, and analogously
for stacks. \fi
\iflong
\item If $S_1 \itylr i {\u B} S_2$ and $S_2 \itylr
\omega {\u B} S_3$, then $S_1 \itylr i {\u B} S_3$
\end{enumerate}
\fi
\end{lemma}
\begin{longproof}
Proof is by mutual lexicographic induction on the pair $(i, A)$ or
$(i, \u B)$. All cases are straightforward uses of the inductive
hypotheses except the shifts $U, \u F$.
\begin{enumerate}
\item If $V_1 \itylr i {U \u B} V_2$ and $V_2
\itylr \omega {U \u B} V_3$, then we need to show that
for any $S_1 \itylr j {\u B} S_2$ with $j \leq i$,
\[ S_1[\kw{force} V_1] \ix\trianglelefteq j \text{result}(S_2[\kw{force} V_3]) \]
By reflexivity, we know $S_2 \itylr \omega {\u B} S_2$, so by assumption
\[ S_2[\kw{force} V_2] \ix\trianglelefteq \omega \text{result}(S_2[\kw{force} V_3])\]
which by the limiting lemma \ref{lem:limit} is equivalent to
\[ \text{result}(S_2[\kw{force} V_2]) \trianglelefteq \text{result}(S_2[\kw{force} V_3]) \]
so then by the module lemma \ref{lem:module}, it is sufficient to show
\[ S_1[\kw{force} V_1] \ix\trianglelefteq j \text{result}(S_2[\kw{force} V_2]) \]
which holds by assumption.
\item If $S_1 \itylr i {\u F A} S_2$ and $S_2 \itylr \omega {\u F A}
S_3$, then we need to show that for any $V_1 \itylr j A V_2$ with $j \leq i$ that
\[ S_1[\kw{ret} V_1] \ix\trianglelefteq j \text{result}(S_3[\kw{ret} V_2])\]
First by reflexivity, we know $V_2 \itylr \omega A V_2$, so by assumption,
\[ S_2[\kw{ret} V_2] \ix\trianglelefteq \omega \text{result}(S_3[\kw{ret} V_2]) \]
Which by the limit lemma \ref{lem:limit} is equivalent to
\[ \text{result}(S_2[\kw{ret} V_2]) \trianglelefteq \text{result}(S_3[\kw{ret} V_2]) \]
So by the module lemma \ref{lem:module}, it is sufficient to show
\[ S_1[\kw{ret} V_1] \ix\trianglelefteq j \text{result}(S_2[\kw{ret} V_2]) \]
which holds by assumption.
\end{enumerate}
\end{longproof}
\iflong
\begin{lemma}[Logical Relation is Quantitatively Transitive (Open Terms)]\hfill
\begin{enumerate}
\item If $\gamma_1 \itylr i \Gamma \gamma_2$ and $\gamma_2 \itylr
\omega \Gamma \gamma_3$, then $\gamma_1 \itylr i \Gamma \gamma_3$
\item If $\Gamma \vDash M_1 \ilr i M_2 \in \u B$ and
$\Gamma \vDash M_2 \ilr \omega M_3 \in \u B$, then
$\Gamma \vDash M_1 \ilr i M_3 \in \u B$.
\item If $\Gamma \vDash V_1 \ilr i V_2 \in A$ and
$\Gamma \vDash V_2 \ilr \omega V_3 \in A$, then
$\Gamma \vDash V_1 \ilr i V_3 \in A$.
\item If $\Gamma \,\,|\,\, \bullet : \u B \vDash S_1 \ilr i S_2 \in \u B'$ and
$\Gamma\,\,|\,\, \bullet : \u B \vDash S_2 \ilr \omega S_3 \in \u B'$, then
$\Gamma\,\,|\,\, \bullet : \u B \vDash S_1 \ilr i S_3 \in \u B'$.
\end{enumerate}
\end{lemma}
\begin{longproof}
\begin{enumerate}
\item By induction on the length of the context, follows from closed value case.
\item Assume $\gamma_1 \itylr i \Gamma \gamma_2$ and $S_1 \itylr i {\u B} S_2$.
We need to show
\[ S_1[M_1[\gamma_1]] \ix\trianglelefteq{i} \text{result}(S_2[M_3[\gamma_2]]) \]
By reflexivity and assumption, we know
\[ S_2[M_2[\gamma_2]] \ix\trianglelefteq \omega \text{result}(S_2[M_3[\gamma_2]])\]
and by limit lemma \ref{lem:limit}, this is equivalent to
\[ \text{result}(S_2[M_2[\gamma_2]]) \trianglelefteq \text{result}(S_2[M_3[\gamma_2]])\]
so by the module lemma \ref{lem:module} it is sufficient to show
\[ S_1[M_1[\gamma_1]] \ix\trianglelefteq{i} \text{result}(S_2[M_2[\gamma_2]]) \]
which follows by assumption.
\item Assume $\gamma_1 \itylr i \Gamma \gamma_2$. Then
$V_1[\gamma_1] \itylr i A V_2[\gamma_2]$ and by reflexivity
$\gamma_2 \itylr \omega \Gamma \gamma_2$ so $V_2[\gamma_2] \itylr
\omega A V_3[\gamma_2]$ so the result holds by the closed case.
\item Stack case is essentially the same as the value case.
\end{enumerate}
\end{longproof}
\fi
\begin{corollary}[Logical Relation is Transitive in the Limit]
\begin{shortonly}
$\ilrof\trianglelefteq \omega$ is transitive.
\end{shortonly}
\begin{longonly}
\hfill
\begin{enumerate}
\item If $\Gamma \vDash M_1 \ilrof\trianglelefteq \omega M_2 \in \u B$ and
$\Gamma \vDash M_2 \ilrof\trianglelefteq \omega M_3 \in \u B$, then
$\Gamma \vDash M_1 \ilrof\trianglelefteq \omega M_3 \in \u B$.
\item If $\Gamma \vDash V_1 \ilrof\trianglelefteq \omega V_2 \in A$ and
$\Gamma \vDash V_2 \ilrof\trianglelefteq \omega V_3 \in A$, then
$\Gamma \vDash V_1 \ilrof\trianglelefteq \omega V_3 \in A$.
\item If $\Gamma \,\,|\,\, \bullet : \u B \vDash S_1 \ilrof\trianglelefteq \omega S_2 \in \u B'$ and
$\Gamma\,\,|\,\, \bullet : \u B \vDash S_2 \ilrof\trianglelefteq \omega S_3 \in \u B'$, then
$\Gamma\,\,|\,\, \bullet : \u B \vDash S_1 \ilrof\trianglelefteq \omega S_3 \in \u B'$.
\end{enumerate}
\end{longonly}
\end{corollary}
\iflong
Next, we verify the $\beta, \eta$ equivalences hold as orderings each
way.
\begin{lemma}[$\beta, \eta$]
For any divergence preorder, the $\beta, \eta$
laws are valid for $\ilrof\trianglelefteq \omega$
\end{lemma}
\begin{longproof}
The $\beta$ rules for all cases except recursive types are direct
from anti-reduction.
\begin{enumerate}
\item $\mu X.A-\beta$:
\begin{enumerate}
\item We need to show
\[ S_1[\pmmuXtoYinZ {\rollty{\mu X.A} V[\gamma_1]} x M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2,V[\gamma_2]/x]]) \]
The left side takes $1$ step to $S_1[M[\gamma_1,V[\gamma_1]/x]]$ and we know
\[ S_1[M[\gamma_1,V[\gamma_1]/x]] \ix\trianglelefteq i \text{result} (S_2[M[\gamma_2,V[\gamma_2]/x]]) \]
by assumption and reflexivity, so by anti-reduction we have
\[ S_1[\pmmuXtoYinZ {\rollty{\mu X.A} V[\gamma_1]} x M[\gamma_1]] \ix\trianglelefteq {i+1} \text{result}(S_2[M[\gamma_2,V[\gamma_2]/x]]) \]
so the result follows by downward-closure.
\item For the other direction we need to show
\[ S_1[M[\gamma_1,V[\gamma_1]/x]] \ix\trianglelefteq i \text{result}(S_2[\pmmuXtoYinZ {\rollty{\mu X.A} V[\gamma_2]} x M[\gamma_2]]) \]
Since results are invariant under steps, this is the same as
\[ S_1[M[\gamma_1,V[\gamma_1]/x]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2,V[\gamma_2]/x]]) \]
which follows by reflexivity and assumptions about the stacks
and substitutions.
\end{enumerate}
\item $\mu X.A-\eta$:
\begin{enumerate}
\item We need to show for any $\Gamma, x : \mu X. A \vdash M : \u B$,
and appropriate substitutions and stacks,
\[ S_1[\pmmuXtoYinZ {\rollty{\mu X.A} {\gamma_1(x)}} {y} M[\rollty{\mu X.A}y/x][\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]]) \]
By assumption, $\gamma_1(x) \itylr i {\mu X.A} \gamma_2(x)$, so we know
\[ \gamma_1(x) = \rollty{\mu X.A} V_1 \]
and
\[ \gamma_2(x) = \rollty{\mu X.A} V_2 \]
so the left side takes a step:
\begin{align*}
S_1[\pmmuXtoYinZ {\kw{roll} {\gamma_1(x)}} {y} M[\kw{roll} y/x][\gamma_1]]
&\bigstepsin{1} S_1[M[\kw{roll} y/x][\gamma_1][V_1/y]]\\
&= S_1[M[\kw{roll} V_1/x][\gamma_1]]\\
& = S_1[M[\gamma_1]]
\end{align*}
and by reflexivity and assumptions we know
\[ S_1[M[\gamma_1]] \ix\trianglelefteq {i} \text{result}(S_2[M[\gamma_2]]) \]
so by anti-reduction we know
\[ S_1[\pmmuXtoYinZ {\rollty{\mu X.A} {\gamma_1(x)}} {y} M[\rollty{\mu X.A}y/x][\gamma_1]] \ix\trianglelefteq {i+1} \text{result}(S_2[M[\gamma_2]]) \]
so the result follows by downward closure.
\item Similarly, to show
\[ S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[\pmmuXtoYinZ {\rollty{\mu X.A} {\gamma_2(x)}} {y} M[\rollty{\mu X.A}y/x][\gamma_2]]) \]
by the same reasoning as above, $\gamma_2(x) = \rollty{\mu X.A}V_2$, so because result is invariant under reduction we need to show
\[ S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]]) \]
which follows by assumption and reflexivity.
\end{enumerate}
\item $\nu \u Y. \u B-\beta$
\begin{enumerate}
\item We need to show
\[ S_1[\kw{unroll} \rollty{\nu \u Y. \u B} M[\gamma_1]] \ix\trianglelefteq i
\text{result}(S_2[M[\gamma_2]]) \]
By the operational semantics,
\[ S_1[\kw{unroll} \rollty{\nu \u Y. \u B} M[\gamma_1]] \bigstepsin{1} S_1[M[\gamma_1]] \]
and by reflexivity and assumptions
\[ S_1[M[\gamma_1]] \ix\trianglelefteq {i} \text{result}(S_2[M[\gamma_2]]) \]
so the result follows by anti-reduction and downward closure.
\item We need to show
\[ S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[\kw{unroll} \rollty{\nu \u Y. \u B} M[\gamma_2]]) \]
By the operational semantics and invariance of result under reduction this is equivalent to
\[ S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]]) \]
which follows by assumption.
\end{enumerate}
\item $\nu \u Y. \u B-\eta$
\begin{enumerate}
\item We need to show
\[ S_1[\kw{roll} \kw{unroll} M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]]) \]
by assumption, $S_1 \itylr i {\nu \u Y.\u B} S_2$, so
\[ S_1 = S_1'[\kw{unroll} \bullet] \]
and therefore the left side reduces:
\begin{align*}
S_1[\kw{roll} \kw{unroll} M[\gamma_1]]
&= S_1'[\kw{unroll}\kw{roll}\kw{unroll} M[\gamma_1]]\\
&\bigstepsin{1} S_1'[\kw{unroll} M[\gamma_1]]\\
&= S_1[M[\gamma_1]]
\end{align*}
and by assumption and reflexivity,
\[ S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]]) \]
so the result holds by anti-reduction and downward-closure.
\item Similarly, we need to show
\[ S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[\kw{roll}\kw{unroll} M[\gamma_2]])\]
as above, $S_1 \itylr i {\nu \u Y.\u B} S_2$, so we know
\[ S_2 = S_2'[\kw{unroll}\bullet] \]
so
\[ \text{result}(S_2[\kw{roll}\kw{unroll} M[\gamma_2]]) = \text{result}(S_2[M[\gamma_2]])\]
and the result follows by reflexivity, anti-reduction and downward closure.
\end{enumerate}
\item $0\eta$ Let $\Gamma, x : 0 \vdash M : \u B$.
\begin{enumerate}
\item We need to show
\[ S_1[\kw{abort}\, \gamma_1(x)] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]])\]
By assumption $\gamma_1(x) \itylr i 0 \gamma_2(x)$, but this is a contradiction.
\item Other direction is the same contradiction.
\end{enumerate}
\item $+\eta$. Let $\Gamma , x:A_1 + A_2 \vdash M : \u B$
\begin{enumerate}
\item We need to show
\[ S_1[\caseofXthenYelseZ {\gamma_1(x)} {x_1. M[\kw{inl} x_1/x][\gamma_1]}{x_2. M[\kw{inr} x_2/x][\gamma_1]}]
\ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]]) \] by assumption
$\gamma_1(x) \itylr i {A_1 + A_2} \gamma_2(x)$, so it is either
an $\kw{inl}$ or an $\kw{inr}$. The cases are symmetric, so assume
$\gamma_1(x) = \kw{inl} V_1$.
Then
\begin{align*}
S_1[\caseofXthenYelseZ {\gamma_1(x)} {x_1. M[\kw{inl} x_1/x][\gamma_1]}{x_2. M[\kw{inr} x_2/x][\gamma_1]}]\\
=S_1[\caseofXthenYelseZ {(\kw{inl} V_1)} {x_1. M[\kw{inl} x_1/x][\gamma_1]}{x_2. M[\kw{inr} x_2/x][\gamma_1]}]\\
\bigstepsin{0} S_1[M[\kw{inl} V_1/x][\gamma_1]]\\
= S_1[M[\gamma_1]]
\end{align*}
and so by anti-reduction it is sufficient to show
\[ S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]])\]
which follows by reflexivity and assumptions.
\item Similarly, we need to show
\[
S_1[M[\gamma_1]]
\ix\trianglelefteq i
\text{result}(S_2[\caseofXthenYelseZ {\gamma_2(x)} {x_1. M[\kw{inl} x_1/x][\gamma_2]}{x_2. M[\kw{inr} x_2/x][\gamma_2]}])
\]
and by assumption $\gamma_1(x) \itylr i {A_1 + A_2}
\gamma_2(x)$, so it is either an $\kw{inl}$ or an $\kw{inr}$. The cases are
symmetric, so assume $\gamma_2(x) = \kw{inl} V_2$.
Then
\[ S_2[\caseofXthenYelseZ {\gamma_2(x)} {x_1. M[\kw{inl} x_1/x][\gamma_2]}{x_2. M[\kw{inr} x_2/x][\gamma_2]}] \bigstepsin{0}
S_2[M[\gamma_2]]
\]
So the result holds by invariance of result under reduction,
reflexivity and assumptions.
\end{enumerate}
\item $1\eta$ Let $\Gamma, x : 1 \vdash M : \u B$
\begin{enumerate}
\item We need to show
\[ S_1[M[()/x][\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]])\]
By assumption $\gamma_1(x) \itylr i 1 \gamma_2(x)$ so $\gamma_1(x) = ()$, so this is equivalent to
\[ S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]])\]
which follows by reflexivity, assumption.
\item Opposite case is similar.
\end{enumerate}
\item $\times\eta$ Let $\Gamma, x : A_1\times A_2 \vdash M : \u B$
\begin{enumerate}
\item We need to show
\[ S_1[\pmpairWtoXYinZ x {x_1}{y_1} M[(x_1,y_1)/x][\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]]) \]
By assumption $\gamma_1(x) \itylr i {A_1\times A_2} \gamma_2(x)$, so $\gamma_1(x) = (V_1,V_2)$, so
\begin{align*}
S_1[\pmpairWtoXYinZ x {x_1}{y_1} M[(x_1,y_1)/x][\gamma_1]]
&= S_1[\pmpairWtoXYinZ {(V_1,V_2)} {x_1}{y_1} M[(x_1,y_1)/x][\gamma_1]]\\
&\bigstepsin{0} S_1[M[(V_1,V_2)/x][\gamma_1]]\\
&= S_1[M[\gamma_1]]
\end{align*}
So by anti-reduction it is sufficient to show
\[ S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]]) \]
which follows by reflexivity, assumption.
\item Opposite case is similar.
\end{enumerate}
\item $U\eta$ Let $\Gamma \vdash V : U \u B$
\begin{enumerate}
\item We need to show that
\[ \kw{thunk}\kw{force} V[\gamma_1] \itylr i {U \u B} V[\gamma_2] \]
So assume $S_1 \itylr j {\u B} S_2$ for some $j\leq i$, then we need to show
\[ S_1[\kw{force} \kw{thunk}\kw{force} V[\gamma_1]] \ix\trianglelefteq j \text{result}(S_2[\kw{force} V[\gamma_2]])\]
The left side takes a step:
\[ S_1[\kw{force} \kw{thunk}\kw{force} V[\gamma_1]] \bigstepsin{0} S_1[\kw{force} V[\gamma_1]] \]
so by anti-reduction it is sufficient to show
\[ S_1[\kw{force} V[\gamma_1]] \ix\trianglelefteq j \text{result}(S_2[\kw{force} V[\gamma_2]]) \]
which follows by assumption.
\item Opposite case is similar.
\end{enumerate}
\item $F\eta$
\begin{enumerate}
\item We need to show that given $S_1 \itylr i {\u F A} S_2$,
\[ S_1[\bindXtoYinZ \bullet x \kw{ret} x] \itylr i {\u F A} S_2 \]
So assume $V_1 \itylr j A V_2$ for some $j\leq i$; then we need to show
\[ S_1[\bindXtoYinZ {\kw{ret} V_1} x {\kw{ret} x}] \ix\trianglelefteq j \text{result}(S_2[\kw{ret} V_2])
\]
The left side takes a step:
\[ S_1[\bindXtoYinZ {\kw{ret} V_1} x {\kw{ret} x}] \bigstepsin{0} S_1[\kw{ret} V_1]\]
so by anti-reduction it is sufficient to show
\[ S_1[\kw{ret} V_1] \ix\trianglelefteq j \text{result}(S_2[\kw{ret} V_2])\]
which follows by assumption.
\item Opposite case is similar.
\end{enumerate}
\item $\to\eta$ Let $\Gamma \vdash M : A \to \u B$
\begin{enumerate}
\item We need to show
\[ S_1[(\lambda x:A. M[\gamma_1]\, x)] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]])
\]
by assumption that $S_1 \itylr i {A \to \u B} S_2$, we know
\[ S_1 = S_1'[\bullet\, V_1]\]
so the left side takes a step:
\begin{align*}
S_1[(\lambda x:A. M[\gamma_1]\, x)]
&= S_1'[(\lambda x:A. M[\gamma_1]\, x)\, V_1]\\
&\bigstepsin{0} S_1'[M[\gamma_1]\, V_1]\\
&= S_1[M[\gamma_1]]
\end{align*}
So by anti-reduction it is sufficient to show
\[ S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]])\]
which follows by reflexivity, assumption.
\item Opposite case is similar.
\end{enumerate}
\item $\mathbin{\&}\eta$ Let $\Gamma \vdash M : \u B_1 \mathbin{\&} \u B_2$
\begin{enumerate}
\item We need to show
\[ S_1[\pair{\pi M[\gamma_1]}{\pi' M[\gamma_1]}] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]]) \]
by assumption, $S_1 \itylr i {\u B_1 \mathbin{\&} \u B_2} S_2$ so
either it starts with a $\pi$ or $\pi'$ so assume that $S_1 =
S_1'[\pi \bullet]$ ($\pi'$ case is similar).
Then the left side reduces
\begin{align*}
S_1[\pair{\pi M[\gamma_1]}{\pi' M[\gamma_1]}]
&= S_1'[\pi\pair{\pi M[\gamma_1]}{\pi' M[\gamma_1]}]\\
&\bigstepsin{0} S_1'[\pi M[\gamma_1]]\\
&= S_1[M[\gamma_1]]
\end{align*}
So by anti-reduction it is sufficient to show
\[ S_1[M[\gamma_1]] \ix\trianglelefteq i \text{result}(S_2[M[\gamma_2]]) \]
which follows by reflexivity, assumption.
\item Opposite case is similar.
\end{enumerate}
\item $\top\eta$ Let $\Gamma \vdash M : \top$
\begin{enumerate}
\item In either case, we assume we are given $S_1 \itylr i \top
S_2$, but this is a contradiction.
\end{enumerate}
\end{enumerate}
\end{longproof}
\begin{lemma}[Substitution Principles]
For any divergence preorder $\trianglelefteq$, the following are
valid
\begin{enumerate}
\item $\inferrule{\Gamma \vDash V_1 \ilr i V_2 \in A
\and \Gamma, x : A \vDash V_1' \ilr i V_2' \in A'}{\Gamma \vDash V_1'[V_1/x] \ilr i V_2'[V_2/x] \in A'}$
\item $\inferrule{\Gamma \vDash V_1 \ilr i V_2 \in A
\and \Gamma, x : A \vDash M_1 \ilr i M_2 \in \u B}{\Gamma \vDash M_1[V_1/x] \ilr i M_2[V_2/x] \in \u B}$
\end{enumerate}
\end{lemma}
\begin{longproof}
We do the term case, the value case is similar. Given $\gamma_1
\itylr i \Gamma \gamma_2$, we have $V_1[\gamma_1] \itylr i A
V_2[\gamma_2]$ so
\[ \gamma_1,V_1[\gamma_1]/x \itylr i {\Gamma, x : A} \gamma_2, V_2[\gamma_2]/x \]
and by associativity of substitution
\[ M_1[V_1/x][\gamma_1] = M_1[\gamma_1,V_1[\gamma_1]/x] \]
and similarly for $M_2$, so if $S_1 \itylr i {\u B} S_2$ then
\[ S_1[M_1[\gamma_1,V_1[\gamma_1]/x]] \ix\trianglelefteq i \text{result}(S_2[M_2[\gamma_2,V_2[\gamma_2]/x]])\]
\end{longproof}
\fi
For errors, the strictness axioms hold for any $\trianglelefteq$, but the axiom that
$\mho$ is a least element is specific to the definitions of
$\mathrel{\preceq\ltdyn}$ and $\errordivergerightop$.
\begin{lemma}[Error Rules]
For any divergence preorder $\trianglelefteq$ and appropriately
typed $S, M$,
\begin{small}
\begin{mathpar}
S[\mho] \ilr \omega \mho \and
\mho \ilr \omega S[\mho] \and
\mho \ilrof\mathrel{\preceq\ltdyn} \omega M \and
M \ilrof{\errordivergerightop} \omega \mho
\end{mathpar}
\end{small}
\end{lemma}
\begin{longproof}
\begin{enumerate}
\item It is sufficient by the limit lemma to show $\text{result}(S[\mho])
\trianglelefteq \mho$ which holds by reflexivity because $S[\mho]
\bigstepsin{0} \mho$.
\item We need to show $S[\mho] \ix\mathrel{\preceq\ltdyn} i R$ for arbitrary $R$,
so by the limit lemma it is sufficient to show $\mho \mathrel{\preceq\ltdyn}
R$, which is true by definition.
\item By the limit lemma it is sufficient to show $R
\mathrel{\errordivergerightop} \mho$ which is true by definition.
\end{enumerate}
\end{longproof}
The lemmas we have proved cover all of the inequality rules of CBPV, so
applying them with $\trianglelefteq$ chosen to be $\errordivergeleft$ and
$\errordivergerightop$ gives
\begin{lemma}[$\mathrel{\preceq\ltdyn}$ and $\mathrel{\ltdyn\succeq}$ are Models of CBPV] \label{lem:errordivergeleftrightopmodels}
If $\Gamma \,\,|\,\, \Delta \vdash E \sqsubseteq E' : \u B$ then
$\Gamma \,\,|\,\, \Delta \vDash E \ix\mathrel{\preceq\ltdyn} \omega E' \in \u B$ and
$\Gamma \,\,|\,\, \Delta \vDash E' \ix{\mathrel{\preceq\sqsupseteq}} \omega E \in \u B$.
\end{lemma}
Because logical approximation implies contextual approximation, we can
conclude with the main theorem:
\begin{theorem}[Contextual Approximation/Equivalence Model CBPV] ~~\\
If $\Gamma \,\,|\,\, \Delta \vdash E \sqsubseteq E' : T$ then
$\Gamma \,\,|\,\, \Delta \vDash E \ctxize\sqsubseteq E' \in T$;
if
${\Gamma \,\,|\,\, \Delta \vdash E \mathrel{\gtdyn\ltdyn} E' : T}$ then
${\Gamma \,\,|\,\, \Delta \vDash E \ctxize= E' \in T}$.
\end{theorem}
\begin{longproof}
For the first part, from Lemma~\ref{lem:errordivergeleftrightopmodels},
we have $E \ix\mathrel{\preceq\ltdyn} \omega E'$ and $E' \ix{\mathrel{\preceq\sqsupseteq}}
\omega E$. By Lemma~\ref{lem:logical-implies-contextual}, we then have
$E \ctxize{\errordivergeleft} E'$ and $E' \ctxize{\errordivergerightop}
E$. Finally, by Corollary~\ref{cor:contextual-decomposition}, $E
\ctxize\sqsubseteq E' \text{ iff } E \ctxize{\errordivergeleft} E' \text{ and }
E (\ctxize{(\errordivergerightop)})^\circ E'$, so we have the result.
For the second part, applying the first part twice gives $E
\ctxize\sqsubseteq E'$ and $E' \ctxize\sqsubseteq E$, and we concluded in
Corollary~\ref{cor:contextual-decomposition} that this coincides with
contextual equivalence.
\end{longproof}
\section{Discussion and Related Work}
\label{sec:related}
In this paper, we have given a logic for reasoning about gradual
programs in a mixed call-by-value/call-by-name language, shown that
the axioms uniquely determine almost all of the contract translation
implementing runtime casts, and shown that the axiomatics is sound for
contextual equivalence/approximation in an operational model.
\iflong
\fi
In immediate future work, we believe it is straightforward to add
inductive/coinductive types and obtain similar unique cast
implementation theorems
(e.g. $\upcast{\mathtt{list}(A)}{\mathtt{list}(A')} \mathrel{\gtdyn\ltdyn}
\mathtt{map}\upcast{A}{A'}$). Additionally, since more efficient cast
implementations, such as optimized cast calculi (the lazy variant in
\citet{herman2010spaceefficient}) and threesome
casts~\cite{siekwadler10zigzag}, are equivalent to the lazy contract
semantics, they should also be models of GTT; if so, we could use GTT
to reason about program transformations and optimizations in them.
\iflong
\paragraph{Applicability of Cast Uniqueness Principles}
\fi
The cast uniqueness principles given in
Theorem~\ref{thm:functorial-casts} are theorems in the formal logic of
Gradual Type Theory, and so there is a question of which languages the
theorem applies to.
The theorem applies to any \emph{model} of gradual type theory, such
as the models we have constructed using call-by-push-value given in
Sections \ref{sec:contract}, \ref{sec:complex}, \ref{sec:operational}.
We conjecture that simple call-by-value and call-by-name gradual
languages are also models of GTT, by extending the translation of
call-by-push-value into call-by-value and call-by-name in the appendix
of Levy's monograph \cite{levy03cbpvbook}.
In order for the theorem to apply, the language must validate an
appropriate version of the $\eta$ principles for the types.
So for example, a call-by-value language that has reference equality
of functions does \emph{not} validate even the value-restricted $\eta$
law for functions, and so the case for functions does not apply.
It is a well-known issue that in the presence of pointer equality of
functions, the lazy semantics of function casts is not compatible with
the graduality property, and our uniqueness theorem provides a
different perspective on this phenomenon
\cite{findlerflattfelleisen04,chaperonesimpersonators, refined}.
However, we note that the cases of the uniqueness theorem for each
type connective are completely \emph{modular}: they rely only on the
specification of casts and the $\beta,\eta$ principles for the
particular connective, and not on the presence of any other types,
even the dynamic types.
So even if a call-by-value language has reference equality of
functions, if it validates the $\eta$ principle for strict pairs, then the
pair cast must be that of Theorem~\ref{thm:functorial-casts}.
Next, we consider the applicability to non-eager languages.
Analogous to call-by-value, our uniqueness principle should apply to
simple \emph{call-by-name} gradual languages, where full $\eta$
equality for functions is satisfied, but $\eta$ equality for booleans
and strict pairs requires a ``stack restriction'' dual to the value
restriction for call-by-value function $\eta$.
We are not aware of any call-by-name gradual languages, but there is
considerable work on \emph{contracts} for non-eager languages,
especially Haskell \cite{hinzeJeuringLoh06,XuPJC09}.
However, we note that Haskell is \emph{not} a call-by-name language in
our sense for two reasons.
First, Haskell uses call-by-need evaluation where results of
computations are memoized. However, when only considering Haskell's
effects (error and divergence), this difference is not observable, so
this is not the main obstacle.
The bigger difference between Haskell and call-by-name is that Haskell
supports a \texttt{seq} operation that enables the programmer to force
evaluation of a term to a value.
This means Haskell violates the function $\eta$ principle because
$\Omega$ will cause divergence under $\texttt{seq}$, whereas $\lambda
x. \Omega$ will not.
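This failure of $\eta$ can be sketched with explicit thunks (an illustration of ours in Python rather than Haskell, with divergence modeled as an exception purely so the demonstration terminates):

```python
# Sketch: how seq distinguishes the eta-equivalent terms Omega and
# \x. Omega.  Computations are modeled as zero-argument thunks, and
# divergence as an exception (an assumption made only so the demo
# terminates; real divergence is a non-terminating loop).

class Divergence(Exception):
    """Stands in for non-termination."""

def omega():
    # Forcing this thunk "diverges".
    raise Divergence

def seq(thunk, result):
    # Haskell's seq: force the first argument, then return the second.
    thunk()
    return result

def eta_expanded_omega():
    # The eta-expansion \x. Omega: forcing it yields a lambda value
    # without ever forcing omega itself.
    return lambda x: omega()

# seq Omega () "diverges" ...
try:
    seq(omega, ())
    diverged = False
except Divergence:
    diverged = True

# ... but seq (\x. Omega) () does not, so seq separates two terms that
# the function eta principle would identify.
converged = seq(eta_expanded_omega, ())
```

Divergence-as-exception is only a device to make the difference observable in finite time; the point is that `seq` yields an observation separating two $\eta$-equivalent terms.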
This is a crucial feature of Haskell and is a major source of
differences between implementations of lazy contracts, as noted in
\citet{Degen2012TheIO}.
We can understand this difference by using a different translation
into call-by-push-value: what Levy calls the ``lazy paradigm'', as
opposed to call-by-name \cite{levy03cbpvbook}.
Simply put, connectives are interpreted as in call-by-value, but with
the addition of extra thunks $UF$, so for instance the lazy function
type $A \to B$ is interpreted as $UFU(UFA \to FB)$ and the extra $UFU$
here is what causes the failure of the call-by-name $\eta$ principle.
With this embedding and the uniqueness theorem, GTT produces a
definition for lazy casts, and the definition matches the work of
\citet{XuPJC09} when restricting to non-dependent contracts.
\iflong\paragraph{Comparing Soundness Principles for Cast Semantics}\fi
\citet{greenmanfelleisen:2018} gives a spectrum of
differing syntactic type soundness theorems for different semantics of
gradual typing.
Our work here is complementary, showing that certain program
equivalences can only be achieved by certain cast semantics.
\citet{Degen2012TheIO} give an analysis of different cast semantics
for contracts in lazy languages, specifically based on Haskell, i.e.,
call-by-need with \texttt{seq}.
They propose two properties ``meaning preservation'' and
``completeness'' that they show are incompatible and identify which
contract semantics for a lazy language satisfy which of the
properties.
The meaning preservation property is closely related to graduality: it
says that evaluating a term with a contract either produces blame or
has the same observable effect as running the term without the
contract.
Meaning preservation rules out overly strict contract systems that
force (possibly diverging) thunks that wouldn't be forced in a
non-contracted term.
Completeness, on the other hand, requires that when a contract is
attached to a value, it is \emph{deeply} checked.
The two properties are incompatible because, for instance, a pair of a
diverging term and a value can't be deeply checked without causing the
entire program to diverge.
Using Levy's embedding of the lazy paradigm into call-by-push-value,
their incompatibility theorem should be a consequence of our main
theorem in the following sense.
We showed that any contract semantics departing from the
implementation in Theorem \ref{thm:functorial-casts} must violate
$\eta$ or graduality.
Their completeness property is inherently eager, and so must be
different from the semantics GTT would provide, so either the
restricted $\eta$ or graduality fails.
However, since they are defining contracts within the language, they
satisfy the restricted $\eta$ principle provided by the language, and
so it must be graduality, and therefore meaning preservation, that
fails.
\iflong\paragraph{Axiomatic Casts}\fi
Henglein's work on dynamic typing also uses an axiomatic semantics of
casts, but axiomatizes the behavior of casts at each type directly, whereas
we give a uniform definition of all casts and derive implementations
for each type \cite{henglein94:dynamic-typing}.
Because of this, the theorems proven in that paper are more closely
related to our model construction in Section
\ref{sec:contract}.
More specifically, many of the properties of casts needed to prove
Theorem \ref{thm:axiomatic-graduality} have direct analogues
in Henglein's work, such as the coherence theorems.
We have not included these lemmas in the paper because they are quite
similar to lemmas proven in \citet{newahmed18}; see there for a more
detailed comparison, and the extended version of this paper for full proof details \citep{newlicataahmed19:extended}.
Finally, we note that our assumption of compositionality, i.e., that
all casts can be decomposed into an upcast followed by a downcast, is
based on Henglein's analysis, where it was proven to hold in his
coercion calculus.
\iflong
\paragraph{Gradual Typing Frameworks}
\fi
In this work we have applied a method of ``gradualizing'' axiomatic
type theories: adding dynamism orderings, and then adding dynamic
types, casts and errors via axioms stated in terms of those orderings.
This is similar in spirit to two recent frameworks for designing
gradual languages: Abstracting Gradual Typing (AGT) \citep{AGT} and the
Gradualizer \citep{gradualizer16,gradualizer17}.
All of these approaches start with a typed language and construct a
related gradual language.
A major difference between our approach and those is that our work is
based on axiomatic semantics and so we take into account the equality
principles of the typed language, whereas Gradualizer is based on the
typing and operational semantics and AGT is based on the type safety
proof of the typed language.
Furthermore, our approach produces not just a single language, but
also an axiomatization of the structure of gradual typing and so we
can prove results about many languages by proving theorems in GTT.
The downside to this is that our approach doesn't directly provide an
operational semantics for the gradual language, whereas for AGT this
is a semi-mechanical process and for Gradualizer, completely
automated.
Finally, we note that AGT produces the ``eager'' semantics for
function types, and it is not clear how to modify the AGT methodology
to reproduce the lazy semantics that GTT provides.
More generally, both AGT and the Gradualizer are known to produce
violations of parametricity when applied to polymorphic languages,
with the explanation being that the parametricity property is in no
way encoded in the input to the systems: the operational semantics and
the type safety proof.
In future work, we plan to apply our axiomatic approach to gradualizing
polymorphism and state by starting with the rich \emph{relational logics
and models} of program equivalence for these
features~\cite{plotkinabadi93, dunphyphd, ahmed08:paramseal, neis09,
ahmed09:sdri}, which may lend insight into existing
proposals~\cite{siek15:mono,ahmed17,igarashipoly17,siek-taha06}--- for
example, whether the ``monotonic'' \citep{siek15:mono} and ``proxied''
\citep{siek-taha06} semantics of references support relational reasoning
principles of local state.
\iflong \paragraph{Blame}
\fi
We do not give a treatment of runtime blame reporting, but we argue that
the observation that upcasts are thunkable and downcasts are linear is
directly related to blame soundness~\cite{tobin-hochstadt06,wadler-findler09} in that if
an upcast were \emph{not} thunkable, it should raise positive blame and
if a downcast were \emph{not} linear, it should raise negative blame.
First, consider a potentially effectful stack upcast of the form
$\upcast{\u F A}{\u F A'}$. If it is not thunkable, then in our logical
relation this would mean there is a value $V : A$ such that $\upcast{\u
F A}{\u F A'}(\kw{ret} V)$ performs some effect.
Since the only observable effects for casts are dynamic type errors,
$\upcast{\u F A}{\u F A'}(\kw{ret} V) \mapsto \mho$, and we must decide
whether the positive party or negative party is at fault.
However, since this is call-by-value evaluation, this error happens
unconditionally on the continuation, so the continuation never had a
chance to behave in such a way as to prevent blame, and so we must blame the
positive party.
Dually, consider a value downcast of the form $\dncast{U \u B}{U \u B'}$.
If it is not linear, that would mean it forces its $U \u B'$
input either never or more than once.
Since downcasts should refine their inputs, it is not possible for
the downcast to use the argument twice, since e.g. printing twice does not
refine printing once.
So if the cast is not linear, that means it fails without ever forcing
its input, in which case it knows nothing about the positive party and
so must blame the negative party.
In future work, we plan to investigate extensions of GTT with more than
one $\mho$ with different blame labels, and an axiomatic account of
a blame-aware observational equivalence.
\begin{longonly}
\paragraph{Denotational and Category-theoretic Models}
We have presented certain concrete models of GTT using ordered CBPV
with errors, in order to efficiently arrive at a concrete operational
interpretation.
It may be of interest to develop a more general notion of model of GTT
for which we can prove soundness and completeness theorems, as in
\citet{newlicata2018-fscd}.
A model would be a strong adjunction between double categories where
one of the double categories has all ``companions'' and the other has
all ``conjoints'', corresponding to our upcasts and downcasts.
Then the contract translation should be a construction that takes a
strong adjunction between 2-categories and makes a strong adjunction
between double categories where the ep pairs are ``Kleisli'' ep pairs:
the upcast has a right adjoint, but only in the Kleisli category, and
vice versa the downcast has a left adjoint in the co-Kleisli category.
Furthermore, the ordered CBPV with errors should also have a sound and
complete notion of model, and so our contract translation should have
a semantic analogue as well.
\end{longonly}
\begin{longonly}
\paragraph{Gradual Session Types} ~
\end{longonly}
Gradual session types~\cite{igarashi+17gradualsession} share some
similarities to GTT, in that there are two sorts of types (values and
sessions) with a dynamic value type and a dynamic session type.
However, their language is not \emph{polarized} in the same way as CBPV,
so there is likely no analogue of our property that upcasts always go
between value types and downcasts always go between computation
types. Instead, we might reconstruct this in a polarized session
type language~\cite{pfenninggriffith15session}.
\begin{longonly}
The two dynamic types would then be the ``universal sender'' and
``universal receiver'' session types.
\end{longonly}
\begin{longonly}
\paragraph{Dynamically Typed Call-by-push-value}
Our interpretation of the dynamic types in CBPV suggests a design for
a Scheme-like language with a value and computation distinction.
This may be of interest for designing an extension of Typed Racket that
efficiently supports CBN or a Scheme-like language with codata types.
While the definition of the dynamic computation type by a lazy product
may look strange, we argue that it is no stranger than the use of its
dual, the sum type, in the definition of the dynamic value type.
That is, in a truly dynamically typed language, we would not think of
the dynamic type as being built out of some sum type construction, but
rather that it is the \emph{union} of all of the ground value types, and the
union happens to be a \emph{disjoint} union and so we can
model it as a sum type.
In the dual, we don't think of the computation dynamic type as a
\emph{product}, but instead as the \emph{intersection} of the ground
computation types.
Thinking of the type as unfolding:
\[ \u {\text{?`}} = \u F {?} \wedge ({?} \to \u F {?}) \wedge ({?} \to {?} \to \u F {?}) \wedge \cdots \]
This says that a dynamically typed computation is one that can be
invoked with any finite number of arguments on the stack, a fairly
accurate model of implementations of Scheme that pass multiple
arguments on the stack.
\end{longonly}
\iflong
\paragraph{Dependent Contract Checking}
\fi
We also plan to explore using GTT's specification of casts in a
dependently typed setting, building on work using
Galois connections for casts between dependent
types~\cite{dagand+18interoperability}, and work on effectful dependent
types based on a CBPV-like judgement
structure~\cite{ahman+16fiberedeffects}.
\paragraph{Acknowledgments}
We thank Ron Garcia, Kenji Maillard and Gabriel Scherer for helpful
discussions about this work. We thank the anonymous reviewers for
helpful feedback on this article. This material is based on research
sponsored by the National Science Foundation under grant CCF-1453796
and the United States Air Force Research Laboratory under agreement
number FA9550-15-1-0053 and FA9550-16-1-0292.
The views and conclusions contained herein are those of the authors and
should not be interpreted as necessarily representing the official
policies or endorsements, either expressed or implied, of the United
States Air Force Research Laboratory, the U.S. Government, or Carnegie
Mellon University.
\section{Introduction and motivation}\label{sec:intro}
Statistical design of experiments underpins much quantitative work in the biological, physical and engineering sciences, providing a principled approach to the efficient allocation of (typically sparse) experimental resources to address the aims of the study. Often, experiments aim to understand a process by modeling discrete data, for example arising from the observation of a binary or count response. For completely randomized experiments, assuming homogeneous experimental units, a generalized linear model (GLM) may provide an appropriate description, and there has been much research into the construction of optimal and efficient designs for multi-factor GLMs, including \citet{wler}, \citet{ds06, ds08} and \citet{rwle}. See \citet{aw2015} for a comprehensive review.
When heterogeneous experimental units can be grouped into more homogeneous groups, or blocks, accounting for this grouping can improve the precision of inferences made from the experimental data. Methods to find block designs for discrete data have recently been proposed by, amongst others, \citet{woods+v_11}, \citet{niaparast-schwabe} and \citet{ww2015}. Two modeling paradigms have been adopted in the design literature: conditional models, where the joint distribution of the data is derived by explicitly including block-specific random effects (e.g. generalized linear mixed models, \citealp{bres-clay}); and marginal models, where the dependence structure of the data is specified separately from the marginal distribution of each response (e.g. with parameters estimated via generalized estimating equations (GEEs), \citealp{liang-zeger}). For the linear model, these two modeling approaches coincide. In this paper, we find optimal designs under a marginal modeling approach when the intra-block dependence structure is defined via a copula. Such models are particularly appropriate when block effects are not of interest in themselves and the aim of the experiment is to understand the effects of treatment factors averaged across blocks. Optimal designs for marginal models using alternative definitions of the dependence structure have been found by \citet{jho}, \citet{ua2004} and \citet{vdvw2014}.
Although our methods can be generalized to arbitrary block sizes, we focus on the important special case of experiments with blocks of size two (see \citealp{godolphin2018}). Such blocks occur routinely in microarray experiments \citep{bailey_07, kerr_12} and in experiments on people, for example with eyes or arms as experimental units \citep{david+k_96}. Practical motivation for our work comes from a materials science experiment. In Section~\ref{sec:application} we find designs appropriate for aerospace materials testing experiments similar to those performed by our collaborators at the UK Defence Science and Technology Laboratory. The aim of these experiments is to compare the thermal properties of a set of novel materials against a reference material. In particular, one aim is to assess the probability of failure due to the exposure to extreme (high) temperatures. The experiment is performed using an arc jet to heat material samples, which are held in one of six ``wedges'', each of which holds a pair of samples on a strut attached to a circular carousel, see Figure~\ref{fig:arcjet}. Hence, the experiment can be considered as a block design with six blocks, each containing two units. In the particular experiment considered here, six materials were tested: a reference and five novel samples. A variety of measurements are made on each tested sample, including a visual inspection of quality to assess material failure, which leads to a binary (pass/fail) response. It is this response for which we find optimal designs.
\begin{figure}[htb]
\begin{center}
\includegraphics[scale=.75]{figures/materials_equip} ~~~~~~
\includegraphics[scale=.75]{figures/materials_schematic}
\end{center}
\caption{\label{fig:arcjet}Arc jet carousel, struts and ``wedges'' (left) and schematic (right). In addition to the six wedges for holding material samples, the carousel had two further wedges used for temperature measurement.}
\end{figure}
In common with most nonlinear models, the performance of a given design for a copula-based GLM model may depend on the values of the model parameters that define both the marginal model and the dependence structure. If strong prior information is available, then locally optimal designs can be sought for given values of the model parameters. Otherwise, Bayesian (e.g. \citealp{ow2017}) or maximin (e.g. \citealp{king+w:2000}) approaches can be adopted. In common with much of the recent literature on designs for GLMs, we find optimal designs robust to the values of the model parameters via a pseudo-Bayesian approach (e.g. \citealp{ADT2007}, ch.~18), with a classical quantity for design performance averaged with respect to a prior distribution on the parameters. Here, we adopt variants of $D$-optimality for design selection.
The remainder of the paper is organized as follows. In Section~\ref{sec:copulas} we introduce the statistical models we employ, including copulas, and develop design methods for blocked experiments. An illustrative comparison is made to previous design approaches based on GEEs using an example from \citet{woods+v_11}. In Section~\ref{sec:application} we demonstrate and assess our methods via application to the materials testing example. In particular, we show how prior information on the parameters influences the choice of optimal design. We provide a brief discussion and some areas for future work in Section~\ref{sec:disc}.
\section{Designs for copula-based marginal models} \label{sec:copulas}
Suppose the experiment varies $m$ treatment factors, $\mathbf{x}^T = (x_1,\ldots,x_m)$, and the experiment has $b$ blocks of size $k$; throughout, our examples will assume $k=2$. The $j$th unit in the $i$th block receives treatment $\mathbf{x}_{ij}^T=(x_{1ij},\ldots,x_{mij})$ $(i=1,\ldots,b;\, j=1,\ldots,k)$ and realizes observation $Y_{ij}$. The $\mathbf{x}_{ij}$ are chosen from a standardized design space $\mathcal{X}=[-1,1]^m$ and are not necessarily distinct. Independence of observations $Y_{ij}, Y_{i'j'}$, for $i,i'=1,\ldots,b;\,j,j' =1,\ldots,k$, is assumed across blocks $(i\ne i')$ but we allow dependence within a block ($i=i'$), which we describe via a copula model.
\subsection{Statistical modeling via copulas}\label{sec:copmod}
The problem of specifying a probability model for dependent random variables $Y_{i1}, \dots, Y_{ik}$ can be simplified by expressing the corresponding $k$-dimensional joint distribution ${\mathbf{F}}_{{Y_{i1}},\dots,{Y_{ik}}}$ in terms of marginal distributions $F_{Y_{i1}}, \dots, F_{Y_{ik}}$, and an associated {$k$-copula} (or dependence function) $C$ defined as follows (cf. \citealp{nelsen_06}).
\begin{definition}
\label{Def:Copula}
A $k$-copula is a function $C: [0,1]^k \rightarrow [0,1]$, $k \geq 2$, with the following properties:
\begin{enumerate}
\item (\emph{uniform margins}) for every $\mathbf{u} \in [0,1]^k$, if at least one coordinate of $\mathbf{u}$ is $0$, then
\[C(\mathbf{u}) = 0\,, \]
and if all coordinates of $\mathbf{u}$ are $1$ except $u_i$, then
$$C(\mathbf{u}) = u_i\,.$$
\item (\emph{k-increasing}) for all $\mathbf{a}$, $\mathbf{b} \in [0,1]^k$ such that $\mathbf{a}\leq \mathbf{b}$,
\[V_{C}([\mathbf{a},\mathbf{b}]) \geq 0,\]
where $V_{C}$ is the measure induced by $C$ on $[0,1]^k$.
\end{enumerate}
\end{definition}
The connection between a copula and a joint probability distribution is given by Sklar's Theorem \citep{sklar_59}, which affirms that for every $k$-dimensional joint distribution ${\mathbf{F}}_{{Y_{i1}},\ldots,{Y_{ik}}}$ with marginal distributions $F_{Y_{i1}}, \ldots, F_{Y_{ik}}$, there exists a $k$-copula $C$, defined as in Definition~\ref{Def:Copula}, such that
\begin{equation}\label{Eq:S}
\mathbf{F}_{Y_{i1},\ldots,Y_{ik}} (y_1,\dots,y_k) = C(F_{Y_{i1}}(y_1),\ldots, F_{Y_{ik}}(y_k))\,,
\end{equation}
for all $y_1, \ldots, y_k\in\mathbb{R}$.
Conversely, if $C$ is a $k$-copula and $F_{Y_1}, \dots, F_{Y_k}$ are distribution functions, then the function $F_{Y_1,\dots,Y_k}$ given by (\ref{Eq:S}) is a joint distribution with marginals $F_{Y_1}, \dots, F_{Y_k}$. The copula $C$ may not be unique for discrete margins; however, this has little practical consequence for statistical purposes, cf. \citet{genest+n_07}.
Owing to Sklar's theorem, parametric families of copulas represent a powerful tool to describe the joint relationship between dependent random variables. Selecting the appropriate dependence within an assumed parametric copula family reduces to the selection of copula parameters, which correspond, for example, to a specific measure of association for the modeled random variables. Assuming $Y_{i1},\ldots,Y_{ik}$ are continuous random variables with associated copula $C(\cdot;\alpha)$, one measure of association proposed by \citet{joe_90} is given by
\begin{equation}
\label{eq:tau}
\tau_k = \frac{1}{2^{k-1}-1} \left\{2^k \int\limits_{[0,1]^k} C(\cdot;\alpha) d C(\cdot;\alpha) - 1 \right\}\,.
\end{equation}
Equation~\eqref{eq:tau} is a generalized version of Kendall's $\tau$, and hence establishes a correspondence between a scalar copula parameter $\alpha$ and the degree of dependence. More details and properties of this quantity, and another more traditional measure of concordance, can be found in \citet{genest+al_11}.
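For $k = 2$, Equation~\eqref{eq:tau} reduces to the familiar Kendall's $\tau = 4\,\mathbf{E}[C(U,V)] - 1$, and the parameter-to-dependence correspondence can be checked by simulation. The sketch below is an illustration of ours, not a copula family used later in the paper: it samples from a bivariate Clayton copula, for which $\tau = \alpha/(\alpha+2)$ in closed form.

```python
# Sketch: checking the copula-parameter / Kendall's-tau correspondence
# by Monte Carlo for the Clayton copula (an illustrative choice), whose
# closed-form association is tau = alpha / (alpha + 2).
import numpy as np
from scipy.stats import kendalltau

rng = np.random.default_rng(0)
alpha = 2.0        # Clayton dependence parameter
n = 20000

# Marshall-Olkin sampling: W ~ Gamma(1/alpha, 1), E_j ~ Exp(1), and
# U_j = (1 + E_j / W)^(-1/alpha) gives (U_1, U_2) from Clayton(alpha).
w = rng.gamma(1.0 / alpha, size=n)
e = rng.exponential(size=(n, 2))
u = (1.0 + e / w[:, None]) ** (-1.0 / alpha)

tau_hat, _ = kendalltau(u[:, 0], u[:, 1])
tau_theory = alpha / (alpha + 2.0)   # 0.5 for alpha = 2
```

With $2\times 10^4$ draws the empirical and theoretical values of $\tau$ agree closely, making concrete the correspondence between the scalar copula parameter and the degree of dependence.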
\subsection{Design of experiments for copula models}\label{sec:design}
In common with most work on optimal design of experiments, we base our criterion on the Fisher information matrix (FIM), the inverse of which provides an asymptotic approximation to the variance-covariance matrix of the maximum likelihood estimators of the model parameters.
Let $\zeta_i = (\mathbf{x}_{i1},\ldots,\mathbf{x}_{ik})\in\mathcal{X}^k$ denote the $k$ treatment vectors assigned to the units in block $i$ $(i = 1,\ldots, b)$.
We will work within a class of normalized block designs defined as
$$
\xi = \left\{
\begin{array}{ccc}
\zeta_1,& \ldots, & \zeta_n \\
w_1, & \ldots, & w_n
\end{array}
\right\}\,,
\quad 0< w_i \leq 1\,,
\quad \sum_{i=1}^nw_i = 1\,,
$$
with $n \leq b$ distinct (support) blocks. As defined, $bw_i$ must be an integer and represents the replication of the $i$th support block ($i=1,\ldots,n$). Without loss of generality, we assume the first $n$ blocks in the design correspond to $\zeta_1,\ldots,\zeta_n$, with the remaining $b-n$ blocks being replicates. We relax the assumption that $bw_i$ is an integer to find so-called approximate or continuous designs; see also \citet{cheng} and \citet{ww2015}. Let $\Xi$ denote the space of all possible designs of this form.
Denote the vector of responses from the $i$th block as
$$
\mathbf{Y}_i = \left(Y_{i1},\ldots, Y_{ik}\right)^T\,,\quad i = 1,\ldots, b\,,
$$
with corresponding expectation vector
$$
\boldsymbol{\eta}_i = \left[\eta(\mathbf{x}_{i1};\,\boldsymbol{\beta}),\ldots,\eta(\mathbf{x}_{ik};\,\boldsymbol{\beta})\right]^T\,,
$$
where $\eta(\cdot;\,\cdot)$ is a known function and $\boldsymbol{\beta}=(\beta_1, \ldots,\beta_r)^T$ is a vector of unknown parameters requiring estimation. Denote the marginal distribution function for the $j$th entry in the block as $F_{Y_{ij}}\left(y_{ij};\, \mathbf{x}_{ij}, \boldsymbol{\beta}\right)$, $j=1,\ldots, k$, and denote the joint distribution, derived via a copula transformation, for the $k$ responses in the $i$th block as $C\left(F_{Y_{i1}},\ldots,F_{Y_{ik}};\, \boldsymbol{\alpha}\right)$ where $\boldsymbol{\alpha}=({\alpha}_1,\ldots, {\alpha}_l)^T$ are unknown (copula) parameters.
The FIM $M(\zeta_i;\,\boldsymbol{\gamma})$ for the $i$th block is an $(r +l) \times (r +l)$ matrix with $vw$th element
\begin{equation}\label{Eq:FIM}
M(\zeta_i;\boldsymbol{\gamma})_{vw} = \mathbf{E} \left( - \dfrac{\partial^2}{\partial \gamma_v \partial \gamma_w} \log c_{\mathbf{Y}_i}(\boldsymbol{\eta}_i, \boldsymbol{\alpha}) \right)\,,
\end{equation}
where $\boldsymbol{\gamma}=(\gamma_1,\ldots,{\gamma}_{r+l})^T=({\beta}_1,\ldots,{\beta}_r,{\alpha}_1, \ldots, {\alpha}_l)^T$ and
\[
c_{\mathbf{Y}_i}(\boldsymbol{\eta}_i, \boldsymbol{\alpha})= \dfrac{\partial^k}{\partial y_{i1} \dots \partial y_{ik}} C\left(F_{Y_{i1}},\ldots,F_{Y_{ik}};\, \boldsymbol{\alpha}\right)\]
is the joint density function represented through a copula $C$ in accordance with Equation~(\ref{Eq:S}). The FIM for an approximate block design $\xi$ is then given by
\[M(\xi;\, \boldsymbol{\gamma}) = \sum\limits_{i=1}^n w_i M(\zeta_i;\,\boldsymbol{\gamma})\,.\]
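To make this weighted-sum construction concrete, the sketch below assembles $M(\xi;\,\boldsymbol{\gamma})$ for a deliberately simple case of our own choosing: a two-parameter logistic marginal model with an independence copula, under which each block's contribution is just the sum of the standard GLM unit information matrices (a simplifying assumption for illustration only; a dependent copula contributes cross-terms and rows/columns for the copula parameters $\boldsymbol{\alpha}$ via Equation~\eqref{Eq:FIM}).

```python
# Sketch: the information matrix of an approximate block design as the
# weighted sum  M(xi) = sum_i w_i M(zeta_i).  Illustrative setting: a
# logistic marginal model eta = b0 + b1*x with an independence copula,
# so the block FIM is the sum of the usual GLM unit contributions
# p(1-p) f(x) f(x)^T  (our simplification; not the general copula FIM).
import numpy as np

beta = np.array([0.0, 1.0])          # illustrative local parameter values

def unit_fim(x):
    f = np.array([1.0, x])           # regressors: intercept and x
    p = 1.0 / (1.0 + np.exp(-f @ beta))
    return p * (1.0 - p) * np.outer(f, f)

def block_fim(zeta):
    # zeta is the k-tuple of treatments in one block (here k = 2)
    return sum(unit_fim(x) for x in zeta)

# A two-support-block approximate design with equal weights:
blocks  = [(-1.0, 1.0), (0.0, 1.0)]
weights = [0.5, 0.5]

M = sum(w * block_fim(z) for w, z in zip(weights, blocks))
log_det = np.log(np.linalg.det(M))   # a local D-type criterion value
```

Design search then amounts to optimizing such a determinant-based criterion over the block treatments $\zeta_i$ and weights $w_i$.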
An optimal design $\xi^\star$ maximizes a scalar function $\psi\left\{M(\xi;\,\boldsymbol{\gamma})\right\}$ of the information matrix. Previous work on optimal designs for copulas has focussed on finding completely randomized locally-optimal designs for multivariate responses, which can be considered as a block design where every unit within a block must receive the same treatment. \citet{denman_design_2011} found $D$-optimal designs for a bivariate response ($k=2$) that maximized $\psi^D \left\{M(\xi;\,\boldsymbol{\gamma})\right\} = \det M(\xi;\,\boldsymbol{\gamma})$, and \citet{Perrone+m_16} developed a corresponding equivalence theorem. These methods were extended to the local $D_A$-criterion, and, as a special case, for the $D_s$-criterion in \citet{perrone+al_17}. Other relevant uses of design of experiments in copula models are \citet{deldossi_optimal_2018} and \citet{durante_asymmetric_2016}, but until now all relied on the availability of a single ``best guess'' vector of parameter values.
To overcome this dependence on assumed parameter values, here we adopt a pseudo-Bayesian approach to constructing block designs. Furthermore, our primary interest is typically in $s$ meaningful linear combinations of the parameters. Such combinations can be defined as elements of the vector $A^T\boldsymbol{\gamma}$, where $A^T$ is an $s \times (r+l)$ matrix of rank $s < (r+l)$. If $M(\xi;\, \boldsymbol{\gamma})$ is non-singular, the variance-covariance matrix of the maximum likelihood estimator of $A^T\boldsymbol{\gamma}$ is proportional to $A^T \{ M(\xi;\, \boldsymbol{\gamma}) \}^{-1} A$. Hence, we define a \textit{robust $D_A$-optimal block design} $\xi^\star$ as the design that maximizes
\begin{equation}\label{eq:bayesDA}
\Psi^D(\xi;\,G, A) = \int_{\Gamma} \log \det[A^T \{ M(\xi;\, \boldsymbol{\gamma}) \}^{-1} A]^{-1}\,\mathrm{d}G(\boldsymbol{\gamma})\,,
\end{equation}
\noindent where $G(\boldsymbol{\gamma})$ is a proper prior distribution function for $\boldsymbol{\gamma}$ and $\Gamma\subset\mathbb{R}^{r+l}$ is the support of $G$. See also \citet{woods+v_11}.
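To make the prior averaging in the criterion concrete, the following minimal sketch approximates it by Monte Carlo for a hypothetical two-parameter Poisson regression model, for which the information matrix genuinely depends on the unknown parameters; with $A = I$ the integrand reduces to $\log\det M(\xi;\,\boldsymbol{\gamma})$. The design points, weights, and prior bounds below are purely illustrative, not those of any example in the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Poisson regression with mean mu(x) = exp(b0 + b1*x): the information
# matrix of a design depends on the unknown parameters, hence the prior average.
def info_matrix(points, weights, beta):
    M = np.zeros((2, 2))
    for x, w in zip(points, weights):
        f = np.array([1.0, x])
        M += w * np.exp(beta @ f) * np.outer(f, f)
    return M

def psi_D(points, weights, prior_draws):
    # Monte Carlo approximation of the criterion integral:
    # average of log det M(xi; gamma) over draws from the prior G.
    vals = [np.linalg.slogdet(info_matrix(points, weights, b))[1]
            for b in prior_draws]
    return float(np.mean(vals))

draws = rng.uniform([-1.0, 4.0], [1.0, 5.0], size=(200, 2))  # uniform prior on a box
value = psi_D([0.0, 1.0], [0.5, 0.5], draws)
```

In practice one would maximize `psi_D` over the design points and weights, e.g. with a Fedorov-Wynn-type exchange algorithm as implemented in \texttt{docopulae}.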
Most often the main interest is in an $s < (r+l)$-dimensional subset of the parameters. In such a case, a \textit{robust $D_s$-optimal block design} can be found by maximizing
\begin{equation}\label{eq:bayesDs}
\Psi^D(\xi;\,G) = \int_{\Gamma} \log \det \left\{M_{11} - M_{12}M_{22}^{-1}M_{12}^T\right\}\,\mathrm{d}G(\boldsymbol{\gamma})\,,
\end{equation}
following the partition of the information matrix as
$$M(\xi;\, \boldsymbol{\gamma}) = \left (
\begin{array}{cc}
M_{11} & M_{12} \\
M_{12}^T & M_{22}
\end{array}
\right )\,.$$
Here, $M_{11}$ is the $(s \times s)$ partition related to the parameters of interest. This criterion follows as a special case of the $D_A$-criterion with $A^T = (I_s \; 0_{s\times (r+l-s)})$, with $I_s$ the $s\times s$ identity matrix and $0_{s\times (r+l-s)}$ the $s\times (r+l-s)$ zero matrix.
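The stated equivalence between the $D_A$-criterion with $A^T = (I_s \; 0_{s\times (r+l-s)})$ and the Schur-complement form of the $D_s$-criterion can be verified numerically; the sketch below uses a random positive-definite matrix as a stand-in for an information matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for an information matrix: a random symmetric
# positive-definite matrix of size (r + l), with s parameters of interest.
r_plus_l, s = 5, 2
B = rng.standard_normal((r_plus_l, r_plus_l))
M = B @ B.T + r_plus_l * np.eye(r_plus_l)

# D_A-criterion value with A^T = (I_s  0): det[A^T M^{-1} A]^{-1}
A = np.vstack([np.eye(s), np.zeros((r_plus_l - s, s))])
crit_DA = 1.0 / np.linalg.det(A.T @ np.linalg.inv(M) @ A)

# D_s-criterion value via the Schur complement M11 - M12 M22^{-1} M12^T
M11, M12, M22 = M[:s, :s], M[:s, s:], M[s:, s:]
crit_Ds = np.linalg.det(M11 - M12 @ np.linalg.inv(M22) @ M12.T)

assert np.isclose(crit_DA, crit_Ds)
```

The identity holds because $A^T M^{-1} A$ with this choice of $A$ is exactly the top-left $s \times s$ block of $M^{-1}$, which equals the inverse of the Schur complement.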
We evaluate a design $\xi$ via its \textit{Bayesian efficiencies} under a given criterion, relative to an appropriate reference design $\xi^*$ (see, for example, \citealp{waite2018}). Under robust $D_s$-optimality, this efficiency is given by:
\[
\text{eff}(\xi,\xi^*) =
\left(\dfrac{\exp\int_{\Gamma} \log \det[M_{11}(\xi;\, {\boldsymbol{\gamma}}) - M_{12}(\xi;\, {\boldsymbol{\gamma}})M_{22}^{-1}(\xi;\, {\boldsymbol{\gamma}})M_{12}^T(\xi;\, {\boldsymbol{\gamma}})] \,\mathrm{d}G(\boldsymbol{\gamma})}{\exp\int_{\Gamma} \log \det[
M_{11}(\xi^*;\, {\boldsymbol{\gamma}}) - M_{12}(\xi^*;\, {\boldsymbol{\gamma}})M_{22}^{-1}(\xi^*;\,{\boldsymbol{\gamma}})M_{12}^T(\xi^*;\,{\boldsymbol{\gamma}}) ]\,\mathrm{d}G(\boldsymbol{\gamma}) }\right)^{1/s}.
\]
We find designs that maximize~\eqref{eq:bayesDA} and~\eqref{eq:bayesDs} numerically using a version of the Fedorov-Wynn algorithm \citep{wynn,fedorov}, as implemented in \texttt{R} package \texttt{docopulae} \citep{docopulae}.
The optimality of a block design $\xi^\star$ under the robust $D_A$-criterion, regardless of how it was found, can be assessed via application of the following Kiefer-Wolfowitz-type equivalence theorem. The proof is similar to that for completely randomized experiments with multivariate response, see \citet{perrone+al_17} for the locally-optimal design case.
\begin{theorem}\label{Th:1}
The following properties are equivalent:
\begin{enumerate}
\item $\xi^\star$ is $D_A$-optimal;
\item for every $\zeta \in \mathcal{X}^k$,
$$\int_{\Gamma}
\textnormal{ tr }[ M(\xi^\star;\, {\boldsymbol{\gamma}})^{-1} A (A^T M(\xi^\star;\, {\boldsymbol{\gamma}})^{-1} A)^{-1} A^T M(\xi^\star;\, {\boldsymbol{\gamma}})^{-1} M(\zeta;\, {\boldsymbol{\gamma}})] \,\mathrm{d}G(\boldsymbol{\gamma}) \leq s\,;$$
\item over all $\xi \in \Xi$, the design $\xi^\star$ minimizes the function
$$\max\limits_{\zeta \in \mathcal{X}^k}
\int_{\Gamma} \textnormal{ tr }[M(\xi;\, {\boldsymbol{\gamma}})^{-1} A (A^T M(\xi;\, {\boldsymbol{\gamma}})^{-1} A)^{-1} A^T M(\xi;\, {\boldsymbol{\gamma}})^{-1} M(\zeta;\, {\boldsymbol{\gamma}})] \,\mathrm{d}G(\boldsymbol{\gamma})\,,$$
\end{enumerate}
where $\Xi$ is the set of all possible block designs.
\end{theorem}
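The trace condition can be illustrated in its simplest degenerate setting, a sketch under assumptions not drawn from the paper's examples: no blocking ($k=1$), a point-mass prior, and $A = I$ (so the middle factors collapse and the bound is $s = 2$). For straight-line regression on $[-1,1]$, the equal-weight design on $\{-1,+1\}$ is D-optimal, and the sensitivity function attains the bound exactly at its support points.

```python
import numpy as np

# Sensitivity check for the trace condition of the equivalence theorem in a
# degenerate case: k = 1, point-mass prior, A = I, linear model E[y] = b0 + b1*x.
M_star = np.array([[1.0, 0.0], [0.0, 1.0]])  # FIM of the design {-1, +1}, weights 1/2

def sensitivity(x):
    f = np.array([1.0, x])
    M_zeta = np.outer(f, f)                  # FIM of the one-point design at x
    return np.trace(np.linalg.inv(M_star) @ M_zeta)

xs = np.linspace(-1.0, 1.0, 201)
vals = [sensitivity(x) for x in xs]
assert max(vals) <= 2.0 + 1e-12              # condition 2: bounded by s = 2
assert np.isclose(sensitivity(1.0), 2.0)     # equality at the support points
```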
\subsection{Comparative example}
We demonstrate robust optimal block designs for copula models using a simple example from \citet{woods+v_11}, which allows comparison to the designs found by those authors for a GEE model. We find robust designs for a single-factor log-linear regression model assuming Poisson marginal distributions and a quadratic linear predictor, implying $\log\{\eta(\mathbf{x};\,\boldsymbol{\beta})\} = \beta_0 + \beta_1x + \beta_2x^2$. The prior distribution $G$ is uniform on the parameter space $[-1,1]\times [4, 5] \times [0.5, 1.5]$. In line with our motivating example, we assume blocks of size $k=2$ and intra-block dependence defined according to one of the following bivariate copula functions.
\begin{enumerate}
\item \emph{Product Copula}, which represents the independence case,
\[C(u_1,u_2) = u_1 u_2\,,\]
with generalized Kendall's $\tau$ of $\tau_2 = 0$.
\item \emph{Clayton Copula},
\item[] \[{C}_{\alpha}(u_1,u_2;\,\alpha) =\big[ \max\big( u_1^{-\alpha} + u_2^{-\alpha} -1 ,\, 0\big) \big]^{-\frac{1}{\alpha}}\,,\]
with $\alpha \in (0, +\infty)$ and generalized $\tau_2 = \frac{\alpha}{\alpha + 2}$.
\item \emph{Gumbel Copula},
\item[] \[{C}_{\alpha}(u_1,u_2;\,\alpha) =\exp \big( - \big[ ( -\ln u_1)^{\alpha} + (-\ln u_2)^{\alpha} \big]^{\frac{1}{\alpha}} \big)\,,\]
with $\alpha \in [1, +\infty)$ and generalized $\tau_2 = \frac{\alpha - 1}{\alpha}$.
\end{enumerate}
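The three copulas, and the inversion of the Kendall's-$\tau$ relations used below to match dependence levels across copula families, can be sketched directly; the parameter values are those implied by $\tau_2 = 1/3$.

```python
import numpy as np

def product_copula(u1, u2):
    return u1 * u2

def clayton(u1, u2, alpha):
    return max(u1**(-alpha) + u2**(-alpha) - 1.0, 0.0) ** (-1.0 / alpha)

def gumbel(u1, u2, alpha):
    return np.exp(-(((-np.log(u1))**alpha + (-np.log(u2))**alpha) ** (1.0 / alpha)))

# Inverting the generalized Kendall's-tau relations to hit a common level:
tau = 1.0 / 3.0
a_clayton = 2.0 * tau / (1.0 - tau)   # from tau_2 = alpha / (alpha + 2)
a_gumbel = 1.0 / (1.0 - tau)          # from tau_2 = (alpha - 1) / alpha

assert np.isclose(a_clayton / (a_clayton + 2.0), tau)
assert np.isclose((a_gumbel - 1.0) / a_gumbel, tau)
# As alpha -> 0 the Clayton copula approaches independence:
assert abs(clayton(0.3, 0.7, 1e-9) - product_copula(0.3, 0.7)) < 1e-6
# The Gumbel copula with alpha = 1 is exactly the product copula:
assert np.isclose(gumbel(0.3, 0.7, 1.0), product_copula(0.3, 0.7))
```

The last two checks motivate the use of $\tau_2 = \epsilon$ with a tiny $\epsilon$ rather than exact independence: the Clayton limit is only approached, never attained, for $\alpha > 0$.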
The first copula is chosen for reference purposes; the latter two represent opposing dependencies in the tails (lower tail dependence for the Clayton versus upper tail dependence for the Gumbel). To isolate the effect of the copula structure from the strength of the dependence, we set $\alpha$ for each copula such that the values of Kendall's $\tau$ coincide at three levels, $\tau_2=\epsilon, 1/10, 1/3$, respectively. Here $\epsilon=10^{-9}$ is a small positive number used to approximate the independence case while avoiding singularity issues.
To find robust $D$-optimal designs, objective function~(\ref{eq:bayesDA}) was evaluated using quadrature \citep{gjs}. Optimal designs under the Clayton and Gumbel copulas are shown in Figure~\ref{fig:toyexample}, and demonstrate that increasing the generalized dependence (i.e. increasing $\tau_2$) leads to designs placing more weight on support blocks with points on the edge of the design space. All the designs display a ``mirror-image'' structure, with all design points having $\mathbf{x}>0$. These features are common in designs for Poisson regression (see \citealp{rwle}). The designs found under the Gumbel copula tend to include more support blocks but the pattern in the changes to these blocks as $\tau_2$ is increased is similar for both copulas.
\newcommand{.35}{.35}
\begin{figure}[htb]
\begin{center}
\includegraphics[viewport = 100 250 500 600, clip, scale = .35]{figures/poisson_copula_clayton_tau_0}
\includegraphics[viewport = 100 250 500 600, clip, scale = .35]{figures/poisson_copula_clayton_tau_01}
\includegraphics[viewport = 100 250 500 600, clip, scale = .35]{figures/poisson_copula_clayton_tau_033}
\includegraphics[viewport = 100 250 500 600, clip, scale = .35]{figures/poisson_copula_gumbel_tau_1e-09}
\includegraphics[viewport = 100 250 500 600, clip, scale = .35]{figures/poisson_copula_gumbel_tau_01}
\includegraphics[viewport = 100 250 500 600, clip, scale = .35]{figures/poisson_copula_gumbel_tau_033}
\end{center}
\caption{\label{fig:toyexample} Optimal designs for the comparative example; rows: Clayton and Gumbel copula; columns: levels $\tau_2= \epsilon, 1/10, 1/3$. }
\end{figure}
For reference purposes, the optimal design under the independence copula, i.e.\ an optimal design assuming no block effect, was also evaluated. It showed little difference from setting the nominal level $\tau_2=0$ for a particular copula: the D-efficiencies for the Clayton and Gumbel models were 96.3\% and 99.7\%, respectively. As expected, this efficiency decreases as the association within the block increases; for $\tau_2=1/3$, for instance, it is already down to 65.0\% and 61.3\%, respectively.
In \cite{woods+v_11}, robust $D$-optimal designs were found under the same Poisson marginal models and prior distribution but with the dependence described using a GEE approach with an exchangeable correlation matrix and pairwise working correlation of $0.5$. The optimal design found was given by:
\begin{equation}\label{eq:geedesign}
\xi^\star = \left\{
\begin{array}{ccc}
(.03,1) & (1,.60) & (-.40,.78)\\
.355 & .310 & .335
\end{array}
\right\}\,.
\end{equation}
That is, for example, the first support block is $\zeta_1 = (0.03, 1)$. This design differs somewhat in structure from the copula designs and lacks the mirror-image pattern. Quantitatively, the comparison yields the efficiencies under various scenarios given in Table~1. Surprisingly, the design from \cite{woods+v_11} appears to be most compatible with an independence assumption.
\begin{table}[htb]
\begin{tabular}{lllll}
\hline
\multicolumn{1}{l}{Independence} & \multicolumn{1}{l}{Clayton, $\tau_2=\epsilon$} & \multicolumn{1}{l}{Clayton, $\tau_2=1/3$} & \multicolumn{1}{l}{Gumbel, $\tau_2=\epsilon$} & \multicolumn{1}{l}{Gumbel, $\tau_2=1/3$} \\ \hline
\multicolumn{1}{c}{ 96.48\% } & \multicolumn{1}{c}{ 89.85\% } & \multicolumn{1}{c}{ 84.41\% } & \multicolumn{1}{c}{ 95.55\% } & \multicolumn{1}{c}{ 92.96\% } \\ \hline
\end{tabular}
\caption{D-efficiencies of the GEE-based design~\eqref{eq:geedesign} under various copula models.}
\end{table}
\section{Application to the materials example}
\label{sec:application}
In this section we return to the materials testing example to find and assess designs for comparing six materials in blocks of size two under a variety of modelling assumptions. The measured response is binary, with each material sample either passing or failing a visual check. We label the five novel materials as ``treatments'', with the reference material considered as a control. Marginally, we assume a logistic regression model for the differences between materials:
$$
Y_{ij}\sim \mathrm{Bernoulli}\left\{\eta(\mathbf{x}_{ij};\,\boldsymbol{\beta})\right\};\,\quad \eta(\mathbf{x}_{ij};\,\boldsymbol{\beta}) = \mathrm{expit}\left(\beta_0 + \sum_{l=1}^5\beta_lx_{ijl}\right)\,,
$$
where $\mathrm{expit}(u) = 1/\{1 + \exp(-u)\}$, $Y_{ij}$ is the binary response from the $j$th unit in the $i$th block ($i=1,\ldots,b;\,j=1,2$), $\eta(\mathbf{x}_{ij};\,\boldsymbol{\beta})$ is the associated probability of success, $x_{ijl}$ is an indicator variable taking the value 1 if the $j$th unit in the $i$th block was assigned treatment $l$ ($l=1,\ldots,5$) and 0 otherwise, and $\beta_0,\ldots,\beta_5$ are unknown parameters to be estimated. Here, $\beta_0$ is the logit for the reference material, with $\beta_l$ being the difference in expected response, on the logit scale, between the reference material and the $l$th novel material or treatment.
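A minimal sketch of this marginal model with the indicator coding described above (treatment 0 denoting the control; the parameter vector is one of the illustrative values considered below):

```python
import numpy as np

def expit(u):
    return 1.0 / (1.0 + np.exp(-u))

def success_prob(treatment, beta):
    # treatment 0 encodes the control (reference material); treatments
    # 1..5 each switch on a single indicator x_l.
    x = np.zeros(5)
    if treatment > 0:
        x[treatment - 1] = 1.0
    return expit(beta[0] + beta[1:] @ x)

beta = np.array([0.0, -1.0, 2.0, -3.0, 4.0, -5.0])  # example parameter vector
assert np.isclose(success_prob(0, beta), 0.5)               # control: expit(0)
assert np.isclose(success_prob(2, beta), expit(0.0 + 2.0))  # control logit + beta_2
```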
The choice of copula and the strength of intra-block association makes little difference to the design selected. However, assuming different marginal models and adopting a local or pseudo-Bayesian approach has a strong impact on the designs. Example designs for the Gumbel copula are shown in Figure~\ref{fig:materials}.
With a null marginal model, i.e.\ $\boldsymbol{\beta}^T= (0,0,0,0,0,0)$, for which the response variance is constant, the locally D-optimal design contains all material combinations, excluding those blocks containing replicates of a single treatment. This design would also be optimal under a linear model with constant error variance. For different assumed parameter vectors, for example $\boldsymbol{\beta}^T=(0,-1,2,-3,4,-5)$, the optimal design contains only a few distinct treatment--treatment and treatment--control combinations, with differing weights; here (1,2), (3,4), (4,5), and (5,6) are selected. The pseudo-Bayesian approach, assuming a continuous uniform prior on $[-1,1]$ for each $\beta_l$ ($l=0,\ldots,5$), yields designs with unequal weights spread across all material combinations.
Changing to a continuous uniform prior on the space $[-1,1]\times[-2,0]\times[1,3]\times[-4,-2]\times[3,5]\times[-6,-4]$, so centred on $\boldsymbol{\beta}^T=(0,-1,2,-3,4,-5)$, adjusts the weighting of the support blocks to place more emphasis on comparing treatments 2 and 4, and treatments 3 and 5. These pairs of treatments have differences from the control with the same sign.
\begin{figure}[htb]
\begin{center}
\includegraphics[viewport = 100 250 450 600, clip, scale = .5]{figures/beta0_materials_copula_gumbel_tau_033_Local_TRUE} ~~~~~~
\includegraphics[viewport = 100 250 450 600, clip, scale = .5]{figures/materials_copula_gumbel_tau_033_Local_TRUE}
\includegraphics[viewport = 100 250 450 600, clip, scale = .5]{figures/beta0_materials_copula_gumbel_tau_033_Local_FALSE} ~~~~~~
\includegraphics[viewport = 100 250 450 600, clip, scale = .5]{figures/materials_copula_gumbel_tau_033_Local_FALSE}
\end{center}
\caption{\label{fig:materials} Optimal designs for the materials testing example assuming a Gumbel copula with $\tau_2= 0.33$; rows: local and pseudo-Bayesian; columns: assumed parameters or prior mean of $\boldsymbol{\beta}^T=(0,0,0,0,0,0)$ and $\boldsymbol{\beta}^T=(0,-1,2,-3,4,-5)$, respectively.}
\end{figure}
\section{Discussion}
\label{sec:disc}
The modeling of block effects by copulas seems a natural choice and allows for elegant separation of the block and the marginal effects. Experimental designs for such models are now readily calculable.
The pseudo-Bayesian $D_A$-optimality criterion was added to the \texttt{R} package \texttt{docopulae} version 0.4 (see \citealp{docopulae}) with the functions \texttt{wDsensitivity} and \texttt{wDefficiency}, both relying on a prespecified quadrature scheme for evaluation of the integrals. In this paper we have concentrated on finding designs to estimate the complete parameter vector but the implementation provides flexibility for checking for symmetry, model discrimination, etc., as investigated in \citet{perrone+al_17}.
Our examples are confined to the case $k=2$. Whilst there is no theoretical necessity for this restriction, it is difficult to specify high-dimensional parametric copulas with a sufficient range of dependence; for details see the excellent survey of \cite{nikoloulopoulos_13}. However, work on this issue would go well beyond the scope of this paper. It might also be interesting to contrast our findings with some known analytic results for blocks of size two as, for example, given in \cite{cheng}, where a Gaussian copula is implicitly assumed.
\section*{Acknowledgements}
We are grateful to Keith Warburton and Rob Ashmore from the UK Defence Science and Technology Laboratory for providing details of the materials testing example. W.G. M{\"u}ller would like to acknowledge the hospitality of the Southampton Statistical Sciences Research Institute during his sabbatical, when this research was initiated. He was partially supported by project grants LIT-2017-4-SEE-001 funded by the Upper Austrian Government, and Austrian Science Fund (FWF): I 3903-N32 and D.C. Woods was partially supported by Fellowship EP/J018317/1 from the UK Engineering and Physical Sciences Research Council.
\renewcommand{\baselinestretch}{1}
\normalsize
\bibliographystyle{asa}
\section{Introduction}
Galaxy morphological classification plays a fundamental role in descriptions of the galaxy population in the universe, and in our understanding of galaxy formation and evolution.
Galaxy morphology is related to key physical, evolutionary, and environmental properties, such as system dynamics \citep{djo87, ger01, deb06, fal19, rom12}, the stellar formation history \citep{ken98, bru03, kau03, lov19}, gas and dust content \citep[e.g.,\xspace][]{lia19}, galaxy age \citep{ber10}, and interaction and merging events \citep[e.g.,\xspace][]{rom12}.
Early galaxy classification strategies were based on the visual appearance of the objects, differentiating among spiral, elliptical, lenticular, and irregular galaxy types according to their resolved morphology.
Examples of these strategies are the original classification schemes by \citet{hub26} and \citet{vau59}.
This methodology has reached historic marks during the last decade through the citizen science initiative known as {\it Galaxy Zoo}.
It stands out as the largest effort made to visually classify more than \numprint{900000} Sloan Digital Sky Survey \citep[SDSS;][]{fuk96} galaxies brighter than $r_{\rm SDSS} = 17.7$, with proven reliability \citep{lin11}.
After this milestone, this crowd-sourced astronomy project also included the analysis of datasets from the Kilo-Degree Survey (KiDS) imaging data in the Galaxy and Mass Assembly (GAMA) fields, classifying typical edge-on galaxies at $z<0.15$ \citep{hol19}, and the quantitative visual classification of approximately \numprint{48000} galaxies up to $z\sim3$ in three Hubble Space Telescope (HST) fields of the Cosmic Assembly Near-infrared Deep Extragalactic Legacy Survey \citep[CANDELS;][]{sim17}.
Using the visual classification approach, the morphology and size of luminous, massive galaxies at $0.3 < z < 0.7$ targeted by the Baryon Oscillation Spectroscopic Survey \citep[BOSS;][]{daw13} of SDSS-III were also determined \citep{mas11} using HST and Cosmic Evolution Survey\footnote{\url{http://cosmos.astro.caltech.edu}} \citep[COSMOS;][]{sco07} data.
However, the number of discovered galaxies has increased dramatically since the introduction of digital surveys dedicated to probing larger and deeper volumes of the universe, and the availability of larger telescopes and sophisticated instruments has made visual classification unfeasible: most galaxies are barely resolved, making identification of their morphological type very difficult.
This issue will be even more critical in the near future when the next generation of large surveys such as the \emph{Large Synoptic Survey Telescope} \citep{tys02} or the results from {\it Euclid} mission \citep{lau11} produce petabytes of information and trigger the need for time-domain astronomy \citep{hlo19} far exceeding the capacity of available human resources to manage this information.
For this reason, the automated classification of galaxies has become an intense area of research in modern astronomy.
Previous research into automated galaxy-classification algorithms has focused on colors, shape, and morphological parameters related to galaxy light distribution, such as concentration and asymmetry \citep[e.g.,\xspace][]{abr94, ber00, con03, con06, pov09, pov13, pov15, den13}.
Joint automated and visual classification procedures have been implemented in extragalactic surveys such as for example COSMOS \citep{cas07, zam07} and GAMA \citep{alp15}.
Another approach involves the fitting of spectral energy distributions (SEDs) using galaxy templates \citep{ilb09}.
In a complementary fashion, \citet{str01} investigated the dichotomous classification in early- and late-type (ET and LT) galaxies.
For these authors, the ET group includes the E, S0, and Sa morphological types, while the LT group comprises Sb, Sc, and Irr galaxies.
Furthermore, exploiting the well-known tendency of LT galaxies to be bluer than ET galaxies, \citeauthor{str01} propose the \ur color to separate these galaxy types.
S\'{e}rsic and concentration indexes have also been used, alone or in combination with the \ur color, to separate ET from LT galaxies \citep[e.g.,\xspace][]{con03, kel12, den13, vik15}.
A far more complicated and expensive classification, in terms of computational and observational resources, consists in fitting a set of either empirical or modeled SED templates to the galaxy continuum \citep[e.g.,\xspace][]{col80, kin96}.
Currently, there are some public codes that are able to perform such template-based classifications \citep[e.g.,\xspace LePhare:][]{arn99, ilb06}.
Classification can be addressed in machine learning through supervised learning techniques, which consist in training a function that maps inputs to outputs learning from input--output pairs, and using this function to assign new observations in two or more predefined categories.
Supervised learning techniques include decision trees \citep{bar20}, random forests \citep{mil17}, linear discriminant analysis (LDA) \citep{mur87}, support vector machines \citep{hue08}, Bayesian classifiers \citep{hen11}, and neural networks \citep{bal04}, among others.
Machine Learning algorithms are increasingly used for classification in large astronomical databases \citep[e.g.,\xspace][]{abo18}.
In particular, LDA is a common classifying method used in statistics, pattern recognition, and machine learning.
Linear discriminant analysis classifiers attempt to find linear boundaries that best separate the data.
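A two-class LDA classifier of the kind just described reduces to a few lines of linear algebra; the following numpy-only sketch, with synthetic, purely illustrative ``ET''/``LT'' feature clouds, finds the linear boundary from the class means and pooled within-class covariance.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic clouds in a 2D feature space (e.g. a colour and a shape
# parameter); the values are illustrative only, not real survey data.
X0 = rng.normal([2.4, 4.0], 0.4, size=(200, 2))   # class 0 ("ET")
X1 = rng.normal([1.2, 1.0], 0.4, size=(200, 2))   # class 1 ("LT")

mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
# Pooled within-class covariance -> a linear (not quadratic) boundary.
S = (np.cov(X0.T) * (len(X0) - 1) + np.cov(X1.T) * (len(X1) - 1)) \
    / (len(X0) + len(X1) - 2)
w = np.linalg.solve(S, mu1 - mu0)                 # discriminant direction
c = w @ (mu0 + mu1) / 2.0                         # threshold (equal priors)

def classify(x):
    return int(w @ x > c)

assert classify([2.4, 4.0]) == 0 and classify([1.2, 1.0]) == 1
```

Projecting onto the single direction `w` and thresholding is exactly the ``linear boundary'' interpretation of LDA for the dichotomous ET/LT problem.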
Recently, LDA has been used for galaxy classification in spiral and elliptical morphological types \citep{fer15}, classification of Hickson's compact groups of galaxies \citep{abd19}, and galaxy merger identification \citep{nev19}.
In recent years, neural networks have become very popular in different research areas because of their ability to perform highly accurate classification, regression, and series analyses.
A typical neural network is made up of a number of hidden layers, each with a certain quantity of neurons that perform tensor operations.
There are several network types which are oriented to solve different issues \citep[a brief explanation of different networks can be found in][]{bar19}.
For example, \citet{bus18} used a one-dimensional convolutional neural network (CNN) for classification and redshift estimation of quasar spectra extracted from the BOSS.
Much of the recent research has focused on two-dimensional CNN classification of galaxy images \citep[e.g.,\xspace][]{ser96, hue15, die15, dom18, per18, wal20}.
In the future, neural networks will probably gain more importance and become the primary technique for classification of astronomical images.
However, there are two drawbacks that limit the use of CNN in astronomical research at present.
The first is the network bandwidth, which prevents the download of large amounts of heavy images obtained in remote observatories.
The second drawback is the computational and hardware resources needed to train a two-dimensional CNN with tens of thousands of images.
Dense (or fully connected) neural networks (\dnn) are used to solve general classification problems applied to tabulated data.
In astronomy, DNNs\xspace have been applied to morphological type classification in low-redshift galaxies.
Thus, \citet{sto92} designed a simple \dnn architecture for morphological classification of 5217 galaxies drawn from the ESO-LV catalog \citep{lau89} using 13 parameters (most of them geometrical) in five different classes, obtaining an accuracy of 56\%.
\citet{nai95} used the same architecture for 830 bright galaxies ($B \leq 17$) and 24 parameters, reducing the parameter space dimension through principal components analysis.
\citet{ser93} used \dnn autoencoders for unsupervised classification of galaxies into three major classes: Sa+Sb, Sc+Sd, and SO+E.
\citet{sre18} applied a \dnn to a sample of 7528 galaxies at redshifts $z < 0.06$ extracted from the Galaxy And Mass Assembly survey (GAMA\footnote{\url{http://www.gama-survey.org}}) achieving an accuracy of 89.8\% for spheroid- versus disk-dominated classification.
These earlier works showed that DNNs\xspace are capable of performing accurate classification tasks on processed data such as photometry, colors, and shape parameters of low-redshift galaxies \citep{sto92,nai95,bal04}.
However, compared with image-oriented CNNs, little attention has been paid recently to the use of DNNs\xspace for galaxy classification, even if these networks do not require the large quantity of resources used by the CNN.
Moreover, both neural network software development \citep[e.g.,\xspace Tensorflow,][]{aba16} and hardware computation power (both central and graphics processing units) have increased dramatically, boosting the capabilities of \dnn applications.
In this paper we extend the use of DNNs\xspace to the morphological classification of galaxies up to redshifts $z \leq 2$.
We compare the performance of different galaxy classification techniques applied to a sample of galaxies extracted from the photometric OTELO database \citep{bon19} with a fitted S\'{e}rsic profile \citep{nad20}.
These techniques are (1) the \citet{str01} \ur color algorithm; (2) the LDA machine learning algorithm, which includes both the \ur color and a shape parameter, either the S\'{e}rsic index or the concentration index \citep{kel12}; and (3) a \dnn that uses optical and near-infrared photometry, and shape parameter for objects available in both OTELO and COSMOS catalogs.
We find that a simple, easily trainable \dnn yields a highly accurate classification for ET and LT OTELO galaxies.
Moreover, we apply our \dnn architecture to a set of tabulated COSMOS data with some differences in the photometric bands measured with respect to OTELO, and find that our architecture also performs accurate classification of COSMOS galaxies.
Finally, we use the same \dnn architecture but substituting the S\'{e}rsic index with the concentration index \citep{shi01} for both OTELO and COSMOS datasets.
This paper is organized as follows.
Section~\ref{sec:method} describes the different techniques used to classify galaxies.
In Section~\ref{sec:results} we show the results and compare the different techniques.
Finally, in Section~\ref{sec:conclus} we present our conclusions.
\section{Methodology}\label{sec:method}
The current investigation involves the automatic classification of galaxies into two dichotomous groups, namely ET and LT galaxies, using both photometric measurements and a factor that depends on the shape of the galaxy's light distribution.
Machine learning algorithms for automatic classification parse data and learn how to assign subjects to different classes.
These algorithms require both training and test datasets that consist of labeled data.
The training dataset is used to fit the model parameters, and the test dataset to provide an unbiased assessment of the model performance.
If the algorithm requires tuning the model hyperparameters, such as the number of layers and hidden units in a \dnn architecture, a third labeled dataset called the validation dataset is required to evaluate different model trials (the test dataset must be evaluated only by the final model).
Once the final model architecture is attained, it is trained on the union of the training and validation datasets, and then evaluated using the test dataset.
In this section we present our samples of galaxies extracted from OTELO and COSMOS.
We use the observed photometry and colors; that is, neither $k$-corrections nor extinction corrections were applied.
In order to maximize the sample size while keeping a well-sampled set in redshift, data have been limited in photometric redshift ($z_{phot} \leq \zlim$) but not in flux, thus no cosmological inferences can be performed from our sample.
However, at the end of Sect.~\ref{sec:results} we present a brief analysis of the results obtained for flux-limited samples.
We describe the photometry and the shape factors of these data.
We then present the implementation of the different classification methodologies used: the \ur color, LDA, and \dnn.
Finally, we present the bootstrap procedure that we use to compare the results obtained with these methodologies.
\subsection{OTELO samples}
OTELO is a very deep blind survey performed with the red tunable filter OSIRIS instrument of the 10.4\,m Gran Telescopio Canarias \citep{bon19}.
OTELO data consist of images obtained in 36 adjacent narrow bands (FWHM 12\,\AA) covering a window of 230\,\AA\ around $\lambda = 9175$\,\AA.
The catalog includes ancillary data ranging from X-rays to far infrared.
Point spread function-model photometry and library templates were used for separating stars, AGNs, and galaxies.
The OTELO catalog comprises \numprint{11237} galaxies.
\citet{nad20} matched OTELO with the output from GALAPAGOS2 \citep{hau07, hau13} over high-resolution HST images.
Not all the OTELO galaxies were detected by GALAPAGOS2, which returned a total of 8812 sources.
\citet{nad20} account for the automated detection of multiple matches produced by more than one source lying inside the OTELO Kron radius in a high-resolution F814W band image.
These latter authors attribute these multiple matches to close companions (gravitationally bound or in projection), mergers, or resolved parts of the host galaxy.
In any case, sources with multiple matches were excluded from our analysis because they could affect low-resolution photometry.
Finally, we included further constraints to extract our OTELO samples (see below).
\subsubsection{S\'{e}rsic index and photometry sample}
OTELO uses LePhare templates to fit the SED of galaxies to obtain photometric morphological type classification and redshift estimates.
We used this morphological classification to assign the galaxies to ET and LT classes.
The best model fitting is recorded under the \texttt{MOD\_BEST\_deepN} numerical coded entries in the OTELO catalog \citep{bon19}.
The ET class includes galaxies coded as `1' in the OTELO catalog, which were best fitted by the E/S0 template from \citet{col80}.
The LT class comprises OTELO galaxies coded from `2' to `10', which were best fitted by different late-type galaxy templates, namely Sbc, Scd, and Irr \citep{col80}, and starburst-class templates from SB1 to SB6 \citep{kin96}.
\citet{bon19} estimate that the fraction of inaccurate SED fittings for the galaxies contained in the OTELO catalog may amount to up to $\sim 4\%$.
Therefore, our results may be affected if there are ET galaxies miscoded differently from `1' in OTELO, or any of the LT galaxies miscoded as `1'.
This could affect, for example, early-type spirals such as Sa galaxies, which are not explicitly included in the OTELO template set.
However, the UV SED for ellipticals and S0 galaxies is completely different from Sa and other LT galaxies.
The OTELO catalog also includes GALEX-UV data that allow us to identify ET galaxies even in the local universe.
Thus, we conclude that recoding the OTELO classification in our galaxy sample as ET and LT classes yields a negligible number of misclassified objects (certainly much less than the OTELO fraction of $\sim4\%$) and does not affect our results.
The S\'{e}rsic profile is a parametric relationship that expresses the intensity of a galaxy as a function of the distance from its center:
\begin{equation}\label{eq:sersic}
I(R) = I_e e^{-b \big[ \big( \frac{R}{R_e} \big)^{1/n} -1 \big]},
\end{equation}
\noindent
where $I$ is the intensity at a distance $R$ from the galaxy center, $R_e$ is the half-light radius, $I_e$ is the intensity at radius $R_e$, $b \ (\sim 2n-1/3)$ is a scale factor, and $n$ is the S\'{e}rsic index.
S\'{e}rsic profiles have been employed for galaxy classification \citep[e.g.,\xspace][]{kel12, vik15}.
This index provides a geometrical description of the galaxy concentration; for a S\'{e}rsic index $n = 4$ we obtain the de Vaucouleurs profile typical of elliptical galaxies, while setting $n = 1$ gives the exponential profile describing spiral galaxies.
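A direct implementation of Eq.~(\ref{eq:sersic}), using the approximate scale factor $b \approx 2n - 1/3$, makes the limiting cases easy to check.

```python
import numpy as np

def sersic(R, I_e, R_e, n):
    # Sersic profile with the approximate scale factor b ~ 2n - 1/3.
    b = 2.0 * n - 1.0 / 3.0
    return I_e * np.exp(-b * ((R / R_e) ** (1.0 / n) - 1.0))

# By construction the profile passes through I_e at the half-light radius,
# for any Sersic index (here n = 4, the de Vaucouleurs case):
assert np.isclose(sersic(1.0, I_e=1.0, R_e=1.0, n=4.0), 1.0)

# n = 1 reduces to an exponential disc profile I(R) = I_0 * exp(-R / h):
I0 = sersic(0.0, 1.0, 1.0, 1.0)
h = 1.0 / (2.0 - 1.0 / 3.0)          # scale length implied by b = 5/3, R_e = 1
assert np.isclose(sersic(2.0, 1.0, 1.0, 1.0), I0 * np.exp(-2.0 / h))
```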
Our OTELO S\'{e}rsic index and photometry (OTELO \ps) sample consists of \ngal galaxies at redshifts $z \leq \zlim$ extracted from the OTELO catalog (listed under the \texttt{Z\_BEST\_deepN} code).
The sample includes $ugriz$ optical photometry from the Canada-France-Hawaii Telescope Legacy Survey\footnote{\url{http://www.cfht.hawaii.edu/Science/CFHTLS/}} (CFHTLS), $JHKs$ near-infrared photometry from the WIRcam Deep Survey\footnote{\url{https://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/cfht/wirds.html}} (WIRDS), and S\'{e}rsic index estimates obtained using GALAPAGOS2/GALFIT \citep{hau13, pen02, pen10} on the HST-ACS publicly available data in the F814W band.
The sample comprises only galaxies with S\'{e}rsic indexes between $n = 0.22$ and $n = 7.9$; S\'{e}rsic indexes outside this range are not reliable because of an artificial limit imposed by the S\'{e}rsic-profile-fitting algorithm in GALAPAGOS2 \citep{hau07}.
In addition, the sample does not include galaxies whose S\'{e}rsic index values are less than three times their estimated errors.
For a detailed description of the S\'{e}rsic-profile-fitting process we refer to \citet{nad20}.
Figure \ref{fig:ps} shows the sample distributions of magnitudes in the $r$ band and photometric redshifts extracted from the OTELO catalog.
We note that the sample is not flux limited, and therefore it is not complete within the volume defined by the redshift limit $z_{phot} \leq \zlim$ (see the discussion about magnitude-limited samples below).
The redshift distribution presents concentrations at redshifts 0.04, 0.11, 0.34, 0.90, and 1.72 superimposed onto a bell-like distribution with a maximum around $z_{phot} \approx 0.8$ and a strong decay from $z_{phot} \approx 1.3$.
The photometric data are incomplete, which affects the available number of galaxies for those classification procedures that cannot effectively manage missing data.
The sample is randomly divided into a training set (70\% of the available galaxies), used to train the algorithm, and a test set (30\%), used to yield an unbiased estimate of the efficiency of the model.
Choosing the proportions of training and test sample sizes depends on a balance between the model performance and the variance in the estimates of the statistical parameters (in our case the accuracy, \sensitivity and \specificity, as explained below).
Rule-of-thumb proportions often used in machine learning are 90:10 (i.e., 90\% training, 10\% testing), 80:20 (inspired by the Pareto principle), and 70:30 (our choice).
In our case, the 70:30 proportion is justified because it fulfills the \emph{large enough sample condition} (another rule of thumb) that the sample size must be at least 30 to ensure that the conditions of the central limit theorem are met.
Thus, the number of expected ET galaxies in the \ps test sample is: $N_{gal} p_{et} p_{test} \approx \FPeval\result{round(\soln * 0.054 * 0.3, 0)}\numprint{\result}$, where $N_{gal} = \soln$ is the sample size, $p_{et} = \FPeval\result{round(\net / \soln, 3)}\numprint{\result}$ is the proportion of ET galaxies in the \ps sample, and $p_{test} = 0.3$ is the proportion of galaxies in the test sample.
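The random 70:30 split described above can be sketched with the Python standard library alone; this is a generic illustration rather than the pipeline code (the seed and record list are placeholders).

```python
import random

def train_test_split(records, test_fraction=0.3, seed=42):
    """Randomly split a list of records into (train, test) sets."""
    rng = random.Random(seed)
    shuffled = records[:]              # copy; the input list is untouched
    rng.shuffle(shuffled)
    n_test = round(len(shuffled) * test_fraction)
    return shuffled[n_test:], shuffled[:n_test]

train, test = train_test_split(list(range(1000)))
# 70:30 proportions: 700 training records and 300 test records
```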
\subsubsection{Concentration and photometry sample}
The concentration is widely used to differentiate ET from LT galaxies.
Concentration provides a direct measurement of the intensity distribution in the image of a galaxy.
For that reason, the concentration is easier to obtain than the S\'{e}rsic index, which requires fitting several parameters to the S\'{e}rsic profile.
Here we use the definition \citep{ber00,sca07}:
\begin{equation}
C = 5 \log_{10} \! \left( \frac{r_{80}}{r_{20}} \right),
\end{equation}
where $r_{80}$ and $r_{20}$ are the 80\% and 20\% light Petrosian radii, respectively, obtained from the HST F814W band images.
We chose the F814W band concentration for compatibility with COSMOS \citep{sca07}.
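Once the two Petrosian radii are measured, the concentration is a one-line computation; a minimal sketch (the radii values are illustrative):

```python
import math

def concentration(r80, r20):
    """Concentration index C = 5 * log10(r80 / r20) from the 80% and
    20% light Petrosian radii."""
    return 5.0 * math.log10(r80 / r20)

# A galaxy whose 80% light radius is twice its 20% light radius has
# C = 5 * log10(2), roughly 1.5; identical radii would give C = 0.
```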
The data were limited to a redshift $z \leq \zlim$.
The final OTELO concentration and photometry (OTELO \pc) sample consists of \ngalc galaxies, with \netc\ classified as ET and \nltc\ as LT.
Figure \ref{fig:pc} shows the sample distributions of magnitudes in the $r$ band and photometric redshifts, which are similar to those of the \ps sample discussed above.
The \pc sample was also divided into two subsamples: a training subsample containing \ntrainc (70\%) of the objects, and a test subsample with \ntestc (30\%) of the galaxies.
\subsection{COSMOS samples}
We expect that our \dnn architecture can be applied to galaxy classification in other databases.
Therefore, we checked its reliability using two COSMOS enhanced data products: the \emph{Zurich Structure \& Morphology Catalog v1.0} \citep[\zurich,][]{sca07, sar07} and the \emph{COSMOS photometric redshifts v1.5} \citep[\photoz,][]{ilb09}.
Those catalogs have \numprint{131532} and \numprint{385065} entries, respectively.
We merged both databases, obtaining \numprint{128442} matches, from which we chose a sample of galaxies with S\'{e}rsic index estimates in the range $0.2 < n < 8.8$, and another sample with the same concentration radii used in OTELO.
Both samples are limited to redshifts $z<2$ and include photometry in the CFHT~$u$, Subaru~$BVgriz$, UKIRT~$J$ and CFHT~$K$ bands, along with classification entries.
Thus, the galaxy records included all the available data from the \photoz bands except the CFHT $i^\prime$ magnitudes (we chose instead the Subaru $i$ band, also included in the catalog).
The resulting COSMOS S\'{e}rsic index and photometry (COSMOS \ps) sample consists of \ngalC galaxies, \numprint{\nltC} of which had been classified as LT and \netC\ as ET.
With such a large number of galaxies, we can limit the training set to \ntrainC\ galaxies (a fraction of approximately \FPeval\result{round(100*5000/\solnC,0)}\numprint{\result}\% of the sample), and raise the fraction of the test set to \ntestC galaxies (approximately \FPeval\result{round(100*\soltestC/\solnC,0)}\numprint{\result}\% of the sample) in order to reduce the variance of the results.
Analogously, the COSMOS concentration and photometry (COSMOS \pc) sample consists of \ngalCc galaxies, distributed in \numprint{\nltCc}\ LT and \netCc\ ET.
We set the corresponding training and testing sets to \ntrainCc and \ntestCc galaxies, respectively.
\begin{table}[!t]
\caption[]{One color separation.}\label{tab:colors}
{\centering
\begin{tabular}{lr@{$\,\pm\,$}lrr@{$\,\pm\,$}l}
\hline
\noalign{\smallskip}
Color & \multicolumn{2}{c}{Accuracy} & N$_{test}$ & \multicolumn{2}{c}{Separation} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
$u-J$ & 0.96 & 0.01 & 518 & \phantom{0}4.1 & 0.2 \\
$u-i$ & 0.96 & 0.01 & 536 & 2.8 & 0.3 \\
$u-r$ & 0.96 & 0.02 & 536 & 2.0 & 0.2 \\
$u-H$ & 0.95 & 0.01 & 510 & \multicolumn{2}{c}{Baseline} \\
$g-J$ & 0.94 & 0.02 & 527 & \multicolumn{2}{c}{Baseline} \\
$g-i$ & 0.94 & 0.01 & 548 & \multicolumn{2}{c}{Baseline} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\par }
\footnotesize{\textbf{Notes:}
Column 1: tested color.
Column 2: mean accuracy (proportion of galaxies correctly classified) on 100 random test sets; we note that the colors are sorted from the highest to the lowest accuracy score.
Column 3: sample size included in each test set; differences in the sample size between different colors are due to missing data.
Column 4: color discriminant value or \emph{Baseline} if the accuracy score is not statistically different from the baseline classification.}
\end{table}
\subsection{Classification procedures}
We used a classification baseline and three classification methods for the OTELO samples.
The baseline consists of assigning all the galaxies to the most frequent morphological group.
Any classification by a more sophisticated method should improve the baseline accuracy.
For the COSMOS samples we only used the classification baseline and the \dnn architecture developed for the OTELO samples, as we were interested only in probing this architecture.
\subsubsection{Color classification}
The first classification method uses a color discriminant.
After testing several colors, we focus on the \ur color as proposed by \citet{str01}.
These authors use a simple color discriminant: any galaxy with a \ur color redder than \urplane is classified as ET, and as LT if \(\ur < \urplane.\)
This method was applied only to both \ps and \pc samples drawn from OTELO.
We also investigated other possible color discriminants that will be presented later.
Data records with missing \ur colors were disregarded, reducing the \ps sample to \nurgal galaxies and the \pc sample to \nurgalc.
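The color discriminant reduces to a single threshold comparison; the sketch below keeps the threshold as a parameter (its value in this paper is stored in the \urplane macro) and mirrors the treatment of missing colors.

```python
def color_class(u_minus_r, threshold):
    """Classify a galaxy as ET if its u-r color is redder than the
    threshold, and as LT otherwise; missing colors are disregarded."""
    if u_minus_r is None:
        return None
    return "ET" if u_minus_r > threshold else "LT"

# With an illustrative threshold of 2.2, a galaxy with u-r = 3.0 is ET,
# one with u-r = 1.5 is LT, and a record missing u-r is skipped.
```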
\subsubsection{Linear discriminant analysis}
The second classification method is LDA.
The aim of LDA is to find a linear combination of features which separates different classes of objects.
This combination can be interpreted as the normal to a hyperplane that separates the classes in the input feature space.
We note that the \citet{str01} \ur color separation method can be regarded as a LDA which defines the \(\ur = \urplane\) plane normal to $u-g$ and $g-r$ vectors.
As in the previous method, LDA was only applied to \ps and \pc OTELO samples, and data records with missing \ur colors were disregarded.
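For reference, a two-class Fisher discriminant can be written in a few lines of numpy; this is a generic sketch with toy data standing in for the $(\ur, n)$ features, not the implementation used in this paper.

```python
import numpy as np

def fit_lda(X0, X1):
    """Two-class Fisher LDA: returns (w, c) such that a sample x is
    assigned to class 1 when w @ x > c."""
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # pooled within-class scatter matrix
    Sw = (np.cov(X0, rowvar=False) * (len(X0) - 1)
          + np.cov(X1, rowvar=False) * (len(X1) - 1))
    w = np.linalg.solve(Sw, mu1 - mu0)   # discriminant direction
    c = w @ (mu0 + mu1) / 2.0            # threshold at the midpoint
    return w, c

# Toy 2D clouds: a blue, disky group and a red, concentrated group.
rng = np.random.default_rng(0)
X_lt = rng.normal([1.2, 1.0], 0.2, size=(200, 2))
X_et = rng.normal([2.8, 4.0], 0.2, size=(200, 2))
w, c = fit_lda(X_lt, X_et)
pred_et = X_et @ w > c   # True for nearly every "ET" sample
```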
\begin{table}[!t]
\caption{Color \ur confusion matrix.}\label{tab:urconf}
\centering
\begin{tabular}{crrrr}
\hline
\noalign{\smallskip}
& & \multicolumn{2}{c}{OTELO} \\
\cline{3-4}
\noalign{\smallskip}
& & ET & LT & \textbf{Total} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multirow{2}{*}{Color} & ET & 25 & 20 & \textbf{45} \\
& LT & 2 & 489 & \textbf{491} \\
& \textbf{Total} & \textbf{27} & \textbf{509} & \textbf{536} \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
Two problems with machine learning techniques are the management of missing data and the curse of dimensionality.
Missing data (e.g.,\xspace a missing photometric band) usually result in objects with incomplete records being removed from the dataset.
The curse of dimensionality appears because increasing the number of variables in a classification scheme means that the volume of the space increases very quickly and therefore the data become sparse and difficult to group.
The curse of dimensionality can be mitigated by dimensionality reduction techniques such as principal component analysis (PCA), but dimensionality reduction may introduce unwanted effects \citep[data loss, nonlinear relations between variables, and the number of components to be kept]{car01,shl14} that tend to blur differences between the groups.
Alternative methodologies to deal with these problems are under development, for example by \citet{cai18} who introduce an adaptive classifier to cope with both missing data and the curse of dimensionality for high-dimensional LDA.
To avoid these problems, we chose to limit our LDA model to the S\'{e}rsic index and the single, highly discriminant \ur color, a combination already addressed in the galaxy classification literature \citep[e.g.,\xspace][]{kel12, vik15}.
\begin{table*}[t]
\caption{Comparison of classification methods for \ps samples.}\label{tab:compar}
\centering
\begin{tabular}{llllllllllr}
\hline
\noalign{\smallskip}
\multirow{2}{*}{Database} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{Accuracy$\,^a$} & & \multicolumn{2}{c}{\sensitivity$\!^a$} & & \multicolumn{2}{c}{\specificity$\!^a$} & \multicolumn{1}{c}{Sample} \\
\cline{3-4}
\cline{6-7}
\cline{9-10}
\noalign{\smallskip}
& & Mean & Error & & Mean & Error & & Mean & Error & \multicolumn{1}{c}{size} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
OTELO & Baseline & \FPupn\result{\plt{} 100 swap / 3 round}\result & 0.006 && 0 & ~\dots && 1 & ~\dots & \ngal \\
OTELO & \ur color & 0.96 & 0.02 && 0.8 & 0.3 && 0.97 & 0.02 & 536 \\
OTELO & LDA & 0.970 & 0.008 && 0.80 & 0.08 && 0.979 & 0.007 & 536 \\
OTELO & \dnn & 0.985 & 0.007 && 0.84 & 0.09 && 0.993 & 0.006 & 551 \\
\noalign{\smallskip}
COSMOS & Baseline & \FPupn\result{\pltC{} 100 swap / 3 round}\result & 0.002 && 0 & ~\dots && 1 & ~\dots & \ngalC \\
COSMOS & \dnn & 0.967 & 0.002 && 0.91 & 0.03 && 0.979 & 0.005 & \FPeval\result{round(\solnC-5000, 0)}\numprint{\result} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{4}{l}{\footnotesize{$^a$ On 100 bootstrap runs.}}
\end{tabular}
\end{table*}
\subsubsection{Deep neural network}
The third method of classification involves a \dnn.
The sample was analyzed using the Keras library for deep learning.
Keras is a high-level neural network application programming interface (API) written in Python under GitHub license.
Currently, Keras is available for both Python and R computer languages \citep{cho17a, cho17b}.
In astronomy, Keras has already been used for image classification of galaxy morphologies \citep{per18, dom18} and spectral classification and redshift estimates of quasars \citep{bus18}, and is included in the astroNN package.\footnote{\url{https://astronn.readthedocs.io/en/latest/}}
As in the other methods, we use a training and a test set to teach and check the \dnn model,
respectively.
The difference from the other methods is that their discriminant functions have a predetermined structure, whereas the \dnn architecture must be tuned on the fly.
To achieve this, we split the training set of the OTELO samples into (i) a \teach (80\% of the original training set), and (ii) a validation set (the remaining 20\%).
Compared with OTELO, the COSMOS samples consist of many more galaxies.
Therefore, we limited the training sets to \ntrainC galaxies for the COSMOS \ps sample and \ntrainCc for the \pc sample, conserving the respective \teach and validation set proportions.
We use the \teach to tune the \dnn model, and the validation set to check the loss and accuracy functions that describe the \dnn classification capability.
Once a satisfactory result is achieved, the \dnn architecture has been optimized to classify the validation set, but its performance may differ for other datasets.
To generalize the result, we use the whole original training set to retrain the tuned \dnn model, and we then classify the test set galaxies.
Therefore, the test set galaxies were used neither to train nor to fine tune the \dnn model, but only to evaluate the \dnn performance.
An appealing feature of DNNs\xspace is the ease with which they deal with missing data.
In practice, it is enough to substitute the missing values of each normalized variable with zeros, which cancels their products with the network weights.
The \dnn then treats missing values as if they carried no useful information and ignores them.
Of course, it is better to have no missing values, but DNNs\xspace allow the user to handle them without dropping data entries or estimating the missing values from other variables.
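The zero-substitution scheme can be sketched as follows; this is a generic illustration of the idea, not the exact preprocessing code of the pipeline.

```python
import numpy as np

def standardize_with_missing(X):
    """Z-score each column ignoring NaNs, then replace the NaNs by zero
    so they contribute nothing to the first layer's weighted sums."""
    mu = np.nanmean(X, axis=0)
    sigma = np.nanstd(X, axis=0)
    return np.nan_to_num((X - mu) / sigma, nan=0.0)

X = np.array([[1.0, 10.0],
              [2.0, np.nan],   # missing photometric band
              [3.0, 30.0]])
Z = standardize_with_missing(X)
# the missing entry becomes exactly 0 after normalization
```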
\citet{bar19} provides a succinct description of DNNs\xspace, and a complete explanation of Keras elements can be found in \citet{cho17a} and \citet{cho17b}.
Tuning a \dnn is a trial-and-error procedure aimed at finding an appropriate architecture and setup.
As the numbers of input variables, units, and layers increase, the \dnn tends to overfit if the training set is small.
For this reason, we kept our \dnn model as simple as possible whilst obtaining a high-accuracy classification.
We use standard layers and functions for our
model that are already available from Keras.
For the interested reader, our \dnn architecture consists of two dense layers of 64 units each with \emph{rectified linear unit} (ReLU) activations, and an output dense layer of a single unit with \emph{sigmoid} activation.
The model was compiled using an iterative gradient-descent RMSprop optimizer, a binary cross-entropy loss function, and accuracy metrics.
We kept the default values for the Keras RMSprop optimizer, i.e., a learning rate of 0.001 and a weight parameter for previous batches of $\rho = 0.9$.
These values are appropriate for most \dnn problems, and moderate changes do not affect the results.
We set the number of training epochs to avoid overfitting, and the training batch sizes to appropriate values for the number of records in the DNN training sample in each case.
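Schematically, the computation performed by this architecture is the one below; the actual model is built from the corresponding Keras layers, and the weights here are random placeholders rather than trained values.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, params):
    """Two dense layers of 64 ReLU units and one sigmoid output unit."""
    W1, b1, W2, b2, W3, b3 = params
    h1 = relu(x @ W1 + b1)
    h2 = relu(h1 @ W2 + b2)
    return sigmoid(h2 @ W3 + b3)   # classification likelihood in [0, 1]

# Nine inputs: the r magnitude, seven colors relative to r, and the
# Sersic index; the weights below are untrained placeholders.
rng = np.random.default_rng(1)
params = (0.1 * rng.normal(size=(9, 64)), np.zeros(64),
          0.1 * rng.normal(size=(64, 64)), np.zeros(64),
          0.1 * rng.normal(size=(64, 1)), np.zeros(1))
p = forward(rng.normal(size=(5, 9)), params)   # five dummy galaxies
```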
\subsubsection{Bootstrap}
We used bootstrap \citep[e.g.,\xspace][]{efr93, chi18} to obtain reliable statistics that describe the performance of each classification technique.
Bootstrapping is a widely used nonparametric methodology for evaluating the distribution of a statistic using random resampling with replacement.
Thus, we calculated the classification accuracy and other classification statistics through 100 runs for the \ur color, LDA, and DNN methods.
For each run, we also divided the bootstrap random sample in a training set (70\%) and a test set (30\%).
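The resampling scheme can be sketched as follows, here with a trivial majority-class ("baseline") classifier standing in for the real methods; all names and data are placeholders.

```python
import random

def bootstrap_accuracy(records, fit, n_runs=100, seed=7):
    """Accuracy distribution of a classifier over bootstrap resamples,
    each resample split 70:30 into training and test sets."""
    rng = random.Random(seed)
    accuracies = []
    for _ in range(n_runs):
        sample = rng.choices(records, k=len(records))  # with replacement
        n_train = round(0.7 * len(sample))
        train, test = sample[:n_train], sample[n_train:]
        predict = fit(train)
        hits = sum(predict(x) == y for x, y in test)
        accuracies.append(hits / len(test))
    return accuracies

def baseline_fit(train):
    """Majority-class classifier: predict the most frequent label."""
    labels = [y for _, y in train]
    majority = max(set(labels), key=labels.count)
    return lambda x: majority

data = [(i, "LT") for i in range(95)] + [(i, "ET") for i in range(5)]
accs = bootstrap_accuracy(data, baseline_fit)
```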
\section{Results}\label{sec:results}
To determine a minimal set of attributes that are able to classify between ET and LT galaxies, we focus on two directly observable characteristics: photometry and shape.
Results obtained in previous studies were limited to nearby galaxies.
Thus, \citet{str01} used photometry from \numprint{147920} SDSS galaxies with magnitude $g^* \leq 21$ and redshifts $z \lesssim 0.4$ to build a binary classification model based on the $\ur = \urplane$ discriminant color, which they tested on a sample of 287 galaxies visually labeled as ET or LT, recovering 94 out of 117 (\FPeval\result{round(94 / 117 * 100, 0)}\result\%) ET, and 112 out of 170 (\FPeval\result{round(112 / 170 * 100, 0)}\result\%) LT galaxies.
\citet{den13} used a sample of \numprint{233669} SDSS-III DR8 galaxies with redshifts $0.01 < z < 0.25$ and report a concentration index discriminant to separate ET from LT galaxies in the $r$-band that achieved an accuracy of \FPupn\result{233669 46768 178557 + /} \FPupn\erresult{233669 \result{} \result{} 1 - / / 2 swap root}\FPeval\result{round(\result{} * 100, 2)}\result\ $\pm$ \FPeval\erresult{round(\erresult{} * 100, 2)}\erresult.
\citet{vik15} used both the \ur color and the S\'{e}rsic index in the $r$-band to classify a sample of 142 nearby ($z < 0.01$) galaxies, dividing the \ur versus $n_r$ plane in quadrants; most ET galaxies were located at the $\ur > 2.3$ and $n_r >2.5$ quadrant (28 out of 34, i.e., \FPeval\result{round(28 / 34 * 100, 0)}\result\% ETs were correctly classified).
\begin{table}[t]
\caption{LDA confusion matrix}\label{tab:ldaconf}
\centering
\begin{tabular}{llrrr}
\hline
\noalign{\smallskip}
& & \multicolumn{2}{c}{OTELO} \\
\cline{3-4}
\noalign{\smallskip}
& & ET & LT & \textbf{Total} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multirow{2}{*}{LDA} & ET & 20 & 8 & \textbf{28} \\
& LT & 7 & 501 & \textbf{508} \\
& \textbf{Total} & \textbf{27} & \textbf{509} & \textbf{536}\\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
\subsection{S\'{e}rsic index and photometry\xspace samples}
\subsubsection{Baseline classification}
The baseline classification is the simplest classification method.
It assigns all the samples to the most frequent class.
This classification is helpful for determining a baseline performance that is used as a benchmark for other classification methods.
For this task, we selected all the galaxies in our OTELO SP sample.
In total, there are \ngal galaxy records, \net\ of them classified as ET galaxies (\FPeval\result{round(\pet, 1)}$\approx \result$\%), and \nlt\ as LT galaxies (\FPeval{\result}{round(\plt, 1)}$\approx \result$\%).
The two groups are unevenly balanced, so the baseline classification achieves a high overall accuracy of \result\%, which any other classification method should exceed.
\subsubsection{Color classification}
A preliminary study was performed to determine which colors yield a split between ET and LT galaxies that outperforms the baseline.
Table \ref{tab:colors} shows the measured accuracies for several candidate single-color discriminants.
We note that several colors did not perform better than the baseline classification (\FPupn\result{\plt{} 1 round}\result\%), but those involving the $u$ and a red band usually yield the most accurate results.
Both $u-J$ and $u-i$ colors perform marginally better than \ur, although $u-J$ has a larger number of missing records.
We present the rest of the color analysis based on the \ur color for ease of direct comparison with \citet{str01}.
Table~\ref{tab:urconf} shows an example of the confusion matrix for a single \ur color bootstrap run, yielding an accuracy of $0.959 \pm 0.009.$
Table~\ref{tab:compar} shows the Accuracy, \sensitivity, and \specificity for the different databases and classification methods used in this paper, obtained through the bootstrap procedure.
The \sensitivity and \specificity indicate the proportions of ET and LT galaxies, respectively, recovered through the classification procedure.
For the \ur color, the statistics yield an average Accuracy of $0.96 \pm 0.02$, a \sensitivity of $0.8 \pm 0.3$, and a \specificity of $0.97 \pm 0.02$.
The \sensitivity is the least precise of all the statistics in all the samples because of the relatively low number of ET galaxies.
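The three statistics follow directly from the confusion matrix counts; as a worked example, the single \ur run of Table~\ref{tab:urconf} gives:

```python
def classification_stats(tp, fn, fp, tn):
    """Accuracy, sensitivity and specificity from a two-class confusion
    matrix, taking ET as the positive class."""
    accuracy = (tp + tn) / (tp + fn + fp + tn)
    sensitivity = tp / (tp + fn)   # fraction of ET galaxies recovered
    specificity = tn / (tn + fp)   # fraction of LT galaxies recovered
    return accuracy, sensitivity, specificity

# Counts from the single u-r bootstrap run shown in the text:
acc, sens, spec = classification_stats(tp=25, fn=2, fp=20, tn=489)
# acc = 514/536, consistent with the 0.959 accuracy quoted for that run
```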
\begin{table}[t]
\caption{OTELO \dnn confusion matrix}\label{tab:dnn}
\centering
\begin{tabular}{llrrr}
\hline
\noalign{\smallskip}
& & \multicolumn{2}{c}{OTELO} \\
\cline{3-4}
\noalign{\smallskip}
& & ET & LT & \textbf{Total} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multirow{2}{*}{\dnn} & ET & 27 & 4 & \textbf{31} \\
& LT & 4 & 516 & \textbf{520} \\
& \textbf{Total} & \textbf{31} & \textbf{520} & \textbf{551} \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
Bootstrap yields a \ur color discriminant for ET and LT separation of $2.0 \pm 0.2$, as shown in Fig.~\ref{fig:lda}.
The agreement with \citet{str01}, $\ur = \urplane$ (no error estimate is provided by these authors), is remarkable considering that the galaxies studied by these authors have redshifts in the interval $0 < z \leq 0.4$ while our sample extends to $z \leq 2$.
Figure \ref{fig:ur_z_sp} shows the \ur color distribution as a function of the redshift for the OTELO SP sample.
It is worth noting that LT galaxies in this sample tend to be bluer at redshifts $z \lesssim 0.5$, possibly due to enhanced star-forming activity, as also pointed out by \citeauthor{str01}.
This feature, along with the scarcity of ET galaxies at $z > 1$ (about 13\% of all the ET galaxies in the OTELO SP sample), explains the agreement between the \citeauthor{str01} results and ours despite the redshift differences.
The distribution of the bootstrap Accuracy for the \ur color classification is shown in the upper panel of Fig.~\ref{fig:dnn_acc}.
Most of the \ur color accuracies are larger than the baseline, but the two extreme bootstrap runs with accuracies lying in the 0.915--0.92 interval fail to detect any ET galaxy.
The \sensitivity is equivalent to the \emph{True Positive Rate}, while the \emph{False Positive Rate} is $1 - $ \specificity.
These statistics are used in receiver operating characteristic (ROC) curves to represent the ability to discriminate between two groups as a function of a variable threshold, usually the likelihood of the classification \citep[e.g.,\xspace][]{bar19}.
Figure~\ref{fig:spvsse} shows the distribution of bootstrap values in the \sensitivity versus \specificity plane.
Every point in this figure corresponds to the 50\% probability threshold of the ROC curve (not shown) for each bootstrap run.
The closer the point to the top-right corner, the better the classification.
The data point located at \specificity = 1, \sensitivity = 0 corresponds to the two \ur color bootstrap runs that failed to detect any ET galaxy.
Below, for the LDA and \dnn classification methods, we increase the number of predictor variables used to enhance the distinction between ET and LT galaxies.
\begin{table*}[t]
\caption{OTELO \dnn mismatches}\label{tab:missmatch}
{\centering
\begin{tabular}{rccccccccr}
\hline
\noalign{\smallskip}
\multicolumn{1}{c}{\multirow{2}{*}{ID} } & \multirow{2}{*}{OTELO} & \multirow{2}{*}{\dnn} & \multirow{2}{*}{Prob.} & \multirow{2}{*}{$z_{phot}$} & \multirow{2}{*}{n} & \multirow{2}{*}{C} & \multirow{2}{*}{Elong.} & \multicolumn{1}{c}{Size} \\
& & & & & & &
& \multicolumn{1}{c}{(px$^2$)} \\
\hline
\noalign{\smallskip}
267 & LT & ET r & 0.32 & 0.77 & 1.59 & 3.55 & 1.88 & 259 \\
496 & LT & ET v & 0.12 & 0.70 & 2.54 & 3.54 & 1.22 & 231 \\
1895 & ET & LT r & 1.00 & 0.04 & 0.72 & 3.47 & 1.22 & 61 \\
2818 & LT & ET v & 0.12 & 0.72 & 3.29 & 3.32 & 1.39 & 255 \\
3680 & ET & LT v & 1.00 & 1.45 & 0.36 & 2.14 & 1.49 & 27 \\
4010 & LT & ET v & 0.08 & 0.33 & 3.91 & 4.29 & 1.09 & 1002 \\
4923 & ET & LT v & 1.00 & 0.08 & 0.41 & 1.12 & 1.04 & 55 \\
10207 & ET & LT v & 1.00 & 0.12 & 1.62 & 2.31 & 1.99 & 21 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\par }
\footnotesize{\textbf{Notes:}
Column 1: galaxy identifier in the OTELO catalog.
Column 2: OTELO classification.
Column 3: \dnn classification, with a letter code indicating whether it was rejected (r) or validated (v) after visual inspection by three of the authors.
Column 4: \dnn classification likelihood, closer to zero for ET instances and closer to one for LT.
Columns 5, 6, and 7: the photometric redshift, the S\'{e}rsic index, and the concentration value, respectively.
Column 8: elongation, that is, the ratio between the major and minor axes of the galaxy image as calculated by SExtractor \citep{ber96}.
Column 9: area in pixels in the HST Advanced Camera for Surveys image (scale: 0.03 arcsec/px).}
\end{table*}
\subsubsection{Linear discriminant analysis classification}
Although the use of colors is an improvement on the baseline classification, and the \ur plane method is very easy to implement, we have additional data at our disposal that allow us to aim for more powerful classification techniques.
In particular, it will be very helpful to include a parameter associated with the galaxy morphology that can be inferred from optical or near-infrared observations.
The S\'{e}rsic profile in eq.~(\ref{eq:sersic}) describes the intensity of a galaxy as a function of the distance from its center regardless of the galaxy colors, and thus can be useful for our purpose.
The dataset combining \ur colors and S\'{e}rsic indexes has been probed using linear discriminant analysis.
The sample with complete records consisted of \nurgal galaxies which were split into a training group of \FPupn{\result}{0.7 1787{} * 0 round}\result\ and a test group of \FPupn{\nurtest}{\result{} 1787{} - 0 round}\nurtest.
Figure~\ref{fig:lda} shows the LDA separation in the \ur color versus\ S\'{e}rsic Index $n$ plane for the test galaxies.
The logarithmic scale for the S\'{e}rsic index axis makes the visual comparison with the concentration index (which is already a logarithmic quantity) easier, but at the cost of showing a bent LDA line.
The \ur color is the main discriminant, but the S\'{e}rsic index helps to separate the ET and LT sets more clearly.
The separation line is located at $\ur = (2.756 \pm 0.002) - (0.14125 \pm 0.00007) n$, where $n$ is the S\'{e}rsic index.
An example for the confusion matrix for the test set LDA classification is shown in Table~\ref{tab:ldaconf}.
For a total of \nurtest\ test galaxies, only 15 (7+8) were misclassified, yielding a classification accuracy of \FPupn\result{\nurtest{} 15 \nurtest{} - / 3 round}\(\result \pm 0.008\) in this particular case.
Linear discriminant analysis improves both the baseline and the \ur color classifications, as shown in Table~\ref{tab:compar} and Fig.~\ref{fig:dnn_acc}.
The average \sensitivity of 0.80 is similar to the \ur color, and the \specificity of 0.979 is marginally larger.
Altogether, including the S\'{e}rsic index has helped to obtain a moderate improvement in the average accuracy (from 0.96 to 0.970) and reduces the accuracy uncertainty by \FPupn\result{0.02 0.008 0.02 - / 100 * 0 round}\result \% (from 0.02 to 0.008) with respect to the \ur color discriminant.
The LDA classification presented above is a simple machine learning methodology that shows the potential of this kind of algorithm.
As with most machine learning methods, LDA does not incorporate an easy solution to deal with missing data, although the research in this area has been continuous over the last 50 years \citep[e.g.,\xspace][]{jac68, cha76, cai18}.
Therefore, the usual way to deal with missing values is simply to drop incomplete records.
This is a major problem when dealing with cross-correlated data gathered from multiple catalogs because missing data is a frequent characteristic of catalog entries.
Thus, to prevent a drastic reduction in the amount of complete records, we are forced to put a limit on the number of photometric colors.
\subsubsection{\dnn classification}
Classification based on DNNs\xspace allows us to overcome the missing data problem that limits the number of feasible variables of other machine learning solutions.
This feature by itself justifies its application in astronomical databases, where records are often incomplete.
In the following, we show the results obtained for both OTELO and COSMOS photometry and S\'{e}rsic index samples.
\paragraph{OTELO\\}
We applied a very simple \dnn to the OTELO catalog.
First we computed the colors \mbox{$u-r$}, \mbox{$g-r$}, \mbox{$r-i$}, \mbox{$r-z$}, \mbox{$r-J$}, \mbox{$r-H$}, and \mbox{$r-Ks$}, and we introduced these colors as inputs in the \dnn along with the $r$ magnitude and the S\'{e}rsic index, that is, a total of nine input factors feeding the \dnn.
One example of the 100 random samplings analyzed with our \dnn classification is shown in Table~\ref{tab:dnn}.
For this particular example, the classification accuracy is \FPupn\result{551 27 516 + / 3 round}\(\result \pm 0.006.\)
We highlight the fact that, because of the missing data management, the number of cases included in the \dnn classification (551) is larger than those for the \ur color and LDA methods (536), despite the differences in the number of input factors (9 for the \dnn versus 2 for the LDA or 1 for the \ur color) which in most machine learning techniques would lead to a larger number of incomplete records being left out.
The mean accuracy for our 100 \dnn samplings is \(0.985 \pm 0.007\), as shown in Table~\ref{tab:compar} and in Fig.~\ref{fig:dnn_acc}.
The \sensitivity is 0.84, marginally larger than the \ur and LDA values, and the \specificity is the highest of the three methods tested.
Table~\ref{tab:missmatch} shows the eight discrepancies between the \dnn and OTELO classifications for the test sample data set presented in Table~\ref{tab:dnn}.
Figure~\ref{fig:galaxies} presents the HST images in the F814W band for these eight galaxies.
For the visual classification, we have taken into account the galaxy elongation and the light distribution in the HST image; the GALFIT model helps to indicate the shape and orientation, and the image residuals indicate a possible lack of fitting or possible substructures not visible in the HST image.
Elongated and fuzzy images support a LT visual classification, while a round and soft appearance points to an ET galaxy.
From our visual check, we conclude that six out of the eight galaxies with different class ascription are correctly classified by our \dnn algorithm.
A brief description of each mismatched object follows.
\begin{itemize}
\item {\bf ID 267.} A north--south oriented disk galaxy with a fuzzy northeast portion.
The bulge of the galaxy largely dominates over the disk component.
Compatible with a Sab class.
Visual classification as LT.
\item {\bf ID 496.} A rounded, smooth galaxy with a visually identified LT companion to the northwest and a star to the southwest.
Visual classification as ET.
\item {\bf ID 1895.} Appears as a rounded and compact galaxy in the HST F814W image (our detection image used for GALAPAGOS).
However, visual inspection of the HST F606W image shown in Fig.~\ref{fig:id1895} reveals that there is a companion source not detected in F814W.
Using our web-based graphic user interface\footnote{\url{http://research.iac.es/proyecto/otelo/pages/data-tools/analysis.php}} we find that this companion, probably an LT galaxy that is also undetected in the OTELO deep image, enters the ellipse used to extract the photometry.
It is likely that the resulting composite SED is better fitted by LT templates than by a single-population one.
Visual classification as ET with an unresolved companion.
\item {\bf ID 2818.} A round-shaped galaxy; the residual image suggests possible over-subtraction.
Visual classification as ET.
\item {\bf ID 3680.} A small, fuzzy disk galaxy oriented from northwest to southeast.
Visual classification as LT.
\item {\bf ID 4010.} A rounded galaxy with an LT companion to the southeast.
Visual classification as ET.
\item {\bf ID 4923.} A faint and fuzzy galaxy.
Visual classification as LT.
\item {\bf ID 10207.} A west-east oriented fuzzy small disk galaxy.
Visual classification as LT.
\end{itemize}
\paragraph{COSMOS\\}
We used the COSMOS dataset to check the reliability of our \dnn architecture.
Using the ZSMC and CPhR catalogs we built a sample of \ngalC galaxies for which the photometry and S\'{e}rsic indexes are available.
Photometric bands in the CPhR catalog do not exactly match OTELO's bands.
We have included Subaru's $BV$ bands, and have excluded the $H$ band which is absent from the CPhR database.
Thus, the COSMOS data used in this work consist of nine photometric bands (compared with eight in the case of the OTELO catalog) and the S\'{e}rsic index.
Because the OTELO and COSMOS bands are different, we had to train our \dnn model again.
As in the case of OTELO, we fed the \dnn with the S\'{e}rsic index, the $r$ magnitudes, and the colors relative to the $r$ band.
We did so without changing the \dnn architecture except for the number of inputs.
Despite the differences between the two datasets, we shall see that our \dnn architecture reaches a high classification accuracy also for the COSMOS data.
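The input-vector construction described above can be sketched as follows. This is a minimal illustration: the band list and the magnitudes are made up for the example (the actual OTELO and COSMOS band sets differ slightly), and only the feature layout — S\'{e}rsic index, $r$ magnitude, and colors relative to $r$ — follows the text.

```python
import numpy as np

# Illustrative band list; the actual OTELO/COSMOS band sets differ slightly.
BANDS = ["u", "g", "r", "i", "z", "J", "H", "Ks"]

def make_features(mags, sersic_n, ref_band="r"):
    """Build the classifier input: Sersic index, the reference-band
    magnitude, and the colors of every other band relative to it."""
    m_ref = mags[ref_band]
    colors = [mags[b] - m_ref for b in BANDS if b != ref_band]
    return np.array([sersic_n, m_ref] + colors)

# Hypothetical magnitudes for a single galaxy
x = make_features({"u": 24.1, "g": 23.5, "r": 22.9, "i": 22.6,
                   "z": 22.4, "J": 21.8, "H": 21.6, "Ks": 21.5},
                  sersic_n=1.2)
# len(x) == 2 + (len(BANDS) - 1), i.e. 9 inputs for this 8-band example
```

Only the length of this input vector changes between the OTELO and COSMOS runs; the rest of the architecture is kept fixed, as stated above.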
\begin{table}[t]
\caption{COSMOS \dnn confusion matrix}\label{tab:cosmos}
\centering
\begin{tabular}{llrrr}
\hline
\noalign{\smallskip}
& & \multicolumn{2}{c}{COSMOS} \\
\cline{3-4}
\noalign{\smallskip}
& & ET & LT & \textbf{Total} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multirow{2}{*}{\dnn} & ET & 4450 & 457 & \textbf{ 4907} \\
& LT & 517 & 24264 & \textbf{24781} \\
& \textbf{Total}
& \textbf{4967} & \textbf{24721} & \textbf{29688} \\
\noalign{\smallskip}
\hline
\end{tabular}
\end{table}
\begin{table*}[ht]
\caption{Comparison of classification methods for \pc samples.}\label{tab:compar_concent}
\centering
\begin{tabular}{llllllllllr}
\hline
\noalign{\smallskip}
\multirow{2}{*}{Database} & \multirow{2}{*}{Method} & \multicolumn{2}{c}{Accuracy$\,^a$} & & \multicolumn{2}{c}{\sensitivity$\!^a$} & & \multicolumn{2}{c}{\specificity$\!^a$} & \multicolumn{1}{c}{Sample} \\
\cline{3-4}
\cline{6-7}
\cline{9-10}
\noalign{\smallskip}
&& Mean & Error & & Mean & Error & & Mean & Error & \multicolumn{1}{c}{size} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
OTELO & Baseline & \FPupn\result{\pltc{} 100 swap / 3 round}\result & 0.005 && 0 & ~\dots && 1 & ~\dots & \ngalc \\
OTELO & \ur color & 0.96 & 0.01 && 0.7 & 0.4 && 0.98 & 0.02 & 657 \\
OTELO & LDA & 0.971 & 0.007 && 0.78 & 0.09 && 0.980 & 0.006 & 657 \\
OTELO & \dnn & 0.980 & 0.006 && 0.75 & 0.09 && 0.992 & 0.005 & 688 \\
\noalign{\smallskip}
COSMOS & Baseline & \FPupn\result{\pltCc{} 100 swap / 3 round}\result & 0.001 && 0 & ~\dots && 1 & ~\dots & \ngalCc \\
COSMOS & \dnn & 0.971 & 0.001 && 0.84 & 0.03 && 0.985 & 0.004 & \ntestCc \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{4}{l}{\footnotesize{$^a$ On 100 bootstrap runs.}}
\end{tabular}
\end{table*}
Table \ref{tab:cosmos} shows the confusion matrix for one of the 100 random samplings that we used to characterize the COSMOS \dnn.
For this sampling in particular, the classification accuracy is \FPupn\result{\soltestC{} 4450 24264 + / 3 round}\FPupn\err{\result{} 1 - \result{} * \soltestC{} swap / 2 swap root 3 round}$\result \pm \err.$
Figure \ref{fig:dnn_cosmos} and Table \ref{tab:compar} show the distribution of accuracies for 100 \dnn classification trials obtained from the COSMOS dataset.
The mean accuracy for these trials is $0.967 \pm 0.002$, well above the relatively low baseline of \FPupn\result{\solnC{} \nltC{} / 3 round}\FPupn\err{\result{} 1 - \result{} * \solnC{} swap / 2 swap root 3 round}$\result \pm \err,$ which corresponds to \numprint{28951} LT galaxies out of a total of \ngalC objects included in our COSMOS \ps sample.
Not only is the COSMOS \ps baseline lower than OTELO's (0.946), but the \dnn performance is also lower: 0.967 for COSMOS compared with 0.985 for OTELO.
The \sensitivity of 0.91 for COSMOS \ps is similar within the errors to that of OTELO (0.84), but the \specificity for COSMOS is slightly lower (0.979) than that for OTELO (0.993).
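The quoted figures of merit follow directly from a confusion matrix such as Table~\ref{tab:cosmos}. A minimal sketch, assuming ET is taken as the positive class (an assumption consistent with the quoted values, up to the run-to-run sampling scatter):

```python
import numpy as np

# Counts from the COSMOS confusion matrix: rows are DNN predictions,
# columns are the reference COSMOS labels (ET, LT).
cm = np.array([[4450,   457],    # predicted ET
               [ 517, 24264]])   # predicted LT

total = cm.sum()
accuracy = np.trace(cm) / total                  # correct / all, ~0.967
sensitivity = cm[0, 0] / cm[:, 0].sum()          # ET recall, ~0.90
specificity = cm[1, 1] / cm[:, 1].sum()          # LT recall, ~0.98
acc_err = np.sqrt(accuracy * (1 - accuracy) / total)  # binomial error, ~0.001
```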
Applying the same \dnn architecture to the OTELO and COSMOS datasets, the method yields high classification accuracy in both cases.
Band differences between both datasets may contribute to the accuracy results.
We note that OTELO optical bands were gathered from CFHTLS data, but most of the COSMOS optical bands used were measured by Subaru.
The OTELO $H$ band is missing in COSMOS, while the COSMOS $BV$ bands, which are not included in our OTELO dataset, are strongly correlated with the $gr$ bands.
The high classification accuracies for both the OTELO and the COSMOS datasets suggest that our proposed \dnn architecture may be applicable to a large number of databases that encompass both visual and infrared photometric bands and an estimate of the S\'{e}rsic index.
\subsection{Concentration and photometry\xspace samples}
The S\'{e}rsic index that we used in the LDA and \dnn classification methods detailed above is obtained through a parametric fitting that is difficult to achieve when dealing with low-resolution images.
In contrast, the radius containing a given fraction of the total galaxy brightness is easier to estimate and can be measured directly.
In this section we repeat our previous analysis of the OTELO and COSMOS databases, but using samples obtained through the concentration index defined as the ratio between the radii containing 80\% and 20\% of the galaxy brightness.
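Measuring the concentration index from a curve of growth can be sketched as follows; the use of circular apertures and linear interpolation here are illustrative choices, not necessarily those of the measurement pipeline.

```python
import numpy as np

def concentration_index(radii, cum_flux):
    """C = R80 / R20, from the cumulative flux measured in apertures
    of increasing radius (the curve of growth)."""
    frac = cum_flux / cum_flux[-1]          # normalise to the total flux
    r20 = np.interp(0.20, frac, radii)      # radius enclosing 20% of light
    r80 = np.interp(0.80, frac, radii)      # radius enclosing 80% of light
    return r80 / r20

# sanity check: a linear curve of growth gives exactly C = 0.8/0.2 = 4
r = np.linspace(0.0, 10.0, 101)
assert abs(concentration_index(r, r) - 4.0) < 1e-9
```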
Table~\ref{tab:compar_concent} shows the results obtained with the OTELO and COSMOS CP samples.
As in the S\'{e}rsic index samples, the \dnn classification yields the highest accuracy for OTELO (0.980), and also yields very accurate results for COSMOS (0.971).
In general, the results are comparable with those obtained using the SP sample.
Figure~\ref{fig:lda_conc} shows the distribution of the \ur colors versus the concentration index, along with the \ur color and the LDA separation boundaries.
The \ur color separation is $2.1 \pm 0.3$, in agreement with the values for the OTELO \ps sample ($2.0 \pm 0.2$) and with \citet[$\ur = \urplane$]{str01}.
The LDA separation is located at
%
$$u-r = (3.882 \pm 0.002) - (0.5342 \pm 0.0003) C,$$
%
where $C$ is the concentration.
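Applied as a classifier, the fitted boundary reduces to a single comparison per galaxy. A sketch, assuming the conventional orientation in which ET galaxies lie on the red side of the line:

```python
def lda_class(u_r, c_index, a=3.882, b=0.5342):
    """Classify with the LDA boundary u-r = a - b*C quoted above;
    galaxies redder than the line are called ET (assumed orientation)."""
    return "ET" if u_r > a - b * c_index else "LT"

lda_class(2.5, 4.0)   # a red, concentrated galaxy -> "ET"
lda_class(1.0, 2.0)   # a blue, diffuse galaxy     -> "LT"
```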
The same trend of LT galaxies getting bluer at redshifts $z > 0.5$ can be seen in Fig. \ref{fig:ur_z_cp}.
Figure~\ref{fig:conc_dnn_acc} shows the distributions for the accuracies of the baseline, \ur color, LDA and \dnn classifications performed on the OTELO CP sample.
As in the OTELO SP sample, the \dnn yields the best accuracy, followed by LDA and finally by the \ur color classification.
The distribution of \dnn accuracies for the COSMOS \pc sample is shown in Fig.~\ref{fig:conc_dnn_cosmos}.
Compared with the COSMOS \ps sample, the proportion of LT galaxies is larger (\numprint{\nltCc} LT out of \ngalCc galaxies), yielding a more accurate baseline (\FPeval\result{round(\nltCc{} / \solnCc{}, 3)}\result).
The \dnn accuracies are comparable, with a lower \sensitivity and a marginally larger \specificity for the COSMOS \pc sample.
\subsection{Magnitude limited samples}
Our aim in this paper is to use machine learning techniques to distinguish between ET and LT galaxies.
Thus, our samples are selected from a redshift-limited region with the only requirement of containing enough galaxies in every redshift interval for accurately training and testing the machine learning algorithms.
However, neither the OTELO nor the COSMOS \ps and \pc samples were flux-limited so as to produce a complete sample of galaxies in the volume defined by $z \leq 2$.
This leads us to question the possible cosmological inferences of our results.
In this section, we present the results of the machine learning algorithms using flux-limited samples for both the training and testing sets.
Figure~\ref{fig:cum_dist} shows the cumulative distribution of galaxies by $r$ magnitudes for all the samples analyzed so far.
With respect to the low-brightness tails, both OTELO \ps and \pc samples have similar cumulative distributions that flatten around a magnitude of $r \simeq 26$.
This flattening may be considered as a rough measurement of completeness.
For comparison, from OTELO-Deep image measurements, \citet{bon19} estimate that the OTELO catalog reaches a 50\% completeness flux at magnitude 26.38.
For COSMOS, the \ps sample flattens around $r \simeq 23$, while the \pc sample does at $r \simeq 24$.
Since the COSMOS samples cover a large sky volume, their high-brightness tails extend to galaxies approximately 1.5 magnitudes brighter than those in the much more confined OTELO volume.
\begin{table*}[ht]
\caption{Mean accuracy for OTELO samples at different magnitude limits in the $r$ band}
\label{tab:otelo_samples}
{\centering
\begin{tabular}{lrrr@{$\,\pm\,$}lr@{$\,\pm\,$}lr@{$\,\pm\,$}lr@{$\,\pm\,$}lr}
\hline
\noalign{\smallskip}
$r_{lim}$ & DC & $N$ & \multicolumn{2}{c}{ Baseline } & \multicolumn{2}{c}{ $u-r$ } & \multicolumn{2}{c}{ LDA } & \multicolumn{2}{c}{ DNN } \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{11}{c}{OTELO SP sample} \\
\noalign{\smallskip}
29.36 & 0 & 1834 & 0.946 & 0.006 & 0.96 & 0.02 & 0.970 & 0.008 & 0.985 & 0.007 \\
27.00 & 0.25 & 1765 & 0.947 & 0.006 & 0.96 & 0.02 & 0.969 & 0.009 & 0.977 & 0.008 \\
26.00 & 0.65 & 1358 & 0.943 & 0.007 & 0.96 & 0.02 & 0.97\phantom{0} & 0.01 & 0.978 & 0.009 \\
25.00 & 0.94 & 654 & 0.91\phantom{0} & 0.02 & 0.96 & 0.03 & 0.96\phantom{0} & 0.02 & 0.96\phantom{0} & 0.02 \\
\noalign{\smallskip}
\multicolumn{11}{c}{OTELO CP sample} \\
\noalign{\smallskip}
32.42 & 0 & 2292 & 0.950 & 0.006 & 0.96 & 0.02 & 0.971 & 0.007 & 0.980 & 0.007 \\
27.00 & 0.25 & 2051 & 0.951 & 0.006 & 0.96 & 0.02 & 0.972 & 0.008 & 0.981 & 0.007 \\
26.00 & 0.65 & 1437 & 0.944 & 0.007 & 0.96 & 0.02 & 0.973 & 0.008 & 0.977 & 0.009 \\
25.00 & 0.94 & 661 & 0.91\phantom{0} & 0.02 & 0.96 & 0.03 & 0.96\phantom{0} & 0.02 & 0.97\phantom{0} & 0.02 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\par }
\footnotesize{\textbf{Notes:}
Column 1: $r$-magnitude limit; the first \ps and \pc rows correspond to the full, not magnitude-limited samples.
Column 2: OTELO-Deep image detection completeness from \citet{bon19}.
Column 3: sample size.
Column 4: sample baseline.
Columns 5, 6 and 7: \ur, LDA and \dnn accuracies, respectively, obtained after 100 bootstrap runs.}
\end{table*}
We check our machine learning algorithms using flux-limited samples.
Table~\ref{tab:otelo_samples} shows the results of 100 bootstrap runs on different OTELO $r$-magnitude-limited samples.
We highlight the fact that all the \ur color, LDA, and \dnn accuracies are consistent within the errors.
However, for brighter samples we observe a downward trend in accuracy and an upward trend in uncertainty for the LDA and \dnn classifications, while the \ur color results remain essentially unchanged.
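The mean accuracies and errors quoted in the tables come from repeating the classification over 100 bootstrap runs. A simplified sketch of this estimate — here we resample a fixed set of predictions, whereas in the analysis above the classifier is retrained on each resampled training/testing split:

```python
import numpy as np

def bootstrap_accuracy(y_true, y_pred, n_runs=100, seed=0):
    """Mean classification accuracy and its scatter over bootstrap
    resamplings with replacement."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    accs = [np.mean(y_true[idx] == y_pred[idx])
            for idx in (rng.integers(0, n, size=n) for _ in range(n_runs))]
    return float(np.mean(accs)), float(np.std(accs))
```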
Analogously, Table~\ref{tab:cosmos_samples} shows the limit $r$ magnitude (Col. 1), the sample size (Col. 2), the baseline (Col. 3), and the \dnn accuracy for the COSMOS \ps and \pc samples.
In this case, the training set size is always \ntrainC for the \ps and \ntrainCc for the \pc samples, except for the \pc limit magnitude $r \leq 22$ with a sample size of 9852, for which the training set was set to \ntrainC.
As in the OTELO case, we note the consistency of the \dnn accuracies within the errors, and the trend toward lower accuracies and larger uncertainties for brighter samples.
There are two effects that may account for the trends in accuracy and uncertainty observed in the LDA and \dnn classification methods.
On the one hand, we detect a tendency toward a lower proportion of LT galaxies among brighter galaxies, indicated by the decrease of the baseline.
As the \emph{a priori} probabilities of a galaxy being ET or LT become more alike, the uncertainty in the classification increases.
On the other hand, as the sample size shrinks in brighter samples, so do the fractions of the sample reserved for training and testing (70\% and 30\%, respectively).
This shrinking of the sample size leads to a less satisfactory training and a less precise testing.
Both effects, the baseline decrease and the sample shrinking, tend to reduce the classification accuracy.
With respect to the \ur classification, the \ur color discriminant is determined by low-redshift galaxies (see Figs.~\ref{fig:ur_z_sp} and \ref{fig:ur_z_cp}) that tend to dominate flux limited samples.
Thus, the discriminant remains constrained around a value of 2, and the classification accuracy remains around 0.96.
For the brighter \ps and \pc samples, with magnitude limit $r \leq 25$, the LDA and the \dnn accuracies are similar to that of the \ur color.
In the other two magnitude-limited cases ($r \leq 26$ and $r \leq 27$), the \dnn presents the highest accuracy, and the accuracy of the LDA is higher than that of the \ur color.
These results show that all the machine learning classification methods presented in this paper are robust for both flux-limited and unlimited samples.
\section{Conclusions}\label{sec:conclus}
Neural networks are becoming increasingly important for image classification and will play a fundamental role in mining future databases.
However, many of the current astronomical databases consist of catalogs of tabulated data.
Machine learning techniques are often used to analyze astronomical tabulated data, but analysis through DNNs\xspace is far less frequent and limited to low-redshift galaxies.
Here, we provide a consistent and homogeneous comparison of the popular techniques used in the literature for binary ET and LT morphological type classification of galaxies up to redshift $z \leq 2$.
We used data from the OTELO catalog for classifying galaxies by means of (i) the single \ur color discriminant, (ii) LDA using \ur color and the shape parameter (S\'{e}rsic or concentration index), and (iii) \dnn fed by visual-to-NIR photometry and shape parameter.
We also applied the \dnn architecture developed for OTELO to COSMOS, to probe its reliability and reproducibility on a different database.
Both the S\'{e}rsic index and the concentration index shape parameters yield comparable results, but using the concentration index allowed us to increase the amount of OTELO and COSMOS data available.
All the machine learning methodologies for galaxy classification tested in this paper are robust and produce comparable results for both limited and unlimited flux samples.
Accuracy, \sensitivity, and \specificity estimates show that the \dnn outperforms the other two methods and allows the user to classify more objects thanks to its handling of missing data.
These results show that \dnn classification is a powerful and reliable technique to mine existing optical astronomical databases.
For unresolved objects, the morphological identification is unattainable, the spectrum of a dim object is very difficult to obtain, and multiwavelength data are usually unavailable.
For most objects, photometric visible and near infrared observations are the only (and usually incomplete) accessible data.
This study indicates that \dnn classification may address the mining of currently available astronomical databases better than other popular techniques.
\begin{table}[!tp]
\caption{Mean accuracy for COSMOS samples at different magnitude limits in the $r$ band}
\label{tab:cosmos_samples}
\centering
\begin{tabular}{lrr@{$\,\pm\,$}lr@{$\,\pm\,$}l}
\noalign{\smallskip}
\hline
\noalign{\smallskip}
$r_{lim}$ & $n$ & \multicolumn{2}{c}{ Baseline } & \multicolumn{2}{c}{ DNN } \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{6}{c}{COSMOS SP sample} \\
\noalign{\smallskip}
25.80$^1$ & \numprint{34688} & 0.835 & 0.002 & 0.972 & 0.003 \\
24.00 & \numprint{34128} & 0.836 & 0.002 & 0.969 & 0.003 \\
23.00 & \numprint{23562} & 0.841 & 0.002 & 0.971 & 0.003 \\
22.00 & \numprint{ 9368} & 0.783 & 0.004 & 0.966 & 0.004 \\
\noalign{\smallskip}
\multicolumn{6}{c}{COSMOS CP sample} \\
\noalign{\smallskip}
26.88$^1$ & \numprint{105758} & 0.906 & 0.001 & 0.971 & 0.002 \\
24.00 & \numprint{ 65808} & 0.896 & 0.001 & 0.976 & 0.001 \\
23.00 & \numprint{ 25912} & 0.850 & 0.002 & 0.972 & 0.003 \\
22.00 & \numprint{ 9852} & 0.787 & 0.004 & 0.964 & 0.004 \\
\hline
\noalign{\smallskip}
\multicolumn{6}{l}{\footnotesize{$^1$ Sample not limited in magnitude.}}
\end{tabular}
\end{table}
An important limitation for all machine learning techniques is the availability of labeled data, that is, data that have already been classified or measured.
This limited us to a binary ET/LT classification and forced us to impose a redshift threshold.
Incorporating reliable synthetic data for classification training is an important goal if we wish to overcome these limitations.
Our results provide compelling support for extending the \dnn classification to targets other than binary morphological classification of galaxies, such as separating stars from galaxies, deciphering the spectral type of stars, and detecting rare events.
The application of \dnn is not restricted to classification problems.
Our results strongly suggest that \dnn methods can also be very effective in exploring other issues such as, for example, photometric redshift estimates.
\begin{acknowledgements}
The authors are grateful to the referee for careful reading of the paper and valuable suggestions and comments.
This work was supported by the project Evolution of Galaxies, of reference AYA2014-58861-C3-1-P and AYA2017-88007-C3-1-P, within the ``Programa estatal de fomento de la investigaci\'{o}n cient\'{\i}fica y t\'{e}cnica de excelencia del Plan Estatal de Investigaci\'{o}n Cient\'{\i}fica y T\'{e}cnica y de Innovaci\'{o}n (2013-2016)'' of the ``Agencia Estatal de Investigaci\'{o}n del Ministerio de Ciencia, Innovaci\'{o}n y Universidades'', and co-financed by the FEDER ``Fondo Europeo de Desarrollo Regional''.\\
JAD is grateful for the support from the UNAM-DGAPA-PASPA 2019 program, the UNAM-CIC, the Canary Islands CIE: Tricontinental Atlantic Campus 2017, and the kind hospitality of the IAC. \\
MP acknowledges financial supports from the Ethiopian Space Science and Technology Institute (ESSTI) under the Ethiopian Ministry of Innovation and Technology (MoIT), and from the Spanish Ministry of Economy and Competitiveness (MINECO) through projects AYA2013-42227-P and AYA2016-76682C3-1-P.\\
APG, MSP and RPM were supported by the PNAYA project: AYA2017--88007--C3--2--P.\\
MC and APG are also funded by Spanish State Research Agency grant MDM-2017-0737 (Unidad de Excelencia Mar\'{\i}a de Maeztu CAB).\\
EJA acknowledges support from the Spanish Government Ministerio de Ciencia, Innovaci\'{o}n y Universidades though grant PGC2018-095049-B-C21. \\
MP and EJA also acknowledge support from the State Agency for Research of the Spanish MCIU through the ``Center of Excellence Severo Ochoa'' award for the Instituto de Astrof\'{\i}sica de Andaluc\'{\i}a (SEV-2017-0709).\\
JG receives support through the project AyA2018-RTI-096188-B-100.\\
MALL acknowledges support from the Carlsberg Foundation via a Semper Ardens grant (CF15-0384).\\
JIGS receives support through the Proyecto Puente 52.JU25.64661 (2018) funded by Sodercan S.A. and the Universidad de Cantabria, and PGC2018--099705--B--100 funded by the Ministerio de Ciencia, Innovaci\'{o}n y Universidades.\\
Based on observations made with the Gran Telescopio Canarias (GTC), installed in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\'{\i}sica de Canarias, in the island of La Palma.
This work is (partly) based on data obtained with the instrument OSIRIS, built by a Consortium led by the Instituto de Astrof\'{\i}sica de Canarias in collaboration with the Instituto de Astronom\'{\i}a of the Universidad Aut\'{o}noma de M\'{e}xico.
OSIRIS was funded by GRANTECAN and the National Plan of Astronomy and Astrophysics of the Spanish Government.
\end{acknowledgements}
\section{Introduction}
The hadronic Tile Calorimeter (TileCal) is an essential part of the ATLAS experiment~\cite{ATLAS} at the CERN Large Hadron Collider~\cite{LHC}. Together with the Liquid Argon (LAr) electromagnetic and hadronic calorimeters, it provides measurements of the energy of particles and jets produced in LHC collisions as well as information on the missing transverse energy $E^\mathrm{miss}_{\mathrm{T}}$. The ATLAS calorimeter system is shown in figure~\ref{fig:calorimeter}a.
TileCal is a sampling calorimeter composed of steel plates as the absorber/radiator and organic scintillating tiles made from polystyrene base as the active material. The design, general features and expected performance of the calorimeter are described in Ref.~\cite{TileCal}, while its operating parameters and performance can be found in Ref.~\cite{TileReady} and Ref.~\cite{TileRun1}.
A brief description of the salient dimensions of TileCal follows, while the detailed information about the mechanical structure is available in Ref.~\cite{TileMechanics}.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.65\textwidth]{atlascalo}}
\subfloat[]{\includegraphics[width=0.35\textwidth]{tileslice}}
\caption{(a) ATLAS calorimeter system. (b) Tile calorimeter module internal structure~\cite{TileReady}.}
\label{fig:calorimeter}
\end{figure}
TileCal consists of three sections, known as a Long Barrel (LB) and two Extended Barrels (EB), called EBA and EBC. These three sections have a cylindrical shape with inner and outer radii of 2280 and 4230~mm respectively. The LB is 5640~mm long along the beam axis (Z), while EBA and EBC are 2910~mm long. The mechanical supports for the LAr calorimeter cryostats are located inside the Extended Barrels.
Each of the TileCal cylinders is subdivided in azimuth into 64 independent modules. Scintillating tiles, 3~mm thick, lie within the modules in the $r-\phi$ plane with a period of 18.2~mm along the Z-axis, separated by 14.0~mm thick steel plates, thereby creating a sensitive-material matrix with a periodic structure along the beam line, as shown in figure~\ref{fig:calorimeter}b.
The scintillating tiles are organised along the radius from the beam pipe in 11 tile rows of different sizes, numbered from 1 to 11 starting from the smallest radius. The scintillation light is collected at the exposed edges of each tile by wavelength shifting (WLS) fibres, arranged in pre-shaped opaque plastic ``profiles'' attached to both sides of the modules and running radially. To accommodate calibration tubes and fixing rods, all steel plates and tiles have two \O9.0~mm holes.
Within a module, the readout cells, shown in figure~\ref{fig:tilecells}, are defined by grouping together fibres which are then bundled and coupled to the photomultiplier tubes (PMTs) that read out each TileCal cell, as described below. Each fibre bundle brings to a PMT the light from one side of a contiguous group of tiles. The light from every cell is read out by two PMTs, which measure the light from two cell sides, thus improving the uniformity of cell response across $\phi$ and the reliability of light collection. The cells span pseudo-rapidity ($\eta$) intervals of 0.1 in layers A, B and C and 0.2 in layer D of the Long Barrel. In the angular range covered by the Extended Barrels, it is not possible to define pseudo-rapidity intervals with similar accuracy, therefore the readout cells span larger pseudo-rapidity intervals. There are 45 cells in each LB module and 14 cells in each EB module. The total number of TileCal PMTs is~9852. The details of the optics instrumentation can be found in Ref.~\cite{Instrumentation}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.9\textwidth]{tilecellsrows}
\caption{Tile calorimeter segmentation in depth (radius) and pseudo-rapidity ($\eta$)~\cite{TileReady}.}
\label{fig:tilecells}
\end{figure}
To calibrate and monitor TileCal, it was decided to use $^{137}$Cs $\gamma$-ray sources, propelled by a hydraulic system, which traverse all modules and deposit the $\gamma$-ray energy in them~\cite{Cs137,Cs137H}. With this design, the source signal, averaged over all the tiles of a cell, represents the relative response to particles detected in that TileCal cell. The design and configuration of the system allow one to check and monitor the whole optical path, from the scintillating tiles to the PMT, with better than 1\% precision. The geometrical concept of the approach is shown in figure~\ref{fig:principle}a.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.45\textwidth]{csprinciple}}
\subfloat[]{\includegraphics[width=0.55\textwidth]{cspulses}}
\caption{(a) The concept of the $^{137}$Cs source calibration system. (b) Typical PMT response from the four neighbouring cells as a function of the source position along its path~\cite{Cs137}.}
\label{fig:principle}
\end{figure}
When a scintillating tile is traversed by the $^{137}$Cs source, the emitted 0.662~MeV gamma rays induce light in the scintillator. The light is transported to a PMT by the WLS fibres. The mean-free-path of the gamma rays in the calorimeter structure matches the 18.2~mm calorimeter sampling period, therefore the fine detail of individual tile responses is clearly visible, and any defect can be diagnosed.
The observed PMT signal averaged over a readout cell indicates the tile and fibre optical quality for the entire cell, while peak values can be associated with individual tile responses (figure~\ref{fig:principle}b). The PMT signal read-out electronics is based on a low-noise operational amplifier used as a charge integrator, described in Ref.~\cite{Integrator}.
The relatively long 30.2~year half-life of the $^{137}$Cs isotope (corresponding to a $\sim$2.3\% intensity loss per year) makes it possible to use the same sources to monitor the calorimeter response stability over a time comparable to the lifetime of the ATLAS experiment.
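The quoted $\sim$2.3\% intensity loss per year follows directly from the half-life; a quick check:

```python
import math

T_HALF = 30.2                                 # 137Cs half-life, in years
decay_const = math.log(2) / T_HALF            # decay constant, 1/years
loss_per_year = 1.0 - math.exp(-decay_const)  # ~0.023, the ~2.3%/yr in the text

def remaining_fraction(years):
    """Fraction of the initial source activity left after `years`."""
    return math.exp(-decay_const * years)

# after one half-life the activity is halved, by construction
assert abs(remaining_fraction(T_HALF) - 0.5) < 1e-12
```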
Two more systems, together with in-situ beam calibration, are used to monitor the TileCal response: a Laser monitoring system~\cite{Laser} to check the PMTs, and a Charge Injection System (CIS)~\cite{CIS} to test and to calibrate the fast front-end electronics. However, these two systems do not measure the properties of the entire optical path (scintillator, fibres, their optical coupling, etc.), whereas the radioactive source system can fill this need. Other options sensitive to the properties of the entire optical path, such as monitoring the calorimeter's operation with real events (with ``minimum bias'' or cosmic-ray triggers), depend on the available statistics of the recorded runs and are not as precise and flexible as the radioactive source system. This has made calibration by a Cs source very important for TileCal, during the module instrumentation and test beam phases and during the physics runs.
An elaborate source drive system, appropriate controls and an advanced online data analysis framework are required to precisely and reliably measure the response of $\sim$463000 tiles, passing through the 192 TileCal modules. The design of the system, the layout of the calibration tubes and the additional equipment had to respect the TileCal detector geometry envelopes and tolerances.
A system that fulfils these requirements, using hydraulics to drive radioactive sources through the entire volume of TileCal, was designed, constructed and commissioned~\cite{MonSys}. A water-based liquid propels the radioactive sources through a large tubing circuit at a steady velocity of about 35~cm/s, which is adequate to scan the whole TileCal volume in a few hours while providing high-granularity outputs on the optical response of every single tile. An early version of the system was used to calibrate the prototype and production modules of TileCal; the full-scale system was installed in the ATLAS experimental cavern from 2002 to 2008 and has been used since 2009 to monitor the response of TileCal to cosmic rays and to particles produced in LHC collisions.
In this article, the design, construction, installation, and results of the radioactive source system of the ATLAS Tile Calorimeter are described, with an emphasis on the overall aspects of this facility.
\section{System description}
\subsection{General design}
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.5\textwidth]{cshydraulics}}
\subfloat[]{\includegraphics[width=0.5\textwidth]{cscontours}}
\caption{ (a) Schematics of the Cs system hydraulics. (b) The hydraulic contour switching concept.}
\label{fig:hydraulics}
\end{figure}
The TileCal Cs source system consists of three separate but functionally identical subsystems, serving the Long Barrel (LB) and the two Extended barrels, EBA and EBC. The layout of a subsystem is schematically shown in figure~\ref{fig:hydraulics}. It consists of the following components:
\begin{itemize}
\item A circuit of calibration tubes within each of the three calorimeter barrels, that defines the source path, including the source storage devices (``garages''), where the source is kept between scans;
\item The circuit is divided into a number of segments (also known as ``contours'') consisting of 4 to 8 modules each; special tee-joints separate the source path segments and connect to supply pipes that lead the liquid into and out of the calibration circuit's common volume;
\item The supply pipes themselves, that provide the pressure that propels the source capsule within the appropriate circuit segment;
\item Pumping and distributing devices, that provide the liquid flow to each circuit segment, as well as propelling the source capsule in the desired direction within a segment and through the tee-joint to the next segment;
\item Storage vessels, to hold the liquid while the system is idle.
\end{itemize}
The tee-joints set the entry and exit points of the working liquid and define circuit segments whose volume is smaller than that of the entire calibration circuit. This allows pressure to be applied to one circuit segment at a time, with enough force to push the capsule in the desired direction. This way, the pressures in each section and in the overall liquid circuit are kept at appropriately low levels.
The liquid in other parts of the circuit stays almost at rest. There is a small flow in the opposite direction, estimated to be about 15\% of the contour flows.
The source capsule moves in the desired direction with the steady flow of the liquid. The stability of the movement depends on the performance of the pumping equipment, the inertia due to the mass of the fluid, the dynamic frictional properties of the source capsule on the surface of the calibration tubes and lubricity and viscosity of the working liquid. The source capsule passes from a segment to the next one through a tee-joint when the liquid flow is switched from one to the next.
The source circuit is realised with cold-rolled stainless steel tubes, \O6.0~mm $\times$ 8.0~mm (ID $\times$ OD), consisting of straight, bent and interconnecting sections. The tube layout through a pair of neighbouring LB and EB modules is shown in figure~\ref{fig:tubeslayout}. The circuit through the LB is straightforward, while the one through the EB is slightly more complicated.
This is because in an EB module both the input and output tubes must be on the same side, and because of the peculiar geometry of the extended barrels at large radii, also shown in figure~\ref{fig:tubeslayout}. Because of these constraints, the source passes twice through tile row \#7, using both holes. Therefore there are 11 straight tubes in an LB module and 12 in an EB module.
\begin{figure}[!htbp]
\centering
\subfloat[][Long barrel]{\includegraphics[width=0.42\textwidth]{cstubeslb}}
\subfloat[][Extended barrel]{\includegraphics[width=0.57\textwidth]{cstubeseb}}
\caption{Calibration tube layout in long barrel and extended barrel modules.}
\label{fig:tubeslayout}
\end{figure}
\subsection{Mechanical details}
\begin{figure}[!htbp]
\begin{minipage}{0.5\textwidth}
\includegraphics[width=\textwidth]{cstubesbent}
\caption{Examples of U-shaped tubes. The rubber cap is used to protect the calorimeter tiles from light, dust and glue during the interconnection of the tubes.}
\label{fig:tubesbent}
\end{minipage}
\qquad
\begin{minipage}{0.45\textwidth}
\includegraphics[width=\textwidth]{cstubesglue1}
\caption{A few steps of the procedure to glue straight and U-shaped tubes. }
\label{fig:tubesglueing}
\end{minipage}
\end{figure}
Specially made tubes, bent or U-shaped, interconnect the straight sections. These parts were bent with a technique that keeps the tube inner cross-section almost intact, thereby ensuring a smooth passage of the source (figure~\ref{fig:tubesbent}). The bent tube ends flare out for insertion into the straight tubes, and are joined with two-component Araldite 2011 epoxy-type glue. Glueing, rather than welding, was chosen in order to avoid damaging the plastic scintillating tiles during the assembly of the calibration circuit. The Araldite 2011 (formerly AW-106) glue is sufficiently radiation tolerant, at least up to 4~MGy~\cite{Epoxy}, while the maximum dose in TileCal after 10 years of LHC operation was expected not to exceed 1~kGy.
All bent tubes have the same 15~mm radius of curvature, checked by several gauged dummy capsules just after fabrication, and at several stages of module instrumentation, in particular, after the glued joints are made. The dummy capsules have 6.00~mm outer diameter at fabrication and 5.85~mm at the final assembly stage. These dimensions guarantee passage of the 5.60~mm source capsule. Figure~\ref{fig:tubesglueing} shows some of the initial steps of the procedure of glueing together the different tube types.
More than 2200 U-shaped tubes of various lengths and configurations were used for 192 modules (and 2 spares), and more than 250 for module and source garage inter-connections. The total length of the source circuit is about 9~km for the three barrels: $\sim$4.3~km for LB and 2 times $\sim$2.4~km for EBA and EBC.
All 192 TileCal modules were delivered equipped with their calibration tubes at the optical instrumentation phase or just after it. At that time, a $^{137}$Cs source was used to calibrate and certify them. Later on, the modules of each of the three sections (LB, EBA, EBC) were interconnected in situ, in the ATLAS cavern, creating three independent but functionally identical systems. Every step of tube insertion and interconnection was accompanied by cross-checks: calibrated dummy source passage, verification of geometry envelopes and air pressure tests up to 5~bar.
The tube layout for a group of EBA modules during assembly in the cavern is shown in figure~\ref{fig:tubeseba}, in which one can see connections between straight tubes and between modules.
\begin{figure}[!htbp]
\begin{minipage}{0.50\textwidth}
\includegraphics[width=\textwidth]{cstubeseba}
\caption{Layout of calibration tubes in a group of EBA modules during assembly in the cavern. One can see the U-shaped tubes connecting straight tubes, module inter-connecting tubes and tee-joints.}
\label{fig:tubeseba}
\end{minipage}
\hfill
\begin{minipage}{0.46\textwidth}
\includegraphics[width=\textwidth]{teejoint}
\caption{Tee-joint of calibration tubes (stainless steel) and supply pipe (copper). The brass spring guides the source capsule between calibration tubes.}
\label{fig:teejoint}
\end{minipage}
\end{figure}
\begin{figure}[!htbp]
\centering
\subfloat[][Long Barrel]{\includegraphics[width=0.50\textwidth]{cscoppertubeslb}}
\hfill
\subfloat[][Extended Barrel]{\includegraphics[width=0.45\textwidth]{cscoppertubeseb}}
\caption{Copper supply tubes at the bottom of the LB (a) and a patch panel of the EB, where copper tubes are connected to NBR pipes (b).}
\label{fig:coppertubes}
\end{figure}
The rubber caps on the tubes serve as light seals. Some of the bent tubes lie in grooves machined in the module end-plates, in the area where the LAr cryostat supports, or its flange, are closest to the Tile calorimeter.
To provide free passage of the source capsule from one module to another and to connect the calibration circuit to a liquid supply pipe, a tee-joint is used. Figure~\ref{fig:teejoint} shows the tee-joint in detail, with the top cover lid removed to reveal its internal structure. The lid also houses a pressure sensor, described later on together with the other sensors.
There are 16 tee-joints in the LB barrel and 14 of the same design in each of the EBA and EBC, forming the corresponding number of circuit segments. All the tee-joints are located at the outer radius of the calorimeter, on the tubes that interconnect modules.
Copper tubes of \O6.0~mm~x~8.0~mm (ID x OD) are used to carry the liquid from the drives to the calibration circuit at the detector's location. There are 16 copper supply tubes with lengths of 35 to 50 meters in the LB section and 14 in each of the EB sections. Flexible NBR (nitrile butadiene rubber) pipes of \O7.7~mm~x~13.5~mm (ID x OD), with lengths of 50 to 65 meters each (figure~\ref{fig:coppertubes}), connect to the EB tubes to allow moving the EB sections of TileCal when opening ATLAS. The total length of supply pipes is over 2~km for all three calibration subsystems.
The pneumatic part of the system consists of mechanical safety valves, electromagnetic valves built into the drives, and polyamide distribution piping (\O2.7~mm~x~4.0~mm, ID~x~OD), interconnected with connectors made by LEGRIS.
The garage lock driving mechanism works with pressurised air, supplied via two pipes $\sim$70~m long. The same kind of pipes, a few hundred metres long, supply air to the liquid storage vessels. Altogether the length of pneumatic pipes is close to 2~km. The garage locks are operated with pressures up to 5~bar, while liquid filling and draining are done at pressures under 2.6~bar. Both limits are controlled by calibrated safety valves.
\subsection{Sources and source capsules}
The Cs calibration and monitoring system uses several $^{137}$Cs radioactive sources of the same design, from two different manufacturers (JINR, Dubna and Isotope Products, Prague). The sources are contained in stainless steel cylinders of outer dimensions of \O2.0~mm~x~9.0~mm, hermetically welded at both ends. The cylinder wall thickness is 0.2~mm, allowing an internal volume of nearly 10~mm$^{3}$ for the source itself. The $^{137}$Cs isotope, obtained from the chemical purification of spent fuel rod arrays from nuclear reactors~\cite{Sources}, is sealed into a vitrified ceramic medium.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.48\textwidth]{cscapsuledrawing}}
\qquad
\subfloat[]{\includegraphics[width=0.45\textwidth]{cscapsuleintube}}
\caption{(a) Source and protective capsule design. (b) Dumb-bell-shaped capsule in a bent segment of the calibration tubes. The capsule diameter is 5.60~mm, to be compared to the 6.0~mm inner diameter of the calibration tubes, leaving up to 0.4~mm clearance to the tube inner walls.}
\label{fig:capsule}
\end{figure}
The source cylinders are mounted in dumb-bell-shaped capsules of a hardened aluminium alloy, plated with a 2 to 4 $\mu$m thick titanium nitride (TiN) film. Figure~\ref{fig:capsule} shows the design of the capsule. The diameter of the spherical end is 5.60~mm and the longitudinal dimension is slightly less than 12.0~mm. The source cylinder is fixed inside the capsule by crimping, using a specially designed tool. The shape and outside dimensions of the capsule allow it to be driven by the liquid flow through the bent tube sections with a radius of curvature down to 15~mm.
The capsules are not waterproof. They are designed for safe and reliable movement in the calibration circuit while protecting the sources from shock and friction during their travel, as well as from damage while they are handled. The titanium nitride coating decreases the friction with the stainless steel tube walls, thus improving the resistance of capsules to wear. The base material of the capsule is conductive, allowing it to be easily detected with inductive sensors. However, it is not chemically inert. Therefore precautions are taken to prevent corrosion of the capsule surface and of the source itself.
In figure~\ref{fig:capsule2}, a new, unused source capsule is shown together with one that travelled more than 250~km through calibration tubes.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.3\textwidth]{cscapsule1}}
\subfloat[]{\includegraphics[width=0.31\textwidth]{cscapsule2}}
\subfloat[]{\includegraphics[width=0.41\textwidth]{cscapsule3}}
\caption{From left to right: (a) a new encapsulated source; (b) the welded end of the source's stainless steel cylinder, after crimping it in the capsule; (c) a source capsule after passing through 250~km of calibration tubes over more than four years of use. The outer diameter was reduced from the initial 5.60~mm by less than 0.10~mm.}
\label{fig:capsule2}
\end{figure}
\begin{table}[!htbp]
\centering
\caption{$^{137}$Cs sources used in the ATLAS Tile Calorimeter Cs calibration system.}
\begin{tabular}{|c|c|c|c|c|}
\hline
Source & Estimated activity & Measured response ratio & Produced at & Usage \\
name & (MBq $\pm$15\%) & to 3713RP & & \\
& April 2009 & March 2009 & & \\
\hline
3712RP & 319 & 1.2200 $\pm$ 0.0005 & JINR Dubna & Instrumentation \\
3713RP & 264 & 1.0000 & & Test beam \\
\hline
RP4089 & 377 & 1.2180 $\pm$ 0.0007 & IP Prague & EBC monitoring \\
RP4090 & 363 & 1.1590 $\pm$ 0.0005 & & EBA monitoring \\
RP4091 & 372 & 1.1860 $\pm$ 0.0005 & & LB monitoring \\
\hline
\end{tabular}
\label{tab:sources}
\end{table}
Five radioactive sources are used for TileCal response calibration. All had similar initial activities of 250 to 400~MBq, and were intercalibrated with several techniques, with a relative precision of about $5\times10^{-4}$. The nearly equal activity of these sources is convenient because it keeps the responses of the three barrels in the same dynamic range; in addition, switching the sources, if needed, requires fewer system changes. In addition to the usual activity measurements carried out by the producers and the CERN Radiation Protection team, all sources were inter-calibrated in two campaigns (in 2009 and 2013) by multiple passes through the same modules (LB/EB). The results of the two campaigns agreed within the achieved accuracy, which suggests that the sources have the same isotope composition. The source activities and the TileCal cell response ratios are listed in table~\ref{tab:sources}.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.7\textwidth]{csspectrum}
\caption{The energy spectrum of a $^{137}$Cs source from data provided by the producer (Isotope Products), with zoom on the $^{137}$Cs peak.}
\label{fig:csspectrum}
\end{figure}
The typical energy spectrum of a source measured just after production is shown in figure~\ref{fig:csspectrum}. The $^{137}$Cs photo-peak (0.662~MeV) dominates; any admixture of short-lived isotopes such as $^{134}\rm{Cs}$ (0.698~MeV peak energy, half-life~$\sim$2 years) would have been present at less than $10^{-3}$ of the main isotope activity. During the recalibration campaigns, the sources' spectra were obtained and no differences were found between ``old'' (JINR) and ``new'' (IP) sources; the purity of all the sources was confirmed by the absence of other (short-lived) isotopes.
It is planned to re-encapsulate the sources currently used in ATLAS after the slow wear of capsules produces critical changes of their shape and size. After being used for about 5 years with no significant wear, the 3713RP and 3712RP sources were successfully re-encapsulated at Isotope Products in Prague, with no observed source material leakage.
Five years of use with regular monthly Cs scans in the ATLAS cavern produced no significant wear of these capsules. The sources are checked every few years during the planned long stops of the LHC machine, when the ATLAS detector is open for maintenance.
\subsection{Source storage garages}
A ``garage'' is where a source resides between calibration runs. Nine garages, three for each subsystem (EBA, EBC, LB), were built and installed. The garages are uniformly distributed over the outer perimeter of the 64 modules in each TileCal section; they are integrated into the calibration tubes sequence of each subsystem.
The garage lock driving mechanisms are operated with pressurised air, supplied via two $\sim$70~m long pipes, at the pressures mentioned in the previous subsection.
A source capsule can be loaded into a garage, extracted and moved in either direction, brought to a stop in a garage when arriving from either side, kept there for any length of time, and removed when needed (for instance, when ATLAS is opened). Functionally, a garage is a detachable piece of the calibration tube circuit, radiation-shielded and equipped with two remotely operable locks and detecting sensors.
Each garage is supplied with a remotely operated electronic control card that checks the status of the garage-locking mechanism and detects the presence of a metal capsule with a SIN sensor (described in section 4.2). On request, it can measure the capsule activity with a Geiger counter. The garage status is locally displayed with LEDs; this information is also sent to the control system either on request or every time the status changes.
Each source capsule normally resides in one of the three garages of its subsystem; the other two garages are usually unoccupied and are used as intermediate storage for the source during a normal scan, or when a calorimeter scan has to be interrupted for some unforeseen reason, such as ATLAS operational priorities. At the normal speed of 35~cm/s, the source capsule travels between two garages (passing through 20--24 modules) in about 1 hour. This is a reasonable time in which to decide whether or not to stop a full scan and let ATLAS proceed with its planned operations.
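As a rough consistency check, the $\sim$9~km of calibration tubes spread over 192 modules give on average about 47~m of tube per module, so a garage-to-garage passage through 20--24 modules covers roughly 0.9--1.1~km; at 35~cm/s this takes
\[
t \approx \frac{1.1\times10^{5}~\mathrm{cm}}{35~\mathrm{cm/s}} \approx 3\times10^{3}~\mathrm{s} \approx 50~\mathrm{min},
\]
in agreement with the quoted travel time of about one hour.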
All the garages are identical and consist of an outer case, lead radiation shielding, the garage body itself with locks, driving pistons and a detachable cassette (figure~\ref{fig:garage}). The lead shielding thickness is 5~cm, and the dose rate at a 40~cm distance from the garage does not exceed 0.5~$\mu$Sv/h.
The source capsule is loaded into the cassette in a location providing an appropriate safety environment. The cassette lead shielding is the first level of operator protection; in addition, it is handled and transported to the garage location in a special container, specifically assigned to each cassette. The cassette installation into the garage body takes a few minutes, thereby limiting exposure of the operator to a safe dose. When the cassette is removed from the garage, it is replaced with a dummy one to close the calibration tubes circuit. The calibration pipe system must be drained and the garage volume must be reasonably dry before manipulating a cassette.
A source is locked in the cassette by brass wires, as shown in figure~\ref{fig:cassette}. These wires are moved by pistons operated with a 5~bar air pressure. The opening of locks allows the source capsule to move in the direction of the liquid flow. The pneumatically driven lock mechanism is insensitive to the ATLAS magnetic fields. The normal state of the locks is ``closed'', preventing unwanted capsule movements. If the capsule is inside the source compartment, it can exit only after the proper air pressure is applied and the liquid flow is provided.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.31\textwidth]{csgarage1}} \quad
\subfloat[]{\includegraphics[width=0.31\textwidth]{csgarage2}} \quad
\subfloat[]{\includegraphics[width=0.31\textwidth]{csgarage3}}
\caption{Cs source garage: (a) Garage with the control unit, fixed at a module's outer periphery. (b) Garage case, body, detachable cassette. (c) Lead shielding components and lock mechanism.}
\label{fig:garage}
\end{figure}
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.57\textwidth]{cscassette}} \quad
\subfloat[]{\includegraphics[width=0.37\textwidth]{cstransport}}
\caption{ (a) Cs cassette and its components, showing the brass wire lock that keeps the source capsule in its compartment. (b) The transport container.}
\label{fig:cassette}
\end{figure}
There are two source location sensors in the cassette. An inductive sensor (SIN, see section 4.2) registers the presence of the metal capsule. The second is a Geiger counter that detects the presence and monitors the source's gamma activity. It is a Russian-made SBM-10 type Geiger counter, with a thin-walled metal case. The dimensions of the Geiger tube are \O6.0~mm~x~25.0~mm. It is operated at an HV of 300--400~V and can sustain a rate of 700 pulses per second. Normally the counter HV is off; in order to increase the counter's expected lifetime, it is switched on only during source calibration runs, on request from the central control process.
To further decrease the rate load and thereby increase the Geiger counter's lifetime, the gamma-ray flux hitting it is reduced by embedding it in lead within the cassette itself. With a 9~mCi $^{137}$Cs source separated from the detector by 25--35~mm of lead, the measured counting rate is below 500~Hz.
The Geiger counter is operated under software control in either of two modes. Routinely, when the source reaches the garage from the calibration tube circuit, the presence of the source capsule, registered by SIN sensor, is confirmed by measuring the Geiger counter rate over a one-second interval. This measurement distinguishes a capsule containing the radioactive source from a dummy one. In the other operating mode, the activity of the source and general system operation are occasionally checked by taking sixty sequential one-second measurements. Over longer intervals, the Geiger counter data are an important check of the integrity of the source.
\section{Hydraulic system}
\subsection{Working liquid}
Metals of different properties are used in the system: regular steel, stainless steel, copper, copper-based alloys, aluminium alloys. As a consequence, care must be taken to avoid corrosion, especially of the sources and the capsules themselves. Oxidation would be very likely if the working liquid were plain or distilled water.
The dangers of corrosion are avoided by using as the working liquid a mixture of demineralised water (65\% by volume) and NALCO 00GE056 (later replaced by TRAC100) liquid (composition: water, disodium metasilicate 5--10\%, sodium molybdate 10--20\%, sodium tetraborate 1--5\%)~--- which is customarily used as a cooling agent in pipelines that include materials similar to those employed here. This liquid is particularly effective at suppressing corrosion when different metals are present, is non-toxic and has low conductivity. It must be handled with protective equipment, because it may irritate skin and eyes.
\subsection{Liquid storage}
The total volume of the LB section's pipes is around 140~l, and slightly over 90~l for each EB. In practice, the total liquid volume circulated is 150--160~l in the LB, and about 100~l in EBA or EBC. Hence the total capacity of the storage system must be nearly 400~l; moreover, this volume must be pumped into and out of a nontrivial pipe system.
The height difference between the bottom of the ATLAS cavern and the top of the calorimeter is 18--20 meters, therefore the maximum top--bottom static pressure difference, taking into account the working liquid density ($\sim$1.15~g/cm$^3$) is about 2~bar. Pressurised air at 2.5--2.7~bar is used to inject the liquid into the piping system, or to return it into storage.
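As an illustrative check, with a liquid density of $\sim$1.15~g/cm$^3$ and a representative height difference of 19~m, the hydrostatic pressure difference is
\[
\Delta p = \rho g h \approx 1150~\mathrm{kg/m^3} \times 9.81~\mathrm{m/s^2} \times 19~\mathrm{m} \approx 2.1\times10^{5}~\mathrm{Pa} \approx 2.1~\mathrm{bar},
\]
consistent with the quoted maximum static pressure difference of about 2~bar.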
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.54\textwidth]{csws1}
\caption{Liquid storage unit, made of two 230~l and four 130~l stainless steel tanks combined in pairs, for the liquid and air volumes.}
\label{fig:waterstation}
\end{figure}
The storage system (figure~\ref{fig:waterstation}), called the water station (WS), consists of stainless steel tanks: one pair of 230~l tanks for the LB and two pairs of 130~l tanks for EBA and EBC. In each pair, one tank stores the liquid and the other the air displaced during the filling operation. This air is eventually released into the atmosphere, carrying vapours which are absorbed by external filters. The volume of each system tank is slightly greater than the volume of the corresponding system piping.
\subsection{Hydraulic drive and control}
Three pumping units (``hydro-drives''), operated via 3U control crates located in the ATLAS cavern, serve the calorimeter sections, one per section. Two additional drives, one for further R\&D and one spare, are available. All drives are identical and interchangeable after appropriate configuration tuning.
The main purpose of the hydro-drive is to fill the piping system, to produce a stable and controllable flow of liquid in the appropriate contour -- be it in a pre-programmed or in an {\it ad-hoc} configuration -- and to drain the system. The maximum number of supported contours is 16, with a pressure difference of up to 4~bar at the drive in/outlets.
An additional task of the drive is to provide controllable pressurised air to operate and manipulate up to 6 garage locks and two liquid (or gas) storage units synchronised with the current operation procedure. All drive operations can be performed remotely as well as manually, using dual control features of the 3U crate.
Each drive unit (figure~\ref{fig:hydrodrive}) includes a magnetic gear pump (IWAKI MDG-M2), a frequency-varying drive (YASKAWA VS mini C CIMR-XCACB) used as a controlled 200-watt power supply for the pump, 42 hydraulic (LUCIFER 121K01) and 11 pneumatic (LUCIFER 131M14) electromagnetic valves, a 1.8~litre buffer vessel with a level meter, pressure sensors and manometers; additionally, a number of manual valves, tubes, pipes, cables, connectors, filters, etc. The drive occupies one full 6U euro-crate, weighs about 30~kg and operates at pressures of up to 5~bar.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.37\textwidth]{cshydrodrive1}} \quad
\subfloat[]{\includegraphics[width=0.6\textwidth]{cshydrodrive2}}
\caption{(a) Hydro-drive with its control crate. (b) Hydro-drive schematics.}
\label{fig:hydrodrive}
\end{figure}
The 3U control crate is equipped with a special-purpose local bus and contains a number of dedicated modules, which can be operated either remotely or manually. All the 3U crates and their modules are interchangeable. The main modules are listed here:
\begin{itemize}
\item A CAN 3U interface provides the communication between the 3U modules and the remote CPU, using a CAN bus interface and the 3U local special-purpose bus;
\item Up to 8 electromagnetic valve drives with 8 channels each. Both the hydraulic and pneumatic valves are opened with 24~V DC supply; the valve status is available at all times;
\item The pump drive module, which sets the frequency and power of the YASKAWA drive, and hence the rotation speed of the magnetic gear pump, so as to provide the desired flow in the chosen contour;
\item The level meter control and display, which monitors changes in the amount of liquid in the entire volume with an accuracy better than 200~ml;
\item The status display of the 3U-crate local bus, used for debugging purposes.
\end{itemize}
To drive the source capsule through the 6~mm inner-diameter tube with a steady velocity of about 35~cm/s, the liquid flow in the desired circuit section has to be about 10~cm$^{3}$/s. With these parameters the pressure drop in one LB module, containing $\sim$65 meters of calibration tubes, is 0.2~bar, and in one EB module ($\sim$35 meters of calibration tubes) is 0.15~bar. The number of modules in a contour varies from 4 to 6, therefore the applied pressure difference (positive or negative) to a contour has to be about or slightly over one bar above the local static pressure (-0.2+0.6~bar).
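The quoted flow and capsule speed are mutually consistent: for the 6~mm inner diameter, a flow of $Q \approx 10$~cm$^3$/s corresponds to a mean liquid velocity of
\[
v = \frac{Q}{\pi r^{2}} \approx \frac{10~\mathrm{cm^3/s}}{\pi\,(0.3~\mathrm{cm})^{2}} \approx 35~\mathrm{cm/s},
\]
i.e.\ the capsule essentially travels with the liquid.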
The hydro-drive is capable of pumping liquid with a finely controllable flow in the range of 5--30~cm$^{3}$/s, while providing a pressure difference of up to 4~bar. This range corresponds to a capsule speed of 10--50~cm/s. The typical speed during early system tests was 25--30~cm/s, while currently it is routinely 35~cm/s in all three calorimeter sections. The hydro-drive reaction time to changes in run conditions is adequate for the system piping, as designed.
\section{Sensors}
Several types of sensors, together with the associated electronics, described in detail in Ref.~\cite{Electronics}, are used to control the system remotely, both when idle and during operation.
\subsection{Pressure sensors}
The pressure sensors (PS, figure~\ref{fig:pressuresensor}) monitor the pressure in the calibration tube circuit and in the liquid and gas supply vessels. More than 60 points are measured in the system, over the design range of $-1.0$ to $+5.0$~bar.
The PS elements are Motorola integrated monolithic silicon devices MPX5700D and MPX5700A. They are based on a piezo-resistive transducer whose on-chip signal is conditioned, temperature-compensated and pre-calibrated by the manufacturer to an accuracy of about 2.5\%. A silicone-rubber cover protects the integrated circuit from the working fluids.
Sixteen pressure sensors are installed on LB tees and 28 on EBA \& EBC tees in total. In addition, 15 sensors are part of the liquid pumping and storage equipment, giving a total of 59 units located in the cavern.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.50\textwidth]{cspressuresensor1}} \quad
\subfloat[]{\includegraphics[width=0.43\textwidth]{cspressuresensor2}}
\caption{Pressure sensor element MPX5700 (a) and its case (b). The main locations of the sensors are tee-joints, pumping units (hydro-drives) and liquid storage units (WS).}
\label{fig:pressuresensor}
\end{figure}
\subsection{Inductive capsule sensor}
The inductive sensor (SIN, figure~\ref{fig:sinsensor}) is designed to register the passage of the conductive body of a capsule. It is a continuously powered LC circuit in which the inductive element is a coil wound around a tube of the source travel circuit. The oscillation frequency of the circuit is shifted by the change in inductance due to the capsule's conductive body. The frequency shift, and hence the presence of the capsule, is detected by conventional electronics.
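As a sketch of the principle (component values are not quoted here), the oscillation frequency of the LC circuit is
\[
f_{0} = \frac{1}{2\pi\sqrt{LC}},
\]
so a small inductance change $\Delta L$ induced by the passing conductive capsule shifts the frequency by $\Delta f/f_{0} \approx -\Delta L/(2L)$, which is the quantity detected by the read-out electronics.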
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.40\textwidth]{cssinsensor1}} \quad
\subfloat[]{\includegraphics[width=0.53\textwidth]{cstee2}}
\caption{SIN sensor installed on a section of the calibration tube (a) or around the tee connection (b).}
\label{fig:sinsensor}
\end{figure}
The SIN signal indicates to the control process the passage of the capsule from one module to the next, and when to switch the liquid flow to the next contour. Typically one SIN is located at a module's entrance, where the liquid flow enters, and another at its exit. In order to ensure that the capsule location is known at all times, additional SINs are mounted in inaccessible zones of the calibration tube circuit, for instance under the Liquid Argon calorimeter cryostat's supports and flanges. More SINs signal the entrance and exit of the capsules from garages.
SIN data allow the capsule velocity to be monitored online and the pump speed to be tuned, thereby steadying the movement of the capsule as the flow within any contour is varied. Altogether almost 500 SINs are used in the three calorimeter sections.
SINs also register the presence of a source capsule in a garage (figure~\ref{fig:minicrate}a). In this case, the sensor coil has a slightly different shape but measures a frequency shift just like the others. The only difference is in the data treatment: while a conventional SIN only registers the passing of a capsule, the garage SINs are tuned to sense the change in conductivity due to the presence of a capsule. The corresponding parameters are saved in the memory of the appropriate garage module.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.48\textwidth]{csgaragesensors}} \hfill
\subfloat[]{\includegraphics[width=0.44\textwidth]{csminicrate}}
\caption{(a) SIN sensor coil and Geiger counter used in a garage. (b) Mini-crate with SIN\_CAN (top) and ADC\_CAN (bottom) FEE units.}
\label{fig:minicrate}
\end{figure}
The front-end electronics (FEE) modules that collect and treat the pressure and SIN sensor data are located at the periphery of calorimeter modules and are contained in custom-made mini-crates (figure~\ref{fig:minicrate}b). One ADC\_CAN unit can handle up to 8 pressure sensors; one SIN\_CAN unit supports up to 16 sensor channels. Altogether, about 12 ADC\_CAN modules and 40 SIN\_CAN modules are installed. All data are transmitted to the central CPU under the CAN bus protocol. A more detailed description of the system's sensors and electronic modules is given in Ref.~\cite{Electronics}.
\subsection{Liquid level meter}
While being pumped, the propelling liquid passes through a buffer vessel in order to eliminate bubbles, which otherwise might create problems such as letting the pump run dry or losing control of the source movement. Continuous measurement of the liquid level in the buffer vessel (BV) (see figure~\ref{fig:hydrodrive}) allows detecting the presence of bubbles, especially while filling the system with liquid -- the time when bubbles are likely to be produced -- and thereby avoiding problems due to their presence.
To control the liquid flow during a Cs scan, the drive buffer vessel (BV) level must be measured in the presence of a magnetic field of up to 50~Gauss from the ATLAS toroid magnet.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.26\textwidth]{cslevelmeter}}\quad
\subfloat[]{\includegraphics[width=0.70\textwidth]{cslevelhall2}}
\caption{(a) The buffer vessel's optical level meter with its control module. (b) The upgraded Hall-sensor level meter --- the float, the line of hall sensors and the display.}
\label{fig:levelmeter}
\end{figure}
Originally, the level meter (LM) was based on optical measurements. A small test volume, directly communicating with the buffer vessel, was equipped with eight infrared optical sensors (Honeywell LLE101000). In these devices, an infrared light beam is transmitted through the outer lens surface when immersed in the liquid, or is reflected back and detected when not. The eight sensors, vertically arranged in the test volume, provided a coarse but sufficiently informative measurement of the level of the liquid. An assembled level meter with its 3U control unit is shown in figure~\ref{fig:levelmeter}a. Eight LEDs turned red or green depending on their location above or below the liquid level in the LM, and changed colour whenever the liquid volume in the buffer vessel changed by about 200~ml. The control unit reflected the LED display state and sent digital information on the current level meter status to the central CPU via CAN bus. The level meter worked well; however, its precision was insufficient to quickly detect possible liquid level changes, and the optical sensors occasionally returned false readings.
To overcome these issues, the level meter was later upgraded to a version sensitive to changes of less than 25~ml, based on Hall effect sensors and a magnetic float. A narrow PCB of about 170~mm length, with 32 Hall effect sensors (A1301EUA-T from Allegro MicroSystems) spaced by 5.5~mm, is inserted into a stainless steel tube equipped with a circular float with 4 embedded magnets. The sensors are read out by a board with a micro-controller (STM32F205) that translates the readings of the individual sensors into the liquid level, separating the float signal by fitting the background. The results of the measurements, together with the raw readings, are transferred to the 3U display card via SPI bus. For convenience, the float level is also displayed on an LED strip of the drive unit. The data from several sensors around the float position allow the background from the local environment's magnetic field to be subtracted, improving the resolution of the level measurement beyond the spacing of the individual Hall sensors (20~ml instead of 50~ml). The Hall effect sensor level meter float and the sensor board are shown in figure~\ref{fig:levelmeter}b.
\subsection{Liquid radioactivity monitors}
As a part of the radiation safety measures implemented in the Cs source system, an early warning of any leak from the Cs sources into the working liquid is provided by instrumentation that detects any unexpected radiation in the working liquid.
Two radiation detectors installed in the liquid storage system (figure~\ref{fig:radmon}) record the gamma-ray energy spectra of the stored working liquid. One unit monitors the LB storage tank, the other the EBA and EBC.
A 2$\times$2~inch NaI(Tl) crystal (\O50.8~mm, 50.8~mm thick) produced by Bicron (Saint-Gobain), coupled with an ETI 9266B PMT, is used as a gamma-ray total absorption detector. The signals are processed by a standard electronic chain consisting of a multiplexer, a CAEN N968 preamplifier and a CAEN N957 analyser. The PMT high voltage is provided by a standard CAEN N470 HV power supply. The analyser runs in self-triggering mode and the data are read out via USB.
The naturally occurring 1.46~MeV $\gamma$-ray line of $^{40}$K, obtained from a small sample of K$_2$O from commercial fertiliser permanently deployed in front of the NaI(Tl) crystal, is used to calibrate the radiation monitor's energy scale. In figure~\ref{fig:radmon}, a $\gamma$-spectrum showing the two peaks is presented, recorded with the detector assembly placed in the vicinity of the $^{137}$Cs source on the test bench.
\clearpage
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.25\textwidth]{csrmon1}} \hfill
\subfloat[]{\includegraphics[width=0.64\textwidth]{csrmon3}}
\caption{(a) NaI(Tl) detector installed in the liquid storage system. (b) Energy spectrum measured in a test bench, with $^{137}$Cs source in the vicinity, with zoom on the peaks showing the fit curves. The energy reference is obtained by the 1.46~MeV~$\gamma$-ray peak from $^{40}$K from the fertiliser in the NaI(Tl) detector assembly.}
\label{fig:radmon}
\end{figure}
In addition to online monitoring by the early warning system, periodic chemical analyses of the system liquids are carried out every three months by the CERN Radiation Protection group. No significant radioactive contamination of the drive liquid was detected by either method over the decade-long operation of the Cs monitoring system. Because of the lack of a signal, a precise determination of the sensitivity of the early warning system is not yet available.
\section{DAQ and online software}
\subsection{DAQ architecture}
As already pointed out, the three source subsystems are functionally independent and have identical sets of sensors on the source path, garages, hydraulics drives, distributed front-end electronics (FEE) modules and power supplies. As a result, they have very similar architecture. Figure~\ref{fig:csarch1} schematically shows the layout of the control structure and read-out of one subsystem.
\begin{figure}[!htbp]
\centering
\includegraphics[width=\textwidth]{csarch1}
\caption{The schematics of the control and operation equipment structure of the LB TileCal subsystem. The CAN bus daisy-chain is represented by solid lines, while the dashed lines show power supply connections. The table shows the total numbers of sensors, FEE modules and 3U-control for the entire calorimeter.}
\label{fig:csarch1}
\end{figure}
The sensors are read out by the FEE modules, distributed over the TileCal body, and the service crates, located on the cavern floor. They are interconnected via 50 kBaud CAN bus daisy-chains to the CAN bus Read Out Buffers (RBUF) located in the ATLAS USA15 control room. During system operation, the changing hardware configuration, sensor hits and registered PMT responses record the time-dependent status of the system, including the status of the hardware, the latest changes of its parameters, the conditions of the fluids, the direction, speed and location of the sources, etc. Data flow and communications between equipment components are shown schematically in figure~\ref{fig:csarch2}a.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.45\textwidth]{csarch2}} \quad
\subfloat[]{\includegraphics[width=0.52\textwidth]{csarch3}}
\caption{(a) Schematics of the data flow. (b) Online software structure. Hydra, DAQ and GUI are the main processes active during data taking~\cite{Software}.}
\label{fig:csarch2}
\end{figure}
\subsection{Online software}
The collected status information is recorded and analysed by a number of custom-developed control processes~\cite{Software}, written in C++ and Python, outlined in figure~\ref{fig:csarch2}b. They use libraries provided by the ATLAS TDAQ~\cite{TDAQ}, namely, VME libraries~\cite{VMElib} for hardware access, IS~\cite{IS} for data exchange and ERS~\cite{ERS} for error reporting. The HYDRA process takes care of all mechanical operations: it runs the module scan according to the pre-programmed scenario via a specific interface to the 3U-control crates and synchronises the relevant readout subprocesses. The DAQ process cyclically reads the currents of the PMTs, module by module, at the design frequency of 90~Hz according to the position of the source, provided by HYDRA, and sequentially stores the PMT data.
In an auxiliary mode, a Detector Control System (DCS) process, in common with the entire TileCal DCS~\cite{DCS}, retrieves the essential information on the HV and LV for each module, using the DIM communication package~\cite{DIM}, and publishes the data and the state of the sensors.
In addition to the already mentioned HYDRA, DAQ and DCS processes, the software includes other general components:
\begin{itemize}
\item The ATLAS Information Service, used for communications;
\item Embedded scripting facilities for program logic and planned actions, containing descriptions of the standard and/or specific actions;
\item GUI~--- a graphical user interface that allows the operator to communicate with the running sub-processes and to visualise the state of the system, the operator's actions and its results;
\item Analysis (Ana) and data recording (Rec);
\item The common TileCal Data Base;
\item The CAN bus branches that control the system's drives and sensors.
\end{itemize}
The total number of sensors amounts to about 500, while the number of FEE cards is about 80 altogether in the three sections of the system. Each of the three 3U crates contains 11 specially designed cards of 6 types to control the communications, pump, valves, level meters, etc.
On-line operations and procedures of the system are designed to maintain full independence of the mechanical and readout functions. Specifically, any capsule can be run through the entire calorimeter independently of the calorimeter readout status, with all the attributes of a data-taking run: full capsule movement control, visualisation and final data-file recording. The source signal readout sequence can be included into the mainstream data flow according to the source movement data reference or following a pre-arranged program. In turn, readout procedures can be run without source movement, for checking and testing purposes, including integrator calibration and pedestal measurements, as described later.
Synchronisation between run control and readout procedures is based on module entry and exit SIN signals: the module entered by the source is included into the readout chain, and when the output SIN sensor gives an exit signal its readout stops. In the normal scan preparation procedure, all modules to be read out are initialised for data acquisition. Normally just one module is read out while the source is inside it -- there are only a few cases in which two neighbouring modules are read out together, because SINs could not be installed between them due to mechanical restrictions in the zones occupied by the electromagnetic calorimeter cryostat supports.
\subsection{Data recording}
The initial, current and final hardware status, the data flow and the integrator readout information are recorded as a raw data file with ROOT structure. This file contains time stamps that document all changes in all the relevant parts of the system: drive, garages, SINs, PSs, all modules channel responses and general information such as run conditions and constants like source ID, integrator gain, readout frequency, HV, LV and drawer internal temperatures, a list of bad channels, etc.
Integrators are read out via 250~kBaud CAN bus at 90~Hz. This readout rate is fast enough to observe the tile-to-tile structure clearly and is slow enough to switch readout to the next channel. The data flow is below 25~kBytes/s and does not pose any significant requirements on the infrastructure.
The raw data structure is organised to allow retrieving the past run procedures. This ``history'' option, together with the corresponding data flow, has been very helpful while developing and debugging the hardware and software.
The data are stored in separate files, corresponding to a run between two garages. The file size ranges from 5~MBytes for Extended Barrel up to 30~MBytes for Long Barrel runs.
\subsection{Scan operations}
Driving the source at $\sim$35~cm/s allows scanning an EB in 3 hours, and the LB in 5 hours. To prevent substantial data losses if a scan cannot be completed, the full scan of a TileCal barrel consists of three separate sub-scans of 20--24 modules each. The physical limits of the sub-scans are determined by the garage locations, so that if a scan must be interrupted, the source can be safely stored in a garage in no more than one hour, reversing the source direction if convenient. The source velocity is easy to change: for instance, when passing through tees, the speed is programmed to increase up to~40~cm/s.
In the LB the source moves in one direction in one run, and in the opposite direction in the next one. EBA and EBC are scanned every time in both directions. The reason for such a scanning mode is that two modules in EBA and EBC can be read only in one direction. Being smaller, an EB scan requires just over half the time of an LB scan; therefore scanning the EB modules in both directions takes about as long as one LB scan in a single direction.
Usually, all three calorimeter sections are scanned in parallel, requiring 6--8 hours altogether including scan initialisation and finalisation procedures. In principle, all scan procedures are fully automatic, but they are performed under operator supervision in case human intervention appears necessary.
\subsection{User interface}
The operator controls the execution of the scan using the graphical user interface (GUI) written in C++ and using the Qt toolkit. Figure~\ref{fig:gui} shows a screenshot of the interface during the on-going source scan. One can see here:
\begin{itemize}
\item The status of the drive pump and valves;
\item The pressure in the drive and at the tee-joints around the calorimeter;
\item The status of the garages, including the state of garage locks and possibly the presence of the source capsule;
\item The status of SIN sensors, represented by circles at the module entrances and exits: empty circles indicate no hit, coloured circles indicate hits with colour coded number of hits and an indication of the time passed from the hit, coded by the level of grey colour;
\item The window giving information on the SIN hits, the direction and the speed of the source;
\item The command window, which provides the operator with the list of scripts used to perform specific manual interventions;
\item A display of the responses of the PMTs in the module being scanned as a function of time.
\end{itemize}
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.95\textwidth]{csgui1}\\
\includegraphics[width=0.95\textwidth]{csguiplots}
\caption{An example of GUI display screen during a Cs-137 scan. The source is moving inside module 30 between garage 2 (G2) and garage 1 (G1).
The middle panel shows the sum of raw signals from all PMTs of the module being read out. The bottom panel is a display of pressure readings in different locations of the calorimeter calibration tubes during several operation phases.}
\label{fig:gui}
\end{figure}
\clearpage
\section{Offline processing}
The source signals from a readout cell sequentially represent the response of the tile rows in a cell traversed by the source. As the source passes through successive tile rows, the PMT responses in turn display pedestal and signal regions, as shown in figure~\ref{fig:pmtresponse}. Appropriate processing of the raw source data provides an accurate estimate of the cell response to the source.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.48\textwidth]{cspmtresp1}} \quad
\subfloat[]{\includegraphics[width=0.48\textwidth]{cspmtresp2}}
\caption{(a) A typical response sequence when the source traverses three tile rows of a cell and (b) the signal from one of the PMTs, covering part of a tile row. The pedestal and signal regions are clearly visible, as well as the regular tile structure in (b). On the abscissa, the number of readouts (at the 90~Hz rate) is shown.}
\label{fig:pmtresponse}
\end{figure}
A conceptual picture of the sharing of the gamma-ray energy between a pair of adjacent scintillator tile rows in the same cell of any module is shown in figure~\ref{fig:radsharing}a. Due to the geometry of the tiles and the location of the source tube holes within tiles, the energy of the $^{137}$Cs gamma rays is deposited into adjacent tile rows in a ratio of about 22/78. In the figure, 78\% of the gamma-ray energy is deposited in the top tile row (at a smaller radius from the colliding beams) and 22\% in the bottom tile row. The cell signal consists of the sum of the two tile rows. A particular case occurs when the tile row at the larger radius is the last one in a cell; then the cell signal contains on average only 78\% of the energy, because 22\% is deposited outside the cell limits. A detailed discussion of this effect and of the procedure adopted to correct it is given later in this section.
The response of an individual tile, traversed by the source, is accurately parametrised by a sum of a Gaussian and an exponential:
\begin{equation}
g\, e^{-\frac{1}{2}\left((x_i-x_0)/\sigma\right)^2} + (1-g)\, e^{-\lvert x_i-x_0 \rvert/\lambda}
\label{eq:csfit}
\end{equation}
where $0<g<1$ is the fraction of the integral of the parametrising function represented by the Gaussian, $x_i$ is the instantaneous coordinate of the source capsule and $x_0$ is the coordinate of the centre of the tile.
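For illustration, the parametrisation of equation~(\ref{eq:csfit}) can be coded directly as below; the parameter values in the usage note are hypothetical, not fitted TileCal values.

```python
import math

def tile_response(x, x0, g, sigma, lam, amplitude=1.0):
    """Gaussian-plus-exponential tile response, as in the text.

    x     -- instantaneous coordinate of the source capsule
    x0    -- coordinate of the tile centre
    g     -- Gaussian fraction, 0 < g < 1
    sigma -- Gaussian width
    lam   -- decay length of the exponential tails
    """
    gauss = g * math.exp(-0.5 * ((x - x0) / sigma) ** 2)
    tail = (1.0 - g) * math.exp(-abs(x - x0) / lam)
    return amplitude * (gauss + tail)
```

By construction the function peaks at $x=x_0$ and is symmetric in $x-x_0$, which is what the RC correction described below is meant to restore in the measured data.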
\begin{figure}[!htbp]
\centering
\begin{minipage}{0.56\textwidth}
\subfloat[]{\includegraphics[width=\textwidth]{cs7822}}
\end{minipage}
\begin{minipage}{0.40\textwidth}
\subfloat[]{\includegraphics[width=\textwidth]{csb11-2}} \\
\subfloat[]{\includegraphics[width=\textwidth]{csb11-1}}
\end{minipage}
\caption{Gamma-ray energy sharing between two adjacent tile rows (a). An example of the response measured from two tiles separated by ten 18.2~mm periods: the ``78\%+22\%=100\%'' case, showing the response from the same two tiles when the source passes through a tube within the two tiles (b); the ``22\%'' case, when the source passes in a tube located in the adjacent tile row (c). The responses are fitted with a sum of a Gaussian and an exponential.}
\label{fig:radsharing}
\end{figure}
The fit parameters were evaluated with specially-designed tests wherein a specific set of WLS fibres were coupled to the tiles whose individual response was under study. As an example, figures~\ref{fig:radsharing}b and \ref{fig:radsharing}c present the case where the source passes in the tile row at the next smaller radius with respect to the tile row being read-out (``22\%'' case) or through it (``78\%''), together with the fitted response functions, given in units of the TileCal periodic structure of 18.2~mm. Note that several data points are recorded for each 18.2~mm period.
These are the PMT response measurements taken as the source moves along the calibration tubes; consecutive points are spaced by the distance the source travels between readouts, i.e.\ the source velocity divided by the readout frequency of 90~Hz: $35/90 \approx 0.4$~cm.
The first correction applied to the raw data reconstructs, to a good approximation, the instantaneous PMT current, largely removing the distortion caused by the front-end electronics: a charge amplifier with a low-pass RC circuit with a time constant of typically about 10~ms.
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.5\textwidth]{cstau}
\caption{Time dependence of the response of a single tile: ideal and recorded by the integrator's low-pass RC input circuit.}
\label{fig:taucorr}
\end{figure}
Figure~\ref{fig:taucorr} schematically shows the distortion of the signal from a single tile. The effect of the change of the peak's shape, due to the integrator's RC circuit, shifts the coordinates of the signal's centre of gravity, thereby producing a bias that depends on the direction of the source movement and the characteristics of the RC circuit.
The correction of the tile line shapes is performed using a simplified formula that reverses the effect of the RC filter by correcting the signal by an amount proportional to its derivative at every measured point. The correction is:
\begin{equation}
A_i \rightarrow A_i + \delta\times(A_i-A_{i-1})
\label{eq:taucorr}
\end{equation}
where $A_i$ and $A_{i-1}$ are measurements adjacent in time and $\delta$ is set empirically, depending on the actual RC circuit value and the readout frequency.
The asymmetry of the responses also shows up as asymmetric tails at the beginning and the end of the sequence of signals from each PMT, depending on the direction of the source movement.
Numerous high-statistics test scans using the actual front-end amplifiers led to a fitted value for $\delta$ of 0.7$\pm$0.1 at the default 90~Hz readout rate. This correction makes the response curve symmetric to a good approximation, independently of the direction of the source movement.
Most importantly, the correction sharpens the peak/valley ratios of the PMT response curves with respect to the raw data shown in figure~\ref{fig:pmtresponse}b; hence it is very useful for precisely evaluating the response of individual tiles.
After this correction, pedestal values, recorded when the source is out of the cell being measured, are calculated and subtracted. After pedestal subtraction, any slightly negative values are set to zero, and the data are subjected to further treatment.
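The correction of equation~(\ref{eq:taucorr}) amounts to a single backward-difference pass over the recorded samples. The sketch below assumes plain Python lists and uses the $\delta=0.7$ value quoted in the text for the default 90~Hz readout rate.

```python
def rc_correction(samples, delta=0.7):
    """Approximately undo the low-pass RC distortion: each sample is
    boosted by delta times its backward difference,
        A_i -> A_i + delta * (A_i - A_{i-1}).

    delta = 0.7 is the empirically fitted value quoted in the text
    for the default 90 Hz readout rate.
    """
    corrected = [samples[0]]               # first sample has no predecessor
    for a_prev, a_i in zip(samples, samples[1:]):
        corrected.append(a_i + delta * (a_i - a_prev))
    return corrected
```

Since the correction is proportional to the local derivative, it boosts rising edges and suppresses the exponential tails left by the RC filter, restoring an approximately symmetric line shape regardless of the direction of the source movement.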
\subsection{Pattern recognition and tile row response calculations}
Calculating tile row responses is the first step following the ones just described. Then a cell's mean response is calculated averaging the tile row responses.
A typical readout sequence from the third row of an A-sample cell (78\% case) is shown in figure~\ref{fig:cstilerow} after applying all corrections described in the previous subsection. The units on the abscissa are the calorimeter 18.2~mm periods with one scintillating tile per period.
\begin{figure}
\centering
\includegraphics[width=0.9\textwidth]{cstilerow}
\caption{An example of a tile row response sequence taken at the 90~Hz readout frequency and presented here as a function of tile position. The line labeled <Int> gives the mean response calculated with the ``integral'' method and the line labeled <A> is the response obtained with the ``amplitude'' method.}
\label{fig:cstilerow}
\end{figure}
The cell edges are shown by the vertical lines; each of the 14 tiles of this tile row displays a clearly identifiable peak. One can see that the total response is the sum of the overlapping individual tile responses. In physics runs, the tile row response corresponds to the energy lost by particles in this tile row.
Two complementary approaches are used to evaluate the tile row responses. They are referred to as ``integral'' and ``amplitude'' methods. The two methods and their results are illustrated in figure~\ref{fig:cstilerow}.
The integral approach works by adding all the response points in the three regions labelled S0, S1 and S2; the sum is divided by the width of the distribution, labelled T2-T1. The proper setting of the cell edges is based on an appropriate determination of the tails and the correct count of tile peaks. The result of the integral method calculation is a mean response (<Int>) of the tile row of this cell. The method is fast and in most cases, the results are stable at the level of 0.2--0.3\%, estimated from repeatability tests.
The amplitude approach uses the coordinates of each tile in a tile row and calculates the sum of overlapping individual tile responses, determined as follows. A MINUIT fitting procedure, using equation~\ref{eq:csfit}, is applied to sliding regions that cover, in successive steps, intervals of 5 to 10 periods with step-to-step overlaps of 3 to 5 periods. The individual tile responses (the ``amplitudes'' evaluated at the positions of their peaks) are shown by the black points in figure~\ref{fig:cstilerow}. Their mean is the mean row response, labelled <A>. These calculations are rather complicated and CPU-intensive, besides needing a good response parametrisation, but produce detailed pictures of the calorimeter's internal structure and of the positions of tile row edges, which are useful in case of module edge effects and tiles of special shape (cut tiles).
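A minimal sketch of the ``integral'' estimate is given below (the ``amplitude'' method would instead fit equation~(\ref{eq:csfit}) in sliding windows); the cell-edge indices are assumed to be already determined by the pattern recognition, and the samples to be pedestal-subtracted.

```python
def integral_response(samples, t1, t2):
    """``Integral'' estimate of the mean tile-row response.

    All response points in the regions S0, S1 and S2 are summed, and the
    sum is divided by the width of the distribution (T2 - T1), as in the
    text.

    samples -- pedestal-subtracted readings covering S0 + S1 + S2
    t1, t2  -- sample indices of the cell edges (the T1/T2 markers)
    """
    return sum(samples) / (t2 - t1)
```

Including the tails S0 and S2 in the sum while normalising only by the cell width is what makes the estimate sensitive to a correct determination of the cell edges, as noted in the text.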
Indeed, properly reconstructing the Cs source signals in a tile row is not always as simple as shown in the previous figure. As already mentioned, about 22\% of the source gamma-ray energy leaks into the tile at a larger radius. The light is collected by the WLS fibres coupled to the tile sides and transported to the PMTs, where the light from all the tile rows belonging to the same cell side is added up. If the cell has 3 tile rows, in two cases the cell picks up this 22\% leak, but when the source goes through the row at the outermost radius, the leak goes to the other cell. Also, if all the tile rows had exactly the same leak of the Cs radiation to the next-outer tile row, the overall cell response could be calculated quite easily, but in most cases it is not so.
Furthermore, adjacent tile rows may have different numbers of tiles, leading to an additional distortion of individual peaks, and of the overall picture. Another feature of the tile layout that must be taken into account when reconstructing individual tile responses is that tiles in adjacent rows are shifted by half of a period (9.1~mm).
Figure~\ref{fig:tilerow7822} presents two typical cases in which these features must be considered in order to properly extract the response of each tile row from the Cs scan data, eliminating the complications due to the geometry of the particular cell. The first case is when the cell has a thick steel end-plate, causing the distortion of the result at the edge of the cell. The second case shows the influence of the adjacent tile row that is shifted in Z versus the tile row where the capsule is flowing, making the disentanglement and correct amplitude determination particularly complicated.
For this purpose, dedicated measurements were made to determine precisely the ``78/22'' ratio for different tile rows, and to establish how to correct for it when calculating the tile row responses with the integral and the amplitude methods.
The measurements were made setting up special Cs source signal readout configurations for all eleven tile rows. In addition, the data from all modules underwent specific analysis procedures in order to obtain complementary information from each tile row.
In figure~\ref{fig:tilerow7822-2} ``integral'' ratios are shown for each tile row in which the two signal components are present and were separated. In panel (a), the ratios were extracted from Cs source runs through a specially instrumented TileCal module. In (b), the ratios were extracted from normal source data from all 64 LB, EBA and EBC modules.
The first approach provides a direct measurement of the ratios but is affected by systematic uncertainties arising from the optical properties of the particular module used. The second has much larger statistics, but here the ratios are evaluated from an overall fit, subject to the errors due to the algorithms used.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.48\textwidth]{cstilerow7822-1}} \quad
\subfloat[]{\includegraphics[width=0.48\textwidth]{cstilerow7822-2}}
\caption{Two examples of sum distributions from adjacent tile rows where the ``22\%'' leak takes place. The X-axis shows the 18.2~mm tile/steel period number. The top two curves show the measured and the fitted cell response, while the middle curve shows the fitted curve with subtracted ``22\%'' response from the adjacent tile row, denoted by the bottom curve. Circles denote the individual tile reconstructed amplitudes, triangles show the reconstructed ``22\%'' response from the adjacent tile row. (a) The effect of the 2~cm thick end-plate (EP) is shown, as the rising curve at the right bottom. (b) Row displacement coming from the cells structure (B and C rows) causes a large distortion of the resulting response curve, visible as the "hump" on the right, making the determination of the cell edge location very difficult and the responses balance problematic.}
\label{fig:tilerow7822}
\end{figure}
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.45\textwidth]{cstilerow7822-3}} \quad
\subfloat[]{\includegraphics[width=0.46\textwidth]{cstilerow7822-4}}
\caption{Experimentally measured ratios of the leaked and absorbed gamma ray energies. (a) The ``22/78~ratio'' measured using a specially equipped module with single tile readout. (b) Calculation of the energy leak factor using data analysis of the production modules in the detector.}
\label{fig:tilerow7822-2}
\end{figure}
The ratios for the smaller tiles (\#1--3) look slightly higher than for the other ones; however, the differences are within the statistical and systematic errors. The effect may be associated with the tile volume, because of a smaller ``78\%'' absorbed fraction in the smaller tiles. It is encouraging that the two methods give very similar results, compatible within errors. The mean value of the leak over the ``B9'', ``BC'' and ``D'' cells, where rather direct measurements are available, equals 0.261$\pm$0.012, corresponding to a split ratio of 20.7\%/79.3\%.
When using the integral approach, the ``22/78 ratios'' depend on the tile row, while with the amplitude approach the variation with tile rows is smaller than the errors, hence it is reasonable to use the fixed 0.261 ratio. The two correction methods are complementary: with the integral method, the cell edge is sometimes hard to locate precisely, whereas with the amplitude method, based on a constrained fit function, there is no such problem. Furthermore, the latter method's precision in determining single tile responses is about 2\%, which is amply sufficient to estimate the uniformity of the full modules. The amplitude method is CPU-expensive and does not offer a significant improvement with respect to the integral method, except for the C10 cells (with only 5 periods and without the end-plate); therefore the integral method is normally used for calibration and equalisation. On the other hand, the amplitude method was very useful during instrumentation, because it immediately revealed which fibre was broken and had to be fixed, while the CPU time required to reconstruct the signal in just one module is insignificant.
\subsection{Cell response evaluation}
After taking care of the corrections just described, the response of a readout cell is simple to calculate. As shown in figure~\ref{fig:radsharing}, the source typically traverses a tile through the hole at the larger calorimeter radius, depositing the ``78\%'' fraction in that tile row. Therefore within each cell, the signal in the tiles at the larger radius is not mixed with the signal from the adjacent tile row, which belongs to the next cell. It is convenient to adopt this signal for every tile, subtracting the calculated ``22\%'' leakage from the tile at a larger radius. After obtaining the ``78\%'' signals for every tile row in a cell, the overall cell response is calculated as the mean of these signals weighted by the volume of each tile row. There is no loss of information in not correcting for the ``78\%'' fraction in calculating the cell responses because they do not propagate any further than uniformity estimates.
It is useful to parametrise the uniformity of the response of tiles in a row, of rows within a cell, and of cells within a module. For the purpose of quality checks, two different parameters are found to be useful. They are referred to as ``instrumentation'' and ``physical'' uniformities. The instrumentation uniformity is the RMS/mean value of the responses, obtained with the amplitude method, of all the individual tiles in a cell. The physical uniformity is the RMS/mean value of the mean tile row responses within a cell; it is relevant to the quality of the response of the calorimeter cells in a module to hadronic showers. Figure~\ref{fig:tilecellresponse} shows an example of the data used to perform these calculations.
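The cell response and the two uniformity figures reduce to a volume-weighted mean and an RMS/mean ratio. A minimal sketch, assuming the per-row ``78\%'' responses and the tile row volumes are already available:

```python
def cell_response(row_means, row_volumes):
    """Overall cell response: mean of the per-tile-row ``78%'' signals,
    weighted by the volume of each tile row, as described in the text."""
    total = sum(row_volumes)
    return sum(r * v for r, v in zip(row_means, row_volumes)) / total

def uniformity(values):
    """RMS/mean ratio, used for both the ``instrumentation'' uniformity
    (responses of all individual tiles in a cell) and the ``physical''
    uniformity (mean tile-row responses within a cell)."""
    n = len(values)
    mean = sum(values) / n
    rms = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return rms / mean
```

The same `uniformity` figure of merit applies at either granularity; only the input list changes (individual tiles for the instrumentation uniformity, tile-row means for the physical one).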
\begin{figure}[!htbp]
\includegraphics[width=\textwidth]{cstilecellresponse}
\caption{An example of the uniformities observed in an A-sample cell: tile responses belonging to one of the 3 tile rows are shown separately by squares, triangles and circles. The three horizontal lines are the averages in each tile row. From the variation of the response of all individual tiles in a cell, the instrumentation uniformity is calculated. From the values of the three horizontal lines, the physical uniformity of cells in a module is calculated. Here and in similar figures, the tube number is the tile row number.}
\label{fig:tilecellresponse}
\end{figure}
\section{System use and results}
The Cs calibration and monitoring system was used to check the quality of the prototype and production modules of TileCal and to set the electromagnetic energy scale (EM-scale)\footnote{In a calorimeter only some part of hadron shower energy, deposited in the sensitive parts of the detector, is responsible for the formation of
calorimeter response. This part is called visible energy, or energy in the electromagnetic scale (EM-scale).} of the calorimeter and the dynamic range of its readout system (1996--2004)~\cite{EM}. Since the installation of TileCal in the ATLAS cavern, together with the overall TileCal monitoring system, the Cs system is used to monitor and analyse the slow variation in time of the response of its various optical and readout components. In addition, it was used to study certain expected or unexpected phenomena, such as the effect of the ATLAS magnetic field on the TileCal signals and a slow drift of the PMT response, observed while testing the early module prototypes.
These uses of the Cs system are illustrated in this section.
\subsection{Checks of the module optical instrumentation}
The optical instrumentation stage in the construction of modules is described in detail elsewhere~\cite{Instrumentation}. Its steps are briefly recalled here to clarify the extent of the quality checks performed with the Cs system. This stage consisted of the insertion of tiles and fibres into the module steel structure, defining the readout cells by combining selected fibres into bundles, and coupling the fibre bundles to PMTs. The LB modules' mechanical structures, assembled at JINR, were delivered to CERN and fully instrumented there, while the EBA modules were assembled and instrumented at Argonne National Laboratory (ANL) and Michigan State University (MSU) in the United States, and the EBC modules were assembled and instrumented at IFAE in Barcelona, Spain.
The main goal of the optical instrumentation checks was to verify the proper quality of the entire optical path, tiles to fibres to PMTs, and the correctness of the pattern of cells. It included the following steps:
\begin{itemize}
\item Running the $^{137}$Cs $\gamma$-source through the module with the system's hydraulic equipment;
\item Calculating all the individual tile responses, thereby obtaining a maximally granular picture of the module and of its quality;
\item Checking the fibre positioning in the cells, the polishing of the fibre bundles, the fibre glueing, the scintillator quality and the fibre quality (looking for cracks, etc.);
\item Repairing, when possible, all individual tile responses deviating by more than 25\% from the tile-row mean;
\item Repeating the Cs runs after all repairs;
\item Storing, as the final certification step, the final module response map and the overall uniformity figures in the database, together with information on any observed faults or irregularities.
\end{itemize}
Figure~\ref{fig:instrumentation1} shows an LB calorimeter module ready for a Cs scan and an example of a quality-check plot that reveals a bad fibre coupling, worthy of repair.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.30\textwidth]{cstileinstr1}} \quad
\subfloat[]{\includegraphics[width=0.60\textwidth]{cstileinstr2}}
\caption{LB module at CERN instrumentation site under tests with Cs source (a). The calibration tubes and SIN sensors are visible. Example of a defective tile-to-fibre coupling (b)~\cite{Instrumentation}.}
\label{fig:instrumentation1}
\end{figure}
These checking and correction procedures noticeably improved the quality of the instrumentation and helped to achieve the overall goal of an RMS/mean spread of the physical uniformity in any module better than 10\%. Figure~\ref{fig:instrumentation2} shows the physical uniformity of the LB modules {\it vs.} their sequential production number. Its value was kept well below the 10\% goal, despite appreciable drifts of the quality over the instrumentation time span (1999--2002). TileCal was assembled in the cavern in 2004--2006. The LB module uniformity measurements were repeated in 2011, showing good general agreement with the results at the time of module production and excellent optical integrity of the LB.
\begin{figure}[!htbp]
\includegraphics[width=\textwidth]{cstileinstr3}
\caption{LB module uniformity measured with the Cs system, just after the final certification (circles) and ten years later (crosses). The physical uniformity of the cells of all modules is better than 10\%~\cite{Instrumentation}.}
\label{fig:instrumentation2}
\end{figure}
\subsection{Test beam calibrations: setting the EM energy scale}
The first hydraulic Cs source system was tested in 1996, on the prototype LB module. In 1997, two prototype EB modules (EBA and EBC) were under beam tests, and the Cs system was used to calibrate and monitor them. After regular TileCal module production started in 1999, one of every eight modules was exposed to electron and hadron test beams, in order to measure its response to high-energy particles. In parallel, extensive response stability checks were performed over the entire prototype and production module construction period (1996--2004).
Figure~\ref{fig:testbeam1} shows the Tile Calorimeter modules on a scanning table at the ATLAS combined test beam during the 2004 runs, with the hydraulic Cs scanning system. Calibration tubes, sensors and source garage are clearly visible.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.48\textwidth]{csmodtb2004}} \quad
\subfloat[]{\includegraphics[width=0.48\textwidth]{csgartbold}}
\caption{Tile Calorimeter modules on the scanning table at the ATLAS test beam, equipped with the Cs source calibration system (a). The Cs garage prototype at the test beam (b).}
\label{fig:testbeam1}
\end{figure}
The electron test-beam runs on one out of every eight LB, EBA and EBC modules~\cite{Testbeam} allowed the scale of the response of TileCal modules to electromagnetic energy deposits to be set, by relating the charge measured by the readout system to the energy deposited in the calorimeter. The mean charge-to-energy conversion constant is 1.050$\pm$0.003 pC/GeV.
By scaling with the response of each module to the Cs sources, the energy calibration obtained from the modules measured at the test beam was extended to all modules.
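The extension of the test-beam EM scale to modules not exposed to beams can be written schematically as follows (the notation here is illustrative rather than taken from the cited analyses):
\begin{equation*}
E = \frac{Q}{C_{\mathrm{pC/GeV}}^{m}}, \qquad
C_{\mathrm{pC/GeV}}^{m} = C_{\mathrm{pC/GeV}}^{\mathrm{TB}}\,
\frac{R_{\mathrm{Cs}}^{m}}{R_{\mathrm{Cs}}^{\mathrm{TB}}},
\end{equation*}
where $Q$ is the charge measured in a module $m$, $C_{\mathrm{pC/GeV}}^{\mathrm{TB}} = 1.050$~pC/GeV is the mean conversion constant measured at the test beam, and $R_{\mathrm{Cs}}$ denotes the response to the Cs source.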
\subsection{Studies of module response stability}
Long-term monitoring of the TileCal prototype modules with the Cs system allowed the detection of a gradual signal loss, at a rate of about 1\%/month, much in excess of the $^{137}$Cs source decay rate. Subsequent intensive tests and source scans helped to pin the effect down to a loss of photocathode response with accumulated light~\cite{PMT}. Reporting this effect to the manufacturer, Hamamatsu Photonics, led to the development of a more stable version of the R7877 PMTs ahead of the full detector construction.
\subsection{Response equalisation and monitoring}
The TileCal PMT voltages are set so that their linear dynamic range extends to 2~TeV/cell. In the process, running the Cs source through the entire volume of the calorimeter allows all cell responses to Cs signals to be equalised at a level of about 1\%. The process requires 2--3 iterations, in which the measured cell responses are used to recalculate the next HV corrections, so as to obtain precisely the desired response for all cells. The different activities of the sources used in the calorimeter sections are taken into account. Figure~\ref{fig:equal} shows the distributions of the normalised responses of all the readout channels, except for a few dead ones, at different times: when the equalisation was made and several years later. In between these two measurements, the PMT HV settings were not changed, but the response of the entire TileCal was monitored with Cs source runs, and the slowly changing response measurements were used to update the calorimeter's energy scale.
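The iterative HV adjustment can be sketched as follows (an illustrative model, not the exact procedure of the cited works). Assuming a PMT gain that depends on the applied voltage as $G \propto V^{\beta}$, with $\beta \approx 7$ typical for the TileCal PMTs, a cell measuring a Cs response $R_{k}$ at iteration $k$ receives the corrected voltage
\begin{equation*}
V_{k+1} = V_{k} \left( \frac{R_{\mathrm{target}}}{R_{k}} \right)^{1/\beta},
\end{equation*}
which converges to the desired response $R_{\mathrm{target}}$ within the quoted 2--3 iterations, since $\beta$ is only approximately known for each tube.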
\begin{figure}[!htbp]
\centering
\includegraphics[width=0.95\textwidth]{csequal12}
\caption{Entire TileCal normalised response distributions just after the initial equalisation (2009) and four years later (left), and the same distributions after another equalisation (2015) and three years later (right). One can see the shift of the mean and the widening of the distributions due to changes in the PMT performance and detector response.}
\label{fig:equal}
\end{figure}
Just after the initial equalisation step, channel responses all agree within about 2\%, and an overwhelming majority of them within 0.5\%. After a few years, the dispersion of the responses has grown to almost 3\%. The mean of the response, normalised to the decay of the source, is also observed to change, partly corresponding to the so-called ``up-drift'' of the PMT response. The average value immediately after the equalisation is slightly above 1.0 because the equalisation was done without the magnetic field, which affects the scintillator performance (see section~\ref{sec:mfe}), while the measurement shown in the plot was done with the magnetic field switched on.
As both the scintillators and the PMTs change their response in time, due to ageing, irradiation damage in the scintillators and WLS fibres, and charge accumulation at the PMT anode, the performance of the calorimeter must be continuously monitored with the Cs system. By taking several Cs scans per year, one can study the evolution of the response of the TileCal cells in time.
Comparing in figure~\ref{fig:stability} the behaviour of the responses between the relatively low luminosity in 2011--2012 (Run~1~\cite{TileRun1}) and the much higher luminosity conditions in 2015--2018 (Run~2), one sees a much larger spread and a stronger degradation of the cell responses in the latter case. There is an evident dependence of the amount of degradation on the layer and $\eta$ of the cells, which follows the distribution of the particle flux and hence the amount of charge seen by the PMTs. The most affected parts are layer A, closest to the beam pipe, and the region between the Long and Extended Barrels.
A closer look at the variation in time of the mean and dispersion of the responses of layer A, closest to the beam pipe, in figure~\ref{fig:stability2} shows that both values change more significantly during the higher-luminosity periods (bottom plot). A more detailed analysis of this performance is presented in Ref.~\cite{TileRun1}.
\begin{figure}[!tbp]
\centering
\includegraphics[width=0.85\textwidth]{cs_drift_2011_2012} \\
\includegraphics[width=0.85\textwidth]{cs_drift_2015_2018} \\
\caption{The drift from the expected response of the cells to the $^{137}$Cs source vs. $\eta$ of the cell for two run periods: 2011--2012 (Run 1) at the top and 2015--2018 (Run 2) at the bottom. Most of the drift comes from the change of the PMT response.}
\label{fig:stability}
\end{figure}
\begin{figure}[!tbp]
\centering
\includegraphics[width=0.9\textwidth]{cs_stability} \\
\includegraphics[width=0.9\textwidth]{cs_deviation} \\
\caption{The behaviour in time of the mean responses of TileCal cells to $^{137}$Cs and their dispersions in three longitudinal layers (top) and the deviation of the average cell response from expected values (bottom).}
\label{fig:stability2}
\end{figure}
This behaviour demonstrates the need for regular, approximately monthly, Cs calibrations of the entire calorimeter, in order to adequately correct the drift of its response to particles from LHC collisions.
A detailed study of calorimeter response was performed in combination with other TileCal calibration systems~\cite{Calibrations}.
\clearpage
\subsection{Magnetic field effect}
\label{sec:mfe}
Effects of the magnetic field on the calorimeter's response may arise from two separate phenomena: changes in the response of the PMTs or changes in the light yield of the scintillators~\cite{MagnetB,MagnetM}. In TileCal, the former is expected to be small or not observable, because the PMTs were carefully shielded from the residual magnetic field at their location. The observed significant magnetic field effect is therefore likely to arise from the second phenomenon.
The Cs monitoring system allows detecting response variations at the row or cell level with an accuracy of 0.5\% or better. With sufficient statistics from runs with and without magnetic field, it was possible to detect the effect of the ATLAS magnetic field.
As an example, two sets of response measurements for a tile row of an A-cell are shown in figure~\ref{fig:mfe}a. Typically, as in this example, the response to the $^{137}$Cs source signal is roughly 1\% higher when the magnetic field is on. The effect is more apparent during the first 1200 days, before the higher-luminosity runs that temporarily decreased the response of the PMTs; it can be seen again after day 1400.
The effect of the magnetic field on cells in different layers of the LB, EBA and EBC sections, as a function of the position of the cells along the colliding-beam axis, is shown in figure~\ref{fig:mfe}b. The effect is largest in the outermost layer D, which is closest to the coils of the toroidal magnet. Such maps can be produced module by module and cell by cell, providing a very detailed 3D view of the effect of the magnetic field at any point of TileCal.
\begin{figure}[!htbp]
\centering
\subfloat[]{\includegraphics[width=0.95\textwidth]{csmfe1}}\\
\subfloat[]{\includegraphics[width=0.95\textwidth]{csmfe2}}
\caption{The effect of the ATLAS magnetic field. (a) The two sets of measurements of A-layer cells with (blue squares) and without (green triangles) magnetic field. The sets are fitted with a curve taking into account the combined effect of the Cs isotope decay and the PMT gain up-drift. (b) Distribution of the magnetic field on--off differences for cells at positions along the colliding-beam axis Z, for different layers of the calorimeter; cells C10 and D4, which are outside of the calorimeter end-plate, are shown separately.}
\label{fig:mfe}
\end{figure}
\section{Summary and conclusions}
This article gives a comprehensive description of the calibration and monitoring system for the ATLAS Tile Calorimeter, based on movable radioactive $^{137}$Cs sources. The sources are driven by the flow of a liquid within a system of tubes which traverses every single tile of the detector, supervised by custom-developed hardware and software that allow the position and movement of the radioactive sources to be controlled at any time.
The TileCal signal readout system includes a parallel branch devoted to reading out the signals produced by the $^{137}$Cs sources. The acquisition of these signals is coordinated with the source movement controls; the data are stored in a database for later analysis.
The system measures the response of every single tile over the entire path of the optical signal, up to and including the PMTs, thereby providing information about the inner structure of every one of the about 5000 calorimeter cells. The response is reproducible with a precision of 0.2--0.3\% for a standard cell and of about 2\% for individual tiles, fully adequate for the calorimeter calibration and monitoring goals.
The system was developed during the R\&D phase of the Tile Calorimeter and used to investigate the performance of the module prototypes.
During the production phase of modules, it was used to supervise the optical instrumentation of the LB modules, at CERN, and to check similar construction steps, performed in collaborating laboratories, of the EBA and EBC modules. Throughout this process, the Cs source scans made it possible to detect a number of local faults and to certify their repair. The recorded data show that the response-uniformity goals across the calorimeter were met.
Using the Cs source data as a reference, a number of global settings were made: the response of all TileCal cells was equalised, the dynamic range of the module readout signals was set, the ratio of charge-to-energy deposited in the calorimeter was determined for a sample of modules exposed to test beams, and extended to all modules.
Periodic source scans of the entire calorimeter installed in the ATLAS cavern are performed, with an approximately monthly frequency. These scans allow the stability of the calorimeter and its components to be monitored at global and detailed levels and, most importantly, the overall energy calibration of the detector to be maintained. When compared to data taken after completion of module production, the uniformity of response is seen not to have deteriorated.
Furthermore, because the source signal shows the response of the entire TileCal optical system, Cs source measurements are a very useful tool to investigate several instrumental effects, such as the stability of response of PMTs themselves and the impact of the ATLAS magnetic field on the signals from particles.
Two overall conclusions can be drawn from the two-decade experience with this system:
\begin{enumerate}
\item The chosen liquid-drive method has proved to be extremely robust and fully adequate to the requirements of the calibration and monitoring task.
\item The Cs source system --- hardware, controls, online software and offline analysis tools --- has been of paramount importance in achieving the goals for which the Tile Calorimeter was built and is operated.
\end{enumerate}
Finally, in the coming years the LHC and ATLAS will undergo a series of upgrades, necessitating the upgrade of the TileCal calibration systems, including the caesium calibration system, due to new requirements on radiation hardness, the ageing of the existing electronics, and changes in the front-end electronics. The upgrades will have to increase the reliability of the system and its integration with the calorimeter readout. The new front-end electronics will allow shorter integration times and faster readout. The hydraulic system will have to undergo improvements as well. The design of the new control boards and a path to the hydraulic-system development are outlined in the TileCal upgrade technical design report~\cite{UTDR}.
\acknowledgments
The authors are very grateful to all members of the ATLAS TileCal community who participated in all the discussions, talks, tests and in the intensive use of the Cs monitoring system during several years. They crucially helped in creating a really useful tool for the TileCal optical instrumentation, inter-calibration and monitoring, and for the setting and maintenance of the calorimeter's electromagnetic energy scale.
\clearpage
import matplotlib
matplotlib.use('Agg')
matplotlib.rc('text', usetex=True)
matplotlib.rc('font', family='serif')
import pylab as plt
from astrometry.util.fits import *
from astrometry.util.plotutils import *
import numpy as np
import fitsio
from glob import glob
from wise.allwisecat import *
plt.figure(figsize=(5,4))
plt.subplots_adjust(right=0.95, top=0.98)
np.seterr(all='ignore')  # suppress warnings from log10 of non-positive fluxes (errstate() without 'with' has no effect)
# Read DR5 LegacySurvey catalogs
#L = fits_table('/global/homes/d/dstn/cosmo/data/legacysurvey/dr5/sweep/5.0/sweep-240p005-250p010.fits')
#fns = ['/global/homes/d/dstn/cosmo/data/legacysurvey/dr5/sweep/5.0/sweep-240p005-250p010.fits']
fns = glob('/global/project/projectdirs/cosmo/data/legacysurvey/dr5/sweep/5.0/sweep-[12]*p005-*p010.fits')
L = []
for fn in fns:
    print('Reading', fn)
    L.append(fits_table(fn, columns=['ra','dec','type',
                                     'flux_g','flux_r','flux_z',
                                     'flux_w1','flux_w2','flux_w3', 'flux_w4',
                                     'flux_ivar_g','flux_ivar_r', 'flux_ivar_z',
                                     'flux_ivar_w1','flux_ivar_w2',
                                     'flux_ivar_w3', 'flux_ivar_w4',
                                     'mw_transmission_g','mw_transmission_r',
                                     'mw_transmission_z',
                                     'mw_transmission_w1','mw_transmission_w2',
                                     'mw_transmission_w3', 'mw_transmission_w4',]))
L = merge_tables(L)
print(len(L), 'LegacySurvey sources')
L.cut((L.ra > 120) * (L.ra < 250))
print('Cut to', len(L), 'in RA 120-250')
L.writeto('/global/cscratch1/sd/dstn/ls.fits')
dlo=L.dec.min()
dhi=L.dec.max()
rlo=L.ra.min()
rhi=L.ra.max()
print('RA', rlo,rhi, 'Dec', dlo,dhi)
# Read AllWISE catalog
W = []
for i,(d1,d2) in enumerate(allwise_catalog_dec_range):
    if d1 < dhi and d2 > dlo:
        print('Overlaps part', i+1)
        catfn = '/global/homes/d/dstn/cosmo/data/wise/allwise-catalog/wise-allwise-cat-part%02i-radecmpro.fits' % (i+1)
        C = fits_table(catfn)
        print(len(C), 'sources')
        C.cut((C.ra >= rlo) * (C.ra <= rhi) * (C.dec >= dlo) * (C.dec <= dhi))
        print(len(C), 'kept')
        W.append(C)
W = merge_tables(W)
print(len(W), 'AllWISE catalog sources')
W.writeto('/global/cscratch1/sd/dstn/wise.fits')
from astrometry.libkd.spherematch import match_radec
print('Matching...')
I,J,d = match_radec(W.ra, W.dec, L.ra, L.dec, 4./3600.)
print(len(I), 'matches')
from collections import Counter
CW = Counter(I)
CL = Counter(J)
K, = np.nonzero([(CW[i] == 1) and (CL[j] == 1) for i,j in zip(I,J)])
print(len(K), 'unique matches')
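# Toy illustration (not part of the analysis) of the one-to-one filter above:
# W source 0 matches two L sources, so both of its pairs are dropped, while
# the unambiguous pairs (1,5) and (2,6) survive.
_I = [0, 0, 1, 2]
_J = [3, 4, 5, 6]
_cw, _cl = Counter(_I), Counter(_J)
_K, = np.nonzero([(_cw[i] == 1) and (_cl[j] == 1) for i, j in zip(_I, _J)])
assert list(_K) == [2, 3]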
# Unmatched LS sources
U = np.ones(len(L), bool)
U[J] = False
# Cut to one-to-one unique matches
I = I[K]
J = J[K]
# Compute mags; the WISE bands carry the Vega-to-AB offsets (see values below),
# which are removed here to recover Vega magnitudes
L.w1 = -2.5*(np.log10(L.flux_w1)-9.) - 2.699
L.w2 = -2.5*(np.log10(L.flux_w2)-9.) - 3.339
L.w3 = -2.5*(np.log10(L.flux_w3)-9.) - 5.174
L.w4 = -2.5*(np.log10(L.flux_w4)-9.) - 6.620
L.z = -2.5*(np.log10(L.flux_z)-9.)
L.r = -2.5*(np.log10(L.flux_r)-9.)
L.e_r = 2.5 * np.log10(L.mw_transmission_r)
L.e_z = 2.5 * np.log10(L.mw_transmission_z)
L.e_w1 = 2.5 * np.log10(L.mw_transmission_w1)
L.e_w2 = 2.5 * np.log10(L.mw_transmission_w2)
L.e_w3 = 2.5 * np.log10(L.mw_transmission_w3)
L.e_w4 = 2.5 * np.log10(L.mw_transmission_w4)
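# Sanity check of the magnitude convention above (illustrative, uses no
# catalog data): fluxes are in nanomaggies, for which the AB zero point is
# 22.5 mag, so -2.5*(log10(flux) - 9) == 22.5 - 2.5*log10(flux); e.g. a
# 100-nanomaggy source has m_AB = 17.5.  The W-band values above are
# additionally shifted by the AB-to-Vega offsets (2.699 mag for W1, etc.).
assert abs(-2.5 * (np.log10(100.0) - 9.) - 17.5) < 1e-9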
L.is_psf = np.array([t[0]=='P' for t in L.type])
# Matched
ML = L[J]
MW = W[I]
# Unmatched
UL = L[U]
#WISEAB1 = 2.699 / WISE Vega to AB conv for band 1
#WISEAB2 = 3.339 / WISE Vega to AB conv for band 2
#WISEAB3 = 5.174 / WISE Vega to AB conv for band 3
#WISEAB4 = 6.62 / WISE Vega to AB conv for band 4
loghist(MW.w1mpro, ML.w1, 200, range=((5,19),(5,19)), hot=False, imshowargs=dict(cmap=antigray))
ax = plt.axis()
plt.plot([5,21],[5,21], 'k-', alpha=0.2)
plt.axis(ax)
plt.xlabel('AllWISE W1 mag')
plt.ylabel('Legacy Survey Forced-Photometry W1 mag')
plt.axis([ax[1],ax[0],ax[3],ax[2]])
plt.savefig('w1-matched.pdf')
plt.clf()
lo,hi = 10,23
ha=dict(range=(lo,hi), bins=150, histtype='step', color='b', log=True)
n,b,p1 = plt.hist(W.w1mpro, **ha)
n,b,p2 = plt.hist(L.w1, lw=3, alpha=0.4, **ha)
plt.legend((p1[0],p2[0]), ('AllWISE Catalog', 'LegacySurvey Forced'),
loc='lower left')
plt.xlim(lo,hi)
yl,yh = plt.ylim()
print('Plot limits:', yl,yh)
plt.ylim(10,yh)
#plt.ylim(10,1e5)
plt.xlabel('W1 mag')
plt.ylabel('Number of sources')
plt.savefig('w1-count.pdf')
plt.clf()
I = (ML.is_psf)
ha = dict(nbins=100, range=((0,2.5),(0.5,3)), doclf=False, dohot=False, imshowargs=dict(cmap=antigray),
docolorbar=False)
H,xe,ye = plothist((ML.r - ML.z)[I], (ML.z - ML.w1)[I], **ha)
plt.xlabel('r - z (mag)')
plt.ylabel('z - W1 (mag)')
#plt.title('Catalog-matched PSFs')
plt.savefig('cc-matched.pdf')
print(np.sum(H), 'matched')
# rz = (ML.r - ML.z)[I]
# zw = (ML.z - ML.w1)[I]
# print(np.sum((rz>0)*(rz<3)*(zw>0.5)*(zw<2.5)), 'Matched')
plt.clf()
I = ((UL.flux_w1 * np.sqrt(UL.flux_ivar_w1) > 3.) *
(UL.flux_r * np.sqrt(UL.flux_ivar_r ) > 5.) *
(UL.flux_z * np.sqrt(UL.flux_ivar_z ) > 5.) *
(UL.is_psf))
H,xe,ye = plothist((UL.r - UL.z)[I], (UL.z - UL.w1)[I], **ha)
plt.xlabel('r - z (mag)')
plt.ylabel('z - W1 (mag)')
plt.savefig('cc-unmatched.pdf')
#plt.title('LegacySurvey PSF without AllWISE counterparts')
#plt.title('Additional faint PSF sources')
print(np.sum(H), 'matched')
# rz = (UL.r - UL.z)[I]
# zw = (UL.z - UL.w1)[I]
# print(np.sum((rz>0)*(rz<3)*(zw>0.5)*(zw<2.5)), 'Unmatched')
# plt.savefig('cc.png')
# loghist(ML.z - ML.w1, ML.w1 - ML.w2, 200, range=((-1,5),(-1,5)), hot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('z - W1 (mag)')
# plt.ylabel('W1 - W2 (mag)')
#
# loghist((ML.z - ML.w1)[ML.is_psf], (ML.w1 - ML.w2)[ML.is_psf], 200, range=((-1,5),(-1,5)), hot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('z - W1 (mag)')
# plt.ylabel('W1 - W2 (mag)')
# plt.title('LegacySurvey PSFs matched to AllWISE catalog')
#
# plothist((ML.z - ML.w1)[ML.is_psf], (ML.w1 - ML.w2)[ML.is_psf], 200, range=((0.5,3),(-0.5,0.5)), dohot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('z - W1 (mag)')
# plt.ylabel('W1 - W2 (mag)')
# plt.title('LegacySurvey PSFs (matched to AllWISE catalog)')
#
# I = np.logical_not(ML.is_psf)
# plothist((ML.z - ML.w1)[I], (ML.w1 - ML.w2)[I], 200, range=((0.5,3),(-0.5,0.5)), dohot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('z - W1 (mag)')
# plt.ylabel('W1 - W2 (mag)')
# plt.title('LegacySurvey NON-PSFs (matched to AllWISE catalog)')
#
# plt.subplot(1,2,1)
# I = ML.is_psf
# plothist((ML.z - ML.w1)[I], (ML.w1 - ML.w2)[I], 200, range=((0.5,3),(-0.5,0.5)), doclf=False, dohot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('z - W1 (mag)')
# plt.ylabel('W1 - W2 (mag)')
# plt.title('LegacySurvey PSFs (matched to AllWISE catalog)')
#
# plt.subplot(1,2,2)
# I = np.logical_not(ML.is_psf)
# plothist((ML.z - ML.w1)[I], (ML.w1 - ML.w2)[I], 200, range=((0.5,3),(-0.5,0.5)), doclf=False, dohot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('z - W1 (mag)')
# plt.ylabel('W1 - W2 (mag)')
# plt.title('LegacySurvey NON-PSFs (matched to AllWISE catalog)')
# I = ((UL.flux_w1 * np.sqrt(UL.flux_ivar_w1) > 3.) *
# (UL.flux_w2 * np.sqrt(UL.flux_ivar_w2) > 3.) *
# (UL.flux_z * np.sqrt(UL.flux_ivar_z ) > 3.) *
# (UL.is_psf))
# plothist((UL.z - UL.w1)[I], (UL.w1 - UL.w2)[I], 200, range=((0.5,3),(-0.5,0.5)), dohot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('z - W1 (mag)')
# plt.ylabel('W1 - W2 (mag)')
# plt.title('LegacySurvey PSFs (UNmatched to AllWISE catalog)')
#
#
# # In[86]:
#
# plothist((L.z - L.w1)[L.is_psf], (L.w1 - L.w2)[L.is_psf], 200, range=((0.5,3),(-0.5,0.5)), dohot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('z - W1 (mag)')
# plt.ylabel('W1 - W2 (mag)')
# plt.title('LegacySurvey PSFs (all)')
#
#
# # In[70]:
#
# plothist((L.z - L.w1), (L.w1 - L.w2), 200, range=((0.5,3),(-0.5,0.5)), dohot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('z - W1 (mag)')
# plt.ylabel('W1 - W2 (mag)')
# plt.title('LegacySurvey (all)')
#
#
# # In[58]:
#
# I = L.is_psf
# loghist((L.z - L.w1)[I], (L.w1 - L.w2)[I], 200, range=((-1,5),(-1,5)), hot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('z - W1 (mag)')
# plt.ylabel('W1 - W2 (mag)')
#
#
# # In[125]:
#
# plt.hist(ML.flux_w1 * np.sqrt(ML.flux_ivar_w1), range=(0,100), bins=100, histtype='step', color='b', log=True);
# plt.hist(L.flux_w1 * np.sqrt(L.flux_ivar_w1), range=(0,100), bins=100, histtype='step', color='k', log=True);
# plt.hist(UL.flux_w1 * np.sqrt(UL.flux_ivar_w1), range=(0,100), bins=100, histtype='step', color='r', log=True);
#
#
# # In[ ]:
#
#
#
#
# # In[122]:
#
# plt.hist(ML.w1, range=(10,20), bins=100, histtype='step', color='b', log=True);
# plt.hist(L.w1 , range=(10,20), bins=100, histtype='step', color='k', log=True);
# plt.hist(UL.w1 , range=(10,20), bins=100, histtype='step', color='r', log=True);
# yl,yh = plt.ylim()
# plt.ylim(1,yh);
#
#
# # In[60]:
#
# I = ML.is_psf
# plt.hist(ML.flux_w1[I] * np.sqrt(ML.flux_ivar_w1[I]), range=(0,20), bins=100, histtype='step', color='g');
# plt.hist(ML.flux_w2[I] * np.sqrt(ML.flux_ivar_w2[I]), range=(0,20), bins=100, histtype='step', color='r');
# plt.hist(ML.flux_z[I] * np.sqrt(ML.flux_ivar_z [I]), range=(0,20), bins=100, histtype='step', color='b');
# plt.xlabel('S/N');
#
#
# # In[130]:
#
# plt.subplot(1,2,1)
# I = (ML.is_psf)
# plothist((ML.r - ML.z)[I], (ML.z - ML.w1)[I], 200, range=((0,3),(0.5,2.5)), doclf=False, dohot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('r - z (mag)')
# plt.ylabel('z - W1 (mag)')
# plt.title('LegacySurvey PSFs (matched to AllWISE catalog)')
#
# rz = (ML.r - ML.z)[I]
# zw = (ML.z - ML.w1)[I]
# print(np.sum((rz>0)*(rz<3)*(zw>0.5)*(zw<2.5)), 'Matched')
#
# plt.subplot(1,2,2)
# I = ((UL.flux_w1 * np.sqrt(UL.flux_ivar_w1) > 5.) *
# (UL.flux_r * np.sqrt(UL.flux_ivar_r ) > 5.) *
# (UL.flux_z * np.sqrt(UL.flux_ivar_z ) > 5.) *
# (UL.is_psf))
# #I = UL.is_psf
# plothist((UL.r - UL.z)[I], (UL.z - UL.w1)[I], 200, range=((0,3),(0.5,2.5)), doclf=False, dohot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('r-z (mag)')
# plt.ylabel('z-W1 (mag)')
# plt.title('LegacySurvey PSFs (UNmatched to AllWISE catalog)')
#
# rz = (UL.r - UL.z)[I]
# zw = (UL.z - UL.w1)[I]
# print(np.sum((rz>0)*(rz<3)*(zw>0.5)*(zw<2.5)), 'Unmatched')
#
# plt.savefig('cc.png')
#
#
# # In[127]:
#
# plt.subplot(1,2,1)
# I = (ML.is_psf)
# plothist((ML.r - ML.z)[I], (ML.z - (ML.w1+ML.w2)/2.)[I], 200, range=((0,3),(0.5,2.5)), doclf=False, dohot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('r - z (mag)')
# plt.ylabel('z - W (mag)')
# plt.title('LegacySurvey PSFs (matched to AllWISE catalog)')
#
# plt.subplot(1,2,2)
# I = ((UL.flux_w1 * np.sqrt(UL.flux_ivar_w1) > 3.) *
# (UL.flux_r * np.sqrt(UL.flux_ivar_r ) > 3.) *
# (UL.flux_z * np.sqrt(UL.flux_ivar_z ) > 3.) *
# (UL.is_psf))
# #I = UL.is_psf
# plothist((UL.r - UL.z)[I], (UL.z - (UL.w1+UL.w2)/2.)[I], 200, range=((0,3),(0.5,2.5)), doclf=False, dohot=False, imshowargs=dict(cmap=antigray));
# plt.xlabel('r - z (mag)')
# plt.ylabel('z - W (mag)')
# plt.title('LegacySurvey PSFs (UNmatched to AllWISE catalog)')
#plt.subplot(1,2,1)
if False:
    plt.clf()
    ha = dict(nbins=100, range=((-0.5,3),(0,3)), doclf=False, hot=False, imshowargs=dict(cmap=antigray))
    I = (ML.is_psf)
    loghist((ML.r - ML.z)[I], (ML.z - ML.w1)[I], **ha)
    plt.xlabel('r - z (mag)')
    plt.ylabel('z - W1 (mag)')
    #plt.title('LegacySurvey PSFs matched to AllWISE catalog')
    plt.savefig('cc-matched.pdf')
    rz = (ML.r - ML.z)[I]
    zw = (ML.z - ML.w1)[I]
    print(np.sum((rz>0)*(rz<3)*(zw>0.5)*(zw<2.5)), 'Matched')
    plt.clf()
    ha.update(imshowargs=dict(cmap=antigray, vmax=np.log10(3000)))
    I = ((UL.flux_w1 * np.sqrt(UL.flux_ivar_w1) > 3.) *
         (UL.flux_r  * np.sqrt(UL.flux_ivar_r ) > 3.) *
         (UL.flux_z  * np.sqrt(UL.flux_ivar_z ) > 3.) *
         (UL.is_psf))
    loghist((UL.r - UL.z)[I], (UL.z - UL.w1)[I], **ha)
    plt.xlabel('r - z (mag)')
    plt.ylabel('z - W1 (mag)')
    #plt.title('LegacySurvey PSFs unmatched to AllWISE catalog')
    plt.savefig('cc-unmatched.pdf')
    rz = (UL.r - UL.z)[I]
    zw = (UL.z - UL.w1)[I]
    print(np.sum((rz>0)*(rz<3)*(zw>0.5)*(zw<2.5)), 'Unmatched')
plt.clf()
ha = dict(nbins=100, range=((-0.5,3),(0,3)), doclf=False, hot=False, imshowargs=dict(cmap=antigray))
I = (ML.is_psf)
rz = ((ML.r-ML.e_r) - (ML.z-ML.e_z))[I]
zw = ((ML.z-ML.e_z) - (ML.w1-ML.e_w1))[I]
loghist(rz, zw, **ha)
plt.xlabel('r - z (mag)')
plt.ylabel('z - W1 (mag)')
#plt.title('LegacySurvey PSFs matched to AllWISE catalog')
plt.savefig('cc-matched2.pdf')
print(np.sum((rz>0)*(rz<3)*(zw>0.5)*(zw<2.5)), 'Matched')
plt.clf()
ha.update(imshowargs=dict(cmap=antigray, vmax=np.log10(3000)))
I = ((UL.flux_w1 * np.sqrt(UL.flux_ivar_w1) > 3.) *
(UL.flux_r * np.sqrt(UL.flux_ivar_r ) > 3.) *
(UL.flux_z * np.sqrt(UL.flux_ivar_z ) > 3.) *
(UL.is_psf))
rz = ((UL.r-UL.e_r) - (UL.z-UL.e_z))[I]
zw = ((UL.z-UL.e_z) - (UL.w1-UL.e_w1))[I]
loghist(rz, zw, **ha)
plt.xlabel('r - z (mag)')
plt.ylabel('z - W1 (mag)')
#plt.title('LegacySurvey PSFs unmatched to AllWISE catalog')
plt.savefig('cc-unmatched2.pdf')
print(np.sum((rz>0)*(rz<3)*(zw>0.5)*(zw<2.5)), 'Unmatched')
plt.clf()
ha = dict(nbins=200, range=((-5,10),(13,25)), doclf=False, hot=False, imshowargs=dict(cmap=antigray, vmax=4.))
I = (ML.is_psf)
loghist((ML.r - ML.w1)[I], ML.r[I], **ha)
plt.xlabel('r - W1 (mag)')
plt.ylabel('r (mag)')
#plt.title('LegacySurvey PSFs (matched to AllWISE catalog)')
plt.savefig('cm-matched.pdf')
plt.clf()
I = (#(L.flux_w1 * np.sqrt(L.flux_ivar_w1) > 3.) *
#(L.flux_r * np.sqrt(L.flux_ivar_r ) > 3.) *
#(L.flux_z * np.sqrt(L.flux_ivar_z ) > 3.) *
(L.is_psf))
loghist((L.r - L.w1)[I], L.r[I], **ha)
plt.xlabel('r - W1 (mag)')
plt.ylabel('r (mag)')
plt.savefig('cm-all.pdf')
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import _init_paths
import os
import json
import cv2
import numpy as np
import time
from progress.bar import Bar
import torch
import copy
from opts import opts
from logger import Logger
from utils.utils import AverageMeter
from dataset.dataset_factory import dataset_factory
from detector import Detector
class PrefetchDataset(torch.utils.data.Dataset):
  def __init__(self, opt, dataset, pre_process_func):
    self.images = dataset.images
    self.load_image_func = dataset.coco.loadImgs
    self.img_dir = dataset.img_dir
    self.pre_process_func = pre_process_func
    self.get_default_calib = dataset.get_default_calib
    self.opt = opt
    self.dataset = dataset

  def __getitem__(self, index):
    img_id = self.images[index]
    img_info = self.load_image_func(ids=[img_id])[0]
    img_path = os.path.join(self.img_dir, img_info['file_name'])
    image = cv2.imread(img_path)
    images, meta = {}, {}
    # use self.opt: no module-level 'opt' is defined in this script
    for scale in self.opt.test_scales:
      input_meta = {}
      calib = img_info['calib'] if 'calib' in img_info \
        else self.get_default_calib(image.shape[1], image.shape[0])
      input_meta['calib'] = calib
      images[scale], meta[scale] = self.pre_process_func(
        image, scale, input_meta)
    ret = {'images': images, 'image': image, 'meta': meta}
    if 'frame_id' in img_info and img_info['frame_id'] == 1:
      ret['is_first_frame'] = 1
      ret['video_id'] = img_info['video_id']
    # add point cloud
    if self.opt.pointcloud:
      assert len(self.opt.test_scales) == 1, \
        "Multi-scale testing not supported with pointcloud."
      scale = self.opt.test_scales[0]
      pc_2d, pc_N, pc_dep, pc_3d = self.dataset._load_pc_data(
        image, img_info,
        meta[scale]['trans_input'], meta[scale]['trans_output'])
      ret['pc_2d'] = pc_2d
      ret['pc_N'] = pc_N
      ret['pc_dep'] = pc_dep
      ret['pc_3d'] = pc_3d
    return img_id, ret

  def __len__(self):
    return len(self.images)
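# Illustrative sketch (not part of the original pipeline): __getitem__ above
# fills per-scale dicts keyed by the test scales.  With a stub pre-process
# step (all names below are hypothetical), the returned structure looks like:
def _demo_multiscale(test_scales=(1.0, 0.5)):
  images, meta = {}, {}
  for scale in test_scales:
    # stand-in for detector.pre_process(image, scale, input_meta)
    images[scale] = 'resized@{}'.format(scale)
    meta[scale] = {'scale': scale}
  return {'images': images, 'meta': meta}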
def prefetch_test(opt):
  if not opt.not_set_cuda_env:
    os.environ['CUDA_VISIBLE_DEVICES'] = opt.gpus_str
  Dataset = dataset_factory[opt.test_dataset]
  opt = opts().update_dataset_info_and_set_heads(opt, Dataset)
  print(opt)
  Logger(opt)

  split = 'val' if not opt.trainval else 'test'
  if split == 'val':
    split = opt.val_split
  dataset = Dataset(opt, split)
  detector = Detector(opt)

  if opt.load_results != '':
    load_results = json.load(open(opt.load_results, 'r'))
    for img_id in load_results:
      for k in range(len(load_results[img_id])):
        if load_results[img_id][k]['class'] - 1 in opt.ignore_loaded_cats:
          load_results[img_id][k]['score'] = -1
  else:
    load_results = {}

  data_loader = torch.utils.data.DataLoader(
    PrefetchDataset(opt, dataset, detector.pre_process),
    batch_size=1, shuffle=False, num_workers=1, pin_memory=True)

  results = {}
  num_iters = len(data_loader) if opt.num_iters < 0 else opt.num_iters
  bar = Bar('{}'.format(opt.exp_id), max=num_iters)
  time_stats = ['tot', 'load', 'pre', 'net', 'dec', 'post', 'merge', 'track']
  avg_time_stats = {t: AverageMeter() for t in time_stats}
  if opt.use_loaded_results:
    for img_id in data_loader.dataset.images:
      results[img_id] = load_results['{}'.format(img_id)]
    num_iters = 0
  for ind, (img_id, pre_processed_images) in enumerate(data_loader):
    if ind >= num_iters:
      break
    if opt.tracking and ('is_first_frame' in pre_processed_images):
      if '{}'.format(int(img_id.numpy().astype(np.int32)[0])) in load_results:
        pre_processed_images['meta']['pre_dets'] = \
          load_results['{}'.format(int(img_id.numpy().astype(np.int32)[0]))]
      else:
        print()
        print('No pre_dets for', int(img_id.numpy().astype(np.int32)[0]),
              '. Use empty initialization.')
        pre_processed_images['meta']['pre_dets'] = []
      detector.reset_tracking()
      print('Start tracking video', int(pre_processed_images['video_id']))
    if opt.public_det:
      if '{}'.format(int(img_id.numpy().astype(np.int32)[0])) in load_results:
        pre_processed_images['meta']['cur_dets'] = \
          load_results['{}'.format(int(img_id.numpy().astype(np.int32)[0]))]
      else:
        print('No cur_dets for', int(img_id.numpy().astype(np.int32)[0]))
        pre_processed_images['meta']['cur_dets'] = []
    ret = detector.run(pre_processed_images)
    results[int(img_id.numpy().astype(np.int32)[0])] = ret['results']
    Bar.suffix = '[{0}/{1}]|Tot: {total:} |ETA: {eta:} '.format(
      ind, num_iters, total=bar.elapsed_td, eta=bar.eta_td)
    for t in avg_time_stats:
      avg_time_stats[t].update(ret[t])
      Bar.suffix = Bar.suffix + '|{} {tm.val:.3f}s ({tm.avg:.3f}s) '.format(
        t, tm=avg_time_stats[t])
    if opt.print_iter > 0:
if ind % opt.print_iter == 0:
print('{}/{}| {}'.format(opt.task, opt.exp_id, Bar.suffix))
else:
bar.next()
bar.finish()
if opt.save_results:
print('saving results to', opt.save_dir + '/save_results_{}{}.json'.format(
opt.test_dataset, opt.dataset_version))
json.dump(_to_list(copy.deepcopy(results)),
open(opt.save_dir + '/save_results_{}{}.json'.format(
opt.test_dataset, opt.dataset_version), 'w'))
dataset.run_eval(results, opt.save_dir, n_plots=opt.eval_n_plots,
render_curves=opt.eval_render_curves)
def test(opt):
os.environ['CUDA_VISIBLE_DEVICES'] = opt.gpus_str
Dataset = dataset_factory[opt.test_dataset]
opt = opts().update_dataset_info_and_set_heads(opt, Dataset)
print(opt)
Logger(opt)
split = 'val' if not opt.trainval else 'test'
if split == 'val':
split = opt.val_split
dataset = Dataset(opt, split)
detector = Detector(opt)
if opt.load_results != '': # load results in json
load_results = json.load(open(opt.load_results, 'r'))
results = {}
num_iters = len(dataset) if opt.num_iters < 0 else opt.num_iters
bar = Bar('{}'.format(opt.exp_id), max=num_iters)
time_stats = ['tot', 'load', 'pre', 'net', 'dec', 'post', 'merge']
avg_time_stats = {t: AverageMeter() for t in time_stats}
for ind in range(num_iters):
img_id = dataset.images[ind]
img_info = dataset.coco.loadImgs(ids=[img_id])[0]
img_path = os.path.join(dataset.img_dir, img_info['file_name'])
input_meta = {}
if 'calib' in img_info:
input_meta['calib'] = img_info['calib']
if (opt.tracking and ('frame_id' in img_info) and img_info['frame_id'] == 1):
detector.reset_tracking()
input_meta['pre_dets'] = load_results[img_id]
ret = detector.run(img_path, input_meta)
results[img_id] = ret['results']
Bar.suffix = '[{0}/{1}]|Tot: {total:} |ETA: {eta:} '.format(
ind, num_iters, total=bar.elapsed_td, eta=bar.eta_td)
for t in avg_time_stats:
avg_time_stats[t].update(ret[t])
Bar.suffix = Bar.suffix + '|{} {:.3f} '.format(t, avg_time_stats[t].avg)
bar.next()
bar.finish()
if opt.save_results:
print('saving results to', opt.save_dir + '/save_results_{}{}.json'.format(
opt.test_dataset, opt.dataset_version))
json.dump(_to_list(copy.deepcopy(results)),
open(opt.save_dir + '/save_results_{}{}.json'.format(
opt.test_dataset, opt.dataset_version), 'w'))
dataset.run_eval(results, opt.save_dir, n_plots=opt.eval_n_plots,
render_curves=opt.eval_render_curves)
def _to_list(results):
for img_id in results:
for t in range(len(results[img_id])):
for k in results[img_id][t]:
if isinstance(results[img_id][t][k], (np.ndarray, np.float32)):
results[img_id][t][k] = results[img_id][t][k].tolist()
return results
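`_to_list` exists because `json.dump` cannot serialize NumPy arrays or scalars, so each detection field is converted to plain Python types first. A self-contained demonstration of the same conversion (the sample detection dict is illustrative):

```python
import json
import numpy as np

def to_jsonable(results):
    # Convert numpy arrays/scalars in nested detection dicts to plain Python
    # values so json.dump can serialize them.
    for img_id in results:
        for t in range(len(results[img_id])):
            for k in results[img_id][t]:
                if isinstance(results[img_id][t][k], (np.ndarray, np.float32)):
                    results[img_id][t][k] = results[img_id][t][k].tolist()
    return results

results = {1: [{'bbox': np.array([0.0, 1.0, 2.0, 3.0]), 'score': np.float32(0.9)}]}
encoded = json.dumps(to_jsonable(results))
```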
if __name__ == '__main__':
opt = opts().parse()
if opt.not_prefetch_test:
test(opt)
else:
prefetch_test(opt) | |
# ___________________________________________________________________________
#
# Prescient
# Copyright 2020 National Technology & Engineering Solutions of Sandia, LLC
# (NTESS). Under the terms of Contract DE-NA0003525 with NTESS, the U.S.
# Government retains certain rights in this software.
# This software is distributed under the Revised BSD License.
# ___________________________________________________________________________
from __future__ import annotations
from ..data_provider import DataProvider
from prescient.engine import forecast_helper
from egret.parsers.prescient_dat_parser import get_uc_model, create_model_data_dict_params
from egret.data.model_data import ModelData as EgretModel
import os.path
from datetime import datetime, date, timedelta
import dateutil.parser
import copy
from typing import TYPE_CHECKING
if TYPE_CHECKING:
from prescient.simulator.options import Options
from typing import Dict, Any, Callable
class DatDataProvider(DataProvider):
''' Provides data from pyomo DAT files
'''
def initialize(self, options: Options) -> None:
''' Do one-time initial setup
'''
self._uc_model_template = get_uc_model()
self._instance_directory_name = os.path.join(os.path.expanduser(options.data_directory),
"pyspdir_twostage")
self._actuals_by_date = {}
self._forecasts_by_date = {}
self._first_day = dateutil.parser.parse(options.start_date).date()
self._final_day = self._first_day + timedelta(days=options.num_days-1)
def get_initial_model(self, options:Options, num_time_steps:int) -> EgretModel:
''' Get a model ready to be populated with data
Returns
-------
A model object populated with static system information, such as
buses and generators, and with time series arrays that are large
enough to hold num_time_steps entries.
Initial values in the time series do not have meaning.
'''
# Get data for the first simulation day
first_day_model = self._get_forecast_by_date(self._first_day)
# Copy it, making sure we've got the right number of time periods
data = _recurse_copy_with_time_series_length(first_day_model.data, num_time_steps)
new_model = EgretModel(data)
new_model.data['system']['time_keys'] = list(str(i) for i in range(1,num_time_steps+1))
return new_model
def populate_initial_state_data(self, options:Options,
day:date,
model: EgretModel) -> None:
''' Populate an existing model with initial state data for the requested day
Sets T0 information from actuals:
* initial_state_of_charge for each storage element
* initial_status for each generator
* initial_p_output for each generator
Arguments
---------
options:
Option values
day:date
The day whose initial state will be saved in the model
model: EgretModel
The model whose values will be modified
'''
if day < self._first_day:
day = self._first_day
elif day > self._final_day:
day = self._final_day
actuals = self._get_actuals_by_date(day)
for s, sdict in model.elements('storage'):
soc = actuals.data['elements']['storage'][s]['initial_state_of_charge']
sdict['initial_state_of_charge'] = soc
for g, gdict in model.elements('generator', generator_type='thermal'):
source = actuals.data['elements']['generator'][g]
gdict['initial_status'] = source['initial_status']
gdict['initial_p_output'] = source['initial_p_output']
def populate_with_forecast_data(self, options:Options,
start_time:datetime,
num_time_periods: int,
time_period_length_minutes: int,
model: EgretModel
) -> None:
''' Populate an existing model with forecast data.
Populates the following values for each requested time period:
* demand for each bus
* min and max non-dispatchable power for each non-dispatchable generator
* reserve requirement
Arguments
---------
options:
Option values
start_time: datetime
The time (day, hour, and minute) of the first time step for
which forecast data will be provided
num_time_periods: int
The number of time steps for which forecast data will be provided.
time_period_length_minutes: int
The duration of each time step
model: EgretModel
The model where forecast data will be stored
Notes
-----
This will store forecast data in the model's existing data arrays, starting
at index 0. If the model's arrays are not big enough to hold all the
requested time steps, only those steps for which there is sufficient storage
will be saved. If arrays are larger than the number of requested time
steps, the remaining array elements will be left unchanged.
If start_time is midnight of any day, all data comes from the DAT file for
the starting day. Otherwise, forecast data is taken from the file matching
the date of the time step. In other words, if requesting data starting at
midnight, all data in the first day's DAT file will be available, but otherwise
only the first 24 hours of each DAT file will be used.
Note that this method has the same signature as populate_with_actuals.
'''
self._populate_with_forecastable_data(options, start_time, num_time_periods,
time_period_length_minutes, model,
self._get_forecast_by_date)
def populate_with_actuals(self, options:Options,
start_time:datetime,
num_time_periods: int,
time_period_length_minutes: int,
model: EgretModel
) -> None:
''' Populate an existing model with actuals data.
Populates the following values for each requested time period:
* demand for each bus
* min and max non-dispatchable power for each non-dispatchable generator
* reserve requirement
Arguments
---------
options:
Option values
start_time: datetime
The time (day, hour, and minute) of the first time step for
which data will be provided
num_time_periods: int
The number of time steps for which actuals data will be provided.
time_period_length_minutes: int
The duration of each time step
model: EgretModel
The model where actuals data will be stored
Notes
-----
This will store actuals data in the model's existing data arrays, starting
at index 0. If the model's arrays are not big enough to hold all the
requested time steps, only those steps for which there is sufficient storage
will be saved. If arrays are larger than the number of requested time
steps, the remaining array elements will be left unchanged.
If start_time is midnight of any day, all data comes from the DAT file for
the starting day. Otherwise, data is taken from the file matching
the date of the time step. In other words, if requesting data starting at
midnight, all data in the first day's DAT file will be available, but otherwise
only the first 24 hours of each DAT file will be used.
Note that this method has the same signature as populate_with_forecast_data.
'''
self._populate_with_forecastable_data(options, start_time, num_time_periods,
time_period_length_minutes, model,
self._get_actuals_by_date)
def _populate_with_forecastable_data(self, options:Options,
start_time:datetime,
num_time_periods: int,
time_period_length_minutes: int,
model: EgretModel,
identify_dat: Callable[[date], EgretModel]
) -> None:
# For now, require the time period to always be 60 minutes
assert(time_period_length_minutes == 60.0)
step_delta = timedelta(minutes=time_period_length_minutes)
# See if we have space to store all the requested data.
# If not, only supply what we have space for
if len(model.data['system']['time_keys']) < num_time_periods:
num_time_periods = len(model.data['system']['time_keys'])
start_hour = start_time.hour
start_day = start_time.date()
# Loop through each time step
for step_index in range(0, num_time_periods):
step_time = start_time + step_delta*step_index
day = step_time.date()
# 0-based hour, useable as index into forecast arrays
hour = step_time.hour
# For data starting at time 0, we collect tomorrow's data
# from today's dat file
if start_hour == 0 and day != start_day:
day = start_day
hour += 24
# If request is beyond the last day, just repeat the final day's values
if day > self._final_day:
day = self._final_day
dat = identify_dat(day)
for src, target in forecast_helper.get_forecastables(dat, model):
target[step_index] = src[hour]
def _get_forecast_by_date(self, requested_date: date) -> EgretModel:
''' Get forecast data for a specific calendar day.
'''
return self._get_egret_model_for_date(requested_date,
"Scenario_forecasts.dat",
self._forecasts_by_date)
def _get_actuals_by_date(self, requested_date: date) -> EgretModel:
''' Get actuals data for a specific calendar day.
'''
return self._get_egret_model_for_date(requested_date,
"Scenario_actuals.dat",
self._actuals_by_date)
def _get_egret_model_for_date(self,
requested_date: date,
dat_filename: str,
cache_dict: Dict[date, EgretModel]) -> EgretModel:
''' Get data for a specific calendar day.
Implements the common logic of _get_actuals_by_date and _get_forecast_by_date.
'''
# Return cached model, if we have it
if requested_date in cache_dict:
return cache_dict[requested_date]
# Otherwise read the requested data and store it in the cache
date_str = str(requested_date)
path_to_dat = os.path.join(self._instance_directory_name,
date_str,
dat_filename)
day_pyomo = self._uc_model_template.create_instance(path_to_dat)
day_dict = create_model_data_dict_params(day_pyomo, True)
day_model = EgretModel(day_dict)
cache_dict[requested_date] = day_model
return day_model
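`get_initial_model` relies on a recursive helper that deep-copies the nested Egret data dict while stretching every `time_series` node to a fixed length. A standalone sketch of that recursion on a toy dict (the dict shape is an assumption for illustration):

```python
import copy

def resize_time_series(root, time_count):
    # Deep-copy a nested dict; each time_series node is resized to time_count
    # entries by repeating its first value, everything else is copied as-is.
    new_node = {}
    for key, att in root.items():
        if isinstance(att, dict):
            if att.get('data_type') == 'time_series':
                val = att['values'][0]
                new_node[key] = {'data_type': 'time_series',
                                 'values': [val] * time_count}
            else:
                new_node[key] = resize_time_series(att, time_count)
        else:
            new_node[key] = copy.deepcopy(att)
    return new_node

data = {'system': {'load': {'data_type': 'time_series', 'values': [5.0]},
                   'name': 'toy'}}
resized = resize_time_series(data, 4)
```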
def _recurse_copy_with_time_series_length(root:Dict[str, Any], time_count:int) -> Dict[str, Any]:
new_node = {}
for key, att in root.items():
if isinstance(att, dict):
if 'data_type' in att and att['data_type'] == 'time_series':
val = att['values'][0]
new_node[key] = { 'data_type': 'time_series',
'values' : [val]*time_count }
else:
new_node[key] = _recurse_copy_with_time_series_length(att, time_count)
else:
new_node[key] = copy.deepcopy(att)
return new_node | |
"""
Tests gdb bindings
"""
from __future__ import print_function
import os
import platform
import subprocess
import sys
import threading
from itertools import permutations
from numba import njit, gdb, gdb_init, gdb_breakpoint, prange, errors
from numba import jit
from numba import unittest_support as unittest
from numba.targets.gdb_hook import _confirm_gdb
from .support import (TestCase, captured_stdout, tag)
from .test_parfors import skip_unsupported as parfors_skip_unsupported
_platform = sys.platform
_unix_like = (_platform.startswith('linux')
or _platform.startswith('darwin')
or ('bsd' in _platform))
unix_only = unittest.skipUnless(_unix_like, "unix-like OS is required")
not_unix = unittest.skipIf(_unix_like, "non unix-like OS is required")
_arch_name = platform.machine()
_is_arm = _arch_name in {'aarch64', 'armv7l'}
not_arm = unittest.skipIf(_is_arm, "testing disabled on ARM")
_gdb_cond = os.environ.get('GDB_TEST', None) == '1'
needs_gdb_harness = unittest.skipUnless(_gdb_cond, "needs gdb harness")
# check if gdb is present and working
try:
_confirm_gdb()
_HAVE_GDB = True
except Exception:
_HAVE_GDB = False
_msg = "functioning gdb with correct ptrace permissions is required"
needs_gdb = unittest.skipUnless(_HAVE_GDB, _msg)
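The module builds its skip decorators by probing the environment once at import time and wrapping the result in `unittest.skipUnless`. A minimal self-contained sketch of the same pattern (the probe here is a stand-in, not numba's `_confirm_gdb`):

```python
import os
import unittest

def probe_tool():
    # Stand-in probe: pretend the external tool is missing.
    return False

needs_tool = unittest.skipUnless(probe_tool(), "functioning tool is required")

class Probed(unittest.TestCase):
    @needs_tool
    def test_uses_tool(self):
        self.fail("should never run when the tool is absent")

suite = unittest.TestLoader().loadTestsFromTestCase(Probed)
outcome = unittest.TextTestRunner(stream=open(os.devnull, 'w')).run(suite)
print(len(outcome.skipped), outcome.wasSuccessful())  # 1 True
```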
long_running = tag('long_running')
_dbg_njit = njit(debug=True)
_dbg_jit = jit(forceobj=True, debug=True)
def impl_gdb_call(a):
gdb('-ex', 'set confirm off', '-ex', 'c', '-ex', 'q')
b = a + 1
c = a * 2.34
d = (a, b, c)
print(a, b, c, d)
def impl_gdb_call_w_bp(a):
gdb_init('-ex', 'set confirm off', '-ex', 'c', '-ex', 'q')
b = a + 1
c = a * 2.34
d = (a, b, c)
gdb_breakpoint()
print(a, b, c, d)
def impl_gdb_split_init_and_break_w_parallel(a):
gdb_init('-ex', 'set confirm off', '-ex', 'c', '-ex', 'q')
a += 3
for i in prange(4):
b = a + 1
c = a * 2.34
d = (a, b, c)
gdb_breakpoint()
print(a, b, c, d)
@not_arm
@unix_only
class TestGdbBindImpls(TestCase):
"""
Contains unit test implementations for gdb binding testing. Tests must be
decorated with `@needs_gdb_harness` to prevent them running under normal
test conditions, and the test method names must end with `_impl` to be
considered for execution. The tests themselves are invoked by the
`TestGdbBinding` test class, which parses this class for test methods and
runs the discovered tests in separate processes. Test names not including
the word `quick` are tagged @tag('long_running').
"""
@needs_gdb_harness
def test_gdb_cmd_lang_cpython_quick_impl(self):
with captured_stdout():
impl_gdb_call(10)
@needs_gdb_harness
def test_gdb_cmd_lang_nopython_quick_impl(self):
with captured_stdout():
_dbg_njit(impl_gdb_call)(10)
@needs_gdb_harness
def test_gdb_cmd_lang_objmode_quick_impl(self):
with captured_stdout():
_dbg_jit(impl_gdb_call)(10)
@needs_gdb_harness
def test_gdb_split_init_and_break_cpython_impl(self):
with captured_stdout():
impl_gdb_call_w_bp(10)
@needs_gdb_harness
def test_gdb_split_init_and_break_nopython_impl(self):
with captured_stdout():
_dbg_njit(impl_gdb_call_w_bp)(10)
@needs_gdb_harness
def test_gdb_split_init_and_break_objmode_impl(self):
with captured_stdout():
_dbg_jit(impl_gdb_call_w_bp)(10)
@parfors_skip_unsupported
@needs_gdb_harness
def test_gdb_split_init_and_break_w_parallel_cpython_impl(self):
with captured_stdout():
impl_gdb_split_init_and_break_w_parallel(10)
@parfors_skip_unsupported
@needs_gdb_harness
def test_gdb_split_init_and_break_w_parallel_nopython_impl(self):
with captured_stdout():
_dbg_njit(impl_gdb_split_init_and_break_w_parallel)(10)
@parfors_skip_unsupported
@needs_gdb_harness
def test_gdb_split_init_and_break_w_parallel_objmode_impl(self):
with captured_stdout():
_dbg_jit(impl_gdb_split_init_and_break_w_parallel)(10)
@not_arm
@unix_only
@needs_gdb
class TestGdbBinding(TestCase):
"""
This test class generates tests which run the test cases defined in
TestGdbBindImpls in isolated subprocesses, for safety in case something
goes awry.
"""
# test mutates env
_numba_parallel_test_ = False
_DEBUG = True
def run_cmd(self, cmdline, env, kill_is_ok=False):
popen = subprocess.Popen(cmdline,
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
env=env,
shell=True)
# finish in 20s or kill it, there's no work being done
def kill():
popen.stdout.flush()
popen.stderr.flush()
popen.kill()
timeout = threading.Timer(20., kill)
try:
timeout.start()
out, err = popen.communicate()
retcode = popen.returncode
if retcode != 0:
raise AssertionError(
"process failed with code %s: stderr follows\n%s\nstdout :%s" %
(retcode, err.decode(), out.decode()))
return out.decode(), err.decode()
finally:
timeout.cancel()
return None, None
def run_test_in_separate_process(self, test, **kwargs):
env_copy = os.environ.copy()
env_copy['NUMBA_OPT'] = '1'
# Set GDB_TEST to permit the execution of tests decorated with
# @needs_gdb_harness
env_copy['GDB_TEST'] = '1'
cmdline = [sys.executable, "-m", "numba.runtests", test]
return self.run_cmd(' '.join(cmdline), env_copy, **kwargs)
@classmethod
def _inject(cls, name):
themod = TestGdbBindImpls.__module__
thecls = TestGdbBindImpls.__name__
# strip impl
assert name.endswith('_impl')
methname = name.replace('_impl', '')
injected_method = '%s.%s.%s' % (themod, thecls, name)
def test_template(self):
o, e = self.run_test_in_separate_process(injected_method)
self.assertIn('GNU gdb', o)
self.assertIn('OK', e)
self.assertTrue('FAIL' not in e)
self.assertTrue('ERROR' not in e)
if 'quick' in name:
setattr(cls, methname, test_template)
else:
setattr(cls, methname, long_running(test_template))
@classmethod
def generate(cls):
for name in dir(TestGdbBindImpls):
if name.startswith('test_gdb'):
cls._inject(name)
TestGdbBinding.generate()
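`TestGdbBinding.generate()` above creates its test methods dynamically: it scans the impl class for specially named methods and attaches stripped-name wrappers with `setattr`. A standalone sketch of that injection pattern (class and method names here are illustrative):

```python
class Impls:
    def check_addition_impl(self):
        return 1 + 1

class Generated:
    pass

def _inject(cls, name):
    # The closure captures `name` per call, so each wrapper targets one impl.
    def template(self):
        return getattr(Impls(), name)()
    setattr(cls, name.replace('_impl', ''), template)

for name in dir(Impls):
    if name.startswith('check_'):
        _inject(Generated, name)

print(Generated().check_addition())  # 2
```

Passing `name` as a parameter to `_inject` (rather than closing over the loop variable directly) is what makes each generated method target the right implementation.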
@not_arm
@unix_only
@needs_gdb
class TestGdbMisc(TestCase):
@long_running
def test_call_gdb_twice(self):
def gen(f1, f2):
@njit
def impl():
a = 1
f1()
b = 2
f2()
return a + b
return impl
msg_head = "Calling either numba.gdb() or numba.gdb_init() more than"
def check(func):
with self.assertRaises(errors.UnsupportedError) as raises:
func()
self.assertIn(msg_head, str(raises.exception))
for g1, g2 in permutations([gdb, gdb_init]):
func = gen(g1, g2)
check(func)
@njit
def use_globals():
a = 1
gdb()
b = 2
gdb_init()
return a + b
check(use_globals)
@not_unix
class TestGdbExceptions(TestCase):
def test_call_gdb(self):
def nop_compiler(x):
return x
for compiler in [nop_compiler, jit(forceobj=True), njit]:
for meth in [gdb, gdb_init]:
def python_func():
meth()
with self.assertRaises(errors.TypingError) as raises:
compiler(python_func)()
msg = "gdb support is only available on unix-like systems"
self.assertIn(msg, str(raises.exception))
if __name__ == '__main__':
unittest.main() | |
import gym
import numpy as np
import random
import tensorflow as tf
import matplotlib.pyplot as plt
# Define the FrozenLake environment
env = gym.make('FrozenLake-v0')
# Set up the TensorFlow placeholders and variables
tf.reset_default_graph()
inputs1 = tf.placeholder(shape=[1,16],dtype=tf.float32)
W = tf.Variable(tf.random_uniform([16,4],0,0.01))
Qout = tf.matmul(inputs1,W)
predict = tf.argmax(Qout,1)
nextQ = tf.placeholder(shape=[1,4],dtype=tf.float32)
# Define the loss and optimization functions
loss = tf.reduce_sum(tf.square(nextQ - Qout))
trainer = tf.train.GradientDescentOptimizer(learning_rate=0.1)
updateModel = trainer.minimize(loss)
# Initialize the variables
init = tf.global_variables_initializer()
# Prepare the Q-learning parameters
gamma = .99
e = 0.1
num_episodes = 6000
jList = []
rList = []
#Run the session
with tf.Session() as sess:
sess.run(init)
#Start the Q-learning procedure
for i in range(num_episodes):
s = env.reset()
rAll = 0
d = False
j = 0
while j < 99:
j+=1
a,allQ = sess.run([predict,Qout],\
feed_dict=\
{inputs1:np.identity(16)[s:s+1]})
if np.random.rand(1) < e:
a[0] = env.action_space.sample()
s1,r,d,_ = env.step(a[0])
Q1 = sess.run(Qout,feed_dict=\
{inputs1:np.identity(16)[s1:s1+1]})
maxQ1 = np.max(Q1)
targetQ = allQ
targetQ[0,a[0]] = r + gamma *maxQ1
_,W1 = sess.run([updateModel,W],\
feed_dict=\
{inputs1:np.identity(16)[s:s+1],nextQ:targetQ})
# Accumulate the total reward
rAll += r
s = s1
if d == True:
e = 1./((i/50) + 10)
break
jList.append(j)
rList.append(rAll)
#print the results
print("Percent of successful episodes: " + str(100.0 * sum(rList) / num_episodes) + "%") | 
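The update loop above regresses the network output toward the one-step Bellman target r + γ·max_a' Q(s', a'). A NumPy-only sketch of the same target construction for a tabular Q (the transition values are illustrative):

```python
import numpy as np

gamma = 0.99
Q = np.zeros((16, 4))            # one row per FrozenLake state
s, a, r, s1 = 0, 2, 0.0, 4       # a transition (state, action, reward, next state)
Q[s1] = [0.1, 0.5, 0.2, 0.0]     # pretend next-state action values

target = r + gamma * np.max(Q[s1])  # Bellman target for (s, a)
Q[s, a] = target                    # the TF version regresses Qout toward this
print(round(target, 3))  # 0.495
```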
import cv2
from distutils.version import LooseVersion
import fcn
import numpy as np
import skimage.color
import skimage.segmentation
import warnings
from .geometry import label2instance_boxes
def draw_instance_boxes(img, boxes, instance_classes, n_class,
masks=None, captions=None, bg_class=0, thickness=1,
draw=None):
warnings.warn('draw_instance_boxes is deprecated, '
'please use draw_instance_bboxes')
return draw_instance_bboxes(
img, boxes, instance_classes, n_class,
masks=masks, captions=captions, bg_class=bg_class,
thickness=thickness, draw=draw)
def draw_instance_bboxes(img, bboxes, labels, n_class, masks=None,
captions=None, bg_class=0, thickness=1, alpha=0.5,
draw=None):
# validation
assert isinstance(img, np.ndarray)
assert img.shape == (img.shape[0], img.shape[1], 3)
assert img.dtype == np.uint8
bboxes = np.asarray(bboxes)
assert isinstance(bboxes, np.ndarray)
assert bboxes.shape == (bboxes.shape[0], 4)
labels = np.asarray(labels)
assert isinstance(labels, np.ndarray)
assert labels.shape == (labels.shape[0],)
if draw is None:
draw = [True] * bboxes.shape[0]
else:
assert len(draw) == bboxes.shape[0]
if masks is not None:
assert len(masks) == len(bboxes)
if captions is not None:
captions = np.asarray(captions)
assert isinstance(captions, np.ndarray)
assert captions.shape[0] == bboxes.shape[0]
img_viz = img.copy()
cmap = fcn.utils.labelcolormap(n_class)
cmap_inst = fcn.utils.labelcolormap(len(bboxes) + 1)[1:] # skip black
if masks is not None:
for i_box in range(bboxes.shape[0]):
if not draw[i_box]:
continue
box = bboxes[i_box]
y1, x1, y2, x2 = box.astype(int).tolist()
inst_class = labels[i_box]
if inst_class == bg_class:
continue
mask_inst = masks[i_box]
if mask_inst.shape != (y2 - y1, x2 - x1):
mask_inst = mask_inst[y1:y2, x1:x2]
color_inst = cmap_inst[i_box]
color_inst = (color_inst * 255)
img_viz[y1:y2, x1:x2][mask_inst] = (
img_viz[y1:y2, x1:x2][mask_inst] * (1 - alpha) +
color_inst * alpha
)
if LooseVersion(skimage.__version__) >= LooseVersion('0.11.0'):
mask_boundary = skimage.segmentation.find_boundaries(
mask_inst, connectivity=2)
else:
mask_boundary = skimage.segmentation.find_boundaries(mask_inst)
img_viz[y1:y2, x1:x2][mask_boundary] = [200, 200, 200]
assert img_viz.dtype == np.uint8
CV_AA = 16
for i_box in range(bboxes.shape[0]):
if not draw[i_box]:
continue
box = bboxes[i_box]
y1, x1, y2, x2 = box.astype(int).tolist()
inst_class = labels[i_box]
if inst_class == bg_class:
continue
# get color for the label
color = cmap[inst_class]
color = (color * 255).tolist()
cv2.rectangle(img_viz, (x1, y1), (x2, y2), color[::-1],
thickness=thickness, lineType=CV_AA)
if captions is not None:
caption = captions[i_box]
font_scale = 0.4
ret, baseline = cv2.getTextSize(
caption, cv2.FONT_HERSHEY_SIMPLEX, font_scale, 1)
# cv2.rectangle(img_viz, (x1, y2 - ret[1] - baseline),
# (x1 + ret[0], y2), color[::-1], -1)
cv2.putText(img_viz, caption, (x1, y2 - baseline),
cv2.FONT_HERSHEY_SIMPLEX, font_scale, (255, 255, 255),
1, CV_AA)
return img_viz
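`draw_instance_bboxes` paints each mask by alpha-blending the instance color into the box crop. The blend itself, isolated on a tiny patch (values are illustrative):

```python
import numpy as np

alpha = 0.5
patch = np.full((2, 2, 3), 100, dtype=np.uint8)  # image crop inside the bbox
color = np.array([0, 255, 0], dtype=float)       # instance color, 0-255 scale
mask = np.array([[True, False], [False, True]])  # instance mask in the crop

# Blend only the masked pixels: (1 - alpha) of the image plus alpha of the color.
blended = patch.astype(float)
blended[mask] = blended[mask] * (1 - alpha) + color * alpha
patch = blended.astype(np.uint8)
```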
def visualize_instance_segmentation(lbl_ins, lbl_cls, img, class_names):
# visualize instances
lbl_ins = lbl_ins.copy()
lbl_ins[lbl_cls == 0] = -1
viz = skimage.color.label2rgb(lbl_ins, img, bg_label=-1)
viz = (viz * 255).astype(np.uint8)
# visualize classes
ins_clss, boxes = label2instance_boxes(lbl_ins, lbl_cls)
if ins_clss.size > 0:
viz = draw_instance_bboxes(
viz, boxes, ins_clss,
n_class=len(class_names),
captions=class_names[ins_clss])
return viz
# def get_positive_negative_samples(is_positive, negative_ratio=1.0):
# assert isinstance(is_positive, np.ndarray)
# assert is_positive.dtype == bool
# n_positive = is_positive.sum()
# n_negative = int(negative_ratio * n_positive)
# # get samples for specified negative ratio
# samples = np.where(is_positive)[0]
# is_negative = ~is_positive
# negative_samples = np.random.choice(np.where(is_negative)[0], n_negative)
# samples = np.hstack((samples, negative_samples))
# return samples
#
#
# def nms(dets, thresh, scores=None):
# x1 = dets[:, 0]
# y1 = dets[:, 1]
# x2 = dets[:, 2]
# y2 = dets[:, 3]
#
# areas = (x2 - x1 + 1) * (y2 - y1 + 1)
#
# if scores is None:
# scores = areas
# order = scores.argsort()[::-1]
#
# keep = []
# while order.size > 0:
# i = order[0]
# keep.append(i)
# xx1 = np.maximum(x1[i], x1[order[1:]])
# yy1 = np.maximum(y1[i], y1[order[1:]])
# xx2 = np.minimum(x2[i], x2[order[1:]])
# yy2 = np.minimum(y2[i], y2[order[1:]])
#
# w = np.maximum(0.0, xx2 - xx1 + 1)
# h = np.maximum(0.0, yy2 - yy1 + 1)
# inter = w * h
# ovr = inter / (areas[i] + areas[order[1:]] - inter)
#
# inds = np.where(ovr <= thresh)[0]
# order = order[inds + 1]
#
# return keep
#
#
# def resize_image(img, shape):
# height, width = shape[:2]
# img_pil = PIL.Image.fromarray(img)
# img_pil = img_pil.resize((width, height))
# return np.array(img_pil)
#
#
# def roi_scores_to_label(img_shape, rois, cls_scores, roi_mask_probs,
# down_scale, fcis_k, fcis_C):
# height, width = img_shape[:2]
#
# # suppress rois with threshold 0.7
# keep = []
# score_argsort = np.argsort(cls_scores.sum(axis=1))
# for i, roi_i in zip(score_argsort, rois[score_argsort]):
# if np.argmax(cls_scores[i]) == 0:
# continue
# roi_ns_i = (roi_i / down_scale).astype(int)
# x1, y1, x2, y2 = roi_ns_i
# roi_h, roi_w = y2 - y1, x2 - x1
# if not (roi_h >= fcis_k and roi_w >= fcis_k):
# continue
# if all(get_bbox_overlap(roi_i, rois[j]) < 0.7 for j in keep):
# keep.append(i)
# keep = np.array(keep)
#
# lbl_cls_pred = np.zeros(img_shape[:2], dtype=np.int32)
# lbl_ins_pred = np.zeros(img_shape[:2], dtype=np.int32)
# lbl_ins_pred.fill(-1)
#
# accumulated = []
# for i in keep:
# if i in accumulated:
# continue
#
# roi_mask_probs_cum = np.zeros((height, width, fcis_C + 1),
# dtype=np.float64)
# # cls_score_cum = np.zeros((self.C + 1,), dtype=np.float64)
#
# roi_i = rois[i]
# cls_score_i = cls_scores[i]
# # cls_score_cum += cls_score_i
# roi_mask_prob_i = roi_mask_probs[i]
# x1, y1, x2, y2 = roi_i
# roi_mask_prob_i = np.array([resize_image(m, (y2 - y1, x2 - x1))
# for m in roi_mask_prob_i])
# roi_mask_prob_i = roi_mask_prob_i.transpose(1, 2, 0)
# roi_mask_prob_i = 2 * roi_mask_prob_i - 1
# roi_mask_prob_i *= cls_score_i
# roi_mask_probs_cum[y1:y2, x1:x2] += roi_mask_prob_i
#
# for j in keep:
# roi_j = rois[j]
# if not (0.5 < get_bbox_overlap(roi_i, roi_j) < 1):
# continue
# assert 0.5 < get_bbox_overlap(roi_i, roi_j) < 0.7
# accumulated.append(j)
# cls_score_j = cls_scores[j]
# # cls_score_cum += cls_score_j
# roi_mask_prob_j = roi_mask_probs[j]
# x1, y1, x2, y2 = roi_j
# roi_mask_prob_j = np.array([
# resize_image(m, (y2 - y1, x2 - x1))
# for m in roi_mask_prob_j])
# roi_mask_prob_j = roi_mask_prob_j.transpose(1, 2, 0)
# roi_mask_prob_j = 2 * roi_mask_prob_j - 1
# roi_mask_prob_j *= cls_score_j
# roi_mask_probs_cum[y1:y2, x1:x2] += roi_mask_prob_j
#
# roi_cls = np.argmax(cls_scores[i])
#
# if roi_cls != 0:
# # 1/down_scale
# roi_mask_prob = roi_mask_probs_cum[:, :, roi_cls]
# roi_mask = roi_mask_prob > 0
# # 1/1
# roi_mask = resize_image(
# roi_mask.astype(np.int32),
# img_shape[:2]).astype(bool)
# x1, y1, x2, y2 = roi_i
# lbl_cls_pred[roi_mask] = roi_cls
# lbl_ins_pred[roi_mask] = i
#
# return lbl_ins_pred, lbl_cls_pred | |
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
# implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#
"""
Created on Mon Jan 23 11:18:28 2017
@author: giang nguyen
"""
import numpy as np
import pandas as pd
import math
import itertools
from pprint import pprint
from matplotlib.backends.backend_pdf import PdfPages
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_context("paper")
sns.set(style="whitegrid", color_codes=True)
dataset = 'test2'
_PATH_ = './data/' + dataset + '/'
statdir = './stat_' + dataset + '/'
def draw_len_freq(fni=statdir +'taxo_cartsize_2_n.tsv', fno=statdir +'taxo_cartsize.pdf'):
pp = PdfPages(fno)
plot1 = plt.figure()
plt.rcParams["figure.figsize"] = [5, 3]
# df = rdd.toDF(['len', 'freq']).toPandas() #RDD to Pandas df
df = pd.read_csv(fni, sep='\t', skipfooter=0, engine='python', names=['size', 'freq'])
df.sort_values(by='size', ascending=True, inplace=True)
attr = df['size']
y_pos = np.arange(len(attr))
x_pos = df['freq']
plt.barh(y_pos, x_pos, align='center', alpha=0.65)
plt.axis('tight')
plt.yticks(y_pos, attr)
plt.ylabel('cartsize')
plt.xlabel('freq')
plt.title('dataset=TEST1 cartsize TAXONOMY')
ax = plt.gca()
ax.invert_yaxis()
plt.show()
pp.savefig(plot1)
pp.close()
def merge_predajnost(fn=statdir +'tovar_freq.tsv'): #test2
fn0 = statdir + 'dwh.tsv'
fn1 = statdir + 'tovar_freq_2_7.tsv'
fn2 = statdir + 'tovar_freq_2_7_since2011.tsv'
fn3 = statdir + 'tovar_freq_2_7_since2012.tsv'
fn4 = statdir + 'tovar_freq_2_7_since2013.tsv'
fn5 = statdir + 'tovar_freq_2_7_since2014.tsv'
fn6 = statdir + 'tovar_freq_2_7_since2015.tsv'
df0 = pd.read_csv(fn0, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['tovar', 'nazov', 'popis'])
df1 = pd.read_csv(fn1, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['tovar', 'freq_all'])
df2 = pd.read_csv(fn2, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['tovar', 'fs_2011'])
df3 = pd.read_csv(fn3, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['tovar', 'fs_2012'])
df4 = pd.read_csv(fn4, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['tovar', 'fs_2013'])
df5 = pd.read_csv(fn5, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['tovar', 'fs_2014'])
df6 = pd.read_csv(fn6, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['tovar', 'fs_2015'])
df = df0.merge(df1, on='tovar', how='left')\
.merge(df2, on='tovar', how='left')\
.merge(df3, on='tovar', how='left')\
.merge(df4, on='tovar', how='left')\
.merge(df5, on='tovar', how='left')\
.merge(df6, on='tovar', how='left')
df[['freq_all', 'fs_2011', 'fs_2012', 'fs_2013', 'fs_2014', 'fs_2015']] = \
df[['freq_all', 'fs_2011', 'fs_2012', 'fs_2013', 'fs_2014', 'fs_2015']].fillna(0.0).astype(int)
df = df[['freq_all', 'fs_2011', 'fs_2012', 'fs_2013', 'fs_2014', 'fs_2015', 'tovar', 'nazov', 'popis']]
print(df.dtypes)
df.to_csv(fn, sep='\t', index=False)
#Tukey's test applied for list_of_freq of cart_size
def tukey_outlier_cartsize(k=3, fn=statdir + 'cartsize.tsv'):
df = pd.read_csv(fn, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['cartsize', 'freq'])
size = df['cartsize'].values.tolist()[1:] # without cart_size==1
freq = df['freq'].values.tolist()[1:]
n = sum(freq)
i1 = 0.25*(n-1)
i2 = 0.75*(n-1)
lower = i1
for index, x in enumerate(freq):
lower = lower - x
if lower < 0:
q1 = size[index]
print('=====> q1=', q1, 'index=', index)
break
upper = i2
for index, x in enumerate(freq):
upper = upper - x
if upper < 0:
q2 = size[index]
print('=====> q2=', q2, 'index=', index)
break
iqr = q2 - q1
out = [math.floor(q1 - k*iqr), math.ceil(q2 + k*iqr)]
print('=====> k=', k, 'outlier=', out)  # k=3 outlier= [-1.0, 6.0]
return out
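# The count-based quartile search above can be cross-checked with a small numpy
# sketch (hypothetical helper, not part of this script): expand the (size, freq)
# histogram into samples and take percentiles directly. np.percentile interpolates,
# so the fences may differ slightly from the loop-based cutoffs above.

```python
import math
import numpy as np

def tukey_fence_from_hist(sizes, freqs, k=3):
    # Expand the (size, freq) histogram into individual samples,
    # read Q1/Q3, then compute the Tukey fences [Q1 - k*IQR, Q3 + k*IQR].
    samples = np.repeat(sizes, freqs)
    q1, q3 = np.percentile(samples, [25, 75])
    iqr = q3 - q1
    return [math.floor(q1 - k * iqr), math.ceil(q3 + k * iqr)]
```
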
def csv_to_dict(fni= statdir +'prob.tsv', fno='./prob_tmp.tsv'):
df = pd.read_csv(fni, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['tovar', 'freq', 'prob'])
'''
df = df[['tovar', 'freq']]
total = df['freq'].sum()
df['prob'] = df['freq']/total
df.sort_values(by=['prob'], ascending=False, inplace=True)
df.to_csv(fno, sep='\t', index=False)
print(df.dtypes)
'''
d = df.set_index('tovar').T.to_dict('list')
for k in d:
print(k, ':', d[k])
return d
def read_dwh(fn_dwh=statdir + 'dwh.tsv'):
df_dwh = pd.read_csv(fn_dwh, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['tovar', 'nazov', 'popis'])
df_dwh = df_dwh[['tovar', 'nazov']]
print(df_dwh.dtypes)
return df_dwh
def sort_pair_dwh(fni=statdir +'pair_freq_0007_40989.tsv', fno='./pair_007.tsv'):
df_dwh = read_dwh()
df = pd.read_csv(fni, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['A', 'B', 'freq'])
df = df.merge(df_dwh, how='left', left_on=['A'], right_on=['tovar'])
df = df.drop('tovar', axis=1).rename(columns = {'nazov':'nazov_A'})
df = df.merge(df_dwh, how='left', left_on=['B'], right_on=['tovar'])
df = df.drop('tovar', axis=1).rename(columns = {'nazov':'nazov_B'})
df.sort_values(by='freq', ascending=False, inplace=True)
print(df.dtypes)
df.to_csv(fno, sep='\t', index=False)
return
def get_taxo(fno=statdir +'taxo.tsv'):
df = pd.read_csv('./data/taxonomia-data.tsv', sep='\t', skiprows=0, skipfooter=0, engine='python')
df['category'] = df['cat0.id'].astype(str).str.rjust(2,'0') +\
df['cat1.id'].astype(str).str.rjust(2,'0')
df['TOVAR'] = df['TOVAR'].astype(str)
#dfd = df[['TOVAR', 'category']]
#dtc = dfd.set_index('TOVAR').to_dict()['category']
dfn = df[['category', 'cat1.name']].drop_duplicates()
#dfn.to_csv(fno, sep='\t', index=False, header=False)
return dfn
def merge_pair_taxo(fn=statdir +'taxo_pair_freq.tsv', fno=statdir +'./taxo_pair.tsv'):
dft = pd.read_csv('./data/taxonomia-data.tsv', sep='\t', skiprows=0, skipfooter=0, engine='python')
dft['category'] = dft['cat0.id'].astype(str).str.rjust(2,'0') +\
dft['cat1.id'].astype(str).str.rjust(2,'0')
dft['category'] = dft['category'].astype(str).str.rjust(4,'0')
dft['cat1.name']= dft['cat1.name'].astype(str).str.ljust(20,' ')
dft = dft[['category', 'cat1.name']].drop_duplicates()
df = pd.read_csv(fn, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['A', 'B', 'freq'])
df['A'] = df['A'].astype(str).str.rjust(4,'0')
df['B'] = df['B'].astype(str).str.rjust(4,'0')
df = df.merge(dft, how='left', left_on=['A'], right_on=['category'])
df = df.drop('category', axis=1).rename(columns = {'cat1.name':'cat1_A'})
df = df.merge(dft, how='left', left_on=['B'], right_on=['category'])
df = df.drop('category', axis=1).rename(columns = {'cat1.name':'cat1_B'})
'''
with pd.option_context('display.max_rows', None, 'display.max_columns', 10):
print(df)
'''
df.sort_values(by='freq', ascending=False, inplace=True)
print(df.dtypes)
df.to_csv(fno, sep='\t', index=False)
return
def merge_stat(fni, fno):
df_dwh = read_dwh()
df_fni = pd.read_csv(fni, sep='\t', skiprows=0, skipfooter=0, engine='python', names=['tovar', 'freq'])
print(df_fni.dtypes)
df = df_fni.merge(df_dwh, how='left', on=['tovar'])
print(df.dtypes)
df.to_csv(fno, sep='\t', index=False)
def subset(arr, len_max=6):
if len(arr) < len_max:
loss = []
for i in range(1, len(arr)+1):
for subset in itertools.combinations(arr, i):
loss.append(subset)
else:
loss = [ arr ]
return loss
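# For small carts, subset() above enumerates every non-empty subset; a standalone
# sketch of the same enumeration (hypothetical helper), whose output length is
# 2**n - 1 for n items:

```python
import itertools

def all_nonempty_subsets(arr):
    # every combination of every length 1..len(arr), as in subset() above
    out = []
    for i in range(1, len(arr) + 1):
        out.extend(itertools.combinations(arr, i))
    return out

subs = all_nonempty_subsets([1, 2, 3])
# a set of n items has 2**n - 1 non-empty subsets
```
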
def pokladna_only(statdir, fno):
df1 = pd.read_csv(statdir +'pokladna1.tsv', sep='\t', skiprows=0, skipfooter=0, engine='python')
df2 = pd.read_csv(statdir +'pokladna2.tsv', sep='\t', skiprows=0, skipfooter=0, engine='python')
df3 = pd.read_csv(statdir +'pokladna3.tsv', sep='\t', skiprows=0, skipfooter=0, engine='python')
l1 = df1['tovar'].values
l2 = df2['tovar'].values
l3 = df3['tovar'].values
s = set(l3) - set(l2) - set(l1)
dfs = pd.DataFrame(list(s), columns=['tovar'])
df = dfs.merge(df3, how='left', on=['tovar'])
df.sort_values(by='freq', ascending=False, inplace=True)
df.to_csv(fno, sep='\t', index=False)
print(len(s))
def test2_taxo(fno=statdir+'test2_taxo_dwh.tsv'):
df1 = pd.read_csv(statdir +'test2_dwh.tsv', sep='\t', skiprows=0, skipfooter=0, engine='python', names=['tovar', 'nazov', 'popis'])
df2 = pd.read_csv(statdir +'test2_taxo.txt', sep=',', skiprows=0, skipfooter=0, engine='python', names=['cat0.id', 'cat1.id', 'tovar'])
df = df1.merge(df2, on='tovar', how='left')
df['cat0.name'] = 'cn0_' + df['cat0.id'].astype(str)
df['cat1.name'] = 'cn1_' + df['cat1.id'].astype(str)
df['SKLAD'] = '0'
df = df[['cat0.id', 'cat1.id', 'cat0.name', 'cat1.name', 'SKLAD', 'tovar', 'nazov', 'popis']]
df.rename(columns={'tovar': 'TOVAR', 'nazov': 'NAZOV', 'popis': 'POPIS'}, inplace=True)
df.sort_values(['cat0.id', 'cat1.id', 'TOVAR'], ascending=True, inplace=True)
df.to_csv(fno, sep='\t', index=False, header=True)
def main(argv):
#draw_len_freq(statdir +'taxo_cartsize_2_n.tsv', statdir +'taxo_cartsize_2_n.pdf')
#tukey_outlier_cartsize(3, statdir +'cartsize.tsv')
#merge_pair_taxo()
#merge_predajnost()
#csv_to_dict()
#sort_pair_dwh()
#get_taxo()
#merge_stat(statdir +'freq_item.tsv', statdir +'freq_item_dwh.tsv')
#pokladna_only(statdir, statdir +'pokladna3-only-tovar.tsv')
print(subset([1, 2, 3, 4], len_max=6))
test2_taxo()
if __name__ == "__main__":
import argparse
parser = argparse.ArgumentParser(description='Datapac', epilog='---')
parser.add_argument("--output",
default='./datapac_stats',
dest="outFN", help="output_file", metavar="FILENAME")
parser.add_argument("--log",
default='./datapac_log',
dest="logFN", help="log_file", metavar="FILENAME")
args = parser.parse_args()
main(args)
import json
from os.path import abspath, dirname, exists, join
import argparse
import logging
from tqdm import trange
import tqdm
import torch
import torch.nn.functional as F
import numpy as np
import socket
import os, sys
import re
from functools import partial
from demo_utils import download_model_folder
import subprocess as sp
from pytorch_pretrained_bert import GPT2LMHeadModel, GPT2Tokenizer, GPT2Config
from gpt2_training.train_utils import get_eval_list_same_length, load_model, boolean_string, fix_state_dict_namespace
logging.basicConfig(format = '%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt = '%m/%d/%Y %H:%M:%S',
level = logging.INFO)
logger = logging.getLogger(__name__)
EOS_ID = 50256
def cut_seq_to_eos(sentence, remove_id=[-1]):
sent=[]
for s in sentence:
if s in remove_id:
continue
if s != EOS_ID:
sent.append(s)
else:
break
return sent
### FROM HUGGING FACE REPO
def top_filtering(logits, top_k=0, top_p=0.0, threshold=-float('Inf'), filter_value=-float('Inf')):
""" Filter a distribution of logits using top-k, top-p (nucleus) and/or threshold filtering
Args:
logits: logits distribution shape (vocabulary size)
top_k: <=0: no filtering, >0: keep only top k tokens with highest probability.
top_p: <=0.0: no filtering, >0.0: keep only a subset S of candidates, where S is the smallest subset
whose total probability mass is greater than or equal to the threshold top_p.
In practice, we select the highest probability tokens whose cumulative probability mass exceeds
the threshold top_p.
threshold: a minimal threshold to keep logits
"""
assert logits.dim() == 1 # Only work for batch size 1 for now - could update but it would obfuscate a bit the code
top_k = min(top_k, logits.size(-1))
if top_k > 0:
# Remove all tokens with a probability less than the last token in the top-k tokens
indices_to_remove = logits < torch.topk(logits, top_k)[0][..., -1, None]
logits[indices_to_remove] = filter_value
if top_p > 0.0:
# Compute cumulative probabilities of sorted tokens
sorted_logits, sorted_indices = torch.sort(logits, descending=True)
cumulative_probabilities = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
# Remove tokens with cumulative probability above the threshold
sorted_indices_to_remove = cumulative_probabilities > top_p
# Shift the indices to the right to keep also the first token above the threshold
sorted_indices_to_remove[..., 1:] = sorted_indices_to_remove[..., :-1].clone()
sorted_indices_to_remove[..., 0] = 0
# Back to unsorted indices and set them to -infinity
indices_to_remove = sorted_indices[sorted_indices_to_remove]
logits[indices_to_remove] = filter_value
indices_to_remove = logits < threshold
logits[indices_to_remove] = filter_value
return logits
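# The top-p branch above can be illustrated without torch; this numpy sketch
# (hypothetical helper, assumed equivalent up to the framework) mirrors the
# sort / cumsum / shift-right trick on a tiny vocabulary:

```python
import numpy as np

def nucleus_filter(logits, top_p=0.9, filter_value=-np.inf):
    # sort descending, accumulate probability mass, and mask every token
    # after the cumulative mass first exceeds top_p (keeping that token),
    # as in the shift-right step of top_filtering above
    order = np.argsort(logits)[::-1]
    probs = np.exp(logits[order] - logits[order].max())
    probs /= probs.sum()
    cum = np.cumsum(probs)
    remove = np.concatenate(([False], (cum > top_p)[:-1]))
    out = logits.copy()
    out[order[remove]] = filter_value
    return out
```
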
def generate_next_token(model, input_ids, position_ids=None, token_type_ids=None, prev=None, temperature=1, top_k=0, top_p=0, past=None):
with torch.no_grad():
if not past:
hidden_states, past = model.transformer(prev, position_ids, token_type_ids, past=past)
else:
hidden_states, past = model.transformer(prev, past=past)
logits = model.lm_head(hidden_states)
logits = logits[0, -1, :] / temperature
logits = top_filtering(logits, top_k=top_k, top_p=top_p)
probs = F.softmax(logits.unsqueeze(0), dim=-1)
prev = torch.multinomial(probs, num_samples=1)
return prev, probs[0][prev], past
def generate_sequence(model, input_ids, position_ids=None, token_type_ids=None, temperature=1, top_k=0, top_p=0, length=20, past=None, device='cuda'):
output = input_ids.new_zeros([input_ids.size(0),0])
prev = input_ids
for i in range(length):
prev, probs, past = generate_next_token(model, input_ids, position_ids, token_type_ids, prev, temperature, top_k, top_p, past)
output = torch.cat((output, prev), dim=1)
return output
def run_model():
parser = argparse.ArgumentParser()
parser.add_argument('--model_name_or_path', type=str, default='', help='pretrained model name or path to local checkpoint')
parser.add_argument("--seed", type=int, default=42)
parser.add_argument("--load_checkpoint", '-c', type=str, default='')
parser.add_argument("--fp16", type=boolean_string, default=False)
parser.add_argument("--max_seq_length", type=int, default=128)
parser.add_argument("--generation_length", type=int, default=20)
parser.add_argument("--max_history", type=int, default=2)
parser.add_argument("--temperature", type=float, default=1)
parser.add_argument("--top_k", type=int, default=0)
parser.add_argument("--top_p", type=float, default=0.9)
parser.add_argument('--use_gpu', action='store_true')
parser.add_argument("--gpu", type=int, default=0)
args = parser.parse_args()
os.environ['CUDA_VISIBLE_DEVICES'] = str(args.gpu)
device = torch.device("cuda" if torch.cuda.is_available() and args.use_gpu else "cpu")
n_gpu = torch.cuda.device_count()
args.device, args.n_gpu = device, n_gpu
np.random.seed(args.seed)
torch.random.manual_seed(args.seed)
torch.cuda.manual_seed(args.seed)
#### load the GPT-2 model
config = GPT2Config.from_json_file(os.path.join(args.model_name_or_path, '../config.json'))
enc = GPT2Tokenizer.from_pretrained(args.model_name_or_path)
model = load_model(GPT2LMHeadModel(config), args.load_checkpoint, args, verbose=True)
model.to(device)
model.eval()
history = []
while True:
raw_text = input("USR >>> ")
while not raw_text:
print('Prompt should not be empty!')
raw_text = input("USR >>> ")
history.append(raw_text)
context_tokens = sum([enc.encode(h) + [EOS_ID] for h in history],[]) #+ [EOS_ID]
context_tokens = torch.tensor(context_tokens, device=device, dtype=torch.long).unsqueeze(0)
position_ids = torch.arange(0, context_tokens.size(-1), dtype=torch.long, device=context_tokens.device)
out = generate_sequence(model, context_tokens, position_ids=position_ids,
length=args.generation_length, temperature=args.temperature,
top_k=args.top_k, top_p= args.top_p)
out = out.tolist()
text = enc.decode(cut_seq_to_eos(out[0])).encode('ascii','ignore').decode('ascii')
print("SYS >>> ", text)
history.append(text)
history = history[-(2*args.max_history+1):]
if __name__ == '__main__':
PYTHON_EXE = 'python'
MODEL_FOLDER = '../models'
DATA_FOLDER = '../data'
logging.basicConfig(
format='%(asctime)s - %(levelname)s - %(name)s - %(message)s',
datefmt='%m/%d/%Y %H:%M:%S', level=logging.INFO
)
logger = logging.getLogger(__name__)
if os.path.exists(MODEL_FOLDER):
print('Found existing ../models folder, skip creating a new one!')
os.makedirs(MODEL_FOLDER, exist_ok=True)
#########################################################################
# Download Model
#########################################################################
logger.info('Downloading models...')
download_model = partial(download_model_folder, DATA_FOLDER=MODEL_FOLDER)
# model size: could be one of 'small' (GPT2 with 117M), 'medium'(345M) or 'large' (1542M)
# dataset: one of 'multiref' or 'dstc'
# from_scratch: True : load model trained from scratch or False: load model trained from fine-tuning the GPT-2
target_folder = download_model(model_size='medium', dataset='multiref', from_scratch=False)
logger.info('Done!\n')
run_model()
import pytesseract as pt
import pdf2image
import nltk
from nltk.tokenize import sent_tokenize
from nltk.tokenize import word_tokenize
# from transformers import T5Tokenizer, T5Config, T5ForConditionalGeneration
import os
import yake
# from transformers import AutoTokenizer, AutoModelForPreTraining, AutoModel
from summarizer import Summarizer, TransformerSummarizer
from transformers import AutoConfig, AutoTokenizer, AutoModel, pipeline
import numpy as np
# from https://theaidigest.in/summarize-text-document-using-transformers-and-bert/
# First, some setup
# Instantiating the model and tokenizer with gpt-2
# tokenizer=GPT2Tokenizer.from_pretrained('gpt2')
# model=GPT2LMHeadModel.from_pretrained('gpt2')
# Instantiating the model and tokenizer with Google's T5
# model_name = 't5-base'
# model_name = 'SEBIS/legal_t5_small_summ_en'
# t5_model = T5ForConditionalGeneration.from_pretrained(model_name)
# tokenizer_t5 = T5Tokenizer.from_pretrained(model_name)
# This uses the bert-large-uncased model
# bert_legal_model = Summarizer()
# result = model(text, min_length=60, ratio=0.01)
# Load model, model config and tokenizer via Transformers
# Change model as you see fit, see huggingface.co
# model_name = 'distilbert-base-uncased'
# model_name = 'nlpaueb/legal-bert-base-uncased'
model_name = 'laxya007/gpt2_legal'
# model_name = 'facebook/bart-large-cnn'
# The setup of huggingface.co
custom_config = AutoConfig.from_pretrained(model_name)
custom_config.output_hidden_states=True
custom_tokenizer = AutoTokenizer.from_pretrained(model_name)
custom_model = AutoModel.from_pretrained(model_name, config=custom_config)
bert_legal_model = Summarizer(custom_model=custom_model, custom_tokenizer=custom_tokenizer)
print('Using model {}\n'.format(model_name))
# setup FB Classifier
classifier = pipeline("zero-shot-classification",
model="facebook/bart-large-mnli")
candidate_labels = ['employment', 'confidentiality', 'NDA', 'partnership', 'contractor', 'referral', 'tax']
# model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
# padding = "max_length"
# YAKE
# Yet Another Keyword Extractor (Yake) library selects the most important keywords using
# the text statistical features method from the article. With the help of YAKE, you can
# control the extracted keyword word count and other features.
# YAKE keyword extractor settings
kw_extractor = yake.KeywordExtractor()
language = "en"
max_ngram_size = 1
deduplication_threshold = 0.9
numOfKeywords = 10
custom_kw_extractor = yake.KeywordExtractor(lan=language, n=max_ngram_size, dedupLim=deduplication_threshold, top=numOfKeywords, features=None)
# nltk settings
# nltk automatically checks if already downloaded so don't worry
nltk.download('punkt')
# Then the app...
#
# Turn the PDF into images
# We do not want images to be too big, dpi=300?
# All our images should have the same size (depends on dpi), width=1654 and height=2340
# these settings leads to an image at about 500kB
# TODO: keep tabs on how many pages in total have been processed
path = os.getcwd()
folder_name = 'pdfs'
path = os.path.join(path, folder_name)
list_of_files = []
for root, dirs, files in os.walk(path):
for file in files:
if(file.endswith(".pdf")):
# print(os.path.join(root,file))
list_of_files.append(os.path.join(root,file))
print("\nProcessing {} files...\n".format(len(list_of_files)))
total_pages = 0
for filename in list_of_files:
print(filename)
file = os.path.splitext(os.path.basename(filename))[0]
pages = pdf2image.convert_from_path(pdf_path=filename, dpi=400, size=(1654,2340))
total_pages += len(pages)
print("\nProcessing the next {} pages...\n".format(len(pages)))
# Then save all pages as images and convert them to text except the last page
# TODO: create this as a function
content = ""
dir_name = 'images/' + file + '/'
os.makedirs(dir_name, exist_ok=True)
# If folder doesn't exist, then create it.
for i in range(len(pages)-1):
pages[i].save(dir_name + str(i) + '.jpg')
# OCR the image using Google's tesseract
content += pt.image_to_string(pages[i])
# 'content' is now a large set of paragraphs for each PDF file, let's loop over them and summarise
# with the T5 model. A paragraph is assumed \n\n which is obviously wrong
# TODO: use nltk TextTilingTokenizer (?) for cleaner paragraph detection? and clean up
# using stopwords etc.?
# An alternative is to do each sentence and summarise them into mini sentences... but
# that would probably discard some of the context the GAN needs?
# enumerating just in case we need it
summary_text = ""
for i, paragraph in enumerate(content.split("\n\n")):
# use NLTK to prettify and detect sentences
# paragraph = str(sent_tokenize(paragraph))
# get rid of intra newlines and tabs
# get rid of empty paragraphs and one word paras and extra whitespaces
paragraph = paragraph.replace('\n',' ')
paragraph = paragraph.replace('\t','')
paragraph = ' '.join(paragraph.split())
# count words in the paragraph and exclude if less than 4 words
tokens = word_tokenize(paragraph)
# only do real words
tokens = [word for word in tokens if word.isalpha()]
# print("\nTokens: {}\n".format(len(tokens)))
# only do sentences with more than 1 words excl. alpha crap
if len(tokens) <= 1:
continue
# Perhaps also ignore paragraphs with no sentence?
sentences = sent_tokenize(paragraph)
# print("\nSentences: {}\n".format(len(sentences)))
# if len(sentences) == 0:
# continue
# recreate paragraph from the only words tokens list
paragraph = ' '.join(tokens)
print("\nParagraph:")
print(paragraph+"\n")
# T5 needs to have 'summarize' in order to work:
# text = "summarize:" + paragraph
text = paragraph
# encoding the input text
# input_ids=tokenizer_t5.encode(text, return_tensors='pt')
# input_ids=tokenizer_t5.encode(text, return_tensors='pt', max_length=512)
# Generating summary ids
# TODO: understand hyperparameters incl. early_stopping
# hyperparameters inspired by sentence length https://prowritingaid.com/Sentence-Length-Average
# summary_ids = t5_model.generate(input_ids,
# num_beams=4,
# no_repeat_ngram_size=2,
# min_length=1,
# max_length=30,
# early_stopping=False)
# Decoding the tensor and printing the summary
# t5_summary = tokenizer_t5.decode(summary_ids[0], skip_special_tokens=True)
# input_tokenized = custom_tokenizer.encode(text, return_tensors='pt',padding=padding,pad_to_max_length=True, max_length=6144,truncation=True)
# summary_ids = model.generate(input_tokenized,
# num_beams=4,
# no_repeat_ngram_size=3,
# length_penalty=2,
# min_length=10,
# max_length=500)
# summary = [custom_tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0]
summary = bert_legal_model(text, ratio = 0.01)
# summary = tokenizer_t5.decode(summary_ids[0], skip_special_tokens=True)
summary_text += str(summary) + "\n\n"
print("Summary:")
print(summary)
# Summary of concatenated summaries
# TODO: clean text for \n and stop words?
# text = "summarize:" + t5_text
# input_ids=tokenizer_t5.encode(text, return_tensors='pt')
# Generating summary ids
# summary_ids = t5_model.generate(input_ids,
# num_beams=4,
# no_repeat_ngram_size=2,
# min_length=20,
# max_length=1000,
# early_stopping=False)
# t5_summary = tokenizer_t5.decode(summary_ids[0], skip_special_tokens=True)
summary = bert_legal_model(content, ratio=0.1)
# input_tokenized = custom_tokenizer.encode(content, return_tensors='pt',padding=padding,pad_to_max_length=True, max_length=6144,truncation=True)
# summary_ids = model.generate(input_tokenized,
# num_beams=4,
# no_repeat_ngram_size=3,
# length_penalty=2,
# min_length=150,
# max_length=500)
# summary = [custom_tokenizer.decode(g, skip_special_tokens=True, clean_up_tokenization_spaces=False) for g in summary_ids][0]
# print("\nT5 complete summary:")
# print(t5_summary)
# print("\nT5 detailed summary:")
# print(t5_text)
# Extract keywords from all content
# TODO: clean result for things such as agreement, Andrew, Martin, ...
# Also, not sure if this should be done on as much or as little corpus as possible?
keywords = custom_kw_extractor.extract_keywords(content)
keywords2 = classifier(content, candidate_labels, multi_label=True)
keyword_list = ""
print("\nKeywords:")
for kw in keywords:
keyword_list += str(kw[0]).lower() + " with prob " + str(kw[1]) + "\n"
print(keyword_list)
# make a dictionary of the keywords and choose the top
combined_keywords = dict(zip(keywords2['labels'],keywords2['scores']))
top_keywords = dict((k, v) for k, v in combined_keywords.items() if v > 0.4)
print(top_keywords)
# write all to file for inspection and storage
all_text = "-------- The Keywords --------\n" + str(keyword_list) + str(top_keywords) + "\n\n\n" \
+ "-------- The Summary --------\n" + str(summary) + "\n\n\n" \
+ "-------- The Larger Summary --------\n" + str(summary_text) + "\n\n\n" \
+ "-------- The Original Content --------\n" + str(content)
with open('summaries/'+file+'-summary.txt', 'w') as f:
f.write(all_text)
# TODO: extract topic clusters see https://towardsdatascience.com/nlp-for-topic-modeling-summarization-of-legal-documents-8c89393b1534
# TODO: word2vec comparing documents and graphing them
# TODO: store summaries and topic clusters in a db with a link to the orig doc
import nltk
import csv
import datetime
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
now = datetime.datetime.now()
today = now.strftime("%Y-%m-%d")
dTrading = 'C:/Users/vitor/Documents/GetDataset/TradingView/'
# Resultados SentiLex
rSentilex = open(dTrading + today +'/LexiconTradingSentilex.csv', 'r', encoding='utf8')
posInv = 0
neuInv = 0
negInv = 0
for t in rSentilex.readlines():
if 'Positivo' in t:
posInv += 1
if 'Neutro' in t:
neuInv += 1
if 'Negativo' in t:
negInv += 1
print('Sentilex Pos ', posInv)
print('Sentilex Neu ', neuInv)
print('Sentilex Neg ', negInv)
# Resultados OpLexicon
rOplexicon = open(dTrading + today +'/LexiconTradingOpLexicon.csv', 'r', encoding='utf8')
posInf = 0
neuInf = 0
negInf = 0
for t in rOplexicon.readlines():
if 'Positivo' in t:
posInf += 1
if 'Neutro' in t:
neuInf += 1
if 'Negativo' in t:
negInf += 1
print('OpLexicon Pos ', posInf)
print('OpLexicon Neu ', neuInf)
print('OpLexicon Neg ', negInf)
# Resultados Finance
rFinance = open(dTrading + today +'/LexiconTradingFinance.csv', 'r', encoding='utf8')
posTrd = 0
neuTrd = 0
negTrd = 0
for t in rFinance.readlines():
if 'Positivo' in t:
posTrd += 1
if 'Neutro' in t:
neuTrd += 1
if 'Negativo' in t:
negTrd += 1
print('Finance Pos ', posTrd)
print('Finance Neu ', neuTrd)
print('Finance Neg ', negTrd)
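# The three counting loops above repeat one pattern; a refactoring sketch
# (hypothetical helper, not wired into this script) using collections.Counter
# tallies all three labels in a single pass over a file:

```python
from collections import Counter

def count_labels(lines):
    # tally Positivo/Neutro/Negativo occurrences in one pass
    c = Counter()
    for line in lines:
        for label in ('Positivo', 'Neutro', 'Negativo'):
            if label in line:
                c[label] += 1
    return c['Positivo'], c['Neutro'], c['Negativo']
```
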
raw_data = {'Fonte de Dados': ['SentiLex', 'OpLexicon', 'Finance'],
'Pos': [posInv, posInf, posTrd],
'Neu': [neuInv, neuInf, neuTrd],
'Neg': [negInv, negInf, negTrd]}
df = pd.DataFrame(raw_data, columns = ['Fonte de Dados', 'Pos', 'Neu', 'Neg'])
df
# Setting the positions and width for the bars
pos = list(range(len(df['Pos'])))
width = 0.25
fig, ax = plt.subplots(figsize=(10,5))
# Create a bar with pre_score data, # in position pos,
plt.bar(pos, df['Pos'], width, alpha=0.5, color='#EE3224', label=df['Fonte de Dados'][0])
# Create a bar with mid_score data, # in position pos + some width buffer,
plt.bar([p + width for p in pos], df['Neu'], width, alpha=0.5, color='#F78F1E', label=df['Fonte de Dados'][1])
# Create a bar with post_score data, # in position pos + some width buffer,
plt.bar([p + width*2 for p in pos], df['Neg'], width, alpha=0.5, color='#FFC222', label=df['Fonte de Dados'][2])
ax.set_title("Abordagem geral")
ax.set_ylabel('N° de Textos')
ax.set_xticks([p + 1 * width for p in pos])
ax.set_xticklabels(df['Fonte de Dados'])
plt.xlim(min(pos)-width, max(pos)+width*4)
plt.ylim([0, max(df['Pos'] + df['Neu'] + df['Neg'])] )
plt.legend(['Positivo', 'Neutro', 'Negativo'], loc='upper left')
plt.grid()
plt.show()
#!/usr/bin/env python
"""
A simple example from Stan. The model is written in NumPy/SciPy.
Probability model
Prior: Beta
Likelihood: Bernoulli
Variational model
Likelihood: Mean-field Beta
"""
import edward as ed
import numpy as np
from edward import PythonModel
from edward.variationals import Variational, Beta
from scipy.stats import beta, bernoulli
class BetaBernoulli(PythonModel):
"""
p(x, z) = Bernoulli(x | z) * Beta(z | 1, 1)
"""
def __init__(self):
self.num_vars = 1
def _py_log_prob(self, xs, zs):
# This example is written for pedagogy. We recommend
# vectorizing operations in practice.
n_minibatch = zs.shape[0]
lp = np.zeros(n_minibatch, dtype=np.float32)
for b in range(n_minibatch):
lp[b] = beta.logpdf(zs[b, :], a=1.0, b=1.0)
for n in range(len(xs)):
lp[b] += bernoulli.logpmf(xs[n], p=zs[b, :])
return lp
ed.set_seed(42)
model = BetaBernoulli()
variational = Variational()
variational.add(Beta(model.num_vars))
data = ed.Data(np.array([0, 1, 0, 0, 0, 0, 0, 0, 0, 1]))
inference = ed.MFVI(model, variational, data)
inference.run(n_iter=10000)
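# For this conjugate model the exact posterior is available in closed form,
# Beta(1 + sum(x), 1 + n - sum(x)), which the mean-field fit above should
# approximate; a sketch with the same data serves as a sanity check:

```python
# Beta(1, 1) prior + Bernoulli observations x gives the exact posterior
# Beta(1 + sum(x), 1 + n - sum(x)).
x = [0, 1, 0, 0, 0, 0, 0, 0, 0, 1]
a_post = 1.0 + sum(x)                        # 3.0
b_post = 1.0 + len(x) - sum(x)               # 9.0
posterior_mean = a_post / (a_post + b_post)  # 0.25
```
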
import torch
import os
import shutil
import functools
import numpy as np
from PIL import Image, ImageOps, ImageEnhance, ImageFilter
from torchvision import transforms
import torchvision.transforms.functional as F
MASKS = {'background': -1, 'robot': 0, 'table': 1, 'cage': 2}
PROBS = [1 / 3, 2 / 3, 1]
class ImageTransform:
"""Image transformation attributes: name, magnitude, probability"""
# static attribute
transforms = {
'identity': {
'magnitude': [1],
'proba': [1],
},
'white_noise': {
'magnitude': [0.04, 0.08],
'proba': PROBS,
},
'black_noise': {
'magnitude': [0.01, 0.03],
'proba': PROBS,
},
'edge_noise': {
'magnitude': [(['table', 'robot'], 2), (['table', 'robot'], 3),
(['table', 'robot'], 4)],
'proba':
PROBS,
},
'remove_object': {
'magnitude': [('table', ), ('background', 'cage'),
('background', 'cage', 'table')],
'proba':
PROBS,
},
'scale': {
'magnitude': [0.03, 0.05],
'proba': PROBS,
},
'sharpness': {
'magnitude': [0.5, 1.],
'proba': PROBS,
},
'cutout': {
'magnitude': [1, 3, 5],
'proba': PROBS
},
'invert': {
'magnitude': [1],
'proba': PROBS
},
'posterize': {
'magnitude': [5, 6],
'proba': PROBS,
},
'affine': {
'magnitude': [(5, 0.04), (9, 0.07)],
'proba': PROBS,
},
'contrast': {
'magnitude': [0.5, 2.],
'proba': PROBS,
},
'autocontrast': {
'magnitude': [1],
'proba': PROBS
},
'equalize': {
'magnitude': [1],
'proba': PROBS
},
}
def __init__(self, transform_name):
self.attribute_child = {
'name': 'magnitude',
'magnitude': 'proba',
'proba': 'name'
}
assert transform_name in self.transforms
self.name = transform_name
self.transform = self.transforms[self.name]
def attribute2range(self, attribute):
assert attribute in ['name', 'magnitude', 'proba'], attribute
if attribute == 'name':
return sorted(self.transforms)
else:
return self.transform[attribute]
def attribute2child(self, attribute):
return self.attribute_child[attribute]
# transformations is a list of (operation, magnitude_range, probability_range)
# probability is always in [0, 1]
# frame and mask are tensors with the shapes (1 x 224 x 224)
@staticmethod
def sample_params(name_transformation, magn, img_size):
assert hasattr(ImageTransform, name_transformation)
params = {}
if name_transformation == 'affine':
degree, translate = magn
ret = transforms.RandomAffine.get_params(
degrees=(-degree, degree),
translate=(translate, translate),
scale_ranges=None,
shears=None,
img_size=img_size)
params['affine'] = ret
elif name_transformation == 'white_noise':
params['amplitude'] = magn
elif name_transformation == 'scale':
a_min, a_max = 1 - magn, 1 + magn
b_min, b_max = -magn, magn
alpha, beta = torch.rand((2,))
a = (1 - alpha) * a_min + alpha * a_max
b = (1 - beta) * b_min + beta * b_max
params['a'], params['b'] = a, b
elif name_transformation == 'remove_object':
masks_list = magn
remove_tosses = torch.rand(len(masks_list))
masks_int = []
for mask_str, toss in zip(masks_list, remove_tosses):
if float(toss) < 1 / 2:
masks_int.append(int(MASKS[str(mask_str)]))
params['objects_to_remove'] = masks_int
elif name_transformation == 'cropping':
th, tw = magn
params['output_size'] = th, tw
w, h = img_size
# magn is equal to output size
if w == tw and h == th:
params['cropping'] = 0, 0, h, w
else:
i = np.random.randint(0, h - th)
j = np.random.randint(0, w - tw)
params['cropping'] = i, j, th, tw
return params
@staticmethod
def identity(frame, mask, magn, unused_params):
# identity, no transformation applied to frame
return frame, mask
@staticmethod
def white_noise(frame, unused_mask, unused_magn, params):
# magnitude of 0.04 should do this:
# frame += 0.04 * torch.rand(1) * 2 * (torch.rand(img.shape) - 0.5)
frame = frame + params['amplitude'] * 2 * (torch.rand(frame.shape) - 0.5)
return frame, unused_mask
@staticmethod
def black_noise(frame, unused_mask, magn, unused_params):
# randomly put pixels to 1
# magnitude of 0.01 should do this:
# mask_bernoulli = torch.bernoulli(0.01 * torch.ones_like(frame))
# frame[mask_bernoulli == 1] = 1
mask_bernoulli = torch.bernoulli(magn * torch.ones_like(frame))
frame[mask_bernoulli == 1] = 1
return frame, unused_mask
@staticmethod
def scale(frame, unused_mask, unused_magn, params):
# magnitude of 0.1 should do this:
# frame *= 0.9 + 0.2 * torch.rand(1)
# frame = (1+2*(torch.rand(1)-0.5) * magn) * frame # + magn * torch.rand(1)
# frame = a * frame + b
frame = params['a'] * frame
return frame, unused_mask
@staticmethod
def remove_object(frame, mask, unused_magn, params):
# set pixels whose mask value is in params['objects_to_remove'] to 1 (max value)
for mask_int in params['objects_to_remove']:
frame[mask == mask_int] = 1
return frame, mask
@staticmethod
def cutout(frame, unused_mask, magn, unused_params):
# cut out {0, ..., magn} random rectangles from the image
        num_times = int(torch.randint(int(magn) + 1, (1, )))
for _ in range(num_times):
size = int(frame.shape[1])
x_pos, y_pos = torch.randint(size, (2, )).type(torch.int)
x_size, y_size = torch.randint(48, (2, )).type(torch.int)
x_size = min(x_pos + x_size, size) - x_pos
y_size = min(y_pos + y_size, size) - y_pos
if x_size > 0 and y_size > 0:
frame[0, y_pos:y_pos + y_size, x_pos:x_pos +
x_size] = torch.rand(1)
return frame, unused_mask
@staticmethod
def edge_noise(frame, mask, magn, unused_params):
# randomly put pixels on object edges to 1
# magn defines the object on which to apply edge noise
# magn_noise defines the max side of the removed rectangles at the edge
# magn_noise = 4
masks_list, magn_noise = magn
pixels_edge = np.zeros((0, 2))
for mask_str in masks_list:
mask_noise = MASKS[mask_str]
size = frame.shape[1]
mask_object = Image.fromarray(
((mask == mask_noise)[0] * 255).numpy().astype(np.uint8))
im_edge = np.asarray(mask_object.filter(ImageFilter.FIND_EDGES))
pixels_edge = np.vstack((pixels_edge,
np.array(np.where(im_edge == 255)).T))
num_pixels_edge = pixels_edge.shape[0]
sizes_noise = np.random.randint(0, magn_noise,
(num_pixels_edge, 2))
        yx_mins = np.clip(pixels_edge - sizes_noise / 2, 0,
                          size).astype(int)  # int, not uint8: coordinates can exceed 255
        yx_maxs = np.clip(pixels_edge + sizes_noise / 2, 0,
                          size).astype(int)
for yx_min, yx_max in zip(yx_mins, yx_maxs):
y_min, x_min = yx_min
y_max, x_max = yx_max
if y_max > y_min and x_max > x_min:
frame[0, y_min:y_max, x_min:x_max] = 1
return frame, mask
@staticmethod
def black_image(frame, unused_mask, magn, unused_params):
# dummy
frame = torch.zeros_like(frame)
return frame, unused_mask
@staticmethod
def _op_pil(op_name, frame, magn=None):
# apply PIL.ImageOps.op_name to the frame with the given magnitude
frame_numpy = (frame * 255).cpu().numpy().astype(np.uint8)[0]
op_pil = getattr(ImageOps, op_name)
if magn is not None:
frame_pil = op_pil(Image.fromarray(frame_numpy), magn)
else:
frame_pil = op_pil(Image.fromarray(frame_numpy))
frame = torch.tensor(
np.array(frame_pil)).type_as(frame)[None] / 255
return frame
@staticmethod
def _enhance_pil(enhance_name, frame, magn):
# apply PIL.ImageEnhance.enhance_name to the frame with the given magnitude
# final magn should be randomized in [1, 1 + magn]
magn_min, magn_max = 1, 1 + magn
magn = (magn_max - magn_min) * torch.rand((1, )) + magn_min
frame_numpy = (frame * 255).cpu().numpy().astype(np.uint8)[0]
pil_op = getattr(ImageEnhance,
enhance_name)(Image.fromarray(frame_numpy))
frame_pil = pil_op.enhance(magn)
frame = torch.tensor(np.array(frame_pil)).type_as(frame)[None] / 255
return frame
@staticmethod
def posterize(frame, unused_mask, magn, unused_params):
        # posterize: reduce the image to a random number of bits in [magn, 8] per pixel
magn = torch.randint(
low=int(magn), high=9, size=(1, )).type(torch.int).item()
return ImageTransform._op_pil('posterize', frame, magn), unused_mask
@staticmethod
def autocontrast(frame, unused_mask, unused_magn, unused_params):
return ImageTransform._op_pil('autocontrast', frame), unused_mask
@staticmethod
def invert(frame, unused_mask, unused_magn, unused_params):
return 1 - frame, unused_mask
@staticmethod
def equalize(frame, unused_mask, unused_magn, unused_params):
return ImageTransform._op_pil('equalize', frame), unused_mask
@staticmethod
def sharpness(frame, unused_mask, magn, unused_params):
# change the sharpness of the image
# 0 is a strong blur, 1 is the original image, 2 is a strong sharpness
return ImageTransform._enhance_pil('Sharpness', frame, magn), unused_mask
@staticmethod
def brightness(frame, unused_mask, magn, unused_params):
# change the brightness of the image
# 0 is gray image, 1 is the original image, 2 is a strong brightness
return ImageTransform._enhance_pil('Brightness', frame, magn), unused_mask
@staticmethod
def contrast(frame, unused_mask, magn, unused_params):
# change the contrast of the image
# 0 is a gray image, 1 is the original image, 2 is a strong contrast
return ImageTransform._enhance_pil('Contrast', frame, magn), unused_mask
@staticmethod
def affine(frame, mask, unused_magn, params):
# random affine translation and rotation of the frame and mask
# get parameters of the transform first and apply it
# to ensure same transform is applied to frame and mask
frame = F.to_pil_image(frame)
# mask = F.to_pil_image(mask+1)
frame = F.affine(frame, *params['affine'], resample=False, fillcolor=255)
# mask = F.affine(mask, *ret, resample=False, fillcolor=255)
frame = F.to_tensor(frame)
return frame, mask
@staticmethod
def cropping(frames, params, centered=False, rgb=False):
# random crop of the frame and mask
# size is the final size of the crop
        h, w = params['output_size']  # output_size is stored as (th, tw)
        frames_crop = torch.zeros(len(frames), h, w)
        for i, frame in enumerate(frames):
            frame = F.to_pil_image(frame[None, :])
            if centered:
                frame = F.center_crop(frame, (h, w))
else:
frame = F.crop(frame, *params['cropping'])
frame = F.to_tensor(frame)
frames_crop[i] = frame
return frames_crop
@staticmethod
def resize(frame, size):
"""
frames: tensor in [0, 1] of size (h, w)
return: tensor in [0, 1] of size (size, size)
"""
frame = F.to_pil_image(frame)
frame = F.resize(frame, size)
frame = F.to_tensor(frame)
return frame
def compose(frame, mask, lambda_funcs):
    # apply each transform in order; the mask is threaded through the chain
    # but intentionally not returned
    for func in lambda_funcs:
        frame, mask = func(frame, mask)
    return frame
def apply_transform(frame, mask, func, magnitude, params, proba):
rand = float(torch.rand(1))
if rand < proba:
return func(frame, mask, magnitude, params)
else:
return frame, mask
def sample_from_probas(probas):
fixed_choice = [np.random.binomial(1, p) for p in probas]
return fixed_choice
def path2policy(path, img_size, sampling='stochastic'):
"""
Converts path given by mcts of the form [name, magn, proba, name, ...]
to a pytorch image transformation
"""
probas = [block[2] for block in path]
if sampling == 'fixed':
probas = sample_from_probas(probas)
lambda_funcs = []
for block, proba in zip(path, probas):
name, magnitude, _ = block
# sample fixed parameters for each transformations
params_transform = ImageTransform.sample_params(name, magnitude, img_size)
func = functools.partial(
apply_transform,
func=getattr(ImageTransform, name),
magnitude=magnitude,
params=params_transform,
proba=proba)
lambda_funcs.append(func)
policy = functools.partial(compose, lambda_funcs=lambda_funcs)
return policy
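A minimal standalone sketch of how `path2policy`, `apply_transform`, and `compose` fit together, using hypothetical toy transforms in place of the `ImageTransform` static methods (with `proba=1` every block in the path is applied):

```python
import functools
import random

# hypothetical stand-ins for ImageTransform static methods
def add_one(frame, mask, magn, params):
    return frame + magn, mask

def scale(frame, mask, magn, params):
    return frame * magn, mask

def apply_transform(frame, mask, func, magnitude, params, proba):
    # apply func with probability proba, as in the module above
    if random.random() < proba:
        return func(frame, mask, magnitude, params)
    return frame, mask

def compose(frame, mask, lambda_funcs):
    for func in lambda_funcs:
        frame, mask = func(frame, mask)
    return frame

# a path of (name, magnitude, proba) blocks, all with proba=1
path = [("add_one", 1.0, 1.0), ("scale", 2.0, 1.0)]
ops = {"add_one": add_one, "scale": scale}
funcs = [functools.partial(apply_transform, func=ops[n], magnitude=m,
                           params={}, proba=p) for n, m, p in path]
policy = functools.partial(compose, lambda_funcs=funcs)
print(policy(0.0, None))  # → 2.0, i.e. (0 + 1) * 2
```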
def test(dataset, save_dir):
""" Test transformations and save them. """
transforms = ImageTransform.transforms
save_dir = os.path.join(save_dir, 'transforms_test')
if os.path.exists(save_dir):
shutil.rmtree(save_dir)
os.mkdir(save_dir)
for name, magns_probs in transforms.items():
print('checking {}...'.format(name))
dataset.dataset._frames.set_augmentation(['identity', 1, 1])
frame_idx = np.random.randint(len(dataset))
frame_orig = dataset[frame_idx][0].clone()
for magn in magns_probs['magnitude']:
aug_path = [name, magn, 1]
dataset.dataset._frames.set_augmentation(aug_path)
frame_aug = dataset[frame_idx][0]
frame = np.vstack(
(((frame_orig[0] * 0.5 + 0.5).numpy() * 255).astype(np.uint8),
((frame_aug[0].numpy() * 0.5 + 0.5) * 255).astype(np.uint8)))
im_orig_aug = Image.fromarray(frame)
fig_name = '{}_m{}.png'.format(name, magn)
im_orig_aug.save(os.path.join(save_dir, fig_name))
    print('Saved transformed frames to {}'.format(save_dir))
# -*- encoding: utf-8 -*-
# pylint: disable=E0203,E1101,C0111
"""
@file
@brief Runtime operator.
"""
from textwrap import dedent
from ._op import OpRunUnaryNum
def _leaky_relu(x, alpha):
sign = (x > 0).astype(x.dtype)
sign -= ((sign - 1) * alpha).astype(x.dtype)
return x * sign
def _leaky_relu_inplace(x, alpha):
sign = (x > 0).astype(x.dtype)
sign -= ((sign - 1) * alpha).astype(x.dtype)
x *= sign
class LeakyRelu(OpRunUnaryNum):
atts = {'alpha': 0.01}
def __init__(self, onnx_node, desc=None, **options):
OpRunUnaryNum.__init__(self, onnx_node, desc=desc,
expected_attributes=LeakyRelu.atts,
**options)
def _run(self, x): # pylint: disable=W0221
if self.inplaces.get(0, False):
return self._run_inplace(x)
return (_leaky_relu(x, self.alpha), )
def _run_inplace(self, x):
_leaky_relu_inplace(x, self.alpha)
return (x, )
def to_python(self, inputs):
return (dedent(
"""
import numpy
def _leaky_relu(x, alpha):
sign = (x > 0).astype(x.dtype)
sign -= ((sign - 1) * alpha).astype(x.dtype)
return x * sign
"""), "return _leaky_relu(%s, alpha)" % inputs[0]) | |
# test instantiating a 2D electrostatic PIC
import sys
import os
import matplotlib.pyplot as plt
import numpy as np
import py_platypus as plat
from py_platypus.utils.params import Parameters as Parameters
from py_platypus.models.pic_2d import PIC_2D as PIC_2D
if __name__ == "__main__":
sim_params = Parameters(2)
# set up parameters
params = {
"length": [2 * np.pi, 4 * np.pi],
"cells": [32, 32],
"dimensions": 2,
"nppc": 12
}
sim_params.set_from_dict(params)
pic = PIC_2D(sim_params)
# initialize x randomly and check distribution
pic.init_x_random()
plt.figure(1)
plt.scatter(pic.electron_x, pic.electron_y, s=0.1)
# initialize a maxwellian velocity distribution
pic.init_v_maxwellian()
plt.figure(2)
bins = np.linspace(-3, 3, 40)
plt.hist2d(pic.electron_vx, pic.electron_vy, bins = [bins, bins])
plt.colorbar()
# create a single stream and plot vx and vy
pic.init_v_single_stream(1, 0.5, 2)
plt.figure(3)
plt.scatter(pic.electron_x, pic.electron_y, c=pic.electron_vx, s = 1)
plt.title("Vx")
plt.figure(4)
plt.scatter(pic.electron_x, pic.electron_y, c=pic.electron_vy, s = 1)
plt.title("Vy")
# create a two stream setup and plot vx and vy
pic.init_v_maxwellian()
pic.init_v_two_beams(0.8, 0.5, 2, -2)
plt.figure(5)
plt.scatter(pic.electron_x, pic.electron_y, c=pic.electron_vx, s = 2)
plt.title("Two beams Vx")
plt.figure(6)
plt.scatter(pic.electron_x, pic.electron_y, c=pic.electron_vy, s = 2)
plt.title("Two beams Vy")
# create a density perturbation
pic.density_perturbation(0.8, 4)
plt.figure(7)
plt.scatter(pic.electron_x, pic.electron_y, s=1)
pic.update_ne()
plt.figure(8)
plt.title("Electron number density")
ax = plt.imshow(pic.ne, interpolation = 'none')
plt.colorbar()
# calculate charge density
pic.update_ne()
pic.update_ni()
pic.update_rho()
plt.figure(9)
ax = plt.imshow(pic.rho, interpolation = 'none')
plt.title("Charge density")
plt.colorbar()
# test calculating phi from rho
sin_2d = np.zeros(pic.cells)
for i in range(pic.cells[0]):
for j in range(pic.cells[1]):
sin_2d[i][j] = np.sin(pic.dx[0] * i ) + \
np.sin(pic.dx[1] * j )
pic.rho = sin_2d
pic.update_phi()
plt.figure(10)
ax = plt.imshow(pic.rho, interpolation = 'none')
plt.title("Sin Charge density")
plt.colorbar()
plt.figure(11)
ax = plt.imshow(pic.phi, interpolation = 'none')
plt.title("Electric potential")
plt.colorbar()
# test calculating electric field at the nodes
pic.update_e()
plt.figure(12)
ax = plt.imshow(pic.ex, interpolation = 'none')
plt.title("Electric field Ex")
plt.colorbar()
plt.figure(13)
ax = plt.imshow(pic.ey, interpolation = 'none')
plt.title("Electric field Ey")
plt.colorbar()
# test updating particle velocity
pic.electron_vx = np.zeros(pic.n_particles) # zero out particle velocity
pic.electron_vy = np.zeros(pic.n_particles)
pic.update_v() # update velocity based on E field
plt.figure(14)
plt.scatter(pic.electron_x, pic.electron_y, c=pic.electron_vx, s = 2)
plt.title("Velocity vx")
plt.colorbar()
plt.figure(15)
plt.scatter(pic.electron_x, pic.electron_y, c=pic.electron_vy, s = 2)
plt.title("Velocity vy")
plt.colorbar()
    plt.show()
"""Getting bias-scores from input text and the assigned colour-codes"""
import numpy as np
import gensim
from sklearn.decomposition import PCA
from nltk import pos_tag, word_tokenize
# from nltk.stem import WordNetLemmatizer
# from application import lemmatizer
model_w2v = (
gensim.models.KeyedVectors.load_word2vec_format
('data/GoogleNews-vectors-negative300.bin',
binary=True))
# lemmatizer = WordNetLemmatizer()
gender_pairs = [['woman', 'man'], ['girl', 'boy'], ['she', 'he'],
['mother', 'father'], ['daughter', 'son'], ['gal', 'guy'],
['female', 'male'], ['her', 'his'], ['herself', 'himself'],
['Mary', 'John']]
matrix = []
for a, b in gender_pairs:
center = (model_w2v[a] + model_w2v[b])/2
matrix.append(model_w2v[a] - center)
matrix.append(model_w2v[b] - center)
matrix = np.array(matrix)
pca = PCA(n_components=10)
fitted = pca.fit(matrix)
g = fitted.components_[0]
center_wm = (model_w2v["woman"] + model_w2v["man"])/2
def get_relevant(sentence_o):
"""Filter out adjectives, nouns and verbs for analysis"""
tagged = pos_tag(word_tokenize(sentence_o))
rel_words = []
rel_tags = ["FW", "JJ", "JJR", "JJS", "NN", "NNP",
"NNPS", "NNS", "RBR", "RBS"]
for i in tagged:
if i[1] in rel_tags:
rel_words.append(i[0])
return rel_words
def get_bias(word_list, gv=g, center=center_wm):
    """Get the gender a word is associated with and the intensity of the bias
    from the input list of words."""
    # default center is the woman/man midpoint; the old default silently
    # picked up the loop variable left over from building `matrix`
    gender = []
    intensity = []
    words = []
    for word in word_list:
        try:
            v = np.dot((model_w2v[word] - center), gv)
            if v < -0.13:  # threshold: 50% of values lie between -0.11 and 0.15.
gender.append("f")
words.append(word)
elif v > 0.13:
gender.append('m')
words.append(word)
if abs(v) > 1:
intensity.append(5)
elif abs(v) > 0.7:
intensity.append(4)
elif abs(v) > 0.5:
intensity.append(3)
elif abs(v) > 0.3:
intensity.append(2)
            elif abs(v) > 0.13:  # match the gender threshold so words/gender/intensity stay aligned
                intensity.append(1)
except KeyError:
pass
return words, gender, intensity
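The scoring in `get_bias` is a signed projection onto a gender direction. A self-contained sketch with hypothetical 2-d "embeddings" (toy values standing in for the word2vec vectors; one definitional pair instead of the PCA over ten pairs):

```python
import numpy as np

# hypothetical toy embeddings
vecs = {"he": np.array([1.0, 0.2]), "she": np.array([-1.0, 0.2]),
        "nurse": np.array([-0.6, 0.5]), "engineer": np.array([0.7, 0.4])}

pair_center = (vecs["she"] + vecs["he"]) / 2   # midpoint of a definitional pair
g_dir = vecs["he"] - pair_center               # gender direction from one pair (no PCA here)
g_dir = g_dir / np.linalg.norm(g_dir)

for w in ("nurse", "engineer"):
    v = float(np.dot(vecs[w] - pair_center, g_dir))  # signed projection
    print(w, "m" if v > 0.13 else ("f" if v < -0.13 else "neutral"))
```

With these toy vectors the projections are -0.6 and 0.7, so the two words land on opposite sides of the ±0.13 threshold.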
def get_colours(orig_sentence, words, gender, intensity):
"""get colours assigned to bias-intensity-levels:
shades of red for male, shades of blue for female."""
sen = []
colours = []
if not isinstance(orig_sentence, list):
orig_sentence = orig_sentence.split()
for i in orig_sentence:
if i not in words:
sen.append(i)
colours.append("#030303")
elif i in words:
sen.append(i)
index = words.index(i)
wordgender = gender[index]
if wordgender == "m":
if intensity[index] == 1:
colours.append("#bf6d72")
elif intensity[index] == 2:
colours.append("#e79096")
elif intensity[index] == 3:
colours.append("#eaa6ab")
elif intensity[index] == 4:
colours.append("#f4bcc0")
elif intensity[index] == 5:
colours.append("#f4bcc0")
else:
if intensity[index] == 1:
colours.append("#076296")
elif intensity[index] == 2:
colours.append("#3971A4")
elif intensity[index] == 3:
colours.append("#6094C9")
elif intensity[index] == 4:
colours.append("#86B7EF")
elif intensity[index] == 5:
colours.append("#ACDDFF")
return sen, colours
def lengthen(sentence_list, colour_list):
to_pad = 20 - len(sentence_list)
for _ in range(to_pad):
sentence_list.append(" ")
colour_list.append(" ")
return sentence_list, colour_list
def colours_bias(o_sentence):
rel_words = get_relevant(o_sentence)
w, gen, intense = get_bias(rel_words)
s, c = get_colours(o_sentence, w, gen, intense)
sen, colors = lengthen(s, c)
    return sen, colors
"""
Sampling of omniglot examples.
Data is expected to exist in `root_dir` as:
root_dir/
images_background/
{train alphabet 1}/
0709_01.png
...
...
{train alphabet n}
images_evaluation/
{test alphabet 1}/
0965_01.png
...
...
{test alphabet m}/
Data was downloaded from: https://github.com/brendenlake/omniglot
"""
import os
from typing import Optional, Callable, Tuple, List, Dict
import numpy as np
import torch
from PIL import Image
from torchvision.transforms import Compose, Resize, ToTensor
VALID_META_SPLITS = {"train": "images_background", "test": "images_evaluation"}
IMAGE_RESIZE_HW = (28, 28)
class Omniglot:
def __init__(
self,
root_dir: str,
n_classes_per_task: Optional[int] = 5,
k_shots_per_class: Optional[int] = 1,
test_shots_per_class: Optional[int] = 1,
meta_split: str = "train",
transform: Optional[Callable] = None,
target_transform: Optional[Callable] = None,
image_resize_hw: Tuple[int, int] = IMAGE_RESIZE_HW,
fed_recon: bool = False,
):
if fed_recon:
raise NotImplementedError(
"Federated reconnaissance not implemented for omniglot."
)
assert os.path.isdir(root_dir)
assert isinstance(n_classes_per_task, int)
assert isinstance(k_shots_per_class, int)
assert isinstance(image_resize_hw, tuple)
assert len(image_resize_hw) == 2
assert isinstance(meta_split, str)
assert meta_split in VALID_META_SPLITS
self.root_dir = root_dir
self.n_classes_per_task = n_classes_per_task
self.k_shots_per_class = k_shots_per_class
self.test_shots_per_class = test_shots_per_class
self.meta_split = meta_split
self.all_classes, self.class_paths = self._setup_classes()
self.image_resize_hw = image_resize_hw
if transform is None:
transform = Compose([Resize(image_resize_hw), ToTensor()])
if target_transform is None:
target_transform = torch.tensor
self.transform = transform
self.target_transform = target_transform
self._image_cache = {}
def _setup_classes(self) -> Tuple[List[str], Dict[str, List[str]]]:
root = os.path.join(self.root_dir, VALID_META_SPLITS[self.meta_split])
classes = []
class_paths = {}
for alphabet in os.listdir(root):
alphabet = os.path.join(root, alphabet)
if not os.path.isdir(alphabet):
continue
for character in os.listdir(alphabet):
character = os.path.join(alphabet, character)
if not os.path.isdir(character):
continue
for example in os.listdir(character):
if not example.endswith(".png") or example.startswith("._"):
continue
example = os.path.join(character, example)
try:
class_paths[str(character)].append(str(example))
except KeyError:
class_paths[str(character)] = [str(example)]
classes.append(str(character))
return classes, class_paths
def sample_meta_batch(
self,
batch_size: int = 1,
n_classes_per_task: Optional[int] = None,
k_shots_per_class: Optional[int] = None,
test_shots_per_class: Optional[int] = None,
return_n_k_along_same_axis: bool = True,
rotate_classes: bool = True,
):
"""
Returns data for a classification task in a dictionary
mapping string to tuple of omniglot {"{train, test}": (images, labels)}.
If return_n_k_along_same_axis, data is in shape:
([b, n, k, channels, rows, cols], [b, n, k])
else:
([b, n * k, channels, rows, cols], [b, n * k])
Each element in the label tensor is the integer index representing the class
where:
b is meta-batch size
n is the number of ways/classes
k is the number of shots per class
channels is the number of channels in the image
rows is the number of rows in the image
cols is the number of columns in the image
NOTE: Examples for each class are not shuffled.
:param batch_size: int of number of meta-tasks to sample.
:param n_classes_per_task: optional number of classes to sample from when constructing a class. If None, will reference self.n_classes_per_task
:param k_shots_per_class: optional number of training shots to sample for each class. If None, will reference self.k_shots_per_class
:param test_shots_per_class: optional number of test shots to sample for each class. If None, will reference self.test_shots_per_class
        :return: dictionary mapping string to tuple of Omniglot {"{train, test}": (images, labels)} in shape:
([b, n, k, channels, rows, cols], [b, n, k])
"""
n_classes_per_task = (
n_classes_per_task
if n_classes_per_task is not None
else self.n_classes_per_task
)
k_shots_per_class = (
k_shots_per_class
if k_shots_per_class is not None
else self.k_shots_per_class
)
test_shots_per_class = (
test_shots_per_class
if test_shots_per_class is not None
else self.test_shots_per_class
)
assert (
n_classes_per_task is not None
and k_shots_per_class is not None
and test_shots_per_class is not None
), "n classes, k shots, and test shots must be specified in either class init or method call but found {}. {}, {}".format(
n_classes_per_task, k_shots_per_class, test_shots_per_class
)
# sample k_shots_per_class for each class
meta_batch_images = []
meta_batch_labels = []
for i in range(batch_size):
# sample n_classes_per_task classes:
classes = np.random.choice(
self.all_classes, n_classes_per_task, replace=False
)
task_images = []
task_labels = []
for j, class_name in enumerate(classes):
if rotate_classes:
times_to_rotate_90 = np.random.randint(0, 4)
paths_for_class = np.random.choice(
self.class_paths[class_name],
k_shots_per_class + test_shots_per_class,
replace=False,
)
images_for_class = []
labels_for_class = []
for path in paths_for_class:
try:
img = self._image_cache[path]
except KeyError:
img = Image.open(path)
img = self.transform(img)
img = img.numpy()
self._image_cache[path] = img
# TODO: split train and test set sampling, add image augmentation to training examples, and move img.numpy below that
if rotate_classes:
img = np.rot90(img, k=times_to_rotate_90, axes=[1, 2])
images_for_class.append(img)
labels_for_class.append(j)
task_images.append(images_for_class)
task_labels.append(labels_for_class)
meta_batch_images.append(task_images)
meta_batch_labels.append(task_labels)
meta_batch_images = np.array(meta_batch_images, dtype=np.float32)
meta_batch_labels = np.array(meta_batch_labels, dtype=np.int64)
train_images = meta_batch_images[:, :, :k_shots_per_class, :, :, :]
test_images = meta_batch_images[:, :, k_shots_per_class:, :, :, :]
train_labels = meta_batch_labels[:, :, :k_shots_per_class]
test_labels = meta_batch_labels[:, :, k_shots_per_class:]
train_images = torch.from_numpy(train_images) # -> [b, n, k, ch, row, col]
train_labels = torch.from_numpy(train_labels) # -> [b, n, k]
test_images = torch.from_numpy(test_images) # -> [b, n, k, ch, row, col]
test_labels = torch.from_numpy(test_labels) # -> [b, n, k]
if return_n_k_along_same_axis:
train_images = train_images.reshape(
[
batch_size,
n_classes_per_task * k_shots_per_class,
1,
*self.image_resize_hw,
]
)
train_labels = train_labels.reshape(
[batch_size, n_classes_per_task * k_shots_per_class]
)
test_images = test_images.reshape(
[
batch_size,
n_classes_per_task * test_shots_per_class,
1,
*self.image_resize_hw,
]
)
test_labels = test_labels.reshape(
[batch_size, n_classes_per_task * test_shots_per_class]
)
# TODO add flag to shuffle
# images_labels = list(zip(task_images, task_labels))
# random.shuffle(images_labels)
# task_images, task_labels = zip(*images_labels)
# dev = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")
# train_images = train_images.to(dev)
# train_labels = train_labels.to(dev)
# test_images = test_images.to(dev)
# test_labels = test_labels.to(dev)
batch = {
"train": (train_images, train_labels),
"test": (test_images, test_labels),
}
        return batch
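The `return_n_k_along_same_axis` reshape collapses the class and shot axes. A small numpy sketch of the shape bookkeeping (toy sizes, not the real sampler):

```python
import numpy as np

b, n, k, ch, h, w = 2, 5, 1, 1, 28, 28  # meta-batch, ways, shots, image dims
images = np.zeros((b, n, k, ch, h, w), dtype=np.float32)
labels = np.tile(np.arange(n)[None, :, None], (b, 1, k))   # [b, n, k]

# collapse the class and shot axes, as when return_n_k_along_same_axis=True
flat_images = images.reshape(b, n * k, ch, h, w)
flat_labels = labels.reshape(b, n * k)
print(flat_images.shape, flat_labels.shape)  # → (2, 5, 1, 28, 28) (2, 5)
```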
# -*- coding: utf-8 -*-
"""generate_attack_files.ipynb
Automatically generated by Colaboratory.
Original file is located at
https://colab.research.google.com/drive/1CDyCghmEMadl1NHbQvvXXFEsQHckKUtH
"""
# Commented out IPython magic to ensure Python compatibility.
# %cd /content/drive/MyDrive/attacks/
!ls
# load function taken from https://github.com/itaygal/RS_TrueReputation/
import re
import copy
import os
"""load rating .csv file and save results to given data structures
Args:
dataset_path: path to movielens 100k rating file
user_movie_ratings: dic of user to a dic of movie to a rating. user_movie_ratings[user_id][movie_id] = rating
movie_user_ratings: dic of movie to a dic of user to a rating. movie_user_ratings[movie_id][user_id] = rating
movies: set of all movie names
Returns:
None.
"""
def load(dataset_path, user_movie_ratings, movie_user_ratings, movies):
# user id | item id | rating | timestamp
    rating_match = re.compile(r"\D*(\d+)\D*(\d+)\D*(\d+)\D*(\d+)")
with open(dataset_path, 'r') as dataset_file:
for rating_line in dataset_file:
m = rating_match.match(rating_line)
if m:
user_id = m.group(1)
movie_id = m.group(2)
rating = m.group(3)
timestamp = m.group(4)
if user_id not in user_movie_ratings:
user_movie_ratings[user_id] = {}
user_movie_ratings[user_id][movie_id] = (int(rating), int(timestamp))
if movie_id not in movie_user_ratings:
movie_user_ratings[movie_id] = {}
movies.add(movie_id)
movie_user_ratings[movie_id][user_id] = (int(rating), int(timestamp))
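The regex in `load` captures the four integer fields of a MovieLens `u.data` line (tab-separated: user id, item id, rating, timestamp). A quick demonstration on a sample line:

```python
import re

# the same pattern as in load(): four integer runs separated by non-digits
rating_match = re.compile(r"\D*(\d+)\D*(\d+)\D*(\d+)\D*(\d+)")
m = rating_match.match("196\t242\t3\t881250949")  # a sample u.data line
print(m.groups())  # → ('196', '242', '3', '881250949')
```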
user_movie_ratings = {} # dic of user to a dic of movie to a rating. user_movie_ratings[user_id][movie_id] = rating
movie_user_ratings = {} # dic of movie to a dic of user to a rating. movie_user_ratings[movie_id][user_id] = rating
movies = set() # set of all movie names
# load rating .csv file into data structures
load("/content/drive/MyDrive/attacks/ml-100k/u.data", user_movie_ratings, movie_user_ratings, movies)
# load item information .csv release year into movie release year
from tqdm import tqdm
Movies={}
for m in tqdm(movie_user_ratings):
Movies[m]=[]
Movies[m].append(len(movie_user_ratings[m]))
avg_rating=0
for i in movie_user_ratings[m]:
avg_rating+=movie_user_ratings[m][i][0]
Movies[m].append(avg_rating/len(movie_user_ratings[m]))
from collections import defaultdict
import numpy as np
import random
userProfile=defaultdict(dict)
itemProfile = defaultdict(dict)
timeProfile= defaultdict(dict)
random_time=[]
for user in user_movie_ratings:
for item in user_movie_ratings[user]:
userProfile[int(user)][int(item)]=int(user_movie_ratings[user][item][0])
itemProfile[int(item)][int(user)]=int(user_movie_ratings[user][item][0])
timeProfile[int(user)][int(item)]=int(user_movie_ratings[user][item][1])
random_time.append(int(user_movie_ratings[user][item][1]))
# attack functions are modified from https://github.com/Coder-Yu/SDLib
############################################### config ###################################
outputDir = "/content/drive/MyDrive/attack_datasets/Movielens1M/bandwagon/"
attackSize = 0.05
fillerSize = 0.05
selectedSize = 0.005
targetCount = 100
targetScore = 4.0
threshold = 3.0
maxScore = 4.0
minScore = 1.0
minCount = 50
maxCount = 200
linkSize = 0.001
itemList = []
spamProfile = defaultdict(dict)
spamItem = defaultdict(list) # items rated by spammers
spamTimeProfile = defaultdict(dict)
targetItems = []
itemAverage = {}
startUserID = 0
def getAverageRating():
for itemID in itemProfile:
li = itemProfile[itemID].values()
itemAverage[itemID] = float(sum(li)) / len(li)
def selectTarget():
print('Selecting target items...')
for i in itemProfile.keys():
itemList.append(i)
itemList.sort()
while len(targetItems) < targetCount:
# generate a target order at random
target = np.random.randint(len(itemList))
if len(itemProfile[itemList[target]]) < maxCount and len(itemProfile[itemList[target]]) > minCount \
and itemList[target] not in targetItems \
and itemAverage[itemList[target]] <= threshold:
targetItems.append(itemList[target])
#print(itemList[target], ' ', itemAverage[itemList[target]])
############################################### config ###################################
def generateLabels(filename):
labels = []
path = outputDir + filename
with open(path, 'w') as f:
for user in spamProfile:
labels.append(str(user)+' 1\n')
for user in userProfile:
labels.append(str(user)+' 0\n')
f.writelines(labels)
    print('User labels have been output')
def generateProfiles(filename):
ratings = []
path = outputDir+filename
with open(path, 'w') as f:
for user in userProfile:
for item in userProfile[user]:
ratings.append(str(user)+' '+str(item)+' ' +
str(userProfile[user][item])+' '+str(timeProfile[user][item])+'\n')
for user in spamProfile:
for item in spamProfile[user]:
ratings.append(str(user) + ' ' + str(item) + ' ' +
str(spamProfile[user][item])+' '+str(spamTimeProfile[user][item])+'\n')
print(len(spamProfile))
f.writelines(ratings)
    print('User profiles have been output')
############################################## average attack ##########################################
def average_attack(startID=0):
print('Modeling average attack...')
startUserID = len(userProfile) if startID == 0 else startID
for _ in range(int(len(userProfile)*attackSize)):
fillerItems = getFillerItems()
for item in fillerItems:
spamProfile[startUserID][itemList[item]] = round(itemAverage[itemList[item]])
spamTimeProfile[startUserID][itemList[item]]= random.sample(random_time,1)[0] # random time assigned
for _ in range(targetCount):
target = np.random.randint(len(targetItems))
spamProfile[startUserID][targetItems[target]] = targetScore
spamTimeProfile[startUserID][targetItems[target]]= random.sample(random_time,1)[0] # random time assigned
spamItem[startUserID].append(targetItems[target])
startUserID += 1
print(f"userid={startUserID}")
def getFillerItems():
mu = int(fillerSize*len(itemProfile))
sigma = int(0.1*mu)
markedItemsCount = abs(int(round(random.gauss(mu, sigma))))
markedItems = np.random.randint(len(itemProfile), size=markedItemsCount)
return markedItems.tolist()
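`getFillerItems` draws the number of filler ratings per spam profile from a Gaussian around `fillerSize * n_items`, clipped to be non-negative. A self-contained sketch of that draw (1682 is the MovieLens-100K item count; the helper name is illustrative):

```python
import random

def filler_count(filler_size, n_items, rng=random):
    # filler ratings per spam profile ~ round(N(mu, 0.1*mu)), made non-negative
    mu = int(filler_size * n_items)
    sigma = int(0.1 * mu)
    return abs(int(round(rng.gauss(mu, sigma))))

random.seed(0)
counts = [filler_count(0.05, 1682) for _ in range(5)]
print(counts)  # values scatter around mu = 84
```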
############################################## average attack ##########################################
outputDir = "/content/drive/MyDrive/attack_datasets/Movielens100K/average/"
for i in [0.1,0.15,0.2,0.25]:
attackSize = i
fillerSize = 0.05
if i>0.15:
fillerSize = 0.1
selectedSize = 0.005
targetCount = 100
targetScore = 4.0
threshold = 3.0
maxScore = 4.0
minScore = 1.0
minCount = 5
maxCount = 150
linkSize = 0.001
itemList = []
spamProfile = defaultdict(dict)
spamItem = defaultdict(list) # items rated by spammers
spamTimeProfile = defaultdict(dict)
targetItems = []
itemAverage = {}
startUserID = 0
getAverageRating()
selectTarget()
average_attack()
#attack.farmLink()
generateLabels(f'labels_{i*10}.txt')
generateProfiles(f'profiles_{i*10}.txt')
import pandas as pd
names = ['user_id', 'item_id', 'rating', 'timestamp']
a=pd.read_csv("/content/drive/MyDrive/attack_datasets/Movielens1M/bandwagon_profiles.txt",delim_whitespace=True,names=names)
a
############################################## bandwagon attack ##########################################
hotItems = sorted(itemProfile.items(), key=lambda d: len(d[1]), reverse=True)[
: int(selectedSize * len(itemProfile))
]
def bandwagon_attack(startID=0):
print("Modeling bandwagon attack...")
startUserID = len(userProfile) if startID == 0 else startID
for _ in range(int(len(userProfile) * attackSize)):
fillerItems = getFillerItems()
for item in fillerItems:
            spamProfile[startUserID][itemList[item]] = random.randint(
                int(minScore), int(maxScore)  # random.randint needs ints; scores are configured as floats
            )
spamTimeProfile[startUserID][itemList[item]]= random.sample(random_time,1)[0] # random time assigned
selectedItems = getSelectedItems()
for item in selectedItems:
spamProfile[startUserID][item] = targetScore
spamTimeProfile[startUserID][item]= random.sample(random_time,1)[0] # random time assigned
for _ in range(targetCount):
target = np.random.randint(len(targetItems))
spamProfile[startUserID][targetItems[target]] = targetScore
spamTimeProfile[startUserID][targetItems[target]]= random.sample(random_time,1)[0] # random time assigned
spamItem[startUserID].append(targetItems[target])
startUserID += 1
print(f"userid={startUserID}")
def getFillerItems():
mu = int(fillerSize * len(itemProfile))
sigma = int(0.1 * mu)
markedItemsCount = int(round(random.gauss(mu, sigma)))
markedItemsCount = max(markedItemsCount, 0)
return np.random.randint(len(itemProfile), size=markedItemsCount)
def getSelectedItems():
mu = int(selectedSize * len(itemProfile))
sigma = int(0.1 * mu)
markedItemsCount = abs(int(round(random.gauss(mu, sigma))))
markedIndexes = np.random.randint(len(hotItems), size=markedItemsCount)
return [hotItems[index][0] for index in markedIndexes]
outputDir = "/content/drive/MyDrive/attack_datasets/Movielens100K/bandwagon/"
for i in [0.1,0.15,0.2,0.25]:
attackSize = i
fillerSize = 0.05
if i>0.15:
fillerSize = 0.1
selectedSize = 0.005
targetCount = 100
targetScore = 4.0
threshold = 3.0
maxScore = 4.0
minScore = 1.0
minCount = 5
maxCount = 150
linkSize = 0.001
itemList = []
spamProfile = defaultdict(dict)
spamItem = defaultdict(list) # items rated by spammers
spamTimeProfile = defaultdict(dict)
targetItems = []
itemAverage = {}
startUserID = 0
getAverageRating()
selectTarget()
bandwagon_attack()
#attack.farmLink()
generateLabels(f'bandwagon_labels_{i*10}.txt')
generateProfiles(f'bandwagon_profiles_{i*10}.txt')
############################################## random attack ##########################################
def random_attack(startID=0):
print('Modeling random attack...')
startUserID = len(userProfile) if startID == 0 else startID
for _ in range(int(len(userProfile)*attackSize)):
fillerItems = getFillerItems()
for item in fillerItems:
            spamProfile[startUserID][itemList[item]] = random.randint(
                int(minScore), int(maxScore))  # random.randint needs ints; scores are configured as floats
spamTimeProfile[startUserID][itemList[item]]= random.sample(random_time,1)[0] # random time assigned
for _ in range(targetCount):
target = np.random.randint(len(targetItems))
spamProfile[startUserID][targetItems[target]] = targetScore
spamTimeProfile[startUserID][targetItems[target]]= random.sample(random_time,1)[0] # random time assigned
spamItem[startUserID].append(targetItems[target])
startUserID += 1
print(f"userid={startUserID}")
def getFillerItems():
mu = int(fillerSize*len(itemProfile))
sigma = int(0.1*mu)
markedItemsCount = abs(int(round(random.gauss(mu, sigma))))
markedItems = np.random.randint(len(itemProfile), size=markedItemsCount)
return markedItems.tolist()
############################################## random attack ##########################################
outputDir = "/content/drive/MyDrive/attack_datasets/Movielens100K/random/"
for i in [0.1,0.15,0.2,0.25]:
attackSize = i
fillerSize = 0.05
if i>0.15:
fillerSize = 0.1
selectedSize = 0.005
targetCount = 100
targetScore = 4.0
threshold = 3.0
maxScore = 4.0
minScore = 1.0
minCount = 5
maxCount = 150
linkSize = 0.001
itemList = []
spamProfile = defaultdict(dict)
spamItem = defaultdict(list) # items rated by spammers
spamTimeProfile = defaultdict(dict)
targetItems = []
itemAverage = {}
startUserID = 0
getAverageRating()
selectTarget()
random_attack()
#attack.farmLink()
generateLabels(f'random_labels_{i*10}.txt')
generateProfiles(f'random_profiles_{i*10}.txt')
names = ['user_id', 'item_id', 'rating', 'timestamp']
ratings=pd.read_csv("/content/drive/MyDrive/attack_datasets/Netflix300K/random/random_profiles_1.0.txt",delim_whitespace=True,names=names)
names1=['user_id','label']
labels=pd.read_csv("/content/drive/MyDrive/attack_datasets/Netflix300K/random/random_labels_1.0.txt",delim_whitespace=True,names=names1)
from setuptools import setup, find_packages, Extension
from distutils.command.build_ext import build_ext
from distutils.errors import CCompilerError, DistutilsExecError, DistutilsPlatformError
import numpy
import pyyeti
import os
# the following is here so matplotlib will not open figures during
# "python setup.py nosetests" ... but don't make this a hard
# requirement
try:
import matplotlib as mpl
except ImportError:
pass
else:
mpl.interactive(False)
mpl.use("Agg")
ext_errors = (
CCompilerError,
DistutilsExecError,
DistutilsPlatformError,
IOError,
ValueError,
)
def read(*filenames, **kwargs):
encoding = kwargs.get("encoding", "utf-8")
sep = kwargs.get("sep", "\n")
buf = []
for filename in filenames:
with open(filename, encoding=encoding) as f:
buf.append(f.read())
return sep.join(buf)
long_description = read("README.md")
CLASSIFIERS = [
"Development Status :: 4 - Beta",
"Programming Language :: C",
"Programming Language :: Python",
"Programming Language :: Python :: 3.6",
"Programming Language :: Python :: 3.7",
"Programming Language :: Python :: 3.8",
"Programming Language :: Python :: 3.9",
"Programming Language :: Python :: 3 :: Only",
"Programming Language :: Python :: Implementation :: CPython",
"Natural Language :: English",
"Intended Audience :: Science/Research",
"License :: OSI Approved :: BSD License",
"Operating System :: OS Independent",
"Topic :: Scientific/Engineering",
]
def check_dependencies():
install_requires = []
packages = ["numpy", "scipy", "matplotlib", "pandas", "xlsxwriter", "h5py"]
import importlib
for package in packages:
try:
importlib.import_module(package)
except ImportError:
install_requires.append(package)
return install_requires
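# A lighter-weight variant of the dependency check uses importlib.util.find_spec,
# which looks a package up on sys.path without actually importing it (sketch, not
# part of the original setup.py):

```python
import importlib.util

def missing_packages(packages):
    # find_spec returns None when the package cannot be located.
    return [p for p in packages if importlib.util.find_spec(p) is None]

assert missing_packages(["os", "sys"]) == []
assert missing_packages(["surely_not_installed_xyz"]) == ["surely_not_installed_xyz"]
```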
class BuildFailed(Exception):
pass
class ve_build_ext(build_ext):
# This class allows C extension building to fail.
def run(self):
try:
build_ext.run(self)
except DistutilsPlatformError:
raise BuildFailed()
def build_extension(self, ext):
try:
build_ext.build_extension(self, ext)
except ext_errors:
raise BuildFailed()
def run_setup(with_binary):
if with_binary:
kw = dict(
ext_modules=[
Extension("pyyeti.rainflow.c_rain", ["pyyeti/rainflow/c_rain.c"])
],
cmdclass=dict(build_ext=ve_build_ext),
include_dirs=[numpy.get_include()],
)
else:
kw = {}
install_requires = check_dependencies()
setup(
name="pyyeti",
version=pyyeti.__version__,
url="http://github.com/twmacro/pyyeti/",
license="BSD",
author="Tim Widrick",
install_requires=install_requires,
author_email="twmacro@gmail.com",
description=("Tools mostly related to structural dynamics"),
long_description=long_description,
long_description_content_type="text/markdown",
packages=find_packages(),
scripts=["scripts/lsop2", "scripts/lsop4"],
include_package_data=True,
data_files=[
(
"tests/nas2cam_csuper",
[
"tests/nas2cam_csuper/nas2cam.op2",
"tests/nas2cam_csuper/nas2cam.op4",
"tests/nas2cam_csuper/inboard.op4",
"tests/nas2cam_csuper/inboard.asm",
],
),
(
"tests/nastran_drm12",
[
"tests/nastran_drm12/inboard_nas2cam.op2",
"tests/nastran_drm12/inboard_nas2cam.op4",
"tests/nastran_drm12/drm12.op2",
"tests/nastran_drm12/drm12.op4",
],
),
],
platforms="any",
tests_require=["nose"],
classifiers=CLASSIFIERS,
**kw,
)
if __name__ == "__main__":
try:
run_setup(True)
except BuildFailed:
BUILD_EXT_WARNING = """
Warning:
The C rainflow extension could not be compiled; only plain Python
rainflow counter will be available. Note: the Python version will
be sped up with `numba.jit(nopython=True)` if possible; tests show
speeds on par with compiled C version.
"""
print("*" * 86)
print(BUILD_EXT_WARNING)
print("Failure information, if any, is above.")
print("I'm retrying the build without the C extension now.")
print("*" * 86)
run_setup(False)
print("*" * 86)
print(BUILD_EXT_WARNING)
print("Plain-Python installation succeeded.")
print("*" * 86)
# System
# Data
import numpy as np
import pandas as pd
# Plotting
import matplotlib.pyplot as plt
# Caiman
try:
import caiman as cm
from caiman.source_extraction.cnmf import cnmf as cnmf
from caiman.motion_correction import MotionCorrect
from caiman.source_extraction.cnmf.utilities import detrend_df_f
from caiman.components_evaluation import estimate_components_quality_auto
from caiman.source_extraction.cnmf import deconvolution
except ModuleNotFoundError:
print("CaImAn not installed or environment not activated, certain functions might not be usable")
# Utils
from utils import *
def get_sample_spikes_slow_ramp():
sp1 = np.full(1000, 0.02)
sp1[50] = 0.2
sp1[80] = 0.15
sp1[60] = 0.15
sp1[40] = 0.15
sp1[20] = 0.15
sp1[50:100] += np.arange(50) * 0.05 / 100
sp1[150:200] += np.arange(50) * 0.05 / 100
return sp1
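# SpikeCalciumizer comes from utils and its internals aren't shown here; for an
# "AR_2" fluorescence model it presumably realizes the standard autoregressive
# recursion c[t] = g1*c[t-1] + g2*c[t-2] + s[t]. A self-contained sketch with the
# g pair used in the tests below (an assumption, not the utils implementation):

```python
import numpy as np

def ar2_calcium(spikes, g=(0.87797883, -0.10934919)):
    # c[t] = g1*c[t-1] + g2*c[t-2] + s[t]: each spike injects a transient
    # that decays according to the AR(2) kernel.
    g1, g2 = g
    c = np.zeros_like(spikes, dtype=float)
    for t, s in enumerate(spikes):
        c[t] = s
        if t >= 1:
            c[t] += g1 * c[t - 1]
        if t >= 2:
            c[t] += g2 * c[t - 2]
    return c

imp = np.zeros(200)
imp[0] = 1.0
resp = ar2_calcium(imp)
assert resp[0] == 1.0
assert abs(resp[-1]) < 1e-6  # this g pair is stable, so the transient decays away
```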
def deconv_test1(sp1=None, impulse=True):
g = np.array([ 0.87797883, -0.10934919])
if impulse:
sp1 = impulse_model_test(sp1, show=False)
else:
# simulated slow ramp test
sp1 = get_sample_spikes_slow_ramp() if sp1 is None else sp1
spc = SpikeCalciumizer(fmodel="AR_2", fluorescence_saturation=0, std_noise=0.02, alpha=1, g=g)
rec = spc.binned_spikes_to_calcium(sp1.reshape((1, -1)))
c2, bl2, c12, g2, sn2, sp2, lam2 = deconvolution.constrained_foopsi(rec.ravel(), p=2, bl=0,
bas_nonneg=False, s_min=0)
c2, bl2, c12, g2, sn2, sp3, lam2 = deconvolution.constrained_foopsi(rec.ravel(), p=2, bas_nonneg=False,
s_min=0)
conv = np.array([1] + list(-g2))
h0 = inverse_kernel(conv, N=rec.shape[1], fft=True)
x_hat, hhat = wiener_deconvolution(rec.ravel(), h0)
fig, axes = plt.subplots(nrows=5, ncols=1, sharex=True)
axes[0].plot(sp1)
axes[1].plot(sp3)
axes[2].plot(sp2)
axes[3].plot(x_hat)
axes[4].plot(rec.ravel())
axes[0].set_ylabel("truth spike")
axes[1].set_ylabel("bl_auto")
axes[2].set_ylabel('bl=0')
axes[3].set_ylabel("wiener")
axes[4].set_ylabel("calcium")
# TODO: try band filtering
def impulse_model_test(sp1=None, show=True):
g = np.array([0.87797883, -0.10934919])
sp1 = get_sample_spikes_slow_ramp() if sp1 is None else sp1
v1 = np.diff(sp1)
a1 = np.diff(v1)
a1_in = np.concatenate([[0, 0], a1])
# v1 = np.gradient(sp1)
# a1 = np.gradient(v1)
spc = SpikeCalciumizer(fmodel='AR_2', g=[2, -1], std_noise=0.)
spp = spc.binned_spikes_to_calcium(a1_in.reshape((1, -1))) + sp1[0]
spc2 = SpikeCalciumizer(fmodel="AR_2", std_noise=0.02, g=g)
c = spc2.binned_spikes_to_calcium(spp).ravel()
# visualization
if show:
fig, axes = plt.subplots(nrows=5, ncols=1, sharex=True)
axes[0].plot(sp1)
axes[0].set_ylabel('true spike')
axes[1].plot(v1)
axes[1].set_ylabel("ds/dt")
axes[2].plot(a1)
axes[2].set_ylabel("d2s/dt2")
axes[3].plot(spp.ravel())
axes[3].set_ylabel("gradient-sim spikes")
axes[4].plot(c)
axes[4].set_ylabel("simulated calcium")
return spp.ravel()
def wiener_deconv_test(y, diff=False):
c, bl, c1, g, sn, sp, lam = deconvolution.constrained_foopsi(y, p=2, bas_nonneg=False, s_min=0)
conv = np.array([1] + list(-g))
h0 = inverse_kernel(conv, N=len(y), fft=True)
x_hat, hhat = wiener_deconvolution(y, h0)
fig, axes = plt.subplots(nrows=3, ncols=1, sharex=True)
if diff:
axes[0].plot(np.vstack((np.diff(y, prepend=0), sp, x_hat)).T)
axes[0].legend(['df/dt', 'deconv', 'wiener'])
else:
axes[0].plot(np.vstack((sp, x_hat)).T)
axes[0].legend(['deconv', 'wiener'])
axes[1].plot(y)
axes[1].set_ylabel('dff')
axes[2].plot(h0)
axes[2].set_ylabel("IRF")
# This script processes images received from NOAA satellites
import sys
from datetime import datetime, timezone, timedelta
from math import atan, atan2, sqrt, pi, sin, cos, asin, acos, tan
from typing import Tuple
from sgp4.io import twoline2rv
from sgp4.earth_gravity import wgs72, wgs84
from sgp4.api import jday, Satrec
import numpy as np
from pymap3d import ecef
sys.path.append('.')
from noaatools import export_js
from noaatools.constants import DEG2RAD, RAD2DEG, Ellipsoid, Method, NOAA_PROCESSING_DELAY, RE, AVHRR_FOV, ellipsoid_wgs84
# Nice conversions: https://github.com/skyfielders/python-skyfield/blob/master/skyfield/sgp4lib.py
# Good explanation: https://stackoverflow.com/questions/8233401/how-do-i-convert-eci-coordinates-to-longitude-latitude-and-altitude-to-display-o
def julianDateToGMST(jd, fr):
"""
Converts Julian date (expressed as two floats) to GMST (Greenwich Mean Sidereal Time).
Parameters:
jd : float - Julian date full integer + 0.5
fr : float - fractional part of the Julian date
Returns
=======
A single floating point representing a GMST, expressed in degrees (0...359.99999).
This calculation takes into consideration the precession, but not nutation.
Source: https://www.cv.nrao.edu/~rfisher/Ephemerides/times.html#GMST
"""
T0 = 2451545.0 # J2000, 2000-Jan-01 12h UT1 as Julian date
# First calculate number of days since J2000 (2000-Jan-01 12h UT1)
d = jd - T0
d = d + fr
# Now convert this to centuries. Don't ask me why.
T = d / 36525.0
# Calculate GMST (in seconds at UT1=0)
gmst = 24110.54841 + 8640184.812866 * T + 0.093104 * T * T - 0.0000062 * T*T*T
# gmst is in sidereal seconds; reduce modulo one day and convert to degrees
# (86400 sidereal seconds correspond to 360 degrees, i.e. 240 seconds per degree).
return (gmst % 86400.0) / 240.0
def julianDateToGMST2(jd: float, fr: float) -> Tuple[float, float]:
"""
Converts Julian date (expressed as two floats) to GMST (Greenwich Mean Sidereal Time 1982).
Parameters:
jd : float - Julian date full integer + 0.5
fr : float - fractional part of the Julian date
Returns
=======
A tuple with two values:
theta - single floating point representing a GMST, expressed in radians
theta_dot - time derivative of theta, in radians per day of UT1
This calculation takes into consideration the precession, but not nutation.
Source: https://github.com/skyfielders/python-skyfield/blob/master/skyfield/sgp4lib.py
- theta_GSMT1982 function
This angle defines the difference between the idiosyncratic True
Equator Mean Equinox (TEME) frame of reference used by SGP4 and the
more standard Pseudo Earth Fixed (PEF) frame of reference.
From AIAA 2006-6753 Appendix C.
"""
tau = 6.283185307179586476925287
_second = 1.0 / (24.0 * 60.0 * 60.0)
T0 = 2451545.0 # J2000, 2000-Jan-01 12h UT1 as Julian date
# First calculate number of days since J2000 (2000-Jan-01 12h UT1)
d = jd - T0
d = d + fr
# Now convert this to centuries. Don't ask me why.
t = d / 36525.0
# GMST 1982 polynomial, in seconds of time, and its time derivative
g = 67310.54841 + (8640184.812866 + (0.093104 + (-6.2e-6) * t) * t) * t
dg = 8640184.812866 + (0.093104 * 2.0 + (-6.2e-6 * 3.0) * t) * t
theta = ((jd + fr) % 1.0 + g * _second % 1.0) * tau
theta_dot = (1.0 + dg * _second / 36525.0) * tau
return theta, theta_dot
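# Sanity check (sketch): at the J2000 epoch (jd=2451545.0, fr=0.0, i.e. 12h UT1)
# the 1982 GMST polynomial should give the well-known value of about 280.4606 deg.

```python
from math import tau, degrees

def gmst1982_deg(jd, fr):
    # Same polynomial as julianDateToGMST2, reduced to degrees in [0, 360).
    t = (jd - 2451545.0 + fr) / 36525.0
    g = 67310.54841 + (8640184.812866 + (0.093104 + (-6.2e-6) * t) * t) * t
    theta = ((jd + fr) % 1.0 + (g / 86400.0) % 1.0) * tau
    return degrees(theta) % 360.0

assert abs(gmst1982_deg(2451545.0, 0.0) - 280.4606) < 1e-3
```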
def longitude_trunc(lon: float) -> float:
"""
Makes sure the longitude is within <-pi ... pi> range.
Parameters
==========
lon - longitude expressed in radians (may be any value)
Returns
=======
normalized longitude in <-pi..pi> range. Note that both -pi and pi are accepted.
"""
if (lon <= pi) and (lon >= -pi):
# Don't do any conversion if it's not necessary; this avoids needless rounding error.
return lon
return (lon + pi) % (2*pi) - pi
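# Note the early return above: without it, the identity (lon + pi) % (2*pi) - pi
# would map +pi to -pi. A quick standalone check of the wrap itself:

```python
from math import pi, isclose

def wrap_lon(lon):
    # Map any angle into [-pi, pi) (note: +pi itself maps to -pi).
    return (lon + pi) % (2 * pi) - pi

assert isclose(abs(wrap_lon(3 * pi)), pi)   # 3*pi wraps to +/-pi
assert isclose(abs(wrap_lon(-3 * pi)), pi)
assert isclose(wrap_lon(0.5), 0.5)          # already in range: unchanged
assert wrap_lon(0.0) == 0.0
```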
def teme2geodetic_spherical(x: float, y: float, z: float, t: datetime):
"""
Converts TEME/ECI coords (x,y,z - expressed in km) to LLA (longitude, latitude, altitude).
This function assumes the Earth is completely round.
The calculations here are based on T.S. Kelso's excellent paper "Orbital Coordinate Systems, Part III
https://celestrak.com/columns/v02n03/.
Parameters
==========
x,y,z : float - coordinates in TEME (True Equator Mean Equinox) version of ECI (Earth Centered Inertial) coords system.
This is the system that's produced by SGP4 models.
t : datetime
Returns
=======
lat, lon, alt - latitude, longitude (degrees), altitude (km)
"""
jd, fr = jday(t.year, t.month, t.day, t.hour, t.minute, t.second)
gmst = julianDateToGMST2(jd, fr)[0]
lat = atan2(z, sqrt(x*x + y*y)) # phi
lon = atan2(y, x) - gmst # lambda-E
lon = longitude_trunc(lon)
alt = sqrt(x*x + y*y + z*z) - RE # h
# TODO: convert this to radians and use radians everywhere.
return lat*RAD2DEG, lon*RAD2DEG, alt
def teme2geodetic_oblate(x: float, y: float, z: float, t: datetime, ellipsoid: Ellipsoid):
"""
Converts TEME/ECI coords (x,y,z - expressed in km) to LLA (longitude, latitude, altitude).
ellipsoid is Earth ellipsoid to be used (e.g. ellipsoid_wgs84).
The calculations here are based on T.S. Kelso's excellent paper "Orbital Coordinate Systems, Part III
https://celestrak.com/columns/v02n03/.
Parameters
==========
x,y,z : float - coordinates in TEME (True Equator Mean Equinox) version of ECI (Earth Centered Inertial) coords system.
This is the system that's produced by SGP4 models.
t : datetime - time of the observation
ellipsoid: Ellipsoid - an Earth ellipsoid specifying Earth oblateness, e.g. ellipsoid_wgs84. Two params are used from it:
a and inverse of f. Both must be specified in kms
Returns
=======
lat, lon, alt - latitude, longitude (both in degrees), alt (in km)
"""
# First, we need to do some basic calculations for Earth oblateness
a = ellipsoid.a
f = 1.0/ellipsoid.finv
b = a*(1 - f)  # semi-minor axis; f is already the flattening (not its inverse)
e2 = f*(2-f)
phii = 1 # This is the starting value for initial iteration
# There should be a convergence check on |phii - phi|, but a fixed 5 iterations is good enough for now.
for _ in range(5):
C = 1/(sqrt(1-e2*pow(sin(phii), 2)))
# This is not explicitly stated on the celestrak page, but it's shown on a diagram.
R = sqrt(x*x + y*y)
phi = atan2(z + a*C*e2*sin(phii), R)
h = R/cos(phi) - a*C
phii=phi
jd, fr = jday(t.year, t.month, t.day, t.hour, t.minute, t.second)
gmst = julianDateToGMST2(jd, fr)[0]
lon = atan2(y, x) - gmst # lambda-E
lon = longitude_trunc(lon)
return phi*RAD2DEG, lon*RAD2DEG, h
def teme2geodetic_pymap3d(x: float, y: float, z: float, t : datetime, ell = None):
"""
Converts TEME/ECI coordinates to geodetic, using pymap3d library.
For details, see https://github.com/geospace-code/pymap3d
Parameters
==========
x,y,z : float - coordinates in TEME (True Equator Mean Equinox) version of ECI (Earth Centered Inertial) coords system.
This is the system that's produced by SGP4 models.
t : datetime - time of the observation
ell: unknown - haven't figured this out yet.
Returns
=======
lat, lon, alt - latitude, longitude (in degrees), alt (in km)
"""
# Short version - whole conversion in one go
#lat, lon, alt = ecef.eci2geodetic(x*1000, y*1000, z*1000, t)
#return lat, lon, alt/1000.0
#print("teme[x,y,z]=%f, %f, %f" % (x, y, z))
xecef, yecef, zecef = ecef.eci2ecef(np.array([x*1000]), np.array([y*1000]), np.array([z*1000]), t)
#print("ecef[x,y,z]=%f, %f, %f" % (xecef, yecef, zecef))
# True = we want the response in degrees
lat, lon, alt = ecef.ecef2geodetic(xecef, yecef, zecef, ell, True)
#print("lla = %f, %f, %f" % (lat, lon, alt))
return lat, lon, alt/1000.0
def get_ssp(lla):
return [ lla[0], lla[1], 0 ]
def calc_azimuth(p1, p2):
""" Calculates azimuth from point 1 to point 2.
Each point is an array of 3 floats (LLA).
Returns azimuth in degrees
Source: http://edwilliams.org/avform.htm#Crs
"""
lat1 = p1[0] * DEG2RAD
lon1 = -p1[1] * DEG2RAD
lat2 = p2[0] * DEG2RAD
lon2 = -p2[1] * DEG2RAD
# The acos form acos((sin(lat2)-sin(lat1)*cos(d))/(sin(d)*cos(lat1))) needs the
# great-circle distance d and is undefined when d == 0; the equivalent atan2 form
# below is robust, so use it directly.
tc1 = atan2(sin(lon1-lon2)*cos(lat2), cos(lat1)*sin(lat2)-sin(lat1)*cos(lat2)*cos(lon1-lon2))
if (tc1 < 0):
tc1 += 2*pi
if (tc1 > 2*pi):
tc1 -= 2*pi
return tc1*RAD2DEG
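# Quick check of the course formula (sketch; note the sign convention above
# negates longitudes, i.e. west is positive, as on edwilliams.org/avform):

```python
from math import atan2, sin, cos, pi, radians, degrees

def course_deg(lat1, lon1, lat2, lon2):
    # Initial great-circle course from point 1 to point 2, in degrees,
    # with longitudes negated as in calc_azimuth above.
    la1, lo1 = radians(lat1), -radians(lon1)
    la2, lo2 = radians(lat2), -radians(lon2)
    tc = atan2(sin(lo1 - lo2) * cos(la2),
               cos(la1) * sin(la2) - sin(la1) * cos(la2) * cos(lo1 - lo2))
    return degrees(tc % (2 * pi))

assert abs(course_deg(0, 0, 10, 0) - 0.0) < 1e-9     # due north
assert abs(course_deg(0, 0, 0, 10) - 90.0) < 1e-9    # due east
assert abs(course_deg(0, 0, 0, -10) - 270.0) < 1e-9  # due west
```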
def calc_swath(alt, nu):
"""This calculates the swath width, given the altitude (alt, in km) of the sat and camera angle (nu, in radians).
Returns swath in km"""
# Convert to radians first.
nur = nu*DEG2RAD
# Ok, this is an overly simplified approximation. It neglects the Earth curvature.
# return alt*tan(nu)
# Source Wertz "Mission geometry", pg. 420.
# rho is an angle between two lines: (sat - tangential to Earth) and (sat - Earth center)
rho = asin(RE/(RE+alt))
epsilon = acos(sin(nur)/sin(rho))
lam = pi/2 - nur - epsilon
swath = RE*lam
print("calc_swath(alt=%f nu=%f/%f) => rho=%f/%f epsilon=%f/%f, lambda= %f/%f => swath=%f [km]" %
(alt, nu, nur, rho, rho*RAD2DEG, epsilon, epsilon*RAD2DEG, lam, lam*RAD2DEG, swath))
return swath
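# Worked example (sketch, using a mean Earth radius of ~6371 km as an
# assumption): for a NOAA-like orbit at ~850 km with an AVHRR-like half-angle
# of ~55.4 deg, each side of the swath comes out on the order of 1500 km.

```python
from math import asin, acos, sin, radians, pi

def swath_km(alt_km, half_angle_deg, re_km=6371.0):
    # Wertz, "Mission Geometry", p. 420: rho is the angular radius of the Earth
    # seen from the satellite; lam is the Earth-central angle out to the edge
    # of the instrument's field of view.
    nur = radians(half_angle_deg)
    rho = asin(re_km / (re_km + alt_km))
    epsilon = acos(sin(nur) / sin(rho))
    lam = pi / 2 - nur - epsilon
    return re_km * lam

s = swath_km(850.0, 55.4)
assert 1400 < s < 1600  # per-side swath in km; total swath is about twice this
```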
def radial_distance(lat1, lon1, bearing, distance):
"""
Return final coordinates (lat2,lon2) [in degrees] given initial coordinates
(lat1,lon1) [in degrees] and a bearing [in degrees] and distance [in km]
Based on this:
https://stackoverflow.com/questions/877524/calculating-coordinates-given-a-bearing-and-a-distance
"""
rlat1 = lat1*DEG2RAD
rlon1 = lon1*DEG2RAD
rdistance = distance / RE # normalize linear distance to radian angle
rbearing = bearing * DEG2RAD
rlat = asin( sin(rlat1) * cos(rdistance) + cos(rlat1) * sin(rdistance) * cos(rbearing) )
if cos(rlat) == 0 or abs(cos(rlat)) < 0.00000001: # Endpoint a pole
rlon = rlon1
else:
rlon = ( (rlon1 + asin( sin(rbearing)* sin(rdistance) / cos(rlat) ) + pi ) % (2*pi) ) - pi
return (rlat*RAD2DEG, rlon*RAD2DEG)
def calc_distance(lat1, lon1, lat2, lon2):
"""
Calculates distance between two (lat,lon) points. Return value is in km.
"""
rlat1 = lat1*DEG2RAD
rlon1 = lon1*DEG2RAD
rlat2 = lat2*DEG2RAD
rlon2 = lon2*DEG2RAD
d = 2 * asin(sqrt((sin((rlat1-rlat2)/2))**2 + cos(rlat1)*cos(rlat2)*(sin((rlon1-rlon2)/2))**2))
return d * RE
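# Consistency sketch for the two spherical helpers above: project a point out
# along a bearing by a given distance, then measure back with the haversine;
# the two distances should agree. RE here is an assumed mean radius (the script
# itself imports RE from noaatools.constants).

```python
from math import asin, atan2, sin, cos, sqrt, radians, degrees

RE = 6371.0  # assumed mean Earth radius, km

def destination(lat, lon, bearing, dist_km):
    # Standard "destination point given bearing and distance" on a sphere.
    rlat, rlon = radians(lat), radians(lon)
    rd, rb = dist_km / RE, radians(bearing)
    nlat = asin(sin(rlat) * cos(rd) + cos(rlat) * sin(rd) * cos(rb))
    nlon = rlon + atan2(sin(rb) * sin(rd) * cos(rlat),
                        cos(rd) - sin(rlat) * sin(nlat))
    return degrees(nlat), degrees(nlon)

def haversine_km(lat1, lon1, lat2, lon2):
    # Haversine form of the great-circle distance, same as calc_distance above.
    dlat = radians(lat2 - lat1)
    dlon = radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * RE * asin(sqrt(a))

p = destination(52.0, 21.0, 45.0, 300.0)
assert abs(haversine_km(52.0, 21.0, p[0], p[1]) - 300.0) < 0.5
```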
def azimuth_add(az, delta):
""" Adds delta to specified azimuth. Does the modulo 360 arithmetic"""
return (az + delta) % 360.0
def teme2geodetic(method: Method, x: float, y: float, z: float, t: datetime):
if method == Method.SPHERICAL:
return teme2geodetic_spherical(x, y, z, t)
if method == Method.OBLATE:
return teme2geodetic_oblate(x, y, z, t, ellipsoid_wgs84)
if method == Method.PYMAP3D:
return teme2geodetic_pymap3d(x, y, z, t)
raise Exception("Invalid calculation method: %s" % method)
def georef(method: Method, tle1: str, tle2: str, aos_txt: str, los_txt: str):
""" This is a naive georeferencing method:
- calculates the sat location at AOS and LOS points (using the SGP4 propagator),
then calculates distance between them. """
# Convert date as a string datetime. Make sure to use UTC rather than the default (local timezone)
d1 = datetime.fromisoformat(aos_txt).replace(tzinfo=timezone.utc)
d2 = datetime.fromisoformat(los_txt).replace(tzinfo=timezone.utc)
print("AOS time: %s" % d1)
print("LOS time: %s" % d2)
# STEP 1: Calculate sat location at AOS and LOS
# Old sgp4 API 1.x used this approach, which is not recommended anymore.
#sat_old = twoline2rv(tle1, tle2, wgs72)
#pos1_old, _ = sat_old.propagate(d1.year, d1.month, d1.day, d1.hour, d1.minute, d1.second)
#pos2_old, _ = sat_old.propagate(d2.year, d2.month, d2.day, d2.hour, d2.minute, d2.second)
# This approach uses the new API 2.x, which gives slightly different results.
# In case of NOAA, the position is off by less than a millimeter.
sat = Satrec.twoline2rv(tle1, tle2)
jd1, fr1 = jday(d1.year, d1.month, d1.day, d1.hour, d1.minute, d1.second)
jd2, fr2 = jday(d2.year, d2.month, d2.day, d2.hour, d2.minute, d2.second)
# Take sat processing/transmission delay into consideration. At AOS time the signal received
# was already NOAA_PROCESSING_DELAY seconds old.
fr1 -= NOAA_PROCESSING_DELAY/86400.0
fr2 -= NOAA_PROCESSING_DELAY/86400.0
_, pos1, _ = sat.sgp4(jd1, fr1) # returns error, position and velocity - we care about position only
_, pos2, _ = sat.sgp4(jd2, fr2)
# Delta between a point and a point+delta (the second delta point is used to calculate azimuth)
DELTA = 30.0
_, pos1delta, _ = sat.sgp4(jd1, fr1 + DELTA/86400.0)
_, pos2delta, _ = sat.sgp4(jd2, fr2 + DELTA/86400.0)
# STEP 2: Calculate sub-satellite point at AOS, LOS times
# T.S. Kelso saves the day *again*: see here: https://celestrak.com/columns/v02n03/
# Ok, we have sat positions at the times of AOS and LOS returned by the SGP4 model. The tricky part here is those are in
# the Earth-Centered Inertial (ECI) reference system. The model used is TEME (True Equator Mean Equinox).
# Convert AOS coordinates to LLA
aos_lla = teme2geodetic(method, pos1[0], pos1[1], pos1[2], d1)
# Now calculate a position for AOS + DELTA seconds. Will use it to determine the azimuth
d1delta = d1 + timedelta(seconds=DELTA)
aos_bis = teme2geodetic(method, pos1delta[0], pos1delta[1], pos1delta[2], d1delta)
aos_az = calc_azimuth(aos_lla, aos_bis)
print("AOS converted to LLA is lat=%f long=%f alt=%f, azimuth=%f" % (aos_lla[0], aos_lla[1], aos_lla[2], aos_az) )
# Now do the same for LOS
los_lla = teme2geodetic(method, pos2[0], pos2[1], pos2[2], d2)
# Let's use a point DELTA seconds later. LOS and (LOS + DELTA) will determine the azimuth
d2delta = d2 + timedelta(seconds=DELTA)
los_bis = teme2geodetic(method, pos2delta[0], pos2delta[1], pos2delta[2], d2delta)
los_az = calc_azimuth(los_lla, los_bis)
print("LOS converted to LLA is lat=%f long=%f alt=%f azimuth=%f" % (los_lla[0], los_lla[1], los_lla[2], los_az))
# STEP 3: Find image corners. Here's an algorithm proposal:
#
# 1. calculate satellite flight azimuth AZ
# https://en.wikipedia.org/wiki/Great-circle_navigation
# In addition to AOS and LOS subsatellite points, we calculate AOSbis and LOSbis, subsat points
# after a certain delta in seconds. This is used to calculate the azimuth.
#
# 2. calculate directions that are perpendicular (+90, -90 degrees) AZ_L, AZ_R
# (basic math, add/subtract 90 degrees, modulo 360)
#
# 3. calculate sensor swath (left-right "width" of the observation), divide by 2 to get D
# - SMAD
# - WERTZ Mission Geometry, page 420
#
# 4. calculate terminal distance starting from SSP at the azimuth AZ_L and AZ_R and distance D
# https://www.fcc.gov/media/radio/find-terminal-coordinates
# https://stackoverflow.com/questions/877524/calculating-coordinates-given-a-bearing-and-a-distance
# TODO: Calculate whether this pass is northbound or southbound
# Let's assume this is AVHRR instrument. Let's use its field of view angle.
fov = AVHRR_FOV
# Now calculate corner positions (use only the first method)
swath = calc_swath(aos_lla[2], fov)
print("Instrument angle is %f deg, altitude is %f km, swath (each side) is %f km, total swath is %f km" % (fov, aos_lla[2], swath, swath*2))
corner_ul = radial_distance(aos_lla[0], aos_lla[1], azimuth_add(aos_az, +90), swath)
corner_ur = radial_distance(aos_lla[0], aos_lla[1], azimuth_add(aos_az, -90), swath)
print("Upper left corner: lat=%f lon=%f" % (corner_ul[0], corner_ul[1]))
print("Upper right corner: lat=%f lon=%f" % (corner_ur[0], corner_ur[1]))
# Now calculate corner positions (use only the first method)
corner_ll = radial_distance(los_lla[0], los_lla[1], azimuth_add(los_az, +90), swath)
corner_lr = radial_distance(los_lla[0], los_lla[1], azimuth_add(los_az, -90), swath)
print("Lower left corner: lat=%f lon=%f" % (corner_ll[0], corner_ll[1]))
print("Lower right corner: lat=%f lon=%f" % (corner_lr[0], corner_lr[1]))
# Ok, we have the sat position in LLA format. Getting sub-satellite point is trivial. Just assume altitude is 0.
aos_lla = get_ssp(aos_lla)
los_lla = get_ssp(los_lla)
return d1, d2, aos_lla, los_lla, corner_ul, corner_ur, corner_ll, corner_lr
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import dijkstra
import cvxpy as cp
import matplotlib.pyplot as plt
import time
class gridworld:
"""A class for making gridworlds."""
def __init__(self, image, targetx, targety, n_dirc=8, turning_loss=0.01, p_sys=0.01, p_row=0.001,p_col=0.0001):
self.image = image
self.n_row = image.shape[0]
self.n_col = image.shape[1]
self.n_dirc = n_dirc
self.obstacles = []
self.freespace = []
self.targetx = targetx
self.targety = targety
self.G = [] # transition matrix (by bool) G (<784*8,<784*8)
self.W = [] # transition matrix (by distance) W (<784*8,<784*8)
self.R = [] # action reward matrix (by distance)(all 0 for goal) R (<784*8,8)
self.P = [] # transition matrix w.r.t action (by bool) P (<784*8,<784*8,8)
self.A = []
self.PP = []
self.C = [] # for LP constraint: Cx <= d, C of size (num nonzero Wij, <784*8)
self.d = []
self.p_pos = [] # penalty for position
self.n_states = 0
self.n_actions = 0
self.num_freespace = 0
self.state_map_col = []
self.state_map_row = []
self.non_obstacles = []
self.p_turn = turning_loss # penalty for turning
self.p_sys = p_sys # symmetry-breaking penalty
self.p_row = p_row # penalty for latitude
self.p_col = p_col # penalty for longitude
self.set_vals()
def set_vals(self):
# Setup function to initialize all necessary
# data
row_obs, col_obs = np.where(self.image == 0)
row_free, col_free = np.where(self.image != 0)
self.obstacles = [row_obs, col_obs]
self.freespace = [row_free, col_free]
n_states = self.n_row * self.n_col * self.n_dirc # 28*28*8
n_actions = 8
n_dirction = self.n_dirc
self.n_states = n_states
self.n_actions = n_actions
p_n = np.zeros((self.n_states, self.n_states))
p_s = np.zeros((self.n_states, self.n_states))
p_e = np.zeros((self.n_states, self.n_states))
p_w = np.zeros((self.n_states, self.n_states))
p_ne = np.zeros((self.n_states, self.n_states))
p_nw = np.zeros((self.n_states, self.n_states))
p_se = np.zeros((self.n_states, self.n_states))
p_sw = np.zeros((self.n_states, self.n_states))
# build action reward matrix of size (#states, #actions), whose goal_state row is all zeros.
R = -1 * np.ones((self.n_states, self.n_actions))
R[:, 4:self.n_actions] = R[:, 4:self.n_actions] * np.sqrt(2)
target = np.ravel_multi_index(
[self.targetx, self.targety, range(0,self.n_dirc)], (self.n_row, self.n_col, self.n_dirc), order='F')
R[target, :] = 0
for row in range(0, self.n_row):
for col in range(0, self.n_col):
for dirc in range(0, self.n_dirc):
# an int: the state (row,col,dirc)'s index in the 28*28*8 vector
curpos = np.ravel_multi_index(
[row, col, dirc], (self.n_row, self.n_col, self.n_dirc), order='F')
# three (3,) arrays: all possible next states' row/column/direction in the 28*28 grid
rows, cols, dircs = self.neighbors(row, col, dirc)
# (3,) array: all possible next states' indices in the 28*28*8 vector
neighbor_inds = np.ravel_multi_index(
[rows, cols, dircs], (self.n_row, self.n_col, self.n_dirc), order='F')
# eight (28^2*8, 28^2*8) arrays: 8 state_transition matrices w.r.t 8 different actions
# p_a(i,j) means s_i ==> s_j by action a
p_turn = self.p_turn
p_sys = self.p_sys
if dirc == 0:
p_n[curpos, neighbor_inds[0]] = p_n[curpos, neighbor_inds[0]] + 1
p_ne[curpos, neighbor_inds[1]] = p_ne[curpos, neighbor_inds[1]] + 1 + p_turn + p_sys
p_nw[curpos, neighbor_inds[2]] = p_nw[curpos, neighbor_inds[2]] + 1 + p_turn
if dirc == 1:
p_s[curpos, neighbor_inds[0]] = p_s[curpos, neighbor_inds[0]] + 1
p_se[curpos, neighbor_inds[1]] = p_se[curpos, neighbor_inds[1]] + 1 + p_turn
p_sw[curpos, neighbor_inds[2]] = p_sw[curpos, neighbor_inds[2]] + 1 + p_turn + p_sys
if dirc == 2:
p_e[curpos, neighbor_inds[0]] = p_e[curpos, neighbor_inds[0]] + 1
p_ne[curpos, neighbor_inds[1]] = p_ne[curpos, neighbor_inds[1]] + 1 + p_turn
p_se[curpos, neighbor_inds[2]] = p_se[curpos, neighbor_inds[2]] + 1 + p_turn + p_sys
if dirc == 3:
p_w[curpos, neighbor_inds[0]] = p_w[curpos, neighbor_inds[0]] + 1
p_nw[curpos, neighbor_inds[1]] = p_nw[curpos, neighbor_inds[1]] + 1 + p_turn + p_sys
p_sw[curpos, neighbor_inds[2]] = p_sw[curpos, neighbor_inds[2]] + 1 + p_turn
if dirc == 4:
p_ne[curpos, neighbor_inds[0]] = p_ne[curpos, neighbor_inds[0]] + 1
p_n[curpos, neighbor_inds[1]] = p_n[curpos, neighbor_inds[1]] + 1 + p_turn
p_e[curpos, neighbor_inds[2]] = p_e[curpos, neighbor_inds[2]] + 1 + p_turn + p_sys
if dirc == 5:
p_nw[curpos, neighbor_inds[0]] = p_nw[curpos, neighbor_inds[0]] + 1
p_n[curpos, neighbor_inds[1]] = p_n[curpos, neighbor_inds[1]] + 1 + p_turn + p_sys
p_w[curpos, neighbor_inds[2]] = p_w[curpos, neighbor_inds[2]] + 1 + p_turn
if dirc == 6:
p_se[curpos, neighbor_inds[0]] = p_se[curpos, neighbor_inds[0]] + 1
p_s[curpos, neighbor_inds[1]] = p_s[curpos, neighbor_inds[1]] + 1 + p_turn + p_sys
p_e[curpos, neighbor_inds[2]] = p_e[curpos, neighbor_inds[2]] + 1 + p_turn
if dirc == 7:
p_sw[curpos, neighbor_inds[0]] = p_sw[curpos, neighbor_inds[0]] + 1
p_s[curpos, neighbor_inds[1]] = p_s[curpos, neighbor_inds[1]] + 1 + p_turn
p_w[curpos, neighbor_inds[2]] = p_w[curpos, neighbor_inds[2]] + 1 + p_turn + p_sys
# penalty for position
p_pos = np.zeros((self.n_row, self.n_col, self.n_dirc))
for i in range(0,self.n_row):
for j in range(0,self.n_col):
p_pos[i,j,:] = pow(i,1/1)*self.p_row + pow(j,1/1)*self.p_col
p_pos = p_pos.flatten('F')
# (28^2*8, 28^2*8) array: state_transition matrix by bool
G = np.logical_or.reduce((p_n, p_s, p_e, p_w, p_ne, p_nw, p_se, p_sw))
# (28^2*8, 28^2*8) array: state_transition matrix by distance
W = np.maximum(
np.maximum(
np.maximum(
np.maximum(
np.maximum(np.maximum(np.maximum(p_n, p_s), p_e), p_w),
np.sqrt(2) * p_ne),
np.sqrt(2) * p_nw),
np.sqrt(2) * p_se),
np.sqrt(2) * p_sw)
# (<28^2*8,) array: free space's indices in the 28*28*8 vector
self.num_freespace = np.size(self.freespace[0])
non_obstacles = np.ravel_multi_index(
[np.tile(self.freespace[0], n_dirction), np.tile(self.freespace[1], n_dirction),
np.repeat(np.array(range(0, self.n_dirc)), self.num_freespace)],
(self.n_row, self.n_col, self.n_dirc),order='F')
non_obstacles = np.sort(non_obstacles)
self.non_obstacles = non_obstacles
p_n = p_n[non_obstacles, :]
p_n = np.expand_dims(p_n[:, non_obstacles], axis=2) # of size (<784*8,<784*8,1)
p_s = p_s[non_obstacles, :]
p_s = np.expand_dims(p_s[:, non_obstacles], axis=2)
p_e = p_e[non_obstacles, :]
p_e = np.expand_dims(p_e[:, non_obstacles], axis=2)
p_w = p_w[non_obstacles, :]
p_w = np.expand_dims(p_w[:, non_obstacles], axis=2)
p_ne = p_ne[non_obstacles, :]
p_ne = np.expand_dims(p_ne[:, non_obstacles], axis=2)
p_nw = p_nw[non_obstacles, :]
p_nw = np.expand_dims(p_nw[:, non_obstacles], axis=2)
p_se = p_se[non_obstacles, :]
p_se = np.expand_dims(p_se[:, non_obstacles], axis=2)
p_sw = p_sw[non_obstacles, :]
p_sw = np.expand_dims(p_sw[:, non_obstacles], axis=2)
G = G[non_obstacles, :]
G = G[:, non_obstacles]
W = W[non_obstacles, :]
W = W[:, non_obstacles]
R = R[non_obstacles, :]
p_pos = p_pos[non_obstacles]
# Compute matrix C and vector d
num_states = np.size(non_obstacles)
k = 0
C = np.zeros((np.count_nonzero(W), num_states))
d = np.zeros((np.count_nonzero(W), ))
for i in range(0,num_states):
for j in range(0,num_states):
if W[i,j] != 0:
C[k, j] = 1
C[k, i] = -1
d[k] = W[i,j] + p_pos[i] + p_pos[j]
k=k+1
P = np.concatenate(
(p_n, p_s, p_e, p_w, p_ne, p_nw, p_se, p_sw), axis=2)
self.G = G
self.W = W
self.P = P
self.R = R
self.C = C
self.d = d
# for test net
n_states_test = self.n_row * self.n_col
pp_n = np.zeros((n_states_test, n_states_test))
pp_s = np.zeros((n_states_test, n_states_test))
pp_e = np.zeros((n_states_test, n_states_test))
pp_w = np.zeros((n_states_test, n_states_test))
pp_ne = np.zeros((n_states_test, n_states_test))
pp_nw = np.zeros((n_states_test, n_states_test))
pp_se = np.zeros((n_states_test, n_states_test))
pp_sw = np.zeros((n_states_test, n_states_test))
for row in range(0, self.n_row):
for col in range(0, self.n_col):
# an int: the state (row, col)'s index in the 28*28 vector
curpos = np.ravel_multi_index(
[row, col], (self.n_row, self.n_col), order='F')
# two (8,) array: all possible next state's row/column in 28*28 matrix
rows, cols = self.neighbors_test(row, col)
# (8,) array: all possible next state's indices in the 28*28 vectors
neighbor_inds = np.ravel_multi_index(
[rows, cols], (self.n_row, self.n_col), order='F')
# eight (28^2, 28^2) array: 8 state_transition matrices w.r.t 8 different actions
pp_n[curpos, neighbor_inds[0]] = pp_n[curpos, neighbor_inds[0]] + 1
pp_s[curpos, neighbor_inds[1]] = pp_s[curpos, neighbor_inds[1]] + 1
pp_e[curpos, neighbor_inds[2]] = pp_e[curpos, neighbor_inds[2]] + 1
pp_w[curpos, neighbor_inds[3]] = pp_w[curpos, neighbor_inds[3]] + 1
pp_ne[curpos, neighbor_inds[4]] = pp_ne[curpos, neighbor_inds[4]] + 1
pp_nw[curpos, neighbor_inds[5]] = pp_nw[curpos, neighbor_inds[5]] + 1
pp_se[curpos, neighbor_inds[6]] = pp_se[curpos, neighbor_inds[6]] + 1
pp_sw[curpos, neighbor_inds[7]] = pp_sw[curpos, neighbor_inds[7]] + 1
# (<28*28,) array: free-space indices in the 28*28 vector
non_obstacles_test = np.ravel_multi_index(
[self.freespace[0], self.freespace[1]], (self.n_row, self.n_col),
order='F')
non_obstacles_test = np.sort(non_obstacles_test)
pp_n = pp_n[non_obstacles_test, :]
pp_n = np.expand_dims(pp_n[:, non_obstacles_test], axis=2) # of size (<784,<784,1)
pp_s = pp_s[non_obstacles_test, :]
pp_s = np.expand_dims(pp_s[:, non_obstacles_test], axis=2)
pp_e = pp_e[non_obstacles_test, :]
pp_e = np.expand_dims(pp_e[:, non_obstacles_test], axis=2)
pp_w = pp_w[non_obstacles_test, :]
pp_w = np.expand_dims(pp_w[:, non_obstacles_test], axis=2)
pp_ne = pp_ne[non_obstacles_test, :]
pp_ne = np.expand_dims(pp_ne[:, non_obstacles_test], axis=2)
pp_nw = pp_nw[non_obstacles_test, :]
pp_nw = np.expand_dims(pp_nw[:, non_obstacles_test], axis=2)
pp_se = pp_se[non_obstacles_test, :]
pp_se = np.expand_dims(pp_se[:, non_obstacles_test], axis=2)
pp_sw = pp_sw[non_obstacles_test, :]
pp_sw = np.expand_dims(pp_sw[:, non_obstacles_test], axis=2)
PP = np.concatenate(
(pp_n, pp_s, pp_e, pp_w, pp_ne, pp_nw, pp_se, pp_sw), axis=2)
self.PP = PP
# generate mesh grid coordinates: two 28*28 matrices describe the 784 grid points
state_map_col, state_map_row = np.meshgrid(np.arange(0, self.n_col), np.arange(0, self.n_row))
# keep only the free-space coordinates (< 784 entries)
self.state_map_col = state_map_col.flatten('F')[non_obstacles_test]
self.state_map_row = state_map_row.flatten('F')[non_obstacles_test]
def get_coords(self, states):
# Given a state or states (each an int < 784*8), returns
# [row, col, dir] triples for the state(s)
non_obstacles = self.non_obstacles
states = states.astype(int)
r, c, d = np.unravel_index(
non_obstacles[states], (self.n_row, self.n_col, self.n_dirc), order='F')
return r, c, d
def north(self, row, col):
# Returns new [row,col]
# if we take the action
new_row = np.max([row - 1, 0])
new_col = col
if self.image[new_row, new_col] == 0:
new_row = row
new_col = col
return new_row, new_col
def northeast(self, row, col):
# Returns new [row,col]
# if we take the action
new_row = np.max([row - 1, 0])
new_col = np.min([col + 1, self.n_col - 1])
if self.image[new_row, new_col] == 0:
new_row = row
new_col = col
return new_row, new_col
def northwest(self, row, col):
# Returns new [row,col]
# if we take the action
new_row = np.max([row - 1, 0])
new_col = np.max([col - 1, 0])
if self.image[new_row, new_col] == 0:
new_row = row
new_col = col
return new_row, new_col
def south(self, row, col):
# Returns new [row,col]
# if we take the action
new_row = np.min([row + 1, self.n_row - 1])
new_col = col
if self.image[new_row, new_col] == 0:
new_row = row
new_col = col
return new_row, new_col
def southeast(self, row, col):
# Returns new [row,col]
# if we take the action
new_row = np.min([row + 1, self.n_row - 1])
new_col = np.min([col + 1, self.n_col - 1])
if self.image[new_row, new_col] == 0:
new_row = row
new_col = col
return new_row, new_col
def southwest(self, row, col):
# Returns new [row,col]
# if we take the action
new_row = np.min([row + 1, self.n_row - 1])
new_col = np.max([col - 1, 0])
if self.image[new_row, new_col] == 0:
new_row = row
new_col = col
return new_row, new_col
def east(self, row, col):
# Returns new [row,col]
# if we take the action
new_row = row
new_col = np.min([col + 1, self.n_col - 1])
if self.image[new_row, new_col] == 0:
new_row = row
new_col = col
return new_row, new_col
def west(self, row, col):
# Returns new [row,col]
# if we take the action
new_row = row
new_col = np.max([col - 1, 0])
if self.image[new_row, new_col] == 0:
new_row = row
new_col = col
return new_row, new_col
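The eight movement helpers above all follow the same clamp-then-check pattern; a table-driven sketch of that pattern is shown below (standalone: `move` and `ACTIONS` are hypothetical names, and `image` stands in for `self.image`, a 0/1 occupancy array where 0 marks an obstacle).

```python
import numpy as np

# each action is a (row_delta, col_delta) pair
ACTIONS = {
    'N': (-1, 0), 'S': (1, 0), 'E': (0, 1), 'W': (0, -1),
    'NE': (-1, 1), 'NW': (-1, -1), 'SE': (1, 1), 'SW': (1, -1),
}

def move(image, row, col, action):
    dr, dc = ACTIONS[action]
    n_row, n_col = image.shape
    # clamp the move to the grid boundaries
    new_row = min(max(row + dr, 0), n_row - 1)
    new_col = min(max(col + dc, 0), n_col - 1)
    if image[new_row, new_col] == 0:  # destination blocked: stay in place
        return row, col
    return new_row, new_col
```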
def get_reward_prior(self):
# Returns reward prior for gridworld
im = -1 * np.ones((self.n_row, self.n_col))
im[self.targetx, self.targety] = 10
return im
def t_get_reward_prior(self):
# Returns reward prior as needed for
# dataset generation
im = np.zeros((self.n_row, self.n_col))
im[self.targetx, self.targety] = 10
return im
def neighbors(self, row, col, dirc):
# Get valid neighbors in all valid directions
rows, cols, dircs = [], [], []
# N == 0
if (dirc == 0) or (dirc == 4) or (dirc == 5):
new_row, new_col = self.north(row, col)
rows, cols, dircs = np.append(rows, new_row), np.append(cols, new_col), np.append(dircs,0)
# S == 1
if (dirc == 1) or (dirc == 6) or (dirc == 7):
new_row, new_col = self.south(row, col)
rows, cols, dircs = np.append(rows, new_row), np.append(cols, new_col), np.append(dircs,1)
# E == 2
if (dirc == 2) or (dirc == 4) or (dirc == 6):
new_row, new_col = self.east(row, col)
rows, cols, dircs = np.append(rows, new_row), np.append(cols, new_col), np.append(dircs,2)
# W == 3
if (dirc == 3) or (dirc == 5) or (dirc == 7):
new_row, new_col = self.west(row, col)
rows, cols, dircs = np.append(rows, new_row), np.append(cols, new_col), np.append(dircs,3)
# NE == 4
if (dirc == 4) or (dirc == 0) or (dirc == 2):
new_row, new_col = self.northeast(row, col)
rows, cols, dircs = np.append(rows, new_row), np.append(cols, new_col), np.append(dircs,4)
# NW == 5
if (dirc == 5) or (dirc == 0) or (dirc == 3):
new_row, new_col = self.northwest(row, col)
rows, cols, dircs = np.append(rows, new_row), np.append(cols, new_col), np.append(dircs,5)
# SE == 6
if (dirc == 6) or (dirc == 1) or (dirc == 2):
new_row, new_col = self.southeast(row, col)
rows, cols, dircs = np.append(rows, new_row), np.append(cols, new_col), np.append(dircs,6)
# SW == 7
if (dirc == 7) or (dirc == 1) or (dirc == 3):
new_row, new_col = self.southwest(row, col)
rows, cols, dircs = np.append(rows, new_row), np.append(cols, new_col), np.append(dircs,7)
rows = rows.astype(np.int64)
cols = cols.astype(np.int64)
dircs = dircs.astype(np.int64)
return rows, cols, dircs
# functions for test
def get_coords_test(self, states):
# Given a state or states, returns
# [row,col] pairs for the state(s)
non_obstacles = np.ravel_multi_index(
[self.freespace[0], self.freespace[1]], (self.n_row, self.n_col),
order='F')
non_obstacles = np.sort(non_obstacles)
states = states.astype(int)
r, c = np.unravel_index(
non_obstacles[states], (self.n_row, self.n_col), order='F')
return r, c
def next_state_prob(self, s, a):
# Gets next state probability for
# a given action (a)
if hasattr(a, "__iter__"):
p = np.squeeze(self.PP[s, :, a])
else:
p = np.squeeze(self.PP[s, :, a]).T
return p
def rand_choose(self, in_vec):
# Samples an index from the (possibly unnormalized) probability vector
if len(in_vec.shape) > 1:
if in_vec.shape[1] == 1:
in_vec = in_vec.T
temp = np.hstack((np.zeros((1)), np.cumsum(in_vec))).astype('int')
q = np.random.rand()
x = np.where(q > temp[0:-1])
y = np.where(q < temp[1:])
choice = np.intersect1d(x, y)[0]
return choice
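`rand_choose` above is inverse-CDF sampling; an equivalent standalone one-liner uses `np.searchsorted` on the cumulative sum (a sketch, not the class method; `sample_categorical` is a hypothetical name, and `p` may be unnormalized counts like the rows of `PP`).

```python
import numpy as np

def sample_categorical(p, rng=np.random):
    """Draw index i with probability p[i] / sum(p) via the inverse CDF."""
    cdf = np.cumsum(p)
    return int(np.searchsorted(cdf, rng.rand() * cdf[-1], side='right'))
```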
def sample_next_state(self, s, a):
# Gets the next state given the
# current state (s) and an
# action (a)
vec = self.next_state_prob(s, a)
result = self.rand_choose(vec)
return result
def map_ind_to_state(self, row, col):
# Takes [row, col] and maps to a state
rw = np.where(self.state_map_row == row)
cl = np.where(self.state_map_col == col)
return np.intersect1d(rw, cl)[0]
def neighbors_test(self, row, col):
# Get valid neighbors in all valid directions
rows, cols = self.north(row, col)
new_row, new_col = self.south(row, col)
rows, cols = np.append(rows, new_row), np.append(cols, new_col)
new_row, new_col = self.east(row, col)
rows, cols = np.append(rows, new_row), np.append(cols, new_col)
new_row, new_col = self.west(row, col)
rows, cols = np.append(rows, new_row), np.append(cols, new_col)
new_row, new_col = self.northeast(row, col)
rows, cols = np.append(rows, new_row), np.append(cols, new_col)
new_row, new_col = self.northwest(row, col)
rows, cols = np.append(rows, new_row), np.append(cols, new_col)
new_row, new_col = self.southeast(row, col)
rows, cols = np.append(rows, new_row), np.append(cols, new_col)
new_row, new_col = self.southwest(row, col)
rows, cols = np.append(rows, new_row), np.append(cols, new_col)
return rows, cols
def LP(M, n_traj=7):
num_states = M.C.shape[1]
C = M.C
d = M.d
Lambda = []
Distance = []
# get free positions (neither obstacle nor target): a (<784-1,) vector
non_obs_pos = np.ravel_multi_index(
[M.freespace[0], M.freespace[1]], (M.n_row, M.n_col),order='F')
target_pos = np.ravel_multi_index(
[M.targetx, M.targety], (M.n_row, M.n_col),order='F')
non_obs_pos = np.delete(non_obs_pos, np.argwhere(non_obs_pos==target_pos))
num_pos = np.size(non_obs_pos)
# sample n_traj start points
if num_pos >= n_traj:
rand_ind = np.random.permutation(num_pos)
else:
rand_ind = np.tile(np.random.permutation(num_pos), (1, 10))
start_ind = rand_ind[0:n_traj].flatten()
start_xy = non_obs_pos[start_ind]
startx, starty = np.unravel_index(
start_xy, (M.n_row, M.n_col), order='F')
# solve the LP n_traj times, using warm starts
for k in range(0,n_traj):
# set start/target points: two (8,) arrays give 16 indices in the 784*8 vector
start = np.ravel_multi_index(
[startx[k], starty[k], range(0,M.n_dirc)], (M.n_row, M.n_col, M.n_dirc), order='F')
target = np.ravel_multi_index(
[M.targetx, M.targety, range(0,M.n_dirc)], (M.n_row, M.n_col, M.n_dirc), order='F')
x_start = []
x_target = []
for i in range(0, M.n_dirc):
x_start.append(np.argwhere(M.non_obstacles == start[i]).reshape((1,)))
x_target.append(np.argwhere(M.non_obstacles == target[i]).reshape((1,)))
# compute q_ij
q = []
for i in range(0, M.n_dirc):
for j in range(0, M.n_dirc):
q_temp = np.zeros((1, num_states))
q_temp[0, x_target[j]] = 1
q_temp[0, x_start[i]] = -1
q.append(q_temp)
qq = np.array(q).reshape(M.n_dirc*M.n_dirc, num_states)
# Linear Program(LP)
x = cp.Variable(shape = num_states)
constraints = [C*x <= d]
f_0 = cp.min(qq*x)
prob = cp.Problem(objective = cp.Maximize(cp.min(qq*x)),
constraints = constraints)
try:
if k == 0:
prob.solve(solver=cp.SCS, verbose=False)
else:
prob.solve(solver=cp.SCS, warm_start=True, verbose=False)
except Exception:
Lambda_i = 'unsolved'
Distance_i = 'unsolved'
#print('unsolved')
if prob.status is not None:
#print(prob.status)
if prob.status == 'unbounded':
Lambda_i = 'unbounded'
Distance_i = 'unbounded'
else:
Lambda_i = constraints[0].dual_value
Distance_i = f_0.value
Lambda.append(Lambda_i)
Distance.append(Distance_i)
return Lambda, Distance, startx, starty
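`LP` above encodes shortest-path distance as a difference-constraints linear program: maximize `x_target - x_start` subject to `x_j - x_i <= w_ij` for every edge, with the active constraints' dual values marking the path. A minimal standalone sketch of the same idea with `scipy.optimize.linprog` on a hypothetical 3-node graph (nodes 0 -> 1 -> 2 with weight 1 each, plus a direct 0 -> 2 edge of weight 3):

```python
import numpy as np
from scipy.optimize import linprog

edges = [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 3.0)]
n = 3
A_ub = np.zeros((len(edges), n))
b_ub = np.zeros(len(edges))
for k, (i, j, w) in enumerate(edges):
    A_ub[k, j], A_ub[k, i], b_ub[k] = 1.0, -1.0, w  # x_j - x_i <= w
c = np.zeros(n)
c[2], c[0] = -1.0, 1.0  # linprog minimizes, so minimize -(x_2 - x_0)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * n)
dist = -res.fun  # shortest-path distance from node 0 to node 2
```

Here the optimum is `min(1 + 1, 3) = 2`, the two-hop path through node 1.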
def visualize(dom, states_xy):
fig, ax = plt.subplots()
implot = plt.imshow(dom.T, cmap="Greys_r")
ax.plot(states_xy[:, 0], states_xy[:, 1], c='b', label='Optimal Path')
ax.plot(states_xy[0, 0], states_xy[0, 1], '-o', label='Start')
ax.plot(states_xy[-1, 0], states_xy[-1, 1], '-s', label='Goal')
legend = ax.legend(loc='upper right', shadow=False)
for label in legend.get_texts():
label.set_fontsize('x-small') # the legend text size
for label in legend.get_lines():
label.set_linewidth(0.5) # the legend line width
plt.draw()
plt.waitforbuttonpress(0)
plt.close(fig)
def get_opt_path(Lambda, Distance, M, startx, starty):
W = M.W
C = M.C
d = M.d
targetx = M.targetx
targety = M.targety
r_C = np.size(C, axis=0)
# search for the best epsilon threshold: TODO revise this heuristic later
epsilon = 0.5
n_search = 10 #200
t = 2 # t must be bigger than 1
Lambda_n = np.zeros((r_C,))
b_max = int(np.floor(Distance / np.sqrt(2)))
a = []
for b in range(0, b_max+1):
aa = Distance - b*np.sqrt(2)
aa = np.amin(np.array([np.ceil(aa)-aa, aa-np.floor(aa)]))
a.append(aa)
b_true = int(np.argmin(np.array(a)))
a_true = int(np.round(Distance - b_true*np.sqrt(2)))
for i in range(0, r_C):
if np.absolute(Lambda)[i] > epsilon:
Lambda_n[i] = 1
else:
Lambda_n[i] = 0
path = np.argwhere(Lambda_n==1)
n_step = np.size(path, 0)
#print('before searching, the number of lambda > %f is %d' %(epsilon,n_step))
for k in range(1, n_search):
t = (2 + np.sqrt(k))/(np.sqrt(k))
if n_step > a_true + b_true + 0.1:
epsilon = epsilon * t
for i in range(0, r_C):
if np.abs(Lambda)[i] > epsilon:
Lambda_n[i] = 1
else:
Lambda_n[i] = 0
path = np.argwhere(Lambda_n==1)
n_step = np.size(path, 0)
if n_step < a_true + b_true - 0.1:
epsilon = epsilon / t
for i in range(0, r_C):
if np.abs(Lambda)[i] > epsilon:
Lambda_n[i] = 1
else:
Lambda_n[i] = 0
path = np.argwhere(Lambda_n==1)
n_step = np.size(path, 0)
#print('Lambda for the constraints that works = ', Lambda[path])
#print('after searching %d iteration, the number of lambda > %f is %d' %(n_search,epsilon,n_step))
distance_check = np.dot(Lambda_n, d)
#if np.abs(distance_check - Distance) < 0.05:
#print('Distance + search_loss = %f'%Distance)
#print('check Distance + search_loss = %f'%distance_check)
#print('check pass')
#else:
#print('check failure')
#print('epsilon = ',epsilon)
#print('true step = %d'%(a_true + b_true))
#print('search step = %d'%n_step)
################################################################################################################
# find the Difference Constraints that works: of size(n_step,)
constraints_ind = np.argwhere(Lambda_n == 1).reshape(-1,)
# get optimal path connection matrix: of size(n_step, <784*8)
Conn_matr = C[constraints_ind, :]
# get coordinate in the 28*28 map
step_from = []
step_to = []
for i in range(0,n_step):
one_step = Conn_matr[i,:].reshape(-1,)
from_ind = np.argwhere(one_step == -1).reshape(1,) # an int < 784*8
to_ind = np.argwhere(one_step == 1).reshape(1,) # an int < 784*8
step_from.append(from_ind)
step_to.append(to_ind)
# three arrays of size(1, n_step)
coords_from_r, coords_from_c, coords_from_d = M.get_coords(np.array(step_from))
coords_to_r, coords_to_c, coords_to_d = M.get_coords(np.array(step_to))
# two arrays of size(2, n_step)
states_from = np.concatenate((coords_from_r, coords_from_c), axis=1)
states_to = np.concatenate((coords_to_r, coords_to_c), axis=1)
# get the right sequence
states_xy = np.zeros((n_step+1, 2))
states_xy[0, :] = [startx, starty]
search_failure = 0
for i in range(0,n_step):
ind_x = np.argwhere(coords_from_r.reshape((-1,)) == states_xy[i, 0])
ind_y = np.argwhere(coords_from_c.reshape((-1,)) == states_xy[i, 1])
intersec = np.setdiff1d(ind_x, np.setdiff1d(ind_x, ind_y))
if (np.size(intersec) != 0):
ind = intersec[0]
else:
states_xy = None
search_failure = 1
return states_xy, search_failure
states_xy[i+1, :] = states_to[ind, :]
return states_xy, search_failure
# ----------- Help to Understand ------------ #
import sys
sys.path.append('.')
from generators.obstacle_gen import *
sys.path.remove('.')
def main1():
### ==> Step1: Build map. Add obstacles, border and goal.
obs = obstacles(domsize=[28,28], # domain size
mask=[23,24], # goal
size_max=2, # obstacle's max size
dom=None, # must be None
obs_types='rect', # obstacle's type, 'rect' only
num_types=1) # number of types, 1 only
# add random obstacles to dom;
# an obstacle is added only if it doesn't mask the goal, else skip to the next one
n_obs = obs.add_n_rand_obs(n=120)
# add border to dom;
# a border point is added only if it doesn't mask the goal, else skip to the next one
border_res = obs.add_border()
# print final dom
#obs.show()
### ==> Step2: Find optimal path.
# get final map
im = obs.get_final()
# generate gridworld from obstacle map
G = gridworld(image=im,
n_dirc=8,
targetx=23, # goal[0]
targety=24, # goal[1]
turning_loss=0.053333,
p_sys=0.010225,
p_row=0.0002,
p_col=0.0001)
# set number of traj
n_traj = 100
# solve LP problem
print('solve LP problem:')
start_time = time.time()
Lambda, Distance, startx, starty = LP(M=G, n_traj=n_traj)
end_time = time.time()
print('time for solving LP = ', (end_time-start_time))
#print('Lambda = ',Lambda)
#print('Distance + search_loss = ', Distance)
print('end LP, now search the path:')
# search optimal path
start_time = time.time()
states_xy = []
n_solver_failure = 0
n_search_failure = 0
n_problem_infeasibly = 0
for i in range(0,n_traj):
if Lambda[i] == 'unsolved':
n_solver_failure = n_solver_failure + 1
elif Lambda[i] == 'unbounded':
n_problem_infeasibly = n_problem_infeasibly + 1
else:
states_xy_one_traj, search_failure = get_opt_path(Lambda=Lambda[i],
Distance=Distance[i],
M=G,
startx=startx[i],
starty=starty[i])
states_xy.append(states_xy_one_traj)
n_search_failure = n_search_failure + search_failure
end_time = time.time()
print('time for searching = ', (end_time-start_time))
print('solver failures: %d of %d'%(n_solver_failure, n_traj))
print('search failures: %d of %d'%(n_search_failure, n_traj))
print('infeasible problems: %d of %d'%(n_problem_infeasibly, n_traj))
print('press 0 for next path visualization')
# visualize optimal path
j = 0
for i in range(0,n_traj):
if states_xy[i] is not None:
visualize(im, states_xy[i])
j = j + 1
if j == 100:
break
print('End All! Thanks for Running! -- Zhun YIN')
if __name__ == '__main__':
main1()
'''
physics
'''
# Mountain Climate Simulator, meteorological forcing disaggregator
# Copyright (C) 2015 Joe Hamman
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
import numpy as np
from metsim.defaults import CONSTS as constants
def calc_pet(rad, ta, pa, dayl, dt=0.2):
'''
calculates the potential evapotranspiration for aridity corrections in
`calc_vpd()`, according to Kimball et al., 1997
Parameters
----------
rad : scalar or numpy.ndarray
daylight average incident shortwave radiation (W/m2)
ta : scalar or numpy.ndarray
daylight average air temperature (deg C)
pa : scalar or numpy.ndarray
air pressure (Pa)
dayl : scalar or numpy.ndarray
daylength (s)
dt : scalar, optional
offset for saturation vapor pressure calculation, default = 0.2
Returns
----------
pet : scalar or numpy.ndarray
Potential evapotranspiration (cm/day)
'''
# rnet # (W m-2) absorbed shortwave radiation avail. for ET
# lhvap # (J kg-1) latent heat of vaporization of water
# gamma # (Pa K-1) psychrometer parameter
# dt = 0.2 # offset for saturation vapor pressure calculation
# t1, t2 # (deg C) air temperatures
# pvs1, pvs2 # (Pa) saturated vapor pressures
# pet # (kg m-2 day-1) potential evapotranspiration
# s # (Pa K-1) slope of saturated vapor pressure curve
# calculate absorbed radiation, assuming albedo = 0.2 and ground
# heat flux = 10% of absorbed radiation during daylight
rnet = rad * 0.72
# calculate latent heat of vaporization as a function of ta
lhvap = 2.5023e6 - 2430.54 * ta
# calculate the psychrometer parameter: gamma = (cp pa)/(lhvap epsilon)
# where:
# cp (J/kg K) specific heat of air
# epsilon (unitless) ratio of molecular weights of water and air
gamma = constants['CP'] * pa / (lhvap * constants['EPS'])
# estimate the slope of the saturation vapor pressure curve at ta
# temperature offsets for slope estimate
t1 = ta + dt
t2 = ta - dt
# calculate saturation vapor pressures at t1 and t2, using formula from
# Abbott, P.F., and R.C. Tabony, 1985. The estimation of humidity
# parameters. Meteorol. Mag., 114:49-56.
pvs1 = svp(t1)
pvs2 = svp(t2)
# calculate slope of pvs vs. T curve near ta
s = (pvs1 - pvs2) / (t1 - t2)
# can this be s = svp_slope(ta)? JJH
# calculate PET using the Priestley-Taylor approximation, with coefficient
# set at 1.26. Units of result are kg/m^2/day, equivalent to mm water/day
pet = (1.26 * (s / (s + gamma)) * rnet * dayl) / lhvap
# return a value in centimeters/day, because this value is used in a ratio
# to annual total precip, and precip units are centimeters
return (pet / 10.)
def atm_pres(elev):
'''atmospheric pressure (Pa) as a function of elevation (m)
Parameters
----------
elev : scalar or numpy.ndarray
Elevation (meters)
Returns
-------
pressure : scalar or numpy.ndarray
Atmospheric pressure at elevation `elev` (Pa)
References
----------
* Iribane, J.V., and W.L. Godson, 1981. Atmospheric Thermodynamics, 2nd
Edition. D. Reidel Publishing Company, Dordrecht, The Netherlands.
(p. 168)
'''
t1 = 1.0 - (constants['LR_STD'] * elev) / constants['T_STD']
t2 = constants['G_STD'] / (constants['LR_STD'] * (constants['R'] /
constants['MA']))
return constants['P_STD'] * np.power(t1, t2)
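`atm_pres` is the standard-atmosphere barometric formula; a standalone sketch follows with the usual ISA constant values spelled out (these values are assumptions for illustration, not read from `metsim.defaults`).

```python
import numpy as np

P_STD, T_STD = 101325.0, 288.15   # Pa, K: sea-level standard pressure/temperature
LR_STD, G_STD = 0.0065, 9.80665   # K/m lapse rate, m/s^2 gravity
R, MA = 8.3143, 28.9644e-3        # J/mol/K gas constant, kg/mol molar mass of air

def atm_pres_demo(elev):
    t1 = 1.0 - (LR_STD * elev) / T_STD
    t2 = G_STD / (LR_STD * (R / MA))
    return P_STD * np.power(t1, t2)
```

Pressure decays with elevation: at 0 m the result is exactly `P_STD`, and at 1000 m it is near the tabulated standard-atmosphere value of roughly 89.9 kPa.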
def svp(temp, a=0.61078, b=17.269, c=237.3):
'''Compute the saturated vapor pressure.
Parameters
----------
temp : numpy.ndarray
Temperature (degrees Celsius)
Returns
----------
pressure : numpy.ndarray
Saturated vapor pressure at temperature `temp` (Pa)
References
----------
* Maidment, David R. Handbook of hydrology. McGraw-Hill Inc., 1992.
Equation 4.2.2.
'''
svp = a * np.exp((b * temp) / (c + temp))
inds = np.nonzero(temp < 0.)[0]
svp[inds] *= 1.0 + .00972 * temp[inds] + .000042 * np.power(temp[inds], 2)
return svp * 1000.
def svp_slope(temp, a=0.61078, b=17.269, c=237.3):
'''Compute the gradient of the saturated vapor pressure as a function of
temperature.
Parameters
----------
temp : numpy.ndarray
Temperature (degrees Celsius)
Returns
-------
gradient : numpy.ndarray
Gradient of d(svp)/dT.
References
----------
* Maidment, David R. Handbook of hydrology. McGraw-Hill Inc., 1992.
Equation 4.2.3.
'''
return (b * c) / ((c + temp) * (c + temp)) * svp(temp, a=a, b=b, c=c)
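As a sanity sketch, the analytic slope from `svp_slope` should match a central finite difference of `svp`. The two functions are re-implemented standalone below (hypothetical `_demo` names; temperatures >= 0 C only, so the sub-zero correction branch is omitted).

```python
import numpy as np

def svp_demo(temp, a=0.61078, b=17.269, c=237.3):
    # saturated vapor pressure (Pa), Maidment (1992) Eq. 4.2.2
    return a * np.exp((b * temp) / (c + temp)) * 1000.0

def svp_slope_demo(temp, a=0.61078, b=17.269, c=237.3):
    # analytic d(svp)/dT (Pa/K), Maidment (1992) Eq. 4.2.3
    return (b * c) / ((c + temp) ** 2) * svp_demo(temp, a, b, c)

t = np.array([5.0, 15.0, 25.0])
dt = 1e-4
fd = (svp_demo(t + dt) - svp_demo(t - dt)) / (2 * dt)  # central difference
```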
r"""Cheng and Shu's 1d acoustic wave propagation (1 min)
Particles have properties according
to the following distribution
.. math::
\rho = \rho_0 + \Delta\rho \sin(kx)
p = 1.0
u = 1 + 0.1\sin(kx)
with :math:`\Delta\rho = 1` and :math:`k = 2\pi/\lambda`,
where :math:`\lambda` is the domain length.
.. math::
\rho_0 = 2, \quad \gamma = 1.4, \quad p_0 = 1.0
"""
# standard library and numpy imports
import numpy
# pysph imports
from pysph.base.utils import get_particle_array as gpa
from pysph.base.nnps import DomainManager
from pysph.solver.application import Application
from pysph.sph.scheme import GSPHScheme, SchemeChooser
class ChengShu(Application):
def initialize(self):
self.xmin = 0.
self.xmax = 1.
self.gamma = 1.4
self.p_0 = 1.
self.c_0 = 1.
self.delta_rho = 1
self.n_particles = 1000
self.domain_length = self.xmax - self.xmin
self.dx = self.domain_length / (self.n_particles - 1)
self.k = 2 * numpy.pi / self.domain_length
self.hdx = 2.
self.dt = 1e-4
self.tf = 1.0
self.dim = 1
def create_domain(self):
return DomainManager(
xmin=self.xmin, xmax=self.xmax, periodic_in_x=True
)
def create_particles(self):
x = numpy.linspace(
self.xmin, self.xmax, self.n_particles
)
rho = 2 + numpy.sin(2 * numpy.pi * x)*self.delta_rho
p = numpy.ones_like(x)
u = 1 + 0.1 * numpy.sin(2 * numpy.pi * x)
cs = numpy.sqrt(
self.gamma * p / rho
)
h = numpy.ones_like(x) * self.dx * self.hdx
m = numpy.ones_like(x) * self.dx * rho
e = p / ((self.gamma - 1) * rho)
fluid = gpa(
name='fluid', x=x, p=p, rho=rho, u=u, h=h, m=m, e=e, cs=cs
)
self.scheme.setup_properties([fluid])
return [fluid, ]
def create_scheme(self):
gsph = GSPHScheme(
fluids=['fluid'], solids=[], dim=self.dim,
gamma=self.gamma, kernel_factor=1.,
g1=0., g2=0., rsolver=3, interpolation=1, monotonicity=1,
interface_zero=True, hybrid=False, blend_alpha=5.0,
niter=200, tol=1e-6
)
s = SchemeChooser(
default='gsph', gsph=gsph
)
return s
def configure_scheme(self):
s = self.scheme
if self.options.scheme == 'gsph':
s.configure_solver(
dt=self.dt, tf=self.tf,
adaptive_timestep=False, pfreq=1000
)
if __name__ == "__main__":
app = ChengShu()
app.run()
#!/usr/bin/env python
# -*- coding: utf-8 -*-
# utils_test.py
"""
Tests for utility functions.
Copyright (c) 2020, David Hoffman
"""
import numpy as np
import pytest
from pyotf.utils import *
def test_remove_bg_unsigned():
"""Make sure that remove background doesn't mangle unsigned ints."""
test_data = np.array((1, 2, 3, 3, 3, 4, 5), dtype=np.uint16)
assert np.allclose(remove_bg(test_data, 1.0), test_data - 3.0)
def test_center_data():
"""Make sure center data works as advertised."""
ndims = np.random.randint(2, 3)
shape = np.random.randint(1, 512, ndims)
data = np.zeros(shape)
random_index = tuple((np.random.randint(i),) for i in shape)
data[random_index] = 1
data_centered = center_data(data)
assert np.fft.ifftshift(data_centered)[((0,),) * ndims]
def test_psqrt():
"""Test psqrt."""
data = np.random.randint(-1000, 1000, size=20)
ps_data = psqrt(data)
less_than_zero = data < 0
assert (ps_data[less_than_zero] == 0).all()
more_than_zero = np.logical_not(less_than_zero)
assert np.allclose(ps_data[more_than_zero], np.sqrt(data[more_than_zero]))
def test_cart2pol():
"""Make sure cart2pol is good."""
z = np.random.randn(10) + np.random.randn(10) * 1j
theta = np.angle(z)
r = abs(z)
test_r, test_theta = cart2pol(z.imag, z.real)
assert np.allclose(test_theta, theta), "theta failed"
assert np.allclose(test_r, r), "r failed"
import numpy as np
import os
import scipy
from experimental_tools import *
from newton_methods import cubic_newton
from oracles import create_log_reg_oracle
from sklearn.datasets import load_svmlight_file
from utils import get_tolerance, get_tolerance_strategy
def run_experiment(dataset_filename, name, max_iters):
print('Experiment: \t %s, \t file: %s, \t max_iters = %d.' %
(name, dataset_filename, max_iters))
X, y = load_svmlight_file(dataset_filename)
oracle = create_log_reg_oracle(X, y, 1 / X.shape[0])
x_0 = np.zeros(X.shape[1])
print('Minimize by scipy ... ', flush=True, end='')
f_star = \
scipy.optimize.minimize(oracle.func, x_0, jac=oracle.grad, tol=1e-9).fun
print('f_star = %g.' % f_star)
H_0 = 1.0
line_search = True
tolerance = get_tolerance({'criterion': 'func',
'f_star': f_star,
'tolerance': 1e-8})
subsolver = 'FGM'
stopping_criterion_subproblem = 'func'
constant_strategies = get_constant_strategies()
power_strategies = get_power_strategies()
adaptive_strategy = get_tolerance_strategy({'strategy': 'adaptive',
'c': 1.0,
'alpha': 1,
'label': 'adaptive'})
adaptive_15_strategy = get_tolerance_strategy({'strategy': 'adaptive',
'c': 1.0,
'alpha': 1.5,
'label': r'adaptive $1.5$'})
adaptive_2_strategy = get_tolerance_strategy({'strategy': 'adaptive',
'c': 1.0,
'alpha': 2,
'label': r'adaptive $2$'})
strategies_1 = constant_strategies
strategies_2 = power_strategies + [constant_strategies[-1]]
strategies_3 = [adaptive_strategy, adaptive_15_strategy,
adaptive_2_strategy, constant_strategies[-1]]
method = lambda strategy: cubic_newton(oracle, x_0, tolerance,
max_iters=max_iters,
H_0=H_0,
line_search=line_search,
inner_tolerance_strategy=strategy,
subsolver=subsolver,
trace=True,
B=None,
Binv=None,
stopping_criterion_subproblem=
stopping_criterion_subproblem)
filename = os.getcwd() + '/plots/exact_logreg_%s' % (name)
labels_1 = get_labels(strategies_1)
histories_1 = run_method(method, strategies_1, labels_1)
plot_func_residual_iter(histories_1, 'hess_vec_calls', f_star, labels_1,
['grey', 'grey', 'grey', 'grey'],
['-', '--', '-.', ':'],
[5, 4, 3, 4],
[1, 1, 1, 1],
r'Log-reg, %s: constant strategies' % name,
'Hessian-vector products',
filename=filename+'_const.pdf')
labels_2 = get_labels(strategies_2)
histories_2 = run_method(method, strategies_2, labels_2)
plot_func_residual_iter(histories_2, 'hess_vec_calls', f_star, labels_2,
['blue', 'blue', 'blue', 'blue', 'gray'],
['-', '--', '-.', ':', ':'],
[5, 4, 3, 2, 4],
[0.6, 0.6, 0.6, 0.6, 0.8],
r'Log-reg, %s: dynamic strategies' % name,
'Hessian-vector products',
filename=filename+'_power.pdf')
labels_3 = get_labels(strategies_3)
histories_3 = run_method(method, strategies_3, labels_3)
plot_func_residual_iter(histories_3, 'hess_vec_calls', f_star, labels_3,
['red', 'tab:orange', 'tab:orange', 'gray'],
['-', '--', '-.', ':'],
[2, 4, 2, 4],
[1, 1, 1, 0.8],
r'Log-reg, %s: adaptive strategies' % name,
'Hessian-vector products',
filename=filename+'_adaptive.pdf')
run_experiment('data/mushrooms.txt', 'mushrooms', max_iters=500)
run_experiment('data/w8a.txt', 'w8a', max_iters=200)
run_experiment('data/a8a.txt', 'a8a', max_iters=200)
run_experiment('data/phishing.txt', 'phishing', max_iters=500)
run_experiment('data/splice.txt', 'splice', max_iters=200)
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (11, 5) #set default figure size
import numpy as np
import sympy as sym
from sympy import init_printing, latex
from matplotlib import cm
from mpl_toolkits.mplot3d import Axes3D
# True present value of a finite lease
def finite_lease_pv_true(T, g, r, x_0):
G = (1 + g)
R = (1 + r)
return (x_0 * (1 - G**(T + 1) * R**(-T - 1))) / (1 - G * R**(-1))
# First approximation for our finite lease
def finite_lease_pv_approx_1(T, g, r, x_0):
p = x_0 * (T + 1) + x_0 * r * g * (T + 1) / (r - g)
return p
# Second approximation for our finite lease
def finite_lease_pv_approx_2(T, g, r, x_0):
return (x_0 * (T + 1))
# Infinite lease
def infinite_lease(g, r, x_0):
G = (1 + g)
R = (1 + r)
return x_0 / (1 - G * R**(-1))
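As a quick consistency check (a sketch with names local to this snippet): for `g < r`, the closed-form finite-lease present value computed by `finite_lease_pv_true` converges to the `infinite_lease` value as `T` grows.

```python
g, r, x_0, T = 0.02, 0.03, 1.0, 2000
G, R = 1 + g, 1 + r
# finite-horizon geometric sum, same formula as finite_lease_pv_true
pv_T = x_0 * (1 - G**(T + 1) * R**(-T - 1)) / (1 - G / R)
# infinite-horizon limit, same formula as infinite_lease
pv_inf = x_0 / (1 - G / R)
```

The finite lease is always worth slightly less than the perpetual one, and the gap vanishes geometrically in `T`.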
def plot_function(axes, x_vals, func, args):
axes.plot(x_vals, func(*args), label=func.__name__)
T_max = 50
T = np.arange(0, T_max+1)
g = 0.02
r = 0.03
x_0 = 1
our_args = (T, g, r, x_0)
funcs = [finite_lease_pv_true,
finite_lease_pv_approx_1,
finite_lease_pv_approx_2]
## the three functions we want to compare
fig, ax = plt.subplots()
ax.set_title('Finite Lease Present Value $T$ Periods Ahead')
for f in funcs:
plot_function(ax, T, f, our_args)
ax.legend()
ax.set_xlabel('$T$ Periods Ahead')
ax.set_ylabel('Present Value, $p_0$')
plt.show()
# -*- coding:utf-8 -*-
import io
import numpy as np
def load_vocab(file_path):
"""
load the given vocabulary
"""
vocab = {}
with io.open(file_path, 'r', encoding='utf8') as f:
wid = 0
for line in f:
parts = line.rstrip().split('\t')
vocab[parts[0]] = int(parts[1])
vocab["<unk>"] = len(vocab)
return vocab
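`load_vocab` expects one tab-separated `word<TAB>id` pair per line and maps unknown words to the appended `"<unk>"` id. A minimal in-memory sketch of the same lookup logic (the `lines` sample is hypothetical):

```python
lines = ["hello\t0", "world\t1"]
vocab = {}
for line in lines:
    word, wid = line.rstrip().split('\t')
    vocab[word] = int(wid)
vocab["<unk>"] = len(vocab)  # out-of-vocabulary words map here
ids = [vocab.get(w, vocab["<unk>"]) for w in ["hello", "there"]]
```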
def preprocess(lac, texts, word_dict, use_gpu=False, batch_size=1):
"""
First, the input texts are segmented by the LAC module;
then the word segmentation results are fed into Senta.
"""
result = []
input_dict = {'text': texts}
processed = lac.lexical_analysis(
data=input_dict, use_gpu=use_gpu, batch_size=batch_size)
unk_id = word_dict["<unk>"]
for index, data in enumerate(processed):
result_i = {'processed': []}
result_i['origin'] = texts[index]
for word in data['word']:
if word in word_dict:
_index = word_dict[word]
else:
_index = unk_id
result_i['processed'].append(_index)
result.append(result_i)
return result
def postprocess(predict_out, texts):
"""
Convert model's output tensor to sentiment label
"""
predict_out = predict_out.as_ndarray()
batch_size = len(texts)
result = []
for index in range(batch_size):
result_i = {}
result_i['text'] = texts[index]['origin']
label = int(np.argmax(predict_out[index]))
if label == 0:
key = 'negative'
else:
key = 'positive'
result_i['sentiment_label'] = label
result_i['sentiment_key'] = key
result_i['positive_probs'] = float('%.4f' % predict_out[index, 1])
result_i['negative_probs'] = float('%.4f' % (1 - predict_out[index, 1]))
result.append(result_i)
return result
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
#http://www.johnwittenauer.net/machine-learning-exercises-in-python-part-1/
#here we solve the linear regression problem with a single variable
alpha=0.01
iters=1000
#helper functions below
def costFunction(x, y, theta):
    inner = np.power(((x * theta.T) - y), 2)
    return np.sum(inner) / (2 * len(x))

def gradientDescent(x, y, theta, alpha, iters):
    temp = np.matrix(np.zeros(theta.shape))
    parameters = int(theta.ravel().shape[1])
    cost = np.zeros(iters)
    for i in range(iters):
        error = (x * theta.T) - y
        for j in range(parameters):
            term = np.multiply(error, x[:, j])
            temp[0, j] = theta[0, j] - ((alpha / len(x)) * np.sum(term))
        theta = temp
        cost[i] = costFunction(x, y, theta)
    return theta, cost
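To see that the update loop behaves as expected, here is a self-contained run (duplicating the two functions so the snippet stands alone) on noise-free data drawn from the line y = 1 + 2x; after enough iterations theta should approach (1, 2):

```python
import numpy as np

def cost_function(x, y, theta):
    # mean squared error / 2, same as costFunction above
    inner = np.power((x * theta.T) - y, 2)
    return np.sum(inner) / (2 * len(x))

def gradient_descent(x, y, theta, alpha, iters):
    # batch gradient descent, same update rule as gradientDescent above
    temp = np.matrix(np.zeros(theta.shape))
    parameters = int(theta.ravel().shape[1])
    cost = np.zeros(iters)
    for i in range(iters):
        error = (x * theta.T) - y
        for j in range(parameters):
            term = np.multiply(error, x[:, j])
            temp[0, j] = theta[0, j] - ((alpha / len(x)) * np.sum(term))
        theta = temp
        cost[i] = cost_function(x, y, theta)
    return theta, cost

# synthetic data from the known line y = 1 + 2*x (no noise)
xs = np.linspace(0, 1, 50)
x = np.matrix(np.column_stack([np.ones_like(xs), xs]))
y = np.matrix(1 + 2 * xs).T
theta, cost = gradient_descent(x, y, np.matrix([[0.0, 0.0]]), 0.1, 5000)
```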
path=os.getcwd()+'/LinearRegression/ex1data1.txt'
data =pd.read_csv(path,header=None,names=['Population','Profit'])
head =data.head()
describe=data.describe()
print(describe)
#data.plot(kind='scatter',x='Population',y='Profit',figsize=(12,8))
data.insert(0,'Ones',1)
cols=data.shape[1]
x=data.iloc[:,0:cols-1]
y=data.iloc[:,cols-1:cols]
x=np.matrix(x.values)
y=np.matrix(y.values)
theta=np.matrix(np.array([0,0]))
g,cost=gradientDescent(x,y,theta,alpha,iters)
print("Cost:%1.5f"%costFunction(x,y,theta))
print(g)
print(costFunction(x,y,g))
#visualize the result
x=np.linspace(data.Population.min(),data.Population.max(),100)
f=g[0,0]+(g[0,1]*x)
fig,ax=plt.subplots(figsize=(12,8))
ax.plot(x,f,'r',label='Prediction')
ax.scatter(data.Population,data.Profit,label='Training Data')
ax.legend(loc=2)
ax.set_xlabel('Population')
ax.set_ylabel('Profit')
ax.set_title('Predicted Profit vs. Population Size')
plt.show()
fig,ax=plt.subplots(figsize=(12,8))
ax.plot(np.arange(iters),cost,'r')
ax.set_xlabel('Iterations')
ax.set_ylabel('Cost')
ax.set_title('Error vs. Training Epoch')
plt.show() | |
# coding: utf-8
# # Water vapor retrieval using MYD05 data
# ## Near IR vs. IR datasets
#
# As we will discuss in class, Modis provides two separate measurements of the column-integrated water vapor.
# The high level overview is given in the [modis water vapor products](https://ladsweb.modaps.eosdis.nasa.gov/missions-and-measurements/products/water-vapor/MYD05_L2). Basically the reason for two separate retrievals is that they have different strengths and weaknesses.
#
# * Near Infrared Retrieval
#
# * Uses reflected photons in two separate water vapor absorption bands
#
# * Strengths
#
# * 1 km spatial resolution at nadir
#
# * retrieval doesn't depend on temperature difference between vapor and surface
#
# * more accurate than longwave
#
# * Weaknesses
#
# * Doesn't work at night
#
# * Doesn't work over dark surfaces (can work over ocean
# as long as the pixel is reflecting direct sunlight ("sunglint"))
#
# * Needs separate MYD03 file for lats/lons
#
# * Infrared Retrieval
#
# * Uses the water absorption bands near 11 microns
#
# * Strengths
#
# * Works day/night, over dark surfaces
#
# * 5 km lat/lons included in file
#
# * Weaknesses
#
# * 5 km pixels at nadir
#
# * Doesn't work when most of the vapor is in the boundary layer and has about the same temperature
# as the surface
# ## What this notebook does
#
# 1. Reads a MYD03 file named m3_file_2018_10_1.hdf and a MYD05 file named myd05_l2_10_7.hdf located
# in a301.data_dir and grabs latitudes, longitudes and two arrays: Water_Vapor_Near_Infrared and
# Water_Vapor_Infrared
#
# 1. Scales the water vapor arrays by scale_factor and offset to produce the retrieved column water vapor
# in cm
#
# 1. Maps the two arrays onto the same 5km array for direct comparison
#
# 1. Maps the near_ir array onto a 1 km grid to show the full resolution.
#
# 1. Writes the three images with their area_def map information and metadata out to new folders in
# a301_code/map_data/wv_maps as npz files (for the images) and json files (for the metadata)
# # Setup
#
# 1. Download the MYD05 granule that corresponds to your 5 minute date/time. It should look something like:
#
# MYD05_L2.A2013222.2105.061.2018048043105.hdf
#
# 1. Rename it to **myd05_l2_10_7.hdf** and copy to a301.data_dir
#
# 1. Run the checkup program:
#
# python -m a301.install_tests.wv_resample_test
#
#
# which should produce something like this:
#
# working on /Users/phil/repos/a301_code/data/m3_file_2018_10_1.hdf, originally was MYD03.A2013222.2105.006.2013223155808.hdf
#
# ****************************************
# lats_1km.shape, lons_1km.shape: (2040, 1354),(2040, 1354)
# ****************************************
# through
# working on /Users/phil/repos/a301_code/data/myd05_l2_10_7.hdf, originally was MYD05_L2.A2013222.2105.061.2018048043105.hdf
# ****************************************
# nearir vapor array shape is: (2040, 1354)
# ****************************************
# ****************************************
# ir vapor array shape is: (408, 270)
# ****************************************
# ****************************************
# lats_5km arrayshape is: (408, 270)
# ****************************************
# ****************************************
# lons_5km arrayshape is: (408, 270)
# ****************************************
# was able to regrid the nearir image, xy shape is (2244, 1489)
# was able to regrid the ir image, xy shape is (448, 297)
# data looks good, ready to go
#
# In[1]:
from matplotlib import cm
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import Normalize
from IPython.display import Image,display
#Image('figures/MYBRGB.A2016224.2100.006.2016237025650.jpg',width=600)
# In[2]:
get_ipython().run_line_magic('matplotlib', 'inline')
from matplotlib import cm
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import Normalize
from IPython.display import Image,display
import a301
from a301.geometry import get_proj_params
from a301.scripts.modismeta_read import parseMeta
from pathlib import Path
from pyhdf.SD import SD, SDC
import pprint
import json
import pdb
# # Read in the 1km and 5km water vapor files
# ## Start with the lats/lons for 1km and 5km
# In[3]:
m5_file = a301.data_dir / Path('myd05_l2_10_7.hdf')
m3_file = a301.data_dir / Path('m3_file_2018_10_1.hdf')
the_file = SD(str(m3_file), SDC.READ)
lats_1km = the_file.select('Latitude').get()
lons_1km = the_file.select('Longitude').get()
the_file.end()
the_file = SD(str(m5_file), SDC.READ)
lats_5km = the_file.select('Latitude').get()
lons_5km = the_file.select('Longitude').get()
the_file.end()
# ## Get the IR vapor plus 5 of its attributes
#
# Store the data in a numpy array, and the attributes in a dictionary,
# using a [dictionary comprehension](https://jakevdp.github.io/WhirlwindTourOfPython/11-list-comprehensions.html)
# at line 4
# In[4]:
the_file = SD(str(m5_file), SDC.READ)
wv_ir = the_file.select('Water_Vapor_Infrared')
attributes=['units', 'scale_factor', 'add_offset', 'valid_range', '_FillValue']
attr_dict=wv_ir.attributes()
wv_ir_attrs={k: attr_dict[k] for k in attributes}
print(f'wv_ir attributes: {pprint.pformat(wv_ir_attrs)}')
wv_ir_data = wv_ir.get()
# ## Replace -9999 with np.nan
#
# Note that this has to happen before we scale the data by the scale_factor so the -9999 can be recognized
# In[5]:
bad_data = (wv_ir_data == wv_ir_attrs['_FillValue'])
#
# next line converts to floating point so we can use np.nan
#
wv_ir_data = wv_ir_data.astype(np.float32)
wv_ir_data[bad_data]=np.nan
# ## now scale the data and histogram it
# In[6]:
wv_ir_scaled = wv_ir_data*attr_dict['scale_factor'] + attr_dict['add_offset']
# Note that we need to get rid of all nan values by taking ~ (not) np.isnan
#
# ```
# plt.hist(wv_ir_scaled)
# ```
# won't work
# In[7]:
plt.hist(wv_ir_scaled[~np.isnan(wv_ir_scaled)])
ax=plt.gca()
ax.set_title('5 km wv data (cm)');
# ## Repeat for the 1 km near-ir data
#
# Use a dictionary comprehension again to move the attributes in attrib_list into a dict at line 4
# In[8]:
the_file = SD(str(m5_file), SDC.READ)
wv_nearir = the_file.select('Water_Vapor_Near_Infrared')
attrib_list=['unit', 'scale_factor', 'add_offset', 'valid_range', '_FillValue']
attr_dict=wv_nearir.attributes()
wv_nearir_attrs={k: attr_dict[k] for k in attrib_list}
print(f'wv_nearir attributes: {pprint.pformat(wv_nearir_attrs)}')
wv_nearir_data = wv_nearir.get()
the_file.end()
# In[9]:
bad_data = wv_nearir_data == wv_nearir_attrs['_FillValue']
wv_nearir_data = wv_nearir_data.astype(np.float32)
wv_nearir_data[bad_data]=np.nan
wv_nearir_scaled = wv_nearir_data*attr_dict['scale_factor'] + attr_dict['add_offset']
# ## Note that the scaled wv values are similar between near_ir and ir retrievals
# In[10]:
plt.hist(wv_nearir_scaled[~np.isnan(wv_nearir_scaled)])
ax=plt.gca()
ax.set_title('1 km water vapor (cm)');
# # Map the data
#
#
# ### Resample the 5km IR retrieval onto a laea xy grid
#
# Let swath_def.compute_optimal_bb_area choose the extent and dimensions for
# the low resolution (lr) image
#
# In[ ]:
from pyresample import SwathDefinition, kd_tree, geometry
proj_params = get_proj_params(m5_file)
swath_def = SwathDefinition(lons_5km, lats_5km)
area_def_lr=swath_def.compute_optimal_bb_area(proj_dict=proj_params)
area_def_lr.name="ir wv retrieval modis 5 km resolution (lr=low resolution)"
area_def_lr.area_id='modis_ir_wv'
area_def_lr.job_id = area_def_lr.area_id
fill_value=-9999.
image_wv_ir = kd_tree.resample_nearest(swath_def, wv_ir_scaled.ravel(),
                                       area_def_lr, radius_of_influence=5000,
                                       nprocs=2, fill_value=fill_value)
image_wv_ir[image_wv_ir < -9000]=np.nan
print(f'\ndump area definition:\n{area_def_lr}\n')
print((f'\nx and y pixel dimensions in meters:'
       f'\n{area_def_lr.pixel_size_x}\n{area_def_lr.pixel_size_y}\n'))
# ### Resample the 1km near-ir water vapor on the same grid
#
# Reuse area_def_lr for the high resolution nearir image so we can compare directly with low resolution ir
# In[ ]:
swath_def = SwathDefinition(lons_1km, lats_1km)
fill_value=-9999.
image_wv_nearir_lr = kd_tree.resample_nearest(swath_def, wv_nearir_scaled.ravel(),
area_def_lr, radius_of_influence=5000,
nprocs=2,fill_value=fill_value)
image_wv_nearir_lr[image_wv_nearir_lr < -9000]=np.nan
# In[ ]:
plt.hist(image_wv_nearir_lr[~np.isnan(image_wv_nearir_lr)])
ax=plt.gca()
ax.set_title('1 km water vapor (cm), low resolution nearir scaled to 5km (lr)');
# ## now use the 1 km MYD03 lons and lats to get a full resolution xy grid
#
# resample the nearir wv onto that grid to show the full resolution image. Call this
# area_def area_def_hr
# In[ ]:
### Resample the 1 km near-ir water vapor onto a 1 km grid
proj_params = get_proj_params(m3_file)
swath_def = SwathDefinition(lons_1km, lats_1km)
area_def_hr=swath_def.compute_optimal_bb_area(proj_dict=proj_params)
area_def_hr.name="near ir wv retrieval modis 1 km resolution (hr=high resolution)"
area_def_hr.area_id="wv_nearir_hr"
area_def_hr.job_id = area_def_hr.area_id
fill_value=-9999.
image_wv_nearir_hr = kd_tree.resample_nearest(swath_def, wv_nearir_scaled.ravel(),
area_def_hr, radius_of_influence=5000,
nprocs=2,fill_value=fill_value)
image_wv_nearir_hr[image_wv_nearir_hr < -9000]=np.nan
# # Save the mapped images
# ## Now save these three images plus their area_def's for future plotting
#
# The function area_def_to_dict saves the pyresample area_def as a dict
#
# At line 20 note that
# ```python
# a=getattr(area_def,key)
# ```
# where key='my_attribute' is the same as
# ```python
# a=area_def.my_attribute
# ```
# but you don't have to hard-code in 'my_attribute'
#
# In[ ]:
import json
def area_def_to_dict(area_def):
    """
    given an area_def, save it as a dictionary

    Parameters
    ----------

    area_def: pyresample area_def object

    Returns
    -------

    out_dict: dict containing the area_def attributes
    """
    keys=['area_id','proj_id','name','proj_dict','x_size','y_size','area_extent']
    area_dict={key:getattr(area_def,key) for key in keys}
    area_dict['proj_id']=area_dict['area_id']
    return area_dict
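The `getattr` trick described above doesn't need pyresample to demonstrate; a `SimpleNamespace` stand-in (all attribute values below are invented) shows the dictionary comprehension at work:

```python
from types import SimpleNamespace

# stand-in for a pyresample area_def; every value here is hypothetical
fake_area_def = SimpleNamespace(
    area_id='modis_ir_wv', proj_id='modis_ir_wv', name='demo grid',
    proj_dict={'proj': 'laea'}, x_size=448, y_size=297,
    area_extent=(-1.0, -1.0, 1.0, 1.0))

keys = ['area_id', 'proj_id', 'name', 'proj_dict', 'x_size', 'y_size', 'area_extent']
# getattr(obj, key) looks up obj.key without hard-coding the attribute name
area_dict = {key: getattr(fake_area_def, key) for key in keys}
```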
# ## Create a directory to hold the images and area_def dictionaries
# In[ ]:
map_dir = a301.map_dir / Path('map_data/wv_maps')
map_dir.mkdir(parents=True, exist_ok=True)
# ## Here's a function that writes the image plus metadata to npz and json files
#
# We'll need to use area_def_to_dict when we create the metadata_dict
# In[ ]:
import pdb
def dump_image(image_array,metadata_dict,foldername,
               image_array_name='image'):
    """
    write an image plus metadata to a folder

    Parameters
    ----------

    image_array: ndarray
        the 2-d image to be saved

    metadata_dict: dict
        the metadata to write to the json file

    foldername: Path object or string
        the path to the folder that holds the image files

    image_array_name: str
        the root name for the npz and json files
        i.e. image.npz and image.json

    Returns: None
        side effect -- an npz and a json file are written
    """
    image_file=Path(foldername) / Path(image_array_name)
    out_dict={image_array_name:image_array}
    np.savez(image_file,**out_dict)
    json_name = Path(foldername) / Path(image_array_name + '.json')
    with open(json_name,'w') as f:
        json.dump(metadata_dict,f,indent=4)
    print(f"\ndumping {image_file}\n and {json_name}\n")
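A minimal round trip of the same npz-plus-json pattern, using a temporary folder and made-up metadata rather than the real map_dir:

```python
import json
import tempfile
from pathlib import Path
import numpy as np

with tempfile.TemporaryDirectory() as tmp:
    folder = Path(tmp)
    image = np.arange(6, dtype=np.float32).reshape(2, 3)
    metadata = {'description': 'demo image', 'image_name': 'image'}
    # write the image to image.npz and the metadata to image.json
    np.savez(folder / 'image', image=image)
    with open(folder / 'image.json', 'w') as f:
        json.dump(metadata, f, indent=4)
    # read both back
    loaded = np.load(folder / 'image.npz')['image']
    meta_back = json.loads((folder / 'image.json').read_text())
```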
# ## Write out images, putting useful metadata in metadata_dict
# In[ ]:
image_name='wv_nearir_lr'
metadata_dict=dict(modismeta = parseMeta(m5_file))
metadata_dict['area_def']=area_def_to_dict(area_def_lr)
metadata_dict['image_name']=image_name
metadata_dict['description']='modis near ir water vapor (cm) sampled at 5 km resolution'
metadata_dict['history']='written by level2_cartopy_resample.ipynb'
map_dir = a301.data_dir.parent / Path('map_data/wv_maps')
map_dir.mkdir(parents=True, exist_ok=True)
dump_image(image_wv_nearir_lr,metadata_dict,map_dir,image_name)
image_name='wv_nearir_hr'
metadata_dict=dict(modismeta = parseMeta(m5_file))
metadata_dict['area_def']=area_def_to_dict(area_def_hr)
metadata_dict['image_name']=image_name
metadata_dict['description']='modis near ir water vapor (cm) sampled at 1 km resolution'
metadata_dict['history']='written by level2_cartopy_resample.ipynb'
dump_image(image_wv_nearir_hr,metadata_dict,map_dir,image_name)
image_name='wv_ir'
metadata_dict=dict(modismeta = parseMeta(m5_file))
metadata_dict['area_def']=area_def_to_dict(area_def_lr)
metadata_dict['image_name']=image_name
metadata_dict['description']='modis ir water vapor (cm) sampled at 5 km resolution'
metadata_dict['history']='written by level2_cartopy_resample.ipynb'
dump_image(image_wv_ir,metadata_dict,map_dir,image_name)
# In[ ]:
area_def_lr
# In[ ]:
area_def_hr
# In[ ]:
area_def_lr | |
import h5py
import numpy
f = h5py.File('GSM4339771_C143_filtered_feature_bc_matrix.h5', 'r')
d = f['matrix']
d.visit(lambda name: print(d[name]))
for key in ['shape', 'indptr', 'barcodes', 'features/id']:
    # Dataset.value was removed in h5py 3; index with [()] to read the full dataset
    print(key, ': ', d[key][()]) | |
"""
``semiclass`` provides classes implementing various domain adaptation methods.
All domain adaptation methods have to be subclass of BaseEstimator.
This implementation aims for clarity rather than efficiency (it is not fast enough) and scalability (it can't really deal with large dimension or large sample case).
For example, numpy built-in linear system solver is often used.
"""
from abc import abstractmethod
import numpy as np
import operator
class BaseEstimator():
    """Base class for domain adaptation"""
    @abstractmethod
    def fit(self, data, source, target):
        """Fit model.

        Arguments:
            data (dict of (X, y) pairs): maps env index to the (X, y) pair in that env
            source (list of indexes): indexes of source envs
            target (int): single index of the target env
        """
        self.source = source
        self.target = target
        return self

    @abstractmethod
    def predict(self, X):
        """Use the learned estimator to predict labels on fresh target data X
        """

    def __str__(self):
        """For easy name printing
        """
        return self.__class__.__name__

class ZeroBeta(BaseEstimator):
    """Estimator that sets beta to zero"""
    def fit(self, data, source, target):
        super().fit(data, source, target)
        xtar, _ = data[target]
        # add a column of ones for intercept
        xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
        self.beta = np.zeros(xtar1.shape[1])
        # set the predicted responses
        self.ypred = xtar1.dot(self.beta)
        return self

    def predict(self, X):
        X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
        ypredX1 = X1.dot(self.beta)
        return ypredX1

class Tar(BaseEstimator):
    """Oracle Ridge (or OLS) trained on the target domain"""
    def __init__(self, lamL2=0.0):
        self.lamL2 = lamL2

    def fit(self, data, source, target):
        super().fit(data, source, target)
        xtar, ytar = data[target]
        xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
        ntar = xtar.shape[0]
        A = np.eye(xtar1.shape[1])
        A[-1, -1] = 0
        beta = np.linalg.solve(xtar1.T.dot(xtar1)/ntar + self.lamL2*A, xtar1.T.dot(ytar)/ntar)
        self.beta = beta
        ypred = xtar1.dot(beta)
        self.ypred = ypred
        return self

    def predict(self, X):
        X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
        ypredX1 = X1.dot(self.beta)
        return ypredX1

    def __str__(self):
        return self.__class__.__name__ + "_Ridge{:.1f}".format(self.lamL2)
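The closed form that `Tar` (and the estimators below) solve is ridge regression with an unpenalized intercept; a short sketch on synthetic data (coefficients chosen arbitrarily) shows that with `lamL2 = 0` it reduces to ordinary least squares:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(200, 3))
y = x @ np.array([1.0, -2.0, 0.5]) + 3.0 + 0.01 * rng.normal(size=200)

# closed form shared by Tar/Src/SrcPool: ridge with an unpenalized intercept
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n = x.shape[0]
A = np.eye(x1.shape[1])
A[-1, -1] = 0          # do not penalize the intercept column
lam = 0.0
beta = np.linalg.solve(x1.T @ x1 / n + lam * A, x1.T @ y / n)

# with lam = 0 this is plain OLS, so it agrees with lstsq
beta_ols, *_ = np.linalg.lstsq(x1, y, rcond=None)
```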
class Src(BaseEstimator):
"""Use one source env and then run Ridge (or OLS)"""
def __init__(self, lamL2=0.0, sourceInd = 0):
self.lamL2 = lamL2
self.sourceInd = sourceInd
def fit(self, data, source, target):
super().fit(data, source, target)
boolA = False
x, y = data[source[self.sourceInd]]
n = x.shape[0]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
XX = x1.T.dot(x1)
XY = x1.T.dot(y)
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
beta = np.linalg.solve(XX/n + self.lamL2*A, XY/n)
self.beta = beta
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
ypred = xtar1.dot(beta)
self.ypred = ypred
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_Ridge{:.1f}".format(self.lamL2)
class SrcPool(BaseEstimator):
"""Pool all source data together and then run Ridge (or OLS)"""
def __init__(self, lamL2=0.0):
self.lamL2 = lamL2
def fit(self, data, source, target):
super().fit(data, source, target)
XY = 0.
XX = 0.
ntotal = 0
boolA = False
for m in source:
x, y = data[m]
ntotal += x.shape[0]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
XX += x1.T.dot(x1)
XY += x1.T.dot(y)
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
beta = np.linalg.solve(XX/ntotal + self.lamL2*A, XY/ntotal)
self.beta = beta
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
ypred = xtar1.dot(beta)
self.ypred = ypred
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_Ridge{:.1f}".format(self.lamL2)
class DirectImpute(BaseEstimator):
"""Direct imputation of target XY using the fact that the intervention is uncorrelated with Y"""
def __init__(self, lamL2=0.0, center=True):
self.center = center
self.lamL2 = lamL2
def fit(self, data, source, target):
super().fit(data, source, target)
fakeXY = 0.
Msource = len(source)
for m in source:
x, y = data[m]
nm = x.shape[0]
if self.center:
y = y - np.mean(y)
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
fakeXY += x1.T.dot(y)/nm/Msource
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
ntar = xtar.shape[0]
A = np.eye(x1.shape[1])
A[-1, -1] = 0
beta = np.linalg.solve(xtar1.T.dot(xtar1)/ntar+self.lamL2*A, fakeXY)
self.beta = beta
ypred = xtar1.dot(beta)
self.ypred = ypred
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_Ridge{:.1f}".format(self.lamL2)
class DIP(BaseEstimator):
"""Pick one source, DIP match mean of X * beta between source and target"""
def __init__(self, lamMatch=10., lamL2=0., sourceInd = 0):
self.lamMatch = lamMatch
self.lamL2 = lamL2
self.sourceInd = sourceInd
def fit(self, data, source, target):
super().fit(data, source, target)
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
x, y = data[source[self.sourceInd]]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x.shape[0]
diffx1 = np.mean(xtar1, axis=0) - np.mean(x1, axis=0)
XTX = x1.T.dot(x1)/n1 + self.lamMatch * np.outer(diffx1, diffx1)
XTY = x1.T.dot(y)/n1
A = np.eye(x1.shape[1])
A[-1, -1] = 0
beta = np.linalg.solve(XTX+self.lamL2*A, XTY)
self.beta = beta
ypred = xtar1.dot(beta)
self.ypred = ypred
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
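The linear system solved in `DIP.fit` is exactly the first-order condition of the penalized objective ||X1 β − y||²/n + lamMatch (diffx1 · β)²; a quick numerical check on synthetic data (sizes and mean shift chosen arbitrarily) confirms the gradient vanishes at the computed β:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(100, 2))
y = x @ np.array([1.0, 2.0]) + 0.5
xtar = rng.normal(loc=0.3, size=(120, 2))   # mean-shifted "target" covariates

x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
n1, lam_match = x.shape[0], 10.0
diffx1 = np.mean(xtar1, axis=0) - np.mean(x1, axis=0)

# DIP normal equations (lamL2 = 0 for the sketch)
XTX = x1.T @ x1 / n1 + lam_match * np.outer(diffx1, diffx1)
beta = np.linalg.solve(XTX, x1.T @ y / n1)

# gradient of L(b) = ||x1 b - y||^2 / n1 + lam_match * (diffx1 . b)^2
grad = 2 * x1.T @ (x1 @ beta - y) / n1 + 2 * lam_match * diffx1 * (diffx1 @ beta)
```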
class DIPmix(BaseEstimator):
"""Pick one source, DIP match mean of X * beta between source and target
the version that deals with mixed-causal-anticausal case
we first remove the causal part, do DIP and then add the causal part back
This is an oracle estimator"""
def __init__(self, causal_index=[0], lamMatch=10., lamL2=0., sourceInd = 0):
self.lamMatch = lamMatch
self.lamL2 = lamL2
self.sourceInd = sourceInd
self.causal_index = causal_index
def fit(self, data, source, target):
super().fit(data, source, target)
d = data[source[0]][0].shape[1]
self.noncausal_index = list(set(np.arange(d)) - set(self.causal_index))
def get_causal_beta(indexk):
# for one covariate coordinate x or for y
# indexk is the index of the covariate coordinate x
XY = 0.
XX = 0.
ntotal = 0
boolA = False
betacausal1_restrict = 0
for m in source:
x, y = data[m]
if indexk != -1:
y = x[:, indexk]
# only use the causal part of x
x = x[:, self.causal_index]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
ntotal += x.shape[0]
XX += x1.T.dot(x1)
XY += x1.T.dot(y)
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
betacausal1_restrict += np.linalg.solve(XX/ntotal + self.lamL2*A, XY/ntotal)/len(source)
return(betacausal1_restrict)
beta_corrections = {}
for indexk in self.noncausal_index:
beta_corrections[indexk] = get_causal_beta(indexk)
beta_corrections[-1] = get_causal_beta(-1)
# create new dataset based by removing causal part
betacausal1 = np.zeros(d)
betacausal1[self.causal_index] = beta_corrections[-1][:-1]
self.betacausal1 = betacausal1
# for CIRM, y - x * betacausal1 will be used as a replacement for y
# Now modify the dataset
dataNew = {}
for m in np.concatenate((source, [target])):
x, y = data[m]
xNew = np.zeros_like(x[:, self.noncausal_index])
for k, indexk in enumerate(self.noncausal_index):
xNew[:, k] = x[:, indexk] - x[:, self.causal_index].dot(beta_corrections[indexk][:-1])
yNew = y - x.dot(self.betacausal1)
dataNew[m] = xNew, yNew
# do DIP on the new dataset
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
xtarNew, _ = dataNew[target]
xtarNew1 = np.concatenate((xtarNew, np.ones((xtarNew.shape[0], 1))), axis=1)
x, y = data[source[self.sourceInd]]
xNew, yNew = dataNew[source[self.sourceInd]]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
xNew1 = np.concatenate((xNew, np.ones((xNew.shape[0], 1))), axis=1)
n1 = xNew.shape[0]
diffx1New = np.mean(xtarNew1, axis=0) - np.mean(xNew1, axis=0)
diffx1 = np.zeros(d+1)
diffx1[self.noncausal_index] = diffx1New[:-1]
XTX = xNew1.T.dot(xNew1)/n1 + self.lamMatch * np.outer(diffx1New, diffx1New)
XTY = xNew1.T.dot(yNew)/n1
A = np.eye(xNew1.shape[1])
A[-1, -1] = 0
betaNew = np.linalg.solve(XTX+self.lamL2*A, XTY)
self.beta = np.zeros(d+1)
self.beta[self.noncausal_index] = betaNew[:-1]
self.beta[-1] = betaNew[-1]
self.beta[self.causal_index] = beta_corrections[-1][:-1]
for k, indexk in enumerate(self.noncausal_index):
self.beta[self.causal_index] -= betaNew[k]*beta_corrections[indexk][:-1]
ypred = xtar1.dot(self.beta)
self.ypred = ypred
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
class DIPOracle(BaseEstimator):
"""Pick one source, DIP match mean of X * beta between source and target, use target labels to fit (oracle)"""
def __init__(self, lamMatch=10., lamL2=0., sourceInd = 0):
self.lamMatch = lamMatch
self.lamL2 = lamL2
self.sourceInd = sourceInd
def fit(self, data, source, target):
super().fit(data, source, target)
xtar, ytar = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
ntar = xtar.shape[0]
x, y = data[source[self.sourceInd]]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x.shape[0]
diffx1 = np.mean(xtar1, axis=0) - np.mean(x1, axis=0)
XTX = xtar1.T.dot(xtar1)/ntar + self.lamMatch * np.outer(diffx1, diffx1)
XTY = xtar1.T.dot(ytar)/ntar
A = np.eye(x1.shape[1])
A[-1, -1] = 0
beta = np.linalg.solve(XTX+self.lamL2*A, XTY)
self.beta = beta
ypred = xtar1.dot(beta)
self.ypred = ypred
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
class DIPweigh(BaseEstimator):
'''loop through all source envs, match the mean of X * beta between source env i and the target, and weigh the final prediction based on the loss of env i'''
def __init__(self, lamMatch=10.0, lamL2=0.0, weightrho=1000.):
self.lamMatch = lamMatch
self.lamL2 = lamL2
self.weightrho = weightrho
def fit(self, data, source, target):
super().fit(data, source, target)
# mth position contains beta from mth source env
self.betas = {}
# mth position contains predicted response from mth source env
ypreds = {}
# source env selection criteria, src loss
self.crits = {}
# normalized version of the selection criteria, to avoid overflow
self.crits_norm = {}
self.ypred = 0
self.total_weight = 0
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
nm = x.shape[0]
diffx1 = np.mean(xtar1, axis=0) - np.mean(x1, axis=0)
XTX = x1.T.dot(x1)/nm + self.lamMatch * np.outer(diffx1, diffx1)
XTY = x1.T.dot(y)/nm
A = np.eye(x1.shape[1])
A[-1, -1] = 0
self.betas[m] = np.linalg.solve(XTX + self.lamL2*A, XTY)
ypreds[m] = xtar1.dot(self.betas[m])
# source env selection criterion is the source loss
self.crits[m] = np.sum((x1.dot(self.betas[m])-y)**2)/nm + self.lamMatch * np.inner(diffx1, self.betas[m])**2
minDiffIndx = min(self.crits.items(), key=operator.itemgetter(1))[0]
# kept for version compatibility
self.minDiffIndx = minDiffIndx
self.min_critindex = minDiffIndx
# use normalized weights to avoid numerical overflow
for m in source:
self.crits_norm[m] = self.crits[m] - self.crits[self.min_critindex]
self.ypred += np.exp(-self.weightrho * self.crits_norm[m]) * ypreds[m]
self.total_weight += np.exp(-self.weightrho * self.crits_norm[m])
self.ypred /= self.total_weight
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = 0
for k in range(len(self.source)):
ypredX1 += np.exp(-self.weightrho * self.crits_norm[self.source[k]]) * X1.dot(self.betas[self.source[k]])
ypredX1 /= self.total_weight
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
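The weighting in `DIPweigh` is a softmax over negated criteria; subtracting the smallest criterion before exponentiating (the `crits_norm` step) is what keeps `exp(-rho * crit)` from underflowing to zero when the raw criteria are large. A toy sketch with invented criterion values:

```python
import numpy as np

# hypothetical per-env criteria; exp(-10 * 1000) would underflow without the shift
crits = {0: 1000.2, 1: 1000.0, 2: 1005.0}
rho = 10.0
best = min(crits, key=crits.get)
# shift by the smallest criterion so the largest weight is exp(0) = 1
weights = {m: np.exp(-rho * (crits[m] - crits[best])) for m in crits}
total = sum(weights.values())
weights = {m: w / total for m, w in weights.items()}
```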
class CIPalt(BaseEstimator):
"""Match the conditional (on Y) mean of X * beta across source envs, no target env is needed"""
def __init__(self, lamCIP=10.0, lamL2=0.0):
self.lamCIP = lamCIP
self.lamL2 = lamL2
def fit(self, data, source, target):
super().fit(data, source, target)
XTX = 0
XTY = 0
boolA = False
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x1.shape[0]
XTX += x1.T.dot(x1) / n1
XTY += x1.T.dot(y) / n1
conditionx1 = np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1)
for j in source:
if j != m:
xj, yj = data[j]
xj1 = np.concatenate((xj, np.ones((xj.shape[0], 1))), axis=1)
conditionxj1 = np.mean(xj1, axis=0) - np.mean(yj) * 1./np.sum(yj**2) * yj.dot(xj1)
diffxj1 = conditionx1 - conditionxj1
XTX += self.lamCIP / len(source) * 2. * np.outer(diffxj1, diffxj1)
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
beta = np.linalg.solve(XTX + self.lamL2 * A, XTY)
self.beta = beta
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
ypred = xtar1.dot(beta)
self.ypred = ypred
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_CIP{:.1f}".format(self.lamCIP) + "_Ridge{:.1f}".format(self.lamL2)
class CIP(BaseEstimator):
"""Match the conditional (on Y) mean of X * beta across source envs, no target env is needed"""
def __init__(self, lamCIP=10.0, lamL2=0.0):
self.lamCIP = lamCIP
self.lamL2 = lamL2
def fit(self, data, source, target):
super().fit(data, source, target)
XTX = 0
XTY = 0
boolA = False
avconditionx1 = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
avconditionx1 += (np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1))/len(source)
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x1.shape[0]
XTX += x1.T.dot(x1) / n1 / len(source)
XTY += x1.T.dot(y) / n1 / len(source)
conditionx1 = np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1)
diffx1 = conditionx1 - avconditionx1
XTX += self.lamCIP / len(source) * np.outer(diffx1, diffx1)
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
self.beta = np.linalg.solve(XTX + self.lamL2 * A, XTY)
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
self.ypred = xtar1.dot(self.beta)
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_CIP{:.1f}".format(self.lamCIP) + "_Ridge{:.1f}".format(self.lamL2)
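The pattern `XTX += lam * np.outer(d, d)` used throughout these fits is the normal-equation form of adding a quadratic penalty `lam * (d @ beta)**2` to the squared loss. A minimal numerical check on synthetic data (`d` and `lam` are arbitrary illustrative values, not quantities from this module):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y_sim = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=50)
d = np.array([1.0, 1.0, 0.0])  # penalty direction (illustrative)
lam = 5.0
n = X.shape[0]

# normal equations with the rank-one penalty folded into XTX
XTX_pen = X.T @ X / n + lam * np.outer(d, d)
XTY_pen = X.T @ y_sim / n
beta_pen = np.linalg.solve(XTX_pen, XTY_pen)

def penalized_loss(b):
    """The objective that beta_pen minimizes: mean squared error
    plus the quadratic penalty along direction d."""
    return np.sum((X @ b - y_sim) ** 2) / n + lam * (d @ b) ** 2
```

Because the loss is strictly convex here, `beta_pen` is the unique minimizer, so any perturbation of it increases `penalized_loss`.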
class RII(BaseEstimator):
"""Residiual invariant and independent estimator,
Match the residual Y - X * beta across source envs, no target env is needed"""
def __init__(self, lamRII=10.0, lamL2=0.0):
self.lamRII = lamRII
self.lamL2 = lamL2
def fit(self, data, source, target):
super().fit(data, source, target)
XTX = 0
XTY = 0
boolA = False
avgx1mean = 0
avgymean = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
avgx1mean += np.mean(x1, axis=0)/len(source)
avgymean += np.mean(y)/len(source)
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x1.shape[0]
x1mean = np.mean(x1, axis=0)
ymean = np.mean(y)
xtymean = x1.T.dot(y - ymean) / n1
ytymean = np.mean(y * (y-ymean))
XTX += x1.T.dot(x1) / n1
XTY += x1.T.dot(y) / n1
diffx1 = x1mean - avgx1mean
# for the residual invariant penalty
XTX += self.lamRII * np.outer(diffx1, diffx1)
XTY += self.lamRII * diffx1 * (ymean - avgymean)
# for the residual independent penalty
XTX += self.lamRII * np.outer(xtymean, xtymean)
XTY += self.lamRII * xtymean * ytymean
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
beta = np.linalg.solve(XTX + self.lamL2 * A, XTY)
self.beta = beta
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
ypred = xtar1.dot(beta)
self.ypred = ypred
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_RII{:.1f}".format(self.lamRII) + "_Ridge{:.1f}".format(self.lamL2)
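Both RII penalties above follow the same recipe: adding `lam * np.outer(v, v)` to `XTX` and `lam * v * t` to `XTY` corresponds to adding `lam * (v @ beta - t)**2` to the loss. A single-environment numerical sketch, where `dx` and `dy` are made-up stand-ins for the cross-environment statistics computed in `RII.fit`:

```python
import numpy as np

rng = np.random.default_rng(2)
Xr = rng.normal(size=(40, 3))
yr = rng.normal(size=40)
nr = Xr.shape[0]
lam_rii = 2.0

# made-up stand-ins for the cross-environment statistics in RII.fit
dx = np.array([0.2, -0.1, 0.0])       # mean-difference direction
dy = 0.3                              # residual-mean offset
xty = Xr.T @ (yr - yr.mean()) / nr    # X'(Y - mean(Y)) direction
yty = np.mean(yr * (yr - yr.mean()))

XTX_rii = Xr.T @ Xr / nr + lam_rii * (np.outer(dx, dx) + np.outer(xty, xty))
XTY_rii = Xr.T @ yr / nr + lam_rii * (dx * dy + xty * yty)
beta_rii = np.linalg.solve(XTX_rii, XTY_rii)

def rii_loss(b):
    """MSE plus the residual-invariant and residual-independent penalties."""
    return (np.sum((Xr @ b - yr) ** 2) / nr
            + lam_rii * (dx @ b - dy) ** 2
            + lam_rii * (xty @ b - yty) ** 2)
```

Setting the gradient of `rii_loss` to zero reproduces exactly the normal equations assembled above, which is why `beta_rii` minimizes it.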
class CondMatchSrcTarWeigh(BaseEstimator):
"""Match the conditional (on Y) mean of X * beta across source envs, use Yhat as proxy of Y to do conditional match between source and target.
This method is not guaranteed to work"""
def __init__(self, lamMatch=10.0, lamL2=0.0):
self.lamMatch = lamMatch
self.lamL2 = lamL2
def fit(self, data, source, target):
super().fit(data, source, target)
# use source envs to match the conditional mean
# find beta_invariant
XTX = 0
XTY = 0
boolA = False
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x1.shape[0]
XTX += x1.T.dot(x1) / n1
XTY += x1.T.dot(y) / n1
conditionx1 = np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1)
for j in source:
if j != m:
xj, yj = data[j]
xj1 = np.concatenate((xj, np.ones((xj.shape[0], 1))), axis=1)
conditionxj1 = np.mean(xj1, axis=0) - np.mean(yj) * 1./np.sum(yj**2) * yj.dot(xj1)
diffxj1 = conditionx1 - conditionxj1
XTX += self.lamMatch * np.outer(diffxj1, diffxj1)  # this class defines lamMatch, not lamCIP
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
beta_invariant = np.linalg.solve(XTX + self.lamL2 * A, XTY)
self.beta_invariant = beta_invariant
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
# use Yhat as proxy of Y in the target env
yguesstar = xtar1.dot(beta_invariant)
conditionxtar1 = np.mean(xtar1, axis=0) \
- np.mean(yguesstar) * 1./np.sum(yguesstar**2) * yguesstar.dot(xtar1)
# now do conditional match between each source env and target env
betas = {}
ypreds = {}
diffs = {}
self.ypred = 0
self.total_weight = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
nm = x.shape[0]
yguess = x1.dot(beta_invariant)
conditionx1 = np.mean(x1, axis=0) - np.mean(yguess) * 1./np.sum(yguess**2) * yguess.dot(x1)
diffx1 = conditionx1 - conditionxtar1
XTXt = x1.T.dot(x1)/nm + self.lamMatch * np.outer(diffx1, diffx1)
XTYt = x1.T.dot(y)/nm
betas[m] = np.linalg.solve(XTXt + self.lamL2 * A, XTYt)
ypreds[m] = xtar1.dot(betas[m])
diffs[m] = np.inner(diffx1, betas[m])**2
# diffs[m] = np.sum((x1.dot(betas[m])-y)**2)/nm+ self.lamMatch * np.inner(diffx1, betas[m])**2
self.ypred += np.exp(-10000 * diffs[m]) * ypreds[m]
self.total_weight += np.exp(-10000 * diffs[m])
self.ypred /= self.total_weight
# m_argmin = min(diffs.items(), key=operator.itemgetter(1))[0]
# self.ypred = ypreds[m_argmin]
self.betas = betas
self.ypreds = ypreds
self.diffs = diffs
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = 0
for k in range(len(self.source)):
ypredX1 += np.exp(-10000 * self.diffs[self.source[k]])* X1.dot(self.betas[self.source[k]])
ypredX1 /= self.total_weight
# m_argmin = min(self.diffs.items(), key=operator.itemgetter(1))[0]
# ypredX1 = X1.dot(self.betas[m_argmin])
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
class CIRM(BaseEstimator):
"""Match the conditional (on Y) mean of X * beta across source envs, use Yhat as proxy of Y to remove the Y parts in X.
Match on the residual between one source env and target env"""
def __init__(self, lamCIP=10.0, lamMatch=10.0, lamL2=0.0, sourceInd = 0):
self.lamCIP = lamCIP
self.lamMatch = lamMatch
self.lamL2 = lamL2
self.sourceInd = sourceInd
def fit(self, data, source, target):
super().fit(data, source, target)
# Step 1: use source envs to match the conditional mean
# find beta_invariant
XTX = 0
XTY = 0
boolA = False
avconditionx1 = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
avconditionx1 += (np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1))/len(source)
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x1.shape[0]
XTX += x1.T.dot(x1) / n1
XTY += x1.T.dot(y) / n1
conditionx1 = np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1)
diffx1 = conditionx1 - avconditionx1
XTX += self.lamCIP * np.outer(diffx1, diffx1)
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
beta_invariant = np.linalg.solve(XTX + self.lamL2*A, XTY)
self.beta_invariant = beta_invariant
# Step 2: remove the invariant part on all source envs, so that everything is independent of Y
# estimate the coefficient b in the anticausal relation X ~ b * Y
YsrcMean = 0
ntotal = 0
for m in source:
YsrcMean += np.sum(data[m][1])
ntotal += data[m][1].shape[0]
YsrcMean /= ntotal
XTY = 0
YTY = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
yguess = x1.dot(beta_invariant)
# yguess = x.dot(beta_invariant[:-1])
yCentered = y - YsrcMean
YTY += np.sum(yguess * yCentered)
XTY += x.T.dot(yCentered)
b_invariant = np.zeros_like(beta_invariant)
b_invariant[:-1] = XTY / YTY
self.b_invariant = b_invariant
# Step 3: mean match between source and target on the residual, after transforming the covariates X - (X * beta_invariant) * b_invariant
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
conditionxtar1 = np.mean(xtar1, axis=0) - np.mean(xtar1.dot(beta_invariant)) * b_invariant
conditionxtar1[-1] = 0
betas = {}
ypreds = {}
ypred = 0
x, y = data[source[self.sourceInd]]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
nm = x.shape[0]
conditionx1 = np.mean(x1, axis=0) - np.mean(x1.dot(beta_invariant)) * b_invariant
conditionx1[-1] = 0
diffx1 = conditionx1 - conditionxtar1
XTXt = x1.T.dot(x1)/nm + self.lamMatch * np.outer(diffx1, diffx1)
XTYt = x1.T.dot(y)/nm
self.beta = np.linalg.solve(XTXt + self.lamL2*A, XTYt)
self.ypred = xtar1.dot(self.beta)
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_CIP{:.1f}".format(self.lamCIP) + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
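Step 2 of CIRM estimates, per feature, the coefficient `b` in the anticausal relation X ~ b * Y, using `yguess = x1 @ beta_invariant` in the denominator; `X - yguess * b` then removes the Y-dependent part of X. The core ratio can be sketched in isolation (`anticausal_coef` is a hypothetical helper name; the test uses the noiseless case where `yguess` is `y` itself):

```python
import numpy as np

def anticausal_coef(x, y, yguess):
    """Per-feature coefficient b in X ~ b * Y: a Cov(X, Y)-style
    numerator over a Cov(yguess, Y)-style denominator, as in Step 2."""
    yc = y - y.mean()
    return x.T.dot(yc) / np.sum(yguess * yc)
```

When X is exactly `outer(y, b_true)` and the proxy is perfect, the ratio recovers `b_true` exactly.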
class CIRMi(BaseEstimator):
"""Match the conditional (on Y) mean of X * beta across source envs, use Yhat as proxy of Y to remove the Y parts in X.
Match on the residual between one source env and target env
with an additional residual-independence constraint"""
def __init__(self, lamCIP=10.0, lamMatch=10.0, lamL2=0.0, sourceInd = 0):
self.lamCIP = lamCIP
self.lamMatch = lamMatch
self.lamL2 = lamL2
self.sourceInd = sourceInd
def fit(self, data, source, target):
super().fit(data, source, target)
# Step 1: use source envs to match the conditional mean
# find beta_invariant
XTX = 0
XTY = 0
boolA = False
avconditionx1 = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
avconditionx1 += (np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1))/len(source)
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x1.shape[0]
XTX += x1.T.dot(x1) / n1
XTY += x1.T.dot(y) / n1
conditionx1 = np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1)
diffx1 = conditionx1 - avconditionx1
XTX += self.lamCIP * np.outer(diffx1, diffx1)
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
beta_invariant = np.linalg.solve(XTX + self.lamL2*A, XTY)
self.beta_invariant = beta_invariant
# Step 2: remove the invariant part on all source envs, so that everything is independent of Y
# get that coefficient b
YsrcMean = 0
ntotal = 0
for m in source:
YsrcMean += np.sum(data[m][1])
ntotal += data[m][1].shape[0]
YsrcMean /= ntotal
XTY = 0
YTY = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
yguess = x1.dot(beta_invariant)
# yguess = x.dot(beta_invariant[:-1])
yCentered = y - YsrcMean
YTY += np.sum(yguess * yCentered)
XTY += x.T.dot(yCentered)
b_invariant = np.zeros_like(beta_invariant)
b_invariant[:-1] = XTY / YTY
self.b_invariant = b_invariant
# Step 3: mean match between source and target on the residual, after transforming the covariates X - (X * beta_invariant) * b_invariant
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
conditionxtar1 = np.mean(xtar1, axis=0) - np.mean(xtar1.dot(beta_invariant)) * b_invariant
conditionxtar1[-1] = 0
betas = {}
ypreds = {}
diffs = {}
ypred = 0
x, y = data[source[self.sourceInd]]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
nm = x.shape[0]
conditionx1 = np.mean(x1, axis=0) - np.mean(x1.dot(beta_invariant)) * b_invariant
conditionx1[-1] = 0
diffx1 = conditionx1 - conditionxtar1
XTXt = x1.T.dot(x1)/nm + self.lamMatch * np.outer(diffx1, diffx1)
XTYt = x1.T.dot(y)/nm
ymean = np.mean(y)
xtymean = x1.T.dot(y - ymean) / nm
ytymean = np.mean(y * (y-ymean))
# for the residual independent penalty
XTXt += self.lamMatch * np.outer(xtymean, xtymean)
XTYt += self.lamMatch * xtymean * ytymean
self.beta = np.linalg.solve(XTXt + self.lamL2*A, XTYt)
self.ypred = xtar1.dot(self.beta)
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_CIP{:.1f}".format(self.lamCIP) + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
class CIRMweigh(BaseEstimator):
"""Match the conditional (on Y) mean of X * beta across source envs, use Yhat as proxy of Y to remove the Y parts in X.
Match on the residual between one source env and target env"""
def __init__(self, lamCIP=10.0, lamMatch=10.0, lamL2=0.0, weightrho=1000.):
self.lamCIP = lamCIP
self.lamMatch = lamMatch
self.lamL2 = lamL2
self.weightrho = weightrho
def fit(self, data, source, target):
super().fit(data, source, target)
# Step 1: use source envs to match the conditional mean
# find beta_invariant
XTX = 0
XTY = 0
boolA = False
avconditionx1 = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
avconditionx1 += (np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1))/len(source)
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x1.shape[0]
XTX += x1.T.dot(x1) / n1
XTY += x1.T.dot(y) / n1
conditionx1 = np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1)
diffx1 = conditionx1 - avconditionx1
XTX += self.lamCIP * np.outer(diffx1, diffx1)
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
beta_invariant = np.linalg.solve(XTX + self.lamL2*A, XTY)
self.beta_invariant = beta_invariant
# Step 2: remove the invariant part on all source envs, so that everything is independent of Y
# get that coefficient b
YsrcMean = 0
ntotal = 0
for m in source:
YsrcMean += np.sum(data[m][1])
ntotal += data[m][1].shape[0]
YsrcMean /= ntotal
XTY = 0
YTY = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
yguess = x1.dot(beta_invariant)
# yguess = x.dot(beta_invariant[:-1])
yCentered = y - YsrcMean
YTY += np.sum(yguess * yCentered)
XTY += x.T.dot(yCentered)
b_invariant = np.zeros_like(beta_invariant)
b_invariant[:-1] = XTY / YTY
self.b_invariant = b_invariant
# Step 3: mean match between source and target on the residual, after transforming the covariates X - (X * beta_invariant) * b_invariant
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
conditionxtar1 = np.mean(xtar1, axis=0) - np.mean(xtar1.dot(beta_invariant)) * b_invariant
conditionxtar1[-1] = 0
self.betas = {}
ypreds = {}
self.crits_norm = {}
self.crits = {}
self.ypred = 0
self.total_weight = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
nm = x.shape[0]
conditionx1 = np.mean(x1, axis=0) - np.mean(x1.dot(beta_invariant)) * b_invariant
conditionx1[-1] = 0
diffx1 = conditionx1 - conditionxtar1
XTXt = x1.T.dot(x1)/nm + self.lamMatch * np.outer(diffx1, diffx1)
XTYt = x1.T.dot(y)/nm
self.betas[m] = np.linalg.solve(XTXt + self.lamL2*A, XTYt)
ypreds[m] = xtar1.dot(self.betas[m])
self.crits[m] = np.sum((x1.dot(self.betas[m])-y)**2)/nm+ self.lamMatch * np.inner(diffx1, self.betas[m])**2
minDiffIndx = min(self.crits.items(), key=operator.itemgetter(1))[0]
self.minDiffIndx = minDiffIndx
self.min_critindex = minDiffIndx
# use normalized weights to avoid numerical overflow
for m in source:
self.crits_norm[m] = self.crits[m] - self.crits[self.min_critindex]
self.ypred += np.exp(-self.weightrho * self.crits_norm[m]) * ypreds[m]
self.total_weight += np.exp(-self.weightrho * self.crits_norm[m])
self.ypred /= self.total_weight
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = 0
for k in range(len(self.source)):
ypredX1 += np.exp(-self.weightrho * self.crits_norm[self.source[k]]) * X1.dot(self.betas[self.source[k]])
ypredX1 /= self.total_weight
# m_argmin = min(self.diffs.items(), key=operator.itemgetter(1))[0]
# ypredX1 = X1.dot(self.betas[m_argmin])
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_CIP{:.1f}".format(self.lamCIP) + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
class CIRMmixweigh(BaseEstimator):
"""Match the conditional (on Y) mean of X * beta across source envs, use Yhat as proxy of Y to remove the Y parts in X.
Match on the residual between one source env and target env
This version handles the mixed causal-anticausal case:
we first remove the causal part, run CIRM, and then add the causal part back"""
def __init__(self, causal_index=[0], lamCIP=10.0, lamMatch=10.0, lamL2=0.0, weightrho = 1000.):
self.lamCIP = lamCIP
self.lamMatch = lamMatch
self.lamL2 = lamL2
self.causal_index = causal_index
self.weightrho = weightrho
def fit(self, data, source, target):
super().fit(data, source, target)
d = data[source[0]][0].shape[1]
self.noncausal_index = list(set(np.arange(d)) - set(self.causal_index))
def get_causal_beta(indexk):
XY = 0.
XX = 0.
ntotal = 0
boolA = False
betacausal1_restrict = 0
for m in source:
x, y = data[m]
if indexk != -1:
y = x[:, indexk]
# only use the causal part of x
x = x[:, self.causal_index]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
ntotal += x.shape[0]
XX += x1.T.dot(x1)
XY += x1.T.dot(y)
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
betacausal1_restrict += np.linalg.solve(XX/ntotal + self.lamL2*A, XY/ntotal)/len(source)
return betacausal1_restrict
beta_corrections = {}
for indexk in self.noncausal_index:
beta_corrections[indexk] = get_causal_beta(indexk)
beta_corrections[-1] = get_causal_beta(-1)
# Step 0: run SrcPool on the causal_index
betacausal1 = np.zeros(d)
betacausal1[self.causal_index] = beta_corrections[-1][:-1]
self.betacausal1 = betacausal1
# for cirm, y - x * betacausal1 will be used as a replacement for y
# Now modify the dataset
dataNew = {}
for m in np.concatenate((source, [target])):
x, y = data[m]
xNew = np.zeros_like(x[:, self.noncausal_index])
for k, indexk in enumerate(self.noncausal_index):
xNew[:, k] = x[:, indexk] - x[:, self.causal_index].dot(beta_corrections[indexk][:-1])
yNew = y - x.dot(self.betacausal1)
dataNew[m] = xNew, yNew
# Step 1: use source envs to match the conditional mean
# find beta_invariant
XTX = 0
XTY = 0
boolA = False
avconditionx1 = 0
for m in source:
x, y = dataNew[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
avconditionx1 += (np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1))/len(source)
for m in source:
x, y = dataNew[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x1.shape[0]
XTX += x1.T.dot(x1) / n1
XTY += x1.T.dot(y) / n1
conditionx1 = np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1)
diffx1 = conditionx1 - avconditionx1
XTX += self.lamCIP * np.outer(diffx1, diffx1)
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
beta_invariant = np.linalg.solve(XTX + self.lamL2*A, XTY)
self.beta_invariant = beta_invariant
# Step 2: remove the invariant part on all source envs, so that everything is independent of Y
# get that coefficient b
YsrcMean = 0
ntotal = 0
for m in source:
YsrcMean += np.sum(dataNew[m][1])
ntotal += dataNew[m][1].shape[0]
YsrcMean /= ntotal
XTY = 0
YTY = 0
for m in source:
x, y = dataNew[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
yguess = x1.dot(beta_invariant)
# yguess = x.dot(beta_invariant[:-1])
yCentered = y - YsrcMean
YTY += np.sum(yguess * yCentered)
XTY += x.T.dot(yCentered)
b_invariant = np.zeros_like(beta_invariant)
b_invariant[:-1] = XTY / YTY
self.b_invariant = b_invariant
# Step 3: mean match between source and target on the residual, after transforming the covariates X - (X * beta_invariant) * b_invariant
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
xtarNew, _ = dataNew[target]
xtarNew1 = np.concatenate((xtarNew, np.ones((xtarNew.shape[0], 1))), axis=1)
conditionxtar1 = np.mean(xtarNew1, axis=0) - np.mean(xtarNew1.dot(beta_invariant)) * b_invariant
conditionxtar1[-1] = 0
self.betas = {}
ypreds = {}
self.crits = {}
self.crits_norm = {}
self.ypred = 0
self.total_weight = 0
for m in source:
x, y = dataNew[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
nm = x.shape[0]
conditionx1 = np.mean(x1, axis=0) - np.mean(x1.dot(beta_invariant)) * b_invariant
conditionx1[-1] = 0
diffx1 = conditionx1 - conditionxtar1
XTXt = x1.T.dot(x1)/nm + self.lamMatch * np.outer(diffx1, diffx1)
XTYt = x1.T.dot(y)/nm
betaNew = np.linalg.solve(XTXt + self.lamL2*A, XTYt)
self.betas[m] = np.zeros(d+1)
self.betas[m][self.noncausal_index] = betaNew[:-1]
self.betas[m][-1] = betaNew[-1]
self.betas[m][self.causal_index] = beta_corrections[-1][:-1]
for k, indexk in enumerate(self.noncausal_index):
self.betas[m][self.causal_index] -= betaNew[k]*beta_corrections[indexk][:-1]
ypreds[m] = xtar1.dot(self.betas[m])
self.crits[m] = np.sum((x1.dot(betaNew)-y)**2)/nm + self.lamMatch * np.inner(diffx1, betaNew)**2
minDiffIndx = min(self.crits.items(), key=operator.itemgetter(1))[0]
self.minDiffIndx = minDiffIndx
self.min_critindex = minDiffIndx
# use normalized weights to avoid numerical overflow
for m in source:
self.crits_norm[m] = self.crits[m] - self.crits[minDiffIndx]
self.ypred += np.exp(-self.weightrho * self.crits_norm[m]) * ypreds[m]
self.total_weight += np.exp(-self.weightrho * self.crits_norm[m])
self.ypred /= self.total_weight
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = 0
for k in range(len(self.source)):
ypredX1 += np.exp(-self.weightrho * self.crits_norm[self.source[k]]) * X1.dot(self.betas[self.source[k]])
ypredX1 /= self.total_weight
# m_argmin = min(self.diffs.items(), key=operator.itemgetter(1))[0]
# ypredX1 = X1.dot(self.betas[m_argmin])
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_CIP{:.1f}".format(self.lamCIP) + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
class CIRMiweigh(BaseEstimator):
"""Match the conditional (on Y) mean of X * beta across source envs, use Yhat as proxy of Y to remove the Y parts in X.
Match on the residual between one source env and target env
with an additional residual-independence constraint"""
def __init__(self, lamCIP=10.0, lamMatch=10.0, lamL2=0.0):
self.lamCIP = lamCIP
self.lamMatch = lamMatch
self.lamL2 = lamL2
def fit(self, data, source, target):
super().fit(data, source, target)
# Step 1: use source envs to match the conditional mean
# find beta_invariant
XTX = 0
XTY = 0
boolA = False
avconditionx1 = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
avconditionx1 += (np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1))/len(source)
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x1.shape[0]
XTX += x1.T.dot(x1) / n1
XTY += x1.T.dot(y) / n1
conditionx1 = np.mean(x1, axis=0) - np.mean(y) * 1./np.sum(y**2) * y.dot(x1)
diffx1 = conditionx1 - avconditionx1
XTX += self.lamCIP * np.outer(diffx1, diffx1)
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
beta_invariant = np.linalg.solve(XTX + self.lamL2*A, XTY)
self.beta_invariant = beta_invariant
# Step 2: remove the invariant part on all source envs, so that everything is independent of Y
# get that coefficient b
YsrcMean = 0
ntotal = 0
for m in source:
YsrcMean += np.sum(data[m][1])
ntotal += data[m][1].shape[0]
YsrcMean /= ntotal
XTY = 0
YTY = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
yguess = x1.dot(beta_invariant)
# yguess = x.dot(beta_invariant[:-1])
yCentered = y - YsrcMean
YTY += np.sum(yguess * yCentered)
XTY += x.T.dot(yCentered)
b_invariant = np.zeros_like(beta_invariant)
b_invariant[:-1] = XTY / YTY
self.b_invariant = b_invariant
# Step 3: mean match between source and target on the residual, after transforming the covariates X - (X * beta_invariant) * b_invariant
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
conditionxtar1 = np.mean(xtar1, axis=0) - np.mean(xtar1.dot(beta_invariant)) * b_invariant
conditionxtar1[-1] = 0
betas = {}
ypreds = {}
diffs = {}
ypred = 0
total_weight = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
nm = x.shape[0]
conditionx1 = np.mean(x1, axis=0) - np.mean(x1.dot(beta_invariant)) * b_invariant
conditionx1[-1] = 0
diffx1 = conditionx1 - conditionxtar1
XTXt = x1.T.dot(x1)/nm + self.lamMatch * np.outer(diffx1, diffx1)
XTYt = x1.T.dot(y)/nm
ymean = np.mean(y)
xtymean = x1.T.dot(y - ymean) / nm
ytymean = np.mean(y * (y-ymean))
# for the residual independent penalty
XTXt += self.lamMatch * np.outer(xtymean, xtymean)
XTYt += self.lamMatch * xtymean * ytymean
betas[m] = np.linalg.solve(XTXt + self.lamL2*A, XTYt)
ypreds[m] = xtar1.dot(betas[m])
diffs[m] = np.inner(diffx1, betas[m])**2
# diffs[m] = np.sum((x1.dot(betas[m])-y)**2)/nm+ self.lamMatch * np.inner(diffx1, betas[m])**2
ypred += np.exp(-10000 * diffs[m]) * ypreds[m]
total_weight += np.exp(-10000 * diffs[m])
ypred /= total_weight
self.ypred = ypred
self.total_weight = total_weight
# m_argmin = min(diffs.items(), key=operator.itemgetter(1))[0]
# self.ypred = ypreds[m_argmin]
self.betas = betas
self.ypreds = ypreds
self.diffs = diffs
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = 0
for k in range(len(self.source)):
ypredX1 += np.exp(-10000 * self.diffs[self.source[k]]) * X1.dot(self.betas[self.source[k]])
ypredX1 /= self.total_weight
# m_argmin = min(self.diffs.items(), key=operator.itemgetter(1))[0]
# ypredX1 = X1.dot(self.betas[m_argmin])
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_CIP{:.1f}".format(self.lamCIP) + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
class RIIRMweigh(BaseEstimator):
"""Residiual invariant and independent, residual match estimator,
Match the residual Y - X * beta across source envs,
use Yhat as proxy of Y to remove the Y parts in X.
Match on the residual between one source env and target env"""
def __init__(self, lamRII=10.0, lamMatch=10.0, lamL2=0.0):
self.lamRII = lamRII
self.lamMatch = lamMatch
self.lamL2 = lamL2
def fit(self, data, source, target):
super().fit(data, source, target)
# Step 1: use source envs to match the conditional mean
# find beta_invariant
XTX = 0
XTY = 0
boolA = False
avgx1mean = 0
avgymean = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
avgx1mean += np.mean(x1, axis=0)/len(source)
avgymean += np.mean(y)/len(source)
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
n1 = x1.shape[0]
x1mean = np.mean(x1, axis=0)
ymean = np.mean(y)
xtymean = x1.T.dot(y - ymean) / n1
ytymean = np.mean(y * (y-ymean))
XTX += x1.T.dot(x1) / n1
XTY += x1.T.dot(y) / n1
diffx1 = x1mean - avgx1mean
# for the residual invariant penalty
XTX += self.lamRII * np.outer(diffx1, diffx1)
XTY += self.lamRII * diffx1 * (ymean - avgymean)
# for the residual independent penalty
XTX += self.lamRII * np.outer(xtymean, xtymean)
XTY += self.lamRII * xtymean * ytymean
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
beta_invariant = np.linalg.solve(XTX + self.lamL2*A, XTY)
self.beta_invariant = beta_invariant
# Step 2: remove the invariant part on all source envs, so that everything is independent of Y
# get that coefficient b
YsrcMean = 0
ntotal = 0
for m in source:
YsrcMean += np.sum(data[m][1])
ntotal += data[m][1].shape[0]
YsrcMean /= ntotal
XTY = 0
YTY = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
yguess = x1.dot(beta_invariant)
# yguess = x.dot(beta_invariant[:-1])
yCentered = y - YsrcMean
YTY += np.sum(yguess * yCentered)
XTY += x.T.dot(yCentered)
b_invariant = np.zeros_like(beta_invariant)
b_invariant[:-1] = XTY / YTY
self.b_invariant = b_invariant
# Step 3: mean match between source and target on the residual, after transforming the covariates X - (X * beta_invariant) * b_invariant
xtar, _ = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
conditionxtar1 = np.mean(xtar1, axis=0) - np.mean(xtar1.dot(beta_invariant)) * b_invariant
conditionxtar1[-1] = 0
betas = {}
ypreds = {}
diffs = {}
ypred = 0
total_weight = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
nm = x.shape[0]
conditionx1 = np.mean(x1, axis=0) - np.mean(x1.dot(beta_invariant)) * b_invariant
conditionx1[-1] = 0
diffx1 = conditionx1 - conditionxtar1
XTXt = x1.T.dot(x1)/nm + self.lamMatch * np.outer(diffx1, diffx1)
XTYt = x1.T.dot(y)/nm
betas[m] = np.linalg.solve(XTXt + self.lamL2*A, XTYt)
ypreds[m] = xtar1.dot(betas[m])
diffs[m] = np.inner(diffx1, betas[m])**2
# diffs[m] = np.sum((x1.dot(betas[m])-y)**2)/nm+ self.lamMatch * np.inner(diffx1, betas[m])**2
ypred += np.exp(-10000 * diffs[m]) * ypreds[m]
total_weight += np.exp(-10000 * diffs[m])
ypred /= total_weight
self.ypred = ypred
self.total_weight = total_weight
# m_argmin = min(diffs.items(), key=operator.itemgetter(1))[0]
# self.ypred = ypreds[m_argmin]
self.betas = betas
self.ypreds = ypreds
self.diffs = diffs
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = 0
for k in range(len(self.source)):
ypredX1 += np.exp(-10000 * self.diffs[self.source[k]]) * X1.dot(self.betas[self.source[k]])
ypredX1 /= self.total_weight
# m_argmin = min(self.diffs.items(), key=operator.itemgetter(1))[0]
# ypredX1 = X1.dot(self.betas[m_argmin])
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_RII{:.1f}".format(self.lamRII) + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
class Anchor(BaseEstimator):
"""Anchor regression"""
def __init__(self, lamMatch=10., lamL2=0.):
self.lamMatch = lamMatch
self.lamL2 = lamL2
def fit(self, data, source, target):
super().fit(data, source, target)
xmean0 = 0
ymean0 = 0
for m in source:
x, y = data[m]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
xmean0 += np.mean(x1, axis=0)
ymean0 += np.mean(y)
xmean0 /= len(source)
ymean0 /= len(source)
XTX = 0
XTY = 0
boolA = False
for m in source:
x, y = data[m]
nm = x.shape[0]
x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
XTX += x1.T.dot(x1)/nm
XTY += x1.T.dot(y)/nm
diffxm = np.mean(x1, axis=0) - xmean0
XTX += self.lamMatch * np.outer(diffxm, diffxm)
diffym = np.mean(y) - ymean0
XTY += self.lamMatch * diffym * diffxm
if not boolA:
A = np.eye(x1.shape[1])
A[-1, -1] = 0
boolA = True
self.beta = np.linalg.solve(XTX + self.lamL2*A, XTY)
xtar, ytar = data[target]
xtar1 = np.concatenate((xtar, np.ones((xtar.shape[0], 1))), axis=1)
self.ypred = xtar1.dot(self.beta)
return self
def predict(self, X):
X1 = np.concatenate((X, np.ones((X.shape[0], 1))), axis=1)
ypredX1 = X1.dot(self.beta)
return ypredX1
def __str__(self):
return self.__class__.__name__ + "_Match{:.1f}".format(self.lamMatch) + "_Ridge{:.1f}".format(self.lamL2)
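The normal-equations solve in `Anchor.fit` penalizes every coefficient except the intercept: the `A` matrix is the identity with its last diagonal entry zeroed, so ridge shrinkage never touches the appended intercept column. A minimal standalone sketch of that closed-form solve (the `ridge_fit` helper name is ours, for illustration only):

```python
import numpy as np

def ridge_fit(x, y, lam=1.0):
    """Solve (X1'X1 + lam*A) beta = X1'y, where X1 carries an appended
    intercept column and A is the identity with a zeroed intercept entry,
    so shrinkage applies to the slopes only."""
    x1 = np.concatenate((x, np.ones((x.shape[0], 1))), axis=1)
    A = np.eye(x1.shape[1])
    A[-1, -1] = 0.0  # do not penalize the intercept
    return np.linalg.solve(x1.T.dot(x1) + lam * A, x1.T.dot(y))

# Noise-free data y = 2x + 3 is recovered exactly when lam = 0.
x = np.arange(10, dtype=float).reshape(-1, 1)
y = 2.0 * x.ravel() + 3.0
beta = ridge_fit(x, y, lam=0.0)  # beta ~= [2.0, 3.0]
```

With a positive `lam` the slope shrinks toward zero while the intercept stays free to absorb the mean.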
from math import sqrt
import cozmo
from cozmo.util import Pose
from cozmo.objects import CustomObject, CustomObjectMarkers, CustomObjectTypes, ObservableElement, ObservableObject
from sympy import Eq, symbols, solve
from numpy import ones,vstack
from numpy.linalg import lstsq
x, y = symbols("x y")
def line_equation(p1_x, p1_y, p2_x, p2_y):
points = [(p1_x, p1_y),(p2_x, p2_y)]
x_coords, y_coords = zip(*points)
A = vstack([x_coords, ones(len(x_coords))]).T
m, b = lstsq(A, y_coords, rcond=None)[0]
return m, b
def nearest_intersection(x_o, y_o, points):
dx_0 = (x_o - points[0][x])
dy_0 = (y_o - points[0][y])
dist_0 = sqrt(dx_0**2 + dy_0**2)
dx_1 = (x_o - points[1][x])
dy_1 = (y_o - points[1][y])
dist_1 = sqrt(dx_1**2 + dy_1**2)
if dist_0 < dist_1:
return points[0]
return points[1]
def custom_object_pose(robot, custom_object):
print(">>> robot: ", robot.pose.position)
print(">>> cube: ", custom_object.pose.position)
m, b = line_equation(robot.pose.position.x, robot.pose.position.y,
custom_object.pose.position.x, custom_object.pose.position.y)
print(">>> m, b: ", m, b)
# line between cozmo and the object
line = Eq(y - m*x, b)
print(f">>> line: y = {m} x + {b}")
# circle around the object
circle = Eq((x - custom_object.pose.position.x)**2 + (y - custom_object.pose.position.y)**2, 10000)
print(f">>> circle: (x - {custom_object.pose.position.x})**2 + (y - {custom_object.pose.position.y})**2 = 10000")
# Intersection points between line and circle
result = solve([line, circle])
print(">>> intersection points: ", result)
point = nearest_intersection(robot.pose.position.x, robot.pose.position.y, result)
print(">>> nearest intersection: ", point)
return Pose(point[x], point[y], 0, angle_z=custom_object.pose.rotation.angle_z)
def objects(robot: cozmo.robot.Robot):
return [robot.world.define_custom_cube(CustomObjectTypes.CustomType00,
CustomObjectMarkers.Circles2,
25.4, 25.4, 25.4, True),
robot.world.define_custom_cube(CustomObjectTypes.CustomType01,
CustomObjectMarkers.Hexagons3,
25.4, 25.4, 25.4, True),
robot.world.define_custom_cube(CustomObjectTypes.CustomType02,
CustomObjectMarkers.Triangles3,
25.4, 25.4, 25.4, True),
robot.world.define_custom_cube(CustomObjectTypes.CustomType03,
CustomObjectMarkers.Diamonds2,
25.4, 25.4, 25.4, True),
robot.world.define_custom_cube(CustomObjectTypes.CustomType04,
CustomObjectMarkers.Circles3,
25.4, 25.4, 25.4, True),
]
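`custom_object_pose` intersects the robot-to-object line with a circle of radius 100 mm (the radicand 10000) via sympy. The same intersection can be checked by hand with the quadratic formula; a self-contained sketch (the helper name `line_circle_intersections` is ours, not part of the cozmo API):

```python
from math import sqrt

def line_circle_intersections(m, b, cx, cy, r):
    """Intersect y = m*x + b with the circle (x-cx)^2 + (y-cy)^2 = r^2.
    Substituting y gives a quadratic A*x^2 + B*x + C = 0 in x."""
    A = 1.0 + m * m
    B = 2.0 * (m * (b - cy) - cx)
    C = cx * cx + (b - cy) ** 2 - r * r
    disc = B * B - 4.0 * A * C
    if disc < 0:
        return []  # line misses the circle entirely
    xs = sorted({(-B - sqrt(disc)) / (2.0 * A), (-B + sqrt(disc)) / (2.0 * A)})
    return [(x, m * x + b) for x in xs]

# Unit-slope line through the origin against a radius-5 circle at the origin:
# intersections at +/- (5/sqrt(2), 5/sqrt(2)).
pts = line_circle_intersections(1.0, 0.0, 0.0, 0.0, 5.0)
```

Picking whichever of the two points is nearest the robot, as `nearest_intersection` does, then gives the approach pose.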
"""DQN Agent"""
import tensorflow as tf
import numpy as np
from network import DQN
from replay_buffer import ReplayBuffer
class DQNAgent:
def __init__(self, sess, state_size, action_size):
self.sess = sess
self.state_size = state_size
self.action_size = action_size
# hyper parameter
self.batch_size = 32
self.discount_factor = 0.99
self.learning_rate = 0.00025
# epsilon
self.s_epsilon = 1.0
self.e_epsilon = 0.01
self.n_epsilon_decay = 100000
self.epsilon = self.s_epsilon
# replay buffer
self.buffer = ReplayBuffer(50000)
# place holder
self.actions = tf.placeholder(tf.int32, shape=None)
self.targets = tf.placeholder(tf.float32, shape=None)
# network
self.policy_net = DQN(self.state_size, self.action_size, net_name="policy_net")
self.target_net = DQN(self.state_size, self.action_size, net_name="target_net")
self.sess.run(tf.global_variables_initializer())
self.update_target_network()
# optimizer
self.loss_op, self.train_op = self._build_op()
def _build_op(self):
"""Define the loss function and optimizer used to train the network."""
def select_action(self, state):
"""Select an action using an epsilon-greedy policy."""
def update_model(self):
"""Train the policy network."""
def update_target_network(self):
"""Copy the policy network's variables into the target network so the target network is up to date."""
def save_model(self, filename):
"""Save model."""
saver = tf.train.Saver()
path = "./save/" + filename + ".ckpt"
save_path = saver.save(self.sess, path)
print("[Model saved in path: %s !!!]" % save_path)
def load_model(self, filename):
"""Load model."""
saver = tf.train.Saver()
path = "./save/" + filename + ".ckpt"
saver.restore(self.sess, path)
print("[Model restored !!!]")
import numpy as np
from scipy.integrate import cumtrapz
import warnings
warnings.filterwarnings("ignore", category=RuntimeWarning)
'''
this module contains all the vector calculus math used in nimpy on vector and scalar
fields. It depends on:
numpy (defined as np)
scipy.integrate.cumtrapz (as cumtrapz)
warnings
Note: because 1/R blows up at R=0, this module would normally emit many RuntimeWarnings,
so they have been disabled above.
----For all calculations d/dphi=0, i.e. we are only considering poloidal derivatives----
In nimpy, the R (or X) component lies along axis=1 of numpy arrays and the Z component along axis=0.
The Nimrod coordinate system has (R, Z), (R, Theta), or (X, Z) as the poloidal components for cylindrical, spherical, and cartesian geometries, respectively.
The Phi component is the toroidal component for cylindrical and spherical geometries; Y is the toroidal component for cartesian.
Break down:
----geom='cartesian'----
f1 -> f_x
f2 -> f_y "toroidal comp"
f3 -> f_z
----geom='cylindrical'----
f1 -> f_r
f2 -> f_phi "toroidal comp"
f3 -> f_z (same as f3 for cartesian)
----geom='spherical'----
f1 -> f_r
f2 -> f_phi "toroidal comp"
f3 -> f_theta (polar angle)
'''
def df_dx1(f, x1):
return np.gradient(f, axis=1)/np.gradient(x1, axis=1)
def df_dx3(f, x3):
return np.gradient(f, axis=0)/np.gradient(x3, axis=0)
def d2f_dx12(f, x1):
return np.gradient(df_dx1(f, x1), axis=1)/np.gradient(x1, axis=1)
def d2f_dx32(f, x3):
return np.gradient(df_dx3(f, x3), axis=0)/np.gradient(x3, axis=0)
def div(f1, f3, x1, x3, geom='cylindrical'):
'''
Divergence of a nimpy field in poloidal plane
input: f1 (np.ndarray) the R or X component of the field
f3 (np.ndarray) the Z or Theta component of the field
x1 (np.ndarray) the R grid
x3 (np.ndarray) the Z or Theta grid
geom (str): geometry to evaluate on, choices are: cylindrical, spherical or cartesian
'''
if geom == 'cylindrical':
return df_dx1(f1, x1) + f1/x1 + df_dx3(f3,x3)
elif geom == 'spherical':
return df_dx1(f1, x1) + (2./x1)*f1 + (df_dx3(f3, x3)/x1)
elif geom == 'cartesian':
return df_dx1(f1, x1) + df_dx3(f3, x3)
else:
raise ValueError('{0} is not a valid geom. Try cylindrical, spherical or cartesian'.format(geom))
def grad(f, x1, x3, geom='cylindrical'):
'''
Gradient of a nimpy field in poloidal plane
input: f (np.ndarray) the field component
x1 (np.ndarray) the R grid
x3 (np.ndarray) the Z or Theta grid
geom (str): geometry to evaluate on, choices are: cylindrical, spherical or cartesian
output based on geometry:
cylindrical (dict) {'R': R component, 'Z': Z component}
spherical (dict) {'R': R component, 'Theta': Theta component}
cartesian (dict) {'X': X component, 'Z': Z component}
'''
if geom == 'cylindrical':
return {'R': df_dx1(f, x1), 'Z': df_dx3(f, x3)}
elif geom == 'spherical':
return {'R': df_dx1(f, x1), 'Theta': df_dx3(f, x3)/x1}
elif geom == 'cartesian':
return {'X': df_dx1(f, x1), 'Z': df_dx3(f, x3)}
else:
raise ValueError('{0} is not a valid geom. Try cylindrical, spherical or cartesian'.format(geom))
def curl(f1, f2, f3, x1, x3, geom='cylindrical'):
'''
Curl of a nimpy field in poloidal plane
input: f1 (np.ndarray) the R or X component of the field
f2 (np.ndarray) the Phi or Y component of the field
f3 (np.ndarray) the Z or Theta component of the field
x1 (np.ndarray) the R grid
x3 (np.ndarray) the Z or Theta grid
geom (str): geometry to evaluate on, choices are: cylindrical, spherical or cartesian
output based on geometry:
cylindrical (dict) {'R': R component, 'Phi': Phi component, 'Z': Z component}
spherical (dict) {'R': R component, 'Phi': Phi component, 'Theta': Theta component}
cartesian (dict) {'X': X component, 'Y': Y component, 'Z': Z component}
'''
if geom == 'cylindrical':
ans = {}
ans['R'] = -1.*df_dx3(f2, x3)
ans['Phi'] = df_dx3(f1, x3)-df_dx1(f3, x1)
ans['Z'] = df_dx1(f2, x1) + f2/x1
return ans
elif geom == 'spherical':
ans = {}
ans['R'] = -1.*df_dx3(f2, x3)/x1
ans['Phi'] = (df_dx3(f1,x3)/x1) - (f3/x1) - df_dx1(f3, x1)
ans['Theta'] = (1./x1)*f2 + df_dx1(f2, x1)
return ans
elif geom == 'cartesian':
ans = {}
ans['X'] = -1.*df_dx3(f2, x3)
ans['Y'] = df_dx3(f1, x3) - df_dx1(f3, x1)
ans['Z'] = df_dx1(f2, x1)
return ans
else:
raise ValueError('{0} is not a valid geom. Try cylindrical, spherical or cartesian'.format(geom))
def laplacian(f, x1, x3, geom='cylindrical'):
'''
Laplacian of a nimpy field in poloidal plane
input: f (np.ndarray) the field component
x1 (np.ndarray) the R grid
x3 (np.ndarray) the Z or Theta grid
geom (str): geometry to evaluate on, choices are: cylindrical, spherical or cartesian
'''
if geom == 'cylindrical':
return d2f_dx12(f, x1) + df_dx1(f, x1)/x1 + d2f_dx32(f, x3)
elif geom == 'spherical':
return d2f_dx12(f, x1) + (2./x1)*df_dx1(f,x1) + (1./x1**2)*d2f_dx32(f, x3)
elif geom == 'cartesian':
return d2f_dx12(f, x1) + d2f_dx32(f, x3)
else:
raise ValueError('{0} is not a valid geom. Try cylindrical, spherical or cartesian'.format(geom))
def vec_laplacian(f1, f2, f3, x1, x3, geom='cylindrical'):
'''
Vector Laplacian of a nimpy field in poloidal plane
input: f1 (np.ndarray) the R or X component of the field
f2 (np.ndarray) the Phi or Y component of the field
f3 (np.ndarray) the Z or Theta component of the field
x1 (np.ndarray) the R grid
x3 (np.ndarray) the Z or Theta grid
geom (str): geometry to evaluate on, choices are: cylindrical, spherical or cartesian
output based on geometry:
cylindrical (dict) {'R': R component, 'Phi': Phi component, 'Z': Z component}
spherical (dict) {'R': R component, 'Phi': Phi component, 'Theta': Theta component}
cartesian (dict) {'X': X component, 'Y': Y component, 'Z': Z component}
'''
if geom == 'cylindrical':
ans = {}
ans['R'] = laplacian(f1, x1, x3) - f1/x1**2
ans['Phi'] = laplacian(f2, x1, x3) - f2/x1**2
ans['Z'] = laplacian(f3, x1, x3)
return ans
elif geom == 'spherical':
ans = {}
ans['R'] = laplacian(f1, x1, x3, geom=geom) - (2./x1**2)*(f1 + df_dx3(f3, x3))
ans['Phi'] = laplacian(f2, x1, x3, geom=geom) - f2/x1**2
ans['Theta'] = laplacian(f3, x1, x3, geom=geom) - f3/x1**2 + (2./x1**2)*df_dx3(f1, x3)
return ans
elif geom == 'cartesian':
ans = {}
ans['X'] = laplacian(f1, x1, x3, geom=geom)
ans['Y'] = laplacian(f2, x1, x3, geom=geom)
ans['Z'] = laplacian(f3, x1, x3, geom=geom)
return ans
else:
raise ValueError('{0} is not a valid geom. Try cylindrical, spherical or cartesian'.format(geom))
def f_dot_grad_f(f1, f2, f3, x1, x3, geom='cylindrical'):
'''
A_dot_grad_A of a nimpy field in poloidal plane
input: f1 (np.ndarray) the R or X component of the field
f2 (np.ndarray) the Phi or Y component of the field
f3 (np.ndarray) the Z or Theta component of the field
x1 (np.ndarray) the R grid
x3 (np.ndarray) the Z or Theta grid
geom (str): geometry to evaluate on, choices are: cylindrical, spherical or cartesian
output based on geometry:
cylindrical (dict) {'R': R component, 'Phi': Phi component, 'Z': Z component}
spherical (dict) {'R': R component, 'Phi': Phi component, 'Theta': Theta component}
cartesian (dict) {'X': X component, 'Y': Y component, 'Z': Z component}
'''
if geom == 'cylindrical':
return {'R': f1*df_dx1(f1, x1) + f3*df_dx3(f1, x3) - f2**2/x1, 'Phi': f1*df_dx1(f2, x1) + f3*df_dx3(f2, x3) + (f1*f2)/x1, 'Z': f1*df_dx1(f3, x1) + f3*df_dx3(f3, x3)}
elif geom == 'spherical':
ans = {}
ans['R'] = f1*df_dx1(f1, x1) + (f3/x1)*df_dx3(f1, x3) - (f2**2 + f3**2)/x1
ans['Phi'] = f1*df_dx1(f2, x1) + (f3/x1)*df_dx3(f2, x3) + (f2*f1)/x1
ans['Theta'] = f1*df_dx1(f3, x1) + (f3/x1)*df_dx3(f3, x3) + (f3*f1)/x1
return ans
elif geom == 'cartesian':
ans = {}
ans['X'] = f1*df_dx1(f1, x1) + f3*df_dx3(f1, x3)
ans['Y'] = f1*df_dx1(f2, x1) + f3*df_dx3(f2, x3)
ans['Z'] = f1*df_dx1(f3, x1) + f3*df_dx3(f3, x3)
return ans
else:
raise ValueError('{0} is not a valid geom. Try cylindrical, spherical or cartesian'.format(geom))
def calc_poloidal_stream_func(f3, x1, x3=None, geom='cylindrical'):
'''
Poloidal stream function calculator using scipy.integrate.cumtrapz
input: f3 (np.ndarray) the Z or Theta component of the field
x1 (np.ndarray) the R or X grid
x3 (np.ndarray) (optional, default=None) if doing spherical calculation, needs the Theta grid
geom (str): geometry to evaluate on, choices are: cylindrical, spherical or cartesian
'''
if geom == 'cylindrical' or geom == 'cartesian':
psi = cumtrapz(f3 * x1, x1[0, :], initial=0.0, axis=1)
elif geom == 'spherical':
if x3 is None:
raise ValueError('You must provide a theta mesh for spherical stream calc')
psi = -1.*np.sin(x3) * cumtrapz(f3 * x1, x1[0, :], initial=0.0, axis=1)
else:
raise ValueError('{0} is not a valid geom. Try cylindrical, spherical or cartesian'.format(geom))
return psi
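As a sanity check of the cylindrical `div` above (dF_R/dR + F_R/R + dF_Z/dZ): for the field F = (R, 0, Z) the divergence is identically 3. A standalone sketch, re-declaring the two derivative helpers so it runs on its own, with the nimpy convention that axis=1 is R and axis=0 is Z:

```python
import numpy as np

def df_dx1(f, x1):
    # derivative along axis=1 (the R direction in nimpy's convention)
    return np.gradient(f, axis=1) / np.gradient(x1, axis=1)

def df_dx3(f, x3):
    # derivative along axis=0 (the Z direction)
    return np.gradient(f, axis=0) / np.gradient(x3, axis=0)

# 2-D (R, Z) grid that avoids R = 0, where 1/R would blow up.
r = np.linspace(1.0, 2.0, 101)
z = np.linspace(-1.0, 1.0, 101)
R, Z = np.meshgrid(r, z)

# Cylindrical divergence of F = (F_R, F_phi, F_Z) = (R, 0, Z):
# dR/dR + R/R + dZ/dZ = 1 + 1 + 1 = 3 everywhere.
f1, f3 = R, Z
divergence = df_dx1(f1, R) + f1 / R + df_dx3(f3, Z)
```

Because `np.gradient` is exact for linear data (including its one-sided edge stencils), the result is 3 on the whole grid, not just in the interior.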
#Importing header files
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
#Reading the file
data=pd.read_csv(path)
#1 Visualizing the company's record with respect to loan approvals.
print(data.shape)
#Creating a new variable to store the value counts
loan_status=data['Loan_Status'].value_counts()
#Plotting bar plot
plt.bar(loan_status.index,loan_status.values)
plt.show()
#Company has more 'loan approvals'
print('----------------------------------------')
#2 Loan approval distribution across the regions.
#Plotting an unstacked bar plot
property_and_loan=data.groupby(['Property_Area','Loan_Status'])
property_and_loan=property_and_loan.size().unstack()
property_and_loan.plot(kind='bar',stacked=False,figsize=[15,10])
#Changing the x-axis label
plt.xlabel('Property Area')
#Changing the y-axis label
plt.ylabel('Loan Status')
#Rotating the ticks of X-axis
plt.xticks(rotation=45)
plt.show()
#Semiurban region with the highest no. of loan approvals
#Rural region with lowest no. of loan approvals
#Semiurban region with the maximum difference between loan approvals and loan rejections
print('----------------------------------------')
#3 Does higher education result in a better guarantee in issuing loans?
#Plotting a stacked bar plot
education_and_loan=data.groupby(['Education','Loan_Status'])
education_and_loan=education_and_loan.size().unstack()
education_and_loan.plot(kind='bar',stacked=True,figsize=[15,10])
#Changing the x-axis label
plt.xlabel('Education Status')
#Changing the y-axis label
plt.ylabel('Loan Status')
#Rotating the ticks of X-axis
plt.xticks(rotation=45)
plt.show()
#- Graduate group has asked for higher loan services irrespective of the approval.
print('----------------------------------------')
#4 Checking whether being graduate or not also leads to different loan amount distribution
#Subsetting the dataframe based on 'Education' column
graduate=data[data['Education'] == 'Graduate']
#Subsetting the dataframe based on 'Education' column
not_graduate=data[data['Education'] == 'Not Graduate']
#Plotting density plot for 'Graduate'
graduate['LoanAmount'].plot(kind='density',label='Graduate')
#Plotting density plot for 'Not Graduate'
not_graduate['LoanAmount'].plot(kind='density',label='Not Graduate')
#For automatic legend display
plt.legend()
print('----------------------------------------')
#5 Checking correlation between the borrower's income and loan amount
#Setting up the subplots
fig,(ax_1,ax_2,ax_3)=plt.subplots(nrows = 3 , ncols = 1,figsize=[20,10])
#Plotting scatter plot
ax_1.scatter(data['ApplicantIncome'],data['LoanAmount'])
#Setting the subplot axis title
ax_1.set_title('Applicant Income')
#Plotting scatter plot
ax_2.scatter(data['CoapplicantIncome'],data['LoanAmount'])
#Setting the subplot axis title
ax_2.set_title('Coapplicant Income')
#Creating a new column 'TotalIncome'
data['TotalIncome']=data['ApplicantIncome']+data['CoapplicantIncome']
#Plotting scatter plot
ax_3.scatter(data['TotalIncome'],data['LoanAmount'])
#Setting the subplot axis title
ax_3.set_title('Total Income')
# High Correlation between 'ApplicantIncome' and 'LoanAmount'
import model
import utils
import json
import pandas as pd
from sklearn.linear_model import LogisticRegression
from numpy.random import RandomState
from unittest import TestCase
class ModelTests(TestCase):
def test_split_dataset(self):
parquets = utils.get_files("parquets", "*.parquet")
if len(parquets) > 0:
data = pd.read_parquet(parquets[0])
# split into training and validation
training_set, validation_set = model.split_dataset(data, 0.25, 1)
number_of_customers = len(data)
customers_to_train = len(training_set)
customers_to_validate = len(validation_set)
assert number_of_customers == customers_to_train + customers_to_validate
def test_predict_multiple_models(self):
customer = dict(id="8db4206f-8878-174d-7a23-dd2c4f4ef5a0",
score_3=480.0,
score_4=105.2,
score_5=0.8514,
score_6=94.2,
income=50000)
#data = json.dumps(customer)
predictions = model.perform_predictions(customer, True)
files = utils.get_files("parquets", "*.parquet")
assert isinstance(predictions['predictions'], list)
assert len(predictions['predictions']) == len(files)
def test_predict_single_model(self):
customer = dict(id="8db4206f-8878-174d-7a23-dd2c4f4ef5a0",
score_3=480.0,
score_4=105.2,
score_5=0.8514,
score_6=94.2,
income=50000)
prediction = model.perform_predictions(customer, False)
assert isinstance(prediction, dict)
assert len(prediction) == 3
# # Iterate over all files in a folder
# import os
# import re
# dirs = os.listdir("./models/")
# table = []
# for name in dirs:
# # if len(name.split("_")) != 4:
# # continue
# if 'clear' not in name:
# continue
# filename = "./models/%s/train.log" % name
# with open(filename, "r") as f:
# lines = f.read().split("\n")[-6:]
# output = name.split("_")[:3]
# # output[2] = int(re.findall(r"\d+",output[2])[0])
# for line in lines:
# if "Test" in line:
# output.append(line.split(":")[-1])
# table.append(output)
# table = sorted(table)
# table = [[str(i) for i in line] for line in table]
# table = "\n".join(["\t".join(line) for line in table])
# print(table)
import pickle
import os
import numpy as np
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score, roc_auc_score
def read_triple(file_path, entity2id, relation2id):
'''
Read triples and map them into ids.
'''
triples = []
with open(file_path) as fin:
for line in fin:
h, r, t = line.strip().split('\t')
try:
triples.append((entity2id[h], relation2id[r], entity2id[t]))
except KeyError:
# skip triples whose entity or relation is missing from the id maps
pass
return triples
dataset = "YAGO3-10"
fake = 10
data_path = "./data/%s" % dataset
with open(os.path.join(data_path, 'entities.dict')) as fin:
entity2id = dict()
id2entity = dict()
for line in fin:
eid, entity = line.strip().split('\t')
entity2id[entity] = int(eid)
id2entity[int(eid)] = entity
with open(os.path.join(data_path, 'relations.dict')) as fin:
relation2id = dict()
id2relation = dict()
for line in fin:
rid, relation = line.strip().split('\t')
relation2id[relation] = int(rid)
id2relation[int(rid)] = relation
nentity = len(entity2id)
nrelation = len(relation2id)
train_triples = read_triple(os.path.join(data_path, 'train.txt'), entity2id, relation2id)
fake_triples = pickle.load(open(os.path.join(data_path, "fake%s.pkl" % fake), "rb"))
model = "TransE"
with open("./models/%s_%s_CLF_soft10/confidence_weight.pkl" % (model, dataset), "rb") as f:
confidence_weight = pickle.load(f)
predict, label = [], []
for triple in train_triples:
predict.append(confidence_weight[triple].item())
label.append(1)
min100_triple = np.array(predict).argsort()[:100]
with open("codes/min_score_true100.txt", "w") as fw:
for index in min100_triple:
h, r, t = train_triples[index]
head, relation, tail = id2entity[h], id2relation[r], id2entity[t]
print("%s\t%s\t%s\t%f" % (head, relation, tail, predict[index]))
fw.write("%s\t%s\t%s\t%f\n" % (head, relation, tail, predict[index]))
max100_triple = np.array(predict).argsort()[-100:]
with open("codes/max_score_true100.txt", "w") as fw:
for index in max100_triple:
h, r, t = train_triples[index]
head, relation, tail = id2entity[h], id2relation[r], id2entity[t]
print("%s\t%s\t%s\t%f" % (head, relation, tail, predict[index]))
fw.write("%s\t%s\t%s\t%f\n" % (head, relation, tail, predict[index]))
# for triple in fake_triples:
# predict.append(confidence_weight[triple].item())
# label.append(0)
#
# y_score, y_true = np.array(predict), np.array(label)
# auc = roc_auc_score(y_true=y_true, y_score=y_score)
# specificity = recall_score(y_true=1 - y_true, y_pred=y_score < 0.5)
# print("auc: %f, specificity: %f" % (auc, specificity))
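The `argsort()[:100]` / `argsort()[-100:]` slicing used above to pull the 100 lowest- and highest-confidence triples is the standard bottom-k / top-k index pattern; on toy scores:

```python
import numpy as np

scores = np.array([0.9, 0.1, 0.5, 0.7, 0.3])
order = np.argsort(scores)      # indices ordered from smallest to largest score
lowest_2 = order[:2]            # two least-confident entries  -> indices [1, 4]
highest_2 = order[-2:]          # two most-confident entries   -> indices [3, 0]
```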
# -*- coding: utf-8 -*-
from __future__ import print_function
import grpc
import servers.data_server_pb2 as data_server_pb2
import servers.data_server_pb2_grpc as data_server_pb2_grpc
from concurrent import futures
from multiprocessing import Process
from utils.hdfs_utils import HDFSClient, multi_download
import time
import sys
import os
import xxhash
import numpy as np
from utils.logger import logging
class DataClient(object):
def __init__(self):
self.stub_list = []
self.load_data_into_patch = None
def uid_shard(self, uid):
try:
uid_hash = xxhash.xxh32(str(uid), seed=101).intdigest()
except Exception:
return -1
shard_idx = uid_hash % len(self.stub_list)
return shard_idx
# should set all params to numpy array with shape and dtype
# buggy here
def set_param_by_uid(self, uid, param_dict):
shard_idx = self.uid_shard(uid)
if shard_idx == -1:
return -1
user_param = data_server_pb2.UserParams()
user_param.uid = uid
for key in param_dict:
param = data_server_pb2.Param()
param.name = key
np_var = param_dict[param.name]
param.shape.extend(np_var.shape)
param.weight.extend(np_var.ravel())
user_param.user_params.extend([param])
call_future = self.stub_list[shard_idx].UpdateUserParams.future(
user_param)
err_code = call_future.result().err_code
return err_code
def get_param_by_uid(self, uid):
shard_idx = self.uid_shard(uid)
if shard_idx == -1:
return -1
data = data_server_pb2.Data()
data.uid = uid
call_future = self.stub_list[shard_idx].GetUserParams.future(data)
user_params = call_future.result()
param_dict = {}
for param in user_params.user_params:
param_dict[param.name] = np.array(
list(param.weight), dtype=np.float32)
param_dict[param.name].shape = list(param.shape)
return param_dict
def clear_user_data(self, date):
def clear():
for stub in self.stub_list:
data = data_server_pb2.Data()
data.date = date
call_future = stub.ClearUserData.future(data)
res = call_future.result()
p = Process(target=clear, args=())
p.start()
p.join()
def get_data_by_uid(self, uid, date):
shard_idx = self.uid_shard(uid)
if shard_idx == -1:
return -1
data = data_server_pb2.Data()
data.uid = uid
data.date = date
call_future = self.stub_list[shard_idx].GetUserData.future(data)
user_data_list = []
for item in call_future.result().line_str:
user_data_list.append(item)
return user_data_list
def set_data_server_endpoints(self, endpoints):
self.stub_list = []
for ep in endpoints:
options = [('grpc.max_message_length', 1024 * 1024 * 1024),
('grpc.max_receive_message_length', 1024 * 1024 * 1024)]
channel = grpc.insecure_channel(ep, options=options)
stub = data_server_pb2_grpc.DataServerStub(channel)
self.stub_list.append(stub)
def global_shuffle_by_patch(self, data_patch, date, concurrency):
shuffle_time = len(data_patch) // concurrency + 1  # floor division so range() receives an int
for i in range(shuffle_time):
if i * concurrency >= len(data_patch):
break
pros = []
end = min((i + 1) * concurrency, len(data_patch))
patch_list = data_patch[i * concurrency:end]
width = len(patch_list)
for j in range(width):
p = Process(
target=self.send_one_patch, args=(patch_list[j], date))
pros.append(p)
for p in pros:
p.start()
for p in pros:
p.join()
logging.info("shuffle round {} done.".format(i))
def send_one_patch(self, patch, date):
for line in patch:
group = line.strip().split("\t")
if len(group) != 3:
continue
data = data_server_pb2.Data()
data.uid = group[0]
data.date = date
data.line = line.strip()
stub_idx = self.uid_shard(data.uid)
if stub_idx == -1:
logging.info("send_one_patch continue for uid: %s" % data.uid)
continue
call_future = self.stub_list[stub_idx].SendData.future(data)
u_num = call_future.result()
def global_shuffle_by_file(self, filelist, concurrency):
pass
def set_load_data_into_patch_func(self, func):
self.load_data_into_patch = func
def get_local_files(self,
base_path,
date,
node_idx,
node_num,
hdfs_configs=None):
full_path = "{}/{}".format(base_path, date)
if os.path.exists(full_path):
file_list = os.listdir(full_path)
local_files = ["{}/{}".format(full_path, x) for x in file_list]
elif hdfs_configs is not None:
local_files = self.download_from_hdfs(hdfs_configs, base_path,
date, node_idx, node_num)
else:
local_files = []
return local_files
def download_from_hdfs(self, hdfs_configs, base_path, date, node_idx,
node_num):
# return local filelist
hdfs_client = HDFSClient("$HADOOP_HOME", hdfs_configs)
multi_download(
hdfs_client,
"{}/{}".format(base_path, date),
date,
node_idx,
node_num,
multi_processes=30)
filelist = os.listdir(date)
files = ["{}/{}".format(date, fn) for fn in filelist]
return files
def test_global_shuffle():
data_client = DataClient()
server_endpoints = ["127.0.0.1:{}".format(50050 + i) for i in range(10)]
data_client.set_data_server_endpoints(server_endpoints)
date = "0330"
file_name = ["data_with_uid/part-01991"]
with open(file_name[0]) as fin:
for line in fin:
group = line.strip().split("\t")
uid = group[0]
user_data_dict = data_client.get_data_by_uid(uid, date)
def test_set_param():
data_client = DataClient()
server_endpoints = ["127.0.0.1:{}".format(50050 + i) for i in range(10)]
data_client.set_data_server_endpoints(server_endpoints)
uid = ["1001", "10001", "100001", "101"]
param_dict = {"w0": [1.0, 1.1, 1.2, 1.3], "b0": [1.1, 1.2, 1.3, 1.5]}
for cur_i in uid:
data_client.set_param_by_uid(cur_i, param_dict)
def test_get_param():
data_client = DataClient()
server_endpoints = ["127.0.0.1:{}".format(50050 + i) for i in range(10)]
data_client.set_data_server_endpoints(server_endpoints)
uid = ["1001", "10001", "100001", "101"]
for cur_i in uid:
param_dict = data_client.get_param_by_uid(cur_i)
print(param_dict)
if __name__ == "__main__":
#load_data_global_shuffle()
#test_global_shuffle()
test_set_param()
test_get_param()
# -*- coding: utf-8 -*-
import numpy
from typing import List
def polynomials(p: List[float], x: int) -> float:
"""
>>> polynomials([1.1, 2.0, 3.0], 0)
3.0
"""
polyval = numpy.polyval(p, x)
return polyval
if __name__ == '__main__':
p, x = [*map(float, input().split())], int(input())
print(polynomials(p, x))
#######################################################################
# Copyright (C) 2017 Shangtong Zhang(zhangshangtong.cpp@gmail.com) #
# Permission given to modify the code as long as you keep this #
# declaration at the top #
#######################################################################
import numpy as np
import torch
from baselines.common.running_mean_std import RunningMeanStd
class BaseNormalizer:
def __init__(self, read_only=False):
self.read_only = read_only
def set_read_only(self):
self.read_only = True
def unset_read_only(self):
self.read_only = False
def state_dict(self):
return None
def load_state_dict(self, _):
return
class MeanStdNormalizer(BaseNormalizer):
def __init__(self, read_only=False, clip=10.0, epsilon=1e-8):
BaseNormalizer.__init__(self, read_only)
self.read_only = read_only
self.rms = None
self.clip = clip
self.epsilon = epsilon
def __call__(self, x):
x = np.asarray(x)
if self.rms is None:
self.rms = RunningMeanStd(shape=(1,) + x.shape[1:])
if not self.read_only:
self.rms.update(x)
if self.clip is None:
return (x - self.rms.mean) / np.sqrt(self.rms.var + self.epsilon)
else:
return np.clip((x - self.rms.mean) / np.sqrt(self.rms.var + self.epsilon),
-self.clip, self.clip)
def state_dict(self):
return {'mean': self.rms.mean,
'var': self.rms.var}
def load_state_dict(self, saved):
self.rms.mean = saved['mean']
self.rms.var = saved['var']
class RescaleNormalizer(BaseNormalizer):
def __init__(self, coef=1.0):
BaseNormalizer.__init__(self)
self.coef = coef
def __call__(self, x):
if not isinstance(x, torch.Tensor):
x = np.asarray(x)
return self.coef * x
class ImageNormalizer(RescaleNormalizer):
def __init__(self):
RescaleNormalizer.__init__(self, 1.0 / 255)
class SignNormalizer(BaseNormalizer):
def __call__(self, x):
return np.sign(x)
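`MeanStdNormalizer` leans on baselines' `RunningMeanStd`; the sketch below substitutes a minimal stand-in (the `SimpleMeanStd` class is ours, assumed to follow the usual parallel mean/variance combination) to show the same normalize-then-clip step:

```python
import numpy as np

class SimpleMeanStd:
    """Minimal running mean/variance tracker, a hypothetical stand-in for
    baselines.common.running_mean_std.RunningMeanStd."""
    def __init__(self, shape):
        self.mean = np.zeros(shape)
        self.var = np.ones(shape)
        self.count = 1e-4  # tiny prior count avoids division by zero

    def update(self, x):
        batch_mean, batch_var, n = x.mean(axis=0), x.var(axis=0), x.shape[0]
        delta = batch_mean - self.mean
        total = self.count + n
        # parallel combination of the two sample moments (Chan et al.)
        m2 = self.var * self.count + batch_var * n + delta ** 2 * self.count * n / total
        self.mean = self.mean + delta * n / total
        self.var = m2 / total
        self.count = total

rms = SimpleMeanStd(shape=(1,))
x = np.arange(10, dtype=float).reshape(-1, 1)
rms.update(x)
normalized = np.clip((x - rms.mean) / np.sqrt(rms.var + 1e-8), -10.0, 10.0)
```

After one update the tracked mean is essentially the batch mean, so the normalized batch is centered near zero, mirroring what `MeanStdNormalizer.__call__` produces.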
from skimage import io  # imread used in __getitem__ comes from scikit-image, not the stdlib io module
import os
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset
class FaceLandmarksDataset(Dataset):
"""Face Landmarks dataset."""
def __init__(self, csv_file, root_dir, transform=None):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.tags = pd.read_csv(csv_file)
self.root_dir = root_dir
self.transform = transform
def __len__(self):
return len(self.tags)
def __getitem__(self, idx):
if torch.is_tensor(idx):
idx = idx.tolist()
img_name = os.path.join(self.root_dir,
self.tags.iloc[idx, 0])
image = io.imread(img_name)
landmarks = self.tags.iloc[idx, 1:]
landmarks = np.array([landmarks])
landmarks = landmarks.astype('float').reshape(-1, 2)
sample = {'image': image, 'landmarks': landmarks}
if self.transform:
sample = self.transform(sample)
return sample
# INTEL CONFIDENTIAL
#
# Copyright (C) 2021 Intel Corporation
#
# This software and the related documents are Intel copyrighted materials, and
# your use of them is governed by the express license under which they were provided to
# you ("License"). Unless the License provides otherwise, you may not use, modify, copy,
# publish, distribute, disclose or transmit this software or the related documents
# without Intel's prior written permission.
#
# This software and the related documents are provided as is,
# with no express or implied warranties, other than those that are expressly stated
# in the License.
import numpy as np
import pytest
from ote_sdk.entities.label import Domain, LabelEntity
from ote_sdk.entities.scored_label import ScoredLabel
from ote_sdk.tests.constants.ote_sdk_components import OteSdkComponent
from ote_sdk.tests.constants.requirements import Requirements
from ote_sdk.usecases.exportable_code.prediction_to_annotation_converter import (
DetectionToAnnotationConverter,
)
@pytest.mark.components(OteSdkComponent.OTE_SDK)
class TestPredictionToAnnotationConverter:
@pytest.mark.priority_medium
@pytest.mark.component
@pytest.mark.reqids(Requirements.REQ_1)
def test_detection_to_annotation_convert(self):
"""
<b>Description:</b>
Check that DetectionToAnnotationConverter correctly converts Network output to list of Annotation
<b>Input data:</b>
Array of network output with shape [4,6]
<b>Expected results:</b>
Test passes if each Converted annotation has the same values as the network output
<b>Steps</b>
1. Create mock network output
2. Convert network output to Annotation
3. Check Annotations
"""
test_boxes = np.array(
(
(0, 0.6, 0.1, 0.1, 0.2, 0.3),
(1, 0.2, 0.2, 0.1, 0.3, 0.4),
(1, 0.7, 0.3, 0.2, 0.5, 0.6),
(0, 0.1, 0.1, 0.1, 0.2, 0.3),
)
)
labels = [
LabelEntity("Zero", domain=Domain.DETECTION),
LabelEntity("One", domain=Domain.DETECTION),
]
converter = DetectionToAnnotationConverter(labels)
annotation_scene = converter.convert_to_annotation(test_boxes)
for i, annotation in enumerate(annotation_scene.annotations):
label: ScoredLabel = next(iter(annotation.get_labels()))
test_label = labels[int(test_boxes[i][0])]
assert test_label.name == label.name
assert test_boxes[i][1] == label.probability
assert test_boxes[i][2] == annotation.shape.x1
assert test_boxes[i][3] == annotation.shape.y1
assert test_boxes[i][4] == annotation.shape.x2
assert test_boxes[i][5] == annotation.shape.y2
annotation_scene = converter.convert_to_annotation(np.ndarray((0, 6)))
assert 0 == len(annotation_scene.shapes)
@pytest.mark.priority_medium
@pytest.mark.component
@pytest.mark.reqids(Requirements.REQ_1)
def test_detection_to_annotation_convert_openvino_shape(self):
"""
<b>Description:</b>
Check that DetectionToAnnotationConverter correctly converts OpenVINO Network output to annotations
<b>Input data:</b>
Array of network output with shape [4,7]
<b>Expected results:</b>
Test passes if each Converted annotation has the same values as the network output
<b>Steps</b>
1. Create mock network output
2. Convert network output to Annotation
3. Check Annotations
"""
test_boxes = np.array(
(
(-12, 0, 0.6, 0.1, 0.1, 0.2, 0.3),
(12, 1, 0.2, 0.0, 0.1, 0.1, 0.2),
(1234, 1, 0.7, 0.2, 0.4, 0.7, 0.5),
(1251, 0, 0.1, 0.1, 0.1, 0.2, 0.3),
)
)
labels = [
LabelEntity("Zero", domain=Domain.DETECTION),
LabelEntity("One", domain=Domain.DETECTION),
]
converter = DetectionToAnnotationConverter(labels)
annotation_scene = converter.convert_to_annotation(test_boxes)
for i, annotation in enumerate(annotation_scene.annotations):
label: ScoredLabel = next(iter(annotation.get_labels()))
test_label = labels[int(test_boxes[i][1])]
assert test_label.name == label.name
assert test_boxes[i][2] == label.probability
assert test_boxes[i][3] == annotation.shape.x1
assert test_boxes[i][4] == annotation.shape.y1
assert test_boxes[i][5] == annotation.shape.x2
assert test_boxes[i][6] == annotation.shape.y2
@pytest.mark.priority_medium
@pytest.mark.component
@pytest.mark.reqids(Requirements.REQ_1)
def test_detection_to_annotation_convert_invalid_input(self):
"""
<b>Description:</b>
Check that DetectionToAnnotationConverter raises an error if invalid inputs are provided
<b>Input data:</b>
Array of size [1203, 5]
Array of size [3, 8]
<b>Expected results:</b>
Test passes if a ValueError is raised for both inputs
<b>Steps</b>
1. Create DetectionToAnnotationConverter
2. Attempt to convert array of [1203,5] to annotations
3. Attempt to convert array of [3, 8] to annotations
"""
labels = [
LabelEntity("Zero", domain=Domain.DETECTION),
LabelEntity("One", domain=Domain.DETECTION),
]
converter = DetectionToAnnotationConverter(labels)
with pytest.raises(ValueError):
converter.convert_to_annotation(np.ndarray((1203, 5)))
with pytest.raises(ValueError):
converter.convert_to_annotation(np.ndarray((3, 8))) | |
"""Module to implement a simple feature selection system based on thresholds over
energy and spectral flatness."""
import librosa
import numpy as np
from audio_loader.activity_detection.feature_selection import FeatureSelection
class Simple(FeatureSelection):
"""Simple voice activity detection, based on signal energy and spectral flatness."""
def __init__(self,
win_size,
hop_size,
sampling_rate,
energy_threshold=0.2,
spectral_flatness_threshold=0.3,
smooth=5):
"""Initializes activity detector.
Parameters
----------
win_size: int
Number of samples to use for the window size.
hop_size: int
Number of samples to use for hopping windows.
sampling_rate: int
Sampling rate expected by the signal in the process method.
energy_threshold: float, optional
Between 0.0 and 1.0.
spectral_flatness_threshold: float, optional
Between 0.0 and 1.0.
smooth: int, optional
Maximum length (in frames) of inactive gaps to fill.
"""
super().__init__(win_size, hop_size, sampling_rate, padding=True)
self.energy_threshold = energy_threshold
self.spectral_flatness_threshold = spectral_flatness_threshold
self.smooth = smooth
def process(self, signal):
"""Executes the activity detection.
Parameters
----------
signal (array): 2d signal
(n, channel)
Return
------
vector of activity
"""
signal = signal.transpose(1, 0)
res = []
for channel in signal:
# compute required features
computed_energy = librosa.feature.rms(
y=channel, frame_length=self.win_size, hop_length=self.hop_size)
computed_spectral_flatness = librosa.feature.spectral_flatness(
y=channel, n_fft=self.win_size, hop_length=self.hop_size)
# Voice Activity Detection
energy95p = np.percentile(computed_energy, 95)
if energy95p == 0:
raise ValueError("The channel is silent")
normalized_en = computed_energy / energy95p
out = np.logical_and(
normalized_en > self.energy_threshold,
computed_spectral_flatness < self.spectral_flatness_threshold
)
if self.smooth > 0:
start = -self.smooth
for i in range(out.shape[1]):
if out[:, i]:
if (start != i-1) and (i-start < self.smooth):
out[:, start+1:i] = True
start = i
res.append(out.flatten())
return np.array(res) | |
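The hole-filling pass at the end of `Simple.process` can be isolated as a small self-contained sketch (pure numpy, names illustrative, not part of the library): short inactive gaps between active frames, as bounded by `smooth`, are marked active.

```python
import numpy as np

# Illustrative sketch of the hole-filling smoothing used in
# Simple.process, applied to a 1-D boolean activity mask.
def fill_holes(mask, smooth):
    out = mask.copy()
    start = -smooth
    for i in range(out.shape[0]):
        if out[i]:
            # fill the gap since the previous active frame when it is
            # short enough (i - start < smooth)
            if (start != i - 1) and (i - start < smooth):
                out[start + 1:i] = True
            start = i
    return out

active = np.array([True, False, False, True, False, True])
print(fill_holes(active, smooth=3))
```

With `smooth=3`, the single-frame gap is filled while the two-frame gap is left untouched.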
import torch
import numpy as np
def adjust_for_ortho(boxes, position, div_num):
# avoid an unbound out_boxes when boxes is empty
out_boxes = torch.empty((0, 4))
for idx, box in enumerate(boxes):
tl_x = box[0]
tl_y = box[1]
br_x = box[2]
br_y = box[3]
# positions start from 0, not 1
adj_x = (position[1] - 1 - 11) * 600
adj_y = (position[0] - 1 - 8) * 600
out_box = torch.tensor([
tl_x + adj_x,
tl_y + adj_y,
br_x + adj_x,
br_y + adj_y
]).unsqueeze(0)
if idx == 0:
out_boxes = out_box
else:
out_boxes = torch.cat(
(out_boxes, out_box), 0)
return out_boxes
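The tile-offset arithmetic above (600-pixel tiles with the hard-coded grid origin) can be sketched without torch; the constants mirror the code, while the function and box values are illustrative only.

```python
import numpy as np

# Numpy sketch of the per-tile offset applied in adjust_for_ortho:
# (position[1] - 12) and (position[0] - 9) select the tile's column/row
# relative to the grid origin, scaled by the 600-pixel tile size.
def tile_offset(position, tile=600, col_origin=12, row_origin=9):
    adj_x = (position[1] - col_origin) * tile
    adj_y = (position[0] - row_origin) * tile
    return adj_x, adj_y

box = np.array([10.0, 20.0, 110.0, 120.0])  # tl_x, tl_y, br_x, br_y
adj_x, adj_y = tile_offset((9, 13))
print(box + np.array([adj_x, adj_y, adj_x, adj_y]))  # shifted one tile right
```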
# this is a temporary implementation for the term presentation,
# so this function is restricted to 3 x 3 ortho images
def unite_images(images, idxs, positions, div_nums):
# all elements in div_nums (list) are the same.
div_num = div_nums[0]
print(images[0]) | |
from astropy import units as u
# from functions.bodies import BODIES as _BODIES
from poliastro.twobody import Orbit
from astropy import time
import datetime
from poliastro import ephem
if __name__ == "__main__":
from poliastro.bodies import Earth, Mars, Sun
epoch = time.Time(datetime.datetime.now()) # UTC by default
res = Orbit.from_body_ephem(Earth, epoch)
print(res)
print("END OF GAME") | |
# <markdowncell>
# ## Shows the plotting tools.
# <markdowncell> Import teneto, numpy and matplotlib
# <codecell>
import teneto
import numpy as np
import matplotlib.pyplot as plt
# <markdowncell> Set color scheme
# <codecell>
plt.rcParams['image.cmap'] = 'gist_gray'
# <markdowncell> Create a 3D network
# <codecell>
A=np.zeros((3,3,3))
A[0,1,:]=1
A[1,0,:]=1
A[0,2,1:]=1
A[2,0,1:]=1
A[1,2,2]=1
A[2,1,2]=1
# <markdowncell> Create a figure with two axes
# <codecell>
fig, ax = plt.subplots(1,2)
# <markdowncell> Plot circle graph
# <codecell>
ax[0] = teneto.plot.circle_plot(A[:,:,1],ax[0])
ax[0].set_title('A',loc='left')
# <markdowncell> Plot slice graphlet
# <codecell>
ax[1] = teneto.plot.slice_plot(A,ax[1],['Ashley','Blake','Casey'],['2014','2015','2016'])
ax[1].set_xlabel('time (years)')
ax[1].set_title('B',loc='left')
# <markdowncell> save figures
# <codecell>
fig.tight_layout()
fig.savefig('./examples/figures/friendexampletst.pdf')
fig.savefig('./examples/figures/friendexample.eps') | |
# -*- coding: utf-8 -*-
import numpy as np
from functools import reduce
from flare import pipe as fp
class Sequential(list):
def __init__(self, seq=None, **kwargs):
super(Sequential, self).__init__(seq if seq is not None else [], **kwargs)
def assertDuplication(self):
result = True
for elm in self:
result = result and (np.all(elm == self[0]))
if not result:
break
return result
class HierarchicalDict(dict):
def __init__(self, seq=None, **kwargs):
super(HierarchicalDict, self).__init__(seq if seq is not None else {}, **kwargs)
def __getitem__(self, subkeys):
if isinstance(subkeys, str):
return dict.__getitem__(self, subkeys)
elif isinstance(subkeys, list) or isinstance(subkeys, tuple):
val = self
for subkey in subkeys:
if isinstance(val, Sequential):
val = Sequential([elm[subkey] for elm in val])
else:
val = val[subkey]
return val
else:
raise(Exception('type error'))
def find_keys(self, key):
subkeys = key.split('.')
val = self
results = []
try:
result = []
for subkey in subkeys:
if isinstance(val, Sequential):
val = val[0][subkey]
else:
val = val[subkey]
result.append(subkey)
results.append(result)
if isinstance(val, HierarchicalDict):
for k in val.keys():
if k[0] != '_':
subresults = val.find_keys(k)
for subresult in subresults:
results.append(result + subresult)
elif isinstance(val, Sequential):
for k in val[0].keys():
if k[0] != '_':
subresults = val[0].find_keys(k)
for subresult in subresults:
results.append(result + subresult)
except KeyError:
results = []
return results
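The dotted-path access that `__getitem__` and `find_keys` implement can be illustrated standalone with plain dicts (names illustrative): a sequence of subkeys walks nested mappings one level at a time.

```python
# Standalone sketch of the path-style lookup HierarchicalDict provides.
def get_path(d, subkeys):
    val = d
    for k in subkeys:
        val = val[k]
    return val

cfg = {'a': {'b': {'c': 42}}}
print(get_path(cfg, 'a.b.c'.split('.')))  # → 42
```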
class ObservableDict(HierarchicalDict):
def __init__(self, inputs, outputs, layout=[], layout_in=None, layout_out=None, layouts_in=None):
super(ObservableDict, self).__init__([])
self.list_input = inputs
self.list_output = outputs
self.fields_input = set(inputs)
self.fields_output = set(outputs)
self.fields_processed = set()
self.layout = layout
if layout_in is None:
layout_in = layout
if layout_out is None:
layout_out = layout
self.layout_in = layout_in
self.layout_out = layout_out
self.layout_input = [1, len(self.fields_input)] + layout_in
self.layout_output = [1, len(self.fields_output)] + layout_out
self.data_input = np.zeros(self.layout_input)
self.data_output = np.zeros(self.layout_output)
def __setitem__(self, key, value):
dict.__setitem__(self, key, value)
subkeys = self.find_keys(key)
for subkey in subkeys:
keystr = '.'.join(subkey)
if keystr in self.list_input:
pos = self.list_input.index(keystr)
if len(self.layout_input) == 6:
val = np.array(self[subkey])
if val.ndim == 0:  # np.isscalar is always False for 0-d arrays
value = val * np.ones(self.layout_in)
else:
value = val.reshape(self.layout_in)
self.data_input[0, pos, :, :, :, :] = value
elif len(self.layout_input) == 5:
val = np.array(self[subkey])
if val.ndim == 0:
value = val * np.ones(self.layout_in)
else:
value = val.reshape(self.layout_in)
self.data_input[0, pos, :, :, :] = value
elif len(self.layout_input) == 4:
self.data_input[0, pos, :, :] = np.array(self[subkey]).reshape(self.layout_in)
elif len(self.layout_input) == 3:
self.data_input[0, pos, :] = np.array(self[subkey]).reshape(self.layout_in)
elif len(self.layout_input) == 2:
self.data_input[0, pos] = np.array(self[subkey]).reshape(self.layout_in)
else:
raise(Exception('layout error'))
if keystr in self.list_output:
pos = self.list_output.index(keystr)
if len(self.layout_output) == 6:
val = np.array(self[subkey])
if val.ndim == 0:
value = val * np.ones(self.layout_out)
else:
value = val.reshape(self.layout_out)
self.data_output[0, pos, :, :, :, :] = value
elif len(self.layout_output) == 5:
val = np.array(self[subkey])
if val.ndim == 0:
value = val * np.ones(self.layout_out)
else:
value = val.reshape(self.layout_out)
self.data_output[0, pos, :, :, :] = value
elif len(self.layout_output) == 4:
self.data_output[0, pos, :, :] = np.array(self[subkey]).reshape(self.layout_out)
elif len(self.layout_output) == 3:
self.data_output[0, pos, :] = np.array(self[subkey]).reshape(self.layout_out)
elif len(self.layout_output) == 2:
self.data_output[0, pos] = np.array(self[subkey]).reshape(self.layout_out)
else:
raise(Exception('layout error'))
def update(self, another):
for k in another.keys():
self[k] = another[k]
class memo(object):
def __init__(self, f):
self.fn = f
self._curr_ = None
self._last_ = None
def __call__(self, *args, **kwargs):
for data in self.fn(*args, **kwargs):
self._curr_ = data
yield data
def _swap_(self, *args, **kwargs):
self._last_ = self._curr_
def last(f):
def wrapped(*args, **kwargs):
for data in f(*args, **kwargs):
if '_last_' in dir(f):
data.update({'_last_': f._last_})
f._swap_()
yield data
return wrapped
def attributes(*names):
def wrapper(f):
def wrapped(*args, **kwargs):
for data in f(*args, **kwargs):
kvs = zip(names, data)
result = HierarchicalDict({k: v for k, v in kvs})
if '_last_' in dir(f):
result.update({'_last_': f._last_})
f._swap_()
yield result
return wrapped
return wrapper
def discrete(inputs, outputs, layout=[]):
def wrapper(f):
def wrapped(*args, **kwargs):
for data in f(*args, **kwargs):
result = ObservableDict(inputs, outputs, layout)
result.update(data)
yield result
return wrapped
return wrapper
def sequential(inputs, outputs, layout=[], layout_in=None, layout_out=None):
def wrapper(f):
def wrapped(*args, **kwargs):
for data in f(*args, **kwargs):
result = ObservableDict(inputs, outputs, layout, layout_in, layout_out)
result.update(data)
yield result
return wrapped
return wrapper
def feature(g, inputs, outputs):
def wrapper(f):
def wrapped(*args, **kwargs):
for result in f(*args, **kwargs):
kvin = {k: result[k] for k in inputs}
result.update({k: v for k, v in zip(outputs, g(**kvin))})
yield result
return wrapped
return wrapper
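As a usage sketch (hypothetical generator and field names), `feature` maps a pure function over named fields of each dict a decorated generator yields; the decorator body is reproduced so the sketch is self-contained.

```python
# Same generator-decorator pattern as the `feature` decorator above.
def feature(g, inputs, outputs):
    def wrapper(f):
        def wrapped(*args, **kwargs):
            for result in f(*args, **kwargs):
                kvin = {k: result[k] for k in inputs}
                result.update({k: v for k, v in zip(outputs, g(**kvin))})
                yield result
        return wrapped
    return wrapper

@feature(lambda x: (x * 2,), inputs=['x'], outputs=['y'])
def source():  # hypothetical data source
    yield {'x': 1}
    yield {'x': 3}

print([d['y'] for d in source()])  # → [2, 6]
```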
def filter(g, inputs):
def wrapper(f):
def wrapped(*args, **kwargs):
for result in f(*args, **kwargs):
if isinstance(result, HierarchicalDict):
kvin = {k: result[k] for k in inputs}
if g(**kvin):
yield result
else:
flags = []
filtered = []
for elm in result:
kvin = {k: elm[k] for k in inputs}
filtered.append(elm)
flags.append(g(**kvin))
if reduce(bool.__and__, flags, True):
yield filtered
return wrapped
return wrapper
def window(window_size=1):
def wrapper(f):
def wrapped(*args, **kwargs):
return fp.roll(f(*args, **kwargs), window_size=window_size)
return wrapped
return wrapper
def segment(segment_size=1):
def wrapper(f):
def wrapped(*args, **kwargs):
for seg in fp.batches(f(*args, **kwargs), batch_size=segment_size):
yield Sequential(seg)
return wrapped
return wrapper
def divid(lengths=[1], names=['x']):
begins = {}
ends = {}
pos = 0
for k, l in zip(names, lengths):
begins[k] = pos
ends[k] = pos + l
pos = ends[k]
def wrapper(f):
def wrapped(*args, **kwargs):
for data in f(*args, **kwargs):
yield HierarchicalDict({k:Sequential(data[begins[k]:ends[k]]) for k in names})
return wrapped
return wrapper
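The slice-boundary bookkeeping in `divid` can be checked in isolation (helper name illustrative): lengths and names are turned into begin/end indices over a flat sequence.

```python
# Self-contained sketch of divid's begins/ends computation.
def make_slices(lengths, names):
    begins, ends, pos = {}, {}, 0
    for k, l in zip(names, lengths):
        begins[k], ends[k] = pos, pos + l
        pos = ends[k]
    return begins, ends

begins, ends = make_slices([2, 3], ['x', 'y'])
data = [10, 20, 30, 40, 50]
print({k: data[begins[k]:ends[k]] for k in ['x', 'y']})
```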
def assertDup(keys):
keys = keys.split('.')
def wrapper(f):
def wrapped(*args, **kwargs):
for value in f(*args, **kwargs):
seq = value[keys]
if isinstance(seq, Sequential):
assert(seq.assertDuplication())
return f(*args, **kwargs)
return wrapped
return wrapper
def assertNoDup(keys):
keys = keys.split('.')
def wrapper(f):
def wrapped(*args, **kwargs):
for value in f(*args, **kwargs):
seq = value[keys]
if isinstance(seq, Sequential):
assert(not seq.assertDuplication())
return f(*args, **kwargs)
return wrapped
return wrapper
mapping = feature
def debug():
def wrapper(f):
def wrapped(*args, **kwargs):
for result in f(*args, **kwargs):
print(result)
yield result
return wrapped
return wrapper
def data(swap=None):
def wrapper(f):
def wrapped(*args, **kwargs):
for result in f(*args, **kwargs):
input, output = result.data_input, result.data_output
while len(input.shape) > 2 and input.shape[1] == 1:
input = np.squeeze(input, 1)
while len(output.shape) > 2 and output.shape[1] == 1:
output = np.squeeze(output, 1)
if swap is None:
yield [(input, output)]
else:
idx = range(len(swap))
yield [(np.moveaxis(input, idx, swap), np.moveaxis(output, idx, swap))]
return wrapped
return wrapper
def shuffle(fn, repeat=1):
def wrapper(f):
def wrapped(*args, **kwargs):
results = [result for result in f(*args, **kwargs)]
for _ in range(repeat):
for result in results:
for pair in result:
xs, ys = pair
xs = np.array(xs, dtype=np.float32)
ys = np.array(ys, dtype=np.float32)
yield [fn(xs, ys)]
return wrapped
return wrapper
def rebatch(repeat=1):
def wrapper(f):
def wrapped(*args, **kwargs):
batch_count = 0
results = [result for result in f(*args, **kwargs)]
for result in results:
for pair in result:
if batch_count % repeat == 0:
result_batch_xs, result_batch_ys = [], []
batch_count += 1
xs, ys = pair
shapex = list(xs.shape)
shapex[0] = shapex[0] * repeat
shapey = list(ys.shape)
shapey[0] = shapey[0] * repeat
result_batch_xs.append(xs)
result_batch_ys.append(ys)
if batch_count % repeat == 0:
array_xs = np.array(result_batch_xs, dtype=np.float32).reshape(shapex)
array_ys = np.array(result_batch_ys, dtype=np.float32).reshape(shapey)
yield [(array_xs, array_ys)]
return wrapped
return wrapper | |
from __future__ import division
from __future__ import absolute_import
from builtins import object
from past.utils import old_div
from nose.tools import (assert_equal, assert_not_equal, raises,
assert_almost_equal)
from nose.plugins.skip import SkipTest
from .test_helpers import assert_items_almost_equal, assert_items_equal
import pandas as pd
import numpy as np
import openpathsampling as paths
import logging
logging.getLogger('openpathsampling.initialization').setLevel(logging.CRITICAL)
logging.getLogger('openpathsampling.ensemble').setLevel(logging.CRITICAL)
logging.getLogger('openpathsampling.storage').setLevel(logging.CRITICAL)
logging.getLogger('openpathsampling.netcdfplus').setLevel(logging.CRITICAL)
class TestWHAM(object):
def setup(self):
self.exact = [1.0, 0.5, 0.25, 0.125, 0.0625, 0.03125, 0.015625]
self.iface1 = [2.0, 1.0, 0.5, 0.25, 0.125, 0.0625, 0.0]
self.iface2 = [1.0, 1.0, 1.0, 0.5, 0.25, 0.125, 0.0625]
self.iface3 = [3.0, 3.0, 3.0, 3.0, 3.0, 1.5, 0.75]
# self.iface1 = [1.0, 0.5, 0.25, 0.125, 0.0625, 0.0, 0.0]
# self.iface2 = [1.0, 1.0, 1.0, 0.5, 0.25, 0.125, 0.0625]
# self.iface3 = [1.0, 1.0, 1.0, 1.0, 1.0, 0.5, 0.25]
# self.iface1 = [2.0, 0.5, 0.125, 0.0]
# self.iface2 = [1.0, 1.0, 0.25, 0.0625]
# self.iface3 = [3.0, 3.0, 3.0, 0.75]
# self.index = [0.0, 0.2, 0.4, 0.6]
self.columns = ["Interface 1", "Interface 2", "Interface 3"]
self.index = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
self.input_df = pd.DataFrame(
data=np.array([self.iface1, self.iface2, self.iface3]).T,
index=self.index,
columns=self.columns
)
self.expected_cleaned = np.array([[2.0, 0.0, 0.0],
[1.0, 0.0, 0.0],
[0.5, 1.0, 0.0],
[0.25, 0.5, 0.0],
[0.0, 0.25, 3.0],
[0.0, 0.125, 1.5],
[0.0, 0.0, 0.75]])
self.cleaned = pd.DataFrame(data=self.expected_cleaned,
index=self.index,
columns=self.columns)
self.wham = paths.numerics.WHAM(cutoff=0.1)
def test_prep_reverse_cumulative(self):
cleaned = self.wham.prep_reverse_cumulative(self.input_df)
np.testing.assert_allclose(cleaned.values,
self.expected_cleaned)
def test_prep_reverse_cumulative_with_interfaces(self):
wham = paths.numerics.WHAM(cutoff=0.1, interfaces=[0.0, 0.2, 0.3])
cleaned = wham.prep_reverse_cumulative(self.input_df)
np.testing.assert_allclose(cleaned.values,
np.array([[2.0, 0.0, 0.0],
[1.0, 0.0, 0.0],
[0.5, 1.0, 0.0],
[0.25, 0.5, 3.0],
[0.0, 0.25, 3.0],
[0.0, 0.125, 1.5],
[0.0, 0.0, 0.75]]))
def test_unweighting_tis(self):
unweighting = self.wham.unweighting_tis(self.cleaned)
expected = np.array([[1.0, 0.0, 0.0],
[1.0, 0.0, 0.0],
[1.0, 1.0, 0.0],
[1.0, 1.0, 0.0],
[0.0, 1.0, 1.0],
[0.0, 1.0, 1.0],
[0.0, 0.0, 1.0]])
np.testing.assert_allclose(unweighting.values, expected)
def test_sum_k_Hk_Q(self):
sum_k_Hk_Q = self.wham.sum_k_Hk_Q(self.cleaned)
expected = np.array([2.0, 1.0, 1.5, 0.75, 3.25, 1.625, 0.75])
np.testing.assert_allclose(sum_k_Hk_Q.values, expected)
def test_n_entries(self):
n_entries = self.wham.n_entries(self.cleaned)
expected = np.array([3.75, 1.875, 5.25])
np.testing.assert_allclose(n_entries.values, expected)
def test_weighted_counts_tis(self):
n_entries = self.wham.n_entries(self.cleaned)
unweighting = self.wham.unweighting_tis(self.cleaned)
weighted_counts = self.wham.weighted_counts_tis(unweighting,
n_entries)
expected = np.array([[3.75, 0.0, 0.0],
[3.75, 0.0, 0.0],
[3.75, 1.875, 0.0],
[3.75, 1.875, 0.0],
[0.0, 1.875, 5.25],
[0.0, 1.875, 5.25],
[0.0, 0.0, 5.25]])
np.testing.assert_allclose(weighted_counts.values, expected)
def test_generate_lnZ(self):
guess = [1.0, 1.0, 1.0]
expected_lnZ = np.log([1.0, old_div(1.0,4.0), old_div(7.0,120.0)])
# TODO: I'm not sure the last is log(7/120)
# however, I got the same result out of the old version, too, and
# this does combine into the correct result in the end (see
# test_output_histogram)
unweighting = self.wham.unweighting_tis(self.cleaned)
sum_k_Hk_Q = self.wham.sum_k_Hk_Q(self.cleaned)
weighted_counts = self.wham.weighted_counts_tis(
unweighting,
self.wham.n_entries(self.cleaned)
)
lnZ = self.wham.generate_lnZ(guess, unweighting, weighted_counts,
sum_k_Hk_Q)
np.testing.assert_allclose(lnZ.values, expected_lnZ)
def test_output_histogram(self):
sum_k_Hk_Q = self.wham.sum_k_Hk_Q(self.cleaned)
n_entries = self.wham.n_entries(self.cleaned)
unweighting = self.wham.unweighting_tis(self.cleaned)
weighted_counts = self.wham.weighted_counts_tis(unweighting,
n_entries)
lnZ = pd.Series(data=np.log([1.0, old_div(1.0,4.0), old_div(7.0,120.0)]),
index=n_entries.index)
wham_hist = self.wham.output_histogram(lnZ, sum_k_Hk_Q,
weighted_counts)
normed = self.wham.normalize_cumulative(wham_hist)
np.testing.assert_allclose(normed.values, np.array(self.exact))
def test_guess_lnZ_crossing_probability(self):
input_data = np.array([[2.0, 1.0, 5.0],
[1.0, 1.0, 5.0],
[0.5, 1.0, 5.0],
[0.1, 0.2, 5.0],
[0.0, 0.04, 1.0],
[0.0, 0.02, 0.2]])
input_df = pd.DataFrame(data=input_data,
index=self.index[0:6],
columns=self.columns)
cleaned = self.wham.prep_reverse_cumulative(input_df)
guess_lnZ = self.wham.guess_lnZ_crossing_probability(cleaned)
expected_Z = np.array([1.0, 0.25, 0.25*0.2])
np.testing.assert_allclose(guess_lnZ.values, np.log(expected_Z))
def test_wham_bam_histogram(self):
wham_hist = self.wham.wham_bam_histogram(self.input_df)
np.testing.assert_allclose(wham_hist.values, self.exact)
@raises(RuntimeError)
def test_check_overlaps_no_overlap_with_first(self):
bad_data = np.array([[1.0, 0.0, 0.0],
[0.5, 0.0, 0.0],
[0.0, 1.0, 0.0],
[0.0, 0.5, 1.0],
[0.0, 0.1, 0.2]])
bad_df = pd.DataFrame(data=bad_data,
index=self.index[0:5],
columns=self.columns)
self.wham.check_cleaned_overlaps(bad_df)
@raises(RuntimeError)
def test_check_overlaps_no_overlap_with_final(self):
bad_data = np.array([[1.0, 0.0, 0.0],
[0.5, 0.0, 0.0],
[0.2, 1.0, 0.0],
[0.1, 0.5, 0.0],
[0.0, 0.0, 1.0],
[0.0, 0.0, 0.5]])
bad_df = pd.DataFrame(data=bad_data,
index=self.index[0:6],
columns=self.columns)
self.wham.check_cleaned_overlaps(bad_df)
@raises(RuntimeError)
def test_check_overlaps_no_overlap_in_middle(self):
bad_data = np.array([[1.0, 0.0, 0.0, 0.0],
[0.5, 1.0, 0.0, 0.0],
[0.1, 0.2, 0.0, 0.0],
[0.0, 0.0, 1.0, 0.0],
[0.0, 0.0, 0.5, 1.0],
[0.0, 0.0, 0.1, 0.2]])
bad_df = pd.DataFrame(data=bad_data,
index=self.index[0:6],
columns=self.columns + ['Interface 4'])
self.wham.check_cleaned_overlaps(bad_df) | |
#!/usr/bin/python3
import numpy as np
import helper.basis
from helper.figure import Figure
import helper.plot
def main():
p = 3
fig = Figure.create(figsize=(2.3, 1.3))
ax = fig.gca()
basisWF = helper.basis.WeaklyFundamentalSpline(p)
supportWF = basisWF.getSupport()
K = np.linspace(supportWF[0]+(p+1)/2, supportWF[1]-(p+1)/2,
supportWF[1]-supportWF[0]-(p+1)+1)
xl = [supportWF[0] - 0.5, supportWF[1] + 0.5]
basis = helper.basis.CentralizedCardinalBSpline(p)
support = basis.getSupport()
color = helper.plot.mixColors("C0", 0.5)
for k in K:
xx = np.linspace(max(support[0]-k, xl[0]), min(support[1]-k, xl[1]), 101)
yy = basis.evaluate(xx + k)
ax.plot(xx, yy, "-", color=color, clip_on=False)
xx = np.linspace(*supportWF, 129)
yy = basisWF.evaluate(xx)
ax.plot(xx, yy, "-", color="C0", lw=1.5, clip_on=False)
ax.plot(np.linspace(2-p, p-2, p-1), np.zeros((p-1,)), "o", color="C0",
mfc="none", clip_on=False)
ax.set_xticks(np.linspace(np.ceil(xl[0]), np.floor(xl[1]),
int(np.floor(xl[1]) - np.ceil(xl[0]) + 1)))
ax.set_xlim(*xl)
ax.set_yticks([0])
ax.spines["bottom"].set_position("zero")
fig.save()
if __name__ == "__main__":
main() | |
# Copyright 2021 Google LLC
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import enum
from dataclasses import dataclass
from functools import partial
import itertools as it
from typing import Union, Optional, Callable, Dict, Tuple, TypeVar, FrozenSet
import numpy as np
import jax.numpy as jnp
from jax import core
from jax import linear_util as lu
from jax.api_util import flatten_fun
from jax.interpreters import partial_eval as pe
from jax.tree_util import tree_flatten, tree_unflatten, register_pytree_node
from jax._src import source_info_util, traceback_util
from jax import lax
from jax._src.util import (as_hashable_function, unzip2, split_list, safe_map,
safe_zip)
source_info_util.register_exclusion(__file__)
traceback_util.register_exclusion(__file__)
map, unsafe_map = safe_map, map
zip, unsafe_zip = safe_zip, zip
## Utils
def popattr(obj, attrname):
val = getattr(obj, attrname)
delattr(obj, attrname)
return val
def setnewattr(obj, name, val):
sentinel = object()
assert getattr(obj, name, sentinel) is sentinel
setattr(obj, name, val)
## Error value data type and functional assert.
Bool = Union[bool, core.Tracer]
Int = Union[int, core.Tracer]
@dataclass(frozen=True)
class Error:
err: Bool
code: Int
msgs: Dict[int, str]
def get(self) -> Optional[str]:
"""Returns error message is error happened, None if no error happened."""
assert np.shape(self.err) == np.shape(self.code)
if np.size(self.err) == 1:
if self.err:
return self.msgs[int(self.code)]
else:
return '\n'.join(f'at mapped index {", ".join(map(str, idx))}: ' # type: ignore
f'{self.msgs[int(self.code[idx])]}' # type: ignore
for idx, e in np.ndenumerate(self.err) if e) or None
return None
def throw(self):
"""Throw ValueError with error message if error happened."""
err = self.get()
if err:
raise ValueError(err)
register_pytree_node(Error,
lambda e: ((e.err, e.code), tuple(sorted(e.msgs.items()))),
lambda msgs, data: Error(*data, dict(msgs))) # type: ignore
init_error = Error(False, 0, {})
next_code = it.count(1).__next__ # globally unique ids, could be uuid4
def assert_func(error: Error, pred: Bool, msg: str) -> Error:
code = next_code()
out_err = error.err | jnp.logical_not(pred)
out_code = lax.select(error.err, error.code, code)
return Error(out_err, out_code, {code: msg, **error.msgs})
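The merge in `assert_func` keeps the first recorded error: once `err` is set, the select preserves the earlier code and later failures only keep the flag raised. A plain-Python model (no jax, helper name illustrative):

```python
# Plain-Python model of assert_func's error merging.
def merge_error(err, code, pred, new_code):
    out_err = err or (not pred)          # err | logical_not(pred)
    out_code = code if err else new_code  # select(err, code, new_code)
    return out_err, out_code

err, code = merge_error(False, 0, pred=False, new_code=1)  # first failure
err, code = merge_error(err, code, pred=False, new_code=2)  # code ignored
print(err, code)  # → True 1
```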
## Checkify transformation for plumbing functional error values.
class CheckifyTracer(core.Tracer):
def __init__(self, trace, val):
self._trace = trace
self.val = val
core.get_aval(val)  # sanity check: raises if val is not a valid JAX value
aval = property(lambda self: core.get_aval(self.val))
full_lower = lambda self: self
class CheckifyTrace(core.Trace):
pure = lift = lambda self, val: CheckifyTracer(self, val)
def __init__(self, main: core.MainTrace, sublevel: core.Sublevel,
enabled_errors: FrozenSet['ErrorCategory']) -> None:
self.main = main
self.level = main.level
self.sublevel = sublevel
self.main.enabled_errors = enabled_errors
def sublift(self, tracer):
return CheckifyTracer(self, tracer.val)
def process_primitive(self, primitive, tracers, params):
in_vals = [t.val for t in tracers]
rule = error_checks.get(primitive)
if rule:
out, self.main.error = rule(self.main.error, self.main.enabled_errors, # type: ignore
*in_vals, **params)
else:
out = primitive.bind(*in_vals, **params)
if primitive.multiple_results:
return [CheckifyTracer(self, x) for x in out]
else:
return CheckifyTracer(self, out)
def process_call(self, primitive, f, tracers, params):
in_vals = [t.val for t in tracers]
e = popattr(self.main, 'error')
f, msgs = checkify_subtrace(f, self.main, tuple(e.msgs.items()))
if 'donated_invars' in params:
params = dict(params, donated_invars=(False, False,
*params['donated_invars']))
err, code, *out_vals = primitive.bind(f, e.err, e.code, *in_vals, **params)
setnewattr(self.main, 'error', Error(err, code, msgs()))
return [CheckifyTracer(self, x) for x in out_vals]
def process_map(self, primitive, f, tracers, params):
in_vals = [t.val for t in tracers]
e = popattr(self.main, 'error')
f, msgs = checkify_subtrace(f, self.main, tuple(e.msgs.items()))
@as_hashable_function(closure=params['out_axes_thunk'])
def new_out_axes_thunk():
return (0, 0, *params['out_axes_thunk']())
params_ = dict(params, in_axes=(None, None, *params['in_axes']),
out_axes_thunk=new_out_axes_thunk,
donated_invars=(False, False, *params['donated_invars']))
errs, codes, *outs = primitive.bind(f, e.err, e.code, *in_vals, **params_)
err, code = _reduce_any_error(errs, codes)
setnewattr(self.main, 'error', Error(err, code, msgs()))
return [CheckifyTracer(self, x) for x in outs]
def post_process_call(self, primitive, tracers, params):
vals = [t.val for t in tracers]
main = self.main
e = popattr(main, 'error')
err, code, main.msgs = e.err, e.code, e.msgs
def todo(vals):
err, code, *vals = vals
setnewattr(main, 'error', Error(err, code, popattr(main, 'msgs')))
trace = main.with_cur_sublevel()
return [CheckifyTracer(trace, x) for x in vals]
return (err, code, *vals), todo
def post_process_map(self, primitive, tracers, params):
vals = [t.val for t in tracers]
main = self.main
e = popattr(main, 'error')
err, code, main.msgs = e.err, e.code, e.msgs
def todo(vals):
errs, codes, *vals = vals
err, code = _reduce_any_error(errs, codes)
setnewattr(main, 'error', Error(err, code, popattr(main, 'msgs')))
trace = main.with_cur_sublevel()
return [CheckifyTracer(trace, x) for x in vals]
def out_axes_transform(out_axes):
return (0, 0, *out_axes)
return (err, code, *vals), (todo, out_axes_transform)
def process_custom_jvp_call(self, prim, fun, jvp, tracers):
in_vals = [t.val for t in tracers]
e = popattr(self.main, 'error')
msgs = tuple(e.msgs.items())
fun, msgs1 = checkify_subtrace(fun, self.main, msgs)
jvp, msgs2 = checkify_custom_jvp_subtrace(jvp, self.main, msgs)
err, code, *out_vals = prim.bind(fun, jvp, e.err, e.code, *in_vals)
fst, out_msgs = lu.merge_linear_aux(msgs1, msgs2)
setattr(self.main, 'error', Error(err, code, out_msgs))
return [CheckifyTracer(self, x) for x in out_vals]
def post_process_custom_jvp_call(self, out_tracers, jvp_was_run):
if jvp_was_run:
msg = ("support for custom_jvp rules which close over checkify values is "
"not implemented. If you see this, open an issue at "
"https://github.com/google/jax/issues!")
raise NotImplementedError(msg)
vals = [t.val for t in out_tracers]
main = self.main
e = popattr(main, 'error')
err, code, main.msgs = e.err, e.code, e.msgs
def todo(vals):
err, code, *vals = vals
setnewattr(main, 'error', Error(err, code, popattr(main, 'msgs')))
trace = main.with_cur_sublevel()
return [CheckifyTracer(trace, x) for x in vals]
return (err, code, *vals), todo
def process_custom_vjp_call(self, prim, fun, fwd, bwd, tracers, out_trees):
in_vals = [t.val for t in tracers]
e = popattr(self.main, 'error')
msgs = tuple(e.msgs.items())
fun, msgs1 = checkify_subtrace(fun, self.main, msgs)
fwd, msgs2 = checkify_custom_vjp_subtrace(fwd, self.main, msgs)
out = prim.bind(fun, fwd, bwd, e.err, e.code, *in_vals, out_trees=out_trees)
fst, out_msgs = lu.merge_linear_aux(msgs1, msgs2)
if fst:
err, code, *out = out
else:
err, code = e.err, e.code # forward input error values to output
setattr(self.main, 'error', Error(err, code, out_msgs))
return [CheckifyTracer(self, x) for x in out]
def _reduce_any_error(errs, codes):
errs_, codes_ = lax.sort_key_val(errs, codes, dimension=0)
return errs_[-1], codes_[-1]
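`_reduce_any_error` sorts the (err, code) pairs by the error flag and takes the last entry, so any True flag wins and its code is reported; a numpy model for illustration:

```python
import numpy as np

# Numpy model of _reduce_any_error: after a stable sort on the error
# flags, the last pair carries an erroring code if any exists.
errs = np.array([False, True, False, True])
codes = np.array([0, 7, 0, 3])
order = np.argsort(errs, kind='stable')
print(bool(errs[order][-1]), int(codes[order][-1]))  # → True 3
```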
ErrorCheckRule = Callable # (Error, FrozenSet[ErrorCategory], *in_vals, **params) -> (Any, Error)
error_checks: Dict[core.Primitive, ErrorCheckRule] = {}
def checkify_flat(fun: lu.WrappedFun, enabled_errors: FrozenSet['ErrorCategory'],
*args):
fun, msgs = checkify_subtrace(fun)
fun = checkify_traceable(fun, tuple(init_error.msgs.items()), enabled_errors)
err, code, *outvals = fun.call_wrapped(init_error.err, init_error.code, *args)
return (err, code, outvals), msgs()
@lu.transformation
def checkify_traceable(msgs, enabled_errors, err, code, *args):
with core.new_main(CheckifyTrace, enabled_errors=enabled_errors) as main:
outs = yield (main, msgs, err, code, *args), {}
del main
yield outs
@lu.transformation_with_aux
def checkify_subtrace(main, msgs, err, code, *args):
setnewattr(main, 'error', Error(err, code, dict(msgs)))
trace = main.with_cur_sublevel()
in_tracers = [CheckifyTracer(trace, x) for x in args]
out = yield in_tracers, {}
out_tracers = map(trace.full_raise, out)
out_vals = [t.val for t in out_tracers]
err, code, msgs = main.error.err, main.error.code, main.error.msgs
del main.error
yield (err, code, *out_vals), msgs
@lu.transformation_with_aux
def checkify_custom_jvp_subtrace(main, msgs, *args):
# Like checkify_subtrace, but used specifically on the custom JVP rules
# associated with a custom_jvp. This code is called in the context of a
# jvp-of-checkify-of-custom_jvp. It takes both primal and tangent inputs,
# flattened into a single args tuple, and similarly must produce flattened
# primal and tangent outputs. Both primals and tangents include error values,
# but the tangent error values are trivially zero.
# The types to have in mind are:
# jvp : (a -> b) -> (a, T a) -> (b, T b)
# checkify : (a -> b) -> a -> Err b
# jvp-of-checkify : (a -> b) -> (a, T a) -> (Err b, T (Err b))
# where because Err is a pytree, we necessarily have T (Err b) = Err' (T b)
# where the other Err' components are trivial (of float0 dtype).
# Semantically, we don't add checks to the JVP rule. To check the result of a
# JVP rule, one must instead use checkify-of-jvp. Thus this implementation
# just forwards the input error and code (and trivial tangents) to the output.
del main
n, ragged = divmod(len(args), 2)
assert not ragged
(err,), (code,), primals = split_list(args[:n], [1, 1])
(err_dot,), (code_dot,), tangents = split_list(args[n:], [1, 1])
outs = yield (*primals, *tangents), {}
m, ragged = divmod(len(outs), 2)
assert not ragged
out_primals, out_tangents = outs[:m], outs[m:]
yield (err, code, *out_primals, err_dot, code_dot, *out_tangents), dict(msgs)
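The comment block above describes a flattened argument convention: the first half of `args` holds the primals with their error values prepended, and the second half holds the matching tangents. A pure-Python sketch of that splitting, with a hypothetical `split_front` standing in for jax's `split_list`:

```python
# `split_front` is a hypothetical stand-in for jax's `split_list`: it
# peels off prefixes of the given sizes and returns the pieces plus the
# remainder.
def split_front(xs, sizes):
    out = []
    for size in sizes:
        out.append(xs[:size])
        xs = xs[size:]
    return out + [xs]

def split_jvp_args_sketch(args):
    # args are laid out as (err, code, *primals, err_dot, code_dot, *tangents)
    n, ragged = divmod(len(args), 2)
    assert not ragged, "expected matching primal/tangent halves"
    (err,), (code,), primals = split_front(args[:n], [1, 1])
    (err_dot,), (code_dot,), tangents = split_front(args[n:], [1, 1])
    return (err, code, primals), (err_dot, code_dot, tangents)
```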
@lu.transformation_with_aux
def checkify_custom_vjp_subtrace(main, msgs, err, code, *args):
# We don't add any checks; just drop input error values.
del main, err, code
outs = yield args, {}
yield outs, dict(msgs)
# TODO take (error_aval, code_aval) instead of error here?
def checkify_jaxpr(jaxpr, error, enabled_errors):
f = lu.wrap_init(core.jaxpr_as_fun(jaxpr))
return checkify_fun_to_jaxpr(f, error, enabled_errors, jaxpr.in_avals)
def checkify_fun_to_jaxpr(f, error, enabled_errors, in_avals):
f, msgs = checkify_subtrace(f)
f = checkify_traceable(f, tuple(error.msgs.items()), enabled_errors)
err_aval = core.raise_to_shaped(core.get_aval(error.err))
code_aval = core.raise_to_shaped(core.get_aval(error.code))
avals_in = [err_aval, code_aval, *in_avals]
jaxpr_out, _, literals_out = pe.trace_to_jaxpr_dynamic(f, avals_in)
return core.ClosedJaxpr(jaxpr_out, literals_out), msgs()
## assert primitive
def check(pred: Bool, msg: str) -> None:
"""Check a predicate, add an error with msg if predicate is False.
This is an effectful operation, and can't be staged (jitted/scanned/...).
Before staging a function with checks, ``checkify`` it!
Args:
pred: if False, an error is added.
msg: error message if error is added.
For example:
>>> import jax
>>> import jax.numpy as jnp
>>> from jax.experimental import checkify
>>> def f(x):
... checkify.check(x!=0, "cannot be zero!")
... return 1/x
>>> checked_f = checkify.checkify(f)
>>> err, out = jax.jit(checked_f)(0)
>>> err.throw() # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
ValueError: cannot be zero! (check failed at ...)
"""
if not is_scalar_pred(pred):
raise TypeError(f'check takes a scalar pred as argument, got {pred}')
code = next_code()
msg += f' (check failed at {summary()})'
return check_error(Error(jnp.logical_not(pred), code, {code: msg}))
def is_scalar_pred(pred) -> bool:
return (isinstance(pred, bool) or
isinstance(pred, jnp.ndarray) and pred.shape == () and
pred.dtype == jnp.dtype('bool'))
def check_error(error: Error) -> None:
"""Raise an Exception if ``error`` represents a failure. Functionalized by ``checkify``.
The semantics of this function are equivalent to:
>>> def check_error(err: Error) -> None:
... err.throw() # can raise ValueError
But unlike that implementation, ``check_error`` can be functionalized using
the ``checkify`` transformation.
This function is similar to ``check`` but with a different signature: whereas
``check`` takes as arguments a boolean predicate and a new error message
string, this function takes an ``Error`` value as argument. Both ``check``
and this function raise a Python Exception on failure (a side-effect), and
thus cannot be staged out by ``jit``, ``pmap``, ``scan``, etc. Both also can
be functionalized by using ``checkify``.
But unlike ``check``, this function is like a direct inverse of ``checkify``:
whereas ``checkify`` takes as input a function which can raise a Python
Exception and produces a new function without that effect but which produces
an ``Error`` value as output, this ``check_error`` function can accept an
``Error`` value as input and can produce the side-effect of raising an
Exception. That is, while ``checkify`` goes from functionalizable Exception
effect to error value, this ``check_error`` goes from error value to
functionalizable Exception effect.
``check_error`` is useful when you want to turn checks represented by an
``Error`` value (produced by functionalizing ``checks`` via ``checkify``)
back into Python Exceptions.
Args:
error: Error to check.
For example, you might want to functionalize part of your program through
checkify, stage out your functionalized code through ``jit``, then re-inject
your error value outside of the ``jit``:
>>> import jax
>>> from jax.experimental import checkify
>>> def f(x):
... checkify.check(x>0, "must be positive!")
... return x
>>> def with_inner_jit(x):
... checked_f = checkify.checkify(f)
... # a checkified function can be jitted
... error, out = jax.jit(checked_f)(x)
... checkify.check_error(error)
... return out
>>> _ = with_inner_jit(1) # no failed check
>>> with_inner_jit(-1) # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
ValueError: must be positive!
>>> # can re-checkify
>>> error, _ = checkify.checkify(with_inner_jit)(-1)
"""
if np.shape(error.err):
err, code = _reduce_any_error(error.err, error.code)
else:
err, code = error.err, error.code
return assert_p.bind(~err, code, msgs=error.msgs)
assert_p = core.Primitive('assert') # TODO: rename to check?
assert_p.multiple_results = True # zero results
@assert_p.def_impl
def assert_impl(pred, code, *, msgs):
Error(~pred, code, msgs).throw()
return []
@assert_p.def_abstract_eval
def assert_abstract_eval(pred, code, *, msgs):
# TODO(lenamartens) add in-depth explanation to link to in module docs.
raise ValueError('Cannot abstractly evaluate a checkify.check which was not'
' functionalized. This probably means you tried to stage'
' (jit/scan/pmap/...) a `check` without functionalizing it'
' through `checkify.checkify`.'
)
## checkify rules
def summary() -> str:
return str(source_info_util.summarize(source_info_util.current()))
def nan_error_check(prim, error, enabled_errors, *in_vals, **params):
out = prim.bind(*in_vals, **params)
if ErrorCategory.NAN not in enabled_errors:
return out, error
no_nans = jnp.logical_not(jnp.any(jnp.isnan(out)))
msg = f"nan generated by primitive {prim.name} at {summary()}"
return out, assert_func(error, no_nans, msg)
def gather_error_check(error, enabled_errors, operand, start_indices, *,
dimension_numbers, slice_sizes, unique_indices,
indices_are_sorted, mode, fill_value):
out = lax.gather_p.bind(
operand, start_indices, dimension_numbers=dimension_numbers,
slice_sizes=slice_sizes, unique_indices=unique_indices,
indices_are_sorted=indices_are_sorted, mode=mode, fill_value=fill_value)
if ErrorCategory.OOB not in enabled_errors:
return out, error
# compare to OOB masking logic in lax._gather_translation_rule
dnums = dimension_numbers
operand_dims = np.array(operand.shape)
upper_bound = operand_dims[np.array(dnums.start_index_map)]
upper_bound -= np.array(slice_sizes)[np.array(dnums.start_index_map)]
all_inbounds = jnp.all((start_indices >= 0) & (start_indices <= upper_bound))
msg = f"out-of-bounds indexing at {summary()}"
return out, assert_func(error, all_inbounds, msg)
error_checks[lax.gather_p] = gather_error_check
def div_error_check(error, enabled_errors, x, y):
"""Checks for division by zero and NaN."""
if ErrorCategory.DIV in enabled_errors:
all_nonzero = jnp.logical_not(jnp.any(jnp.equal(y, 0)))
msg = f'divided by zero at {summary()}'
error = assert_func(error, all_nonzero, msg)
return nan_error_check(lax.div_p, error, enabled_errors, x, y)
error_checks[lax.div_p] = div_error_check
def scatter_in_bounds(operand, indices, updates, dnums):
# Ref: see clamping code used in scatter_translation_rule
slice_sizes = []
pos = 0
for i in range(len(operand.shape)):
if i in dnums.inserted_window_dims:
slice_sizes.append(1)
else:
slice_sizes.append(updates.shape[dnums.update_window_dims[pos]])
pos += 1
upper_bound = np.array([operand.shape[i] - slice_sizes[i]
for i in dnums.scatter_dims_to_operand_dims],
np.int64)
upper_bound = np.minimum(upper_bound, np.iinfo(indices.dtype).max)
upper_bound = lax.broadcast_in_dim(upper_bound, indices.shape,
(len(indices.shape) - 1,))
lower_in_bounds = jnp.all(jnp.greater_equal(indices, 0))
upper_in_bounds = jnp.all(jnp.less_equal(indices, upper_bound))
return jnp.logical_and(lower_in_bounds, upper_in_bounds)
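For intuition, the bounds computed above reduce, in the one-dimensional case, to requiring `0 <= i <= d - w` for a scatter index `i`, operand length `d`, and window length `w`. A simplified pure-Python sketch (illustrative only; the real rule vectorizes this over the scatter dimension numbers):

```python
# 1-D version of the scatter bounds check: an index is in bounds iff the
# whole update window fits inside the operand.
def scatter_index_in_bounds(index, operand_len, window_len):
    upper_bound = operand_len - window_len
    return 0 <= index <= upper_bound
```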
def scatter_error_check(prim, error, enabled_errors, operand, indices, updates,
*, update_jaxpr, update_consts, dimension_numbers,
indices_are_sorted, unique_indices, mode):
"""Checks if indices are within bounds and update does not generate NaN."""
out = prim.bind(
operand, indices, updates, update_jaxpr=update_jaxpr,
update_consts=update_consts, dimension_numbers=dimension_numbers,
indices_are_sorted=indices_are_sorted, unique_indices=unique_indices,
mode=mode)
if ErrorCategory.OOB not in enabled_errors:
return out, error
in_bounds = scatter_in_bounds(operand, indices, updates, dimension_numbers)
oob_msg = f'out-of-bounds indexing while updating at {summary()}'
oob_error = assert_func(error, in_bounds, oob_msg)
no_nans = jnp.logical_not(jnp.any(jnp.isnan(out)))
nan_msg = f'nan generated by primitive {prim.name} at {summary()}'
return out, assert_func(oob_error, no_nans, nan_msg)
error_checks[lax.scatter_p] = partial(scatter_error_check, lax.scatter_p)
error_checks[lax.scatter_add_p] = partial(scatter_error_check, lax.scatter_add_p)
error_checks[lax.scatter_mul_p] = partial(scatter_error_check, lax.scatter_mul_p)
error_checks[lax.scatter_min_p] = partial(scatter_error_check, lax.scatter_min_p)
error_checks[lax.scatter_max_p] = partial(scatter_error_check, lax.scatter_max_p)
def cond_error_check(error, enabled_errors, index, *ops, branches, linear):
new_branches, msgs_ = unzip2(checkify_jaxpr(jxpr, error, enabled_errors)
for jxpr in branches)
new_linear = (False, False, *linear)
err, code, *outs = lax.cond_p.bind(
index, error.err, error.code, *ops,
branches=tuple(new_branches), linear=new_linear)
new_msgs = {k:v for d in it.chain([error.msgs], msgs_) for k, v in d.items()}
return outs, Error(err, code, new_msgs)
error_checks[lax.cond_p] = cond_error_check
def scan_error_check(error, enabled_errors, *in_flat, reverse, length, jaxpr,
num_consts, num_carry, linear, unroll):
consts, carry, xs = split_list(in_flat, [num_consts, num_carry])
checked_jaxpr_, msgs_ = checkify_jaxpr(jaxpr, error, enabled_errors)
tomove = [False] * 2 + [True] * len(consts) + [False] * (len(carry) + len(xs))
checked_jaxpr = pe.move_binders_to_front(checked_jaxpr_, tomove)
new_linear = (False, False, *linear)
new_in_flat = [*consts, error.err, error.code, *carry, *xs]
err, code, *outs = lax.scan_p.bind(
*new_in_flat, reverse=reverse, length=length, jaxpr=checked_jaxpr,
num_consts=len(consts), num_carry=len(carry)+2,
linear=new_linear, unroll=unroll)
new_msgs = {**error.msgs, **msgs_}
return outs, Error(err, code, new_msgs)
error_checks[lax.scan_p] = scan_error_check
def checkify_while_body_jaxpr(cond_jaxpr, body_jaxpr, error, enabled_errors, c_consts):
cond_f = core.jaxpr_as_fun(cond_jaxpr)
body_f = core.jaxpr_as_fun(body_jaxpr)
def new_body_f(*vals):
out = body_f(*vals)
# This checks if the next cond application will error
_ = cond_f(*c_consts, *out)
return out
return checkify_fun_to_jaxpr(lu.wrap_init(new_body_f), error, enabled_errors,
body_jaxpr.in_avals)
def ignore_errors_jaxpr(jaxpr, error):
"""Constructs a jaxpr which takes two extra args but ignores them."""
err_aval = core.raise_to_shaped(core.get_aval(error.err))
code_aval = core.raise_to_shaped(core.get_aval(error.code))
consts = jaxpr.consts
jaxpr = jaxpr.jaxpr
new_vars = core.gensym([jaxpr])
new_invars = (new_vars(err_aval), new_vars(code_aval), *jaxpr.invars)
new_jaxpr = core.Jaxpr(jaxpr.constvars, new_invars,
jaxpr.outvars, jaxpr.eqns)
return core.ClosedJaxpr(new_jaxpr, consts)
def while_loop_error_check(error, enabled_errors, *in_flat, cond_nconsts,
cond_jaxpr, body_nconsts, body_jaxpr):
c_consts, b_consts, carry = split_list(in_flat, [cond_nconsts, body_nconsts])
# Check if the first cond application will error.
cond_jaxpr_, msgs_cond = checkify_jaxpr(cond_jaxpr, error, enabled_errors)
cond_err, cond_code, _ = core.jaxpr_as_fun(cond_jaxpr_)(error.err, error.code,
*c_consts, *carry)
del cond_jaxpr_
checked_body_jaxpr_, msgs_body = checkify_while_body_jaxpr(
cond_jaxpr, body_jaxpr, error, enabled_errors, c_consts)
to_move = [False] * 2 + [True] * body_nconsts + [False] * len(carry)
checked_body_jaxpr = pe.move_binders_to_front(checked_body_jaxpr_, to_move)
compat_cond_jaxpr_ = ignore_errors_jaxpr(cond_jaxpr, error)
to_move = [False] * 2 + [True] * cond_nconsts + [False] * len(carry)
compat_cond_jaxpr = pe.move_binders_to_front(compat_cond_jaxpr_, to_move)
new_in_flat = [*c_consts, *b_consts, cond_err, cond_code, *carry]
err, code, *out = lax.while_p.bind(
*new_in_flat, cond_nconsts=cond_nconsts, cond_jaxpr=compat_cond_jaxpr,
body_nconsts=body_nconsts, body_jaxpr=checked_body_jaxpr)
new_msgs = {**error.msgs, **msgs_body, **msgs_cond}
return out, Error(err, code, new_msgs)
error_checks[lax.while_p] = while_loop_error_check
def add_nan_check(prim):
error_checks[prim] = partial(nan_error_check, prim)
add_nan_check(lax.floor_p)
add_nan_check(lax.ceil_p)
add_nan_check(lax.round_p)
add_nan_check(lax.sign_p)
add_nan_check(lax.shift_left_p)
add_nan_check(lax.shift_right_arithmetic_p)
add_nan_check(lax.shift_right_logical_p)
add_nan_check(lax.bitcast_convert_type_p)
add_nan_check(lax.real_p)
add_nan_check(lax.complex_p)
add_nan_check(lax.conj_p)
add_nan_check(lax.imag_p)
add_nan_check(lax.add_p)
add_nan_check(lax.sub_p)
add_nan_check(lax.convert_element_type_p)
add_nan_check(lax.broadcast_in_dim_p)
add_nan_check(lax.concatenate_p)
add_nan_check(lax.pad_p)
add_nan_check(lax.reshape_p)
add_nan_check(lax.rev_p)
add_nan_check(lax.transpose_p)
add_nan_check(lax.slice_p)
add_nan_check(lax.reduce_sum_p)
add_nan_check(lax.reduce_window_sum_p)
add_nan_check(lax.fft_p)
add_nan_check(lax.cumsum_p)
add_nan_check(lax.cumprod_p)
add_nan_check(lax.cummax_p)
add_nan_check(lax.cummin_p)
add_nan_check(lax.erf_p)
add_nan_check(lax.expm1_p)
add_nan_check(lax.log1p_p)
add_nan_check(lax.sqrt_p)
add_nan_check(lax.rsqrt_p)
add_nan_check(lax.asinh_p)
add_nan_check(lax.acosh_p)
add_nan_check(lax.atanh_p)
add_nan_check(lax.erfc_p)
add_nan_check(lax.rem_p)
add_nan_check(lax.clamp_p)
add_nan_check(lax.erf_inv_p)
add_nan_check(lax.exp_p)
add_nan_check(lax.pow_p)
add_nan_check(lax.integer_pow_p)
add_nan_check(lax.tanh_p)
add_nan_check(lax.log_p)
add_nan_check(lax.atan2_p)
add_nan_check(lax.sin_p)
add_nan_check(lax.cos_p)
add_nan_check(lax.sinh_p)
add_nan_check(lax.cosh_p)
add_nan_check(lax.dot_general_p)
add_nan_check(lax.mul_p)
add_nan_check(lax.conv_general_dilated_p)
add_nan_check(lax.reduce_max_p)
add_nan_check(lax.reduce_min_p)
add_nan_check(lax.abs_p)
add_nan_check(lax.select_n_p)
add_nan_check(lax.max_p)
add_nan_check(lax.min_p)
def assert_discharge_rule(error, enabled_errors, pred, code, *, msgs):
if ErrorCategory.USER_CHECK not in enabled_errors:
return [], error
out_err = error.err | jnp.logical_not(pred)
out_code = lax.select(error.err, error.code, code)
return [], Error(out_err, out_code, {**error.msgs, **msgs})
error_checks[assert_p] = assert_discharge_rule
## checkify api
ErrorCategory = enum.Enum('ErrorCategory', ['NAN', 'OOB', 'DIV', 'USER_CHECK'])
user_checks = frozenset({ErrorCategory.USER_CHECK})
nan_checks = frozenset({ErrorCategory.NAN})
index_checks = frozenset({ErrorCategory.OOB})
div_checks = frozenset({ErrorCategory.DIV})
float_checks = nan_checks | div_checks
automatic_checks = float_checks | index_checks
all_checks = automatic_checks | user_checks
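These predefined sets compose with ordinary frozenset algebra, so callers can enable exactly the categories they need. A self-contained sketch mirroring the definitions above with a stand-in enum (names here are illustrative):

```python
import enum

# Stand-in mirroring ErrorCategory and the predefined check sets above.
Cat = enum.Enum('Cat', ['NAN', 'OOB', 'DIV', 'USER_CHECK'])
user_set = frozenset({Cat.USER_CHECK})
float_set = frozenset({Cat.NAN}) | frozenset({Cat.DIV})
all_set = float_set | frozenset({Cat.OOB}) | user_set
```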
Out = TypeVar('Out')
def checkify(fun: Callable[..., Out],
errors: FrozenSet[ErrorCategory] = user_checks
) -> Callable[..., Tuple[Error, Out]]:
"""Functionalize `check` calls in `fun`, and optionally add run-time error checks.
Run-time errors are either user-added ``checkify.check`` assertions, or
automatically added checks like NaN checks, depending on the ``errors``
argument.
The returned function will return an Error object `err` along with the output
of the original function. ``err.get()`` will either return ``None`` (if no
error occurred) or a string containing an error message. This error message
will correspond to the first error which occurred. ``err.throw()`` will raise
a ValueError with the error message if an error occurred.
By default only user-added ``checkify.check`` assertions are enabled. You can
enable automatic checks through the ``errors`` argument.
The automatic check sets which can be enabled, and when an error is generated:
- ``user_checks``: a ``checkify.check`` evaluated to False.
- ``nan_checks``: a floating-point operation generated a NaN value
as output.
- ``div_checks``: a division by zero.
- ``index_checks``: an index was out-of-bounds.
Multiple categories can be enabled together by creating a `Set` (e.g.
``errors={ErrorCategory.NAN, ErrorCategory.OOB}``). Multiple sets can be
combined (e.g. ``errors=float_checks | user_checks``).
Args:
fun: Callable which can contain user checks (see ``check``).
errors: A set of ErrorCategory values which defines the set of enabled
checks. By default only explicit ``check`` calls are enabled
(``user_checks``). You can also, for example, enable NaN and
divide-by-zero errors by passing the ``float_checks`` set, or
combine multiple sets through set operations
(``float_checks | user_checks``).
Returns:
A function which accepts the same arguments as ``fun`` and returns as output
a pair where the first element is an ``Error`` value, representing the first
failed ``check``, and the second element is the original output of ``fun``.
For example:
>>> import jax
>>> import jax.numpy as jnp
>>> from jax.experimental import checkify
>>>
>>> @jax.jit
... def f(x):
... y = jnp.sin(x)
... return x+y
>>> err, out = checkify.checkify(f, errors=checkify.float_checks)(jnp.inf)
>>> err.throw() # doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
ValueError: nan generated by primitive sin
"""
@traceback_util.api_boundary
def checked_fun(*args, **kwargs):
args_flat, in_tree = tree_flatten((args, kwargs))
f, out_tree = flatten_fun(lu.wrap_init(fun), in_tree)
(err, code, out_flat), msgs = checkify_flat(f, errors, *args_flat)
out = tree_unflatten(out_tree(), out_flat)
return Error(err, code, msgs), out
return checked_fun
# ------------------------------------------------------------
# Copyright (c) 2017-present, SeetaTech, Co.,Ltd.
#
# Licensed under the BSD 2-Clause License.
# You should have received a copy of the BSD 2-Clause License
# along with the software. If not, See,
#
# <https://opensource.org/licenses/BSD-2-Clause>
#
# ------------------------------------------------------------
"""Some useful mappings are defined here."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy
TENSOR_TYPE_TO_NP_TYPE = {
'bool': numpy.bool_,  # numpy.bool is deprecated; use the numpy scalar type
'int8': numpy.int8,
'uint8': numpy.uint8,
'int32': numpy.int32,
'int64': numpy.int64,
'float16': numpy.float16,
'float32': numpy.float32,
'float64': numpy.float64,
}
TENSOR_TYPE_TO_TORCH_TENSOR = {
'int8': 'CharTensor',
'uint8': 'ByteTensor',
'int32': 'IntTensor',
'int64': 'LongTensor',
'float16': 'HalfTensor',
'float32': 'FloatTensor',
'float64': 'DoubleTensor',
}
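The two tables are keyed by the same framework dtype strings; only `'bool'` has a numpy mapping without a corresponding torch tensor class. A self-contained consistency sketch (the key sets below are copied from the tables above):

```python
# Key sets copied from TENSOR_TYPE_TO_NP_TYPE and
# TENSOR_TYPE_TO_TORCH_TENSOR: every torch-mapped dtype string also has
# a numpy mapping, and only 'bool' lacks a torch tensor class.
np_keys = {'bool', 'int8', 'uint8', 'int32', 'int64',
           'float16', 'float32', 'float64'}
torch_keys = {'int8', 'uint8', 'int32', 'int64',
              'float16', 'float32', 'float64'}
assert torch_keys <= np_keys
assert np_keys - torch_keys == {'bool'}
```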
# """Tools for constructing quantum circuits."""
import json
import numpy as np
import pyquil
import cirq
import qiskit
import random
from qiskit import QuantumRegister, ClassicalRegister
from pyquil import Program
from pyquil.gates import *
from ..utils import convert_array_to_dict, convert_dict_to_array
from ._gate import *
from ._qubit import *
from ._gateset import COMMON_GATES, UNIQUE_GATES, ALL_GATES
from ..utils import SCHEMA_VERSION, pauli_x, pauli_y, pauli_z, identity
from openfermion.ops import FermionOperator
class Circuit(object):
"""Base class for quantum circuits.
Attributes:
name: string
Name of the Circuit object. By default this is called 'Unnamed'.
gates: list[Gate]
The gate sequence of the circuit. Implemented as a list of core.gate.Gate
objects.
qubits: list[Qubit]
The set of qubits that the circuit acts on. Implemented as a list of
core.qubit.Qubit objects.
info: dictionary
Additional information related to the circuit. For example, if the circuit is
converted from another package, information related to the native specification
of the circuit in that package is recorded here.
"""
def __init__(self, input_object=None, name='Unnamed'):
"""Initialize a circuit. Most likely the circuit is generated by converting a circuit
object in other packages to core.circuit.Circuit object.
Args:
input_object: pyquil.Program, cirq.Circuit
A generic circuit object that may be created from one of the various packages
currently supported by Zap OS.
"""
self.name = name # name of the circuit ('Unnamed' by default)
self.gates = [] # list of gates (see gate.py for Gate class def)
self.qubits = [] # list of qubits (see qubit.py for Qubit class def)
self.info = {
'label': None # the name of the native package that generates the circuit
# e.g. 'pyquil', 'cirq', 'qiskit' etc. The purpose is to
# provide hints about what unique functionalities of
# the package one might be able to take advantage of.
}
if isinstance(input_object, pyquil.Program):
self.from_pyquil(input_object)
if isinstance(input_object, pyquil.quilbase.Gate):
converted_input = pyquil.Program(input_object)
self.from_pyquil(converted_input)
if isinstance(input_object, cirq.Circuit):
self.from_cirq(input_object)
if isinstance(input_object, qiskit.QuantumCircuit):
self.from_qiskit(input_object)
@property
def n_multiqubit_gates(self):
"""The number of multiqubit gates in the circuit.
"""
n_mq_gates = 0
for gate in self.gates:
if len(gate.qubits) > 1:
n_mq_gates += 1
return n_mq_gates
def __eq__(self, anotherCircuit):
"""Comparison between two Circuit objects.
"""
p1 = self.to_pyquil()
p2 = anotherCircuit.to_pyquil()
return (p1 == p2)
def __add__(self, other_circuit):
"""Add two circuits.
"""
qubit_indices = set([qubit.index for qubit in self.qubits] +
[qubit.index for qubit in other_circuit.qubits])
new_circuit = Circuit()
for qubit_index in qubit_indices:
new_circuit.qubits.append(Qubit(qubit_index))
new_circuit.gates = self.gates + other_circuit.gates
return new_circuit
def get_qubits(self):
"""Returns a list of qubit indices (ints).
"""
return [q.index for q in self.qubits]
def to_pyquil(self):
"""Converts the circuit to a pyquil Program object.
"""
output = Program()
if self.gates is not None:
for gate in self.gates:
output = add_gate_to_pyquil_program(output, gate)
return output
def to_cirq(self, cirq_qubits=None):
"""Converts the circuit to a cirq Circuit object.
NOTE: Here we always assume that the resulting circuit acts on a linear chain of
qubits.
Args:
cirq_qubits: list[cirq.LineQubit]
(optional) A list of cirq.LineQubit objects.
"""
qubits = []
if cirq_qubits is None:
if self.qubits is not None:
if self.info['label'] == 'cirq':
for q in self.qubits:
qkey = q.info['QubitKey']
if q.info['QubitType'] == 'GridQubit':
qubits.append(cirq.GridQubit(qkey[0], qkey[1]))
if q.info['QubitType'] == 'LineQubit':
qubits.append(cirq.LineQubit(qkey))
else:
qubits = [cirq.LineQubit(i) for i in self.get_qubits()]
else:
if len(cirq_qubits) < len(self.qubits):
raise Exception('Input qubit register size is {}, which is not enough to represent this Circuit object that acts on {} qubits'.format(len(cirq_qubits), len(self.qubits)))
qubits = cirq_qubits
if self.gates is not None:
gates = [g.to_cirq(cirq_qubits) for g in self.gates]
else:
gates = []
cirq_circuit = cirq.Circuit()
cirq_circuit.append(gates, strategy=cirq.circuits.InsertStrategy.EARLIEST)
return cirq_circuit
def to_qiskit(self):
"""Converts the circuit to a qiskit QuantumCircuit object.
"""
qiskit_circuit = qiskit.QuantumCircuit() # New qiskit circuit object
list_qregs = [] # list of QuantumRegister objects
qreg = []
list_cregs = []
creg = []
if self.qubits is not None: # If there are qubits in the circuit, add them to the new qiskit circuit
if self.info['label'] == 'qiskit': # If the circuit originally was a qiskit circuit, create and add the quantum registers
collected_qregs = {} # dictionary of entries 'qreg name':list of collected qubit indices
collected_cregs = {}
for q in self.qubits: # For every qubit in the circuit...
# Quantum Register is stored as a string, so must parse for info and recreate qreg
q_qreg_num = int(q.info['qreg'][q.info['qreg'].find("(")+1:q.info['qreg'].find(",")])
q_qreg_label = q.info['qreg'][q.info['qreg'].find("'")+1:q.info['qreg'].rfind("'")]
q_qreg = QuantumRegister(q_qreg_num, q_qreg_label)
if q_qreg.name in collected_qregs:
# If the qubit register has already been collected, do nothing
pass
else:
# If qubit register has not been collected, add to lists
collected_qregs[q_qreg.name] = [q.index]
list_qregs.append(q_qreg)
# Quantum Register is stored as a string, so must parse for info and recreate qreg
if 'creg' in q.info.keys():
q_creg_num = int(q.info['creg'][q.info['creg'].find("(")+1:q.info['creg'].find(",")])
q_creg_label = q.info['creg'][q.info['creg'].find("'")+1:q.info['creg'].rfind("'")]
q_creg = ClassicalRegister(q_creg_num, q_creg_label)
if q_creg.name in collected_cregs:
# If the qubit register has already been collected, do nothing
pass
else:
# If qubit register has not been collected, add to lists
collected_cregs[q_creg.name] = [q.index]
list_cregs.append(q_creg)
for qreg in list_qregs:
qiskit_circuit.add_register(qreg)
for creg in list_cregs:
qiskit_circuit.add_register(creg)
else: # If not from a qiskit circuit, add all of the qubits to one register
max_qindex = max([q.index for q in self.qubits])
qreg = qiskit.QuantumRegister(max_qindex+1, 'q')
creg = qiskit.ClassicalRegister(max_qindex+1, 'c')
qiskit_circuit.add_register(qreg)
qiskit_circuit.add_register(creg)
if self.gates is not None:
for gate in self.gates:
if gate.info['label'] == 'qiskit': # if the original circuit is a qiskit circuit
qiskit_gate_data = gate.to_qiskit() # assume that the underlying QuantumRegister is already provided
else: # if the original state is not a qiskit circuit
qiskit_gate_data = gate.to_qiskit(qreg) # provide the gate conversion with the associated QuantumRegister
N = len(qiskit_gate_data) # total number of entries in the list (which is 3x the number of elementary gates)
if N % 3 != 0:
raise ValueError("The number of entries in qiskit_gate_data is {} which is not a multiple of 3".format(N))
for index in range(0, N, 3):
qiskit_circuit.append(qiskit_gate_data[index], qargs=qiskit_gate_data[index+1], cargs=qiskit_gate_data[index+2])
return qiskit_circuit
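The gate data consumed in `to_qiskit` is a flat list laid out as repeated (instruction, qargs, cargs) triples. A pure-Python sketch of the chunking (`iter_gate_triples` is an illustrative helper, not part of the class):

```python
# Chunk a flat gate-data list into (instruction, qargs, cargs) triples,
# validating that the length is a multiple of 3 as to_qiskit does.
def iter_gate_triples(gate_data):
    n = len(gate_data)
    if n % 3 != 0:
        raise ValueError("expected a multiple of 3 entries, got {}".format(n))
    return [tuple(gate_data[i:i + 3]) for i in range(0, n, 3)]
```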
def to_dict(self):
"""Creates a dictionary representing a circuit.
Returns:
dictionary (dict): the dictionary
"""
if self.gates is not None:
gates_entry = [gate.to_dict() for gate in self.gates]
else:
gates_entry = None
if self.qubits is not None:
qubits_entry = [qubit.to_dict() for qubit in self.qubits]
else:
qubits_entry = None
dictionary = {
'schema': SCHEMA_VERSION+'-circuit',
'name': self.name,
'gates': gates_entry,
'qubits': qubits_entry,
'info': self.info
}
return dictionary
def to_unitary(self):
"""Creates a unitary matrix representing the circuit.
Returns:
An array representing the unitary matrix.
"""
return self.to_cirq()._unitary_()
def to_text_diagram(self, transpose=False):
"""Gets a text diagram representing the circuit.
transpose (bool): if true, arrange qubit wires vertically instead of horizontally
Returns:
str: a string containing the text diagram
"""
return self.to_cirq().to_text_diagram(transpose=transpose)
def to_quil(self):
"""Gets the quil program representing the circuit.
Returns:
str: a string containing the quil program
"""
return self.to_pyquil().out()
def to_qpic(self):
"""Generates a string that can be used by qpic to build a picture of the circuit.
Returns:
str: a qpic string
"""
qpic_string = ''
for qubit in sorted(self.qubits, key=lambda q: q.index):
qpic_string += 'w{} W {}\n'.format(qubit.index, qubit.index)
for gate in self.gates:
qpic_string += gate.to_qpic() + '\n'
return qpic_string
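`to_qpic` first declares one wire per qubit, one line per wire, in index order. A standalone sketch of just that wire-declaration step (illustrative helper):

```python
# Emit one qpic wire declaration per qubit index, sorted by index, in
# the same 'w<i> W <i>' format used by to_qpic above.
def qpic_wire_lines(qubit_indices):
    return ''.join('w{0} W {0}\n'.format(i) for i in sorted(qubit_indices))
```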
def __str__(self):
"""Get a string representation of the circuit.
Returns:
str: a string representation of the circuit
"""
return self.to_text_diagram()
@classmethod
def from_dict(cls, dictionary):
"""Loads information of the circuit from a dictionary. This corresponds to the
serialization routines to_dict for Circuit, Gate and Qubit.
Args:
dictionary (dict): the dictionary
Returns:
A core.circuit.Circuit object
"""
output = cls(name=dictionary['name'])
if dictionary['gates'] is not None:
output.gates = [Gate.from_dict(gate) for gate in dictionary['gates']]
else:
output.gates = None
if dictionary['qubits'] is not None:
output.qubits = [Qubit.from_dict(qubit) for qubit in dictionary['qubits']]
else:
output.qubits = None
output.info = dictionary['info']
return output
def from_pyquil(self, pyquil_circuit):
"""Converts a pyquil Program object to a core.Circuit object.
Args:
pyquil_circuit: pyquil Program object to convert.
"""
self.info['label'] = 'pyquil'
_gatelist = []
_qubits = []
if len(pyquil_circuit) == 0:
return
_pyquil_qubits = [] # list of currently found *pyquil* qubits
for gate in pyquil_circuit:
_gatequbits = []
for qubit in gate.qubits:
def qubit_in_list(qubit, qubitlist): # check if a pyquil qubit is in a list of pyquil qubits
output = False
out_index = []
for q in qubitlist:
if qubit.index == q.index:
output = True
out_index = q.index
break
return output, out_index
_flag, _index = qubit_in_list(qubit, _pyquil_qubits)
if not _flag:
_pyquil_qubits.append(qubit)
_new_Qubit = Qubit.from_pyquil(qubit)
_qubits.append(_new_Qubit)
_gatequbits.append(_new_Qubit)
else:
for q in _qubits:
if q.index == _index:
_old_Qubit = q
break
_gatequbits.append(_old_Qubit)
_gatelist.append(Gate.from_pyquil(gate, _gatequbits))
self.gates=_gatelist
self.qubits=_qubits
def from_cirq(self, cirq_circuit):
"""Convert from a cirq Circuit object to a core.Circuit object.
Args:
cirq_circuit: cirq Circuit object.
See the following: https://github.com/quantumlib/Cirq
"""
self.info['label'] = 'cirq'
_gatelist = []
_qubits = []
if len(cirq_circuit) == 0 or sum([len(m.operations) for m in cirq_circuit]) == 0:
return
_cirq_qubits = [] # list of currently found *cirq* qubits
for moment in cirq_circuit:
for op in moment.operations:
_gatequbits = []
for qubit in op.qubits:
def qubit_in_list(qubit, qubitlist): # check if a cirq qubit is in a list of cirq qubits
# if yes return the index
output = False
out_index = []
for q in qubitlist:
if isinstance(qubit, cirq.GridQubit) and isinstance(q, cirq.GridQubit):
if qubit.row == q.row and qubit.col == q.col:
output = True
out_index = (q.row, q.col)
break
elif isinstance(qubit, cirq.LineQubit) and isinstance(q, cirq.LineQubit):
if qubit.x == q.x:
output = True
out_index = q.x
break
else:
raise TypeError('(Cirq) Qubit and Qubit list elements not of the same kind.')
return output, out_index
_flag, _index = qubit_in_list(qubit, _cirq_qubits)
if not _flag: # if the qubit is not seen before
_cirq_qubits.append(qubit) # add the cirq qubit to the list of cirq qubits seen
if isinstance(qubit, cirq.GridQubit):
    _new_Qubit = Qubit.from_cirq(qubit, (qubit.row, qubit.col)) # grid qubits are keyed by (row, col)
else:
    _new_Qubit = Qubit.from_cirq(qubit, qubit.x) # line qubits are keyed by their line index
_qubits.append(_new_Qubit)
_gatequbits.append(_new_Qubit)
else: # if the qubit is already seen before
for q in _qubits: # search for the old Qubit object in the _qubits list
if q.info['QubitKey'] == _index:
_old_Qubit = q
break
_gatequbits.append(_old_Qubit)
_gatelist.append(Gate.from_cirq(op, _gatequbits))
self.gates=_gatelist
self.qubits=_qubits
def from_qiskit(self, qiskit_circuit):
"""Convert from a qiskit QuantumCircuit object to a core.circuit.Circuit object.
Args:
qiskit_circuit: qiskit QuantumCircuit object.
"""
self.name = qiskit_circuit.name
self.info['label'] = 'qiskit'
_gatelist = [] # list of gates for the output Circuit object
_qubits = [] # list of qubits for the output Circuit object
if len(qiskit_circuit.data) == 0:
return
_qiskit_qubits = [] # list of qiskit qubits in the circuit object
for gate_data in qiskit_circuit.data:
_gatequbits = []
for qubit in gate_data[1]:
def qubit_in_list(qubit, qubitlist): # check if a qiskit qubit is in a list of qiskit qubits
output = False
for q in qubitlist:
if qubit == q:
output = True
break
return output
if not qubit_in_list(qubit, _qiskit_qubits): # if the qubit is not seen before
_qiskit_qubits.append(qubit) # add the qiskit qubit to the list of currently seen qiskit qubits
_new_Qubit = Qubit.from_qiskit(qubit, qubit.index) # generate a new Qubit object
_qubits.append(_new_Qubit) # add to the list of Qubit objects for the output Circuit object
_gatequbits.append(_new_Qubit) # add to the list of Qubit objects that the gate acts on
else: # if the qubit is already seen before
for q in _qubits: # search for the old Qubit object in the _qubits list
if (q.info['qreg'] == str(qubit.register) and q.info['num'] == qubit.index):
_old_Qubit = q
break
_gatequbits.append(_old_Qubit)
zap_gate = Gate.from_qiskit(gate_data[0], _gatequbits)
if zap_gate is not None:
_gatelist.append(zap_gate)
self.gates=_gatelist
self.qubits=_qubits
def save_circuit(circuit, filename):
"""Saves a circuit object to a file.
Args:
circuit (core.Circuit): the circuit to be saved
filename (str): the name of the file
"""
with open(filename, 'w') as f:
f.write(json.dumps(circuit.to_dict()))
def load_circuit(file):
"""Loads a circuit from a file.
Args:
file (str or file-like object): the name of the file, or a file-like object.
Returns:
circuit (core.Circuit): the circuit
"""
if isinstance(file, str):
with open(file, 'r') as f:
data = json.load(f)
else:
data = json.load(file)
return Circuit.from_dict(data)
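`save_circuit`/`load_circuit` round-trip a circuit through `to_dict`/`from_dict` and JSON. A minimal sketch of that contract, with a hypothetical stand-in class (`_DemoCircuit` is not part of this module; the real `core.Circuit` is richer):

```python
import json

class _DemoCircuit:
    """Hypothetical stand-in exposing the to_dict/from_dict contract
    that save_circuit/load_circuit rely on."""
    def __init__(self, name, gates):
        self.name = name
        self.gates = gates

    def to_dict(self):
        return {'name': self.name, 'gates': self.gates}

    @classmethod
    def from_dict(cls, data):
        return cls(data['name'], data['gates'])

# Round-trip through a JSON string; writing to a file works the same way
original = _DemoCircuit('bell', [['H', 0], ['CNOT', 0, 1]])
restored = _DemoCircuit.from_dict(json.loads(json.dumps(original.to_dict())))
```

Any class with matching `to_dict`/`from_dict` serializes losslessly this way, as long as its fields are JSON-representable.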
def pyquil2cirq(qprog):
"""Convert a pyquil Program to a cirq Circuit.
Currently supports only common single- and two-qubit gates.
Args:
qprog (pyquil.quil.Program): the program to be converted.
Returns:
circuit (cirq.Circuit): the converted circuit"""
# A map between gate names used by pyquil and cirq gate objects
op_map = {
'X' : cirq.X,
'Y' : cirq.Y,
'Z' : cirq.Z,
'T' : cirq.T,
'H' : cirq.H,
'S' : cirq.S,
'RX' : cirq.XPowGate,
'RY' : cirq.YPowGate,
'RZ' : cirq.ZPowGate,
'CNOT' : cirq.CNOT,
'SWAP' : cirq.SWAP,
'CZ' : cirq.CZ,
'CPHASE' : cirq.ops.common_gates.CZPowGate
}
# Create the qubits. The row of each grid qubit is equal to the index
# of the corresponding pyquil qubit.
qubits = [cirq.GridQubit(i, 0) for i in qprog.get_qubits()]
# A map between the row of the qubit and the index in the qubits array
qubit_map = {}
for i in range(len(qubits)):
qubit_map[qubits[i].row] = i
circuit = cirq.Circuit()
for gate in qprog:
if not op_map.get(gate.name):
raise ValueError('Gate {} not yet supported'.format(gate.name))
# Find the cirq qubits that this gate acts on
target_qubits = [qubits[qubit_map[q.index]] for q in gate.qubits]
# Create the cirq gate
if len(gate.params)==0:
cirq_gate = op_map[gate.name](*target_qubits)
elif len(gate.params)==1:
cirq_gate = op_map[gate.name](exponent=gate.params[0]/np.pi)(*target_qubits)
else:
raise ValueError('Gates with more than one parameter not yet supported: {}'.format(gate))
# Append the gate to the circuit
circuit.append(cirq_gate, strategy=cirq.circuits.InsertStrategy.EARLIEST)
return circuit
def cirq2pyquil(circuit):
"""Convert a cirq Circuit to a pyquil Program.
Currently supports only common single- and two-qubit gates.
Args:
circuit (cirq.Circuit): the circuit to be converted.
Returns:
qprog (pyquil.quil.Program): the converted program."""
# A map between cirq gate string representations and pyquil gate classes
op_repr_map = {
'cirq.X' : pyquil.gates.X,
'cirq.Y' : pyquil.gates.Y,
'cirq.Z' : pyquil.gates.Z,
'cirq.T' : pyquil.gates.T,
'cirq.H' : pyquil.gates.H,
'cirq.S' : pyquil.gates.S,
'cirq.CNOT' : pyquil.gates.CNOT,
'cirq.SWAP' : pyquil.gates.SWAP,
'cirq.CZ' : pyquil.gates.CZ
}
# A map between cirq gate classes and pyquil gate classes. Perhaps better to parse repr?
op_type_map = {
cirq.ops.common_gates.XPowGate : pyquil.gates.RX,
cirq.ops.common_gates.YPowGate : pyquil.gates.RY,
cirq.ops.common_gates.ZPowGate : pyquil.gates.RZ,
cirq.ops.common_gates.CZPowGate : pyquil.gates.CPHASE
}
# Create a map from row/column tuples to linear qubit index
qubit_map = {}
qubit_count = 0
qubit = next(iter(circuit.all_qubits())) # Grab an arbitrary qubit to determine the qubit type
if isinstance(qubit, cirq.GridQubit):
qubit_key = lambda q: (q.row, q.col)
elif isinstance(qubit, cirq.LineQubit):
qubit_key = lambda q: q.x
else:
raise ValueError('Qubit type {} not yet supported'.format(type(qubit)))
for qubit in sorted(circuit.all_qubits(), key=qubit_key):
qubit_map[qubit_key(qubit)] = qubit_count
qubit_count += 1
# Create the program
qprog = pyquil.quil.Program()
def add_to_program(op):
"""Add a cirq op to the pyquil program qprog."""
# Find the linear indices of the qubits acted on by this operation
qubits = [qubit_map[qubit_key(q)] for q in op.qubits]
# First check if the string representation matches known gates
if op_repr_map.get(repr(op.gate)):
qprog.inst(op_repr_map[repr(op.gate)](*qubits))
# Next check if the type of the gate object matches known gates
elif op_type_map.get(type(op.gate)):
rads = op.gate.exponent*np.pi
pyquil_gate = op_type_map[type(op.gate)]
qprog.inst(pyquil_gate(rads, *qubits))
# Decompose if PhasedXPowGate or HPowGate
elif isinstance(op.gate, cirq.PhasedXPowGate) or isinstance(op.gate, cirq.HPowGate):
ops = cirq.decompose(op)
for op in ops:
add_to_program(op)
elif isinstance(op.gate, cirq.XXPowGate):
q1, q2 = op.qubits
ops = [cirq.H(q1), cirq.H(q2),
cirq.CNOT(q1, q2), cirq.Rz(op.gate.exponent*np.pi)(q2), cirq.CNOT(q1, q2),
cirq.H(q1), cirq.H(q2)]
for op in ops:
add_to_program(op)
elif isinstance(op.gate, cirq.YYPowGate):
q1, q2 = op.qubits
ops = [cirq.Z(q1)**0.5, cirq.Z(q2)**0.5, cirq.H(q1), cirq.H(q2),
cirq.CNOT(q1, q2), cirq.Rz(op.gate.exponent*np.pi)(q2), cirq.CNOT(q1, q2),
cirq.H(q1), cirq.H(q2), cirq.Z(q1)**-0.5, cirq.Z(q2)**-0.5]
for op in ops:
add_to_program(op)
elif isinstance(op.gate, cirq.ZZPowGate):
q1, q2 = op.qubits
ops = [cirq.CNOT(q1, q2), cirq.Rz(op.gate.exponent*np.pi)(q2), cirq.CNOT(q1, q2)]
for op in ops:
add_to_program(op)
else:
raise ValueError('Gate {} not yet supported'.format(op.gate))
for moment in circuit:
for op in moment.operations:
add_to_program(op)
return qprog
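Both converters rely on the same angle convention: pyquil rotation gates take radians, while cirq `*PowGate`s take a half-turn exponent, so `exponent = theta / np.pi` on the way in and `rads = exponent * np.pi` on the way out. A self-contained round-trip check of that arithmetic (no cirq/pyquil needed):

```python
import numpy as np

def pyquil_angle_to_cirq_exponent(theta):
    # pyquil RX(theta) rotates by theta radians; cirq XPowGate uses half-turns
    return theta / np.pi

def cirq_exponent_to_pyquil_angle(exponent):
    return exponent * np.pi

theta = 0.75 * np.pi
exponent = pyquil_angle_to_cirq_exponent(theta)  # ~0.75 half-turns
# Converting back recovers the original radian angle
assert np.isclose(cirq_exponent_to_pyquil_angle(exponent), theta)
```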
def add_gate_to_pyquil_program(pyquil_program, gate):
"""Add the definition of a gate to a pyquil Program object if the gate is
not currently defined.
Args:
pyquil_program: pyquil.Program
The input Program object to which the gate is going to be added.
gate: Gate (core.circuit)
The Gate object describing the gate to be added.
Returns:
A new pyquil.Program object with the definition of the new gate being added.
"""
if gate.name in COMMON_GATES: # if a gate is already included in pyquil
return pyquil_program + gate.to_pyquil() # do nothing
elif gate.name in UNIQUE_GATES: # if a gate is unique to a specific package
if gate.name == 'ZXZ':
beta = pyquil.quilatom.Parameter('beta')
gamma = pyquil.quilatom.Parameter('gamma')
zxz_unitary = np.array([[quil_cos(gamma/2),\
-quil_sin(beta)*quil_sin(gamma/2)-1j*quil_cos(beta)*quil_sin(gamma/2)],\
[quil_sin(beta)*quil_sin(gamma/2)-1j*quil_cos(beta)*quil_sin(gamma/2),\
quil_cos(gamma/2)]])
zxz_def = pyquil.quilbase.DefGate('ZXZ', zxz_unitary, [beta, gamma])
ZXZ = zxz_def.get_constructor()
return pyquil_program + zxz_def + ZXZ(gate.params[0], gate.params[1])(gate.qubits[0].index)
if gate.name == 'RH':
beta = pyquil.quilatom.Parameter('beta')
elem00 = quil_cos(beta/2)-1j*1/np.sqrt(2)*quil_sin(beta/2)
elem01 = -1j*1/np.sqrt(2)*quil_sin(beta/2)
elem10 = -1j*1/np.sqrt(2)*quil_sin(beta/2)
elem11 = quil_cos(beta/2)+1j*1/np.sqrt(2)*quil_sin(beta/2)
rh_unitary = np.array([[elem00, elem01], [elem10, elem11]])
rh_def = pyquil.quilbase.DefGate('RH', rh_unitary, [beta])
RH = rh_def.get_constructor()
return pyquil_program + rh_def + RH(gate.params[0])(gate.qubits[0].index)
if gate.name == 'XX': # XX gate (modified from XXPowGate in cirq)
beta = pyquil.quilatom.Parameter('beta')
elem_cos = quil_cos(beta)
elem_sin = 1j * quil_sin(beta)
xx_unitary = np.array([[elem_cos, 0, 0, elem_sin],
[0, elem_cos, elem_sin, 0],
[0, elem_sin, elem_cos, 0],
[elem_sin, 0, 0, elem_cos]])
xx_def = pyquil.quilbase.DefGate('XX', xx_unitary, [beta])
XX = xx_def.get_constructor()
return pyquil_program + xx_def + XX(gate.params[0])(gate.qubits[0].index, gate.qubits[1].index)
if gate.name == 'YY': # YY gate (modified from XXPowGate in cirq)
beta = pyquil.quilatom.Parameter('beta')
elem_cos = quil_cos(beta)
elem_sin = 1j * quil_sin(beta)
yy_unitary = np.array([[elem_cos, 0, 0, elem_sin],
[0, elem_cos, -elem_sin, 0],
[0, -elem_sin, elem_cos, 0],
[elem_sin, 0, 0, elem_cos]])
yy_def = pyquil.quilbase.DefGate('YY', yy_unitary, [beta])
YY = yy_def.get_constructor()
return pyquil_program + yy_def + YY(gate.params[0])(gate.qubits[0].index, gate.qubits[1].index)
if gate.name == 'ZZ': # ZZ gate (modified from XXPowGate in cirq)
beta = pyquil.quilatom.Parameter('beta')
elem_cos = quil_cos(beta)
elem_sin = 1j * quil_sin(beta)
zz_unitary = np.array([[elem_cos+elem_sin, 0, 0, 0],
[0, elem_cos-elem_sin, 0, 0],
[0, 0, elem_cos-elem_sin, 0],
[0, 0, 0, elem_cos+elem_sin]])
zz_def = pyquil.quilbase.DefGate('ZZ', zz_unitary, [beta])
ZZ = zz_def.get_constructor()
return pyquil_program + zz_def + ZZ(gate.params[0])(gate.qubits[0].index, gate.qubits[1].index)
if gate.name == 'U1ex': # IBM U1ex gate (arXiv:1805.04340v1)
alpha = pyquil.quilatom.Parameter('alpha')
beta = pyquil.quilatom.Parameter('beta')
elem_cos = quil_cos(beta)
elem_sin = 1j * quil_sin(beta)
unitary = [[1, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 1]]
unitary[1][1] = quil_cos(alpha)
unitary[2][2] = -quil_cos(alpha)
unitary[2][1] = (quil_cos(beta) - 1j*quil_sin(beta))*quil_sin(alpha)
unitary[1][2] = (quil_cos(beta) + 1j*quil_sin(beta))*quil_sin(alpha)
u1ex_def = pyquil.quilbase.DefGate('U1ex', np.array(unitary), [alpha, beta])
U1ex = u1ex_def.get_constructor()
output_program = pyquil_program + U1ex(gate.params[0], gate.params[1])(gate.qubits[0].index, gate.qubits[1].index)
gate_already_defined = False
for gate_definition in pyquil_program.defined_gates:
if gate_definition.name == 'U1ex':
gate_already_defined = True
break
if not gate_already_defined:
output_program = output_program + u1ex_def
return output_program
if gate.name == 'U2ex': # IBM U2ex gate (arXiv:1805.04340v1)
alpha = pyquil.quilatom.Parameter('alpha')
unitary = [[1, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 0],
[0, 0, 0, 1]]
unitary[1][1] = quil_cos(2 * alpha)
unitary[2][2] = quil_cos(2 * alpha)
unitary[2][1] = - 1j*quil_sin(2 * alpha)
unitary[1][2] = - 1j*quil_sin(2 * alpha)
u2ex_def = pyquil.quilbase.DefGate('U2ex', np.array(unitary), [alpha])
U2ex = u2ex_def.get_constructor()
output_program = pyquil_program + U2ex(gate.params[0])(gate.qubits[0].index, gate.qubits[1].index)
gate_already_defined = False
for gate_definition in pyquil_program.defined_gates:
if gate_definition.name == 'U2ex':
gate_already_defined = True
break
if not gate_already_defined:
output_program = output_program + u2ex_def
return output_program
if gate.name == "MEASURE":
reg_name = 'r'+str(gate.qubits[0].index)
ro = pyquil_program.declare(reg_name, 'BIT', 1)
return pyquil_program + MEASURE(gate.qubits[0].index, ro[0])
if gate.name == "BARRIER":
return pyquil_program
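The symbolic XX/YY/ZZ matrices defined above can be sanity-checked numerically. For instance, the ZZ matrix is diag(e^{i beta}, e^{-i beta}, e^{-i beta}, e^{i beta}) and hence unitary; a numpy sketch (beta = 0.3 is arbitrary):

```python
import numpy as np

def zz_unitary(beta):
    # Numeric counterpart of the symbolic ZZ matrix built with quil_cos/quil_sin
    c, s = np.cos(beta), 1j * np.sin(beta)
    return np.array([[c + s, 0, 0, 0],
                     [0, c - s, 0, 0],
                     [0, 0, c - s, 0],
                     [0, 0, 0, c + s]])

U = zz_unitary(0.3)
# diag(e^{i b}, e^{-i b}, e^{-i b}, e^{i b}) satisfies U U† = I
assert np.allclose(U @ U.conj().T, np.eye(4))
```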
# coding: utf-8
# Copyright 2020 Tencent. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
''' Applications based on TinyBERT. '''
import os
import copy
import numpy as np
from uf.tools import tf
from .base import ClassifierModule
from uf.modeling.tiny_bert import TinyBERTCLSDistillor
from .bert import BERTClassifier, get_bert_config
from uf.tokenization.word_piece import get_word_piece_tokenizer
import uf.utils as utils
class TinyBERTClassifier(BERTClassifier, ClassifierModule):
''' Single-label classifier on TinyBERT, a distillation model. '''
_INFER_ATTRIBUTES = BERTClassifier._INFER_ATTRIBUTES
def __init__(self,
config_file,
vocab_file,
max_seq_length=128,
label_size=None,
init_checkpoint=None,
output_dir=None,
gpu_ids=None,
drop_pooler=False,
hidden_size=384,
num_hidden_layers=4,
do_lower_case=True,
truncate_method='LIFO'):
super(ClassifierModule, self).__init__(
init_checkpoint, output_dir, gpu_ids)
self.batch_size = 0
self.max_seq_length = max_seq_length
self.label_size = label_size
self.truncate_method = truncate_method
self._drop_pooler = drop_pooler
self._id_to_label = None
self.__init_args__ = locals()
self.bert_config = get_bert_config(config_file)
self.tokenizer = get_word_piece_tokenizer(vocab_file, do_lower_case)
self._key_to_depths = 'unsupported'
self.student_config = copy.deepcopy(self.bert_config)
self.student_config.hidden_size = hidden_size
self.student_config.intermediate_size = 4 * hidden_size
self.student_config.num_hidden_layers = num_hidden_layers
if '[CLS]' not in self.tokenizer.vocab:
self.tokenizer.add('[CLS]')
self.bert_config.vocab_size += 1
self.student_config.vocab_size += 1
tf.logging.info('Add necessary token `[CLS]` into vocabulary.')
if '[SEP]' not in self.tokenizer.vocab:
self.tokenizer.add('[SEP]')
self.bert_config.vocab_size += 1
self.student_config.vocab_size += 1
tf.logging.info('Add necessary token `[SEP]` into vocabulary.')
def to_bert(self):
''' Isolate the student tiny_bert from the training graph. '''
if not self._graph_built:
raise ValueError(
'Fit, predict or score before saving checkpoint.')
if not self.output_dir:
raise ValueError('Attribute `output_dir` is None.')
tf.logging.info(
'Saving checkpoint into %s/bert_model.ckpt'
% (self.output_dir))
self.init_checkpoint = (
self.output_dir + '/bert_model.ckpt')
assignment_map = {}
for var in self.global_variables:
if var.name.startswith('tiny/'):
assignment_map[var.name.replace('tiny/', '')[:-2]] = var
saver = tf.train.Saver(assignment_map, max_to_keep=1000000)
saver.save(self.sess, self.init_checkpoint)
self.student_config.to_json_file(
os.path.join(self.output_dir, 'bert_config.json'))
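The checkpoint re-mapping in `to_bert` hinges on the string surgery `var.name.replace('tiny/', '')[:-2]`, which drops the `tiny/` variable scope and TensorFlow's trailing `:0` tensor suffix. A quick standalone check (the variable name below is illustrative):

```python
# TF variable names end in ':0'; to_bert strips the scope and that suffix
name = 'tiny/bert/encoder/layer_0/kernel:0'
stripped = name.replace('tiny/', '')[:-2]
assert stripped == 'bert/encoder/layer_0/kernel'
```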
def convert(self, X=None, y=None, sample_weight=None, X_tokenized=None,
is_training=False):
self._assert_legal(X, y, sample_weight, X_tokenized)
if is_training:
assert y is None, (
'Training of %s is unsupervised. `y` should be None.'
% self.__class__.__name__)
n_inputs = None
data = {}
# convert X
if X or X_tokenized:
tokenized = False if X else X_tokenized
input_ids, input_mask, segment_ids = self._convert_X(
X_tokenized if tokenized else X, tokenized=tokenized)
data['input_ids'] = np.array(input_ids, dtype=np.int32)
data['input_mask'] = np.array(input_mask, dtype=np.int32)
data['segment_ids'] = np.array(segment_ids, dtype=np.int32)
n_inputs = len(input_ids)
if n_inputs < self.batch_size:
self.batch_size = max(n_inputs, len(self._gpu_ids))
if y:
# convert y and sample_weight
label_ids = self._convert_y(y)
data['label_ids'] = np.array(label_ids, dtype=np.int32)
# convert sample_weight
if is_training or y:
sample_weight = self._convert_sample_weight(
sample_weight, n_inputs)
data['sample_weight'] = np.array(sample_weight, dtype=np.float32)
return data
def _forward(self, is_training, split_placeholders, **kwargs):
distillor = TinyBERTCLSDistillor(
student_config=self.student_config,
bert_config=self.bert_config,
is_training=is_training,
input_ids=split_placeholders['input_ids'],
input_mask=split_placeholders['input_mask'],
segment_ids=split_placeholders['segment_ids'],
sample_weight=split_placeholders.get('sample_weight'),
scope='bert',
drop_pooler=self._drop_pooler,
label_size=self.label_size,
**kwargs)
(total_loss, losses, probs, preds) = distillor.get_forward_outputs()
return (total_loss, losses, probs, preds)
def _get_fit_ops(self, as_feature=False):
return [self._train_op, self._losses['losses']]
def _get_fit_info(self, output_arrays, feed_dict, as_feature=False):
# loss
batch_losses = output_arrays[1]
loss = np.mean(batch_losses)
info = ''
info += ', distill loss %.6f' % loss
return info
def _get_predict_ops(self):
return [self._probs['probs']]
def _get_predict_outputs(self, batch_outputs):
n_inputs = len(list(self.data.values())[0])
output_arrays = list(zip(*batch_outputs))
# probs
probs = utils.transform(output_arrays[0], n_inputs)
# preds
preds = np.argmax(probs, axis=-1).tolist()
if self._id_to_label:
preds = [self._id_to_label[idx] for idx in preds]
outputs = {}
outputs['preds'] = preds
outputs['probs'] = probs
return outputs
def _get_score_ops(self):
return [self._probs['probs']]
def _get_score_outputs(self, batch_outputs):
n_inputs = len(list(self.data.values())[0])
output_arrays = list(zip(*batch_outputs))
# accuracy
probs = utils.transform(output_arrays[0], n_inputs)
preds = np.argmax(probs, axis=-1)
labels = self.data['label_ids']
accuracy = np.mean(preds == labels)
# loss
losses = [-np.log(probs[i][label]) for i, label in enumerate(labels)]
sample_weight = self.data['sample_weight']
losses = np.array(losses) * sample_weight
loss = np.mean(losses)
outputs = {}
outputs['accuracy'] = accuracy
outputs['loss'] = loss
return outputs
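The accuracy and sample-weighted negative log-likelihood computed in `_get_score_outputs` can be exercised standalone with numpy; a minimal sketch on toy data (the probabilities, labels, and weights below are made up):

```python
import numpy as np

# Toy probabilities for 3 examples over 2 classes (stand-in data)
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4]])
labels = np.array([0, 1, 1])
sample_weight = np.array([1.0, 1.0, 0.5])

preds = np.argmax(probs, axis=-1)        # [0, 1, 0]
accuracy = np.mean(preds == labels)      # 2 of 3 correct

# Per-example negative log-likelihood, weighted as in _get_score_outputs
losses = np.array([-np.log(probs[i][label]) for i, label in enumerate(labels)])
loss = np.mean(losses * sample_weight)
```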
import numpy as np
import glob
import re
def getsequenceandstructure(filename, headersize):
data = np.loadtxt(filename, skiprows = headersize, dtype='str')
sequence = data[0]
pattern = re.compile('.{1,1}')
sequence = ' '.join(pattern.findall(sequence))
structure = data[1]
structure = ' '.join(pattern.findall(structure))
return sequence, structure
def writedatafile(paths, outfile, headersize):
f = open(outfile, 'w')
for path in paths:
sequence, structure = getsequenceandstructure(path, headersize)
f.write(path + '\n')
f.write(sequence + ' \n')
f.write(structure + ' \n')
f.write('\n')
f.close()
return
if __name__ == '__main__':
# CHANGE THESE IF YOU'RE USING YOUR OWN DATA
outfile = 'sta_data.txt' # output file to write to
headersize = 2 # number of lines in the .sta file before the sequence begins
# get all filepaths
file_pattern = '*.sta'
paths = glob.glob(file_pattern, recursive = True)
writedatafile(paths, outfile, headersize)
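The `.{1,1}` regex in `getsequenceandstructure` just matches one character at a time, so the join is equivalent to joining the string directly; a quick check:

```python
import re

sequence = "ACGU"
pattern = re.compile('.{1,1}')  # matches exactly one character per hit, as in the script
spaced = ' '.join(pattern.findall(sequence))
assert spaced == 'A C G U'
assert spaced == ' '.join(sequence)  # same result, simpler form
```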
import numpy as np
import pybullet as p
from enum import Enum
from PIL import Image
from gym import spaces
from gym_pybullet_drones.envs.BaseAviary import DroneModel, BaseAviary
################################################################################
class Physics(Enum):
"""Physics implementations enumeration class."""
PYB = "pyb" # Base PyBullet physics update
DYN = "dyn" # Update with an explicit model of the dynamics
PYB_GND = "pyb_gnd" # PyBullet physics update with ground effect
PYB_DRAG = "pyb_drag" # PyBullet physics update with drag
PYB_DW = "pyb_dw" # PyBullet physics update with downwash
PYB_GND_DRAG_DW = "pyb_gnd_drag_dw" # PyBullet physics update with ground effect, drag, and downwash
PYB_AERO = "pyb_aero" # PyBullet physics update with aerodynamic forces due to wind (bluff-body and induction)
PYB_BF = "pyb_bf" # PyBullet physics update with blade flapping forces due to wind
PyB_AERO_BF = "pyb_aero_bf" # Pybullet physics update with aerodynamics forces, blade flapping due to wind
################################################################################
class FlowAviary(BaseAviary):
"""Multi-drone environment class for flow-sensor wind estimation applications."""
################################################################################
# Q: Any issues from adding the initial_wind argument?
def __init__(self,
drone_model: DroneModel=DroneModel.CF2X,
num_drones: int=1,
neighbourhood_radius: float=np.inf,
initial_xyzs=None,
initial_rpys=None,
initial_wind=None,
physics: Physics=Physics.PYB_AERO,
freq: int=240,
aggregate_phy_steps: int=1,
gui=False,
record=False,
obstacles=False,
user_debug_gui=True
):
"""Initialization of an aviary environment for control applications.
Parameters
----------
drone_model : DroneModel, optional
The desired drone type (detailed in an .urdf file in folder `assets`).
num_drones : int, optional
The desired number of drones in the aviary.
neighbourhood_radius : float, optional
Radius used to compute the drones' adjacency matrix, in meters.
initial_xyzs: ndarray | None, optional
(NUM_DRONES, 3)-shaped array containing the initial XYZ position of the drones.
initial_rpys: ndarray | None, optional
(NUM_DRONES, 3)-shaped array containing the initial orientations of the drones (in radians).
initial_wind: ndarray | None, optional
(, 3)-shaped array containing the initial free-stream wind velocity of the environment.
physics : Physics, optional
The desired implementation of PyBullet physics/custom dynamics.
freq : int, optional
The frequency (Hz) at which the physics engine steps.
aggregate_phy_steps : int, optional
The number of physics steps within one call to `BaseAviary.step()`.
gui : bool, optional
Whether to use PyBullet's GUI.
record : bool, optional
Whether to save a video of the simulation in folder `files/videos/`.
obstacles : bool, optional
Whether to add obstacles to the simulation.
user_debug_gui : bool, optional
Whether to draw the drones' axes and the GUI RPMs sliders.
"""
# Automatically inherit the methods and properties from its parent (BaseAviary)
# See inheritance guide: https://www.w3schools.com/python/python_inheritance.asp
# Q: Since super, this should contain the same arguments as BaseAviary.
super().__init__(drone_model=drone_model,
num_drones=num_drones,
neighbourhood_radius=neighbourhood_radius,
initial_xyzs=initial_xyzs,
initial_rpys=initial_rpys,
physics=physics,
freq=freq,
aggregate_phy_steps=aggregate_phy_steps,
gui=gui,
record=record,
obstacles=obstacles,
user_debug_gui=user_debug_gui
)
#### Parameters ############################################
# Q: not sure if this is the correct implementation of a wind parameter
self.wind = initial_wind
################################################################################
# Overriding step function to include custom aero dynamics
def step(self,
action
):
"""Advances the environment by one simulation step.
Parameters
----------
action : ndarray | dict[..]
The input action for one or more drones, translated into RPMs by
the specific implementation of `_preprocessAction()` in each subclass.
Returns
-------
ndarray | dict[..]
The step's observation, check the specific implementation of `_computeObs()`
in each subclass for its format.
float | dict[..]
The step's reward value(s), check the specific implementation of `_computeReward()`
in each subclass for its format.
bool | dict[..]
Whether the current episode is over, check the specific implementation of `_computeDone()`
in each subclass for its format.
dict[..]
Additional information as a dictionary, check the specific implementation of `_computeInfo()`
in each subclass for its format.
"""
#### Save PNG video frames if RECORD=True and GUI=False ####
if self.RECORD and not self.GUI and self.step_counter%self.CAPTURE_FREQ == 0:
[w, h, rgb, dep, seg] = p.getCameraImage(width=self.VID_WIDTH,
height=self.VID_HEIGHT,
shadow=1,
viewMatrix=self.CAM_VIEW,
projectionMatrix=self.CAM_PRO,
renderer=p.ER_TINY_RENDERER,
flags=p.ER_SEGMENTATION_MASK_OBJECT_AND_LINKINDEX,
physicsClientId=self.CLIENT
)
(Image.fromarray(np.reshape(rgb, (h, w, 4)), 'RGBA')).save(self.IMG_PATH+"frame_"+str(self.FRAME_NUM)+".png")
#### Save the depth or segmentation view instead #######
# dep = ((dep-np.min(dep)) * 255 / (np.max(dep)-np.min(dep))).astype('uint8')
# (Image.fromarray(np.reshape(dep, (h, w)))).save(self.IMG_PATH+"frame_"+str(self.FRAME_NUM)+".png")
# seg = ((seg-np.min(seg)) * 255 / (np.max(seg)-np.min(seg))).astype('uint8')
# (Image.fromarray(np.reshape(seg, (h, w)))).save(self.IMG_PATH+"frame_"+str(self.FRAME_NUM)+".png")
self.FRAME_NUM += 1
#### Read the GUI's input parameters #######################
if self.GUI and self.USER_DEBUG:
current_input_switch = p.readUserDebugParameter(self.INPUT_SWITCH, physicsClientId=self.CLIENT)
if current_input_switch > self.last_input_switch:
self.last_input_switch = current_input_switch
self.USE_GUI_RPM = True if self.USE_GUI_RPM == False else False
if self.USE_GUI_RPM:
for i in range(4):
self.gui_input[i] = p.readUserDebugParameter(int(self.SLIDERS[i]), physicsClientId=self.CLIENT)
clipped_action = np.tile(self.gui_input, (self.NUM_DRONES, 1))
if self.step_counter%(self.SIM_FREQ/2) == 0:
self.GUI_INPUT_TEXT = [p.addUserDebugText("Using GUI RPM",
textPosition=[0, 0, 0],
textColorRGB=[1, 0, 0],
lifeTime=1,
textSize=2,
parentObjectUniqueId=self.DRONE_IDS[i],
parentLinkIndex=-1,
replaceItemUniqueId=int(self.GUI_INPUT_TEXT[i]),
physicsClientId=self.CLIENT
) for i in range(self.NUM_DRONES)]
#### Save, preprocess, and clip the action to the max. RPM #
else:
self._saveLastAction(action)
clipped_action = np.reshape(self._preprocessAction(action), (self.NUM_DRONES, 4))
#### Repeat for as many as the aggregate physics steps #####
for _ in range(self.AGGR_PHY_STEPS):
#### Update and store the drones kinematic info for certain
#### Between aggregate steps for certain types of update ###
if self.AGGR_PHY_STEPS > 1 and self.PHYSICS in [Physics.DYN, Physics.PYB_GND, Physics.PYB_DRAG, Physics.PYB_DW, Physics.PYB_GND_DRAG_DW, Physics.PYB_AERO, Physics.PYB_BF, Physics.PyB_AERO_BF]:
self._updateAndStoreKinematicInformation()
#### Step the simulation using the desired physics update ##
for i in range (self.NUM_DRONES):
if self.PHYSICS == Physics.PYB:
self._physics(clipped_action[i, :], i)
elif self.PHYSICS == Physics.DYN:
self._dynamics(clipped_action[i, :], i)
elif self.PHYSICS == Physics.PYB_GND:
self._physics(clipped_action[i, :], i)
self._groundEffect(clipped_action[i, :], i)
elif self.PHYSICS == Physics.PYB_DRAG:
self._physics(clipped_action[i, :], i)
self._drag(self.last_clipped_action[i, :], i)
elif self.PHYSICS == Physics.PYB_DW:
self._physics(clipped_action[i, :], i)
self._downwash(i)
elif self.PHYSICS == Physics.PYB_GND_DRAG_DW:
self._physics(clipped_action[i, :], i)
self._groundEffect(clipped_action[i, :], i)
self._drag(self.last_clipped_action[i, :], i)
self._downwash(i)
elif self.PHYSICS == Physics.PYB_AERO:
self._physics(clipped_action[i, :], i)
self._aeroForces(clipped_action[i, :], i) # clipped or last-clipped?
elif self.PHYSICS == Physics.PYB_BF:
self._physics(clipped_action[i, :], i)
self._bladeFlapping(clipped_action[i, :], i) # clipped or last-clipped?
elif self.PHYSICS == Physics.PyB_AERO_BF:
self._physics(clipped_action[i, :], i)
self._aeroForces(clipped_action[i, :], i) # clipped or last-clipped?
self._bladeFlapping(clipped_action[i, :], i) # clipped or last-clipped?
#### PyBullet computes the new state, unless Physics.DYN ###
if self.PHYSICS != Physics.DYN:
p.stepSimulation(physicsClientId=self.CLIENT)
#### Save the last applied action (e.g. to compute drag) ###
self.last_clipped_action = clipped_action
#### Update and store the drones kinematic information #####
self._updateAndStoreKinematicInformation()
#### Prepare the return values #############################
obs = self._computeObs()
reward = self._computeReward()
done = self._computeDone()
info = self._computeInfo()
#### Advance the step counter ##############################
self.step_counter = self.step_counter + (1 * self.AGGR_PHY_STEPS)
return obs, reward, done, info
################################################################################
def _aeroForces(self,
rpm,
nth_drone
):
"""PyBullet implementation of bluff body and induced drag.
Based on the aerodynamic model in (Paley, 2020).
Parameters
----------
rpm : ndarray
(4)-shaped array of ints containing the RPMs values of the 4 motors.
nth_drone : int
The ordinal number/position of the desired drone in list self.DRONE_IDS.
"""
#### Rotation matrix of the base ###########################
base_rot = np.array(p.getMatrixFromQuaternion(self.quat[nth_drone, :])).reshape(3, 3)
#### Simple drag model applied to the base/center of mass #
drag_factors = -1 * self.DRAG_COEFF * np.sum(np.array(2*np.pi*rpm/60))
drag = np.dot(base_rot, drag_factors*np.array(self.vel[nth_drone, :]))
p.applyExternalForce(self.DRONE_IDS[nth_drone],
4,
forceObj=drag,
posObj=[0, 0, 0],
flags=p.LINK_FRAME,
physicsClientId=self.CLIENT
)
################################################################################
def _bladeFlapping(self,
rpm,
nth_drone
):
"""PyBullet implementation of blade-flapping induced torque on a rotor.
Based on the aerodynamic model in (Paley, 2020).
Parameters
----------
rpm : ndarray
(4)-shaped array of ints containing the RPMs values of the 4 motors.
nth_drone : int
The ordinal number/position of the desired drone in list self.DRONE_IDS.
"""
#### Rotation matrix of the base ###########################
base_rot = np.array(p.getMatrixFromQuaternion(self.quat[nth_drone, :])).reshape(3, 3)
#### Simple drag model applied to the base/center of mass #
drag_factors = -1 * self.DRAG_COEFF * np.sum(np.array(2*np.pi*rpm/60))
drag = np.dot(base_rot, drag_factors*np.array(self.vel[nth_drone, :]))
p.applyExternalForce(self.DRONE_IDS[nth_drone],
4,
forceObj=drag,
posObj=[0, 0, 0],
flags=p.LINK_FRAME,
physicsClientId=self.CLIENT
)
################################################################################
def _actionSpace(self):
"""Returns the action space of the environment.
Returns
-------
dict[str, ndarray]
A Dict of Box(4,) with NUM_DRONES entries,
indexed by drone Id in string format.
"""
#### Action vector ######## P0 P1 P2 P3
act_lower_bound = np.array([0., 0., 0., 0.])
act_upper_bound = np.array([self.MAX_RPM, self.MAX_RPM, self.MAX_RPM, self.MAX_RPM])
return spaces.Dict({str(i): spaces.Box(low=act_lower_bound,
high=act_upper_bound,
dtype=np.float32
) for i in range(self.NUM_DRONES)})
################################################################################
def _observationSpace(self):
"""Returns the observation space of the environment.
Returns
-------
dict[str, dict[str, ndarray]]
A Dict with NUM_DRONES entries indexed by Id in string format,
each a Dict in the form {Box(20,), MultiBinary(NUM_DRONES)}.
"""
#### Observation vector ### X Y Z Q1 Q2 Q3 Q4 R P Y VX VY VZ WX WY WZ P0 P1 P2 P3
obs_lower_bound = np.array([-np.inf, -np.inf, 0., -1., -1., -1., -1., -np.pi, -np.pi, -np.pi, -np.inf, -np.inf, -np.inf, -np.inf, -np.inf, -np.inf, 0., 0., 0., 0.])
obs_upper_bound = np.array([np.inf, np.inf, np.inf, 1., 1., 1., 1., np.pi, np.pi, np.pi, np.inf, np.inf, np.inf, np.inf, np.inf, np.inf, self.MAX_RPM, self.MAX_RPM, self.MAX_RPM, self.MAX_RPM])
return spaces.Dict({str(i): spaces.Dict({"state": spaces.Box(low=obs_lower_bound,
high=obs_upper_bound,
dtype=np.float32
),
"neighbors": spaces.MultiBinary(self.NUM_DRONES)
}) for i in range(self.NUM_DRONES)})
################################################################################
def _computeObs(self):
"""Returns the current observation of the environment.
For the value of key "state", see the implementation of `_getDroneStateVector()`,
the value of key "neighbors" is the drone's own row of the adjacency matrix.
Returns
-------
dict[str, dict[str, ndarray]]
A Dict with NUM_DRONES entries indexed by Id in string format,
each a Dict in the form {Box(20,), MultiBinary(NUM_DRONES)}.
"""
adjacency_mat = self._getAdjacencyMatrix()
return {str(i): {"state": self._getDroneStateVector(i), "neighbors": adjacency_mat[i, :]} for i in range(self.NUM_DRONES)}
################################################################################
def _preprocessAction(self,
action
):
"""Pre-processes the action passed to `.step()` into motors' RPMs.
Clips and converts a dictionary into a 2D array.
Parameters
----------
action : dict[str, ndarray]
The (unbounded) input action for each drone, to be translated into feasible RPMs.
Returns
-------
ndarray
(NUM_DRONES, 4)-shaped array of ints containing the clipped RPMs
commanded to the 4 motors of each drone.
"""
clipped_action = np.zeros((self.NUM_DRONES, 4))
for k, v in action.items():
clipped_action[int(k), :] = np.clip(np.array(v), 0, self.MAX_RPM)
return clipped_action
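`_preprocessAction` just clips each drone's dict entry into [0, MAX_RPM] and stacks the rows by drone id; a standalone numpy sketch of that loop (the `MAX_RPM` value and action entries below are illustrative):

```python
import numpy as np

MAX_RPM = 20000.0  # assumed limit, for illustration only
NUM_DRONES = 2

# Dict action as _preprocessAction expects: drone id (str) -> 4 RPM commands
action = {'0': np.array([-500.0, 1000.0, 25000.0, 4000.0]),
          '1': np.array([0.0, MAX_RPM, 1.0, 2.0])}

clipped_action = np.zeros((NUM_DRONES, 4))
for k, v in action.items():
    clipped_action[int(k), :] = np.clip(np.array(v), 0, MAX_RPM)

# Negative commands clip to 0, over-limit commands clip to MAX_RPM:
# clipped_action[0] == [0., 1000., 20000., 4000.]
```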
################################################################################
def _computeReward(self):
"""Computes the current reward value(s).
Unused as this subclass is not meant for reinforcement learning.
Returns
-------
int
Dummy value.
"""
return -1
################################################################################
def _computeDone(self):
"""Computes the current done value(s).
Unused as this subclass is not meant for reinforcement learning.
Returns
-------
bool
Dummy value.
"""
return False
################################################################################
def _computeInfo(self):
"""Computes the current info dict(s).
Unused as this subclass is not meant for reinforcement learning.
Returns
-------
dict[str, int]
Dummy value.
"""
return {"answer": 42} #### Calculated by the Deep Thought supercomputer in 7.5M years
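The clipping in `_preprocessAction` can be exercised standalone. In the sketch below, `MAX_RPM` and the two-drone setup are assumed values for illustration; the real maximum RPM comes from the drone model.

```python
import numpy as np

# Hypothetical constants standing in for the environment's attributes.
MAX_RPM = 21702.0  # assumed value; the real one comes from the drone model
NUM_DRONES = 2

def preprocess_action(action, num_drones=NUM_DRONES, max_rpm=MAX_RPM):
    """Clip a {str(drone_id): ndarray(4,)} action dict into a
    (num_drones, 4) RPM array, mirroring _preprocessAction above."""
    clipped = np.zeros((num_drones, 4))
    for k, v in action.items():
        clipped[int(k), :] = np.clip(np.array(v), 0, max_rpm)
    return clipped

# Negative and oversized commands are clipped; drones without an entry stay at zero.
rpms = preprocess_action({"0": [-100.0, 0.0, 1e9, 5000.0]})
```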
"""
Copyright 2019 Johns Hopkins University (Author: Jesus Villalba)
Apache 2.0 (http://www.apache.org/licenses/LICENSE-2.0)
"""
from __future__ import absolute_import
from __future__ import print_function
from __future__ import division
import numpy as np
class LoggerList(object):
"""Container for a list of logger callbacks
Attributes:
loggers: list of Logger objects
"""
def __init__(self, loggers=None):
self.loggers = loggers or []
def append(self, logger):
self.loggers.append(logger)
def on_epoch_begin(self, epoch, logs=None, **kwargs):
"""At the start of an epoch
Args:
epoch: index of the epoch
logs: dictionary of logs
"""
logs = logs or {}
for logger in self.loggers:
logger.on_epoch_begin(epoch, logs, **kwargs)
def on_epoch_end(self, logs=None, **kwargs):
"""At the end of an epoch
Args:
logs: dictionary of logs
"""
logs = logs or {}
for logger in self.loggers:
logger.on_epoch_end(logs, **kwargs)
def on_batch_begin(self, batch, logs=None, **kwargs):
"""At the start of a batch
Args:
batch: batch index within the epoch
logs: dictionary of logs
"""
logs = logs or {}
for logger in self.loggers:
logger.on_batch_begin(batch, logs, **kwargs)
def on_batch_end(self, logs=None, **kwargs):
"""At the end of a batch
Args:
logs: dictionary of logs
"""
logs = logs or {}
for logger in self.loggers:
logger.on_batch_end(logs, **kwargs)
def on_train_begin(self, logs=None, **kwargs):
"""At the start of training
Args:
logs: dictionary of logs
"""
logs = logs or {}
for logger in self.loggers:
logger.on_train_begin(logs, **kwargs)
def on_train_end(self, logs=None, **kwargs):
"""At the end of training
Args:
logs: dictionary of logs
"""
logs = logs or {}
for logger in self.loggers:
logger.on_train_end(logs, **kwargs)
def __iter__(self):
return iter(self.loggers)
class Logger(object):
"""Base class for logger objects
Attributes:
params: training params dictionary
"""
def __init__(self):
self.cur_epoch = 0
self.cur_batch = 0
self.params = None
def on_epoch_begin(self, epoch, logs, **kwargs):
"""At the start of an epoch
Args:
epoch: index of the epoch
logs: dictionary of logs
"""
self.cur_epoch = epoch
def on_epoch_end(self, logs, **kwargs):
"""At the end of an epoch
Args:
logs: dictionary of logs
"""
pass
def on_batch_begin(self, batch, logs, **kwargs):
"""At the start of a batch
Args:
batch: batch index within the epoch
logs: dictionary of logs
"""
self.cur_batch = batch
def on_batch_end(self, logs, **kwargs):
"""At the end of a batch
Args:
logs: dictionary of logs
"""
pass
def on_train_begin(self, logs, **kwargs):
"""At the start of training
Args:
logs: dictionary of logs
"""
pass
def on_train_end(self, logs, **kwargs):
"""At the end of training
Args:
logs: dictionary of logs
"""
pass
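The callback dispatch above can be exercised in isolation. The sketch below re-implements a condensed LoggerList/Logger pair (epoch hooks only, for brevity) to show how one event fans out to every registered logger.

```python
class Logger:
    """Condensed version of the Logger base class above (epoch hooks only)."""
    def on_epoch_begin(self, epoch, logs, **kwargs):
        pass
    def on_epoch_end(self, logs, **kwargs):
        pass

class CollectingLogger(Logger):
    """Records every callback invocation for inspection."""
    def __init__(self):
        self.events = []
    def on_epoch_begin(self, epoch, logs, **kwargs):
        self.events.append(("begin", epoch))
    def on_epoch_end(self, logs, **kwargs):
        self.events.append(("end", logs.get("loss")))

class LoggerList:
    """Fans each event out to every registered logger."""
    def __init__(self, loggers=None):
        self.loggers = loggers or []
    def on_epoch_begin(self, epoch, logs=None, **kwargs):
        logs = logs or {}
        for logger in self.loggers:
            logger.on_epoch_begin(epoch, logs, **kwargs)
    def on_epoch_end(self, logs=None, **kwargs):
        logs = logs or {}
        for logger in self.loggers:
            logger.on_epoch_end(logs, **kwargs)

cb = CollectingLogger()
loggers = LoggerList([cb])
loggers.on_epoch_begin(0)
loggers.on_epoch_end({"loss": 0.5})
```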
# Copyright (c) 2018 Copyright holder of the paper Generative Adversarial Model Learning
# submitted to NeurIPS 2019 for review
# All rights reserved.
import torch
from rllab.algos.base import Algorithm
from rllab.misc.overrides import overrides
import rllab.misc.logger as logger
import numpy as np
from rllab.torch.utils import torch as torch_utils
from rllab.dynamic_models.cartpole_model import CartPoleModel
import scipy.optimize
from tqdm import tqdm
import sys
"""
Behavior cloning: trains an imitation model to mimic an expert policy or an environment.
"""
class BehaviorCloning(Algorithm):
def __init__(self, expert_data, imitation_model, n_itr, mini_batchsize=1000, weight_decay=0, mode="imitate_env", optim=torch.optim.Adam):
self.imitationModel = imitation_model
self.expert_data = expert_data
if optim is not None:
self.optimizer = optim(imitation_model.parameters(), weight_decay=weight_decay)
else:
self.optimizer = None
self.mode = mode
self.mini_batchsize = mini_batchsize
self.n_itr = n_itr
self.l2_reg = weight_decay
def create_torch_var_from_paths(self, expert_data):
if self.mode == "imitate_env":
normalize_input_obs = self.imitationModel.normalized_input_obs
normalize_input_a = self.imitationModel.normalized_input_a
expert_observations_np = expert_data["observations"]
normalized_input_obs_idx = [i for i, x in enumerate(normalize_input_obs) if x]
expert_observations_np[:, normalized_input_obs_idx] = expert_data["normalized_observations"][:, normalized_input_obs_idx]
expert_actions_np = expert_data["actions"]
normalized_input_a_idx = [i for i, x in enumerate(normalize_input_a) if x]
expert_actions_np[:, normalized_input_a_idx] = expert_data["unscaled_actions"][:, normalized_input_a_idx]
torch_input_batch = torch.cat([torch.from_numpy(expert_observations_np).float(),
torch.from_numpy(expert_actions_np).float()], dim=1)
try:
if self.imitationModel.pred_diff:
# we assume that they are all unnormalized, since they come directly from the expert env
expert_obs_diff_np = expert_data["env_infos"]["obs_diff"]
# normalize them now as needed
normalize_output_state_diff = self.imitationModel.normalized_output_state_diff
lb , ub = self.imitationModel._wrapped_env.observation_space.bounds
# select only the one we need to normalize
normalized_idx = [i for i, x in enumerate(normalize_output_state_diff) if x]
lb = lb[normalized_idx]
ub = ub[normalized_idx]
expert_obs_diff_np[:, normalized_idx] = (2 * (expert_obs_diff_np[:, normalized_idx] - lb) / (
ub - lb)) - 1
expert_obs_diff_np[:, normalized_idx] = np.clip(expert_obs_diff_np[:, normalized_idx], -1, 1)
torch_output_batch = torch.from_numpy(expert_obs_diff_np).float()
except AttributeError:
raise NotImplementedError("We cannot deal with envs with only next state predictions yet")
elif self.mode == "imitate_policy":
normalize_input = self.imitationModel.normalized_input
normalize_output = self.imitationModel.normalized_output
normalized_input_idx = [i for i, x in enumerate(normalize_input) if x]
normalized_output_idx = [i for i, x in enumerate(normalize_output) if x]
expert_observations_np = expert_data["observations"]
expert_observations_np[normalized_input_idx] = expert_data["normalized_observations"][normalized_input_idx]
expert_actions_np = expert_data["actions"]
expert_actions_np[normalized_output_idx] = expert_data["unscaled_actions"][normalized_output_idx]
torch_input_batch = torch.from_numpy(expert_observations_np).float()
torch_output_batch = torch.from_numpy(expert_actions_np).float()
else:
raise ValueError("invalid mode")
return torch_input_batch, torch_output_batch
def train(self):
if self.optimizer is not None:
self._train_SGD()
else:
self._train_BGFS()
def _train_SGD(self):
# TODO: we need to get here the right observations, actions and next_observations for the model
# expert_observations, expert_actions, expert_next_observations = create_torch_var_from_paths(self.expert_data)
# now train imitation policy using collect batch of expert_data with MLE on log prob since we have a Gaussian
# TODO: do we train mean and variance? or only mean
torch_input_batch, torch_output_batch = self.create_torch_var_from_paths(self.expert_data)
# split data randomly into training and validation set, let's go with 70 - 30 split
numTotalSamples = torch_input_batch.size(0)
trainingSize = int(numTotalSamples*0.7)
randomIndices = np.random.permutation(np.arange(numTotalSamples))
trainingIndices = randomIndices[:trainingSize]
validationIndices = randomIndices[trainingSize:]
validation_input_batch = torch_input_batch[validationIndices]
validation_output_batch = torch_output_batch[validationIndices]
torch_input_batch = torch_input_batch[trainingIndices]
torch_output_batch = torch_output_batch[trainingIndices]
best_loss = np.inf
losses = np.array([best_loss] * 25)
with tqdm(total=self.n_itr, file=sys.stdout) as pbar:
for epoch in range(self.n_itr+1):
with logger.prefix('epoch #%d | ' % epoch):
# split into mini batches for training
total_batchsize = torch_input_batch.size(0)
logger.record_tabular('Iteration', epoch)
indices = np.random.permutation(np.arange(total_batchsize))
if isinstance(self.imitationModel, CartPoleModel):
logger.record_tabular("theta", str(self.imitationModel.theta.detach().numpy()))
logger.record_tabular("std", str(self.imitationModel.std.detach().numpy()))
# go through the whole batch
for k in range(int(total_batchsize/self.mini_batchsize)):
idx = indices[self.mini_batchsize*k:self.mini_batchsize*(k+1)]
# TODO: how about numerical stability?
log_prob = self.imitationModel.get_log_prob(torch_input_batch[idx, :], torch_output_batch[idx, :])
# note that L2 regularization is in weight decay of optimizer
loss = -torch.mean(log_prob) # negative since we want to minimize and not maximize
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
# calculate the loss on the whole batch
log_prob = self.imitationModel.get_log_prob(validation_input_batch, validation_output_batch)
loss = -torch.mean(log_prob)
# Note: here we add L2 regularization to the loss to log the proper loss
# weight decay
for param in self.imitationModel.parameters():
loss += param.pow(2).sum() * self.l2_reg
logger.record_tabular("loss", loss.item())
# check if loss has decreased in the last 25 itr on the validation set, if not stop training
# and return the best found parameters
losses[1:] = losses[0:-1]
losses[0] = loss
if epoch == 0:
best_loss = np.min(losses)
best_flat_parameters = torch_utils.get_flat_params_from(self.imitationModel).detach().numpy()
logger.record_tabular("current_best_loss", best_loss)
elif np.min(losses) <= best_loss and not (np.mean(losses) == best_loss):  # second condition guards against the whole window holding one identical value
# set best loss to new one if smaller or keep it
best_loss = np.min(losses)
best_flat_parameters = torch_utils.get_flat_params_from(self.imitationModel).detach().numpy()
logger.record_tabular("current_best_loss", best_loss)
else:
pbar.close()
print("best loss did not decrease in last 25 steps")
print("saving best result...")
logger.log("best loss did not decrease in last 25 steps")
torch_utils.set_flat_params_to(self.imitationModel, torch_utils.torch.from_numpy(best_flat_parameters))
logger.log("SGD converged")
logger.log("saving best result...")
params, torch_params = self.get_itr_snapshot(epoch)
if params is not None:
params["algo"] = self
logger.save_itr_params(self.n_itr, params, torch_params)
logger.log("saved")
break
pbar.set_description('epoch: %d' % (1 + epoch))
pbar.update(1)
# save result
logger.log("saving snapshot...")
params, torch_params = self.get_itr_snapshot(epoch)
if params is not None:
params["algo"] = self
logger.save_itr_params(epoch, params, torch_params)
logger.log("saved")
logger.dump_tabular(with_prefix=False)
def _train_BGFS(self):
if not isinstance(self.imitationModel, CartPoleModel):
raise NotImplementedError("_train_BGFS can only be called with a CartPoleModel")
expert_observations = torch.from_numpy(self.expert_data["observations"]).float()
expert_actions = torch.from_numpy(self.expert_data["actions"]).float()
expert_obs_diff = torch.from_numpy(self.expert_data["env_infos"]["obs_diff"]).float()
# now train imitation policy using collect batch of expert_data with MLE on log prob since we have a Gaussian
# TODO: do we train mean and variance? or only mean
if self.mode == "imitate_env":
input = torch.cat([expert_observations, expert_actions], dim=1)
output = expert_obs_diff
else:
raise ValueError("invalid mode")
imitation_model = self.imitationModel
total_batchsize = input.size(0)
def get_negative_likelihood_loss(flat_params):
torch_utils.set_flat_params_to(imitation_model, torch_utils.torch.from_numpy(flat_params))
for param in imitation_model.parameters():
if param.grad is not None:
param.grad.data.fill_(0)
indices = np.random.permutation(np.arange(total_batchsize))
loss = - torch.mean(imitation_model.get_log_prob(input[indices[:self.mini_batchsize]], output[indices[:self.mini_batchsize]]))
# weight decay
for param in imitation_model.parameters():
loss += param.pow(2).sum() * self.l2_reg
loss.backward()
# Note: no [0] indexing needed; torch.mean already reduces the loss to a scalar (behavior of newer torch versions)
return loss.detach().numpy(), \
torch_utils.get_flat_grad_from(
imitation_model.parameters()).detach().numpy(). \
astype(np.float64)
curr_itr = 0
def callback_fun(flat_params):
nonlocal curr_itr
torch_utils.set_flat_params_to(imitation_model, torch_utils.torch.from_numpy(flat_params))
# calculate the loss of the whole batch
loss = - torch.mean(imitation_model.get_log_prob(input, output))
# weight decay
for param in imitation_model.parameters():
loss += param.pow(2).sum() * self.l2_reg
loss.backward()
if isinstance(self.imitationModel, CartPoleModel):
logger.record_tabular("theta", str(self.imitationModel.theta.detach().numpy()))
logger.record_tabular("std", str(self.imitationModel.std.detach().numpy()))
logger.record_tabular('Iteration', curr_itr)
logger.record_tabular("loss", loss.item())
logger.dump_tabular(with_prefix=False)
curr_itr += 1
x0 = torch_utils.get_flat_params_from(self.imitationModel).detach().numpy()
# only allow positive variables since we know the masses and variance cannot be negative
bounds = [(0, np.inf) for _ in x0]
flat_params, _, opt_info = scipy.optimize.fmin_l_bfgs_b(
get_negative_likelihood_loss,
x0, maxiter=self.n_itr, bounds=bounds, callback=callback_fun)
logger.log(str(opt_info))
torch_utils.set_flat_params_to(self.imitationModel, torch.from_numpy(flat_params))
# save result
logger.log("saving snapshot...")
params, torch_params = self.get_itr_snapshot(0)
params["algo"] = self
logger.save_itr_params(self.n_itr, params, torch_params)
logger.log("saved")
@overrides
def get_itr_snapshot(self, itr):
if itr == 0:
return dict(
itr=itr,
expert_data=self.expert_data,
imitationModel=self.imitationModel,
), dict(imitationModel=self.imitationModel)
else:
return None, {'imitationModel': self.imitationModel}
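The early-stopping logic in `_train_SGD` (shift the rolling window, track the best loss, stop once the window's minimum no longer improves) can be sketched with a smaller window. The window size of 4 and the loss sequence below are made up for illustration; the training loop above uses 25.

```python
import numpy as np

window = 4                        # the real loop uses 25
losses = np.full(window, np.inf)  # rolling window of recent validation losses
best = np.inf
stopped_at = None

for step, loss in enumerate([1.0, 0.9, 0.95, 0.96, 0.97, 0.98]):
    # shift the window and insert the newest loss at the front
    losses[1:] = losses[:-1]
    losses[0] = loss
    # keep going while the window still contains an improvement; the mean
    # check guards against the whole window holding one identical value
    if step == 0 or (np.min(losses) <= best and np.mean(losses) != best):
        best = np.min(losses)
    else:
        stopped_at = step
        break
```

Here the loss 0.9 at step 1 stays the best; training stops at step 5, once 0.9 has slid out of the four-entry window without being beaten.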
"""
===
Rcm
===
Cuthill-McKee ordering of matrices
The reverse Cuthill-McKee algorithm gives a sparse matrix ordering that
reduces the matrix bandwidth.
"""
import networkx as nx
from networkx.utils import reverse_cuthill_mckee_ordering
import numpy as np
# build a small grid graph and compute its reverse Cuthill-McKee ordering
G = nx.grid_2d_graph(3, 3)
rcm = list(reverse_cuthill_mckee_ordering(G))
print("ordering", rcm)
print("unordered Laplacian matrix")
A = nx.laplacian_matrix(G)
x, y = np.nonzero(A)
# print(f"lower bandwidth: {(y - x).max()}")
# print(f"upper bandwidth: {(x - y).max()}")
print(f"bandwidth: {(y - x).max() + (x - y).max() + 1}")
print(A)
B = nx.laplacian_matrix(G, nodelist=rcm)
print("low-bandwidth Laplacian matrix")
x, y = np.nonzero(B)
# print(f"lower bandwidth: {(y - x).max()}")
# print(f"upper bandwidth: {(x - y).max()}")
print(f"bandwidth: {(y - x).max() + (x - y).max() + 1}")
print(B)
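The bandwidth expression computed inline above can be factored into a helper for dense arrays. A tridiagonal matrix is a handy sanity check, since its total bandwidth (lower + upper + the diagonal itself) is 3.

```python
import numpy as np

def bandwidth(A):
    """Total bandwidth of a dense matrix: lower + upper + the diagonal
    itself, matching the expression printed above."""
    x, y = np.nonzero(A)
    return (y - x).max() + (x - y).max() + 1

# 4x4 tridiagonal matrix: one sub-diagonal and one super-diagonal
A = np.diag(np.ones(4)) + np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
```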
# Copyright 2016 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
"""Building Blocks of TensorFlow Debugger Command-Line Interface."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import copy
import os
import re
import sre_constants
import traceback
import numpy as np
import six
from tensorflow.python import pywrap_tensorflow_internal
from tensorflow.python.platform import gfile
HELP_INDENT = " "
EXPLICIT_USER_EXIT = "explicit_user_exit"
REGEX_MATCH_LINES_KEY = "regex_match_lines"
INIT_SCROLL_POS_KEY = "init_scroll_pos"
MAIN_MENU_KEY = "mm:"
class CommandLineExit(Exception):
def __init__(self, exit_token=None):
Exception.__init__(self)
self._exit_token = exit_token
@property
def exit_token(self):
return self._exit_token
class RichLine(object):
"""Rich single-line text.
Attributes:
text: A plain string, the raw text represented by this object. Should not
contain newlines.
font_attr_segs: A list of (start, end, font attribute) triples, representing
richness information applied to substrings of text.
"""
def __init__(self, text="", font_attr=None):
"""Construct a RichLine with no rich attributes or a single attribute.
Args:
text: Raw text string
font_attr: If specified, a single font attribute to be applied to the
entire text. Extending this object via concatenation allows creation
of text with varying attributes.
"""
# TODO(ebreck) Make .text and .font_attr protected members when we no
# longer need public access.
self.text = text
if font_attr:
self.font_attr_segs = [(0, len(text), font_attr)]
else:
self.font_attr_segs = []
def __add__(self, other):
"""Concatenate two chunks of maybe rich text to make a longer rich line.
Does not modify self.
Args:
other: Another piece of text to concatenate with this one.
If it is a plain str, it will be appended to this string with no
attributes. If it is a RichLine, it will be appended to this string
with its attributes preserved.
Returns:
A new RichLine comprising both chunks of text, with appropriate
attributes applied to the corresponding substrings.
"""
ret = RichLine()
if isinstance(other, six.string_types):
ret.text = self.text + other
ret.font_attr_segs = self.font_attr_segs[:]
return ret
elif isinstance(other, RichLine):
ret.text = self.text + other.text
ret.font_attr_segs = self.font_attr_segs[:]
old_len = len(self.text)
for start, end, font_attr in other.font_attr_segs:
ret.font_attr_segs.append((old_len + start, old_len + end, font_attr))
return ret
else:
raise TypeError("%r cannot be concatenated with a RichLine" % other)
def __len__(self):
return len(self.text)
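The segment shifting in `RichLine.__add__` is the essential trick: the right operand's attribute segments are offset by the left text's length. A minimal standalone version, using the same (start, end, attr) triples:

```python
def concat_rich(text_a, segs_a, text_b, segs_b):
    """Concatenate two rich strings, shifting the second operand's
    (start, end, attr) segments by the first text's length."""
    offset = len(text_a)
    shifted = [(start + offset, end + offset, attr) for start, end, attr in segs_b]
    return text_a + text_b, segs_a + shifted

text, segs = concat_rich("Hello ", [(0, 5, "bold")], "world", [(0, 5, "red")])
```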
def rich_text_lines_from_rich_line_list(rich_text_list, annotations=None):
"""Convert a list of RichLine objects or strings to a RichTextLines object.
Args:
rich_text_list: a list of RichLine objects or strings
annotations: annotations for the resultant RichTextLines object.
Returns:
A corresponding RichTextLines object.
"""
lines = []
font_attr_segs = {}
for i, rl in enumerate(rich_text_list):
if isinstance(rl, RichLine):
lines.append(rl.text)
if rl.font_attr_segs:
font_attr_segs[i] = rl.font_attr_segs
else:
lines.append(rl)
return RichTextLines(lines, font_attr_segs, annotations=annotations)
def get_tensorflow_version_lines(include_dependency_versions=False):
"""Generate RichTextLines with TensorFlow version info.
Args:
include_dependency_versions: Include the version of TensorFlow's key
dependencies, such as numpy.
Returns:
A formatted, multi-line `RichTextLines` object.
"""
lines = ["TensorFlow version: %s" % pywrap_tensorflow_internal.__version__]
lines.append("")
if include_dependency_versions:
lines.append("Dependency version(s):")
lines.append(" numpy: %s" % np.__version__)
lines.append("")
return RichTextLines(lines)
class RichTextLines(object):
"""Rich multi-line text.
Line-by-line text output, with font attributes (e.g., color) and annotations
(e.g., indices in a multi-dimensional tensor). Used as the text output of CLI
commands. Can be rendered on terminal environments such as curses.
This is not to be confused with Rich Text Format (RTF). This class is for text
lines only.
"""
def __init__(self, lines, font_attr_segs=None, annotations=None):
"""Constructor of RichTextLines.
Args:
lines: A list of str or a single str, representing text output to
screen. The latter case is for convenience when the text output is
single-line.
font_attr_segs: A map from 0-based row index to a list of 3-tuples.
It lists segments in each row that have special font attributes, such
as colors, that are not the default attribute. For example:
{1: [(0, 3, "red"), (4, 7, "green")], 2: [(10, 20, "yellow")]}
In each tuple, the 1st element is the start index of the segment. The
2nd element is the end index, in an "open interval" fashion. The 3rd
element is an object or a list of objects that represents the font
attribute. Colors are represented as strings as in the examples above.
annotations: A map from 0-based row index to any object for annotating
the row. A typical use example is annotating rows of the output as
indices in a multi-dimensional tensor. For example, consider the
following text representation of a 3x2x2 tensor:
[[[0, 0], [0, 0]],
[[0, 0], [0, 0]],
[[0, 0], [0, 0]]]
The annotation can indicate the indices of the first element shown in
each row, i.e.,
{0: [0, 0, 0], 1: [1, 0, 0], 2: [2, 0, 0]}
This information can make display of tensors on screen clearer and can
help the user navigate (scroll) to the desired location in a large
tensor.
Raises:
ValueError: If lines is of invalid type.
"""
if isinstance(lines, list):
self._lines = lines
elif isinstance(lines, six.string_types):
self._lines = [lines]
else:
raise ValueError("Unexpected type in lines: %s" % type(lines))
self._font_attr_segs = font_attr_segs
if not self._font_attr_segs:
self._font_attr_segs = {}
# TODO(cais): Refactor to collections.defaultdict(list) to simplify code.
self._annotations = annotations
if not self._annotations:
self._annotations = {}
# TODO(cais): Refactor to collections.defaultdict(list) to simplify code.
@property
def lines(self):
return self._lines
@property
def font_attr_segs(self):
return self._font_attr_segs
@property
def annotations(self):
return self._annotations
def num_lines(self):
return len(self._lines)
def slice(self, begin, end):
"""Slice a RichTextLines object.
The object itself is not changed. A sliced instance is returned.
Args:
begin: (int) Beginning line index (inclusive). Must be >= 0.
end: (int) Ending line index (exclusive). Must be >= 0.
Returns:
(RichTextLines) Sliced output instance of RichTextLines.
Raises:
ValueError: If begin or end is negative.
"""
if begin < 0 or end < 0:
raise ValueError("Encountered negative index.")
# Copy lines.
lines = self.lines[begin:end]
# Slice font attribute segments.
font_attr_segs = {}
for key in self.font_attr_segs:
if key >= begin and key < end:
font_attr_segs[key - begin] = self.font_attr_segs[key]
# Slice annotations.
annotations = {}
for key in self.annotations:
if not isinstance(key, int):
# Annotations can contain keys that are not line numbers.
annotations[key] = self.annotations[key]
elif key >= begin and key < end:
annotations[key - begin] = self.annotations[key]
return RichTextLines(
lines, font_attr_segs=font_attr_segs, annotations=annotations)
def extend(self, other):
"""Extend this instance of RichTextLines with another instance.
The extension takes effect on the text lines, the font attribute segments,
as well as the annotations. The line indices in the font attribute
segments and the annotations are adjusted to account for the existing
lines. If there are duplicate, non-line-index fields in the annotations,
the value from the input argument "other" will override that in this
instance.
Args:
other: (RichTextLines) The other RichTextLines instance to be appended at
the end of this instance.
"""
orig_num_lines = self.num_lines() # Record original number of lines.
# Merge the lines.
self._lines.extend(other.lines)
# Merge the font_attr_segs.
for line_index in other.font_attr_segs:
self._font_attr_segs[orig_num_lines + line_index] = (
other.font_attr_segs[line_index])
# Merge the annotations.
for key in other.annotations:
if isinstance(key, int):
self._annotations[orig_num_lines + key] = (other.annotations[key])
else:
self._annotations[key] = other.annotations[key]
def _extend_before(self, other):
"""Add another RichTextLines object to the front.
Args:
other: (RichTextLines) The other object to add to the front to this
object.
"""
other_num_lines = other.num_lines() # Record original number of lines.
# Merge the lines.
self._lines = other.lines + self._lines
# Merge the font_attr_segs.
new_font_attr_segs = {}
for line_index in self.font_attr_segs:
new_font_attr_segs[other_num_lines + line_index] = (
self.font_attr_segs[line_index])
new_font_attr_segs.update(other.font_attr_segs)
self._font_attr_segs = new_font_attr_segs
# Merge the annotations.
new_annotations = {}
for key in self._annotations:
if isinstance(key, int):
new_annotations[other_num_lines + key] = (self.annotations[key])
else:
new_annotations[key] = other.annotations[key]
new_annotations.update(other.annotations)
self._annotations = new_annotations
def append(self, line, font_attr_segs=None):
"""Append a single line of text.
Args:
line: (str) The text to be added to the end.
font_attr_segs: (list of tuples) Font attribute segments of the appended
line.
"""
self._lines.append(line)
if font_attr_segs:
self._font_attr_segs[len(self._lines) - 1] = font_attr_segs
def append_rich_line(self, rich_line):
self.append(rich_line.text, rich_line.font_attr_segs)
def prepend(self, line, font_attr_segs=None):
"""Prepend (i.e., add to the front) a single line of text.
Args:
line: (str) The text to be added to the front.
font_attr_segs: (list of tuples) Font attribute segments of the appended
line.
"""
other = RichTextLines(line)
if font_attr_segs:
other.font_attr_segs[0] = font_attr_segs
self._extend_before(other)
def write_to_file(self, file_path):
"""Write the object itself to file, in a plain format.
The font_attr_segs and annotations are ignored.
Args:
file_path: (str) path of the file to write to.
"""
with gfile.Open(file_path, "w") as f:
for line in self._lines:
f.write(line + "\n")
# TODO(cais): Add a method to allow appending to a line in RichTextLines with
# both text and font_attr_segs.
def regex_find(orig_screen_output, regex, font_attr):
"""Perform regex match in rich text lines.
Produces a new RichTextLines object with font_attr_segs containing highlighted
regex matches.
Example use cases include:
1) search for specific items in a large list of items, and
2) search for specific numerical values in a large tensor.
Args:
orig_screen_output: The original RichTextLines, in which the regex find
is to be performed.
regex: The regex used for matching.
font_attr: Font attribute used for highlighting the found result.
Returns:
A modified copy of orig_screen_output.
Raises:
ValueError: If input str regex is not a valid regular expression.
"""
new_screen_output = RichTextLines(
orig_screen_output.lines,
font_attr_segs=copy.deepcopy(orig_screen_output.font_attr_segs),
annotations=orig_screen_output.annotations)
try:
re_prog = re.compile(regex)
except sre_constants.error:
raise ValueError("Invalid regular expression: \"%s\"" % regex)
regex_match_lines = []
for i, line in enumerate(new_screen_output.lines):
find_it = re_prog.finditer(line)
match_segs = []
for match in find_it:
match_segs.append((match.start(), match.end(), font_attr))
if match_segs:
if i not in new_screen_output.font_attr_segs:
new_screen_output.font_attr_segs[i] = match_segs
else:
new_screen_output.font_attr_segs[i].extend(match_segs)
new_screen_output.font_attr_segs[i] = sorted(
new_screen_output.font_attr_segs[i], key=lambda x: x[0])
regex_match_lines.append(i)
new_screen_output.annotations[REGEX_MATCH_LINES_KEY] = regex_match_lines
return new_screen_output
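The core of `regex_find` is converting the regex matches on each line into the (start, end, font_attr) segment format stored in `font_attr_segs`; that step can be isolated:

```python
import re

def match_segments(line, pattern, font_attr):
    """Return (start, end, font_attr) triples for every regex match in a
    line, the same segment format stored in font_attr_segs."""
    return [(m.start(), m.end(), font_attr) for m in re.finditer(pattern, line)]

# Highlight every floating-point literal in a line of log output.
segs = match_segments("loss = 0.25, acc = 0.9", r"\d+\.\d+", "red")
```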
def wrap_rich_text_lines(inp, cols):
"""Wrap RichTextLines according to maximum number of columns.
Produces a new RichTextLines object with the text lines, font_attr_segs and
annotations properly wrapped. This ought to be used sparingly, as in most
cases, command handlers producing RichTextLines outputs should know the
screen/panel width via the screen_info kwarg and should produce properly
length-limited lines in the output accordingly.
Args:
inp: Input RichTextLines object.
cols: Number of columns, as an int.
Returns:
1) A new instance of RichTextLines, with line lengths limited to cols.
2) A list of new (wrapped) line index. For example, if the original input
consists of three lines and only the second line is wrapped, and it's
wrapped into two lines, this return value will be: [0, 1, 3].
Raises:
ValueError: If inputs have invalid types.
"""
new_line_indices = []
if not isinstance(inp, RichTextLines):
raise ValueError("Invalid type of input screen_output")
if not isinstance(cols, int):
raise ValueError("Invalid type of input cols")
out = RichTextLines([])
row_counter = 0 # Counter for new row index
for i, line in enumerate(inp.lines):
new_line_indices.append(out.num_lines())
if i in inp.annotations:
out.annotations[row_counter] = inp.annotations[i]
if len(line) <= cols:
# No wrapping.
out.lines.append(line)
if i in inp.font_attr_segs:
out.font_attr_segs[row_counter] = inp.font_attr_segs[i]
row_counter += 1
else:
# Wrap.
wlines = [] # Wrapped lines.
osegs = []
if i in inp.font_attr_segs:
osegs = inp.font_attr_segs[i]
idx = 0
while idx < len(line):
if idx + cols > len(line):
rlim = len(line)
else:
rlim = idx + cols
wlines.append(line[idx:rlim])
for seg in osegs:
if (seg[0] < rlim) and (seg[1] >= idx):
# Calculate left bound within wrapped line.
if seg[0] >= idx:
lb = seg[0] - idx
else:
lb = 0
# Calculate right bound within wrapped line.
if seg[1] < rlim:
rb = seg[1] - idx
else:
rb = rlim - idx
if rb > lb: # Omit zero-length segments.
wseg = (lb, rb, seg[2])
if row_counter not in out.font_attr_segs:
out.font_attr_segs[row_counter] = [wseg]
else:
out.font_attr_segs[row_counter].append(wseg)
idx += cols
row_counter += 1
out.lines.extend(wlines)
# Copy over keys of annotation that are not row indices.
for key in inp.annotations:
if not isinstance(key, int):
out.annotations[key] = inp.annotations[key]
return out, new_line_indices
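Ignoring font attributes and annotations, the wrapping and the returned new-line-index bookkeeping reduce to a few lines. This plain-text sketch reproduces the docstring's [0, 1, 3] example:

```python
def wrap_plain(lines, cols):
    """Wrap each line to at most cols characters and report the new index
    of each original line (cf. the second return value above)."""
    out, new_line_indices = [], []
    for line in lines:
        new_line_indices.append(len(out))
        # max(len(line), 1) keeps an empty line as a single empty row
        for start in range(0, max(len(line), 1), cols):
            out.append(line[start:start + cols])
    return out, new_line_indices

# Only the second line exceeds 4 columns, so it wraps into two rows.
wrapped, indices = wrap_plain(["abc", "abcdefgh", "xy"], 4)
```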
class CommandHandlerRegistry(object):
"""Registry of command handlers for CLI.
Handler methods (callables) for user commands can be registered with this
class, which then is able to dispatch commands to the correct handlers and
retrieve the RichTextLines output.
For example, suppose you have the following handler defined:
def echo(argv, screen_info=None):
return RichTextLines(["arguments = %s" % " ".join(argv),
"screen_info = " + repr(screen_info)])
you can register the handler with the command prefix "echo" and alias "e":
registry = CommandHandlerRegistry()
registry.register_command_handler("echo", echo,
"Echo arguments, along with screen info", prefix_aliases=["e"])
then to invoke this command handler with some arguments and screen_info, do:
registry.dispatch_command("echo", ["foo", "bar"], screen_info={"cols": 80})
or with the prefix alias:
registry.dispatch_command("e", ["foo", "bar"], screen_info={"cols": 80})
The call will return a RichTextLines object which can be rendered by a CLI.
"""
HELP_COMMAND = "help"
HELP_COMMAND_ALIASES = ["h"]
VERSION_COMMAND = "version"
VERSION_COMMAND_ALIASES = ["ver"]
def __init__(self):
# A dictionary from command prefix to handler.
self._handlers = {}
# A dictionary from prefix alias to prefix.
self._alias_to_prefix = {}
# A dictionary from prefix to aliases.
self._prefix_to_aliases = {}
# A dictionary from command prefix to help string.
self._prefix_to_help = {}
# Introductory text to help information.
self._help_intro = None
# Register a default handler for the command "help".
self.register_command_handler(
self.HELP_COMMAND,
self._help_handler,
"Print this help message.",
prefix_aliases=self.HELP_COMMAND_ALIASES)
# Register a default handler for the command "version".
self.register_command_handler(
self.VERSION_COMMAND,
self._version_handler,
"Print the versions of TensorFlow and its key dependencies.",
prefix_aliases=self.VERSION_COMMAND_ALIASES)
def register_command_handler(self,
prefix,
handler,
help_info,
prefix_aliases=None):
"""Register a callable as a command handler.
Args:
prefix: Command prefix, i.e., the first word in a command, e.g.,
"print" as in "print tensor_1".
handler: A callable of the following signature:
foo_handler(argv, screen_info=None),
where argv is the argument vector (excluding the command prefix) and
screen_info is a dictionary containing information about the screen,
such as number of columns, e.g., {"cols": 100}.
The callable should return:
1) a RichTextLines object representing the screen output.
The callable can also raise an exception of the type CommandLineExit,
which if caught by the command-line interface, will lead to its exit.
The exception can optionally carry an exit token of arbitrary type.
help_info: A help string.
prefix_aliases: Aliases for the command prefix, as a list of str. E.g.,
shorthands for the command prefix: ["p", "pr"]
Raises:
ValueError: If
1) the prefix is empty, or
2) handler is not callable, or
3) a handler is already registered for the prefix, or
4) elements in prefix_aliases clash with existing aliases, or
5) help_info is not a str.
"""
if not prefix:
raise ValueError("Empty command prefix")
if prefix in self._handlers:
raise ValueError(
"A handler is already registered for command prefix \"%s\"" % prefix)
# Make sure handler is callable.
if not callable(handler):
raise ValueError("handler is not callable")
# Make sure that help info is a string.
if not isinstance(help_info, six.string_types):
raise ValueError("help_info is not a str")
# Process prefix aliases.
if prefix_aliases:
for alias in prefix_aliases:
if self._resolve_prefix(alias):
raise ValueError(
"The prefix alias \"%s\" clashes with existing prefixes or "
"aliases." % alias)
self._alias_to_prefix[alias] = prefix
self._prefix_to_aliases[prefix] = prefix_aliases
# Store handler.
self._handlers[prefix] = handler
# Store help info.
self._prefix_to_help[prefix] = help_info
def dispatch_command(self, prefix, argv, screen_info=None):
"""Handles a command by dispatching it to a registered command handler.
Args:
prefix: Command prefix, as a str, e.g., "print".
argv: Command argument vector, excluding the command prefix, represented
as a list of str, e.g.,
["tensor_1"]
screen_info: A dictionary containing screen info, e.g., {"cols": 100}.
Returns:
An instance of RichTextLines or None. If any exception is caught during
the invocation of the command handler, the RichTextLines will wrap the
error type and message.
Raises:
ValueError: If
1) prefix is empty, or
2) no command handler is registered for the command prefix, or
3) the handler is found for the prefix, but it fails to return a
RichTextLines or raise any exception.
CommandLineExit:
If the command handler raises this type of exception, this method will
simply pass it along.
"""
if not prefix:
raise ValueError("Prefix is empty")
resolved_prefix = self._resolve_prefix(prefix)
if not resolved_prefix:
raise ValueError("No handler is registered for command prefix \"%s\"" %
prefix)
handler = self._handlers[resolved_prefix]
try:
output = handler(argv, screen_info=screen_info)
except CommandLineExit as e:
raise e
except SystemExit as e:
# Special case for syntax errors caught by argparse.
lines = ["Syntax error for command: %s" % prefix,
"For help, do \"help %s\"" % prefix]
output = RichTextLines(lines)
except BaseException as e: # pylint: disable=broad-except
lines = ["Error occurred during handling of command: %s %s:" %
(resolved_prefix, " ".join(argv)), "%s: %s" % (type(e), str(e))]
# Include traceback of the exception.
lines.append("")
lines.extend(traceback.format_exc().split("\n"))
output = RichTextLines(lines)
if not isinstance(output, RichTextLines) and output is not None:
raise ValueError(
"Return value from command handler %s is not None or a RichTextLines "
"instance" % str(handler))
return output
def is_registered(self, prefix):
"""Test if a command prefix or its alias has a registered handler.
Args:
prefix: A prefix or its alias, as a str.
Returns:
True iff a handler is registered for prefix.
"""
return self._resolve_prefix(prefix) is not None
def get_help(self, cmd_prefix=None):
"""Compile help information into a RichTextLines object.
Args:
cmd_prefix: Optional command prefix. As the prefix itself or one of its
aliases.
Returns:
A RichTextLines object containing the help information. If cmd_prefix
is None, the return value will be the full command-line help. Otherwise,
it will be the help information for the specified command.
"""
if not cmd_prefix:
# Print full help information, in sorted order of the command prefixes.
help_info = RichTextLines([])
if self._help_intro:
# If help intro is available, show it at the beginning.
help_info.extend(self._help_intro)
sorted_prefixes = sorted(self._handlers)
for cmd_prefix in sorted_prefixes:
lines = self._get_help_for_command_prefix(cmd_prefix)
lines.append("")
lines.append("")
help_info.extend(RichTextLines(lines))
return help_info
else:
return RichTextLines(self._get_help_for_command_prefix(cmd_prefix))
def set_help_intro(self, help_intro):
"""Set an introductory message to help output.
Args:
help_intro: (RichTextLines) Rich text lines appended to the
beginning of the output of the command "help", as introductory
information.
"""
self._help_intro = help_intro
def _help_handler(self, args, screen_info=None):
"""Command handler for "help".
"help" is a common command that merits built-in support from this class.
Args:
args: Command line arguments to "help" (not including "help" itself).
screen_info: (dict) Information regarding the screen, e.g., the screen
width in characters: {"cols": 80}
Returns:
(RichTextLines) Screen text output.
"""
_ = screen_info # Unused currently.
if not args:
return self.get_help()
elif len(args) == 1:
return self.get_help(args[0])
else:
return RichTextLines(["ERROR: help takes only 0 or 1 input argument."])
def _version_handler(self, args, screen_info=None):
del args # Unused currently.
del screen_info # Unused currently.
return get_tensorflow_version_lines(include_dependency_versions=True)
def _resolve_prefix(self, token):
"""Resolve command prefix from the prefix itself or its alias.
Args:
token: a str to be resolved.
Returns:
If resolvable, the resolved command prefix.
If not resolvable, None.
"""
if token in self._handlers:
return token
elif token in self._alias_to_prefix:
return self._alias_to_prefix[token]
else:
return None
def _get_help_for_command_prefix(self, cmd_prefix):
"""Compile the help information for a given command prefix.
Args:
cmd_prefix: Command prefix, as the prefix itself or one of its
aliases.
Returns:
A list of str as the help information for cmd_prefix. If the cmd_prefix
does not exist, the returned list of str will indicate that.
"""
lines = []
resolved_prefix = self._resolve_prefix(cmd_prefix)
if not resolved_prefix:
lines.append("Invalid command prefix: \"%s\"" % cmd_prefix)
return lines
lines.append(resolved_prefix)
if resolved_prefix in self._prefix_to_aliases:
lines.append(HELP_INDENT + "Aliases: " + ", ".join(
self._prefix_to_aliases[resolved_prefix]))
lines.append("")
help_lines = self._prefix_to_help[resolved_prefix].split("\n")
for line in help_lines:
lines.append(HELP_INDENT + line)
return lines
class TabCompletionRegistry(object):
"""Registry for tab completion responses."""
def __init__(self):
self._comp_dict = {}
# TODO(cais): Rename method names with "comp" to "*completion*" to avoid
# confusion.
def register_tab_comp_context(self, context_words, comp_items):
"""Register a tab-completion context.
Register that, for each word in context_words, the potential tab-completions
are the words in comp_items.
A context word is a pre-existing, completed word in the command line that
determines how tab-completion works for another, incomplete word in the same
command line.
Completion items consist of potential candidates for the incomplete word.
To give a general example, a context word can be "drink", and the completion
items can be ["coffee", "tea", "water"]
Note: A context word can be empty, in which case the context is for the
top-level commands.
Args:
context_words: A list of context words belonging to the context being
registered. It is a list of str, instead of a single string, to support
synonym words triggering the same tab-completion context, e.g.,
both "drink" and the short-hand "dr" can trigger the same context.
comp_items: A list of completion items, as a list of str.
Raises:
TypeError: if the input arguments are not all of the correct types.
"""
if not isinstance(context_words, list):
raise TypeError("Incorrect type in context_list: Expected list, got %s" %
type(context_words))
if not isinstance(comp_items, list):
raise TypeError("Incorrect type in comp_items: Expected list, got %s" %
type(comp_items))
# Sort the completion items on registration, so that later during
# get_completions calls, no sorting will be necessary.
sorted_comp_items = sorted(comp_items)
for context_word in context_words:
self._comp_dict[context_word] = sorted_comp_items
def deregister_context(self, context_words):
"""Deregister a list of context words.
Args:
context_words: A list of context words to deregister, as a list of str.
Raises:
KeyError: if there are word(s) in context_words that do not correspond
to any registered contexts.
"""
for context_word in context_words:
if context_word not in self._comp_dict:
raise KeyError("Cannot deregister unregistered context word \"%s\"" %
context_word)
for context_word in context_words:
del self._comp_dict[context_word]
def extend_comp_items(self, context_word, new_comp_items):
"""Add a list of completion items to a completion context.
Args:
context_word: A single completion word as a string. The extension will
also apply to all other context words of the same context.
new_comp_items: (list of str) New completion items to add.
Raises:
KeyError: if the context word has not been registered.
"""
if context_word not in self._comp_dict:
raise KeyError("Context word \"%s\" has not been registered" %
context_word)
self._comp_dict[context_word].extend(new_comp_items)
self._comp_dict[context_word] = sorted(self._comp_dict[context_word])
def remove_comp_items(self, context_word, comp_items):
"""Remove a list of completion items from a completion context.
Args:
context_word: A single completion word as a string. The removal will
also apply to all other context words of the same context.
comp_items: Completion items to remove.
Raises:
KeyError: if the context word has not been registered.
"""
if context_word not in self._comp_dict:
raise KeyError("Context word \"%s\" has not been registered" %
context_word)
for item in comp_items:
self._comp_dict[context_word].remove(item)
def get_completions(self, context_word, prefix):
"""Get the tab completions given a context word and a prefix.
Args:
context_word: The context word.
prefix: The prefix of the incomplete word.
Returns:
(1) None if no registered context matches the context_word.
A list of str for the matching completion items. Can be an empty list
if a matching context exists but no completion item matches the
prefix.
(2) Common prefix of all the words in the first return value. If the
first return value is None, this return value will be None, too. If
the first return value is not None, i.e., a list, this return value
will be a str, which can be an empty str if there is no common
prefix among the items of the list.
"""
if context_word not in self._comp_dict:
return None, None
comp_items = self._comp_dict[context_word]
comp_items = sorted(
[item for item in comp_items if item.startswith(prefix)])
return comp_items, self._common_prefix(comp_items)
def _common_prefix(self, m):
"""Given a list of str, returns the longest common prefix.
Args:
m: (list of str) A list of strings.
Returns:
(str) The longest common prefix.
"""
if not m:
return ""
s1 = min(m)
s2 = max(m)
for i, c in enumerate(s1):
if c != s2[i]:
return s1[:i]
return s1
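The `_common_prefix` method above relies on the fact that only the lexicographic minimum and maximum of the list need to be compared: their shared prefix is automatically shared by every string between them. A self-contained sketch of the same trick (the function name is illustrative):

```python
def common_prefix(strings):
    # The lexicographically smallest and largest strings bound all others,
    # so comparing just those two yields the common prefix of the whole list.
    if not strings:
        return ""
    s1, s2 = min(strings), max(strings)
    for i, c in enumerate(s1):
        if c != s2[i]:
            return s1[:i]
    return s1
```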
class CommandHistory(object):
"""Keeps command history and supports lookup."""
_HISTORY_FILE_NAME = ".tfdbg_history"
def __init__(self, limit=100, history_file_path=None):
"""CommandHistory constructor.
Args:
limit: Maximum number of the most recent commands that this instance
keeps track of, as an int.
history_file_path: (str) Manually specified path to history file. Used in
testing.
"""
self._commands = []
self._limit = limit
self._history_file_path = (
history_file_path or self._get_default_history_file_path())
self._load_history_from_file()
def _load_history_from_file(self):
if os.path.isfile(self._history_file_path):
try:
with open(self._history_file_path, "rt") as history_file:
commands = history_file.readlines()
self._commands = [command.strip() for command in commands
if command.strip()]
# Limit the size of the history file.
if len(self._commands) > self._limit:
self._commands = self._commands[-self._limit:]
with open(self._history_file_path, "wt") as history_file:
for command in self._commands:
history_file.write(command + "\n")
except IOError:
print("WARNING: writing history file failed.")
def _add_command_to_history_file(self, command):
try:
with open(self._history_file_path, "at") as history_file:
history_file.write(command + "\n")
except IOError:
pass
@classmethod
def _get_default_history_file_path(cls):
return os.path.join(os.path.expanduser("~"), cls._HISTORY_FILE_NAME)
def add_command(self, command):
"""Add a command to the command history.
Args:
command: The history command, as a str.
Raises:
TypeError: if command is not a str.
"""
if self._commands and command == self._commands[-1]:
# Ignore repeating commands in a row.
return
if not isinstance(command, six.string_types):
raise TypeError("Attempt to enter non-str entry to command history")
self._commands.append(command)
if len(self._commands) > self._limit:
self._commands = self._commands[-self._limit:]
self._add_command_to_history_file(command)
def most_recent_n(self, n):
"""Look up the n most recent commands.
Args:
n: Number of most recent commands to look up.
Returns:
A list of n most recent commands, or all available most recent commands,
if n exceeds size of the command history, in chronological order.
"""
return self._commands[-n:]
def lookup_prefix(self, prefix, n):
"""Look up the n most recent commands that start with the prefix.
Args:
prefix: The prefix to lookup.
n: Number of most recent commands to look up.
Returns:
A list of n most recent commands that have the specified prefix, or all
available most recent commands that have the prefix, if n exceeds the
number of history commands with the prefix.
"""
commands = [cmd for cmd in self._commands if cmd.startswith(prefix)]
return commands[-n:]
# TODO(cais): Lookup by regex.
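The prefix lookup in `lookup_prefix` above is a plain filter over the stored history followed by a tail slice; a standalone sketch of that logic:

```python
def lookup_prefix(commands, prefix, n):
    # Keep history entries that start with the prefix (oldest first),
    # then return at most the n most recent of them.
    matches = [cmd for cmd in commands if cmd.startswith(prefix)]
    return matches[-n:]
```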
class MenuItem(object):
"""A class for an item in a text-based menu."""
def __init__(self, caption, content, enabled=True):
"""Menu constructor.
TODO(cais): Nested menu is currently not supported. Support it.
Args:
caption: (str) caption of the menu item.
content: Content of the menu item. For a menu item that triggers
a command, for example, content is the command string.
enabled: (bool) whether this menu item is enabled.
"""
self._caption = caption
self._content = content
self._enabled = enabled
@property
def caption(self):
return self._caption
@property
def content(self):
return self._content
def is_enabled(self):
return self._enabled
def disable(self):
self._enabled = False
def enable(self):
self._enabled = True
class Menu(object):
"""A class for text-based menu."""
def __init__(self, name=None):
"""Menu constructor.
Args:
name: (str or None) name of this menu.
"""
self._name = name
self._items = []
def append(self, item):
"""Append an item to the Menu.
Args:
item: (MenuItem) the item to be appended.
"""
self._items.append(item)
def insert(self, index, item):
self._items.insert(index, item)
def num_items(self):
return len(self._items)
def captions(self):
return [item.caption for item in self._items]
def caption_to_item(self, caption):
"""Get a MenuItem from the caption.
Args:
caption: (str) The caption to look up.
Returns:
(MenuItem) The first-match menu item with the caption, if any.
Raises:
LookupError: If a menu item with the caption does not exist.
"""
captions = self.captions()
if caption not in captions:
raise LookupError("There is no menu item with the caption \"%s\"" %
caption)
return self._items[captions.index(caption)]
def format_as_single_line(self,
prefix=None,
divider=" | ",
enabled_item_attrs=None,
disabled_item_attrs=None):
"""Format the menu as a single-line RichTextLines object.
Args:
prefix: (str) String added to the beginning of the line.
divider: (str) The dividing string between the menu items.
enabled_item_attrs: (list or str) Attributes applied to each enabled
menu item, e.g., ["bold", "underline"].
disabled_item_attrs: (list or str) Attributes applied to each
disabled menu item, e.g., ["red"].
Returns:
(RichTextLines) A single-line output representing the menu, with
font_attr_segs marking the individual menu items.
"""
if (enabled_item_attrs is not None and
not isinstance(enabled_item_attrs, list)):
enabled_item_attrs = [enabled_item_attrs]
if (disabled_item_attrs is not None and
not isinstance(disabled_item_attrs, list)):
disabled_item_attrs = [disabled_item_attrs]
menu_line = prefix if prefix is not None else ""
attr_segs = []
for item in self._items:
menu_line += item.caption
item_name_begin = len(menu_line) - len(item.caption)
if item.is_enabled():
final_attrs = [item]
if enabled_item_attrs:
final_attrs.extend(enabled_item_attrs)
attr_segs.append((item_name_begin, len(menu_line), final_attrs))
else:
if disabled_item_attrs:
attr_segs.append(
(item_name_begin, len(menu_line), disabled_item_attrs))
menu_line += divider
return RichTextLines(menu_line, font_attr_segs={0: attr_segs})
"""
Change detection dataset
"""
import os
from PIL import Image
import numpy as np
from torch.utils import data
from datasets.data_utils import CDDataAugmentation
"""
CD data set with pixel-level labels;
├─image
├─image_post
├─label
└─list
"""
IMG_FOLDER_NAME = "A"
IMG_POST_FOLDER_NAME = 'B'
LIST_FOLDER_NAME = 'list'
ANNOT_FOLDER_NAME = "label"
IGNORE = 255
label_suffix = '.png'  # '.jpg' for the GAN dataset, '.png' for the others
def load_img_name_list(dataset_path):
img_name_list = np.loadtxt(dataset_path, dtype=str)  # np.str was removed in NumPy 1.24
if img_name_list.ndim == 2:
return img_name_list[:, 0]
return img_name_list
def load_image_label_list_from_npy(npy_path, img_name_list):
cls_labels_dict = np.load(npy_path, allow_pickle=True).item()
return [cls_labels_dict[img_name] for img_name in img_name_list]
def get_img_post_path(root_dir,img_name):
return os.path.join(root_dir, IMG_POST_FOLDER_NAME, img_name)
def get_img_path(root_dir, img_name):
return os.path.join(root_dir, IMG_FOLDER_NAME, img_name)
def get_label_path(root_dir, img_name):
return os.path.join(root_dir, ANNOT_FOLDER_NAME, img_name.replace('.jpg', label_suffix))
class ImageDataset(data.Dataset):
"""VOC-style dataloader"""
def __init__(self, root_dir, split='train', img_size=256, is_train=True,to_tensor=True):
super(ImageDataset, self).__init__()
self.root_dir = root_dir
self.img_size = img_size
self.split = split # train | train_aug | val
# self.list_path = self.root_dir + '/' + LIST_FOLDER_NAME + '/' + self.list + '.txt'
self.list_path = os.path.join(self.root_dir, LIST_FOLDER_NAME, self.split+'.txt')
self.img_name_list = load_img_name_list(self.list_path)
self.A_size = len(self.img_name_list) # get the size of dataset A
self.to_tensor = to_tensor
if is_train:
self.augm = CDDataAugmentation(
img_size=self.img_size,
with_random_hflip=True,
with_random_vflip=True,
with_scale_random_crop=True,
with_random_blur=True,
)
else:
self.augm = CDDataAugmentation(
img_size=self.img_size
)
def __getitem__(self, index):
name = self.img_name_list[index]
A_path = get_img_path(self.root_dir, self.img_name_list[index % self.A_size])
B_path = get_img_post_path(self.root_dir, self.img_name_list[index % self.A_size])
img = np.asarray(Image.open(A_path).convert('RGB'))
img_B = np.asarray(Image.open(B_path).convert('RGB'))
[img, img_B], _ = self.augm.transform([img, img_B],[], to_tensor=self.to_tensor)
return {'A': img, 'B': img_B, 'name': name}
def __len__(self):
"""Return the total number of images in the dataset."""
return self.A_size
class CDDataset(ImageDataset):
def __init__(self, root_dir, img_size, split='train', is_train=True, label_transform=None,
to_tensor=True):
super(CDDataset, self).__init__(root_dir, img_size=img_size, split=split, is_train=is_train,
to_tensor=to_tensor)
self.label_transform = label_transform
def __getitem__(self, index):
name = self.img_name_list[index]
A_path = get_img_path(self.root_dir, self.img_name_list[index % self.A_size])
B_path = get_img_post_path(self.root_dir, self.img_name_list[index % self.A_size])
img = np.asarray(Image.open(A_path).convert('RGB'))
img_B = np.asarray(Image.open(B_path).convert('RGB'))
L_path = get_label_path(self.root_dir, self.img_name_list[index % self.A_size])
label = np.array(Image.open(L_path), dtype=np.uint8)
# In binary classification, the foreground is labeled as 255
if self.label_transform == 'norm':
label = label // 255
[img, img_B], [label] = self.augm.transform([img, img_B], [label], to_tensor=self.to_tensor)
# print(label.max())
return {'name': name, 'A': img, 'B': img_B, 'L': label}
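The `'norm'` label transform above maps 255-valued foreground masks to {0, 1} by integer division; a minimal sketch of that step in isolation:

```python
import numpy as np

# Binary change-detection masks commonly mark foreground as 255 and
# background as 0; integer division by 255 maps them to {0, 1}.
label = np.array([[0, 255], [255, 0]], dtype=np.uint8)
norm = label // 255
```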
import os
import sys
import torch
import argparse
import numpy as np
import pandas as pd
from tqdm import tqdm
from skorch import NeuralNetClassifier, NeuralNetBinaryClassifier
from skorch.callbacks import Checkpoint
sys.path.append(os.path.join(sys.path[0], '..'))
from DPROM.module import DPROMModule
from DPROM.dataset import DPROMDataset
###########################################
# Command line interface
this_dir = os.path.dirname(os.path.abspath(sys.argv[0]))
default_out = os.path.join(os.path.dirname(this_dir), "results.csv")
default_input = "data/human_complete.fa"
default_neg = ""
default_mod = "models/dprom/model.pt"
parser = argparse.ArgumentParser(description=r"This script will test a model's performance with DProm dataset")
parser.add_argument('-binary',
action='store_true',  # type=bool would treat any non-empty string, including "False", as True
help='For model: a 1 neuron sigmoid output if set, otherwise a 2 neuron softmax output')
parser.add_argument('--model',
type = str,
help = f'Path for desired model file. Default: {default_mod}. '
'The model file is a checkpoint created by pytorch with the weights of a model',
default = default_mod
)
parser.add_argument('--output',
type = str,
help = f'Path for desired output file. Default: {default_out}. '
'The output file is a csv with the sequences tested, their true labels, and the predictions by the model',
default = default_out
)
parser.add_argument('--input',
type = str,
help = f'Path to desired annotations file. Default: {default_input}.'
'The annotations file is an sga obtained from Mass Genome Annotation Data Repository',
default = default_input
)
parser.add_argument('--neg_file',
type = str,
help = f'Path to desired annotations file. Default: Empty String.'
'The annotations file is an sga obtained from Mass Genome Annotation Data Repository',
default = default_neg
)
args = parser.parse_args()
###########################################
model_folder = os.path.dirname(args.model)
cp = Checkpoint(dirname=model_folder, f_params=os.path.basename(args.model))
# Binary(sigmoid): Use NeuralNetBinaryClassifier (!IMPORT IT), num_classes=1, binary=True
# Multi(softmax): Use NeuralNetClassifier (!IMPORT IT), num_classes=2, binary=False
neg_f = None
if(args.neg_file != ''):
neg_f = args.neg_file
if(args.binary):
nc = 1
cls = NeuralNetBinaryClassifier
else:
nc = 2
cls = NeuralNetClassifier
ds = DPROMDataset(file=args.input, neg_file=neg_f, binary=args.binary, save_df=None, drop_dups=False)
def tqdm_iterator(dataset, **kwargs):
return tqdm(torch.utils.data.DataLoader(dataset, **kwargs))
net = cls(module=DPROMModule,
module__num_classes=nc,
module__seqs_length=ds.seqs_length,
batch_size=256,
device='cuda' if torch.cuda.is_available() else 'cpu',
iterator_valid=tqdm_iterator)
net.initialize()
print("Testing: Initialized")
net.load_params(checkpoint=cp)
print("Testing: Model Loaded")
print("Testing: Predicting")
y_score = net.forward(ds)
print("Testing: Predicting Done")
if(args.binary):
df = pd.DataFrame(list(zip(ds.dataframe.sequence, ds.dataframe.label, y_score.tolist())), columns=['sequence', 'label', 'prediction'])
else:
logits = list(zip(*y_score.tolist()))
df = pd.DataFrame(list(zip(ds.dataframe.sequence, ds.dataframe.label, logits[0], logits[1])), columns=['sequence', 'label', 'prediction_0', 'prediction_1'])
print("Testing: Saving Results")
df.to_csv(args.output)
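In the softmax branch above, `zip(*y_score.tolist())` transposes per-sample score pairs into per-class columns before building the DataFrame; the transpose in isolation (the scores here are made up):

```python
# Each inner list is [class_0_score, class_1_score] for one sample.
y_score = [[0.9, 0.1], [0.2, 0.8]]
logits = list(zip(*y_score))
# logits[0] now holds all class-0 scores, logits[1] all class-1 scores.
```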
"""Required modules"""
import re
import csv
import sys
import numpy as np
import scipy.io as sio
import xlrd
import numexpr as ne
DATE = xlrd.XL_CELL_DATE
TEXT = xlrd.XL_CELL_TEXT
BLANK = xlrd.XL_CELL_BLANK
EMPTY = xlrd.XL_CELL_EMPTY
ERROR = xlrd.XL_CELL_ERROR
NUMBER = xlrd.XL_CELL_NUMBER
def read_excel(filename, sheet=None):
"""Read sheet data or sheet names from an Excel workbook into a
:class:`Spreadsheet`.
:example:
sheet_names = read_excel('parameter.xlsx') # returns a list of sheet names
:example:
spreadsheet = read_excel('parameter.xlsx', 0) # read the first sheet
:example:
spreadsheet = read_excel('parameter.xls', 'sheet_2') # load 'sheet_2'
:param filename: name of the Excel workbook to import
:param sheet: spreadsheet name or index to import
:type filename: string
:type sheet: string or integer or None
:return: sheet names if sheet is None, otherwise sheet data
:rtype: list of strings if sheet is None, otherwise :class:`Spreadsheet`"""
book = xlrd.open_workbook(filename)
spreadsheet = Spreadsheet()
if sheet is None:
return book.sheet_names()
elif isinstance(sheet, int):
xl_sheet = book.sheet_by_index(sheet)
spreadsheet.set_data(xl_sheet.get_rows())
return spreadsheet
else:
xl_sheet = book.sheet_by_name(sheet)
spreadsheet.set_data(xl_sheet.get_rows())
return spreadsheet
def loadtxt(filename, dtype='float', comments='#', delimiter=None, skiprows=0,
usecols=None, unpack=False):
"""Load ascii files into a numpy ndarray using numpy.loadtxt."""
return np.loadtxt(
filename, dtype, comments, delimiter,
None, skiprows, usecols, unpack)
def load(file, mmap_mode=None, allow_pickle=True, fix_imports=True,
encoding='ASCII'):
"""Load numpy .npy and .npz files to an array or map of arrays
respectively using np.load"""
return np.load(file, mmap_mode, allow_pickle, fix_imports, encoding)
def read_csv(filename, start=1, stop=None, assume=TEXT):
"""Read a csv file into a :class:`Spreadsheet`
:example:
sheet = read_csv('parameters.csv', start=9, assume=NUMBER)
:param filename: name of the file to read
:param start: row to start reading
:param stop: row to stop reading
:param assume: type of data to assume
:type filename: string
:type start: integer
:type stop: integer
:type assume: integer
:return: spreadsheet data
:rtype: :class:`Spreadsheet`"""
values = []
spreadsheet = Spreadsheet(assume)
with open(filename) as csvfile:
reader = csv.reader(csvfile)
for row in reader:
values.append(row)
if stop is None:
stop = len(values)
values = values[start-1:stop]
spreadsheet.set_values(values)
return spreadsheet
def load_mat(filename, variable):
"""Read the variable from filename
:example:
sheet = read_mat("parameter.mat", "cse")
:param filename: name of the .mat file to read
:param variable: variable to load
:type filename: string
:type variable: string
:return: variable data
:rtype: array"""
contents = sio.loadmat(filename)
return contents[variable]
def load_section(sheet, row_range=None, col_range=None):
"""Read a 'chunk' of data from a spreadsheet.
Given a selection of rows and columns, this function will return the
intersection of the two ranges. Note that the minimum value for each range
is 1.
:example:
spreadsheet = read_excel('parameters.xlsx', 'Parameters')
cell_data = load_section(
spreadsheet, [1, 3, 5], range(7, 42))
:param sheet: spreadsheet data
:param row_range: selected rows
:param col_range: selected columns
:type sheet: :class:`xlrd.sheet`
:type row_range: list of integers or integer
:type col_range: list of integers or integer
:return: section of sheet data
:rtype: array if assume=NUMBER else list"""
if row_range is None:
row_range = range(1, len(sheet.values)+1)
if col_range is None:
col_range = range(1, len(sheet.values[0])+1)
if isinstance(row_range, int):
row_range = [row_range]
if isinstance(col_range, int):
col_range = [col_range]
rval = [[sheet.cell(x-1, y-1) for y in col_range] for x in row_range]
if sheet.assume == NUMBER:
# Iterate over the already-built rows rather than re-indexing rval with
# the range values, which is only valid when both ranges start at 1.
return np.array(
[[cell.value for cell in row] for row in rval], dtype='float')
return rval
def _multiple_replace(repl, text):
"""Replace multiple regex expressions
:param repl: dictionary of values to replace
:param text: text to perform regex on
:type repl: dict
:type text: string
:return: processed text
:rtype: string"""
# Create a regular expression from the dictionary keys
regex = re.compile("(%s)" % "|".join(map(re.escape, repl.keys())))
# For each match, look-up corresponding value in dictionary
return regex.sub(lambda mo: repl[mo.string[mo.start():mo.end()]], text)
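`_multiple_replace` above compiles a single alternation over all dictionary keys so every replacement happens in one pass; a standalone equivalent (the function name is illustrative):

```python
import re

def multiple_replace(repl, text):
    # One compiled alternation of all (escaped) keys; each match is
    # looked up in the dictionary to find its replacement.
    regex = re.compile("(%s)" % "|".join(map(re.escape, repl.keys())))
    return regex.sub(lambda mo: repl[mo.group(0)], text)
```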
def _fun_to_lambda(entry):
"""Convert a given string representing a matlab anonymous
function to a lambda function
:example:
lambdafun = "@(x) cos(x)"
lambdafun(np.pi)
:param entry: string of matlab anonymous equation
:type: string
:return: mathematical function
:rtype: lambda function"""
repl = {
'./': '/',
'.*': '*',
'.^': '**'
}
# pull out function variable definition
vari = re.findall(r'\@\(.*?\)', entry)
vari = [re.sub(r'\@|\(|\)', '', x) for x in vari]
# remove variable definition
entry = re.sub(r'\@\(.*?\)', '', entry)
# replace operators to suit numpy
entry = _multiple_replace(repl, entry)
# separate equations into different functions
entry = re.sub('{|}', '', entry).split(',')
return list(lambda x, z=i: ne.evaluate(entry[z], local_dict={vari[z]: x})
for i in range(0, len(entry)))
def load_params(sheet, rows=None, ncols=None, pcols=None, cols=None,
nrows=None, prows=None):
"""Read designated parameters from the sheet
:example:
sheet = read_excel('parameter_list.xlsx', 0)
params["pos"] = load_params(sheet, range(55, 75), ncols=2, pcols=3)
:param sheet: spreadsheet data
:param rows: same as nrows=prows
:param cols: same as ncols=pcols
:param nrows: cell rows to read for parameter names
:param ncols: cell columns to read for parameter names
:param prows: cell rows to read for parameter data
:param pcols: cell columns to read for parameter data
:type sheet: :class:`Spreadsheet`
:type rows: list of integers or integer
:type cols: list of integers or integer
:type nrows: list of integers or integer
:type ncols: list of integers or integer
:type prows: list of integers or integer
:type pcols: list of integers or integer
:return: mapping of parameter names to values
:rtype: dict"""
if rows:
nrows = rows
prows = rows
if cols:
ncols = cols
pcols = cols
name_cells = load_section(sheet, nrows, ncols)
data_cells = load_section(sheet, prows, pcols)
# Verify the number of names matches the number of params
assert len(name_cells) == len(data_cells)
data = [_fun_to_lambda(x.value) if x.ctype == TEXT else
x.value if x.ctype == NUMBER else None
for y in data_cells for x in y]
return dict(zip([x.value for y in name_cells for x in y], data))
class Spreadsheet(object):
"""Hold spreadsheet data"""
def __init__(self, assumption=None):
"""Entry point for :class:`Spreadsheet`"""
self.values = None
self.ctypes = None
self.assume = assumption
def set_data(self, data_in):
"""Set spreadsheet data using cell generators"""
data = list(data_in)
self.values = [[col.value for col in row] for row in data]
self.ctypes = [[col.ctype for col in row] for row in data]
def set_values(self, values):
"""Set spreadsheet cell values
:param values: values to set
:type values: container, e.g. list"""
self.values = values
def set_ctypes(self, ctype):
"""Set spreadsheet cell types. I.e. NUMBER, TEXT, etc.
:param ctype: cell types to set
        :type ctype: container, e.g. list"""
self.ctypes = ctype
def size(self):
"""Retrieve the dimensions of the spreadsheet
        :return: spreadsheet dimensions
:rtype: tuple"""
if self.values is not None:
return len(self.values), len(self.values[0])
else:
return None
def cell(self, xpos, ypos):
"""Retrieve cell information
:param xpos: cell row
:param ypos: cell column
:type xpos: integer
:type ypos: integer
:return: cell values and info
:rtype: :class:`xlrd.sheet.Cell`"""
if self.ctypes:
return xlrd.sheet.Cell(
self.ctypes[xpos][ypos], self.values[xpos][ypos])
elif self.assume:
return xlrd.sheet.Cell(self.assume, self.values[xpos][ypos])
else:
return None
def main():
"""Module entry point"""
pass
if __name__ == '__main__':
sys.exit(main()) | |
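The `Spreadsheet` access pattern above depends on `xlrd.sheet.Cell` and the xlrd ctype constants. A minimal, self-contained sketch of the same pattern (using a plain `(ctype, value)` tuple in place of `xlrd.sheet.Cell`, and made-up ctype codes — both are assumptions for illustration only):

```python
# Minimal sketch of the Spreadsheet access pattern above.
# NOTE: this stand-in returns a plain (ctype, value) tuple instead of
# xlrd.sheet.Cell, and TEXT/NUMBER are assumed codes, for illustration.
TEXT, NUMBER = 1, 2

class MiniSheet(object):
    def __init__(self, assumption=None):
        self.values = None
        self.ctypes = None
        self.assume = assumption

    def set_values(self, values):
        self.values = values

    def size(self):
        # (rows, columns) of the stored grid, or None if unset
        if self.values is not None:
            return len(self.values), len(self.values[0])
        return None

    def cell(self, xpos, ypos):
        # Prefer explicit ctypes; fall back to the assumed uniform type
        if self.ctypes:
            return (self.ctypes[xpos][ypos], self.values[xpos][ypos])
        elif self.assume:
            return (self.assume, self.values[xpos][ypos])
        return None

sheet = MiniSheet(assumption=NUMBER)
sheet.set_values([[1.0, 2.0], [3.0, 4.0]])
print(sheet.size())      # (2, 2)
print(sheet.cell(1, 0))  # (2, 3.0)
```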
# ------------------------------------------------------------------------------
# Copyright (c) Microsoft
# Licensed under the MIT License.
# Written by Bin Xiao (Bin.Xiao@microsoft.com)
# ------------------------------------------------------------------------------
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
import yaml
import numpy as np
from easydict import EasyDict as edict
config = edict()
config.OUTPUT_DIR = ''
config.LOG_DIR = ''
config.DATA_DIR = ''
config.GPUS = '0'
config.WORKERS = 4
config.PRINT_FREQ = 20
# Cudnn related params
config.CUDNN = edict()
config.CUDNN.BENCHMARK = True
config.CUDNN.DETERMINISTIC = False
config.CUDNN.ENABLED = True
# pose_resnet related params
POSE_RESNET = edict()
POSE_RESNET.NUM_LAYERS = 50
POSE_RESNET.DECONV_WITH_BIAS = False
POSE_RESNET.NUM_DECONV_LAYERS = 3
POSE_RESNET.NUM_DECONV_FILTERS = [256, 256, 256]
POSE_RESNET.NUM_DECONV_KERNELS = [4, 4, 4]
POSE_RESNET.FINAL_CONV_KERNEL = 1
POSE_RESNET.TARGET_TYPE = 'gaussian'
POSE_RESNET.HEATMAP_SIZE = [64, 64] # width * height, ex: 24 * 32
POSE_RESNET.SIGMA = 2
MODEL_EXTRAS = {
'pose_resnet': POSE_RESNET,
}
# common params for NETWORK
config.MODEL = edict()
config.MODEL.NAME = 'pose_resnet'
config.MODEL.INIT_WEIGHTS = True
config.MODEL.PRETRAINED = ''
config.MODEL.NUM_JOINTS = 16
config.MODEL.IMAGE_SIZE = [256, 256] # width * height, ex: 192 * 256
config.MODEL.EXTRA = MODEL_EXTRAS[config.MODEL.NAME]
config.MODEL.STYLE = 'pytorch'
config.LOSS = edict()
config.LOSS.USE_TARGET_WEIGHT = True
# DATASET related params
config.DATASET = edict()
config.DATASET.ROOT = ''
config.DATASET.DATASET = 'mpii'
config.DATASET.TRAIN_SET = 'train'
config.DATASET.TEST_SET = 'valid'
config.DATASET.DATA_FORMAT = 'jpg'
config.DATASET.HYBRID_JOINTS_TYPE = ''
config.DATASET.SELECT_DATA = False
# training data augmentation
config.DATASET.FLIP = True
config.DATASET.SCALE_FACTOR = 0.25
config.DATASET.ROT_FACTOR = 30
# train
config.TRAIN = edict()
config.TRAIN.LR_FACTOR = 0.1
config.TRAIN.LR_STEP = [90, 110]
config.TRAIN.LR = 0.001
config.TRAIN.OPTIMIZER = 'adam'
config.TRAIN.MOMENTUM = 0.9
config.TRAIN.WD = 0.0001
config.TRAIN.NESTEROV = False
config.TRAIN.GAMMA1 = 0.99
config.TRAIN.GAMMA2 = 0.0
config.TRAIN.BEGIN_EPOCH = 0
config.TRAIN.END_EPOCH = 140
config.TRAIN.RESUME = False
config.TRAIN.CHECKPOINT = ''
config.TRAIN.BATCH_SIZE = 32
config.TRAIN.SHUFFLE = True
# testing
config.TEST = edict()
# size of images for each device
config.TEST.BATCH_SIZE = 32
# Test Model Epoch
config.TEST.FLIP_TEST = False
config.TEST.POST_PROCESS = True
config.TEST.SHIFT_HEATMAP = True
config.TEST.USE_GT_BBOX = False
# nms
config.TEST.OKS_THRE = 0.5
config.TEST.IN_VIS_THRE = 0.0
config.TEST.COCO_BBOX_FILE = ''
config.TEST.BBOX_THRE = 1.0
config.TEST.MODEL_FILE = ''
config.TEST.IMAGE_THRE = 0.0
config.TEST.NMS_THRE = 1.0
# debug
config.DEBUG = edict()
config.DEBUG.DEBUG = False
config.DEBUG.SAVE_BATCH_IMAGES_GT = False
config.DEBUG.SAVE_BATCH_IMAGES_PRED = False
config.DEBUG.SAVE_HEATMAPS_GT = False
config.DEBUG.SAVE_HEATMAPS_PRED = False
def _update_dict(k, v):
if k == 'DATASET':
if 'MEAN' in v and v['MEAN']:
v['MEAN'] = np.array([eval(x) if isinstance(x, str) else x
for x in v['MEAN']])
if 'STD' in v and v['STD']:
v['STD'] = np.array([eval(x) if isinstance(x, str) else x
for x in v['STD']])
if k == 'MODEL':
if 'EXTRA' in v and 'HEATMAP_SIZE' in v['EXTRA']:
if isinstance(v['EXTRA']['HEATMAP_SIZE'], int):
v['EXTRA']['HEATMAP_SIZE'] = np.array(
[v['EXTRA']['HEATMAP_SIZE'], v['EXTRA']['HEATMAP_SIZE']])
else:
v['EXTRA']['HEATMAP_SIZE'] = np.array(
v['EXTRA']['HEATMAP_SIZE'])
if 'IMAGE_SIZE' in v:
if isinstance(v['IMAGE_SIZE'], int):
v['IMAGE_SIZE'] = np.array([v['IMAGE_SIZE'], v['IMAGE_SIZE']])
else:
v['IMAGE_SIZE'] = np.array(v['IMAGE_SIZE'])
for vk, vv in v.items():
if vk in config[k]:
config[k][vk] = vv
else:
raise ValueError("{}.{} not exist in config.py".format(k, vk))
def update_config(config_file):
exp_config = None
with open(config_file) as f:
        exp_config = edict(yaml.load(f, Loader=yaml.SafeLoader))
for k, v in exp_config.items():
if k in config:
if isinstance(v, dict):
_update_dict(k, v)
else:
if k == 'SCALES':
config[k][0] = (tuple(v))
else:
config[k] = v
else:
raise ValueError("{} not exist in config.py".format(k))
def gen_config(config_file):
cfg = dict(config)
for k, v in cfg.items():
if isinstance(v, edict):
cfg[k] = dict(v)
with open(config_file, 'w') as f:
yaml.dump(dict(cfg), f, default_flow_style=False)
def update_dir(model_dir, log_dir, data_dir):
if model_dir:
config.OUTPUT_DIR = model_dir
if log_dir:
config.LOG_DIR = log_dir
if data_dir:
config.DATA_DIR = data_dir
config.DATASET.ROOT = os.path.join(
config.DATA_DIR, config.DATASET.ROOT)
config.TEST.COCO_BBOX_FILE = os.path.join(
config.DATA_DIR, config.TEST.COCO_BBOX_FILE)
config.MODEL.PRETRAINED = os.path.join(
config.DATA_DIR, config.MODEL.PRETRAINED)
def get_model_name(cfg):
name = cfg.MODEL.NAME
full_name = cfg.MODEL.NAME
extra = cfg.MODEL.EXTRA
if name in ['pose_resnet']:
name = '{model}_{num_layers}'.format(
model=name,
num_layers=extra.NUM_LAYERS)
deconv_suffix = ''.join(
'd{}'.format(num_filters)
for num_filters in extra.NUM_DECONV_FILTERS)
full_name = '{height}x{width}_{name}_{deconv_suffix}'.format(
height=cfg.MODEL.IMAGE_SIZE[1],
width=cfg.MODEL.IMAGE_SIZE[0],
name=name,
deconv_suffix=deconv_suffix)
elif name in ['pose_mobilenet']:
name = '{model}'.format(
model=name)
deconv_suffix = ''.join(
'd{}'.format(num_filters)
for num_filters in extra.NUM_DECONV_FILTERS)
full_name = '{height}x{width}_{name}_{deconv_suffix}'.format(
height=cfg.MODEL.IMAGE_SIZE[1],
width=cfg.MODEL.IMAGE_SIZE[0],
name=name,
deconv_suffix=deconv_suffix)
else:
        raise ValueError('Unknown model: {}'.format(cfg.MODEL))
return name, full_name
if __name__ == '__main__':
import sys
gen_config(sys.argv[1]) | |
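The guarded merge performed by `_update_dict`/`update_config` above (only keys already present in the default config may be overridden; anything else raises `ValueError`) can be sketched with plain dicts — a simplified sketch that omits the numpy coercion of MEAN/STD and the size fields:

```python
# Simplified sketch of the guarded config merge above: experiment values
# may only override keys that already exist in the defaults, which turns
# typos in experiment YAML files into immediate errors.
default = {'TRAIN': {'LR': 0.001, 'BATCH_SIZE': 32}, 'GPUS': '0'}

def update(default, experiment):
    for k, v in experiment.items():
        if k not in default:
            raise ValueError("{} not exist in config".format(k))
        if isinstance(v, dict):
            for vk, vv in v.items():
                if vk not in default[k]:
                    raise ValueError("{}.{} not exist in config".format(k, vk))
                default[k][vk] = vv
        else:
            default[k] = v

update(default, {'TRAIN': {'LR': 0.01}})
print(default['TRAIN']['LR'])  # 0.01
try:
    update(default, {'TRAIN': {'LEARNING_RATE': 0.01}})
except ValueError as e:
    print(e)  # TRAIN.LEARNING_RATE not exist in config
```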
from xmuda.data.nuscenes.nuscenes_dataloader import NuScenesSCN
import numpy as np
import os.path as osp
preprocess_dir = "/home/xyyue/xiangyu/nuscenes_unzip/xmuda_lidarseg_preprocess"
nuscenes_dir = "/home/xyyue/xiangyu/nuscenes_unzip"
split = ('train_usa',)
# pselab_paths = ('/home/docker_user/workspace/outputs/xmuda/nuscenes/day_night/xmuda/pselab_data/train_night.npy',)
dataset = NuScenesSCN(split=split,
preprocess_dir=preprocess_dir,
nuscenes_dir=nuscenes_dir,
scale=1,
# pselab_paths=pselab_paths,
merge_classes=True,
use_image=True,
)
import open3d as o3d
pcd = o3d.geometry.PointCloud()
v3d = o3d.utility.Vector3dVector
for i in range(5):
data = dataset[i]
seg_points = data['coords']
pcd.points = v3d(seg_points) # seg_points=(N, 3)
o3d.io.write_point_cloud(osp.join('tmp/xmuda_train_usa', f'{i}.pcd'), pcd)
print(i, len(seg_points))
print() | |
# -*- coding: utf-8 -*-
"""
Created on Wed Sep 11 18:59:16 2019
@author: st
"""
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
dataset = pd.read_csv('Social_Network_Ads.csv')
X=dataset.iloc[:, [2,3]].values
y=dataset.iloc[:,4].values
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.25,random_state=0)
from sklearn.preprocessing import StandardScaler
sc=StandardScaler()
X_train=sc.fit_transform(X_train)
X_test=sc.transform(X_test)
from sklearn.neighbors import KNeighborsClassifier
classifier=KNeighborsClassifier(algorithm='auto', leaf_size=30, metric='minkowski',
metric_params=None, n_jobs=None, n_neighbors=5, p=2,
weights='uniform')
classifier.fit(X_train,y_train)
y_pred=classifier.predict(X_test)
from sklearn.metrics import confusion_matrix
cm=confusion_matrix(y_test,y_pred)
from matplotlib.colors import ListedColormap
X_set, y_set = X_train, y_train
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('red', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('orange', 'green'))(i), label = j)
plt.title('KNN (Training set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show()
from matplotlib.colors import ListedColormap
X_set, y_set = X_test, y_test
X1, X2 = np.meshgrid(np.arange(start = X_set[:, 0].min() - 1, stop = X_set[:, 0].max() + 1, step = 0.01),
np.arange(start = X_set[:, 1].min() - 1, stop = X_set[:, 1].max() + 1, step = 0.01))
plt.contourf(X1, X2, classifier.predict(np.array([X1.ravel(), X2.ravel()]).T).reshape(X1.shape),
alpha = 0.75, cmap = ListedColormap(('orange', 'green')))
plt.xlim(X1.min(), X1.max())
plt.ylim(X2.min(), X2.max())
for i, j in enumerate(np.unique(y_set)):
plt.scatter(X_set[y_set == j, 0], X_set[y_set == j, 1],
c = ListedColormap(('red', 'green'))(i), label = j)
plt.title('KNN (Test set)')
plt.xlabel('Age')
plt.ylabel('Estimated Salary')
plt.legend()
plt.show() | |
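For intuition, the k-nearest-neighbours rule the classifier above uses (Minkowski metric with p=2, i.e. Euclidean distance, uniform weights, majority vote) can be sketched in pure Python on toy data:

```python
# Pure-Python sketch of the k-NN rule used above: squared Euclidean
# distance (Minkowski, p=2), uniform weights, majority vote among the
# k nearest training points.
from collections import Counter

def knn_predict(X_train, y_train, x, k=5):
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(row, x)), label)
        for row, label in zip(X_train, y_train)
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Two well-separated toy clusters
X_train = [(1, 1), (1, 2), (2, 1), (8, 8), (8, 9), (9, 8)]
y_train = [0, 0, 0, 1, 1, 1]
print(knn_predict(X_train, y_train, (1.5, 1.5), k=3))  # 0
print(knn_predict(X_train, y_train, (8.5, 8.5), k=3))  # 1
```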
import argparse
import collections
import csv
import json
import load
from sklearn.metrics import confusion_matrix, f1_score, roc_auc_score, precision_recall_fscore_support
from tensorflow import keras
import scipy.stats as sst
import numpy as np
import sklearn.metrics as skm
from tensorflow.python.keras import models
import architecture
def predict(parser):
val = load.load_dataset("data/validation_2.json")
preproc = load.preproc(*val)
args = parser.parse_args()
print("args model : ", args.model)
model = architecture.build_model()
model.load_weights(args.model)
with open("data/validation_2.json", "rb") as fid:
val_labels = [json.loads(l)['labels'] for l in fid]
counts = collections.Counter(preproc.class_to_int[l[0]] for l in val_labels)
counts = sorted(counts.most_common(), key=lambda x: x[0])
counts = list(zip(*counts))[1]
print("counts : " , counts)
smooth = 500
counts = np.array(counts)[None, None, :]
total = np.sum(counts) + counts.shape[1]
print("total : ", total)
    prior = (counts + smooth) / float(total)  # smoothed (Laplace-style) class prior
print("prior : ", prior)
ecgs, committee_labels = preproc.process(*val)
m_probs = model.predict(ecgs)
committee_labels = np.argmax(committee_labels, axis=2)
committee_labels = committee_labels[:, 0]
print("===================")
temp = []
preds = np.argmax(m_probs / prior, axis = 2)
for i, j in zip(preds, val_labels):
t = sst.mode(i[:len(j)-1])[0][0]
temp.append(t)
#print(i[:len(j)-1])
preds = temp
#print("preds : \n", preds)
report = skm.classification_report(committee_labels, preds, target_names=preproc.classes, digits=3)
scores = skm.precision_recall_fscore_support(committee_labels, preds, average=None)
print("report : \n", report)
cm = confusion_matrix(committee_labels, preds)
print("confusion matrix : \n", cm)
f1 = f1_score(committee_labels, preds, average='micro')
#print("f1_score : ", f1)
# ***roc_auc_score - m_probs***
s_probs = np.sum(m_probs, axis=1)
    s_probs = s_probs / 71  # normalize by the maximum element count per record (71)
#ovo_auroc = roc_auc_score(committee_labels, s_probs, multi_class='ovo')
ovr_auroc = roc_auc_score(committee_labels, s_probs, multi_class='ovr')
print("ovr_auroc : ", ovr_auroc)
#print("ovo_auroc : ", ovo_auroc)
'''
bootstrapping
'''
n_bootstraps = 100
np.random.seed(3033)
total_precision = []
total_recall = []
total_f1 = []
total_auroc = []
precision = []
recall = []
f1 = []
total = []
for j in range(n_bootstraps):
        indices = np.random.randint(0, len(m_probs), 100)  # random_integers is deprecated
#print("indices : ", len(indices))
if len(np.unique(committee_labels[indices])) < 2:
continue
sub_labels = []
sub_result = []
sub_probs = []
#print(indices)
for i in indices:
sub_labels.append(committee_labels[i])
sub_result.append(preds[i])
sub_probs.append(m_probs[i])
s_scores = precision_recall_fscore_support(sub_labels, sub_result, labels=[0, 1, 2, 3], average=None)
# ***roc_auc_score - m_probs***
s_p = np.sum(sub_probs, axis=1)
        s_p = s_p / 71  # normalize by the maximum element count per record (71)
# ovo_auroc = roc_auc_score(committee_labels, s_probs, multi_class='ovo')
#print(sub_labels)
#print(s_p)
try:
s_auroc = roc_auc_score(sub_labels, s_p, multi_class='ovr')
except:
s_auroc = -1
#print(s_scores)
precision.append(np.array(s_scores[0]))
recall.append(np.array(s_scores[1]))
f1.append(np.array(s_scores[2]))
#auroc.append(s_auroc)
total_precision.append(np.average(s_scores[0]))
total_recall.append(np.average(s_scores[1]))
total_f1.append(np.average(s_scores[2]))
total_auroc.append(s_auroc)
total_precision.sort()
total_recall.sort()
total_f1.sort()
total_auroc.sort()
    total_auroc = [x for x in total_auroc if x != -1]  # drop all failed (-1) AUROC runs
#print(total_auroc)
'''
    during bootstrapping, some classes may be absent from a resample
'''
precision = np.array(precision)
precision[precision == .0] = np.nan
recall = np.array(recall)
recall[recall == .0] = np.nan
f1 = np.array(f1)
f1[f1 == .0] = np.nan
#print(total_auroc)
for i in range(4):
pre = precision[:, i]
pre.sort()
rec = recall[:, i]
rec.sort()
f = f1[:, i]
f.sort()
pre = np.round(pre[int(len(pre) * 0.025): int(len(pre) * 0.975)], 3)
rec = np.round(rec[int(len(rec) * 0.025): int(len(rec) * 0.975)], 3)
        f = np.round(f[int(len(f) * 0.025): int(len(f) * 0.975)], 3)
'''
print(i,
" : ", "{0} ({1}, {2})".format(np.round(np.nanmean(pre), 3), round(pre[0], 3), round(pre[-1], 3)),
" : ", "{0} ({1}, {2})".format(np.round(np.nanmean(rec), 3), round(rec[0], 3), round(rec[-1], 3)),
" : ", "{0} ({1}, {2})".format(np.round(np.nanmean(f), 3), round(f[0], 3), round(f[-1], 3)))
'''
item = [i,
"{0} ({1}, {2})".format(np.round(np.nanmean(pre), 3), round(np.nanmin(pre), 3), round(np.nanmax(pre), 3)),
"{0} ({1}, {2})".format(np.round(np.nanmean(rec), 3), round(np.nanmin(rec), 3), round(np.nanmax(rec), 3)),
"{0} ({1}, {2})".format(np.round(np.nanmean(f), 3), round(np.nanmin(f), 3), round(np.nanmax(f), 3))]
total.append(item)
total_auroc = np.round(total_auroc[int(len(total_auroc) * 0.025): int(len(total_auroc) * 0.975)], 3)
total_precision = np.round(total_precision[int(len(total_precision) * 0.025): int(len(total_precision) * 0.975)], 3)
total_recall = np.round(total_recall[int(len(total_recall) * .025): int(len(total_recall) * .975)], 3)
total_f1 = np.round(total_f1[int(len(total_f1) * .025): int(len(total_f1) * .975)], 3)
with open(args.file_name, "w", newline='') as file:
writer = csv.writer(file)
writer.writerow(["", "precision", "recall", "f1-score", "auroc"])
writer.writerow(["",
"{0} ({1}, {2})".format(np.round(np.average(scores[0]), 3), total_precision[0], total_precision[-1]),
"{0} ({1}, {2})".format(np.round(np.average(scores[1]), 3), total_recall[0], total_recall[-1]),
"{0} ({1}, {2})".format(np.round(np.average(scores[2]), 3), total_f1[0], total_f1[-1]),
"{0} ({1}, {2})".format(np.round(ovr_auroc, 3), total_auroc[0], total_auroc[-1]),
])
for i in total:
writer.writerow(i)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument("--model", default="data/model/base.hmd5")
parser.add_argument("--file_name", default="ecg.csv")
predict(parser) | |
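The bootstrap loop above (resample the evaluation set with replacement, recompute the metric, keep the central 95% of the sorted values) can be sketched on its own, here for a simple mean over hypothetical per-fold accuracies:

```python
# Sketch of the percentile-bootstrap idea used above: resample with
# replacement, recompute the metric each time, and report the central
# 95% of the sorted resampled values as a confidence interval.
import random

def bootstrap_ci(values, metric, n_boot=1000, alpha=0.05, seed=3033):
    rng = random.Random(seed)
    stats = sorted(
        metric([rng.choice(values) for _ in values])
        for _ in range(n_boot)
    )
    lo = stats[int(len(stats) * (alpha / 2))]
    hi = stats[int(len(stats) * (1 - alpha / 2)) - 1]
    return lo, hi

def mean(xs):
    return sum(xs) / float(len(xs))

# Hypothetical per-fold accuracies, for illustration only
accuracies = [0.91, 0.88, 0.93, 0.90, 0.87, 0.92, 0.89, 0.94, 0.90, 0.91]
lo, hi = bootstrap_ci(accuracies, mean)
print(lo, hi)
```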
# coding: utf-8
import scrapy
from time import sleep
import time
import numpy as np
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver import ActionChains
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from wanfang.settings import *
from wanfang.items import WanfangItem
def write_log(a,b):
with open('log.txt', 'a') as filer:
line = '{0}: {1} {2}\n'.format(time.asctime(), a, b)
filer.write(line)
def write_url(url):
with open('passed_url.txt', 'a') as filer:
line = '{0}: {1}\n'.format(time.asctime(), url)
filer.write(line)
# Reconnect if the network fluctuates or the connection drops
def mysql_reconnect():
while(True):
try:
if(db.ping(reconnect=True)==None):
break
except Exception, e:
pass
class WanfangSpider(scrapy.Spider):
url_used = list()
url_crawled = list()
name = 'wanfang'
allowed_domains = [r'med.wanfangdata.com.cn']
start_urls = [
r'http://med.wanfangdata.com.cn/Periodical/Subject?class=R1'
]
service_args = ['--load-images=false', '--disk-cache=true']
# service_args = ['--load-images=false', '--disk-cache=true', '--proxy=127.0.0.1:9050', '--proxy-type=socks5']
driver = webdriver.PhantomJS(service_args= service_args)
website_possible_httpstatus_list = [403]
handle_httpstatus_list = [403]
# driver = webdriver.Chrome()
# rules = [
# # Rule(LinkExtractor(allow=("http://med\.wanfangdata\.com\.cn/Periodical/Subject\?class=R\d")), follow=True),
# Rule(LinkExtractor(allow=("http://med\.wanfangdata\.com\.cn/Periodical/.*")), follow=True),
# Rule(LinkExtractor(allow=("http://med\.wanfangdata\.com\.cn/Paper/Detail/PeriodicalPaper_.*")), callback='parse_getss', follow=True)
#
# ]
# wait_time = [10,8,12,9,10,10,15]
wait_time = [5,6,5,5,7,6,5]
waits_time = [1,2,1,0.5,1,0.5,2]
begin_time = time.time()
def parse(self, response):
responseUrl = response.url
        # These URLs remain on the main subject node (A) after following them
links = LinkExtractor(allow=("http://med\.wanfangdata\.com\.cn/Periodical/Subject\?class=\w*(($)|(&p=\d*$))"
)).extract_links(response)
link_urls = [x.url for x in links]
for url in link_urls:
            # Skip URLs that have already been visited
if url in self.url_used:
# sleep(1)
continue
self.url_used.append(url)
# sleep(1)
yield scrapy.Request(url=url, callback=self.parse_again)
        # These URLs lead to a journal node (B) when followed
links_next = LinkExtractor(allow=("http://med\.wanfangdata\.com\.cn/Periodical/\w*$"
)).extract_links(response)
link_urls_next = [x.url for x in links_next]
for url in link_urls_next:
            # Skip URLs that have already been visited
if url in self.url_used:
# sleep(1)
continue
self.url_used.append(url)
# sleep(1)
yield scrapy.Request(url=url, callback=self.parse_getss)
    # Handler for subject (main) nodes
def parse_again(self, response):
sleep(self.waits_time[np.random.randint(7)])
responseUrl = response.url
        # These URLs remain on the main subject node (A) after following them
links = LinkExtractor(allow=("http://med\.wanfangdata\.com\.cn/Periodical/Subject\?class=\w*(($)|(&p=\d*$))"
)).extract_links(response)
link_urls = [x.url for x in links]
for url in link_urls:
            # Skip URLs that have already been visited
if url in self.url_used:
# sleep(1)
continue
self.url_used.append(url)
# sleep(1)
yield scrapy.Request(url=url, callback=self.parse_again)
        # These URLs lead to a journal node (B) when followed
links_next = LinkExtractor(allow=("http://med\.wanfangdata\.com\.cn/Periodical/\w*$"
)).extract_links(response)
link_urls_next = [x.url for x in links_next]
for url in link_urls_next:
            # Skip URLs that have already been visited
if url in self.url_used:
# sleep(1)
continue
self.url_used.append(url)
# sleep(1)
yield scrapy.Request(url=url, callback=self.parse_getss)
    # Handler for journal nodes
def parse_getss(self, response):
sleep(self.waits_time[np.random.randint(7)])
responseUrl = response.url
        # Keep following pages encountered along the way
links = LinkExtractor(allow=("http://med\.wanfangdata\.com\.cn/Periodical/.*")).extract_links(response)
link_urls = [x.url for x in links]
for url in link_urls:
            # Skip URLs that have already been visited
if url in self.url_used:
# sleep(1)
continue
self.url_used.append(url)
# sleep(1)
yield scrapy.Request(url=url, callback=self.parse_getss)
links_paper = LinkExtractor(allow=("http://med\.wanfangdata\.com\.cn/Paper/Detail/PeriodicalPaper_.*")).extract_links(response)
link_urls_paper = [x.url for x in links_paper]
for url in link_urls_paper:
            # Check whether this paper has already been crawled
# sleep(1)
if url in self.url_crawled:
continue
sql = "select * from wanfang where url='{0}' limit 1".format(url)
sql_result = 0
try:
sql_result = cursor.execute(sql)
except Exception ,e:
mysql_reconnect()
sql_result = cursor.execute(sql)
if sql_result == 1:
continue
else:
# sleep(1)
self.url_crawled.append(url)
yield scrapy.Request(url=url, callback=self.parse_paper)
a = 0
b = 0
def parse_paper(self, response):
if (time.time()-self.begin_time)>=2000:
self.begin_time = time.time()
try:
self.driver.quit()
except Exception, e:
print e
sleep(10)
self.driver = webdriver.PhantomJS(service_args= self.service_args)
sleep(3)
pageurl = response.url
self.driver.get(pageurl)
xpaths = ["//h3/a[@id='AA1']", "//h3/a[@id='AA2']", "//h3/a[@id='AA5']",
"//h3/a[@id='AA6']", "//h3/a[@id='AA7']", "//h3/a[@id='AA8']"]
# prexpaths = ['//ul[@id="Ul1"]/li|//ul[@id="Ul1"]/text()', '//ul[@id="Ul2"]/li|//ul[@id="Ul2"]/text()',
# '//ul[@id="Ul5"]/li|//ul[@id="Ul5"]/text()', '//ul[@id="Ul7"]/li|//ul[@id="Ul7"]/text()',
# '//ul[@id="Ul6"]/li|//ul[@id="Ul6"]/text()', '//ul[@id="Ul8"]/li|//ul[@id="Ul8"]/text()']
prexpaths = [u'//ul[@id="Ul1"]/li | //ul[@id="Ul1" and contains(text(), "本文无")]',
u'//ul[@id="Ul2"]/li | //ul[@id="Ul2" and contains(text(), "本文无")]',
u'//ul[@id="Ul5"]/li | //ul[@id="Ul5" and contains(text(), "没有")]',
u'//ul[@id="Ul7"]/li | //ul[@id="Ul7" and contains(text(), "没有")]',
u'//ul[@id="Ul6"]/li | //ul[@id="Ul6" and contains(text(), "没有")]',
u'//ul[@id="Ul8"]/li | //ul[@id="Ul8" and contains(text(), "没有")]']
# prexpaths = [u'//ul[@id="Ul1" | //ul[@id="Ul1" and text()=""]', u'//ul[@id="Ul2"]',
# u'//ul[@id="Ul5"]', u'//ul[@id="Ul7"]',
# u'//ul[@id="Ul6"]', u'//ul[@id="Ul8"]']
        # Click each expandable section
for myxpath in xpaths:
if len(response.xpath(myxpath)) != 0:
self.driver.find_element_by_xpath(myxpath).click()
sleep(1)
        # Wait for the content to render
sleep(self.wait_time[np.random.randint(7)])
self.b += 1
for i in xrange(len(xpaths)):
if len(response.xpath(xpaths[i])) != 0:
try:
                    # # Click again
# self.driver.find_element_by_xpath(xpaths[i]).click()
# sleep(1)
element = WebDriverWait(self.driver, 30).until(
EC.presence_of_element_located((By.XPATH, prexpaths[i])))
except Exception, e:
self.a += 1
write_log(self.a, self.b)
                    print 'Page not fully loaded; cannot display properly'
        # If none of the existing xpaths rendered, we were likely banned; reload with a new IP
banIf = True
passIf = False
for i in xrange(len(xpaths)):
if len(response.xpath(xpaths[i])) != 0:
try:
self.driver.find_element_by_xpath(prexpaths[i])
banIf = False
except Exception, e:
passIf = True
else:
banIf = False
if banIf:
req = response.request
req.meta["change_proxy"] = True
return req
if passIf and not banIf:
write_url(pageurl)
new_response = scrapy.Selector(text = self.driver.page_source)
title = ''
if len(new_response.xpath("//h4/text()")) != 0:
title = ''+new_response.xpath("//h4/text()").extract()[0]
click = ''
if len(new_response.xpath("//span[@id='artcileClickCount']/text()")) != 0:
click = (new_response.xpath("//span[@id='artcileClickCount']/text()").extract()[0]).split(':')[1]
download = ''
if len(new_response.xpath("//span[@id='artcileDownloadCount']/text()"))!= 0:
download = (new_response.xpath("//span[@id='artcileDownloadCount']/text()").extract()[0]).split(':')[1]
des = ''
if len(new_response.xpath("//p[@class='prvTXT']/text()")) != 0:
des = ''+new_response.xpath("//p[@class='prvTXT']/text()").extract()[0]
zuozhe = ''
if len(new_response.xpath(u'//th[text()="作 者"]')) != 0:
zuozhe = ''.join([x.strip() for x in (new_response.xpath(u'//th[text()="作 者"]')).xpath('../td/a/text()').extract()])
if len(zuozhe) == 0:
zuozhe = ''.join([x.strip() for x in (new_response.xpath(u'//th[text()="作 者"]')).xpath('../td/text()').extract()])
kanming = ''
if len(new_response.xpath(u'//th[text()="刊 名"]')) != 0:
kanming = ''.join([x.strip()+',' for x in (new_response.xpath(u'//th[text()="刊 名"]')).xpath("../td/a/text()").extract()])
yingwenkanming = ''
if len(new_response.xpath(u'//th[text()="英文期刊名"]')) != 0:
yingwenkanming = "".join([x.strip() for x in (new_response.xpath(u'//th[text()="英文期刊名"]')).xpath('../td/text()').extract()])
keyword = ''
if len(new_response.xpath(u'//th[text()="关键词"]')) != 0:
keyword = ''.join([x.strip()+',' for x in (new_response.xpath(u'//th[text()="关键词"]')).xpath('../td/a/text()').extract()])
lanmuname = ''
if len(new_response.xpath(u'//th[text()="栏目名称"]')) != 0:
lanmuname = ''.join([x.strip() for x in (new_response.xpath(u'//th[text()="栏目名称"]')).xpath('../td/text()').extract()])
doi = ''
if len(new_response.xpath(u'//th[text()="DOI号"]')) != 0:
doi = ''.join([x.strip() for x in (new_response.xpath(u'//th[text()="DOI号"]')).xpath('../td/a/text()').extract()])
cankao = ''
if len(new_response.xpath('//ul[@id="Ul1"]/li')) != 0:
cankao_content = new_response.xpath('//ul[@id="Ul1"]/li')
for li in cankao_content:
cankao_author = li.xpath('./text()').extract()
cankao_journal = li.xpath('./a/text()').extract()
item1 = ''
item2 = ''
item3 = ''
if len(cankao_author) == 2:
item1 = cankao_author[0].strip('\r\n []1234567890.').strip()
item3 = ''.join([i.strip() for i in cankao_author[1].split('\n')]).strip().strip('.')
if len(cankao_journal):
item2 = (cankao_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
cankao += item
elif len(cankao_author) == 3:
item1 = cankao_author[0].strip('\r\n []1234567890.').strip()
item3 = ''.join([i.strip() for i in cankao_author[2].split('\n')]).strip().strip('.')
if len(cankao_journal):
item2 = (cankao_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
cankao += item
else:
if len(new_response.xpath('//ul[@id="Ul1"]/text()')) != 0:
cankao = new_response.xpath('//ul[@id="Ul1"]/text()').extract()[0]+';'
# cankao = "//".join(c.xpath('./text()').extract()[0].strip('\r\n []1234567890').strip() + '|' + c.xpath('./a/text()').extract()[0].strip('\r\n []1234567890').strip() + '|' + "".join(c.xpath('./text()').extract()[2].split()).strip('.') for c in new_response.xpath('//ul[@id="Ul1"]/li'))
yizheng = ''
if len(new_response.xpath('//ul[@id="Ul2"]/li')) != 0:
yinzheng_content = new_response.xpath('//ul[@id="Ul2"]/li')
for li in yinzheng_content:
yinzheng_author = li.xpath('./text()').extract()
yinzheng_journal = li.xpath('./a/text()').extract()
item1 = ''
item2 = ''
item3 = ''
if len(yinzheng_author) == 2:
item1 = yinzheng_author[0].strip('\r\n []1234567890.').strip()
item3 = ''.join([i.strip() for i in yinzheng_author[1].split('\n')]).strip().strip('.')
if len(yinzheng_journal):
item2 = (yinzheng_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
yizheng += item
elif len(yinzheng_author) == 3:
item1 = yinzheng_author[0].strip('\r\n []1234567890.').strip()
item3 = ''.join([i.strip() for i in yinzheng_author[2].split('\n')]).strip().strip('.')
if len(yinzheng_journal):
item2 = (yinzheng_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
yizheng += item
else:
if len(new_response.xpath('//ul[@id="Ul2"]/text()')) != 0:
yizheng = new_response.xpath('//ul[@id="Ul2"]/text()').extract()[0]+';'
# yinzheng = ''.join(c.xpath('./text()').extract()[0].strip('\r\n []1234567890').strip() + '|' + c.xpath('./a/text()').extract()[0].strip('\r\n []1234567890').strip() + '|' + "".join(c.xpath('./text()').extract()[2].split()).strip('.') for c in new_response.xpath('//ul[@id="Ul2"]/li'))
sswenxian = ''
if len(new_response.xpath('//ul[@id="Ul5"]/li')) != 0:
yinzheng_content = new_response.xpath('//ul[@id="Ul5"]/li')
for li in yinzheng_content:
yinzheng_author = li.xpath('./text()').extract()
yinzheng_journal = li.xpath('./a/text()').extract()
item1 = ''
item2 = ''
item3 = ''
if len(yinzheng_author) == 2:
item1 = yinzheng_author[0].strip('\r\n []1234567890.').strip()
item3 = ''.join([i.strip() for i in yinzheng_author[1].split('\n')]).strip().strip('.')
if len(yinzheng_journal):
item2 = (yinzheng_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
sswenxian += item
elif len(yinzheng_author) == 3:
item1 = yinzheng_author[0].strip('\r\n []1234567890,').strip()
item3 = ''.join([i.strip() for i in yinzheng_author[2].split('\n')]).strip().strip('.')
if len(yinzheng_journal):
item2 = (yinzheng_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
sswenxian += item
else:
if len(new_response.xpath('//ul[@id="Ul5"]/text()')) != 0:
sswenxian = new_response.xpath('//ul[@id="Ul5"]/text()').extract()[0]+';'
sswaiwen = ''
if len(new_response.xpath('//ul[@id="Ul7"]/li')) != 0:
yinzheng_content = new_response.xpath('//ul[@id="Ul7"]/li')
for li in yinzheng_content:
yinzheng_author = li.xpath('./text()').extract()
yinzheng_journal = li.xpath('./a/text()').extract()
item1 = ''
item2 = ''
item3 = ''
if len(yinzheng_author) == 2:
item1 = yinzheng_author[0].strip('\r\n []1234567890.').strip()
item3 = ''.join([i.strip() for i in yinzheng_author[1].split('\n')]).strip().strip('.')
if len(yinzheng_journal):
item2 = (yinzheng_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
sswaiwen += item
elif len(yinzheng_author) == 3:
item1 = yinzheng_author[0].strip('\r\n []1234567890.').strip()
item3 = ''.join([i.strip() for i in yinzheng_author[2].split('\n')]).strip().strip('.')
if len(yinzheng_journal):
item2 = (yinzheng_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
sswaiwen += item
else:
if len(new_response.xpath('//ul[@id="Ul7"]/text()')) != 0:
sswaiwen = new_response.xpath('//ul[@id="Ul7"]/text()').extract()[0]+';'
sshuiyi = ''
if len(new_response.xpath('//ul[@id="Ul6"]/li')) != 0:
yinzheng_content = new_response.xpath('//ul[@id="Ul6"]/li')
for li in yinzheng_content:
yinzheng_author = li.xpath('./text()').extract()
yinzheng_journal = li.xpath('./a/text()').extract()
item1 = ''
item2 = ''
item3 = ''
if len(yinzheng_author) == 2:
item1 = yinzheng_author[0].strip('\r\n []1234567890.').strip()
item3 = ''.join([i.strip() for i in yinzheng_author[1].split('\n')]).strip().strip('.')
if len(yinzheng_journal):
item2 = (yinzheng_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
sshuiyi += item
elif len(yinzheng_author) == 3:
item1 = yinzheng_author[0].strip('\r\n []1234567890.').strip()
item3 = ''.join([i.strip() for i in yinzheng_author[2].split('\n')]).strip().strip('.')
if len(yinzheng_journal):
item2 = (yinzheng_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
sshuiyi += item
else:
if len(new_response.xpath('//ul[@id="Ul6"]/text()')) != 0:
sshuiyi = new_response.xpath('//ul[@id="Ul6"]/text()').extract()[0]+';'
ssxuewei = ''
if len(new_response.xpath('//ul[@id="Ul8"]/li')) != 0:
yinzheng_content = new_response.xpath('//ul[@id="Ul8"]/li')
for li in yinzheng_content:
yinzheng_author = li.xpath('./text()').extract()
yinzheng_journal = li.xpath('./a/text()').extract()
item1 = ''
item2 = ''
item3 = ''
if len(yinzheng_author) == 2:
item1 = yinzheng_author[0].strip('\r\n []1234567890.').strip()
item3 = ''.join([i.strip() for i in yinzheng_author[1].split('\n')]).strip().strip('.')
if len(yinzheng_journal):
item2 = (yinzheng_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
ssxuewei += item
elif len(yinzheng_author) == 3:
item1 = yinzheng_author[0].strip('\r\n []1234567890.').strip()
item3 = ''.join([i.strip() for i in yinzheng_author[2].split('\n')]).strip().strip('.')
if len(yinzheng_journal):
item2 = (yinzheng_journal[0].strip('\r\n []1234567890.').strip())
item = ''+item1+'|'+item2+'|'+item3+';'
ssxuewei += item
else:
if len(new_response.xpath('//ul[@id="Ul8"]/text()')) != 0:
ssxuewei = new_response.xpath('//ul[@id="Ul8"]/text()').extract()[0]+';'
item = WanfangItem()
item['url'] = pageurl.encode('utf8')
item['title'] = title.encode('utf8')
item['click'] = click.encode('utf8')
item['down'] = download.encode('utf8')
item['des'] = des.encode('utf8')
item['zuozhe'] = zuozhe.encode('utf8')
item['kanming'] = kanming[:-1].encode('utf8')
item['yingwenkanming'] = yingwenkanming.encode('utf8')
item['keyword'] = keyword[:-1].encode('utf8')
item['lanmuname'] = lanmuname.encode('utf8')
item['doi'] = doi.encode('utf8')
item['cankao'] = cankao[:-1].encode('utf8')
item['yinzheng'] = yinzheng[:-1].encode('utf8')
item['sswenxian'] = sswenxian[:-1].encode('utf8')
item['sswaiwen'] = sswaiwen[:-1].encode('utf8')
item['sshuiyi'] = sshuiyi[:-1].encode('utf8')
item['ssxuewei'] = ssxuewei[:-1].encode('utf8')
return item | |
import numpy as np
from .VariableUnitTest import VariableUnitTest
from gwlfe.Input.WaterBudget import ET
class TestET(VariableUnitTest):
def test_DailyETPart1(self):
z = self.z
np.testing.assert_array_almost_equal(ET.DailyET_f(z.Temp, z.KV, z.PcntET, z.DayHrs),
ET.DailyET(z.NYrs, z.DaysMonth, z.Temp, z.DayHrs, z.KV, z.PcntET,
z.ETFlag), decimal=7) | |
# -*- coding: utf-8 -*-
"""
Connected components.
"""
# Copyright (C) 2004-2013 by
# Aric Hagberg <hagberg@lanl.gov>
# Dan Schult <dschult@colgate.edu>
# Pieter Swart <swart@lanl.gov>
# All rights reserved.
# BSD license.
import networkx as nx
from networkx.utils.decorators import not_implemented_for
from networkx.algorithms.shortest_paths \
import single_source_shortest_path_length as sp_length
__authors__ = "\n".join(['Eben Kenah',
'Aric Hagberg <aric.hagberg@gmail.com>',
'Christopher Ellison'])
__all__ = ['number_connected_components', 'connected_components',
'connected_component_subgraphs','is_connected',
'node_connected_component']
@not_implemented_for('directed')
def connected_components(G):
"""Generate connected components.
Parameters
----------
G : NetworkX graph
An undirected graph
Returns
-------
comp : generator of lists
A list of nodes for each component of G.
Examples
--------
Generate a sorted list of connected components, largest first.
>>> G = nx.path_graph(4)
>>> G.add_path([10, 11, 12])
>>> sorted(nx.connected_components(G), key = len, reverse=True)
[[0, 1, 2, 3], [10, 11, 12]]
See Also
--------
strongly_connected_components
Notes
-----
For undirected graphs only.
"""
seen={}
for v in G:
if v not in seen:
c = sp_length(G, v)
yield list(c)
seen.update(c)
@not_implemented_for('directed')
def connected_component_subgraphs(G, copy=True):
"""Generate connected components as subgraphs.
Parameters
----------
G : NetworkX graph
An undirected graph.
copy : bool (default=True)
If True make a copy of the graph attributes
Returns
-------
comp : generator
A generator of graphs, one for each connected component of G.
Examples
--------
>>> G = nx.path_graph(4)
>>> G.add_edge(5,6)
>>> graphs = list(nx.connected_component_subgraphs(G))
See Also
--------
connected_components
Notes
-----
For undirected graphs only.
Graph, node, and edge attributes are copied to the subgraphs by default.
"""
for c in connected_components(G):
if copy:
yield G.subgraph(c).copy()
else:
yield G.subgraph(c)
def number_connected_components(G):
"""Return the number of connected components.
Parameters
----------
G : NetworkX graph
An undirected graph.
Returns
-------
n : integer
Number of connected components
See Also
--------
connected_components
Notes
-----
For undirected graphs only.
"""
return len(list(connected_components(G)))
@not_implemented_for('directed')
def is_connected(G):
"""Return True if the graph is connected, false otherwise.
Parameters
----------
G : NetworkX Graph
An undirected graph.
Returns
-------
connected : bool
True if the graph is connected, False otherwise.
Examples
--------
>>> G = nx.path_graph(4)
>>> print(nx.is_connected(G))
True
See Also
--------
connected_components
Notes
-----
For undirected graphs only.
"""
if len(G) == 0:
raise nx.NetworkXPointlessConcept('Connectivity is undefined '
'for the null graph.')
return len(sp_length(G, next(G.nodes_iter()))) == len(G)
@not_implemented_for('directed')
def node_connected_component(G, n):
"""Return the nodes in the component of graph containing node n.
Parameters
----------
G : NetworkX Graph
An undirected graph.
n : node label
A node in G
Returns
-------
comp : list
A list of nodes in the component of G containing node n.
See Also
--------
connected_components
Notes
-----
For undirected graphs only.
"""
return list(sp_length(G, n)) | |
from abc import ABCMeta, abstractmethod
from typing import Union, List, Generator
import numpy as np
class AbstractSplittingStrategy(metaclass=ABCMeta):
@abstractmethod
def split(self, data: np.ndarray) -> Union[List[np.ndarray], Generator[List[np.ndarray], None, None]]:
pass
@abstractmethod
def generates_many_splits(self) -> bool:
pass | |
import os
os.environ['TF_CPP_MIN_VLOG_LEVEL'] = '3'
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
from tensorflow import logging
logging.set_verbosity(logging.INFO)
from keras.constraints import maxnorm
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder
from keras.models import Model, load_model
from keras.layers import Input, Reshape, Dot
from keras.layers.embeddings import Embedding
from keras.optimizers import Adam
from keras.regularizers import l2
from keras.layers import Add, Activation, Lambda
from keras.callbacks import Callback, EarlyStopping, ModelCheckpoint
from keras import backend
from matplotlib import pyplot
import math
import sys
import time
import csv
import subprocess
clear = lambda: os.system('cls')
clear()
def checkArg(arg):
return arg in sys.argv
import firebase_admin
from firebase_admin import credentials, firestore
cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred)
db = firestore.client()
path="ml-latest"
###############################################################
docs = db.collection(u'valoraciones').stream()
new = False
for doc in docs:
pelicula = doc.to_dict()['pelicula']
usuario = doc.to_dict()['usuario']
valoracion = doc.to_dict()['valoracion']
new=True
with open(path+'-small/ratings.csv','a') as f:
writer=csv.writer(f)
writer.writerow([usuario,pelicula,float(valoracion/2),int(round(time.time() * 1000))])
def delete_collection(coll_ref, batch_size):
docs = coll_ref.limit(batch_size).get()
deleted = 0
for doc in docs:
doc.reference.delete()
deleted = deleted + 1
if(deleted >= batch_size):
return delete_collection(coll_ref, batch_size)
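# Hedged, dependency-free sketch (not part of the original script) of the
# batched-delete recursion used by delete_collection above, modelled on an
# in-memory list instead of a Firestore collection. drain() is a
# hypothetical name invented for this illustration.
def drain(items, batch_size):
    # Take one batch, delete its members, and recurse while full batches
    # keep coming back; a short batch means the collection is exhausted.
    batch = items[:batch_size]
    deleted = 0
    for _ in batch:
        items.pop(0)
        deleted += 1
    if deleted >= batch_size:
        return drain(items, batch_size)
    return items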
if(new):
delete_collection(db.collection(u'valoraciones'), 10)
subprocess.call(["python", "GenerateJustWatchDataset.py"])
###############################################################
ratings = pd.read_csv(path+'/jw_ratings.csv', sep=',', encoding='latin-1', usecols=['userId', 'movieId', 'rating'])
movies = pd.read_csv(path+'/jw.csv', sep=',', encoding='latin-1', usecols=['movieId','tmdbId', 'title', 'genres'])
user_enc = LabelEncoder()
ratings['user'] = user_enc.fit_transform(ratings['userId'].values)
n_users = ratings['user'].nunique()
item_enc = LabelEncoder()
ratings['movie'] = item_enc.fit_transform(ratings['movieId'].values)
n_movies = ratings['movie'].nunique()
ratings['rating'] = ratings['rating'].values.astype(np.float32)
min_rating = min(ratings['rating'])
max_rating = max(ratings['rating'])
print('Numero de usuarios: ' + str(n_users))
print('Numero de películas: ' + str(n_movies))
print('Valoración mínima: ' + str(min_rating*2))
print('Valoración máxima: ' + str(max_rating*2))
print('\n')
X = ratings[['user', 'movie']].values
y = ratings['rating'].values
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)
n_factors = 50
X_train_array = [X_train[:, 0], X_train[:, 1]]
X_test_array = [X_test[:, 0], X_test[:, 1]]
class EmbeddingLayer:
def __init__(self, n_items, n_factors):
self.n_items = n_items
self.n_factors = n_factors
def __call__(self, x):
x = Embedding(self.n_items, self.n_factors, embeddings_initializer='he_normal', embeddings_regularizer=l2(1e-6))(x)
x = Reshape((self.n_factors,))(x)
return x
def Recommender(n_users, n_movies, n_factors, min_rating, max_rating):
user = Input(shape=(1,))
u = EmbeddingLayer(n_users, n_factors)(user)
ub = EmbeddingLayer(n_users, 1)(user)
movie = Input(shape=(1,))
m = EmbeddingLayer(n_movies, n_factors)(movie)
mb = EmbeddingLayer(n_movies, 1)(movie)
x = Dot(axes=1)([u, m])
x = Add()([x, ub, mb])
x = Activation('sigmoid')(x)
x = Lambda(lambda x: x * (max_rating - min_rating) + min_rating)(x)
model = Model(inputs=[user, movie], outputs=x)
opt = Adam(lr=0.001)
model.compile(loss='mse', optimizer=opt, metrics=['mse'])
return model
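# Hedged sketch (not part of the original model) of the output-scaling
# trick in Recommender above: the sigmoid keeps the raw prediction in
# (0, 1) and the final Lambda layer maps it linearly onto the rating range.
def scale_to_rating(sigmoid_out, min_rating, max_rating):
    # Same expression as the Lambda layer: x * (max - min) + min
    return sigmoid_out * (max_rating - min_rating) + min_rating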
###################################################
if(checkArg('--train')):
model = Recommender(n_users, n_movies, n_factors, min_rating, max_rating)
# model.summary()
callbacks = [EarlyStopping('val_loss', patience=5), ModelCheckpoint('model/model.h5', save_best_only=True,save_weights_only=False)]
history = model.fit(x=X_train_array, y=y_train, batch_size=64, epochs=10, verbose=1, validation_data=(X_test_array, y_test),callbacks=callbacks)
min_val_loss, idx = min((val, idx) for (idx, val) in enumerate(history.history['val_loss']))
print('Entrenamiento completo')
else:
model = load_model('model/model.h5')
###################################################
# Function to predict the ratings given User ID and Movie ID
def predict_rating(user_id, movie_id):
return model.predict([np.array([user_id]), np.array([movie_id])])[0][0]
################################## UPLOAD TO FIRESTORE ##################################
top = ratings[['movieId','rating']]
top['count'] = top.groupby('movieId', as_index=False)['movieId'].transform(lambda x: x.count())
top = top[top['count'] >= 100]
top = top.groupby(['movieId']).mean().reset_index()
top = top.merge(movies, on='movieId', how='inner', suffixes=['_u', '_m'])[['tmdbId','rating']]
top.sort_values(by=['rating'], ascending=[False], inplace=True)
doc_ref = db.collection(u'predicciones').document(u'info')
doc_ref.set({
u'usuarios': max(int(ratings['userId'].max()), doc_ref.get().to_dict()['usuarios']),
u'peliculas': n_movies,
u'valoraciones': ratings.shape[0],
u'top': top['tmdbId'].tolist()[:30]
})
count = 0
while count < n_users:
print('Usuario ' + str(count+1) + '/' + str(n_users))
media = float('%.3f'%(ratings[ratings['user'] == count].loc[:,"rating"].mean()*2))
vistas = ratings[ratings['user'] == count].shape[0]
user_ratings = ratings[ratings['user'] == count][['user','userId', 'movie','movieId', 'rating']]
userId = user_ratings['userId'].iloc[0]
recommendations = ratings[ratings['movieId'].isin(user_ratings['movieId']) == False][['movie','movieId']].drop_duplicates()
recommendations['prediction'] = recommendations.apply(lambda x: predict_rating(count, x['movie']), axis=1)
recommendations = recommendations.merge(movies, on='movieId', how='inner', suffixes=['_u', '_m'])[['tmdbId','prediction']]
recommendations.sort_values(by='prediction', ascending=False,inplace=True)
r = recommendations['tmdbId'].tolist()[:30]
doc_ref = db.collection(u'predicciones').document(u'usuario_' + str(userId))
doc_ref.set({
u'media': media,
u'predicciones': r,
u'vistas': vistas
})
count += 1
######################################################################################### | |
# @component {
# "kind" : "trainer",
# "language" : "py",
# "description" : "Train model to recognize categories of grayscale images (MNIST)",
# "permissions": "public",
# "properties": [
# { "name": "Pixel width" , "field": "width", "kind": "integer", "min": 8, "max": 1000, "required": true, "default": 28 },
# { "name": "Pixel height" , "field": "height", "kind": "integer", "min": 8, "max": 1000, "required": true, "default": 28 },
# { "name": "Epochs" , "field": "epochs", "kind": "integer", "min": 1, "max": 1000, "required": true, "default": 4, "hint": "Tensorflow recommends 10 which takes too long at about 20 minutes using speed 3 automatic" },
# { "name": "Batch size" , "field": "batch_size", "kind": "integer", "min": 2, "max": 1000, "required": true, "default": 32 }
# ],
# "inputs": ["X:img[]", "y:string[]"],
# "outputs": ["X:img[]", "y:string[]"],
# "dependencies": ["tensorflow", "numpy"],
# "readme" : "",
# "license" : "",
# "links" : ["https://machinelearningmastery.com/how-to-develop-a-convolutional-neural-network-from-scratch-for-mnist-handwritten-digit-classification/", "https://towardsdatascience.com/a-simple-2d-cnn-for-mnist-digit-recognition-a998dbc1e79a", "https://keras.io/examples/mnist_cnn/" ]
# }
import tensorflow as tf
from tensorflow import keras
from numpy import mean
from numpy import std
from keras.datasets import mnist
from keras import backend as K
from keras.models import Sequential
from keras.layers import Conv2D
from keras.layers import MaxPooling2D
from keras.layers import Dense
from keras.layers import Flatten
from keras.optimizers import SGD
from keras.utils import to_categorical
from keras import callbacks
def ATMKerasGrayscaleImageTrainer(ATM):
img_rows = ATM.props.get("height") or 28
img_cols = ATM.props.get("width") or 28
X_train = ATM.inputs["X"]
y_train = ATM.inputs["y"]
if K.image_data_format() == 'channels_first':
# Theano
X_train = X_train.reshape(X_train.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
X_train = X_train.reshape(X_train.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
y_train = to_categorical(y_train)
train_norm = X_train.astype('float32')
X_train = train_norm / 255.0
def atm_progress_callback(framework, purpose):
if framework == "keras":
return callbacks.LambdaCallback(
on_epoch_end=lambda epoch, logs:
ATM.report({ "name": "progress", 'purpose': purpose, 'progress': (epoch + 1) / ATM.props["epochs"], 'loss': logs['loss'], 'finished': False, 'gpu' : tf.config.experimental.list_physical_devices('GPU') }),
on_train_end=lambda logs:
ATM.report({ "name": "progress", 'purpose': purpose, 'finished': True, 'gpu' : tf.config.experimental.list_physical_devices('GPU') })
)
return None
def define_model():
model = Sequential()
model.add(Conv2D(32, (3, 3), activation='relu', kernel_initializer='he_uniform', input_shape=input_shape))
model.add(MaxPooling2D((2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(Conv2D(64, (3, 3), activation='relu', kernel_initializer='he_uniform'))
model.add(MaxPooling2D((2, 2)))
model.add(Flatten())
model.add(Dense(100, activation='relu', kernel_initializer='he_uniform'))
model.add(Dense(10, activation='softmax'))
opt = SGD(lr=0.01, momentum=0.9)
model.compile(optimizer=opt, loss='categorical_crossentropy', metrics=['accuracy'])
return model
model = define_model()
model.fit(X_train, y_train, epochs=ATM.props["epochs"], batch_size=ATM.props["batch_size"], verbose=0,
callbacks=[atm_progress_callback("keras", "train")])
filename = ATM.getTmpFilename("model")
model.save(filename)
model_h5 = open(filename, 'rb').read()
ATM.save("model", model_h5)
ATM.output(ATM.inputs)
#!/usr/bin/env python
# pylint: disable=E1120
from __future__ import division
import numpy as np
from affine import Affine
from rasterio.enums import Resampling
from rasterio.warp import reproject
from rasterio.windows import Window
def _adjust_block_size(width, height, blocksize):
"""Adjusts blocksize by adding 1 if the remainder
from the division of height/width by blocksize is 1.
"""
if width % blocksize == 1:
blocksize += 1
elif height % blocksize == 1:
blocksize += 1
return blocksize
def _make_windows(width, height, blocksize):
"""Manually makes windows of size equivalent to
pan band image
"""
for x in range(0, width, blocksize):
for y in range(0, height, blocksize):
yield Window(
x,
y,
(min((x + blocksize), width) - x),
(min((y + blocksize), height) - y)
)
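# Hedged, rasterio-free sketch of the tiling performed by _make_windows
# above: the same loop, but yielding plain (col_off, row_off, width,
# height) tuples so the clamping at the right/bottom edges is visible.
def make_window_tuples(width, height, blocksize):
    for x in range(0, width, blocksize):
        for y in range(0, height, blocksize):
            yield (x, y,
                   min(x + blocksize, width) - x,
                   min(y + blocksize, height) - y)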
def _make_affine(fr_shape, to_shape):
"""Given from and to width and height,
compute affine transform defining the
georeferencing of the output array
"""
fr_window_affine = Affine(
1, 0, 0,
0, -1, 0)
to_window_affine = Affine(
(fr_shape[1] / float(to_shape[1])), 0, 0,
0, -(fr_shape[0] / float(to_shape[0])), 0)
return fr_window_affine, to_window_affine
def _half_window(window):
"""Computes half window sizes
"""
return tuple((w[0] / 2, w[1] / 2) for w in window)
def _check_crs(inputs):
"""Checks if crs of inputs are the same
"""
for i in range(1, len(inputs)):
if inputs[i-1]['crs'] != inputs[i]['crs']:
raise RuntimeError(
'CRS of inputs must be the same: '
'received %s and %s' % (inputs[i-1]['crs'],
inputs[i]['crs']))
def _create_apply_mask(rgb):
"""Create a mask of pixels where any channel is 0 (nodata),
then apply the mask to input numpy array.
"""
color_mask = np.all(
np.rollaxis(rgb, 0, 3) != 0,
axis=2
).astype(np.uint16) * np.iinfo(np.uint16).max
masked_rgb = np.array([
np.minimum(band, color_mask) for band in rgb])
return masked_rgb
def _upsample(rgb, panshape, src_aff, src_crs, to_aff, to_crs):
"""upsamples rgb to the shape of the panchromatic band
using reproject function from rasterio.warp
"""
up_rgb = np.empty(
(
rgb.shape[0], panshape[0],
panshape[1]), dtype=rgb.dtype
)
reproject(
rgb, up_rgb,
src_transform=src_aff,
src_crs=src_crs,
dst_transform=to_aff,
dst_crs=to_crs,
resampling=Resampling.bilinear)
return up_rgb
def _simple_mask(data, ndv):
'''Exact nodata masking'''
nd = np.iinfo(data.dtype).max
alpha = np.invert(
np.all(np.dstack(data) == ndv, axis=2)
).astype(data.dtype) * nd
return alpha
def _pad_window(wnd, pad):
"""Add padding to windows
"""
return Window(
wnd.col_off - pad,
wnd.row_off - pad,
wnd.width + 2 * pad,
wnd.height + 2 * pad
)
def _calc_windows(pan_src, customwindow):
"""Given raster data, pan_width, pan_height, and window size
are used to compute and output appropriate windows
"""
if customwindow != 0 and isinstance(customwindow, int):
blocksize = _adjust_block_size(pan_src.meta['width'],
pan_src.meta['height'],
int(customwindow))
windows = [(window, (0, 0))
for window in _make_windows(pan_src.meta['width'],
pan_src.meta['height'],
blocksize)]
else:
windows = [(window, ij) for ij, window in pan_src.block_windows()]
return windows
def _rescale(arr, ndv, dst_dtype, out_alpha=True):
"""Convert an array from output dtype, scaling up linearly
"""
if dst_dtype == np.uint16:
scale = 1
else:
# convert to 8bit value range in place
scale = float(np.iinfo(np.uint16).max) / float(np.iinfo(np.uint8).max)
res = (arr / scale).astype(dst_dtype)
if out_alpha:
mask = _simple_mask(
arr.astype(dst_dtype),
(ndv, ndv, ndv)).reshape(
1, arr.shape[1], arr.shape[2])
return np.concatenate([res, mask])
else:
return res | |
#!/usr/bin/env python
# ===- utils/layering/layering.py -----------------------------------------===//
# * _ _ *
# * | | __ _ _ _ ___ _ __(_)_ __ __ _ *
# * | |/ _` | | | |/ _ \ '__| | '_ \ / _` | *
# * | | (_| | |_| | __/ | | | | | | (_| | *
# * |_|\__,_|\__, |\___|_| |_|_| |_|\__, | *
# * |___/ |___/ *
# ===----------------------------------------------------------------------===//
#
# Part of the pstore project, under the Apache License v2.0 with LLVM Exceptions.
# See https://github.com/SNSystems/pstore/blob/master/LICENSE.txt for license
# information.
# SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
#
# ===----------------------------------------------------------------------===//
"""
A utility which verifies the pstore project's layering. That is, it checks that the
dependencies defined by the cmake source files match the dependencies used by the
#include statements in the source files.
With the working directory at the project's root, run cmake with the --graphviz=xx option
to produce the project's dependency graph:
$ mkdir dep
$ cd dep
$ cmake --graphviz=pstore.dot ..
$ cd -
Then, with the working directory again at the project's root, run layering.py:
$ ./utils/layering/layering.py dep/pstore.dot
Dependencies:
$ pip install networkx
$ pip install decorator
$ pip install pydot
"""
import argparse
import logging
import os
from pprint import pformat
import re
import sys
import networkx as nx
EXIT_SUCCESS = 0
EXIT_FAILURE = 1
def bind(x, f):
"""
A monadic bind operation similar to Haskell's Maybe type. Used to enable function
composition where a function returning None indicates failure.
:param x: The input value, passed to callable f if it is not None.
:param f: A callable which is passed 'x'
:return: If 'x' is None on input, None otherwise the result of calling f(x).
"""
if x is None:
return None
return f(x)
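# Hedged sketch (not part of the original module): how bind() enables
# None-propagating composition, in the spirit of Haskell's Maybe. bind()
# is restated here so the example is self-contained; half() and fmt()
# are hypothetical helpers invented for illustration.
def bind(x, f):
    return None if x is None else f(x)

def half(n):
    # Fails (returns None) for odd inputs.
    return n // 2 if n % 2 == 0 else None

def fmt(n):
    return 'value=%d' % n

# 8 -> 4 -> 'value=4'; 7 -> None, so fmt is never called.
ok = bind(bind(8, half), fmt)
bad = bind(bind(7, half), fmt)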
def include_files(path):
"""
A generator which produces the #includes from the file at the given path.
:param path:
"""
with open(path, 'r') as f:
for line in f:
# Don't match up to the end of the line to allow for comments.
r = re.match(r'\s*#\s*include\s*["<](.*)[">]', line)
if r:
yield r.group(1)
def split_all(path):
"""
Splits a path into all of its individual components.
:param path:
:return: A list of path components.
"""
allparts = []
while True:
# Split the path into (head,tail)
parts = os.path.split(path)
if parts[0] == path: # sentinel for absolute paths
allparts.insert(0, parts[0])
break
if parts[1] == path: # sentinel for relative paths
allparts.insert(0, parts[1])
break
allparts.insert(0, parts[1])
path = parts[0]
return allparts
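# Hedged, self-contained restatement of split_all for illustration
# (POSIX-style separators assumed); it makes the two sentinel checks that
# terminate the loop for absolute and relative paths explicit.
import os

def split_all_demo(path):
    parts = []
    while True:
        head, tail = os.path.split(path)
        if head == path:          # absolute-path sentinel, e.g. '/'
            parts.insert(0, head)
            break
        if tail == path:          # relative-path sentinel, e.g. 'lib'
            parts.insert(0, tail)
            break
        parts.insert(0, tail)
        path = head
    return parts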
EXCLUSIONS = ('cmakecxxcompilerid.cpp',)
def dependencies_from_source(source_directory):
"""
Scans the source files contained within the directory hierarchy rooted at 'source_directory' and returns a dictionary
whose keys are the source file paths and where the corresponding value is a set of components referenced by that
source file.
:param source_directory: The path of a directory containing source files (files whose extension is .hpp or .cpp)
:return: A dictionary with the source file to component name mapping.
"""
def sources_in(directory, extensions):
"""
A generator which yields the relative paths of source files in the given directory. Only files whose
extension is in the list given by the 'extensions' parameter are returned.
:param directory:
:param extensions: A list of the file extensions to be included in the results.
"""
for root, dirs, files in os.walk(directory):
for name in files:
lower_name = name.lower()
if lower_name.endswith(extensions) and lower_name not in EXCLUSIONS:
yield os.path.relpath(os.path.join(root, name))
def includes(path):
"""
Returns the set of components referenced by the includes of the source file at 'path'.
:param path: The path to the source file to be scanned.
:return: A set of component names.
"""
# We're only interested in includes with the pstore/ prefix.
pstore_includes = [x for x in include_files(path) if split_all(x)[0] == 'pstore']
# The pstore component is the path component _after_ the initial pstore/. Target names use dashes to separate
# words, whereas paths use underscores.
includes_without_prefix = [split_all(x)[1].replace('_', '-') for x in pstore_includes]
# Convert the include path to the cmake component name. This works around inconsistencies in the way that
# targets and directories are named.
return frozenset([{
'diff': 'diff-lib',
'dump': 'dump-lib',
'json': 'json-lib',
'vacuum': 'vacuum-lib',
}.get(x, x) for x in includes_without_prefix])
return dict((path, includes(path)) for path in sources_in(source_directory, extensions=('.hpp', '.cpp')))
def cmake_dependency_graph(dot_path):
"""
Parse the graphviz dot file at the path given by dot_path and return a networkx directed graph instance that
reflects its contents.
:param dot_path: The path of the file to be parsed.
:return: A networkx directed graph.
"""
g = nx.drawing.nx_pydot.read_dot(dot_path)
if not nx.algorithms.dag.is_directed_acyclic_graph(g):
return None
return g.to_directed()
PREFIX = 'pstore-'
def label(node_data):
s = node_data['label'].strip('"')
return s[len(PREFIX):] if s.startswith(PREFIX) else s
GOOGLE_TEST = ('gtest', 'gtest_main', 'gmock', 'gmock_main')
def reachability_dict(dag):
"""
Scans through the DAG vertices. For each node, get the list of descendants (that is the collection of vertices
that are transitively reachable from that node). Nodes that name one of the google-test/mock components are stripped.
:param dag:
:return: A dictionary whose keys are the target name and values are the set of targets that may be referenced
from the corresponding target.
"""
nodes = dag.nodes(data=True)
return dict((label(data),
frozenset([x for x in [label(nodes[x]) for x in nx.algorithms.descendants(dag, node)] if
x not in GOOGLE_TEST]))
for node, data in nodes if label(data) not in GOOGLE_TEST)
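# Hedged, networkx-free sketch of the reachability computation above: for
# a toy DAG given as an adjacency dict, collect every vertex transitively
# reachable from a node (the real code obtains the same sets from
# nx.algorithms.descendants on the cmake graph). The toy targets below
# are invented for illustration.
def descendants(adj, start):
    seen = set()
    stack = list(adj.get(start, ()))
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(adj.get(node, ()))
    return seen

# Toy targets: dump-lib links core, which links support.
toy_dag = {'dump-lib': ['core'], 'core': ['support'], 'support': []}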
def cmake_target_from_path(p):
"""
Converts a file path to a cmake target name.
:param p: The path to be converted.
:return: The cmake target name.
"""
class Group:
"""Enum-like constants identifying the kind of a component."""
lib = 1
unit_test = 2
tool = 3
examples = 4
def name_from_path(path):
"""
Derives a component name from a path.
:param path: The path to be examined.
:return: A two-tuple: the first member is the base component name; the second member is the component group.
"""
parts = split_all(path)
# Special handling for the unit test harness code.
if parts == ['unittests', 'harness.cpp']:
return 'unit-test-harness', Group.unit_test
if parts[:1] == ['lib']:
return parts[1], Group.lib
if parts[:2] == ['include', 'pstore']:
return parts[2], Group.lib
if parts[:1] == ['unittests']:
return parts[1], Group.unit_test
if parts[:1] == ['tools']:
return parts[1], Group.tool
if parts[:1] == ['examples']:
# Just skip the examples for the time being.
# return (parts[1], Group.examples)
return None
# Skip paths that we don't know about.
return None
def name_to_target(name):
"""
Converts a two-tuple (component base name, component group) to its corresponding cmake target name.
:param name: A two-tuple defining the base component name and component group.
:return: The cmake target name (without the 'pstore-' prefix)
"""
return {
('broker', Group.unit_test): 'broker-unit-tests',
('cmd-util', Group.unit_test): 'cmd-util-unit-tests',
('core', Group.unit_test): 'core-unit-tests',
('diff', Group.unit_test): 'diff-unit-tests',
('dump', Group.unit_test): 'dump-unit-tests',
('httpd', Group.unit_test): 'httpd-unit-tests',
('json', Group.unit_test): 'json-unit-tests',
('mcrepo', Group.unit_test): 'mcrepo-unit-tests',
('serialize', Group.unit_test): 'serialize-unit-tests',
('support', Group.unit_test): 'support-unit-tests',
('vacuum', Group.lib): 'vacuum-lib',
('vacuum', Group.tool): 'vacuumd',
('vacuum', Group.unit_test): 'vacuum-unit-tests',
}.get(name, name[0])
# Produce a component name from a path and convert '_' to '-' to match the convention used by the cmake targets.
return bind(bind(bind(
p,
name_from_path),
lambda x: (x[0].replace('_', '-'), x[1])),
name_to_target)
def logging_config():
logger = logging.getLogger(__name__)
class LessThanFilter(logging.Filter):
def __init__(self, max_level, name=""):
super(LessThanFilter, self).__init__(name)
self.__max_level = max_level
def filter(self, record):
# Is the specified record to be logged? Zero for no, non-zero for yes.
return 1 if record.levelno < self.__max_level else 0
formatter = logging.Formatter('%(levelname)s: %(message)s')
logging_handler_out = logging.StreamHandler(sys.stdout)
logging_handler_out.setLevel(logging.DEBUG)
logging_handler_out.addFilter(LessThanFilter(logging.WARNING))
logging_handler_out.setFormatter(formatter)
logger.addHandler(logging_handler_out)
logging_handler_err = logging.StreamHandler(sys.stderr)
logging_handler_err.setLevel(logging.WARNING)
logging_handler_err.setFormatter(formatter)
logger.addHandler(logging_handler_err)
return logger
def main(args=sys.argv[1:]):
exit_code = EXIT_SUCCESS
logger = logging_config()
parser = argparse.ArgumentParser(description='layering check')
parser.add_argument('dependencies', help='The cmake-generated dot dependency graph.')
parser.add_argument('-s', '--source-dir', default='.', help='The project root directory.')
parser.add_argument('-v', '--verbose', default=0, action='count', help='Produce more verbose output')
options = parser.parse_args(args)
# Set the visible log level according to the number of times that -v was specified by the user.
logger.setLevel({
0: logging.WARNING,
1: logging.INFO,
2: logging.DEBUG,
}.get(options.verbose, logging.DEBUG))
# Parse the dot graph produced by cmake so that we know the project's dependency graph.
logger.info("Scanning cmake depenedency graph")
dag = cmake_dependency_graph(options.dependencies)
if dag is None:
logger.error('The cmake dependency graph is not a DAG. Giving up.')
return EXIT_FAILURE
# For each target in the graph, build the set of targets against which it transitively links.
logger.info('Building reachability dictionary')
reachable = reachability_dict(dag)
logger.debug(pformat(reachable))
# Scan the source and header files discovering the files that they include.
logger.info("Discovering source code dependencies")
includes = dependencies_from_source(options.source_dir)
# Check that the source code includes don't violate the constraints from the cmake dependency graph.
logger.info("Checking dependencies")
for path, dependencies in includes.items():
logger.info('checking: "%s"', path)
c = cmake_target_from_path(path)
if c is None:
logger.warning('skipping: "%s"', path)
continue
if reachable.get(c) is None:
logger.error('unknown target: "%s"', path)
exit_code = EXIT_FAILURE
continue
logger.debug('component: "%s"', c)
logger.debug('reachable (from cmake): %s', reachable[c])
logger.debug('included (by source code): %s', dependencies)
for dependent in dependencies:
# The "config" psuedo component is for the config.hpp file generated when running cmake. It's a pure header
# file so there is no library to link.
if dependent == 'config':
continue
if dependent != c and dependent not in reachable[c]:
logger.error('cannot include component "%s" from file "%s" (component "%s")', dependent, path, c)
exit_code = EXIT_FAILURE
return exit_code
if __name__ == '__main__':
logging.getLogger().setLevel(logging.NOTSET)
sys.exit(main()) | |
import numpy as np
from sdca4crf.parameters.weights import WeightsWithoutEmission
class SparsePrimalDirection(WeightsWithoutEmission):
def __init__(self, sparse_emission=None, bias=None, transition=None,
nb_labels=0):
super().__init__(bias, transition, nb_labels)
self.sparse_emission = sparse_emission
def __mul__(self, scalar):
tmp = super().__mul__(scalar)
return SparsePrimalDirection(self.sparse_emission * scalar, tmp.bias, tmp.transition)
@classmethod
def from_marginals(cls, points_sequence, marginals):
if marginals.islog:
marginals = marginals.exp()
sparse_emission = SparseEmission.from_marginals(points_sequence, marginals)
tmp = super(SparsePrimalDirection, cls).from_marginals(points_sequence, marginals)
return cls(sparse_emission, tmp.bias, tmp.transition)
def squared_norm(self):
ans = super().squared_norm()
return ans + self.sparse_emission.squared_norm()
class SparseEmission:
def __init__(self, active_set, values):
self.active_set = active_set
self.values = values
@classmethod
def from_marginals(cls, points_sequence, marginals):
alphalen = marginals.nb_labels
active_set, inverse = np.unique(points_sequence, return_inverse=True)
centroid = np.zeros([active_set.shape[0], alphalen])
inverse = inverse.reshape(points_sequence.shape)
for inv, marg in zip(inverse, marginals.unary):
centroid[inv] += marg
# Finally remove the absent attribute (points encoded as -1), if present
if active_set[0] == -1:
active_set = active_set[1:]
centroid = centroid[1:]
centroid = np.transpose(centroid)
return cls(active_set, centroid)
def __mul__(self, scalar):
return SparseEmission(self.active_set, scalar * self.values)
def squared_norm(self):
return np.sum(self.values ** 2) | |
from setuptools import setup, Extension, find_packages
import numpy as np
#cpp_ext = Extension('mhc_adventures.molgrid',
# sources=['mhc_adventures/source/molgrid/py_molgrid.cpp'],
# include_dirs=[np.get_include()])
setup(name='mhc_tools',
version='0.1',
description='Process MHC structures and train predictors',
url='https://github.com/ignatovmg/mhc-adventures',
author='Mikhail Ignatov',
author_email='ignatovmg@gmail.com',
license='MIT',
packages=['mhc_tools'],
include_package_data=True,
zip_safe=False) #,
#ext_modules=[cpp_ext]) | |
from pathlib import Path
from typing import Dict
import numpy as np
from lazy import lazy
from evobench.discrete import Discrete
from evobench.dsm import DependencyStructureMatrixMixin
from evobench.linkage.dsm import DependencyStructureMatrix
from evobench.model import Solution
from .config import Config
from .parser import load
class IsingSpinGlass(Discrete, DependencyStructureMatrixMixin):
def __init__(
self,
config_name: str,
*,
rng_seed: int = 42,
use_shuffle: bool = False,
verbose: int = 0
):
"""
Instantiates _ISG_ benchmark
Parameters
----------
config_name : str
Name of configuration file, without suffix.
Predefined configurations can be found at
`evobench.discrete.isg.data`. These problem files are ported from
the _P3_ repository.
"""
super(IsingSpinGlass, self).__init__(
rng_seed=rng_seed,
use_shuffle=use_shuffle,
verbose=verbose,
)
self.config_name = config_name
@lazy
def config(self) -> Config:
path = Path(__file__).parent
path = path.joinpath('data')
path = path.joinpath(self.config_name + '.txt')
return load(path)
@lazy
def dsm(self) -> DependencyStructureMatrix:
interactions = np.eye(self.config.genome_size)
for spin in self.config.spins:
interactions[spin.a_index, spin.b_index] = 1
interactions[spin.b_index, spin.a_index] = 1
return DependencyStructureMatrix(interactions)
@lazy
def genome_size(self) -> int:
return self.config.genome_size
@lazy
def as_dict(self) -> Dict:
config_as_dict = self.config.as_dict
benchmark_as_dict = super().as_dict
as_dict = {**benchmark_as_dict, **config_as_dict}
return as_dict
def _evaluate_solution(self, solution: Solution) -> float:
genome = solution.genome.copy()
genome[solution.genome == 0] = -1
a_genes = genome[self.config.a_spin_indices]
b_genes = genome[self.config.b_spin_indices]
spins = a_genes * b_genes * self.config.spin_factors
energy = - spins.sum()
fitness = (energy - self.config.min_energy) / self.config.span
return 1 - fitness
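The scoring in `_evaluate_solution` can be sketched numerically: a {0,1} genome is mapped to {-1,+1} spins, paired spin products are weighted by the interaction factors, the summed energy is negated, and the result is rescaled into [0,1]. All constants below (the toy instance, `min_energy`, `span`) are assumed values, not taken from a real ISG config file.

```python
import numpy as np

# Assumed toy instance: interactions (0,1) and (1,2) with factors +1 and -1
genome = np.array([1, 0, 1])
a_idx = np.array([0, 1])
b_idx = np.array([1, 2])
factors = np.array([1, -1])

spins_pm = genome.copy()
spins_pm[genome == 0] = -1              # {0,1} -> {-1,+1}
spins = spins_pm[a_idx] * spins_pm[b_idx] * factors
energy = -spins.sum()

min_energy, span = -2.0, 4.0            # assumed normalization constants
fitness = (energy - min_energy) / span
score = 1 - fitness
```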
# Training and testing
# Tested on Python 3.6.0 with TensorFlow 1.14.0.
import tensorflow as tf
import numpy as np
import scipy.io as sio
import time
import math
from PENN import MLP, standard_scale, get_random_block_from_data
def run(X_ini, Y_ini, X_test,Y_test,H,num_H,num_val,N_mont, training_epochs=300, batch_size=5000, LR=0.001, traintestsplit=0.01, K=1, isdimgeneral=False, layernum=[2]):
global mlp
MSE_train = np.zeros((training_epochs,N_mont))
MSE_val = np.zeros((training_epochs,N_mont))
Time = np.zeros((N_mont))
Ratio = np.zeros((N_mont))
N_ini = X_ini.shape[1]
total_batch = int(num_H / batch_size)
block_onehot_train = np.ones([num_H, K])
block_onehot_val = np.ones([num_val, K])
X_test = np.transpose(X_test)
Y_test = np.transpose(Y_test)
x = tf.placeholder("float", [None, K])
y = tf.placeholder("float", [None, K])
is_train = tf.placeholder("bool")
input_keep_prob = tf.placeholder(tf.float32)
hidden_keep_prob = tf.placeholder(tf.float32)
for i_mont in range(N_mont):
start_time = time.time()
# randomly selecting training and validation samples
flag=np.random.randint(0,N_ini-num_H-num_val)
X = X_ini[:,flag:flag+num_H]
Y = Y_ini[:,flag:flag+num_H]
X_val = X_ini[:,flag+num_H:flag+num_H+num_val]
Y_val = Y_ini[:,flag+num_H:flag+num_H+num_val]
X_train = np.transpose(X[:, 0:num_H])
Y_train = np.transpose(Y[:, 0:num_H])
X_val_ = np.transpose(X_val[:, 0:num_val])
Y_val_ = np.transpose(Y_val[:, 0:num_val])
# Initializing network
mlp = MLP(layernum, [1, 10, 1],
0.1, K, K, transfer_function=tf.nn.softplus,
optimizer=tf.train.AdamOptimizer(LR, 0.9), isdimgeneral=isdimgeneral)
for epoch in range(training_epochs):
for i in range(total_batch):
idx = np.random.randint(num_H, size=batch_size)
# Training
mlp.optimizer_MSE.run({mlp.x: X_train[idx, :], mlp.y_: Y_train[idx, :],
mlp.block_onehot: block_onehot_train[idx, :],
mlp.keep_prob: 1, mlp.is_train: 1})
c = mlp.getcost(X=X_train[idx, :], Y=Y_train[idx, :], block_onehot=block_onehot_train[idx, :], keep_prob=1, is_train=0)
MSE_train[epoch,i_mont] = mlp.getcost(X=X_train, Y=Y_train, block_onehot=block_onehot_train, keep_prob=1, is_train=0) / len(X_train)
MSE_val[epoch,i_mont] = mlp.getcost(X=X_val_, Y=Y_val_, block_onehot=block_onehot_val, keep_prob=1, is_train=0) / len(X_val)
if epoch % 500 == 0:
print('i_mont:%d, ' % i_mont, 'epoch:%d, ' % epoch, 'MSE_train:%f, ' %(MSE_train[epoch,i_mont]),
'MSE_val:%f.' %(MSE_val[epoch,i_mont] ))
Time[i_mont] = time.time() - start_time
print("training time: %0.2f s" % (Time[i_mont]))
# Testing
y_pred = mlp.getoutputs(X=X_test,block_onehot=np.ones([X_test.shape[0],K]),keep_prob=1,is_train=0)
pyrate, nnrate = perf_eval(H, np.transpose(Y_test), y_pred, K)
Ratio[i_mont] = np.mean(nnrate)/ np.mean(pyrate)
print('Ratio: %f ' % (Ratio[i_mont]))
sio.savemat('../Experiments/PENN/PENN_WF_Nc'+str(K)+'.mat', {'MSE_train':MSE_train, 'MSE_val': MSE_val, 'Ratio':Ratio,'y_pred': y_pred, 'Y_test':Y_test} )
return Ratio,Time
# Functions for performance evaluation
def perf_eval(H, Py_p, NN_p, K, var_noise=1):
num_sample = H.shape[1]
pyrate = np.zeros(num_sample)
nnrate = np.zeros(num_sample)
for i in range(num_sample):
pyrate[i] = obj_IA_sum_rate(H[:, i], Py_p[:, i], var_noise, K)
nnrate[i] = obj_IA_sum_rate(H[:, i], NN_p[i, :], var_noise, K)
return pyrate, nnrate
# Functions for objective (Data-rate) calculation
def obj_IA_sum_rate(H, p, var_noise, K):
y = 0.0
for i in range(K):
y = y+math.log2(1+H[i]*p[i]/var_noise)
return y
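The sum-rate objective in `obj_IA_sum_rate` is the Shannon rate summed over users, `sum_i log2(1 + H_i * p_i / noise)`. A self-contained sketch with hand-checkable numbers:

```python
import math

def sum_rate(H, p, var_noise=1.0):
    # Sum of per-user Shannon rates: log2(1 + SNR_i)
    return sum(math.log2(1 + h * q / var_noise) for h, q in zip(H, p))

# Channel gains 1 and 3 with unit power give log2(2) + log2(4) = 3 bits
rate = sum_rate([1.0, 3.0], [1.0, 1.0])
```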
import matplotlib; matplotlib.use('Agg')
import matplotlib.pyplot as plt
from sklearn.cluster import KMeans
import joblib
import numpy as np
import sys
import fasttext
np.random.seed(1991)
def cluster_posts(sents_f, model_f, prefix, K):
model = fasttext.load_model(model_f)
embeddings = []
sentences = []
with open(sents_f) as handle:
for new_line in handle:
if len(new_line.split()) < 5:
continue
sentences.append(new_line.strip())
sentences = np.random.choice(sentences, 20000)
for sentence in sentences:
embeddings.append(model.get_sentence_vector(sentence))
embeddings = np.array(embeddings)
kmeans = KMeans(n_clusters=K, random_state=0).fit(embeddings)
preds = kmeans.predict(embeddings)
for i in range(K):
dest = prefix + str(i) + '.txt'
with open(dest, 'w') as handle:
where_arrs = np.where(preds==i)[0]
for pos in where_arrs:
handle.write(sentences[pos])
handle.write('\n')
joblib.dump(kmeans, prefix + '_langid.joblib')
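The per-cluster write-out loop above boils down to grouping sentences by predicted label with `np.where`. A minimal sketch with assumed toy assignments:

```python
import numpy as np

sentences = np.array(["a b", "c d", "e f", "g h"])
preds = np.array([0, 1, 0, 1])   # assumed cluster assignments

# Collect the members of each cluster, as the file-writing loop does
groups = {i: sentences[np.where(preds == i)[0]].tolist() for i in range(2)}
```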
import numpy as np
#from data import *
import torch
import torch.nn as nn
import torch.nn.functional as F
SPRAY_CLASSES = ['blue']
CLASS_COLOR = [(np.random.randint(255),np.random.randint(255),np.random.randint(255)) for _ in range(len(SPRAY_CLASSES))]
class HeatmapLoss(nn.Module):
def __init__(self, weight=None, alpha=2, beta=4, reduction='mean'):
super(HeatmapLoss, self).__init__()
self.alpha = alpha
self.beta = beta
def forward(self, inputs, targets):
#print(inputs.is_cuda)
#print(targets.is_cuda) # cpu
inputs = torch.sigmoid(inputs)
center_id = (targets == 1.0).float()
other_id = (targets != 1.0).float()
center_loss = -center_id * (1.0-inputs)**self.alpha * torch.log(inputs + 1e-14)
other_loss = -other_id * (1 - targets)**self.beta * (inputs)**self.alpha * torch.log(1.0 - inputs + 1e-14)
return center_loss + other_loss
def gaussian_radius(det_size, min_overlap=0.7):
box_w, box_h = det_size
a1 = 1
b1 = (box_h + box_w)
c1 = box_w * box_h * (1 - min_overlap) / (1 + min_overlap)
sq1 = np.sqrt(b1 ** 2 - 4 * a1 * c1)
r1 = (b1 + sq1) / 2 #(2*a1)
a2 = 4
b2 = 2 * (box_h + box_w)
c2 = (1 - min_overlap) * box_w * box_h
sq2 = np.sqrt(b2 ** 2 - 4 * a2 * c2)
r2 = (b2 + sq2) / 2 #(2*a2)
a3 = 4 * min_overlap
b3 = -2 * min_overlap * (box_h + box_w)
c3 = (min_overlap - 1) * box_w * box_h
sq3 = np.sqrt(b3 ** 2 - 4 * a3 * c3)
r3 = (b3 + sq3) / 2 #(2*a3)
return min(r1, r2, r3)
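A quick sanity check on the radius formula (a self-contained re-implementation of the CenterNet-style rule above, for testing only): the radius should be positive, smaller than the box, and shrink as the required overlap gets stricter.

```python
import math

def gaussian_radius(det_size, min_overlap=0.7):
    # Smallest radius over the three corner-displacement cases,
    # as in the function above (width/height kept separate).
    h, w = det_size
    b1 = h + w
    c1 = w * h * (1 - min_overlap) / (1 + min_overlap)
    r1 = (b1 + math.sqrt(b1 ** 2 - 4 * c1)) / 2
    b2, c2 = 2 * (h + w), (1 - min_overlap) * w * h
    r2 = (b2 + math.sqrt(b2 ** 2 - 16 * c2)) / 2
    a3, b3 = 4 * min_overlap, -2 * min_overlap * (h + w)
    c3 = (min_overlap - 1) * w * h
    r3 = (b3 + math.sqrt(b3 ** 2 - 4 * a3 * c3)) / 2
    return min(r1, r2, r3)

r_loose = gaussian_radius((10, 10), 0.7)
r_tight = gaussian_radius((10, 10), 0.9)
```

For a 10x10 box the radius comes out around 2.7 at 0.7 overlap and shrinks below 1 at 0.9.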
def generate_dxdy(gt_label, w, h, s):
x, y = gt_label[:-1]
# compute the center, width and height
c_x = x * w
c_y = y * h
radius = 100
radius_s = radius / s
r = gaussian_radius([radius_s, radius_s])
sigma_r = r / 3
if radius_s < 1e-28:
print('A dirty data !!!')
return False
# map center point of box to the grid cell
c_x_s = c_x / s
c_y_s = c_y / s
grid_x = int(c_x_s)
grid_y = int(c_y_s)
# compute the (x, y, w, h) for the corresponding grid cell
tx = c_x_s - grid_x
ty = c_y_s - grid_y
weight = 1.0 # 2.0 - (box_w / w) * (box_h / h)
return grid_x, grid_y, tx, ty, weight, sigma_r
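The grid mapping in `generate_dxdy` splits a pixel-space center into an integer cell index plus a fractional offset in [0, 1). A minimal numeric sketch (the center coordinates and stride are assumed values):

```python
# Map a pixel-space center onto a stride-s grid cell plus a
# fractional (tx, ty) offset, as in generate_dxdy above.
c_x, c_y, s = 130.0, 70.0, 32
c_x_s, c_y_s = c_x / s, c_y / s
grid_x, grid_y = int(c_x_s), int(c_y_s)
tx, ty = c_x_s - grid_x, c_y_s - grid_y
```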
def gt_creator(input_size, stride, num_classes, label_lists=[]):
# prepare the empty ground-truth tensor
batch_size = len(label_lists)
w = input_size
h = input_size
ws = w // stride
hs = h // stride
s = stride
gt_tensor = np.zeros([batch_size, hs, ws, num_classes+2+1])
# generate gt whose style is yolo-v1
for batch_index in range(batch_size):
for gt_label in label_lists[batch_index]:
gt_cls = gt_label[-1]
result = generate_dxdy(gt_label, w, h, s)
if result:
grid_x, grid_y, tx, ty, weight, sigma_r = result
gt_tensor[batch_index, grid_y, grid_x, int(gt_cls)] = 1.0
gt_tensor[batch_index, grid_y, grid_x, num_classes:num_classes + 2] = np.array([tx, ty])
gt_tensor[batch_index, grid_y, grid_x, num_classes + 2] = weight
# create Gauss heatmap
for i in range(grid_x - 3*int(sigma_r), grid_x + 3*int(sigma_r) + 1):
for j in range(grid_y - 3*int(sigma_r), grid_y + 3*int(sigma_r) + 1):
if i < ws and j < hs:
v = np.exp(- (i - grid_x)**2 / (2*sigma_r**2) - (j - grid_y)**2 / (2*sigma_r**2))
pre_v = gt_tensor[batch_index, j, i, int(gt_cls)]
gt_tensor[batch_index, j, i, int(gt_cls)] = max(v, pre_v)
gt_tensor = gt_tensor.reshape(batch_size, -1, num_classes+2+1)
return gt_tensor
def loss(pred_cls, pred_txty, pred_twth, label, num_classes):
# create loss_f
cls_loss_function = HeatmapLoss()
txty_loss_function = nn.BCEWithLogitsLoss(reduction='none')
twth_loss_function = nn.SmoothL1Loss(reduction='none')
# groundtruth
gt_cls = label[:, :, :num_classes].float()
gt_txtytwth = label[:, :, num_classes:-1].float()
gt_box_scale_weight = label[:, :, -1]
# objectness loss
batch_size = pred_cls.size(0)
cls_loss = torch.sum(cls_loss_function(pred_cls, gt_cls)) / batch_size
# box loss
txty_loss = torch.sum(torch.sum(txty_loss_function(pred_txty, gt_txtytwth[:, :, :2]), 2) * gt_box_scale_weight) / batch_size
twth_loss = torch.sum(torch.sum(twth_loss_function(pred_twth, gt_txtytwth[:, :, 2:]), 2) * gt_box_scale_weight) / batch_size
# total loss
total_loss = cls_loss + txty_loss + twth_loss
return cls_loss, txty_loss, twth_loss, total_loss
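The elementwise terms of `HeatmapLoss.forward` can be reproduced in NumPy for inspection (toy predictions/targets are assumed): centers (`target == 1`) get a `(1 - p)^alpha * log(p)` term, while negatives are down-weighted by the `(1 - target)^beta` Gaussian factor.

```python
import numpy as np

def heatmap_focal(pred, target, alpha=2, beta=4, eps=1e-14):
    # Elementwise CenterNet-style focal terms, mirroring
    # HeatmapLoss.forward (pred already passed through sigmoid).
    center = (target == 1.0).astype(float)
    other = 1.0 - center
    pos = -center * (1 - pred) ** alpha * np.log(pred + eps)
    neg = -other * (1 - target) ** beta * pred ** alpha * np.log(1 - pred + eps)
    return pos + neg

# Confident center (0.9 vs 1) and quiet background (0.1 vs 0)
loss = heatmap_focal(np.array([0.9, 0.1]), np.array([1.0, 0.0]))
```

Both positions incur small positive penalties; the focal factors keep well-classified pixels from dominating the sum.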
if __name__ == "__main__":
pass
# coding=utf-8
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
class VAE(nn.Module):
def __init__(self, G, config):
super(VAE, self).__init__()
print(config)
self.N = G.number_of_nodes()
self.config = config
self.encoder = nn.ModuleList(
[nn.Linear(self.config.struct[i], self.config.struct[i + 1]) for i in
range(len(self.config.struct) - 1)]).to(self.config.device, dtype=torch.float32)
self.enc_mu = nn.Linear(self.config.struct[-1], self.config.struct[-1]).to(
self.config.device, dtype=torch.float32)
self.enc_log_sigma = nn.Linear(self.config.struct[-1], self.config.struct[-1]).to(
self.config.device, dtype=torch.float32)
self.config.struct.reverse()
self.decoder = nn.ModuleList(
[nn.Linear(self.config.struct[i], self.config.struct[i + 1]) for i in
range(len(self.config.struct) - 1)]).to(self.config.device, dtype=torch.float32)
self.config.struct.reverse()
self.relu = nn.LeakyReLU(0.01, inplace=True)
self.init_model_weight()
def init_model_weight(self):
for i in range(len(self.config.struct) - 1):
nn.init.xavier_uniform_(self.encoder[i].weight)
nn.init.uniform_(self.encoder[i].bias)
for i in range(len(self.config.struct) - 1):
nn.init.xavier_uniform_(self.decoder[i].weight)
nn.init.uniform_(self.decoder[i].bias)
def encoder_network(self, h_state):
for i in range(len(self.config.struct) - 1):
h_state = torch.tanh(self.encoder[i](h_state))
mu = self.enc_mu(h_state)
log_sigma = self.enc_log_sigma(h_state)
sigma = log_sigma.exp()
z = torch.from_numpy(np.random.normal(0, 1, size=sigma.size())).to(
self.config.device, dtype=torch.float32)
z = mu + sigma * z
return mu, sigma, z
def decoder_network(self, h_state):
for i, layer in enumerate(self.decoder):
h_state = layer(h_state)
if i != len(self.decoder) - 1:
h_state = torch.tanh(h_state)
return h_state
def forward(self, h_state):
mu, sigma, z = self.encoder_network(h_state)
return mu, sigma, z, self.decoder_network(z)
def get_embedding(self, h_state):
mu, sigma, z = self.encoder_network(h_state)
return z
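The sampling step in `encoder_network` is the reparameterization trick: noise is drawn outside the computation graph and the latent is `z = mu + sigma * eps`, so gradients flow through `mu` and `sigma`. A NumPy sketch with assumed values:

```python
import numpy as np

# Reparameterization: z = mu + sigma * eps with eps ~ N(0, 1)
rng = np.random.default_rng(0)
mu = np.array([1.0, -2.0])
sigma = np.array([0.5, 0.1])
eps = rng.standard_normal(mu.shape)
z = mu + sigma * eps

# With zero noise the sample collapses to the mean
z_det = mu + sigma * 0.0
```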
import OpenGL
OpenGL.ERROR_ON_COPY = True
OpenGL.ERROR_LOGGING = False
OpenGL.ERROR_CHECKING = False
from OpenGL.GL import *
from OpenGL.GLUT import *
from math import sin,cos,sqrt,radians,hypot
import numpy as np
from rangeUtils import constrain
# Arrays for caching
__homeLinearVerts = np.array([])
__homeLinearColrs = np.array([])
__homeLinearIndcs = np.array([])
__curHomeLinearNZ = 0
__prvHomeLinearNZ = 0
__curHomeLinearCols = [(0.5, 0.5, 0.5)]
__prvHomeLinearCols = None
def drawHomeLinear(
gx,
gy,
dx,
dy,
nz,
ao,
w2h,
colors
):
global __homeLinearVerts, __homeLinearColrs, __homeLinearIndcs, __curHomeLinearNZ, __prvHomeLinearNZ, __prvHomeLinearCols, __curHomeLinearCols
__curHomeLinearNZ = nz
__curHomeLinearCols = colors
glPushMatrix()
glRotatef(90, 0, 0, 1)
glScalef(1, w2h, 1)
glRotatef(ao+90, 0, 0, 1)
if (__homeLinearVerts.size == 0) or (__prvHomeLinearNZ != __curHomeLinearNZ):
tmp = []
for i in range(nz):
if i == 0:
tmp.append(( i*2/nz+2.0, -2.0))
tmp.append(( i*2/nz-2.0, 2.0))
tmp.append(( i*2/nz-2.0, -2.0))
tmp.append(( i*2/nz+2.0, -2.0))
tmp.append(( i*2/nz-2.0, 2.0))
tmp.append(( i*2/nz+2.0, 2.0))
elif i == nz-1:
tmp.append(( i*2/nz+1.0, -2.0))
tmp.append(( i*2/nz-1.0, 2.0))
tmp.append(( i*2/nz-1.0, -2.0))
tmp.append(( i*2/nz+1.0, -2.0))
tmp.append(( i*2/nz-1.0, 2.0))
tmp.append(( i*2/nz+1.0, 2.0))
else:
tmp.append(( i*2/nz+0.75, -2.0))
tmp.append(( i*2/nz-1.0, 2.0))
tmp.append(( i*2/nz-1.0, -2.0))
tmp.append(( i*2/nz+0.75, -2.0))
tmp.append(( i*2/nz-1.0, 2.0))
tmp.append(( i*2/nz+1.0, 2.0))
__homeLinearVerts = np.array(tmp, 'f')
__homeLinearIndcs = np.array(np.arange(len(__homeLinearVerts)), 'I')
if (__homeLinearColrs.size == 0) or (__prvHomeLinearNZ != __curHomeLinearNZ):
__prvHomeLinearNZ = __curHomeLinearNZ
__prvHomeLinearCols = __curHomeLinearCols
tmc = []
if nz > 1:
for i in range(nz):
tmc.append(colors[i])
tmc.append(colors[i])
tmc.append(colors[i])
tmc.append(colors[i])
tmc.append(colors[i])
tmc.append(colors[i])
__homeLinearColrs = np.array(tmc, 'f')
if (__prvHomeLinearCols != __curHomeLinearCols):
__prvHomeLinearCols = __curHomeLinearCols
for i in range(nz):
__homeLinearColrs[i*6:i*6+6] = colors[i]
if nz > 1:
glColorPointerf( __homeLinearColrs)
glVertexPointerf( __homeLinearVerts)
glDrawElementsui(GL_TRIANGLES, __homeLinearIndcs)
else:
drawHomeCircle(-gx, gy, dx*1.14285, dy*1.14285, nz, ao, w2h, colors)
glPopMatrix()
__iconLinearVerts = np.array([])
__iconLinearColrs = np.array([])
__iconLinearIndcs = np.array([])
__iconOLLineVerts = np.array([])
__iconOLLineColrs = np.array([])
__iconOLLineIndcs = np.array([])
__iconBlbMkLVerts = np.array([])
__iconBlbMkLColrs = np.array([])
__iconBlbMkLIndcs = np.array([])
__prvIconLinearNZ = 0
__curIconLinearNZ = 0
__prvIconLinearCols = [(0.5, 0.5, 0.5)]
__curIconLinearCols = None
# Draw Tiny Rounded Square Consisting of Bands of Color
def drawIconLinear(
gx,
gy,
dx,
dy,
nz,
ao,
w2h,
colors
):
global __iconLinearVerts, __iconLinearColrs, __iconLinearIndcs, __prvIconLinearNZ, __curIconLinearNZ, __prvIconLinearCols, __curIconLinearCols, __iconOLLineVerts, __iconOLLineColrs, __iconOLLineIndcs, __iconBlbMkLVerts, __iconBlbMkLColrs, __iconBlbMkLIndcs
glPushMatrix()
glRotatef(90, 0, 0, 1)
glTranslatef(gx, gy*(w2h), 0)
glRotatef(ao+90, 0, 0, 1)
if (w2h) >= 1:
glScalef(dx, dy/2, 0)
else:
glScalef(dx*(w2h), (w2h)*dy/2, 0)
__curIconLinearNZ = nz
__curIconLinearCols = colors
# Initialize / Update Icon Vertices
if (__iconLinearVerts.size == 0) or (__prvIconLinearNZ != __curIconLinearNZ):
tmp = []
for i in range(nz):
# Special case to draw rounded corners for end slice
if i == 0:
# Rounded Corner
for j in range(13):
tmp.append((-0.74, -1.4))
tmp.append((
-0.75 + 0.25*cos(-radians(j*7.5+90)),
-1.5 + 0.5*sin(-radians(j*7.5+90))))
tmp.append((
-0.75 + 0.25*cos(-radians((j+1)*7.5+90)),
-1.5 + 0.5*sin(-radians((j+1)*7.5+90))))
# Rounded Corner
for j in range(13):
tmp.append((-0.74, 1.4))
tmp.append((
-0.75 + 0.25*cos(radians(j*7.5+90)),
1.5 + 0.5*sin(radians(j*7.5+90))))
tmp.append((
-0.75 + 0.25*cos(radians((j+1)*7.5+90)),
1.5 + 0.5*sin(radians((j+1)*7.5+90))))
tmp.append(( 0.50, 2.0))
tmp.append(( i*2/nz-0.75, 2.0))
tmp.append(( i*2/nz-0.75, -2.0))
tmp.append(( i*2/nz-0.75, -2.0))
tmp.append(( i*2/nz+0.75, -2.0))
tmp.append(( 0.50, 2.0))
tmp.append(( 0.01, 1.5))
tmp.append((-1.01, 1.5))
tmp.append((-1.01, -1.5))
tmp.append((-1.01, -1.5))
tmp.append(( 0.01, -1.5))
tmp.append(( 0.01, 1.5))
# Special case to draw rounded corners for end slice
elif i == nz-1:
# Rounded Corner
for j in range(13):
tmp.append(( 0.74, -1.4))
tmp.append((
0.75 - 0.25*cos(-radians(j*7.5+90)),
-1.5 + 0.5*sin(-radians(j*7.5+90))))
tmp.append((
0.75 - 0.25*cos(-radians((j+1)*7.5+90)),
-1.5 + 0.5*sin(-radians((j+1)*7.5+90))))
# Rounded Corner
for j in range(13):
tmp.append(( 0.740, 1.4))
tmp.append((
0.75 - 0.25*cos(radians(j*7.5+90)),
1.5 + 0.5*sin(radians(j*7.5+90))))
tmp.append((
0.75 - 0.25*cos(radians((j+1)*7.5+90)),
1.5 + 0.5*sin(radians((j+1)*7.5+90))))
tmp.append(( 0.75, 2.0))
tmp.append(( i*2/nz-1.0, 2.0))
tmp.append(( i*2/nz-1.0,-2.0))
tmp.append(( 0.75, -2.0))
tmp.append(( 0.75, 2.0))
tmp.append(( i*2/nz-1.0, -2.0))
tmp.append(( 0.74, 1.5))
tmp.append(( 1.01, 1.5))
tmp.append(( 1.01, -1.5))
tmp.append(( 0.74, 1.5))
tmp.append(( 1.01, -1.5))
tmp.append(( 0.74, -1.5))
else:
tmp.append(( 0.75, 2.0))
tmp.append(( i*2/nz-1.0, 2.0))
tmp.append(( i*2/nz-1.0,-2.0))
tmp.append(( 0.75, 2.0))
tmp.append(( 0.75, -2.0))
tmp.append(( i*2/nz-1.0, -2.0))
__iconLinearIndcs = np.array(np.arange(len(tmp)), 'I')
__iconLinearVerts = np.array(tmp, 'f')
# Initialize Colors
if (__iconLinearColrs.size == 0) or (__prvIconLinearNZ != __curIconLinearNZ):
__prvIconLinearCols = __curIconLinearCols
tmc = []
for i in range(nz):
# Special case to draw rounded corners for end slice
if i == 0:
for j in range(90):
tmc.append(colors[i])
# Special case to draw rounded corners for end slice
elif i == nz-1:
for j in range(90):
tmc.append(colors[i])
else:
for j in range(6):
tmc.append(colors[i])
__iconLinearColrs = np.array(tmc, 'f')
# Update Colors
if (__prvIconLinearCols != __curIconLinearCols):
__prvIconLinearCols = __curIconLinearCols
for i in range(nz):
# Special case to draw rounded corners for end slice
if i == 0:
__iconLinearColrs[:90] = colors[i]
# Special case to draw rounded corners for end slice
elif i == nz-1:
__iconLinearColrs[-90:] = colors[i]
else:
__iconLinearColrs[i*6+90-6:(i+1)*6+90-6] = colors[i]
glColorPointerf( __iconLinearColrs )
glVertexPointerf( __iconLinearVerts )
glDrawElementsui(GL_TRIANGLES, __iconLinearIndcs)
# Draw Bulb Marker
if (__iconBlbMkLVerts.size == 0) or (__iconBlbMkLColrs.size == 0) or (__prvIconLinearNZ != __curIconLinearNZ):
__prvIconLinearNZ = __curIconLinearNZ
tmp = []
tmc = []
if nz > 1:
yCoord = -2.05
else:
yCoord = 2.05
for i in range(nz):
xCoord = 1/(nz*2)-((nz*2-1)/(nz*2)) + (2*i)/nz
for j in range(13):
tmc.append((0.95, 0.95, 0.95))
tmp.append((xCoord, yCoord))
tmc.append((0.95, 0.95, 0.95))
tmp.append((xCoord + 0.16*cos(radians(j*30)), yCoord + 0.32*sin(radians(j*30))))
tmc.append((0.95, 0.95, 0.95))
tmp.append((xCoord + 0.16*cos(radians((j+1)*30)), yCoord + 0.32*sin(radians((j+1)*30))))
__iconBlbMkLVerts = np.array(tmp, 'f')
__iconBlbMkLIndcs = np.array(np.arange(len(__iconBlbMkLVerts)), 'I')
__iconBlbMkLColrs = np.array(tmc, 'f')
glColorPointerf( __iconBlbMkLColrs)
glVertexPointerf( __iconBlbMkLVerts)
glDrawElementsui(GL_TRIANGLES, __iconBlbMkLIndcs)
# START Draw Outline
if (__iconOLLineVerts.size == 0):
tmp = []
tmc = []
# Scale line thickness
if w2h <= 1.0:
glLineWidth(w2h*2.0)
else:
glLineWidth((1/w2h)*2.0)
for j in range(13):
tmc.append((0.95, 0.95, 0.95))
tmp.append((
0.75 - 0.25*cos(radians(j*7.5+90)),
1.50 + 0.5*sin(radians(j*7.5+90))))
for j in range(13):
tmc.append((0.95, 0.95, 0.95))
tmp.append((
0.75 - 0.25*cos(+radians(j*7.5+180)),
-1.5 + 0.50*sin(+radians(j*7.5+180))))
for j in range(13):
tmc.append((0.95, 0.95, 0.95))
tmp.append((
-0.75 + 0.25*cos(-radians(j*7.5+90)),
-1.5 + 0.5*sin(-radians(j*7.5+90))))
for j in range(13):
tmc.append((0.95, 0.95, 0.95))
tmp.append((
-0.75 + 0.25*cos(-radians(j*7.5+180)),
1.5 + 0.5*sin(-radians(j*7.5+180))))
tmc.append((0.95, 0.95, 0.95))
tmp.append((
0.75 - 0.25*cos(radians(90)),
1.50 + 0.50*sin(radians(90))))
__iconOLLineVerts = np.array(tmp, 'f')
__iconOLLineIndcs = np.array(np.arange(len(__iconOLLineVerts)), 'I')
__iconOLLineColrs = np.array(tmc, 'f')
glColorPointerf( __iconOLLineColrs )
glVertexPointerf( __iconOLLineVerts )
glDrawElementsui(GL_LINE_STRIP, __iconOLLineIndcs)
# END Draw Outline
glPopMatrix()
__homeCircleVerts = np.array([], 'f')
__homeCircleColrs = np.array([], 'f')
__homeCircleIndcs = np.array([], 'f')
__curHomeCircleNZ = 0
__prvHomeCircleNZ = 0
__curHomeCircleAO = 0
__prvHomeCircleAO = 0
__curHomeCircleCols = [(0.5, 0.5, 0.5)]
__prvHomeCircleCols = None
def drawHomeCircle(
gx,
gy,
dx,
dy,
nz,
ao,
w2h,
colors
):
global __homeCircleVerts, __homeCircleColrs, __homeCircleIndcs, __curHomeCircleNZ, __prvHomeCircleNZ, __curHomeCircleCols, __prvHomeCircleCols, __curHomeCircleAO, __prvHomeCircleAO
wx = glutGet(GLUT_WINDOW_WIDTH)
wy = glutGet(GLUT_WINDOW_HEIGHT)
angOffset = 360/float(nz)
glPushMatrix()
glScalef(sqrt((w2h))*hypot(wx, wy), sqrt((wy/wx))*hypot(wx, wy), 1)
__curHomeCircleNZ = nz
__curHomeCircleAO = ao
__curHomeCircleCols = colors
# Initialize Vertices
if (__homeCircleVerts.size == 0) or (__curHomeCircleNZ != __prvHomeCircleNZ) or (__curHomeCircleAO != __prvHomeCircleAO):
tmp = []
__prvHomeCircleAO = __curHomeCircleAO
for j in range(nz):
for i in range(30):
tmp.append((0, 0))
tma = radians(i*12.0/nz+ao+j*(angOffset)-90)
tmx = cos(tma)
tmy = sin(tma)
tmp.append((tmx, tmy))
tma = radians((i+1)*12.0/nz+ao+j*(angOffset)-90)
tmx = cos(tma)
tmy = sin(tma)
tmp.append((tmx, tmy))
__homeCircleVerts = np.array(tmp, 'f')
__homeCircleIndcs = np.array(np.arange(len(__homeCircleVerts)), 'I')
# Initialize Colors
if (__curHomeCircleNZ != __prvHomeCircleNZ) or (__homeCircleColrs.size == 0):
tmc = []
__prvHomeCircleNZ = __curHomeCircleNZ
for j in range(nz):
for i in range(30):
tmc.append(colors[j])
tmc.append(colors[j])
tmc.append(colors[j])
__homeCircleColrs = np.array(tmc, 'f')
# Update Colors
if (__prvHomeCircleCols != __curHomeCircleCols):
__prvHomeCircleCols = __curHomeCircleCols
for i in range(nz):
__homeCircleColrs[i*90:i*90+90] = colors[i]
glVertexPointerf( __homeCircleVerts)
glColorPointerf( __homeCircleColrs)
glDrawElementsui(GL_TRIANGLES, __homeCircleIndcs)
glPopMatrix()
__iconCircleVerts = np.array([])
__iconCircleColrs = np.array([])
__iconCircleIndcs = np.array([])
__iconOLCircVerts = np.array([])
__iconOLCircColrs = np.array([])
__iconOLCircIndcs = np.array([])
__iconBlbMkCVerts = np.array([])
__iconBlbMkCColrs = np.array([])
__iconBlbMkCIndcs = np.array([])
__curIconCircleNZ = 0
__prvIconCircleNZ = 0
__curIconCircleAO = 0
__prvIconCircleAO = 0
__curIconCircleCols = None
__prvIconCircleCols = None
def drawIconCircle(
gx,
gy,
dx,
dy,
nz,
ao,
w2h,
colors
):
global __iconCircleVerts, __iconCircleColrs, __iconCircleIndcs, __iconOLCircVerts, __iconOLCircColrs, __iconBlbMkCVerts, __iconOLCircIndcs, __iconBlbMkCColrs, __iconBlbMkCIndcs, __prvIconCircleNZ, __curIconCircleNZ, __prvIconCircleCols, __curIconCircleCols, __prvIconCircleAO, __curIconCircleAO
angOffset = 360/float(nz)
glPushMatrix()
glTranslatef(gx*(w2h), gy, 0)
if (w2h) >= 1:
glScalef(dx, dy, 0)
else:
glScalef(dx*(w2h), dy*(w2h), 0)
__curIconCircleNZ = nz
__curIconCircleCols = colors
__curIconCircleAO = ao
# Initialize Vertices
if (__iconCircleVerts.size == 0) or (__prvIconCircleNZ != __curIconCircleNZ) or (__prvIconCircleAO != __curIconCircleAO):
tmp = []
for j in range(nz):
for i in range(30):
tmp.append((0, 0))
tma = radians(i*12/nz+ao+j*(angOffset)-90)
tmx = cos(tma)
tmy = sin(tma)
tmp.append((tmx, tmy))
tma = radians((i+1)*12/nz+ao+j*(angOffset)-90)
tmx = cos(tma)
tmy = sin(tma)
tmp.append((tmx, tmy))
__iconCircleVerts = np.array(tmp, 'f')
__iconCircleIndcs = np.array(np.arange(len(__iconCircleVerts)), 'I')
# Initialize Colors
if (__iconCircleColrs.size == 0) or (__curIconCircleNZ != __prvIconCircleNZ):
tmc = []
__prvIconCircleNZ = __curIconCircleNZ
for j in range(nz):
for i in range(30):
tmc.append(colors[j])
tmc.append(colors[j])
tmc.append(colors[j])
__iconCircleColrs = np.array(tmc, 'f')
# Update Colors
if (__prvIconCircleCols != __curIconCircleCols):
__prvIconCircleCols = __curIconCircleCols
for j in range(nz):
__iconCircleColrs[j*90:j*90+90] = colors[j]
glVertexPointerf( __iconCircleVerts )
glColorPointerf( __iconCircleColrs )
glDrawElementsui(GL_TRIANGLES, __iconCircleIndcs)
# Initialize Bulb Marker Vertices
if (__iconBlbMkCVerts.size == 0) or (__prvIconCircleNZ != __curIconCircleNZ) or (__prvIconCircleAO != __curIconCircleAO):
__prvIconCircleAO = __curIconCircleAO
tmp = []
for i in range(nz):
xCoord = cos(radians(-90+ao - i*(angOffset) + 180/nz))
yCoord = sin(radians(-90+ao - i*(angOffset) + 180/nz))
for j in range(13):
tmp.append((xCoord, yCoord))
tmp.append((
xCoord + 0.16*cos(radians(j*30)),
yCoord + 0.16*sin(radians(j*30))))
tmp.append((
xCoord + 0.16*cos(radians((j+1)*30)),
yCoord + 0.16*sin(radians((j+1)*30))))
__iconBlbMkCVerts = np.array(tmp, 'f')
__iconBlbMkCIndcs = np.array(np.arange(len(__iconBlbMkCVerts)), 'I')
# Initialize Draw Bulb Marker Colors
if (__iconBlbMkCColrs.size == 0) or (__prvIconCircleNZ != __curIconCircleNZ):
__prvIconCircleNZ = __curIconCircleNZ
tmc = []
for i in range(nz):
for j in range(13):
tmc.append((0.95, 0.95, 0.95))
tmc.append((0.95, 0.95, 0.95))
tmc.append((0.95, 0.95, 0.95))
__iconBlbMkCColrs = np.array(tmc, 'f')
glColorPointerf( __iconBlbMkCColrs)
glVertexPointerf( __iconBlbMkCVerts)
glDrawElementsui(GL_TRIANGLES, __iconBlbMkCIndcs)
# Draw Outline
if w2h <= 1.0:
glLineWidth(w2h*2.0)
else:
glLineWidth((1/w2h)*2.0)
# Initialize Outline vertices and colors
if (__iconOLCircVerts.size == 0) or (__iconOLCircColrs.size == 0):
tmp = []
tmc = []
for j in range(31):
tmx = cos(radians(j*12))
tmy = sin(radians(j*12))
tmc.append((0.95, 0.95, 0.95))
tmp.append((tmx, tmy))
__iconOLCircVerts = np.array(tmp, 'f')
__iconOLCircColrs = np.array(tmc, 'f')
__iconOLCircIndcs = np.array(np.arange(len(__iconOLCircVerts)), 'I')
glColorPointerf( __iconOLCircColrs )
glVertexPointerf( __iconOLCircVerts )
glDrawElementsui(GL_LINE_STRIP, __iconOLCircIndcs )
glPopMatrix()
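The circle-drawing functions above build each slice as a fan of triangles (center plus two rim points per step). A self-contained sketch of that vertex generation (`circle_slices` is a hypothetical helper name; the 12°/nz step assumes 30 triangles per slice, as in the original loops):

```python
import numpy as np
from math import cos, sin, radians

def circle_slices(nz, ao=0.0, steps=30):
    # nz pie slices of a unit circle, each built from `steps` fan
    # triangles, mirroring drawHomeCircle / drawIconCircle.
    ang_offset = 360.0 / nz
    verts = []
    for j in range(nz):
        for i in range(steps):
            verts.append((0.0, 0.0))                 # fan center
            for k in (i, i + 1):                     # two rim points
                a = radians(k * 12.0 / nz + ao + j * ang_offset - 90)
                verts.append((cos(a), sin(a)))
    return np.array(verts, 'f')

v = circle_slices(3)
```

Each slice contributes `steps * 3` vertices, and every rim point lies on the unit circle.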
from CameraCalibration import CameraCalibration
from Thresholds import abs_sobel_thresh, mag_thresh, dir_threshold, color_r_threshold
from SlidingWindows import sliding_windows
from FitPolynomial import fit_polynomial
import matplotlib.image as mpimg
import cv2
import numpy as np
import matplotlib.pyplot as plt
#Calibrate camera
image = mpimg.imread('test_images/test4.jpg')
#img = mpimg.imread('test_images/test4.jpg')
img_size = (image.shape[1], image.shape[0])
calibration = CameraCalibration()
objpoints, imgpoints = calibration.calibrate()
ret, mtx, dist, rvecs, tvecs = cv2.calibrateCamera(objpoints, imgpoints, img_size, None, None)
undst = cv2.undistort(image, mtx, dist, None, mtx)
#Threshold
#Sobel kernel size
ksize = 3
#Apply each of the thresholding functions
gradx = abs_sobel_thresh(undst, orient='x', sobel_kernel=ksize, thresh=(50, 255))
mag_binary = mag_thresh(undst, sobel_kernel=ksize, mag_thresh=(50, 255))
dir_binary = dir_threshold(undst, sobel_kernel=ksize, thresh=(0.7, 1.3))
color_binary = color_r_threshold(undst, thresh=(170, 255))
#Try a combination
combined = np.zeros_like(dir_binary)
combined[((gradx == 1) | ((mag_binary == 1) & (dir_binary == 1))) | (color_binary == 1)] = 1
#Perform perspective transform from source to bird's eyeview
src = np.float32([[600, 450], [720, 450], [1160, 720], [220, 720]])
dst = np.float32([[300,0], [980,0], [980,720], [300,720]])
M = cv2.getPerspectiveTransform(src, dst)
warped = cv2.warpPerspective(combined, M, (undst.shape[1],undst.shape[0]), flags=cv2.INTER_LINEAR)
#cv2.imshow('test_images/calibrated_image.jpg',warped)
#cv2.waitKey(0)
######The histogram shows that the lanes are located at around x = 400 and x = 1020######
normalized_undst = warped/255
# Take a histogram of the bottom half of the image
histogram = np.sum(normalized_undst[normalized_undst.shape[0]//2:,:], axis=0)
#plt.plot(histogram)
#plt.show()
# Create an output image to draw on and visualize the result
out_img = np.dstack((warped, warped, warped))*255
# Find the peak of the left and right halves of the histogram
# These will be the starting point for the left and right lines
midpoint = int(histogram.shape[0]//2)
leftx_base = np.argmax(histogram[:midpoint])
rightx_base = np.argmax(histogram[midpoint:]) + midpoint
#Set up windows and window hyperparameters
# HYPERPARAMETERS
# Choose the number of sliding windows
nwindows = 9
# Set the width of the windows +/- margin
margin = 100
# Set minimum number of pixels found to recenter window
minpix = 50
leftx, lefty, rightx, righty = sliding_windows(warped, nwindows, leftx_base, rightx_base, margin, out_img, minpix)
ploty = np.linspace(0, warped.shape[0]-1, warped.shape[0] )
ym_per_pix = 30/720 # meters per pixel in y dimension
xm_per_pix = 3.7/650 # meters per pixel in x dimension
left_fitx, right_fitx, left_fit_cr, right_fit_cr = fit_polynomial(lefty, leftx, righty, rightx, ym_per_pix, xm_per_pix, ploty)
# Define y-value where we want radius of curvature
# We'll choose the maximum y-value, corresponding to the bottom of the image
y_eval = out_img.shape[0] - 1
# Radius of curvature (in meters) for each fitted lane line
left_curverad = (1+(2*left_fit_cr[0]*y_eval*ym_per_pix + left_fit_cr[1])**2)**(3/2)/(2*abs(left_fit_cr[0]))
right_curverad = (1+(2*right_fit_cr[0]*y_eval*ym_per_pix + right_fit_cr[1])**2)**(3/2)/(2*abs(right_fit_cr[0]))
offset = (out_img.shape[1]/2 - (left_fitx[y_eval]+right_fitx[y_eval])/2)*xm_per_pix
print(left_curverad, 'm', right_curverad, 'm', offset, 'm')
# Create an image to draw the lines on
warp_zero = np.zeros_like(warped).astype(np.uint8)
color_warp = np.dstack((warp_zero, warp_zero, warp_zero))
# Recast the x and y points into usable format for cv2.fillPoly()
pts_left = np.array([np.transpose(np.vstack([left_fitx, ploty]))])
pts_right = np.array([np.flipud(np.transpose(np.vstack([right_fitx, ploty])))])
pts = np.hstack((pts_left, pts_right))
# Draw the lane onto the warped blank image
cv2.fillPoly(color_warp, np.int_([pts]), (0,255, 0))
Minv = cv2.getPerspectiveTransform(dst,src)
# Warp the blank back to original image space using inverse perspective matrix (Minv)
newwarp = cv2.warpPerspective(color_warp, Minv, (out_img.shape[1], out_img.shape[0]))
# Combine the result with the original image
result = cv2.addWeighted(undst, 1, newwarp, 0.3, 0)
cv2.putText(result,'Curve Radius [m]: '+str((left_curverad+right_curverad)/2)[:7],(40,70), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1.6, (0,255,0),2,cv2.LINE_AA)
cv2.putText(result,'Center Offset [m]: '+str(offset)[:7],(40,150), cv2.FONT_HERSHEY_COMPLEX_SMALL, 1.6,(0,255,0),2,cv2.LINE_AA)
#plt.imshow(result)
#plt.axis('off')
#plt.show()
#plt.savefig('output_images/result.jpg', bbox_inches='tight', pad_inches=0)
# Plot the result
#f, (ax1, ax2) = plt.subplots(1, 2, figsize=(24, 9))
#f.tight_layout()
#ax1.imshow(img_mod1)
# Plots the left and right polynomials on the lane lines
#ax1.set_title('Undistorted image with src drawn', fontsize=50)
#ax2.imshow(img_mod2)
#ax2.set_title('Warped result with dst drawn', fontsize=50)
#plt.subplots_adjust(left=0., right=1, top=0.9, bottom=0.)
#plt.savefig('output_images/warped.jpg')
#f.savefig('')
"""
Copyright (C) 2021 NVIDIA Corporation. All rights reserved.
Licensed under The MIT License (MIT)
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from collections import OrderedDict
import numpy as np
#----------------------------------------------------------------------------
class MyLinear(nn.Module):
"""Linear layer with equalized learning rate and custom learning rate multiplier."""
def __init__(self, input_size, output_size, gain=2 ** (0.5), use_wscale=False, lrmul=1, bias=True):
super().__init__()
he_std = gain * input_size ** (-0.5) # He init
# Equalized learning rate and custom learning rate multiplier.
if use_wscale:
init_std = 1.0 / lrmul
self.w_mul = he_std * lrmul
else:
init_std = he_std / lrmul
self.w_mul = lrmul
self.weight = torch.nn.Parameter(torch.randn(output_size, input_size) * init_std)
if bias:
self.bias = torch.nn.Parameter(torch.zeros(output_size))
self.b_mul = lrmul
else:
self.bias = None
def forward(self, x):
bias = self.bias
if bias is not None:
bias = bias * self.b_mul
return F.linear(x, self.weight * self.w_mul, bias)
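`MyLinear` implements the equalized-learning-rate trick: weights are stored at roughly unit variance and the He constant is multiplied in at call time, so the optimizer sees comparable gradient scales across layers. A NumPy-only sketch of that scaling (the names here are illustrative, not from the module):

```python
import numpy as np

# Weights stored at unit std; the He constant is applied per call, not at init.
fan_in = 512
gain = np.sqrt(2.0)
he_std = gain * fan_in ** -0.5

rng = np.random.default_rng(0)
w_stored = rng.standard_normal((256, fan_in))   # unit-variance storage
w_effective = w_stored * he_std                 # runtime scale, as in forward()

x = rng.standard_normal((4, fan_in))
y = x @ w_effective.T                           # same math as F.linear
```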
#----------------------------------------------------------------------------
class MyConv2d(nn.Module):
"""Conv layer with equalized learning rate and custom learning rate multiplier."""
def __init__(self, input_channels, output_channels, kernel_size, stride=1, gain=2 ** (0.5), use_wscale=False,
lrmul=1, bias=True,
intermediate=None, upscale=False, downscale=False):
super().__init__()
if upscale:
self.upscale = Upscale2d()
else:
self.upscale = None
if downscale:
self.downscale = Downscale2d()
else:
self.downscale = None
he_std = gain * (input_channels * kernel_size ** 2) ** (-0.5) # He init
self.kernel_size = kernel_size
if use_wscale:
init_std = 1.0 / lrmul
self.w_mul = he_std * lrmul
else:
init_std = he_std / lrmul
self.w_mul = lrmul
self.weight = torch.nn.Parameter(
torch.randn(output_channels, input_channels, kernel_size, kernel_size) * init_std)
if bias:
self.bias = torch.nn.Parameter(torch.zeros(output_channels))
self.b_mul = lrmul
else:
self.bias = None
self.intermediate = intermediate
def forward(self, x):
bias = self.bias
if bias is not None:
bias = bias * self.b_mul
have_convolution = False
if self.upscale is not None and min(x.shape[2:]) * 2 >= 128:
# this is the fused upscale + conv from StyleGAN, sadly this seems incompatible with the non-fused way
# this really needs to be cleaned up and go into the conv...
w = self.weight * self.w_mul
w = w.permute(1, 0, 2, 3)
# probably applying a conv on w would be more efficient. also this quadruples the weight (average)?!
w = F.pad(w, (1, 1, 1, 1))
w = w[:, :, 1:, 1:] + w[:, :, :-1, 1:] + w[:, :, 1:, :-1] + w[:, :, :-1, :-1]
x = F.conv_transpose2d(x, w, stride=2, padding=(w.size(-1) - 1) // 2)
have_convolution = True
elif self.upscale is not None:
x = self.upscale(x)
downscale = self.downscale
intermediate = self.intermediate
if downscale is not None and min(x.shape[2:]) >= 128:
w = self.weight * self.w_mul
w = F.pad(w, (1, 1, 1, 1))
# in contrast to upscale, this is a mean...
w = (w[:, :, 1:, 1:] + w[:, :, :-1, 1:] + w[:, :, 1:, :-1] + w[:, :, :-1, :-1]) * 0.25 # avg_pool?
x = F.conv2d(x, w, stride=2, padding=(w.size(-1) - 1) // 2)
have_convolution = True
downscale = None
elif downscale is not None:
assert intermediate is None
intermediate = downscale
if not have_convolution and intermediate is None:
return F.conv2d(x, self.weight * self.w_mul, bias, padding=self.kernel_size // 2)
elif not have_convolution:
x = F.conv2d(x, self.weight * self.w_mul, None, padding=self.kernel_size // 2)
if intermediate is not None:
x = intermediate(x)
if bias is not None:
x = x + bias.view(1, -1, 1, 1)
return x
#----------------------------------------------------------------------------
class NoiseLayer(nn.Module):
"""adds noise. noise is per pixel (constant over channels) with per-channel weight"""
def __init__(self, channels):
super().__init__()
self.weight = nn.Parameter(torch.zeros(channels))
self.noise = None
def forward(self, x, noise=None):
if noise is None and self.noise is None:
noise = torch.randn(x.size(0), 1, x.size(2), x.size(3), device=x.device, dtype=x.dtype)
elif noise is None:
# here is a little trick: if you get all the noiselayers and set each
# modules .noise attribute, you can have pre-defined noise.
# Very useful for analysis
noise = self.noise
x = x + self.weight.view(1, -1, 1, 1) * noise
return x
#----------------------------------------------------------------------------
class StyleMod(nn.Module):
def __init__(self,
latent_size,
channels,
use_wscale
):
super().__init__()
self.lin = MyLinear(latent_size, channels * 2, gain=1.0, use_wscale=use_wscale)
self.x_param_backup = None
def forward(self, x, latent, latent_after_trans=None):
if x is not None:
if latent_after_trans is None:
style = self.lin(latent) # style => [batch_size, n_channels*2]
shape = [-1, 2, x.size(1)] + (x.dim() - 2) * [1]
style = style.view(shape) # [batch_size, 2, n_channels, ...]
else:
style = latent_after_trans
self.x_param_backup = [x.size(1), x.dim()]
x = x * (style[:, 0] + 1.) + style[:, 1]
return x
else:
if self.x_param_backup is None:
                print('error: shape has not been initialized yet')
# print('Generating latent_after_trans:')
style = self.lin(latent) # style => [batch_size, n_channels*2]
shape = [-1, 2, self.x_param_backup[0]] + (self.x_param_backup[1] - 2) * [1]
style = style.view(shape) # [batch_size, 2, n_channels, ...]
return style
#----------------------------------------------------------------------------
class PixelNormLayer(nn.Module):
def __init__(self, epsilon=1e-8):
super().__init__()
self.epsilon = epsilon
def forward(self, x):
return x * torch.rsqrt(torch.mean(x ** 2, dim=1, keepdim=True) + self.epsilon)
#----------------------------------------------------------------------------
# Upscale and blur layers
class BlurLayer(nn.Module):
def __init__(self, kernel=[1, 2, 1], normalize=True, flip=False, stride=1):
super(BlurLayer, self).__init__()
kernel = torch.tensor(kernel, dtype=torch.float32)
kernel = kernel[:, None] * kernel[None, :]
kernel = kernel[None, None]
if normalize:
kernel = kernel / kernel.sum()
if flip:
            kernel = torch.flip(kernel, dims=[2, 3])  # negative-step slicing is unsupported on tensors
self.register_buffer('kernel', kernel)
self.stride = stride
def forward(self, x):
# expand kernel channels
kernel = self.kernel.expand(x.size(1), -1, -1, -1)
x = F.conv2d(
x,
kernel,
stride=self.stride,
padding=int((self.kernel.size(2) - 1) / 2),
groups=x.size(1)
)
return x
#----------------------------------------------------------------------------
def upscale2d(x, factor=2, gain=1):
assert x.dim() == 4
if gain != 1:
x = x * gain
if factor != 1:
shape = x.shape
x = x.view(shape[0], shape[1], shape[2], 1, shape[3], 1).expand(-1, -1, -1, factor, -1, factor)
x = x.contiguous().view(shape[0], shape[1], factor * shape[2], factor * shape[3])
return x
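`upscale2d` performs nearest-neighbour upsampling through a view/expand trick; in NumPy the same result falls out of repeating each pixel along both spatial axes (`upscale2d_np` is an illustrative equivalent):

```python
import numpy as np

# Nearest-neighbour upsampling of a (N, C, H, W) array by integer factor.
def upscale2d_np(x, factor=2):
    return x.repeat(factor, axis=2).repeat(factor, axis=3)

x = np.arange(4, dtype=np.float32).reshape(1, 1, 2, 2)
y = upscale2d_np(x)
# y[0, 0] is [[0, 0, 1, 1], [0, 0, 1, 1], [2, 2, 3, 3], [2, 2, 3, 3]]
```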
#----------------------------------------------------------------------------
class Upscale2d(nn.Module):
def __init__(self, factor=2, gain=1):
super().__init__()
assert isinstance(factor, int) and factor >= 1
self.gain = gain
self.factor = factor
def forward(self, x):
return upscale2d(x, factor=self.factor, gain=self.gain)
#----------------------------------------------------------------------------
class G_mapping(nn.Sequential):
def __init__(self,
nonlinearity = 'lrelu',
use_wscale = True,
):
act, gain = {'relu': (torch.relu, np.sqrt(2)),
'lrelu': (nn.LeakyReLU(negative_slope=0.2), np.sqrt(2))}[nonlinearity]
layers = [
('pixel_norm', PixelNormLayer()),
('dense0', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
('dense0_act', act),
('dense1', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
('dense1_act', act),
('dense2', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
('dense2_act', act),
('dense3', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
('dense3_act', act),
('dense4', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
('dense4_act', act),
('dense5', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
('dense5_act', act),
('dense6', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
('dense6_act', act),
('dense7', MyLinear(512, 512, gain=gain, lrmul=0.01, use_wscale=use_wscale)),
('dense7_act', act)
]
super().__init__(OrderedDict(layers))
def forward(self, x):
x = super().forward(x)
# Broadcast
x = x.unsqueeze(1).expand(-1, 18, -1)
return x
#----------------------------------------------------------------------------
class Truncation(nn.Module):
def __init__(self, avg_latent, max_layer=8, truncation_psi=0.7):
super().__init__()
self.max_layer = max_layer
self.truncation_psi = truncation_psi
# self.avg_latent = avg_latent
self.register_buffer('avg_latent', avg_latent)
def forward(self, x):
assert x.dim() == 3
interp = torch.lerp(self.avg_latent, x, self.truncation_psi)
do_trunc = (torch.arange(x.size(1), device=x.device) < self.max_layer).view(1, -1, 1)
return torch.where(do_trunc, interp, x)
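`Truncation` lerps the per-layer latents toward a running average, but only for the first `max_layer` layers. A NumPy sketch of the same rule (`truncate` is an illustrative name):

```python
import numpy as np

# Truncation trick: interpolate toward the average latent for early layers only.
def truncate(x, avg, psi=0.7, max_layer=8):
    interp = avg + psi * (x - avg)                      # torch.lerp(avg, x, psi)
    mask = (np.arange(x.shape[1]) < max_layer)[None, :, None]
    return np.where(mask, interp, x)

rng = np.random.default_rng(0)
avg = rng.standard_normal(512)
x = rng.standard_normal((1, 18, 512))
y = truncate(x, avg)
```

With psi=1 the input passes through unchanged; with psi=0 the truncated layers collapse to the average.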
#----------------------------------------------------------------------------
class LayerEpilogue(nn.Module):
"""Things to do at the end of each layer."""
def __init__(self,
channels,
dlatent_size,
use_wscale,
use_noise,
use_pixel_norm,
use_instance_norm,
use_styles,
activation_layer
):
super().__init__()
layers = []
if use_noise:
layers.append(('noise', NoiseLayer(channels))) # TODO: not sure if we need it
layers.append(('activation', activation_layer))
if use_pixel_norm:
layers.append(('pixel_norm', PixelNormLayer()))
if use_instance_norm:
layers.append(('instance_norm', nn.InstanceNorm2d(channels)))
self.top_epi = nn.Sequential(OrderedDict(layers))
if use_styles:
self.style_mod = StyleMod(dlatent_size, channels, use_wscale=use_wscale)
else:
self.style_mod = None
def forward(self, x, dlatents_in_slice=None, latent_after_trans=None):
x = self.top_epi(x)
if self.style_mod is not None:
if latent_after_trans is None:
x = self.style_mod(x, dlatents_in_slice)
else:
x = self.style_mod(x, dlatents_in_slice, latent_after_trans)
else:
assert dlatents_in_slice is None
return x
#----------------------------------------------------------------------------
class InputBlock(nn.Module):
def __init__(self, nf, dlatent_size, const_input_layer, gain, use_wscale, use_noise, use_pixel_norm,
use_instance_norm, use_styles, activation_layer):
super().__init__()
self.const_input_layer = const_input_layer
self.nf = nf
if self.const_input_layer:
# called 'const' in tf
self.const = nn.Parameter(torch.ones(1, nf, 4, 4))
self.bias = nn.Parameter(torch.ones(nf))
else:
self.dense = MyLinear(dlatent_size, nf * 16, gain=gain / 4,
use_wscale=use_wscale) # tweak gain to match the official implementation of Progressing GAN
self.epi1 = LayerEpilogue(nf, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm,
use_styles, activation_layer)
self.conv = MyConv2d(nf, nf, 3, gain=gain, use_wscale=use_wscale)
self.epi2 = LayerEpilogue(nf, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm,
use_styles, activation_layer)
def forward(self, dlatents_in_range, latent_after_trans=None):
batch_size = dlatents_in_range.size(0)
if self.const_input_layer:
x = self.const.expand(batch_size, -1, -1, -1)
x = x + self.bias.view(1, -1, 1, 1)
else:
x = self.dense(dlatents_in_range[:, 0]).view(batch_size, self.nf, 4, 4)
if latent_after_trans is None:
x = self.epi1(x, dlatents_in_range[:, 0])
else:
x = self.epi1(x, dlatents_in_range[:, 0], latent_after_trans[0]) # latent_after_trans is a list
x = self.conv(x)
if latent_after_trans is None:
x1 = self.epi2(x, dlatents_in_range[:, 1])
else:
x1 = self.epi2(x, dlatents_in_range[:, 1], latent_after_trans[1])
return x1, x
#----------------------------------------------------------------------------
class GSynthesisBlock(nn.Module):
def __init__(self, in_channels, out_channels, blur_filter, dlatent_size, gain, use_wscale, use_noise,
use_pixel_norm, use_instance_norm, use_styles, activation_layer):
# 2**res x 2**res # res = 3..resolution_log2
super().__init__()
if blur_filter:
blur = BlurLayer(blur_filter)
else:
blur = None
self.conv0_up = MyConv2d(in_channels, out_channels, kernel_size=3, gain=gain, use_wscale=use_wscale,
intermediate=blur, upscale=True)
self.epi1 = LayerEpilogue(out_channels, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm,
use_styles, activation_layer)
self.conv1 = MyConv2d(out_channels, out_channels, kernel_size=3, gain=gain, use_wscale=use_wscale)
self.epi2 = LayerEpilogue(out_channels, dlatent_size, use_wscale, use_noise, use_pixel_norm, use_instance_norm,
use_styles, activation_layer)
def forward(self, x, dlatents_in_range, latent_after_trans=None):
x = self.conv0_up(x)
if latent_after_trans is None:
x = self.epi1(x, dlatents_in_range[:, 0])
else:
x = self.epi1(x, dlatents_in_range[:, 0], latent_after_trans[0]) # latent_after_trans is a list
x = self.conv1(x)
if latent_after_trans is None:
x1 = self.epi2(x, dlatents_in_range[:, 1])
else:
x1 = self.epi2(x, dlatents_in_range[:, 1], latent_after_trans[1])
return x1, x
#----------------------------------------------------------------------------
class SegSynthesisBlock(nn.Module):
def __init__(self, prev_channel, current_channel, single_in=False):
super().__init__()
self.single_in = single_in
# self.in_conv = nn.Sequential(
# nn.ReLU(),
# nn.Conv2d(current_channel, current_channel, 3, 1, 1),
# nn.BatchNorm2d(current_channel),
# nn.ReLU(),
# nn.Conv2d(current_channel, current_channel, 1),
# nn.BatchNorm2d(current_channel)
# )
if not single_in:
self.up = nn.Upsample(scale_factor=2, mode="bilinear")
self.out_conv1 = nn.Sequential(
nn.ReLU(),
nn.Conv2d(current_channel + prev_channel, current_channel, 1, 1, 0),
nn.BatchNorm2d(current_channel)
)
self.out_conv2 = nn.Sequential(
nn.ReLU(),
nn.Conv2d(current_channel + current_channel, current_channel, 1, 1, 0),
nn.BatchNorm2d(current_channel)
)
def forward(self, x_curr, x_curr2, x_prev=None):
# x_curr = self.in_conv(x_curr)
if self.single_in:
x_middle = x_curr
else:
x_prev = self.up(x_prev)
x_concat = torch.cat([x_curr, x_prev], 1)
x_middle = self.out_conv1(x_concat)
x_middle = x_middle + x_curr
x_concat2 = torch.cat([x_curr2, x_middle], 1)
x_out = self.out_conv2(x_concat2)
x_out = x_out + x_curr2
return x_out
#----------------------------------------------------------------------------
class G_synthesis(nn.Module):
def __init__(self,
dlatent_size = 512, # Disentangled latent (W) dimensionality.
num_channels = 3, # Number of output color channels.
resolution = 512, # Output resolution.
fmap_base = 8192, # Overall multiplier for the number of feature maps.
fmap_decay = 1.0, # log2 feature map reduction when doubling the resolution.
fmap_max = 512, # Maximum number of feature maps in any layer.
use_styles = True, # Enable style inputs?
const_input_layer = True, # First layer is a learned constant?
use_noise = True, # Enable noise inputs?
randomize_noise = True, # True = randomize noise inputs every time (non-deterministic), False = read noise inputs from variables.
nonlinearity = 'lrelu', # Activation function: 'relu', 'lrelu'
use_wscale = True, # Enable equalized learning rate?
use_pixel_norm = False, # Enable pixelwise feature vector normalization?
use_instance_norm = True, # Enable instance normalization?
dtype = torch.float32, # Data type to use for activations and outputs.
fused_scale = 'auto', # True = fused convolution + scaling, False = separate ops, 'auto' = decide automatically.
blur_filter = [1, 2, 1], # Low-pass filter to apply when resampling activations. None = no filtering.
structure = 'auto', # 'fixed' = no progressive growing, 'linear' = human-readable, 'recursive' = efficient, 'auto' = select automatically.
is_template_graph = False, # True = template graph constructed by the Network class, False = actual evaluation.
force_clean_graph = False, # True = construct a clean graph that looks nice in TensorBoard, False = default behavior.
seg_branch = False
):
super().__init__()
def nf(stage):
return min(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_max)
self.dlatent_size = dlatent_size
self.seg_branch = seg_branch
resolution_log2 = int(np.log2(resolution))
assert resolution == 2 ** resolution_log2 and resolution >= 4
if is_template_graph: force_clean_graph = True
if force_clean_graph: randomize_noise = False
if structure == 'auto': structure = 'linear' if force_clean_graph else 'recursive'
act, gain = {'relu': (torch.relu, np.sqrt(2)),
'lrelu': (nn.LeakyReLU(negative_slope=0.2), np.sqrt(2))}[nonlinearity]
num_layers = resolution_log2 * 2 - 2
num_styles = num_layers if use_styles else 1
torgbs = []
blocks = []
if self.seg_branch:
seg_block = []
for res in range(2, resolution_log2 + 1):
channels = nf(res - 1)
name = '{s}x{s}'.format(s=2 ** res)
if res == 2:
blocks.append(
(name, InputBlock(channels, dlatent_size, const_input_layer, gain, use_wscale,
use_noise, use_pixel_norm, use_instance_norm, use_styles, act))
)
else:
blocks.append(
(name, GSynthesisBlock(last_channels, channels, blur_filter, dlatent_size, gain, use_wscale,
use_noise, use_pixel_norm, use_instance_norm, use_styles, act))
)
if res > 2 and self.seg_branch:
name = '{s}x{s}_seg'.format(s=2 ** res)
if len(seg_block) == 0:
seg_block.append((name, SegSynthesisBlock(last_channels, channels, single_in=True)))
else:
seg_block.append((name, SegSynthesisBlock(last_channels, channels)))
last_channels = channels
self.torgb = MyConv2d(channels, num_channels, 1, gain=1, use_wscale=use_wscale)
self.blocks = nn.ModuleDict(OrderedDict(blocks))
if self.seg_branch:
seg_block.append(("seg_out", nn.Conv2d(channels, 34, 1)))
self.seg_block = nn.ModuleDict(OrderedDict(seg_block))
def forward(self, dlatents_in, latent_after_trans=None):
# Input: Disentangled latents (W) [minibatch, num_layers, dlatent_size].
# lod_in = tf.cast(tf.get_variable('lod', initializer=np.float32(0), trainable=False), dtype)
batch_size = dlatents_in.size(0)
result_list = []
if self.seg_branch:
seg_branch_feature = None
for i, m in enumerate(self.blocks.values()):
if i == 0:
if latent_after_trans is None:
x, x2 = m(dlatents_in[:, 2 * i:2 * i + 2])
else:
x, x2 = m(dlatents_in[:, 2 * i:2 * i + 2], latent_after_trans[2 * i:2 * i + 2])
else:
if latent_after_trans is None:
x, x2 = m(x, dlatents_in[:, 2 * i:2 * i + 2])
else:
x, x2 = m(x, dlatents_in[:, 2 * i:2 * i + 2], latent_after_trans[2 * i:2 * i + 2]) # latent_after_trans is a tensor list
if self.seg_branch:
name = '{s}x{s}_seg'.format(s=2 ** (i + 2))
curr_seg_block = self.seg_block[name]
if seg_branch_feature is None:
seg_branch_feature = curr_seg_block(x2, x)
else:
seg_branch_feature = curr_seg_block(x2, x, x_prev=seg_branch_feature)
result_list.append(x)
result_list.append(x2)
rgb = self.torgb(x)
if self.seg_branch:
seg = self.seg_block["seg_out"](seg_branch_feature)
return rgb, seg, result_list
return rgb, result_list
#----------------------------------------------------------------------------
#### define discriminator
class StddevLayer(nn.Module):
def __init__(self, group_size=4, num_new_features=1):
super().__init__()
        self.group_size = group_size
        self.num_new_features = num_new_features
def forward(self, x):
b, c, h, w = x.shape
group_size = min(self.group_size, b)
y = x.reshape([group_size, -1, self.num_new_features,
c // self.num_new_features, h, w])
y = y - y.mean(0, keepdim=True)
y = (y ** 2).mean(0, keepdim=True)
y = (y + 1e-8) ** 0.5
y = y.mean([3, 4, 5], keepdim=True).squeeze(3) # don't keep the meaned-out channels
y = y.expand(group_size, -1, -1, h, w).clone().reshape(b, self.num_new_features, h, w)
z = torch.cat([x, y], dim=1)
return z
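`StddevLayer` appends a minibatch-statistics channel: the standard deviation over the group, averaged over features and pixels. A single-group NumPy sketch (`minibatch_stddev` is illustrative):

```python
import numpy as np

# One-group version of the minibatch-stddev feature.
def minibatch_stddev(x, eps=1e-8):
    b, c, h, w = x.shape
    y = x - x.mean(axis=0, keepdims=True)
    y = np.sqrt((y ** 2).mean(axis=0) + eps)     # per-feature stddev over the batch
    s = y.mean()                                 # one scalar for the whole group
    feat = np.full((b, 1, h, w), s, dtype=x.dtype)
    return np.concatenate([x, feat], axis=1)

x = np.zeros((4, 3, 2, 2))
z = minibatch_stddev(x)
# A constant batch has zero variance, so the extra channel equals sqrt(eps) = 1e-4.
```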
#----------------------------------------------------------------------------
class Downscale2d(nn.Module):
def __init__(self, factor=2, gain=1):
super().__init__()
assert isinstance(factor, int) and factor >= 1
self.factor = factor
self.gain = gain
if factor == 2:
f = [np.sqrt(gain) / factor] * factor
self.blur = BlurLayer(kernel=f, normalize=False, stride=factor)
else:
self.blur = None
def forward(self, x):
assert x.dim() == 4
# 2x2, float32 => downscale using _blur2d().
if self.blur is not None and x.dtype == torch.float32:
return self.blur(x)
# Apply gain.
if self.gain != 1:
x = x * self.gain
# No-op => early exit.
        if self.factor == 1:
return x
# Large factor => downscale using tf.nn.avg_pool().
# NOTE: Requires tf_config['graph_options.place_pruned_graph']=True to work.
return F.avg_pool2d(x, self.factor)
#----------------------------------------------------------------------------
class DiscriminatorBlock(nn.Sequential):
def __init__(self, in_channels, out_channels, gain, use_wscale, activation_layer):
super().__init__(OrderedDict([
('conv0', MyConv2d(in_channels, in_channels, 3, gain=gain, use_wscale=use_wscale)),
# out channels nf(res-1)
('act0', activation_layer),
('blur', BlurLayer()),
('conv1_down', MyConv2d(in_channels, out_channels, 3, gain=gain, use_wscale=use_wscale, downscale=True)),
('act1', activation_layer)]))
#----------------------------------------------------------------------------
class View(nn.Module):
def __init__(self, *shape):
super().__init__()
self.shape = shape
def forward(self, x):
return x.view(x.size(0), *self.shape)
#----------------------------------------------------------------------------
class DiscriminatorTop(nn.Sequential):
def __init__(self, mbstd_group_size, mbstd_num_features, in_channels, intermediate_channels, gain, use_wscale,
activation_layer, resolution=4, in_channels2=None, output_features=1, last_gain=1):
layers = []
if mbstd_group_size > 1:
layers.append(('stddev_layer', StddevLayer(mbstd_group_size, mbstd_num_features)))
if in_channels2 is None:
in_channels2 = in_channels
layers.append(
('conv', MyConv2d(in_channels + mbstd_num_features, in_channels2, 3, gain=gain, use_wscale=use_wscale)))
layers.append(('act0', activation_layer))
layers.append(('view', View(-1)))
layers.append(('dense0', MyLinear(in_channels2 * resolution * resolution, intermediate_channels, gain=gain,
use_wscale=use_wscale)))
layers.append(('act1', activation_layer))
layers.append(
('dense1', MyLinear(intermediate_channels, output_features, gain=last_gain, use_wscale=use_wscale)))
super().__init__(OrderedDict(layers))
#----------------------------------------------------------------------------
class D_basic(nn.Sequential):
def __init__(self,
# images_in, # First input: Images [minibatch, channel, height, width].
# labels_in, # Second input: Labels [minibatch, label_size].
num_channels=3, # Number of input color channels. Overridden based on dataloader.
resolution=512, # Input resolution. Overridden based on dataloader.
fmap_base=8192, # Overall multiplier for the number of feature maps.
fmap_decay=1.0, # log2 feature map reduction when doubling the resolution.
fmap_max=512, # Maximum number of feature maps in any layer.
nonlinearity='lrelu', # Activation function: 'relu', 'lrelu',
use_wscale=True, # Enable equalized learning rate?
mbstd_group_size=4, # Group size for the minibatch standard deviation layer, 0 = disable.
mbstd_num_features=1, # Number of features for the minibatch standard deviation layer.
# blur_filter = [1,2,1], # Low-pass filter to apply when resampling activations. None = no filtering.
):
        self.mbstd_group_size = mbstd_group_size
        self.mbstd_num_features = mbstd_num_features
resolution_log2 = int(np.log2(resolution))
assert resolution == 2 ** resolution_log2 and resolution >= 4
def nf(stage):
return min(int(fmap_base / (2.0 ** (stage * fmap_decay))), fmap_max)
act, gain = {'relu': (torch.relu, np.sqrt(2)),
'lrelu': (nn.LeakyReLU(negative_slope=0.2), np.sqrt(2))}[nonlinearity]
self.gain = gain
self.use_wscale = use_wscale
super().__init__(OrderedDict([
('fromrgb', MyConv2d(num_channels, nf(resolution_log2 - 1), 1, gain=gain,
use_wscale=use_wscale)),
('act', act)]
+ [('{s}x{s}'.format(s=2 ** res),
DiscriminatorBlock(nf(res - 1), nf(res - 2), gain=gain, use_wscale=use_wscale,
activation_layer=act)) for res in
range(resolution_log2, 2, -1)]
+ [('4x4',
DiscriminatorTop(mbstd_group_size, mbstd_num_features, nf(2), nf(2), gain=gain,
use_wscale=use_wscale, activation_layer=act))]))
#----------------------------------------------------------------------------
# ------------- Machine Learning - Topic 1: Linear Regression Multivariate
# depends on
# featureNormalize.py
# gradientDescentMulti.py
# normalEqn.py
#
import os, sys
sys.path.append(os.path.join(os.getcwd(), 'ml', 'ex1'))
from helpers import featureNormalize, gradientDescentMulti, normalEqn
import numpy as np
import matplotlib.pyplot as plt
## ================ Part 1: Feature Normalization ================
print('Loading data ...\n')
## Load Data
data = np.loadtxt('ml/ex1/ex1data2.txt', delimiter=",")
X = data[:,:2]
y = data[:,2]
m = len(y) # number of training examples
# Print out some data points
print('First 10 examples from the dataset: \n')
for i in range(10):
    print("x = [{:.0f} {:.0f}], y = {:.0f}".format(X[i,0], X[i,1], y[i]))
input('Program paused. Press enter to continue.\n')
# Scale features and set them to zero mean
print('Normalizing Features...')
X_norm, mu, sigma = featureNormalize(X)
# Add intercept term to X
X_padded = np.column_stack((np.ones((m,1)), X_norm)) # Add a column of ones to x
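`featureNormalize` is an external course helper; assuming it performs a plain z-score (which is what the later `mu`/`sigma` arithmetic expects), a minimal version would look like this:

```python
import numpy as np

# Z-score normalization: subtract column means, divide by column stds.
def feature_normalize(X):
    mu = X.mean(axis=0)
    sigma = X.std(axis=0, ddof=1)
    return (X - mu) / sigma, mu, sigma

X_demo = np.array([[2104., 3.], [1600., 3.], [2400., 4.]])
Xn, mu_d, sigma_d = feature_normalize(X_demo)
# Each column of Xn now has zero mean and unit (sample) standard deviation.
```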
## ================ Part 2: Gradient Descent ================
print('Running gradient descent...')
# Choose some alpha value
alpha = 0.01
num_iters = 400
# Init Theta and Run Gradient Descent
theta = np.zeros((3, 1))
theta, J_history = gradientDescentMulti(X_padded, y, theta, alpha, num_iters)
# Plot the convergence graph
plt.plot(range(J_history.size), J_history, "-b", linewidth=2)
plt.xlabel('Number of iterations')
plt.ylabel('Cost J')
plt.show(block=False)
# Display gradient descent's result
print('Theta computed from gradient descent: ')
print("{:f}, {:f}, {:f}".format(theta[0,0], theta[1,0], theta[2,0]))
print("")
# Estimate the price of a 1650 sq-ft, 3 br house
# ====================== YOUR CODE HERE ======================
# Recall that the first column of X is all-ones. Thus, it does
# not need to be normalized.
area_norm = (1650 - float(mu[:,0])) / float(sigma[:,0])
br_norm = (3 - float(mu[:,1]))/float(sigma[:,1])
house_norm_padded = np.array([1, area_norm, br_norm])
price = np.array(house_norm_padded).dot(theta)
# ============================================================
print("Predicted price of a 1650 sq-ft, 3 br house (using gradient descent):\n ${:,.2f}".format(price[0]))
input('Program paused. Press enter to continue.\n')
## ================ Part 3: Normal Equations ================
print('Solving with normal equations...')
# ====================== YOUR CODE HERE ======================
# Instructions: The following code computes the closed form
# solution for linear regression using the normal
# equations. You should complete the code in
# normalEqn.m
#
# After doing so, you should complete this code
# to predict the price of a 1650 sq-ft, 3 br house.
#
## Load Data
data = np.loadtxt('ml/ex1/ex1data2.txt', delimiter=",")
X = data[:,:2]
y = data[:,2]
m = len(y) # number of training examples
# Add intercept term to X
X_padded = np.column_stack((np.ones((m,1)), X))
# Calculate the parameters from the normal equation
theta = normalEqn(X_padded, y)
# Display normal equation's result
print('Theta computed from the normal equations:')
print("{:f}, {:f}, {:f}".format(theta[0], theta[1], theta[2]))
print('')
# Estimate the price of a 1650 sq-ft, 3 br house
# ====================== YOUR CODE HERE ======================
house_norm_padded = np.array([1, 1650, 3])
price = np.array(house_norm_padded).dot(theta)
# ============================================================
print("Predicted price of a 1650 sq-ft, 3 br house (using normal equations):\n ${:,.2f}".format(price))
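`normalEqn` is likewise an external helper; the closed form it is expected to compute is theta = (X^T X)^(-1) X^T y. A minimal sketch using a pseudo-inverse for numerical safety (`normal_eqn` is an illustrative name):

```python
import numpy as np

# Closed-form least squares via the normal equations.
def normal_eqn(X, y):
    return np.linalg.pinv(X.T @ X) @ X.T @ y

# On exactly linear data the coefficients are recovered to machine precision.
Xd = np.column_stack([np.ones(4), np.array([0., 1., 2., 3.])])
yd = 2.0 + 3.0 * Xd[:, 1]
theta_d = normal_eqn(Xd, yd)
```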
from pandas import json_normalize
import json
import cv2
import os
import os.path as osp
import numpy as np
import matplotlib.pyplot as plt
from pycocotools.coco import COCO
from mmcv.visualization.image import imshow_det_bboxes
def get_min_max_bbox(df, rate=0.05):
df = df.sort_values(by='area')
min_idx = int(len(df) * rate)
max_idx = int(len(df) * (1. - rate))
df_min = df.iloc[:min_idx]
df_max = df.iloc[max_idx:]
df_min = np.array(list(df_min['bbox']))
df_max = np.array(list(df_max['bbox']))
avg_min = np.mean(df_min, axis=0)
avg_max = np.mean(df_max, axis=0)
return avg_min, avg_max
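`get_min_max_bbox` averages the smallest and largest `rate`-fraction of boxes by area. The same trimming logic on plain area values, pandas-free (`trimmed_extremes` is an illustrative name):

```python
import numpy as np

# Average the bottom and top `rate` fraction of sorted values.
def trimmed_extremes(areas, rate=0.05):
    areas = np.sort(areas)
    k = int(len(areas) * rate)
    return areas[:k].mean(), areas[len(areas) - k:].mean()

lo, hi = trimmed_extremes(np.arange(100, dtype=float))
# With 100 sorted areas 0..99: bottom 5 average 2.0, top 5 average 97.0.
```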
def plt_img(filename, img_dir="data/track/"):
coco = COCO(filename)
for imgId in coco.getImgIds():
imgIds = [imgId]
image = coco.loadImgs(ids=imgIds)[0]
ann_ids = coco.getAnnIds(imgIds=imgIds)
anns = coco.loadAnns(ann_ids)
anns = [x for x in anns if x['bbox'][2] > 1200 and x['bbox'][3] > 1200]
if len(anns) <= 0: continue
anns = json_normalize(anns)
# print(get_min_max_bbox(anns[anns['label'] == 'head']))
# print(get_min_max_bbox(anns[anns['label'] == 'visible body']))
# print(get_min_max_bbox(anns[anns['label'] == 'full body']))
# print(get_min_max_bbox(anns[anns['label'] == 'car']))
img = osp.join(img_dir, image['file_name'])
bboxes = np.array(list(anns['bbox']))
bboxes[:, 2] += bboxes[:, 0]
bboxes[:, 3] += bboxes[:, 1]
labels = np.array(list(anns['category_id']))
img = imshow_det_bboxes(img, bboxes, labels, show=False, thickness=5, bbox_color='blue', )
cv2.imwrite(os.path.basename(image['file_name']), img)
plt.imshow(img)
        plt.show()
img_dir = "data/underwater/train/image/"
ann_file = "data/underwater/annotations/simple-sample-checked.json"
plt_img(ann_file, img_dir)
import time
import numpy
def norm_square_numpy_dot(vector):
return numpy.dot(vector, vector)
def run_experiment(size, num_iter=3):
vector = numpy.arange(size)
times = []
for i in range(num_iter):
start = time.time()
norm_square_numpy_dot(vector)
times.append(time.time() - start)
return min(times)
if __name__ == "__main__":
    print(run_experiment(1000000, 10))
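Two refinements worth noting for this microbenchmark: `time.perf_counter()` is the clock intended for interval measurement, and taking the minimum over repeats discards scheduler noise, as `run_experiment` already does. A generic sketch (`time_call` is an illustrative helper):

```python
import time

# Best-of-N timing with the high-resolution monotonic clock.
def time_call(fn, *args, repeats=3):
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        times.append(time.perf_counter() - start)
    return min(times)

t = time_call(sum, range(1000))
```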
import numpy as np
import matplotlib.pyplot as plt
import scipy.interpolate as interp
import scipy.optimize as optimize
import scipy
def sbin_pn(xvec, yvec, bin_size=1., vel_mult=0.):
    """Bin yvec by rounding xvec into bin_size-wide bins; if vel_mult is
    nonzero, keep only samples where gradient(xvec)*vel_mult > 0."""
fac = 1./bin_size
bins_vals = np.around(fac*xvec)
bins_vals /= fac
bins = np.unique(bins_vals)
y_binned = np.zeros_like(bins)
y_errors = np.zeros_like(bins)
if vel_mult:
vb = np.gradient(xvec)*vel_mult>0.
yvec2 = yvec[vb]
else:
vb = yvec == yvec
yvec2 = yvec
for i, b in enumerate(bins):
idx = bins_vals[vb] == b
y_binned[i] = np.mean(yvec2[idx])
y_errors[i] = np.std(yvec2[idx])
return bins, y_binned, y_errors
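The core of `sbin_pn`, reduced to its `vel_mult=0` path: round x into bins and average the y-values that share a bin (`bin_mean` is an illustrative re-implementation):

```python
import numpy as np

# Round x into bin_size-wide bins and average y within each bin.
def bin_mean(x, y, bin_size=1.0):
    keys = np.around(x / bin_size) * bin_size
    bins = np.unique(keys)
    means = np.array([y[keys == b].mean() for b in bins])
    return bins, means

b_demo, m_demo = bin_mean(np.array([0.1, -0.2, 1.1, 0.9]),
                          np.array([1.0, 3.0, 5.0, 7.0]))
# b_demo -> [0., 1.];  m_demo -> [2., 6.]
```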
def fit_fun(t, A, f, phi, C):
return A * np.sin(2 * np.pi * f * t + phi) + C
width = 1
nharmonics = 100
numbins = 100
# Generate a time array, cantilever drive and some noise
t = np.arange(0, 10, 1. / 5000)
noise = np.random.randn(len(t)) * 0.1
cant = 40 * np.sin(2 * np.pi * 13 * t) + 40
cant_n = cant + noise
freqs = np.fft.rfftfreq(len(cant), d=1./5000)
cantfft = np.fft.rfft(cant_n)
fund_ind = np.argmax( np.abs(cantfft[1:]) ) + 1
drive_freq = freqs[fund_ind]
p0 = [75, drive_freq, 0, 0]
popt, pcov = optimize.curve_fit(fit_fun, t, cant_n, p0=p0)
fitdat = fit_fun(t, *popt)
mindat = np.min(fitdat)
maxdat = np.max(fitdat)
posvec = np.linspace(mindat, maxdat, numbins)
points = np.linspace(mindat, maxdat, 10 * numbins)  # num must be an integer
fcurve = 2 * np.cos(0.7*points)+2
#plt.plot(points, fcurve)
#plt.show()
lookup = interp.interp1d(points, fcurve, fill_value='extrapolate')
dat = lookup(cant)
dat_n = dat + noise
datfft = np.fft.rfft(dat_n)
bins, rdat, rerrs = sbin_pn(t, dat, bin_size=1.)
plt.plot(bins, rdat)
fftsq = np.abs(datfft)
#if noise:
# cantfilt = (fftsq) / (fftsq[fund_ind]) # Normalize filter to 1 at fundamental
#elif not noise:
cantfilt = np.zeros(len(fftsq))
cantfilt[fund_ind] = 1.0
if width:
lower_ind = np.argmin(np.abs(drive_freq - 0.5 * width - freqs))
upper_ind = np.argmin(np.abs(drive_freq + 0.5 * width - freqs))
cantfilt[lower_ind:upper_ind+1] = cantfilt[fund_ind]
#plt.figure()
#plt.loglog(self.fft_freqs, cantfilt)
# Make a list of the harmonics to include and remove the fundamental
harms = np.array([x+2 for x in range(nharmonics)])
for n in harms:
harm_ind = np.argmin( np.abs(n * drive_freq - freqs))
cantfilt[harm_ind] = cantfilt[fund_ind]
if width:
h_lower_ind = harm_ind - (fund_ind - lower_ind)
h_upper_ind = harm_ind + (upper_ind - fund_ind)
cantfilt[h_lower_ind:h_upper_ind+1] = cantfilt[harm_ind]
cantr = np.fft.irfft(cantfilt * cantfft)
datr = np.fft.irfft(cantfilt * datfft)
plt.plot(cantr, datr)
plt.figure()
plt.plot(t, cantr)
plt.plot(t, datr)
plt.show()
'''
# Make a filter
eigenvectors = []
eigenvectors.append([1, cantfft[fund_ind]])
if width:
lower_ind = np.argmin(np.abs(drive_freq - 0.5 * width - freqs))
upper_ind = np.argmin(np.abs(drive_freq + 0.5 * width - freqs))
harms = np.array( [x+2 for x in range(nharmonics)] )
for n in harms:
harm_ind = np.argmin( np.abs(n * drive_freq - freqs) )
eigenvectors.append([n, datfft[harm_ind]])
#print eigenvectors
out = np.zeros(len(posvec))
for vec in eigenvectors:
power = vec[0]
amp = np.abs(vec[1]) / len(t)
phase = np.angle(vec[1]) + 0.5 * np.pi
#if (phase < -0.1 or phase > 0.1):
# amp *= -1.0
newposvec = posvec
out += amp * newposvec**power
plt.plot(posvec, out)
plt.show()
'''
from __future__ import print_function
import os
import sys
cur_path = os.path.abspath(os.path.dirname(__file__))
root_path = os.path.split(cur_path)[0]
sys.path.append(root_path)
import logging
import torch
import torch.nn as nn
import torch.utils.data as data
import torch.nn.functional as F
import cv2
import numpy as np
from tabulate import tabulate
from torchvision import transforms
from segmentron.data.dataloader import get_segmentation_dataset
from segmentron.models.model_zoo import get_segmentation_model
from segmentron.utils.score import SegmentationMetric
from segmentron.utils.distributed import synchronize, make_data_sampler, make_batch_data_sampler
from segmentron.config import cfg
from segmentron.utils.options import parse_args
from segmentron.utils.default_setup import default_setup
from crf import DenseCRF
class Evaluator(object):
def __init__(self, args):
self.postprocessor = DenseCRF(iter_max=cfg.CRF.ITER_MAX,
pos_xy_std=cfg.CRF.POS_XY_STD,
pos_w=cfg.CRF.POS_W,
bi_xy_std=cfg.CRF.BI_XY_STD,
bi_rgb_std=cfg.CRF.BI_RGB_STD,
bi_w=cfg.CRF.BI_W,
)
self.args = args
self.device = torch.device(args.device)
# image transform
input_transform = transforms.Compose([
transforms.ToTensor(),
transforms.Normalize(cfg.DATASET.MEAN, cfg.DATASET.STD),
])
# dataset and dataloader
val_dataset = get_segmentation_dataset(cfg.DATASET.NAME, split='val', mode='testval', transform=input_transform)
# made shuffle true
val_sampler = make_data_sampler(val_dataset, shuffle=True, distributed=args.distributed)
val_batch_sampler = make_batch_data_sampler(val_sampler, images_per_batch=cfg.TEST.BATCH_SIZE, drop_last=False)
self.val_loader = data.DataLoader(dataset=val_dataset,
batch_sampler=val_batch_sampler,
num_workers=cfg.DATASET.WORKERS,
pin_memory=True)
self.classes = val_dataset.classes
# create network
self.model = get_segmentation_model().to(self.device)
if hasattr(self.model, 'encoder') and hasattr(self.model.encoder, 'named_modules') and \
cfg.MODEL.BN_EPS_FOR_ENCODER:
logging.info('set bn custom eps for bn in encoder: {}'.format(cfg.MODEL.BN_EPS_FOR_ENCODER))
self.set_batch_norm_attr(self.model.encoder.named_modules(), 'eps', cfg.MODEL.BN_EPS_FOR_ENCODER)
if args.distributed:
self.model = nn.parallel.DistributedDataParallel(self.model,
device_ids=[args.local_rank], output_device=args.local_rank, find_unused_parameters=True)
self.model.to(self.device)
self.metric = SegmentationMetric(val_dataset.num_class, args.distributed)
def set_batch_norm_attr(self, named_modules, attr, value):
for m in named_modules:
if isinstance(m[1], nn.BatchNorm2d) or isinstance(m[1], nn.SyncBatchNorm):
setattr(m[1], attr, value)
def eval(self):
self.metric.reset()
self.model.eval()
if self.args.distributed:
model = self.model.module
else:
model = self.model
one_five = torch.ones(1) * 1.5
one_five = one_five.to(self.device)
temp = torch.nn.Parameter(one_five)
print(temp)
criterion = torch.nn.CrossEntropyLoss(ignore_index=cfg.DATASET.IGNORE_INDEX).to(self.device)
optimizer = torch.optim.SGD([temp], lr=1)
logging.info("Start validation, Total sample: {:d}".format(len(self.val_loader)))
import time
time_start = time.time()
loss_series = list()
temp_series = list()
for epoch in range(10):
logging.info("Epoch Started {}".format(epoch))
loss_epoch = 0.0
for i, (image, target, filename) in enumerate(self.val_loader):
optimizer.zero_grad()
image = image.to(self.device)
target = target.to(self.device)
with torch.no_grad():
output = model.evaluate(image)
# output = output.cpu()
output = output / temp
# print(output.shape)
# print(target.shape)
loss = criterion(output, target)
loss_epoch += loss.item()
loss.backward()
optimizer.step()
logging.info("Batch {} loss for Temp Scaling : {}".format(i, loss))
logging.info("Epoch {} loss for Temp Scaling : {}".format(epoch, loss_epoch / (len(self.val_loader))))
logging.info("Epoch {} Temp Scaling factor is : {}".format(epoch, temp.item()))
loss_series.append(loss_epoch)
temp_series.append(temp.item())
print(loss_series)
print(temp_series)
synchronize()
print('Final scaled temp : {}'.format(temp))
if __name__ == '__main__':
args = parse_args()
cfg.update_from_file(args.config_file)
cfg.update_from_list(args.opts)
cfg.PHASE = 'test'
cfg.ROOT_PATH = root_path
cfg.check_and_freeze()
# import pdb; pdb.set_trace()
default_setup(args)
evaluator = Evaluator(args)
evaluator.eval()
import os
import pickle
import warnings
from typing import Dict
import numpy as np
from lark import Lark, Transformer, Tree, v_args
from lark.tree import pydot__tree_to_graph
from lark.visitors import Interpreter
from spatial.geometry import SpatialInterface, ObjectInTime
@v_args(inline=True) # Affects the signatures of the methods
class SpatRelInterpreter(Transformer):
"""
Interpreter for spatial relations. Delegates parsed tree to corresponding operations.
"""
# from operator import neg, and_ as b_and, or_ as b_or
# from operator import and_, or_, not_
number = float
def __init__(self):
"""
Initializes the interpreter
"""
super().__init__()
self.vars: Dict[str, ObjectInTime] = {}
self.number_vars: Dict[str, float] = {}
self._global_time = 0
def set_global_time(self, time: int):
assert time >= 0, "<Interpreter>: global time must be non-negative! Got: {}".format(time)
self._global_time = time
@staticmethod
def spatial(a):
"""
Passes the quantitative semantics value through unchanged (mapping to the bool domain happens in Spatial.interpret)
Args:
a: The float value
Returns: the value a unchanged
"""
return a
@staticmethod
def and_(a, b) -> float:
"""
Computes the quantitative semantics of AND operator
Args:
a: predicate left of operator
b: predicate right of operator
Returns: min(a,b)
"""
return np.min([a, b])
@staticmethod
def or_(a, b) -> float:
"""
Computes the quantitative semantics of OR operator
Args:
a: predicate left of operator
b: predicate right of operator
Returns: max(a,b)
"""
return np.max([a, b])
@staticmethod
def xor_(a, b) -> float:
"""
Computes the quantitative semantics of XOR operator
Args:
a: predicate left of operator
b: predicate right of operator
Returns: max(min(a, -b), min(-a, b))
"""
# a XOR b = (a & !b) | (!a & b)
a_notb = np.min([a, -b])
nota_b = np.min([-a, b])
return np.max([a_notb, nota_b])
@staticmethod
def implies_(a, b) -> float:
"""
Computes the quantitative semantics of IMPLIES operator
Args:
a: predicate left of operator
b: predicate right of operator
Returns: max(-a, b)
"""
# a -> b = !a | b
return np.max([-a, b])
@staticmethod
def not_(a) -> float:
"""
Computes the quantitative semantics of not operator
Args:
a: predicate to negate
Returns: -a
"""
return -a
@property
def vars(self) -> dict:
"""
Dictionary of stored (name, variable) pairs
Returns: dictionary
"""
return self._vars
@vars.setter
def vars(self, vars: dict):
"""
Sets the dictionary of stored (name, variable) pairs
Args:
vars: Dictionary
"""
if len(vars) > 1:
elements = list(vars.values())
assert all([isinstance(k, type(elements[0])) for k in
elements]), '<SpatialInterpreter>: only one type of obstacle currently supported!'
self._vars = vars
@property
def number_vars(self) -> dict:
"""
Returns the dictionary of stored (name, numerical variable) pairs
Returns: dictionary
"""
return self._number_vars
@number_vars.setter
def number_vars(self, number_vars: dict):
"""
Sets the dictionary of stored (name, numerical variable) pairs
Args:
number_vars: dictionary
"""
assert len(number_vars) == 0 or all([isinstance(k, (float, int)) for k in
number_vars.values()]), '<SpatialInterpreter>: only numbers supported!'
self._number_vars = number_vars
def assign_var(self, name: str, value: ObjectInTime):
"""
Assigns a new variable to the interpreter
Args:
name: Name of the variable
value: Value of the variables (spatial interface object or int/float)
"""
assert isinstance(name, str), '<Logic>: name must be of type string! Got {}'.format(name)
if isinstance(value, (int, float)):
self.number_vars[name.lower()] = value
else:
assert isinstance(value, ObjectInTime), '<Logic>: value must be of type ' \
'ObjectInTime! Got {}'.format(
value)
self.vars[name.lower()] = value
return value
def var(self, name: str) -> SpatialInterface:
"""
Returns the variable corresponding to the provided name
Args:
name: Name of the variable
Returns: spatial interface object of name
"""
# query latest time step
return self.var_at(name, 0)
def var_at(self, name: str, rel_time: int) -> SpatialInterface:
"""
Returns the variable corresponding to the provided name and time step
Args:
name: Name of the variable
rel_time: relative time identifier
Returns: spatial interface object of name
"""
assert rel_time >= 0, '<SpatRelInterpreter>: relative time must be non-negative! Got: {}'.format(rel_time)
try:
obj: ObjectInTime = self.vars[name.lower()]
except KeyError:
raise Exception("Variable not found: %s" % name)
try:
return obj.getObject(self._global_time - rel_time)
except Exception:
raise Exception("Time step index t={} not found".format(self._global_time - rel_time))
@staticmethod
def enlarge(o: SpatialInterface, radius: float):
return o.enlarge(radius)
def numeric_var(self, name):
"""
Returns the numeric variable of a specified name
Args:
name: The specified name
Returns: int/float object corresponding to specified name
"""
try:
return self.number_vars[name.lower()]
except KeyError:
raise Exception("Variable not found: %s" % name)
@staticmethod
def left_of(left: SpatialInterface, right: SpatialInterface):
return left.left_of(right)
@staticmethod
def right_of(left: SpatialInterface, right: SpatialInterface):
return left.right_of(right)
@staticmethod
def below_of(left: SpatialInterface, right: SpatialInterface):
return left.below(right)
@staticmethod
def above_of(left: SpatialInterface, right: SpatialInterface):
return left.above(right)
@staticmethod
def overlap(left: SpatialInterface, right: SpatialInterface):
return left.overlap(right)
@staticmethod
def touching(left: SpatialInterface, right: SpatialInterface):
return left.touching(right)
@staticmethod
def far_from(left: SpatialInterface, right: SpatialInterface):
return left.far_from(right)
@staticmethod
def close_to(left: SpatialInterface, right: SpatialInterface):
return left.close_to(right)
@staticmethod
def enclosed_in(left: SpatialInterface, right: SpatialInterface):
return left.enclosed_in(right)
@staticmethod
def comparison(left: SpatialInterface, right: SpatialInterface):
return [left, right]
@staticmethod
def moved(left: SpatialInterface, right: SpatialInterface):
val = left.enclosed_in(right.enlarge(25))
return val if np.isclose(val, 0) else -val
@staticmethod
def operator(value):
"""
Maps operators (<=,>=,=) to numpy functions
Args:
value: The operator to map
Returns: Numpy functions object
"""
# "<=" | ">=" | "=="
if value == "<=":
return np.less_equal
if value == ">=":
return np.greater_equal
if value == "==":
return np.equal
@staticmethod
def closer_to(left: SpatialInterface, right: list):
return left.closer_to_than(right[0], right[1])
@staticmethod
def distance(left: SpatialInterface, right: SpatialInterface, fun, eps):
return left.distance_compare(right, eps, fun)
# custom function wrapper for the SpatialInterpreter.visit() function
def _vargs_tree_time(f, data, children, meta, lower, upper):
return f(Tree(data, children, meta), lower, upper)
@v_args(wrapper=_vargs_tree_time)
class SpatialInterpreter(Interpreter):
"""
Interpreter object for the temporal parts of Spatial. Delegates parsed tree to corresponding operations.
"""
def __init__(self, spatial, quantitative: bool = False):
self._quantitative = quantitative
self._spatial_interpreter = spatial
self._spatial_dict = {}
self.vars = {}
@property
def quantitative(self) -> bool:
"""
Bool whether this interpreter returns boolean or quantitative values
Returns: True if quantitative
"""
return self._quantitative
@quantitative.setter
def quantitative(self, quantitative: bool):
"""
Bool whether this interpreter returns boolean or quantitative values
Args:
quantitative: Set a new bool
"""
self._quantitative = quantitative
@property
def vars(self) -> dict:
"""
Dictionary of stored (name, variable) pairs
Returns: dictionary
"""
return self._vars
@vars.setter
def vars(self, vars: dict):
"""
Sets the dictionary of stored (name, variable) pairs
Args:
vars: Dictionary
"""
if len(vars) > 1:
elements = list(vars.values())
assert all([isinstance(k, ObjectInTime) for k in
elements]), '<SpatialInterpreter>: only ObjectInTime currently supported!'
self._vars = vars
def assign_var(self, name: str, value: ObjectInTime):
"""
Assigns a new variable to the interpreter
Args:
name: Name of the variable
value: Value of the variables (ObjectInTime interface object or int/float)
"""
if isinstance(value, (int, float)):
self._spatial_interpreter.assign_var(name, value)
else:
assert isinstance(value,
ObjectInTime), '<SpatialInterpreter>: value must be of type ' \
'ObjectInTime or int/float! Got {}'.format(
value)
assert isinstance(name, str), '<SpatialInterpreter>: name must be of type string! Got {}'.format(name)
self.vars[name.lower()] = value
return value
def var(self, name: str) -> ObjectInTime:
"""
Returns the variable corresponding to the provided name
Args:
name: Name of the variable
Returns: spatial interface object of name
"""
try:
return self.vars[name.lower()]
except KeyError:
raise Exception("Variable not found: %s" % name)
# translates the relative bounds of bounded temporal operators into the absolute time
@staticmethod
def relative_to_absolute_bounds(rel_lower, rel_upper, lower, upper):
assert rel_lower >= 0 and rel_upper >= 0, \
'<SpatialInterpreter>: negative bounds in bounded temporal operators not allowed!'
assert rel_lower <= rel_upper, \
'<SpatialInterpreter>: relative lower bound is higher than relative upper bound'
abs_lower = lower + rel_lower
abs_upper = min(upper, lower + rel_upper)
return abs_lower, abs_upper
# hacky override of original function. provides necessary extra parameters for custom function wrapper
def visit(self, tree, lower, upper):
f = getattr(self, tree.data)
wrapper = getattr(f, 'visit_wrapper', None)
if wrapper is not None:
return f.visit_wrapper(f, tree.data, tree.children, tree.meta, lower, upper)
else:
return f(tree)
def temporal(self, tree, lower, upper):
# temporal has only one child
return self.visit(tree.children[0], lower, upper)
def and_(self, tree, lower, upper):
# and_ has two children
left = self.visit(tree.children[0], lower, upper)
right = self.visit(tree.children[1], lower, upper)
return np.nanmin([left, right])
def or_(self, tree, lower, upper):
# or_ has two children
left = self.visit(tree.children[0], lower, upper)
right = self.visit(tree.children[1], lower, upper)
return np.nanmax([left, right])
def xor_(self, tree, lower, upper):
# xor_ has two children
left = self.visit(tree.children[0], lower, upper)
right = self.visit(tree.children[1], lower, upper)
# a XOR b = (a & !b) | (!a & b)
a_notb = np.nanmin([left, -right])
nota_b = np.nanmin([-left, right])
return np.nanmax([a_notb, nota_b])
def implies_(self, tree, lower, upper):
# implies_ has two children
left = self.visit(tree.children[0], lower, upper)
right = self.visit(tree.children[1], lower, upper)
return np.nanmax([-left, right]) # works because a -> b == !a v b
def not_(self, tree, lower, upper):
# not_ has a single child
return -self.visit(tree.children[0], lower, upper)
def eventually(self, tree, lower, upper):
results = []
for i in range(lower, upper + 1):
results.append(self.visit(tree.children[0], i, upper))
# speedup in case the interpreter is run in boolean mode
if not self.quantitative and results[-1] >= 0:
return 1.
return np.nanmax(results)
def eventually_bounded(self, tree, lower, upper):
bound = tree.children[0]
rel_bound_l = int(bound.children[0].children[0])
rel_bound_u = int(bound.children[1].children[0])
abs_bound_l, abs_bound_u = self.relative_to_absolute_bounds(rel_bound_l, rel_bound_u, lower, upper)
# this happens when the relative lower bound references a point in time later than upper
if abs_bound_l > abs_bound_u:
return np.nan
# create a 'fake' eventually tree and interpret it over new bounds
eventually_tree = Tree('eventually', [tree.children[1]])
return self.eventually(eventually_tree, abs_bound_l, abs_bound_u)
def always(self, tree, lower, upper):
# always has only one child
results = []
for i in range(lower, upper + 1):
results.append(self.visit(tree.children[0], i, upper))
# speedup in case the interpreter is run in boolean mode
if not self.quantitative and results[-1] < 0:
return -1.
return np.nanmin(results)
def always_bounded(self, tree, lower, upper):
bound = tree.children[0]
rel_bound_l = int(bound.children[0].children[0])
rel_bound_u = int(bound.children[1].children[0])
abs_bound_l, abs_bound_u = self.relative_to_absolute_bounds(rel_bound_l, rel_bound_u, lower, upper)
# this happens when the relative lower bound references a point in time later than upper
if abs_bound_l > abs_bound_u:
return np.nan
# create a 'fake' always tree and interpret it over new bounds
always_tree = Tree('always', [tree.children[1]])
return self.always(always_tree, abs_bound_l, abs_bound_u)
def next(self, tree, lower, upper):
if lower + 1 > upper:
return np.nan
# next always has only one child
return self.visit(tree.children[0], lower + 1, upper)
until_storage = dict()
def until(self, tree, lower, upper):
# final result
result = -np.inf
# store results of current tree evaluation in lookup table
element = hash((tree.children[0], tree.children[1]))
if element not in self.until_storage:
self.until_storage[element] = {}
# get dictionary of previous calls
# stores calls of self.visit(tree.children[0], j, k). key is (i, k), value is result
v2_dict = self.until_storage[element]
for k in range(lower, upper + 1):
v1 = self.visit(tree.children[1], k, upper)
# this whole section is simply
# v2 = min(self.visit(tree.children[0], j, k) for j in range(lower, k+1))
v2 = np.inf
for j in range(lower, k + 1):
interval = (j, k)
if interval not in v2_dict:
val = self.visit(tree.children[0], j, k)
v2_dict[interval] = val
if val < v2:
v2 = val
else:
val = v2_dict[interval]
if val < v2:
v2 = val
val = np.nanmin([v1, v2])
if val > result:
result = val
return result
def until_bounded(self, tree, lower, upper):
bound = tree.children[1]
rel_bound_l = int(bound.children[0].children[0])
rel_bound_u = int(bound.children[1].children[0])
abs_bound_l, abs_bound_u = self.relative_to_absolute_bounds(rel_bound_l, rel_bound_u, lower, upper)
# this happens when the relative lower bound references a point in time later than upper
if abs_bound_l > abs_bound_u:
return np.nan
# create a 'fake' until tree and interpret it over new bounds
until_tree = Tree('until', [tree.children[0], tree.children[2]])
return self.until(until_tree, abs_bound_l, abs_bound_u)
def spatial(self, tree, lower, upper):
# check if this spatial formula has already been evaluated for this time point
element = hash((tree, lower)) # compute hash once
if element not in self._spatial_dict:
# set global time for evaluation of formula
self._spatial_interpreter.set_global_time(lower)
val = self._spatial_interpreter.transform(tree)
self._spatial_dict[element] = val
return self._spatial_dict[element]
class Spatial(object):
"""
Spatial parser (+ interpreter)
"""
def __init__(self, quantitative: bool = False):
"""
Initializes the Spatial object
Args:
quantitative: True if quantitative semantics are desired
"""
grammar = os.path.dirname(__file__) + "/spatial.lark"
self._parser = Lark.open(grammar, parser='lalr')
self._spatial_interpreter = SpatRelInterpreter()
self._tl_interpreter = SpatialInterpreter(self._spatial_interpreter, quantitative=quantitative)
self.quantitative = quantitative
@property
def quantitative(self) -> bool:
return self._quantitative
@quantitative.setter
def quantitative(self, quantitative: bool):
self._quantitative = quantitative
self._tl_interpreter.quantitative = quantitative
def reset_spatial_dict(self):
"""
Resets the spatial interpreter call history
"""
self._tl_interpreter._spatial_dict = {}
def parse(self, formula: str) -> Tree:
"""
Parses a given formula to a tree object
Args:
formula: Formula to parse
Returns: Tree object of parsed formula
"""
try:
self.reset_spatial_dict() # every time you parse a new formula, reset the spatial dict
return self._parser.parse(formula)
except Exception as e:
print(e)
return None
def interpret(self, formula: Tree, lower=0, upper=0):
"""
Interprets a given tree
Args:
formula: The tree of the formula
lower: lower time bound for semantics
upper: upper time bound for semantics
Returns: Value of interpreted formula
"""
# check if relative time has been used
rel_time = self.min_time_of_formula(formula)
# return NAN if start time does not allow to evaluate formula
if rel_time > lower:
warnings.warn(f"<Interpreter/interpret>: Cannot evaluate formula from time step {lower} "
f"since the formula is only valid from {rel_time}!")
return np.nan
try:
val = self._tl_interpreter.visit(formula, lower, upper)
if self.quantitative:
return val
else:
return val >= 0
except Exception as e:
print(e)
return None
@staticmethod
def svg_from_tree(formula: Tree, filename, rankdir="LR", **kwargs):
"""
Saves tree object to a svg vector graphics file
Args:
formula: The tree of the formula
filename: The filename
rankdir: params for pydot
**kwargs: params for pydot
"""
graph = pydot__tree_to_graph(formula, rankdir, **kwargs)
graph.write_svg(filename)
@staticmethod
def png_from_tree(formula: Tree, filename, rankdir="LR", **kwargs):
"""
Saves tree object to a png graphics file
Args:
formula: The tree of the formula
filename: The filename
rankdir: params for pydot
**kwargs: params for pydot
"""
graph = pydot__tree_to_graph(formula, rankdir, **kwargs)
graph.write_png(filename)
@staticmethod
def determine_variables(formula: Tree):
"""
Determines all variables within a formula (given as a tree)
Args:
formula: The formula to check
Returns: Set of all variable names
"""
iter = formula.find_data('var')
vars = set()
for v in iter:
vars.add(v.children[0].title())
return vars
def check_variables(self, formula: Tree) -> bool:
"""
Checks if the interpreter stores all variables required to interpret a given formula
Args:
formula: The formula to check as a tree
Returns: True if interpreter stores all necessary variables and False otherwise
"""
vars = self.determine_variables(formula)
for v in vars:
if v.lower() not in self._tl_interpreter.vars.keys():
return False
return True
@staticmethod
def min_time_of_formula(formula: Tree) -> int:
"""
If the formula contains relative time references (variable plus time reference), then the formula can only
be evaluated when the minimum time has been reached in the interpreter. This function returns the minimum
required time to evaluate the formula.
Args:
formula: The formula to check
Returns: The minimum time required to evaluate the formula
"""
iter = formula.find_data('var_at') # tag used for variables with time reference
vars = [0]
for v in iter:
time = float(v.children[1].children[0].title())
assert time >= 0
vars.append(time)
return np.max(vars)
def update_variables(self, vars: dict):
"""
Update the set of variables in the interpreter
Args:
vars: The new set of variables
"""
self._tl_interpreter.vars = vars
def assign_variable(self, name, value):
"""
Assigns a variable to the interpreter
Args:
name: Name of the variable
value: Value of the variable
"""
# if isinstance(value, ObjectInTime):
# self._tl_interpreter.assign_var(name, value)
# elif isinstance(value, SpatialInterface):
# self._tl_interpreter.assign_var(name, StaticObject(value))
# else:
self._spatial_interpreter.assign_var(name, value)
def parse_and_interpret(self, formula: str):
"""
Parses and interprets a given formula
Args:
formula: Formula as string
Returns:
"""
self.reset_spatial_dict()
return self.interpret(self.parse(formula))
def save_to_file(self, file: str):
"""
Saves the state of the interpreter to the file system
Args:
file: The filename
"""
try:
pickle.dump(self._spatial_interpreter, open(file, 'wb'))
except Exception as e:
print(e)
def from_file(self, file: str):
"""
Restores an interpreter from the file system
Args:
file: The filename
"""
try:
self._spatial_interpreter = pickle.load(open(file, 'rb'))
except Exception as e:
print(e)
@staticmethod
def write_formulas_to_file(file: str, formulas: list):
"""
Writes a list of formulas to the file system
Args:
file: The filename
formulas: The list of formulas to store
"""
try:
pickle.dump(formulas, open(file, 'wb'))
except Exception as e:
print(e)
@staticmethod
def load_formulas_from_file(file: str) -> list:
"""
Loads a list of formulas from the file system
Args:
file: The filename
Returns: list of formulas as strings
"""
try:
return pickle.load(open(file, 'rb'))
except Exception as e:
print(e)
if __name__ == "__main__":
print("WOW")