\section{Introduction}
\noindent The classical linear-time / branching-time
spectrum~\cite{Glabbeek90} organizes a plethora of notions of
behavioural equivalence on labelled transition systems at various
levels of granularity ranging from (strong) bisimilarity to trace
equivalence. Similar spectra appear in other system types, e.g.~on
probabilistic systems, again ranging from branching-time equivalence
such as probabilistic bisimilarity to linear-time ones such as
probabilistic trace equivalence~\cite{JouSmolka90}. While the
variation in system types (nondeterministic, probabilistic, etc.) is
captured within the framework of \emph{universal
coalgebra}~\cite{Rutten00}, the variation in the granularity of
equivalence, which we shall generally refer to as the \emph{semantics}
of systems, has been tackled, in coalgebraic generality, in a variety
of approaches~\cite{HJS07,KR15,JSS15,JLR18}. One setting that
manages to accommodate large portions of the
linear-time / branching-time spectrum, notably including also
intermediate equivalences such as ready similarity, is based on
\emph{graded monads}~\cite{MPS15,DMS19,FMS21a}.
An important role in the theoretical and algorithmic treatment of a
behavioural equivalence is classically played by equivalence
games~\cite{Glabbeek90,Stirling99}, e.g.~in partial-order
techniques~\cite{hnw:por-bisimulation-checking} or in on-the-fly
equivalence checking~\cite{h:bisim-verif-journal}. In the present
work, we contribute to \emph{graded semantics} in the sense indicated
above by showing that, under mild conditions, we can extract from a
given graded monad a Spoiler-Duplicator game~\cite{Stirling99} that
characterizes the respective equivalence, i.e.~ensures that two states
are equivalent under the semantics iff Duplicator wins the game.
As the name suggests, graded monads provide an \emph{algebraic} view
on system equivalence; they correspond to \emph{grad\-ed theories},
i.e.~algebraic theories equipped with a notion of \emph{depth} on
their operations. It has been noticed early on~\cite{MPS15} that many
desirable properties of a semantics depend on this theory being
\emph{depth-1}, i.e.~having only equations between terms that are
uniformly of depth~1. Standard examples include distribution of
actions over non-deterministic choice (trace semantics) or
monotonicity of actions w.r.t.~the choice ordering
(similarity)~\cite{DMS19}. Put simply, our generic equivalence game
plays out an equational proof in a depth-1 equational theory in a
somewhat nontraditional manner:
Duplicator starts a round by playing a set of equational assumptions
she claims to hold at the level of successors of the present state,
and Spoiler then challenges one of these assumptions.
In many concrete cases, the game can be rearranged in a straightforward
manner to let Spoiler move first as usual; in this view, the equational claims
of Duplicator roughly correspond to a short-term strategy determining the
responses she commits to playing after Spoiler's next move. In particular,
the game instantiates, after such rearrangement, to the standard pebble
game for bisimilarity. We analyse additional cases, including similarity
and trace equivalence, in more detail. In the latter case, several natural
variants of the game arise by suitably restricting strategies played by
Duplicator.
It turns out that the game is morally played on a form of
pre-determinization of the given coalgebra, which lives in the
Eilenberg-Moore category of the zero-th level of the graded monad, and as
such generalizes a determinization construction that applies in
certain instances of coalgebraic language semantics of
automata~\cite{JSS15}. Under suitable conditions on the graded monad,
this pre-determinization indeed functions as an actual
determinization, i.e.~it turns the graded semantics into standard
coalgebraic behavioural equivalence for a functor that we construct on
the Eilenberg-Moore category. This construction simultaneously
generalizes, for instance, the standard determinization of serial labelled
transition systems for trace equivalence and the identification of
similarity as behavioural equivalence for a suitable functor on
posets~\cite{KKV12} (specialized to join
semilattices).
While graded semantics has so far been constrained to apply only to
finite-depth equivalences (finite-depth bisimilarity, finite trace
equivalence, etc.), we obtain, under the mentioned conditions on the
graded monad, a new notion of infinite-depth equivalence induced by a
graded semantics, namely via the (pre-)determinization. It turns out
the natural infinite version of our equivalence game captures
precisely this infinite-depth equivalence. This entails
a fixpoint characterization of graded semantics on finite systems,
giving rise to perspectives for a generic algorithmic treatment.
\paragraph{Related Work.} Game characterizations of process
equivalences are an established theme in concurrency theory;
they tend to be systematic but not generic~\cite{Glabbeek90,CD08}.
Work on games for spectra of quantitative equivalences is positioned
similarly~\cite{FLT11,fl:quantitative-spectrum-journal}. The idea of developing \mbox{(bi)}simulation
games in coalgebraic generality goes back to work on branching-time
simulations based on relators~\cite{Baltag00}. There
is recent highly general work, conducted in a fibrational setting, on
so-called codensity games for various notions of
bisimilarity~\cite{kkhkh:codensity-games}. The emphasis in this work
is on generality w.r.t.~the measure of bisimilarity, covering,
e.g.~two-valued equivalences, metrics, pre-orders, and topologies,
while,
viewed through the lens of spectra of equivalences, the
focus remains on branching time. The style of the codensity game is
inspired by modal logic, in the spirit of coalgebraic Kantorovich
liftings~\cite{BaldanEA18,WildSchroder20};
Spoiler plays predicates thought of as arguments of modalities.
Work focused more specifically on games for Kantorovich-style
coalgebraic behavioural equivalence and behavioural
metrics~\cite{km:bisim-games-logics-metric} similarly concentrates on
the branching-time case. A related game-theoretic characterization is
implicit in work on $\Lambda$-(bi)similarity~\cite{GorinSchroder13},
also effectively limited to branching-time. Comonadic game
semantics~\cite{ADW17, AS18, CD21} proceeds in the opposite way
compared to the mentioned work and ours: It takes existing games as
the point of departure, and then aims to develop categorical models.
Graded semantics was developed in a line of work mentioned
above~\cite{MPS15,DMS19,FMS21a}. The underlying notion of
graded monad stems from algebro-geometric work~\cite{Smirnov08}
and was introduced into computer science (in substantially higher generality)
in work on the semantics of effects~\cite{Katsumata14}. Our pre-determinization
construction relates to work on coalgebras over algebras~\cite{BK11}.
\paragraph*{Organization.} We discuss preliminaries on categories,
coalgebras, graded monads, and games in \cref{sec:prelims}. We
recall the key notions of graded algebra and canonical graded algebra
in~\cref{sec:algebras}, and graded semantics in~\cref{sec:semantics}.
We introduce our pre-determinization construction in \cref{sec:determinization},
and finite behavioural equivalence games in \cref{S:games}. In
\cref{sec:infinte-depth}, we consider the infinite version of the
game, relating it to behavioural equivalence on the pre-determinization. We
finally consider specific cases in detail in~\cref{sec:cases}.
\section{Preliminaries}\label{sec:prelims}
We assume basic familiarity with category theory~\cite{AHS90}. We
will review the necessary background on coalgebra~\cite{Rutten00},
graded monads~\cite{Smirnov08,MPS15}, and the standard bisimilarity
game~\cite{Stirling99}.
\paragraph*{The category of sets.}
Unless explicitly mentioned otherwise, we will
work in the category $\mathbf{Set}$ of sets and functions
(or \emph{maps}), which is both complete and
cocomplete. We fix a terminal object $1=\{\star\}$
and use $!_{X}$ (or just $!$ if confusion is
unlikely) for the unique map $X\to 1$.
In the subsequent sections, we will mostly draw examples from (slight
modifications of) the following (endo-)functors on $\mathbf{Set}$. The
\emph{powerset functor} $\pow$ sends each set $X$ to its set of
subsets $\pow X$, and acts on a map $f\colon X\to Y$ by taking direct
images, i.e.~$\pow f(S):= f[S]$ for $S\in\pow X$.
We write $\pow_{\mathsf f}$ for the \emph{finitary powerset functor} which sends
each set to its set of finite subsets; the action of $\pow_{\mathsf f}$ on maps is
again given by taking direct images. Similarly, $\pow^+$ denotes the
non-empty powerset functor
($\pow^+(X)=\{Y\in\pow(X)\mid Y\neq\emptyset\}$), and $\pow_{\mathsf f}^+$ its
finitary subfunctor
($\pow_{\mathsf f}^+(X)=\{Y\in\pow_{\mathsf f}(X)\mid Y\neq\emptyset\}$).
We write $\CalD X$ for the set of \emph{distributions} on a set $X$:
maps $\mu\colon X\to [0,1]$ such that $\sum_{x\in X}\mu(x)=1$. A
distribution $\mu$ is \emph{finitely supported} if the set
$\{x\in X~|~\mu(x)\neq 0\}$ is finite. The set of finitely supported
distributions on $X$ is denoted $\CalD_f X$. The assignment
$X\mapsto\CalD X$ is the object-part of a functor: given
$f\colon X\to Y$, the map $\CalD f\colon\CalD X\to \CalD Y$ assigns to a
distribution $\mu\in\CalD X$ the \emph{image} distribution
$\CalD f(\mu)\colon Y\to [0, 1]$ defined by
$\CalD f(\mu)(y)=\sum_{x\in X\mid f(x)=y} \mu(x)$. Then,
$\CalD f(\mu)$ is finitely supported if $\mu$ is, so $\CalD_f$ is
functorial as well.
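As a concrete illustration (a minimal Python sketch under our own dict
encoding of finitely supported distributions; it is not part of the formal
development), the action of $\CalD$ on a map is the pushforward of
distributions described above:

```python
from collections import defaultdict

def pushforward(f, mu):
    """The action of the distribution functor on a map f: the image
    distribution satisfies (D f)(mu)(y) = sum of mu(x) over x with f(x) = y.
    Distributions are encoded as dicts from points to probabilities."""
    nu = defaultdict(float)
    for x, p in mu.items():
        nu[f(x)] += p
    return dict(nu)

# Collapsing a three-point distribution along parity:
mu = {1: 0.25, 2: 0.25, 3: 0.5}
nu = pushforward(lambda x: x % 2, mu)   # odd points 1 and 3 are merged
```

Note that the total mass $1$ is preserved, so the pushforward of a
distribution is again a distribution.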
\takeout{
We will also consider the \emph{contravariant powerset functor}
$\Q\colon\mathbf{Set}\to\mathbf{Set}^{op}$, which acts on sets according to $\pow$,
i.e.~$\Q(X):= \pow X$, but sends a map $f\colon X\to Y$ to the
\emph{inverse image map} $\Q f\colon\pow Y\to \pow X$ defined by
$S\mapsto f^{-1}[S]$. The composite functor
$\CalN:=(\mathbf{Set}\xra{\Q}\mathbf{Set}^{op}\xra{\Q^{op}}\mathbf{Set})$ is called the
\emph{neighborhood functor}. Explicitly, $\CalN$ is the endofunctor
on $\mathbf{Set}$ which sends a set $X$ to the set $\pow(\pow X)$ of all
\emph{neighborhoods} of~$X$, and it sends a map $f\colon X\to Y$
to the map $\CalN f\colon\CalN X\to \CalN Y$ defined by the
assignment
$\mathscr S\mapsto\{S\in \pow Y~|~f^{-1}[S]\in \mathscr S\}$.
\takeout{
We will also work with the \emph{(finitely supported)
distribution functor} $\CalD\colon\mathbf{Set}\to\mathbf{Set}$ is the functor
which sends a set $X$ to the set of \emph{(finitely supported)
probability distributions on $X$}. That is, $\CalD X$ consists of
all maps $\varphi\colon X\to [0,1]$ (where $[0,1]$ denotes the
real unit interval) such that
\[
\sum_{x\in X}\varphi(x) = 1
\]
and such that the \emph{support}
$\Support(\varphi):=\{x\in X\mid\varphi(x)\ne 0\}$ is finite. It
is well known (and easy to see) that each $\varphi\in\CalD X$
is equivalently presented as the formal convex sum
\[
\sum_{x\in\Support(\varphi)} \varphi(x)\cdot x.
\]
In this notation, the action $\CalD f\colon \CalD X\to\CalD Y$
on a map $f\colon X\to Y$ is conveniently described via the
assignment
$\sum_{i\leq k}r_i\cdot x_i \mapsto \sum_{i\leq k} r_i\cdot f(x_i).$
\noindent
\paragraph*{Coalgebra.}
We will review the
basic definitions and results of \emph{universal
coalgebra}~\cite{Rutten00}, a categorical framework for the uniform
treatment of a variety of reactive system types.
\begin{defn}
For an endofunctor $G\colon\CatC\to\CatC$ on a
category $\CatC$, a \emph{$G$-coalgebra} (or just
\emph{coalgebra}) is a pair $(X, \gamma)$ consisting
of an object $X$ in $\CatC$ and a morphism
$\gamma\colon X\to GX$.
A \emph{(coalgebra) morphism} from $(X, \gamma)$ to
a coalgebra $(Y, \delta)$ is a morphism $h\colon X\to Y$
such that $\delta\cdot h = Gh\cdot\gamma$.
\end{defn}
\noindent Thus, for $\CatC = \mathbf{Set}$, a coalgebra consists of a set $X$
of \emph{states} and a map $\gamma\colon X\to GX$, which we view as a
transition structure that assigns to each state $x\in X$ a structured
collection $\gamma(x)\in GX$ of \emph{successors} in $X$.
\begin{expl}\label{E:coalg}
We describe some examples of functors on $\mathbf{Set}$ and their coalgebras
for subsequent use. Fix a finite set $\A$ of
\emph{actions}.
\begin{enumerate}
\item\label{E:coalg:1} Coalgebras for the functor $G=\pow(\A\times -)$
are just
\emph{$\A$-labelled transition systems (LTS)}: Given such a
coalgebra $(X, \gamma)$, we can view the elements
$(a, y)\in\gamma(x)$ as the $a$-successors of~$x$. We call
$(X, \gamma)$ \emph{finitely branching} (resp.~\emph{serial})
if~$\gamma(x)$ is finite (resp.~non-empty) for all~$x\in X$.
Finitely branching (resp.~serial) LTS are coalgebras for
the functor $G=\pow_{\mathsf f}(\A\times -)$ (resp. $\pow^+(\A\times -)$).
\takeout{
\item A coalgebra for the neighborhood functor $\CalN$ is a
\emph{neighborhood frame}~\cite{HKP09}: a map
$\nu\colon X\to \CalN X$ assigning each state $x\in X$ to its
set of \emph{neighbourhoods} $\nu(x)\in\CalN X$.%
\smnote{I wouldn't write $\pow(\pow X)$; this gives the wrong
impression that $\N$ is the double covariant powerset functor.}
A morphism
of neighborhood frames (also: \emph{bounded morphism}) from
$(X, \nu)$ to $(X', \nu')$ is a map $f\colon X\to X'$ such that for
all $x\in X$: $f^{-1}[V]\in\nu(x)$ iff $V\in\nu'(f(x))$ for all
$V\in\pow X'$.
\item A coalgebra $(X,\gamma)$ for the functor $G=\CalD(\A\times -)$
is a \emph{(generative) probabilistic transition system} (PTS): The
transition structure~$\gamma$ assigns to each state $x\in X$ a
distribution $\gamma(x)$ on pairs $(a,y)\in\A\times X$. We think of
$\gamma(x)(a, y)$ as the probability of executing an $a$-transition
to state~$y$ while sitting in state~$x$.
A PTS $(X, \gamma)$ is \emph{finitely branching} if $\gamma(x)$ is
finitely supported for all $x\in X$; then, finitely branching PTS
are coalgebras for $\CalD_f(\A\times -)$.
\end{enumerate}
\end{expl}
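To make the coalgebraic reading of LTS concrete, the following Python
sketch (an illustrative encoding of our own choosing) represents a
coalgebra $\gamma\colon X\to\pow_{\mathsf f}(\A\times X)$ as a dict from
states to finite sets of (action, successor) pairs:

```python
# An A-labelled transition system as a coalgebra gamma : X -> P_f(A x X),
# encoded as a dict from states to finite sets of (action, successor) pairs.
gamma = {
    'p':  {('a', 'p1'), ('a', 'p2')},
    'p1': {('b', 'p')},
    'p2': set(),    # gamma(p2) is empty, so this LTS is not serial
}

def successors(gamma, x, a):
    """The a-successors of state x, read off from the transition structure."""
    return {y for (b, y) in gamma[x] if b == a}
```

Since every `gamma[x]` is finite, this LTS is finitely branching; it fails
to be serial because of the deadlock state `p2`.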
Given coalgebras $(X, \gamma)$ and $(Y, \delta)$ for an endofunctor
$G$ on $\mathbf{Set}$, states $x\in X$ and $y\in Y$ are
\emph{$G$-behaviourally equivalent} if there exist coalgebra morphisms
\[
(X,\gamma) \xra{f} (Z,\zeta) \xla{g} (Y,\delta)
\]
such that $f(x)= g(y)$. Behavioural equivalence can be approximated
via the (initial $\omega$-segment of the) \emph{final chain}
$(G^n1)_{n\in\omega}$, where $G^n$ denotes $n$-fold application of
$G$.
The \emph{canonical cone} of a coalgebra
$(X, \gamma)$ is then the family of maps $\gamma_n\colon X\to G^n1$
defined inductively for $n\in\omega$ by
\begin{align*}
\gamma_0 &= \big(X\xra{!} 1\big), \text{and} \\
\gamma_{n+1} &= \big(X \xra{\gamma} GX \xra{G\gamma_n} GG^n 1 = G^{n+1}1\big).
\end{align*}
States $x, y\in X$ are \emph{finite-depth behaviourally equivalent} if
$\gamma_n(x)= \gamma_n(y)$ for all $n \in \omega$.
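The canonical cone can be computed directly for $G=\pow_{\mathsf
f}(\A\times -)$; the following Python sketch (our own encoding, with
elements of $G^n1$ represented as nested frozensets) implements the
inductive definition of~$\gamma_n$ and exhibits two states that agree at
depth~$1$ but are separated at depth~$2$:

```python
def canonical_cone(gamma, n, x):
    """gamma_n(x) in G^n 1 for G = P_f(A x -): gamma_0 is the constant map
    into 1, and gamma_{n+1}(x) = { (a, gamma_n(y)) : (a, y) in gamma(x) }."""
    if n == 0:
        return '*'          # the unique element of the terminal object 1
    return frozenset((a, canonical_cone(gamma, n - 1, y)) for (a, y) in gamma[x])

# x can do a.b or a.c with an early branch; y branches late:
gamma = {
    'x': {('a', 'x1'), ('a', 'x2')}, 'x1': {('b', 'z')}, 'x2': {('c', 'z')},
    'y': {('a', 'y1')}, 'y1': {('b', 'z'), ('c', 'z')}, 'z': set(),
}
depth1_equal = canonical_cone(gamma, 1, 'x') == canonical_cone(gamma, 1, 'y')
depth2_equal = canonical_cone(gamma, 2, 'x') == canonical_cone(gamma, 2, 'y')
```

Here `depth1_equal` holds while `depth2_equal` fails, so `x` and `y` are
not finite-depth behaviourally equivalent.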
\begin{rem}\label{rem:finite-depth}
It follows from results of Worrell~\cite{Worrell05} that behavioural
equivalence and finite-depth behavioural equivalence coincide for
finitary functors on $\mathbf{Set}$, where a functor $G$ on $\mathbf{Set}$ is
\emph{finitary} if it preserves filtered colimits. Equivalently, for
every set $X$ and each $x \in GX$ there exists a finite subset $Y
\subseteq X$ such that $x \in Gi[GY]$, where $i\colon Y \hookrightarrow X$ is
the inclusion map~\cite[Cor.~3.3]{amsw19-1}.
\end{rem}
\paragraph*{Bisimilarity games.}
We briefly recapitulate the classical \emph{bisimilarity game}, a
two-player graph game between the players Duplicator (D) and Spoiler
(S); player~D tries to show that two given states are bisimilar,
while~S tries to refute this. \emph{Configurations} of the game are
pairs $(x, y)\in X\times X$ of states in a LTS $(X, \gamma)$. The game
proceeds in rounds, starting from the \emph{initial configuration},
which is just the contested pair of states. In each round, starting
from a configuration $(x, y)$,~S picks one of the sides, say,~$x$, and
then selects an action $a\in\A$ and an $a$-successor~$x'$ of~$x$;
player~D then selects a corresponding successor on the other side, in
this case an $a$-successor~$y'$ of~$y$. The game then reaches the new
configuration $(x', y')$. If a player gets stuck, the play is
\emph{winning} for their opponent, whereas any infinite play is
winning for~D.
It is well known (e.g.~\cite{Stirling99}) that~D has a
winning strategy in the bisimilarity game at a configuration $(x, y)$
iff $(x, y)$ is a pair of bisimilar states. Moreover, for finitely
branching LTS, an equivalent formulation may be given in terms of the
\emph{$n$-round bisimilarity game}:
the rules of the $n$-round game are the same as those above, except
that~D now wins as soon as~$n$ rounds have been played. In fact, a
configuration $(x, y)$ is a bisimilar pair precisely if~D has
a winning strategy in the $n$-round bisimilarity game for all
$n\in\omega$.
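For finitely branching LTS, the $n$-round game is directly executable; the
following Python sketch (an illustration under our own dict encoding of
LTS, not an algorithmically optimized checker) decides whether~D has a
winning strategy by exhaustively quantifying over the moves of both
players:

```python
def duplicator_wins(gamma, x, y, n):
    """Does Duplicator win the n-round bisimilarity game from (x, y)?
    Spoiler picks a side and a transition on it; Duplicator must answer
    with an equally labelled transition on the other side."""
    if n == 0:
        return True
    for (u, v) in ((x, y), (y, x)):          # Spoiler chooses a side
        for (a, u2) in gamma[u]:             # ... and an a-transition to u2
            # Duplicator needs some a-successor v2 of v that wins the rest
            if not any(b == a and duplicator_wins(gamma, u2, v2, n - 1)
                       for (b, v2) in gamma[v]):
                return False
    return True

# Early vs. late branching: trace-equivalent, but Spoiler wins in 2 rounds.
gamma = {
    'x': {('a', 'x1'), ('a', 'x2')}, 'x1': {('b', 'z')}, 'x2': {('c', 'z')},
    'y': {('a', 'y1')}, 'y1': {('b', 'z'), ('c', 'z')}, 'z': set(),
}
```

In this example D survives one round from `('x', 'y')` but loses the
two-round game: after Spoiler moves `y -a-> y1`, neither `x1` nor `x2`
can match both of `y1`'s transitions.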
We mention just one obvious variation of this game that characterizes
a different spot on the linear-time/branching-time spectrum: The
\emph{mutual-simulation game} is set up just like the bisimulation
game, except that~S may only choose his side once, in the first round,
and then has to move on that side in all subsequent rounds (in the
bisimulation game, he can switch sides in every round if he
desires). It is easily checked that states~$x,y$ are mutually similar
iff~D wins the position $(x,y)$ in the mutual-simulation game. We will
see that both these games (and many others) are obtained
as instances of our generic notion of graded equivalence game.
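The side restriction on Spoiler is easy to make executable as well; the
following Python sketch (our own encoding; the one-directional round
bound plays the role of the $n$-round game) locks Spoiler to the left
side, so that~D wins iff the left state is simulated by the right one:

```python
def simulated_by(gamma, x, y, n):
    """n-round simulation game with Spoiler locked to the left side:
    Duplicator wins from (x, y) iff y simulates x up to depth n."""
    if n == 0:
        return True
    return all(any(b == a and simulated_by(gamma, x2, y2, n - 1)
                   for (b, y2) in gamma[y])
               for (a, x2) in gamma[x])

def mutually_similar(gamma, x, y, n):
    """Mutual similarity up to depth n: each state simulates the other."""
    return simulated_by(gamma, x, y, n) and simulated_by(gamma, y, x, n)

# Early vs. late branching: x is simulated by y, but not conversely.
gamma = {
    'x': {('a', 'x1'), ('a', 'x2')}, 'x1': {('b', 'z')}, 'x2': {('c', 'z')},
    'y': {('a', 'y1')}, 'y1': {('b', 'z'), ('c', 'z')}, 'z': set(),
}
```

On this example the similarity preorder is strictly asymmetric between
`x` and `y`, so the two states are similar in one direction only and
hence not mutually similar.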
\paragraph*{Graded monads.}
We now review some background material on graded monads
\cite{Smirnov08, MPS15}:
\begin{defn}\label{D:gradedmonad}
A \emph{graded monad} $\M$ on a category $\CatC$ is a triple
$(M, \eta, \mu)$ where $M$ is a family of functors
$M_n\colon\CatC\to\CatC$ on $\CatC$ ($n\in\omega$),
$\eta\colon \mathsf{id} \to M_0$ is a natural transformation (the
\emph{unit}), and $\mu$ is a family of natural transformations%
\begin{equation}
\mu^{n,k}\colon M_nM_k\to M_{n+k} \tag{$n,k\in\omega$}
\end{equation}
(the \emph{multiplication}) such that the following diagrams commute
for all $n,m,k\in\omega$:
\begin{equation}\label{diagram:unitlaw}
\begin{tikzcd}[column sep=40]
&
M_n
\arrow[ld, "M_n\eta"'] \arrow[rd, "\eta M_n"] \arrow[d, "\Id"]
\\
M_nM_0
\arrow[r, "{\mu^{n,0}}"]
&
M_n
&
M_0M_n \arrow[l, "{\mu^{0,n}}"']
\end{tikzcd}
\end{equation}
\begin{equation}\label{diagram:associativelaw}
\begin{tikzcd}[column sep = 60]
M_nM_kM_m
\arrow[r, "{M_n\mu^{k,m}}"]
\arrow[d, "{\mu^{n,k}M_m}"']
&
M_{n}M_{k+m} \arrow[d, "{\mu^{n,k+m}}"]
\\
M_{n+k}M_m \arrow[r, "{\mu^{n+k,m}}"]
&
M_{n+k+m}
\end{tikzcd}
\end{equation}
We refer to~\eqref{diagram:unitlaw} and~\eqref{diagram:associativelaw}
as the \emph{unit} and \emph{associative} laws of $\M$,
respectively. We call $\M$ \emph{finitary} if all of the
functors $M_n\colon\CatC\to\CatC$ are finitary.
\end{defn}
\noindent
The above notion of graded monad is due to Smirnov \cite{Smirnov08}.
Katsumata~\cite{Katsumata14}, Fujii et al.~\cite{FKM16}, and
Melli\`es~\cite{Mellies17} consider a more general notion of graded (or
\emph{parametrized}) monad given as a lax monoidal action of a
monoidal category $\Mon$ (representing the system of grades) on a
category $\CatC$. Graded monads in the above sense are recovered by
taking~$\Mon$ to be the (discrete category induced by the) monoid
$(\N, +, 0)$.
The graded monad laws imply that the triple $(M_0, \eta, \mu^{0,0})$
is a (plain) monad on the base category $\CatC$; we use this freely
without further mention.
\begin{expl}\label{E:graded-monad}
We review some salient constructions~\cite{MPS15} of graded
monads on $\mathbf{Set}$ for later use.
\begin{enumerate}
\item\label{E:graded-monad:1} Every endofunctor $G$ on $\mathbf{Set}$
induces a graded monad~$\M_G$ with underlying endofunctors
$M_n:= G^n$ (the $n$-fold composite of~$G$ with itself); the unit
$\eta_X\colon X\to G^0X= X$ and multiplication
$\mu_X^{n,k}\colon G^nG^kX\to G^{n+k}X$ are all identity maps.
We will later see that~$\M_G$ captures (finite-depth) $G$-behavioural
equivalence.
\item\label{item:graded-kleisli}
Let $(T, \eta, \mu)$ be a monad on $\mathbf{Set}$, let $F$ be
an endofunctor on $\mathbf{Set}$, and let $\lambda\colon FT\to TF$
be a natural transformation such that
\[
\lambda\cdot F\eta=\eta F
\qquad\text{and}\qquad
\lambda \cdot F\mu = \mu F \cdot T\lambda \cdot \lambda T
\]
(i.e.~$\lambda$ is a distributive law of the functor~$F$ over the
monad~$T$). For each $n\in\omega$, let
$\lambda^n\colon F^nT\to TF^n$ denote the natural transformation
defined inductively by
\[
\lambda^0:= \mathsf{id}_T; \qquad \lambda^{n+1}:= \lambda^n F\cdot F^n\lambda.
\]
We obtain a graded monad with $M_n:= TF^n$, unit $\eta$, and
components $\mu^{n,k}$ of the multiplication given as the
composites
\[
TF^nTF^k\xra{T\lambda^n F^k} TTF^nF^k = TTF^{n+k}\xra{ \mu F^{n+k}} TF^{n+k}.
\]
Such graded monads relate strongly to Kleisli-style coalgebraic
trace semantics~\cite{HJS07}.
\item\label{item:T-traces} As an instance of the example above, we obtain a graded monad
$\M_T(\A)$ with $M_n= T(\A^n\times -)$ for every monad $T$ on
$\mathbf{Set}$ and every set~$\A$. Thus, $\M_T(\A)$ is a graded monad for
traces under effects specified by~$T$; e.g.~for $T=\CalD$, we will
see that $\M_T(\A)$ captures probabilistic trace equivalence on
PTS.
\item\label{E:graded-monad:4}Similarly, given a monad $T$, an
endofunctor $F$, both on the same category $\CatC$, and a
distributive law $\lambda\colon TF \to FT$ of~$T$ over $F$, we
obtain a graded monad with $M_n := F^nT$, unit and
multiplication given analogously as in
item~\ref{item:graded-kleisli} above
(see~\cite[Ex.~5.2.6]{MPS15}). Such graded monads relate strongly
to Eilenberg-Moore-style coalgebraic language semantics~\cite{bms13}.
\end{enumerate}
\end{expl}
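For the graded monad $\M_{\pow_{\mathsf f}}(\A)$ of
item~\ref{item:T-traces} with $T=\pow_{\mathsf f}$, the unit and
multiplication admit a very concrete description; the following Python
sketch (our own encoding of $M_nX=\pow_{\mathsf f}(\A^n\times X)$ as sets
of (word, element) pairs, with grades implicit in word lengths) checks
the unit laws on a sample:

```python
def unit(x):
    """eta_X : X -> M_0 X for M_n X = P_f(A^n x X); x maps to {(empty word, x)}."""
    return frozenset({('', x)})

def mu(t):
    """mu^{n,k} : M_n M_k X -> M_{n+k} X, concatenating the two word layers.
    t is a finite set of pairs (w, s) with w a length-n word and s in M_k X."""
    return frozenset((w + v, x) for (w, s) in t for (v, x) in s)

# A sample element of M_1 M_1 X over A = {a, b} and X = {0, 1}:
t = frozenset({('a', frozenset({('b', 0), ('a', 1)})),
               ('b', frozenset({('a', 0)}))})
flat = mu(t)   # mu^{1,1}(t), an element of M_2 X

# The unit laws, checked on a sample element m1 of M_1 X:
m1 = frozenset({('a', 0), ('b', 1)})
law_left  = mu(frozenset((w, unit(x)) for (w, x) in m1)) == m1   # mu^{1,0} . M_1 eta
law_right = mu(unit(m1)) == m1                                   # mu^{0,1} . eta M_1
```

Associativity of word concatenation is what makes the associative law of
the graded monad hold in this encoding.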
Graded variants of Kleisli triples have been introduced and proved
equivalent to graded monads (in a more general setting)
by Katsumata~\cite{Katsumata14}:
\begin{notn}\label{N:star}
We will employ the \emph{graded Kleisli star} notation:
for $n\in\omega$ and a morphism $f\colon X\to M_k Y$, we
write%
\begin{equation}\label{Eqn:Kleisli-star}
f^*_n:= \big(M_nX\xra{M_nf} M_nM_k\xra{\mu^{n,k}}M_{n+k}Y\big).
\end{equation}
In this way, we obtain a morphism satisfying the following graded
variants~\cite[Def.~2.3]{Katsumata14} of the usual laws of the
Kleisli star operation for ordinary monads: for every $m\in\omega$ and
morphisms $f\colon X\to M_nY$ and $g\colon Y\to M_kZ$ we have:%
\begin{align}
f^*_0\cdot\eta_X &= f, \label{item:star-2} \\
(\eta_X)^*_n &= \mathsf{id}_{M_n X}, \label{item:start-3}\\
(g^*_{n}\cdot f)^*_m &= g^*_{m+n}\cdot f_m^*. \label{item:star-1}
\end{align}
\end{notn}
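The graded Kleisli star laws can be checked concretely for the graded
monad $M_n=\pow_{\mathsf f}(\A^n\times -)$; the following Python sketch
(our own encoding, with elements of $M_nX$ as finite sets of (word,
element) pairs and grades implicit in word lengths) verifies
laws~\eqref{item:star-2} and~\eqref{item:star-1} on sample morphisms:

```python
def mu(t):
    """mu^{n,k} for M_n X = P_f(A^n x X): concatenate the two word layers."""
    return frozenset((w + v, x) for (w, s) in t for (v, x) in s)

def star(f):
    """The graded Kleisli star: for f : X -> M_k Y, f*_n : M_n X -> M_{n+k} Y
    is the composite mu^{n,k} . M_n f."""
    return lambda t: mu(frozenset((w, f(x)) for (w, x) in t))

# Sample morphisms f : X -> M_1 Y and g : Y -> M_1 Z with X = Y = Z = {0, 1}:
f = lambda x: frozenset({('a', x), ('b', 1 - x)})
g = lambda y: frozenset({('c', y)})

t = frozenset({('a', 0), ('b', 1)})      # an element of M_1 X
lhs = star(lambda x: star(g)(f(x)))(t)   # (g*_1 . f)*_1 applied to t
rhs = star(g)(star(f)(t))                # g*_2 . f*_1 applied to t
```

Here `lhs == rhs` instantiates law~\eqref{item:star-1}, and applying
`star(f)` to the unit `frozenset({('', 0)})` returns `f(0)`, instantiating
law~\eqref{item:star-2}.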
\paragraph*{Graded theories.}
Graded theories, in a generalized form in which arities of operations
are not restricted to be finite, have been proved equivalent to graded
monads on $\mathbf{Set}$~\cite{MPS15} (the finitary case was implicitly
covered already by Smirnov~\cite{Smirnov08}). We work primarily with
the finitary theories below; we consider infinitary variants of such
theories only when considering infinite-depth equivalences
(\cref{sec:infinte-depth}).
\begin{defn}\label{def:theory}
\begin{enumerate}
\item A \emph{graded signature} is a set $\Sigma$ of \emph{operations}
$f$ equipped with a finite \emph{arity} $\mathsf{ar}(f) \in \omega$ and a
finite \emph{depth} $d(f)\in\omega.$ An operation of arity 0 is
called a \emph{constant}.
\item
Let $X$ be a set of \emph{variables} and let
$n\in\omega$. The set $\Termsarg{\Sigma, n}(X)$
of \emph{$\Sigma$-terms of uniform depth $n$ with
variables in $X$} is defined inductively as follows:
every variable $x\in X$ is a term of uniform depth
$0$ and, for $f\in\Sigma$ and
$t_1,\dots, t_{\mathsf{ar}(f)}\in\Termsarg{\Sigma, k}(X)$,
$f(t_1,\dots, t_{\mathsf{ar}(f)})$ is a $\Sigma$-term
of uniform depth $k+ d(f)$. In particular,
a constant $c$ has uniform depth $k$ for all $k\geq d(c)$.
\item A \emph{graded $\Sigma$-theory} is a set $\E$ of
\emph{uniform-depth equations}: pairs $(s, t)$, written `$s=t$',
such that $s,t\in\Termsarg{\Sigma, n}(X)$ for some $n\in\omega$;
%
we say that $(s, t)$ is \emph{depth-$n$}.
A theory is \emph{depth-$n$} if all of its equations and
operations have depth at most $n$.
\end{enumerate}
\end{defn}
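Uniform depths of terms are easily computed; the following Python sketch
uses a small hypothetical signature of our own (in the style of the graded
theories considered later, with unary depth-1 actions, a binary depth-0
join, and a depth-1 constant) and illustrates that a constant has many
uniform depths while a mixed term may have none:

```python
def uniform_depths(term, sig, max_depth=5):
    """The set of n <= max_depth such that term has uniform depth n.
    A term is a variable (a string) or a tuple (f, t1, ..., t_ar(f));
    sig maps each operation f to its pair (arity, depth)."""
    if isinstance(term, str):                  # variables have uniform depth 0
        return {0}
    f, *args = term
    arity, d = sig[f]
    assert len(args) == arity, f'{f} expects {arity} arguments'
    if not args:                               # constants: any k >= d(f)
        return set(range(d, max_depth + 1))
    # all arguments must share a uniform depth k; the term then has k + d(f)
    common = set.intersection(*(uniform_depths(t, sig, max_depth) for t in args))
    return {k + d for k in common if k + d <= max_depth}

# Hypothetical signature: depth-1 actions a, b; depth-0 join +; depth-1 constant.
sig = {'a': (1, 1), 'b': (1, 1), '+': (2, 0), 'zero': (0, 1)}
```

For instance, `('+', 'x', ('zero',))` has no uniform depth under this
signature: the variable forces depth $0$ while the constant forces depth
at least $1$.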
\begin{notn}\label{N:substitution}
A \emph{uniform-depth substitution} is a map
$\sigma\colon X\to\Termsarg{\Sigma, k}(Y)$,
where $k\in\omega$ and $X, Y$ are sets.
Then $\sigma$ extends to a family of maps
$\bar{\sigma}_n\colon\Termsarg{\Sigma, n}(X)\to\Termsarg{\Sigma, k+n}(Y)$
($n\in\omega$) defined recursively by
\[
\bar{\sigma}_n(f(t_1,\dots, t_{\mathsf{ar}(f)})) =
f(\bar{\sigma}_m(t_1),\dots, \bar{\sigma}_m(t_{\mathsf{ar}(f)})),
\]
where $t_i\in\Termsarg{\Sigma, m}(X)$ and
$d(f)+m = n$. For a term $t\in\Termsarg{\Sigma, n}(X)$,
we also write $t\sigma:= \bar{\sigma}_n(t)$ when
confusion is unlikely.
\end{notn}
\noindent
Given a graded theory $\T=(\Sigma, \E)$, we have essentially the
standard notion of equational derivation (sound and complete over
graded algebras, cf.\ \cref{sec:algebras}), restricted to
uniform-depth equations. Specifically, the system includes the
expected rules for reflexivity, symmetry, transitivity, and
congruence, and moreover allows substituted introduction of axioms: If
$s=t$ is in~$\E$ and~$\sigma$ is a uniform-depth substitution, then
derive the (uniform-depth) equation $s\sigma=t\sigma$. (A substitution
rule that more generally allows uniform-depth substitution into
derived equations is then admissible.) For a set~$Z$ of uniform-depth
equations, we write
\begin{equation*}
Z\vdash s=t
\end{equation*}
if the uniform-depth equation $s=t$ is derivable from equations in~$Z$
in this system; note that unlike the equational axioms in~$\E$, the
equations in~$Z$ cannot be substituted into in such a derivation (they
constitute assumptions on the variables occurring in~$s,t$).
We then see that~$\T$ induces a graded monad~$\M_{\T}$ with $M_nX$
being the quotient of $\Termsarg{\Sigma, n}(X)$ modulo derivable
equality under $\E$; the unit and multiplication of $\M_{\T}$ are
given by the inclusion of variables as depth-0 terms and the
collapsing of layered terms, respectively. Conversely, every graded
monad arises from a graded theory in this way~\cite{MPS15}.
We will restrict attention to graded monads presented by depth-$1$
graded theories:
\begin{defn}
A \emph{presentation} of a graded monad $\M$ is a graded theory $\T$
such that $\M\cong\M_{\T}$, in the above notation. A graded monad
is \emph{depth-1} if it has a depth-$1$ presentation.
\end{defn}
\begin{expl}\label{E:graded-theory}
Fix a set $\A$ of actions. We describe \mbox{depth-$1$} graded theories
associated (via the induced behavioural equivalence,
\cref{sec:semantics}) to standard process equivalences on LTS and
PTS~\cite{DMS19}.
\begin{enumerate}
\item\label{item:jsl-a} The graded theory $\JSL(\A)$ of
\emph{$\A$-labelled join semilattices} has as depth-1 operations all
formal sums
\[
\textstyle\sum_{i=1}^na_i(-),
\quad
\text{for \mbox{$n\ge 0$} and $a_1,\dots,a_n\in\A$}
\]
(and no depth-$0$ operations); we write~$0$ for
the empty formal sum. The axioms of $\JSL(\A)$ consist of all
depth-1 equations $\sum_{i=1}^na_i(x_i) = \sum_{j=1}^m b_j(y_j)$
(where the~$x_i$ and~$y_j$ are variables, not necessarily distinct)
such that
$\{(a_i, x_i)~|~1\le i\leq n\}=\{(b_j, y_j)~|~1\le j\leq m\}$. The
graded monad induced by $\JSL(\A)$ is $\M_G$ for
$G=\pow_{\mathsf f}(\A\times(-))$
(cf.~\cref{E:graded-monad}.\ref{E:graded-monad:1}).
\item\label{item:pt-a} The \emph{graded theory of probabilistic
traces}, $\mathsf{PT}(\A)$, has a depth-0 convex sum operation
\[
\textstyle\sum^n_{i = 1} p_i\cdot(-)
\quad
\text{for all $p_1, \ldots, p_n \in [0,1]$ such that
$\sum^n_{i=1} p_i = 1$}
\]
and unary depth-1 operations $a(-)$ for all actions $a\in\A$. As
depth-0 equations, we take the usual equational axiomatisation of
convex algebras, which is given by the equation
$\sum^n_{j = 1}\delta_{ij}\cdot x_j = x_i$ (where $\delta_{ij}$
denotes the Kronecker delta function) and all instances of the
equation scheme
\[
\sum^n_{i = 1} p_i\cdot \sum^m_{j=1} q_{ij}\cdot x_j =
\sum_{j=1}^m \Big(\sum^{n}_{i=1}p_iq_{ij}\Big)\cdot x_j.
\]
We further impose depth-1 equations
stating that actions distribute over
convex sums:
\[
a\Big(\sum_{i=1}^np_i\cdot x_i\Big) = \sum_{i=1}^n p_i\cdot a(x_i).
\]
The theory $\mathsf{PT}(\A)$ presents~$\M_{\CalD_f}(\A)$,
where $\CalD_f$ is the finitely supported distribution monad
(cf.~\cref{E:graded-monad}.\ref{item:T-traces}).
\item\label{item:traces-a} We mention two variations on the graded
theory above. First, the \emph{graded theory of (non-deterministic)
traces} presenting $\M_{\pow_{\mathsf f}}(\A)$ has depth-0
operations~$+$,~$0$ and equations for join-semilattices with bottom,
and unary depth-1 operations~$a$ for $a\in\A$ as in
\ref{item:jsl-a} above; the depth-1 equations now state that
actions distribute over joins and preserve bottom. Second, the
\emph{graded theory of serial (non-deterministic) traces} arises by
omitting~$0$ and associated axioms from the graded theory of traces,
and yields a presentation of $\M_{\pow_{\mathsf f}^+}(\A)$.
\item\label{item:simulation-a} The \emph{graded theory of simulation}
has the same signature and depth-0 equations as the graded theory of
traces,
along with depth-1 equations stating that actions are monotone:
\[
a(x+y) + a(x) = a(x+y).
\]
The theory of simulation equivalence then
yields a presentation of the graded monad
with $M_nX$ defined inductively along with
a partial ordering as follows: We take
\mbox{$M_0X = \pow_{\mathsf f}(X)$} ordered by set inclusion.
We equip $\A\times M_nX$ with the product ordering
of the discrete order on $\A$ and the given
ordering on $M_nX$. Then $M_{n+1}X = \pow_{\mathsf f}^{\downarrow}(\A\times M_nX)$ is
the set of downwards-closed finite subsets of
$\A\times M_nX$.
\end{enumerate}
\end{expl}
In the following lemma, an \emph{epi-transformation} is a natural
transformation $\alpha$ whose components $\alpha_X$ are surjective maps.
\begin{notheorembrackets}
\begin{lem}[{\cite{MPS15}}]\label{L:depth-1}
A graded monad $\M$ on $\mathbf{Set}$ is \emph{depth-1} if and only if
all $\mu^{1,n}$ are epi-transformations and the following is
object-wise a coequalizer diagram in the category of
Eilenberg-Moore algebras for the monad $M_0$ for all
$n\in\omega$:
\begin{equation}
\begin{tikzcd}[column sep = 35]\label{Diagram:depth1}
M_1M_0M_n \arrow[r, "M_1\mu^{0,n}", shift left] \arrow[r,
"\mu^{1,0}M_n"', shift right] & M_1M_n \arrow[r, "\mu^{1,n}"] &
M_{1+n}.
\end{tikzcd}
\end{equation}
\end{lem}
\end{notheorembrackets}
\section{Graded Behavioural Equivalences}\label{sec:semantics}
We next recall the notion of a graded semantics~\cite{MPS15}
on coalgebras for an endofunctor on $\mathbf{Set}$;
we illustrate several instantiations of subsequent interest.
\begin{defn}[Graded semantics]
A \emph{(depth-1) graded semantics} for an endofunctor
$G\colon\mathbf{Set}\to\mathbf{Set}$ is a pair $(\alpha, \M)$ consisting of a
(depth-1) graded monad $\M$ on $\mathbf{Set}$ and a natural transformation
$\alpha\colon G\to M_1$.
\end{defn}
\noindent
Given a $G$-coalgebra $(X,\gamma)$, the graded
semantics $(\alpha, \M)$ induces a sequence of
maps $\gamma^{(n)}\colon X\to M_n1$ inductively
defined by
\begin{align*}
\gamma^{(0)} &:= (X\xra{\eta_X}M_0X\xra{M_0!}M_01); \\
\gamma^{(n+1)} &:= (X\xra{\alpha_X\cdot\gamma} M_1X
\xra{M_1\gamma^{(n)}} M_1M_n 1
\xra{\mu^{1,n}_1} M_{1+n}1)
\end{align*}
(or, using the graded Kleisli star,
$\gamma^{(n+1)} = (\gamma^{(n)})^*_1\cdot\alpha_X\cdot\gamma$). We call
$\gamma^{(n)}(x)\in M_n1$ the \emph{$n$-step $(\alpha, \M)$-behaviour}
of $x\in X$.
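For the graded trace monad $M_n=\pow_{\mathsf f}(\A^n\times -)$ with
$\alpha=\mathsf{id}$, the maps $\gamma^{(n)}$ compute length-$n$ trace
sets; the following Python sketch (our own encoding, identifying
$M_n1$ with $\pow_{\mathsf f}(\A^n)$) implements the inductive definition
above:

```python
def n_step_behaviour(gamma, x, n):
    """gamma^(n)(x) in M_n 1 for the graded trace monad M_n = P_f(A^n x -):
    the set of length-n traces of x, following the inductive definition
    gamma^(n+1) = mu^{1,n} . M_1 gamma^(n) . alpha . gamma with alpha = id."""
    if n == 0:
        return frozenset({''})      # M_0 1 = P_f(1); every state has behaviour {*}
    return frozenset(a + w for (a, y) in gamma[x]
                           for w in n_step_behaviour(gamma, y, n - 1))

# Early vs. late branching: bisimilarity distinguishes x and y, but their
# n-step trace behaviours coincide for every n.
gamma = {
    'x': {('a', 'x1'), ('a', 'x2')}, 'x1': {('b', 'z')}, 'x2': {('c', 'z')},
    'y': {('a', 'y1')}, 'y1': {('b', 'z'), ('c', 'z')}, 'z': set(),
}
```

The deadlock state `z` has empty depth-$1$ behaviour, reflecting that
$\gamma^{(1)}(z)=\emptyset\in\pow_{\mathsf f}(\A)$.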
\begin{defn}[Graded behavioural equivalence]
States ${x\in X, y\in Y}$ in $G$-coalgebras $(X, \gamma)$ and
$(Y, \delta)$ are \emph{depth-$n$ behaviourally equivalent} under
$(\alpha, \M)$ if ${\gamma^{(n)}(x) = \delta^{(n)}(y)}$, and
\emph{$(\alpha, \M)$-behaviourally equivalent} if
$\gamma^{(n)}(x) = \delta^{(n)}(y)$ for all $n\in\omega$. We
refer to $(\alpha,\M)$-behavioural equivalence as a
\emph{graded behavioural equivalence} or just a \emph{graded
equivalence}.
\end{defn}
\begin{expl}\label{E:semantics}
We recall~\cite[Section 4]{DMS19} several graded
equivalences, restricting primarily to LTS and PTS.
\begin{enumerate}
\item\label{item:sem-beh}
For an endofunctor $G$ on $\mathbf{Set}$, finite-depth
$G$-behavioural equivalence arises as the
graded equivalence with $\M = \M_G$ and
$\alpha = \mathsf{id}$, where $\M_G$ is the graded
monad of
\cref{E:graded-monad}.\ref{E:graded-monad:1}.
By~\cref{rem:finite-depth}, it follows that
$(\mathsf{id}, \M_G)$ captures full coalgebraic bisimilarity
in case $G$ is finitary.
\item\label{item:sem-trace} Let $(X, \gamma)$ be an LTS, let
${x\in X}$, and let ${w\in\A^*}$ be a finite word over $\A$. We
write $x\xra{w} y$ if the state $y$ can be reached on a path whose
labels form the word $w$. A \emph{finite trace} at $x\in X$ is a
word $w\in\A^*$ such that $x\xra{w} y$ for some $y\in X$; the set of
finite traces at $x$ is denoted $\tau(x)$. States $x,y\in X$ are
trace equivalent if $\tau(x)=\tau(y)$. Trace equivalence on finitely
branching LTS is captured by the graded equivalence induced by
$\M=\M_{\pow_{\mathsf f}}(\A)$
(cf.~\cref{E:graded-monad}.\ref{item:T-traces}), again with
$\alpha=\mathsf{id}$; replacing $\pow_{\mathsf f}$ with $\pow^+$ (or with $\pow_{\mathsf f}^+$)
yields trace equivalence on serial (and finitely branching) LTS.
\item\label{item:sem-prob} Probabilistic trace equivalence on PTS is
the graded equivalence induced by $\M=\M_{\CalD_f}(\A)$
(cf.~\cref{E:graded-monad}.\ref{item:T-traces}) and
$\alpha = \mathsf{id}$: The maps~$\gamma^{(k)}$ equip states with
distributions on length-$k$ action words, and the induced
equivalence identifies states $x$ and $y$ whenever these
distributions coincide at~$x$ and~$y$ for all~$k$.
\item\label{item:sem-sim}
Simulation equivalence on LTS can also be
construed as a graded equivalence by taking
$\M$ to be the graded monad described in
\cref{E:graded-theory}.\ref{item:simulation-a},
and
\begin{eqnarray*}
\alpha_X\colon\pow_{\mathsf f}(\A\times
X) & \to & \pow^{\downarrow}_f(\A\times\pow_{\mathsf f} X) \\
S & \mapsto &
{\downarrow}\{(a,\{x\})\mid (a,x)\in S\}
\end{eqnarray*}
where $\downarrow$ takes downsets.
\end{enumerate}
\end{expl}
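\noindent As a concrete (hypothetical) instance of
\cref{E:semantics}.\ref{item:sem-prob}, suppose that a state~$x$
performs~$a$ and~$b$ each with probability $\tfrac12$, in both cases
moving to a state~$y$ that loops on~$a$ with probability~$1$. Then
\begin{align*}
  \gamma^{(1)}(x) &= \big(a\mapsto\tfrac12,\ b\mapsto\tfrac12\big), &
  \gamma^{(2)}(x) &= \big(aa\mapsto\tfrac12,\ ba\mapsto\tfrac12\big),
\end{align*}
while $\gamma^{(1)}(y)=(a\mapsto 1)$, so $x$ and~$y$ are distinguished
already by their distributions on length-$1$ words.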
\begin{rem}
It follows from the depth-1 presentations described
in~\cref{E:graded-theory} that the graded semantics mentioned in
\cref{E:semantics} are depth-1.
\end{rem}
\section{Graded Algebras}\label{sec:algebras}
Graded monads come equipped with graded analogues
of both the Eilenberg-Moore and Kleisli constructions for
ordinary monads. In particular, we have a
notion of \emph{graded algebra}~\cite{FKM16, MPS15}:
\begin{defn}[Graded algebra]\label{D:gradedalgebra}
Let $k\in\omega$ and let $\M$ be a graded
monad on a category $\CatC$. An \emph{$M_k$-algebra}
$A$ consists of a family of $\CatC$-objects $(A_n)_{n\leq k}$
(the \emph{carriers}) and a family of $\CatC$-morphisms%
\begin{equation}
a^{n,m}\colon M_nA_m\to A_{n+m} \tag{$n+m\le k$}
\end{equation}
(the \emph{structure}) such that $a^{0,n}\cdot\eta_{A_n} = \mathsf{id}_{A_n}$
($n\leq k$) and
\begin{equation}\label{Diagram:gradedalgebralaw}
\begin{tikzcd}[column sep = 35]
M_nM_mA_r \arrow[d, "\mu^{n,m}"'] \arrow[r, "M_na^{m,r}"]
&
M_nA_{m+r} \arrow[d, "a^{n,m+r}"] \\
M_{n+m}A_{r} \arrow[r, "a^{n+m,r}"]
&
A_{n+m+r}
\end{tikzcd}
\end{equation}
for all $n,m,r\in\omega$ such that $n+m+r\leq k$. The \emph{$i$-part} of
an $M_k$-algebra $A$ ($i\leq k$) is the $M_0$-algebra $(A_i, a^{0,i})$.
A \emph{homomorphism} from $A$ to an $M_k$-algebra
$B$ is a family of $\CatC$-morphisms
$h_n\colon A_n\to B_n$ ($n\leq k$) such that
\[
h_{n+m} \cdot a^{n,m} = b^{n,m}\cdot M_nh_m
\quad\text{for all $n,m\in\omega$ s.th.~$n+m\le k$.}
\]
We
write $\Alg_k(\M)$ for the category of $M_k$-algebras
and their homomorphisms.
We define \emph{$M_{\omega}$-algebras} (and their
homomorphisms) similarly, by allowing the indices $n,m,r$
to range over $\omega$.
\end{defn}
\begin{rem}
The above notion of $M_{\omega}$-algebra corresponds to the
concept of graded Eilenberg-Moore algebras introduced by Fujii et
al.~\cite{FKM16}. Intuitively, $M_{\omega}$-algebras are devices for
interpreting terms of unbounded uniform depth. We understand
$M_k$-algebras~\cite{MPS15} as a refinement of $M_{\omega}$-algebras
which allows the interpretation of terms of uniform depth \emph{at
most~$k$}. Thus, $M_k$-algebras serve as a formalism for
specifying properties of states exhibited in \emph{$k$ steps}. For
example, $M_1$-algebras are used to interpret one-step modalities of
characteristic logics for graded semantics~\cite{DMS19,FMS21a}.
Moreover, for a depth-1 graded monad, its $M_\omega$-algebras may
be understood as compatible chains of $M_1$-algebras~\cite{MPS15},
and
a depth-1 graded monad can be reconstructed from its $M_1$-algebras.
%
\end{rem}
\noindent We will be chiefly interested in $M_0$- and $M_1$-algebras:
\begin{expl}
Let $\M$ be a graded monad on $\mathbf{Set}$.
\begin{enumerate}
\item An $M_0$-algebra is just an Eilenberg-Moore algebra for the
monad $(M_0, \eta, \mu^{0,0})$. It follows that $\Alg_0(\M)$ is
complete and cocomplete, in particular has coequalizers.
\item An $M_1$-algebra is a pair $((A_0, a^{0,0}), (A_1, a^{0,1}))$ of
$M_0$-algebras -- often we just write the carriers $A_i$ to also
denote the algebras, by abuse of notation -- equipped with a
\emph{main structure map} $a^{1,0}\colon M_1A_0\to A_1$ satisfying
two instances of~\eqref{Diagram:gradedalgebralaw}. One instance
states that $a^{1,0}$ is an $M_0$-algebra homomorphism from
$(M_1A_0, \mu^{0,1}_A)$ to $(A_1, a^{0,1})$ (\emph{homomorphy}); the
other expresses that $a^{1,0}\cdot \mu^{1,0}= a^{1,0}\cdot
M_1a^{0,0}$ (\emph{coequalization}):
\begin{equation}\label{Diagram:coequalization}
\begin{tikzcd}[column sep = 35]
M_1M_0A_0 \arrow[r, "\mu^{1,0}", shift left] \arrow[r,
"M_1a^{0,0}"', shift right] & M_1A_0 \arrow[r, "a^{1,0}"] & A_1.
\end{tikzcd}
\end{equation}
\end{enumerate}
\end{expl}
\begin{rem}
The free $M_n$-algebra on a set~$X$ is formed in the expected way,
in particular has
carriers~$M_0X,\dots,M_nX$, see~\cite[Prop.~6.3]{MPS15}.
\end{rem}
\paragraph*{Canonical algebras.}
We are going to review the basic definitions and results on
\emph{canonical $M_1$-algebras}~\cite{DMS19}. Fix a graded
monad $\M$ on $\mathbf{Set}$.
We write $(-)_i\colon\Alg_1(\M)\to\Alg_0(\M)$, $i = 0,1$,
for the functor which sends an $M_1$-algebra $A$ to its $i$-part
$A_i$ and sends a homomorphism $h\colon A\to B$ to
$h_i\colon A_i\to B_i$.
\begin{defn}\label{D:canonical}
An $M_1$-algebra~$A$ is \emph{canonical} if it is free over its
$0$-part with respect to $(-)_0\colon\Alg_1(\M)\to\Alg_0(\M)$.
\end{defn}
\begin{rem}\label{rem:canonical}
The universal property of a canonical algebra~$A$ is the following:
for every $M_1$-algebra $B$ and every $M_0$-algebra homomorphism
$h\colon A_0\to B_0$, there exists a unique $M_1$-algebra
homomorphism $h^\#\colon A\to B$ such that $(h^{\#})_0 = h_0$.
\end{rem}
\begin{notheorembrackets}
\begin{lem}[{\cite[Lem.~5.3]{DMS19}}]\label{L:canonical-algebra}
An $M_1$-algebra $A$ is canonical if and only if
(\ref{Diagram:coequalization}) is a coequalizer in $\Alg_0(\M)$.
\end{lem}
\end{notheorembrackets}
\begin{expl}\label{E:canonical}
Let $X$ be a set and let $\M$ be a depth-1 graded monad on $\mathbf{Set}$.
For each ${k\in\omega}$, we may view $M_kX$ as an $M_0$-algebra with
structure $\mu^{0,k}$. For the $M_1$-algebra $(M_kX, M_{k+1}X)$
(with main structure map $\mu^{1,k}$), the instance of
Diagram~(\ref{Diagram:coequalization}) required
by~\cref{L:canonical-algebra} is a coequalizer
by~\cref{L:depth-1}; that is, $(M_kX, M_{k+1}X, \mu^{1,k})$ is
canonical.
\end{expl}
\section{Pre-Determinization in Eilenberg-Moore}\label{sec:determinization}
We describe a generic notion of pre-determinization (the terminology
will be explained in \cref{rem:determinization}) for coalgebras of
an endofunctor $G$ on $\mathbf{Set}$ with respect to a given depth-1 graded
semantics $(\alpha,\M)$, generalizing the Eilenberg-Moore-style
coalgebraic determinization construction by Silva et
al.~\cite{BBSR13}. The behavioural equivalence game introduced in the
next section will effectively be played on the pre-determinization of
the given coalgebra. We will occasionally gloss over issues of finite
branching in the examples.
We first note that every $M_0$-algebra $A$ extends (uniquely) to a
canonical $M_1$-algebra $EA$ (with $0$-part $A$), whose $1$-part and
main structure are obtained by taking the coequalizer of the pair of
morphisms in \eqref{Diagram:coequalization} (canonicity then follows
by~\cref{L:canonical-algebra}). This construction forms the object
part of a functor $\Alg_0(\M)\to\Alg_1(\M)$ which sends a homomorphism
$h\colon A\to B$ to its unique extension $Eh:=h^\sharp\colon EA\to EB$
(cf.~\cref{rem:canonical}). We write $\mybar{0.8}{2pt}{M}_1$ for the endofunctor on
$\Alg_0(\M)$ given by
\begin{equation}\label{eq:barM}
\mybar{0.8}{2pt}{M}_1:= (\Alg_0(\M)\xra{E}\Alg_1(\M)\xra{(-)_1}\Alg_0(\M)),
\end{equation}
where $(-)_1$ is the functor taking $1$-parts. Thus, for an
$M_0$-algebra~$A_0$, $\mybar{0.8}{2pt}{M}_1(A_0)$ is the vertex of the coequalizer
\eqref{Diagram:coequalization}.
By \cref{E:canonical}, we have
\begin{equation}
\label{eq:barM-can}
\mybar{0.8}{2pt}{M}_1(M_kX,\mu^{0,k}_X)=(M_{k+1}X,\mu^{0,k+1}_X)
\end{equation}
for every set~$X$ and every $k\in\omega$. In particular,
\begin{equation}\label{Eq:determinization}
U\mybar{0.8}{2pt}{M}_1 F = M_1
\end{equation}
where $F\dashv U\colon \Alg_0(\M) \to \mathbf{Set}$ is the canonical
adjunction of the Eilenberg-Moore category of~$M_0$ -- that is,~$U$ is
the forgetful functor, and~$F$ takes free $M_0$-algebras, so
$FX=(M_0X,\mu^{0,0}_X)$. For an $M_1$-coalgebra
$f\colon X\to M_1X=U\mybar{0.8}{2pt}{M}_1 FX$, we therefore obtain a homomorphism
$f^\#\colon FX\to\mybar{0.8}{2pt}{M}_1 FX$ (in $\Alg_0(\M)$) via adjoint
transposition. This leads to the following pre-determinization
construction:
\begin{defn}\label{D:determinization}
Let $(\alpha, \M)$ be a depth-1 graded semantics on
$G$-coalgebras.
The \emph{pre-determinization} of a $G$-coalgebra
$(X, \gamma)$ under $(\alpha, \M)$
is the $\mybar{0.8}{2pt}{M}_1$-co\-al\-gebra
\begin{equation}\label{eq:det}
(\alpha_X\cdot\gamma)^\#\colon FX\to \mybar{0.8}{2pt}{M}_1 FX.
\end{equation}
\end{defn}
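\noindent For instance, for trace semantics of LTS
(\cref{E:semantics}.\ref{item:sem-trace}), where $M_0=\pow_{\mathsf f}$ and
hence $FX$ is the free join semilattice $\pow_{\mathsf f} X$, the
pre-determinization amounts (up to the identification of
$\mybar{0.8}{2pt}{M}_1$ with the power functor $(-)^\A$,
cf.~\cref{expl:barM}) to the classical powerset construction
\[
(\alpha_X\cdot\gamma)^\#\colon \pow_{\mathsf f} X\to(\pow_{\mathsf f} X)^\A,\qquad
S\mapsto \big(a\mapsto\{x'\mid x\xra{a}x'\text{ for some }x\in S\}\big).
\]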
\begin{rem}\label{rem:determinization}
\begin{enumerate}
\item\label{item:predet-det} We call this construction a
\emph{pre-}de\-ter\-mi\-ni\-zat\-ion because it will serve as a
\emph{determinization} -- in the expected sense that the
underlying graded equivalence transforms into behavioural
equivalence on the determinization -- only under additional
conditions. Notice that given a $G$-coalgebra $(X,\gamma)$,
(finite-depth) behavioural equivalence on the $\mybar{0.8}{2pt}{M}_1$-coalgebra
$(\alpha_X\cdot\gamma)^*_0$ is given by the canonical cone into
the final chain
\begin{equation*}
1 \xla{!} \mybar{0.8}{2pt}{M}_1 1 \xla{\overbar M_1 !} \mybar{0.8}{2pt}{M}_1^{2}1 \xla{\overbar
M_1^2 !} \cdots
\end{equation*}
while graded behavioural equivalence on $(X,\gamma)$ is given by
the maps $\gamma^{(k)}$ into the sequence $M_01,M_11,M_21,\dots$,
equivalently given as homomorphisms
$(\gamma^{(k)})^*_0\colon FX\to(M_k1,\mu^{0,k}_1)$, whose domains
can, by~\eqref{eq:barM-can}, be written as the sequence
\begin{equation*}
F1,\quad \mybar{0.8}{2pt}{M}_1 1,\quad \mybar{0.8}{2pt}{M}_1^21,\quad \ldots
\end{equation*}
of $M_0$-algebras. The two sequences coincide in case $M_01=1$,
and indeed one easily verifies that in this case, finite-depth
behavioural equivalence on $\mybar{0.8}{2pt}{M}_1$-coalgebras coincides with
$(\alpha,\M)$-behavioural equivalence. For instance, this holds
in the case of probabilistic trace equivalence
(\cref{E:semantics}.\ref{item:sem-prob}), where $M_0=\CalD$, so
$M_01=1$. In the case of trace equivalence
(\cref{E:semantics}.\ref{item:sem-trace}), $M_01=1$ can be
ensured by restricting to serial labelled transition systems,
which, as noted in \cref{E:coalg}.\ref{E:coalg:1}, are
coalgebras for $\pow^+(\A\times -)$ with~$\pow^+$ denoting
non-empty powerset, so that in the corresponding variant of the
graded monad for trace semantics, we have $M_0=\pow^+$ and hence
$M_01=1$.
On the other hand, the condition $M_01=1$ fails for trace
equivalence of unrestricted systems where we have~\mbox{$M_0=\pow$,}
which in fact
constitutes a radical example where behavioural equivalence on the
pre-determinization is strictly coarser than the given graded
equivalence. In this case, since the actions preserve the
bottom~$0$, we in fact have~$\mybar{0.8}{2pt}{M}_1 1=1$: it follows that \emph{all}
states in $\mybar{0.8}{2pt}{M}_1$-coalgebras are behaviourally equivalent (as
the unique coalgebra structure on~$1$ is final).
\item Using~\eqref{Eq:determinization}, we see that the underlying
map of the pre-determinization of a coalgebra $(X,\gamma)$ is
$(\alpha_X \cdot \gamma)^*_0 \colon M_0X \to M_1 X = U\mybar{0.8}{2pt}{M}_1 FX$ (written using graded Kleisli star as
per~\cref{N:star}). Indeed, one easily shows that
$(\alpha_X \cdot\gamma)^*_0$ is an $M_0$-algebra
morphism
$(M_0 X, \mu^{0,0}_X)\to\mybar{0.8}{2pt}{M}_1
(M_0X,\mu^{0,0}_X)=(M_1X,\mu^{0,1}_X)$ satisfying
$(\alpha_X \cdot \gamma)^*_0 \cdot \eta_X = \alpha_X \cdot
\gamma$. Thus, it is the adjoint transpose in~\eqref{eq:det}.
\item As indicated above, pre-de\-ter\-mi\-ni\-za\-tion captures the
Eilenberg-Moore style generalized determinization by Silva et
al.~\cite{BBSR13} as an instance. Indeed, for a monad $T$ and an
endofunctor $F$, both on the category $\CatC$, one considers a
coalgebra $\gamma\colon X \to FTX$. Assuming that $FTX$ carries
the structure of an Eilenberg-Moore algebra for $T$ (e.g.~because
the functor~$F$ lifts to the category of Eilenberg-Moore algebras
for $T$), one obtains an
$F$-coalgebra $\gamma^\sharp\colon TX \to FTX$ by taking the
unique homomorphic extension of $\gamma$. Among the concrete
instances of this construction are the well-known powerset
construction of non-deterministic automata (take $T = \pow$ and
$F = 2 \times (-)^A$), the non-determinization of alternating
automata and that of Markov decision processes~\cite{JSS15}.
To view this as an instance of pre-de\-ter\-mi\-ni\-za\-tion, take
the graded monad with $M_n = F^nT$
(\cref{E:graded-monad}.\ref{E:graded-monad:4}), let $G = FT$,
and let $\alpha = \mathsf{id}_{FT}$. Using~\eqref{Eq:determinization}, we
see that $(\alpha_X \cdot \gamma)^\#$ in~\eqref{eq:det} is the
generalized determinization $\gamma^\sharp$ above.
\item We emphasize that the construction applies completely
universally; e.g.~we obtain as one instance a `determinization' of
serial labelled transition systems modulo similarity, which
transforms a coalgebra $X\to\pow^+(\A\times X)$ into an
$\mybar{0.8}{2pt}{M}_1$-coalgebra
$\pow^+(X)\to\pow^{\downarrow}(\A\times \pow^+(X))$
(\cref{E:graded-theory}.\ref{item:simulation-a}); instantiating
the observations in item~\ref{item:predet-det}, we obtain that
finite-depth behavioural equivalence of $\mybar{0.8}{2pt}{M}_1$-coalgebras (see
\cref{expl:barM} for the description of~$\mybar{0.8}{2pt}{M}_1$) coincides with
finite-depth mutual similarity.
\end{enumerate}
\end{rem}
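\noindent The condition $M_01=1$ in item~\ref{item:predet-det} of the
preceding remark is easily checked in the examples: a distribution on
a singleton and a non-empty subset of a singleton are uniquely
determined, while the full powerset of a singleton is not:
\[
\CalD 1\cong 1,\qquad \pow^+1=\{\{*\}\}\cong 1,\qquad
\pow 1=\{\emptyset,\{*\}\}\not\cong 1.
\]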
\begin{expl}\label{expl:barM}
We give a description of the functor~$\mybar{0.8}{2pt}{M}_1$ on $M_0$-algebras
constructed above in some of the running examples.
\begin{enumerate}[wide]
\item For graded monads of the form $\M_G$, which capture
finite-depth behavioural equivalence
(\cref{E:graded-monad}.\ref{E:graded-monad:1}), we have
$M_0=\Id$, so $M_0$-algebras are just sets, and under this
correspondence, $\mybar{0.8}{2pt}{M}_1$ is the original functor~$G$.
\item\label{item:mono-traces} Trace semantics of LTS
(\cref{E:graded-theory}.\ref{item:traces-a}): Distribution of
actions over the join semilattice operations ensures that depth-1
terms over a join semilattice~$X$ can be normalized to sums of the
form $\sum_{a\in \A}a(x_a)$, with $x_a\in X$ (possibly
$x_a=0$). It follows that $\mybar{0.8}{2pt}{M}_1$ is simply given by
$\mybar{0.8}{2pt}{M}_1 X=X^\A$ ($\A$-th power, where~$\A$ is the finite set of labels).
Other forms of trace semantics are treated similarly.
\item In the graded theory for simulation
(\cref{E:graded-theory}.\ref{item:simulation-a}), the
description of the induced graded monad~\cite{DMS19} extends
analogously to~$\mybar{0.8}{2pt}{M}_1$, yielding that $\mybar{0.8}{2pt}{M}_1 B$ is the join
semilattice of finitely generated downwards closed subsets of
$\A\times B$ where, again,~$\A$ carries the discrete
ordering.
\end{enumerate}
\end{expl}
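\noindent To illustrate the normalization in
item~\ref{item:mono-traces} above: assuming the usual depth-1 axioms
$a(x+y)=a(x)+a(y)$ and $a(0)=0$ of the graded trace theory, a depth-1
term over a join semilattice~$X$ rewrites as, e.g.,
\[
a(x)+a(y)+b(z) \;=\; a(x+y)+b(z)+\textstyle\sum_{c\in\A\setminus\{a,b\}}c(0),
\]
which under the identification $\mybar{0.8}{2pt}{M}_1 X=X^\A$
corresponds to the map sending~$a$ to $x+y$,~$b$ to~$z$, and every
other action to~$0$.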
\begin{rem}
The assignment $\M \mapsto \mybar{0.8}{2pt}{M}_1$ exhibits the category $\mathscr{K}$ of
depth-1 graded monads whose $0$-part is the monad
$(M_0, \eta, \mu^{0,0})$ as a coreflective subcategory (up to
isomorphism) of the category $\Fun(\mathbf{Set}^{M_0})$ of all endofunctors
on the Eilenberg-Moore category of that monad.
Indeed, given an endofunctor $H$ on $\mathbf{Set}^{M_0}$ we form the
$6$-tuple
$(M_0, UHF, \eta, \mu^{0,0}, \mu^{0,1},\mu^{1,0}),$
where the latter two natural transformations arise from the counit
$\varepsilon\colon FU \to \Id$ of the canonical adjunction $F\dashv U\colon\mathbf{Set}^{M_0}\to\mathbf{Set}$:
\begin{align*}
\mu^{0,1} &= \big(M_0UHF = UFUHF \xra{U\varepsilon HF} UHF\big);\\
\mu^{1,0} &= \big(UHFM_0 = UHFUF \xra{UH\varepsilon F} UHF\big).
\end{align*}
It is not difficult to check that this data satisfies all applicable
instances of the graded monad laws. Hence, it specifies a
depth-1 graded monad $R(H)$~\cite[Thm.~3.7]{DMS19}; this
assignment is the object part of a functor $R\colon\Fun(\mathbf{Set}^{M_0})\to\mathscr{K}$.
In the other direction, we have for each depth-1 graded monad $\M$
with $0$-part $M_0$ the endofunctor $I(\M) =
\mybar{0.8}{2pt}{M}_1$. By~\eqref{Eq:determinization}, we have $RI(\M) = \M$.
Now, given an endofunctor $H$ on $\mathbf{Set}^{M_0}$, put
$\M = R(H)$ and consider $\mybar{0.8}{2pt}{M}_1 = IR(H)$ (so that $M_1 = UHF$). We
obtain for every algebra $(A,a)$ in $\mathbf{Set}^{M_0}$ a homomorphism
$c_{(A,a)}\colon \mybar{0.8}{2pt}{M}_1 (A,a) \to H(A,a)$ by using the coequalizer
defining $\mybar{0.8}{2pt}{M}_1(A,a)$ (cf.~\cref{L:canonical-algebra}):
\[
\begin{tikzcd}
M_1M_0 A
\ar[yshift=2]{r}{\mu^{0,1}_A}
\ar[yshift=-2]{r}[swap]{M_1a}
&
M_1A=HFA
\ar[->>]{r}
\ar{rd}{Ha}
&
\mybar{0.8}{2pt}{M}_1 (A,a)
\ar[dashed]{d}{c_{(A,a)}}
\\
&&
H(A,a)
\end{tikzcd}
\]
Note that $M_1M_0A$ is the carrier of the Eilenberg-Moore algebra
$HFM_0A$, and similarly the middle object $M_1A$ is the carrier of
$HFA = H(M_0A, \mu^{0,0}_A)$ (in both cases we have omitted the
algebra structures given by $\mu^{0,1}_{M_0 A}$ and $\mu^{0,1}_A$
coming from the graded monad $R(H)$). It is easy to see that the
homomorphism $Ha$ merges the
parallel pair, and therefore we obtain the dashed morphism such that
the triangle commutes, yielding the components of a natural
transformation $c\colon \mybar{0.8}{2pt}{M}_1 \to H$ which is couniversal: for each
depth-1 graded monad $\mathbb N$ whose $0$-part is $M_0$ and each
natural transformation $h\colon \mybar{0.8}{2pt}{M}_1 \to H$, there is a unique
natural transformation $m_1\colon N_1 \to M_1 = UHF$ such that
$m = (\mathsf{id}_{M_0}, m_1)$ is a morphism of graded monads from
$\mathbb N$ to~$\M$ and $c \cdot I(m) = h$. This shows that $I
\dashv R$.
\end{rem}
\section{Behavioural Equivalence Games}\label{S:games}
Let $\CalS = (\alpha, \M)$ be a depth-1 graded semantics for an
endofunctor $G$ on $\mathbf{Set}$. We are going to describe a game for playing
out depth-$n$ behavioural equivalence under $\CalS$-semantics on
states in $G$-coalgebras.
We first give a description of the game in the syntactic language of
graded equational reasoning, and then present a more abstract
categorical definition. Given a coalgebra $(X,\gamma)$, we will see
the states in~$X$ as variables, and the map $\alpha_X\cdot\gamma$ as
assigning to each variable~$x$ a depth-1 term over~$X$; we can regard
this assignment as a (uniform-depth) substitution~$\sigma$. A
configuration of the game is a pair of depth-0 terms over~$X$; to play
out the equivalence of states $x,y\in X$, the game is started from the
initial configuration $(x,y)$. Each round of the game then proceeds
in two steps: First, Duplicator plays a set~$Z$ of equalities between
depth-0 terms over~$X$ that she claims to hold under the
semantics. This move is admissible in the configuration $(s,t)$
if~$Z\vdash s\sigma=t\sigma$. Then, Spoiler challenges one of the
equalities claimed by Duplicator, i.e.~picks an
element~$(s',t')\in Z$, which then becomes the next configuration. Any
player who cannot move loses. After~$n$ rounds have been played,
reaching the final configuration $(s,t)$, Duplicator wins if
$s\theta =t\theta$ is a valid equality, where~$\theta$ is a
substitution that identifies all variables. We refer to this last
check as \emph{calling the bluff}. Thus, the game plays out an
equational proof between terms obtained by unfolding depth-0 terms
according to~$\sigma$, cutting off after~$n$ steps.
We introduce some technical notation to capture the notion of
admissibility of~$Z$ abstractly:
\begin{notn}\label{N:admissible}
Let $Z\subseteq M_0X\times M_0X$ be a relation, and let
$c_Z\colon M_0X\to C_Z$ be the coequalizer in $\Alg_0(\M)$ of the
homomorphisms $ \ell_0^*, r_0^*\colon M_0Z\to M_0X $ given by
applying the Kleisli star~\eqref{Eqn:Kleisli-star} to the
projections $\ell, r\colon Z\to M_0X$. We define a homomorphism
$\mybar{0.7}{1.75pt}{Z}\colon M_0X\to \mybar{0.8}{2pt}{M}_1 C_Z$ in $\Alg_0(\M)$ by
\begin{equation}\label{eq:barZ}
\mybar{0.7}{1.75pt}{Z} = \big(M_0X\xra{(\alpha_X\cdot\gamma)^*_0} M_1X = \mybar{0.8}{2pt}{M}_1 M_0X
\xra{\overbar M_1 c_Z} \mybar{0.8}{2pt}{M}_1 C_Z\big)
\end{equation}
(omitting algebra structures, and again using the Kleisli star).
\end{notn}
\begin{rem}\label{R:coeq}
Using designators as in \cref{N:admissible}, we note:
\begin{enumerate}
\item\label{R:coeq:1}\label{R:coeq:2} By the universal property of
$\eta_Z\colon Z \to M_0Z$, an $M_0$-algebra homomorphism
$h\colon M_0X \to A$ merges $\ell, r$ iff it merges
$\ell^*_0, r^*_0$. This implies that the coequalizer $M_0X \xra{c_Z}C_Z$
quotients the free $M_0$-algebra $M_0X$ by the congruence
generated by~$Z$. Also, it follows that in case~$Z$ is already an
$M_0$-algebra and $\ell, r\colon Z \to M_0X$ are $M_0$-algebra
homomorphisms (e.g.~when $Z$ is a congruence), one may take
$c_Z\colon M_0X \to C_Z$ to be the coequalizer of $\ell, r$.
\item\label{item:barZ} The map $\mybar{0.7}{1.75pt}{Z}\colon M_0X\to\mybar{0.8}{2pt}{M}_1 C_Z$
associated to the relation $Z$ on $M_0X$ may be understood as
follows. As per the discussion above, we view the states of the
coalgebra $(X,\gamma)$ as variables, and the map
$X\xra{\gamma} GX\xra{\alpha_X} M_1X$ as a substitution mapping a
state $x \in X$ to the equivalence class of depth-1 terms encoding
the successor structure $\gamma(x)$. The second factor $\mybar{0.8}{2pt}{M}_1 c_Z$
in~\eqref{eq:barZ} then essentially applies the relations given by
the closure of $Z$ under congruence w.r.t.~depth-0 operations,
embodied in~$c_Z$ as per~\ref{R:coeq:1}, under depth-1 operations
to (equivalence classes of) depth-1 terms in $M_1X$; to sum up,
$\mybar{0.8}{2pt}{M}_1 c_Z$ merges a pair of equivalence classes $[t], [t']$ iff
$Z\vdash t=t'$ in a depth-1 theory presenting $\M$ (in notation as
per \cref{sec:prelims}).
\end{enumerate}
\end{rem}
\begin{defn}\label{def:game}
For $n\in\omega$, the \emph{$n$-round $\CalS$-behavioural
equivalence game} $\CalG_n(\gamma)$ on a $G$-coalgebra
$(X, \gamma)$ is played by Duplicator (D) and Spoiler
(S). \emph{Configurations} of the game are pairs
$(s,t)\in M_0(X)\times M_0(X)$. Starting from an \emph{initial
configuration} designated as needed, the game is played for~$n$
rounds. Each round proceeds in two steps, from the current
configuration~$(s,t)$: First, D chooses a relation
$Z\subseteq M_0X\times M_0X$ such that $\mybar{0.7}{1.75pt}{Z}(s) = \mybar{0.7}{1.75pt}{Z}(t)$
(for~$\mybar{0.7}{1.75pt}{Z}$ as per \cref{N:admissible}). Then,~S
picks an element~$(s',t') \in Z$, which becomes the next configuration. Any
player who cannot move loses. After~$n$ rounds have been played,
reaching the final configuration $(s_n,t_n)$,~D wins if
$M_0!(s_n) = M_0!(t_n)$; otherwise,~S wins.
\end{defn}
\begin{rem}
By the description of~$\mybar{0.7}{1.75pt}{Z}$ given in
\cref{R:coeq}.\ref{item:barZ}, the categorical definition of the
game corresponds to the algebraic one given in the lead-in
discussion. The final check whether $M_0!(s_n)=M_0!(t_n)$
corresponds to what we termed \emph{calling the bluff}. The apparent
difference between playing either on depth-0 terms or on elements
of~$M_0X$, i.e.~depth-0 terms modulo derivable equality, is absorbed
by equational reasoning from~$Z$, which may incorporate also the
application of depth-0 equations.
\end{rem}
\begin{rem}
A pair of states coming from different coalgebras $(X,\gamma)$ and
$(Y,\delta)$ can be treated by considering those states as elements
of the coproduct of the two coalgebras:
\[
X+Y \xra{\gamma + \delta} GX + GY \xra{[G\mathsf{inl}, G\mathsf{inr}]} G(X+Y),
\]
where $X \xra{\mathsf{inl}} X+Y \xla{\mathsf{inr}} Y$ denote the coproduct
injections. There is an evident variant of the game played on two
different coalgebras $(X,\gamma)$, $(Y,\delta)$, where moves of~D
are subsets of $M_0X\times M_0Y$. However, completeness of this
version depends on additional assumptions on~$\M$, to be clarified
in future work. For instance, if we instantiate the graded monad for
traces with effects specified by~$T$
(\cref{E:graded-monad}.\ref{item:T-traces}) to~$T$ being the free
real vector space monad, and a state~$x\in X$ has successor
structure $2\cdot x'-2\cdot x''$, then~D can support equivalence
between~$x$ and a deadlock~$y\in Y$ (with successor structure~$0$)
by claiming that $x'=x''$, but not by any equality between terms
over~$X$ with terms over~$Y$. That is, in this instance, the variant
of the game where~D plays relations on $M_0X\times M_0Y$ is not
complete.
\end{rem}
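\noindent To spell out the last derivation: for $Z=\{(x',x'')\}$, we
have
\[
Z\vdash 2\cdot x'-2\cdot x'' \;=\; 2\cdot x'-2\cdot x' \;=\; 0,
\]
matching the successor structure~$0$ of the deadlock state~$y$; the
cancellation step uses the equation $x'=x''$ and is thus available
only for relations on terms over~$X$.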
\noindent Soundness and completeness of the game with respect to
$\CalS$-behavioural equivalence is stated as follows.
\begin{thm}\label{T:sound-complete}
Let $(\alpha, \M)$ be a depth-1 graded semantics for a functor~$G$
such that $\mybar{0.8}{2pt}{M}_1$ preserves monomorphisms, and let $(X, \gamma)$ be
a $G$-coalgebra. Then, for all $n\in\omega$,~D wins $(s, t)$ in
$\CalG_n(\gamma)$ if and only if
$(\gamma^{(n)})^*_0(s) = (\gamma^{(n)})^*_0(t)$.
\end{thm}
\begin{cor}
States $x,y$ in a $G$-coalgebra $(X, \gamma)$ are
$\CalS$-behaviourally equivalent if and only if~D wins
$(\eta(x), \eta(y))$ in $\CalG_n(\gamma)$ for all $n\in\omega$.
\end{cor}
\begin{rem}\label{rem:monos}
In algebraic terms, the condition that~$\mybar{0.8}{2pt}{M}_1$ preserves
monomorphisms amounts to the following: In the derivation of an
equality of depth-1 terms~$s,t$ over~$X$ from depth-0 relations
over~$X$ (i.e.~from a presentation of an $M_0$-algebra by relations
on generators~$X$), if~$X$ is included in a larger set~$Y$ of
variables with relations that conservatively extend those on~$X$,
i.e.~do not imply additional relations on~$X$, then it does not
matter whether the derivation is conducted over~$X$ or more
liberally over~$Y$. Intuitively, this property is needed because not
all possible $n$-step behaviours, i.e.~elements of~$Y=M_n1$, are
realized by states in a given coalgebra on~$X$. Preservation of
monos by~$\mybar{0.8}{2pt}{M}_1$ is automatic for graded monads of the form $\M_G$
(\cref{E:graded-monad}.\ref{E:graded-monad:1}), since $M_0=\Id$
in this case. In the other running examples, preservation of monos
is by the respective descriptions of~$\mybar{0.8}{2pt}{M}_1$ given in
\cref{expl:barM}.
\end{rem}
\begin{expl}\label{expl:bisim-instance}
We take a brief look at the instance of the generic game for the
case of bisimilarity on finitely branching LTS (more extensive
examples are in \cref{sec:cases}), i.e.~we consider the depth-1
graded semantics $(\mathsf{id}, \M_G)$ for the functor
$G=\pow_{\mathsf f}(\A\times(-))$. In this case, $M_0=\Id$, so when playing on
a coalgebra $(X,\gamma)$,~D plays relations~$Z\subseteq X\times
X$. If the successor structures of states~$x,y$ are represented by
depth-1 terms $\sum_{i}a_i(x_i)$ and $\sum_j b_j(y_j)$,
respectively, in the theory $\JSL(\A)$
(\cref{E:graded-theory}.\ref{item:jsl-a}), then~D is allowed to
play~$Z$ iff the equality $\sum_{i}a_i(x_i)=\sum_j b_j(y_j)$ is
entailed by~$Z$ in $\JSL(\A)$. This, in turn, holds iff for
each~$i$, there is~$j$ such that $a_i=b_j$ and $(x_i,y_j)\in Z$, and
symmetrically. Thus~$Z$ may be seen as a pre-announced
non-deterministic winning strategy for~D in the usual bisimilarity
game where~S moves first (\cref{sec:prelims}):~D announces that
if~S moves from, say,~$x$ to~$x_i$, then she will respond with
some~$y_j$ such that $a_i=b_j$ and $(x_i,y_j)\in Z$.
\end{expl}
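\noindent For a concrete (hypothetical) round, suppose that
$\gamma(x)=\{(a,x_1),(a,x_2)\}$ and $\gamma(y)=\{(a,y_1)\}$. Then~D
may play $Z=\{(x_1,y_1),(x_2,y_1)\}$, which is admissible since
\[
Z\vdash a(x_1)+a(x_2) \;=\; a(y_1)+a(y_1) \;=\; a(y_1)
\]
in $\JSL(\A)$, using idempotence of join in the last step; whichever
pair~S then picks, the game continues from a pair of states related
by~$Z$.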
\section{Infinite-depth behavioural
equivalence}\label{sec:infinte-depth}
\sloppypar
\noindent We have seen in \cref{sec:determinization} that in case
\mbox{$M_01=1$}, $(\alpha, \M)$-behavioural equivalence
on~$G$-coalgebras coincides, via a determinization construction, with
finite-depth behavioural equivalence on $\mybar{0.8}{2pt}{M}_1$-coalgebras for a
functor $\mybar{0.8}{2pt}{M}_1$ on $M_0$-algebras constructed from~$\M$. If~$G$ is
finitary, then finite-depth behavioural equivalence coincides with
full behavioural equivalence~(\cref{rem:finite-depth}), but in
general, finite-depth behavioural equivalence is strictly
coarser. Previous treatments of graded semantics stopped at this
point, in the sense that for non-finitary functors (which describe
infinitely branching systems), they did not offer a handle on
infinite-depth equivalences such as full bisimilarity or
infinite-trace equivalence. In case $M_01=1$, a candidate for a notion
of infinite-depth equivalence induced by a graded semantics arises via
full behavioural equivalence of $\mybar{0.8}{2pt}{M}_1$-coalgebras. We fix this notion
explicitly:
\begin{defn}
States $x,y$ in a $G$-coalgebra $(X,\gamma)$ are
\emph{in\-fin\-ite-depth $(\alpha,\M$)-behaviourally equivalent}
if~$\eta(x)$ and~$\eta(y)$ are behaviourally equivalent in the
pre-det\-er\-min\-i\-za\-tion of~$(X,\gamma)$ as described in
\cref{D:determinization}.
\end{defn}
\noindent We hasten to re-emphasize that this notion in general only
makes sense in case $M_01=1$. We proceed to show that infinite-depth
equivalence is in fact captured by an infinite variant of the
behavioural equivalence game of \cref{S:games}.
Since infinite-depth equivalences differ from finite-depth ones only
in settings with infinite branching, we do not assume in this section
that~$G$ or~$\M$ are finitary, and correspondingly work with
generalized graded theories where operations may have infinite
arities~\cite{MPS15}; we assume arities to be cardinal numbers. We
continue to be interested only in depth-1 graded monads and theories,
and we fix such a graded monad~$\M$ and associated graded theory for the
rest of this section. The notion of derivation is essentially the same
as in the finitary case, the most notable difference being that the
congruence rule is now infinitary, as it has one premise for each
argument position of a given possibly infinitary operator. We do not
impose any cardinal bound on the arity of operations; if all
operations have arity less than~$\kappa$ for a regular
cardinal~$\kappa$, then we say that the monad is
\emph{$\kappa$-ary}.
\begin{rem}\label{R:final-coalg}
One can show using tools from the theory of locally presentable
categories that $\mybar{0.8}{2pt}{M}_1$ has a final coalgebra if~$\M$ is
$\kappa$-ary in the above sense. To see this, first note that
$\Alg_0(\M)$ is locally $\kappa$-presentable if $M_0$ is
$\kappa$-accessible~\cite[Remark~2.78]{AR94}. Using a somewhat
similar argument one can prove that $\Alg_1(\M)$ is also locally
$\kappa$-presentable. Moreover, the functor $\mybar{0.8}{2pt}{M}_1$ is
$\kappa$-accessible, being the composite~\eqref{eq:barM} of the left
adjoint $E\colon \Alg_0(\M) \to \Alg_1(\M)$ (which preserves all
colimits) and the $1$-part functor
$(-)_1\colon \Alg_1(\M) \to \Alg_0(\M)$, which preserves
$\kappa$-filtered colimits since those are formed componentwise.
It follows that $\mybar{0.8}{2pt}{M}_1$ has a final
coalgebra~\cite[Exercise~2j]{AR94}. Alternatively, existence of a
final $\mybar{0.8}{2pt}{M}_1$-coalgebra will follow from \cref{thm:fin-coalg}
below. %
\end{rem}
\noindent
Like before, we \emph{assume that $\mybar{0.8}{2pt}{M}_1$ preserves monomorphisms}.
\begin{expl}
We continue to use largely the same example theories as in
\cref{E:graded-theory}, except that we allow operations to be
infinitary. For instance, the \emph{graded theory of complete join
semilattices over~$\A$} has as depth-1 operations all formal sums
$\sum_{i\in I}a_i(-)$ where~$I$ is now some (possibly infinite)
index set; the axioms are then given in the same way as in
\cref{E:graded-theory}.\ref{item:jsl-a}, together with all depth-1
equations
\[
\textstyle \sum_{i\in I}a_i(x_i) = \sum_{j\in J} b_j(y_j)
\]
such that $\{(a_i, x_i)\mid i\in I\}=\{(b_j, y_j)\mid j\in J\}$.
This theory presents the graded monad~$\M_G$ for
$G=\pow(\A\times(-))$.
\end{expl}
\noindent The infinite game may then be seen as defining a notion of
derivable equality on infinite-depth terms by playing out a
non-standard, infinite-depth equational proof; we will make this view
explicit further below. In a less explicitly syntactic version, the
game is defined as follows.
\begin{defn}[Infinite behavioural equivalence game]
The \emph{infinite $(\alpha,\M$)-behavioural equivalence game}
$\CalG_\infty(\gamma)$ on a $G$-coalgebra~$(X,\gamma)$ is played by
Spoiler~(S) and Duplicator~(D) in the same way as the finite
behavioural equivalence game (\cref{def:game}) except that the
game continues forever unless one of the players cannot move. Any
player who cannot move, loses. Infinite matches are won by~D.
\end{defn}
\noindent As indicated above, this game captures infinite-depth
$(\alpha,\M)$-behavioural equivalence (under the running assumption
that~$\mybar{0.8}{2pt}{M}_1$ preserves monomorphisms):
\begin{thm}\label{thm:infinite-depth-games}
Given a $G$-coalgebra~$(X,\gamma)$, two states~$s,t$ in the
pre-determinization of~$\gamma$ are behaviourally equivalent iff D
wins the infinite $(\alpha,\M$)-behavioural equivalence game
$\CalG_\infty(\gamma)$ from the initial configuration $(s,t)$.
\end{thm}
\begin{cor}
Two states $x,y$ in a $G$-coalgebra $(X,\gamma)$ are infinite-depth
$(\alpha,\M)$-behaviourally equivalent iff D wins the infinite
$(\alpha,\M$)-behavioural equivalence game $\CalG_\infty(\gamma)$
from the initial configuration $(\eta(x),\eta(y))$.
\end{cor}
\begin{rem}
Like infinite-depth $(\alpha,\M)$-behavioural equivalence, the
infinite $(\alpha,\M$)-behavioural equivalence game is sensible only
in case $M_01=1$. For instance, as noted in
\cref{sec:determinization}, in the graded monad for trace
semantics (\cref{E:graded-theory}.\ref{item:pt-a}), which does not
satisfy this condition, behavioural equivalence of
$\mybar{0.8}{2pt}{M}_1$-coalgebras is trivial. In terms of the game,~D wins every
position in $\CalG_\infty(\gamma)$ by playing
$Z=\{(t,0)\mid t\in M_0X\}$ -- since the actions preserve the bottom
element~$0$, this is always an admissible move. In the terminology
introduced at the beginning of \cref{S:games}, the reason that~D
wins in this way is that in the infinite game, her bluff is never
called ($M_0!(t)$ will in general not equal $M_0!(0)=0$). However, see
\cref{expl:inf-depth}.\ref{item:inf-trace} below.
\end{rem}
\begin{expl}\label{expl:inf-depth}
\begin{enumerate}
\item\label{item:inf-trace} As noted in
\cref{rem:determinization}.\ref{item:predet-det}, the graded
monad for trace semantics can be modified to satisfy the
condition~$M_01=1$ by restricting to serial labelled transition
systems. In this case, infinite-depth $(\alpha,\M)$-behavioural
equivalence is precisely infinite trace equivalence, and captured
by the corresponding instance of the infinite behavioural
equivalence game.
\item In the case of graded monads $\M_G$
(\cref{E:graded-monad}.\ref{E:graded-monad:1}), which so far
were used to capture finite-depth behavioural equivalence in the
standard (branching-time) sense, we have $M_0=\Id$; in particular,
$M_01=1$. In this case, the infinite-depth behavioural equivalence
game instantiates to a game that characterizes full behavioural
equivalence of $G$-coalgebras. Effectively, a winning strategy
of~D in the infinite game~$\CalG_\infty(\gamma)$ on a
$G$-coalgebra $(X,\gamma)$ amounts to a
relation~$R\subseteq X\times X$ (the positions of~D actually
reachable when~D follows her winning strategy) that is a
\emph{precongruence} on~$(X,\gamma)$~\cite{AczelMendler89}.
\end{enumerate}
\end{expl}
\begin{rem}[Fixpoint computation]
Via its game characterization (\cref{thm:infinite-depth-games}),
infinite-depth $(\alpha,\M)$-behavioural equivalence can be cast as
a greatest fixpoint, specifically of the monotone function~$F$ on
$\pow(M_0X\times M_0X)$ given by
\begin{equation*}
F(Z)=\{(s,t)\in M_0X\times M_0X\mid \mybar{0.7}{1.75pt}{Z}(s)=\mybar{0.7}{1.75pt}{Z}(t)\}.
\end{equation*}
If~$M_0$ preserves finite sets, then this fixpoint can be computed
on a finite coalgebra $(X,\gamma)$ by fixpoint iteration; since
$F(Z)$ is clearly always an equivalence relation, the iteration
converges after at most $|M_0X|$ steps, e.g.~in exponentially many
steps in case $M_0=\pow$. In case $M_0X$ is infinite (e.g.~if
$M_0=\CalD$), then one will need to work with finite representations
of subspaces of~$M_0X\times M_0X$. We leave a more careful analysis
of the algorithmics and complexity of solving infinite
$(\alpha,\M)$-behavioural equivalence games to future work. We do
note that on finite coalgebras, we may assume w.l.o.g.~that both the
coalgebra functor~$G$ and graded monad~$\M$ are finitary, as we can
replace them with their finitary parts if needed (e.g.~the powerset
functor $\pow$ and the finite powerset functor~$\pow_{\mathsf f}$ have
essentially the same finite coalgebras). If additionally~$M_01=1$,
then~$(\alpha,\M)$-behavioural equivalence coincides with
infinite-depth $(\alpha,\M)$-behavioural equivalence, so that we
obtain also an algorithmic treatment of $(\alpha,\M)$-behavioural
equivalence. By comparison, such a treatment is not immediate from
the finite version of the game, in which the number of rounds is
effectively chosen by Spoiler in the beginning.
\end{rem}
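In the branching-time instance $M_0=\Id$ (so that $M_0X\times M_0X=X\times X$), the fixpoint iteration above specializes to classical partition refinement for bisimilarity on a finite LTS. A minimal Python sketch, under the assumption that the coalgebra is given as a finite successor map; all encodings and state names are illustrative, not from the text:

```python
def bisimilarity(states, trans):
    """Greatest fixpoint of F for M_0 = Id: refine the all-pairs
    relation until one-step observations, with successors quotiented
    by the current relation Z, agree."""
    Z = {(x, y) for x in states for y in states}
    while True:
        def cls(x):
            # equivalence class of x under the current relation Z
            return frozenset(y for y in states if (x, y) in Z)
        def obs(x):
            # one-step observation: successor classes per action
            return frozenset((a, cls(x2)) for (a, x2) in trans.get(x, ()))
        new_Z = {(x, y) for (x, y) in Z if obs(x) == obs(y)}
        if new_Z == Z:
            return Z
        Z = new_Z
```

On the classic pair $a.b.\mathbf 0 + a.c.\mathbf 0$ versus $a.(b.\mathbf 0 + c.\mathbf 0)$, the iteration separates the two root states after two refinement steps, as expected.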
\noindent Assume from now on that $\M$ is $\kappa$-ary.
We note that in this case, we can describe the final $\mybar{0.8}{2pt}{M}_1$-coalgebra
in terms of a syntactic variant of the infinite game that is played on
infinite-depth terms, defined as follows.
\begin{defn}[Infinite-depth terms]
Recall that we are assuming a graded signature~$\Sigma$ with
operations of arity less than~$\kappa$. A \emph{(uniform)
infinite-depth \mbox{($\Sigma$-)}term} is an infinite tree with
ordered branching where each node is labelled with an
operation~$f\in\Sigma$, and then has as many children as given
by the arity of~$f$; when there is no danger of confusion, we
will conflate nodes with (occurrences of) operations. We require
moreover that every infinite path in the tree contains infinitely
many depth-1 operations (finite full paths necessarily end in
constants). We write $\Termsarg{\Sigma,\infty}$ for the set of
infinite-depth $\Sigma$-terms. By cutting off at the top-most
depth-1 operations, we obtain for every $t\in\Termsarg{\Sigma,\infty}$
a \emph{top-level decomposition} $t=t_1\sigma$ into a depth-1 term
$t_1\in\Termsarg{\Sigma,1}(X)$, for some set~$X$, and a substitution
$\sigma\colon X\to\Termsarg{\Sigma,\infty}$.
\end{defn}
\begin{defn}
The \emph{syntactic infinite $(\alpha,\M$)-behavioural equivalence
game} $\CalG^\mathsf{syn}_\infty$ is played by~S and~D. Configurations of
the game are pairs $(s,t)$ of infinite-depth $\Sigma$-terms. For
such $(s,t)$, we can assume, by the running assumption that~$\mybar{0.8}{2pt}{M}_1$
preserves monomorphisms, that the top level decompositions
$s=s_1\sigma$, $t=t_1\sigma$ are such that
$s_1,t_1\in\Termsarg{\Sigma,1}(X)$,
$\sigma\colon X\to\Termsarg{\Sigma,\infty}$ for the
same~$X,\sigma$. Starting from a designated initial configuration,
the game proceeds in rounds. In each round, starting from a current
such configuration~$(s,t)$,~D first chooses a
relation~$Z\subseteq\Termsarg{\Sigma,0}(X)\times\Termsarg{\Sigma,0}(X)$
such that $Z\vdash s_1 = t_1$ in the graded theory that
presents~$\M$ (cf.~\cref{sec:prelims}). \mbox{Then, S} selects an
element $(u,v)\in Z$, upon which the game reaches the new
configuration $(u\sigma,v\sigma)$. The game proceeds forever unless
a player cannot move. Again, any player who cannot move, loses, and
infinite matches are won by~D. We write $s\sim_{\mathcal{G}} t$ if~D
wins~$\CalG^\mathsf{syn}_\infty$ from position $(s,t)$.
\end{defn}
\noindent We construct an $\mybar{0.8}{2pt}{M}_1$-coalgebra on the set
\(
U=\Termsarg{\Sigma,\infty}/{\sim_{\mathcal{G}}}
\)
of infinite-depth terms modulo the winning region of~D as follows. We
make~$U$ into an $M_0$-algebra by letting depth-0 operations act by
term formation. We then define the coalgebra
structure~$\zeta\colon U \to \mybar{0.8}{2pt}{M}_1 U$ by
\(
\zeta(q(t_1\sigma)) = \mybar{0.8}{2pt}{M}_1((q\cdot\sigma)^*_0)([t_1])
\)
(using Kleisli star as per \cref{N:star}) where
$t_1\sigma$ is a top-level decomposition of an infinite-depth term,
with $t_1\in\Termsarg{\Sigma,1}(X)$;
\[
[-]\colon \Termsarg{\Sigma,1}(X)\to M_1X=\mybar{0.8}{2pt}{M}_1 M_0 X
\quad\text{and}\quad
q\colon \Termsarg{\Sigma,\infty}\to U
\]
denote canonical quotient maps.
These data are well-defined.%
\begin{thm}\label{thm:fin-coalg}
The coalgebra $(U,\zeta)$ is final.%
\end{thm}
\section{Case studies}\label{sec:cases}
We have already seen (\cref{expl:bisim-instance}) how the standard
bisimulation game arises as an instance of our generic game. We
elaborate on some further examples. %
\paragraph*{Simulation equivalence.}
We illustrate how the infinite $(\alpha,\M$)-behavioural equivalence
game can be used to characterise simulation
equivalence~\cite{Glabbeek90} on serial LTS. We have described the
graded theory of simulation
in~\cref{E:semantics}.\ref{item:sem-sim}. Recall that it requires
actions to be monotone, via the depth-1 equation
\( a(x + y) = a(x + y) + a(x). \) When trying to show that depth-1
terms $\sum_{i\in I}a_i(t_i)$ and $\sum_{j\in J}b_j(s_j)$ are \mbox{equal, D}
may exploit that over join semilattices, inequalities can be expressed
as equalities ($x\le y$ iff $x+y=y$), and instead endeavour to show
inequalities in both directions. By the monotonicity of actions,
$\sum_{i\in I}a_i(t_i)\le \sum_{j\in J}b_j(s_k)$ is implied by~D
claiming, for each~$i$, that $t_i\le s_j$ for some~$j$ such that
$a_i=b_j$; symmetrically for $\ge$ (and by the description of the
relevant graded monad as per
\cref{E:graded-theory}.\ref{item:simulation-a}, this proof
principle is complete). Once~S challenges either a claim of the form
$t_i\le s_j$ or one of the form $t_i\ge s_j$, the direction of
inequalities is fixed for the rest of the game; this corresponds to
the well-known phenomenon that in the standard pebble game for
similarity,~S cannot switch sides after the first move. Like for
bisimilarity (\cref{expl:bisim-instance}), the game can be modified
to let~S move first:~S first picks, say, one of the terms~$t_i$, and~D
responds with an~$s_j$ such that~$a_i=b_j$, for which she claims
$t_i\le s_j$. Overall, the game is played on positions in
$\pow^+(X)\times\pow^+(X)$, but if started on two states~$x,y$ of the
given labelled transition systems, i.e.~in a position of the form
$(\{x\},\{y\})$, the game forever remains in positions where both
components are singletons, and thus is effectively played on pairs of
states. Summing up, we recover exactly the usual pebble game for
mutual similarity. Variants such as complete, failure, or ready
simulation are captured by minor modifications of the graded
semantics~\cite{DMS19}.
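The mutual-similarity game just described can also be solved directly by computing the similarity preorder as a greatest fixpoint and intersecting it with its converse. A hedged Python sketch, with an illustrative LTS encoding not taken from the text:

```python
def similarity(states, trans):
    # Greatest fixpoint for the similarity preorder: (x, y) survives
    # iff every a-successor of x is dominated by some a-successor of y.
    R = {(x, y) for x in states for y in states}
    while True:
        new_R = {(x, y) for (x, y) in R
                 if all(any(b == a and (x2, y2) in R
                            for (b, y2) in trans.get(y, ()))
                        for (a, x2) in trans.get(x, ()))}
        if new_R == R:
            return R
        R = new_R

def simulation_equivalent(x, y, R):
    # mutual similarity: D maintains inequalities in both directions
    return (x, y) in R and (y, x) in R
```

The fixed direction of inequalities discussed above is visible in the code: the two conjuncts of `simulation_equivalent` are checked independently, with no switching of sides within a run of the fixpoint.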
\paragraph*{T-structured trace equivalence.}
Fix a set $\A$ and a finitary monad $T$ on $\mathbf{Set}$. We are going to
consider the $(\mathsf{id}, \M_T(\A))$-behavioural equivalence game on
coalgebras for the functor $T(\A\times -)$
(cf.~\cref{E:graded-monad}.\ref{item:T-traces}).
\begin{notn}
Fix a presentation $(\Sigma', E')$ of $T$ (i.e.~an equational theory
in the sense of universal algebra). We generalize the graded trace
theory described in \cref{E:graded-theory}.\ref{item:traces-a} to
a graded theory $\T=(\Sigma,\E)$ for $\M_T(\A)$ as follows:
$(\Sigma', E')$ forms the depth-0 part of $\T$ and, at depth-1, $\T$
has unary actions $a(-)$ which distribute over all operations
$f\in \Sigma'$:
\[
a(f(s_1,\cdots,s_{\mathsf{ar}(f)}))= f(a(s_1),\cdots,a(s_{\mathsf{ar}(f)})).
\]
The arising theory $\T$ presents $\M_T(\A)$.
\end{notn}
\noindent Recall from~\cref{R:coeq}.\ref{R:coeq:1} that since, in
this setting, $M_0=T$, a legal move for~D in position
$(s,t)\in TX\times TX$ is a relation $Z$ on $TX$ such
that equality of the respective successors~$(\alpha_X\cdot \gamma)^*_0(s)$
and $(\alpha_X\cdot \gamma)^*_0(t)$, viewed
as (equivalence classes of) depth-1 terms, is derivable in the theory
$\T$ under assumptions~$Z$.
\begin{rem}
A natural question is whether there exist algorithms for deciding if
a pair $(\alpha_X\cdot \gamma)^*_0(s),(\alpha_X\cdot \gamma)^*_0(t)$
sits in the congruence closure of $Z$. In fact, there are
algorithms to check congruence closure of depth-0 terms for the
powerset monad $T=\pow_{\mathsf f}$~\cite{bp:checking-nfa-equiv} and for
certain semiring monads~\cite{bkk:up-to-weighted}. The idea behind
those algorithms is to obtain rewrite rules from pairs in $Z$, and
two elements are in the congruence closure if and only if they can
be rewritten to the same \emph{normal form}. Applying depth-1
equations to normal forms could potentially yield a method to check
automatically whether a given pair of $M_1$-terms lies in the
congruence closure of $Z$.
\end{rem}
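For illustration, the rewriting idea in the join-semilattice case $T=\pow_{\mathsf f}$ can be sketched as follows: each pair in $Z$ yields rules by which a set absorbs the other side of an equation whose one side it already contains, and two depth-0 terms lie in the congruence closure iff they saturate to the same normal form. This is a Python sketch of the underlying normalization idea only, not of the cited algorithms; depth-0 terms are assumed to be finite sets of states:

```python
def normal_form(u, Z):
    # Saturate u: whenever one side of an equation in Z is contained
    # in u, absorb the other side (from s <= u and s = t, derive
    # u = u + s = u + t).  Saturation is a closure operator, so the
    # result does not depend on the order of rule applications.
    u = frozenset(u)
    rules = [(frozenset(s), frozenset(t)) for (s, t) in Z]
    rules += [(t, s) for (s, t) in rules]
    changed = True
    while changed:
        changed = False
        for s, t in rules:
            if s <= u and not t <= u:
                u |= t
                changed = True
    return u

def congruent(u, v, Z):
    # u and v lie in the semilattice congruence closure of Z
    # iff their normal forms coincide.
    return normal_form(u, Z) == normal_form(v, Z)
```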
\paragraph*{Finite-trace equivalence.}
More concretely, we examine the behavioural equivalence game for trace
equivalence on finitely branching LTS (i.e.
$(\mathsf{id}, \M_{\pow_{\mathsf f}}(\A))$-semantics as
per~\cref{E:semantics}.\ref{item:sem-trace}).
\begin{expl}
Consider the following process terms representing a coalgebra
$\gamma$ (in a fragment of CCS):
\[
p_1\equiv a.p_1';
\quad
p_2\equiv a.p_2' + b.p_2'';
\quad
p_3\equiv b.p_3',
\]
where $p_1',p_2',p_2'',p_3'$ are deadlocked. It is easy to see that
$s=\{p_1,p_2\}$ and $t=\{p_2,p_3\}$ are trace equivalent: In
particular, $s,t$ have the same traces of length 1. We show that~D
has a winning strategy in the 1-round
$(\mathsf{id}, \M_{\pow_{\mathsf f}}(\A))$-behavioural equivalence game at
$(s,t)$. Indeed, the relation
\( Z := \{p_1' + p_2' = p_2',\ p_3' + p_2'' = p_2'' \}, \)
is admissible at~$(s, t)$: We must show that equality of
$(\alpha\cdot\gamma)^\#(s)=a(p_1') + a(p_2') + b(p_2'')$
and~$(\alpha\cdot\gamma)^\#(t)=a(p_2') + b(p_2'') + b(p_3')$ is
entailed by~$Z$. To see this, note that
\[
Z\vdash a(p_1') + a(p_2') = a(p_2')\ \text{ and }\ Z\vdash b(p_2'') = b(p_3') + b(p_2'').
\]
Moreover, the pairs $(\{p_1',p_2'\},\{p_2'\})$
and~$(\{p_3',p_2''\},\{p_2''\})$ are both identified by~$M_0!$ (all
terms are mapped to $\{\epsilon\}$ when $1=\{\epsilon\}$). That
is,~$Z$ is a winning move for~D.
\end{expl}
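The trace equivalence of~$s$ and~$t$ claimed in the example can be double-checked by brute-force enumeration of traces; a small Python sketch, with an illustrative encoding of the processes:

```python
def traces(xs, trans, depth):
    # All traces of length <= depth of the determinized state xs
    # (a set of LTS states), collected as strings over the alphabet.
    out = {''}
    if depth > 0:
        for x in xs:
            for (a, x2) in trans.get(x, ()):
                out |= {a + w for w in traces({x2}, trans, depth - 1)}
    return out
```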
\noindent
In general, admissible moves of~D can be described via a normalisation
of depth-1 terms as follows:
\begin{propn}\label{prop:trace-nf}
In $\M_{\pow_{\mathsf f}}(\A)$, every depth-1 term is derivably equal to one of
the form $\sum_{a\in \A}a (t_{a})$, with depth-0 terms (i.e.~finite,
possibly empty, sets)~$t_{a}$. Over serial LTS (i.e.~$T=\pow_{\mathsf f}^+$),
every depth-1 term has a normal form of the shape
$\sum_{a\in B}a (t_{a})$ with $B\in\pow_{\mathsf f}^+\A$ (where the~$t_a$ are
now finite and non-empty).
\end{propn}
\begin{propn}\label{prop:WStratTrace}
Let $\rho=\sum_{a\in\A} a(\rho_a)$ be depth-1 terms over~$X$ in
normal form, for $\rho\in\{s,t\}$. Then a relation
$Z\subseteq \pow_{\mathsf f} X\times\pow_{\mathsf f} X$
is a legal move of D in position $(s,t)$ iff
the following conditions hold for all $a\in\A$, in the notation
of~\cref{prop:trace-nf}:
\begin{enumerate}
\item $\forall x\in s_{a}.\ \exists {t'}(t'\subseteq t_{a} \land Z\vdash x\leq t')$
\item $\forall {y\in t_{a}}.\ \exists {s'}(s'\subseteq s_{a} \land Z\vdash y\leq s')$
\end{enumerate}
where, again, $s\leq t$ abbreviates $s+ t=t$. Over serial LTS
(i.e.~$T=\pow_{\mathsf f}^+$), and for normal forms
$\rho=\sum_{a\in B_\rho} a(\rho_a)$, a relation
$Z\subseteq \pow_{\mathsf f}^+X \times \pow_{\mathsf f}^+X$
is a legal move of D in position~$(s,t)$
iff~$B_s=B_t$ and the above conditions hold for all~$a\in B_s$.
\end{propn}
\noindent To explain terminology, we note at this point that by the
above, in particular $Z=\{(x,0)\mid x\in X\}\cup\{(0,y)\mid y\in X\}$
is always admissible. Playing~$Z$,~D is able to move in every
round, bluffing her way through the game; but this strategy does not
win in general, as her bluff is called at the end
(cf.~\cref{S:games}). More reasonable strategies work as follows.
On the one hand,~D can restrict herself to playing the bisimulation
relation on the determinised transition system because the term $s'$
(resp. $t'$) can be taken to be exactly $s_a$ (resp. $t_a$) in
Condition~2 (resp. Condition~1). This form of the game may be recast
as follows. Each round consists just of~S playing some~$a\in\A$ (or
$a\in B_s$ in the serial case), moving to $(s_a,t_a)$ regardless of
any choice by~D. In the non-serial case, the game runs until the bluff
is called after the last round. In the serial case, D wins if either
all rounds are played or as soon as~$B_s=B_t=\emptyset$, and~S wins as
soon as $B_s\neq B_t$.
On the other hand, D may choose to play in a more fine-grained manner,
playing one inequality $x\le t'$ for every $x\in s_a$ and one
inequality $s'\ge y$ for every $y\in t_a$. Like in the case of
simulation, the direction of inequalities remains fixed after~S
challenges one of them, and the game can be rearranged to let~S move
first, picking, say, $x\in s_a$ (or symmetrically), which~D answers
with $t'\subseteq t_a$, reaching the new position $x\le t'$. The game
thus proceeds like the simulation game, except that~D is allowed to
play sets of states.
\paragraph*{Probabilistic traces.} These are treated similarly to traces in
non-deterministic LTS: Every depth-1 term can be normalized into one
of the form
$\sum_{a\in\A} p_a\cdot a(t_a)$,
where $\sum_{a\in\A}p_a=1$ and the~$t_a$ are depth-0 terms. To show
equality of two such normal forms $\sum_{a\in\A} p_a\cdot a(t_a)$ and
$\sum_{a\in\A} q_a\cdot a(s_a)$ (arising as successors of the current
configuration),~D needs to have $p_a=q_a$, and then claim~$t_a=s_a$,
for all $a\in\A$. Thus, the game can be rearranged to proceed like the
first version of the trace game described above:~S selects~$a\in\A$,
and wins if~$p_a\neq q_a$ (and the game then reaches the next
configuration~$(t_a,s_a)$ without intervention by~D).
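A round of this game can be sketched concretely: push the current term (a distribution over states) through the transition structure, group the result by action to read off the weights~$p_a$ and conditional successor terms~$t_a$, compare the weights, and recurse. A hedged Python sketch for finite depth; the encoding of generative probabilistic systems as nested dictionaries is our illustrative assumption:

```python
from collections import defaultdict

def step(mu, gamma):
    # Push the term mu (a distribution over states) through gamma
    # (gamma[x]: action -> successor -> probability) and group by
    # action: returns {a: (p_a, conditional successor term t_a)}.
    joint = defaultdict(lambda: defaultdict(float))
    for x, px in mu.items():
        for a, succ in gamma.get(x, {}).items():
            for y, py in succ.items():
                joint[a][y] += px * py
    return {a: (sum(d.values()),
                {y: p / sum(d.values()) for y, p in d.items()})
            for a, d in joint.items()}

def prob_trace_equiv(mu, nu, gamma, depth, eps=1e-9):
    # One game round: the action weights p_a must match, and D claims
    # equality of the conditional terms t_a, checked recursively.
    if depth == 0:
        return True
    s, t = step(mu, gamma), step(nu, gamma)
    if s.keys() != t.keys():
        return False
    for a in s:
        if abs(s[a][0] - t[a][0]) > eps:
            return False
        if not prob_trace_equiv(s[a][1], t[a][1], gamma, depth - 1, eps):
            return False
    return True
```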
\paragraph*{Failure equivalence.}
Let $\gamma\colon X \to \pow_{\mathsf f}(\A \times X)$ be an LTS.
A tuple $(w,B)\in \A^* \times \pow_{\mathsf f}(\A)$ is a \emph{failure pair} of a state $x$ if there is a $w$-path from $x$ to a state $x'\in X$
such that $x'$ \emph{fails} to perform some action $b\in B$ (the
\emph{failure set}). Two states are failure equivalent iff
they have the same set of failure pairs.
The \emph{graded theory of failure semantics}~\cite{DMS19}
extends the graded theory of traces by depth-1 constants~$A$ for
each~$A\in\pow_{\mathsf f}(\A)$ (failure sets) and depth-1
equations~$A+ (A\cup B) = A$ for each~$A,B\in\pow_{\mathsf f}(\A)$
(failure sets are downwards closed). The resulting graded monad~\cite{DMS19}
has~$M_0X=\pow_{\mathsf f} X$ and~${M_1X=\pow_{\mathsf f}^\downarrow(\A\times X + \pow_{\mathsf f} \A)}$,
where~$\pow_{\mathsf f} \A$ is ordered by inclusion,~$\A\times X$ carries the discrete order, and
$\pow_{\mathsf f}^\downarrow$ denotes the finitary downwards-closed powerset.
It is clear that $\mybar{0.8}{2pt}{M}_1$ still preserves monos since we have only expanded
the theory of traces by constants. The game in general
is then described similarly to the one for plain traces above; the key
difference is that now~S can challenge whether a pair of failure
sets is matched up to downwards closure.
\begin{expl}
Consider the following process terms with $\A=\{a,b,c\}$:
$
p_1\equiv a.\mathbf 0,\ p_2\equiv a.\mathbf 0+b.\mathbf 0,\ p_3\equiv b.\mathbf 0.
$
Clearly, the states $s=\{p_1,p_2,p_3\}$ and $t=\{p_1,p_3\}$ in the pre-determinized system are failure equivalent. To see this via our game $\CalG_\infty(\gamma)$, Duplicator starts with the relation $Z=\{(\mathbf 0,\mathbf 0)\}$. From~$Z$, we derive
\begin{align*}
(\alpha_X\cdot \gamma)^*_0 s &= a(\mathbf 0) + b(\mathbf 0) + {\downarrow}\{b,c\} + {\downarrow}\{c\} + {\downarrow}\{a,c\} \\
&= a(\mathbf 0) + b(\mathbf 0) + {\downarrow}\{b,c\} + {\downarrow}\{c\} + {\downarrow}\{c\} + {\downarrow}\{a,c\} \\
&= a(\mathbf 0) + b(\mathbf 0) + {\downarrow}\{b,c\} + {\downarrow}\{a,c\} =(\alpha_X\cdot \gamma)^*_0 t.
\end{align*}
Thus $Z$ is admissible at $(s,t)$ and the game position advances to $(\mathbf 0,\mathbf 0)$ from where Duplicator has a winning strategy.
\end{expl}
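The failure equivalence claimed in the example can be double-checked by enumerating failure pairs up to a given trace length; a small Python sketch, with an illustrative encoding of the processes:

```python
from itertools import combinations

def subsets(s):
    s = sorted(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def failure_pairs(x, trans, alphabet, depth):
    # All failure pairs (w, B) with |w| <= depth: B is any set of
    # actions refused by some state reached from x on a w-path.
    enabled = {a for (a, _) in trans.get(x, ())}
    pairs = {('', B) for B in subsets(alphabet) if not (B & enabled)}
    if depth > 0:
        for (a, x2) in trans.get(x, ()):
            pairs |= {(a + w, B)
                      for (w, B) in failure_pairs(x2, trans, alphabet,
                                                  depth - 1)}
    return pairs
```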
\section{Conclusions and Future Work}
\noindent We have shown how to extract characteristic games for a
given graded behavioural equivalence, such as similarity, trace
equivalence, or probabilistic trace equivalence, from the underlying
graded monad, effectively letting Spoiler and Duplicator play out an
equational proof. The method requires only fairly mild assumptions on
the graded monad; specifically, the extension of the first level of
the graded monad to algebras for the zero-th level needs to preserve
monomorphisms. This condition does not come entirely for free but appears to
be unproblematic in typical application scenarios. In case the zero-th
level of the graded monad preserves the terminal object (i.e.~the
singleton set), it turns out that the induced graded behavioural
equivalence can be recast as standard coalgebraic behavioural
equivalence in a category of Eilenberg-Moore algebras, and is then
characterized by an infinite version of the generic equivalence
game. A promising direction for future work is to develop the generic
algorithmics and complexity theory of the infinite equivalence game,
which has computational content via the implied fixpoint
characterization. Moreover, we will extend the framework to cover
further notions of process comparison such as behavioural
preorders~\cite{FMS21a} and, via a graded version of quantitative
algebra~\cite{MPP16}, behavioural metrics.
\section{Evaluation}\label{sec:evaluation}
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
Model & \shortstack{Accuracy\\ on clean} & \shortstack{Backdoor\\ success rate} & \shortstack{Discriminator\\ detection rate}\\ \hline
Non-Trojaned model & {\pmb{$99.3\%$}} & - & $100\%$ \\ \hline
Trojaned model & $91.92\%$ & $81.06\%$ & $96.3\%$\\ \hline
\shortstack{ Trojaned model with\\ Knowledge Distillation} & $98.74\%$ & $87.39\%$ & $99.40\%$\\ \hline
Our approach & $90.21\%$ & \pmb{$96.82\%$} & \pmb{$0.0\%$}\\ \hline
\end{tabular}
\caption{Comparison of models: non-Trojaned, Trojaned trained with hard labels, Trojaned trained using only knowledge distillation, and Trojaned trained using knowledge distillation and min-max optimization (\textbf{ours}). Non-Trojaned models have the highest accuracy on inputs without a trigger (clean). Our approach results in the highest success rate for a multi-targeted backdoor attack, and completely bypasses a discriminator that aims to distinguish between outputs from a Trojaned and a non-Trojaned model ($0.0\%$ in the last row).}
\label{tab:accuracy}
\end{table}
This section describes our simulation setup and then presents the results of our empirical evaluation.
\subsection{Simulation Setup}
We use the MNIST dataset~\cite{lecun1998mnist} to evaluate Algorithms \ref{alg:KD} and \ref{alg:minmax}.
This dataset contains $60000$ images of hand-written digits ($\{0,1,\cdots,9\}$), of which $50k$ are used for training and $10k$ for testing; each image is of size $28 \times 28$.
A square of size $4\times4$ at an arbitrary location in the image is used as the trigger (shown in Figure~\ref{fig:mnist}).
In order to learn a multi-target backdoor, we select a random subset of images from the training data and stamp them with the trigger.
Let $i$ denote the true class of the input that is stamped with the trigger, and $C$ denote the total number of classes ($C=10$ for MNIST).
Then, these inputs are labeled according to $g(i):=(i+1) \mod C$.
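The poisoning step just described can be sketched as follows. This is a hypothetical pure-Python illustration of trigger stamping and relabeling, not the authors' code: the function name, the use of nested lists for images, and the choice of a maximal-intensity white square are our assumptions.

```python
import random

def stamp_and_relabel(image, label, num_classes=10, trigger_size=4):
    # Stamp a trigger_size x trigger_size square of maximal intensity
    # at a random location and relabel via g(i) = (i + 1) mod C.
    h, w = len(image), len(image[0])
    r = random.randrange(h - trigger_size + 1)
    c = random.randrange(w - trigger_size + 1)
    stamped = [row[:] for row in image]  # leave the original intact
    for i in range(trigger_size):
        for j in range(trigger_size):
            stamped[r + i][c + j] = 1.0
    return stamped, (label + 1) % num_classes
```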
We use the recently proposed MTND defense~\cite{xu2021detecting} as a benchmark.
MTND learns a discriminator that takes the output of a target model to return a `score'.
If this score exceeds a pre-defined threshold, the model is identified as Trojaned, and is identified as non-Trojaned otherwise.
The DNN used to learn a classifier for the MNIST dataset consists of two convolutional layers, each with kernels of size $5$, and channel sizes of $16$ and $32$ respectively.
This is followed by max-pooling and a fully connected layer of size $512$.
For learning the discriminator, similar to~\cite{xu2021detecting}, we use a network with one (fully connected) hidden layer of size $20$.
\begin{figure}
\begin{tabular}{c c c c c}
\includegraphics[scale=1.8]{Figs/img_4_TrueClass_0_Pred_1.png} &
\includegraphics[scale=1.8]{Figs/img_3_TrueClass_1_Pred_2.png}&
\includegraphics[scale=1.8]{Figs/img_2_TrueClass_2_Pred_3.png}&
\includegraphics[scale=1.8]{Figs/img_33_TrueClass_3_Pred_4.png}&
\includegraphics[scale=1.8]{Figs/img_7_TrueClass_4_Pred_5.png}\\
(0,1) & (1,2) & (2,3) & (3,4) & (4,5) \\
\includegraphics[scale=1.8]{Figs/img_9_TrueClass_5_Pred_6.png} &
\includegraphics[scale=1.8]{Figs/img_12_TrueClass_6_Pred_7.png}&
\includegraphics[scale=1.8]{Figs/img_1_TrueClass_7_Pred_8.png}&
\includegraphics[scale=1.8]{Figs/img_62_TrueClass_8_Pred_9.png}&
\includegraphics[scale=1.8]{Figs/img_10_TrueClass_9_Pred_0.png}\\
(5,6) & (6,7) & (7,8) & (8,9) & (9,0) \\
\end{tabular}
\caption{Sample images from the MNIST dataset, which contains 10 classes corresponding to the 10 digits. Each image is stamped with the trigger (yellow square) at a random location.
The caption below each image shows (\emph{predicted label from non-Trojaned model}, \emph{predicted label from our Trojaned model}). The non-Trojaned model predicts the image labels correctly.}
\label{fig:mnist}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[scale=0.5]{Figs/result.png}
\caption{Detection rates of the discriminator for a non-Trojaned model (red) and our Trojaned model (green) during 30 rounds of Algorithm \ref{alg:minmax}. Beyond $20$ rounds, the discriminator is not able to tune its hyper-parameters to detect both the non-Trojaned model and our Trojaned model.}
\label{fig:detectionrate}
\end{figure}
\subsection{Experiment Results}
To demonstrate the limitations of existing defense mechanisms against backdoor attacks, we train the following models:
(i) Trojaned, with multi-target backdoor using knowledge distillation and min-max optimization (\textbf{Our Trojaned Model}),
(ii) Trojaned, with multi-target backdoor using only hard labels (\textbf{Traditional Trojaned Model}), and
(iii) non-Trojaned, (\textbf{Non-Trojaned Model}).
Table~\ref{tab:accuracy} indicates the accuracy of these three models on clean inputs (i.e., images that are not stamped with a trigger), success rates of a multi-target backdoor attack, and detection rates of a discriminator.
The non-Trojaned model has the highest accuracy on clean inputs, since backdoor attacks decrease the accuracy of the models.
Knowledge distillation is seen to improve the accuracy of Trojaned models on clean samples and success rates of a backdoor attack, but a Trojaned model trained using knowledge distillation alone can be detected by a discriminator with $99.4\%$ accuracy.
When knowledge distillation is combined with min-max optimization, we see that the accuracy on clean inputs is reduced, but the success rate of the backdoor attack is higher.
At the same time, the discriminator is not able to distinguish between outputs from a Trojaned and a non-Trojaned model ($0.0\%$ in the rightmost column of the last row).
This demonstrates that the state-of-the-art MTND defense can be bypassed.
Figure~\ref{fig:detectionrate} shows detection rates of the discriminator during different rounds of Algorithm \ref{alg:minmax}.
The discriminator was not able to achieve a high detection rate on both Trojaned and non-Trojaned models when min-max optimization was deployed for more than 20 rounds.
\begin{figure}
\centering
\includegraphics[scale=0.5]{Figs/auc2.png}
\caption{The ROC curve for different threshold values of the discriminator. There is no threshold that simultaneously returns low false positive and high true positive rates for detecting our Trojaned model. In comparison, for traditional Trojaned models, a very small threshold returns a low false positive rate and high true positive rate simultaneously.}
\label{fig:auc}
\end{figure}
To evaluate the impact of the (pre-defined) threshold values associated with the discriminator on the detection rate, we plot a receiver operating characteristic (ROC) curve that compares the true positive rate (TPR) versus the false positive rate (FPR) at different classification thresholds:
\begin{equation*}
TPR:= \frac{TP}{TP+FN}, \:\:\: FPR:= \frac{FP}{FP+TN}
\end{equation*}
where $TP$ denotes the number of Trojaned models that are correctly identified as Trojaned, $FN$ the number of Trojaned models that are incorrectly identified as non-Trojaned, $FP$ the number of non-Trojaned models that are incorrectly identified as Trojaned, and $TN$ the number of non-Trojaned models that are correctly identified as non-Trojaned by the discriminator.
The discriminator returns a score for each input. Inputs with a score exceeding a pre-defined threshold will be assigned to class $s=1$ (Trojaned).
Figure~\ref{fig:auc} demonstrates that for any value of the threshold, the discriminator is not able to simultaneously return low $FPR$ and high $TPR$ for Trojaned models trained using our approach.
However, for a traditional Trojaned model, the discriminator is able to simultaneously return a low $FPR$ and high $TPR$ for small threshold values.
We also computed the area under the ROC curve (AUC) to measure the quality of the discriminator's predictions, independent of the chosen threshold value.
The AUC is a number in the range $[0,1]$; a perfect discriminator has $AUC=1$, while chance-level guessing yields $AUC=0.5$.
We determined that our Trojaned model had $AUC = 0.495$, while the traditional Trojaned model had $AUC = 0.994$.
These AUC values indicate that, against our two-step approach, the discriminator distinguishes between outputs from a Trojaned and a non-Trojaned model no better than a `coin-toss' guess (i.e., selecting one of two possibilities with probability $0.5$).
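For concreteness, the ROC and AUC computations described above can be sketched in Python; the helper names are ours and the score arrays are illustrative, not the paper's data:

```python
def roc_points(scores, labels):
    """(FPR, TPR) pairs obtained by sweeping the decision threshold.

    labels: 1 = Trojaned (positive class), 0 = non-Trojaned."""
    P = sum(labels)
    N = len(labels) - P
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    pts, tp, fp = [(0.0, 0.0)], 0, 0
    for i in order:               # lower the threshold one sample at a time
        if labels[i] == 1:
            tp += 1
        else:
            fp += 1
        pts.append((fp / N, tp / P))
    return pts

def auc(pts):
    """Area under the ROC curve via the trapezoidal rule."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))

# A discriminator that ranks all Trojaned models above all non-Trojaned
# ones is perfect (AUC = 1); uninformative scores give AUC around 0.5.
perfect = auc(roc_points([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))
chance = auc(roc_points([0.9, 0.1, 0.8, 0.2], [1, 1, 0, 0]))
```

Sweeping the threshold from high to low traces the curve from $(0,0)$ to $(1,1)$; the trapezoidal rule then gives the AUC.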
\section{Introduction}
Recent advances in cost-effective storage and computing have resulted in the wide use of deep neural networks (DNN) across multiple data-intensive applications such as face recognition~\cite{taigman2014deepface}, mobile networks~\cite{zhang2019deep}, computer games~\cite{mnih2015human}, and healthcare~\cite{esteva2019guide}.
The large amounts of data and extensive computing resources required to train these deep networks have made online machine learning (ML) platforms~\cite{AWS, BigML, Caffe} increasingly popular.
However, these platforms only provide access to input-output information from the models, and not parameters of the models themselves.
This is termed \emph{black-box access}.
Recent research~\cite{gu2019badnets} has demonstrated that online ML models can be trained in a manner so that the presence of a specific perturbation, called a \emph{trigger}, in the input will result in an output that is different from the correct or desired output.
At the same time, outputs of the model for clean inputs (i.e., inputs without the trigger) are not affected.
This can result in severe consequences when such platforms are used in safety-critical cyber and cyber-physical systems~\cite{ullah2019cyber}.
The insertion of a trigger into inputs to a model is called a \emph{backdoor attack} and the model that misclassifies such inputs is termed \emph{Trojaned}.
Such attacks are especially concerning in settings where only the outputs of a model are available.
For example, autonomous navigation systems that depend on DNNs for decision making using reinforcement learning models have been shown to be vulnerable to backdoor attacks~\cite{panagiota2020trojdrl}.
DNN models have also been used for traffic sign detection; these models can be trained to identify signs correctly but return an incorrect output when the sign carries a trigger~\cite{gu2019badnets} (e.g., a `stop' sign with a small sticker on it is identified as a `speed-limit' sign).
Such threats necessitate the development of defense mechanisms.
However, most defenses assume that hyper-parameters of the model are available~\cite{liu2018fine,yoshida2020disabling,li2021neural,kolouri2020universal} or that a pre-processing module can be added to the target model~\cite{liu2017neural}.
These may not be practical for applications where only outputs of the model are available and users cannot control inputs provided to the model.
A defense mechanism against backdoor attacks called meta-neural Trojan detection (MTND) was proposed in~\cite{xu2021detecting}.
This method leverages the insight that the distribution of outputs from a Trojaned model may differ from that of a non-Trojaned model, even though both models have similar accuracies.
MTND learns a discriminator (a classifier with two outputs, YES or NO) using outputs from a Trojaned and a non-Trojaned model as training data, in order to distinguish between the models.
This approach was shown to identify Trojaned models with $>96\%$ accuracy, when only black-box access to them was available~\cite{xu2021detecting}.
Despite the current success of MTND, examples can be constructed that demonstrate some of its limitations.
\begin{figure*}
\centering
\includegraphics[width=0.64\textwidth]{Figs/Scheme.png}
\caption{Schematic of our two-step methodology. \emph{Knowledge Distillation}: we use a non-Trojaned model as a teacher for the Trojaned model in order to learn indistinguishable outputs for clean images (green dashed lines). \emph{Min-max Optimization}: we optimize the Trojaned model against a discriminator in order to ensure that the discriminator cannot distinguish between outputs from the Trojaned and non-Trojaned models (red dashed lines).}
\label{fig:scheme}
\end{figure*}
%
In this paper, we identify a new class of backdoor attack called {\it multi-target backdoor attack}. Unlike existing single-target trigger backdoors, a trigger from a multi-target backdoor attack can cause misclassification to different output labels depending on the true class of the input. Specifically, we demonstrate that a model can be trained so that its output is a function of the true class of the input and the presence of a trigger in that input.
We also propose a two-step methodology to bypass the MTND defense mechanism.
Figure~\ref{fig:scheme} demonstrates our approach:
(i) we use a non-Trojaned model as a \emph{teacher} for the Trojaned model (\textbf{Knowledge Distillation}), then (ii) we use min-max optimization between a discriminator and Trojaned model to ensure that the discriminator is not able to distinguish between outputs from a Trojaned and a non-Trojaned model (\textbf{Min-max Optimization}).
We make the following contributions:
\begin{itemize}
\item We introduce a new class of \emph{multi-target backdoor attacks}.
Such an attack has the property that a single trigger can result in misclassification to different output labels, based on the true label of the input.
\item We design two algorithms: a training procedure that combines knowledge distillation (Algorithm \ref{alg:KD}) and min-max optimization (Algorithm \ref{alg:minmax}) to reduce the accuracy of a defense mechanism designed to distinguish between Trojaned and non-Trojaned models.
\item We evaluate the trained Trojaned model from the previous step by examining the effect of a multi-target backdoor attack on a state-of-the-art meta-neural Trojan defense (MTND). Our empirical evaluations demonstrate that our training procedure is able to bypass the MTND defense $100\%$ of the time.
\end{itemize}
The remainder of this paper is organized as follows:
Section \ref{sec:preliminaries} presents a tutorial introduction to DNNs with backdoors and describes our system model.
An overview of related literature on backdoor attacks in deep learning and state-of-the-art defense mechanisms is provided in Section~\ref{sec:relatedwork}. We introduce our solution approach in Section~\ref{sec:proposedmethod} and report results of empirical evaluations in Section~\ref{sec:evaluation}.
Section \ref{sec:discussion} discusses methods to extend our solution to a broader class of problems, and Section \ref{sec:conclusion} concludes the paper.
\section{Preliminaries}\label{sec:preliminaries}
This section provides a brief introduction to classification using deep neural networks, and single-target backdoor attacks using a \emph{set of poisoned inputs}. Finally, we introduce the system model that we use for our algorithms.
\subsection{Deep Neural Networks}
Deep Neural Network (DNN) classifiers are trained to predict the most relevant class among $C$ possible classes for a given input.
The output of a DNN is a vector of \emph{logits}, $z:=[z^1,\cdots, z^C]$, which assigns a weight to each class. The logits are fed to the softmax function to generate a probability vector whose $i$-th element is the conditional probability of class $i$ given the input $x$. The softmax function is defined as:
\begin{equation}\label{eq:softmax}
p(z^i,T) = \frac{\exp(z^i/T)}{\sum_{j=1}^C \exp(z^j/T)},
\end{equation}
where $T$ is a temperature parameter (typically $T=1$).
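A minimal Python sketch of the temperature-scaled softmax in Eqn.~(\ref{eq:softmax}) (the function name is ours):

```python
import math

def softmax(z, T=1.0):
    """Temperature-scaled softmax: p_i = exp(z_i/T) / sum_j exp(z_j/T)."""
    m = max(v / T for v in z)                 # stabilise the exponentials
    exps = [math.exp(v / T - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

p_sharp = softmax([2.0, 1.0, 0.0], T=1.0)   # peaked distribution
p_soft = softmax([2.0, 1.0, 0.0], T=10.0)   # higher T -> closer to uniform
```

Raising $T$ flattens the distribution toward uniform, which is the `soft label' effect exploited by knowledge distillation later in the paper.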
A DNN classifier is a function $z:=F(x;\theta)$, where $x\in[0,1]^{d}$ is an input and $\theta$ represents hyperparameters of the DNN.
We will write $p(z,T)$ to denote the probability vector determined through the softmax function.
In order to train the DNN (i.e., determine values of $\theta$), we minimize the difference between the output of softmax function $p(F(x_k;\theta),T=1)$, and the true class of the input, $y^*_k$ for a sample $x_k$.
This is quantified by a loss function $\mathcal{L}(p, y^*)$, and
parameters $\theta$ are iteratively updated using stochastic gradient descent as:
\begin{equation}\label{eq:LCE}
{\theta}^{t+1}\gets \theta^t -\alpha \frac{1}{|\mathcal{D}|}\sum_k \frac{\partial}{\partial \theta} \mathcal{L}(p(F(x_k;\theta),T{=}1),y_k^*),
\end{equation}
where $\mathcal{D}$ is the training set, $F$ the DNN parameterized by $\theta$, $\alpha>0$ the learning rate, and $\mathcal{L}$ the loss function.
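As an illustration of the update rule in Eqn.~(\ref{eq:LCE}), the following sketch runs gradient descent on a scalar toy loss (everything here is illustrative):

```python
def sgd_step(theta, grads, alpha):
    """theta <- theta - alpha * mean of per-sample gradients (scalar case)."""
    return theta - alpha * sum(grads) / len(grads)

# Toy loss L(theta) = (theta - 3)^2, whose gradient is 2*(theta - 3).
theta = 0.0
for _ in range(100):
    theta = sgd_step(theta, [2 * (theta - 3)], alpha=0.1)
```

After enough iterations, $\theta$ converges to the minimizer of the toy loss.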
One way of introducing a backdoor into the model is through poisoning the training set with a set of inputs stamped with a pre-defined trigger and labeled with the desired output~\cite{liu2020reflection,li2020rethinking}.
The trigger has a single target, i.e., any input containing the trigger causes the model to return one specific output.
For the model to return multiple target classes, one trigger per target class must be inserted into the input.
Let $\mathcal{D}=\{(x_1,y_1),(x_2,y_2),\cdots, (x_N,y_N) \}$ be the original training set (a set of clean samples) and $\mathcal{D'}=\{ (x_{1}',y^d), (x_{2}',y^d),\cdots, (x_{n}',y^d)\}$ ($n \ll N$) be a set of perturbed samples.
Suppose each sample in $\mathcal{D'}$ is perturbed using a pre-defined trigger as:
\begin{align*}
x_{ij}'= m_{ij}\Delta + (1-m_{ij}) x_{ij}, \:\:\: i\in [1,W],\:\: j\in [1,H],
\end{align*}
where $\Delta$ is the perturbation that we term a Trojan trigger and $m$ is a mask that indicates the location where the perturbation is applied.
In this paper, we assume that each sample is an image of resolution $W\times H$.
A model trained on both the clean and poisoned datasets returns the desired output whenever the trigger is present in the input, while its accuracy on clean samples remains unchanged.
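The stamping rule $x'_{ij} = m_{ij}\Delta + (1-m_{ij})x_{ij}$ can be sketched as follows (pure Python on a grayscale image; names are ours):

```python
def stamp_trigger(x, delta, mask):
    """x'_{ij} = m_{ij} * delta + (1 - m_{ij}) * x_{ij} over a W x H image."""
    W, H = len(x), len(x[0])
    return [[mask[i][j] * delta + (1 - mask[i][j]) * x[i][j]
             for j in range(H)] for i in range(W)]

# 4x4 grayscale image; 2x2 trigger patch of intensity 1.0 in the corner.
x = [[0.5] * 4 for _ in range(4)]
mask = [[1 if i < 2 and j < 2 else 0 for j in range(4)] for i in range(4)]
x_poisoned = stamp_trigger(x, 1.0, mask)
```

Pixels under the mask are overwritten by the trigger intensity; all others are untouched.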
\subsection{Our System Model}
In this paper, we assume that the Trojaned model is trained by an adversary who does not share the hyper-parameters of her model.
The Trojaned model can be shared through an ML platform or can be a built-in model in a smart device. Therefore, only the outputs of the model are available to users/defenders for any given input. This is termed \emph{black-box access}.
The defender aims to learn a discriminator (a classifier with two classes of YES/NO) to determine whether a model is Trojaned or not. The defender can learn several non-Trojaned and Trojaned models locally and use their outputs to train the discriminator (See Figure~\ref{fig:scheme}).
We also assume the defender and adversary have access to the same training sets to train their local models.
Given an arbitrary set of inputs that is provided to both a Trojaned and non-Trojaned model, the discriminator uses the outputs from these two models to learn a (binary) classifier.
After training, the discriminator is used to evaluate an unknown model, to determine whether it is Trojaned or not.
Our contribution in this paper is the design of a methodology to demonstrate that such a discriminator can be fooled (i.e., cannot say whether a model is Trojaned with probability $>0.5$).
\section{Solution Approach} \label{sec:proposedmethod}
Backdoor attacks aim to preserve the accuracy of a model on inputs without a trigger (clean samples) while misclassifying inputs that are stamped with a trigger.
We denote the (Trojan) trigger by $\Delta$.
Different from existing backdoor attacks that result in the model producing a single, unique target class for inputs with a trigger, we propose a new class of \emph{multi-target backdoor attacks}.
A multi-target backdoor attack can result in a model producing a different output based on the true class of the input that contains a trigger.
Consequently, an adversary can trigger a desired output by selecting a sample from the corresponding source class.
In order to train a multi-target backdoor, an adversary poisons the training input set with a new set of samples perturbed with the trigger, and labeled using a map-function:
\begin{align*}
\mathcal{D'}&=\{(x_{i_1}+m\Delta,g(y_{i_1}^*)), (x_{i_2}+m\Delta,g(y_{i_2}^*)),\\&\qquad \cdots, (x_{i_n}+m\Delta,g(y_{i_n}^*))\},
\end{align*}
where $\Delta, m, g(\cdot)$ are the trigger, a mask which denotes where the trigger is deployed, and a function that maps a source class to target class (that is, $g(i) = j, i, j \in \{1,2,\dots C\}, i \neq j$) respectively.
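As a concrete (hypothetical) choice of map-function, a cyclic shift $g(i)=(i+1) \bmod C$ sends every source class to a distinct target class; a minimal poisoning sketch, with an illustrative stand-in for the stamping operation:

```python
C = 10  # number of classes (e.g., the MNIST digits)

def g(y):
    """Hypothetical map-function: cyclic shift to the next class."""
    return (y + 1) % C

def poison(samples, stamp):
    """Build D' by stamping each sample with the trigger and relabelling."""
    return [(stamp(x), g(y)) for x, y in samples]

clean = [([0.0], 3), ([0.0], 9)]
poisoned = poison(clean, lambda x: [v + 1.0 for v in x])  # toy 'stamp'
```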
However, a backdoor attack can result in the output distributions (i.e., the probabilities that the output belongs to each class) of a Trojaned model differing from those of a non-Trojaned model, even though both models have similar accuracy on clean samples.
Our objective is to demonstrate that defense mechanisms which seek to distinguish between outputs from a Trojaned model and a non-Trojaned model can be bypassed.
To this end, we seek to learn a Trojaned model that has an output distribution on clean samples which is similar to that from a non-Trojaned model.
The two-stage setup that we use is shown in Figure~\ref{fig:LearningScheme}.
We first learn such a Trojaned model using a non-Trojaned model as a teacher.
Then, we maximize the indistinguishability between outputs of the two models by solving a min-max optimization problem.
The remainder of this section explains these steps in detail.
\begin{figure}
\centering
\includegraphics[width = 0.5 \textwidth]{Figs/LearningSteps.png}
\caption{Two steps of our learning procedure. In the first step (left chart), we use a non-Trojaned model as a teacher for the Trojaned model. Then in the second step (right chart), we update the weights of model to fool the discriminator.}
\label{fig:LearningScheme}
\end{figure}
\subsection{Using a non-Trojaned Model as Teacher}
The process of transferring knowledge from a `teacher' model that has high accuracy to a smaller `student' model is called \emph{knowledge distillation}~\cite{hinton2015distilling}.
Unlike `hard' labels which assign a sample to exactly one class with probability $1$, `soft' labels assign a probability distribution over output classes.
The non-zero probabilities provide information about inter-class similarities.
Knowledge distillation improves accuracy of the student model using `soft' labels provided by the teacher.
In order to ensure similarities in behaviors of a Trojaned model and a non-Trojaned model, we use the non-Trojaned as a teacher for the Trojaned model, and minimize the difference between their outputs for a given input.
Let the DNNs that comprise the Trojaned model be parameterized by $\theta$, and those comprising the non-Trojaned model be parameterized by $\theta''$.
Assume that $z_{T} = F_{T}(\cdot;\theta)$ and $z_N = F_{N}(\cdot;\theta'')$ denote the logits from the two models, and let $C$ be the number of output classes.
Knowledge distillation techniques minimize the distance between the outputs of teacher and student models for clean samples.
The authors of~\cite{hinton2015distilling} established that when using gradient-based techniques and soft labels as an input to a loss function, the gradients were scaled by $1/T^2$.
For example, when using the $L_2$-norm to measure the distance between the models' outputs, following Eqn.~(\ref{eq:softmax}) and assuming without loss of generality that the logits are zero-mean, the gradient of the loss function is
\begin{align*}
&\frac{\partial }{\partial z^i_{T}} 0.5(p(z_{N}, T)-p(z_{T}, T))^2 \\
&=\frac{1}{T} \frac{e^{z^i_{T}/T}}{\sum_j e^{z^j_{T}/T}} (\frac{e^{z^i_{N}/T}}{\sum_j e^{z^j_{N}/T}} - \frac{e^{z^i_{T}/T}}{\sum_j e^{z^j_{T}/T}}) \\
&\approx \frac{1}{T} \frac{1+z^i_{T}/T}{C+\sum_j z^j_{T}/T}(\frac{1+z^i_{N}/T}{C+\sum_j z^j_{N}/T}-\frac{1+z^i_{T}/T}{C+\sum_j z^j_{T}/T}) \\
&\approx \frac{1}{C^2T^2}z_T^i (z_N^i-z^i_{T} )
\end{align*}
To compensate for the $1/T^2$ factor in the above gradient calculation, we scale by $T^2$ when updating the parameters $\theta$ of the DNN (see \emph{Line 12} of Algorithm~\ref{alg:KD}).
The gradients for other loss functions, such as the Kullback-Leibler (KL) divergence, $\mathcal{L}_{KL}$, and the cross-entropy loss, $\mathcal{L}_{CE}$, can be computed in a similar manner.
The Kullback-Leibler divergence loss function quantifies the distance between the distributions of the outputs of two models and is defined as:
\begin{equation}
\mathcal{L}_{KL} (p(z_N, T), p(z_T,T))= \sum_{i=1}^{C} p_i(z_N, T) \log \frac{p_i(z_N, T)}{p_i(z_T,T)}
\end{equation}
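A minimal sketch of this KL loss between teacher and student softmax outputs (function names are ours):

```python
import math

def softmax(z, T):
    exps = [math.exp(v / T) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def kl_loss(z_teacher, z_student, T):
    """L_KL = sum_i p_i(z_N,T) * log(p_i(z_N,T) / p_i(z_T,T))."""
    p = softmax(z_teacher, T)
    q = softmax(z_student, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

zero = kl_loss([2.0, 1.0], [2.0, 1.0], T=4.0)      # identical logits
positive = kl_loss([2.0, 1.0], [1.0, 2.0], T=4.0)  # mismatched logits
```

The loss vanishes when the two distributions coincide and is strictly positive otherwise.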
Knowledge distillation techniques also train the Trojaned model on hard labels together with soft labels.
The cross-entropy loss function measures the difference between the softmax of the logits of the training (Trojaned) model, computed at temperature $T=1$, and the true label of the input as follows:
\begin{equation}
\begin{split}
\mathcal{L}_{CE} (y^*, p(z_{T},T=1))&=-\sum_j y_j^* \log p_j(z_{T},T=1)\\
&= -\log p_i(z_{T},T=1)
\end{split}
\end{equation}
where $y^*$ is the true class of the input and $p_i(z_T,1)$ is the $i^{th}$ element of the softmax function's output. Since a hard label ($y^*$) assigns an input to exactly one class, all of its elements are $0$ except the $i^{th}$ element, which is $1$ if the input belongs to class $i$. Algorithm~\ref{alg:KD} describes the knowledge distillation step in detail.
\begin{algorithm}
\caption{Knowledge Distillation}\label{alg:KD}
\begin{algorithmic}[1]
\Require $\mathcal{D}, \Delta, m, g(.), T^*, F_{N}, \alpha_1>0, \alpha_2>0$
\State $\mathcal{D'}\leftarrow \mathcal{D}$
\For {$j=1:n$}
\State $(x,y)\leftarrow \text{random-selection}(\mathcal{D})$
\State $\mathcal{D'}\leftarrow \mathcal{D'} \cup (x+m\Delta, g(y))$
\EndFor
\For {$i=1:itr$}
\For {$(x^k,y^k) \in \mathcal{D}$}
\State $z_{T} \gets F_{T}(x^k;\theta)$
\State $z_{N} \gets F_{N}(x^k;\theta'')$
\State $q_1 \gets p (z_{N}, T=T^*)$
\State $q_2 \gets p (z_{T}, T=T^*)$
\State $L_1 \leftarrow L_1+ T^2 \times \frac{\partial}{\partial \theta} \mathcal{L}_{KL}(q_1,q_2)$
\EndFor
\For {$(x^k,y^k) \in \mathcal{D'}$}
\State $z^k_{T} \gets F_{T}(x^k;\theta)$
\State $L_2\leftarrow L_2+ \frac{\partial}{\partial \theta} \mathcal{L}_{CE} (y^k, p(z^k_{T}, T=1)) $
\EndFor
\State $\theta \gets \theta- \alpha_1 L_1 - \alpha_2 L_2 $
\EndFor
\end{algorithmic}
\end{algorithm}
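The $1/T$ attenuation of the distillation gradient, which the $T^2$ factor in Algorithm~\ref{alg:KD} compensates for, can be checked numerically. We use the identity $\partial \mathcal{L}_{KL}/\partial z_T^i = \frac{1}{T}(p_i(z_T,T)-p_i(z_N,T))$, stated here as an assumption (it follows by differentiating the KL loss through the softmax):

```python
import math

def softmax(z, T):
    exps = [math.exp(v / T) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def kl_grad(z_n, z_t, T, i):
    """d L_KL / d z_T^i = (1/T) * (p_i(z_T,T) - p_i(z_N,T))."""
    return (softmax(z_t, T)[i] - softmax(z_n, T)[i]) / T

z_n, z_t = [2.0, -2.0], [-2.0, 2.0]
g_low = kl_grad(z_n, z_t, T=1.0, i=0)
g_high = kl_grad(z_n, z_t, T=5.0, i=0)
# The high-temperature gradient is much smaller in magnitude, which is
# why the distillation gradient is rescaled by T^2 before the update.
```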
\subsection{Min-Max Optimization}
We assume that a defender has access to only the outputs of a Trojaned model.
This is termed \emph{black-box access}, and is a reasonable assumption when machine learning-enabled cyber and cyber-physical systems are deployed in the real world.
In order to determine whether a model is Trojaned or not using only outputs of the model, the defender uses a \emph{discriminator}, $D$.
The discriminator is a classifier with two classes- YES ($s=1$) and NO ($s=0$).
Learning a discriminator (parameterized by $\phi$) involves taking a set of outputs of a model corresponding to a set of random inputs and assigning it to class $s=0$ if the model is non-Trojaned, and $s=1$ if it is Trojaned.
We define $ \mathcal{\hat{D}}:=\cup_j \{(p(F_{N}(x_j;\theta''),T=1),0),(p(F_{T}(x_j;\theta),T=1),1) \}$, where $F_{N}(\cdot, \theta'')$ and $F_{T}(\cdot,\theta)$ are functions of the non-Trojaned and Trojaned models parameterized by $\theta''$ and $\theta$, respectively.
The discriminator minimizes a loss function derived from the cross-entropy loss $\mathcal{L}_{CE}$, given by:
\begin{align}
\mathcal{L}&=\frac{1}{2N'}\sum_{(q,s)\in \mathcal{\hat{D}}} \mathcal{L}_{CE}(D(q;\phi),s)
\end{align}
The objective of the adversary, on the other hand, is to ensure that the backdoor in the Trojaned model remains undetectable (i.e., fool the discriminator).
Consequently, she updates hyperparameters of the Trojaned model in a manner that will maximize the loss of the discriminator:
\begin{align}
\max_\theta \min_{\phi}\: \mathcal{L}_{CE}(D(p(F_{T}(x;\theta),T{=}1);\phi),1)
\end{align}
Algorithm~\ref{alg:minmax} explains the min-max step in detail.
In the $\min$ step, we first generate a set of arbitrary inputs (in our case, images whose pixels are drawn from a $\mathcal{N}(\mu,\sigma)$ distribution) and provide them to both the non-Trojaned and Trojaned models.
The outputs from these models are used to train the discriminator.
In the $\max$ step, we update parameters of the Trojaned model to maximize the loss of the discriminator by generating outputs that are similar to outputs of the non-Trojaned model for the arbitrarily generated inputs.
In order to preserve accuracy of the model on any input (clean or triggered), we consider a set $\mathcal{D'}$ that contains both types of inputs, and minimize a cross-entropy loss (Lines 10-14).
\begin{algorithm}
\caption{Min-Max Optimization for Discriminator}\label{alg:minmax}
\begin{algorithmic}[1]
\Require $F_{N}(., \theta''), F_{T}(.;\theta), \mu, \sigma, \mathcal{D'}, \alpha_1, \alpha_2, \alpha_3>0$
\For {$i=1:itr$}
\For {$j=1:N'$}
\State $img\leftarrow \mathcal{N}(\mu,\sigma)$
\State $\mathcal{\hat{D}} \leftarrow \mathcal{\hat{D}} \cup \{(p(F_{N}(img;\theta''), T=1),0)\}$
\State $\mathcal{\hat{D}} \leftarrow \mathcal{\hat{D}} \cup \{(p(F_{T}(img;\theta), T=1),1)\}$
\EndFor
\State $L_1= \frac{1}{2N'}\sum_{(q,s)\in\mathcal{\hat{D}} } \frac{\partial}{\partial \phi} \mathcal{L}_{CE}(D(q;\phi),s)$
\State $\phi \gets \phi-\alpha_1 L_1$
\State $L_2= \frac{1}{N'} \sum_{x_i\sim \mathcal{N}(\mu,\sigma)} \frac{\partial}{\partial \theta} \mathcal{L}_{CE}( D( p(F_T(x_i;\theta), T=1);\phi),1) $
\For {$(x^k,y^k) \in \mathcal{D'}$}
\State $z^k_{T} \gets F_{T}(x^k;\theta)$
\State $L_3=L_3+ \frac{\partial}{\partial \theta} \mathcal{L}_{CE} (y^k, p(z_{T}^k, T=1)) $
\EndFor
\State $\theta \gets \theta +\alpha_2 L_2 -\alpha_3 L_3$
\EndFor
\end{algorithmic}
\end{algorithm}
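The construction of the discriminator's training set $\mathcal{\hat{D}}$ in the inner loop of Algorithm~\ref{alg:minmax} can be sketched as follows (the stand-in model outputs are illustrative, not real softmax vectors):

```python
import random

def build_disc_dataset(f_n, f_t, n_inputs, mu=0.0, sigma=1.0, dim=4):
    """Pairs (model output, label): 0 = non-Trojaned, 1 = Trojaned.

    Inputs are arbitrary vectors drawn from N(mu, sigma), as in the
    min step of the min-max optimization."""
    data = []
    for _ in range(n_inputs):
        img = [random.gauss(mu, sigma) for _ in range(dim)]
        data.append((f_n(img), 0))
        data.append((f_t(img), 1))
    return data

# Hypothetical stand-ins for the two models' softmax outputs.
f_n = lambda img: [0.7, 0.3]
f_t = lambda img: [0.6, 0.4]
d_hat = build_disc_dataset(f_n, f_t, n_inputs=5)
```

Each arbitrary input contributes one labelled output per model, so the resulting dataset is balanced between the two classes.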
\section{Conclusion}\label{sec:conclusion}
This paper studied machine learning models that were vulnerable to backdoor attacks.
Such a model is called a Trojaned model.
We identified a limitation of a state-of-the-art defense mechanism that was
designed to protect against backdoor attacks.
We proposed a new class of multi-target backdoor attacks in which a single trigger could result in misclassification to more than one target class.
We then designed a two-step procedure that used knowledge distillation and min-max optimization to ensure that outputs from a Trojaned model were indistinguishable from those of a non-Trojaned model.
Through empirical evaluations, we demonstrated that our approach was able to completely bypass a
state-of-the art defense mechanism, MTND.
We demonstrated a reduction in detection accuracy of the discriminator of MTND from $>96\%$ without our method to $0\%$ when using our approach.
We also discussed ways to extend our methodology to other classes of DNN models beyond those that use images, establish provable guarantees, and build better defenses.
\section{Discussion}\label{sec:discussion}
In this section, we identify how our approach can be extended to domains where inputs might not be images, and highlight some open questions and challenges that are promising future research directions.
\subsubsection{Extension to other domains}
Our solution approach focused on the setting where inputs to a DNN were images, and we showed that the state-of-the-art MTND defense~\cite{xu2021detecting} could be bypassed.
Recent research has demonstrated that DNNs designed for other tasks such as text classification and generation~\cite{dai2019backdoor}, code completion~\cite{schuster2021you}, and decision making in reinforcement learning and cyber-physical systems~\cite{panagiota2020trojdrl} are also vulnerable to backdoor attacks.
We believe that our solution methodology can be applied to domains where the inputs and outputs of a DNN are continuous valued such as speech~\cite{zhai2021backdoor} and deep reinforcement learning~\cite{mnih2015human}.
We will evaluate our two-step approach on backdoor attacks carried out on these types of systems, and examine the limitations of defenses against backdoor attacks in settings where inputs to a DNN classifier are discrete-valued (e.g., text).
\subsubsection{Provable Guarantees}
The nested and inherently nonlinear structure of DNNs makes it challenging to explain the decision-making procedures for given input data~\cite{samek2017explainable}.
This challenge also holds true when investigating the explainability of defense mechanisms and their limitations against backdoor attacks on DNNs.
We believe that developing a principled approach that enables establishing provable guarantees on the accuracy of certain classes of DNNs (e.g., when the activation function is a rectified linear unit (ReLU)~\cite{arora2018understanding}) will be a promising step in this direction.
\subsubsection{Building Better Defenses}
In this paper, the discriminator used outputs to an arbitrary set of image inputs to determine whether the model was Trojaned or not.
An interesting question to answer is if the similarity between outputs from a Trojaned and non-Trojaned model can be characterized for any input that does not contain a trigger.
Quantifying the change in the trigger or trigger pattern that will result in a change in the decision of a Trojaned model compared to a non-Trojaned model is a possible solution approach.
This will help learn and reason about the dynamics of the decision boundaries of DNN classifiers to enable building better defenses against backdoor attacks.
\section{Related Work}
\label{sec:relatedwork}
This section summarizes recent literature on backdoor attacks and defense mechanisms against such attacks.
A backdoor attack results in DNN models misclassifying inputs that contain a trigger~\cite{usenix2021blind}.
An input containing a trigger can be viewed as an adversarial example~\cite{kurakin2017adversarial}, but there are differences in the ways in which an adversarial example attack and a backdoor attack are carried out.
Adversarial examples are generated by learning a quantum of adversarial noise that needs to be added to an input in order to cause a pre-trained DNN model to misclassify the input~\cite{moosavi2016deepfool,kurakin2017adversarial,goodfellow2015explaining}.
Backdoor attacks, in comparison, aim to influence weights of parameters of neural networks that describe the target model during the training phase either through poisoning the training data~\cite{usenix2021blind} or poisoning the weights themselves~\cite{dumford2020backdooring,rakin2020tbt}. Consequently, any input stamped with a pre-defined trigger will be able to cause the DNN to misclassify the input.
Moreover, the trigger for a backdoor attack might be invisible~\cite{turner2019label, li2020invisible, liao2018backdoor}.
It has been demonstrated that adversarial example attacks and backdoor attacks can both be trained to be transferable~\cite{gu2019badnets,wang2020backdoor}.
When access to a target model was not available, a black-box backdoor attack was proposed in~\cite{liu2017trojaning}.
In this case, the adversary had to generate training samples, since the training dataset was not available.
The vulnerability of DNN models to backdoor attacks has been examined in multiple applications.
A class of semantic backdoor attacks was proposed in~\cite{bagdasaryan2020backdoor,usenix2021blind} for image and video models.
In this case, the target label was determined based on features of the input; for example, inputs featuring green-colored cars would be misclassified as bicycles.
The authors of~\cite{usenix2021blind} and~\cite{dai2019backdoor} designed backdoor attacks for code poisoning and natural language processing models, respectively.
Recently, an untargeted backdoor attack for deep reinforcement learning models was proposed in~\cite{panagiota2020trojdrl}.
Backdoor attacks have also been used to determine fidelity; specific examples include model watermarking~\cite{adi2018turning} and verifying that a user's request to delete their data was actually carried out by a server~\cite{sommer2020towards}.
%
Backdoor attacks typically specify one target output class for each trigger.
An \emph{N-to-one trigger} backdoor attack was proposed in~\cite{xue2020one}, where an adversary used a single trigger at a specific location, but with different intensities for each target.
A procedure to train different models to output different target classes using the same trigger was demonstrated in~\cite{xiao2022multitarget}.
The authors of~\cite{rajabi2020adversarial} introduced an input-based misclassification procedure by learning adversarial perturbations~\cite{moosavi2017universal} for pairs of target classes.
Different from~\cite{rajabi2020adversarial} where one perturbation was required for each pair of classes, our methodology in this paper uses a single trigger that can cause misclassification to more than one target class.
Defense mechanisms against backdoor attacks can focus on removing backdoors from Trojaned models~\cite{liu2018fine,yoshida2020disabling,li2021neural} or detecting/suppressing poisoned data during training~\cite{du2019robust,tran2018spectral,chen2018detecting}.
Our focus in this paper is on a third type: defense mechanisms for pre-trained models, since we assume that the target model has already been trained, and that the user only has `black-box' access to it.
Pre-processing defense mechanisms deploy a module that will remove or reduce the impact of a trigger present in the input.
For example, an auto-encoder was used as a pre-processor in~\cite{liu2017neural}.
A generative adversarial network (GAN) was used to identify `influential' portions of an image/video input in~\cite{selvaraju2017grad}.
This approach was leveraged by~\cite{udeshi2019model} to use the dominant color of the image as a trigger-blocker.
Style-transfer was used as a pre-processing module in~\cite{villarreal2020confoc} to mitigate the impact of a trigger present in the input.
Modifying the location of the trigger using spatial transformations was deployed as a defense mechanism in~\cite{li2020rethinking}.
In contrast, post-training defense mechanisms aim to determine whether a given model is Trojaned or not, and then refuse to deploy a Trojaned model.
The authors of~\cite{kolouri2020universal} proposed a technique based on learning adversarial perturbations~\cite{moosavi2017universal} to locate a trigger, using the insight that triggers constrain the magnitude of the learned perturbation.
Thus, the learning process would identify a model as Trojaned if the learned perturbation was below a threshold.
An outlier detector was used in~\cite{huang2019neuroninspect} to explain model outputs, and features extracted using a saliency map were used to identify whether the model was Trojaned.
A defense mechanism against backdoor attacks when working with limited amounts of data was proposed in~\cite{wang2020practical}.
All the approaches described above require access to hyperparameters of the model.
There is a relatively smaller body of work focused on designing defenses in the absence of such access.
A mechanism called DeepInspect was proposed in~\cite{chen2019deepinspect}, which learned the probability distribution of triggers using a generative model.
DeepInspect assumed that a trigger had fixed patterns with a constant-valued distribution.
In comparison, we consider a trigger that can have an arbitrary location within the input, and can result in misclassification to more than one target class.
The authors of~\cite{xu2021detecting} proposed meta neural Trojan detection (MNTD).
MNTD used a discriminator which took a target model as input, and performed a binary classification on the output of the model to identify whether it was Trojaned or not.
We evaluate our methodology using MNTD as a benchmark, since MNTD does not make any assumptions about the trigger.
\section{Introduction}
Interdisciplinary approaches are common in science and technology, and their importance in pedagogy cannot be overstated. Williams et al.~\cite{williams2003aren} analyse the reasons why many high school students do not find physics interesting and suggest that an interdisciplinary approach might enhance students' interest in physics. A new idea can be better appreciated by students when they interpret it in terms of known concepts bearing no apparent connection with the new topic. Apart from illuminating concepts in a new light, this approach has often led to important milestones. For example, the discovery of the double-helix structure of DNA required knowledge of chemistry, molecular biology, and crystallography. This trend has been adopted in the pedagogy of undergraduate physics curricula in recent decades. An example is `F=ma optics'~\cite{evans1986f}, where many elements of geometrical optics were interpreted in terms of concepts learned from Newtonian mechanics. This analogy was a glimpse of the deeper connection between classical mechanics and ray optics through the principle of stationary action~\cite{goldstein2002classical}. Whereas in classical mechanics this principle helps in understanding the dynamical evolution of a system, in geometrical optics it is used to analyse the spatial evolution of a light ray in a medium with a known refractive index. This approach culminated in a discipline called Hamiltonian optics~\cite{buchdahl1993introduction, torre2005linear, dragoman2013quantum, lakshminarayanan2002lagrangian}, which is not usually taught in undergraduate-level physics courses. The present author thinks that concrete examples of such interdisciplinary approaches could be very useful for students.
The analytical formulation of geometrical optics raises the question of whether it is possible to rediscover the quantum nature of light in an appropriate limit starting from this alternative approach to optics. In fact, the answer is affirmative: Gloge and Marcuse developed a quantum theory of light rays~\cite{gloge1969formal} based on Hamiltonian optics. This work finds quantum effects emerging in the limit where the wavelength of light cannot be neglected, and eventually derives the reduced wave equation (which they called the Klein-Gordon equation of the quantum theory of light rays), whose solutions can represent the probability wave amplitudes of the light rays.
In introductory courses on optics, students learn about Huygens' principle~\cite{jenkins2018fundamentals} to understand the propagation of light rays in a medium, and about Snell's law of refraction between media with different refractive indices, without any reference to the quantum description of light. In higher courses on electromagnetic theory, they also learn about the coefficients of reflection and transmission when light, an electromagnetic wave, falls at the interface of two media~\cite{hecht1998hecht}. A few unconventional approaches to refraction have been illustrated in pedagogical works: (a) Drosdoff et al.~\cite{drosdoff2005snell} treated photons as entities with well-defined energy and momentum to prove Snell's law, and (b) Evans et al.~\cite{evans1986f} argued that the tangential component of the optical equivalent of mechanical velocity must be the same in the two media. Another method was demonstrated by Ghatak~\cite{lakshminarayanan2002lagrangian}, who set the derivative of the total optical path length to zero to arrive at Snell's law; this last example made use of Hamilton's principle of stationary action. It is not clear whether these methods could be developed to the point of yielding the coefficients of reflection and transmission. These approaches do not use the quantum description of light rays, but we believe that this description may bring pedagogic insight to these topics.
In particular, we did not find any instance where the reduced wave equation of light rays introduced in the formal quantum theory of light rays~\cite{gloge1969formal} was utilised to prove Snell's law or to obtain the coefficients of reflection or refraction. Such a development would be pedagogically very appealing, because the solutions of this equation are very similar to Huygens' wavelets. In this article, we present an algebraic method to fill this gap. Students do not need prior knowledge of Hamiltonian optics, but some familiarity with analytical mechanics and Schr$\rm{\ddot o}$dinger's equation is helpful. It is hoped that the present manuscript will be a valuable addition to the existing literature on physics pedagogy.
We start in the next section~\ref{Sec2} with a review of known concepts and published material: a summary of Hamilton's formulation of geometrical optics, leading to the differential equation describing the spatial evolution of light rays. Evans et al.~\cite{evans1986f} presented a clever parametrization that significantly reduces the complexity of this ray equation. In section~\ref{ScWvEq}, we shall find that the reduced wave equation developed by Gloge et al.~\cite{gloge1969formal} nicely reflects the elements of the formulation of~\cite{evans1986f}. Later in that section, the main results of this paper are presented. It will be observed that this equation can be interpreted as a Schr$\rm{\ddot{o}}$dinger's equation with zero-energy wavefunctions. Using a model of light rays (represented by such wavefunctions) incident on a potential barrier in two dimensions, we obtain Snell's law of refraction. We will see that this approach also leads to Fresnel's equations~\cite{hecht1998hecht} for $s$-polarized light. We conclude with a discussion of the implications of this study.
\section{Analytical formulation of optics}\label{Sec2}
\subsection{Lagrangian formulation}
Hamilton's principle of stationary action is introduced to students in analytical mechanics courses. The action $S$ of a dynamical system is defined as the time integral of the Lagrangian $\mathcal{L}=\mathcal{L}(q,\dot{q},t)$:
\begin{equation}\label{Eq1}
S=\int_{t_1}^{t_2}\mathcal{L}(q,\dot{q},t)\ dt
\end{equation}
For fixed time instants $t_1$ and $t_2$, the principle states that the system will dynamically evolve along a path for which the action is stationary to first order; that is, the action does not change under a slight variation of the path. This statement is expressed mathematically as:
\begin{equation}\label{Eq2}
\delta S=\delta \int_{t_1}^{t_2}\mathcal{L}(q,\dot{q},t)\ dt=0
\end{equation}
The above principle leads to the Euler-Lagrange equation that governs the dynamical evolution of the system:
\begin{equation}\label{Eq3}
\frac{\partial\mathcal{L}}{\partial q} =\frac{d}{dt}\left(\frac{\partial\mathcal{L}}{\partial\dot{q}}\right)
\end{equation}
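As a familiar check (a standard textbook example, added here for completeness): for a particle of mass $m$ moving in a potential $U(q)$, Eq.\eqref{Eq3} reproduces Newton's second law:

```latex
% Example: \mathcal{L} = \tfrac{m}{2}\dot{q}^{2} - U(q)
\frac{\partial\mathcal{L}}{\partial q} = -\frac{\partial U}{\partial q},
\qquad
\frac{d}{dt}\!\left(\frac{\partial\mathcal{L}}{\partial\dot{q}}\right) = m\ddot{q}
\quad\Longrightarrow\quad
m\ddot{q} = -\frac{\partial U}{\partial q}
```

It is this mechanical form that the optical analogy of the following subsections will mirror.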
\subsection{Fermat's principle}
In the case of geometrical optics, the equivalent physical principle is Fermat's principle, which states that for fixed end points a ray of light propagates along a trajectory for which the optical path length $\mathbb{L}=\int n\ ds$ is stationary; that is, the optical path length does not change to first order under a slight variation of the path:
\begin{equation}\label{Eq4}
\delta\mathbb{L}=\delta\int_{P_1}^{P_2} n\ ds=0
\end{equation}
The Euler-Lagrange variation of this principle (Eq.\eqref{Eq4}) leads to the differential ray equation governing the spatial evolution of light rays:
\begin{equation}\label{Eq5}
\frac{\partial{n}}{\partial{\bf r}} =\frac{d}{ds}\left(n\frac{d{\bf r}}{ds}\right)
\end{equation}
This equation is non-linear and is difficult to solve when the refractive index $n$ varies spatially.
\subsection{F=ma optics}
Evans~\cite{evans1986f} showed that if Fermat's principle is expressed in terms of an independent parameter $a$, defined by $n=\left|\frac {d{\bf r}}{da}\right|$, where $n$ denotes the refractive index and ${\bf r}$ the position vector, then Eq.\eqref{Eq5} can be expressed as:
\begin{equation}\label{Eq6}
\frac{d^2{\bf r}}{da^2}=\nabla\left(\frac{n^2}{2}\right)
\end{equation}
This equation has the well-known form of Newton's second law of motion, in terms of the parameter $a$. In this formulation, the quantity equivalent to force is the gradient of half the square of the refractive index. This analogy is specialized to geometrical optics, in which the quantities playing the roles of mass, velocity, and kinetic and potential energy do not carry their standard units. The correspondence is shown below in table~\ref{T1}.
\begin{table}[ht]
\begin{center}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{||c | c | c||}
\hline
 & Definitions in classical mechanics & Equivalent quantities in geometrical optics\\\hline
Position & ${\bf r}(t)$ & ${\bf r}(a)$ \\\hline
Time & $t$ & $a$ \\\hline
Velocity & $\frac{d{\bf r}}{dt}\equiv{\bf{\dot r}}$ & $\frac{d{\bf r}}{da}\equiv{\bf r}'$ \\\hline
Potential energy & $U({\bf r})$ & $-\frac{n^2({\bf r})}{2}$ \\\hline
Mass & $m$ & $1$ \\\hline
Kinetic energy & $T=\frac{m}{2}\left|\frac{d{\bf r}}{dt}\right|^2$ & $\frac{1}{2}\left|\frac{d{\bf r}}{da}\right|^2$ \\\hline
Total energy & $\frac{m}{2}\left|\frac{d{\bf r}}{dt}\right|^2+U({\bf r})$ & $\frac{1}{2}\left|\frac{d{\bf r}}{da}\right|^2-\frac{n^2}{2}=0$\\\hline
\end{tabular}
\end{center}
\caption{Mechanical view of optics}\label{T1}
\end{table}
From table~\ref{T1}, we find that the optical equivalent of potential energy is a quadratic function of the refractive index, and the equivalent of total energy (the Hamiltonian) is zero.
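The mechanical analogy also invites a numerical experiment. As a sketch (the parameter values below are hypothetical, chosen only for illustration), consider a parabolic index profile $n^2(x)=n_0^2(1-\alpha x^2)$, a common model of a graded-index fiber. With the `potential energy' $-n^2/2$ of table~\ref{T1}, the optical `force' is $\frac{d}{dx}\left(\frac{n^2}{2}\right)=-n_0^2\alpha x$, so a ray oscillates harmonically in the parameter $a$ with angular frequency $\omega=n_0\sqrt{\alpha}$. A short Python check:

```python
import math

# Hypothetical parabolic profile n^2(x) = n0^2 (1 - alpha x^2): the optical
# "force" d/dx (n^2/2) = -n0^2 alpha x is harmonic in the parameter a,
# with angular frequency omega = n0 sqrt(alpha).
N0, ALPHA = 1.5, 4.0

def force(x):
    return -N0 ** 2 * ALPHA * x

def trace_ray(x0, da=1e-4, steps=20000):
    """Leapfrog integration of d^2x/da^2 = force(x), ray starting at rest."""
    x, v = x0, 0.0
    for _ in range(steps):
        v += 0.5 * da * force(x)   # half kick
        x += da * v                # drift
        v += 0.5 * da * force(x)   # half kick
    return x

omega = N0 * math.sqrt(ALPHA)      # 3.0
a_end = 20000 * 1e-4               # 2.0
x_num = trace_ray(0.01)
x_exact = 0.01 * math.cos(omega * a_end)
```

The leapfrog scheme conserves the optical `total energy' of table~\ref{T1}, so the numerically traced ray stays on the sinusoidal trajectory predicted by the analogy.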
\subsection{Hamiltonian optics}
A similar formulation of the problem is possible in terms of the optical Hamiltonian, expressed as a function of the coordinates $(x, y, z)$ and their conjugate momenta $p_x$, $p_y$, and $p_z$. Here we take $b$ as a stepping parameter along the ray. The optical Lagrangian is then:
\begin{equation}\label{Eq7}
L(x,y,z,x',y',z',b)=n(x,y,z)\sqrt{x'^2+y'^2+z'^2}
\end{equation}
where $x'=\frac{dx}{db}$, etc. The conjugate momenta are defined as:
\begin{align}\label{Eq8}
p_x=\frac{\partial L}{\partial x'}=n\frac{x'}{\sqrt{x'^2+y'^2+z'^2}}=n\frac{dx}{ds}=n_x\nonumber\\
p_y=\frac{\partial L}{\partial y'}=n\frac{y'}{\sqrt{x'^2+y'^2+z'^2}}=n\frac{dy}{ds}=n_y\nonumber\\
p_z=\frac{\partial L}{\partial z'}=n\frac{z'}{\sqrt{x'^2+y'^2+z'^2}}=n\frac{dz}{ds}=n_z,
\end{align}
where $(n_x,n_y,n_z)$ denote the components of the refractive index vector ${\bf n}=(n_x,n_y,n_z)$. The Hamiltonian, constructed by a Legendre transformation, evaluates to zero\footnotemark[1]:
\begin{equation}\label{Eq9}
H(x,y,z,p_x,p_y,p_z,b)=x' p_x+y' p_y +z' p_z - L=0
\end{equation}
\footnotetext[1]{We comment that if the $z$ coordinate were taken as the independent variable instead of $b$, then we would have the momenta $p_x$ and $p_y$ conjugate to the coordinates $x(z)$ and $y(z)$. In that case, the Hamiltonian would be expressed as $H=-\sqrt{n^2-p_x^2-p_y^2}$.}
\subsubsection{Quantum description of light rays}
From the ray view of light, one can make the transition to the quantum description by replacing the momenta with appropriate linear differential operators: $p_x\rightarrow\hat{p}_x\equiv{-i\frac {\lambda}{2\pi}\frac{\partial}{\partial x}}$, etc. These operators act upon wavefunctions $\psi$ that represent light rays via the eigenvalue equations:
\begin{equation}\label{Eq9a}
\hat{p}_x\psi=n_x\psi{\hspace{1.0cm}}\implies{\hspace{1.0cm}\hat{p}^2_x\psi=n_x^2\psi}
\end{equation}
Repeating this exercise for all three components and summing the squared equations, we find the reduced wave equation:
\begin{equation}\label{Eq9b}
{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}^2\nabla^2\psi+n^2\psi=0
\end{equation}
Quantum effects manifest when ${\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}$ cannot be neglected, and geometrical optics emerges in the limit ${\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda} \rightarrow 0$. Gloge et al.~\cite{gloge1969formal} used $z$ as the stepping parameter to arrive at this equation; we chose $b$ instead, to show the internal consistency with the formulation presented in `F=ma' optics.
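As a quick check (a standard calculation, added for completeness): a plane wave solves Eq.\eqref{Eq9b} in a homogeneous medium provided its wavevector satisfies $|{\bf k}|=n/{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}$:

```latex
\psi = e^{i{\bf k}\cdot{\bf r}}
\;\Longrightarrow\;
\nabla^2\psi = -|{\bf k}|^2\,\psi,
\qquad
{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}^2\nabla^2\psi + n^2\psi
  = \left(n^2-{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}^2|{\bf k}|^2\right)\psi = 0
\;\;\text{iff}\;\;
|{\bf k}| = \frac{n}{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}}
```

This is the dispersion relation that will connect the `momenta' $n_x$, $n_y$, $n_z$ of Eq.\eqref{Eq8} to the wave picture below.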
We note that the so-called `quantum' description of light rays is equivalent to the scalar wave description of light, usually taught at the undergraduate level as wave optics.
\subsection{Scalar wave description}
The electromagnetic theory asserts that light is a travelling transverse electromagnetic wave in which the electric field ${ \vec E}({\bf r},t)$ and the magnetic field ${\vec B}({\bf r},t)$ are coupled in the form of a pulse (see section 3.2 of~\cite{hecht1998hecht}). The spatio-temporal variation of the electric field in a region devoid of source electric charge and current can be expressed by the vector wave equation:
\begin{equation}\label{Eq9c}
\nabla^2{\vec E}-\frac{1}{v^2}\frac{\partial^2\vec{E}}{\partial t^2}=0,
\end{equation}
where $v$ denotes the speed of light in the medium of propagation. Each component $E_i$ of the electric field vector ${\vec E}\equiv(E_x, E_y, E_z)$ represents a pulse that satisfies the scalar wave equation:
\begin{equation}\label{Eq9d}
\nabla^2{E_i}-\frac{1}{v^2}\frac{\partial^2{E_i}}{\partial t^2}=0
\end{equation}
If we assume that the electric pulse $E_i({\bf r},t)$ can be expressed as a product of a spatial part $u({\bf r})$ and a temporal part $T(t)$, then using the separation-of-variables technique we can deduce the differential equation for the spatial part of the light wave:
\begin{equation}\label{Eq9e}
\nabla^2 u({\bf r})+k^2u({\bf r})=0
\end{equation}
where $k$ is a constant representing the wavenumber. This equation is called the reduced wave equation, or the Helmholtz equation. One can also express it in the form:
\begin{equation}\label{Eq10}
{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}^2\nabla^2 u({\bf r})+n^2u({\bf r})=0,
\end{equation}
where $n$ represents the refractive index of the medium and ${\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}=\frac{\lambda}{2\pi}$ is the reduced wavelength of light in this medium. Clearly, Eq.\eqref{Eq9b} and Eq.\eqref{Eq10} are identical, and thus $\psi$, representing a ray of light, and $u$, representing a pulse of the electric field, must be intimately related.
\section{Reduced wave equation as Schr$\rm{\ddot{o}}$dinger's equation}\label{ScWvEq}
We note that Eq.\eqref{Eq9b} can also be expressed as:
\begin{equation}\label{Eq11}
-\frac{{\mkern0.75mu\mathchar '26\mkern -9.75mu\lambda}^2}{2\times 1}\nabla^2 \psi - \frac{n^2}{2} \psi = 0\cdot \psi
\end{equation}
Comparing Eq.\eqref{Eq11} with the generic form of the time-independent Schr$\rm{\ddot{o}}$dinger's equation and with the terms in table~\ref{T1}, we observe that the optical equivalent of the Hamiltonian for light rays is $0$ and the optical equivalent of the potential energy is $-\frac{n^2}{2}$. Of course, the dimension of the Hamiltonian (in Eq.\eqref{Eq9}) is not that of energy. Gloge et al.~\cite{gloge1969formal} (1969) could not identify Eq.\eqref{Eq9b} as a Schr$\rm{\ddot{o}}$dinger's equation for the light rays\footnotemark[2],
\footnotetext[2]{In fact, they derived an approximate form of Schr$\rm{\ddot{o}}$dinger's equation in the paraxial approximation where the wavefunctions are non-zero energy states.}
since they did not identify the `mass term' for light rays as $1$, the `total energy' as $0$, and the `potential energy' as $-\frac{n^2}{2}$; these identifications were accomplished by the authors of `F=ma optics' in 1986.
\subsection{Connection with Huygens' wavelets and phasors}
The elementary eigensolutions\footnotemark[3] of Eq.\eqref{Eq9b} in a homogeneous medium are given by~\cite{torre2005linear} (a) plane waves $\left(e^{i{\bf k}\cdot{\bf r}}\right)$, and (b) spherical waves $\left(\frac{\lambda}{r}e^{ikr}\right)$, for $r>\lambda$. In a typical wave optics experiment taught in the undergraduate physics curriculum, \footnotetext[3]{One can construct general travelling wave solutions of the wave equation~\eqref{Eq9d} by superposing incoming and outgoing waves. In one and three dimensions, the general solutions can be expressed as $E_x(x,t)=f(x+vt)+g(x-vt)$ and $E_r(r,t)=\frac{1}{r}F(r+vt)+\frac{1}{r}G(r-vt)$, respectively.} one shines an aperture (or a slit) of dimension $a$ with monochromatic light of wavelength $\lambda$, and a screen is placed on the opposite side, at a distance $L$ from the aperture, to observe possible diffraction patterns. Assuming that the monochromatic light incident on the aperture consists of plane waves, the Fresnel number of the optical setup can be defined as~\cite{hecht1998hecht}:
\begin{equation}
\mathcal{F}=\frac{1}{L}\left(\frac{a^2}{\lambda}\right)
\end{equation}
If the observation screen is close enough to the aperture that $\mathcal F\gtrsim1$, one is working in the near-field limit; in this case, it is convenient to use the spherical wave basis to describe the diffraction. If the screen is far away, such that $\mathcal{F}\ll1$, one is working in the far-field limit; there one can use the plane wave solutions, as locally the spherical wavefronts behave as plane waves.
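As a concrete numerical illustration (the aperture size, wavelength, and distances below are hypothetical choices, not taken from any particular experiment):

```python
def fresnel_number(a, wavelength, L):
    """Fresnel number F = a^2 / (lambda * L) for an aperture of size a
    and an observation screen at distance L."""
    return a ** 2 / (wavelength * L)

# Hypothetical setup: a 0.1 mm aperture illuminated with 500 nm light.
a, lam = 1.0e-4, 500e-9
F_near = fresnel_number(a, lam, 0.02)  # screen at 2 cm: F = 1.0  (near field)
F_far = fresnel_number(a, lam, 2.0)    # screen at 2 m:  F = 0.01 (far field)
```

Moving the screen from 2 cm to 2 m thus carries the same setup from the near-field (Fresnel) regime to the far-field (Fraunhofer) regime.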
\begin{figure}[ht]
\centering
\hspace{-1.0 cm}
\begin{subfigure}{.50\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[height=5.0 cm, width= 5.5 cm]{SphericalEnvelop.pdf}
\caption{}
\label{fig0a}
\end{subfigure}
\hspace{-1.0 cm}
\begin{subfigure}{.15\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[height=5.5 cm, width= 3.0 cm]{PlaneEnvelop.pdf}
\caption{}
\label{fig0b}
\end{subfigure}
\hspace{0.5 cm}
\begin{subfigure}{.35\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[height=3.0 cm, width= 5.0 cm]{FarField.pdf}
\caption{}
\label{fig0c}
\end{subfigure}
\caption{Propagation of (a) a spherical wave, and (b) a plane wave. The wavefronts at $t+\Delta t$ are envelopes of the spherical wavelets generated from the wavefronts at $t$. (c) In the far-field limit, the spherical wavefronts locally behave as plane waves. The mathematical functions describing these envelopes are solutions of Eq.\eqref{Eq9b}.}
\label{fig0}
\end{figure}
These wavefronts may indeed be considered as the envelopes of Huygens' wavelets, generated from the wavefront at a previous step of light propagation. In the far-field limit, these wavelets are just plane waves and can be expressed as $\psi =Ae^{i{\bf k}\cdot{\bf r}}$, where $A$ is a complex constant (the initial phase of the monochromatic wave is assumed to have been absorbed within $A$). A complex wavefunction poses no problem as such, but it cannot represent a real-valued pulse of the electric field; the electric field pulse can be written as $u=Re\left[Ae^{i{\bf k} \cdot{\bf r}}\right]$. Such pulses are referred to as phasors, owing to the phase contained in the exponent (note that ${\bf k} \cdot{\bf r}$ is a phase). This observation can also be taken as a justification for treating the electric field pulse $u$ of Eq.\eqref{Eq10} as a complex quantity, a practice commonly adopted to simplify calculations. The phasor addition of waves is utilised in deriving the intensity patterns of many interference and diffraction experiments~\cite{halliday2010physics}.
\subsection{Particle (or light ray) incident on potential barrier} \label{Sec3}
The discussion in the previous section~\ref{ScWvEq} throws light on the scalar wave nature of the optical field. Specifically, we found that light rays are described by zero-energy wavefunctions satisfying Schr$\rm{\ddot o}$dinger's equation. Therefore, it might be possible to derive Snell's law of geometrical optics from this description. To check this, we begin with the problem of a particle, or a ray of light, of energy $E$ in a region with potential $V_0$ at $x<0$, incident obliquely on a potential barrier $V_1$ at $x\ge0$\footnotemark[4]. We represent this particle (or ray of light) by the wavefunction $\psi$. In the context of geometrical optics, such a potential barrier arises from the difference between the refractive indices of the two media, as shown in figure~\ref{fig1}.
\footnotetext[4]{The energy and momentum of a ray of light are understood in the sense of table~\ref{T1}.}
\begin{figure}[ht]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[height=5.0 cm, width= 7.5 cm]{Oblique1c.pdf}
\caption{}
\label{fig1a}
\end{subfigure}
\hspace{0.0 cm}
\begin{subfigure}{.45\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[height=5.0 cm, width= 7.5 cm]{Oblique2c.pdf}
\caption{}
\label{fig1b}
\end{subfigure}
\caption{(a) Particle (or ray of light) of energy $E$ in the region with potential $V_0$ is incident on a different potential barrier $V_1$ obliquely at an angle $\theta_0$. We assume $E>V_0, V_1$.}
\label{fig1}
\end{figure}
Due to the presence of the potential $V_0$ at $x<0$, the momentum (in units of $\hbar$) of the particle (or ray of light) represented by the wavefunction $\psi$ is $k_0=\sqrt{\frac{2m(E-V_0 )}{\hbar^2}}$. At $x>0$, the momentum of the transmitted particle (or light ray) is $k_1=\sqrt{\frac{2m(E- V_1)}{\hbar^2}}$. The reflected component of the original wavefunction remains in the region of potential $V_0$ and has the same momentum magnitude $k_0$, with the sign of its $x$ component reversed.
\subsection{Understanding of reflection and refraction}
Since $|{k_0}_x|$ and ${k_0}_y$ for the incident and reflected wavefunctions are the same, the angle of incidence must be equal to the angle of reflection. The translation invariance of the problem in the $Y$ direction demands:
\begin{align}\label{Eq12}
{k_0}_y={k_1}_y\nonumber\\
\implies k_0\sin\theta_0=k_1\sin\theta_1
\end{align}
Using the above expressions of momenta, we find:
\begin{equation}\label{Eq13}
\sqrt{E-V_0}\sin\theta_0=\sqrt{E-V_1}\sin\theta_1
\end{equation}
At this point, we exploit the identifications made in the context of `F=ma optics' and Eq.\eqref{Eq11}: the total `energy' of the light rays is $0$, and the `potential energy' of light rays in a medium with refractive index $n$ is $-\frac{n^2}{2}$. Substituting these into Eq.\eqref{Eq13}, we find that it reduces to
\begin{equation}\label{Eq14}
n_0\sin\theta_0=n_1\sin\theta_1
\end{equation}
which is Snell's law, established from the scalar wave description of light in a purely algebraic manner. Students who are unfamiliar with Hamiltonian optics, but have a basic idea of Schr$\rm{\ddot o}$dinger's equation, can fathom this derivation.
\subsection{Estimation of coefficient of reflection}\label{REFL}
From figure~\ref{fig1}, the total wavefunction at $x<0$ is given as:
\begin{equation}\label{Eq15}
\psi_0(x,y)=e^{i{k_0}_xx+i{k_0}_yy}+\includegraphics{scriptr} e^{-i{k_0}_xx+i{k_0}_yy}
\end{equation}
where $\includegraphics{scriptr}$ denotes the amplitude of reflection back into the region of space with potential $V_0$. On the other hand, the wavefunction at $x>0$ can be written as:
\begin{equation}\label{Eq16}
\psi_1(x,y)=\mathcalligra{t}\, e^{i{k_1}_xx+i{k_1}_yy}=\mathcalligra{t}\, e^{i{k_1}_xx+i{k_0}_yy}
\end{equation}
where $\mathcalligra{t}\,$ denotes the transmission amplitude and ${k_0}_y={k_1}_y$. The boundary conditions at $x=0$ are given by:\\
(a) $\psi_0(x=0)=\psi_1(x=0)$ and \\
(b) $\left(\frac{\partial\psi_0}{\partial x}\right)_{x=0}=\left(\frac{\partial\psi_1}{\partial x}\right)_{x=0}$.\\
The first condition yields:
\begin{align}\label{Eq17}
e^{i{k_0}_yy}+\includegraphics{scriptr}\cdot e^{i{k_0}_yy} &= \mathcalligra{t}\, e^{i{k_0}_yy}\nonumber\\
\implies1+\includegraphics{scriptr} &= \mathcalligra{t}\,
\end{align}
The second condition implies:
\begin{align}\label{Eq18}
i{k_0}_xe^{i{k_0}_yy}-\includegraphics{scriptr}\cdot i{k_0}_xe^{i{k_0}_yy} &= \mathcalligra{t}\, i{k_1}_xe^{i{k_0}_yy}\nonumber\\
\implies{{k_0}_x - \includegraphics{scriptr}\cdot{k_0}_x = \mathcalligra{t}\,{k_1}_x}
\end{align}
From Eq.\eqref{Eq17} and Eq.\eqref{Eq18}, we can show that the reflection coefficient ($R=||\includegraphics{scriptr}||^2$) can be expressed as:
\begin{align}\label{Eq19}
R = \left|\left|\frac{{k_0}_x-{k_1}_x}{{k_0}_x+{k_1}_x}\right|\right|^2
\end{align}
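Explicitly (the intermediate algebra, spelled out for completeness): substituting $\mathcalligra{t}\,=1+\includegraphics{scriptr}$ from Eq.\eqref{Eq17} into Eq.\eqref{Eq18} and solving for the two amplitudes gives

```latex
{k_0}_x - \includegraphics{scriptr}\cdot{k_0}_x
  = \left(1+\includegraphics{scriptr}\right){k_1}_x
\;\;\Longrightarrow\;\;
\includegraphics{scriptr}
  = \frac{{k_0}_x-{k_1}_x}{{k_0}_x+{k_1}_x},
\qquad
\mathcalligra{t}\,
  = \frac{2{k_0}_x}{{k_0}_x+{k_1}_x}
```

from which the reflection coefficient above follows by taking the squared modulus.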
In Eq.\eqref{Eq19}, ${k_0}_x=k_0\cos\theta_0\propto\sqrt{E-V_0}\cos \theta_0$, etc., the common factor $\sqrt{2m/\hbar^2}$ cancelling in the ratio. Hence,
\begin{align}\label{Eq20}
R &= \left|\left|\frac{k_0\cos\theta_0 - k_1\cos\theta_1}{k_0\cos\theta_0 + k_1\cos\theta_1}\right|\right|^2
=\left|\left|\frac{\sqrt{E-V_0}\cos\theta_0 - \sqrt{E-V_1}\cos\theta_1}{\sqrt{E-V_0}\cos\theta_0 + \sqrt{E-V_1}\cos\theta_1}\right|\right|^2
\end{align}
Let us now see the implication of this discussion in the context of geometrical optics. If we take $E=0$ and $V_j=-\frac{n_j^2}{2}$ in accordance with our understanding of refraction, the quantity $R$ can be written as:
\begin{equation}\label{Eq21}
R =\left|\left|\frac{n_0\cos\theta_0 - n_1\cos\theta_1} {n_0\cos\theta_0 + n_1\cos\theta_1}\right|\right|^2
=\left|\left|\frac{n_0\cos\theta_0 - \sqrt{n_1^2-n_0^2\sin^2\theta_0}}{n_0\cos\theta_0 + \sqrt{n_1^2-n_0^2\sin^2\theta_0}}\right|\right|^2
\end{equation}
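A short numerical sketch (the refractive indices and angle below are hypothetical values chosen for illustration) confirms that, with the identifications $E=0$ and $V_j=-\frac{n_j^2}{2}$, the quantum-mechanical reflection coefficient of Eq.\eqref{Eq20} coincides with the Fresnel $s$-polarization result of Eq.\eqref{Eq21}, and that $R\rightarrow1$ at the critical angle:

```python
import cmath
import math

def R_quantum(E, V0, V1, theta0):
    """Reflection coefficient of Eq. (20), with k_j proportional to sqrt(E - V_j)."""
    k0, k1 = cmath.sqrt(E - V0), cmath.sqrt(E - V1)
    sin1 = k0 * math.sin(theta0) / k1        # Snell's law, Eq. (12)
    cos1 = cmath.sqrt(1 - sin1 ** 2)         # imaginary beyond the critical angle
    num = k0 * math.cos(theta0) - k1 * cos1
    den = k0 * math.cos(theta0) + k1 * cos1
    return abs(num / den) ** 2

def R_fresnel_s(n0, n1, theta0):
    """Fresnel reflectance for s-polarized light, Eq. (21)."""
    root = cmath.sqrt(n1 ** 2 - n0 ** 2 * math.sin(theta0) ** 2)
    num = n0 * math.cos(theta0) - root
    den = n0 * math.cos(theta0) + root
    return abs(num / den) ** 2

# Hypothetical interface: n0 = 1.0 -> n1 = 1.5, incidence at 30 degrees.
n0, n1, th = 1.0, 1.5, math.radians(30)
Rq = R_quantum(0.0, -n0 ** 2 / 2, -n1 ** 2 / 2, th)  # light-ray identification
Rf = R_fresnel_s(n0, n1, th)

# Dense-to-rare incidence at the critical angle: total internal reflection.
Rtir = R_quantum(0.0, -1.5 ** 2 / 2, -1.0 ** 2 / 2, math.asin(1.0 / 1.5))
```

The two routes agree to machine precision, and the dense-to-rare case returns a reflectance of unity at the critical angle, as the text anticipates.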
As expected, $R\rightarrow1$ when the incident angle $\theta_0\rightarrow90^\circ$, or when the condition for total internal reflection, $n_0\sin\theta_0= n_1$, is satisfied. We notice that Eq.\eqref{Eq21} is the same as Fresnel's equation for the reflection coefficient of $s$-polarized light. The polarization of light in terms of electromagnetic theory is depicted in figure~\ref{fig2}.
\begin{figure}[ht]
\centering
\begin{subfigure}{.45\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[height=5.5 cm, width= 8.0 cm]{sPolarizedLight.pdf}
\caption{}
\label{fig2a}
\end{subfigure}
\hspace{0.5 cm}
\begin{subfigure}{.45\textwidth}
\centering
\captionsetup{justification=centering}
\includegraphics[height=5.5 cm, width= 8.0 cm]{pPolarizedLight.pdf}
\caption{}
\label{fig2b}
\end{subfigure}
\caption{Description of polarization of light in the context of incidence of light rays at the interface between two media with refractive indices $n_0$ and $n_1$; (a) $s-$polarized light: Electric field perpendicular to the plane of incidence, and (b) $p-$polarized light: Electric field in the plane of incidence.}
\label{fig2}
\end{figure}
This is expected, of course; but a curious student may wonder whether there is a way to find the corresponding expression for $p$-polarized light. In fact, this formulation does not seem to lead to that expression. This must result from the incomplete description of the polarization of light that is inherent in the ray picture. Gloge et al.~\cite{gloge1969formal} write: ``we cannot expect that the total content of Maxwell's equations can be restored by our quantization concept, since the ray picture does not contain any information about the photon spin''.
\section{Implication for physics pedagogy}
Commonly, the physics curriculum at the undergraduate level does not have significant interdisciplinary elements, even though such an approach often brings out useful pedagogical insights. In this paper, we demonstrated this using a very well-known concept of optics. Along the way, we used preliminary concepts of analytical mechanics, quantum mechanics, physical optics, and electromagnetic theory; this was rewarding in identifying the connection between the classical and quantum descriptions of light rays, in finding the relation between the plane wave solutions of the reduced wave equation and phasors, and in deducing the reflection coefficient. Even the inability to derive the reflection coefficient for $p$-polarized light was pedagogically insightful, because the ray picture carries no notion of the polarization of light. This exercise hints at the deep connections among apparently different topics. It is hoped that this article will help in the instruction of physics.
\bibliographystyle{unsrt}
\section{Introduction}
Attention is a technique for selecting a focused location and enhancing the representations of objects at that location. Inspired by the major success of transformer architectures in the field of natural language processing, researchers have recently applied attention techniques to computer vision tasks, such as
image classification \cite{zhao2020exploring, dosovitskiy2020image},
object detection \cite{carion2020end},
semantic segmentation \cite{xie2021segformer},
video understanding \cite{zeng2020learning},
image generation \cite{parmar2018image},
and pose estimation \cite{lin2021end}. Currently, attention techniques are emerging as a potential alternative to CNNs \cite{han2020survey}.
This study explores the attention technique in the context of image change detection for robotics applications. Image change detection in 2D perspective views from an on-board front-facing camera is a fundamental task in robotics and has important applications such as novelty detection \cite{contreras2019vision} and map maintenance \cite{dymczyk2015gist}. The problem of image change detection becomes challenging when changes are semantically {\it non-distinctive} and visually {\it small}. In these cases, an image change detection model (e.g., semantic segmentation \cite{sakurada2020weakly}, object detection \cite{ObjDetCD}, anomaly detection \cite{AnoDetCD}, and differencing \cite{alcantarilla2018street}), which is trained in a past domain to discriminate between the foreground and the background, may fail to classify an unseen object into the correct foreground or background class. Intuitively, such a small non-distinctive change may be better handled by the recent paradigm of self-attention mechanisms, which is the goal of our study.
\figA
Incorporating
a self-attention mechanism
into
an image change detection model
is not straightforward
owing
to the unavailability of
labeled training data.
Existing attention models
have primarily been
studied
in such application domains where rich training data are available \cite{zhao2020exploring}.
They are typically pre-trained on big data
and further fine-tuned in the target domain.
This training process is very expensive for robotics applications,
where robots need to adapt on-the-fly to a new test domain and detect change objects.
Therefore, a new unsupervised domain-adaptive attention model is required.
We propose a new technique
called domain-invariant attention mask
that
can
adapt
an image change detection model
on-the-fly to
a new target domain,
without modifying the input or output layers,
but
by introducing an attention mechanism to
the intermediate layer (Fig. \ref{fig:tobirae}).
A major advantage of our proposed approach,
owing to
its reliance on
high-level
contextual attention information
rather than low-level visual features,
is its potential to
operate effectively
in test domains
with unseen complex backgrounds.
In this sense,
our approach
combines
the advantages
of two major research directions in the change detection community:
pixel-wise differencing \cite{sakurada2020weakly,chen2021dr}
and
context-based novelty detection \cite{contreras2019vision,pimentel2014review},
by incorporating all available information into the attention mechanism.
Our contributions can be summarized as follows:
(1)
We explore a new approach,
called domain-adaptive attention model,
to image change detection for robotics applications,
with an ability of unsupervised on-the-fly domain adaptation.
(2)
Instead of considering
pixel-wise differencing \cite{sakurada2020weakly,chen2021dr}
and
context-based novelty detection \cite{contreras2019vision,pimentel2014review},
as two independent approaches,
our framework
combines the advantages
of both approaches
by
incorporating all available
information into the attention mechanism.
(3)
We present a practical system for image change detection
using state-of-the-art techniques such as
image registration \cite{Hausler_2021_CVPR},
pixel warping \cite{truong2021learning},
and
Siamese ConvNet \cite{sakurada2020weakly}.
Experiments,
in which an indoor robot aims to detect visually small changes in everyday navigation,
demonstrate
that our attention technique
significantly boosts
the state-of-the-art image change detection
model.
\section{Related Work}
\subsection{Image Change Detection}
Image change detection is a long-standing issue in computer vision with various applications, such as satellite imagery \cite{rs11111382,chen2020dasnet} and autonomous driving \cite{alcantarilla2018street,sakurada2017dense}.
Existing studies are divided into 2D and 3D approaches according to the sensor modality; in this study, we focus on image change detection in 2D perspective views from an on-board front-facing camera.
Since the camera is a simple and inexpensive sensor, our 2D approach can be expected to have
an extremely wide range of applications.
Pixel-wise differencing techniques for image change detection rely on the assumption of precise image registration
between live and reference images \cite{SatelliteCD}.
This method is effective
for
classical applications
such as satellite imagery \cite{SatelliteCD},
in which
precise registration is available in the form of 2D rotation-translation.
However,
this is not the case for
our
perspective view
applications \cite{PerspectiveCD},
in which
precise pixel-wise registration
itself
is a
challenging
ill-posed problem.
This problem may be alleviated
to some extent
by introducing
an image warping technique,
as we will discuss in Section \ref{sec:pixel_warping}.
However,
such pixel warping is far from perfect,
and may yield false alarms in image change detection.
Novelty detection is a major alternative approach to image change detection \cite{sofman2011anytime}.
In this approach, novelties are detected as deviations from a nominal image model
that is pre-trained from unlabeled images in a past training domain.
Unlike pixel-wise differencing,
this technique
can naturally capture the contextual information of the entire image to determine whether there are any changes in the image.
However,
on the downside,
the change regions cannot be localized within the image
even if the existence of the change is correctly predicted.
Therefore, existing research on novelty detection has focused on
applications
such as
intruder detection \cite{IntruderDet},
in which
the presence or absence of change, not the position of the changing object, is the most important outcome information.
Several
new
architectures
targeting small object change detection have recently been presented.
For example,
Klomp et al. proposed using a Siamese
CNN to detect markers for improvised explosive devices (IEDs)
\cite{klomp2020real},
where
they tackled the resolution problem by
removing the output-side layer of ResNet-18 \cite{he2016deep} to improve the detection performance of small objects.
Our approach differs from these existing approaches
in that
(1)
it does not require modifying the input and output layers of the architecture,
and
(2)
it is able to utilize contextual information.
\subsection{Attention}
Inspired by the major success of transformer architectures in the field of natural language processing,
researchers have recently applied attention techniques to computer vision tasks,
such as image classification \cite{zhao2020exploring,dosovitskiy2020image},
object detection \cite{carion2020end}, semantic segmentation \cite{xie2021segformer}, video understanding \cite{zeng2020learning},
image generation \cite{parmar2018image}, and pose estimation \cite{lin2021end}.
Because self-attention captures long-range relationships with low computational complexity,
it is considered a potential alternative to convolutional neural networks (CNNs) \cite{han2020survey}.
Recently, several studies have reported the effectiveness of attention in change detection tasks.
HPCFNet \cite{HPCFNet} represents attention as a correlation between feature maps, while DR-TA Net \cite{chen2021dr} evaluates temporal attention by computing the similarity and dependency between a feature map pair to realize attention-based change detection.
CSCDNet
\cite{sakurada2020weakly}
employs
a correlation filter
to compensate
for
the uncertainty
in the non-linear transformation
between live and reference images.
From the perspective of robotic applications, one of the major limitations of current self-attention techniques is that they require a large training set to reduce domain dependency.
In our contribution,
we introduce a novel
domain-adaptive
attention
technique
that is
specifically
tailored for
unsupervised on-the-fly domain adaptation.
\figB
\section{Approach}
Our goal is to incorporate an unsupervised attention model into the image change detection model
without
modifying
the input
and output layers
of the model (Fig. \ref{fig:full}).
In this section,
we
implement
this idea
on a prototype
robotic SLAM system.
First,
we perform
a preprocessing
to compensate for the viewpoint error
and the resulting uncertainty
in non-linear mapping from the 3D real environment to
a 2D image plane of the on-board camera.
This preprocessing
consists
of
LRF-SLAM based viewpoint estimation (Section \ref{sec:lrfslam})
followed
by pixel-wise warping
(Section \ref{sec:pixel_warping}).
However,
even with
such a preprocessing,
the images are often affected
by unpredictable nonlinear mapping errors.
To address this,
we introduce a novel attention mask to direct the robot's attention
to differentiate the foreground from the background (Section \ref{sec:attention_mask_gen}).
As an advantage,
our approach
can insert this attention mask into the intermediate layer,
without modifying the input or output layers (Section \ref{sec:attention_layer}).
Furthermore,
we make use of
the pixel-wise confidence
to further
improve the image change detection performance (Section \ref{sec:post_processing}).
The individual modules are detailed in the following subsections.
\figC
\subsection{Dataset Collection}\label{sec:lrfslam}
Figure \ref{fig:mobile_robot}
shows
the indoor robot experimental platform.
We employ
LRF-SLAM in \cite{lrfslam}
as a method for aligning live images with the reference images.
An input live image
is paired with
a reference image if its angle deviation from the live image is less than the threshold
of 1 degree.
If no such
reference image exists,
it
is paired
with the nearest neighbor viewpoint
to the live image's viewpoint,
without considering the angle information.
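The pairing rule above can be sketched as follows. This is only an illustration: the function name and the $(x, y, \theta)$ pose format are assumptions for the sketch, not part of LRF-SLAM.

```python
import math

def pair_reference(live_pose, ref_poses, angle_thresh_deg=1.0):
    """Pick a reference viewpoint for a live viewpoint (illustrative sketch).

    Poses are assumed to be (x, y, heading_deg) tuples from SLAM.
    """
    # Prefer references whose heading deviates less than the threshold.
    candidates = [p for p in ref_poses
                  if abs(p[2] - live_pose[2]) < angle_thresh_deg]
    # Fall back to all references when no angle-consistent one exists.
    pool = candidates if candidates else ref_poses
    # Take the nearest-neighbor viewpoint in position from the pool.
    return min(pool, key=lambda p: math.hypot(p[0] - live_pose[0],
                                              p[1] - live_pose[1]))
```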
\figD
\subsection{Pixel Warping}
\label{sec:pixel_warping}
We further compensate
for the viewpoint misalignment in LRF-SLAM by introducing an image warping technique.
A warp is a 2D function, $u(x, y)$,
which maps a position $(x, y)$ in the reference image to
a position $u=(x', y')$ in the live image.
Dense image alignment, which is recently proposed in \cite{truong2021learning}, is employed to find an appropriate warp,
by minimizing an energy function in the form:
\begin{equation}
-\log p(Y | \Phi(X;\theta))= -\sum_{ij}\log p(y_{ij}|\varphi_{ij}(X;\theta)),
\end{equation}
where
$X = (I^q, I^r)$ is the input image pair,
$Y$ is the ground-truth flow, and $\Phi$ and $\varphi$ are the predicted parameters.
An example of pixel warping is shown in Fig. \ref{fig:pixel_warp}.
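Applying a warp, once the flow is predicted, can be sketched with a simple nearest-neighbor lookup. Note that PDCNet itself predicts the flow and uses differentiable sampling; this standalone function is only an illustration.

```python
import numpy as np

def apply_warp(reference, flow):
    """Warp a reference image toward the live view with a dense flow field.

    A nearest-neighbor sketch: flow[..., 0] / flow[..., 1] give per-pixel
    x / y displacements; sampled coordinates are clipped to the image.
    """
    h, w = reference.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xm = np.clip(np.rint(xs + flow[..., 0]), 0, w - 1).astype(int)
    ym = np.clip(np.rint(ys + flow[..., 1]), 0, h - 1).astype(int)
    return reference[ym, xm]
```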
\figF
\subsection{Attention Mask Generation}
\label{sec:attention_mask_gen}
We here introduce a novel domain-invariant attention mask (Fig. \ref{fig:attention_mask}),
inspired by self-attention mechanism \cite{dosovitskiy2020image}.
Recall that
in
standard self-attention \cite{vaswani2017attention},
the interrelationships of the elements in the sequence
are obtained
by computing a weighted sum over all values ${\bf v}$ in the sequence
for each element in an input sequence ${\bf z} \in R^{N \times D}$.
The attention weights are based on the pairwise similarity between
two elements of the sequence and their respective
query ${\bf q}$ and key ${\bf k}$ representations:
\begin{equation}
[{\bf q}, {\bf k}, {\bf v}] = {\bf zU}_{qkv} \hspace{2cm} {\bf U}_{qkv} \in \mathbb{R}^{D \times 3D_h}
\end{equation}
\begin{equation}
\label{eq:sa}
SA({\bf z}) = softmax({\bf qk}^T / \sqrt{D_h}){\bf v}
\end{equation}
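As a concrete reference, the self-attention computation above can be sketched in NumPy as a minimal single-head version (the function name is ours; ${\bf U}_{qkv}$ and $D_h$ follow the equations above):

```python
import numpy as np

def self_attention(z, U_qkv, d_h):
    """Scaled dot-product self-attention (single-head NumPy sketch).

    z is N x D, U_qkv is D x 3*d_h, as in the qkv projection above.
    """
    qkv = z @ U_qkv                      # N x 3*d_h
    q, k, v = np.split(qkv, 3, axis=-1)  # each N x d_h
    scores = q @ k.T / np.sqrt(d_h)      # pairwise similarities
    # Numerically stable row-wise softmax.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                   # weighted sum over values
```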
In the proposed method,
this {\it SA} term is replaced with:
\begin{equation}
Proposed({\bf q_{p}},{\bf k_{p}},{\bf m_{cnn}}) = PatchMatch({\bf q_{p}},{\bf k_{p}}) \odot {\bf m_{cnn}}.
\end{equation}
Here, ${\bf q_{p}} \in \mathbb{R}^{h_{np} \times w_{np} \times D_p}$
and
${\bf k_{p}} \in \mathbb{R}^{h_{np} \times w_{np} \times D_p}$
are patches extracted from live and reference images, respectively.
${\bf m_{cnn}} \in \mathbb{R}^{h_{cnn} \times w_{cnn} \times D_{cnn}}$
is an intermediate feature of the Siamese CNN. $PatchMatch$ is a function that predicts whether or not a pair of $D_p$-dimensional vectors from ${\bf q_{p}}$ and ${\bf k_{p}}$ match.
We generate a binary attention mask by incorporating the attention mechanism.
First,
the image is reshaped into a sequence of 2D patches, each of which is described by a local feature vector.
We employ the 128-dim deep PatchNetVLAD \cite{Hausler_2021_CVPR}
descriptor as the local feature vector.
The attention score is then computed for each region of interest as the patch-wise dissimilarity (i.e., the L2 distance) between live and reference image pairs.
Then,
RANSAC geometric verification is performed to filter out
false alarms that originate from change patches.
Finally,
we
obtain
the attention regions
as
scattered discrete regions of
live patches
with positive attention score,
which
makes a binary attention mask.
\algA
Algorithm \ref{alg-attention-mask_gen} presents the algorithm for creating the attention mask.
It aims to
compute
a binary
attention mask
$Mask$
for
an array of
$W_p\times H_p$
patches
at time instance $t$
from
a sequence of live images within
the time interval $[t-T, t+T]$.
The algorithm
begins with
the initialization of the mask variable $Mask$,
and
iterates for each live image,
the following steps:
First,
it extracts
from an input image
a set of
PatchNetVLAD feature vectors (``$ExtractPF$''),
each of which
belongs to one of reference patches.
Then,
for each live feature,
it searches
for
its mutual nearest neighbor
(``$MNN$")
reference patch
in terms of the L2 norm of their PatchNetVLAD features.
Here,
the mutual nearest neighbor search is defined as the process of searching for pairs of matching live and reference elements that
are closest to each other.
Only feature pairs that have passed the mutual nearest neighbor search are sent to the next RANSAC process.
Then,
it performs
geometric verification by RANSAC \cite{ransac} (``$RANSAC$").
Finally, it outputs the pixels whose values are greater than or equal to a threshold, in the form of a binary attention mask:
\begin{equation}
\label{eq:binary_elem}
\mathbf{b}[i,j] = \left\{
\begin{array}{ll}
1 & \mbox{If $inliers[i,j]$ passed RANSAC}\\
0 & \mbox{Otherwise}
\end{array}
\right.
.
\end{equation}
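The core of this pipeline, mutual-nearest-neighbor matching of patch descriptors, can be sketched as follows. This is a simplified sketch: the RANSAC verification step and the aggregation over the interval $[t-T, t+T]$ are omitted, and, following the equation above, a patch is marked 1 when its match survives verification.

```python
import numpy as np

def attention_mask(live_feats, ref_feats):
    """Mutual-nearest-neighbor patch matching (sketch of Algorithm 1).

    live_feats / ref_feats are P x D patch descriptors (e.g. PatchNetVLAD).
    Returns 1 for live patches with a mutual match (RANSAC omitted).
    """
    # Pairwise L2 distances between all live and reference descriptors.
    d = np.linalg.norm(live_feats[:, None] - ref_feats[None], axis=-1)
    nn_lr = d.argmin(axis=1)             # live -> nearest reference
    nn_rl = d.argmin(axis=0)             # reference -> nearest live
    # A pair is mutual when each element is the other's nearest neighbor.
    mutual = nn_rl[nn_lr] == np.arange(len(live_feats))
    return mutual.astype(np.uint8)
```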
\subsection{Attention Mask Layer}\label{sec:attention_layer}
We
now
insert
the attention mask
into the standard
image change detection model
of the Siamese CNN (Section \ref{sec:attention_mask_gen}).
For the Siamese CNN, we use the state-of-the-art architecture of CSCDNet \cite{sakurada2020weakly}.
The attention mask layer takes the CNN feature map and attention mask as inputs and outputs the CNN features masked in the channel direction.
We inserted the attention mask before correlation operation (i.e., before concatenating decoded feature).
We perform the process of masking the CNN Siamese feature map in the channel direction.
Let
$\mathbf{fmap_{new}} \in \mathbb{R}^{W \times H \times C}$
denote the feature map after attention is applied, and let
$\mathbf{fmap_{old}} \in \mathbb{R}^{W \times H \times C}$
denote the feature map obtained from the Siamese CNN.
Here, $W$ denotes the tensor width,
$H$ the tensor height,
and $C$ the tensor channel.
Let
$\mathbf{mask} \in \mathbb{R}^{W \times H}$
denote the attention mask.
Then, the masked feature map element at the $i$-th row, $j$-th column, and $k$-th channel is:
\begin{equation}
\label{eq:merge}
\mathbf{fmap_{new}}[i,j,k] = \mathbf{fmap_{old}}[i,j,k] \cdot \mathbf{mask}[i,j].
\end{equation}
This operation is applied to both branches of the Siamese CNN.
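Eq. (\ref{eq:merge}) amounts to broadcasting the 2D mask across the channel dimension; a minimal NumPy sketch:

```python
import numpy as np

def apply_attention_mask(fmap, mask):
    """Mask a W x H x C feature map in the channel direction (Eq. (6) style).

    The 2D binary mask is broadcast across all C channels.
    """
    return fmap * mask[..., None]
```

The same call is made once per Siamese branch.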
\subsection{Post Processing}\label{sec:post_processing}
Post-processing is introduced to eliminate false alarms in the detection results.
We evaluate
the uncertainty
in the output layer of
the dense image alignment model
and use it to evaluate
the confidence of prediction at each pixel.
Intuitively, a high pixel-warping uncertainty indicates that no corresponding pixel exists, and therefore a high possibility of change; conversely, a low uncertainty indicates that the corresponding pixel exists, and therefore a low possibility of change.
This masking process can be simply expressed as a Hadamard product operation, in the following form:
\begin{equation}
\label{eq:unc_merge}
\mathbf{output_{new}}[i,j] = \mathbf{output_{old}}[i,j] \cdot \mathbf{uncertainty}[i,j].
\end{equation}
Here, $\mathbf{output_{old}} \in \mathbb{R}^{W \times H}$ represents the change probability map output by the Siamese CNN, $\mathbf{uncertainty} \in \mathbb{R}^{W \times H}$ represents the uncertainty of each pixel warp of a live image, and $\mathbf{output_{new}} \in \mathbb{R}^{W \times H}$ represents the probability of change for each pixel after the merging process.
\section{Evaluation Experiments}
\subsection{Dataset}
We collected
four datasets,
``convenience store,''
``flooring,''
``office room,''
and ``hallway,''
in four distinctive environments.
Eight independent image sequences were collected from the four different environments. The numbers of images are 534, 491, 378, and 395, respectively, for ``flooring,'' ``convenience store,'' ``office,'' and ``hallway.''
Examples of these images are shown in Fig. \ref{fig:dataset}.
The image size was $640\times 480$.
The
ground-truth change object regions
in each live image
are manually annotated
using PaddleSeg \cite{liu2021paddleseg, paddleseg2019}
as the annotation tool.
\figE
\subsection{Settings}
The state-of-the-art model,
CSCDNet \cite{sakurada2020weakly},
is used as our base architecture,
which we aim to boost in this study.
It is also used as a comparing method to verify
whether the proposed method
can actually boost the CSCDNet.
The network is initialized
with the weight pre-trained on ImageNet \cite{deng2009imagenet}.
The pixel-wise binary cross-entropy loss is used as loss function as in the original work of CSCDNet \cite{sakurada2020weakly}.
PDCNet \cite{truong2021learning}
is used to align reference images.
Adam optimizer \cite{kingma2014adam}
is used for the network training.
Learning rate is 0.0001.
The number of iterations is 20,000.
The batch size is 32.
A nearest neighbor interpolation
is used
to resize the attention mask
to
fit into the attention mask layer.
The length of reference image sequence is set to $T=10$ in default.
A single NVIDIA GeForce RTX 3090 GPU
with PyTorch framework is used.
Pixel-wise precision, recall, and F1 score are used as performance indices.
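For reference, the pixel-wise metrics can be computed as follows (a straightforward sketch over binary change maps):

```python
import numpy as np

def pixel_scores(pred, gt):
    """Pixel-wise precision, recall and F1 for binary change maps.

    `pred` and `gt` are 0/1 arrays of equal shape.
    """
    tp = np.sum((pred == 1) & (gt == 1))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```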
\figG
\subsection{Quantitative Results}
Figure \ref{fig:Performance}
shows performance results.
As can be seen,
the proposed method
outperformed the comparing method
for almost all combinations
of training and test sets
considered here.
Notably, the proposed method substantially outperformed the comparing method when it was trained on the ``flooring'' dataset.
The ``flooring'' dataset has the simplest background scene; therefore, an image change detection model trained on it could generalize to other, more complex domains as well.
However, the proposed method performed almost the same as the comparing method when it was trained on the other complex background scenes.
As an example, for the convenience store dataset,
the robot navigates through a narrow and messy passage that makes its visual appearance very different from that of the other two datasets.
This makes the proposed training algorithm less effective for such a training set.
It is noteworthy
that
such an effect
of visual appearance
might be
mitigated by introducing
view synthesis technique such as \cite{kupyn2019deblurgan},
which is a direction of future research.
\figH
\subsection{Qualitative Results}
Figure \ref{fig:examples} shows example results
in which
the proposed attention mechanism
was
often
successful
in improving the performance of the CSCDNet.
Especially,
the proposed method
was effective for
complex background scenes,
owing to the ability of the proposed attention mechanism
to make use of contextual information.
\figI
\subsection{Ablation Study}
Table \ref{Table:ablation_warp_unc}
presents the results of
a series of ablation studies
by turning off some of the modules in the proposed framework.
For all ablations, the flooring dataset is used as the training data.
For the ``convenience store'' dataset,
the performance is
significantly higher
with than without
the post-processing technique
in Section \ref{sec:post_processing}.
Because of the complex background of this dataset,
pixel warping often fails,
and the proposed method was effective in suppressing such failures that originate from complex backgrounds.
For the ``office'' and ``hallway'' datasets,
the performance is almost the same
between
with and without
the technique.
Since the background was less complex in these datasets,
pixel warp failures were less common,
and therefore the effect of estimating uncertainty was small.
Next, another ablation with different settings of the length of the reference image sequence is conducted. As can be seen, the performance is best at $T=10$: as expected, higher performance was obtained with longer reference sequences.
From the above results, it can be concluded that both pixel warping and uncertainty estimation play important roles and can actually improve the performance of image change detection.
\section{Conclusions}
In this research,
we tackled the challenging problem of small object change
detection
via everyday indoor robot navigation.
We proposed
a new self-attention technique with
unsupervised on-the-fly domain adaptation,
by introducing an attention mask into the intermediate layer of
an image change detection model,
without modifying the input and output layers
of the model.
Experiments using a novel dataset on small object change detection verified
that
the proposed method significantly boosted the state-of-the-art model for image change detection.